Common Accounting System for Monitoring the ATLAS Distributed Computing Resources
NASA Astrophysics Data System (ADS)
Karavakis, E.; Andreeva, J.; Campana, S.; Gayazov, S.; Jezequel, S.; Saiz, P.; Sargsyan, L.; Schovancova, J.; Ueda, I.; Atlas Collaboration
2014-06-01
This paper covers in detail a variety of accounting tools used to monitor the utilisation of the available computational and storage resources within ATLAS Distributed Computing during the first three years of Large Hadron Collider data taking. The Experiment Dashboard provides a set of common accounting tools that combine monitoring information originating from many different information sources, either generic or ATLAS-specific. This set of tools provides high-quality, scalable solutions that are flexible enough to support the constantly evolving requirements of the ATLAS user community.
Behavior-Based Fault Monitoring
1990-12-03
processor targeted for avionics and space applications. It appears that the signature monitoring technique can be extended to detect computer viruses as... most common approach is structural duplication. Although effective, duplication is too expensive for all but a few applications. Redundancy can also be... "Signature Monitoring and Encryption," Int. Conf. on Dependable Computing for Critical Applications, August 1989. 7. K.D. Wilken and J.P. Shen
Simple, inexpensive computerized rodent activity meters.
Horton, R M; Karachunski, P I; Kellermann, S A; Conti-Fine, B M
1995-10-01
We describe two approaches for using obsolescent computers, either an IBM PC XT or an Apple Macintosh Plus, to accurately quantify spontaneous rodent activity, as revealed by continuous monitoring of the spontaneous usage of running activity wheels. Because such computers can commonly be obtained at little or no expense, and other commonly available materials and inexpensive parts can be used, these meters can be built quite economically. Construction of these meters requires no specialized electronics expertise, and their software requirements are simple. The computer interfaces are potentially of general interest, as they could also be used for monitoring a variety of events in a research setting.
Launch Processing System. [for Space Shuttle]
NASA Technical Reports Server (NTRS)
Byrne, F.; Doolittle, G. V.; Hockenberger, R. W.
1976-01-01
This paper presents a functional description of the Launch Processing System, which provides automatic ground checkout and control of the Space Shuttle launch site and airborne systems, with emphasis placed on the Checkout, Control, and Monitor Subsystem. Hardware and software modular design concepts for the distributed computer system are reviewed relative to performing system tests, launch operations control, and status monitoring during ground operations. The communication network design, which uses a Common Data Buffer interface to all computers to allow computer-to-computer communication, is discussed in detail.
The “Common Solutions” Strategy of the Experiment Support group at CERN for the LHC Experiments
NASA Astrophysics Data System (ADS)
Girone, M.; Andreeva, J.; Barreiro Megino, F. H.; Campana, S.; Cinquilli, M.; Di Girolamo, A.; Dimou, M.; Giordano, D.; Karavakis, E.; Kenyon, M. J.; Kokozkiewicz, L.; Lanciotti, E.; Litmaath, M.; Magini, N.; Negri, G.; Roiser, S.; Saiz, P.; Saiz Santos, M. D.; Schovancova, J.; Sciabà, A.; Spiga, D.; Trentadue, R.; Tuckett, D.; Valassi, A.; Van der Ster, D. C.; Shiers, J. D.
2012-12-01
After two years of LHC data taking, processing and analysis and with numerous changes in computing technology, a number of aspects of the experiments’ computing, as well as WLCG deployment and operations, need to evolve. As part of the activities of the Experiment Support group in CERN's IT department, and reinforced by effort from the EGI-InSPIRE project, we present work aimed at common solutions across all LHC experiments. Such solutions allow us not only to optimize development manpower but also offer lower long-term maintenance and support costs. The main areas cover Distributed Data Management, Data Analysis, Monitoring and the LCG Persistency Framework. Specific tools have been developed including the HammerCloud framework, automated services for data placement, data cleaning and data integrity (such as the data popularity service for CMS, the common Victor cleaning agent for ATLAS and CMS and tools for catalogue/storage consistency), the Dashboard Monitoring framework (job monitoring, data management monitoring, File Transfer monitoring) and the Site Status Board. This talk focuses primarily on the strategic aspects of providing such common solutions and how this relates to the overall goals of long-term sustainability and the relationship to the various WLCG Technical Evolution Groups. The success of the service components has given us confidence in the process, and has developed the trust of the stakeholders. We are now attempting to expand the development of common solutions into the more critical workflows. The first is a feasibility study of common analysis workflow execution elements between ATLAS and CMS. We look forward to additional common development in the future.
NASA Technical Reports Server (NTRS)
Byrne, F. (Inventor)
1981-01-01
A high speed common data buffer system is described for providing an interface and communications medium between a plurality of computers utilized in a distributed computer complex forming part of a checkout, command and control system for space vehicles and associated ground support equipment. The system includes the capability for temporarily storing data to be transferred between computers, for transferring a plurality of interrupts between computers, for monitoring and recording these transfers, and for correcting errors incurred in these transfers. Validity checks are made on each transfer and appropriate error notification is given to the computer associated with that transfer.
Unobtrusive monitoring of computer interactions to detect cognitive status in elders.
Jimison, Holly; Pavel, Misha; McKanna, James; Pavel, Jesse
2004-09-01
The U.S. has experienced a rapid growth in the use of computers by elders. E-mail, Web browsing, and computer games are among the most common routine activities for this group of users. In this paper, we describe techniques for unobtrusively monitoring naturally occurring computer interactions to detect sustained changes in cognitive performance. Researchers have demonstrated the importance of the early detection of cognitive decline. Users over the age of 75 are at risk for medically related cognitive problems and confusion, and early detection allows for more effective clinical intervention. In this paper, we present algorithms for inferring a user's cognitive performance using monitoring data from computer games and psychomotor measurements associated with keyboard entry and mouse movement. The inferences are then used to classify significant performance changes, and additionally, to adapt computer interfaces with tailored hints and assistance when needed. These methods were tested in a group of elders in a residential facility.
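As a rough sketch of the kind of psychomotor features such monitoring might compute from keyboard timing (the feature choices, thresholds, and data below are illustrative assumptions, not the algorithms of the paper):

import statistics

def keystroke_features(key_times):
    """Simple psychomotor features from keystroke timestamps (seconds)."""
    intervals = [t2 - t1 for t1, t2 in zip(key_times, key_times[1:])]
    return {
        "mean_iki": statistics.mean(intervals),   # average inter-keystroke interval
        "sd_iki": statistics.stdev(intervals),    # variability of typing rhythm
    }

def sustained_change(baseline_mean, baseline_sd, recent_means, z=2.0, run=3):
    """Flag a sustained slowdown: `run` consecutive sessions more than `z`
    standard deviations slower than the user's own baseline."""
    flags = [(m - baseline_mean) / baseline_sd > z for m in recent_means]
    return any(all(flags[i:i + run]) for i in range(len(flags) - run + 1))

# Example: a user whose typing has slowed across recent sessions
baseline = keystroke_features([0.0, 0.21, 0.40, 0.62, 0.80, 1.01])
print(baseline)
print(sustained_change(baseline["mean_iki"], baseline["sd_iki"],
                       recent_means=[0.22, 0.35, 0.36, 0.38]))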
Shortcomings of low-cost imaging systems for viewing computed radiographs.
Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N
2000-01-01
To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (PC) with a common display card and color monitor, and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), employing a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing improved the perception of low-contrast details significantly, irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. The review at the radiological workstation was superior to the review done using the PC with image processing. Lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects the perception of low-contrast details in computed radiographs. In this situation, it is highly recommended to use spatial frequency and contrast processing. No significant quality gain was observed for the high-end monochrome monitor compared with the color display. However, the color monitor was more strongly affected by high ambient illumination.
Code of Federal Regulations, 2011 CFR
2011-07-01
... operating parameter value and corrective action taken. (6) For each continuous monitoring system, records... operator may retain records on microfilm, computer disks, magnetic tape, or microfiche; and (3) The owner or operator may report required information on paper or on a labeled computer disk using commonly...
Consolidation of cloud computing in ATLAS
NASA Astrophysics Data System (ADS)
Taylor, Ryan P.; Domingues Cordeiro, Cristovao Jose; Giordano, Domenico; Hover, John; Kouba, Tomas; Love, Peter; McNab, Andrew; Schovancova, Jaroslava; Sobie, Randall; ATLAS Collaboration
2017-10-01
Throughout the first half of LHC Run 2, ATLAS cloud computing has undergone a period of consolidation, characterized by building upon previously established systems, with the aim of reducing operational effort, improving robustness, and reaching higher scale. This paper describes the current state of ATLAS cloud computing. Cloud activities are converging on a common contextualization approach for virtual machines, and cloud resources are sharing monitoring and service discovery components. We describe the integration of Vacuum resources, streamlined usage of the Simulation at Point 1 cloud for offline processing, extreme scaling on Amazon compute resources, and procurement of commercial cloud capacity in Europe. Finally, building on the previously established monitoring infrastructure, we have deployed a real-time monitoring and alerting platform which coalesces data from multiple sources, provides flexible visualization via customizable dashboards, and issues alerts and carries out corrective actions in response to problems.
The Effect of Age and Task Difficulty
ERIC Educational Resources Information Center
Mallo, Jason; Nordstrom, Cynthia R.; Bartels, Lynn K.; Traxler, Anthony
2007-01-01
Electronic Performance Monitoring (EPM) is a common technique used to record employee performance. EPM may include counting computer keystrokes, monitoring employees' phone calls or internet activity, or documenting time spent on work activities. Despite EPM's prevalence, no studies have examined how this management tool affects older workers--a…
Monitoring system including an electronic sensor platform and an interrogation transceiver
Kinzel, Robert L.; Sheets, Larry R.
2003-09-23
A wireless monitoring system suitable for a wide range of remote data collection applications. The system includes at least one Electronic Sensor Platform (ESP), an Interrogator Transceiver (IT) and a general purpose host computer. The ESP functions as a remote data collector from a number of digital and analog sensors located therein. The host computer provides for data logging, testing, demonstration, installation checkout, and troubleshooting of the system. The IT relays signals between the host computer and one or more ESPs. The IT and host computer may be powered by a common power supply, and each ESP is individually powered by a battery. This monitoring system has an extremely low power consumption, which allows remote operation of the ESP for long periods; provides authenticated message traffic over a wireless network; utilizes state-of-health and tamper sensors to ensure that the ESP is secure and undamaged; has robust housing of the ESP suitable for use in radiation environments; and is low in cost. With one base station (host computer and interrogator transceiver), multiple ESPs may be controlled at a single monitoring site.
The report gives results of a screening evaluation of volatile organic emissions from printed circuit board laminates and potential pollution prevention alternatives. In the evaluation, printed circuit board laminates, without circuitry, commonly found in personal computer (PC) m...
Agile Infrastructure Monitoring
NASA Astrophysics Data System (ADS)
Andrade, P.; Ascenso, J.; Fedorko, I.; Fiorini, B.; Paladin, M.; Pigueiras, L.; Santos, M.
2014-06-01
At the present time, data centres are facing a massive rise in virtualisation and cloud computing. The Agile Infrastructure (AI) project is working to deliver new solutions to ease the management of CERN data centres. Part of the solution consists in a new "shared monitoring architecture" which collects and manages monitoring data from all data centre resources. In this article, we present the building blocks of this new monitoring architecture, the different open source technologies selected for each architecture layer, and how we are building a community around this common effort.
Unified Monitoring Architecture for IT and Grid Services
NASA Astrophysics Data System (ADS)
Aimar, A.; Aguado Corman, A.; Andrade, P.; Belov, S.; Delgado Fernandez, J.; Garrido Bear, B.; Georgiou, M.; Karavakis, E.; Magnoni, L.; Rama Ballesteros, R.; Riahi, H.; Rodriguez Martinez, J.; Saiz, P.; Zolnai, D.
2017-10-01
This paper provides a detailed overview of the Unified Monitoring Architecture (UMA) that aims at merging the monitoring of the CERN IT data centres and the WLCG monitoring using common and widely-adopted open source technologies such as Flume, Elasticsearch, Hadoop, Spark, Kibana, Grafana and Zeppelin. It provides insights and details on the lessons learned, explaining the work performed in order to monitor the CERN IT data centres and the WLCG computing activities such as the job processing, data access and transfers, and the status of sites and services.
Wagner, Richard J.; Boulger, Robert W.; Oblinger, Carolyn J.; Smith, Brett A.
2006-01-01
The U.S. Geological Survey uses continuous water-quality monitors to assess the quality of the Nation's surface water. A common monitoring-system configuration for water-quality data collection is the four-parameter monitoring system, which collects temperature, specific conductance, dissolved oxygen, and pH data. Such systems also can be configured to measure other properties, such as turbidity or fluorescence. Data from sensors can be used in conjunction with chemical analyses of samples to estimate chemical loads. The sensors that are used to measure water-quality field parameters require careful field observation, cleaning, and calibration procedures, as well as thorough procedures for the computation and publication of final records. This report provides guidelines for site- and monitor-selection considerations; sensor inspection and calibration methods; field procedures; data evaluation, correction, and computation; and record-review and data-reporting processes, which supersede the guidelines presented previously in U.S. Geological Survey Water-Resources Investigations Report WRIR 00-4252. These procedures have evolved over the past three decades, and the process continues to evolve with newer technologies.
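Where the report notes that sensor data can be combined with concentration analyses to estimate chemical loads, the underlying computation is a simple product of concentration and streamflow; a minimal sketch assuming the conventional units (mg/L, ft³/s) and the approximate conversion constant 0.0027 for short tons per day:

def daily_load_tons(concentration_mg_per_l: float, discharge_cfs: float) -> float:
    """Instantaneous constituent load in tons/day from concentration and streamflow.

    0.0027 converts (mg/L)*(ft^3/s) to short tons/day
    (86400 s/day * 28.3168 L/ft^3 * 1e-6 kg/mg / 907.185 kg/ton).
    """
    return concentration_mg_per_l * discharge_cfs * 0.0027

# Example: 150 mg/L suspended sediment at a streamflow of 2000 ft^3/s
print(round(daily_load_tons(150, 2000), 1), "tons/day")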
Gregory Elmes; Thomas Millette; Charles B. Yuill
1991-01-01
GypsES, a decision-support and expert system for the management of Gypsy Moth addresses five related research problems in a modular, computer-based project. The modules are hazard rating, monitoring, prediction, treatment decision and treatment implementation. One common component is a geographic information system designed to function intelligently. We refer to this...
INTEGRATED MONITORING HARDWARE DEVELOPMENTS AT LOS ALAMOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. PARKER; J. HALBIG; ET AL
1999-09-01
The hardware of the integrated monitoring system supports a family of instruments having a common internal architecture and firmware. Instruments can be easily configured from application-specific personality boards combined with common master-processor and high- and low-voltage power supply boards, and basic operating firmware. The instruments are designed to function autonomously to survive power and communication outages and to adapt to changing conditions. The personality boards allow measurement of gross gammas and neutrons, neutron coincidence and multiplicity, and gamma spectra. In addition, the Intelligent Local Node (ILON) provides a moderate-bandwidth network to tie together instruments, sensors, and computers.
De Georgia, Michael A.; Kaffashi, Farhad; Jacono, Frank J.; Loparo, Kenneth A.
2015-01-01
There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes. PMID:25734185
De Georgia, Michael A; Kaffashi, Farhad; Jacono, Frank J; Loparo, Kenneth A
2015-01-01
There is a broad consensus that 21st century health care will require intensive use of information technology to acquire and analyze data and then manage and disseminate information extracted from the data. No area is more data intensive than the intensive care unit. While there have been major improvements in intensive care monitoring, the medical industry, for the most part, has not incorporated many of the advances in computer science, biomedical engineering, signal processing, and mathematics that many other industries have embraced. Acquiring, synchronizing, integrating, and analyzing patient data remain frustratingly difficult because of incompatibilities among monitoring equipment, proprietary limitations from industry, and the absence of standard data formatting. In this paper, we will review the history of computers in the intensive care unit along with commonly used monitoring and data acquisition systems, both those commercially available and those being developed for research purposes.
Factors leading to the computer vision syndrome: an issue at the contemporary workplace.
Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J
2007-01-01
Vision and eye-related problems are common among computer users, and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done to identify the risk factors leading to the CVS. Twenty-eight participants answered a validated questionnaire, and had their workstations examined. The questionnaire evaluated personal, environmental, ergonomic factors, and physiologic response of computer users. The distance from the eye to the computer's monitor (A), the computer's monitor height (B), and visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle = arctan(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degrees to 13.9 degrees were included in Group 1, and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was made. Computer users in both groups used more tear supplements (as part of the syndrome) than expected. This association was statistically significant (p < 0.10). Participants in Group 1 reported more pain than participants in Group 2. Associations between the CVS and other personal or ergonomic variables were not statistically significant. Our findings show that the most important factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.
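A small worked example of the gaze-angle computation described in the abstract, angle = arctan(D/A) with the 14-degree cutoff separating the two groups; the measurement values below are invented for illustration:

import math

def gaze_angle_deg(vertical_offset_cm: float, viewing_distance_cm: float) -> float:
    """Downward gaze angle to the monitor: angle = arctan(D / A)."""
    return math.degrees(math.atan2(vertical_offset_cm, viewing_distance_cm))

def classify(angle_deg: float) -> int:
    """Group 1: 0-13.9 degrees; Group 2: 14 degrees or more."""
    return 1 if angle_deg < 14.0 else 2

# Example: visual axis 15 cm above the screen centre, viewing distance 60 cm
angle = gaze_angle_deg(15.0, 60.0)
print(f"{angle:.1f} degrees -> Group {classify(angle)}")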
Factors leading to the Computer Vision Syndrome: an issue at the contemporary workplace.
Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J
2004-01-01
Vision and eye-related problems are common among computer users, and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done to identify the risk factors leading to the CVS. Twenty-eight participants answered a validated questionnaire, and had their workstations examined. The questionnaire evaluated personal, environmental, ergonomic factors, and physiologic response of computer users. The distance from the eye to the computer's monitor (A), the computer's monitor height (B), and visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle = arctan(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degrees to 13.9 degrees were included in Group 1, and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was made. Computer users in both groups used more tear supplements (as part of the syndrome) than expected. This association was statistically significant (p < 0.10). Participants in Group 1 reported more pain than participants in Group 2. Associations between the CVS and other personal or ergonomic variables were not statistically significant. Our findings show that the most important factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.
ERIC Educational Resources Information Center
Kerr, Matthew A.; Symons, Sonya E.
2006-01-01
This study examined whether children's reading rate, comprehension, and recall are affected by computer presentation of text. Participants were 60 grade five students, who each read two expository texts, one in a traditional print format and the other from a computer monitor, which used a common scrolling text interface. After reading each text,…
Farias Zuniga, Amanda M; Côté, Julie N
2017-06-01
The effects of performing a 90-minute computer task with a laptop versus a dual monitor desktop workstation were investigated in healthy young male and female adults. Work-related musculoskeletal disorders are common among computer (especially female) users. Laptops have surpassed desktop computer sales, and working with multiple monitors has also become popular. However, few studies have provided objective evidence on how they affect the musculoskeletal system in both genders. Twenty-seven healthy participants (mean age = 24.6 years; 13 males) completed a 90-minute computer task while using a laptop or dual monitor (DualMon) desktop. Electromyography (EMG) from eight upper body muscles and visual strain were measured throughout the task. Neck proprioception was tested before and after the computer task using a head-repositioning test. EMG amplitude (root mean square [RMS]), variability (coefficients of variation [CV]), and normalized mutual information (NMI) were computed. Visual strain (p < .01) and right upper trapezius RMS (p = .03) increased significantly over time regardless of workstation. Right cervical erector spinae RMS and cervical NMI were smaller, while degrees of overshoot (mean = 4.15°) and end position error (mean = 1.26°) were larger in DualMon regardless of time. Effects on muscle activity were more pronounced in males, whereas effects on proprioception were more pronounced in females. Results suggest that compared to laptop, DualMon work is effective in reducing cervical muscle activity, dissociating cervical connectivity, and maintaining more typical neck repositioning patterns, suggesting some health-protective effects. This evidence could be considered when deciding on computer workstation designs.
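For readers unfamiliar with the EMG summary measures named above, the sketch below shows how windowed RMS amplitude and its coefficient of variation are commonly computed; the window length and the synthetic signal are assumptions, not the study's processing pipeline:

import numpy as np

def windowed_rms(emg: np.ndarray, fs: float, window_s: float = 1.0) -> np.ndarray:
    """RMS amplitude of an EMG signal in consecutive non-overlapping windows."""
    n = int(fs * window_s)
    n_windows = len(emg) // n
    segments = emg[: n_windows * n].reshape(n_windows, n)
    return np.sqrt(np.mean(segments ** 2, axis=1))

def coefficient_of_variation(values: np.ndarray) -> float:
    """CV in percent: spread of the windowed RMS relative to its mean."""
    return 100.0 * np.std(values) / np.mean(values)

# Synthetic 10 s EMG-like trace sampled at 1 kHz (zero-mean noise as a stand-in)
rng = np.random.default_rng(0)
fs = 1000.0
emg = rng.normal(0.0, 0.05, int(10 * fs))

rms = windowed_rms(emg, fs)
print("mean RMS:", rms.mean(), "CV (%):", coefficient_of_variation(rms))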
Developing the human-computer interface for Space Station Freedom
NASA Technical Reports Server (NTRS)
Holden, Kritina L.
1991-01-01
For the past two years, the Human-Computer Interaction Laboratory (HCIL) at the Johnson Space Center has been involved in prototyping and prototype reviews in support of the definition phase of the Space Station Freedom program. On the Space Station, crew members will be interacting with multi-monitor workstations where interaction with several displays at one time will be common. The HCIL has conducted several experiments to begin to address design issues for this complex system. Experiments have dealt with the design of ON/OFF indicators, the movement of the cursor across multiple monitors, and the importance of various windowing capabilities for users performing multiple tasks simultaneously.
Wagner, Richard J.; Mattraw, Harold C.; Ritz, George F.; Smith, Brett A.
2000-01-01
The U.S. Geological Survey uses continuous water-quality monitors to assess variations in the quality of the Nation's surface water. A common system configuration for data collection is the four-parameter water-quality monitoring system, which collects temperature, specific conductance, dissolved oxygen, and pH data, although systems can be configured to measure other properties such as turbidity or chlorophyll. The sensors that are used to measure these water properties require careful field observation, cleaning, and calibration procedures, as well as thorough procedures for the computation and publication of final records. Data from sensors can be used in conjunction with collected samples and chemical analyses to estimate chemical loads. This report provides guidelines for site-selection considerations, sensor test methods, field procedures, error correction, data computation, and review and publication processes. These procedures have evolved over the past three decades, and the process continues to evolve with newer technologies.
NASA Astrophysics Data System (ADS)
Varela Rodriguez, F.
2011-12-01
The control system of each of the four major Experiments at the CERN Large Hadron Collider (LHC) is distributed over up to 160 computers running either Linux or Microsoft Windows. A quick response to abnormal situations of the computer infrastructure is crucial to maximize the physics usage. For this reason, a tool was developed to supervise, identify errors and troubleshoot such a large system. Although the monitoring of the performance of the Linux computers and their processes was available since the first versions of the tool, it is only recently that the software package has been extended to provide similar functionality for the nodes running Microsoft Windows, as this platform is the most commonly used in the LHC detector control systems. In this paper, the architecture and the functionality of the Windows Management Instrumentation (WMI) client developed to provide centralized monitoring of the nodes running different flavours of the Microsoft platform, as well as the interface to the SCADA software of the control systems, are presented. The tool is currently being commissioned by the Experiments and it has already proven to be very efficient in optimizing the running systems and detecting misbehaving processes or nodes.
NASA Technical Reports Server (NTRS)
1988-01-01
TherEx Inc.'s AT-1 Computerized Ataxiameter precisely evaluates posture and balance disturbances that commonly accompany neurological and musculoskeletal disorders. Complete system includes two strain-gauged footplates, signal conditioning circuitry, a computer monitor, printer and a stand-alone tiltable balance platform. AT-1 serves as assessment tool, treatment monitor, and rehabilitation training device. It allows clinician to document quantitatively the outcome of treatment and analyze data over time to develop outcome standards for several classifications of patients. It can evaluate specifically the effects of surgery, drug treatment, physical therapy or prosthetic devices.
Packet flow monitoring tool and method
Thiede, David R [Richland, WA
2009-07-14
A system and method for converting packet streams into session summaries. A session summary is a group of packets, each having a common source and destination internet protocol (IP) address and, if present in the packets, common ports. The system first captures packets from a transport layer of a network of computer systems, then decodes the packets captured to determine the destination IP address and the source IP address. The system then identifies packets having common destination IP addresses and source IP addresses, then writes the decoded packets to an allocated memory structure as session summaries in a queue.
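A minimal sketch of the grouping idea described in the abstract: decoded packets that share source and destination IP addresses (and ports, when present) are accumulated into per-session summaries. The packet representation and field names are assumptions for illustration, not the patented implementation:

from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    src_port: Optional[int]   # None for protocols without ports (e.g. ICMP)
    dst_port: Optional[int]
    length: int

def summarize_sessions(packets):
    """Group packets sharing source/destination IPs (and ports) into summaries."""
    sessions = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p.src_ip, p.dst_ip, p.src_port, p.dst_port)
        sessions[key]["packets"] += 1
        sessions[key]["bytes"] += p.length
    return dict(sessions)

stream = [
    Packet("10.0.0.5", "192.168.1.9", 41500, 80, 512),
    Packet("10.0.0.5", "192.168.1.9", 41500, 80, 1460),
    Packet("10.0.0.7", "192.168.1.9", None, None, 84),
]
for key, summary in summarize_sessions(stream).items():
    print(key, summary)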
Time and number of displays impact critical signal detection in fetal heart rate tracings.
Anderson, Brittany L; Scerbo, Mark W; Belfore, Lee A; Abuhamad, Alfred Z
2011-06-01
Interest in centralized monitoring in labor and delivery units is growing because it affords the opportunity to monitor multiple patients simultaneously. However, a long history of research on sustained attention reveals these types of monitoring tasks can be problematic. The goal of the present experiment was to examine the ability of individuals to detect critical signals in fetal heart rate (FHR) tracings in one or more displays over an extended period of time. Seventy-two participants monitored one, two, or four computer-simulated FHR tracings on a computer display for the appearance of late decelerations over a 48-minute vigil. Measures of subjective stress and workload were also obtained before and after the vigil. The results showed that detection accuracy decreased over time and also declined as the number of displays increased. The subjective reports indicated that participants found the task to be stressful and mentally demanding, effortful, and frustrating. The results suggest that centralized monitoring that allows many patients to be monitored simultaneously may impose a detrimental attentional burden on the observer. Furthermore, this seemingly benign task may impose an additional source of stress and mental workload above what is commonly found in labor and delivery units. © Thieme Medical Publishers.
NASA Astrophysics Data System (ADS)
Kerst, Stijn; Shyrokau, Barys; Holweg, Edward
2018-05-01
This paper proposes a novel semi-analytical bearing model addressing flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs due to their assumptions of rigidity. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier series based compliance approximation. The resulting model is computationally low cost and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Miner, Paul S.; Koppen, Sandra V.
2008-01-01
This report describes the design of the test articles and monitoring systems developed to characterize the response of a fault-tolerant computer communication system when stressed beyond the theoretical limits for guaranteed correct performance. A high-intensity radiated electromagnetic field (HIRF) environment was selected as the means of injecting faults, as such environments are known to have the potential to cause arbitrary and coincident common-mode fault manifestations that can overwhelm redundancy management mechanisms. The monitors generate stimuli for the systems-under-test (SUTs) and collect data in real-time on the internal state and the response at the external interfaces. A real-time health assessment capability was developed to support the automation of the test. A detailed description of the nature and structure of the collected data is included. The goal of the report is to provide insight into the design and operation of these systems, and to serve as a reference document for use in post-test analyses.
Aquatic Toxic Analysis by Monitoring Fish Behavior Using Computer Vision: A Recent Progress
Fu, Longwen; Liu, Zuoyi
2018-01-01
Video-tracking-based biological early warning systems have achieved great progress with advanced computer vision and machine learning methods. The ability to video-track multiple biological organisms has improved considerably in recent years. Video-based behavioral monitoring has become a common tool for acquiring quantified behavioral data for aquatic risk assessment. Investigation of behavioral responses under chemical and environmental stress has been boosted by rapidly developing machine learning and artificial intelligence. In this paper, we introduce the fundamentals of video tracking and present the pioneering works in precise tracking of a group of individuals in 2D and 3D space. Technical and practical issues encountered in video tracking are explained. Subsequently, toxic analysis based on fish behavioral data is summarized. Frequently used computational methods and machine learning are explained with their applications in aquatic toxicity detection and abnormal pattern analysis. Finally, advantages of the recently developed deep learning approach in toxicity prediction are presented. PMID:29849612
Assessment of traffic noise levels in urban areas using different soft computing techniques.
Tomić, J; Bogojević, N; Pljakić, M; Šumarac-Pavlović, D
2016-10-01
Available traffic noise prediction models are usually based on regression analysis of experimental data, and this paper presents the application of soft computing techniques in traffic noise prediction. Two mathematical models are proposed and their predictions are compared to data collected by traffic noise monitoring in urban areas, as well as to the predictions of commonly used traffic noise models. The results show that the application of evolutionary algorithms and neural networks may improve the development process, as well as the accuracy, of traffic noise prediction.
Advances in Grid Computing for the FabrIc for Frontier Experiments Project at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herner, K.; Alba Hernandex, A. F.; Bhat, S.
The FabrIc for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, in helping experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.
Advances in Grid Computing for the Fabric for Frontier Experiments Project at Fermilab
NASA Astrophysics Data System (ADS)
Herner, K.; Alba Hernandez, A. F.; Bhat, S.; Box, D.; Boyd, J.; Di Benedetto, V.; Ding, P.; Dykstra, D.; Fattoruso, M.; Garzoglio, G.; Kirby, M.; Kreymer, A.; Levshina, T.; Mazzacane, A.; Mengel, M.; Mhashilkar, P.; Podstavkov, V.; Retzke, K.; Sharma, N.; Teheran, J.
2017-10-01
The Fabric for Frontier Experiments (FIFE) project is a major initiative within the Fermilab Scientific Computing Division charged with leading the computing model for Fermilab experiments. Work within the FIFE project creates close collaboration between experimenters and computing professionals to serve high-energy physics experiments of differing size, scope, and physics area. The FIFE project has worked to develop common tools for job submission, certificate management, software and reference data distribution through CVMFS repositories, robust data transfer, job monitoring, and databases for project tracking. Since the project's inception the experiments under the FIFE umbrella have significantly matured, and present an increasingly complex list of requirements to service providers. To meet these requirements, the FIFE project has been involved in transitioning the Fermilab General Purpose Grid cluster to support a partitionable slot model, expanding the resources available to experiments via the Open Science Grid, assisting with commissioning dedicated high-throughput computing resources for individual experiments, supporting the efforts of the HEP Cloud projects to provision a variety of back end resources, including public clouds and high performance computers, and developing rapid onboarding procedures for new experiments and collaborations. The larger demands also require enhanced job monitoring tools, which the project has developed using such tools as ElasticSearch and Grafana, in helping experiments manage their large-scale production workflows. This group in turn requires a structured service to facilitate smooth management of experiment requests, which FIFE provides in the form of the Production Operations Management Service (POMS). POMS is designed to track and manage requests from the FIFE experiments to run particular workflows, and support troubleshooting and triage in case of problems. Recently a new certificate management infrastructure called Distributed Computing Access with Federated Identities (DCAFI) has been put in place that has eliminated our dependence on a Fermilab-specific third-party Certificate Authority service and better accommodates FIFE collaborators without a Fermilab Kerberos account. DCAFI integrates the existing InCommon federated identity infrastructure, CILogon Basic CA, and a MyProxy service using a new general purpose open source tool. We will discuss the general FIFE onboarding strategy, progress in expanding FIFE experiments presence on the Open Science Grid, new tools for job monitoring, the POMS service, and the DCAFI project.
Goostrey, Sonya; Treleaven, Julia; Johnston, Venerina
2014-05-01
This study evaluated the impact on neck movement and muscle activity of placing documents in three commonly used locations: in-line, flat desktop left of the keyboard and laterally placed level with the computer screen. Neck excursion during three standard head movements between the computer monitor and each document location and neck extensor and upper trapezius muscle activity during a 5 min typing task for each of the document locations was measured in 20 healthy participants. Results indicated that muscle activity and neck flexion were least when documents were placed laterally suggesting it may be the optimal location. The desktop option produced both the greatest neck movement and muscle activity in all muscle groups. The in-line document location required significantly more neck flexion but less lateral flexion and rotation than the laterally placed document. Evaluation of other holders is needed to guide decision making for this commonly used office equipment. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
HappyFace as a generic monitoring tool for HEP experiments
NASA Astrophysics Data System (ADS)
Kawamura, Gen; Magradze, Erekle; Musheghyan, Haykuhi; Quadt, Arnulf; Rzehorz, Gerhard
2015-12-01
The importance of monitoring on HEP grid computing systems is growing due to a significant increase in their complexity. Computer scientists and administrators have been studying and building effective ways to gather information on and clarify the status of each local grid infrastructure. The HappyFace project aims at making the above-mentioned workflow possible. It aggregates, processes and stores the information and the status of different HEP monitoring resources into the common database of HappyFace. The system displays the information and the status through a single interface. However, this model of HappyFace relied on the monitoring resources, which are always under development in the HEP experiments. Consequently, HappyFace needed to have direct access methods to the grid application and grid service layers in the different HEP grid systems. To cope with this issue, we use a reliable HEP software repository, the CernVM File System. We propose a new implementation and an architecture of HappyFace, the so-called grid-enabled HappyFace. It allows its basic framework to connect directly to the grid user applications and the grid collective services, without involving the monitoring resources in the HEP grid systems. This approach gives HappyFace several advantages: portability, to provide an independent and generic monitoring system among the HEP grid systems; functionality, to allow users to use various diagnostic tools in the individual HEP grid systems and grid sites; and flexibility, to make HappyFace beneficial and open for the various distributed grid computing environments. Different grid-enabled modules, to connect to the Ganga job monitoring system and to check the performance of grid transfers among the grid sites, have been implemented. The new HappyFace system has been successfully integrated and now it displays the information and the status of both the monitoring resources and the direct access to the grid user applications and the grid collective services.
Computer hardware for radiologists: Part 2
Indrajit, IK; Alam, A
2010-01-01
Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. “Storage drive” is a term describing a “memory” hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. “Drive interfaces” connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular “input/output devices” used commonly with computers are the printer, monitor, mouse, and keyboard. The “bus” is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. “Ports” are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the ‘ever increasing’ digital future. PMID:21423895
Computer hardware for radiologists: Part 2.
Indrajit, Ik; Alam, A
2010-11-01
Computers are an integral part of modern radiology equipment. In the first half of this two-part article, we dwelt upon some fundamental concepts regarding computer hardware, covering components like motherboard, central processing unit (CPU), chipset, random access memory (RAM), and memory modules. In this article, we describe the remaining computer hardware components that are of relevance to radiology. "Storage drive" is a term describing a "memory" hardware used to store data for later retrieval. Commonly used storage drives are hard drives, floppy drives, optical drives, flash drives, and network drives. The capacity of a hard drive is dependent on many factors, including the number of disk sides, number of tracks per side, number of sectors on each track, and the amount of data that can be stored in each sector. "Drive interfaces" connect hard drives and optical drives to a computer. The connections of such drives require both a power cable and a data cable. The four most popular "input/output devices" used commonly with computers are the printer, monitor, mouse, and keyboard. The "bus" is a built-in electronic signal pathway in the motherboard to permit efficient and uninterrupted data transfer. A motherboard can have several buses, including the system bus, the PCI express bus, the PCI bus, the AGP bus, and the (outdated) ISA bus. "Ports" are the location at which external devices are connected to a computer motherboard. All commonly used peripheral devices, such as printers, scanners, and portable drives, need ports. A working knowledge of computers is necessary for the radiologist if the workflow is to realize its full potential and, besides, this knowledge will prepare the radiologist for the coming innovations in the 'ever increasing' digital future.
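As a concrete instance of the capacity arithmetic described above (the geometry figures are illustrative; modern drives report capacity through logical block addressing rather than physical geometry):

def drive_capacity_bytes(sides: int, tracks_per_side: int,
                         sectors_per_track: int, bytes_per_sector: int) -> int:
    """Capacity = sides x tracks per side x sectors per track x bytes per sector."""
    return sides * tracks_per_side * sectors_per_track * bytes_per_sector

# Example geometry (illustrative only): 16 sides, 16383 tracks per side,
# 63 sectors per track, 512-byte sectors
capacity = drive_capacity_bytes(16, 16383, 63, 512)
print(f"{capacity / 1e9:.1f} GB")   # about 8.5 GB for this geometry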
First in vivo traumatic brain injury imaging via magnetic particle imaging
NASA Astrophysics Data System (ADS)
Orendorff, Ryan; Peck, Austin J.; Zheng, Bo; Shirazi, Shawn N.; Ferguson, R. Matthew; Khandhar, Amit P.; Kemp, Scott J.; Goodwill, Patrick; Krishnan, Kannan M.; Brooks, George A.; Kaufer, Daniela; Conolly, Steven
2017-05-01
Emergency room visits due to traumatic brain injury (TBI) are common, but classifying the severity of the injury remains an open challenge. Subjective methods such as the Glasgow Coma Scale attempt to classify traumatic brain injuries, as do imaging-based modalities such as computed tomography and magnetic resonance imaging. However, to date it is still difficult to detect and monitor mild to moderate injuries. In this report, we demonstrate that the magnetic particle imaging (MPI) modality can be applied to imaging TBI events with excellent contrast. MPI can monitor injected iron nanoparticles over long time scales without signal loss, allowing researchers and clinicians to monitor the change in blood pools as the wound heals.
Real-time simulation of the retina allowing visualization of each processing stage
NASA Astrophysics Data System (ADS)
Teeters, Jeffrey L.; Werblin, Frank S.
1991-08-01
The retina computes to let us see, but can we see the retina compute? Until now, the answer has been no, because the unconscious nature of the processing hides it from our view. Here the authors describe a method of seeing computations performed throughout the retina. This is achieved by using neurophysiological data to construct a model of the retina, and using a special-purpose image processing computer (PIPE) to implement the model in real time. Processing in the model is organized into stages corresponding to computations performed by each retinal cell type. The final stage is the transient (change detecting) ganglion cell. A CCD camera forms the input image, and the activity of a selected retinal cell type is the output which is displayed on a TV monitor. By changing the retina cell driving the monitor, the progressive transformations of the image by the retina can be observed. These simulations demonstrate the ubiquitous presence of temporal and spatial variations in the patterns of activity generated by the retina which are fed into the brain. The dynamical aspects make these patterns very different from those generated by the common DOG (Difference of Gaussian) model of receptive field. Because the retina is so successful in biological vision systems, the processing described here may be useful in machine vision.
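The "common DOG (Difference of Gaussian) model" that the authors contrast with their dynamic simulation is a purely spatial receptive-field description; a minimal sketch of such a kernel, with arbitrary size and sigma values, is shown below:

import numpy as np

def difference_of_gaussians(size: int, sigma_center: float, sigma_surround: float) -> np.ndarray:
    """Classic DOG receptive field: narrow excitatory centre minus broad inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    center = np.exp(-r2 / (2 * sigma_center ** 2)) / (2 * np.pi * sigma_center ** 2)
    surround = np.exp(-r2 / (2 * sigma_surround ** 2)) / (2 * np.pi * sigma_surround ** 2)
    return center - surround

kernel = difference_of_gaussians(size=21, sigma_center=1.5, sigma_surround=4.0)
print(kernel.shape, kernel.sum())   # sum near zero: responds to contrast, not uniform light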
Code of Federal Regulations, 2012 CFR
2012-10-01
... Compliant Microcomputers, Including Personal Computers, Monitors and Printers. 1552.239-103 Section 1552.239... Star Compliant Microcomputers, Including Personal Computers, Monitors and Printers. As prescribed in... Personal Computers, Monitors, and Printers (APR 1996) (a) The Contractor shall provide computer products...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Compliant Microcomputers, Including Personal Computers, Monitors and Printers. 1552.239-103 Section 1552.239... Star Compliant Microcomputers, Including Personal Computers, Monitors and Printers. As prescribed in... Personal Computers, Monitors, and Printers (APR 1996) (a) The Contractor shall provide computer products...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Compliant Microcomputers, Including Personal Computers, Monitors and Printers. 1552.239-103 Section 1552.239... Star Compliant Microcomputers, Including Personal Computers, Monitors and Printers. As prescribed in... Personal Computers, Monitors, and Printers (APR 1996) (a) The Contractor shall provide computer products...
Ergonomics in the electronic library.
Thibodeau, P L; Melamut, S J
1995-01-01
New technologies are changing the face of information services and how those services are delivered. Libraries spend a great deal of time planning the hardware and software implementations of electronic information services, but the human factors are often overlooked. Computers and electronic tools have changed the nature of many librarians' daily work, creating new problems, including stress, fatigue, and cumulative trauma disorders. Ergonomic issues need to be considered when designing or redesigning facilities for electronic resources and services. Libraries can prevent some of the common problems that appear in the digital workplace by paying attention to basic ergonomic issues when designing workstations and work areas. Proper monitor placement, lighting, workstation setup, and seating prevent many of the common occupational problems associated with computers. Staff training will further reduce the likelihood of ergonomic problems in the electronic workplace. PMID:7581189
Advanced CO2 removal process control and monitor instrumentation development
NASA Technical Reports Server (NTRS)
Heppner, D. B.; Dalhausen, M. J.; Klimes, R.
1982-01-01
A program to evaluate, design and demonstrate major advances in control and monitor instrumentation was undertaken. A carbon dioxide removal process, one whose maturity level makes it a prime candidate for early flight demonstration, was investigated. The instrumentation design incorporates features which are compatible with anticipated flight requirements. Current electronics technology and projected advances are included. In addition, the program established commonality of components for all advanced life support subsystems. It was concluded from the studies and design activities conducted under this program that the next generation of instrumentation will be much smaller than the prior one. Not only physical size but also weight, power and heat rejection requirements were reduced in the range of 80 to 85% from the former level of research and development instrumentation. Using a microprocessor-based computer, a standard computer bus structure and nonvolatile memory, improved fabrication techniques and aerospace packaging, this instrumentation will greatly enhance overall reliability and total system availability.
Gray, John R.; Gartner, Jeffrey W.
2010-01-01
Traditional methods for characterizing selected properties of suspended sediments in rivers are being augmented and in some cases replaced by cost-effective surrogate instruments and methods that produce a temporally dense time series of quantifiably accurate data for use primarily in sediment-flux computations. Turbidity is the most common such surrogate technology, and the first to be sanctioned by the U.S. Geological Survey for use in producing data used in concert with water-discharge data to compute sediment concentrations and fluxes for storage in the National Water Information System. Other technologies, including laser-diffraction, digital photo-optic, acoustic-attenuation and backscatter, and pressure-difference techniques are being evaluated for producing reliable sediment concentration and, in some cases, particle-size distribution data. Each technology addresses a niche for sediment monitoring. Their performances range from compelling to disappointing. Some of these technologies have the potential to revolutionize fluvial-sediment data collection, analysis, and availability.
Alzheimer disease: focus on computed tomography.
Reynolds, April
2013-01-01
Alzheimer disease is the most common type of dementia, affecting approximately 5.3 million Americans. This debilitating disease is marked by memory loss, confusion, and loss of cognitive ability. The exact cause of Alzheimer disease is unknown although research suggests that it might result from a combination of factors. The hallmarks of Alzheimer disease are the presence of beta-amyloid plaques and neurofibrillary tangles in the brain. Radiologic imaging can help physicians detect these structural characteristics and monitor disease progression and brain function. Computed tomography and magnetic resonance imaging are considered first-line imaging modalities for the routine evaluation of Alzheimer disease.
Study to design and develop remote manipulator system
NASA Technical Reports Server (NTRS)
Hill, J. W.; Sword, A. J.
1973-01-01
Human performance measurement techniques for remote manipulation tasks and remote sensing techniques for manipulators are described. For common manipulation tasks, performance is monitored by means of an on-line computer capable of measuring the joint angles of both master and slave arms as a function of time. The computer programs allow measurements of the operator's strategy and of physical quantities such as task time and power consumed. The results are printed out after a test run to compare different experimental conditions. For tracking tasks, we describe a method of displaying errors in three dimensions and measuring the end-effector position in three dimensions.
Devices Would Detect Drugs In Sweat
NASA Technical Reports Server (NTRS)
Mintz, Fredrick W.; Richards, Gil; Kidwell, David A.; Foster, Conrad; Kern, Roger G.; Nelson, Gregory A.
1996-01-01
Proposed devices worn on the skin detect such substances as methamphetamine, morphine, tetrahydrocannabinol (THC), and cocaine in wearers' sweat and transmit radio signals in response to computer queries. Called the Remote Biochemical Assay Telemetering System (R-BATS) and commonly referred to as the "drug badge," the device is attached to the wearer by use of an adhesive wristband. It can be used for noninvasive monitoring of levels of prescribed medications in hospital and home-care settings and to detect overdoses quickly.
Monitoring system and methods for a distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2010-01-01
A monitoring system and methods are provided for a distributed and recoverable digital control system. The monitoring system generally comprises two independent monitoring planes within the control system. The first monitoring plane is internal to the computing units in the control system, and the second monitoring plane is external to the computing units. The internal first monitoring plane includes two in-line monitors. The first internal monitor is a self-checking, lock-step-processing monitor with integrated rapid recovery capability. The second internal monitor includes one or more reasonableness monitors, which compare actual effector position with commanded effector position. The external second monitoring plane includes two monitors. The first external monitor includes a pre-recovery computing monitor, and the second external monitor includes a post-recovery computing monitor. Various methods for implementing the monitoring functions are also disclosed.
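The reasonableness check described above is easy to illustrate. The sketch below assumes a hypothetical position tolerance and made-up command/feedback samples (neither is specified in the patent) and simply flags samples where the actual effector position stops tracking the command.

def reasonableness_monitor(commanded, actual, tolerance=0.5):
    """Flag samples whose effector position deviates from the command by more than the tolerance."""
    return [abs(c - a) > tolerance for c, a in zip(commanded, actual)]

# Example: the effector sticks at 2.1 and stops following the command.
commanded = [0.0, 1.0, 2.0, 3.0, 4.0]
actual = [0.1, 0.9, 2.1, 2.1, 2.1]
print(reasonableness_monitor(commanded, actual))  # [False, False, False, True, True]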
Common View Time Transfer Using Worldwide GPS and DMA Monitor Stations
NASA Technical Reports Server (NTRS)
Reid, Wilson G.; McCaskill, Thomas B.; Oaks, Orville J.; Buisson, James A.; Warren, Hugh E.
1996-01-01
Analysis of the on-orbit Navstar clocks and the Global Positioning System (GPS) monitor station reference clocks is performed by the Naval Research Laboratory using both broadcast and postprocessed precise ephemerides. The precise ephemerides are produced by the Defense Mapping Agency (DMA) for each of the GPS space vehicles from pseudo-range measurements collected at five GPS and at five DMA monitor stations spaced around the world. Recently, DMA established an additional site co-located with the US Naval Observatory precise time site. The time reference for the new DMA site is the DoD Master Clock. Now, for the first time, it is possible to transfer time every 15 minutes via common view from the DoD Master Clock to the 11 GPS and DMA monitor stations. The estimated precision of a single common-view time transfer measurement taken over a 15-minute interval was between 1.4 and 2.7 nanoseconds. Using the measurements from all Navstar space vehicles in common view during the 15-minute interval, typically 3-7 space vehicles, improved the estimate of the precision to between 0.65 and 1.13 nanoseconds. The mean phase error obtained from closure of the time transfer around the world using the 11 monitor stations and the 25 space vehicle clocks over a period of 4 months had a magnitude of 31 picoseconds. Analysis of the low-noise time transfer from the DoD Master Clock to each of the monitor stations not only yields the bias in the time of the reference clock, but also focuses attention on structure in the behaviour of the reference clock not previously seen. Furthermore, the time transfer provides a uniformly sampled database of 15-minute measurements that makes possible, for the first time, the direct and exhaustive computation of the frequency stability of the monitor station reference clocks. To lend perspective to the analysis, a summary is given of the discontinuities in phase and frequency that occurred in the reference clock at the Master Control Station during the period covered by the analysis.
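A toy numerical example may help illustrate the common-view principle used above: two stations that measure the same space vehicle clock at the same epoch can difference their measurements to cancel the satellite clock entirely, and averaging over all vehicles in view improves the precision of the 15-minute estimate. The values below are invented for illustration and are not taken from the paper.

import statistics

# (station clock - SV clock), in nanoseconds, measured simultaneously at two
# sites for four space vehicles in common view during one 15-minute interval.
site_a_minus_sv = [12.4, 12.9, 11.8, 12.6]
site_b_minus_sv = [-5.1, -4.6, -5.8, -4.9]

# Differencing cancels the SV clock and leaves (site A clock - site B clock).
a_minus_b = [a - b for a, b in zip(site_a_minus_sv, site_b_minus_sv)]

print("per-SV estimates:", a_minus_b)
print("mean offset: %.2f ns, scatter: %.2f ns" % (statistics.mean(a_minus_b), statistics.stdev(a_minus_b)))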
Digital photocontrol of the network of live excitable cells
NASA Astrophysics Data System (ADS)
Erofeev, I. S.; Magome, N.; Agladze, K. I.
2011-11-01
Recent developments in tissue engineering techniques allow networks of excitable cells with a desired architecture to be created and maintained almost indefinitely. We coupled a network of live excitable cardiac cells with a common computer by sensitizing them to light, projecting a light pattern on the layer of cells, and monitoring excitation with the aid of fluorescent probes (optical mapping). As a sensitizing substance we used azobenzene trimethylammonium bromide (AzoTAB). This substance undergoes cis-trans photoisomerization; the trans-isomer of AzoTAB inhibits excitation in the cardiac cells, while the cis-isomer does not. AzoTAB-mediated sensitization thus allows reversible and dynamic control of the excitation waves through the entire cardiomyocyte network, either uniformly or in a preferred spatial pattern. Technically, this was achieved by coupling a common digital projector with a macroview microscope and using computer graphics software to create the projected pattern of conducting pathways. This approach allows real-time interactive photocontrol of the heart tissue.
The evolution of computer monitoring of real time data during the Atlas Centaur launch countdown
NASA Technical Reports Server (NTRS)
Thomas, W. F.
1981-01-01
In the last decade, improvements in computer technology have provided new 'tools' for controlling and monitoring critical missile systems. In this connection, computers have gradually taken on a larger role in monitoring all flight and ground systems on the Atlas Centaur. The wide-body Centaur, which will be launched in the Space Shuttle Cargo Bay, will use computers to an even greater extent. It is planned to use the wide-body Centaur to boost the Galileo spacecraft toward Jupiter in 1985. The critical systems which must be monitored prior to liftoff are examined. Computers have now been programmed to monitor all critical parameters continuously. At this time, there are two separate computer systems used to monitor these parameters.
40 CFR 86.005-17 - On-board diagnostics.
Code of Federal Regulations, 2013 CFR
2013-07-01
... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...
40 CFR 86.005-17 - On-board diagnostics.
Code of Federal Regulations, 2012 CFR
2012-07-01
... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...
Code of Federal Regulations, 2010 CFR
2010-07-01
... maintain an SO2 continuous emission monitoring system and flow monitoring system in the duct to the common... emission monitoring system and flow monitoring system in the common stack and combine emissions for the... continuous emission monitoring system and flow monitoring system in the duct to the common stack from each...
One Way of Testing a Distributed Processor
NASA Technical Reports Server (NTRS)
Edstrom, R.; Kleckner, D.
1982-01-01
Launch processing for Space Shuttle is checked out, controlled, and monitored with new system. Entire system can be exercised by two computer programs--one in master console and other in each of operations consoles. Control program in each operations console detects change in status and begins task initiation. All of front-end processors are exercised from consoles through common data buffer, and all data are logged to processed-data recorder for posttest analysis.
Digital Topographic Support System (DTSS).
1987-07-29
effects applications software, a word processing package and a Special Purpose Product Builder ( SPPB ) in terms common to his Job. Through the MI, the...communicating with the TA in terms he understands, the applications software, the SPPB and the GIS form the underlying tools which perform the computations and...displayed on the monitors or plotted on paper or Mylar. The SPPB will guide the TA enabling him to design products which are not included in the applications
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S.; Dyer, James D.; Martinez Morales, Carlos A.
2013-03-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Wide-area, real-time monitoring and visualization system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2011-11-15
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Real-time performance monitoring and management system
Budhraja, Vikram S [Los Angeles, CA; Dyer, James D [La Mirada, CA; Martinez Morales, Carlos A [Upland, CA
2007-06-19
A real-time performance monitoring system for monitoring an electric power grid. The electric power grid has a plurality of grid portions, each grid portion corresponding to one of a plurality of control areas. The real-time performance monitoring system includes a monitor computer for monitoring at least one of reliability metrics, generation metrics, transmission metrics, suppliers metrics, grid infrastructure security metrics, and markets metrics for the electric power grid. The data for metrics being monitored by the monitor computer are stored in a data base, and a visualization of the metrics is displayed on at least one display computer having a monitor. The at least one display computer in one said control area enables an operator to monitor the grid portion corresponding to a different said control area.
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring the system status change, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design Run-Time Monitor (RTM), which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize resources on cloud computing. RTM monitors application software through library instrumentation as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
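As a rough illustration of the monitor-analyse-adapt loop described above, the following sketch samples a load metric, compares it against thresholds and adjusts a worker count. The metric source, thresholds and adaptation action are all placeholders, not the RTM implementation.

import random
import time

def sample_load():
    # Stand-in for a real performance counter or instrumented library call.
    return random.uniform(0.0, 1.0)

def run_time_monitor(cycles=5, scale_out_at=0.8, scale_in_at=0.2, workers=2):
    for _ in range(cycles):
        load = sample_load()                 # monitor
        if load > scale_out_at:              # analyze
            workers += 1                     # adapt: add capacity
        elif load < scale_in_at and workers > 1:
            workers -= 1                     # adapt: release capacity
        print("load=%.2f workers=%d" % (load, workers))
        time.sleep(0.1)

run_time_monitor()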
Freebody, John; Wegner, Eva A; Rossleigh, Monica A
2014-01-01
Positron emission tomography (PET) is a minimally invasive technique which has been well validated for the diagnosis, staging, monitoring of response to therapy, and disease surveillance of adult oncology patients. Traditionally the value of PET and PET/computed tomography (CT) hybrid imaging has been less clearly defined for paediatric oncology. However recent evidence has emerged regarding the diagnostic utility of these modalities, and they are becoming increasingly important tools in the evaluation and monitoring of children with known or suspected malignant disease. Important indications for 2-deoxy-2-(18F)fluoro-D-glucose (FDG) PET in paediatric oncology include lymphoma, brain tumours, sarcoma, neuroblastoma, Langerhans cell histiocytosis, urogenital tumours and neurofibromatosis type I. This article aims to review current evidence for the use of FDG PET and PET/CT in these indications. Attention will also be given to technical and logistical issues, the description of common imaging pitfalls, and dosimetric concerns as they relate to paediatric oncology. PMID:25349660
NASA Astrophysics Data System (ADS)
Magnoni, L.; Suthakar, U.; Cordeiro, C.; Georgiou, M.; Andreeva, J.; Khan, A.; Smith, D. R.
2015-12-01
Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies, each one targeting specific aspects of large-scale distributed data processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file format (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof of concept implementation, based on Apache Spark and Esper, for the real-time part which compensates for batch-processing latency and automates problem and failure detection.
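The batch-processing idea is essentially a map-reduce aggregation over transfer records. The toy example below groups invented log records by (source, destination) link and sums the transferred bytes; the real jobs run on Hadoop against the Dashboard's own data formats, which are not reproduced here.

from collections import defaultdict

# Invented transfer-log records; the real schema lives in the Dashboard storage.
transfer_logs = [
    {"src": "CERN", "dst": "FNAL", "bytes": 4_000_000_000},
    {"src": "CERN", "dst": "FNAL", "bytes": 2_500_000_000},
    {"src": "CERN", "dst": "RAL", "bytes": 1_200_000_000},
]

# "Map" emits (link, bytes); "reduce" sums bytes per link.
totals = defaultdict(int)
for record in transfer_logs:
    totals[(record["src"], record["dst"])] += record["bytes"]

for link, total_bytes in sorted(totals.items()):
    print(link, total_bytes)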
Chiva, Andreea
2011-08-15
Dry eye is the most prevalent condition seen by the ophthalmologist, particularly in the elderly. The identification of new common risk factors (computer use and contact lens wear) extends the disease among young people. The early diagnosis of dry eye is essential, but difficult, because the biochemical changes in the tear film usually occur before any detectable signs. Due to its advantages, electrophoresis of tear proteins could be an important tool for the diagnosis of tear film impairment in groups at high risk for dry eye. The role of tear protein electrophoresis in the early diagnosis of dry eye related to computer use and contact lens wear, as well as the biochemical changes in these high-risk groups, are presented. This review will summarize the actual data concerning the electrophoretic changes of tear proteins in computer users and contact lens wearers, two common high-risk groups for dry eye. Electrophoresis of tear proteins using the automated Hyrys-Hydrasys system (SEBIA, France) is an important tool for early diagnosis of tear film alterations and monitoring of therapy. The quantification of many proteins in a single analysis using a small quantity of unconcentrated reflex tears is the main advantage of this technique. Electrophoresis of tear proteins should become a prerequisite, in particular for computer users of less than 3 h/day, as well as when prescribing contact lenses.
Forecasting PM10 in metropolitan areas: Efficacy of neural networks.
Fernando, H J S; Mammarella, M C; Grandoni, G; Fedele, P; Di Marco, R; Dimitrova, R; Hyde, P
2012-04-01
Deterministic photochemical air quality models are commonly used for regulatory management and planning of urban airsheds. These models are complex, computer intensive, and hence are prohibitively expensive for routine air quality predictions. Stochastic methods are becoming increasingly popular as an alternative, which relegate decision making to artificial intelligence based on Neural Networks that are made of artificial neurons or 'nodes' capable of 'learning through training' via historic data. A Neural Network was used to predict particulate matter concentration at a regulatory monitoring site in Phoenix, Arizona; its development, efficacy as a predictive tool and performance vis-à-vis a commonly used regulatory photochemical model are described in this paper. It is concluded that Neural Networks are much easier, quicker and economical to implement without compromising the accuracy of predictions. Neural Networks can be used to develop rapid air quality warning systems based on a network of automated monitoring stations. Copyright © 2011 Elsevier Ltd. All rights reserved.
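A minimal sketch of this kind of stochastic predictor is given below, using a small multilayer perceptron from scikit-learn. The feature set and the toy training data are invented for illustration; the paper's actual network architecture and inputs are not specified here.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Columns: wind speed (m/s), temperature (deg C), previous-day PM10 (ug/m3).
X = np.array([[2.1, 18.0, 55.0],
              [1.0, 22.0, 80.0],
              [3.5, 15.0, 40.0],
              [0.8, 25.0, 95.0]])
y = np.array([60.0, 90.0, 35.0, 105.0])  # observed PM10 (ug/m3)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[1.5, 20.0, 70.0]]))  # predicted PM10 for a new day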
McNeil, Michael J; Parisi, Marguerite T; Hijiya, Nobuko; Meshinchi, Soheil; Cooper, Todd; Tarlock, Katherine
2018-05-04
Extramedullary leukemia (EML) is common in pediatric acute leukemia and can present at diagnosis or relapse. CD33 is detected on the surface of myeloid blasts in many patients with acute myelogenous leukemia and is the target of the antibody-drug conjugate gemtuzumab ozogamicin (GO). Here we present 2 patients with CD33-positive EML treated with GO. They achieved significant responses, with reduction of EML on both clinical and radiographic exams, specifically fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography, demonstrating the potential for targeted therapy with GO as a means of treating EML in patients with CD33-positive leukemia and the utility of fluorine-18 fluorodeoxyglucose positron emission tomography/computed tomography monitoring in EML.
An Examination of the MH-60S Common Cockpit from a Design Methodology and Acquisitions Standpoint
2009-06-01
estimate or any other aspect of this collection of information, including suggestions for reducing this burden, to Washington headquarters Services...LCD monitors and a host of keypads and other more “computer interface” oriented input devices. To the author, the potential of this transition was...year during fiscal years 1998–2003 but had shifted these monies to other priorities [21]. This mess quickly drew in the Marine Corps again, this
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811
Design and development of a run-time monitor for multi-core architectures in cloud computing.
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data.
Autonomic Intelligent Cyber Sensor to Support Industrial Control Network Awareness
Vollmer, Todd; Manic, Milos; Linda, Ondrej
2013-06-01
The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of Autonomic computing and a SOAP-based IF-MAP external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, self-managed framework. The contribution of this paper is two-fold: 1) a flexible two-level communication layer based on Autonomic computing and Service Oriented Architecture is detailed, and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof-of-concept prototype was deployed on a mixed-use test network showing the possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific Operating System and port configurations. Additionally, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
User-level framework for performance monitoring of HPC applications
NASA Astrophysics Data System (ADS)
Hristova, R.; Goranov, G.
2013-10-01
HP-SEE is an infrastructure that links the existing HPC facilities in South East Europe into a common infrastructure. The analysis of the performance monitoring of High-Performance Computing (HPC) applications in the infrastructure can be useful for the end user as a diagnostic of the overall performance of his applications. The existing monitoring tools for HP-SEE provide the end user only with aggregated information for all applications. Usually, the user does not have permissions to select only the information relevant to him and to his applications. In this article we present a framework for performance monitoring of the HPC applications in the HP-SEE infrastructure. The framework provides standardized performance metrics, which every user can use in order to monitor his applications. Furthermore, as part of the framework, a program interface has been developed. The interface allows the user to publish metrics data from his application and to read and analyze the gathered information. Publishing and reading through the framework is possible only with a grid certificate valid for the infrastructure. Therefore the user is authorized to access only the data for his applications.
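The publish/read interface described above might look roughly like the hypothetical client below. The class name, endpoint, payload fields and certificate handling are all invented to show the shape of such an interface; they are not the actual HP-SEE API.

import json
import time
import urllib.request

class MetricsClient:
    """Hypothetical client for publishing and reading per-application metrics."""

    def __init__(self, endpoint, cert_file, key_file):
        self.endpoint = endpoint
        self.credentials = (cert_file, key_file)  # grid certificate/key (not wired up in this sketch)

    def publish(self, app_id, metric, value):
        payload = {"app": app_id, "metric": metric, "value": value, "ts": time.time()}
        request = urllib.request.Request(self.endpoint,
                                         data=json.dumps(payload).encode(),
                                         headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(request)

    def read(self, app_id, metric):
        url = "%s?app=%s&metric=%s" % (self.endpoint, app_id, metric)
        return json.load(urllib.request.urlopen(url))

# Example (requires a real, certificate-protected endpoint):
# client = MetricsClient("https://monitoring.example.org/metrics", "usercert.pem", "userkey.pem")
# client.publish("wrf-run-42", "wallclock_seconds", 1280.5)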
Space-Proven Medical Monitor: The Total Patient-Care Package
NASA Technical Reports Server (NTRS)
2006-01-01
The primary objective of the Gemini Program was to develop techniques that would allow for advanced, long-duration space travel, a prerequisite of the ensuing Apollo Program that would put man safely on the Moon before the end of the decade. In order to carry out this objective, NASA worked with a variety of innovative companies to develop propulsion systems, onboard computers, and docking capabilities that were critical to the health of Gemini spacecraft, as well as life-support systems and physiological-monitoring devices that were critical to the health of Gemini astronauts. One of these companies was Spacelabs Medical, Inc., the pioneer of what is commonly known today as medical telemetry. Spacelabs Medical helped NASA better understand man's reaction to space through a series of bioinstrumentation devices that, for the first time ever, were capable of monitoring orbiting astronauts' physical conditions in real time, from Earth. The company went on to further expand its knowledge of monitoring and maintaining health in space, and then brought it down to Earth, to dramatically change the course of patient monitoring in the field of health care.
NASA Astrophysics Data System (ADS)
Lee, Songhyun; Jeong, Hyeryun; Seong, Myeongsu; Kim, Jae Gwan
2017-12-01
Breast cancer is one of the most common cancers in females. To monitor chemotherapeutic efficacy for breast cancer, medical imaging systems such as x-ray mammography, computed tomography, magnetic resonance imaging, and ultrasound imaging have been used. Currently, it can take up to 3 to 6 weeks to see the tumor response from chemotherapy by monitoring tumor volume changes. We used near-infrared spectroscopy (NIRS) to predict breast cancer treatment efficacy earlier than tumor volume changes by monitoring tumor vascular reactivity during inhalational gas interventions. The results show that the amplitude of oxy-hemoglobin changes (vascular reactivity) during hyperoxic gas inhalation is well correlated with tumor growth and responded one day earlier than tumor volume changes after chemotherapy. These results may imply that NIRS with respiratory challenges can be useful in early detection of tumor and in the prediction of tumor response to chemotherapy.
Development of a cloud-based system for remote monitoring of a PVT panel
NASA Astrophysics Data System (ADS)
Saraiva, Luis; Alcaso, Adérito; Vieira, Paulo; Ramos, Carlos Figueiredo; Cardoso, Antonio Marques
2016-10-01
The paper presents a monitoring system developed for a solar energy conversion system known as a photovoltaic-thermal (PVT) panel. The project was implemented using two embedded microcontroller platforms (Arduino Leonardo and Arduino Yún), wireless transmission systems (Wi-Fi and XBee) and network computing, commonly known as the cloud (Google cloud). The main objective of the project is to provide remote access and real-time data monitoring of quantities such as electrical current, electrical voltage, input fluid temperature, output fluid temperature, backward fluid temperature, upper PV glass temperature, lower PV glass temperature, ambient temperature, solar radiation, wind speed, wind direction and fluid mass flow. This project demonstrates the feasibility of using inexpensive microcontroller platforms and free Internet services on the Web to support the remote study of renewable energy systems, eliminating the acquisition of dedicated systems that are typically more expensive and limited in the kind of processing proposed.
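A read-and-upload loop of the kind the project implements could look like the sketch below. The sensor values are randomly generated stand-ins and the cloud endpoint is a placeholder; the actual system reads real sensors on Arduino boards and pushes the data over Wi-Fi/XBee to Google cloud services.

import json
import random
import urllib.request

CLOUD_URL = "https://example.org/pvt/upload"  # placeholder endpoint

def read_sensors():
    # Stand-ins for the real channels (voltage, current, fluid and glass temperatures, ...).
    return {
        "pv_voltage_V": round(random.uniform(28.0, 36.0), 2),
        "pv_current_A": round(random.uniform(0.0, 8.0), 2),
        "fluid_in_C": round(random.uniform(15.0, 30.0), 1),
        "fluid_out_C": round(random.uniform(20.0, 60.0), 1),
        "irradiance_Wm2": round(random.uniform(0.0, 1000.0), 0),
    }

def upload(sample):
    request = urllib.request.Request(CLOUD_URL, data=json.dumps(sample).encode(),
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # needs a reachable endpoint

if __name__ == "__main__":
    sample = read_sensors()
    print(sample)  # local check; upload(sample) would push it to the cloud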
Using a software-defined computer in teaching the basics of computer architecture and operation
NASA Astrophysics Data System (ADS)
Kosowska, Julia; Mazur, Grzegorz
2017-08-01
The paper describes the concept and implementation of SDC_One software-defined computer designed for experimental and didactic purposes. Equipped with extensive hardware monitoring mechanisms, the device enables the students to monitor the computer's operation on bus transfer cycle or instruction cycle basis, providing the practical illustration of basic aspects of computer's operation. In the paper, we describe the hardware monitoring capabilities of SDC_One and some scenarios of using it in teaching the basics of computer architecture and microprocessor operation.
Quantification of Posterior Globe Flattening: Methodology Development and Validation
NASA Technical Reports Server (NTRS)
Lumpkins, Sarah B.; Garcia, Kathleen M.; Sargsyan, Ashot E.; Hamilton, Douglas R.; Berggren, Michael D.; Ebert, Douglas
2012-01-01
Microgravity exposure affects visual acuity in a subset of astronauts, and mechanisms may include structural changes in the posterior globe and orbit. In particular, posterior globe flattening has been implicated in the eyes of several astronauts. This phenomenon is known to affect some terrestrial patient populations and has been shown to be associated with intracranial hypertension. It is commonly assessed by magnetic resonance imaging (MRI), computed tomography (CT) or B-mode ultrasound (US), without consistent objective criteria. NASA uses a semiquantitative scale of 0-3 as part of eye/orbit MRI and US analysis for occupational monitoring purposes. The goal of this study was to initiate development of an objective quantification methodology to monitor small changes in posterior globe flattening.
Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karbach, Carsten; Frings, Wolfgang
2013-02-22
This document is the final scientific report of the project DE-SC000120 (A scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) for applying it to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to work even under high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer. For example, applications are built remotely, performance tools are attached to job submissions, and their output data resides on the remote system. Status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status. A set of statistics, a list of running and queued jobs as well as a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. Monitoring peta-scale systems especially deals with presenting the large amount of status data in a useful manner. Users need to select arbitrary levels of detail. The monitoring views have to provide a quick overview of the system state, but also need to allow for zooming into the specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS and LoadLeveler, which have to be supported by both the monitoring and the job controlling component. Finally, PTP needs to be designed to be as generic as possible, so that it can be extended for future batch systems.
Cloud Computing for Geosciences--GeoCloud for standardized geospatial service platforms (Invited)
NASA Astrophysics Data System (ADS)
Nebert, D. D.; Huang, Q.; Yang, C.
2013-12-01
The 21st century geoscience faces challenges of Big Data, spike computing requirements (e.g., when natural disaster happens), and sharing resources through cyberinfrastructure across different organizations (Yang et al., 2011). With flexibility and cost-efficiency of computing resources a primary concern, cloud computing emerges as a promising solution to provide core capabilities to address these challenges. Many governmental and federal agencies are adopting cloud technologies to cut costs and to make federal IT operations more efficient (Huang et al., 2010). However, it is still difficult for geoscientists to take advantage of the benefits of cloud computing to facilitate the scientific research and discoveries. This presentation reports using GeoCloud to illustrate the process and strategies used in building a common platform for geoscience communities to enable the sharing, integration of geospatial data, information and knowledge across different domains. GeoCloud is an annual incubator project coordinated by the Federal Geographic Data Committee (FGDC) in collaboration with the U.S. General Services Administration (GSA) and the Department of Health and Human Services. It is designed as a staging environment to test and document the deployment of a common GeoCloud community platform that can be implemented by multiple agencies. With these standardized virtual geospatial servers, a variety of government geospatial applications can be quickly migrated to the cloud. In order to achieve this objective, multiple projects are nominated each year by federal agencies as existing public-facing geospatial data services. From the initial candidate projects, a set of common operating system and software requirements was identified as the baseline for platform as a service (PaaS) packages. Based on these developed common platform packages, each project deploys and monitors its web application, develops best practices, and documents cost and performance information. This paper presents the background, architectural design, and activities of GeoCloud in support of the Geospatial Platform Initiative. System security strategies and approval processes for migrating federal geospatial data, information, and applications into cloud, and cost estimation for cloud operations are covered. Finally, some lessons learned from the GeoCloud project are discussed as reference for geoscientists to consider in the adoption of cloud computing.
Processing of the WLCG monitoring data using NoSQL
NASA Astrophysics Data System (ADS)
Andreeva, J.; Beche, A.; Belov, S.; Dzhunov, I.; Kadochnikov, I.; Karavakis, E.; Saiz, P.; Schovancova, J.; Tuckett, D.
2014-06-01
The Worldwide LHC Computing Grid (WLCG) today includes more than 150 computing centres where more than 2 million jobs are being executed daily and petabytes of data are transferred between sites. Monitoring the computing activities of the LHC experiments, over such a huge heterogeneous infrastructure, is extremely demanding in terms of computation, performance and reliability. Furthermore, the generated monitoring flow is constantly increasing, which represents another challenge for the monitoring systems. While existing solutions are traditionally based on Oracle for data storage and processing, recent developments evaluate NoSQL for processing large-scale monitoring datasets. NoSQL databases are getting increasingly popular for processing datasets at the terabyte and petabyte scale using commodity hardware. In this contribution, the integration of NoSQL data processing in the Experiment Dashboard framework is described along with first experiences of using this technology for monitoring the LHC computing activities.
Evaluation of the Factors which Contribute to the Ocular Complaints in Computer Users.
Agarwal, Smita; Goel, Dishanter; Sharma, Anshu
2013-02-01
Use of information technology hardware has given new heights to professional success and saves time, but on the other hand its harmful effects have introduced an array of health-related complaints. Increased use of computers has led to an increase in the number of patients with ocular complaints, which are being grouped together as computer vision syndrome (CVS). In view of that, this study was undertaken to find out the ocular complaints and the factors contributing to the occurrence of such problems in computer users. To evaluate the factors contributing to ocular complaints in computer users in Teerthanker Mahaveer University, Moradabad, U.P., India. Community-based cross-sectional study of 150 subjects who work on computers for varying periods of time in Teerthanker Mahaveer University, Moradabad, Uttar Pradesh. Two hundred computer operators working in different institute offices and a bank were selected randomly in Teerthanker Mahaveer University, Moradabad, Uttar Pradesh. Sixteen were non-responders, 18 did not come for assessment and 16 were excluded due to complaints prior to computer use, making the non-response rate 25%. The rest of the subjects (n = 150) were asked to fill in a pre-tested questionnaire after obtaining their verbal consent. Depending on the average hours of usage in a day, they were categorized into three categories, viz. <2 hrs, 2-6 hrs and >6 hrs of usage. All the responders were asked to come to the Ophthalmic OPD for further interview and assessment. Simple proportions and Chi-square test. Among the 150 subjects studied, the major ocular complaint reported was eyestrain (53%). Eye strain (53.8%), itching (47.6%) and burning (66.7%) occurred in subjects using computers for more than 6 hours. Distance from the computer screen with respect to the eyes, use of an antiglare screen, taking frequent breaks, use of an LCD monitor and adjustment of the brightness of the monitor screen bear a significant association with these ocular complaints in computer users. Eye strain is the most common ocular complaint among computer users working for more than 6 hours a day. We also found that maintaining an ideal distance from the screen, keeping the level of the eyes above the top of the screen, taking frequent breaks, using LCD monitors, using an antiglare screen and adjusting brightness levels according to the workplace reduced these ocular complaints to a significant level.
Evaluation of the Factors which Contribute to the Ocular Complaints in Computer Users
Agarwal, Smita; Goel, Dishanter; Sharma, Anshu
2013-01-01
Context: Use of information technology hardware has given new heights to professional success and saves time, but on the other hand its harmful effects have introduced an array of health-related complaints. Increased use of computers has led to an increase in the number of patients with ocular complaints, which are being grouped together as computer vision syndrome (CVS). In view of that, this study was undertaken to find out the ocular complaints and the factors contributing to the occurrence of such problems in computer users. Aims: To evaluate the factors contributing to ocular complaints in computer users in Teerthanker Mahaveer University, Moradabad, U.P., India. Settings and Design: Community-based cross-sectional study of 150 subjects who work on computers for varying periods of time in Teerthanker Mahaveer University, Moradabad, Uttar Pradesh. Materials and Methods: Two hundred computer operators working in different institute offices and a bank were selected randomly in Teerthanker Mahaveer University, Moradabad, Uttar Pradesh. Sixteen were non-responders, 18 did not come for assessment and 16 were excluded due to complaints prior to computer use, making the non-response rate 25%. The rest of the subjects (n = 150) were asked to fill in a pre-tested questionnaire after obtaining their verbal consent. Depending on the average hours of usage in a day, they were categorized into three categories, viz. <2 hrs, 2-6 hrs and >6 hrs of usage. All the responders were asked to come to the Ophthalmic OPD for further interview and assessment. Statistical Analysis Used: Simple proportions and Chi-square test. Results: Among the 150 subjects studied, the major ocular complaint reported was eyestrain (53%). Eye strain (53.8%), itching (47.6%) and burning (66.7%) occurred in subjects using computers for more than 6 hours. Distance from the computer screen with respect to the eyes, use of an antiglare screen, taking frequent breaks, use of an LCD monitor and adjustment of the brightness of the monitor screen bear a significant association with these ocular complaints in computer users. Conclusions: Eye strain is the most common ocular complaint among computer users working for more than 6 hours a day. We also found that maintaining an ideal distance from the screen, keeping the level of the eyes above the top of the screen, taking frequent breaks, using LCD monitors, using an antiglare screen and adjusting brightness levels according to the workplace reduced these ocular complaints to a significant level. PMID:23543722
A New Network Modeling Tool for the Ground-based Nuclear Explosion Monitoring Community
NASA Astrophysics Data System (ADS)
Merchant, B. J.; Chael, E. P.; Young, C. J.
2013-12-01
Network simulations have long been used to assess the performance of monitoring networks to detect events for such purposes as planning station deployments and network resilience to outages. The standard tool has been the SAIC-developed NetSim package. With correct parameters, NetSim can produce useful simulations; however, the package has several shortcomings: an older language (FORTRAN), an emphasis on seismic monitoring with limited support for other technologies, limited documentation, and a limited parameter set. Thus, we are developing NetMOD (Network Monitoring for Optimal Detection), a Java-based tool designed to assess the performance of ground-based networks. NetMOD's advantages include: coded in a modern language that is multi-platform, utilizes modern computing performance (e.g. multi-core processors), incorporates monitoring technologies other than seismic, and includes a well-validated default parameter set for the IMS stations. NetMOD is designed to be extendable through a plugin infrastructure, so new phenomenological models can be added. Development of the Seismic Detection Plugin is being pursued first. Seismic location and infrasound and hydroacoustic detection plugins will follow. By making NetMOD an open-release package, it can hopefully provide a common tool that the monitoring community can use to produce assessments of monitoring networks and to verify assessments made by others.
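The core calculation behind such network assessments is combinatorial: given per-station detection probabilities for a hypothetical event, what is the probability that enough stations detect it to form an event? The station probabilities and the three-station criterion below are assumptions chosen for illustration, not NetMOD parameters.

from itertools import combinations
from math import prod

station_probs = [0.95, 0.80, 0.60, 0.40, 0.30]  # assumed per-station detection probabilities

def prob_at_least(k, probs):
    """Probability that at least k independent stations detect the event."""
    n = len(probs)
    total = 0.0
    for m in range(k, n + 1):
        for detected in combinations(range(n), m):
            total += prod(probs[i] if i in detected else 1.0 - probs[i] for i in range(n))
    return total

print(round(prob_at_least(3, station_probs), 3))  # network detection probability for a 3-station rule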
Computer users' postures and associations with workstation characteristics.
Gerr, F; Marcus, M; Ortiz, D; White, B; Jones, W; Cohen, S; Gentry, E; Edwards, A; Bauer, E
2000-01-01
This investigation tested the hypotheses that (1) physical workstation dimensions are important determinants of operator posture, (2) specific workstation characteristics systematically affect worker posture, and (3) computer operators assume "neutral" upper limb postures while keying. Operator head, neck, and upper extremity posture and selected workstation dimensions and characteristics were measured among 379 computer users. Operator postures were measured with manual goniometers, workstation characteristics were evaluated by observation, and workstation dimensions by direct measurement. Considerably greater variability in all postures was observed than was expected from application of basic geometric principles to measured workstation dimensions. Few strong correlations were observed between worker posture and workstation physical dimensions; findings suggest that preference is given to keyboard placement with respect to the eyes (r = 0.60 for association between keyboard height and seated elbow height) compared with monitor placement with respect to the eyes (r = 0.18 for association between monitor height and seated eye height). Wrist extension was weakly correlated with keyboard height (r = -0.24) and virtually not at all with keyboard thickness (r = 0.07). Use of a wrist rest was associated with decreased wrist flexion (21.9 versus 25.1 degrees, p < 0.01). Participants who had easily adjustable chairs had essentially the same neck and upper limb postures as did those with nonadjustable chairs. Sixty-one percent of computer operators were observed in nonneutral shoulder postures and 41% in nonneutral wrist postures. Findings suggest that (1) workstation dimensions are not strong determinants of at least several neck and upper extremity postures among computer operators, (2) only some workstation characteristics affect posture, and (3) contrary to common recommendations, a large proportion of computer users do not work in so-called neutral postures.
Real-time human collaboration monitoring and intervention
Merkle, Peter B.; Johnson, Curtis M.; Jones, Wendell B.; Yonas, Gerold; Doser, Adele B.; Warner, David J.
2010-07-13
A method of and apparatus for monitoring and intervening in, in real time, a collaboration between a plurality of subjects comprising measuring indicia of physiological and cognitive states of each of the plurality of subjects, communicating the indicia to a monitoring computer system, with the monitoring computer system, comparing the indicia with one or more models of previous collaborative performance of one or more of the plurality of subjects, and with the monitoring computer system, employing the results of the comparison to communicate commands or suggestions to one or more of the plurality of subjects.
Energy Use and Power Levels in New Monitors and Personal Computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay
2002-07-23
Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC). Current ENERGY STAR monitor and computer criteria do not specify off or on power, but our results suggest opportunities for saving energy in these modes. Also, significant differences between CRT and LCD technology, and between field-measured and manufacturer-reported power levels reveal the need for standard methods and metrics for measuring and comparing monitor power consumption.
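The unit energy consumption (UEC) arithmetic underlying such comparisons is simple: annual energy is the sum over power modes of the power drawn in that mode times the hours spent in it. The power levels and usage pattern in the sketch below are assumed values for illustration, not measurements from this study.

# Assumed power levels (watts) and annual usage (hours) for one monitor.
power_w = {"on": 35.0, "sleep": 2.0, "off": 1.0}
hours_per_year = {"on": 2000, "sleep": 2760, "off": 4000}  # totals 8760 h

uec_kwh = sum(power_w[mode] * hours_per_year[mode] for mode in power_w) / 1000.0
print("annual UEC ~ %.1f kWh" % uec_kwh)  # ~79.5 kWh with these assumptions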
Monitoring mangrove forests: Are we taking full advantage of technology?
NASA Astrophysics Data System (ADS)
Younes Cárdenas, Nicolás; Joyce, Karen E.; Maier, Stefan W.
2017-12-01
Mangrove forests grow in the estuaries of 124 tropical countries around the world. Because in-situ monitoring of mangroves is difficult and time-consuming, remote sensing technologies are commonly used to monitor these ecosystems. Landsat satellites have provided regular and systematic images of mangrove ecosystems for over 30 years, yet researchers often cite budget and infrastructure constraints to justify the underuse of this resource. Since 2001, over 50 studies have used Landsat or ASTER imagery for mangrove monitoring, and most focus on the spatial extent of mangroves, rarely using more than five images. Even after the Landsat archive was made free for public use, few studies used more than five images, despite the clear advantages of using more images (e.g. lower signal-to-noise ratios). The main argument of this paper is that, with freely available imagery and high-performance computing facilities around the world, it is up to researchers to acquire the necessary programming skills to use these resources. Programming skills allow researchers to automate repetitive and time-consuming tasks, such as image acquisition and processing, consequently reducing up to 60% of the time dedicated to these activities. These skills also help scientists to review and re-use algorithms, hence making mangrove research more agile. This paper contributes to the debate on why scientists need to learn to program, not only to challenge prevailing approaches to mangrove research, but also to expand the temporal and spatial extents that are commonly used for mangrove research.
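As a small example of the repetitive processing that is easy to automate in code, the sketch below computes NDVI for every scene in a time series. The tiny arrays stand in for calibrated Landsat red and near-infrared bands; a real workflow would read them from the freely available archive.

import numpy as np

def ndvi(red, nir):
    red, nir = red.astype(float), nir.astype(float)
    total = nir + red
    return (nir - red) / np.where(total == 0, np.nan, total)

# Toy 2x2 "scenes" for two acquisition dates (reflectance values).
time_series = [
    (np.array([[0.10, 0.12], [0.30, 0.05]]), np.array([[0.40, 0.45], [0.35, 0.50]])),
    (np.array([[0.11, 0.13], [0.28, 0.06]]), np.array([[0.42, 0.47], [0.33, 0.52]])),
]

for i, (red, nir) in enumerate(time_series):
    print("scene %d: mean NDVI = %.3f" % (i, np.nanmean(ndvi(red, nir))))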
Namuslu, Mehmet; Devrim, Erdinç; Durak, İlker
2009-01-01
Purpose This study aims to investigate the possible effects of computer monitor-emitted radiation on the oxidant/antioxidant balance in corneal and lens tissues and to observe any protective effects of vitamin C (vit C). Methods Four groups (PC monitor, PC monitor plus vitamin C, vitamin C, and control) each consisting of ten Wistar rats were studied. The study lasted for three weeks. Vitamin C was administered in oral doses of 250 mg/kg/day. The computer and computer plus vitamin C groups were exposed to computer monitors while the other groups were not. Malondialdehyde (MDA) levels and superoxide dismutase (SOD), glutathione peroxidase (GSH-Px), and catalase (CAT) activities were measured in corneal and lens tissues of the rats. Results In corneal tissue, MDA levels and CAT activity were found to increase in the computer group compared with the control group. In the computer plus vitamin C group, MDA level, SOD, and GSH-Px activities were higher and CAT activity lower than those in the computer and control groups. Regarding lens tissue, in the computer group, MDA levels and GSH-Px activity were found to increase, as compared to the control and computer plus vitamin C groups, and SOD activity was higher than that of the control group. In the computer plus vitamin C group, SOD activity was found to be higher and CAT activity to be lower than those in the control group. Conclusion The results of this study suggest that computer-monitor radiation leads to oxidative stress in the corneal and lens tissues, and that vitamin C may prevent oxidative effects in the lens. PMID:19960068
Forghani-Arani, Farnoush; Behura, Jyoti; Haines, Seth S.; Batzle, Mike
2013-01-01
In studies on heavy oil, shale reservoirs, tight gas and enhanced geothermal systems, the use of surface passive seismic data to monitor induced microseismicity due to fluid flow in the subsurface is becoming more common. However, in most studies passive seismic records contain days and months of data, and manually analysing the data can be expensive and inaccurate. Moreover, in the presence of noise, detecting the arrival of weak microseismic events becomes challenging. Hence, the use of an automated, accurate and computationally fast technique for event detection in passive seismic data is essential. The conventional automatic event identification algorithm computes a running-window energy ratio of the short-term average to the long-term average of the passive seismic data for each trace. We show that for the common case of a low signal-to-noise ratio in surface passive records, the conventional method is not sufficiently effective at event identification. Here, we extend the conventional algorithm by introducing a technique that is based on the cross-correlation of the energy ratios computed by the conventional method. With our technique we can measure the similarities amongst the computed energy ratios at different traces. Our approach is successful at improving the detectability of events with a low signal-to-noise ratio that are not detectable with the conventional algorithm. Also, our algorithm has the advantage of identifying whether an event is common to all stations (a regional event) or to a limited number of stations (a local event). We provide examples of applying our technique to synthetic data and a field surface passive data set recorded at a geothermal site.
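For reference, the conventional detector that the authors extend can be sketched in a few lines: a running short-term-average over long-term-average (STA/LTA) of the trace energy. The window lengths and the synthetic trace below are illustrative only; the paper's contribution then cross-correlates these energy-ratio curves across traces to measure their similarity.

import numpy as np

def sta_lta(trace, nsta=5, nlta=50):
    """Running short-term over long-term average of trace energy."""
    energy = trace.astype(float) ** 2
    sta = np.convolve(energy, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(energy, np.ones(nlta) / nlta, mode="same")
    return sta / np.maximum(lta, 1e-12)

rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 500)
trace[300:320] += 6.0 * np.sin(np.linspace(0.0, 6.0 * np.pi, 20))  # weak synthetic "event"

ratio = sta_lta(trace)
print("peak STA/LTA: %.2f at sample %d" % (ratio.max(), int(ratio.argmax())))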
Soil Monitor: an open source web application for real-time soil sealing monitoring and assessment
NASA Astrophysics Data System (ADS)
Langella, Giuliano; Basile, Angelo; Giannecchini, Simone; Iamarino, Michela; Munafò, Michele; Terribile, Fabio
2016-04-01
Soil sealing is one of the most important causes of land degradation and desertification. In Europe, soil covered by impermeable materials has increased by about 80% from the Second World War until today, while population has only grown by one third. There is increasing concern at high political levels about the need to attenuate imperviousness itself and its effects on soil functions. The European Commission promulgated a roadmap (COM(2011) 571) by which the net land take would be zero by 2050. Furthermore, the European Commission also published a report in 2011 providing best practices and guidelines for limiting soil sealing and imperviousness. In this scenario, we developed an open-source Soil Sealing Geospatial Cyber Infrastructure (SS-GCI), named "Soil Monitor". This tool merges a webGIS with parallel geospatial computation in a fast and dynamic fashion in order to provide real-time assessments of soil sealing at high spatial resolution (20 meters and below) over the whole of Italy. Common open-source webGIS packages, such as GeoServer and MapStore, are used to implement both the data management and visualization infrastructures. The high-speed geospatial computation is ensured by GPU parallelism using the CUDA (Compute Unified Device Architecture) framework by NVIDIA®. This kind of parallelism required writing, from scratch, all the code needed to perform the geospatial computation behind the soil sealing toolbox. The combination of GPU computing with webGIS infrastructures is relatively novel and required particular attention at the Java-CUDA programming interface. As a result, Soil Monitor is fast, performing highly time-consuming calculations (querying, for instance, an Italian administrative region as the area of interest) in less than one minute. The web application is embedded in a web browser and nothing must be installed before using it. Potentially everybody can use it, but the main targets are the stakeholders dealing with sealing, such as policy makers, land owners and asphalt/cement companies. As a matter of fact, Soil Monitor can be used to improve spatial planning, thereby limiting the progression of disordered soil sealing, which causes both the direct loss of soils due to imperviousness and the indirect loss caused by fragmentation of soils (which has different negative effects on the durability of soil functions, such as habitat corridors). Furthermore, in a future version, Soil Monitor would estimate the best location for a new building or help compensate for soil losses by actions in other areas so that drawbacks are offset to zero. The presented SS-GCI dealing with soil sealing, if appropriately scaled, would aid the implementation of best practices for limiting soil sealing or mitigating its effects on soil functions.
Possibilities in optical monitoring of laser welding process
NASA Astrophysics Data System (ADS)
Horník, Petr; Mrňa, Libor; Pavelka, Jan
2016-11-01
Laser welding is a modern, widely used but still not entirely common method of welding. With increasing demands on weld quality, it is usual to apply automated machine welding with on-line monitoring of the welding process. The resulting weld quality is largely affected by the behavior of the keyhole. However, its direct observation during the welding process is practically impossible, and it is necessary to use indirect methods. At ISI we have developed optical methods of monitoring the process. The most advanced is an analysis of the radiation of the laser-induced plasma plume forming in the keyhole, in which changes in the frequency of the plasma bursts are monitored and evaluated using Fourier and autocorrelation analysis. Another solution, robust and suitable for industry, is based on observation of the keyhole inlet opening through a coaxial camera mounted in the welding head and subsequent image processing by computer vision methods. A high-speed camera is used to understand the dynamics of the plasma plume. Through optical spectroscopy of the plume, we can study the excitation of elements in the material. It is also beneficial to monitor the flow of the shielding gas using the schlieren method.
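A minimal sketch of the frequency-analysis idea mentioned above: estimate the dominant burst frequency of a plasma-plume intensity signal both from the FFT spectrum and from the autocorrelation function. The sampling rate, search band and toy signal are illustrative assumptions, not the ISI implementation.

    import numpy as np

    def burst_frequency(signal, fs, f_max=10_000):
        """Dominant burst frequency estimated from the FFT magnitude spectrum
        and from the autocorrelation function of the plume intensity signal."""
        x = signal - signal.mean()
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        f_fft = freqs[spectrum[1:].argmax() + 1]        # skip the DC bin
        acf = np.correlate(x, x, mode="full")[x.size - 1:]
        min_lag = max(1, int(fs / f_max))               # ignore implausibly short periods
        lag = acf[min_lag:x.size // 2].argmax() + min_lag
        return f_fft, fs / lag

    # toy signal: 3 kHz plasma oscillation sampled at 50 kHz with noise
    fs = 50_000
    t = np.arange(0, 0.1, 1 / fs)
    sig = np.sin(2 * np.pi * 3000 * t) + 0.5 * np.random.randn(t.size)
    print(burst_frequency(sig, fs))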
Monitoring data transfer latency in CMS computing operations
Bonacorsi, Daniele; Diotalevi, Tommaso; Magini, Nicolo; ...
2015-12-23
During the first LHC run, the CMS experiment collected tens of Petabytes of collision and simulated data, which need to be distributed among dozens of computing centres with low latency in order to make efficient use of the resources. While the desired level of throughput has been successfully achieved, it is still common to observe transfer workflows that cannot reach full completion in a timely manner due to a small fraction of stuck files which require operator intervention. For this reason, in 2012 the CMS transfer management system, PhEDEx, was instrumented with a monitoring system to measure file transfer latencies, and to predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies while the transfer is still in progress, and monitor the long-term performance of the transfer infrastructure to plan the data placement strategy. Based on the data collected for one year with the latency monitoring system, we present a study on the different factors that contribute to transfer completion time. As case studies, we analyze several typical CMS transfer workflows, such as distribution of collision event data from CERN or upload of simulated event data from the Tier-2 centres to the archival Tier-1 centres. For each workflow, we present the typical patterns of transfer latencies that have been identified with the latency monitor. We identify the areas in PhEDEx where a development effort can reduce the latency, and we show how we are able to detect stuck transfers which need operator intervention. Lastly, we propose a set of metrics to alert about stuck subscriptions and prompt for manual intervention, with the aim of improving transfer completion times.
"Internet of Things" Real-Time Free Flap Monitoring.
Kim, Sang Hun; Shin, Ho Seong; Lee, Sang Hwan
2018-01-01
Free flaps are a common treatment option for head and neck reconstruction in plastic reconstructive surgery, and monitoring of the free flap is the most important factor for flap survival. In this study, the authors performed real-time free flap monitoring based on an implanted Doppler system and the "internet of things" (IoT)/wireless Wi-Fi, which is a convenient, accurate, and efficient approach for surgeons to monitor a free flap. Implanted Doppler signals were checked continuously until the patient was discharged, by the surgeon and residents using their own cellular phones or personal computers. If the surgeon decided that a revision procedure or exploration was required, the authors checked the time consumed (positive signal-to-operating-room time) from the first notification that the flap's status was in question to the decision for revision surgery, according to a chart review. To compare the efficacy of real-time monitoring, the authors paired the same number of free flaps performed by the same surgeon and monitored using conventional methods such as physical examination. The total survival rate was greater in the real-time monitoring group (94.7% versus 89.5%). The average time for the real-time monitoring group was also shorter than that for the conventional group (65 minutes versus 86 minutes). Based on this study, real-time free flap monitoring using IoT technology is a method that the surgeon and reconstruction team can use to monitor a flap simultaneously, at any time and in any situation.
District Computer Concerns: Checklist for Monitoring Instructional Use of Computers.
ERIC Educational Resources Information Center
Coe, Merilyn
Designed to assist those involved with planning, organizing, and implementing computer use in schools, this checklist can be applied to: (1) assess the present state of instructional computer use in the district; (2) assist with the development of plans or guidelines for computer use; (3) support a start-up phase; and (4) monitor the…
Photogrammetry and Its Potential Application in Medical Science on the Basis of Selected Literature.
Ey-Chmielewska, Halina; Chruściel-Nogalska, Małgorzata; Frączak, Bogumiła
2015-01-01
Photogrammetry is a science and technology which allows quantitative traits to be determined, i.e. the reproduction of object shapes, sizes and positions on the basis of their photographs. Images can be recorded in a wide range of wavelengths of electromagnetic radiation. The most common is the visible range, but near- and medium-infrared, thermal infrared, microwaves and X-rays are also used. The importance of photogrammetry has increased with the development of computer software. Digital image processing and real-time measurement have allowed the automation of many complex manufacturing processes. Photogrammetry has been widely used in many areas, especially in geodesy and cartography. In medicine, this method is used for measuring the widely understood human body for the planning and monitoring of therapeutic treatment and its results. Digital images obtained from optical-electronic sensors combined with computer technology have the potential of objective measurement thanks to the remote nature of the data acquisition, with no contact with the measured object and with high accuracy. Photogrammetry also allows the adoption of common standards for archiving and processing patient data.
Implementing Machine Learning in Radiology Practice and Research.
Kohli, Marc; Prevedello, Luciano M; Filice, Ross W; Geis, J Raymond
2017-04-01
The purposes of this article are to describe concepts that radiologists should understand to evaluate machine learning projects, including common algorithms, supervised as opposed to unsupervised techniques, statistical pitfalls, and data considerations for training and evaluation, and to briefly describe ethical dilemmas and legal risk. Machine learning includes a broad class of computer programs that improve with experience. The complexity of creating, training, and monitoring machine learning indicates that the success of the algorithms will require radiologist involvement for years to come, leading to engagement rather than replacement.
Water transport in limestone by X-ray CAT scanning
Mossoti, Victor G.; Castanier, Louis M.
1989-01-01
The transport of water through the interior of Salem limestone test briquettes can be dynamically monitored by computer aided tomography (commonly called CAT scanning in medical diagnostics). Most significantly, unless evaporation from a particular face of the briquette is accelerated by forced air flow (wind simulation), the distribution of water in the interior of the briquette remains more or less uniform throughout the complete drying cycle. Moreover, simulated solar illumination of the test briquette does not result in the production of significant water gradients in the briquette under steady-state drying conditions.
Coordinated Fault Tolerance for High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack; Bosilca, George; et al.
2013-04-08
Our work to meet our goal of end-to-end fault tolerance has focused on two areas: (1) improving fault tolerance in various software currently available and widely used throughout the HEC domain and (2) using fault information exchange and coordination to achieve holistic, systemwide fault tolerance and understanding how to design and implement interfaces for integrating fault tolerance features for multiple layers of the software stack—from the application, math libraries, and programming language runtime to other common system software such as job schedulers, resource managers, and monitoring tools.
Rostami, Elham; Engquist, Henrik; Enblad, Per
2014-01-01
Ischemia is a common and deleterious secondary injury following traumatic brain injury (TBI). A great challenge for the treatment of TBI patients in the neurointensive care unit (NICU) is to detect early signs of ischemia in order to prevent further advancement and deterioration of the brain tissue. Today, several imaging techniques are available to monitor cerebral blood flow (CBF) in the injured brain such as positron emission tomography (PET), single-photon emission computed tomography, xenon computed tomography (Xenon-CT), perfusion-weighted magnetic resonance imaging (MRI), and CT perfusion scan. An ideal imaging technique would enable continuous non-invasive measurement of blood flow and metabolism across the whole brain. Unfortunately, no current imaging method meets all these criteria. These techniques offer snapshots of the CBF. MRI may also provide some information about the metabolic state of the brain. PET provides images with high resolution and quantitative measurements of CBF and metabolism; however, it is a complex and costly method limited to few TBI centers. All of these methods except mobile Xenon-CT require transfer of TBI patients to the radiological department. Mobile Xenon-CT emerges as a feasible technique to monitor CBF in the NICU, with lower risk of adverse effects. Promising results have been demonstrated with Xenon-CT in predicting outcome in TBI patients. This review covers available imaging methods used to monitor CBF in patients with severe TBI.
NASA Astrophysics Data System (ADS)
Li, Peng; Olmi, Claudio; Song, Gangbing
2010-04-01
Piezoceramic based transducers are widely researched and used for structural health monitoring (SHM) systems due to the piezoceramic material's inherent advantage of dual sensing and actuation. Wireless sensor network (WSN) technology benefits piezoceramic based structural health monitoring systems by allowing easy and flexible installation, low system cost, and increased robustness over wired systems. However, piezoceramic wireless SHM systems still face some drawbacks. One of these is that piezoceramic based SHM systems require relatively high computational capabilities to calculate damage information, whereas battery-powered WSN sensor nodes have strict power consumption limitations and hence limited computational power. On the other hand, commonly used centralized processing networks require wireless sensors to transmit all data back to the network coordinator for analysis. This signal processing procedure can be problematic for piezoceramic based SHM applications, as it is neither energy efficient nor robust. In this paper, we aim to solve these problems with a distributed wireless sensor network for piezoceramic based structural health monitoring systems. Three important issues are addressed to reach optimized energy efficiency: the power system, wake-up from sleep on impact detection, and local data processing. Instead of the sweep sine excitation used in earlier research, several sine frequencies were applied in sequence to excite the concrete structure. The wireless sensors record the sine excitations and compute the time domain energy for each sine frequency locally to detect the energy change. By comparing the data of the damaged concrete frame with the healthy data, we are able to extract the damage information of the concrete frame. A relatively powerful wireless microcontroller was used to carry out the sampling and distributed data processing in real time. The distributed wireless network dramatically reduced the data transmission between the wireless sensors and the wireless coordinator, which in turn reduced the power consumption of the overall system.
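The local processing step described above can be sketched as a single-frequency energy computation on the node plus a simple damage index comparing the healthy and current states. The projection-based energy estimate and the index definition are assumptions used only for illustration, not the authors' exact scheme.

    import numpy as np

    def tone_energy(samples, fs, f_excite):
        """Energy of the structural response at one excitation frequency,
        obtained by projecting the record onto a sine/cosine pair (a single-bin
        DFT), cheap enough to run locally on a sensor node."""
        x = samples - samples.mean()
        t = np.arange(x.size) / fs
        i = x @ np.cos(2 * np.pi * f_excite * t)
        q = x @ np.sin(2 * np.pi * f_excite * t)
        return (i * i + q * q) / x.size

    def damage_index(healthy_energies, current_energies):
        """Relative change of the per-frequency energies versus the healthy baseline."""
        h = np.asarray(healthy_energies, dtype=float)
        c = np.asarray(current_energies, dtype=float)
        return float(np.abs(h - c).sum() / h.sum())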
Code of Federal Regulations, 2010 CFR
2010-07-01
... the hourly stack flow rate (in scfh). Only one methodology for determining NOX mass emissions shall be...-diluent continuous emissions monitoring system and a flow monitoring system in the common stack, record... maintain a flow monitoring system and diluent monitor in the duct to the common stack from each unit; or...
Wu, Hau-Tieng; Lewis, Gregory F; Davila, Maria I; Daubechies, Ingrid; Porges, Stephen W
2016-10-17
With recent advances in sensor and computer technologies, the ability to monitor peripheral pulse activity is no longer limited to the laboratory and clinic. Now inexpensive sensors, which interface with smartphones or other computer-based devices, are expanding into the consumer market. When appropriate algorithms are applied, these new technologies enable ambulatory monitoring of dynamic physiological responses outside the clinic in a variety of applications including monitoring fatigue, health, workload, fitness, and rehabilitation. Several of these applications rely upon measures derived from peripheral pulse waves measured via contact or non-contact photoplethysmography (PPG). As technologies move from contact to non-contact PPG, there are new challenges. The technology necessary to estimate average heart rate over a few seconds from a non-contact PPG is available. However, a technology to precisely measure instantaneous heart rate (IHR) from non-contact sensors, on a beat-to-beat basis, is more challenging. The objective of this paper is to develop an algorithm with the ability to accurately monitor IHR from peripheral pulse waves, which provides an opportunity to measure the neural regulation of the heart from the beat-to-beat heart rate pattern (i.e., heart rate variability). The adaptive harmonic model is applied to model the contact or non-contact PPG signals, and a new methodology, the Synchrosqueezing Transform (SST), is applied to extract IHR. The body sway rhythm inherent in the non-contact PPG signal is modeled and handled by the notion of a wave-shape function. The SST optimizes the extraction of IHR from the PPG signals, and the technique functions well even during periods of poor signal-to-noise ratio. We contrast the contact and non-contact indices of PPG-derived heart rate with a criterion electrocardiogram (ECG). ECG and PPG signals were monitored in 21 healthy subjects performing tasks with different physical demands. The root mean square error of IHR estimated by the SST is significantly better than that of commonly applied methods such as the autoregressive (AR) method. In the walking situation, where the AR method fails, the SST still provides a reasonably good result. The SST-processed PPG data provided an accurate estimate of the ECG-derived IHR and consistently performed better than commonly applied methods such as the autoregressive method.
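For orientation only, the sketch below extracts a crude instantaneous heart rate by following the dominant spectral ridge of the PPG in the cardiac band. It uses a plain short-time Fourier transform rather than the adaptive harmonic model and Synchrosqueezing Transform of the paper, so it should be read as a simplified stand-in; the window length and band limits are arbitrary assumptions.

    import numpy as np
    from scipy.signal import stft

    def instantaneous_heart_rate(ppg, fs, hr_band=(0.7, 3.0)):
        """Track the dominant spectral ridge of a PPG signal within the cardiac
        band (0.7-3 Hz, roughly 42-180 bpm) and return it in beats per minute."""
        f, t, Z = stft(ppg, fs=fs, nperseg=int(8 * fs), noverlap=int(7 * fs))
        band = (f >= hr_band[0]) & (f <= hr_band[1])
        ridge = f[band][np.abs(Z[band, :]).argmax(axis=0)]
        return t, 60.0 * ridge          # time frames and bpm per frame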
Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C
2014-01-01
Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (SBM) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or QCP), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model is seen to disagree with the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe that would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
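A minimal sketch of the similarity-based estimate behind SBM, under the assumption of a Gaussian similarity kernel and a small memory matrix of normal observations; the kernel choice, bandwidth and vital-sign values are illustrative, not the actual implementation tested in the paper.

    import numpy as np

    def sbm_estimate(memory, observation, bandwidth=1.0):
        """Expected 'normal' state as a similarity-weighted combination of
        remembered normal examples; a large residual between the observation
        and the estimate flags possible deterioration."""
        d = np.linalg.norm(memory - observation, axis=1)
        w = np.exp(-(d / bandwidth) ** 2)
        w /= w.sum()
        estimate = w @ memory
        residual = np.linalg.norm(observation - estimate)
        return estimate, residual

    # memory matrix: rows are multivariate vital-sign vectors from normal operation
    memory = np.array([[72, 120, 98.0], [75, 118, 97.0],
                       [70, 122, 99.0], [78, 116, 98.5]])   # e.g. HR, systolic BP, SpO2
    _, r_normal = sbm_estimate(memory, np.array([73, 119, 98.0]), bandwidth=5)
    _, r_drift = sbm_estimate(memory, np.array([88, 105, 96.0]), bandwidth=5)
    print(f"residual normal={r_normal:.2f}, drifting={r_drift:.2f}")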
Development of seismic tomography software for hybrid supercomputers
NASA Astrophysics Data System (ADS)
Nikitin, Alexandr; Serdyukov, Alexandr; Duchkov, Anton
2015-04-01
Seismic tomography is a technique for computing a velocity model of a geologic structure from the first-arrival travel times of seismic waves. The technique is used in the processing of regional and global seismic data, in seismic exploration for prospecting and exploration of mineral and hydrocarbon deposits, and in seismic engineering for monitoring the condition of engineering structures and the surrounding host medium. As a consequence of the development of seismic monitoring systems and the increasing volume of seismic data, there is a growing need for new, more effective computational algorithms for seismic tomography applications, with improved performance, accuracy and resolution. To achieve this goal, it is necessary to use modern high performance computing systems, such as supercomputers with hybrid architecture that use not only CPUs, but also accelerators and co-processors for computation. The goal of this research is the development of parallel seismic tomography algorithms and a software package for such systems, to be used in processing large volumes of seismic data (hundreds of gigabytes and more). These algorithms and the software package will be optimized for the most common computing devices used in modern hybrid supercomputers, such as Intel Xeon CPUs, NVIDIA Tesla accelerators and Intel Xeon Phi co-processors. In this work, the following general scheme of seismic tomography is utilized. Using an eikonal equation solver, arrival times of seismic waves are computed based on an assumed velocity model of the geologic structure being analyzed. In order to solve the linearized inverse problem, a tomographic matrix is computed that connects model adjustments with travel time residuals, and the resulting system of linear equations is regularized and solved to adjust the model. The effectiveness of parallel implementations of existing algorithms on the target architectures is considered. During the first stage of this work, algorithms were developed for execution on supercomputers using multicore CPUs only, with preliminary performance tests showing good parallel efficiency on large numerical grids. Porting of the algorithms to hybrid supercomputers is currently ongoing.
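The linearized inversion step described above can be illustrated with a dense Tikhonov-regularized solve. The damping value and the dense solver are assumptions for illustration; production codes on hybrid supercomputers would use sparse, parallel solvers.

    import numpy as np

    def tomography_update(G, residuals, damping=0.1):
        """One linearized tomography step: solve the regularized normal equations
        (G^T G + damping * I) dm = G^T r for the slowness-model update dm.
        G is the tomographic matrix of ray-path lengths per cell and r the
        vector of travel-time residuals."""
        n = G.shape[1]
        lhs = G.T @ G + damping * np.eye(n)
        rhs = G.T @ residuals
        return np.linalg.solve(lhs, rhs)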
Smart, Luke R; Mangat, Halinder S; Issarow, Benson; McClelland, Paul; Mayaya, Gerald; Kanumba, Emmanuel; Gerber, Linda M; Wu, Xian; Peck, Robert N; Ngayomela, Isidore; Fakhar, Malik; Stieg, Philip E; Härtl, Roger
2017-09-01
Severe traumatic brain injury (TBI) is a major cause of death and disability worldwide. Prospective TBI data from sub-Saharan Africa are sparse. This study examines epidemiology and explores management of patients with severe TBI and adherence to Brain Trauma Foundation Guidelines at a tertiary care referral hospital in Tanzania. Patients with severe TBI hospitalized at Bugando Medical Centre were recorded in a prospective registry including epidemiologic, clinical, treatment, and outcome data. Between September 2013 and October 2015, 371 patients with TBI were admitted; 33% (115/371) had severe TBI. Mean age was 32.0 years ± 20.1, and most patients were male (80.0%). Vehicular injuries were the most common cause of injury (65.2%). Approximately half of the patients (47.8%) were hospitalized on the day of injury. Computed tomography of the brain was performed in 49.6% of patients, and 58.3% were admitted to the intensive care unit. Continuous arterial blood pressure monitoring and intracranial pressure monitoring were not performed in any patient. Of patients with severe TBI, 38.3% received hyperosmolar therapy, and 35.7% underwent craniotomy. The 2-week mortality was 34.8%. Mortality of patients with severe TBI at Bugando Medical Centre, Tanzania, is approximately twice that in high-income countries. Intensive care unit care, computed tomography imaging, and continuous arterial blood pressure and intracranial pressure monitoring are underused or unavailable in the tertiary referral hospital setting. Improving outcomes after severe TBI will require concerted investment in prehospital care and improvement in availability of intensive care unit resources, computed tomography, and expertise in multidisciplinary care. Copyright © 2017 Elsevier Inc. All rights reserved.
Continuously Deformation Monitoring of Subway Tunnel Based on Terrestrial Point Clouds
NASA Astrophysics Data System (ADS)
Kang, Z.; Tuo, L.; Zlatanova, S.
2012-07-01
The deformation monitoring of subway tunnels is of extraordinary necessity. Therefore, a method for deformation monitoring based on terrestrial point clouds is proposed in this paper. First, the traditional adjacent-stations registration is replaced by section-controlled registration, so that the common control points can be used by each station and error accumulation within a section is thus avoided. Afterwards, the central axis of the subway tunnel is determined through the RANSAC (Random Sample Consensus) algorithm and curve fitting. Although of very high resolution, laser points are still discrete, and thus the vertical section is computed via quadric fitting of the vicinity of interest, instead of fitting the whole model of the subway tunnel; the section is determined by the intersection line rotated about the central axis of the tunnel within a vertical plane. The extraction of the vertical section is then optimized using RANSAC for the purpose of filtering out noise. Based on the extracted vertical sections, the volume of tunnel deformation is estimated by comparing vertical sections extracted at the same position from different epochs of point clouds. Furthermore, the continuously extracted vertical sections are used to evaluate the convergent tendency of the tunnel. The proposed algorithms are verified using real datasets in terms of accuracy and computation efficiency. The experimental result of the fitting accuracy analysis shows that the maximum deviation between interpolated and real points is 1.5 mm, and the minimum is 0.1 mm; the convergent tendency of the tunnel was detected by comparison of adjacent fitting radii. The maximum error is 6 mm, while the minimum is 1 mm. The computation cost of vertical section extraction is within 3 seconds per section, which proves high efficiency.
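A simplified sketch of comparing fitted cross-sections between epochs: a least-squares circle fit (Kasa method) stands in here for the quadric section fit used in the paper, and the radius difference serves as a convergence measure; both simplifications are assumptions for illustration.

    import numpy as np

    def fit_circle(points):
        """Least-squares circle fit to 2-D cross-section points (Kasa method);
        returns centre (cx, cy) and radius."""
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

    def convergence(section_epoch1, section_epoch2):
        """Radius change of the fitted section between two monitoring epochs."""
        return fit_circle(section_epoch2)[2] - fit_circle(section_epoch1)[2]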
Computer-Assisted Monitoring Of A Complex System
NASA Technical Reports Server (NTRS)
Beil, Bob J.; Mickelson, Eric M.; Sterritt, John M.; Costantino, Rob W.; Houvener, Bob C.; Super, Mike A.
1995-01-01
Propulsion System Advisor (PSA) computer-based system assists engineers and technicians in analyzing masses of sensory data indicative of operating conditions of space shuttle propulsion system during pre-launch and launch activities. Designed solely for monitoring; does not perform any control functions. Although PSA developed for highly specialized application, serves as prototype of noncontrolling, computer-based subsystems for monitoring other complex systems like electric-power-distribution networks and factories.
NASA Astrophysics Data System (ADS)
Beauducel, François; Bosson, Alexis; Randriamora, Frédéric; Anténor-Habazac, Christian; Lemarchand, Arnaud; Saurel, Jean-Marie; Nercessian, Alexandre; Bouin, Marie-Paule; de Chabalier, Jean-Bernard; Clouard, Valérie
2010-05-01
Seismological and volcanological observatories have common needs and often common practical problems for multidisciplinary data monitoring applications. In fact, access to integrated data in real time and estimation of measurement uncertainties are keys to an efficient interpretation, but instrument variety and the heterogeneity of data sampling and acquisition systems lead to difficulties that may hinder crisis management. At the Guadeloupe observatory, we have developed in recent years an operational system that attempts to answer these questions in the context of a pluri-instrumental observatory. Based on a single computer server, open source scripts (Matlab, Perl, Bash, Nagios) and a Web interface, the system offers: an extended database for managing networks, stations and sensors (maps, station files with log history, technical characteristics, meta-data, photos and associated documents); web-form interfaces for manual data input/editing and export (such as geochemical analyses and some of the deformation measurements); routine data processing with dedicated automatic scripts for each technique, production of validated data outputs, static graphs on preset moving time intervals, and possible e-mail alarms; and automatic status checks of computers, acquisition processes, stations and individual sensors with simple criteria (file updates and signal quality), displayed as synthetic pages for technical control. In the special case of seismology, WebObs includes a digital stripchart multichannel continuous seismogram associated with the EarthWorm acquisition chain (see companion paper Part 1), an event classification database, location scripts, automatic shakemaps and a regional catalog with associated hypocenter maps accessed through a user request form. This system provides real-time Internet access for integrated monitoring, has become a strong support for exchanges between scientists and technicians, and is widely open to interdisciplinary real-time modeling. It has been set up at the Martinique observatory and installation is planned this year at the Montserrat Volcanological Observatory. It is also in production at the geomagnetic observatory of Addis Abeba in Ethiopia.
Computation offloading for real-time health-monitoring devices.
Kalantarian, Haik; Sideris, Costas; Tuan Le; Hosseini, Anahita; Sarrafzadeh, Majid
2016-08-01
Among the major challenges in the development of real-time wearable health monitoring systems is to optimize battery life. One of the major techniques with which this objective can be achieved is computation offloading, in which portions of computation can be partitioned between the device and other resources such as a server or cloud. In this paper, we describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data between the wearable device and mobile application as a function of desired classification accuracy.
Automated Monitoring and Analysis of Social Behavior in Drosophila
Dankert, Heiko; Wang, Liming; Hoopfer, Eric D.; Anderson, David J.; Perona, Pietro
2009-01-01
We introduce a method based on machine vision for automatically measuring aggression and courtship in Drosophila melanogaster. The genetic and neural circuit bases of these innate social behaviors are poorly understood. High-throughput behavioral screening in this genetically tractable model organism is a potentially powerful approach, but it is currently very laborious. Our system monitors interacting pairs of flies, and computes their location, orientation and wing posture. These features are used for detecting behaviors exhibited during aggression and courtship. Among these, wing threat, lunging and tussling are specific to aggression; circling, wing extension (courtship “song”) and copulation are specific to courtship; locomotion and chasing are common to both. Ethograms may be constructed automatically from these measurements, saving considerable time and effort. This technology should enable large-scale screens for genes and neural circuits controlling courtship and aggression. PMID:19270697
Chappell, James; Freemont, Paul
2013-01-01
The characterization of DNA regulatory elements such as ribosome binding sites and transcriptional promoters is a fundamental aim of synthetic biology. Characterization of such DNA regulatory elements by monitoring the synthesis of fluorescent proteins is a commonly used technique to resolve the relative or absolute strengths. These measurements can be used in combination with mathematical models and computer simulation to rapidly assess performance of DNA regulatory elements both in isolation and in combination, to assist predictable and efficient engineering of complex novel biological devices and systems. Here we describe the construction and relative characterization of Escherichia coli (E. coli) σ(70) transcriptional promoters by monitoring the synthesis of green fluorescent protein (GFP) both in vivo in E. coli and in vitro in a E. coli cell-free transcription and translation reaction.
NASA Astrophysics Data System (ADS)
Rodes, C. E.; Chillrud, S. N.; Haskell, W. L.; Intille, S. S.; Albinali, F.; Rosenberger, M. E.
2012-09-01
Background: Metabolic functions typically increase with human activity, but optimal methods to characterize activity levels for real-time predictions of ventilation volume (l min-1) during exposure assessments have not been available. Could tiny, triaxial accelerometers be incorporated into personal level monitors to define periods of acceptable wearing compliance, and allow the exposures (μg m-3) to be extended to potential doses in μg min-1 kg-1 of body weight? Objectives: In a pilot effort, we tested: 1) whether appropriately processed accelerometer data could be utilized to predict compliance and, in linear regressions, to predict ventilation volumes in real time as an on-board component of personal level exposure sensor systems, and 2) whether locating the exposure monitors on the chest in the breathing zone provided comparable accelerometric data to other locations more typically utilized (waist, thigh, wrist, etc.). Methods: Prototype exposure monitors from RTI International and Columbia University were worn on the chest by a pilot cohort of adults while conducting an array of scripted activities (all <10 METS), spanning common recumbent, sedentary, and ambulatory activity categories. Referee Wocket accelerometers placed at various body locations allowed comparison with the chest-located exposure sensor accelerometers. An Oxycon Mobile mask was used to measure oral-nasal ventilation volumes in situ. For the subset of participants with complete data (n = 22), linear regressions were constructed (processed accelerometric variable versus ventilation rate) for each participant and exposure monitor type, and Pearson correlations computed to compare across scenarios. Results: Triaxial accelerometer data were demonstrated to be adequately sensitive indicators for predicting exposure monitor wearing compliance. Strong linear correlations (R values from 0.77 to 0.99) were observed for all participants for both exposure sensor accelerometer variables against ventilation volume for recumbent, sedentary, and ambulatory activities with MET values below about 6. The RTI monitors' mean R value of 0.91 was slightly higher than the Columbia monitors' mean of 0.86, due to the use of a 20 Hz data rate instead of a slower 1 Hz rate. A nominal mean regression slope was computed for the RTI system across participants and showed a modest RSD of +/-36.6%. Comparison of the correlation values of the exposure monitors with the Wocket accelerometers at various body locations showed statistically identical regressions for all sensors at the alternate hip, ankle, upper arm, thigh, and pocket locations, but not for the Wocket accelerometer located at the dominant-side wrist location (R = 0.57; p = 0.016). Conclusions: Even with a modest number of adult volunteers, the consistency and linearity of regression slopes for all subjects were very good, with excellent within-person Pearson correlations for the accelerometer versus ventilation volume data. Computing accelerometric standard deviations allowed good sensitivity for compliance assessments even for sedentary activities. These pilot findings supported the hypothesis that a common linear regression is likely to be usable for a wider range of adults to predict ventilation volumes from accelerometry data over a range of low to moderate energy level activities. The predicted volumes would then allow real-time estimates of potential dose, enabling more robust panel studies.
The poorer correlation in predicting ventilation rate for an accelerometer located on the wrist suggested that this location should not be considered for predictions of ventilation volume.
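A minimal calibration sketch consistent with the approach above, assuming the processed accelerometric variable is the windowed standard deviation of the triaxial acceleration magnitude (one plausible choice; the monitors' actual variable is not specified here), with a per-participant linear fit against measured ventilation.

    import numpy as np

    def accel_feature(acc_xyz, fs, window_s=10):
        """Per-window standard deviation of the triaxial acceleration magnitude."""
        mag = np.linalg.norm(acc_xyz, axis=1)
        n = int(window_s * fs)
        windows = mag[: mag.size // n * n].reshape(-1, n)
        return windows.std(axis=1)

    def calibrate(feature, ventilation_l_min):
        """Per-participant linear regression of ventilation volume on the feature."""
        slope, intercept = np.polyfit(feature, ventilation_l_min, 1)
        r = np.corrcoef(feature, ventilation_l_min)[0, 1]
        return slope, intercept, r

    def predict_ventilation(feature, slope, intercept):
        """Real-time prediction of ventilation volume (l/min) from the feature."""
        return slope * feature + intercept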
Data processing for water monitoring system
NASA Technical Reports Server (NTRS)
Monford, L.; Linton, A. T.
1978-01-01
Water monitoring data acquisition system is structured about central computer that controls sampling and sensor operation, and analyzes and displays data in real time. Unit is essentially separated into two systems: computer system, and hard wire backup system which may function separately or with computer.
Gorodetskiy, Vadim R; Mukhortova, Olga V; Aslanidis, Irakli P; Klapper, Wolfram; Probatova, Natalya A
2016-01-01
Subcutaneous panniculitis-like T cell lymphoma (SPTCL) is a very rare variant of non-Hodgkin’s lymphoma. Currently, there is no standard imaging method for staging of SPTCL nor for assessment of treatment response. Here, we describe our use of fluorine-18 fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) for staging and monitoring of treatment response in 3 cases of SPTCL. Primary staging by PET/CT showed that all 3 patients had multiple foci in the subcutaneous fat tissue, with SUVmax from 10.5 to 14.6. Involvement of intra-abdominal fat with high SUVmax was identified in 2 of the patients. Use of the triple drug regimen of gemcitabine, cisplatin and methylprednisolone (commonly known as “GEM-P”) as first-line therapy or second-line therapy facilitated complete metabolic response for all 3 cases. FDG PET/CT provides valuable information for staging and monitoring of treatment response and can reveal occult involvement of the intra-abdominal visceral fat. High FDG uptake on pre-treatment PET can identify patients with aggressive disease and help in selection of first-line therapy. PMID:27672640
The monitoring and managing application of cloud computing based on Internet of Things.
Luo, Shiliang; Ren, Bin
2016-07-01
Cloud computing and the Internet of Things are two hot topics in the Internet application field. The application of these two new technologies is the subject of intense discussion and research, but much less so in the field of medical monitoring and management. Thus, in this paper, we study and analyze the application of cloud computing and the Internet of Things in the medical field, and we combine the two techniques for medical monitoring and management. The model architecture for a remote monitoring cloud platform of healthcare information (RMCPHI) was established first. Then the RMCPHI architecture was analyzed. Finally an efficient PSOSAA algorithm was proposed for the medical monitoring and managing application of cloud computing. Simulation results showed that our proposed scheme can improve the efficiency by about 50%. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Reactor Operations Monitoring System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, M.M.
1989-01-01
The Reactor Operations Monitoring System (ROMS) is a VME based, parallel processor data acquisition and safety action system designed by the Equipment Engineering Section and Reactor Engineering Department of the Savannah River Site. The ROMS will be analyzing over 8 million signal samples per minute. Sixty-eight microprocessors are used in the ROMS in order to achieve a real-time data analysis. The ROMS is composed of multiple computer subsystems. Four redundant computer subsystems monitor 600 temperatures with 2400 thermocouples. Two computer subsystems share the monitoring of 600 reactor coolant flows. Additional computer subsystems are dedicated to monitoring 400 signals from assorted process sensors. Data from these computer subsystems are transferred to two redundant process display computer subsystems which present process information to reactor operators and to reactor control computers. The ROMS is also designed to carry out safety functions based on its analysis of process data. The safety functions include initiating a reactor scram (shutdown), the injection of neutron poison, and the loadshed of selected equipment. A complete development Reactor Operations Monitoring System has been built. It is located in the Program Development Center at the Savannah River Site and is currently being used by the Reactor Engineering Department in software development. The Equipment Engineering Section is designing and fabricating the process interface hardware. Upon proof of hardware and design concept, orders will be placed for the final five systems located in the three reactor areas, the reactor training simulator, and the hardware maintenance center.
The Accuracy of Cognitive Monitoring during Computer-Based Instruction.
ERIC Educational Resources Information Center
Garhart, Casey; Hannafin, Michael J.
This study was conducted to determine the accuracy of learners' comprehension monitoring during computer-based instruction and to assess the relationship between enroute monitoring and different levels of learning. Participants were 50 university undergraduate students enrolled in an introductory educational psychology class. All students received…
Buniatian, A A; Sablin, I N; Flerov, E V; Mierbekov, E M; Broĭtman, O G; Shevchenko, V V; Shitikov, I I
1995-01-01
Creation of computer monitoring systems (CMS) for operating rooms is one of the most important spheres of personal computer employment in anesthesiology. The authors developed a PC RS/AT-based CMS and have effectively used it for more than 2 years. This system permits comprehensive monitoring in cardiosurgical operations by real-time processing of the values of arterial and central venous pressure, pressure in the pulmonary artery, bioelectrical activity of the brain, and two temperature values. Use of this CMS helped appreciably improve patients' safety during surgery. The possibility of assessing brain function by computer monitoring of the EEG simultaneously with central hemodynamics and body temperature permits the anesthesiologist to objectively assess the depth of anesthesia and to diagnose cerebral hypoxia. The automated anesthesiological chart issued by the CMS after surgery reliably reflects the patient's status and the measures taken by the anesthesiologist.
A Computer Interview for Multivariate Monitoring of Psychiatric Outcome.
ERIC Educational Resources Information Center
Stevenson, John F.; And Others
Application of computer technology to psychiatric outcome measurement offers the promise of coping with increasing demands for extensive patient interviews repeated longitudinally. Described is the development of a cost-effective multi-dimensional tracking device to monitor psychiatric functioning, building on a previous local computer interview…
Monitoring Collaborative Activities in Computer Supported Collaborative Learning
ERIC Educational Resources Information Center
Persico, Donatella; Pozzi, Francesca; Sarti, Luigi
2010-01-01
Monitoring the learning process in computer supported collaborative learning (CSCL) environments is a key element for supporting the efficacy of tutor actions. This article proposes an approach for analysing learning processes in a CSCL environment to support tutors in their monitoring tasks. The approach entails tracking the interactions within…
The Crazy Business of Internet Peeping, Privacy, and Anonymity.
ERIC Educational Resources Information Center
Van Horn, Royal
2000-01-01
Peeping software takes several forms and can be used on a network or to monitor a certain computer. E-Mail Plus, for example, hides inside a computer and sends exact copies of incoming or outgoing e-mail anywhere. School staff with monitored computers should demand e-mail privacy. (MLH)
Graphical user interface for wireless sensor networks simulator
NASA Astrophysics Data System (ADS)
Paczesny, Tomasz; Paczesny, Daniel; Weremczuk, Jerzy
2008-01-01
Wireless Sensor Networks (WSN) are currently a very popular area of development. They are suited to many applications, from military through environment monitoring, healthcare, home automation and others. Those networks, when working in a dynamic, ad-hoc model, need effective protocols which must differ from common computer network algorithms. Research on those protocols would be difficult without a simulation tool, because real applications often use many nodes and tests on such big networks take much effort and cost. The paper presents a Graphical User Interface (GUI) for a simulator which is dedicated to WSN studies, especially in routing and data link protocol evaluation.
Potential techniques for non-destructive evaluation of cable materials
NASA Astrophysics Data System (ADS)
Gillen, Kenneth T.; Clough, Roger L.; Mattson, Bengt; Stenberg, Bengt; Oestman, Erik
This paper describes the connection between mechanical degradation of common cable materials, in radiation and elevated temperature environments, and density increases caused by the oxidation which leads to this degradation. Two techniques based on density changes are suggested as potential non-destructive evaluation (NDE) procedures which may be applicable to monitoring the mechanical condition of cable materials in power plant environments. The first technique is direct measurement of density changes, via a density gradient column, using small shavings removed from the surface of cable jackets at selected locations. The second technique is computed X-ray tomography, utilizing a portable scanning device.
Respiratory medicine of reptiles.
Schumacher, Juergen
2011-05-01
Noninfectious and infectious causes have been implicated in the development of respiratory tract disease in reptiles. Treatment modalities in reptiles have to account for species differences in response to therapeutic agents as well as interpretation of diagnostic findings. Data on effective drugs and dosages for the treatment of respiratory diseases are often lacking in reptiles. Recently, advances have been made on the application of advanced imaging modalities, especially computed tomography for the diagnosis and treatment monitoring of reptiles. This article describes common infectious and noninfectious causes of respiratory disease in reptiles, including diagnostic and therapeutic regimen. Copyright © 2011 Elsevier Inc. All rights reserved.
Person-Locator System Based On Wristband Radio Transponders
NASA Technical Reports Server (NTRS)
Mintz, Frederick W.; Blaes, Brent R.; Chandler, Charles W.
1995-01-01
Computerized system based on wristband radio frequency (RF), passive transponders is being developed for use in real-time tracking of individuals in custodial institutions like prisons and mental hospitals. Includes monitoring system that contains central computer connected to low-power, high-frequency central transceiver. Transceiver connected to miniature transceiver nodes mounted unobtrusively at known locations throughout the institution. Wristband transponders embedded in common hospital wristbands. Wristbands tamperproof: each contains embedded wire loop which, when broken or torn off and discarded, causes wristband to disappear from system, thus causing alarm. Individuals could be located in a timely fashion at relatively low cost.
Dynamic Computation Offloading for Low-Power Wearable Health Monitoring Systems.
Kalantarian, Haik; Sideris, Costas; Mortazavi, Bobak; Alshurafa, Nabil; Sarrafzadeh, Majid
2017-03-01
The objective of this paper is to describe and evaluate an algorithm to reduce power usage and increase battery lifetime for wearable health-monitoring devices. We describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data processing between the wearable device and mobile application as a function of desired classification accuracy. By making the correct offloading decision based on current system parameters, we show that we are able to reduce system power by as much as 20%. We demonstrate that computation offloading can be applied to real-time monitoring systems, and yields significant power savings. Making correct offloading decisions for health monitoring devices can extend battery life and improve adherence.
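As a sketch of the kind of decision involved, the code below picks the processing stage after which data is handed off to the phone, minimising device energy subject to a desired classification accuracy. The single-cut partition model, the stage definitions and the energy numbers are illustrative assumptions, not the authors' scheme.

    from dataclasses import dataclass

    @dataclass
    class Stage:
        name: str
        on_device_mj: float      # energy to compute this stage locally (millijoules)
        tx_payload_kb: float     # payload size if results so far are transmitted
        accuracy: float          # classification accuracy if the partition ends here

    def choose_partition(stages, radio_mj_per_kb, min_accuracy):
        """Pick the stage after which to offload, minimising device energy while
        meeting the desired accuracy."""
        best, best_energy = None, float("inf")
        running = 0.0
        for i, s in enumerate(stages):
            running += s.on_device_mj
            energy = running + s.tx_payload_kb * radio_mj_per_kb
            if s.accuracy >= min_accuracy and energy < best_energy:
                best, best_energy = i, energy
        return best, best_energy

    # hypothetical pipeline: raw samples -> features -> on-device classification
    stages = [Stage("raw", 0.0, 40.0, 0.97),
              Stage("features", 2.0, 4.0, 0.95),
              Stage("classified", 6.0, 0.1, 0.93)]
    print(choose_partition(stages, radio_mj_per_kb=0.5, min_accuracy=0.94))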
Performance evaluation of the Engineering Analysis and Data Systems (EADS) 2
NASA Technical Reports Server (NTRS)
Debrunner, Linda S.
1994-01-01
The Engineering Analysis and Data System (EADS) II (1) was installed in March 1993 to provide high performance computing for science and engineering at Marshall Space Flight Center (MSFC). EADS II increased the computing capabilities over the existing EADS facility in the areas of throughput and mass storage. EADS II includes a Vector Processor Compute System (VPCS), a Virtual Memory Compute System (CFS), a Common Output System (COS), as well as an Image Processing Station, Mini Super Computers, and Intelligent Workstations. These facilities are interconnected by a sophisticated network system. This work considers only the performance of the VPCS and the CFS. The VPCS is a Cray YMP. The CFS is implemented on an RS 6000 using the UniTree Mass Storage System. To better meet the science and engineering computing requirements, EADS II must be monitored, its performance analyzed, and appropriate modifications for performance improvement made. Implementing this approach requires tools to assist in performance monitoring and analysis. In Spring 1994, PerfStat 2.0 was purchased to meet these needs for the VPCS and the CFS. PerfStat (2) is a set of tools that can be used to analyze both historical and real-time performance data. Its flexible design allows significant user customization. The user identifies what data is collected, how it is classified, and how it is displayed for evaluation. Both graphical and tabular displays are supported. The capability of the PerfStat tool was evaluated, appropriate modifications to EADS II to optimize throughput and enhance productivity were suggested and implemented, and the effects of these modifications on the system's performance were observed. In this paper, the PerfStat tool is described, then its use with EADS II is outlined briefly. Next, the evaluation of the VPCS, as well as the modifications made to the system, are described. Finally, conclusions are drawn and recommendations for future work are outlined.
NASA Technical Reports Server (NTRS)
Simon, Donald L.
2010-01-01
Aircraft engine performance trend monitoring and gas path fault diagnostics are closely related technologies that assist operators in managing the health of their gas turbine engine assets. Trend monitoring is the process of monitoring the gradual performance change that an aircraft engine will naturally incur over time due to turbomachinery deterioration, while gas path diagnostics is the process of detecting and isolating the occurrence of any faults impacting engine flow-path performance. Today, performance trend monitoring and gas path fault diagnostic functions are performed by a combination of on-board and off-board strategies. On-board engine control computers contain logic that monitors for anomalous engine operation in real time. Off-board ground stations are used to conduct fleet-wide engine trend monitoring and fault diagnostics based on data collected from each engine each flight. Continuing advances in avionics are enabling the migration of portions of the ground-based functionality on-board, giving rise to more sophisticated on-board engine health management capabilities. This paper reviews the conventional engine performance trend monitoring and gas path fault diagnostic architecture commonly applied today, and presents a proposed enhanced on-board architecture for future applications. The enhanced architecture gains real-time access to an expanded quantity of engine parameters, and provides advanced on-board model-based estimation capabilities. The benefits of the enhanced architecture include the real-time continuous monitoring of engine health, the early diagnosis of fault conditions, and the estimation of unmeasured engine performance parameters. A future vision to advance the enhanced architecture is also presented and discussed.
Accommodative and convergence response to computer screen and printed text
NASA Astrophysics Data System (ADS)
Ferreira, Andreia; Lira, Madalena; Franco, Sandra
2011-05-01
The aim of this work was to find out whether differences exist in the accommodative and convergence responses for different computer monitors and a printed text. We also tried to relate the horizontal heterophoria value and the accommodative response to the symptoms associated with computer use. Two independent experiments were carried out in this study. The first experiment measured the accommodative response of 89 subjects using the Grand Seiko WAM-5500 (Grand Seiko Co., Ltd., Japan). The accommodative response was measured using three computer monitors, a 17-inch cathode ray tube (CRT) and two liquid crystal displays (LCDs), one 17-inch (LCD17) and one 15-inch (LCD15), as well as a printed text. The text displayed was always the same for all the subjects and tests. A second experiment aimed to measure the value of habitual horizontal heterophoria in 80 subjects using the Von Graefe technique. The measurements were obtained using the same target presented on two different computer monitors, a 19-inch cathode ray tube (CRT) and a 19-inch liquid crystal display (LCD), and printed on paper. A small survey about the incidence and prevalence of symptoms was performed similarly in both experiments. In the first experiment, the accommodative response was higher for the CRT and LCDs than for paper. No significantly different response was found between the two LCD monitors. The second experiment showed that the heterophoria values were similar for all the stimuli. On average, participants presented a small exophoria. In both experiments, asthenopia was the symptom with the highest incidence. There are different accommodative responses when reading on paper or on computer monitors. This difference is more significant for CRT monitors. On the other hand, there was no difference in the convergence values between the computer monitors and paper. The symptoms associated with the use of computers are not related to the increase in accommodation or to the horizontal heterophoria values.
Lopes, Marta B; Calado, Cecília R C; Figueiredo, Mário A T; Bioucas-Dias, José M
2017-06-01
The monitoring of biopharmaceutical products using Fourier transform infrared (FT-IR) spectroscopy relies on calibration techniques involving the acquisition of spectra of bioprocess samples along the process. The most commonly used method for that purpose is partial least squares (PLS) regression, under the assumption that a linear model is valid. Despite being successful in the presence of small nonlinearities, linear methods may fail in the presence of strong nonlinearities. This paper studies the potential usefulness of nonlinear regression methods for predicting, from in situ near-infrared (NIR) and mid-infrared (MIR) spectra acquired in high-throughput mode, biomass and plasmid concentrations in Escherichia coli DH5-α cultures producing the plasmid model pVAX-LacZ. The linear methods PLS and ridge regression (RR) are compared with their kernel (nonlinear) versions, kPLS and kRR, as well as with the (also nonlinear) relevance vector machine (RVM) and Gaussian process regression (GPR). For the systems studied, RR provided better predictive performances compared to the remaining methods. Moreover, the results point to further investigation based on larger data sets whenever differences in predictive accuracy between a linear method and its kernelized version could not be found. The use of nonlinear methods, however, shall be judged regarding the additional computational cost required to tune their additional parameters, especially when the less computationally demanding linear methods herein studied are able to successfully monitor the variables under study.
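A minimal sketch of the linear-versus-kernel comparison described above, using scikit-learn's Ridge and KernelRidge on synthetic stand-in spectra; the data, kernel and hyperparameters are placeholders, and a real calibration would use the measured NIR/MIR spectra with a proper hyperparameter search.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    # X: spectra (samples x wavenumbers), y: biomass or plasmid concentration.
    # Synthetic stand-ins only, for illustration.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 300))
    y = X[:, :10].sum(axis=1) + 0.1 * rng.normal(size=60)

    linear = Ridge(alpha=1.0)
    kernel = KernelRidge(alpha=1.0, kernel="rbf", gamma=1e-3)

    for name, model in [("RR", linear), ("kRR", kernel)]:
        score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {score:.3f}")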
Monitoring techniques and alarm procedures for CMS services and sites in WLCG
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molina-Perez, J.; Bonacorsi, D.; Gutsche, O.
2012-01-01
The CMS offline computing system is composed of roughly 80 sites (including most experienced T3s) and a number of central services to distribute, process and analyze data worldwide. A high level of stability and reliability is required from the underlying infrastructure and services, partially covered by local or automated monitoring and alarming systems such as Lemon and SLS: the former collects metrics from sensors installed on computing nodes and triggers alarms when values are out of range, while the latter measures the quality of service and warns managers when a service is affected. CMS has established computing shift procedures with personnel operating worldwide from remote Computing Centers, under the supervision of the Computing Run Coordinator at CERN. This dedicated 24/7 computing shift personnel contributes to detecting and reacting in a timely manner to any unexpected error, and hence ensures that CMS workflows are carried out efficiently and in a sustained manner. Synergy among all the involved actors is exploited to ensure the 24/7 monitoring, alarming and troubleshooting of the CMS computing sites and services. We review the deployment of the monitoring and alarming procedures, and report on the experience gained throughout the first two years of LHC operation. We describe the efficiency of the communication tools employed, the coherent monitoring framework, the proactive alarming systems and the proficient troubleshooting procedures that helped the CMS Computing facilities and infrastructure to operate at high reliability levels.
Yang, Shu; Qiu, Yuyan; Shi, Bo
2016-09-01
This paper explores methods for building an Internet of Things for regional ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes the implementation principles of automatic identification of arrhythmia types. It also studies the system architecture and key techniques of the cloud computing platform, including server load balancing technology, reliable storage of massive numbers of small files, and the implementation of a quick search function.
NASA Astrophysics Data System (ADS)
Samodurov, V. A.; Rodin, A. E.; Kitaeva, M. A.; Isaev, E. A.; Dumsky, D. V.; Churakov, D. D.; Manzyuk, M. O.
Multi-beam observations with the BSA FIAN radio telescope started in 2012. As of July 2014 it observes daily with 96 beams covering declinations from -8 to +42 degrees in the frequency band 109-111.5 MHz. The number of frequency bands ranges from 6 to 32, and the time constant from 0.1 to 0.0125 s. In the receiving mode with 32 bands (plus one common band) and a time constant of 12.5 ms (80 samples per second), 33 x 96 x 80 four-byte reals are produced every second, i.e. about 87.5 GB per day (up to 32 TB per year). These data offer enormous opportunities for both short- and long-term monitoring of various classes of radio sources (including radio transients), for monitoring space weather and the Earth's ionosphere, for searches for different classes of radio sources, etc. The main aims of our work are: a) to obtain new scientific data on different classes of discrete radio sources and to construct physical models of them and of their evolution, based on continuous round-the-clock digital radio monitoring of the sky at 109-111.5 MHz and on cross-analysis with third-party surveys at other frequencies; c) to launch the streaming data on various types of high-performance computing systems, including the creation of a public distributed-computing system for thousands of users based on BOINC technology. The BOINC client for astronomical data from a monitoring survey of a large part of the entire sky has almost no analogues. We already have first scientific results (new pulsars and some new types of radio sources).
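The daily and yearly data volumes quoted above follow directly from the sampling parameters; a quick back-of-the-envelope check (assuming four-byte samples, as stated) is:

    bands = 33              # 32 frequency bands plus one common band
    beams = 96
    rate_hz = 80            # samples per second (12.5 ms time constant)
    bytes_per_sample = 4
    seconds_per_day = 86_400

    daily_bytes = bands * beams * rate_hz * bytes_per_sample * seconds_per_day
    print(f"{daily_bytes / 1e9:.1f} GB per day")          # ~87.6 GB, consistent with the ~87.5 GB quoted
    print(f"{daily_bytes * 365 / 1e12:.1f} TB per year")  # ~32 TB per year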
Overview of selected surrogate technologies for continuous suspended-sediment monitoring
Gray, J.R.; Gartner, J.W.
2006-01-01
Surrogate technologies for inferring selected characteristics of suspended sediments in surface waters are being tested by the U.S. Geological Survey and several partners with the ultimate goal of augmenting or replacing traditional monitoring methods. Optical properties of water such as turbidity and optical backscatter are the most commonly used surrogates for suspended-sediment concentration, but use of other techniques such as those based on acoustic backscatter, laser diffraction, digital photo-optic, and pressure-difference principles is increasing for concentration and, in some cases, particle-size distribution and flux determinations. The potential benefits of these technologies include acquisition of automated, continuous, quantifiably accurate data obtained with increased safety and at less expense. When suspended-sediment surrogate data meet consensus accuracy criteria and appropriate sediment-record computation techniques are applied, these technologies have the potential to revolutionize the way fluvial-sediment data are collected, analyzed, and disseminated.
Reconfigurable intelligent sensors for health monitoring: a case study of pulse oximeter sensor.
Jovanov, E; Milenkovic, A; Basham, S; Clark, D; Kelley, D
2004-01-01
Design of low-cost, miniature, lightweight, ultra low-power, intelligent sensors capable of customization and seamless integration into a body area network for health monitoring applications presents one of the most challenging tasks for system designers. To answer this challenge we propose a reconfigurable intelligent sensor platform featuring a low-power microcontroller, a low-power programmable logic device, a communication interface, and a signal conditioning circuit. The proposed solution promises a cost-effective, flexible platform that allows easy customization, run-time reconfiguration, and energy-efficient computation and communication. The development of a common platform for multiple physical sensors and a repository of both software procedures and soft intellectual property cores for hardware acceleration will increase reuse and alleviate costs of transition to a new generation of sensors. As a case study, we present an implementation of a reconfigurable pulse oximeter sensor.
Revealing livestock effects on bunchgrass vegetation with Landsat ETM+ data across a grazing season
NASA Astrophysics Data System (ADS)
Jansen, Vincent S.
Remote sensing provides monitoring solutions for more informed grazing management. To investigate the ability to detect the effects of cattle grazing on bunchgrass vegetation with Landsat Enhanced Thematic Mapper Plus (ETM+) data, we conducted a study on the Zumwalt Prairie in northeastern Oregon across a gradient of grazing intensities. Biophysical vegetation data was collected on vertical structure, biomass, and cover at three different time periods during the grazing season: June, August, and October 2012. To relate these measures to the remotely sensed Landsat ETM+ data, Pearson's correlations and multiple regression models were computed. Using the best models, predicted vegetation metrics were then mapped across the study area. Results indicated that models using common vegetation indices had the ability to discern different levels of grazing across the study area. Results can be distributed to land managers to help guide grassland conservation by improving monitoring of bunchgrass vegetation for sustainable livestock management.
Detection of endoscopic looping during colonoscopy procedure by using embedded bending sensors
Bruce, Michael; Choi, JungHun
2018-01-01
Background: Looping of the colonoscope shaft during the procedure is one of the most common obstacles encountered by colonoscopists. It occurs in 91% of cases, with the N-sigmoid loop being the most common, occurring in 79% of cases. Purpose: Herein, a novel system is developed that gives a complete three-dimensional (3D) vector image of the shaft as it passes through the colon, to aid the colonoscopist in detecting loops before they form. Patients and methods: A series of connected links spans the middle 50% of the shaft, where loops are likely to form. Two potentiometers are attached at each joint to measure angular deflection in two directions to allow for 3D positioning. This 3D positioning is converted into a 3D vector image using computer software; MATLAB software has been used to display the image on a computer monitor. For different configurations of the colon model, the system determined the looping status. Results: Different loop configurations (N loop, reverse gamma loop, and reverse splenic flexure) were well defined using the 3D vector image. Conclusion: The novel sensory system can accurately define the various configurations of the colon during the colonoscopy procedure. PMID:29849469
SISYPHUS: A high performance seismic inversion factory
NASA Astrophysics Data System (ADS)
Gokhberg, Alexey; Simutė, Saulė; Boehm, Christian; Fichtner, Andreas
2016-04-01
In the recent years the massively parallel high performance computers became the standard instruments for solving the forward and inverse problems in seismology. The respective software packages dedicated to forward and inverse waveform modelling specially designed for such computers (SPECFEM3D, SES3D) became mature and widely available. These packages achieve significant computational performance and provide researchers with an opportunity to solve problems of bigger size at higher resolution within a shorter time. However, a typical seismic inversion process contains various activities that are beyond the common solver functionality. They include management of information on seismic events and stations, 3D models, observed and synthetic seismograms, pre-processing of the observed signals, computation of misfits and adjoint sources, minimization of misfits, and process workflow management. These activities are time consuming, seldom sufficiently automated, and therefore represent a bottleneck that can substantially offset performance benefits provided by even the most powerful modern supercomputers. Furthermore, a typical system architecture of modern supercomputing platforms is oriented towards the maximum computational performance and provides limited standard facilities for automation of the supporting activities. We present a prototype solution that automates all aspects of the seismic inversion process and is tuned for the modern massively parallel high performance computing systems. We address several major aspects of the solution architecture, which include (1) design of an inversion state database for tracing all relevant aspects of the entire solution process, (2) design of an extensible workflow management framework, (3) integration with wave propagation solvers, (4) integration with optimization packages, (5) computation of misfits and adjoint sources, and (6) process monitoring. The inversion state database represents a hierarchical structure with branches for the static process setup, inversion iterations, and solver runs, each branch specifying information at the event, station and channel levels. The workflow management framework is based on an embedded scripting engine that allows definition of various workflow scenarios using a high-level scripting language and provides access to all available inversion components represented as standard library functions. At present the SES3D wave propagation solver is integrated in the solution; the work is in progress for interfacing with SPECFEM3D. A separate framework is designed for interoperability with an optimization module; the workflow manager and optimization process run in parallel and cooperate by exchanging messages according to a specially designed protocol. A library of high-performance modules implementing signal pre-processing, misfit and adjoint computations according to established good practices is included. Monitoring is based on information stored in the inversion state database and at present implements a command line interface; design of a graphical user interface is in progress. The software design fits well into the common massively parallel system architecture featuring a large number of computational nodes running distributed applications under control of batch-oriented resource managers. The solution prototype has been implemented on the "Piz Daint" supercomputer provided by the Swiss Supercomputing Centre (CSCS).
System and Method for Monitoring Distributed Asset Data
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry (Inventor)
2015-01-01
A computer-based monitoring system, and a monitoring method implemented in computer software, for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
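A minimal sketch of the regression-based idea described above is given below: each asset fits a model of performance against working conditions and keeps only compressed summaries (coefficients and residual statistics), with a crude drift alarm on recent residuals. All variable names, data, and thresholds are illustrative assumptions, not the patented algorithm.

    import numpy as np

    def fit_condition_model(conditions, performance):
        # Least-squares model: performance ~ [1, conditions]
        X = np.column_stack([np.ones(len(conditions)), conditions])
        coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
        return coef

    def residuals(coef, conditions, performance):
        X = np.column_stack([np.ones(len(conditions)), conditions])
        return performance - X @ coef

    # Each asset keeps only compressed summaries (coefficients, residual statistics),
    # so raw records never need to leave the asset's local data store.
    rng = np.random.default_rng(1)
    conditions = rng.uniform(0, 10, size=(500, 2))
    performance = 2.0 + conditions @ np.array([0.5, -0.3]) + 0.05 * rng.normal(size=500)

    coef = fit_condition_model(conditions, performance)
    res = residuals(coef, conditions, performance)
    drift = abs(res[-50:].mean()) > 3 * res[:-50].std()   # crude trend/drift alarm
    print("condition drift detected:", bool(drift))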
Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał
2016-01-01
Using mobile robots or unmanned vehicles to assist optimal wireless sensors deployment in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the issues of the application of numerical optimization and computer simulation techniques to on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables a continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a network connectivity-maintaining mobility model utilizing the concept of the virtual potential function for calculating the motion trajectories of platforms carrying sensors. Their quality and utility have been justified through simulation experiments and are discussed in the final part of the paper. PMID:27649186
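To make the virtual-potential idea concrete, the following rough sketch moves nodes along the gradient of a simple potential with pairwise repulsion (to spread coverage) and attraction toward the sink (to keep the network reachable). The force constants and update rule are assumptions for illustration and are not the authors' algorithms.

    import numpy as np

    def potential_step(positions, sink, step=0.05, d_ref=1.0, k_rep=1.0, k_att=0.2):
        new_positions = positions.copy()
        for i, p in enumerate(positions):
            force = k_att * (sink - p)                  # attraction keeps nodes reachable from the sink
            for j, q in enumerate(positions):
                if i == j:
                    continue
                diff = p - q
                dist = np.linalg.norm(diff) + 1e-9
                if dist < d_ref:                        # repel only close neighbours
                    force += k_rep * (d_ref - dist) * diff / dist
            new_positions[i] = p + step * force
        return new_positions

    positions = np.random.default_rng(2).uniform(0, 1, size=(10, 2))
    sink = np.array([0.5, 0.5])
    for _ in range(200):
        positions = potential_step(positions, sink)
    print(positions.round(2))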
An affordable cuff-less blood pressure estimation solution.
Jain, Monika; Kumar, Niranjan; Deb, Sujay
2016-08-01
This paper presents a cuff-less hypertension pre-screening device that non-invasively monitors Blood Pressure (BP) and Heart Rate (HR) continuously. The proposed device simultaneously records two clinically significant and highly correlated biomedical signals, viz., the Electrocardiogram (ECG) and Photoplethysmogram (PPG). The device provides a common data acquisition platform that can interface with a PC/laptop, smartphone/tablet, Raspberry Pi, etc. The hardware stores and processes the recorded ECG and PPG in order to extract real-time BP and HR using a kernel regression approach. The BP and HR estimation error is measured in terms of normalized mean square error, Error Standard Deviation (ESD) and Mean Absolute Error (MAE), with respect to a clinically proven digital BP monitor (OMRON HBP1300). The computed error falls within the maximum allowable error specified by the Association for the Advancement of Medical Instrumentation: MAE ≤ 5 mmHg and ESD ≤ 8 mmHg. The results are also validated using a two-tailed dependent-sample t-test. The proposed device is a portable, low-cost, home- and clinic-based solution for continuous health monitoring.
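As an illustration of the kernel-regression idea mentioned above, a minimal Nadaraya-Watson estimator is sketched below; the pulse-transit-time feature and calibration values are invented for the example and are not the device's calibrated model.

    import numpy as np

    def kernel_regress(x_query, x_train, y_train, bandwidth=0.02):
        # Gaussian-weighted average of training targets (Nadaraya-Watson estimator)
        w = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
        return np.sum(w * y_train) / np.sum(w)

    ptt_train = np.array([0.18, 0.20, 0.22, 0.24, 0.26])   # seconds, ECG R-peak to PPG foot (assumed feature)
    sbp_train = np.array([135., 128., 121., 115., 110.])   # systolic BP, mmHg (reference cuff readings)

    print(kernel_regress(0.21, ptt_train, sbp_train))       # interpolated systolic estimate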
Pandey, Parul; Lee, Eun Kyung; Pompili, Dario
2016-11-01
Stress is one of the key factors that impact the quality of our daily life, from productivity and efficiency in production processes to the ability of (civilian and military) individuals to make rational decisions. Stress can also propagate from one individual to others working in close proximity or toward a common goal, e.g., in a military operation or workforce. Real-time assessment of the stress of individuals alone is, however, not sufficient, as understanding its source and the direction in which it propagates in a group of people is equally, if not more, important. A continuous, near real-time, in situ personal stress monitoring system is envisioned to quantify the stress level of individuals and its direction of propagation in a team. However, stress monitoring of an individual via his/her mobile device may not always be possible for extended periods of time due to the limited battery capacity of these devices. To overcome this challenge, a novel distributed mobile computing framework is proposed to organize the resources in the vicinity and form a mobile device cloud that enables offloading of computation tasks in the stress detection algorithm from resource-constrained devices (low residual battery, limited CPU cycles) to resource-rich devices. Our framework also supports computing parallelization and workflows, defining how the data and tasks divided/assigned among the entities of the framework are designed. The direction of propagation and the magnitude of influence of stress in a group of individuals are studied by applying real-time, in situ analysis of Granger causality. Tangible benefits (in terms of energy expenditure and execution time) of the proposed framework, in comparison to a centralized framework, are presented via thorough simulations and real experiments.
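The pairwise Granger-causality test that underlies such a propagation analysis can be run, for example, with statsmodels; the two stress time series below are synthetic stand-ins, and the lag choice is an assumption.

    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(3)
    stress_a = rng.normal(size=300).cumsum()
    stress_b = np.roll(stress_a, 5) + 0.5 * rng.normal(size=300)   # b lags a by 5 samples

    # Does stress_a help predict stress_b? Column order is [effect, cause].
    data = np.column_stack([stress_b, stress_a])
    results = grangercausalitytests(data, maxlag=8, verbose=False)
    p_value = results[5][0]["ssr_ftest"][1]
    print(f"p-value at lag 5: {p_value:.4f}")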
48 CFR 252.204-7011 - Alternative Line Item Structure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Unit Unit price Amount 0001 Computer, Desktop with CPU, Monitor, Keyboard and Mouse 20 EA Alternative... Unit Unit Price Amount 0001 Computer, Desktop with CPU, Keyboard and Mouse 20 EA 0002 Monitor 20 EA...
48 CFR 252.204-7011 - Alternative Line Item Structure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Unit Unit price Amount 0001 Computer, Desktop with CPU, Monitor, Keyboard and Mouse 20 EA Alternative... Unit Unit Price Amount 0001 Computer, Desktop with CPU, Keyboard and Mouse 20 EA 0002 Monitor 20 EA...
48 CFR 252.204-7011 - Alternative Line Item Structure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Unit Unit price Amount 0001 Computer, Desktop with CPU, Monitor, Keyboard and Mouse 20 EA Alternative... Unit Unit Price Amount 0001 Computer, Desktop with CPU, Keyboard and Mouse 20 EA 0002 Monitor 20 EA...
48 CFR 252.204-7011 - Alternative Line Item Structure.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Unit Unit price Amount 0001 Computer, Desktop with CPU, Monitor, Keyboard and Mouse 20 EA Alternative... Unit Unit Price Amount 0001 Computer, Desktop with CPU, Keyboard and Mouse 20 EA 0002 Monitor 20 EA...
ATLAS Distributed Computing Monitoring tools during the LHC Run I
NASA Astrophysics Data System (ADS)
Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration
2014-06-01
This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or a service expert; ATLAS national contacts and sites, for the real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During LHC Run I a significant development effort was invested in the standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements and the re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, the possibility of automating actions under well-defined conditions correlating multiple data sources has become feasible. In this contribution we also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
Computer-aided video exposure monitoring.
Walsh, P T; Clark, R D; Flaherty, S; Gentry, S J
2000-01-01
A computer-aided video exposure monitoring system was used to record exposure information. The system comprised a handheld camcorder, a portable video cassette recorder, a radio-telemetry transmitter/receiver, handheld or notebook computers for remote data logging, photoionization gas/vapor detectors (PIDs), and a personal aerosol monitor. The following workplaces were surveyed using the system: dry cleaning establishments, monitoring tetrachloroethylene in the air and in breath; a printing works, monitoring a white-spirit-type solvent; a tire manufacturing factory, monitoring rubber fume; and a slate quarry, monitoring respirable dust and quartz. The system based on the handheld computer, in particular, simplified the data acquisition process compared with earlier systems in use by our laboratory. The equipment is more compact and easier to operate, and allows more accurate calibration of the instrument reading on the video image. Although a variety of data display formats are possible, the best format for videos intended for educational and training purposes was the review-preview chart superimposed on the video image of the work process. Recommendations for reducing exposure, by engineering or by modifying work practice, were possible through use of the video exposure system in the dry cleaning and tire manufacturing applications. The slate quarry work illustrated how the technique can be used to quickly test ventilation configurations to see their effect on the worker's personal exposure.
Dormann, H; Criegee-Rieck, M; Neubert, A; Egger, T; Levy, M; Hahn, E G; Brune, K
2004-02-01
To investigate the effectiveness of a computer monitoring system that detects adverse drug reactions (ADRs) by laboratory signals in gastroenterology. A prospective, 6-month, pharmaco-epidemiological survey was carried out on a gastroenterological ward at the University Hospital Erlangen-Nuremberg. Two methods were used to identify ADRs. (i) All charts were reviewed daily by physicians and clinical pharmacists. (ii) A computer monitoring system generated a daily list of automatic laboratory signals and alerts of ADRs, including patient data and dates of events. One hundred and nine ADRs were detected in 474 admissions (377 patients). The computer monitoring system generated 4454 automatic laboratory signals from 39 819 laboratory parameters tested, and issued 2328 alerts, 914 (39%) of which were associated with ADRs; 574 (25%) were associated with ADR-positive admissions. Of all the alerts generated, signals of hepatotoxicity (1255), followed by coagulation disorders (407) and haematological toxicity (207), were prevalent. Correspondingly, the prevailing ADRs were concerned with the metabolic and hepato-gastrointestinal system (61). The sensitivity was 91%: 69 of 76 ADR-positive patients were indicated by an alert. The specificity of alerts was increased from 23% to 76% after implementation of an automatic laboratory signal trend monitoring algorithm. This study shows that a computer monitoring system is a useful tool for the systematic and automated detection of ADRs in gastroenterological patients.
ERIC Educational Resources Information Center
Nelson, Peter M.; Van Norman, Ethan R.; Klingbeil, Dave A.; Parker, David C.
2017-01-01
Although extensive research exists on the use of curriculum-based measures for progress monitoring, little is known about using computer adaptive tests (CATs) for progress-monitoring purposes. The purpose of this study was to evaluate the impact of the frequency of data collection on individual and group growth estimates using a CAT. Data were…
LEMON - LHC Era Monitoring for Large-Scale Infrastructures
NASA Astrophysics Data System (ADS)
Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron
2011-12-01
At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate the computer centre resources. As a result, however, the monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software, but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to have a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system also allows reporting on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN Lemon production instance. No direct comparison is made with other monitoring tools.
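The agent-plus-sensor pattern described above can be sketched generically as follows; this is not Lemon's actual sensor interface, and the repository endpoint, host name, payload format, and thermal-zone path are assumptions for illustration.

    import json
    import time
    import urllib.request

    def cpu_temperature_sensor():
        # Assumption: a Linux thermal-zone file is readable on this host.
        with open("/sys/class/thermal/thermal_zone0/temp") as f:
            return int(f.read().strip()) / 1000.0

    def forward(sample, repository_url="http://repository.example/metrics"):
        # The repository URL and JSON payload format are hypothetical.
        req = urllib.request.Request(
            repository_url,
            data=json.dumps(sample).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5)

    while True:
        sample = {"host": "node42", "metric": "cpu_temp",
                  "value": cpu_temperature_sensor(), "timestamp": time.time()}
        forward(sample)
        time.sleep(60)               # sampling interval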
The Use of Signal Dimensionality for Automatic QC of Seismic Array Data
NASA Astrophysics Data System (ADS)
Rowe, C. A.; Stead, R. J.; Begnaud, M. L.; Draganov, D.; Maceira, M.; Gomez, M.
2014-12-01
A significant problem in seismic array analysis is the inclusion of bad sensor channels in the beam-forming process. We are testing an approach to automated, on-the-fly quality control (QC) to aid in the identification of poorly performing sensor channels prior to beam-forming in routine event detection or location processing. The idea stems from methods used for large computer servers, where monitoring traffic at enormous numbers of nodes is impractical on a node-by-node basis, so the dimensionality of the node traffic is instead monitored for anomalies that could represent malware, cyber-attacks or other problems. The technique relies upon the use of subspace dimensionality or principal components of the overall system traffic. The subspace technique is not new to seismology, but its most common application has been limited to comparing waveforms to an a priori collection of templates for detecting highly similar events in a swarm or seismic cluster. We examine the signal dimension in a similar way to the method used to address node traffic anomalies in large computer systems. We explore the effects of malfunctioning channels on the dimension of the data and its derivatives, and how to leverage this effect for identifying bad array elements. We show preliminary results applied to arrays in Kazakhstan (Makanchi) and Argentina (Malargue).
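A simplified sketch of the subspace idea follows: project the multichannel data onto its leading principal components and flag channels that are poorly explained by the common low-dimensional subspace. The synthetic data and the residual threshold are assumptions, not the authors' processing parameters.

    import numpy as np

    rng = np.random.default_rng(4)
    common = rng.normal(size=(1, 2000))                    # coherent wavefield seen by all good channels
    data = np.repeat(common, 9, axis=0) + 0.1 * rng.normal(size=(9, 2000))
    data = np.vstack([data, rng.normal(size=(1, 2000))])   # channel 9 records only noise

    centered = data - data.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    k = 1                                                  # assumed dimension of the common subspace
    reconstruction = (U[:, :k] * s[:k]) @ Vt[:k]
    residual_power = ((centered - reconstruction) ** 2).sum(axis=1) / (centered ** 2).sum(axis=1)
    print("suspect channels:", np.where(residual_power > 0.5)[0])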
Kang, Sung-Won; Choi, Hyeob; Park, Hyung-Il; Choi, Byoung-Gun; Im, Hyobin; Shin, Dongjun; Jung, Young-Giu; Lee, Jun-Young; Park, Hong-Won; Park, Sukyung; Roh, Jung-Sim
2017-11-07
Spinal disease is a common yet important condition that occurs because of inappropriate posture. Prevention could be achieved by continuous posture monitoring, but most measurement systems cannot be used in daily life due to factors such as burdensome wires and large sensing modules. To improve upon these weaknesses, we developed comfortable "smart wear" for posture measurement using conductive yarn for circuit patterning and a flexible printed circuit board (FPCB) for interconnections. The conductive yarn was made by twisting polyester yarn and metal filaments, and the resistance per unit length was about 0.05 Ω/cm. An embroidered circuit was made using the conductive yarn, which showed increased yield strength and uniform electrical resistance per unit length. Circuit networks of sensors and FPCBs for interconnection were integrated into clothes using a computer numerical control (CNC) embroidery process. The system was calibrated and verified by comparing the values measured by the smart wear with those measured by a motion capture camera system. Six subjects performed fixed movements and free computer work, and, with this system, we were able to measure the anterior/posterior direction tilt angle with an error of less than 4°. The smart wear does not have excessive wires, and its structure will be optimized for better posture estimation in a later study.
Automated Euler and Navier-Stokes Database Generation for a Glide-Back Booster
NASA Technical Reports Server (NTRS)
Chaderjian, Neal M.; Rogers, Stuart E.; Aftosmis, Mike J.; Pandya, Shishir A.; Ahmad, Jasim U.; Tejnil, Edward
2004-01-01
The past two decades have seen a sustained increase in the use of high fidelity Computational Fluid Dynamics (CFD) in basic research, aircraft design, and the analysis of post-design issues. As the fidelity of a CFD method increases, the number of cases that can be readily and affordably computed greatly diminishes. However, computer speeds now exceed 2 GHz, hundreds of processors are currently available and more affordable, and advances in parallel CFD algorithms scale more readily with large numbers of processors. All of these factors make it feasible to compute thousands of high fidelity cases. However, there still remains the overwhelming task of monitoring the solution process. This paper presents an approach to automate the CFD solution process. A new software tool, AeroDB, is used to compute thousands of Euler and Navier-Stokes solutions for a 2nd generation glide-back booster in one week. The solution process exploits a common job-submission grid environment, the NASA Information Power Grid (IPG), using 13 computers located at 4 different geographical sites. Process automation and web-based access to a MySQL database greatly reduce the user workload, removing much of the tedium and tendency for user input errors. Within the AeroDB framework, the user submits/deletes jobs, monitors AeroDB's progress, and retrieves data and plots via a web portal. Once a job is in the database, a job launcher uses an IPG resource broker to decide which computers are best suited to run the job. Job/code requirements, the number of CPUs free on a remote system, and queue lengths are some of the parameters the broker takes into account. The Globus software provides secure services for user authentication, remote shell execution, and secure file transfers over an open network. AeroDB automatically decides when a job is completed. Currently, the Cart3D unstructured flow solver is used for the Euler equations, and the Overflow structured overset flow solver is used for the Navier-Stokes equations. Other codes can be readily included into the AeroDB framework.
Software For Monitoring A Computer Network
NASA Technical Reports Server (NTRS)
Lee, Young H.
1992-01-01
SNMAT is rule-based expert-system computer program designed to assist personnel in monitoring status of computer network and identifying defective computers, workstations, and other components of network. Also assists in training network operators. Network for SNMAT located at Space Flight Operations Center (SFOC) at NASA's Jet Propulsion Laboratory. Intended to serve as data-reduction system providing windows, menus, and graphs, enabling users to focus on relevant information. SNMAT expected to be adaptable to other computer networks; for example in management of repair, maintenance, and security, or in administration of planning systems, billing systems, or archives.
Ultra Low Power Signal Oriented Approach for Wireless Health Monitoring
Marinkovic, Stevan; Popovici, Emanuel
2012-01-01
In recent years there is growing pressure on the medical sector to reduce costs while maintaining or even improving the quality of care. A potential solution to this problem is real time and/or remote patient monitoring by using mobile devices. To achieve this, medical sensors with wireless communication, computational and energy harvesting capabilities are networked on, or in, the human body forming what is commonly called a Wireless Body Area Network (WBAN). We present the implementation of a novel Wake Up Receiver (WUR) in the context of standardised wireless protocols, in a signal-oriented WBAN environment and present a novel protocol intended for wireless health monitoring (WhMAC). WhMAC is a TDMA-based protocol with very low power consumption. It utilises WBAN-specific features and a novel ultra low power wake up receiver technology, to achieve flexible and at the same time very low power wireless data transfer of physiological signals. As the main application is in the medical domain, or personal health monitoring, the protocol caters for different types of medical sensors. We define four sensor modes, in which the sensors can transmit data, depending on the sensor type and emergency level. A full power dissipation model is provided for the protocol, with individual hardware and application parameters. Finally, an example application shows the reduction in the power consumption for different data monitoring scenarios. PMID:22969379
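A duty-cycle power budget of the kind the full dissipation model formalizes can be illustrated as follows; the power and timing numbers are assumptions for the example, not the WhMAC measurements.

    # Average power of a node that sleeps behind a wake-up receiver and wakes briefly each frame.
    P_SLEEP_UW = 5.0          # wake-up receiver listening, microwatts (assumed)
    P_ACTIVE_MW = 30.0        # main radio transmitting/receiving, milliwatts (assumed)
    T_FRAME_S = 1.0           # TDMA frame length, seconds (assumed)
    T_ACTIVE_MS = 5.0         # active slot per frame, milliseconds (assumed)

    active_s = T_ACTIVE_MS / 1000.0
    avg_mw = (P_ACTIVE_MW * active_s
              + (P_SLEEP_UW / 1000.0) * (T_FRAME_S - active_s)) / T_FRAME_S
    print(f"average power ≈ {avg_mw * 1000:.0f} microwatts")   # ~155 µW for these assumptions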
Miao, Fen; Cheng, Yayu; He, Yi; He, Qingyun; Li, Ye
2015-05-19
Continuously monitoring ECG signals over hours, combined with activity status, is very important for preventing cardiovascular diseases. A traditional ECG Holter is often inconvenient to carry because it has many electrodes attached to the chest and because it is heavy. This work proposes a wearable, low-power, context-aware ECG monitoring system that integrates the built-in kinetic sensors of a smartphone with a self-designed ECG sensor. The wearable ECG sensor comprises a fully integrated analog front-end (AFE), a commercial micro control unit (MCU), a secure digital (SD) card, and a Bluetooth module. The whole sensor is very small, with a size of only 58 × 50 × 10 mm for wearable monitoring applications thanks to the AFE design, and the total power dissipation in a full round of ECG acquisition is only 12.5 mW. With the help of the smartphone's built-in kinetic sensors, the proposed system can compute and recognize the user's physical activity, and thus provide context-aware information for continuous ECG monitoring. The experimental results demonstrated the performance of the proposed system in improving diagnostic accuracy for arrhythmias and in identifying the most common abnormal ECG patterns during different activities. In conclusion, we provide a wearable, accurate and energy-efficient system for long-term and context-aware ECG monitoring without any extra cost for kinetic sensor design, with the help of the widespread smartphone.
Dual-modal photoacoustic and ultrasound imaging of dental implants
NASA Astrophysics Data System (ADS)
Lee, Donghyun; Park, Sungjo; Kim, Chulhong
2018-02-01
Dental implants are a common method to replace decayed or broken teeth. As implant treatment procedures vary according to the patient's jawbone, bone ridge, and sinus structure, appropriate examinations are necessary for successful treatment. Currently, radiographic examinations including periapical radiography, panoramic X-ray, and computed tomography are commonly used for diagnosis and monitoring. However, these radiographic examinations have limitations in that patients and operators are exposed to radiation and multiple examinations are performed during the treatment. In this study, we demonstrated combined photoacoustic (PA) and ultrasound (US) imaging of dental implants that can lower the total absorbed radiation dose during dental implant treatment. An acoustic-resolution PA macroscopy system and a clinical PA/US system were used for dental implant imaging. The acquired dual-modal PA/US imaging results support that the proposed photoacoustic imaging strategy can reduce the radiation dose during dental implant treatment.
Biomedical Wireless Ambulatory Crew Monitor
NASA Technical Reports Server (NTRS)
Chmiel, Alan; Humphreys, Brad
2009-01-01
A compact, ambulatory biometric data acquisition system has been developed for space and commercial terrestrial use. BioWATCH (Biomedical Wireless and Ambulatory Telemetry for Crew Health) acquires signals from biomedical sensors using acquisition modules attached to a common data and power bus. Several slots allow the user to configure the unit by inserting sensor-specific modules. The data are then sent in real time from the unit over any commercially implemented wireless network, including 802.11b/g, WCDMA, and 3G. This system has a distributed computing hierarchy with a common data controller on each sensor module. This allows for the modularity of the device along with the tailored ability to control the cards using a relatively small master processor. The distributed nature of this system affords the modularity, size, and power consumption that betters the current state of the art in medical ambulatory data acquisition. A new company was created to market this technology.
Fault tolerant features and experiments of ANTS distributed real-time system
NASA Astrophysics Data System (ADS)
Dominic-Savio, Patrick; Lo, Jien-Chung; Tufts, Donald W.
1995-01-01
The ANTS project at the University of Rhode Island introduces the concept of Active Nodal Task Seeking (ANTS) as a way to efficiently design and implement dependable, high-performance, distributed computing. This paper presents the fault tolerant design features that have been incorporated in the ANTS experimental system implementation. The results of performance evaluations and fault injection experiments are reported. The fault-tolerant version of ANTS categorizes all computing nodes into three groups. They are: the up-and-running green group, the self-diagnosing yellow group and the failed red group. Each available computing node will be placed in the yellow group periodically for a routine diagnosis. In addition, for long-life missions, ANTS uses a monitoring scheme to identify faulty computing nodes. In this monitoring scheme, the communication pattern of each computing node is monitored by two other nodes.
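The three-group bookkeeping described above can be sketched as a small state machine; this toy example is an assumption-based illustration, not the ANTS implementation.

    from enum import Enum

    class NodeState(Enum):
        GREEN = "up and running"
        YELLOW = "self-diagnosing"
        RED = "failed"

    def routine_diagnosis(node_states, node, passed):
        node_states[node] = NodeState.YELLOW          # pulled out of the task-seeking pool for diagnosis
        node_states[node] = NodeState.GREEN if passed else NodeState.RED

    node_states = {f"node{i}": NodeState.GREEN for i in range(4)}
    routine_diagnosis(node_states, "node2", passed=False)
    print({n: s.name for n, s in node_states.items()})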
Using the Electrocorticographic Speech Network to Control a Brain-Computer Interface in Humans
Leuthardt, Eric C.; Gaona, Charles; Sharma, Mohit; Szrama, Nicholas; Roland, Jarod; Freudenberg, Zac; Solis, Jamie; Breshears, Jonathan; Schalk, Gerwin
2013-01-01
Electrocorticography (ECoG) has emerged as a new signal platform for brain-computer interface (BCI) systems. Classically, the cortical physiology that has been commonly investigated and utilized for device control in humans has been brain signals from sensorimotor cortex. Hence, it was unknown whether other neurophysiological substrates, such as the speech network, could be used to further improve on or complement existing motor-based control paradigms. We demonstrate here for the first time that ECoG signals associated with different overt and imagined phoneme articulation can enable invasively monitored human patients to control a one-dimensional computer cursor rapidly and accurately. This phonetic content was distinguishable within higher gamma frequency oscillations and enabled users to achieve final target accuracies between 68 and 91% within 15 minutes. Additionally, one of the patients achieved robust control using recordings from a microarray consisting of 1 mm spaced microwires. These findings suggest that the cortical network associated with speech could provide an additional cognitive and physiologic substrate for BCI operation and that these signals can be acquired from a cortical array that is small and minimally invasive. PMID:21471638
Application research of Ganglia in Hadoop monitoring and management
NASA Astrophysics Data System (ADS)
Li, Gang; Ding, Jing; Zhou, Lixia; Yang, Yi; Liu, Lei; Wang, Xiaolei
2017-03-01
Hadoop systems have many applications in the fields of big data and cloud computing. The storage and application test bench for the seismic network at the Earthquake Administration of Tianjin runs on a Hadoop system, which is operated and monitored using the open-source software Ganglia. This paper reviews the functions of Ganglia, its installation and configuration process, and its effectiveness for operating and monitoring the Hadoop system. It also briefly introduces the approach and effect of monitoring the Hadoop system with Nagios. This experience is valuable for the industry when building monitoring systems for cloud computing platforms.
A handheld wireless device for diffuse optical spectroscopic assessment of infantile hemangiomas
NASA Astrophysics Data System (ADS)
Fong, Christopher J.; Flexman, Molly; Hoi, Jennifer W.; Geller, Lauren; Garzon, Maria; Kim, Hyun K.; Hielscher, Andreas H.
2013-03-01
Infantile hemangiomas (IH) are common vascular growths that occur in 5-10% of neonates and have the potential to cause disfiguring and even life-threatening complications. With no objective tool to monitor IH, a handheld wireless device (HWD) that uses diffuse optical spectroscopy has been developed for use in assessment of IH by measurements in absolute oxygenated and deoxygenated hemoglobin concentration as well as scattering in tissue. Reconstructions of these variables can be computed using a multispectral evolution algorithm. We validated the new system by experimental studies using phantom experiments and a clinical study is under way to assess the utility of DOI for IH.
Björn, Lars Olof; Li, Shaoshan
2013-10-01
Solar energy reaching plants is either reflected or absorbed; the absorbed energy drives photosynthesis or is re-emitted as fluorescence or heat. Measurements of fluorescence changes have been used for monitoring processes associated with photosynthesis. A simple method to follow changes in leaf fluorescence and leaf reflectance associated with nonphotochemical quenching and light acclimation of leaves is described. The main equipment needed consists of a green-light-emitting laser pointer, a digital camera, and a personal computer equipped with the camera acquisition software and the programs ImageJ and Excel. Otherwise, only commonly available, inexpensive materials are required.
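The image-analysis step can be reproduced in a few lines; the sketch below uses Pillow and NumPy in place of the ImageJ/Excel workflow, and the file names, region of interest, and use of the red channel (where chlorophyll fluorescence excited by a green laser appears) are assumptions for illustration.

    import numpy as np
    from PIL import Image

    def mean_fluorescence(path, roi=(100, 100, 300, 300)):
        # Crop a fixed region of interest and average the red-channel intensity.
        img = np.asarray(Image.open(path).crop(roi), dtype=float)
        return img[..., 0].mean()

    for frame in ["leaf_000s.jpg", "leaf_030s.jpg", "leaf_060s.jpg"]:   # hypothetical time series
        print(frame, mean_fluorescence(frame))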
ERIC Educational Resources Information Center
Sng, Dennis Cheng-Hong
The University of Illinois at Urbana-Champaign (UIUC) has a large campus computer network serving a community of about 20,000 users. With such a large network, it is inevitable that there are a wide variety of technologies co-existing in a multi-vendor environment. Effective network monitoring tools can help monitor traffic and link usage, as well…
Electronics Environmental Benefits Calculator
The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase, use and disposal of electronics. The EEBC estimates the environmental and economic benefits of: purchasing Electronic Product Environmental Assessment Tool (EPEAT)-registered products; enabling power management features on computers and monitors above default percentages; extending the life of equipment beyond baseline values; reusing computers, monitors and cell phones; and recycling computers, monitors, cell phones and loads of mixed electronic products. The EEBC may be downloaded as a Microsoft Excel spreadsheet. See https://www.federalelectronicschallenge.net/resources/bencalc.htm for more details.
Models for interrupted monitoring of a stochastic process
NASA Technical Reports Server (NTRS)
Palmer, E.
1977-01-01
As computers are added to the cockpit, the pilot's job is changing from one of manually flying the aircraft to one of supervising computers which are doing navigation, guidance and energy management calculations as well as automatically flying the aircraft. In this supervisory role the pilot must divide his attention between monitoring the aircraft's performance and giving commands to the computer. Normative strategies are developed for tasks where the pilot must interrupt his monitoring of a stochastic process in order to attend to other duties. Results are given as to how characteristics of the stochastic process and of the other tasks affect the optimal strategies.
Filho, Mercedes; Ma, Zhen; Tavares, João Manuel R S
2015-11-01
In recent years, the incidence of skin cancer cases has risen, worldwide, mainly due to the prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through an improvement in the instrument and detection technology, and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data, for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the increase of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has also fueled the need to create real-time processing algorithms that may provide a likelihood for the development of malignancy. This last possibility allows even non-specialists to monitor and follow-up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.
NASA Astrophysics Data System (ADS)
Li, Y. J.; Kokkinaki, Amalia; Darve, Eric F.; Kitanidis, Peter K.
2017-08-01
The operation of most engineered hydrogeological systems relies on simulating physical processes using numerical models with uncertain parameters and initial conditions. Predictions by such uncertain models can be greatly improved by Kalman-filter techniques that sequentially assimilate monitoring data. Each assimilation constitutes a nonlinear optimization, which is solved by linearizing an objective function about the model prediction and applying a linear correction to this prediction. However, if model parameters and initial conditions are uncertain, the optimization problem becomes strongly nonlinear and a linear correction may yield unphysical results. In this paper, we investigate the utility of one-step ahead smoothing, a variant of the traditional filtering process, to eliminate nonphysical results and reduce estimation artifacts caused by nonlinearities. We present the smoothing-based compressed state Kalman filter (sCSKF), an algorithm that combines one step ahead smoothing, in which current observations are used to correct the state and parameters one step back in time, with a nonensemble covariance compression scheme, that reduces the computational cost by efficiently exploring the high-dimensional state and parameter space. Numerical experiments show that when model parameters are uncertain and the states exhibit hyperbolic behavior with sharp fronts, as in CO2 storage applications, one-step ahead smoothing reduces overshooting errors and, by design, gives physically consistent state and parameter estimates. We compared sCSKF with commonly used data assimilation methods and showed that for the same computational cost, combining one step ahead smoothing and nonensemble compression is advantageous for real-time characterization and monitoring of large-scale hydrogeological systems with sharp moving fronts.
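To illustrate the one-step-ahead smoothing idea in the simplest possible setting, the sketch below runs a scalar linear Kalman filter and applies a lag-one Rauch-Tung-Striebel correction to the previous state whenever a new observation arrives. It is only a textbook illustration under assumed model parameters; sCSKF itself additionally compresses the covariance and targets high-dimensional, nonlinear problems.

    import numpy as np

    # Scalar random-walk example; all model and noise parameters are illustrative assumptions.
    F, H, Q, R = 1.0, 1.0, 0.01, 0.25
    rng = np.random.default_rng(5)
    truth = np.cumsum(rng.normal(scale=np.sqrt(Q), size=100))
    obs = truth + rng.normal(scale=np.sqrt(R), size=100)

    x, P = 0.0, 1.0                  # initial state estimate and variance
    filtered, smoothed = [], []
    prev = None                      # filtered (x, P) from the previous step
    for y in obs:
        x_pred, P_pred = F * x, F * P * F + Q           # forecast
        K = P_pred * H / (H * P_pred * H + R)           # Kalman gain
        x, P = x_pred + K * (y - H * x_pred), (1.0 - K * H) * P_pred
        filtered.append(x)
        if prev is not None:         # lag-one (one-step-ahead) smoothing of the previous state
            x_prev, P_prev = prev
            G = P_prev * F / P_pred                     # smoother gain
            smoothed.append(x_prev + G * (x - x_pred))
        prev = (x, P)
    print(filtered[-1], smoothed[-1])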
[Economic efficiency of computer monitoring of health].
Il'icheva, N P; Stazhadze, L L
2001-01-01
Presents a method of computer-based health monitoring, based on the use of modern information technologies in public health. The method helps organize the preventive activities of an outpatient clinic at a high level and substantially decreases losses of time and money. The efficiency of such preventive measures and the increasing number of computer and Internet users suggest that such methods are promising and that further studies in this field are needed.
Multiple-User, Multitasking, Virtual-Memory Computer System
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Roth, Don J.; Stang, David B.
1993-01-01
Computer system designed and programmed to serve multiple users in research laboratory. Provides for computer control and monitoring of laboratory instruments, acquisition and analysis of data from those instruments, and interaction with users via remote terminals. System provides fast access to shared central processing units and associated large (from megabytes to gigabytes) memories. Underlying concept of system also applicable to monitoring and control of industrial processes.
Carrasco, Alejandro; Jalali, Elnaz; Dhingra, Ajay; Lurie, Alan; Yadav, Sumit; Tadinada, Aditya
2017-06-01
The aim of this study was to compare a medical-grade PACS (picture archiving and communication system) monitor, a consumer-grade monitor, a laptop computer, and a tablet computer for linear measurements of height and width for specific implant sites in the posterior maxilla and mandible, along with visualization of the associated anatomical structures. Cone beam computed tomography (CBCT) scans were evaluated. The images were reviewed using PACS-LCD monitor, consumer-grade LCD monitor using CB-Works software, a 13″ MacBook Pro, and an iPad 4 using OsiriX DICOM reader software. The operators had to identify anatomical structures in each display using a 2-point scale. User experience between PACS and iPad was also evaluated by means of a questionnaire. The measurements were very similar for each device. P-values were all greater than 0.05, indicating no significant difference between the monitors for each measurement. The intraoperator reliability was very high. The user experience was similar in each category with the most significant difference regarding the portability where the PACS display received the lowest score and the iPad received the highest score. The iPad with retina display was comparable with the medical-grade monitor, producing similar measurements and image visualization, and thus providing an inexpensive, portable, and reliable screen to analyze CBCT images in the operating room during the implant surgery.
Oldroyd, Rachel A; Morris, Michelle A; Birkin, Mark
2018-06-06
Traditional methods of monitoring foodborne illness are associated with problems of untimeliness and underreporting. In recent years, alternative data sources such as social media data have been used to monitor the incidence of disease in the population (infodemiology and infoveillance). These data sources prove timelier than traditional general practitioner data, they can help to fill the gaps in the reporting process, and they often include additional metadata that is useful for supplementary research. The aim of the study was to identify and formally analyze research papers using consumer-generated data, such as social media data or restaurant reviews, to quantify a disease or public health ailment. Studies of this nature are scarce within the food safety domain; therefore, identification and understanding of transferrable methods in other health-related fields are of particular interest. Structured scoping methods were used to identify and analyze primary research papers using consumer-generated data for disease or public health surveillance. The title, abstract, and keyword fields of 5 databases were searched using predetermined search terms. A total of 5239 papers matched the search criteria, of which 145 were taken to full-text review; 62 papers were deemed relevant and were subjected to data characterization and thematic analysis. The majority of studies (40/62, 65%) focused on the surveillance of influenza-like illness. Only 10 studies (16%) used consumer-generated data to monitor outbreaks of foodborne illness. Twitter data (58/62, 94%) and Yelp reviews (3/62, 5%) were the most commonly used data sources. Studies reporting high correlations against baseline statistics used advanced statistical and computational approaches to calculate the incidence of disease. These include classification and regression approaches, clustering approaches, and lexicon-based approaches. Although they are computationally intensive due to the requirement of training data, studies using classification approaches reported the best performance. By analyzing studies in digital epidemiology, computer science, and public health, this paper has identified and analyzed methods of disease monitoring that can be transferred to foodborne disease surveillance. These methods fall into 4 main categories: basic approaches, classification and regression, clustering approaches, and lexicon-based approaches. Although studies using a basic approach to calculate disease incidence generally report good performance against baseline measures, they are sensitive to chatter generated by media reports. More computationally advanced approaches are required to filter spurious messages and protect predictive systems against false alarms. Research using consumer-generated data for monitoring influenza-like illness is expansive; however, research regarding the use of restaurant reviews and social media data in the context of food safety is limited. Considering the advantages reported in this review, methods using consumer-generated data for foodborne disease surveillance warrant further investment. ©Rachel A Oldroyd, Michelle A Morris, Mark Birkin. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 06.06.2018.
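A hedged sketch of the "classification approach" category identified above is a simple bag-of-words classifier that flags short messages as possible illness reports; the tiny training set is invented for illustration, and real systems require labelled corpora of tweets or reviews.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = [
        "terrible stomach ache after eating at that burger place",
        "food poisoning, was sick all night after the buffet",
        "great meal, lovely service, would come back",
        "the new pizza menu is fantastic",
    ]
    labels = [1, 1, 0, 0]   # 1 = possible foodborne-illness report

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["felt nauseous after last night's takeaway"]))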
Active Cyber Defense: Enhancing National Cyber Defense
2011-12-01
Prevention System ISP Internet Service Provider IT Information Technology IWM Information Warfare Monitor LOAC Law of Armed Conflict NATO...the Information Warfare Monitor (IWM) discovered that GhostNet had infected 1,295 computers in 103 countries. As many as thirty percent of these...By monitoring the computers in Dharamsala and at various Tibetan missions, IWM was able to determine the IP addresses of the servers hosting Gh0st
15. NBS TOP SIDE CONTROL ROOM. THE SUIT SYSTEMS CONSOLE ...
15. NBS TOP SIDE CONTROL ROOM. THE SUIT SYSTEMS CONSOLE IS USED TO CONTROL AIR FLOW AND WATER FLOW TO THE UNDERWATER SPACE SUIT DURING THE TEST. THE SUIT SYSTEMS ENGINEER MONITORS AIR FLOW ON THE PANEL TO THE LEFT, AND SUIT DATA ON THE COMPUTER MONITOR JUST SLIGHTLY TO HIS LEFT. WATER FLOW IS MONITORED ON THE PANEL JUST SLIGHTLY TO HIS RIGHT AND TEST VIDEO TO HIS FAR RIGHT. THE DECK CHIEF MONITORS THE DIVER'S DIVE TIMES ON THE COMPUTER IN THE UPPER RIGHT. THE DECK CHIEF LOGS THEM IN AS THEY ENTER THE WATER, AND LOGS THEM OUT AS THEY EXIT THE WATER. THE COMPUTER CALCULATES TOTAL DIVE TIME. - Marshall Space Flight Center, Neutral Buoyancy Simulator Facility, Rideout Road, Huntsville, Madison County, AL
Computer program analyzes and monitors electrical power systems (POSIMO)
NASA Technical Reports Server (NTRS)
Jaeger, K.
1972-01-01
Requirements to monitor and/or simulate electric power distribution, power balance, and charge budget are discussed. Computer program to analyze power system and generate set of characteristic power system data is described. Application to status indicators to denote different exclusive conditions is presented.
A daily living activity remote monitoring system for solitary elderly people.
Maki, Hiromichi; Ogawa, Hidekuni; Matsuoka, Shingo; Yonezawa, Yoshiharu; Caldwell, W Morton
2011-01-01
A daily living activity remote monitoring system has been developed for supporting solitary elderly people. The monitoring system consists of a tri-axis accelerometer, six low-power active filters, a low-power 8-bit microcontroller (MC), a 1 GB SD memory card (SDMC) and a 2.4 GHz low transmitting power mobile phone (PHS). The tri-axis accelerometer attached to the subject's chest can simultaneously measure dynamic and static acceleration forces produced by heart sound, respiration, posture and behavior. The heart rate, respiration rate, activity, posture and behavior are detected from the dynamic and static acceleration forces. These data are stored on the SDMC. The MC sends the data to the server computer every hour. The server computer stores the data and makes a graphic chart from the data. When the caregiver calls the server computer from his/her mobile phone, the server computer sends the graphical chart via the PHS. The caregiver's mobile phone then displays the chart graphically.
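As a rough illustration of how posture could be inferred from the static component of a chest-worn tri-axis accelerometer, the sketch below estimates trunk tilt from the gravity vector and maps it to coarse posture classes. The axis convention, thresholds, and class names are assumptions for illustration; the abstract does not describe the authors' actual detection logic.

```python
import math

def trunk_tilt_deg(ax, ay, az):
    """Angle between the accelerometer's longitudinal (x) axis and gravity,
    computed from the static acceleration components in g."""
    g = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    return math.degrees(math.acos(max(-1.0, min(1.0, ax / g))))

def classify_posture(ax, ay, az, upright_limit=30.0, lying_limit=60.0):
    """Very coarse posture classes from trunk tilt (thresholds are assumed)."""
    tilt = trunk_tilt_deg(ax, ay, az)
    if tilt <= upright_limit:
        return "upright"
    if tilt >= lying_limit:
        return "lying"
    return "leaning"

print(classify_posture(0.98, 0.05, 0.10))  # standing/sitting -> "upright"
print(classify_posture(0.05, 0.10, 0.98))  # supine -> "lying"
```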
Chiou, Wen-Ko; Chou, Wei-Ying; Chen, Bi-Hui
2012-01-01
This study aimed to evaluate the posture, muscle activities, and self-reported discomfort of notebook computer users with neck pain under three monitor tilt conditions: 100°, 115°, and 130°. Six subjects were recruited to complete typing tasks. Results showed that subjects tended toward a forward head posture when the monitor was set at 100°, and significantly less neck and shoulder discomfort was noted when the monitor was set at 130°. These results suggest that notebook users with neck pain set their monitor tilt angle at 130°.
Mohammadi, Abdolreza Rashidi; Chen, Keqin; Ali, Mohamed Sultan Mohamed; Takahata, Kenichi
2011-12-15
The rupture of a cerebral aneurysm is the most common cause of subarachnoid hemorrhage. Endovascular embolization of the aneurysms by implantation of Guglielmi detachable coils (GDC) has become a major treatment approach in the prevention of a rupture. Implantation of the coils induces formation of tissues over the coils, embolizing the aneurysm. However, blood entry into the coiled aneurysm often occurs due to failures in the embolization process. Current diagnostic methods used for aneurysms, such as X-ray angiography and computer tomography, are ineffective for continuous monitoring of the disease and require extremely expensive equipment. Here we present a novel technique for wireless monitoring of cerebral aneurysms using implanted embolization coils as radiofrequency resonant sensors that detect the blood entry. The experiments show that commonly used embolization coils could be utilized as electrical inductors or antennas. As the blood flows into a coil-implanted aneurysm, parasitic capacitance of the coil is modified because of the difference in permittivity between the blood and the tissues grown around the coil, resulting in a change in the coil's resonant frequency. The resonances of platinum GDC-like coils embedded in aneurysm models are detected to show average responses of 224-819 MHz/ml to saline injected into the models. This preliminary demonstration indicates a new possibility in the use of implanted GDC as a wireless sensor for embolization failures, the first step toward realizing long-term, noninvasive, and cost-effective remote monitoring of cerebral aneurysms treated with coil embolization. Copyright © 2011 Elsevier B.V. All rights reserved.
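The sensing principle rests on the resonance of an inductor-capacitor pair, f = 1/(2π√(LC)): when blood raises the coil's parasitic capacitance, the resonant frequency drops. The sketch below just evaluates that relation; the inductance and capacitance values are arbitrary illustrative numbers, not measurements from the paper.

```python
import math

def resonant_frequency_hz(inductance_h, capacitance_f):
    """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

L_coil = 100e-9    # 100 nH coil inductance (assumed)
C_dry = 2.0e-12    # parasitic capacitance with tissue around the coil (assumed)
C_blood = 2.5e-12  # higher parasitic capacitance once blood enters (assumed)

f_dry = resonant_frequency_hz(L_coil, C_dry)
f_blood = resonant_frequency_hz(L_coil, C_blood)
print(f"{f_dry/1e6:.0f} MHz -> {f_blood/1e6:.0f} MHz "
      f"(shift {abs(f_dry - f_blood)/1e6:.0f} MHz)")
```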
Brem, M H; Böhner, C; Brenning, A; Gelse, K; Radkow, T; Blanke, M; Schlechtweg, P M; Neumann, G; Wu, I Y; Bautz, W; Hennig, F F; Richter, H
2006-11-01
To compare the diagnostic value of low-cost computer monitors and a Picture Archiving and Communication System (PACS) workstation for the evaluation of cervical spine fractures in the emergency room. Two groups of readers blinded to the diagnoses (2 radiologists and 3 orthopaedic surgeons) independently assessed digital radiographs of the cervical spine (anterior-posterior, oblique and trans-oral-dens views). The radiographs of 57 patients who arrived consecutively at the emergency room in 2004 with clinical suspicion of a cervical spine injury were evaluated. The diagnostic values of these radiographs were scored on a 3-point scale (1 = diagnosis not possible/bad image quality, 2 = diagnosis uncertain, 3 = clear diagnosis of fracture or no fracture) on a PACS workstation and on two different liquid crystal display (LCD) personal computer monitors. The images were randomised to avoid memory effects. We used logistic mixed-effects models to determine the possible effects of monitor type on the evaluation of x-ray images. To determine the overall effects of monitor type, this variable was used as a fixed effect, and the image number and reader group (radiologist or orthopaedic surgeon) were used as random effects on display quality. Group-specific effects were examined, with the reader group and additional fixed effects as terms. A significance level of 0.05 was established for assessing the contribution of each fixed effect to the model. Overall, the diagnostic score did not differ significantly between standard personal computer monitors and the PACS workstation (both p values were 0.78). Low-cost LCD personal computer monitors may be useful in establishing a diagnosis of cervical spine fractures in the emergency room.
2010-01-01
Background A tendency to develop reentry orthostasis after a prolonged exposure to microgravity is a common problem among astronauts. The problem is 5 times more prevalent in female astronauts as compared to their male counterparts. The mechanisms responsible for this gender differentiation are poorly understood despite many detailed and complex investigations directed toward an analysis of the physiologic control systems involved. Methods In this study, a series of computer simulation studies using a mathematical model of cardiovascular functioning were performed to examine the proposed hypothesis that this phenomenon could be explained by basic physical forces acting through the simple common anatomic differences between men and women. In the computer simulations, the circulatory components and hydrostatic gradients of the model were allowed to adapt to the physical constraints of microgravity. After a simulated period of one month, the model was returned to the conditions of earth's gravity and the standard postflight tilt test protocol was performed while the model output depicting the typical vital signs was monitored. Conclusions The analysis demonstrated that a 15% lowering of the longitudinal center of gravity in the anatomic structure of the model was all that was necessary to prevent the physiologic compensatory mechanisms from overcoming the propensity for reentry orthostasis leading to syncope. PMID:20298577
Scripes, Paola G; Yaparpalvi, Ravindra
2012-09-01
The usage of functional data in radiation therapy (RT) treatment planning (RTP) process is currently the focus of significant technical, scientific, and clinical development. Positron emission tomography (PET) using ((18)F) fluorodeoxyglucose is being increasingly used in RT planning in recent years. Fluorodeoxyglucose is the most commonly used radiotracer for diagnosis, staging, recurrent disease detection, and monitoring of tumor response to therapy (Lung Cancer 2012;76:344-349; Lung Cancer 2009;64:301-307; J Nucl Med 2008;49:532-540; J Nucl Med 2007;48:58S-67S). All the efforts to improve both PET and computed tomography (CT) image quality and, consequently, lesion detectability have a common objective to increase the accuracy in functional imaging and thus of coregistration into RT planning systems. In radiotherapy, improvement in target localization permits reduction of tumor margins, consequently reducing volume of normal tissue irradiated. Furthermore, smaller treated target volumes create the possibility of dose escalation, leading to increased chances of tumor cure and control. This article focuses on the technical aspects of PET/CT image acquisition, fusion, usage, and impact on the physics of RTP. The authors review the basic elements of RTP, modern radiation delivery, and the technical parameters of coregistration of PET/CT into RT computerized planning systems. Copyright © 2012 Elsevier Inc. All rights reserved.
Slonecker, E.T.; Tilley, J.S.
2004-01-01
The percentage of impervious surface area in a watershed has been widely recognized as a key indicator of terrestrial and aquatic ecosystem condition. Although the use of the impervious indicator is widespread, there is currently no consistent or mutually accepted method of computing impervious area and the approach of various commonly used techniques varies widely. Further, we do not have reliable information on the components of impervious surfaces, which would be critical in any future planning attempts to remediate problems associated with impervious surface coverage. In cooperation with the USGS Geographic Analysis and Monitoring Program (GAM) and The National Map, and the EPA Landscape Ecology Program, this collaborative research project utilized very high resolution imagery and GIS techniques to map and quantify the individual components of total impervious area in six urban/suburban watersheds in different parts of the United States. These data were served as ground reference, or "truth," for the evaluation for four techniques used to compute impervious area. The results show some important aspects about the component make-up of impervious cover and the variability of methods commonly used to compile this critical emerging indicator of ecosystem condition. ?? 2004 by V. H. Winston and Sons, Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nitao, J J
The goal of the Event Reconstruction Project is to find the location and strength of atmospheric release points, both stationary and moving. Source inversion relies on observational data as input. The methodology is sufficiently general to allow various forms of data. In this report, the authors will focus primarily on concentration measurements obtained at point monitoring locations at various times. The algorithms being investigated in the Project are the MCMC (Markov Chain Monte Carlo) and SMC (Sequential Monte Carlo) methods, classical inversion methods, and hybrids of these. They refer the reader to the report by Johannesson et al. (2004) for explanations of these methods. These methods require computing the concentrations at all monitoring locations for a given "proposed" source characteristic (locations and strength history). It is anticipated that the largest portion of the CPU time will be spent performing this computation. MCMC and SMC will require this computation to be done at least tens of thousands of times. Therefore, an efficient means of computing forward model predictions is important to making the inversion practical. In this report they show how Green's functions and reciprocal Green's functions can significantly accelerate forward model computations. First, instead of computing a plume for each possible source strength history, they can compute plumes from unit impulse sources only. By using linear superposition, they can obtain the response for any strength history. This response is given by the forward Green's function. Second, they may use the law of reciprocity. Suppose that they require the concentration at a single monitoring point x_m due to a potential (unit impulse) source that is located at x_s. Instead of computing a plume with source location x_s, they compute a "reciprocal plume" whose (unit impulse) source is at the monitoring location x_m. The reciprocal plume is computed using a reversed-direction wind field. The wind field and transport coefficients must also be appropriately time-reversed. Reciprocity says that the concentration of the reciprocal plume at x_s is related to the desired concentration at x_m. Since there are many fewer monitoring points than potential source locations, the number of forward model computations is drastically reduced.
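A minimal sketch of the superposition idea, under the assumption of linear transport: once the unit-impulse (Green's function) response at a monitor is stored, the concentration time series for any proposed strength history is a discrete convolution, so MCMC or SMC proposals do not require rerunning the transport model. The arrays below are synthetic stand-ins for plume-model output.

```python
import numpy as np

def monitor_concentration(green, strength_history):
    """Concentration at a monitor for an arbitrary source strength history,
    by linear superposition of the stored unit-impulse (Green's) response."""
    # Full discrete convolution, truncated to the simulation window.
    return np.convolve(green, strength_history)[: len(green)]

# Synthetic unit-impulse response at one monitor (in practice this comes from
# one forward or reciprocal plume calculation per source/monitor pair).
t = np.arange(50)
green = np.exp(-0.5 * ((t - 10) / 3.0) ** 2)

# Two proposed strength histories evaluated without any new transport runs.
pulse = np.zeros(50); pulse[0] = 1.0
sustained = np.full(50, 0.2)
print(monitor_concentration(green, pulse)[:5])
print(monitor_concentration(green, sustained)[:5])
```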
Dual compile strategy for parallel heterogeneous execution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Tyler Barratt; Perry, James Thomas
2012-06-01
The purpose of the Dual Compile Strategy is to increase our trust in the Compute Engine during its execution of instructions. This is accomplished by introducing a heterogeneous Monitor Engine that checks the execution of the Compute Engine. This leads to the production of a second and custom set of instructions designed for monitoring the execution of the Compute Engine at runtime. This use of multiple engines differs from redundancy in that one engine is working on the application while the other engine is monitoring and checking in parallel, instead of both applications (and engines) performing the same work at the same time.
Grid site availability evaluation and monitoring at CMS
NASA Astrophysics Data System (ADS)
Lyons, Gaston; Maciulaitis, Rokas; Bagliesi, Giuseppe; Lammel, Stephan; Sciabà, Andrea
2017-10-01
The Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) uses distributed grid computing to store, process, and analyse the vast quantity of scientific data recorded every year. The computing resources are grouped into sites and organized in a tiered structure. Each site provides computing and storage to the CMS computing grid. Over a hundred sites worldwide contribute with resources from hundred to well over ten thousand computing cores and storage from tens of TBytes to tens of PBytes. In such a large computing setup scheduled and unscheduled outages occur continually and are not allowed to significantly impact data handling, processing, and analysis. Unscheduled capacity and performance reductions need to be detected promptly and corrected. CMS developed a sophisticated site evaluation and monitoring system for Run 1 of the LHC based on tools of the Worldwide LHC Computing Grid. For Run 2 of the LHC the site evaluation and monitoring system is being overhauled to enable faster detection/reaction to failures and a more dynamic handling of computing resources. Enhancements to better distinguish site from central service issues and to make evaluations more transparent and informative to site support staff are planned.
Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo
2008-01-01
Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many of the algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper, the graphical-oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase on different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems providing a biologist-friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown. Secondly, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host range based clustering.
View southeast of computer controlled energy monitoring system. System replaced ...
View southeast of computer controlled energy monitoring system. System replaced strip chart recorders and other instruments under the direct observation of the load dispatcher. - Thirtieth Street Station, Load Dispatch Center, Thirtieth & Market Streets, Railroad Station, Amtrak (formerly Pennsylvania Railroad Station), Philadelphia, Philadelphia County, PA
Brain-Congruent Instruction: Does the Computer Make It Feasible?
ERIC Educational Resources Information Center
Stewart, William J.
1984-01-01
Based on the premise that computers could translate brain research findings into classroom practice, this article presents discoveries concerning human brain development, organization, and operation, and describes brain activity monitoring devices, brain function and structure variables, and a procedure for monitoring and analyzing brain activity…
The Macintosh Lab Monitor, Numbers 1-4.
ERIC Educational Resources Information Center
Wanderman, Richard; And Others
1987-01-01
Four issues of the "Macintosh Lab Monitor" document the Computer-Aided Writing Project at the Forman School (Connecticut) which is a college preparatory school for bright dyslexic adolescents. The project uses Macintosh computers to teach outlining, writing, organizational and thinking skills. Sample articles have the following titles:…
NASA Astrophysics Data System (ADS)
Audigier, Chloé; Kim, Younsu; Dillow, Austin; Boctor, Emad M.
2017-03-01
Radiofrequency ablation (RFA) is the most widely used minimally invasive ablative therapy for liver cancer, but it is challenged by a lack of patient-specific monitoring. Inter-patient tissue variability and the presence of blood vessels make the prediction of the RFA difficult. A monitoring tool which can be personalized for a given patient during the intervention would be helpful to achieve a complete tumor ablation. However, clinicians do not have access to such a tool, which results in incomplete treatment and a large number of recurrences. Computational models can simulate the phenomena and mechanisms governing this therapy. The temperature evolution as well as the resulting ablation can be modeled. When combined with intraoperative measurements, computational modeling becomes an accurate and powerful tool to gain quantitative understanding and to enable improvements in ongoing clinical settings. This paper shows how computational models of RFA can be evaluated using intra-operative measurements. First, simulations are used to demonstrate the feasibility of the method, which is then evaluated on two ex vivo datasets. RFA is simulated on a simplified geometry to generate realistic longitudinal temperature maps and the resulting necrosis. Computed temperatures are compared with the temperature evolution recorded using thermometers, and with temperatures monitored by ultrasound (US) in a 2D plane containing the ablation tip. Two ablations are performed on two cadaveric bovine livers, and we achieve an error of 2.2 °C on average between the computed and thermistor temperatures, and errors of 1.4 °C and 2.7 °C on average between the temperatures computed and monitored by US during the ablation at two different time points (t = 240 s and t = 900 s).
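The abstract does not give the governing equations; RFA models of this kind commonly solve a bioheat (heat diffusion plus perfusion) equation. The sketch below integrates a deliberately simplified 1D heat equation with an explicit finite-difference scheme, only to show how a computed temperature at a thermometer location could be produced for comparison with measurements; all coefficients, the geometry, and the fixed-temperature tip boundary are assumptions.

```python
import numpy as np

# Deliberately simplified 1D heat diffusion around an RF tip held at a fixed
# temperature; a real model would be 3D and include perfusion and the heat-sink
# effect of vessels. All values below are illustrative assumptions.
alpha = 1.4e-7                    # thermal diffusivity of soft tissue, m^2/s
dx = 1e-3                         # 1 mm grid spacing
dt = 0.5 * dx**2 / (2 * alpha)    # half the explicit-scheme stability limit
n = 60                            # 6 cm domain
T = np.full(n, 37.0)              # start at body temperature

def step(T):
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    Tn[0] = 90.0                  # ablation tip boundary (assumed constant)
    Tn[-1] = 37.0                 # far-field boundary
    return Tn

total_time = 240.0                # seconds of ablation to simulate
for _ in range(int(total_time / dt)):
    T = step(T)

probe_mm = 10                     # "thermometer" 10 mm from the tip
print(f"Computed temperature at {probe_mm} mm after {total_time:.0f} s: {T[probe_mm]:.1f} °C")
```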
CMS users data management service integration and first experiences with its NoSQL data storage
NASA Astrophysics Data System (ADS)
Riahi, H.; Spiga, D.; Boccali, T.; Ciangottini, D.; Cinquilli, M.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Santocchia, A.
2014-06-01
The distributed data analysis workflow in CMS assumes that jobs run in a different location from where their results are finally stored. Typically the user outputs must be transferred from one site to another by a dedicated CMS service, AsyncStageOut. This new service was originally developed to address the inefficiency in using CMS computing resources when transferring analysis job outputs synchronously from the job execution node to the remote site, once they are produced. The AsyncStageOut is designed as a thin application relying only on the NoSQL database (CouchDB) as input and data storage. It has progressed from a limited prototype to a highly adaptable service which manages and monitors the whole set of user file steps, namely file transfer and publication. The AsyncStageOut is integrated with the Common CMS/Atlas Analysis Framework. It foresees the management of nearly 200k user files per day from close to 1000 individual users per month with minimal delays, providing real-time monitoring and reports to users and service operators, while being highly available. The associated data volume represents a new set of challenges in the areas of database scalability and service performance and efficiency. In this paper, we present an overview of the AsyncStageOut model and the integration strategy with the Common Analysis Framework. The motivations for using the NoSQL technology are also presented, as well as the data design and the techniques used for efficient indexing and monitoring of the data. We describe the deployment model for the high availability and scalability of the service. We also discuss the hardware requirements and the results achieved as they were determined by testing with actual data and realistic loads during the commissioning and the initial production phase with the Common Analysis Framework.
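As an illustration of the kind of NoSQL usage described, the sketch below stores and updates a transfer-task document through CouchDB's standard HTTP interface (documents are JSON; updates must carry the latest _rev). It assumes a local, unauthenticated CouchDB instance, and the database name, document id, and fields are invented for the example rather than taken from the AsyncStageOut schema.

```python
import requests

COUCH = "http://localhost:5984"            # assumes a local, unauthenticated CouchDB
DB = "asyncstageout_demo"                  # illustrative database name
DOC = "transfer-job1234-file42"            # illustrative document id

requests.put(f"{COUCH}/{DB}")              # create the database (error if it already exists)

# Store one file-transfer task as a JSON document (fields are invented).
task = {"user": "jdoe", "source": "T2_IT_Pisa", "destination": "T2_CH_CERN",
        "lfn": "/store/user/jdoe/output_42.root", "state": "new"}
requests.put(f"{COUCH}/{DB}/{DOC}", json=task)

# Update the task state: CouchDB requires the latest _rev to detect conflicts.
doc = requests.get(f"{COUCH}/{DB}/{DOC}").json()
doc["state"] = "done"
requests.put(f"{COUCH}/{DB}/{DOC}", json=doc)
print("updated", DOC, "to state", doc["state"])
```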
The effect of monitor raster latency on VEPs, ERPs and Brain-Computer Interface performance.
Nagel, Sebastian; Dreher, Werner; Rosenstiel, Wolfgang; Spüler, Martin
2018-02-01
Visual neuroscience experiments and Brain-Computer Interface (BCI) control often require strict timings on a millisecond scale. As most experiments are performed using a personal computer (PC), the latencies that are introduced by the setup should be taken into account and corrected. As a standard computer monitor uses rastering to update each line of the image sequentially, this causes a monitor raster latency which depends on the stimulus position, the monitor, and the refresh rate. We technically measured the raster latencies of different monitors and present the effects on visual evoked potentials (VEPs) and event-related potentials (ERPs). Additionally, we present a method for correcting the monitor raster latency and analyzed the performance difference of a code-modulated VEP BCI speller by correcting the latency. There are currently no other methods validating the effects of monitor raster latency on VEPs and ERPs. The timings of VEPs and ERPs are directly affected by the raster latency. Furthermore, correcting the raster latency resulted in a significant reduction of the target prediction error from 7.98% to 4.61% and also in a more reliable classification of targets by significantly increasing the distance between the most probable and the second most probable target by 18.23%. The monitor raster latency affects the timings of VEPs and ERPs, and correcting it resulted in a significant error reduction of 42.23%. It is recommended to correct the raster latency for increased BCI performance and methodical correctness. Copyright © 2017 Elsevier B.V. All rights reserved.
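Because an LCD refreshes line by line from the top of the frame, the raster latency at a stimulus position can be approximated from the vertical position and the refresh rate, as in the sketch below. This is a first-order estimate that ignores additional input lag and pixel response time, and it is not the correction procedure used in the paper.

```python
def raster_latency_ms(stimulus_row, screen_rows=1080, refresh_hz=60.0):
    """Approximate delay before the monitor's raster reaches a stimulus row:
    the fraction of the frame scanned so far times the frame period."""
    frame_period_ms = 1000.0 / refresh_hz
    return (stimulus_row / screen_rows) * frame_period_ms

# A stimulus near the bottom of a 60 Hz, 1080-line display appears ~15 ms later
# than one near the top; this offset shifts apparent VEP/ERP latencies.
print(f"top:    {raster_latency_ms(50):.1f} ms")
print(f"bottom: {raster_latency_ms(1000):.1f} ms")
```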
Optimizing study design for multi-species avian monitoring programmes
Jamie S. Sanderlin; William M. Block; Joseph L. Ganey
2014-01-01
Many monitoring programmes are successful at monitoring common species, whereas rare species, which are often of highest conservation concern, may be detected infrequently. Study designs that increase the probability of detecting rare species at least once over the study period, while collecting adequate data on common species, strengthen programme ability to address...
A handheld computer as part of a portable in vivo knee joint load monitoring system
Szivek, JA; Nandakumar, VS; Geffre, CP; Townsend, CP
2009-01-01
In vivo measurement of loads and pressures acting on articular cartilage in the knee joint during various activities and rehabilitative therapies following focal defect repair will provide a means of designing activities that encourage faster and more complete healing of focal defects. It was the goal of this study to develop a totally portable monitoring system that could be used during various activities and allow continuous monitoring of forces acting on the knee. In order to make the monitoring system portable, a handheld computer with custom software, a USB powered miniature wireless receiver and a battery-powered coil were developed to replace a currently used computer, AC powered bench top receiver and power supply. A Dell handheld running Windows Mobile operating system(OS) programmed using Labview was used to collect strain measurements. Measurements collected by the handheld based system connected to the miniature wireless receiver were compared with the measurements collected by a hardwired system and a computer based system during bench top testing and in vivo testing. The newly developed handheld based system had a maximum accuracy of 99% when compared to the computer based system. PMID:19789715
NASA Astrophysics Data System (ADS)
Zhang, Hong
2017-06-01
In recent years, with the continuous development and application of network technology, network security has gradually come into public view. Unauthorized external network connections from internal host computers are an important source of network security threats. At present, most organizations pay a certain degree of attention to network security and have adopted many measures to prevent security problems, such as physically isolating the internal network and installing firewalls at the network exit. However, these measures are often undermined by user behavior that violates security rules. For example, a host that connects to a wireless network, or uses a second network card to access the Internet, inadvertently forms a bridge between the external network and internal computers [1]. As a result, important and confidential documents may leak even without the user being aware of it. Out-of-band monitoring of violating classified computers can largely prevent such violations by monitoring the behavior of the offending connection. In this paper, we mainly research and discuss this secure computer monitoring technology.
The design of an m-Health monitoring system based on a cloud computing platform
NASA Astrophysics Data System (ADS)
Xu, Boyi; Xu, Lida; Cai, Hongming; Jiang, Lihong; Luo, Yang; Gu, Yizhi
2017-01-01
Compared to traditional medical services provided within hospitals, m-Health monitoring systems (MHMSs) face more challenges in personalised health data processing. To achieve personalised and high-quality health monitoring by means of new technologies, such as mobile network and cloud computing, in this paper, a framework of an m-Health monitoring system based on a cloud computing platform (Cloud-MHMS) is designed to implement pervasive health monitoring. Furthermore, the modules of the framework, which are Cloud Storage and Multiple Tenants Access Control Layer, Healthcare Data Annotation Layer, and Healthcare Data Analysis Layer, are discussed. In the data storage layer, a multiple tenant access method is designed to protect patient privacy. In the data annotation layer, linked open data are adopted to augment health data interoperability semantically. In the data analysis layer, the process mining algorithm and similarity calculating method are implemented to support personalised treatment plan selection. These three modules cooperate to implement the core functions in the process of health monitoring, which are data storage, data processing, and data analysis. Finally, we study the application of our architecture in the monitoring of antimicrobial drug usage to demonstrate the usability of our method in personal healthcare analysis.
Mucins and Cytokeratins as Serum Tumor Markers in Breast Cancer.
Nicolini, Andrea; Ferrari, Paola; Rossi, Giuseppe
2015-01-01
Structural and functional characteristics of mucins and cytokeratins are shortly described. Thereafter, those commonly used in breast cancer as serum tumor markers are considered. First CA15.3, MCA, CA549, CA27.29 mucins and CYFRA21.1, TPA, TPS cytokeratins alone or in association have been examined in different stages and conditions. Then their usefulness in monitoring disease-free breast cancer patients is evaluated. The central role of the established cut-off and critical change, the "early" treatment of recurrent disease and the potential benefit in survival are other issues that have been highlighted and discussed. The successive sections and subsections deal with the monitoring of advanced disease. In them, the current recommendations and the principal findings on using the above mentioned mucins and cytokeratins have been reported. A computer program for interpreting consecutive measurements of serum tumor markers also has been illustrated. The final part of the chapter is devoted to mucins and cytokeratins as markers of circulating and disseminated tumor cells and their usefulness for prognosis.
Cho, Soojin; Park, Jong-Woong; Sim, Sung-Han
2015-01-01
Wireless sensor networks (WSNs) facilitate a new paradigm to structural identification and monitoring for civil infrastructure. Conventional structural monitoring systems based on wired sensors and centralized data acquisition systems are costly for installation as well as maintenance. WSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks in which centralized data acquisition and processing is common practice, WSNs require decentralized computing algorithms to reduce data transmission due to the limitation associated with wireless communication. In this paper, the stochastic subspace identification (SSI) technique is selected for system identification, and SSI-based decentralized system identification (SDSI) is proposed to be implemented in a WSN composed of Imote2 wireless sensors that measure acceleration. The SDSI is tightly scheduled in the hierarchical WSN, and its performance is experimentally verified in a laboratory test using a 5-story shear building model. PMID:25856325
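As a rough, centralized illustration of the covariance-driven flavour of stochastic subspace identification (the abstract does not state which SSI variant SDSI uses, nor how the computation is split across nodes), the sketch below builds a block-Toeplitz matrix of output correlations, extracts an observability matrix by SVD, and recovers modal frequencies from the estimated state matrix. The model order, block size, and synthetic data are assumptions.

```python
import numpy as np

def ssi_cov_frequencies(y, fs, i=20, order=4):
    """Toy covariance-driven SSI: y is (channels x samples) acceleration data,
    fs the sampling rate; returns estimated natural frequencies in Hz."""
    l, N = y.shape
    # Output correlation matrices R_k = E[y(t+k) y(t)^T] for lags 0 .. 2i-1.
    R = [y[:, k:] @ y[:, :N - k].T / (N - k) for k in range(2 * i)]
    # Block-Toeplitz matrix built from the correlation matrices.
    T = np.block([[R[i + p - q] for q in range(i)] for p in range(i)])
    U, s, _ = np.linalg.svd(T)
    Obs = U[:, :order] * np.sqrt(s[:order])          # observability matrix estimate
    # Shift invariance of the observability matrix gives the state matrix A.
    A, *_ = np.linalg.lstsq(Obs[:-l, :], Obs[l:, :], rcond=None)
    lam = np.linalg.eigvals(A)                       # discrete-time eigenvalues
    freqs = np.abs(np.log(lam)) * fs / (2 * np.pi)   # continuous-time |lambda| / 2*pi
    return np.unique(np.round(freqs, 2))

# Synthetic two-mode response measured at three "sensor nodes".
fs = 100.0
t = np.arange(0, 60, 1 / fs)
modes = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 4.0 * t)
y = np.vstack([modes + 0.05 * np.random.randn(t.size) for _ in range(3)])
print(ssi_cov_frequencies(y, fs))   # expected to lie near 1.5 and 4.0 Hz
```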
Wireless biopotential acquisition system for portable healthcare monitoring.
Wang, W-S; Huang, H-Y; Wu, Z-C; Chen, S-C; Wang, W-F; Wu, C-F; Luo, C-H
2011-07-01
A complete biopotential acquisition system with an analogue front-end (AFE) chip is proposed for portable healthcare monitoring. A graphical user interface (GUI) is also implemented to display the extracted biopotential signals in real-time on a computer for patients or in a hospital via the internet for doctors. The AFE circuit defines the quality of the acquired biosignals. Thus, an AFE chip with low power consumption and a high common-mode rejection ratio (CMRR) was implemented in the TSMC 0.18-μm CMOS process. The measurement results show that the proposed AFE, with a core area of 0.1 mm(2), has a CMRR of 90 dB, and power consumption of 21.6 μW. Biopotential signals of electroencephalogram (EEG), electrocardiogram (ECG) and electromyogram (EMG) were measured to verify the proposed system. The board size of the proposed system is 6 cm × 2.5 cm and the weight is 30 g. The total power consumption of the proposed system is 66 mW. Copyright © 2011 Informa UK, Ltd.
Olson, Christine M
2016-07-17
e- and m-Health communication technologies are now common approaches to improving population health. The efficacy of behavioral nutrition interventions using e-health technologies to decrease fat intake and increase fruit and vegetable intake was demonstrated in studies conducted from 2005 to 2009, with approximately 75% of trials showing positive effects. By 2010, an increasing number of behavioral nutrition interventions were focusing on body weight. The early emphasis on interventions that were highly computer tailored shifted to personalized electronic interventions that included weight and behavioral self-monitoring as key features. More diverse target audiences began to participate, and mobile components were added to interventions. Little progress has been made on using objective measures rather than self-reported measures of dietary behavior. A challenge for nutritionists is to link with the private sector in the design, use, and evaluation of the many electronic devices that are now available in the marketplace for nutrition monitoring and behavioral change.
Automated Instructional Monitors for Complex Operational Tasks. Final Report.
ERIC Educational Resources Information Center
Feurzeig, Wallace
A computer-based instructional system is described which incorporates diagnosis of students difficulties in acquiring complex concepts and skills. A computer automatically generated a simulated display. It then monitored and analyzed a student's work in the performance of assigned training tasks. Two major tasks were studied. The first,…
JAVA CLASSES FOR NONPROCEDURAL VARIOGRAM MONITORING. JOURNAL OF COMPUTERS AND GEOSCIENCE
NRMRL-ADA-00229 Faulkner*, B.P. Java Classes for Nonprocedural Variogram Monitoring. Journal of Computers and Geosciences (Elsevier Science, Ltd.) 28:387-397 (2002). EPA/600/J-02/235. A set of Java classes was written for variogram modeling to support research for US EPA's Reg...
Design of a specialized computer for on-line monitoring of cardiac stroke volume
NASA Technical Reports Server (NTRS)
Webb, J. A., Jr.; Gebben, V. D.
1972-01-01
The design of a specialized analog computer for on-line determination of cardiac stroke volume by means of a modified version of the pressure pulse contour method is presented. The design consists of an analog circuit for computation and a timing circuit for detecting necessary events on the pressure waveform. Readouts of arterial pressures, systolic duration, heart rate, percent change in stroke volume, and percent change in cardiac output are provided for monitoring cardiac patients. Laboratory results showed that computational accuracy was within 3 percent, while animal experiments verified the operational capability of the computer. Patient safety considerations are also discussed.
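The classical pulse pressure contour idea is that stroke volume is roughly proportional to the area under the systolic portion of the arterial pressure wave above the diastolic level, scaled by a calibration factor related to aortic impedance. The sketch below implements only that basic relation, not the specific modified method of the paper; the waveform and calibration constant are illustrative assumptions.

```python
import numpy as np

def stroke_volume_ml(pressure_mmhg, fs_hz, systole_end_idx, k_ml_per_mmhg_s=10.0):
    """Basic pulse-contour estimate: K times the area of the systolic part of the
    arterial pressure wave above the diastolic (minimum) pressure."""
    diastolic = pressure_mmhg.min()
    systolic_excess = pressure_mmhg[:systole_end_idx] - diastolic
    area_mmhg_s = systolic_excess.sum() / fs_hz        # rectangle-rule integral
    return k_ml_per_mmhg_s * area_mmhg_s

# Illustrative single beat sampled at 200 Hz: 0.3 s systole, 0.5 s diastole.
fs = 200
t = np.arange(0, 0.8, 1 / fs)
pressure = np.where(t <= 0.3,
                    80 + 40 * np.sin(np.pi * t / 0.3),     # systolic upstroke and decay
                    80 + 3 * np.exp(-(t - 0.3) / 0.2))     # diastolic run-off
sv = stroke_volume_ml(pressure, fs, systole_end_idx=int(0.3 * fs))
print(f"Estimated stroke volume: {sv:.0f} ml")   # ~76 ml with these assumed values
```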
Real-time seismic monitoring and functionality assessment of a building
Celebi, M.
2005-01-01
This paper presents recent developments and approaches (using GPS technology and real-time double-integration) to obtain displacements and, in turn, drift ratios, in real-time or near real-time to meet the needs of the engineering and user community in seismic monitoring and assessing the functionality and damage condition of structures. Drift ratios computed in near real-time allow technical assessment of the damage condition of a building. Relevant parameters, such as the type of connections and story structural characteristics (including geometry) are used in computing drifts corresponding to several pre-selected threshold stages of damage. Thus, drift ratios determined from real-time monitoring can be compared to pre-computed threshold drift ratios. The approaches described herein can be used for performance evaluation of structures and can be considered as building health-monitoring applications.
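A minimal sketch of the double-integration idea, assuming already band-limited acceleration records: integrate each story's acceleration twice to displacement (with mean removal to limit integration drift), form the drift ratio as relative displacement over story height, and compare it with a pre-computed threshold. Real implementations use proper filtering and structure-specific thresholds; the values below are placeholders.

```python
import numpy as np

def integrate(series, dt):
    """Cumulative trapezoidal integration with mean removal to curb drift."""
    series = series - series.mean()
    out = np.concatenate(([0.0], np.cumsum((series[1:] + series[:-1]) / 2.0 * dt)))
    return out - out.mean()

def drift_ratio(acc_lower, acc_upper, dt, story_height_m):
    """Peak interstory drift ratio from two floors' acceleration records (m/s^2)."""
    disp_lower = integrate(integrate(acc_lower, dt), dt)
    disp_upper = integrate(integrate(acc_upper, dt), dt)
    return np.max(np.abs(disp_upper - disp_lower)) / story_height_m

# Synthetic 2 Hz response, the upper floor moving slightly more than the lower one.
dt = 0.01
t = np.arange(0, 20, dt)
acc_lower = 0.8 * np.sin(2 * np.pi * 2.0 * t)
acc_upper = 1.0 * np.sin(2 * np.pi * 2.0 * t)
ratio = drift_ratio(acc_lower, acc_upper, dt, story_height_m=3.0)
print(f"Peak drift ratio: {ratio:.4%}")   # compared against a pre-computed threshold, e.g. 0.5%
```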
The Role of Parents and Related Factors on Adolescent Computer Use
Epstein, Jennifer A.
2012-01-01
Background Research has suggested that parents influence their adolescents' computer activity. Spending too much time on the computer for recreational purposes in particular has been found to be related to areas of public health concern in children/adolescents, including obesity and substance use. Design and Methods The goal of the research was to determine the association between recreational computer use and potentially linked factors (parental monitoring, social influences to use computers including parents, age of first computer use, self-control, and particular internet activities). Participants (aged 13-17 years and residing in the United States) were recruited via the Internet to complete an anonymous survey online using a survey tool. The target sample of 200 participants who completed the survey was achieved. The sample's average age was 16 years, and 63% were girls. Results A set of regressions with recreational computer use as the dependent variable was run. Conclusions Less parental monitoring, younger age at first computer use, listening to or downloading music from the internet more frequently, using the internet for educational purposes less frequently, and parents' use of the computer for pleasure were related to spending a greater percentage of time on non-school computer use. These findings suggest the importance of parental monitoring and parental computer use on their children's own computer use, and the influence of some internet activities on adolescent computer use. Finally, programs aimed at parents to help them increase the age when their children start using computers and learn how to place limits on recreational computer use are needed. PMID:25170449
Baker, Nancy A; Rubinstein, Elaine N; Rogers, Joan C
2012-09-01
Little is known about the problems experienced by and the accommodation strategies used by computer users with rheumatoid arthritis (RA) or fibromyalgia (FM). This study (1) describes specific problems and accommodation strategies used by people with RA and FM during computer use; and (2) examines if there were significant differences in the problems and accommodation strategies between the different equipment items for each diagnosis. Subjects were recruited from the Arthritis Network Disease Registry. Respondents completed a self-report survey, the Computer Problems Survey. Data were analyzed descriptively (percentages; 95% confidence intervals). Differences in the number of problems and accommodation strategies were calculated using nonparametric tests (Friedman's test and Wilcoxon Signed Rank Test). Eighty-four percent of respondents reported at least one problem with at least one equipment item (RA = 81.5%; FM = 88.9%), with most respondents reporting problems with their chair. Respondents most commonly used timing accommodation strategies to cope with mouse and keyboard problems, personal accommodation strategies to cope with chair problems and environmental accommodation strategies to cope with monitor problems. The number of problems during computer use was substantial in our sample, and our respondents with RA and FM may not implement the most effective strategies to deal with their chair, keyboard, or mouse problems. This study suggests that workers with RA and FM might potentially benefit from education and interventions to assist with the development of accommodation strategies to reduce problems related to computer use.
The ICT monitoring system of the ASTRI SST-2M prototype proposed for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Gianotti, F.; Bruno, P.; Tacchini, A.; Conforti, V.; Fioretti, V.; Tanci, C.; Grillo, A.; Leto, G.; Malaguti, G.; Trifoglio, M.
2016-08-01
In the framework of the international Cherenkov Telescope Array (CTA) observatory, the Italian National Institute for Astrophysics (INAF) has developed a dual mirror, small sized, telescope prototype (ASTRI SST-2M), installed in Italy at the INAF observing station located at Serra La Nave, Mt. Etna. The ASTRI SST-2M prototype is the basis of the ASTRI telescopes that will form the mini-array proposed to be installed at the CTA southern site during its preproduction phase. This contribution presents the solutions implemented to realize the monitoring system for the Information and Communication Technology (ICT) infrastructure of the ASTRI SST-2M prototype. The ASTRI ICT monitoring system has been implemented by integrating traditional tools used in computer centers, with specific custom tools which interface via Open Platform Communication Unified Architecture (OPC UA) to the Alma Common Software (ACS) that is used to operate the ASTRI SST-2M prototype. The traditional monitoring tools are based on Simple Network Management Protocol (SNMP) and commercial solutions and features embedded in the devices themselves. They generate alerts by email and SMS. The specific custom tools convert the SNMP protocol into the OPC UA protocol and implement an OPC UA server. The server interacts with an OPC UA client implemented in an ACS component that, through the ACS Notification Channel, sends monitor data and alerts to the central console of the ASTRI SST-2M prototype. The same approach has been proposed also for the monitoring of the CTA onsite ICT infrastructures.
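A compact sketch of the protocol-conversion layer described above: health values are polled over SNMP and republished as OPC UA monitor points for an ACS-side client to consume. Both poll_snmp_oid and OpcUaServerStub are hypothetical placeholders standing in for real SNMP and OPC UA libraries, and the device/OID/node mapping is invented, not taken from the ASTRI system.

```python
def poll_snmp_oid(host: str, oid: str) -> float:
    """Hypothetical stand-in for an SNMP GET; a real bridge would call an
    SNMP library here and return the polled value for the given OID."""
    raise NotImplementedError("replace with a real SNMP query")

class OpcUaServerStub:
    """Hypothetical stand-in for an OPC UA server exposing ICT monitor points."""
    def __init__(self):
        self.nodes = {}
    def write(self, node_id: str, value: float) -> None:
        self.nodes[node_id] = value     # an ACS-side OPC UA client would subscribe here

MONITOR_POINTS = {                      # invented mapping: (device, OID) -> OPC UA node
    ("ups-room1", "1.3.6.1.2.1.33.1.2.4.0"): "ICT/UPS1/batteryCharge",
    ("switch-1",  "1.3.6.1.2.1.1.3.0"):      "ICT/Switch1/uptime",
}

def bridge_once(server: OpcUaServerStub) -> None:
    """One polling cycle; in the real system this would run periodically."""
    for (host, oid), node_id in MONITOR_POINTS.items():
        try:
            server.write(node_id, poll_snmp_oid(host, oid))
        except Exception as exc:        # a real bridge would raise an operator alert
            print(f"polling {host} {oid} failed: {exc}")

bridge_once(OpcUaServerStub())
```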
Adaptive runtime for a multiprocessing API
Antao, Samuel F.; Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.
2016-11-15
A computer-implemented method includes selecting a runtime for executing a program. The runtime includes a first combination of feature implementations, where each feature implementation implements a feature of an application programming interface (API). Execution of the program is monitored, and the execution uses the runtime. Monitor data is generated based on the monitoring. A second combination of feature implementations are selected, by a computer processor, where the selection is based at least in part on the monitor data. The runtime is modified by activating the second combination of feature implementations to replace the first combination of feature implementations.
Adaptive runtime for a multiprocessing API
Antao, Samuel F.; Bertolli, Carlo; Eichenberger, Alexandre E.; O'Brien, John K.
2016-10-11
A computer-implemented method includes selecting a runtime for executing a program. The runtime includes a first combination of feature implementations, where each feature implementation implements a feature of an application programming interface (API). Execution of the program is monitored, and the execution uses the runtime. Monitor data is generated based on the monitoring. A second combination of feature implementations are selected, by a computer processor, where the selection is based at least in part on the monitor data. The runtime is modified by activating the second combination of feature implementations to replace the first combination of feature implementations.
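The mechanism described, executing with one combination of feature implementations, monitoring, and then activating a better combination based on the monitor data, is essentially an adaptive strategy pattern. The sketch below is a generic illustration with invented names; it is not the patented runtime or any particular API's implementation.

```python
import time
from typing import Callable, Dict

# Two interchangeable implementations of one API "feature" (invented example).
def reduce_serial(data):            # simple baseline implementation
    return sum(data)

def reduce_chunked(data, chunk=4):  # alternative implementation of the same feature
    return sum(sum(data[i:i + chunk]) for i in range(0, len(data), chunk))

class AdaptiveRuntime:
    """Runs the active feature implementation, records timings as monitor data,
    and switches to whichever implementation has the lowest observed cost."""
    def __init__(self, impls: Dict[str, Callable]):
        self.impls = impls
        self.active = next(iter(impls))
        self.monitor: Dict[str, list] = {name: [] for name in impls}

    def call(self, *args):
        start = time.perf_counter()
        result = self.impls[self.active](*args)
        self.monitor[self.active].append(time.perf_counter() - start)
        return result

    def adapt(self):
        timed = {n: sum(t) / len(t) for n, t in self.monitor.items() if t}
        if timed:
            self.active = min(timed, key=timed.get)

runtime = AdaptiveRuntime({"serial": reduce_serial, "chunked": reduce_chunked})
data = list(range(10_000))
for name in runtime.impls:          # profile each implementation once
    runtime.active = name
    runtime.call(data)
runtime.adapt()
print("selected implementation:", runtime.active)
```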
Experimental evaluations of wearable ECG monitor.
Ha, Kiryong; Kim, Youngsung; Jung, Junyoung; Lee, Jeunwoo
2008-01-01
The healthcare industry is changing with the ubiquitous computing environment, and wearable ECG measurement is one of the most popular approaches in this industry. The reliability and performance of healthcare devices are fundamental issues for widespread adoption, and the interdisciplinary nature of wearable ECG monitors makes evaluation more difficult. In this paper, we propose evaluation criteria that consider the characteristics of both ECG measurement and ubiquitous computing. With our wearable ECG monitors, various levels of experimental analysis are performed based on this evaluation strategy.
26 CFR 1.584-3 - Computation of common trust fund income.
Code of Federal Regulations, 2010 CFR
2010-04-01
26 CFR § 1.584-3 (2010-04-01), Computation of common trust fund income; Internal Revenue Service, Department of the Treasury; Income Taxes; Banking Institutions.
Isley, Michael R; Edmonds, Harvey L; Stecker, Mark
2009-12-01
Electroencephalography (EEG) is one of the oldest and most commonly utilized modalities for intraoperative neuromonitoring. Historically, interest in the EEG patterns associated with anesthesia is as old as the discovery of the EEG itself. The evolution of its intraoperative use was also expanded to include monitoring for assessing cortical perfusion and oxygenation during a variety of vascular, cardiac, and neurosurgical procedures. Furthermore, a number of quantitative or computer-processed algorithms have also been developed to aid in its visual representation and interpretation. The primary clinical outcomes for which modern EEG technology has made significant intraoperative contributions include: (1) recognizing and/or preventing perioperative ischemic insults, and (2) monitoring of brain function for anesthetic drug administration in order to determine depth of anesthesia (and level of consciousness), including the tailoring of drug levels to achieve a predefined neural effect (e.g., burst suppression). While the accelerated development of microprocessor technologies has fostered an extraordinarily rapid growth in the use of intraoperative EEG, there is still no universal adoption of a monitoring technique(s) or of criteria for its neural end-point(s) by anesthesiologists, surgeons, neurologists, and neurophysiologists. One of the most important limitations to routine intraoperative use of EEG may be the lack of standardization of methods, alarm criteria, and recommendations related to its application. Lastly, refinements in technology and signal processing can be expected to advance the usefulness of the intraoperative EEG for both anesthetic and surgical management of patients. This paper is the position statement of the American Society of Neurophysiological Monitoring. It is the practice guidelines for the intraoperative use of raw (analog and digital) and quantitative EEG. The following recommendations are based on trends in the current scientific and clinical literature and meetings, guidelines published by other organizations, expert opinion, and public review by the members of the American Society of Neurophysiological Monitoring. This document may not include all possible methodologies and interpretative criteria, nor do the authors and their sponsor intentionally exclude any new alternatives. The use of the techniques reviewed in these guidelines may reduce perioperative neurological morbidity and mortality. This position paper summarizes commonly used protocols for recording and interpreting the intraoperative use of EEG. Furthermore, the American Society of Neurophysiological Monitoring recognizes this as primarily an educational service.
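One quantitative measure tied to the burst suppression end-point mentioned above is the suppression ratio: the fraction of an epoch during which the EEG amplitude stays below a small threshold. The sketch below computes such a ratio for one channel; the amplitude threshold, minimum run length, and synthetic signal are illustrative assumptions, not values from the guideline.

```python
import numpy as np

def suppression_ratio(eeg_uv, fs_hz, threshold_uv=5.0, min_suppressed_s=0.5):
    """Fraction of the epoch spent in suppression, where suppression is a run of
    at least `min_suppressed_s` seconds with |EEG| below `threshold_uv`."""
    quiet = np.abs(eeg_uv) < threshold_uv
    min_run = int(min_suppressed_s * fs_hz)
    suppressed = np.zeros_like(quiet)
    run_start = None
    for i, q in enumerate(np.append(quiet, False)):   # sentinel closes the last run
        if q and run_start is None:
            run_start = i
        elif not q and run_start is not None:
            if i - run_start >= min_run:
                suppressed[run_start:i] = True
            run_start = None
    return suppressed.mean()

# Synthetic 10 s epoch at 250 Hz: bursts alternating with nearly flat EEG.
fs = 250
t = np.arange(0, 10, 1 / fs)
eeg = np.where((t % 4) < 2, 40 * np.sin(2 * np.pi * 10 * t), np.random.randn(t.size))
print(f"Suppression ratio: {suppression_ratio(eeg, fs):.2f}")   # roughly 0.4 here
```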
Chun, Hyeong Jin; Han, Yong Duk; Park, Yoo Min; Kim, Ka Ram; Lee, Seok Jae; Yoon, Hyun C
2018-03-06
To overcome the time and space constraints in disease diagnosis via the biosensing approach, we developed a new signal-transducing strategy that can be applied to colorimetric optical biosensors. Our study is focused on implementation of a signal transduction technology that can directly translate the color intensity signals-that require complicated optical equipment for the analysis-into signals that can be easily counted with the naked eye. Based on the selective light absorption and wavelength-filtering principles, our new optical signaling transducer was built from a common computer monitor and a smartphone. In this signal transducer, the liquid crystal display (LCD) panel of the computer monitor served as a light source and a signal guide generator. In addition, the smartphone was used as an optical receiver and signal display. As a biorecognition layer, a transparent and soft material-based biosensing channel was employed generating blue output via a target-specific bienzymatic chromogenic reaction. Using graphics editor software, we displayed the optical signal guide patterns containing multiple polygons (a triangle, circle, pentagon, heptagon, and 3/4 circle, each associated with a specified color ratio) on the LCD monitor panel. During observation of signal guide patterns displayed on the LCD monitor panel using a smartphone camera via the target analyte-loaded biosensing channel as a color-filtering layer, the number of observed polygons changed according to the concentration of the target analyte via the spectral correlation between absorbance changes in a solution of the biosensing channel and color emission properties of each type of polygon. By simple counting of the changes in the number of polygons registered by the smartphone camera, we could efficiently measure the concentration of a target analyte in a sample without complicated and expensive optical instruments. In a demonstration test on glucose as a model analyte, we could easily measure the concentration of glucose in the range from 0 to 10 mM.
Use of computer-assisted drug therapy outside the operating room.
Singh, Preet Mohinder; Borle, Anuradha; Goudra, Basavana G
2016-08-01
The number of procedures performed in the out-of-operating room setting under sedation has increased many fold in recent years. Sedation techniques aim to achieve rapid patient turnover through the use of short-acting drugs with minimal residual side-effects (mainly propofol and opioids). Even for common procedures, the practice of sedation delivery varies widely among providers. Computer-based sedation models have the potential to assist sedation providers and offer a more consistent and safer sedation experience for patients. Target-controlled infusions using propofol and other short-acting opioids for sedation have shown promising results in terms of increasing patient safety and allowing for more rapid wake-up times. Target-controlled infusion systems with real-time patient monitoring can titrate drug doses automatically to maintain optimal depth of sedation. The best recent example of this is the propofol-based Sedasys sedation system. Sedasys redefined individualized sedation by the addition of an automated clinical parameter that monitors depth of sedation. However, because of poor adoption and cost issues, it has been recently withdrawn by the manufacturer. Present automated drug delivery systems can assist in the provision of sedation for out-of-operating room procedures but cannot substitute for anesthesia providers. Use of the available technology has the potential to improve patient outcomes, decrease provider workload, and have a long-term economic impact on anesthesia care delivery outside of the operating room.
Kang, Sung-Won; Park, Hyung-Il; Choi, Byoung-Gun; Shin, Dongjun; Jung, Young-Giu; Lee, Jun-Young; Park, Hong-Won; Park, Sukyung
2017-01-01
Spinal disease is a common yet important condition that occurs because of inappropriate posture. Prevention could be achieved by continuous posture monitoring, but most measurement systems cannot be used in daily life due to factors such as burdensome wires and large sensing modules. To improve upon these weaknesses, we developed comfortable “smart wear” for posture measurement using conductive yarn for circuit patterning and a flexible printed circuit board (FPCB) for interconnections. The conductive yarn was made by twisting polyester yarn and metal filaments, and the resistance per unit length was about 0.05 Ω/cm. An embroidered circuit was made using the conductive yarn, which showed increased yield strength and uniform electrical resistance per unit length. Circuit networks of sensors and FPCBs for interconnection were integrated into clothes using a computer numerical control (CNC) embroidery process. The system was calibrated and verified by comparing the values measured by the smart wear with those measured by a motion capture camera system. Six subjects performed fixed movements and free computer work, and, with this system, we were able to measure the anterior/posterior direction tilt angle with an error of less than 4°. The smart wear does not have excessive wires, and its structure will be optimized for better posture estimation in a later study. PMID:29112125
Monitoring benthic algal communities: A comparison of targeted and coefficient sampling methods
Edwards, Matthew S.; Tinker, M. Tim
2009-01-01
Choosing an appropriate sample unit is a fundamental decision in the design of ecological studies. While numerous methods have been developed to estimate organism abundance, they differ in cost, accuracy and precision. Using both field data and computer simulation modeling, we evaluated the costs and benefits associated with two methods commonly used to sample benthic organisms in temperate kelp forests. One of these methods, the Targeted Sampling method, relies on different sample units, each "targeted" for a specific species or group of species, while the other method relies on coefficients that represent ranges of bottom cover obtained from visual estimates within standardized sample units. Both the field data and the computer simulations suggest that the two methods yield remarkably similar estimates of organism abundance and among-site variability, although the Coefficient method slightly underestimates variability among sample units when abundances are low. In contrast, the two methods differ considerably in the effort needed to sample these communities; the Targeted Sampling requires more time and twice the personnel to complete. We conclude that the Coefficient Sampling method may be better for environmental monitoring programs where changes in mean abundance are of central concern and resources are limiting, but that the Targeted Sampling method may be better for ecological studies where quantitative relationships among species and small-scale variability in abundance are of central concern.
Evidence supporting the need for a common soil monitoring protocol
Derrick A. Reeves; Mark D. Coleman; Deborah S. Page-Dumroese
2013-01-01
Many public land management agencies monitor forest soils for levels of disturbance related to management activities. Although several soil disturbance monitoring protocols based on visual observation have been developed to assess the amount and types of disturbance caused by forest management, no common method is currently used on National Forest lands in the United...
Herpetological Monitoring Using a Pitfall Trapping Design in Southern California
Fisher, Robert; Stokes, Drew; Rochester, Carlton; Brehme, Cheryl; Hathaway, Stacie; Case, Ted
2008-01-01
The steps necessary to conduct a pitfall trapping survey for small terrestrial vertebrates are presented. Descriptions of the materials needed and the methods to build trapping equipment from raw materials are discussed. Recommended data collection techniques are given along with suggested data fields. Animal specimen processing procedures, including toe- and scale-clipping, are described for lizards, snakes, frogs, and salamanders. Methods are presented for conducting vegetation surveys that can be used to classify the environment associated with each pitfall trap array. Techniques for data storage and presentation are given based on commonly used computer applications. As with any study, much consideration should be given to the study design and methods before beginning any data collection effort.
Quantification of Posterior Globe Flattening: Methodology Development and Validation
NASA Technical Reports Server (NTRS)
Lumpkins, S. B.; Garcia, K. M.; Sargsyan, A. E.; Hamilton, D. R.; Berggren, M. D.; Antonsen, E.; Ebert, D.
2011-01-01
Microgravity exposure affects visual acuity in a subset of astronauts, and mechanisms may include structural changes in the posterior globe and orbit. Particularly, posterior globe flattening has been implicated in several astronauts. This phenomenon is known to affect some terrestrial patient populations, and has been shown to be associated with intracranial hypertension. It is commonly assessed by magnetic resonance imaging (MRI), computed tomography (CT), or B-mode ultrasound (US), without consistent objective criteria. NASA uses a semi-quantitative scale of 0-3 as part of eye/orbit MRI and US analysis for occupational monitoring purposes. The goal of this study was to initiate development of an objective quantification methodology for posterior globe flattening.
Chiaradia, Enrico Antonio; Facchi, Arianna; Masseroni, Daniele; Ferrari, Daniele; Bischetti, Gian Battista; Gharsallah, Olfa; Cesari de Maria, Sandra; Rienzner, Michele; Naldi, Ezio; Romani, Marco; Gandolfi, Claudio
2015-09-01
The cultivation of rice, one of the most important staple crops worldwide, has very high water requirements. A variety of irrigation practices are applied, whose pros and cons, both in terms of water productivity and of their effects on the environment, are not completely understood yet. The continuous monitoring of irrigation and rainfall inputs, as well as of soil water dynamics, is a very important factor in the analysis of these practices. At the same time, however, it represents a challenging and costly task because of the complexity of the processes involved, of the difference in nature and magnitude of the driving variables and of the high variety of field conditions. In this paper, we present the prototype of an integrated, multisensor system for the continuous monitoring of water dynamics in rice fields under different irrigation regimes. The system consists of the following: (1) flow measurement devices for the monitoring of irrigation supply and tailwater drainage; (2) piezometers for groundwater level monitoring; (3) level gauges for monitoring the flooding depth; (4) multilevel tensiometers and moisture sensor clusters to monitor soil water status; (5) an eddy covariance station for the estimation of evapotranspiration fluxes; and (6) wireless transmission devices and a software interface for data transfer, storage and control from a remote computer. The system is modular and it is replicable in different field conditions. It was successfully applied over a 2-year period in three experimental plots in Northern Italy, each one with a different water management strategy. In the paper, we present information concerning the different instruments selected, their interconnections and their integration in a common remote control scheme. We also provide considerations and figures on the material and labour costs of the installation and management of the system.
A Waveform Archiving System for the GE Solar 8000i Bedside Monitor.
Fanelli, Andrea; Jaishankar, Rohan; Filippidis, Aristotelis; Holsapple, James; Heldt, Thomas
2018-01-01
Our objective was to develop, deploy, and test a data-acquisition system for the reliable and robust archiving of high-resolution physiological waveform data from a variety of bedside monitoring devices, including the GE Solar 8000i patient monitor, and for the logging of ancillary clinical and demographic information. The data-acquisition system consists of a computer-based archiving unit and a GE Tram Rac 4A that connects to the GE Solar 8000i monitor. Standard physiological front-end sensors connect directly to the Tram Rac, which serves as a port replicator for the GE monitor and provides access to these waveform signals through an analog data interface. Together with the GE monitoring data streams, we simultaneously collect the cerebral blood flow velocity envelope from a transcranial Doppler ultrasound system and a non-invasive arterial blood pressure waveform along a common time axis. All waveform signals are digitized and archived through a LabView-controlled interface that also allows for the logging of relevant meta-data such as clinical and patient demographic information. The acquisition system was certified for hospital use by the clinical engineering team at Boston Medical Center, Boston, MA, USA. Over a 12-month period, we collected 57 datasets from 11 neuro-ICU patients. The system provided reliable and failure-free waveform archiving. We measured an average temporal drift between waveforms from different monitoring devices of 1 ms every 66 min of recorded data. The waveform acquisition system allows for robust real-time data acquisition, processing, and archiving of waveforms. The temporal drift between waveforms archived from different devices is entirely negligible, even for long-term recording.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basch, Ethan, E-mail: ebasch@med.unc.edu; Lineberger Comprehensive Cancer Center, University of North Carolina, Chapel Hill, North Carolina; Pugh, Stephanie L.
Purpose: To assess the feasibility of measuring symptomatic adverse events (AEs) in a multicenter clinical trial using the National Cancer Institute's Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). Methods and Materials: Patients enrolled in NRG Oncology's RTOG 1012 (Prophylactic Manuka Honey for Reduction of Chemoradiation Induced Esophagitis-Related Pain during Treatment of Lung Cancer) were asked to self-report 53 PRO-CTCAE items representing 30 symptomatic AEs at 6 time points (baseline; weekly ×4 during treatment; 12 weeks after treatment). Reporting was conducted via wireless tablet computers in clinic waiting areas. Compliance was defined as the proportion of visits when an expected PRO-CTCAE assessment was completed. Results: Among 226 study sites participating in RTOG 1012, 100% completed 35-minute PRO-CTCAE training for clinical research associates (CRAs); 80 sites enrolled patients, of which 34 (43%) required tablet computers to be provided. All 152 patients in RTOG 1012 agreed to self-report using the PRO-CTCAE (median age 66 years; 47% female; 84% white). Median time for CRAs to learn the system was 60 minutes (range, 30-240 minutes), and median time for CRAs to teach a patient to self-report was 10 minutes (range, 2-60 minutes). Compliance was high, particularly during active treatment, when patients self-reported at 86% of expected time points, although compliance was lower after treatment (72%). Common reasons for noncompliance were institutional errors, such as forgetting to provide computers to participants; patients missing clinic visits; Internet connectivity; and patients feeling “too sick.” Conclusions: Most patients enrolled in a multicenter chemoradiotherapy trial were willing and able to self-report symptomatic AEs at visits using tablet computers. Minimal effort was required by local site staff to support this system. The observed causes of missing data may be obviated by allowing patients to self-report electronically between visits, and by using central compliance monitoring. These approaches are being incorporated into ongoing studies.
Basch, Ethan; Pugh, Stephanie L; Dueck, Amylou C; Mitchell, Sandra A; Berk, Lawrence; Fogh, Shannon; Rogak, Lauren J; Gatewood, Marcha; Reeve, Bryce B; Mendoza, Tito R; O'Mara, Ann M; Denicoff, Andrea M; Minasian, Lori M; Bennett, Antonia V; Setser, Ann; Schrag, Deborah; Roof, Kevin; Moore, Joan K; Gergel, Thomas; Stephans, Kevin; Rimner, Andreas; DeNittis, Albert; Bruner, Deborah Watkins
2017-06-01
To assess the feasibility of measuring symptomatic adverse events (AEs) in a multicenter clinical trial using the National Cancer Institute's Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). Patients enrolled in NRG Oncology's RTOG 1012 (Prophylactic Manuka Honey for Reduction of Chemoradiation Induced Esophagitis-Related Pain during Treatment of Lung Cancer) were asked to self-report 53 PRO-CTCAE items representing 30 symptomatic AEs at 6 time points (baseline; weekly ×4 during treatment; 12 weeks after treatment). Reporting was conducted via wireless tablet computers in clinic waiting areas. Compliance was defined as the proportion of visits when an expected PRO-CTCAE assessment was completed. Among 226 study sites participating in RTOG 1012, 100% completed 35-minute PRO-CTCAE training for clinical research associates (CRAs); 80 sites enrolled patients, of which 34 (43%) required tablet computers to be provided. All 152 patients in RTOG 1012 agreed to self-report using the PRO-CTCAE (median age 66 years; 47% female; 84% white). Median time for CRAs to learn the system was 60 minutes (range, 30-240 minutes), and median time for CRAs to teach a patient to self-report was 10 minutes (range, 2-60 minutes). Compliance was high, particularly during active treatment, when patients self-reported at 86% of expected time points, although compliance was lower after treatment (72%). Common reasons for noncompliance were institutional errors, such as forgetting to provide computers to participants; patients missing clinic visits; Internet connectivity; and patients feeling "too sick." Most patients enrolled in a multicenter chemoradiotherapy trial were willing and able to self-report symptomatic AEs at visits using tablet computers. Minimal effort was required by local site staff to support this system. The observed causes of missing data may be obviated by allowing patients to self-report electronically between visits, and by using central compliance monitoring. These approaches are being incorporated into ongoing studies. Copyright © 2017 Elsevier Inc. All rights reserved.
Fault-tolerant battery system employing intra-battery network architecture
Hagen, Ronald A.; Chen, Kenneth W.; Comte, Christophe; Knudson, Orlin B.; Rouillard, Jean
2000-01-01
A distributed energy storing system employing a communications network is disclosed. A distributed battery system includes a number of energy storing modules, each of which includes a processor and communications interface. In a network mode of operation, a battery computer communicates with each of the module processors over an intra-battery network and cooperates with individual module processors to coordinate module monitoring and control operations. The battery computer monitors a number of battery and module conditions, including the potential and current state of the battery and individual modules, and the conditions of the battery's thermal management system. An over-discharge protection system, equalization adjustment system, and communications system are also controlled by the battery computer. The battery computer logs and reports various status data on battery level conditions which may be reported to a separate system platform computer. A module transitions to a stand-alone mode of operation if the module detects an absence of communication connectivity with the battery computer. A module which operates in a stand-alone mode performs various monitoring and control functions locally within the module to ensure safe and continued operation.
Automating ATLAS Computing Operations using the Site Status Board
NASA Astrophysics Data System (ADS)
Andreeva, J.; Borrego Iglesias, C.; Campana, S.; Di Girolamo, A.; Dzhunov, I.; Espinal Curull, X.; Gayazov, S.; Magradze, E.; Nowotka, M.; Rinaldi, L.; Saiz, P.; Schovancova, J.; Stewart, G. A.; Wright, M.
2012-12-01
The automation of operations is essential to reduce manpower costs and improve the reliability of the system. The Site Status Board (SSB) is a framework which allows Virtual Organizations to monitor their computing activities at distributed sites and to evaluate site performance. The ATLAS experiment intensively uses the SSB for the distributed computing shifts, for estimating data processing and data transfer efficiencies at a particular site, and for implementing automatic exclusion of sites from computing activities, in case of potential problems. The ATLAS SSB provides a real-time aggregated monitoring view and keeps the history of the monitoring metrics. Based on this history, usability of a site from the perspective of ATLAS is calculated. The paper will describe how the SSB is integrated in the ATLAS operations and computing infrastructure and will cover implementation details of the ATLAS SSB sensors and alarm system, based on the information in the SSB. It will demonstrate the positive impact of the use of the SSB on the overall performance of ATLAS computing activities and will overview future plans.
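The SSB derives a site-usability figure from the history of its monitoring metrics. As a purely illustrative sketch of that idea (not the actual ATLAS SSB implementation; the time-weighted averaging and the simple OK/DOWN status model are assumptions), usability over a window can be computed from timestamped metric statuses:

```python
from datetime import datetime, timedelta

def usability(history, window_end, window=timedelta(days=1)):
    """Time-weighted fraction of a window during which a site's metric was OK.

    `history` is a time-sorted list of (timestamp, status) pairs, with status
    "OK" or "DOWN".  Illustrative approximation only, not the SSB algorithm.
    """
    window_start = window_end - window
    ok_time = timedelta(0)
    for (t, status), (t_next, _) in zip(history, history[1:] + [(window_end, None)]):
        start = max(t, window_start)
        end = min(t_next, window_end)
        if status == "OK" and end > start:
            ok_time += end - start
    return ok_time / window

# Example: a site that was down for six hours of the last day.
now = datetime(2012, 7, 1, 12, 0)
hist = [(now - timedelta(hours=30), "OK"),
        (now - timedelta(hours=10), "DOWN"),
        (now - timedelta(hours=4), "OK")]
print(f"usability = {usability(hist, now):.2f}")   # -> 0.75
```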
Thermal and orbital analysis of Earth monitoring Sun-synchronous space experiments
NASA Technical Reports Server (NTRS)
Killough, Brian D.
1990-01-01
The fundamentals of an Earth monitoring Sun-synchronous orbit are presented. A Sun-synchronous Orbit Analysis Program (SOAP) was developed to calculate orbital parameters for an entire year. The output from this program provides the required input data for the TRASYS thermal radiation computer code, which in turn computes the infrared, solar and Earth albedo heat fluxes incident on a space experiment. Direct incident heat fluxes can be used as input to a generalized thermal analyzer program to size radiators and predict instrument operating temperatures. The SOAP computer code and its application to the thermal analysis methodology presented, should prove useful to the thermal engineer during the design phases of Earth monitoring Sun-synchronous space experiments.
Checkpoint triggering in a computer system
Cher, Chen-Yong
2016-09-06
According to an aspect, a method for triggering creation of a checkpoint in a computer system includes executing a task in a processing node of the computer system and determining whether it is time to read a monitor associated with a metric of the task. The monitor is read to determine a value of the metric based on determining that it is time to read the monitor. A threshold for triggering creation of the checkpoint is determined based on the value of the metric. Based on determining that the value of the metric has crossed the threshold, the checkpoint including state data of the task is created to enable restarting execution of the task upon a restart operation.
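A minimal sketch of the triggering logic described in the abstract, assuming a periodic monitor read and a threshold derived from an earlier metric reading; the function names, the choice of metric and the factor-of-two threshold are illustrative assumptions, not the patented implementation:

```python
import time

def run_with_checkpoints(task_steps, read_metric, save_checkpoint,
                         monitor_interval=5.0, factor=2.0):
    """Execute task steps; periodically read a monitored metric (for example a
    correctable-error count) and create a checkpoint when the metric crosses a
    threshold derived from an earlier reading.  Illustrative sketch only."""
    state = {}
    last_read = time.monotonic()
    baseline = None
    for i, step in enumerate(task_steps):
        state = step(state)                          # one unit of task work
        if time.monotonic() - last_read < monitor_interval:
            continue                                 # not yet time to read the monitor
        last_read = time.monotonic()
        metric = read_metric()
        if baseline is None:
            baseline = metric                        # first reading sets the baseline
        elif metric > factor * baseline:             # threshold derived from the metric
            save_checkpoint(i, state)                # persist task state for restart
            baseline = metric                        # re-derive threshold after checkpoint
    return state

# Example with dummy callables (monitor read after every step):
# run_with_checkpoints([lambda s: s] * 10, read_metric=lambda: 1,
#                      save_checkpoint=print, monitor_interval=0.0)
```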
Inductive System Health Monitoring
NASA Technical Reports Server (NTRS)
Iverson, David L.
2004-01-01
The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or which require computer models that are too complex to use for real time monitoring. IMS uses nominal data sets collected either directly from the system or from simulations to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. IMS is able to monitor the system by comparing real time operational data with these classes. We present a description of the learning and monitoring methods used by IMS and summarize some recent IMS results.
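A loose approximation of the IMS idea, clustering archived nominal data into classes and scoring real-time samples by their distance beyond the nearest class; this is not the NASA IMS code, and the k-means summarisation, class count and toy data are assumptions:

```python
import numpy as np

def learn_nominal_classes(nominal, n_classes=4, iters=20, seed=0):
    """Summarise archived nominal data vectors as class centers and radii
    (a very small k-means; loosely inspired by, not equal to, NASA's IMS)."""
    rng = np.random.default_rng(seed)
    centers = nominal[rng.choice(len(nominal), n_classes, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(nominal[:, None] - centers, axis=2), axis=1)
        centers = np.array([nominal[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(n_classes)])
    radii = np.array([np.linalg.norm(nominal[labels == k] - centers[k], axis=1).max()
                      if np.any(labels == k) else 0.0 for k in range(n_classes)])
    return centers, radii

def anomaly_score(sample, centers, radii):
    """Distance of a real-time sample beyond the nearest nominal class (0 = nominal)."""
    d = np.linalg.norm(centers - sample, axis=1)
    k = int(np.argmin(d))
    return max(0.0, float(d[k]) - float(radii[k]))

# Toy example: a two-sensor system whose nominal readings cluster near (1, 1).
nominal = np.random.default_rng(1).normal(loc=1.0, scale=0.05, size=(200, 2))
centers, radii = learn_nominal_classes(nominal)
print(anomaly_score(np.array([1.00, 1.02]), centers, radii))  # ~0, looks nominal
print(anomaly_score(np.array([1.00, 2.00]), centers, radii))  # clearly > 0, anomalous
```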
Application of ubiquitous computing in personal health monitoring systems.
Kunze, C; Grossmann, U; Stork, W; Müller-Glaser, K D
2002-01-01
A possibility to significantly reduce the costs of public health systems is to increasingly use information technology. The Laboratory for Information Processing Technology (ITIV) at the University of Karlsruhe is developing a personal health monitoring system, which should improve health care and at the same time reduce costs by combining micro-technological smart sensors with personalized, mobile computing systems. In this paper we present how ubiquitous computing theory can be applied in the health-care domain.
Schnabel, Ulf H; Hegenloh, Michael; Müller, Hermann J; Zehetleitner, Michael
2013-09-01
Electromagnetic motion-tracking systems have the advantage of capturing the tempo-spatial kinematics of movements independently of the visibility of the sensors. However, they are limited in that they cannot be used in the proximity of electromagnetic field sources, such as computer monitors. This prevents exploiting the tracking potential of the sensor system together with that of computer-generated visual stimulation. Here we present a solution for presenting computer-generated visual stimulation that does not distort the electromagnetic field required for precise motion tracking, by means of a back projection medium. In one experiment, we verify that cathode ray tube monitors, as well as thin-film-transistor monitors, distort electro-magnetic sensor signals even at a distance of 18 cm. Our back projection medium, by contrast, leads to no distortion of the motion-tracking signals even when the sensor is touching the medium. This novel solution permits combining the advantages of electromagnetic motion tracking with computer-generated visual stimulation.
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large Scientific Equipment is controlled by Computer Systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems or Distributed Computer Control Systems (DCCS) for those systems dealing more with real time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework, the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general purpose Multi-layer System.
NASA Astrophysics Data System (ADS)
Mark, W. D.; Reagor, C. P.
2007-02-01
To assess gear health and detect gear-tooth damage, the vibratory response from meshing gear-pair excitations is commonly monitored by accelerometers. In an earlier paper, strong evidence was presented suggesting that, in the case of tooth bending-fatigue damage, the principal source of detectable damage is whole-tooth plastic deformation; i.e. yielding, rather than changes in tooth stiffness caused by tooth-root cracks. Such plastic deformations are geometric deviation contributions to the "static-transmission-error" (STE) vibratory excitation caused by meshing gear pairs. The STE contributions caused by two likely occurring forms of such plastic deformations on a single tooth are derived, and displayed in the time domain as a function of involute "roll distance." Example calculations are provided for transverse contact ratios of Qt=1.4 and 1.8, for spur gears and for helical-gear axial contact ratios ranging from Qa=1.2 to Qa=3.6. Low-pass- and band-pass-filtered versions of these same STE contributions also are computed and displayed in the time domain. Several calculations, consisting of superposition of the computed STE tooth-meshing fundamental harmonic contribution and the band-pass STE contribution caused by a plastically deformed tooth, exhibit the amplitude and frequency or phase modulation character commonly observed in accelerometer-response waveforms caused by damaged teeth. General formulas are provided that enable computation of these STE vibratory-excitation contributions for any form of plastic deformation on any number of teeth for spur and helical gears with any contact ratios.
Rooijakkers, Michiel; Rabotti, Chiara; Bennebroek, Martijn; van Meerbergen, Jef; Mischi, Massimo
2011-01-01
Non-invasive fetal health monitoring during pregnancy has become increasingly important. Recent advances in signal processing technology have enabled fetal monitoring during pregnancy, using abdominal ECG recordings. Ubiquitous ambulatory monitoring for continuous fetal health measurement is however still unfeasible due to the computational complexity of noise robust solutions. In this paper an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, while increasing the R-peak detection quality compared to existing R-peak detection schemes. Validation of the algorithm is performed on two manually annotated datasets, the MIT/BIH Arrhythmia database and an in-house abdominal database. Both R-peak detection quality and computational complexity are compared to state-of-the-art algorithms as described in the literature. With a detection error rate of 0.22% and 0.12% on the MIT/BIH Arrhythmia and in-house databases, respectively, the quality of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.
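As a hedged illustration of the kind of low-complexity R-peak detection and evaluation the abstract refers to (not the authors' algorithm; the adaptive threshold, refractory period and matching tolerance are all assumptions), a minimal detector and a detection-error-rate computation could look like this:

```python
import numpy as np

def detect_r_peaks(ecg, fs, refractory_s=0.25, decay=0.995):
    """Adaptive-threshold R-peak detector (illustrative, not the paper's method)."""
    threshold = 0.6 * np.max(ecg[:fs * 2])       # bootstrap from the first 2 s
    refractory = int(refractory_s * fs)
    peaks, last_peak = [], -refractory
    for i in range(1, len(ecg) - 1):
        threshold *= decay                        # slowly forget old amplitudes
        is_local_max = ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]
        if is_local_max and ecg[i] > threshold and i - last_peak > refractory:
            peaks.append(i)
            last_peak = i
            threshold = 0.6 * ecg[i]              # re-arm from the detected peak
    return np.array(peaks)

def detection_error_rate(detected, annotated, fs, tol_s=0.05):
    """DER = (false positives + missed beats) / number of annotated beats."""
    tol = tol_s * fs
    matched = sum(np.any(np.abs(detected - a) <= tol) for a in annotated)
    fn = len(annotated) - matched
    fp = len(detected) - matched
    return (fp + fn) / len(annotated)
```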
Exploiting analytics techniques in CMS computing monitoring
NASA Astrophysics Data System (ADS)
Bonacorsi, D.; Kuznetsov, V.; Magini, N.; Repečka, A.; Vaandering, E.
2017-10-01
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS achieved successful operations, and to reach an adequate and adaptive modelling of the CMS operations, in order to allow detailed optimizations and eventually a prediction of system behaviours. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote on disks at WLCG Tiers, data on which datasets were primarily requested for analysis, etc.) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop, and discuss the new possibilities in CMS computing monitoring introduced with the ability to quickly process big data sets from multiple sources, looking forward to a predictive modeling of the system.
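One of the aggregations mentioned, counting how many replicas of each dataset were written to WLCG Tiers, can be expressed in miniature as the map/reduce-style sketch below; the record layout and field names are assumptions made for illustration, and a real deployment would run on the Hadoop cluster rather than in a single Python process:

```python
from collections import Counter
from functools import reduce

# Hypothetical transfer-monitoring records: (dataset, destination_tier)
records = [
    ("/PrimaryA/RECO", "T1_US_FNAL"),
    ("/PrimaryA/RECO", "T2_CH_CERN"),
    ("/PrimaryB/AOD",  "T2_DE_DESY"),
    ("/PrimaryA/RECO", "T2_IT_Bari"),
]

# "Map": emit (dataset, 1) for every replica written.
mapped = (Counter({dataset: 1}) for dataset, _tier in records)

# "Reduce": merge the partial counts into replica counts per dataset.
replica_counts = reduce(lambda a, b: a + b, mapped, Counter())
print(replica_counts)   # Counter({'/PrimaryA/RECO': 3, '/PrimaryB/AOD': 1})
```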
Automated technical validation--a real time expert system for decision support.
de Graeve, J S; Cambus, J P; Gruson, A; Valdiguié, P M
1996-04-15
Dealing daily with various machines and various control specimens provides a lot of data that cannot be processed manually. In order to help decision-making, we wrote specific software coping with traditional QC, with patient data (mean of normals, delta check) and with criteria related to the analytical equipment (flags and alarms). Four machines (3 Ektachem 700 and 1 Hitachi 911) analysing 25 common chemical tests are controlled. Every day, three different control specimens, plus a fourth once a week (regional survey), are run on the various pieces of equipment. The data are collected on a 486 microcomputer connected to the central computer. For every parameter the standard deviation is compared with the published acceptable limits and the Westgard rules are computed. The mean of normals is continuously monitored. The final decision induces either an alarm sound and the print-out of the cause of rejection or, if no alarms happen, the daily print-out of recorded data, with or without the Levey-Jennings graphs.
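A hedged sketch of the kind of Westgard multirule check such software computes; only two common rules (1-3s and 2-2s) are shown and the data layout is an assumption, whereas a full implementation would also apply rules such as R-4s, 4-1s and 10x:

```python
def westgard_flags(values, mean, sd):
    """Apply two common Westgard rules to a sequence of QC results.

    1_3s : one control result beyond +/-3 SD                      -> rejection
    2_2s : two consecutive results beyond the same +/-2 SD limit  -> rejection
    Illustrative only; real QC software evaluates additional rules.
    """
    z = [(v - mean) / sd for v in values]
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1_3s"))
        if i > 0 and abs(zi) > 2 and abs(z[i - 1]) > 2 and zi * z[i - 1] > 0:
            flags.append((i, "2_2s"))
    return flags

# Example: glucose control with target 5.5 mmol/L and SD 0.2
print(westgard_flags([5.6, 5.95, 5.97, 6.2], mean=5.5, sd=0.2))
# -> [(2, '2_2s'), (3, '1_3s'), (3, '2_2s')]
```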
Emerging MRI and metabolic neuroimaging techniques in mild traumatic brain injury.
Lu, Liyan; Wei, Xiaoer; Li, Minghua; Li, Yuehua; Li, Wenbin
2014-01-01
Traumatic brain injury (TBI) is one of the leading causes of death worldwide, and mild traumatic brain injury (mTBI) is the most common traumatic injury. It is difficult to detect mTBI using routine neuroimaging. Advanced techniques with greater sensitivity and specificity for the diagnosis and treatment of mTBI are required. The aim of this review is to offer an overview of various emerging neuroimaging methodologies that can solve the clinical health problems associated with mTBI. Important findings and improvements in neuroimaging that hold value for better detection, characterization and monitoring of objective brain injuries in patients with mTBI are presented. Conventional computed tomography (CT) and magnetic resonance imaging (MRI) are not very efficient for visualizing mTBI. Moreover, techniques such as diffusion tensor imaging, magnetization transfer imaging, susceptibility-weighted imaging, functional MRI, single photon emission computed tomography, positron emission tomography and magnetic resonance spectroscopy imaging were found to be useful for mTBI imaging.
2014-01-01
Quantitative imaging biomarkers (QIBs) are being used increasingly in medicine to diagnose and monitor patients’ disease. The computer algorithms that measure QIBs have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms’ bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms’ performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for QIB studies. PMID:24919828
Analysis of Protein Kinetics Using Fluorescence Recovery After Photobleaching (FRAP).
Giakoumakis, Nickolaos Nikiforos; Rapsomaniki, Maria Anna; Lygerou, Zoi
2017-01-01
Fluorescence recovery after photobleaching (FRAP) is a cutting-edge live-cell functional imaging technique that enables the exploration of protein dynamics in individual cells and thus permits the elucidation of protein mobility, function, and interactions at a single-cell level. During a typical FRAP experiment, fluorescent molecules in a defined region of interest within the cell are bleached by a short and powerful laser pulse, while the recovery of the fluorescence in the region is monitored over time by time-lapse microscopy. FRAP experimental setup and image acquisition involve a number of steps that need to be carefully executed to avoid technical artifacts. Equally important is the subsequent computational analysis of FRAP raw data, to derive quantitative information on protein diffusion and binding parameters. Here we present an integrated in vivo and in silico protocol for the analysis of protein kinetics using FRAP. We focus on the most commonly encountered challenges and technical or computational pitfalls and their troubleshooting so that valid and robust insight into protein dynamics within living cells is gained.
2012-01-01
Background Routine cytomegalovirus (CMV) screening during pregnancy is not recommended in the United States and the extent to which it is performed is unknown. Using a medical claims database, we computed rates of CMV-specific testing among pregnant women. Methods We used medical claims from the 2009 Truven Health MarketScan® Commercial databases. We computed CMV-specific testing rates using CPT codes. Results We identified 77,773 pregnant women, of whom 1,668 (2%) had a claim for CMV-specific testing. CMV-specific testing was significantly associated with older age, Northeast or urban residence, and a diagnostic code for mononucleosis. We identified 44 women with a diagnostic code for mononucleosis, of whom 14% had CMV-specific testing. Conclusions Few pregnant women had CMV-specific testing, suggesting that screening for CMV infection during pregnancy is not commonly performed. In the absence of national surveillance for CMV infections during pregnancy, healthcare claims are a potential source for monitoring practices of CMV-specific testing. PMID:23198949
Remote monitoring of a Fire Protection System
NASA Astrophysics Data System (ADS)
Bauman, Steven; Vermeulen, Tom; Roberts, Larry; Matsushige, Grant; Gajadhar, Sarah; Taroma, Ralph; Elizares, Casey; Arruda, Tyson; Potter, Sharon; Hoffman, James
2011-03-01
Some years ago CFHT proposed developing a Remote Observing Environment aimed at producing Science Observations at their Observatory Facility on Mauna Kea from their Headquarters facility in Waimea, HI. This Remote Observing Project, commonly referred to as OAP (Observatory Automation Project), was completed at the end of January 2011 and has been providing the majority of Science Data since. My poster will discuss the upgrades to the existing fire alarm protection system. With no one at the summit during nightly operations, the observatory facility required automated monitoring of the facility for safety to personnel and equipment in the case of a fire. An addressable analog fire panel was installed, which utilizes digital communication protocol (DCP), intelligent communication with other devices, and an RS-232 interface which provides feedback and real-time monitoring of the system. Using the interface capabilities of the panel, it provides notifications when heat detectors, smoke sensors, manual pull stations, or the main observatory computer room fire suppression system has been activated. The notifications are sent out as alerts to staff in the form of text messages and emails, and the observing control GUI interface alerts the remote telescope operator with a map showing the location of the fire occurrence and the type of device that has been triggered. All of this was accomplished without the need for an outside vendor to monitor the system and facilitate warnings or notifications regarding the system.
Lu, Wei; Teng, Jun; Zhou, Qiushi; Peng, Qiexin
2018-02-01
In structural health monitoring, the stress in structural steel members is the most useful and directly measurable physical quantity for evaluating structural safety; it is also an important index for evaluating the stress distribution and force condition of structures during the construction and service phases. Thus, it is common to set stress as a measure in steel structural monitoring. Considering cost and the importance of the structural members, only a limited number of sensors can be placed, which means that it is impossible to obtain the stresses of all members directly using sensors. This study aims to develop a stress response prediction method for locations where there are insufficient sensors, using measurements from a limited number of sensors and pattern recognition. The improved aspects are: (1) a distributed computing process is proposed, where the same pattern is recognized by several subsets of measurements; and (2) the pattern recognition using the subset of measurements is carried out by considering the optimal number of sensors and number of fusion patterns. The validity and feasibility of the proposed method are verified using two examples: the finite-element simulation of a single-layer shell-like steel structure, and the structural health monitoring of the space steel roof of Shenzhen Bay Stadium; for the latter, the anti-noise performance of the method is verified using stress measurements from a real-world project.
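A very loose, hypothetical sketch of the pattern-recognition idea (match a measured sensor subset to the closest precomputed load pattern, then read off the stress it implies at an unmonitored member); none of the numbers, names or the nearest-pattern rule come from the paper:

```python
import numpy as np

# Hypothetical pattern library: each row is a load pattern computed offline
# (e.g. by finite-element analysis), giving stresses at four monitored members
# and, in a parallel array, at one unmonitored member.
patterns_monitored   = np.array([[10.0,  8.0, 12.0,  9.0],
                                 [20.0, 17.0, 23.0, 18.0],
                                 [ 5.0,  4.0,  6.0,  5.0]])
patterns_unmonitored = np.array([11.0, 21.0, 5.5])

def predict_stress(measured, sensor_subset=(0, 1, 2)):
    """Match the measured subset to the closest pattern and return the
    stress it implies at the unmonitored location (illustrative only)."""
    sub = patterns_monitored[:, sensor_subset]
    distances = np.linalg.norm(sub - np.asarray(measured), axis=1)
    best = int(np.argmin(distances))
    return patterns_unmonitored[best]

print(predict_stress([19.5, 17.2, 22.8]))   # -> 21.0 (closest to the second pattern)
```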
NASA Technical Reports Server (NTRS)
Holden, D. G.
1975-01-01
Hard Over Monitoring Equipment (HOME) has been designed to complement and enhance the flight safety of a flight research helicopter. HOME is an independent, highly reliable, and fail-safe special purpose computer that monitors the flight control commands issued by the flight control computer of the helicopter. In particular, HOME detects the issuance of a hazardous hard-over command for any of the four flight control axes and transfers the control of the helicopter to the flight safety pilot. The design of HOME incorporates certain reliability and fail-safe enhancement design features, such as triple modular redundancy, majority logic voting, fail-safe dual circuits, independent status monitors, in-flight self-test, and a built-in preflight exerciser. The HOME design and operation is described with special emphasis on the reliability and fail-safe aspects of the design.
A remote access ecg monitoring system - biomed 2009.
Ogawa, Hidekuni; Yonezawa, Yoshiharu; Maki, Hiromichi; Iwamoto, Junichi; Hahn, Allen W; Caldwell, W Morton
2009-01-01
We have developed a remotely accessible telemedicine system for monitoring a patient's electrocardiogram (ECG). The system consists of an ECG recorder mounted on chest electrodes and a physician's laptop personal computer. This ECG recorder is designed with a variable gain instrumentation amplifier; a low power 8-bit single-chip microcomputer; two 128KB EEPROMs and 2.4 GHz low transmit power mobile telephone. When the physician wants to monitor the patient's ECG, he/she calls directly from the laptop PC to the ECG recorder's phone and the recorder sends the ECG to the computer. The electrode-mounted recorder continuously samples the ECG. Additionally, when the patient feels a heart discomfort, he/she pushes a data transmission switch on the recorder and the recorder sends the recorded ECG waveforms of the two prior minutes, and for two minutes after the switch is pressed. The physician can display and monitor the data on the computer's liquid crystal display.
Ergatis: a web interface and scalable software system for bioinformatics workflows
Orvis, Joshua; Crabtree, Jonathan; Galens, Kevin; Gussman, Aaron; Inman, Jason M.; Lee, Eduardo; Nampally, Sreenath; Riley, David; Sundaram, Jaideep P.; Felix, Victor; Whitty, Brett; Mahurkar, Anup; Wortman, Jennifer; White, Owen; Angiuoli, Samuel V.
2010-01-01
Motivation: The growth of sequence data has been accompanied by an increasing need to analyze data on distributed computer clusters. The use of these systems for routine analysis requires scalable and robust software for data management of large datasets. Software is also needed to simplify data management and make large-scale bioinformatics analysis accessible and reproducible to a wide class of target users. Results: We have developed a workflow management system named Ergatis that enables users to build, execute and monitor pipelines for computational analysis of genomics data. Ergatis contains preconfigured components and template pipelines for a number of common bioinformatics tasks such as prokaryotic genome annotation and genome comparisons. Outputs from many of these components can be loaded into a Chado relational database. Ergatis was designed to be accessible to a broad class of users and provides a user friendly, web-based interface. Ergatis supports high-throughput batch processing on distributed compute clusters and has been used for data management in a number of genome annotation and comparative genomics projects. Availability: Ergatis is an open-source project and is freely available at http://ergatis.sourceforge.net Contact: jorvis@users.sourceforge.net PMID:20413634
Taruscio, Domenica; Mollo, Emanuela; Gainotti, Sabina; Posada de la Paz, Manuel; Bianchi, Fabrizio; Vittozzi, Luciano
2014-01-01
The European Union acknowledges the relevance of registries as key instruments for developing rare disease (RD) clinical research, improving patient care and health service (HS) planning and funded the EPIRARE project to improve standardization and data comparability among patient registries and to support new registries and data collections. A reference list of patient registry-based indicators has been prepared building on the work of previous EU projects and on the platform stakeholders' information needs resulting from the EPIRARE surveys and consultations. The variables necessary to compute these indicators have been analysed for their scope and use and then organized in data domains. The reference indicators span from disease surveillance, to socio-economic burden, HS monitoring, research and product development, policy equity and effectiveness. The variables necessary to compute these reference indicators have been selected and, with the exception of more sophisticated indicators for research and clinical care quality, they can be collected as data elements common (CDE) to all rare diseases. They have been organized in data domains characterized by their contents and main goal and a limited set of mandatory data elements has been defined, which allows case notification independently of the physician or the health service. The definition of a set of CDE for the European platform for RD patient registration is the first step in the promotion of the use of common tools for the collection of comparable data. The proposed organization of the CDE contributes to the completeness of case ascertainment, with the possible involvement of patients and patient associations in the registration process.
Wright, Serena; Hull, Tom; Sivyer, David B.; Pearce, David; Pinnegar, John K.; Sayer, Martin D. J.; Mogg, Andrew O. M.; Azzopardi, Elaine; Gontarek, Steve; Hyder, Kieran
2016-01-01
Monitoring temperature of aquatic waters is of great importance, with modelled, satellite and in-situ data providing invaluable insights into long-term environmental change. However, there is often a lack of depth-resolved temperature measurements. Recreational dive computers routinely record temperature and depth, so could provide an alternate and highly novel source of oceanographic information to fill this data gap. In this study, a citizen science approach was used to obtain over 7,000 scuba diver temperature profiles. The accuracy, offset and lag of temperature records was assessed by comparing dive computers with scientific conductivity-temperature-depth instruments and existing surface temperature data. Our results show that, with processing, dive computers can provide a useful and novel tool with which to augment existing monitoring systems all over the globe, but especially in under-sampled or highly changeable coastal environments. PMID:27445104
Wirth, Troy A.; Pyke, David A.
2007-01-01
Emergency Stabilization and Rehabilitation (ES&R) and Burned Area Emergency Response (BAER) treatments are short-term, high-intensity treatments designed to mitigate the adverse effects of wildfire on public lands. The federal government expends significant resources implementing ES&R and BAER treatments after wildfires; however, recent reviews have found that existing data from monitoring and research are insufficient to evaluate the effects of these activities. The purpose of this report is to: (1) document what monitoring methods are generally used by personnel in the field; (2) describe approaches and methods for post-fire vegetation and soil monitoring documented in agency manuals; (3) determine the common elements of monitoring programs recommended in these manuals; and (4) describe a common monitoring approach to determine the effectiveness of future ES&R and BAER treatments in non-forested regions. Both qualitative and quantitative methods to measure effectiveness of ES&R treatments are used by federal land management agencies. Quantitative methods are used in the field depending on factors such as funding, personnel, and time constraints. There are seven vegetation monitoring manuals produced by the federal government that address monitoring methods for (primarily) vegetation and soil attributes. These methods vary in their objectivity and repeatability. The most repeatable methods are point-intercept, quadrat-based density measurements, gap intercepts, and direct measurement of soil erosion. Additionally, these manuals recommend approaches for designing monitoring programs for the state of ecosystems or the effect of management actions. The elements of a defensible monitoring program applicable to ES&R and BAER projects that most of these manuals have in common are objectives, stratification, control areas, random sampling, data quality, and statistical analysis. The effectiveness of treatments can be determined more accurately if data are gathered using an approach that incorporates these six monitoring program design elements and objectives, as well as repeatable procedures to measure cover, density, gap intercept, and soil erosion within each ecoregion and plant community. Additionally, using a common monitoring program design with comparable methods, consistently documenting results, and creating and maintaining a central database for query and reporting, will ultimately allow a determination of the effectiveness of post-fire rehabilitation activities region-wide.
Zheng, Xiujuan; Wei, Wentao; Huang, Qiu; Song, Shaoli; Wan, Jieqing; Huang, Gang
2017-01-01
Objective and quantitative analysis of longitudinal single photon emission computed tomography (SPECT) images is important for treatment monitoring in brain disorders. Therefore, a computer aided analysis (CAA) method is introduced to extract a change-rate map (CRM) as a parametric image for quantifying the changes of regional cerebral blood flow (rCBF) in longitudinal SPECT brain images. The performance of the CAA-CRM approach in treatment monitoring is evaluated by computer simulations and clinical applications. The results of computer simulations show that the derived CRMs have high similarities with their ground truths when the lesion size is larger than the system spatial resolution and the change rate is higher than 20%. In clinical applications, the CAA-CRM approach is used to assess the treatment of 50 patients with brain ischemia. The results demonstrate that the CAA-CRM approach localizes recovered regions with 93.4% accuracy. Moreover, the quantitative indexes of recovered regions derived from CRM are all significantly different among the groups and highly correlated with the experienced clinical diagnosis. In conclusion, the proposed CAA-CRM approach provides a convenient solution to generate a parametric image and derive the quantitative indexes from the longitudinal SPECT brain images for treatment monitoring.
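As an illustration only (not the authors' CAA implementation, which also involves registration, normalisation and region analysis), a change-rate map can be computed voxel-wise as the relative change between co-registered baseline and follow-up volumes; the background-masking threshold here is an assumption:

```python
import numpy as np

def change_rate_map(baseline, followup, min_intensity=0.05):
    """Voxel-wise relative change between two co-registered SPECT volumes.

    CRM = (followup - baseline) / baseline, with voxels below a small
    intensity threshold masked out to avoid dividing by near-zero background.
    Illustrative sketch of the parametric image, not the published pipeline.
    """
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    crm = np.zeros_like(baseline)
    mask = baseline > min_intensity
    crm[mask] = (followup[mask] - baseline[mask]) / baseline[mask]
    return crm

# Toy 2x2 "volume": one region recovers by 30%, the rest is unchanged.
b = np.array([[1.0, 1.0], [0.0, 1.0]])
f = np.array([[1.3, 1.0], [0.0, 1.0]])
print(change_rate_map(b, f))   # [[0.3 0.] [0. 0.]] -- 30% increase in the recovered region
```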
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maxwell, Don E; Ezell, Matthew A; Becklehimer, Jeff
While sites generally have systems in place to monitor the health of Cray computers themselves, often the cooling systems are ignored until a computer failure requires investigation into the source of the failure. The Liebert XDP units used to cool the Cray XE/XK models as well as the Cray proprietary cooling system used for the Cray XC30 models provide data useful for health monitoring. Unfortunately, this valuable information is often available only to custom solutions not accessible by a center-wide monitoring system or is simply ignored entirely. In this paper, methods and tools used to harvest the monitoring data available are discussed, and the implementation needed to integrate the data into a center-wide monitoring system at the Oak Ridge National Laboratory is provided.
Some computer graphical user interfaces in radiation therapy.
Chow, James C L
2016-03-28
In this review, five graphical user interfaces (GUIs) used in radiation therapy practice and research are introduced. They are: (1) the treatment time calculator, superficial X-ray treatment time calculator (SUPCALC), used in superficial X-ray radiation therapy; (2) the monitor unit calculator, electron monitor unit calculator (EMUC), used in electron radiation therapy; (3) the multileaf collimator machine file creator, sliding window intensity modulated radiotherapy (SWIMRT), used to generate fluence maps for research and quality assurance in intensity modulated radiation therapy; (4) the treatment planning system, DOSCTP, used in the calculation of 3D dose distribution using Monte Carlo simulation; and (5) the monitor unit calculator, photon beam monitor unit calculator (PMUC), used in photon beam radiation therapy. One common issue of these GUIs is that all user-friendly interfaces are linked to complex formulas and algorithms based on various theories, which do not have to be understood and noted by the user. In that case, the user only needs to input the required information with help from graphical elements in order to produce desired results. SUPCALC is a superficial radiation treatment time calculator using the GUI technique to provide a convenient way for the radiation therapist to calculate the treatment time, and keep a record for the skin cancer patient. EMUC is an electron monitor unit calculator for electron radiation therapy. Instead of doing hand calculations according to pre-determined dosimetric tables, the clinical user needs only to input the required drawing of the electron field in a computer graphics file format, prescription dose, and beam parameters to EMUC to calculate the required monitor units for the electron beam treatment. EMUC is based on a semi-experimental sector-integration algorithm. SWIMRT is a multileaf collimator machine file creator to generate a fluence map produced by a medical linear accelerator. This machine file controls the multileaf collimator to deliver intensity modulated beams for a specific fluence map used in quality assurance or research. DOSCTP is a treatment planning system using computed tomography images. Radiation beams (photon or electron) with different energies and field sizes produced by a linear accelerator can be placed in different positions to irradiate the tumour in the patient. DOSCTP is linked to a Monte Carlo simulation engine using the EGSnrc-based code, so that 3D dose distribution can be determined accurately for radiation therapy. Moreover, DOSCTP can be used for treatment planning of patients or small animals. PMUC is a GUI for calculation of the monitor unit based on the prescription dose of the patient in photon beam radiation therapy. The calculation is based on dose corrections for changes in photon beam energy, treatment depth, field size, jaw position, beam axis, treatment distance and beam modifiers. All GUIs mentioned in this review were written either in Microsoft Visual Basic.NET or with a MATLAB GUI development tool called GUIDE. In addition, all GUIs were verified and tested using measurements to ensure their accuracies were up to clinically acceptable levels for implementation.
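As a rough, hedged illustration of the form of monitor-unit arithmetic a calculator such as PMUC automates, the sketch below follows the common clinical pattern MU = prescribed dose / (reference dose per MU x depth-dose and correction factors); the factor names and values are assumptions, not the program's actual code:

```python
def monitor_units(prescribed_dose_cGy, dose_per_mu_ref=1.0, pdd=0.85,
                  output_factor=0.98, wedge_factor=1.0, tray_factor=1.0):
    """Simplified photon-beam monitor unit calculation:

        MU = prescribed dose / (reference dose per MU x PDD x corrections)

    Real calculations also account for jaw settings, SSD/SAD geometry and
    beam modifiers; this only illustrates the overall form of the formula.
    """
    dose_per_mu = (dose_per_mu_ref * pdd * output_factor
                   * wedge_factor * tray_factor)
    return prescribed_dose_cGy / dose_per_mu

# Example: a 200 cGy prescription at a depth where PDD = 0.85
print(round(monitor_units(200.0)))   # -> 240
```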
LabVIEW: a software system for data acquisition, data analysis, and instrument control.
Kalkman, C J
1995-01-01
Computer-based data acquisition systems play an important role in clinical monitoring and in the development of new monitoring tools. LabVIEW (National Instruments, Austin, TX) is a data acquisition and programming environment that allows flexible acquisition and processing of analog and digital data. The main feature that distinguishes LabVIEW from other data acquisition programs is its highly modular graphical programming language, "G," and a large library of mathematical and statistical functions. The advantage of graphical programming is that the code is flexible, reusable, and self-documenting. Subroutines can be saved in a library and reused without modification in other programs. This dramatically reduces development time and enables researchers to develop or modify their own programs. LabVIEW uses a large amount of processing power and computer memory, thus requiring a powerful computer. A large-screen monitor is desirable when developing larger applications. LabVIEW is excellently suited for testing new monitoring paradigms, analysis algorithms, or user interfaces. The typical LabVIEW user is the researcher who wants to develop a new monitoring technique, a set of new (derived) variables by integrating signals from several existing patient monitors, closed-loop control of a physiological variable, or a physiological simulator.
Less Daily Computer Use is Related to Smaller Hippocampal Volumes in Cognitively Intact Elderly.
Silbert, Lisa C; Dodge, Hiroko H; Lahna, David; Promjunyakul, Nutta-On; Austin, Daniel; Mattek, Nora; Erten-Lyons, Deniz; Kaye, Jeffrey A
2016-01-01
Computer use is becoming a common activity in the daily life of older individuals and declines over time in those with mild cognitive impairment (MCI). The relationship between daily computer use (DCU) and imaging markers of neurodegeneration is unknown. The objective of this study was to examine the relationship between average DCU and volumetric markers of neurodegeneration on brain MRI. Cognitively intact volunteers enrolled in the Intelligent Systems for Assessing Aging Change study underwent MRI. Total in-home computer use per day was calculated using mouse movement detection and averaged over a one-month period surrounding the MRI. Spearman's rank order correlation (univariate analysis) and linear regression models (multivariate analysis) examined hippocampal, gray matter (GM), white matter hyperintensity (WMH), and ventricular cerebral spinal fluid (vCSF) volumes in relation to DCU. A voxel-based morphometry analysis identified relationships between regional GM density and DCU. Twenty-seven cognitively intact participants used their computer for 51.3 minutes per day on average. Less DCU was associated with smaller hippocampal volumes (r = 0.48, p = 0.01), but not total GM, WMH, or vCSF volumes. After adjusting for age, education, and gender, less DCU remained associated with smaller hippocampal volume (p = 0.01). Voxel-wise analysis demonstrated that less daily computer use was associated with decreased GM density in the bilateral hippocampi and temporal lobes. Less daily computer use is associated with smaller brain volume in regions that are integral to memory function and known to be involved early with Alzheimer's pathology and conversion to dementia. Continuous monitoring of daily computer use may detect signs of preclinical neurodegeneration in older individuals at risk for dementia.
Dynamic Analyses of Result Quality in Energy-Aware Approximate Programs
NASA Astrophysics Data System (ADS)
RIngenburg, Michael F.
Energy efficiency is a key concern in the design of modern computer systems. One promising approach to energy-efficient computation, approximate computing, trades off output precision for energy efficiency. However, this tradeoff can have unexpected effects on computation quality. This thesis presents dynamic analysis tools to study, debug, and monitor the quality and energy efficiency of approximate computations. We propose three styles of tools: prototyping tools that allow developers to experiment with approximation in their applications, online tools that instrument code to determine the key sources of error, and online tools that monitor the quality of deployed applications in real time. Our prototyping tool is based on an extension to the functional language OCaml. We add approximation constructs to the language, an approximation simulator to the runtime, and profiling and auto-tuning tools for studying and experimenting with energy-quality tradeoffs. We also present two online debugging tools and three online monitoring tools. The first online tool identifies correlations between output quality and the total number of executions of, and errors in, individual approximate operations. The second tracks the number of approximate operations that flow into a particular value. Our online tools comprise three low-cost approaches to dynamic quality monitoring. They are designed to monitor quality in deployed applications without spending more energy than is saved by approximation. Online monitors can be used to perform real time adjustments to energy usage in order to meet specific quality goals. We present prototype implementations of all of these tools and describe their usage with several applications. Our prototyping, profiling, and autotuning tools allow us to experiment with approximation strategies and identify new strategies, our online tools succeed in providing new insights into the effects of approximation on output quality, and our monitors succeed in controlling output quality while still maintaining significant energy efficiency gains.
Bowman, Caitlin R; Dennis, Nancy A
2016-08-01
Recollection rejection or "recall-to-reject" is a mechanism that has been posited to help maintain accurate memory by preventing the occurrence of false memories. Recollection rejection occurs when the presentation of a new item during recognition triggers recall of an associated target, a mismatch in features between the new and old items is registered, and the lure is correctly rejected. Critically, this characterization of recollection rejection involves a recall signal that is conceptually similar to recollection as elicited by a target. However, previous neuroimaging studies have not evaluated the extent to which recollection rejection and target recollection rely on a common neural signal but have instead focused on recollection rejection as a postretrieval monitoring process. This study utilized a false memory paradigm in conjunction with an adapted remember-know-new response paradigm that separated "new" responses based on recollection rejection from those that were based on a lack of familiarity with the item. This procedure allowed for parallel recollection rejection and target recollection contrasts to be computed. Results revealed that, contrary to predictions from theoretical and behavioral literature, there was virtually no evidence of a common retrieval mechanism supporting recollection rejection and target recollection. Instead of the typical target recollection network, recollection rejection recruited a network of lateral prefrontal and bilateral parietal regions that is consistent with the retrieval monitoring network identified in previous neuroimaging studies of recollection rejection. However, a functional connectivity analysis revealed a component of the frontoparietal rejection network that showed increased coupling with the right hippocampus during recollection rejection responses. As such, we demonstrate a possible link between PFC monitoring network and basic retrieval mechanisms within the hippocampus that was not revealed with univariate analyses alone.
A Behaviour Monitoring System (BMS) for Ambient Assisted Living
Eisa, Samih
2017-01-01
Unusual changes in the regular daily mobility routine of an elderly person at home can be an indicator or early symptom of developing health problems. Sensor technology can be utilised to complement traditional healthcare systems and gain a more detailed view of the daily mobility of a person at home when performing everyday tasks. We hypothesise that data collected from low-cost sensors, such as presence and occupancy sensors, can be analysed to provide insights into the daily mobility habits of the elderly living alone at home and to detect routine changes. We validate this hypothesis by designing a system that automatically learns the daily room-to-room transitions and the permanence habits in each room at each time of the day, and generates alarm notifications when deviations are detected. We present an algorithm to process the sensors' data streams and compute sensor-driven features that describe the daily mobility routine of the elderly, as part of the developed Behaviour Monitoring System (BMS). We achieve low detection delay, with a confirmation time that is long enough to reliably confirm the detection of a set of common abnormal situations. We illustrate and evaluate BMS both with synthetic data, generated by a data generator designed to mimic different users' mobility profiles at home, and with a real-life dataset collected in prior research work. Results indicate that BMS detects several mobility changes that can be symptoms of common health problems. The proposed system is a useful approach for learning mobility habits in the home environment, with the potential to detect behaviour changes that occur due to health problems, thereby motivating progress toward behaviour monitoring and elder care. PMID:28837105
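The following Python sketch illustrates, under assumed event and threshold conventions, the general pattern described above: learn room-to-room transition counts per hour of day from presence-sensor events, then score a recent event stream by how many of its transitions were never seen in the learned routine. It is not the BMS algorithm itself.

    from collections import Counter, defaultdict

    # Assumed sensor event format: (hour_of_day, room), in time order
    history = [(7, "bedroom"), (7, "bathroom"), (8, "kitchen"), (9, "living_room"),
               (12, "kitchen"), (13, "living_room"), (19, "kitchen"), (22, "bedroom")]

    def learn_routine(events):
        # Count room-to-room transitions, keyed by the hour at which the transition ends
        routine = defaultdict(Counter)
        for (_, r1), (h2, r2) in zip(events, events[1:]):
            routine[h2][(r1, r2)] += 1
        return routine

    def deviation_score(routine, recent):
        # Fraction of recent transitions that never occurred in the learned routine
        pairs = list(zip(recent, recent[1:]))
        unseen = sum(1 for (_, r1), (h2, r2) in pairs if routine[h2][(r1, r2)] == 0)
        return unseen / max(len(pairs), 1)

    routine = learn_routine(history)
    today = [(7, "bedroom"), (7, "kitchen"), (8, "bathroom"), (9, "bathroom")]
    if deviation_score(routine, today) > 0.5:   # illustrative alarm threshold
        print("Alarm: today's mobility pattern deviates from the learned routine")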
Psychotherapy Is Chaotic—(Not Only) in a Computational World
Schiepek, Günter K.; Viol, Kathrin; Aichhorn, Wolfgang; Hütt, Marc-Thorsten; Sungler, Katharina; Pincus, David; Schöller, Helmut J.
2017-01-01
Objective: The aim of this article is to outline the role of chaotic dynamics in psychotherapy. Besides some empirical findings of chaos at different time scales, the focus is on theoretical modeling of change processes explaining and simulating chaotic dynamics. It will be illustrated how some common factors of psychotherapeutic change and psychological hypotheses on motivation, emotion regulation, and information processing of the client's functioning can be integrated into a comprehensive nonlinear model of human change processes. Methods: The model combines 5 variables (intensity of emotions, problem intensity, motivation to change, insight and new perspectives, therapeutic success) and 4 parameters into a set of 5 coupled nonlinear difference equations. The results of these simulations are presented as time series, as phase space embeddings of these time series (i.e., attractors), and as bifurcation diagrams. Results: The model creates chaotic dynamics, phase transition-like phenomena, bi- or multi-stability, and sensitivity of the dynamic patterns to parameter drift. These features are predicted by chaos theory and by Synergetics and correspond to empirical findings. The spectrum of these behaviors illustrates the complexity of psychotherapeutic processes. Conclusion: The model contributes to the development of an integrative conceptualization of psychotherapy. It is consistent with the state of scientific knowledge of common factors, as well as other psychological topics, such as motivation, emotion regulation, and cognitive processing. The role of chaos theory is underpinned, not only in the world of computer simulations, but also in practice. In practice, chaos demands technologies capable of real-time monitoring and reporting on the nonlinear features of the ongoing process (e.g., its stability or instability). Based on this monitoring, a client-centered, continuous, and cooperative process of feedback and control becomes possible. By contrast, restricted predictability and spontaneous changes challenge the usefulness of prescriptive treatment manuals or other predefined programs of psychotherapy. PMID:28484401
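Since the abstract does not reproduce the model's equations, the Python sketch below only illustrates the general construction, five coupled variables updated by nonlinear difference equations and iterated into a time series, using placeholder coupling terms and parameter values rather than the authors' model.

    import math

    def step(state, a=0.8, b=0.5, c=0.3, d=0.9):
        # One iteration of an illustrative 5-variable coupled nonlinear map (placeholder couplings)
        E, P, M, I, S = state            # emotions, problem, motivation, insight, success
        logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
        E_next = a * E * (1 - E) + b * P * (1 - S)
        P_next = P * (1 - c * I) + 0.1 * E * (1 - M)
        M_next = logistic(d * (S - P))
        I_next = (1 - b) * I + c * M * E * (1 - I)
        S_next = (1 - c) * S + c * I * (1 - P)
        return [min(max(v, 0.0), 1.0) for v in (E_next, P_next, M_next, I_next, S_next)]

    state = [0.5, 0.8, 0.3, 0.1, 0.1]
    series = [state]
    for _ in range(200):
        state = step(state)
        series.append(state)
    print("final state:", [round(v, 3) for v in state])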
Monitor Tone Generates Stress in Computer and VDT Operators: A Preliminary Study.
ERIC Educational Resources Information Center
Dow, Caroline; Covert, Douglas C.
A near-ultrasonic pure tone of 15,570 Hertz generated by flyback transformers in computer and video display terminal (VDT) monitors may cause severe non-specific irritation or stress disease in operators. Women hear higher frequency sounds than men and are twice as sensitive to "too loud" noise. Pure tones at high frequencies are more…
ERIC Educational Resources Information Center
Van Norman, Ethan R.; Nelson, Peter M.; Parker, David C.
2017-01-01
Computer adaptive tests (CATs) hold promise to monitor student progress within multitiered systems of support. However, the relationship between how long and how often data are collected and the technical adequacy of growth estimates from CATs has not been explored. Given CAT administration times, it is important to identify optimal data…
Biosensor Technologies for Augmented Brain-Computer Interfaces in the Next Decades
2012-05-13
Keywords: augmented brain-computer interface (ABCI); biosensor; cognitive-state monitoring; electroencephalogram (EEG); human brain imaging. Manuscript received November 28, 2011; accepted December 20… …functional magnetic resonance imaging (fMRI) [1], positron emission tomography (PET) [2], electroencephalograms (EEGs) and optical brain imaging techniques (i.e.…
ERIC Educational Resources Information Center
Balajthy, Ernest
1988-01-01
Investigates college students' ability to monitor learner-controlled vocabulary instruction when performed in traditional workbook-like tasks and in two different computer-based formats: video game and text game exercises. Suggests that developmental reading students are unable to monitor their own vocabulary development accurately. (MM)
A Computational Pipeline to Improve Clinical Alarms Using a Parallel Computing Infrastructure
ERIC Educational Resources Information Center
Nguyen, Andrew V.
2013-01-01
Physicians, nurses, and other clinical staff rely on alarms from various bedside monitors and sensors to alert them when there is a change in the patient's clinical status, typically when urgent intervention is necessary. These alarms are usually embedded directly within the sensor or monitor and lack the context of the patient's medical history and…
Monitoring SLAC High Performance UNIX Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC
2005-12-15
Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
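As a rough Python sketch of such a script-driven approach, assuming Ganglia's gmond daemon is publishing its XML cluster dump on its default TCP port and that a MySQL table named samples already exists: the script pulls the XML, extracts host/metric pairs, and inserts them. Host name, credentials, and the table layout are hypothetical, and mysql.connector comes from the mysql-connector-python package.

    import socket
    import xml.etree.ElementTree as ET
    import mysql.connector  # provided by the mysql-connector-python package

    def fetch_gmond_xml(host="localhost", port=8649):
        # gmond dumps its cluster state as XML when a client connects to its TCP port
        data = b""
        with socket.create_connection((host, port), timeout=10) as s:
            while chunk := s.recv(4096):
                data += chunk
        return data.decode("utf-8", errors="replace")

    def store_metrics(xml_text):
        conn = mysql.connector.connect(host="localhost", user="ganglia",
                                       password="secret", database="metrics")  # hypothetical credentials
        cur = conn.cursor()
        # HOST/METRIC elements with NAME/VAL attributes follow gmond's XML dump; adjust if yours differs
        for host in ET.fromstring(xml_text).iter("HOST"):
            for metric in host.iter("METRIC"):
                cur.execute("INSERT INTO samples (host, name, value) VALUES (%s, %s, %s)",
                            (host.get("NAME"), metric.get("NAME"), metric.get("VAL")))
        conn.commit()
        conn.close()

    store_metrics(fetch_gmond_xml())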
ERIC Educational Resources Information Center
Harris, Julian; Maurer, Hermann
An investigation into high-level event monitoring within the scope of a well-known multimedia application, HyperCard--a program on the Macintosh computer, is carried out. A monitoring system is defined as a system which automatically monitors usage of some activity and gathers statistics based on what it has observed. Monitor systems can give the…
Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique
2015-05-01
Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
Imaging-related medications: a class overview
2007-01-01
Imaging-related medications (contrast agents) are commonly utilized to improve visualization of radiographic, computed tomography (CT), and magnetic resonance (MR) images. While traditional medications are used specifically for their pharmacological actions, the ideal imaging agent provides enhanced contrast with little biological interaction. The radiopaque agents, barium sulfate and iodinated contrast agents, confer “contrast” to x-ray films by their physical ability to directly absorb x-rays. Gadolinium-based MR agents enhance visualization of tissues when exposed to a magnetic field. Ferrous-ferric oxide–based paramagnetic agents provide negative contrast for MR liver studies. This article provides an overview of clinically relevant information for the imaging-related medications commonly in use. It reviews the safety improvements in new generations of drugs; risk factors and precautions for the reduction of severe adverse reactions (i.e., extravasation, contrast-induced nephropathy, metformin-induced lactic acidosis, and nephrogenic fibrosing dermopathy/nephrogenic systemic fibrosis); and the significance of diligent patient screening before contrast exposure and appropriate monitoring after exposure. PMID:17948119
ERIC Educational Resources Information Center
Powell, David
1984-01-01
Provides guidelines for selecting a monitor to suit specific applications, explains the process by which graphics images are produced on a CRT monitor, and describes four types of flat-panel displays being used in the newest lap-sized portable computers. A comparison chart provides prices and specifications for over 80 monitors. (MBR)
40 CFR 75.82 - Monitoring of Hg mass emissions and heat input at common and multiple stacks.
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 75.82 Monitoring of Hg mass emissions and heat input at common and multiple stacks. (a) Unit… systems and perform the Hg emission testing described under § 75.81(b). If reporting of the unit heat…
NASA Astrophysics Data System (ADS)
Park, Chan-Hee; Lee, Cholwoo
2016-04-01
The Raspberry Pi series is a family of low-cost computers, smaller than a credit card, on which various operating systems such as Linux and, recently, even Windows 10 can run. Thanks to mass production and rapid technological development, the price of the various sensors that can be attached to a Raspberry Pi has been dropping quickly. The device can therefore be an economical choice as a small portable computer for monitoring temporal hydrogeological data in the field. In this study, we present a Raspberry Pi system that measures the flow rate and temperature of groundwater at field sites, stores the readings in a MySQL database, and produces interactive figures and tables, such as Google Charts online or Bokeh plots offline, for further monitoring and analysis. Since all the data can be monitored over the Internet, any computer or mobile device can serve as a convenient monitoring tool. The measured data are further integrated with OpenGeoSys, a hydrogeological model that has also been ported to the Raspberry Pi series. This enables on-site hydrogeological modelling fed by temporal sensor data to meet various needs.
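The Python sketch below shows the general shape of such a logger: read a pulse-output flow sensor and a temperature probe, convert to engineering units, append to a database table, and render a simple plot. The calibration constant is invented, the sensor read is a placeholder for the actual GPIO/1-Wire code on the Pi, and sqlite3 stands in for the MySQL store described above.

    import sqlite3, time, random
    from datetime import datetime
    from bokeh.plotting import figure, output_file, save

    # sqlite3 is used here as a lightweight stand-in for the MySQL store described above
    conn = sqlite3.connect("hydro.db")
    conn.execute("CREATE TABLE IF NOT EXISTS readings (ts TEXT, flow_lpm REAL, temp_c REAL)")

    PULSES_PER_LITRE = 450.0   # illustrative calibration constant for a pulse-output flow sensor

    def read_sensors():
        # Placeholder for the actual GPIO pulse counting / 1-Wire temperature read on the Pi
        pulses_in_minute = random.randint(300, 900)
        flow_lpm = pulses_in_minute / PULSES_PER_LITRE
        temp_c = 11.0 + random.random()
        return flow_lpm, temp_c

    for _ in range(5):   # a field deployment would loop continuously
        flow, temp = read_sensors()
        conn.execute("INSERT INTO readings VALUES (?, ?, ?)", (datetime.now().isoformat(), flow, temp))
        conn.commit()
        time.sleep(1)

    rows = conn.execute("SELECT ts, flow_lpm FROM readings ORDER BY ts").fetchall()
    p = figure(title="Groundwater flow rate", x_axis_label="sample", y_axis_label="L/min")
    p.line(list(range(len(rows))), [r[1] for r in rows])
    output_file("flow.html")
    save(p)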
NASA Astrophysics Data System (ADS)
Kim, Hyun-Sok; Hyun, Min-Sung; Ju, Jae-Wuk; Kim, Young-Sik; Lambregts, Cees; van Rhee, Peter; Kim, Johan; McNamara, Elliott; Tel, Wim; Böcker, Paul; Oh, Nang-Lyeom; Lee, Jun-Hyung
2018-03-01
Computational metrology has been proposed as the way forward to resolve the need for increased metrology density, resulting from extending correction capabilities, without adding actual metrology budget. By exploiting TWINSCAN based metrology information, dense overlay fingerprints for every wafer can be computed. This extended metrology dataset enables new use cases, such as monitoring and control based on fingerprints for every wafer of the lot. This paper gives a detailed description, discusses the accuracy of the fingerprints computed, and will show results obtained in a DRAM HVM manufacturing environment. Also an outlook for improvements and extensions will be shared.
Ubiquitous computing in sports: A review and analysis.
Baca, Arnold; Dabnichki, Peter; Heller, Mario; Kornfeind, Philipp
2009-10-01
Ubiquitous (pervasive) computing is a term for the synergetic use of sensing, communication and computing. Pervasive use of computing has seen a rapid increase in the current decade. This development has propagated into applied sport science and everyday life. The work presents a survey of recent developments in sport and leisure with emphasis on technology and computational techniques. A detailed analysis of new technological developments is performed. Sensors for position and motion detection, as well as sensors for equipment and physiological monitoring, are discussed. Aspects of novel trends in communication technologies and data processing are outlined. Computational advancements have started a new trend - the development of smart and intelligent systems for a wide range of applications - from model-based posture recognition to context awareness algorithms for nutrition monitoring. Examples particular to coaching and training are discussed. Selected tools for monitoring rule compliance and automatic decision-making are outlined. Finally, applications in leisure and entertainment are presented, from systems supporting physical activity to systems providing motivation. It is concluded that the emphasis in future will shift from technologies to intelligent systems that allow for enhanced social interaction, as efforts need to be made to improve user-friendliness and standardisation of measurement and transmission protocols.
NASA Astrophysics Data System (ADS)
Aneri, Parikh; Sumathy, S.
2017-11-01
Cloud computing provides services over the Internet, delivering application resources and data to users on demand. It is based on a consumer-provider model: the cloud provider offers resources that consumers can access in order to build their applications according to their needs. A cloud data centre is a large pool of shared resources for cloud users to access. Virtualization is at the heart of the cloud computing model; it provides virtual machines configured for specific applications, and those applications are free to choose their own configuration. On one hand there is a huge number of resources, and on the other hand the data centre has to serve a huge number of requests effectively. Resource allocation and scheduling policies therefore play a very important role in allocating and managing resources in this cloud computing model. This paper proposes a load balancing policy based on the Hungarian algorithm. The Hungarian algorithm provides a dynamic load balancing policy together with a monitor component, which helps to increase cloud resource utilization by observing the state of the assignment and altering it using artificial intelligence techniques. CloudSim, an extensible toolkit that simulates a cloud computing environment, is used in this proposal.
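A minimal Python sketch of the assignment step at the core of such a policy: build a cost matrix of estimated completion times and solve it with the Hungarian method via scipy's linear_sum_assignment. The cost model and numbers are illustrative assumptions, not the paper's exact formulation; a monitor component would periodically recompute the assignment as load measurements change.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j]: estimated completion time of request i on virtual machine j (illustrative)
    request_length_mi = np.array([4000, 12000, 2500, 9000])   # million instructions
    vm_mips = np.array([1000, 2500, 500, 2000])                # VM speeds
    cost = request_length_mi[:, None] / vm_mips[None, :]

    rows, cols = linear_sum_assignment(cost)                   # Hungarian / Kuhn-Munkres solution
    for req, vm in zip(rows, cols):
        print(f"request {req} -> VM {vm} (est. {cost[req, vm]:.1f} s)")
    print("total estimated time:", cost[rows, cols].sum())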
Job monitoring on DIRAC for Belle II distributed computing
NASA Astrophysics Data System (ADS)
Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo
2015-12-01
We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss the future development for automating log analysis, notification of issues, and disabling problematic sites.
Real-Time Monitoring of Scada Based Control System for Filling Process
NASA Astrophysics Data System (ADS)
Soe, Aung Kyaw; Myint, Aung Naing; Latt, Maung Maung; Theingi
2008-10-01
This paper presents a design for real-time monitoring of a filling system using Supervisory Control and Data Acquisition (SCADA). The monitoring of the production process is described in real time using Visual Basic .NET programming under Visual Studio 2005, without SCADA software. The software integrators are programmed to obtain the information required for the configuration screens. The simulated components are displayed on the computer screen, with a parallel port linking the computers and the filling devices. Programs for real-time simulation of the filling process, drawn from the pure drinking water industry, are provided.
Method and system for redundancy management of distributed and recoverable digital control system
NASA Technical Reports Server (NTRS)
Stange, Kent (Inventor); Hess, Richard (Inventor); Kelley, Gerald B (Inventor); Rogers, Randy (Inventor)
2012-01-01
A method and system for redundancy management is provided for a distributed and recoverable digital control system. The method uses unique redundancy management techniques to achieve recovery and restoration of redundant elements to full operation in an asynchronous environment. The system includes a first computing unit comprising a pair of redundant computational lanes for generating redundant control commands. One or more internal monitors detect data errors in the control commands, and provide a recovery trigger to the first computing unit. A second redundant computing unit provides the same features as the first computing unit. A first actuator control unit is configured to provide blending and monitoring of the control commands from the first and second computing units, and to provide a recovery trigger to each of the first and second computing units. A second actuator control unit provides the same features as the first actuator control unit.
Computer mouse movement patterns: A potential marker of mild cognitive impairment.
Seelye, Adriana; Hagler, Stuart; Mattek, Nora; Howieson, Diane B; Wild, Katherine; Dodge, Hiroko H; Kaye, Jeffrey A
2015-12-01
Subtle changes in cognitively demanding activities occur in MCI but are difficult to assess with conventional methods. In an exploratory study, we examined whether patterns of computer mouse movements obtained from routine home computer use discriminated between older adults with and without MCI. Participants were 42 cognitively intact and 20 older adults with MCI enrolled in a longitudinal study of in-home monitoring technologies. Mouse pointer movement variables were computed during one week of routine home computer use using algorithms that identified and characterized mouse movements within each computer use session. MCI was associated with making significantly fewer total mouse moves (p < .01), and making mouse movements that were more variable, less efficient, and with longer pauses between movements (p < .05). Mouse movement measures were significantly associated with several cognitive domains (ps < .01-.05). Remotely monitored computer mouse movement patterns are a potential early marker of real-world cognitive changes in MCI.
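Session-level mouse features of the kind mentioned above can be computed from time-stamped pointer samples; the Python sketch below derives total moves, a path-efficiency ratio, and pause counts from a hypothetical sample stream, with the pause threshold chosen arbitrarily for illustration.

    import math

    # Hypothetical mouse-pointer samples within one computer-use session: (t_seconds, x, y)
    samples = [(0.00, 100, 100), (0.05, 130, 120), (0.10, 180, 150),
               (1.60, 182, 151), (1.65, 240, 200), (1.70, 300, 260)]

    def session_features(samples, pause_threshold_s=0.5):
        # Total moves, path efficiency (straight-line / travelled distance), and pause count
        dists, pauses = [], 0
        for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
            dists.append(math.hypot(x1 - x0, y1 - y0))
            if t1 - t0 > pause_threshold_s:
                pauses += 1
        straight = math.hypot(samples[-1][1] - samples[0][1], samples[-1][2] - samples[0][2])
        travelled = sum(dists)
        return {"moves": len(dists),
                "efficiency": straight / travelled if travelled else 0.0,
                "pauses": pauses}

    print(session_features(samples))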
Key Lessons in Building "Data Commons": The Open Science Data Cloud Ecosystem
NASA Astrophysics Data System (ADS)
Patterson, M.; Grossman, R.; Heath, A.; Murphy, M.; Wells, W.
2015-12-01
Cloud computing technology has created a shift around data and data analysis by allowing researchers to push computation to data as opposed to having to pull data to an individual researcher's computer. Subsequently, cloud-based resources can provide unique opportunities to capture computing environments used both to access raw data in its original form and also to create analysis products which may be the source of data for tables and figures presented in research publications. Since 2008, the Open Cloud Consortium (OCC) has operated the Open Science Data Cloud (OSDC), which provides scientific researchers with computational resources for storing, sharing, and analyzing large (terabyte and petabyte-scale) scientific datasets. OSDC has provided compute and storage services to over 750 researchers in a wide variety of data intensive disciplines. Recently, internal users have logged about 2 million core hours each month. The OSDC also serves the research community by colocating these resources with access to nearly a petabyte of public scientific datasets in a variety of fields also accessible for download externally by the public. In our experience operating these resources, researchers are well served by "data commons," meaning cyberinfrastructure that colocates data archives, computing, and storage infrastructure and supports essential tools and services for working with scientific data. In addition to the OSDC public data commons, the OCC operates a data commons in collaboration with NASA and is developing a data commons for NOAA datasets. As cloud-based infrastructures for distributing and computing over data become more pervasive, we ask, "What does it mean to publish data in a data commons?" Here we present the OSDC perspective and discuss several services that are key in architecting data commons, including digital identifier services.
Drought: A comprehensive R package for drought monitoring, prediction and analysis
NASA Astrophysics Data System (ADS)
Hao, Zengchao; Hao, Fanghua; Singh, Vijay P.; Cheng, Hongguang
2015-04-01
Drought may pose serious challenges to human societies and ecosystems. Due to its complicated causes and wide-ranging impacts, a universally accepted definition of drought does not exist. Drought indicators are commonly used to characterize drought properties such as duration or severity. Various drought indicators have been developed in the past few decades to monitor particular aspects of drought conditions, along with multivariate drought indices that characterize drought from multiple sources or hydro-climatic variables. Reliable drought prediction with suitable drought indicators is critical to drought preparedness plans that aim to reduce potential drought impacts. In addition, drought analysis that quantifies the risk of drought properties provides useful information for operational drought management. Drought monitoring, prediction and risk analysis are thus important components of drought modeling and assessment. In this study, a comprehensive R package, "drought", is developed to aid drought monitoring, prediction and risk analysis (available from R-Forge and CRAN soon). The drought monitoring component of the package computes a suite of univariate and multivariate drought indices that integrate drought information from various sources such as precipitation, temperature, soil moisture, and runoff. The drought prediction/forecasting component provides statistical drought predictions to enhance drought early warning for decision making. Analysis of drought properties such as duration and severity is also provided in the package for drought risk assessment. Based on this package, a drought monitoring and prediction/forecasting system is under development as a decision support tool. The package will be provided freely to the public to aid drought modeling and assessment for researchers and practitioners.
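The package's own indices are not reproduced here, but the common recipe behind a standardized drought index such as the SPI can be sketched in a few lines of Python: fit a distribution to an accumulated precipitation series and map the fitted probabilities to standard-normal quantiles. The synthetic series and the gamma-distribution choice are illustrative assumptions.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    precip_3month = rng.gamma(shape=2.0, scale=40.0, size=360)   # illustrative 30-year monthly series

    # Fit a gamma distribution to the accumulated precipitation (location fixed at 0)
    shape, loc, scale = stats.gamma.fit(precip_3month, floc=0)
    cdf = stats.gamma.cdf(precip_3month, shape, loc=loc, scale=scale)

    # Transform the fitted probabilities to standard-normal quantiles: the standardized index
    spi_like = stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))
    print("driest standardized value:", spi_like.min().round(2))
    print("months under moderate drought (index < -1):", int((spi_like < -1).sum()))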
Son, Joohyung; Bae, Miju; Chung, Sung Woon; Lee, Chung Won; Huh, Up; Song, Seunghwan
2017-12-01
The inferior vena cava filter (IVCF) is very effective for preventing pulmonary embolism in patients who cannot undergo anticoagulation therapy. However, if a filter is placed in the body permanently, it may lead to other complications. A retrospective study was performed of 159 patients who underwent retrievable Cook Celect IVCF implantation between January 2007 and April 2015 at a single center. Baseline characteristics, indications, and complications caused by the filter were investigated. The most common underlying disease of patients receiving the filter was cancer (24.3%). Venous thrombolysis or thrombectomy was the most common indication for IVCF insertion in this study (47.2%). The most common complication was inferior vena cava penetration, the risk of which increased the longer the filter remained in the body (p=0.032, Exp(B)=1.004). If the patient is able to retry anticoagulation therapy and the filter is no longer needed, the filter should be removed, even if a long time has elapsed since implantation. If the filter cannot be removed, it is recommended that follow-up computed tomography be performed regularly to monitor the progress of venous thromboembolisms as well as any filter-related complications.
GPS Monitor Station Upgrade Program at the Naval Research Laboratory
NASA Technical Reports Server (NTRS)
Galysh, Ivan J.; Craig, Dwin M.
1996-01-01
One of the measurements made by the Global Positioning System (GPS) monitor stations is the continuous pseudo-range of all passing GPS satellites. The pseudo-range contains GPS and monitor station clock errors as well as GPS satellite navigation errors. Currently the time at a GPS monitor station is obtained from the GPS constellation and has an inherent inaccuracy as a result. Improved timing accuracy at the GPS monitoring stations will improve GPS performance. The US Naval Research Laboratory (NRL) is developing hardware and software for the GPS monitor station upgrade program to improve the monitor station clock accuracy. This upgrade will provide a method, independent of the GPS satellite constellation, of measuring and correcting monitor station time against US Naval Observatory (USNO) time. The hardware consists of a high-performance atomic cesium frequency standard (CFS) and a computer which is used to ensemble the CFS with the two CFSs currently located at the monitor station by use of a dual-mixer system. The dual-mixer system achieves phase measurements between the high-performance CFS and the existing monitor station CFSs to within 400 femtoseconds. Time transfer between USNO and a given monitor station is achieved via a two-way satellite time transfer modem. The computer at the monitor station disciplines the CFS based on a comparison of one pulse per second sent from the master site at USNO. The monitor station computer is also used to perform housekeeping functions, as well as recording the health status of all three CFSs. This information is sent to USNO through the time transfer modem. Laboratory time synchronization results in the sub-nanosecond range have been observed, along with the ability to maintain the monitor station CFS frequency to within 3.0 x 10^-14 of the master site at USNO.
Exposure to electromagnetic fields from laptop use of "laptop" computers.
Bellieni, C V; Pinto, I; Bogi, A; Zoppetti, N; Andreuccetti, D; Buonocore, G
2012-01-01
Portable computers are often used in tight contact with the body and are therefore called "laptops." The authors measured the electromagnetic fields (EMFs) that laptop computers produce and estimated the currents they induce in the body, to assess the safety of laptop computers. The authors evaluated 5 commonly used laptops of different brands. They measured the EMF exposure produced and, using validated computerized models, exploited the data from one of the laptop computers (LTCs) to estimate the magnetic flux exposure of the user and of the fetus in the womb, when the laptop is used in close contact with the woman's womb. In the LTCs analyzed, EMF values (range 1.8-6 μT) are within International Commission on Non-Ionizing Radiation Protection (ICNIRP) guidelines, but are considerably higher than the values recommended by 2 recent guidelines for computer monitor magnetic field emissions, MPR II (Swedish Board for Technical Accreditation) and TCO (Swedish Confederation of Professional Employees), and those considered risky for tumor development. When close to the body, the laptop induces currents in the adult's body and in the fetus (in pregnant women) that are within 34.2% to 49.8% of ICNIRP recommendations, but not negligible. In contrast, the power supply induces strong intracorporal electric current densities in the fetus and in the adult subject, which are respectively 182-263% and 71-483% higher than the ICNIRP 98 basic restriction recommended to prevent adverse health effects. The lap is paradoxically an improper site for the use of an LTC, which consequently should be renamed so as not to induce customers towards improper use.
2014-06-01
…in large-scale datasets such as might be obtained by monitoring a corporate network or social network. Identifying guilty actors, rather than payload-carrying objects, is entirely novel in steganalysis… implementation using Compute Unified Device Architecture (CUDA) on NVIDIA graphics cards. The key to good performance is to combine computations so that…
Item-Specific Adaptation and the Conflict-Monitoring Hypothesis: A Computational Model
ERIC Educational Resources Information Center
Blais, Chris; Robidoux, Serje; Risko, Evan F.; Besner, Derek
2007-01-01
Comments on articles by Botvinick et al. and Jacob et al. M. M. Botvinick, T. S. Braver, D. M. Barch, C. S. Carter, and J. D. Cohen (2001) implemented their conflict-monitoring hypothesis of cognitive control in a series of computational models. The authors of the current article first demonstrate that M. M. Botvinick et al.'s (2001)…
ERIC Educational Resources Information Center
Forster, Natalie; Souvignier, Elmar
2011-01-01
The purpose of this study was to examine the technical adequacy of a computer-based assessment instrument which is based on hierarchical models of text comprehension for monitoring student reading progress following the Curriculum-Based Measurement (CBM) approach. At intervals of two weeks, 120 third-grade students finished eight CBM tests. To…
Exploiting Analytics Techniques in CMS Computing Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonacorsi, D.; Kuznetsov, V.; Magini, N.
The CMS experiment has collected an enormous volume of metadata about its computing operations in its monitoring systems, describing its experience in operating all of the CMS workflows on all of the Worldwide LHC Computing Grid Tiers. Data mining efforts on all this information have rarely been undertaken, but are of crucial importance for a better understanding of how CMS ran its operations successfully, and for reaching an adequate and adaptive model of CMS operations that allows detailed optimizations and eventually a prediction of system behaviour. These data are now streamed into the CERN Hadoop data cluster for further analysis. Specific sets of information (e.g. data on how many replicas of datasets CMS wrote to disk at WLCG Tiers, or on which datasets were primarily requested for analysis) were collected on Hadoop and processed with MapReduce applications profiting from the parallelization on the Hadoop cluster. We present the implementation of new monitoring applications on Hadoop and discuss the new possibilities in CMS computing monitoring introduced by the ability to quickly process big data sets from multiple sources, looking forward to a predictive modelling of the system.
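As a toy illustration of the kind of processing mentioned (counting how many replicas of each dataset were written at WLCG Tiers), the following Python snippet expresses the job as plain map and reduce functions; the record format is hypothetical, and the real applications run as MapReduce jobs over the monitoring archives on the CERN Hadoop cluster.

    from collections import defaultdict
    from itertools import chain

    # Hypothetical transfer-monitoring records: (dataset, destination_tier)
    records = [("/Dataset/A", "T1_IT_CNAF"), ("/Dataset/A", "T2_US_MIT"),
               ("/Dataset/B", "T1_DE_KIT"), ("/Dataset/A", "T2_DE_DESY"),
               ("/Dataset/B", "T2_US_MIT")]

    def map_phase(record):
        dataset, tier = record
        yield (dataset, 1)                 # emit one count per replica written

    def reduce_phase(key, values):
        return key, sum(values)            # total replicas per dataset

    grouped = defaultdict(list)
    for k, v in chain.from_iterable(map_phase(r) for r in records):
        grouped[k].append(v)
    for dataset, replicas in (reduce_phase(k, vs) for k, vs in grouped.items()):
        print(dataset, "replicas:", replicas)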
Is Technology-Mediated Parental Monitoring Related to Adolescent Substance Use?
Rudi, Jessie; Dworkin, Jodi
2018-01-03
Prevention researchers have identified parental monitoring that leads to parental knowledge as a protective factor against adolescent substance use. In today's digital society, parental monitoring can occur using technology-mediated communication methods, such as text messaging, email, and social networking sites. The current study aimed to identify patterns, or clusters, of in-person and technology-mediated monitoring behaviors, and examine differences between the patterns (clusters) in adolescent substance use. Cross-sectional survey data were collected from 289 parents of adolescents using Facebook and Amazon Mechanical Turk (MTurk). Cluster analyses were computed to identify patterns of in-person and technology-mediated monitoring behaviors, and chi-square analyses were computed to examine differences in substance use between the identified clusters. Three monitoring clusters were identified: a moderate in-person and moderate technology-mediated monitoring cluster (moderate-moderate), a high in-person and high technology-mediated monitoring cluster (high-high), and a high in-person and low technology-mediated monitoring cluster (high-low). Higher frequency of technology-mediated parental monitoring was not associated with lower levels of substance use. Results show that higher levels of technology-mediated parental monitoring may not be associated with adolescent substance use.
A Wireless Monitoring Sub-nA Resolution Test Platform for Nanostructure Sensors
Jang, Chi Woong; Byun, Young Tae; Lee, Taikjin; Woo, Deok Ha; Lee, Seok; Jhon, Young Min
2013-01-01
We have constructed a wireless monitoring test platform with a sub-nA resolution signal amplification/processing circuit (SAPC) and a wireless communication network to test the real-time remote monitoring of the signals from carbon nanotube (CNT) sensors. The operation characteristics of the CNT sensors can also be measured by the I_SD-V_SD curve with the SAPC. The SAPC signals are transmitted to a personal computer by Bluetooth communication and the signals from the computer are transmitted to smart phones by Wi-Fi communication, in such a way that the signals from the sensors can be remotely monitored through a web browser. Successful remote monitoring of signals from a CNT sensor was achieved with the wireless monitoring test platform for detection of 0.15% methanol vapor with 0.5 nA resolution and 7 Hz sampling rate. PMID:23783735
Implementation of a WAP-based telemedicine system for patient monitoring.
Hung, Kevin; Zhang, Yuan-Ting
2003-06-01
Many parties have already demonstrated telemedicine applications that use cellular phones and the Internet. A current trend in telecommunication is the convergence of wireless communication and computer network technologies, and the emergence of wireless application protocol (WAP) devices is an example. Since WAP will also be a common feature found in future mobile communication devices, it is worthwhile to investigate its use in telemedicine. This paper describes the implementation and experiences with a WAP-based telemedicine system for patient-monitoring that has been developed in our laboratory. It utilizes WAP devices as mobile access terminals for general inquiry and patient-monitoring services. Authorized users can browse the patients' general data, monitored blood pressure (BP), and electrocardiogram (ECG) on WAP devices in store-and-forward mode. The applications, written in wireless markup language (WML), WMLScript, and Perl, resided in a content server. A MySQL relational database system was set up to store the BP readings, ECG data, patient records, clinic and hospital information, and doctors' appointments with patients. A wireless ECG subsystem was built for recording ambulatory ECG in an indoor environment and for storing ECG data into the database. For testing, a WAP phone compliant with WAP 1.1 was used at GSM 1800 MHz by circuit-switched data (CSD) to connect to the content server through a WAP gateway, which was provided by a mobile phone service provider in Hong Kong. Data were successfully retrieved from the database and displayed on the WAP phone. The system shows how WAP can be feasible in remote patient-monitoring and patient data retrieval.
40 CFR 1042.110 - Recording reductant use and other diagnostic functions.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) The onboard computer log must record in nonvolatile computer memory all incidents of engine operation... such operation in nonvolatile computer memory. You are not required to monitor NOX concentrations...
Drought in Southwestern United States
NASA Technical Reports Server (NTRS)
2007-01-01
The southwestern United States pined for water in late March and early April 2007. This image is based on data collected by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite from March 22 through April 6, 2007, and it shows the Normalized Difference Vegetation Index, or NDVI, for the period. In this NDVI color scale, green indicates areas of healthier-than-usual vegetation, and only small patches of green appear in this image, near the California-Nevada border and in Utah. Larger areas of below-normal vegetation are more common, especially throughout California. Pale yellow indicates areas with generally average vegetation. Gray areas appear where no data were available, likely due to persistent clouds or snow cover. According to the April 10, 2007, update from the U.S. Drought Monitor, most of the southwestern United States, including Utah, Nevada, California, and Arizona, experienced moderate to extreme drought. The hardest hit areas were southeastern California and southwestern Arizona. Writing for the Drought Monitor, David Miskus of the Joint Agricultural Weather Facility reported that March 2007 had been unusually dry for the southwestern United States. While California's and Utah's reservoir storage was only slightly below normal, reservoir storage was well below normal for New Mexico and Arizona. In early April, an international research team published an online paper in Science noting that droughts could become more common for the southwestern United States and northern Mexico, as these areas were already showing signs of drying. Relying on the same computer models used in the Intergovernmental Panel on Climate Change (IPCC) report released in early 2007, the researchers who published in Science concluded that global warming could make droughts more common, not just in the American Southwest, but also in semiarid regions of southern Europe, Mediterranean northern Africa, and the Middle East.
Remote vibration monitoring system using wireless internet data transfer
NASA Astrophysics Data System (ADS)
Lemke, John
2000-06-01
Vibrations from construction activities can affect infrastructure projects in several ways. Within the general vicinity of a construction site, vibrations can result in damage to existing structures, disturbance to people, damage to sensitive machinery, and degraded performance of precision instrumentation or motion sensitive equipment. Current practice for monitoring vibrations in the vicinity of construction sites commonly consists of measuring free field or structural motions using velocity transducers connected to a portable data acquisition unit via cables. This paper describes an innovative way to collect, process, transmit, and analyze vibration measurements obtained at construction sites. The system described measures vibration at the sensor location, performs necessary signal conditioning and digitization, and sends data to a Web server using wireless data transmission and Internet protocols. A Servlet program running on the Web server accepts the transmitted data and incorporates it into a project database. Two-way interaction between the Web-client and the Web server is accomplished through the use of a Servlet program and a Java Applet running inside a browser located on the Web client's computer. Advantages of this system over conventional vibration data logging systems include continuous unattended monitoring, reduced costs associated with field data collection, instant access to data files and graphs by project team members, and the ability to remotely modify data sampling schemes.
A virtual reality environment for telescope operation
NASA Astrophysics Data System (ADS)
Martínez, Luis A.; Villarreal, José L.; Ángeles, Fernando; Bernal, Abel
2010-07-01
Astronomical observatories and telescopes are becoming increasingly large and complex systems, demanding that any potential user acquire a great amount of information before accessing them. At present, the most common way to cope with that information is to implement larger graphical user interfaces and computer monitors to increase the display area. Tonantzintla Observatory has a 1-m telescope with a remote observing system. As a step forward in improving the telescope software, we have designed a Virtual Reality (VR) environment that works as an extension of the remote system and allows us to operate the telescope. In this work we explore this alternative technology, suggested here as a software platform for the operation of the 1-m telescope.
Learning Setting-Generalized Activity Models for Smart Spaces
Cook, Diane J.
2011-01-01
The data mining and pervasive computing technologies found in smart homes offer unprecedented opportunities for providing context-aware services, including health monitoring and assistance to individuals experiencing difficulties living independently at home. In order to provide these services, smart environment algorithms need to recognize and track activities that people normally perform as part of their daily routines. However, activity recognition has typically involved gathering and labeling large amounts of data in each setting to learn a model for activities in that setting. We hypothesize that generalized models can be learned for common activities that span multiple environment settings and resident types. We describe our approach to learning these models and demonstrate the approach using eleven CASAS datasets collected in seven environments. PMID:21461133
Design of sensor node platform for wireless biomedical sensor networks.
Xijun, Chen; -H Meng, Max; Hongliang, Ren
2005-01-01
Design of a low-cost, miniature, lightweight, ultra-low-power, flexible sensor platform capable of customization and seamless integration into a wireless biomedical sensor network (WBSN) for health monitoring applications presents one of the most challenging tasks. In this paper, we propose a WBSN node platform featuring an ultra-low-power microcontroller, an IEEE 802.15.4 compatible transceiver, and a flexible expansion connector. The proposed solution promises a cost-effective, flexible platform that allows easy customization and energy-efficient computation and communication. The development of a common platform for multiple physical sensors will increase reuse and reduce the costs of transitioning to a new generation of sensors. As a case study, we present an implementation of an ECG (electrocardiogram) sensor.
Subtlenoise: sonification of distributed computing operations
NASA Astrophysics Data System (ADS)
Love, P. A.
2015-12-01
The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. There are two components found in most monitoring systems: visually rich time-series graphs, and notification systems that alert operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures, based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synth engine). Message attributes are mapped onto audio attributes based on a broad classification of the message (continuous or discrete metrics), while keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations and may provide a less intrusive way to understand the operational health of these systems.
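A minimal Python sketch of the message-to-audio mapping idea, assuming monitoring messages arrive as JSON over a ZeroMQ PUB/SUB socket: a continuous metric is mapped onto a pitch, which a real deployment would hand to the synth engine rather than print. The endpoint, message schema, metric name, and mapping are all assumptions.

    import json
    import zmq   # pyzmq

    def metric_to_pitch(value, base_hz=220.0, span_hz=660.0):
        # Map a continuous metric in [0, 1] onto a frequency range (an assumed mapping)
        return base_hz + span_hz * max(0.0, min(1.0, value))

    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.connect("tcp://monitor-broker:5556")        # hypothetical endpoint
    sock.setsockopt_string(zmq.SUBSCRIBE, "")        # all topics

    while True:
        msg = json.loads(sock.recv_string())         # assumed payload, e.g. {"metric": "transfer_failure_rate", "value": 0.12}
        if msg.get("metric") == "transfer_failure_rate":
            hz = metric_to_pitch(msg["value"])
            print(f"play tone at {hz:.0f} Hz")       # a real deployment would drive the synth engine here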
Low-complexity R-peak detection for ambulatory fetal monitoring.
Rooijakkers, Michael J; Rabotti, Chiara; Oei, S Guid; Mischi, Massimo
2012-07-01
Non-invasive fetal health monitoring during pregnancy is becoming increasingly important because of the increasing number of high-risk pregnancies. Despite recent advances in signal-processing technology, which have enabled fetal monitoring during pregnancy using abdominal electrocardiogram (ECG) recordings, ubiquitous fetal health monitoring is still unfeasible due to the computational complexity of noise-robust solutions. In this paper, an ECG R-peak detection algorithm for ambulatory R-peak detection is proposed, as part of a fetal ECG detection algorithm. The proposed algorithm is optimized to reduce computational complexity, without reducing the R-peak detection performance compared to the existing R-peak detection schemes. Validation of the algorithm is performed on three manually annotated datasets. With a detection error rate of 0.23%, 1.32% and 9.42% on the MIT/BIH Arrhythmia and in-house maternal and fetal databases, respectively, the detection rate of the proposed algorithm is comparable to the best state-of-the-art algorithms, at a reduced computational complexity.
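Not the paper's algorithm, but a minimal Python illustration of the classic low-complexity recipe such detectors build on (differentiate, square, moving-window integrate, threshold), run here on a crude synthetic trace with an arbitrary fixed threshold.

    import numpy as np

    fs = 250                                          # sampling rate in Hz (illustrative)
    t = np.arange(0, 10, 1 / fs)
    ecg = 0.05 * np.random.randn(t.size)              # synthetic baseline noise
    ecg[(t * fs).astype(int) % int(0.8 * fs) == 0] += 1.0   # crude R peaks every 0.8 s

    # Emphasize the QRS complex: differentiate, square, then moving-window integrate
    diff = np.diff(ecg, prepend=ecg[0])
    window = int(0.12 * fs)
    energy = np.convolve(diff ** 2, np.ones(window) / window, mode="same")

    threshold = 0.5 * energy.max()                    # simple fixed threshold; real detectors adapt it
    above = energy > threshold
    onsets = np.flatnonzero(above & ~np.roll(above, 1))
    rr_s = np.diff(onsets).mean() / fs if len(onsets) > 1 else None
    print("detected beats:", len(onsets), "mean RR interval (s):", rr_s)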
GISMO: A MATLAB toolbox for seismic research, monitoring, & education
NASA Astrophysics Data System (ADS)
Thompson, G.; Reyes, C. G.; Kempler, L. A.
2017-12-01
GISMO is an open-source MATLAB toolbox which provides an object-oriented framework to build workflows and applications that read, process, visualize and write seismic waveform, catalog and instrument response data. GISMO can retrieve data from a variety of sources (e.g. FDSN web services, Earthworm/Winston servers) and data formats (SAC, Seisan, etc.). It can handle waveform data that crosses file boundaries. All this removes one of the most time-consuming parts of scientists developing their own codes. GISMO simplifies seismic data analysis by providing a common interface for your data, regardless of its source. Several common plots are built in to GISMO, such as record section plots, spectrograms, depth-time sections, event counts per unit time, energy release per unit time, etc. Other visualizations include map views and cross-sections of hypocentral data. Several common processing methods are also included, such as an extensive set of tools for correlation analysis. Support is being added to interface GISMO with ObsPy. GISMO encourages community development of an integrated set of codes and accompanying documentation, eliminating the need for seismologists to "reinvent the wheel". By sharing code, the consistency and repeatability of results can be enhanced. GISMO is hosted on GitHub with documentation both within the source code and in the project wiki. GISMO has been used at the University of South Florida and the University of Alaska Fairbanks in graduate-level courses including Seismic Data Analysis, Time Series Analysis and Computational Seismology. GISMO has also been tailored to interface with the common seismic monitoring software and data formats used by volcano observatories in the US and elsewhere. As an example, toolbox training was delivered to researchers at INETER (Nicaragua). Applications built on GISMO include IceWeb (e.g. web-based spectrograms), which has been used by the Alaska Volcano Observatory since 1998 and became the prototype for the USGS Pensive system.
Fiesta, Matthew P; Eagleman, David M
2008-09-15
As the frequency of a flickering light is increased, the perception of flicker is replaced by the perception of steady light at what is known as the critical flicker fusion threshold (CFFT). This threshold provides a useful measure of the brain's information processing speed, and has been used in medicine for over a century both for diagnostic and drug efficacy studies. However, the hardware for presenting the stimulus has not advanced to take advantage of computers, largely because the refresh rates of typical monitors are too slow to provide fine-grained changes in the alternation rate of a visual stimulus. For example, a cathode ray tube (CRT) computer monitor running at 100Hz will render a new frame every 10 ms, thus restricting the period of a flickering stimulus to multiples of 20 ms. These multiples provide a temporal resolution far too low to make precise threshold measurements, since typical CFFT values are in the neighborhood of 35 ms. We describe here a simple and novel technique to enable alternating images at several closely-spaced periods on a standard monitor. The key to our technique is to programmatically control the video card to dynamically reset the refresh rate of the monitor. Different refresh rates allow slightly different frame durations; this can be leveraged to vastly increase the resolution of stimulus presentation times. This simple technique opens new inroads for experiments on computers that require more finely-spaced temporal resolution than a monitor at a single, fixed refresh rate can allow.
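A small worked example of the timing constraint described above: at a fixed refresh rate, a square-wave flicker with n frames light and n frames dark can only take periods of 2n frame durations, so a single rate gives very coarse steps near a typical ~35 ms fusion period, while sweeping the refresh rate fills in many intermediate values. The refresh rates listed are illustrative.

    # Candidate flicker periods (n frames light + n frames dark) near a ~35 ms fusion period
    refresh_rates_hz = [60, 70, 75, 80, 85, 90, 100, 110, 120, 140, 160]
    near_threshold_ms = sorted({
        round(2 * n * 1000.0 / rate, 2)
        for rate in refresh_rates_hz
        for n in range(1, 4)
        if 25.0 <= 2 * n * 1000.0 / rate <= 45.0
    })
    print(near_threshold_ms)
    # A single 100 Hz monitor only offers 20, 40, 60, ... ms in this scheme;
    # varying the refresh rate yields the much finer spacing printed above.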
NASA Technical Reports Server (NTRS)
Robinson, Julie A.; Tate-Brown, Judy M.
2009-01-01
Using a commercial software CD and minimal up-mass, SNFM monitors the Payload local area network (LAN) to analyze and troubleshoot LAN data traffic. Validating LAN traffic models may allow for faster and more reliable computer networks to sustain systems and science on future space missions. Research Summary: This experiment studies the function of the computer network onboard the ISS. On-orbit packet statistics are captured and used to validate ground based medium rate data link models and enhance the way that the local area network (LAN) is monitored. This information will allow monitoring and improvement in the data transfer capabilities of on-orbit computer networks. The Serial Network Flow Monitor (SNFM) experiment attempts to characterize the network equivalent of traffic jams on board ISS. The SNFM team is able to specifically target historical problem areas including the SAMS (Space Acceleration Measurement System) communication issues, data transmissions from the ISS to the ground teams, and multiple users on the network at the same time. By looking at how various users interact with each other on the network, conflicts can be identified and work can begin on solutions. SNFM is comprised of a commercial off the shelf software package that monitors packet traffic through the payload Ethernet LANs (local area networks) on board ISS.
Unobtrusive measurement of daily computer use to detect mild cognitive impairment
Kaye, Jeffrey; Mattek, Nora; Dodge, Hiroko H; Campbell, Ian; Hayes, Tamara; Austin, Daniel; Hatt, William; Wild, Katherine; Jimison, Holly; Pavel, Michael
2013-01-01
Background Mild disturbances of higher order activities of daily living are present in people diagnosed with mild cognitive impairment (MCI). These deficits may be difficult to detect among those still living independently. Unobtrusive continuous assessment of a complex activity such as home computer use may detect mild functional changes and identify MCI. We sought to determine whether long-term changes in remotely monitored computer use differ in persons with MCI in comparison to cognitively intact volunteers. Methods Participants enrolled in a longitudinal cohort study of unobtrusive in-home technologies to detect cognitive and motor decline in independently living seniors were assessed for computer usage (number of days with use, mean daily usage and coefficient of variation of use) measured by remotely monitoring computer session start and end times. Results Over 230,000 computer sessions from 113 computer users (mean age, 85; 38 with MCI) were acquired during a mean of 36 months. In mixed effects models there was no difference in computer usage at baseline between MCI and intact participants controlling for age, sex, education, race and computer experience. However, over time, between MCI and intact participants, there was a significant decrease in number of days with use (p=0.01), mean daily usage (~1% greater decrease/month; p=0.009) and an increase in day-to-day use variability (p=0.002). Conclusions Computer use change can be unobtrusively monitored and indicate individuals with MCI. With 79% of those 55–64 years old now online, this may be an ecologically valid and efficient approach to track subtle clinically meaningful change with aging. PMID:23688576
HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation
NASA Astrophysics Data System (ADS)
Jin, Jin; Zhang, Jianguo; Chen, Xiaomeng; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Feng, Jie; Sheng, Liwei; Huang, H. K.
2006-03-01
The Health Insurance Portability and Accountability Act (HIPAA) is a governmental regulation issued to protect the privacy of health information that identifies individuals, living or deceased. HIPAA requires security services supporting the implementation features of access control, audit controls, authorization control, data authentication and entity authentication. The controls proposed in the HIPAA Security Standards are realized here as audit trails. Audit trails can be used for surveillance, to detect when interesting events that warrant further investigation might be happening, or forensically, after the detection of a security breach, to determine what went wrong and who or what was at fault. In order to provide security control services and to achieve high and continuous availability, we designed a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation. The system consists of two parts: monitoring agents running on each PACS component computer and a Monitor Server running on a remote computer. Monitoring agents are deployed on all computer nodes in the RIS-integrated PACS system to collect the audit trail messages defined by Supplement 95 of the DICOM standard (Audit Trail Messages). The Monitor Server then gathers all audit messages and processes them to provide security information at three levels: system resources, PACS/RIS applications, and user/patient data access. RIS-integrated PACS managers can monitor and control the entire RIS-integrated PACS operation through a web service provided by the Monitor Server. This paper presents the design of the HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation, and gives preliminary results obtained with this monitoring system on a clinical RIS-integrated PACS.
Basch, Ethan; Pugh, Stephanie L; Dueck, Amylou C; Mitchell, Sandra A; Berk, Lawrence; Fogh, Shannon; Rogak, Lauren J; Gatewood, Marcha; Reeve, Bryce B; Mendoza, Tito R; O’Mara, Ann; Denicoff, Andrea; Minasian, Lori; Bennett, Antonia V; Setser, Ann; Schrag, Deborah; Roof, Kevin; Moore, Joan K; Gergel, Thomas; Stephans, Kevin; Rimner, Andreas; DeNittis, Albert; Bruner, Deborah Watkins
2017-01-01
Purpose To assess the feasibility of measuring symptomatic adverse events (AEs) in a multicenter clinical trial using the National Cancer Institute’s Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE). Methods and Materials Patients enrolled in Trial XXXX (XXXX) were asked to self-report 53 PRO-CTCAE items representing 30 symptomatic AEs at 6 time points (baseline; weekly x4 during treatment; 12-weeks post-treatment). Reporting was conducted via wireless tablet computers in clinic waiting areas. Compliance was defined as the proportion of visits when an expected PRO-CTCAE assessment was completed. Results Among 226 study sites participating in Trial XXXX, 100% completed 35-minute PRO-CTCAE training for clinical research associates (CRAs); 80 sites enrolled patients of which 34 (43%) required tablet computers to be provided. All 152 patients in Trial XXXX agreed to self-report using the PRO-CTCAE (median age 66; 47% female; 84% white). Median time for CRAs to learn the system was 60 minutes (range 30–240), and median time for CRAs to teach a patient to self-report was 10 minutes (range 2–60). Compliance was high, particularly during active treatment when patients self-reported at 86% of expected time points, although compliance was lower post-treatment (72%). Common reasons for non-compliance were institutional errors such as forgetting to provide computers to participants; patients missing clinic visits; internet connectivity; and patients feeling “too sick”. Conclusions Most patients enrolled in a multicenter chemoradiotherapy trial were willing and able to self-report symptomatic adverse events at visits using tablet computers. Minimal effort was required by local site staff to support this system. The observed causes of missing data may be obviated by allowing patients to self-report electronically between-visits, and by employing central compliance monitoring. These approaches are being incorporated into ongoing studies. PMID:28463161
Automated validation of a computer operating system
NASA Technical Reports Server (NTRS)
Dervage, M. M.; Milberg, B. A.
1970-01-01
Programs apply selected input/output loads to complex computer operating system and measure performance of that system under such loads. Technique lends itself to checkout of computer software designed to monitor automated complex industrial systems.
Chen, Xiaodong; Sadineni, Vikram; Maity, Mita; Quan, Yong; Enterline, Matthew; Mantri, Rao V
2015-12-01
Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One important aspect of commercializing a lyophilized product is transferring the process parameters developed on a lab-scale lyophilizer to commercial scale without a loss in product quality. This is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combined computational and experimental approach to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or by sublimation tests, and are used as inputs for the finite element model (FEM)-based software called PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. The heat and mass transfer coefficients vary between lyophilization scales; hence, we present an approach for applying appropriate factors when scaling up from lab scale to commercial scale. As a result, one can predict the commercial-scale primary drying time from these parameters. Additionally, the model-based approach presented in this study provides a process to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply chain continuity. The approach presented here provides a robust lyophilization scale-up strategy; because it is simple and minimalistic, it is also a less capital-intensive path with minimal use of expensive drug substance/active material.
Incorporation of CAD/CAM Restoration Into Navy Dentistry
2017-09-26
Abbreviations: CAD/CAM, computer-aided design/computer-assisted manufacturing; CDT, Common Dental Terminology; DENCAS, Dental Common Access System; DTF, Dental... ...to reduce avoidable dental emergencies for deployed sailors and marines. Dental computer-aided design/computer-assisted manufacturing (CAD/CAM)... This report will review and evaluate the placement rate by Navy dentists of digitally fabricated in-office ceramic restorations compared to traditional direct...
Indicators and protocols for monitoring impacts of formal and informal trails in protected areas
Marion, Jeffrey L.; Leung, Yu-Fai
2011-01-01
Trails are a common recreation infrastructure in protected areas and their conditions affect the quality of natural resources and visitor experiences. Various trail impact indicators and assessment protocols have been developed in support of monitoring programs, which are often used for management decision-making or as part of visitor capacity management frameworks. This paper reviews common indicators and assessment protocols for three types of trails, surfaced formal trails, unsurfaced formal trails, and informal (visitor-created) trails. Monitoring methods and selected data from three U.S. National Park Service units are presented to illustrate some common trail impact indicators and assessment options.
Common Calibration Source for Monitoring Long-term Ozone Trends
NASA Technical Reports Server (NTRS)
Kowalewski, Matthew
2004-01-01
Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and the limited lifetimes of satellite monitoring instruments demand that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), the Ozone Monitoring Instrument (OMI), the Global Ozone Monitoring Experiment (GOME, ESA), the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY, ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.
ERIC Educational Resources Information Center
Akhtar, S.; Warburton, S.; Xu, W.
2017-01-01
In this paper we report on the use of a purpose-built computer-supported collaborative learning environment designed to support lab-based CAD teaching through the monitoring of student participation and identified predictors of success. This was carried out by analysing data from the interactive learning system and correlating student behaviour with…
Integrating a Handheld Computer and Stethoscope into a Fetal Monitor
Ahmad Soltani, Mitra
2009-01-01
This article presents procedures for modifying a hand held computer or personal digital assistant (PDA) into a versatile device functioning as an electronic stethoscope for fetal monitoring. Along with functioning as an electronic stethoscope, a PDA can provide a useful information source for a medical trainee. Feedback from medical students, residents and interns suggests the device is well accepted by medical trainees. PMID:20165517
Discussion of "Computational Electrocardiography: Revisiting Holter ECG Monitoring".
Baumgartner, Christian; Caiani, Enrico G; Dickhaus, Hartmut; Kulikowski, Casimir A; Schiecke, Karin; van Bemmel, Jan H; Witte, Herbert
2016-08-05
This article is part of a For-Discussion-Section of Methods of Information in Medicine about the paper "Computational Electrocardiography: Revisiting Holter ECG Monitoring" written by Thomas M. Deserno and Nikolaus Marx. It is introduced by an editorial. This article contains the combined commentaries invited to independently comment on the paper of Deserno and Marx. In subsequent issues the discussion can continue through letters to the editor.
Design of Remote Monitoring System of Irrigation based on GSM and ZigBee Technology
NASA Astrophysics Data System (ADS)
Xiao xi, Zheng; Fang, Zhao; Shuaifei, Shao
2018-03-01
To address low irrigation efficiency and the waste of water resources, a remote monitoring system for farmland irrigation based on GSM communication technology and ZigBee technology was designed. The system is composed of sensors, a GSM communication module, a ZigBee module, a host computer, valves and other components. The system opens and closes the pump and the electromagnetic valve as needed, and transmits the monitoring information to the host computer or the user's mobile phone through the GSM communication network. Experiments show that the system has low power consumption and a friendly man-machine interface, and is convenient and simple to use. It can monitor the agricultural environment remotely and control related irrigation equipment at any time and place, and can better meet the needs of remote monitoring of farmland irrigation.
NASA Astrophysics Data System (ADS)
Kunz, A.; Pihet, P.; Arend, E.; Menzel, H. G.
1990-12-01
A portable area monitor for the measurement of dose-equivalent quantities in practical radiation-protection work has been developed. The detector is a low-pressure tissue-equivalent proportional counter (TEPC) of the type used in microdosimetry. The complex analysis system required has been optimized for low power consumption and small size to achieve a truly operational survey meter. The newly designed electronics include complete analog, digital and microprocessor boards and provide fast pulse-height processing over a large (five-decade) dynamic range. Three original circuits have been specifically developed: (1) a miniaturized adjustable high-voltage power supply with low ripple and high stability; (2) a double spectroscopy amplifier with constant gain ratio and a common pole-zero stage; and (3) an analog-to-digital converter with quasi-logarithmic characteristics based on a flash converter using fast comparators connected in parallel. With the incorporated single-board computer, the maximal total power consumption is 5 W, enabling 40 hours of operation on batteries. With minor adaptations the equipment is proposed as a low-cost solution for various measurement problems in environmental studies.
Development of an in situ fiber optic Raman system to monitor hydrothermal vents.
Battaglia, Tina M; Dunn, Eileen E; Lilley, Marvin D; Holloway, John; Dable, Brian K; Marquardt, Brian J; Booksh, Karl S
2004-07-01
The development of a field-portable fiber optic Raman system, modified from commercially available components, that can operate remotely on battery power and withstand the corrosive environment of hydrothermal vents is discussed. The Raman system is designed for continuous monitoring in the deep-sea environment. A 785 nm diode laser was used in conjunction with a sapphire ball fiber optic Raman probe, a single-board computer, and a CCD detector. Using the system at ambient conditions, the detection limits of SO4(2-), CO3(2-) and NO3(-) were determined to be approximately 0.11, 0.36 and 0.12 g l(-1), respectively. Mimicking the cold conditions of the sea floor by placing the equipment in a refrigerator yielded slightly worse detection limits of approximately 0.16 g l(-1) for SO4(2-) and 0.20 g l(-1) for NO3(-). Addition of minerals commonly found in vent fluid plumes further degraded the detection limits to approximately 0.33 and 0.34 g l(-1), respectively, for SO4(2-) and NO3(-).
Cheng, Chihwen; Brown, R. Clark; Cohen, Lindsey L.; Venugopalan, Janani; Stokes, Todd H.
2016-01-01
Sickle cell disease (SCD) is the most common inherited disease, and SCD symptoms impact functioning and well-being. For example, adolescents with SCD have a higher tendency of psychological problems than the general population. Acceptance and Commitment Therapy (ACT), a cognitive-behavioral therapy, is an effective intervention to promote quality of life and functioning in adolescents with chronic illness. However, traditional visit-based therapy sessions are restrained by challenges, such as limited follow-up, insufficient data collection, low treatment adherence, and delayed intervention. In this paper, we present Instant Acceptance and Commitment Therapy (iACT), a system designed to enhance the quality of pediatric ACT. iACT utilizes text messaging technology, which is the most popular cell phone activity among adolescents, to conduct real-time psychotherapy interventions. The system is built on cloud computing technologies, which provides a convenient and cost-effective monitoring environment. To evaluate iACT, a trial with 60 adolescents with SCD is being conducted in conjunction with the Georgia Institute of Technology, Children’s Healthcare of Atlanta, and Georgia State University. PMID:24110179
Cheng, Chihwen; Brown, R Clark; Cohen, Lindsey L; Venugopalan, Janani; Stokes, Todd H; Wang, May D
2013-01-01
Sickle cell disease (SCD) is the most common inherited disease, and SCD symptoms impact functioning and well-being. For example, adolescents with SCD have a higher tendency of psychological problems than the general population. Acceptance and Commitment Therapy (ACT), a cognitive-behavioral therapy, is an effective intervention to promote quality of life and functioning in adolescents with chronic illness. However, traditional visit-based therapy sessions are restrained by challenges, such as limited follow-up, insufficient data collection, low treatment adherence, and delayed intervention. In this paper, we present Instant Acceptance and Commitment Therapy (iACT), a system designed to enhance the quality of pediatric ACT. iACT utilizes text messaging technology, which is the most popular cell phone activity among adolescents, to conduct real-time psychotherapy interventions. The system is built on cloud computing technologies, which provides a convenient and cost-effective monitoring environment. To evaluate iACT, a trial with 60 adolescents with SCD is being conducted in conjunction with the Georgia Institute of Technology, Children's Healthcare of Atlanta, and Georgia State University.
Anomaly-based intrusion detection for SCADA systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, D.; Usynin, A.; Hines, J. W.
2006-07-01
Most critical infrastructure, such as chemical processing plants, electrical generation and distribution networks, and gas distribution, is monitored and controlled by Supervisory Control and Data Acquisition (SCADA) systems. These systems have been the focus of increased security attention and there are concerns that they could be the target of international terrorists. With the constantly growing number of internet-related computer attacks, there is evidence that our critical infrastructure may also be vulnerable. Researchers estimate that malicious online actions may have caused on the order of $75 billion in damage as of 2007. One of the interesting countermeasures for enhancing information system security is intrusion detection. This paper briefly discusses the history of research in intrusion detection techniques and introduces the two basic detection approaches: signature detection and anomaly detection. Finally, it presents the application of techniques developed for monitoring critical process systems, such as nuclear power plants, to anomaly intrusion detection. The method uses an auto-associative kernel regression (AAKR) model coupled with the sequential probability ratio test (SPRT), applied to a simulated SCADA system. The results show that these methods can be generally used to detect a variety of common attacks. (authors)
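The following is a hedged sketch of the two ingredients named above: an auto-associative kernel regression estimate of the expected sensor vector and a sequential probability ratio test on the resulting residual. The memory matrix, kernel bandwidth, noise levels and SPRT hypotheses are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch of AAKR + SPRT anomaly detection on one monitored signal.
import numpy as np

def aakr_estimate(x, memory, bandwidth=1.0):
    """Auto-associative kernel regression: weighted average of stored
    normal observations, weighted by a Gaussian kernel on the distance."""
    d = np.linalg.norm(memory - x, axis=1)
    w = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))
    w /= w.sum() + 1e-12
    return w @ memory                        # expected "normal" vector

def sprt_update(llr, residual, sigma=0.5, mu1=1.5, a=0.01, b=0.01):
    """One-sided Gaussian SPRT on a scalar residual.
    H0: residual ~ N(0, sigma^2); H1: residual ~ N(mu1, sigma^2)."""
    llr += (mu1 * residual - 0.5 * mu1 ** 2) / sigma ** 2
    upper, lower = np.log((1 - b) / a), np.log(b / (1 - a))
    if llr >= upper:
        return 0.0, "alarm"                  # decide H1, reset the test
    if llr <= lower:
        return 0.0, "normal"                 # decide H0, reset the test
    return llr, "undecided"

rng = np.random.default_rng(0)
memory = rng.normal(size=(200, 3))           # stored normal sensor vectors
llr = 0.0
for t in range(50):
    x = rng.normal(size=3) + (2.0 if t > 30 else 0.0)   # drift injected at t=30
    r = x[0] - aakr_estimate(x, memory)[0]              # signed residual, channel 0
    llr, state = sprt_update(llr, r)
    if state == "alarm":
        print(f"t={t}: anomaly alarm (residual {r:.2f})")
```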
DOT National Transportation Integrated Search
2014-08-01
This report describes the instrumentation and data acquisition for a multi-girder, composite steel bridge in Connecticut. The computer-based remote monitoring system was developed to collect information on the girder bending strains. The monitoring...
NASA Astrophysics Data System (ADS)
Vitali, Lina; Righini, Gaia; Piersanti, Antonio; Cremona, Giuseppe; Pace, Giandomenico; Ciancarella, Luisella
2017-12-01
Air backward trajectory calculations are commonly used in a variety of atmospheric analyses, in particular for source attribution evaluation. The accuracy of backward trajectory analysis is mainly determined by the quality and the spatial and temporal resolution of the underlying meteorological data set, especially in the cases of complex terrain. This work describes a new tool for the calculation and the statistical elaboration of backward trajectories. To take advantage of the high-resolution meteorological database of the Italian national air quality model MINNI, a dedicated set of procedures was implemented under the name of M-TraCE (MINNI module for Trajectories Calculation and statistical Elaboration) to calculate and process the backward trajectories of air masses reaching a site of interest. Some outcomes from the application of the developed methodology to the Italian Network of Special Purpose Monitoring Stations are shown to assess its strengths for the meteorological characterization of air quality monitoring stations. M-TraCE has demonstrated its capabilities to provide a detailed statistical assessment of transport patterns and region of influence of the site under investigation, which is fundamental for correctly interpreting pollutants measurements and ascertaining the official classification of the monitoring site based on meta-data information. Moreover, M-TraCE has shown its usefulness in supporting other assessments, i.e., spatial representativeness of a monitoring site, focussing specifically on the analysis of the effects due to meteorological variables.
Research into a distributed fault diagnosis system and its application
NASA Astrophysics Data System (ADS)
Qian, Suxiang; Jiao, Weidong; Lou, Yongjian; Shen, Xiaomei
2005-12-01
CORBA (Common Object Request Broker Architecture) is a solution for distributed computing over heterogeneous systems that establishes a communication protocol between distributed objects, with strong emphasis on interoperation between them. However, only after developing suitable application approaches and practical monitoring and diagnosis technology can customers share monitoring and diagnosis information, so that remote online multi-expert cooperative diagnosis can be achieved. This paper aims at building an open fault monitoring and diagnosis platform combining CORBA, Web and agent technologies. Heterogeneous diagnosis objects interoperate in independent threads through the CORBA software bus, enabling resource sharing and online multi-expert cooperative diagnosis and overcoming shortcomings such as a lack of diagnosis knowledge, reliance on a single diagnosis technique and limited analysis functions, so that more complex and deeper diagnosis can be carried out. Taking a high-speed centrifugal air compressor set as an example, we demonstrate distributed diagnosis based on CORBA. This shows that more efficient approaches can be found to problems such as real-time monitoring and diagnosis over the network and the decomposition of complex tasks by combining CORBA, Web technology and an agent framework model in complementary research. In this system, a multi-diagnosis intelligent agent helps improve diagnosis efficiency. The system also offers an open environment in which diagnosis objects can easily be upgraded and new diagnosis server objects can easily join.
ERIC Educational Resources Information Center
Batt, Russell H., Ed.
1990-01-01
Four applications of microcomputers in the chemical laboratory are presented. Included are "Mass Spectrometer Interface with an Apple II Computer,""Interfacing the Spectronic 20 to a Computer,""A pH-Monitoring and Control System for Teaching Laboratories," and "A Computer-Aided Optical Melting Point Device." Software, instrumentation, and uses are…
Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid
NASA Astrophysics Data System (ADS)
Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration
2014-06-01
The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.
ERIC Educational Resources Information Center
Stone, Antonia
1982-01-01
Provides general information on currently available microcomputers, computer programs (software), hardware requirements, software sources, costs, computer games, and programing. Includes a list of popular microcomputers, providing price category, model, list price, software (cassette, tape, disk), monitor specifications, amount of random access…
Translator program converts computer printout into braille language
NASA Technical Reports Server (NTRS)
Powell, R. A.
1967-01-01
Computer program converts print image tape files into six dot Braille cells, enabling a blind computer programmer to monitor and evaluate data generated by his own programs. The Braille output is printed 8 lines per inch.
Kimwele, Charles; Matheka, Duncan; Ferdowsian, Hope
2011-01-01
Introduction Animal experimentation is common in Africa, a region that accords little priority on animal protection in comparison to economic and social development. The current study aimed at investigating the prevalence of animal experimentation in Kenya, and to review shortfalls in policy, legislation, implementation and enforcement that result in inadequate animal care in Kenya and other African nations. Methods Data was collected using questionnaires, administered at 39 highly ranked academic and research institutions aiming to identify those that used animals, their sources of animals, and application of the three Rs. Perceived challenges to the use of non-animal alternatives and common methods of euthanasia were also queried. Data was analyzed using Epidata, SPSS 16.0 and Microsoft Excel. Results Thirty-eight (97.4%) of thirty-nine institutions reported using animals for education and/or research. Thirty (76.9%) institutions reported using analgesics or anesthetics on a regular basis. Thirteen (33.3%) institutions regularly used statistical methods to minimize the use of animals. Overall, sixteen (41.0%) institutions explored the use of alternatives to animals such as cell cultures and computer simulation techniques, with one (2.6%) academic institution having completely replaced animals with computer modeling, manikins and visual illustrations. The commonest form of euthanasia employed was chloroform administration, reportedly in fourteen (29.8%) of 47 total methods (some institutions used more than one method). Twenty-eight (71.8%) institutions had no designated ethics committee to review or monitor protocols using animals. Conclusion Animals are commonly used in academic and research institutions in Kenya. The relative lack of ethical guidance and oversight regarding the use of animals in research and education presents significant concerns. PMID:22355442
Kimwele, Charles; Matheka, Duncan; Ferdowsian, Hope
2011-01-01
Animal experimentation is common in Africa, a region that accords little priority on animal protection in comparison to economic and social development. The current study aimed at investigating the prevalence of animal experimentation in Kenya, and to review shortfalls in policy, legislation, implementation and enforcement that result in inadequate animal care in Kenya and other African nations. Data was collected using questionnaires, administered at 39 highly ranked academic and research institutions aiming to identify those that used animals, their sources of animals, and application of the three Rs. Perceived challenges to the use of non-animal alternatives and common methods of euthanasia were also queried. Data was analyzed using Epidata, SPSS 16.0 and Microsoft Excel. Thirty-eight (97.4%) of thirty-nine institutions reported using animals for education and/or research. Thirty (76.9%) institutions reported using analgesics or anesthetics on a regular basis. Thirteen (33.3%) institutions regularly used statistical methods to minimize the use of animals. Overall, sixteen (41.0%) institutions explored the use of alternatives to animals such as cell cultures and computer simulation techniques, with one (2.6%) academic institution having completely replaced animals with computer modeling, manikins and visual illustrations. The commonest form of euthanasia employed was chloroform administration, reportedly in fourteen (29.8%) of 47 total methods (some institutions used more than one method). Twenty-eight (71.8%) institutions had no designated ethics committee to review or monitor protocols using animals. Animals are commonly used in academic and research institutions in Kenya. The relative lack of ethical guidance and oversight regarding the use of animals in research and education presents significant concerns.
NASA Astrophysics Data System (ADS)
Wei, Wang; Chongchao, Pan; Yikai, Liang; Gang, Li
2017-11-01
With the rapid development of information technology, the scale of data centers is increasing quickly, and the energy consumption of computer rooms is rising rapidly as well, with air conditioning cooling accounting for a large proportion. How to apply new technology to reduce the energy consumption of the computer room has become an important topic in energy-saving research. This paper studies Internet of Things technology and designs a green computer room environmental monitoring system. In the system, real-time environmental data are acquired through wireless sensor network technology and presented as a three-dimensional visualization. The environment monitor provides a computer room asset view, temperature cloud view, humidity cloud view, microenvironment view and so on. According to the microenvironment conditions, the air volume, temperature and humidity of the air conditioning can be adjusted for individual equipment cabinets to achieve precise cooling. This reduces the energy consumption of the air conditioning and, as a result, greatly reduces the overall energy consumption of the green computer room. The system was applied in the computer center of Weihai; after a year of testing and operation it achieved a good energy-saving effect, fully verifying the effectiveness of the project for computer room energy conservation.
COMCAN: a computer program for common cause analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burdick, G.R.; Marshall, N.H.; Wilson, J.R.
1976-05-01
The computer program, COMCAN, searches the fault tree minimal cut sets for shared susceptibility to various secondary events (common causes) and common links between components. In the case of common causes, a location check may also be performed by COMCAN to determine whether barriers to the common cause exist between components. The program can locate common manufacturers of components having events in the same minimal cut set. A relative ranking scheme for secondary event susceptibility is included in the program.
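A minimal sketch of the kind of search described above, written in Python rather than taken from the original COMCAN implementation: each minimal cut set is scanned for a secondary event to which all of its components are susceptible and for a common manufacturer. The cut sets and susceptibility tables are invented for illustration.

```python
# Illustrative common-cause scan over minimal cut sets; all data are made up.
minimal_cut_sets = [{"PUMP-A", "PUMP-B"}, {"VALVE-1", "SENSOR-3"}]

susceptibility = {            # component -> secondary events it is vulnerable to
    "PUMP-A":   {"flood", "fire"},
    "PUMP-B":   {"flood"},
    "VALVE-1":  {"fire"},
    "SENSOR-3": {"vibration"},
}
manufacturer = {"PUMP-A": "Acme", "PUMP-B": "Acme",
                "VALVE-1": "Beta", "SENSOR-3": "Gamma"}

for cut_set in minimal_cut_sets:
    common_causes = set.intersection(*(susceptibility[c] for c in cut_set))
    makers = {manufacturer[c] for c in cut_set}
    if common_causes:
        print(f"{sorted(cut_set)}: shared susceptibility to {sorted(common_causes)}")
    if len(makers) == 1:
        print(f"{sorted(cut_set)}: common manufacturer {makers.pop()}")
```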
Proudfoot, Judith; Parker, Gordon; Hadzi Pavlovic, Dusan; Manicavasagar, Vijaya; Adler, Einat; Whitton, Alexis
2010-12-19
The benefits of self-monitoring on symptom severity, coping, and quality of life have been amply demonstrated. However, paper and pencil self-monitoring can be cumbersome and subject to biases associated with retrospective recall, while computer-based monitoring can be inconvenient in that it relies on users being at their computer at scheduled monitoring times. As a result, nonadherence in self-monitoring is common. Mobile phones offer an alternative. Their take-up has reached saturation point in most developed countries and is increasing in developing countries; they are carried on the person, they are usually turned on, and functionality is continually improving. Currently, however, public conceptions of mobile phones focus on their use as tools for communication and social identity. Community attitudes toward using mobile phones for mental health monitoring and self-management are not known. The objective was to explore community attitudes toward the appropriation of mobile phones for mental health monitoring and management. We held community consultations in Australia consisting of an online survey (n = 525), focus group discussions (n = 47), and interviews (n = 20). Respondents used their mobile phones daily and predominantly for communication purposes. Of those who completed the online survey, the majority (399/525 or 76%) reported that they would be interested in using their mobile phone for mental health monitoring and self-management if the service were free. Of the 455 participants who owned a mobile phone or PDA, there were no significant differences between those who expressed interest in the use of mobile phones for this purpose and those who did not by gender (χ2(1) = 0.98, P = .32, phi = .05), age group (χ2(4) = 1.95, P = .75, phi = .06), employment status (χ2(2) = 2.74, P = .25, phi = .08) or marital status (χ2(4) = 4.62, P = .33, phi = .10). However, the presence of current symptoms of depression, anxiety, or stress affected interest in such a program in that those with symptoms were more interested (χ2(1) = 16.67, P < .001, phi = .19). Reasons given for interest in using a mobile phone program were that it would be convenient, counteract isolation, and help identify triggers to mood states. Reasons given for lack of interest included not liking to use a mobile phone or technology, concerns that it would be too intrusive or that privacy would be lacking, and not seeing the need. Design features considered to be key by participants were enhanced privacy and security functions including user name and password, ease of use, the provision of reminders, and the availability of clear feedback. Community attitudes toward the appropriation of mobile phones for the monitoring and self-management of depression, anxiety, and stress appear to be positive as long as privacy and security provisions are assured, the program is intuitive and easy to use, and the feedback is clear.
Quality Assurance in Post-Secondary Education: Some Common Approaches
ERIC Educational Resources Information Center
Law, Dennis Chung Sea
2010-01-01
Purpose: The common approaches to quality assurance (QA), as practiced by most post-secondary education institutions for internal quality monitoring and most QA authorities for external quality monitoring (EQM), have been considered by many researchers as having largely failed to address the essence of educational quality. The purpose of this…
Recent advances to obtain real - Time displacements for engineering applications
Celebi, M.
2005-01-01
This paper presents recent developments and approaches (using GPS technology and real-time double-integration) to obtain displacements and, in turn, drift ratios, in real-time or near real-time to meet the needs of the engineering and user community in seismic monitoring and assessing the functionality and damage condition of structures. Drift ratios computed in near real-time allow technical assessment of the damage condition of a building. Relevant parameters, such as the type of connections and story structural characteristics (including geometry) are used in computing drifts corresponding to several pre-selected threshold stages of damage. Thus, drift ratios determined from real-time monitoring can be compared to pre-computed threshold drift ratios. The approaches described herein can be used for performance evaluation of structures and can be considered as building health-monitoring applications.
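As a small illustration of the comparison described above, the sketch below computes interstory drift ratios from floor displacements and checks them against pre-selected threshold values; the displacements, story heights and thresholds are made-up numbers, not those of any monitored building.

```python
# Drift-ratio check against pre-computed damage thresholds; values are illustrative.
floor_displacements_cm = [0.0, 1.2, 2.9, 4.1]     # ground to roof, at one instant
story_heights_cm = [350.0, 330.0, 330.0]
thresholds = [(0.002, "minor"), (0.007, "moderate"), (0.015, "severe")]

for i, h in enumerate(story_heights_cm):
    drift_ratio = abs(floor_displacements_cm[i + 1] - floor_displacements_cm[i]) / h
    level = "none"
    for limit, label in thresholds:
        if drift_ratio >= limit:
            level = label                          # highest threshold exceeded so far
    print(f"story {i + 1}: drift ratio {drift_ratio:.4f} -> damage state: {level}")
```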
Understanding Monitoring Technologies for Adults With Pain: Systematic Literature Review
Rodríguez, Iyubanit; Gerea, Carmen; Fuentes, Carolina; Rossel, Pedro O; Marques, Maíra; Campos, Mauricio
2017-01-01
Background Monitoring of patients may decrease treatment costs and improve quality of care. Pain is the most common health problem that people seek help for in hospitals. Therefore, monitoring patients with pain may have significant impact in improving treatment. Several studies have studied factors affecting pain; however, no previous study has reviewed the contextual information that a monitoring system may capture to characterize a patient’s situation. Objective The objective of this study was to conduct a systematic review to (1) determine what types of technologies have been used to monitor adults with pain, and (2) construct a model of the context information that may be used to implement apps and devices aimed at monitoring adults with pain. Methods A literature search (2005-2015) was conducted in electronic databases pertaining to medical and computer science literature (PubMed, Science Direct, ACM Digital Library, and IEEE Xplore) using a defined search string. Article selection was done through a process of removing duplicates, analyzing title and abstract, and then reviewing the full text of the article. Results In the final analysis, 87 articles were included and 53 of them (61%) used technologies to collect contextual information. A total of 49 types of context information were found and a five-dimension (activity, identity, wellness, environment, physiological) model of context information to monitor adults with pain was proposed, expanding on a previous model. Most technological interfaces for pain monitoring were wearable, possibly because they can be used in more realistic contexts. Few studies focused on older adults, creating a relevant avenue of research on how to create devices for users that may have impaired cognitive skills or low digital literacy. Conclusions The design of monitoring devices and interfaces for adults with pain must deal with the challenge of selecting relevant contextual information to understand the user’s situation, and not overburdening or inconveniencing users with information requests. A model of contextual information may be used by researchers to choose possible contextual information that may be monitored during studies on adults with pain. PMID:29079550
Continuous Seismic Threshold Monitoring
1992-05-31
Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides...area. Two approaches are presented. Site-specific monitoring: By focusing a seismic network on a specific target site, continuous threshold monitoring...recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic
Color in Computer-Assisted Instruction.
ERIC Educational Resources Information Center
Steinberg, Esther R.
Color monitors are in wide use in computer systems. Thus, it is important to understand how to apply color effectively in computer assisted instruction (CAI) and computer based training (CBT). Color can enhance learning, but it does not automatically do so. Indiscriminate application of color can mislead a student and thereby even interfere with…
Monitoring land degradation in southern Tunisia: A test of LANDSAT imagery and digital data
NASA Technical Reports Server (NTRS)
Hellden, U.; Stern, M.
1980-01-01
The possible use of LANDSAT imagery and digital data for monitoring desertification indicators in Tunisia was studied. Field data were sampled in Tunisia to estimate the mapping accuracy of maps generated through interpretation of LANDSAT false color composites and through processing of LANDSAT computer compatible tapes, respectively. Temporal change studies were carried out through geometric registration of computer-classified windows from 1972 to classified data from 1979. Indications of land degradation were noted in some areas. No important differences in results were found between the interpretation approach and the computer processing approach.
TMS communications software. Volume 1: Computer interfaces
NASA Technical Reports Server (NTRS)
Brown, J. S.; Lenker, M. D.
1979-01-01
A prototype bus communications system, which is being used to support the Trend Monitoring System (TMS) as well as for evaluation of the bus concept is considered. Hardware and software interfaces to the MODCOMP and NOVA minicomputers are included. The system software required to drive the interfaces in each TMS computer is described. Documentation of other software for bus statistics monitoring and for transferring files across the bus is also included.
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1992-01-01
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on 6 June 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under a cooperative agreement with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. A flexible scientific staff is provided through a university faculty visitor program, a post doctoral program, and a student visitor program. Not only does this provide appropriate expertise but it also introduces scientists outside of NASA to NASA problems. A small group of core RIACS staff provides continuity and interacts with an ARC technical monitor and scientific advisory group to determine the RIACS mission. RIACS activities are reviewed and monitored by a USRA advisory council and ARC technical monitor. Research at RIACS is currently being done in the following areas: Parallel Computing; Advanced Methods for Scientific Computing; Learning Systems; High Performance Networks and Technology; Graphics, Visualization, and Virtual Environments.
Some computer graphical user interfaces in radiation therapy
Chow, James C L
2016-01-01
In this review, five graphical user interfaces (GUIs) used in radiation therapy practice and research are introduced. They are: (1) the treatment time calculator, superficial X-ray treatment time calculator (SUPCALC), used in superficial X-ray radiation therapy; (2) the monitor unit calculator, electron monitor unit calculator (EMUC), used in electron radiation therapy; (3) the multileaf collimator machine file creator, sliding window intensity modulated radiotherapy (SWIMRT), used to generate fluence maps for research and quality assurance in intensity modulated radiation therapy; (4) the treatment planning system DOSCTP, used in the calculation of 3D dose distributions using Monte Carlo simulation; and (5) the monitor unit calculator, photon beam monitor unit calculator (PMUC), used in photon beam radiation therapy. One common feature of these GUIs is that the user-friendly interfaces are linked to complex formulas and algorithms based on various theories, which do not have to be understood or noted by the user. The user only needs to input the required information, with help from graphical elements, in order to produce the desired results. SUPCALC is a superficial radiation treatment time calculator that uses the GUI technique to provide a convenient way for radiation therapists to calculate the treatment time and keep a record for the skin cancer patient. EMUC is an electron monitor unit calculator for electron radiation therapy. Instead of doing hand calculations according to pre-determined dosimetric tables, the clinical user only needs to supply the drawing of the electron field in a computer graphical file format, the prescription dose, and the beam parameters, and EMUC calculates the monitor units required for the electron beam treatment. EMUC is based on a semi-empirical sector-integration algorithm. SWIMRT is a multileaf collimator machine file creator that generates the fluence map produced by a medical linear accelerator. This machine file controls the multileaf collimator to deliver intensity modulated beams for a specific fluence map used in quality assurance or research. DOSCTP is a treatment planning system using computed tomography images. Radiation beams (photon or electron) with different energies and field sizes produced by a linear accelerator can be placed in different positions to irradiate the tumour in the patient. DOSCTP is linked to a Monte Carlo simulation engine using the EGSnrc-based code, so that the 3D dose distribution can be determined accurately for radiation therapy. Moreover, DOSCTP can be used for treatment planning of patients or small animals. PMUC is a GUI for calculating the monitor units from the prescription dose of the patient in photon beam radiation therapy. The calculation is based on dose corrections for changes in photon beam energy, treatment depth, field size, jaw position, beam axis, treatment distance and beam modifiers. All GUIs mentioned in this review were written either in Microsoft Visual Basic .NET or with a MATLAB GUI development tool called GUIDE. In addition, all GUIs were verified and tested using measurements to ensure that their accuracies were at clinically acceptable levels for implementation. PMID:27027225
Information Assurance and Forensic Readiness
NASA Astrophysics Data System (ADS)
Pangalos, Georgios; Katos, Vasilios
Egalitarianism and justice are amongst the core attributes of a democratic regime and should be also secured in an e-democratic setting. As such, the rise of computer related offenses pose a threat to the fundamental aspects of e-democracy and e-governance. Digital forensics are a key component for protecting and enabling the underlying (e-)democratic values and therefore forensic readiness should be considered in an e-democratic setting. This position paper commences from the observation that the density of compliance and potential litigation activities is monotonically increasing in modern organizations, as rules, legislative regulations and policies are being constantly added to the corporate environment. Forensic practices seem to be departing from the niche of law enforcement and are becoming a business function and infrastructural component, posing new challenges to the security professionals. Having no a priori knowledge on whether a security related event or corporate policy violation will lead to litigation, we advocate that computer forensics need to be applied to all investigatory, monitoring and auditing activities. This would result into an inflation of the responsibilities of the Information Security Officer. After exploring some commonalities and differences between IS audit and computer forensics, we present a list of strategic challenges the organization and, in effect, the IS security and audit practitioner will face.
OVERSMART Reporting Tool for Flow Computations Over Large Grid Systems
NASA Technical Reports Server (NTRS)
Kao, David L.; Chan, William M.
2012-01-01
Structured grid solvers such as NASA's OVERFLOW compressible Navier-Stokes flow solver can generate large data files that contain convergence histories for flow equation residuals, turbulence model equation residuals, component forces and moments, and component relative motion dynamics variables. Most of today's large-scale problems can extend to hundreds of grids, and over 100 million grid points. However, due to the lack of efficient tools, only a small fraction of information contained in these files is analyzed. OVERSMART (OVERFLOW Solution Monitoring And Reporting Tool) provides a comprehensive report of solution convergence of flow computations over large, complex grid systems. It produces a one-page executive summary of the behavior of flow equation residuals, turbulence model equation residuals, and component forces and moments. Under the automatic option, a matrix of commonly viewed plots such as residual histograms, composite residuals, sub-iteration bar graphs, and component forces and moments is automatically generated. Specific plots required by the user can also be prescribed via a command file or a graphical user interface. Output is directed to the user s computer screen and/or to an html file for archival purposes. The current implementation has been targeted for the OVERFLOW flow solver, which is used to obtain a flow solution on structured overset grids. The OVERSMART framework allows easy extension to other flow solvers.
ALMA Correlator Real-Time Data Processor
NASA Astrophysics Data System (ADS)
Pisano, J.; Amestica, R.; Perez, J.
2005-10-01
The design of a real-time Linux application utilizing Real-Time Application Interface (RTAI) to process real-time data from the radio astronomy correlator for the Atacama Large Millimeter Array (ALMA) is described. The correlator is a custom-built digital signal processor which computes the cross-correlation function of two digitized signal streams. ALMA will have 64 antennas with 2080 signal streams each with a sample rate of 4 giga-samples per second. The correlator's aggregate data output will be 1 gigabyte per second. The software is defined by hard deadlines with high input and processing data rates, while requiring interfaces to non real-time external computers. The designed computer system - the Correlator Data Processor or CDP, consists of a cluster of 17 SMP computers, 16 of which are compute nodes plus a master controller node all running real-time Linux kernels. Each compute node uses an RTAI kernel module to interface to a 32-bit parallel interface which accepts raw data at 64 megabytes per second in 1 megabyte chunks every 16 milliseconds. These data are transferred to tasks running on multiple CPUs in hard real-time using RTAI's LXRT facility to perform quantization corrections, data windowing, FFTs, and phase corrections for a processing rate of approximately 1 GFLOPS. Highly accurate timing signals are distributed to all seventeen computer nodes in order to synchronize them to other time-dependent devices in the observatory array. RTAI kernel tasks interface to the timing signals providing sub-millisecond timing resolution. The CDP interfaces, via the master node, to other computer systems on an external intra-net for command and control, data storage, and further data (image) processing. The master node accesses these external systems utilizing ALMA Common Software (ACS), a CORBA-based client-server software infrastructure providing logging, monitoring, data delivery, and intra-computer function invocation. The software is being developed in tandem with the correlator hardware which presents software engineering challenges as the hardware evolves. The current status of this project and future goals are also presented.
NASA Astrophysics Data System (ADS)
Martinez, M.; Rocha, B.; Li, M.; Shi, G.; Beltempo, A.; Rutledge, R.; Yanishevsky, M.
2012-11-01
The National Research Council Canada (NRC) has worked on the development of structural health monitoring (SHM) test platforms for assessing the performance of sensor systems for load monitoring applications. The first SHM platform consists of a 5.5 m cantilever aluminum beam that provides an optimal scenario for evaluating the ability of a load monitoring system to measure bending, torsion and shear loads. The second SHM platform contains an added level of structural complexity, by consisting of aluminum skins with bonded/riveted stringers, typical of an aircraft lower wing structure. These two load monitoring platforms are well characterized and documented, providing loading conditions similar to those encountered during service. In this study, a micro-electro-mechanical system (MEMS) for acquiring data from triads of gyroscopes, accelerometers and magnetometers is described. The system was used to compute changes in angles at discrete stations along the platforms. The angles obtained from the MEMS were used to compute a second, third or fourth order degree polynomial surface from which displacements at every point could be computed. The use of a new Kalman filter was evaluated for angle estimation, from which displacements in the structure were computed. The outputs of the newly developed algorithms were then compared to the displacements obtained from the linear variable displacement transducers connected to the platforms. The displacement curves were subsequently post-processed either analytically, or with the help of a finite element model of the structure, to estimate strains and loads. The estimated strains were compared with baseline strain gauge instrumentation installed on the platforms. This new approach for load monitoring was able to provide accurate estimates of applied strains and shear loads.
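A minimal sketch, under simplifying assumptions, of the reconstruction step described above for the cantilever beam platform: a polynomial is fitted to rotation angles measured at discrete stations, integrated to obtain the deflection curve, and differentiated to estimate curvature and bending strain. Station positions, angles and the outer-fibre distance are synthetic values, not data from the NRC platforms.

```python
# Reconstruct a cantilever deflection curve from station slope angles (synthetic data).
import numpy as np

x_stations = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5])                 # m along the beam
theta = np.array([0.0008, 0.0021, 0.0033, 0.0042, 0.0048, 0.0051])    # rad, slope dw/dx

slope_coeffs = np.polyfit(x_stations, theta, 3)   # theta(x) as a cubic polynomial
defl_coeffs = np.polyint(slope_coeffs)            # w(x); integration constant 0 -> w(0)=0 at the root
curv_coeffs = np.polyder(slope_coeffs)            # curvature = d(theta)/dx

c = 0.05                                          # distance to outer fibre, m (assumed)
for x in np.linspace(0.0, 5.5, 6):
    w = np.polyval(defl_coeffs, x)                # deflection estimate, m
    strain = c * np.polyval(curv_coeffs, x)       # bending strain estimate
    print(f"x={x:4.1f} m  deflection={w * 1000:6.2f} mm  strain={strain * 1e6:7.1f} microstrain")
```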
Unobtrusive measurement of daily computer use to detect mild cognitive impairment.
Kaye, Jeffrey; Mattek, Nora; Dodge, Hiroko H; Campbell, Ian; Hayes, Tamara; Austin, Daniel; Hatt, William; Wild, Katherine; Jimison, Holly; Pavel, Michael
2014-01-01
Mild disturbances of higher order activities of daily living are present in people diagnosed with mild cognitive impairment (MCI). These deficits may be difficult to detect among those still living independently. Unobtrusive continuous assessment of a complex activity such as home computer use may detect mild functional changes and identify MCI. We sought to determine whether long-term changes in remotely monitored computer use differ in persons with MCI in comparison with cognitively intact volunteers. Participants enrolled in a longitudinal cohort study of unobtrusive in-home technologies to detect cognitive and motor decline in independently living seniors were assessed for computer use (number of days with use, mean daily use, and coefficient of variation of use) measured by remotely monitoring computer session start and end times. More than 230,000 computer sessions from 113 computer users (mean age, 85 years; 38 with MCI) were acquired during a mean of 36 months. In mixed-effects models, there was no difference in computer use at baseline between MCI and intact participants controlling for age, sex, education, race, and computer experience. However, over time, between MCI and intact participants, there was a significant decrease in number of days with use (P = .01), mean daily use (∼1% greater decrease/month; P = .009), and an increase in day-to-day use variability (P = .002). Computer use change can be monitored unobtrusively and indicates individuals with MCI. With 79% of those 55 to 64 years old now online, this may be an ecologically valid and efficient approach to track subtle, clinically meaningful change with aging. Copyright © 2014 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Monitoring by Use of Clusters of Sensor-Data Vectors
NASA Technical Reports Server (NTRS)
Iverson, David L.
2007-01-01
The inductive monitoring system (IMS) is a system of computer hardware and software for automated monitoring of the performance, operational condition, physical integrity, and other aspects of the health of a complex engineering system (e.g., an industrial process line or a spacecraft). The input to the IMS consists of streams of digitized readings from sensors in the monitored system. The IMS determines the type and amount of any deviation of the monitored system from a nominal or normal ( healthy ) condition on the basis of a comparison between (1) vectors constructed from the incoming sensor data and (2) corresponding vectors in a database of nominal or normal behavior. The term inductive reflects the use of a process reminiscent of traditional mathematical induction to learn about normal operation and build the nominal-condition database. The IMS offers two major advantages over prior computational monitoring systems: The computational burden of the IMS is significantly smaller, and there is no need for abnormal-condition sensor data for training the IMS to recognize abnormal conditions. The figure schematically depicts the relationships among the computational processes effected by the IMS. Training sensor data are gathered during normal operation of the monitored system, detailed computational simulation of operation of the monitored system, or both. The training data are formed into vectors that are used to generate the database. The vectors in the database are clustered into regions that represent normal or nominal operation. Once the database has been generated, the IMS compares the vectors of incoming sensor data with vectors representative of the clusters. The monitored system is deemed to be operating normally or abnormally, depending on whether the vector of incoming sensor data is or is not, respectively, sufficiently close to one of the clusters. For this purpose, a distance between two vectors is calculated by a suitable metric (e.g., Euclidean distance) and "sufficiently close" signifies lying at a distance less than a specified threshold value. It must be emphasized that although the IMS is intended to detect off-nominal or abnormal performance or health, it is not necessarily capable of performing a thorough or detailed diagnosis. Limited diagnostic information may be available under some circumstances. For example, the distance of a vector of incoming sensor data from the nearest cluster could serve as an indication of the severity of a malfunction. The identity of the nearest cluster may be a clue as to the identity of the malfunctioning component or subsystem. It is possible to decrease the IMS computation time by use of a combination of cluster-indexing and -retrieval methods. For example, in one method, the distances between each cluster and two or more reference vectors can be used for the purpose of indexing and retrieval. The clusters are sorted into a list according to these distance values, typically in ascending order of distance. When a set of input data arrives and is to be tested, the data are first arranged as an ordered set (that is, a vector). The distances from the input vector to the reference points are computed. The search of clusters from the list can then be limited to those clusters lying within a certain distance range from the input vector; the computation time is reduced by not searching the clusters at a greater distance.
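The sketch below illustrates the monitoring step described above under stated assumptions: clusters are built from nominal training vectors only (here with a simple k-means loop standing in for IMS's own cluster-building procedure), and an incoming sensor vector is flagged when its Euclidean distance to the nearest cluster exceeds a threshold. All data and the threshold are synthetic.

```python
# Nominal-cluster monitoring sketch; k-means is a stand-in for the IMS clustering step.
import numpy as np

rng = np.random.default_rng(1)

def kmeans(data, k=4, iters=50):
    centres = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(data[:, None] - centres, axis=2), axis=1)
        centres = np.array([data[labels == j].mean(axis=0) if np.any(labels == j)
                            else centres[j] for j in range(k)])
    return centres

# 1) Train on nominal sensor vectors only (no failure data needed).
nominal = rng.normal(loc=[5.0, 1.0, 0.2], scale=0.1, size=(500, 3))
centres = kmeans(nominal)
threshold = 0.6                                   # "sufficiently close" distance

# 2) Monitor: compare each incoming vector with its nearest nominal cluster.
for reading in ([5.05, 1.02, 0.18], [5.0, 1.0, 1.5]):     # second one is off-nominal
    d = np.linalg.norm(centres - np.asarray(reading), axis=1).min()
    print(reading, "nominal" if d < threshold else f"off-nominal (distance {d:.2f})")
```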
Obuchowski, Nancy A; Barnhart, Huiman X; Buckler, Andrew J; Pennello, Gene; Wang, Xiao-Feng; Kalpathy-Cramer, Jayashree; Kim, Hyun J Grace; Reeves, Anthony P
2015-02-01
Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients' disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms' bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms' performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for quantitative imaging biomarker studies. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Software Supports Distributed Operations via the Internet
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Backers, Paul; Steinke, Robert
2003-01-01
Multi-mission Encrypted Communication System (MECS) is a computer program that enables authorized, geographically dispersed users to gain secure access to a common set of data files via the Internet. MECS is compatible with legacy application programs and a variety of operating systems. The MECS architecture is centered around maintaining consistent replicas of data files cached on remote computers. MECS monitors these files and, whenever one is changed, the changed file is committed to a master database as soon as network connectivity makes it possible to do so. MECS provides subscriptions for remote users to automatically receive new data as they are generated. Remote users can be producers as well as consumers of data. Whereas a prior program that provides some of the same services treats disconnection of a user from the network of users as an error from which recovery must be effected, MECS treats disconnection as a nominal state of the network: This leads to a different design that is more efficient for serving many users, each of whom typically connects and disconnects frequently and wants only a small fraction of the data at any given time.
Monitoring Statistics Which Have Increased Power over a Reduced Time Range.
ERIC Educational Resources Information Center
Tang, S. M.; MacNeill, I. B.
1992-01-01
The problem of monitoring trends for changes at unknown times is considered. Statistics that permit one to focus high power on a segment of the monitored period are studied. Numerical procedures are developed to compute the null distribution of these statistics. (Author)
NASA Astrophysics Data System (ADS)
Semenov, Z. V.; Labusov, V. A.
2017-11-01
Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.
Computer controlled fluorometer device and method of operating same
Kolber, Z.; Falkowski, P.
1990-07-17
A computer controlled fluorometer device and method of operating same, said device being made to include a pump flash source and a probe flash source and one or more sample chambers in combination with a light condenser lens system and associated filters and reflectors and collimators, as well as signal conditioning and monitoring means and a programmable computer means and a software programmable source of background irradiance that is operable according to the method of the invention to rapidly, efficiently and accurately measure photosynthetic activity by precisely monitoring and recording changes in fluorescence yield produced by a controlled series of predetermined cycles of probe and pump flashes from the respective probe and pump sources that are controlled by the computer means. 13 figs.
Test Anxiety, Computer-Adaptive Testing and the Common Core
ERIC Educational Resources Information Center
Colwell, Nicole Makas
2013-01-01
This paper highlights the current findings and issues regarding the role of computer-adaptive testing in test anxiety. The computer-adaptive test (CAT) proposed by one of the Common Core consortia brings these issues to the forefront. Research has long indicated that test anxiety impairs student performance. More recent research indicates that…
CLOUDCLOUD : general-purpose instrument monitoring and data managing software
NASA Astrophysics Data System (ADS)
Dias, António; Amorim, António; Tomé, António
2016-04-01
An effective experiment is dependent on the ability to store and deliver data and information to all participant parties regardless of their degree of involvement in the specific parts that make the experiment a whole. Having fast, efficient and ubiquitous access to data will increase visibility and discussion, such that the outcome will have already been reviewed several times, strengthening the conclusions. The CLOUD project aims at providing users with a general purpose data acquisition, management and instrument monitoring platform that is fast, easy to use, lightweight and accessible to all participants of an experiment. This work is now implemented in the CLOUD experiment at CERN and will be fully integrated with the experiment as of 2016. Despite being used in an experiment of the scale of CLOUD, this software can also be used in any size of experiment or monitoring station, from single computers to large networks of computers, to monitor any sort of instrument output without influencing the individual instrument's DAQ. Instrument data and metadata are stored and accessed via a specially designed database architecture and any type of instrument output is accepted using our continuously growing parsing application. Multiple databases can be used to separate different data taking periods, or a single database can be used if, for instance, an experiment is continuous. A simple web-based application gives the user total control over the monitored instruments and their data, allowing data visualization and download, upload of processed data and the ability to edit existing instruments or add new instruments to the experiment. When in a network, new computers are immediately recognized and added to the system and are able to monitor instruments connected to them. Automatic computer integration is achieved by a locally running python-based parsing agent that communicates with a main server application, guaranteeing that all instruments assigned to that computer are monitored with parsing intervals as fast as milliseconds. This software (server+agents+interface+database) comes in easy and ready-to-use packages that can be installed in any operating system, including Android and iOS systems. This software is ideal for use in modular experiments or monitoring stations with large variability in instruments and measuring methods or in large collaborations, where data requires homogenization in order to be effectively transmitted to all involved parties. This work presents the software and provides a performance comparison with previously used monitoring systems in the CLOUD experiment at CERN.
A framework for cognitive monitoring using computer game interactions.
Jimison, Holly B; Pavel, Misha; Bissell, Payton; McKanna, James
2007-01-01
Many countries are faced with a rapidly increasing economic and social challenge of caring for their elderly population. Cognitive issues are at the forefront of the list of concerns. People over the age of 75 are at risk for medically related cognitive decline and confusion, and the early detection of cognitive problems would allow for more effective clinical intervention. However, standard cognitive assessments are not diagnostically sensitive and are performed infrequently. To address these issues, we have developed a set of adaptive computer games to monitor cognitive performance in a home environment. Assessment algorithms for various aspects of cognition are embedded in the games. The monitoring of these metrics allows us to detect within subject trends over time, providing a method for the early detection of cognitive decline. In addition, the real-time information on cognitive state is used to adapt the user interface to the needs of the individual user. In this paper we describe the software architecture and methodology for monitoring cognitive performance using data from natural computer interactions in a home setting.
Migration monitoring in shorebirds and landbirds: commonalities and differences
Susan K. Skagen; Jonathan Bart
2005-01-01
Several aspects of a developing program to monitor shorebirds in the western hemisphere are pertinent to migration monitoring of landbirds. Goals of the Program for Regional and International Shorebird Monitoring (PRISM) include estimating population size and population trends of 74 species, sub-species and distinct populations of North American shorebirds, monitoring...
Lindenmayer DB and Likens GE (eds): Effective ecological monitoring [book review
Charles T. Scott
2011-01-01
Long-term ecological monitoring is becoming increasingly important but more challenging to fund. Lindenmayer and Likens describe the common characteristics of successful monitoring programs and of those that fail. They draw upon their monitoring experiences together, independently, and from a variety of other long-term monitoring programs around the world. They then...
Real-World Neuroimaging Technologies
2013-05-10
system enables long-term wear of up to 10 consecutive hours of operation time. The system's wireless technologies, light weight (200 g), and dry sensor ... biomarkers, body sensor networks, brain-computer interaction, brain-computer interfaces, data acquisition, electroencephalography monitoring, translational ... brain activity in real-world scenarios. INDEX TERMS: Behavioral science, biomarkers, body sensor networks, brain-computer interfaces, brain-computer ...
Aircraft Alerting Systems Standardization Study. Phase IV. Accident Implications on Systems Design.
1982-06-01
computing and processing to assimilate and process status information using ... provided with capabilities in computing and processing, sensing, interfacing, and controlling and displaying. Computing and Processing - Algorithms ... alerting system to perform a flight status monitor function would require additional sensing, computing and processing, interfacing, and controlling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mancosu, Pietro; Fogliata, Antonella, E-mail: Antonella.Fogliata@humanitas.it; Stravato, Antonella
2016-07-01
Frameless stereotactic radiosurgery (SRS) requires dedicated systems to monitor the patient position during the treatment to avoid target underdosage due to involuntary shifts. The optical surface monitoring system (OSMS) is here evaluated in a phantom-based study. The new EDGE linear accelerator from Varian (Varian, Palo Alto, CA) integrates, for cranial lesions, the common cone beam computed tomography (CBCT) and kV-MV portal images with the optical surface monitoring system (OSMS), a device able to detect the patient's face movements in real time in all 6 couch axes (vertical, longitudinal, lateral, rotation along the vertical axis, pitch, and roll). We have evaluated the OSMS imaging capability in checking the phantom's position and monitoring its motion. With this aim, a home-made cranial phantom was developed to evaluate the OSMS accuracy in 4 different experiments: (1) comparison with CBCT in isocenter location, (2) capability to recognize predefined shifts up to 2° or 3 cm, (3) evaluation at different couch angles, (4) ability to properly reconstruct the surface when the linac gantry visually blocks one of the cameras. The OSMS system was shown, with a phantom, to be accurate for positioning with respect to the CBCT imaging system, with differences of 0.6 ± 0.3 mm for linear vector displacement and a maximum rotational inaccuracy of 0.3°. OSMS presented an accuracy of 0.3 mm for displacements up to 1 cm and 1°, and 0.5 mm for larger displacements. Different couch angles (45° and 90°) induced a mean vector uncertainty < 0.4 mm. Coverage of 1 camera produced an uncertainty < 0.5 mm. Translations and rotations of a phantom can be accurately detected with the optical surface detector system.
An on-line reactivity and power monitor for a TRIGA reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binney, Stephen E.; Bakir, Alia J.
1988-07-01
As personal computers (PCs) become more and more of a significant influence on modern technology, it is reasonable that at some point in time they would be used to interface with TRIGA reactors. A personal computer with a special interface board has been used to monitor key parameters during operation of the Oregon State University TRIGA Reactor (OSTR). A description of the apparatus used and sample results are included.
DIALOG: An executive computer program for linking independent programs
NASA Technical Reports Server (NTRS)
Glatt, C. R.; Hague, D. S.; Watson, D. A.
1973-01-01
A very large scale computer programming procedure called the DIALOG executive system was developed for the CDC 6000 series computers. The executive computer program, DIALOG, controls the sequence of execution and data management function for a library of independent computer programs. Communication of common information is accomplished by DIALOG through a dynamically constructed and maintained data base of common information. Each computer program maintains its individual identity and is unaware of its contribution to the large scale program. This feature makes any computer program a candidate for use with the DIALOG executive system. The installation and uses of the DIALOG executive system are described.
Classification Models for Pulmonary Function using Motion Analysis from Phone Sensors.
Cheng, Qian; Juen, Joshua; Bellam, Shashi; Fulara, Nicholas; Close, Deanna; Silverstein, Jonathan C; Schatz, Bruce
2016-01-01
Smartphones are ubiquitous, but it is unknown what physiological functions can be monitored at clinical quality. Pulmonary function is a standard measure of health status for cardiopulmonary patients. We have shown phone sensors can accurately measure walking patterns. Here we show that improved classification models can accurately measure pulmonary function, with sole inputs being sensor data from carried phones. Twenty-four cardiopulmonary patients performed six minute walk tests in pulmonary rehabilitation at a regional hospital. They carried smartphones running custom software recording phone motion. For every patient, every ten-second interval was correctly computed. The trained model perfectly computed the GOLD level 1/2/3, which is a standard categorization of pulmonary function as measured by spirometry. These results are encouraging towards field trials with passive monitors always running in the background. We expect patients can simply carry their phones during daily living, while supporting automatic computation of pulmonary function for health monitoring.
Data Auditor: Analyzing Data Quality Using Pattern Tableaux
NASA Astrophysics Data System (ADS)
Srivastava, Divesh
Monitoring databases maintain configuration and measurement tables about computer systems, such as networks and computing clusters, and serve important business functions, such as troubleshooting customer problems, analyzing equipment failures, planning system upgrades, etc. These databases are prone to many data quality issues: configuration tables may be incorrect due to data entry errors, while measurement tables may be affected by incorrect, missing, duplicate and delayed polls. We describe Data Auditor, a tool for analyzing data quality and exploring data semantics of monitoring databases. Given a user-supplied constraint, such as a boolean predicate expected to be satisfied by every tuple, a functional dependency, or an inclusion dependency, Data Auditor computes "pattern tableaux", which are concise summaries of subsets of the data that satisfy or fail the constraint. We discuss the architecture of Data Auditor, including the supported types of constraints and the tableau generation mechanism. We also show the utility of our approach on an operational network monitoring database.
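As a rough illustration of the idea (not the Data Auditor implementation), the sketch below groups tuples by the values of chosen pattern attributes and reports, for each pattern, how often a user-supplied boolean constraint holds. All attribute names, thresholds, and sample rows are assumptions made up for the example.

```python
from collections import defaultdict

def pattern_tableau(rows, pattern_attrs, predicate, min_support=2):
    """Group rows by the values of `pattern_attrs` and report, per pattern,
    the support (row count) and confidence (fraction satisfying `predicate`).
    Low-confidence patterns point at likely data-quality problems."""
    groups = defaultdict(list)
    for row in rows:
        key = tuple(row[a] for a in pattern_attrs)
        groups[key].append(bool(predicate(row)))
    tableau = []
    for key, results in groups.items():
        support = len(results)
        if support >= min_support:
            tableau.append({"pattern": dict(zip(pattern_attrs, key)),
                            "support": support,
                            "confidence": round(sum(results) / support, 2)})
    return sorted(tableau, key=lambda t: t["confidence"])

# Hypothetical network-configuration rows; the constraint expects every polled
# interface to report a positive speed.
rows = [
    {"vendor": "A", "iface": "ge-0/0/1", "speed_mbps": 1000},
    {"vendor": "A", "iface": "ge-0/0/2", "speed_mbps": 1000},
    {"vendor": "B", "iface": "eth0", "speed_mbps": 0},
    {"vendor": "B", "iface": "eth1", "speed_mbps": 0},
]
print(pattern_tableau(rows, ["vendor"], lambda r: r["speed_mbps"] > 0))
```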
Katayama, Hirohito; Higo, Takashi; Tokunaga, Yuji; Katoh, Shigeo; Hiyama, Yukio; Morikawa, Kaoru
2008-01-01
A practical, risk-based monitoring approach using the combined data collected from actual experiments and computer simulations was developed for the qualification of an EU GMP Annex 1 Grade B, ISO Class 7 area. This approach can locate and minimize the representative number of sampling points used for microbial contamination risk assessment. We conducted a case study on an aseptic clean room, newly constructed and specifically designed for the use of a restricted access barrier system (RABS). Hotspots were located using a previously published empirical measurement method, three-dimensional airflow analysis. Local mean age of air (LMAA) values were calculated based on computer simulations. Comparable results were found using actual measurements and simulations, demonstrating the potential usefulness of such tools in estimating contamination risks based on the airflow characteristics of a clean room. Intensive microbial monitoring and particle monitoring at the Grade B environmental qualification stage, as well as three-dimensional airflow analysis, were also conducted to reveal contamination hotspots. We found representative hotspots were located at perforated panels covering the air exhausts where the major piston airflows collect in the Grade B room, as well as at any locations within the room that were identified as having stagnant air. However, we also found that the floor surface air around the exit airway of the RABS EU GMP Annex 1 Grade A, ISO Class 5 area was always remarkably clean, possibly due to the immediate sweep of the piston airflow, which prevents dispersed human microbes from falling in a Stokes-type manner on settling plates placed on the floor around the Grade A exit airway. In addition, this airflow is expected to be clean with a significantly low LMAA. Based on these observed results, we propose a simplified daily monitoring program to monitor microbial contamination in Grade B environments. To locate hotspots we propose using a combination of computer simulation, actual airflow measurements, and intensive environmental monitoring at the qualification stage. Thereafter, instead of particle or microbial air monitoring, we recommend the use of microbial surface monitoring at the main air exhaust. These measures would be sufficient to assure the efficiency of the monitoring program, as well as to minimize the number of surface sampling points used in environments surrounding a RABS.
Practical Algorithms for the Longest Common Extension Problem
NASA Astrophysics Data System (ADS)
Ilie, Lucian; Tinta, Liviu
The Longest Common Extension problem considers a string s and computes, for each of a number of pairs (i,j), the longest substring of s that starts at both i and j. It appears as a subproblem in many fundamental string problems and can be solved by linear-time preprocessing of the string that allows (worst-case) constant-time computation for each pair. The two known approaches use powerful algorithms: either constant-time computation of the Lowest Common Ancestor in trees or constant-time computation of Range Minimum Queries (RMQ) in arrays. We show here that, from a practical point of view, such complicated approaches are not needed. We give two very simple algorithms for this problem that require no preprocessing. The first needs only the string and is significantly faster than all previous algorithms on the average. The second combines the first with a direct RMQ computation on the Longest Common Prefix array. It takes advantage of the superior speed of the cache memory and is the fastest on virtually all inputs.
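The first, preprocessing-free approach amounts to a direct character-by-character scan from the two starting positions. The following minimal Python sketch shows that direct scan (the example string and query pairs are made up; the paper's second, RMQ-based variant is not shown):

```python
def lce(s, i, j):
    """Longest common extension: length of the longest substring of s that
    starts at both positions i and j, found by direct character comparison."""
    n, k = len(s), 0
    while i + k < n and j + k < n and s[i + k] == s[j + k]:
        k += 1
    return k

s = "abracadabra"
print(lce(s, 0, 7))   # "abra..." vs "abra" -> 4
print(lce(s, 1, 8))   # "brac..." vs "bra"  -> 3
```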
Structural Probability Concepts Adapted to Electrical Engineering
NASA Technical Reports Server (NTRS)
Steinberg, Eric P.; Chamis, Christos C.
1994-01-01
Through the use of equivalent variable analogies, the authors demonstrate how an electrical subsystem can be modeled by an equivalent structural subsystem. This allows the electrical subsystem to be probabilistically analyzed by using available structural reliability computer codes such as NESSUS. With the ability to analyze the electrical subsystem probabilistically, we can evaluate the reliability of systems that include both structural and electrical subsystems. Common examples of such systems are a structural subsystem integrated with a health-monitoring subsystem, and smart structures. Since these systems have electrical subsystems that directly affect the operation of the overall system, probabilistically analyzing them could lead to improved reliability and reduced costs. The direct effect of the electrical subsystem on the structural subsystem is of secondary order and is not considered in the scope of this work.
Advanced Transport Operating System (ATOPS) color displays software description: MicroVAX system
NASA Technical Reports Server (NTRS)
Slominski, Christopher J.; Plyler, Valerie E.; Dickson, Richard W.
1992-01-01
This document describes the software created for the Display MicroVAX computer used for the Advanced Transport Operating Systems (ATOPS) project on the Transport Systems Research Vehicle (TSRV). The software delivery of February 27, 1991, known as the 'baseline display system', is the one described in this document. Throughout this publication, module descriptions are presented in a standardized format which contains module purpose, calling sequence, detailed description, and global references. The global references section includes subroutines, functions, and common variables referenced by a particular module. The system described supports the Research Flight Deck (RFD) of the TSRV. The RFD contains eight Cathode Ray Tubes (CRTs) which depict a Primary Flight Display, Navigation Display, System Warning Display, Takeoff Performance Monitoring System Display, and Engine Display.
VRACK: measuring pedal kinematics during stationary bike cycling.
Farjadian, Amir B; Kong, Qingchao; Gade, Venkata K; Deutsch, Judith E; Mavroidis, Constantinos
2013-06-01
Ankle impairment and lower limb asymmetries in strength and coordination are common symptoms for individuals with selected musculoskeletal and neurological impairments. The virtual reality augmented cycling kit (VRACK) was designed as a compact mechatronics system for lower limb and mobility rehabilitation. The system measures interaction forces and cardiac activity during cycling in a virtual environment. Kinematics measurement was then added to the system. Due to the constrained problem definition, a combination of an inertial measurement unit (IMU) and Kalman filtering was adopted to compute the optimal pedal angular displacement during dynamic cycling exercise. Using a novel benchmarking method, the accuracy of IMU-based kinematics measurement was evaluated. Relatively accurate angular measurements were achieved. The enhanced VRACK system can serve as a rehabilitation device to monitor biomechanical and physiological variables during cycling on a stationary bike.
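The abstract does not give the filter equations; as a purely illustrative sketch of the general IMU-plus-Kalman idea (not the VRACK implementation), the one-dimensional filter below integrates a gyroscope rate as the prediction and corrects it with an accelerometer-derived angle. The noise variances, sample rate, and toy data are all assumptions.

```python
def kalman_angle(gyro_rates, accel_angles, dt=0.01, q=0.01, r=0.1):
    """Minimal 1-D Kalman filter: propagate the angle with the gyro rate,
    then blend in the accelerometer-derived angle measurement.
    q = assumed process-noise variance, r = assumed measurement-noise variance."""
    angle, p = accel_angles[0], 1.0       # state estimate and its variance
    estimates = []
    for rate, meas in zip(gyro_rates, accel_angles):
        angle += rate * dt                # predict from the gyro rate
        p += q
        k = p / (p + r)                   # Kalman gain
        angle += k * (meas - angle)       # correct with the accelerometer angle
        p *= (1.0 - k)
        estimates.append(angle)
    return estimates

# Toy data: constant 10 deg/s rotation with noisy accelerometer angles (degrees).
gyro = [10.0] * 5
accel = [0.0, 0.15, 0.18, 0.33, 0.42]
print([round(a, 3) for a in kalman_angle(gyro, accel)])
```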
Remote maintenance monitoring system
NASA Technical Reports Server (NTRS)
Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)
1992-01-01
A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.
Wong, W; Sivak, J G; Moran, K L
2003-01-01
This study determines the relative ocular lens irritancy of 16 common partially transparent or non-transparent consumer hygiene products. The irritancy was found by measuring the changes in the sharpness of focus [referred to as the back vertex distance (BVD) variability] of the cultured bovine lens using a scanning laser In Vitro Assay System. This method consists of a laser beam that scans across the lens, and a computer, which then analyses the average focal length (mm), the BVD variability (mm), and the intensity of the beam transmitted. Lenses were exposed to the 16 hygiene products and the lens' focusing ability was monitored over 192 h. The products are semi-solids or solids (e.g. gels, lotions, shampoos). They are categorized into six groups: shampoos, body washes, lotions, toothpastes, deodorant, and anti-perspirant. Damage (measured by > 1 mm BVD variability) occurred more slowly for the shampoos, especially in the case of baby shampoo. The results indicate that shampoos exhibit the lowest level of ocular lens toxicity (irritability) while the deodorant is the most damaging.
Protecting Against Faults in JPL Spacecraft
NASA Technical Reports Server (NTRS)
Morgan, Paula
2007-01-01
A paper discusses techniques for protecting against faults in spacecraft designed and operated by NASA's Jet Propulsion Laboratory (JPL). The paper addresses, more specifically, fault-protection requirements and techniques common to most JPL spacecraft (in contradistinction to unique, mission-specific techniques), standard practices in the implementation of these techniques, and fault-protection software architectures. Common requirements include those to protect onboard command, data-processing, and control computers; protect against loss of Earth/spacecraft radio communication; maintain safe temperatures; and recover from power overloads. The paper describes fault-protection techniques as part of a fault-management strategy that also includes functional redundancy, redundant hardware, and autonomous monitoring of (1) the operational and health statuses of spacecraft components, (2) temperatures inside and outside the spacecraft, and (3) allocation of power. The strategy also provides for preprogrammed automated responses to anomalous conditions. In addition, the software running in almost every JPL spacecraft incorporates a general-purpose "Safe Mode" response algorithm that configures the spacecraft in a lower-power state that is safe and predictable, thereby facilitating diagnosis of more complex faults by a team of human experts on Earth.
[Brain abscess--modern diagnostics and therapeutic treatment].
Kalinowska-Nowak, Anna; Garlicki, Aleksander; Bociaga-Jasik, Monika
2009-01-01
Brain abscess is one of the most serious diseases of the central nervous system. The condition is two to three times more common among men, and the morbidity rate is highest in the fourth decade of life. Etiologic agents of brain abscess include bacteria, fungi, protozoa and parasites. A brain abscess can develop from the spread of infection from local sites or by bloodborne spread from distant sites. In 10-15% of cases multiple abscesses develop. Headache is the most common symptom. The radiologic tests, computed tomography and magnetic resonance imaging, are the tests of choice for diagnosis and for monitoring of treatment. Treatment of brain abscesses requires cooperation of different specialists: infectious diseases physicians, neuroradiologists, neurologists and neurosurgeons. The decision about therapeutic methods depends on the number, size and localization of lesions, and on the patient's condition. In conservative treatment, empiric antibiotic therapy and supportive treatment are used. Currently, two methods of surgical treatment are used: CT-guided stereotactic aspiration and incision of the brain abscess by craniotomy. The mortality rate is currently 6 to 24%. Permanent neurological complications are reported in 30-56% of patients.
Examining pharmaceuticals using terahertz spectroscopy
NASA Astrophysics Data System (ADS)
Sulovská, Kateřina; Křesálek, Vojtěch
2015-10-01
Pharmaceutical trafficking is a common issue in countries where such products are under a stricter dispensing regime with monitoring of users. The most commonly smuggled pharmaceuticals include the trade names Paralen Plus, Modafen, Clarinase repetabs, Aspirin complex, etc. These are transported mainly from Eastern Europe (e.g. Poland, Ukraine, Russia) to countries like the Czech Republic, which is said to have one of the highest numbers of methamphetamine producers in Europe. The aim of this paper is to describe the possibility of using terahertz spectroscopy as an examining tool to distinguish between pharmaceuticals containing pseudoephedrine compounds and those without it. The medicaments selected for the experimental part contain as an active ingredient pseudoephedrine hydrochloride or pseudoephedrine sulphate. The results show that it is possible to find pseudoephedrine compound spectra in samples by comparison with previously computed and experimentally obtained ones, and point out that spectra of pills with the same brand name may vary according to their expiration date, batch, and the amount of water vapour absorbed from the ambient air. A misleading spectrum also occurred during the experimental work in a sample without the chosen active ingredient, which shows persistent minor limitations of terahertz spectroscopy. All measurements were done on the TPS Spectra 3000 instrument.
NASA Technical Reports Server (NTRS)
Oliger, Joseph
1993-01-01
The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on 6 June 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. A flexible scientific staff is provided through a university faculty visitor program, a post doctoral program, and a student visitor program. Not only does this provide appropriate expertise but it also introduces scientists outside of NASA to NASA problems. A small group of core RIACS staff provides continuity and interacts with an ARC technical monitor and scientific advisory group to determine the RIACS mission. RIACS activities are reviewed and monitored by a USRA advisory council and ARC technical monitor. Research at RIACS is currently being done in the following areas: Parallel Computing, Advanced Methods for Scientific Computing, High Performance Networks and Technology, and Learning Systems. Parallel compiler techniques, adaptive numerical methods for flows in complicated geometries, and optimization were identified as important problems to investigate for ARC's involvement in the Computational Grand Challenges of the next decade.
Bakrania, Kishan; Yates, Thomas; Rowlands, Alex V.; Esliger, Dale W.; Bunnewell, Sarah; Sanders, James; Davies, Melanie; Khunti, Kamlesh; Edwardson, Charlotte L.
2016-01-01
Objectives (1) To develop and internally-validate Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) thresholds for separating sedentary behaviours from common light-intensity physical activities using raw acceleration data collected from both hip- and wrist-worn tri-axial accelerometers; and (2) to compare and evaluate the performances between the ENMO and MAD metrics. Methods Thirty-three adults [mean age (standard deviation (SD)) = 27.4 (5.9) years; mean BMI (SD) = 23.9 (3.7) kg/m2; 20 females (60.6%)] wore four accelerometers; an ActiGraph GT3X+ and a GENEActiv on the right hip; and an ActiGraph GT3X+ and a GENEActiv on the non-dominant wrist. Under laboratory-conditions, participants performed 16 different activities (11 sedentary behaviours and 5 light-intensity physical activities) for 5 minutes each. ENMO and MAD were computed from the raw acceleration data, and logistic regression and receiver-operating-characteristic (ROC) analyses were implemented to derive thresholds for activity discrimination. Areas under ROC curves (AUROC) were calculated to summarise performances and thresholds were assessed via executing leave-one-out-cross-validations. Results For both hip and wrist monitor placements, in comparison to the ActiGraph GT3X+ monitors, the ENMO and MAD values derived from the GENEActiv devices were observed to be slightly higher, particularly for the lower-intensity activities. Monitor-specific hip and wrist ENMO and MAD thresholds showed excellent ability for separating sedentary behaviours from motion-based light-intensity physical activities (in general, AUROCs >0.95), with validation indicating robustness. However, poor classification was experienced when attempting to isolate standing still from sedentary behaviours (in general, AUROCs <0.65). The ENMO and MAD metrics tended to perform similarly across activities and accelerometer brands. Conclusions Researchers can utilise these robust monitor-specific hip and wrist ENMO and MAD thresholds, in order to accurately separate sedentary behaviours from common motion-based light-intensity physical activities. However, caution should be taken if isolating sedentary behaviours from standing is of particular interest. PMID:27706241
Bakrania, Kishan; Yates, Thomas; Rowlands, Alex V; Esliger, Dale W; Bunnewell, Sarah; Sanders, James; Davies, Melanie; Khunti, Kamlesh; Edwardson, Charlotte L
2016-01-01
(1) To develop and internally-validate Euclidean Norm Minus One (ENMO) and Mean Amplitude Deviation (MAD) thresholds for separating sedentary behaviours from common light-intensity physical activities using raw acceleration data collected from both hip- and wrist-worn tri-axial accelerometers; and (2) to compare and evaluate the performances between the ENMO and MAD metrics. Thirty-three adults [mean age (standard deviation (SD)) = 27.4 (5.9) years; mean BMI (SD) = 23.9 (3.7) kg/m2; 20 females (60.6%)] wore four accelerometers; an ActiGraph GT3X+ and a GENEActiv on the right hip; and an ActiGraph GT3X+ and a GENEActiv on the non-dominant wrist. Under laboratory-conditions, participants performed 16 different activities (11 sedentary behaviours and 5 light-intensity physical activities) for 5 minutes each. ENMO and MAD were computed from the raw acceleration data, and logistic regression and receiver-operating-characteristic (ROC) analyses were implemented to derive thresholds for activity discrimination. Areas under ROC curves (AUROC) were calculated to summarise performances and thresholds were assessed via executing leave-one-out-cross-validations. For both hip and wrist monitor placements, in comparison to the ActiGraph GT3X+ monitors, the ENMO and MAD values derived from the GENEActiv devices were observed to be slightly higher, particularly for the lower-intensity activities. Monitor-specific hip and wrist ENMO and MAD thresholds showed excellent ability for separating sedentary behaviours from motion-based light-intensity physical activities (in general, AUROCs >0.95), with validation indicating robustness. However, poor classification was experienced when attempting to isolate standing still from sedentary behaviours (in general, AUROCs <0.65). The ENMO and MAD metrics tended to perform similarly across activities and accelerometer brands. Researchers can utilise these robust monitor-specific hip and wrist ENMO and MAD thresholds, in order to accurately separate sedentary behaviours from common motion-based light-intensity physical activities. However, caution should be taken if isolating sedentary behaviours from standing is of particular interest.
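For orientation, ENMO and MAD have standard definitions in the accelerometry literature: ENMO is the vector magnitude minus 1 g with negative values truncated to zero, and MAD is the mean absolute deviation of the vector magnitude about its epoch mean. The sketch below computes both per epoch from raw tri-axial data expressed in g; the epoch length, sample rate, and synthetic data are illustrative assumptions, and the thresholds derived in the study would then be applied to such epoch values.

```python
import numpy as np

def enmo_and_mad(acc, fs, epoch_s=5):
    """Per-epoch ENMO and MAD from raw tri-axial accelerations in g.
    ENMO = mean of max(|a| - 1, 0); MAD = mean(| |a| - mean(|a|) |) per epoch."""
    vm = np.linalg.norm(acc, axis=1)                 # vector magnitude per sample
    n = int(fs * epoch_s)                            # samples per epoch
    enmo, mad = [], []
    for start in range(0, len(vm) - n + 1, n):
        seg = vm[start:start + n]
        enmo.append(np.mean(np.maximum(seg - 1.0, 0.0)))
        mad.append(np.mean(np.abs(seg - seg.mean())))
    return np.array(enmo), np.array(mad)

# Toy signal: 10 s of near-stationary data at 100 Hz (values in g).
rng = np.random.default_rng(0)
acc = np.tile([0.0, 0.0, 1.0], (1000, 1)) + rng.normal(0, 0.02, (1000, 3))
e, m = enmo_and_mad(acc, fs=100)
print(e.round(4), m.round(4))
```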
An Intelligent CAI Monitor and Generative Tutor. Interim Report.
ERIC Educational Resources Information Center
Koffman, Elliot B.; And Others
Design techniques for generative computer-assisted-instructional (CAI) systems are described in this report. These are systems capable of generating problems for students and of deriving and monitoring solutions; problem difficulty, instructional pace, and depth of monitoring are all individually tailored and parts of the solution algorithms can…
Solar Wind Monitor--A School Geophysics Project
ERIC Educational Resources Information Center
Robinson, Ian
2018-01-01
Described is an established geophysics project to construct a solar wind monitor based on a nT resolution fluxgate magnetometer. Low-cost and appropriate from school to university level, it incorporates elements of astrophysics, geophysics, electronics, programming, computer networking and signal processing. The system monitors the earth's field in…
Dynamic data filtering system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-04-29
A computer-implemented dynamic data filtering system and method for selectively choosing operating data of a monitored asset that modifies or expands a learned scope of an empirical model of normal operation of the monitored asset while simultaneously rejecting operating data of the monitored asset that is indicative of excessive degradation or impending failure of the monitored asset, and utilizing the selectively chosen data for adaptively recalibrating the empirical model to more accurately monitor asset aging changes or operating condition changes of the monitored asset.
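As a purely illustrative sketch of the idea (not the patented method), the toy "empirical model" below is a running mean per sensor: a new observation is admitted for recalibration only if its residual against the current model is small, while larger residuals are rejected as possible degradation or impending failure. The class name, acceptance threshold, and sample values are assumptions.

```python
import numpy as np

class AdaptiveMean:
    """Toy empirical model of normal operation: a running mean and variance per
    sensor, recalibrated only with observations whose residuals stay below
    `accept_sigma` standard deviations. (Illustrative only.)"""
    def __init__(self, initial, accept_sigma=3.0):
        self.mean = np.asarray(initial, dtype=float)
        self.var = np.ones_like(self.mean)
        self.n = 1
        self.accept_sigma = accept_sigma

    def filter_and_update(self, obs):
        obs = np.asarray(obs, dtype=float)
        z = np.abs(obs - self.mean) / np.sqrt(self.var)
        if np.all(z < self.accept_sigma):            # consistent with normal aging
            self.n += 1
            delta = obs - self.mean
            self.mean += delta / self.n              # recalibrate the model
            self.var += (delta * (obs - self.mean) - self.var) / self.n
            return True                              # data admitted
        return False                                 # data rejected as degradation

model = AdaptiveMean([10.0, 50.0])
print(model.filter_and_update([10.2, 50.5]))   # True: admitted, model adapts
print(model.filter_and_update([25.0, 90.0]))   # False: rejected as anomalous
```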
Imaging of the hip joint. Computed tomography versus magnetic resonance imaging
NASA Technical Reports Server (NTRS)
Lang, P.; Genant, H. K.; Jergesen, H. E.; Murray, W. R.
1992-01-01
The authors reviewed the applications and limitations of computed tomography (CT) and magnetic resonance (MR) imaging in the assessment of the most common hip disorders. Magnetic resonance imaging is the most sensitive technique in detecting osteonecrosis of the femoral head. Magnetic resonance reflects the histologic changes associated with osteonecrosis very well, which may ultimately help to improve staging. Computed tomography can more accurately identify subchondral fractures than MR imaging and thus remains important for staging. In congenital dysplasia of the hip, the position of the nonossified femoral head in children less than six months of age can only be inferred by indirect signs on CT. Magnetic resonance imaging demonstrates the cartilaginous femoral head directly without ionizing radiation. Computed tomography remains the imaging modality of choice for evaluating fractures of the hip joint. In some patients, MR imaging demonstrates the fracture even when it is not apparent on radiography. In neoplasm, CT provides better assessment of calcification, ossification, and periosteal reaction than MR imaging. Magnetic resonance imaging, however, represents the most accurate imaging modality for evaluating intramedullary and soft-tissue extent of the tumor and identifying involvement of neurovascular bundles. Magnetic resonance imaging can also be used to monitor response to chemotherapy. In osteoarthrosis and rheumatoid arthritis of the hip, both CT and MR provide more detailed assessment of the severity of disease than conventional radiography because of their tomographic nature. Magnetic resonance imaging is unique in evaluating cartilage degeneration and loss, and in demonstrating soft-tissue alterations such as inflammatory synovial proliferation.
Online production validation in a HEP environment
NASA Astrophysics Data System (ADS)
Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.
2017-03-01
In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient simulation production and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.
Angelcare mobile system: homecare patient monitoring using bluetooth and GPRS.
Ribeiro, Anna G D; Maitelli, Andre L; Valentim, Ricardo A M; Brandao, Glaucio B; Guerreiro, Ana M G
2010-01-01
The rapid progress in technology has brought new paradigms to the computing area, bringing with them many benefits to society. The paradigm of ubiquitous computing brings innovations that apply computing to people's daily lives without being noticed. To do so, it combines several existing technologies such as wireless communications and sensors. Several of the benefits have reached the medical area, bringing new methods of surgery, appointments and examinations. This work presents telemedicine software that adds the idea of ubiquity to the medical area, innovating the relationship between doctor and patient. It also brings security and confidence to a patient being monitored in homecare.
Design of Remote GPRS-based Gas Data Monitoring System
NASA Astrophysics Data System (ADS)
Yan, Xiyue; Yang, Jianhua; Lu, Wei
2018-01-01
In order to solve the problem of remote data transmission from a gas flowmeter and to realize unattended operation on site, an unattended GPRS-based remote monitoring system for gas data is designed in this paper. The slave computer of the system uses an embedded microprocessor to read data from the gas flowmeter over an RS-232 bus and transfers the data to the host computer through a DTU. On the host computer, a VB program dynamically binds the Winsock control to receive and parse the data. Using dynamic data exchange, the Kingview configuration software provides history trend curves, real-time trend curves, alarms, printing, web browsing and other functions.
Statistical Model Applied to NetFlow for Network Intrusion Detection
NASA Astrophysics Data System (ADS)
Proto, André; Alexandre, Leandro A.; Batista, Maira L.; Oliveira, Isabela L.; Cansian, Adriano M.
Computers and network services have become a guaranteed presence in many places. This growth has been accompanied by an increase in illicit events, and therefore computer and network security has become an essential concern in any computing environment. Many methodologies have been created to identify these events; however, with the increasing number of users and services on the Internet, it is difficult to monitor a large network environment. This paper proposes a methodology for event detection in large-scale networks. The proposal approaches anomaly detection using the NetFlow protocol and statistical methods, monitoring the environment within a time frame suitable for the application.
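As a rough illustration of flow-based statistical anomaly detection (not the authors' exact model), the sketch below flags time bins whose NetFlow record counts deviate from a historical baseline by more than k standard deviations. The bin size, threshold, and counts are assumptions.

```python
import statistics

def flag_anomalous_bins(flow_counts, baseline, k=3.0):
    """Flag time bins whose NetFlow record count deviates from the historical
    baseline mean by more than k standard deviations (a simple z-score test)."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline) or 1.0     # avoid division by zero
    return [(i, c) for i, c in enumerate(flow_counts) if abs(c - mu) / sigma > k]

# Hypothetical flows-per-minute counts exported by a NetFlow collector.
baseline = [980, 1010, 995, 1005, 990, 1002, 1008, 997]
current  = [1001, 996, 4500, 1003]    # bin 2 looks like a scan or a traffic burst
print(flag_anomalous_bins(current, baseline))
```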
Wilson, J Adam; Shutter, Lori A; Hartings, Jed A
2013-01-01
Neuromonitoring in patients with severe brain trauma and stroke is often limited to intracranial pressure (ICP); advanced neuroscience intensive care units may also monitor brain oxygenation (partial pressure of brain tissue oxygen, P(bt)O(2)), electroencephalogram (EEG), cerebral blood flow (CBF), or neurochemistry. For example, cortical spreading depolarizations (CSDs) recorded by electrocorticography (ECoG) are associated with delayed cerebral ischemia after subarachnoid hemorrhage and are an attractive target for novel therapeutic approaches. However, to better understand pathophysiologic relations and realize the potential of multimodal monitoring, a common platform for data collection and integration is needed. We have developed a multimodal system that integrates clinical, research, and imaging data into a single research and development (R&D) platform. Our system is adapted from the widely used BCI2000, a brain-computer interface tool which is written in the C++ language and supports over 20 data acquisition systems. It is optimized for real-time analysis of multimodal data using advanced time and frequency domain analyses and is extensible for research development using a combination of C++, MATLAB, and Python languages. Continuous streams of raw and processed data, including BP (blood pressure), ICP, PtiO2, CBF, ECoG, EEG, and patient video are stored in an open binary data format. Selected events identified in raw (e.g., ICP) or processed (e.g., CSD) measures are displayed graphically, can trigger alarms, or can be sent to researchers or clinicians via text message. For instance, algorithms for automated detection of CSD have been incorporated, and processed ECoG signals are projected onto three-dimensional (3D) brain models based on patient magnetic resonance imaging (MRI) and computed tomographic (CT) scans, allowing real-time correlation of pathoanatomy and cortical function. This platform will provide clinicians and researchers with an advanced tool to investigate pathophysiologic relationships and novel measures of cerebral status, as well as implement treatment algorithms based on such multimodal measures.
Rautaharju, Pentti M; Zhang, Zhu-ming; Gregg, Richard E; Haisty, Wesley K; Z Vitolins, Mara; Curtis, Anne B; Warren, James; Horaĉek, Milan B; Zhou, Sophia H; Soliman, Elsayed Z
2013-01-01
Substantial new information has emerged recently about the prognostic value for a variety of new ECG variables. The objective of the present study was to establish reference standards for these novel risk predictors in a large, ethnically diverse cohort of healthy women from the Women's Health Initiative (WHI) study. The study population consisted of 36,299 healthy women. Racial differences in rate-adjusted QT end (QT(ea)) and QT peak (QT(pa)) intervals as linear functions of RR were small, leading to the conclusion that 450 and 390 ms are applicable as thresholds for prolonged and shortened QT(ea) and similarly, 365 and 295 ms for prolonged and shortened QT(pa), respectively. As a threshold for increased dispersion of global repolarization (T(peak)T(end) interval), 110 ms was established for white and Hispanic women and 120 ms for African-American and Asian women. ST elevation and depression values for the monitoring leads of each person with limb electrodes at Mason-Likar positions and chest leads at level of V1 and V2 were first computed from standard leads using lead transformation coefficients derived from 892 body surface maps, and subsequently normal standards were determined for the monitoring leads, including vessel-specific bipolar left anterior descending, left circumflex artery and right coronary artery leads. The results support the choice 150 μV as a tentative threshold for abnormal ST-onset elevation for all monitoring leads. Body mass index (BMI) had a profound effect on Cornell voltage and Sokolow-Lyon voltage in all racial groups and their utility for left ventricular hypertrophy classification remains open. Common thresholds for all racial groups are applicable for QT(ea), and QT(pa) intervals and ST elevation. Race-specific normal standards are required for many other ECG parameters. Copyright © 2013 Elsevier Inc. All rights reserved.
Batey, Michael A.; Almeida, Gilberto S.; Wilson, Ian; Dildey, Petra; Sharma, Abhishek; Blair, Helen; Hide, I. Geoff; Heidenreich, Olaf; Vormoor, Josef; Maxwell, Ross J.; Bacon, Chris M.
2014-01-01
Ewing sarcoma and osteosarcoma represent the two most common primary bone tumours in childhood and adolescence, with bone metastases being the most adverse prognostic factor. In prostate cancer, osseous metastasis poses a major clinical challenge. We developed a preclinical orthotopic model of Ewing sarcoma, reflecting the biology of the tumour-bone interactions in human disease and allowing in vivo monitoring of disease progression, and compared this with models of osteosarcoma and prostate carcinoma. Human tumour cell lines were transplanted into non-obese diabetic/severe combined immunodeficient (NSG) and Rag2−/−/γc−/− mice by intrafemoral injection. For Ewing sarcoma, minimal cell numbers (1000–5000) injected in small volumes were able to induce orthotopic tumour growth. Tumour progression was studied using positron emission tomography, computed tomography, magnetic resonance imaging and bioluminescent imaging. Tumours and their interactions with bones were examined by histology. Each tumour induced bone destruction and outgrowth of extramedullary tumour masses, together with characteristic changes in bone that were well visualised by computed tomography, which correlated with post-mortem histology. Ewing sarcoma and, to a lesser extent, osteosarcoma cells induced prominent reactive new bone formation. Osteosarcoma cells produced osteoid and mineralised “malignant” bone within the tumour mass itself. Injection of prostate carcinoma cells led to osteoclast-driven osteolytic lesions. Bioluminescent imaging of Ewing sarcoma xenografts allowed easy and rapid monitoring of tumour growth and detection of tumour dissemination to lungs, liver and bone. Magnetic resonance imaging proved useful for monitoring soft tissue tumour growth and volume. Positron emission tomography proved to be of limited use in this model. Overall, we have developed an orthotopic in vivo model for Ewing sarcoma and other primary and secondary human bone malignancies, which resemble the human disease. We have shown the utility of small animal bioimaging for tracking disease progression, making this model a useful assay for preclinical drug testing. PMID:24409320
Rotating Desk for Collaboration by Two Computer Programmers
NASA Technical Reports Server (NTRS)
Riley, John Thomas
2005-01-01
A special-purpose desk has been designed to facilitate collaboration by two computer programmers sharing one desktop computer or computer terminal. The impetus for the design is a trend toward what is known in the software industry as extreme programming, an approach intended to ensure high quality without sacrificing the quantity of computer code produced. Programmers working in pairs is a major feature of extreme programming. The present desk design minimizes the stress of the collaborative work environment. It supports both quality and work flow by making it unnecessary for programmers to get in each other's way. The desk (see figure) includes a rotating platform that supports a computer video monitor, keyboard, and mouse. The desk enables one programmer to work on the keyboard for any amount of time and then the other programmer to take over without breaking the train of thought. The rotating platform is supported by a turntable bearing that, in turn, is supported by a weighted base. The platform contains weights to improve its balance. The base includes a stand for a computer, and is shaped and dimensioned to provide adequate foot clearance for both users. The platform includes an adjustable stand for the monitor, a surface for the keyboard and mouse, and spaces for work papers, drinks, and snacks. The heights of the monitor, keyboard, and mouse are set to minimize stress. The platform can be rotated through an angle of 40° to give either user a straight-on view of the monitor and full access to the keyboard and mouse. Magnetic latches keep the platform preferentially at either of the two extremes of rotation. To switch between users, one simply grabs the edge of the platform and pulls it around. The magnetic latch is easily released, allowing the platform to rotate freely to the position of the other user.
Blood Glucose Monitoring Devices
NASA Astrophysics Data System (ADS)
Clarke, John R.; Southerland, David
1999-07-01
Semi-closed circuit underwater breathing apparatus (UBA) provide a constant flow of mixed gas containing oxygen and nitrogen or helium to a diver. However, as a diver's work rate and metabolic oxygen consumption vary, the oxygen percentages within the UBA can change dramatically. Hence, even a resting diver can become hyperoxic and be at risk for oxygen-induced seizures. Conversely, a hard-working diver can become hypoxic and lose consciousness. Unfortunately, current semi-closed UBA do not contain oxygen monitors. We describe a simple oxygen monitoring system designed and prototyped at the Navy Experimental Diving Unit. The main monitor components include a PIC microcontroller, analog-to-digital converter, bicolor LED, and oxygen sensor. The LED, affixed to the diver's mask, is steady green if the oxygen partial pressure is within pre-defined acceptable limits. A more advanced monitor with a depth sensor and additional computational circuitry could be used to estimate metabolic oxygen consumption. The computational algorithm uses the oxygen partial pressure and the diver's depth to compute O2 using the steady-state solution of the differential equation describing oxygen concentrations within the UBA. Consequently, dive transients induce errors in the O2 estimation. To evaluate these errors, we used a computer simulation of semi-closed circuit UBA dives to generate transient-rich data as input to the estimation algorithm. A step change in simulated O2 elicits a monoexponential change in the estimated O2 with a time constant of 5 to 10 minutes. Methods for predicting error and providing a probable error indication to the diver are presented.
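The abstract does not give the governing equation. As a purely illustrative sketch (not drawn from the paper), the example below assumes the commonly used steady-state oxygen mass balance for a semi-closed circuit, Qs·Fs = VO2 + (Qs − VO2)·Fc with Fc = ppO2/P_ambient, and inverts it to estimate oxygen consumption from the measured partial pressure and depth. All parameter names and values are assumptions.

```python
def estimate_vo2(ppo2_atm, depth_msw, supply_flow_lpm, supply_fo2):
    """Estimate metabolic oxygen consumption (surface-equivalent L/min) from the
    measured circuit ppO2 and depth, assuming the steady-state mass balance
    Qs*Fs = VO2 + (Qs - VO2)*Fc, i.e. VO2 = Qs*(Fs - Fc) / (1 - Fc)."""
    p_ambient = 1.0 + depth_msw / 10.0      # ambient pressure in atm (seawater)
    f_circuit = ppo2_atm / p_ambient        # oxygen fraction in the circuit
    return supply_flow_lpm * (supply_fo2 - f_circuit) / (1.0 - f_circuit)

# Example: 40% O2 supplied at 12 L/min, diver at 20 msw, measured ppO2 of 1.05 atm.
print(round(estimate_vo2(1.05, 20.0, 12.0, 0.40), 2))   # approx. 0.92 L/min
```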
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
There have been problems in existing multiple-physiological-parameter real-time monitoring systems, such as insufficient server capacity for physiological data storage and analysis (so that data consistency cannot be guaranteed), poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new solution for multiple physiological parameters, with clustered background data storage and processing based on cloud computing. Through our studies, a batch-processing approach for longitudinal analysis of patients' historical data was introduced. The work covered the resource virtualization of the IaaS layer of the cloud platform, the construction of the real-time computing platform of the PaaS layer, the reception and analysis of the data stream in the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage, and analysis of a large amount of physiological information. The simulation test results showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved problems of traditional remote medical services, including long turnaround time, poor real-time analysis performance, and lack of extensibility. Technical support was thus provided for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode of home health monitoring with multiple wirelessly monitored physiological parameters.
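To make the layered description more concrete, a minimal sketch of the SaaS-layer idea follows: a handler that accepts a real-time stream of readings, applies a toy real-time rule, and periodically flushes batches to storage for later longitudinal (batch) analysis. All names, thresholds, and the storage stand-in are hypothetical and are not taken from the paper.

from collections import deque

class ParameterStreamHandler:
    # Hypothetical SaaS-layer handler: real-time checks on a sliding window,
    # periodic batch flushes to storage for longitudinal analysis.

    def __init__(self, batch_size=500):
        self.window = deque(maxlen=60)   # short sliding window for real-time rules
        self.batch = []                  # readings awaiting a bulk write
        self.batch_size = batch_size

    def on_reading(self, reading):
        # reading example: {"patient": "p01", "hr": 72, "spo2": 98, "t": 1690000000}
        self.window.append(reading)
        self.batch.append(reading)
        if reading.get("hr", 0) > 140:   # toy real-time alert rule
            print("ALERT: tachycardia for", reading.get("patient"))
        if len(self.batch) >= self.batch_size:
            self.flush()

    def flush(self):
        # Stand-in for a bulk write to cloud storage that the batch-processing
        # layer later mines for longitudinal analysis of patient history.
        print("flushing", len(self.batch), "readings to storage")
        self.batch.clear()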
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and the node/interaction complexity grow. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with standard MATLAB toolboxes. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves on some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving the LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with respect to the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
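The abstract does not reproduce the LMI conditions themselves; purely as illustrative background (an assumption, not the paper's distributed conditions), the classical centralised state-feedback stabilisation problem for a single node \dot{x} = Ax + Bu is convexified as the LMI feasibility problem

    \text{find } Q \succ 0,\; Y \quad \text{such that} \quad A Q + Q A^{\mathsf{T}} + B Y + Y^{\mathsf{T}} B^{\mathsf{T}} \prec 0, \qquad K = Y Q^{-1},

after which u = Kx renders \dot{x} = (A + BK)x asymptotically stable with Lyapunov function V(x) = x^{\mathsf{T}} Q^{-1} x. Conditions of this type are what standard LMI solvers handle, and the paper's contribution lies in distributed, uncertainty-robust generalisations of them across the network.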
Structural health monitoring feature design by genetic programming
NASA Astrophysics Data System (ADS)
Harvey, Dustin Y.; Todd, Michael D.
2014-09-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and other high-capital or life-safety critical structures. Conventional data processing involves pre-processing and extraction of low-dimensional features from in situ time series measurements. The features are then input to a statistical pattern recognition algorithm to perform the relevant classification or regression task necessary to facilitate decisions by the SHM system. Traditional design of signal processing and feature extraction algorithms can be an expensive and time-consuming process requiring extensive system knowledge and domain expertise. Genetic programming, a heuristic program search method from evolutionary computation, was recently adapted by the authors to perform automated, data-driven design of signal processing and feature extraction algorithms for statistical pattern recognition applications. The proposed method, called Autofead, is particularly suitable to handle the challenges inherent in algorithm design for SHM problems where the manifestation of damage in structural response measurements is often unclear or unknown. Autofead mines a training database of response measurements to discover information-rich features specific to the problem at hand. This study provides experimental validation on three SHM applications including ultrasonic damage detection, bearing damage classification for rotating machinery, and vibration-based structural health monitoring. Performance comparisons with common feature choices for each problem area are provided demonstrating the versatility of Autofead to produce significant algorithm improvements on a wide range of problems.
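A toy sketch of the general idea of evolving feature-extraction programs follows; the primitive set, the Fisher-style fitness score, and the mutation-only search are illustrative assumptions and are not the Autofead algorithm itself.

import random
import numpy as np

# Primitive operations mapping a 1-D signal to a 1-D signal.
PRIMITIVES = {
    "abs": np.abs,
    "diff": lambda x: np.diff(x, prepend=x[0]),
    "square": np.square,
    "detrend": lambda x: x - x.mean(),
}
# Final reductions from a signal to a scalar feature.
REDUCERS = {
    "mean": np.mean,
    "std": np.std,
    "max": np.max,
    "rms": lambda x: np.sqrt(np.mean(x ** 2)),
}

def random_program(max_len=3):
    ops = [random.choice(list(PRIMITIVES)) for _ in range(random.randint(1, max_len))]
    return ops, random.choice(list(REDUCERS))

def evaluate(program, signals, labels):
    # Fitness: separation of the scalar feature between two classes (Fisher-like score).
    ops, red = program
    labels = np.asarray(labels)
    feats = []
    for s in signals:
        x = np.asarray(s, float)
        for op in ops:
            x = PRIMITIVES[op](x)
        feats.append(REDUCERS[red](x))
    feats = np.asarray(feats)
    a, b = feats[labels == 0], feats[labels == 1]
    return abs(a.mean() - b.mean()) / (a.std() + b.std() + 1e-9)

def mutate(program):
    ops, red = program
    ops = ops.copy()
    if random.random() < 0.5:
        ops[random.randrange(len(ops))] = random.choice(list(PRIMITIVES))
    else:
        red = random.choice(list(REDUCERS))
    return ops, red

def evolve(signals, labels, pop_size=20, generations=15):
    population = [random_program() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: evaluate(p, signals, labels), reverse=True)
        parents = population[: pop_size // 2]
        population = parents + [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
    return max(population, key=lambda p: evaluate(p, signals, labels))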
The Electronic Supervisor: New Technology, New Tensions.
ERIC Educational Resources Information Center
Congress of the U.S., Washington, DC. Office of Technology Assessment.
Computer technology has made it possible for employers to collect and analyze management information about employees' work performance and equipment use. There are three main tools for supervising office activities. Computer-based (electronic) monitoring systems automatically record statistics about the work of employees using computer or…
Microelectronics and Computers in Medicine.
ERIC Educational Resources Information Center
Meindl, James D.
1982-01-01
The use of microelectronics and computers in medicine is reviewed, focusing on medical research; medical data collection, storage, retrieval, and manipulation; medical decision making; computed tomography; ultrasonic imaging; role in clinical laboratories; and use as adjuncts for diagnostic tests, monitors of critically-ill patients, and with the…
Horton, John J.
2006-04-11
A system and method of maintaining communication between a computer and a server, the server being in communication with the computer via xDSL service or dial-up modem service, with xDSL service being the default mode of communication, the method including sending a request to the server via xDSL service to which the server should respond and determining if a response has been received. If no response has been received, displaying on the computer a message (i) indicating that xDSL service has failed and (ii) offering to establish communication between the computer and the server via the dial-up modem, and thereafter changing the default mode of communication between the computer and the server to dial-up modem service. In a preferred embodiment, an xDSL service provider monitors dial-up modem communications and determines if the computer dialing in normally establishes communication with the server via xDSL service. The xDSL service provider can thus quickly and easily detect xDSL failures.
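A minimal sketch of the failover logic described in the abstract follows; the probe URL, timeout, and function names are illustrative assumptions rather than details from the patent.

import urllib.request

def xdsl_alive(probe_url="http://192.0.2.1/status", timeout_s=5.0):
    # Send a request over the xDSL link and report whether any response arrived.
    try:
        urllib.request.urlopen(probe_url, timeout=timeout_s)
        return True
    except OSError:
        return False

def ensure_connectivity(dial_up_connect):
    # Prefer xDSL; on failure, notify the user and fall back to dial-up service.
    if xdsl_alive():
        return "xdsl"
    print("xDSL service appears to have failed; switching the default to dial-up modem service.")
    dial_up_connect()   # caller-supplied routine that dials the modem
    return "dialup"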
Weidling, Patrick; Jaschinski, Wolfgang
2015-01-01
When presbyopic employees are wearing general-purpose progressive lenses, they have clear vision only with a lower gaze inclination to the computer monitor, given the head assumes a comfortable inclination. Therefore, in the present intervention field study the monitor position was lowered, also with the aim to reduce musculoskeletal symptoms. A comparison group comprised users of lenses that do not restrict the field of clear vision. The lower monitor positions led the participants to lower their head inclination, which was linearly associated with a significant reduction in musculoskeletal symptoms. However, for progressive lenses a lower head inclination means a lower zone of clear vision, so that clear vision of the complete monitor was not achieved, rather the monitor should have been placed even lower. The procedures of this study may be useful for optimising the individual monitor position depending on the comfortable head and gaze inclination and the vertical zone of clear vision of progressive lenses. For users of general-purpose progressive lenses, it is suggested that low monitor positions allow for clear vision at the monitor and for a physiologically favourable head inclination. Employees may improve their workplace using a flyer providing ergonomic-optometric information.
NASA Technical Reports Server (NTRS)
Smith, M. E.; Gevins, A.; Brown, H.; Karnik, A.; Du, R.
2001-01-01
Electroencephalographic (EEG) recordings were made while 16 participants performed versions of a personal-computer-based flight simulation task of low, moderate, or high difficulty. As task difficulty increased, frontal midline theta EEG activity increased and alpha band activity decreased. A participant-specific function that combined multiple EEG features to create a single load index was derived from a sample of each participant's data and then applied to new test data from that participant. Index values were computed for every 4 s of task data. Across participants, mean task load index values increased systematically with increasing task difficulty and differed significantly between the different task versions. Actual or potential applications of this research include the use of multivariate EEG-based methods to monitor task loading during naturalistic computer-based work.
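The participant-specific index function is not specified in the abstract; the sketch below shows one plausible form under stated assumptions: log band power of frontal theta and alpha computed per 4-s epoch with an FFT periodogram, combined by least-squares weights fitted to labelled calibration epochs.

import numpy as np

def band_power(epoch, fs, lo, hi):
    # Mean power of a 1-D epoch in the [lo, hi] Hz band via an FFT periodogram.
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def epoch_features(epoch, fs=256):
    theta = band_power(epoch, fs, 4.0, 7.0)    # frontal midline theta band
    alpha = band_power(epoch, fs, 8.0, 12.0)   # alpha band
    return np.array([np.log(theta), np.log(alpha)])

def fit_load_index(epochs, difficulty_labels, fs=256):
    # Least-squares weights mapping [log theta, log alpha, 1] to task difficulty.
    X = np.array([np.append(epoch_features(e, fs), 1.0) for e in epochs])
    w, *_ = np.linalg.lstsq(X, np.asarray(difficulty_labels, float), rcond=None)
    return w

def load_index(epoch, w, fs=256):
    return float(np.append(epoch_features(epoch, fs), 1.0) @ w)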
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzeng, Nian-Feng; White, Christopher D.; Moreman, Douglas
2012-07-14
The UCoMS research cluster has spearheaded three research areas since August 2004, including wireless and sensor networks, Grid computing, and petroleum applications. The primary goals of UCoMS research are three-fold: (1) creating new knowledge to push forward the technology forefronts on pertinent research on the computing and monitoring aspects of energy resource management, (2) developing and disseminating software codes and toolkits for the research community and the public, and (3) establishing system prototypes and testbeds for evaluating innovative techniques and methods. Substantial progress and diverse accomplishments have been made by research investigators in their respective areas of expertise cooperatively on such topics as sensors and sensor networks, wireless communication and systems, and computational Grids, particularly as relevant to petroleum applications.
The Effects of Computer Usage on Computer Screen Reading Rate.
ERIC Educational Resources Information Center
Clausing, Carolyn S.; Schmitt, Dorren Rafael
This study investigated the differences in the reading rate of eighth grade students on a cloze reading exercise involving the reading of text from a computer monitor. Several different modes of presentation were used in order to determine the effect of prior experience with computers on the students' reading rate. Subjects were 240 eighth grade…
Skripochka and Kaleri watch monitor
2011-03-04
ISS026-E-031766 (4 March 2011) --- Russian cosmonauts Oleg Skripochka (foreground) and Alexander Kaleri, both Expedition 26 flight engineers, watch a computer monitor in the Zvezda Service Module of the International Space Station.
Computer use and addiction in Romanian children and teenagers--an observational study.
Chiriţă, V; Chiriţă, Roxana; Stefănescu, C; Chele, Gabriela; Ilinca, M
2006-01-01
The computer has provided some wonderful opportunities for our children. Although research on the effects of children's use of computers is still ambiguous, some initial indications of positive and negative effects are beginning to emerge. Children commonly use computers for playing games, completing school assignments, email, and connecting to the Internet. This may sometimes come at the expense of other activities such as homework or normal social interchange. Although most children seem to correct the problem naturally, parents and educators must monitor for signs of misuse. Studies of general computer users suggest that some children may experience psychological problems such as social isolation, depression, loneliness, and time mismanagement related to their computer use and failure at school. The purpose of this study is to investigate issues related to computer use by school students from 11 to 18 years old. The survey included a representative sample of 439 school students aged 11 to 18, drawn from 3 gymnasium schools and 5 high schools of Iaşi, Romania. The students answered a questionnaire comprising 34 questions related to computer activities, and the children's parents answered a second questionnaire on the same subject. Most questions asked respondents to rate the frequency of occurrence of a certain event or issue on a scale; some solicited an open answer or a choice from a list. The questions were aimed at highlighting: (1) the frequency of computer use by the students; (2) the interference of excessive use with school performance and social life; and (3) the identification of a possible computer addiction. The data were processed using the SPSS statistics software, version 11.0. Results show that the school students prefer to spend a considerable amount of time with their computers, over 3 hours/day. More than 65.7% of the students have a computer at home. More than 70% of the parents admit they do not, or only occasionally, discuss computer use with their children. This indicates that, although they bought a computer for their children, they do not supervise the way it is used. The family is rather a passive presence, vaguely responsible and lacking involvement. Yet the parents consider that, for better school results, their children should use their computers. This study thus tried to identify aspects of computer addiction in gymnasium and high school students.
Perceptual evaluation of visual alerts in surveillance videos
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Topkara, Mercan; Pfeiffer, William; Hampapur, Arun
2015-03-01
Visual alerts are commonly used in video monitoring and surveillance systems to mark events, presumably making them more salient to human observers. Surprisingly, the effectiveness of computer-generated alerts in improving human performance has not been widely studied. To address this gap, we have developed a tool for simulating different alert parameters in a realistic visual monitoring situation, and have measured human detection performance under conditions that emulated different set-points in a surveillance algorithm. In the High-Sensitivity condition, the simulated alerts identified 100% of the events with many false alarms. In the Lower-Sensitivity condition, the simulated alerts correctly identified 70% of the targets, with fewer false alarms. In the control condition, no simulated alerts were provided. To explore the effects of learning, subjects performed these tasks in three sessions, on separate days, in a counterbalanced, within subject design. We explore these results within the context of cognitive models of human attention and learning. We found that human observers were more likely to respond to events when marked by a visual alert. Learning played a major role in the two alert conditions. In the first session, observers generated almost twice as many False Alarms as in the No-Alert condition, as the observers responded pre-attentively to the computer-generated false alarms. However, this rate dropped equally dramatically in later sessions, as observers learned to discount the false cues. Highest observer Precision, Hits/(Hits + False Alarms), was achieved in the High Sensitivity condition, but only after training. The successful evaluation of surveillance systems depends on understanding human attention and performance.
... It is a painless process that uses a computer and a video monitor to display bodily functions ... or as linegraphs we can see on a computer screen. In this way, we receive information (feedback) ...
50 CFR 660.17 - Catch monitors and catch monitor providers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... work competently with standard database software and computer hardware. (v) Have a current and valid... candidate's academic transcripts and resume; (4) A statement signed by the candidate under penalty of...
50 CFR 660.17 - Catch monitors and catch monitor service providers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... work competently with standard database software and computer hardware. (v) Have a current and valid... candidate's academic transcripts and resume; (4) A statement signed by the candidate under penalty of...
50 CFR 660.17 - Catch monitors and catch monitor service providers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... work competently with standard database software and computer hardware. (v) Have a current and valid... candidate's academic transcripts and resume; (4) A statement signed by the candidate under penalty of...
50 CFR 660.17 - Catch monitors and catch monitor service providers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... work competently with standard database software and computer hardware. (v) Have a current and valid... candidate's academic transcripts and resume; (4) A statement signed by the candidate under penalty of...
Network monitoring in the Tier2 site in Prague
NASA Astrophysics Data System (ADS)
Eliáš, Marek; Fiala, Lukáš; Horký, Jiří; Chudoba, Jiří; Kouba, Tomáš; Kundrát, Jan; Švec, Jan
2011-12-01
Network monitoring provides different views of the network traffic. Its output enables computing centre staff to make qualified decisions about changes in the organization of the computing centre network and to spot possible problems. In this paper we present the network monitoring framework used at the Tier-2 centre in Prague at the Institute of Physics (FZU). The framework consists of standard software and custom tools. We discuss our system for hardware failure detection using syslog logging and Nagios active checks, bandwidth monitoring of physical links, and analysis of NetFlow exports from Cisco routers. We present a tool for automatic detection of the network layout based on SNMP; this tool also records topology changes into an SVN repository. An adapted weathermap4rrd is used to visualize the recorded data and provide a fast overview of the current bandwidth usage of links in the network.
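As an illustration of the Nagios active checks mentioned above, a minimal plugin-style check is sketched below; it follows the standard Nagios plugin exit-code convention (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN), while the checked quantity, file path, and thresholds are assumptions for illustration.

import sys

WARN, CRIT = 10, 100   # thresholds on recent hardware-error log lines

def count_recent_errors(path="/var/log/hw-errors.recent"):
    try:
        with open(path) as f:
            return sum(1 for _ in f)
    except OSError:
        print("UNKNOWN - cannot read", path)
        sys.exit(3)

n = count_recent_errors()
if n >= CRIT:
    print("CRITICAL - %d hardware error lines" % n)
    sys.exit(2)
if n >= WARN:
    print("WARNING - %d hardware error lines" % n)
    sys.exit(1)
print("OK - %d hardware error lines" % n)
sys.exit(0)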
Use of Continuous Integration Tools for Application Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vergara Larrea, Veronica G; Joubert, Wayne; Fuson, Christopher B
High performance computing systems are becoming increasingly complex, both in node architecture and in the multiple layers of software stack required to compile and run applications. As a consequence, the likelihood is increasing for application performance regressions to occur as a result of routine upgrades of system software components which interact in complex ways. The purpose of this study is to evaluate the effectiveness of continuous integration tools for application performance monitoring on HPC systems. In addition, this paper also describes a prototype system for application performance monitoring based on Jenkins, a Java-based continuous integration tool. The monitoring system described leverages several features in Jenkins to track application performance results over time. Preliminary results and lessons learned from monitoring applications on Cray systems at the Oak Ridge Leadership Computing Facility are presented.
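A sketch of the kind of regression gate a continuous integration job could run after a benchmark is shown below; the file names, JSON layout, and tolerance are illustrative assumptions, not details of the Jenkins-based prototype described in the paper.

import json
import sys

TOLERANCE = 0.10   # fail the build if more than 10% slower than the baseline

def main(baseline_file="baseline.json", result_file="result.json"):
    with open(baseline_file) as f:
        baseline = json.load(f)["runtime_s"]
    with open(result_file) as f:
        runtime = json.load(f)["runtime_s"]
    slowdown = (runtime - baseline) / baseline
    print("runtime %.1fs vs baseline %.1fs (%+.1f%%)" % (runtime, baseline, 100 * slowdown))
    return 1 if slowdown > TOLERANCE else 0   # non-zero exit marks the build as failed

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))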
ERIC Educational Resources Information Center
Classroom Computer Learning, 1984
1984-01-01
Offers suggestions for five computer-oriented classroom activities. They include uniting a writing class by having them collectively write a book using a word processor, examining FOR/NEXT loops, using a compound interest computer program, and developing a list of facts about computers. Includes four short programs which erase monitor screens. (JN)
Computer Ethics Topics and Teaching Strategies.
ERIC Educational Resources Information Center
DeLay, Jeanine A.
An overview of six major issues in computer ethics is provided in this paper: (1) unauthorized and illegal database entry, surveillance and monitoring, and privacy issues; (2) piracy and intellectual property theft; (3) equity and equal access; (4) philosophical implications of artificial intelligence and computer rights; (5) social consequences…
Design and modelling of a link monitoring mechanism for the Common Data Link (CDL)
NASA Astrophysics Data System (ADS)
Eichelberger, John W., III
1994-09-01
The Common Data Link (CDL) is a full duplex, point-to-point microwave communications system used in imagery and signals intelligence collection systems. It provides a link between two remote Local Area Networks (LAN's) aboard collection and surface platforms. In a hostile environment, there is an overwhelming need to dynamically monitor the link and thus, limit the impact of jamming. This work describes steps taken to design, model, and evaluate a link monitoring system suitable for the CDL. The monitoring system is based on features and monitoring constructs of the Link Control Protocol (LCP) in the Point-to-Point Protocol (PPP) suite. The CDL model is based on a system of two remote Fiber Distributed Data Interface (FDDI) LAN's. In particular, the policies and mechanisms associated with monitoring are described in detail. An implementation of the required mechanisms using the OPNET network engineering tool is described. Performance data related to monitoring parameters is reported. Finally, integration of the FDDI-CDL model with the OPNET Internet model is described.
Real-time Seismic Amplitude Measurement (RSAM): a volcano monitoring and prediction tool
Endo, E.T.; Murray, T.
1991-01-01
Seismicity is one of the most commonly monitored phenomena used to determine the state of a volcano and for the prediction of volcanic eruptions. Although several real-time earthquake-detection and data acquisition systems exist, few continuously measure seismic amplitude in circumstances where individual events are difficult to recognize or where volcanic tremor is prevalent. Analog seismic records provide a quick visual overview of activity; however, continuous rapid quantitative analysis to define the intensity of seismic activity for the purpose of predicting volcanic eruptions is not always possible because of clipping that results from the limited dynamic range of analog recorders. At the Cascades Volcano Observatory, an inexpensive 8-bit analog-to-digital system controlled by a laptop computer is used to provide 1-min average-amplitude information from eight telemetered seismic stations. The absolute voltage level for each station is digitized, averaged, and appended in near real-time to a data file on a multiuser computer system. Raw real-time seismic amplitude measurement (RSAM) data or transformed RSAM data are then plotted on a common time base with other available volcano-monitoring information such as tilt. Changes in earthquake activity associated with dome-building episodes, weather, and instrumental difficulties are recognized as distinct patterns in the RSAM data set. RSAM data for dome-building episodes gradually develop into exponential increases that terminate just before the time of magma extrusion. Mount St. Helens crater earthquakes show up as isolated spikes on amplitude plots for crater seismic stations but seldom for more distant stations. Weather-related noise shows up as low-level, long-term disturbances on all seismic stations, regardless of distance from the volcano. Implemented in mid-1985, the RSAM system has proved valuable in providing up-to-date information on seismic activity for three Mount St. Helens eruptive episodes from 1985 to 1986 (May 1985, May 1986, and October 1986). Tiltmeter data, the only other telemetered geophysical information that was available for the three dome-building episodes, is compared to RSAM data to show that the increase in RSAM data was related to the transport of magma to the surface. Thus, if tiltmeter data is not available, RSAM data can be used to predict future magmatic eruptions at Mount St. Helens. We also recognize the limitations of RSAM data. Two examples of RSAM data associated with phreatic or shallow phreatomagmatic explosions were not preceded by the same increases in RSAM data or changes in tilt associated with the three dome-building eruptions. © 1991 Springer-Verlag.
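The core RSAM computation as described (digitize the absolute station signal, average over one-minute windows, append to a data file) can be sketched as follows; the sampling rate and file format are assumptions for illustration.

import numpy as np

def rsam_minute_averages(samples, fs_hz=50.0):
    # One-minute averages of the absolute amplitude for a single station.
    per_min = int(fs_hz * 60)
    n_full = (len(samples) // per_min) * per_min
    windows = np.abs(np.asarray(samples[:n_full], float)).reshape(-1, per_min)
    return windows.mean(axis=1)

def append_to_datafile(station, values, path="rsam.dat"):
    # Append the averages to a shared data file, one value per line.
    with open(path, "a") as f:
        for v in values:
            f.write("%s %.1f\n" % (station, v))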
Wearable sensors for health monitoring
NASA Astrophysics Data System (ADS)
Suciu, George; Butca, Cristina; Ochian, Adelina; Halunga, Simona
2015-02-01
In this paper we describe several wearable sensors designed for monitoring the health condition of patients, based on an experimental model. Wearable sensors enable long-term continuous physiological monitoring, which is important for the treatment and management of many chronic illnesses, neurological disorders, and mental health issues. The system is based on a wearable sensor network connected to a computer or smartphone. The network integrates several wearable sensors that measure different parameters such as body temperature, heart rate, and the quantity of carbon monoxide in the air. After the wearable sensors measure the parameter values, a microprocessor transmits them via Bluetooth to an application developed for the computer or smartphone, where they are interpreted.
User friendly IT Services for Monitoring and Prevention during Pregnancy.
Crişan-Vida, Mihaela; Serban, Alexandru; Ghihor-Izdrăilă, Ioana; Mirea, Adrian; Stoicu-Tivadar, Lacramioara
2014-01-01
A healthy lifestyle for the mother and monitoring of both mother and fetus are crucial factors for a normal pregnancy without hazardous conditions. This paper proposes a cloud computing solution and a mobile application which collect data from sensors to be used in an Obstetrics-Gynecology Department. The application monitors the dietary plan of the pregnant woman and gives her the possibility to socialize and share pregnancy experiences with other women in the hospital's social network. The physicians can access the patient's information in real time and can alert mothers in certain situations. Using this cloud computing solution, the health condition of pregnant women may be improved.
NASA Astrophysics Data System (ADS)
Hamza, Mostafa; El-Ahl, Mohammad H. S.; Hamza, Ahmad M.
2001-06-01
The high efficacy of laser phototherapy combined with transcutaneous monitoring of serum bilirubin provides optimum safety for jaundiced infants from the risk of bilirubin encephalopathy. In this paper the authors introduce the design and operating principles of a new laser system that can provide simultaneous monitoring and treatment of several jaundiced babies at one time. The new system incorporates diode-based laser sources oscillating at selected wavelengths to achieve both transcutaneous differential absorption measurements of bilirubin concentration in addition to the computer controlled intermittent laser therapy through a network of optical fibers. The detailed description and operating characteristics of this system are presented.
Real-time robot deliberation by compilation and monitoring of anytime algorithms
NASA Technical Reports Server (NTRS)
Zilberstein, Shlomo
1994-01-01
Anytime algorithms are algorithms whose quality of results improves gradually as computation time increases. Certainty, accuracy, and specificity are metrics useful in anytime algorithm construction. It is widely accepted that a successful robotic system must trade off between decision quality and the computational resources used to produce it. Anytime algorithms were designed to offer such a trade-off. A model of the compilation and monitoring mechanisms needed to build robots that can efficiently control their deliberation time is presented. This approach simplifies the design and implementation of complex intelligent robots, mechanizes the composition and monitoring processes, and provides independent real-time robotic systems that automatically adjust resource allocation to yield optimum performance.
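A minimal sketch of the anytime idea follows (illustrative only, not the paper's compilation and monitoring model): an iterative estimator whose answer improves the longer it runs, and a monitor that grants it a deliberation budget and takes the best result available when time runs out.

import time

def anytime_pi(should_stop):
    # Leibniz series for pi: the result improves the longer it is allowed to run.
    total, k, sign = 0.0, 0, 1.0
    while not should_stop():
        total += sign / (2 * k + 1)
        sign, k = -sign, k + 1
    return 4 * total, k

def monitored_run(time_budget_s=0.05):
    # The monitor allocates a time budget; the algorithm returns its best-so-far result.
    t0 = time.perf_counter()
    value, iterations = anytime_pi(lambda: time.perf_counter() - t0 > time_budget_s)
    return value, iterations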
Computer systems for automatic earthquake detection
Stewart, S.W.
1974-01-01
U.S. Geological Survey seismologists in Menlo Park, California, are utilizing the speed, reliability, and efficiency of minicomputers to monitor seismograph stations and to automatically detect earthquakes. An earthquake detection computer system, believed to be the only one of its kind in operation, automatically reports about 90 percent of all local earthquakes recorded by a network of over 100 central California seismograph stations. The system also monitors the stations for signs of malfunction or abnormal operation. Before the automatic system was put in operation, all of the earthquakes recorded had to be detected by manually searching the records, a time-consuming process. With the automatic detection system, the stations are efficiently monitored continuously.
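The abstract does not describe the detection algorithm itself; as a point of reference, a classic short-term/long-term average (STA/LTA) trigger, a common approach to automatic event detection, is sketched below with assumed window lengths and threshold.

import numpy as np

def sta_lta_trigger(samples, fs_hz=100.0, sta_s=1.0, lta_s=30.0, threshold=4.0):
    # Return sample indices where the short-term/long-term average ratio exceeds the threshold.
    x = np.abs(np.asarray(samples, float))
    sta_n, lta_n = int(sta_s * fs_hz), int(lta_s * fs_hz)
    sta = np.convolve(x, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(x, np.ones(lta_n) / lta_n, mode="same") + 1e-12
    return np.flatnonzero(sta / lta > threshold)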
2015-05-18
Head computed tomographic scan most commonly found skull fracture (68.9%), subdural hematoma (54.1%), and cerebral contusion (51.4%); the most common type of intracranial hemorrhage was subdural hematoma (54.1%). Hypertonic saline ...
Health and performance monitoring of the online computer cluster of CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, G.; et al.
2012-01-01
The CMS experiment at the LHC features over 2,500 devices that need constant monitoring in order to ensure proper data taking. The monitoring solution has been migrated from Nagios to Icinga, with several useful plugins. The motivations behind the migration and the selection of the plugins are discussed.
Web-Based Mathematics Progress Monitoring in Second Grade
ERIC Educational Resources Information Center
Salaschek, Martin; Souvignier, Elmar
2014-01-01
We examined a web-based mathematics progress monitoring tool for second graders. The tool monitors the learning progress of two competences, number sense and computation. A total of 414 students from 19 classrooms in Germany were checked every 3 weeks from fall to spring. Correlational analyses indicate that alternate-form reliability was adequate…
Evaluating Behavioral Self-Monitoring with Accuracy Training for Changing Computer Work Postures
ERIC Educational Resources Information Center
Gravina, Nicole E.; Loewy, Shannon; Rice, Anna; Austin, John
2013-01-01
The primary purpose of this study was to replicate and extend a study by Gravina, Austin, Schroedter, and Loewy (2008). A similar self-monitoring procedure, with the addition of self-monitoring accuracy training, was implemented to increase the percentage of observations in which participants worked in neutral postures. The accuracy training…
Monitoring tool usage in surgery videos using boosted convolutional and recurrent neural networks.
Al Hajj, Hassan; Lamard, Mathieu; Conze, Pierre-Henri; Cochener, Béatrice; Quellec, Gwenolé
2018-05-09
This paper investigates the automatic monitoring of tool usage during a surgery, with potential applications in report generation, surgical training and real-time decision support. Two surgeries are considered: cataract surgery, the most common surgical procedure, and cholecystectomy, one of the most common digestive surgeries. Tool usage is monitored in videos recorded either through a microscope (cataract surgery) or an endoscope (cholecystectomy). Following state-of-the-art video analysis solutions, each frame of the video is analyzed by convolutional neural networks (CNNs) whose outputs are fed to recurrent neural networks (RNNs) in order to take temporal relationships between events into account. Novelty lies in the way those CNNs and RNNs are trained. Computational complexity prevents the end-to-end training of "CNN+RNN" systems. Therefore, CNNs are usually trained first, independently from the RNNs. This approach is clearly suboptimal for surgical tool analysis: many tools are very similar to one another, but they can generally be differentiated based on past events. CNNs should be trained to extract the most useful visual features in combination with the temporal context. A novel boosting strategy is proposed to achieve this goal: the CNN and RNN parts of the system are simultaneously enriched by progressively adding weak classifiers (either CNNs or RNNs) trained to improve the overall classification accuracy. Experiments were performed in a dataset of 50 cataract surgery videos, where the usage of 21 surgical tools was manually annotated, and a dataset of 80 cholecystectomy videos, where the usage of 7 tools was manually annotated. Very good classification performance is achieved in both datasets: tool usage could be labeled with an average area under the ROC curve of Az = 0.9961 and Az = 0.9939, respectively, in offline mode (using past, present and future information), and Az = 0.9957 and Az = 0.9936, respectively, in online mode (using past and present information only). Copyright © 2018 Elsevier B.V. All rights reserved.
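A compact sketch of the CNN-plus-RNN architecture class described above is given below in PyTorch; the layer sizes are illustrative assumptions, and the boosting-based training strategy that is the paper's main contribution is not reproduced here.

import torch
import torch.nn as nn

class ToolUsageNet(nn.Module):
    def __init__(self, n_tools, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                                    # per-frame visual encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True)      # temporal context
        self.head = nn.Linear(feat_dim, n_tools)                     # per-frame multi-label logits

    def forward(self, video):                                        # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out)                                        # (B, T, n_tools) logits

# Multi-label training of such a network would typically use nn.BCEWithLogitsLoss
# against the per-frame tool-usage annotations.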
Tang, Te; Weiss, Michael D; Borum, Peggy; Turovets, Sergei; Tucker, Don; Sadleir, Rosalind
2016-06-01
Intraventricular hemorrhage (IVH) is a common occurrence in the days immediately after premature birth. It has been correlated with outcomes such as periventricular leukomalacia (PVL), cerebral palsy and developmental delay. The causes and evolution of IVH are unclear; it has been associated with fluctuations in blood pressure, damage to the subventricular zone and seizures. At present, ultrasound is the most commonly used method for detection of IVH, but is used retrospectively. Without the presence of adequate therapies to avert IVH, the use of a continuous monitoring technique may be somewhat moot. While treatments to mitigate the damage caused by IVH are still under development, the principal benefit of a continuous monitoring technique will be in investigations into the etiology of IVH, and its associations with periventricular injury and blood pressure fluctuations. Electrical impedance tomography (EIT) is potentially of use in this context as accumulating blood displaces higher conductivity cerebrospinal fluid (CSF) in the ventricles. We devised an electrode array and EIT measurement strategy that performed well in detection of simulated ventricular blood in computer models and phantom studies. In this study we describe results of pilot in vivo experiments on neonatal piglets, and show that EIT has high sensitivity and specificity to small quantities of blood (<1 ml) introduced into the ventricle. EIT images were processed to an index representing the quantity of accumulated blood (the 'quantity index', QI). We found that QI values were linearly related to fluid quantity, and that the slope of the curve was consistent between measurements on different subjects. Linear discriminant analysis showed a false positive rate of 0%, and receiver operating characteristic analysis found area under the curve values greater than 0.98 for administered volumes between 0.5 and 2.0 ml. We believe our study indicates that this method may be well suited to quantitative monitoring of IVH in newborns, simultaneously or interleaved with electroencephalograph assessments.
Fault detection and isolation in motion monitoring system.
Kim, Duk-Jin; Suk, Myoung Hoon; Prabhakaran, B
2012-01-01
Pervasive computing has become a very active research field. A watch that can trace human movement can record motion boundaries and support the study of social life patterns through a person's localized visiting areas. Pervasive computing also helps patient monitoring: a daily monitoring system supports longitudinal studies of patients, such as monitoring for Alzheimer's disease, Parkinson's disease, or obesity. Due to the nature of the monitoring sensors (on-body wireless sensors), however, signal noise or faulty sensor errors can be present at any time. Many research works have addressed these problems, often with a large amount of sensor deployment. In this paper, we present faulty sensor detection and isolation using only two on-body sensors. We investigated three different types of sensor errors: the SHORT error, the CONSTANT error, and the NOISY SENSOR error (see more details in section V). Our experimental results show that the success rate of isolating faulty signals averages over 91.5% for fault type 1, over 92% for fault type 2, and over 99% for fault type 3, with a fault prior of 30% sensor errors.
High Performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions
2016-08-30
Final Report: High-performance Computer Cluster for Theoretical Studies of Roaming in Chemical Reactions. A dedicated high-performance computer cluster was ... (sponsoring/monitoring agency: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211).
Macauley, Molly; Palmer, Karen; Shih, Jhih-Shyang
2003-05-01
The importance of information technology to the world economy has brought about a surge in demand for electronic equipment. With rapid technological change, a growing fraction of the increasing stock of many types of electronics becomes obsolete each year. We model the costs and benefits of policies to manage 'e-waste' by focusing on a large component of the electronic waste stream, computer monitors, and the environmental concerns associated with disposal of the lead embodied in cathode ray tubes (CRTs) used in most monitors. We find that the benefits of avoiding health effects associated with CRT disposal appear far outweighed by the costs for a wide range of policies. For the stock of monitors disposed of in the United States in 1998, we find that policies restricting or banning some popular disposal options would increase disposal costs from about US$1 per monitor to between US$3 and US$20 per monitor. Policies to promote a modest amount of recycling of monitor parts, including lead, can be less expensive. In all cases, however, the costs of the policies exceed the value of the avoided health effects of CRT disposal.
Sim, L; Manthey, K; Esdaile, P; Benson, M
2004-09-01
A study to compare the performance of the following display monitors for application as PACS CR diagnostic workstations is described. 1. Diagnostic quality, 3 megapixel, 21 inch monochrome LCD monitors. 2. Commercial grade, 2 megapixel, 20 inch colour LCD monitors. Two sets of fifty radiological studies each were presented separately to five radiologists on two occasions, using different displays on each occasion. The two sets of radiological studies were CR of the chest, querying the presence of pneumothorax, and CR of the wrist, querying the presence of a scaphoid fracture. Receiver Operating Characteristic (ROC) curves were constructed for diagnostic performance for each presentation. Areas under the ROC curves (AUC) for diagnosis using different monitors were compared for each image set and the following results obtained: Set 1: Monochrome AUC = 0.873 +/- 0.026; Colour AUC = 0.831 +/- 0.032; Set 2: Monochrome AUC = 0.945 +/- 0.014; Colour AUC = 0.931 +/- 0.019; Differences in AUC were attributed to the different monitors. While not significant at a 95% confidence level, the results have supported a cautious approach to consideration of the use of commercial grade LCD colour monitors for diagnostic application.
A mobile phone-based ECG monitoring system.
Iwamoto, Junichi; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Hahn, Allen W; Caldwell, W Morton
2006-01-01
We have developed a telemedicine system for monitoring a patient's electrocardiogram during daily activities. The recording system consists of three ECG chest electrodes, a variable-gain instrumentation amplifier, a low-power 8-bit single-chip microcomputer, a 256 KB EEPROM and a 2.4 GHz low-transmitting-power mobile phone (PHS). The complete system is mounted on a single, lightweight, chest electrode array. When heart discomfort is felt, the patient pushes the data transmission switch on the recording system. The system sends the ECG waveforms recorded during the two minutes before and the two minutes after the switch is pressed directly to the hospital server computer via the PHS. The server computer sends the data to the physician on call. The data are displayed on the doctor's Java mobile phone LCD (Liquid Crystal Display), so he or she can monitor the ECG regardless of location. The developed ECG monitoring system is not only applicable to at-home patients, but should also be useful for monitoring hospital patients.
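The pre/post-trigger behaviour described (two minutes before and two minutes after the switch is pressed) can be sketched with a rolling buffer as below; the sampling rate and the send callback are illustrative assumptions rather than details of the device.

from collections import deque

FS_HZ = 125          # assumed ECG sampling rate
WINDOW_S = 120       # two minutes before and after the button press

class EventRecorder:
    def __init__(self, send):
        self.pre = deque(maxlen=FS_HZ * WINDOW_S)   # rolling pre-trigger buffer
        self.snapshot, self.post = [], []
        self.remaining = 0
        self.send = send                             # e.g. transmit via the mobile phone

    def on_sample(self, sample):
        self.pre.append(sample)
        if self.remaining:
            self.post.append(sample)
            self.remaining -= 1
            if self.remaining == 0:                  # post-trigger window complete
                self.send(self.snapshot + self.post)
                self.post = []

    def on_button_press(self):
        self.snapshot = list(self.pre)               # the two minutes before the press
        self.remaining = FS_HZ * WINDOW_S            # collect the two minutes after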
Self-, other-, and joint monitoring using forward models.
Pickering, Martin J; Garrod, Simon
2014-01-01
In the psychology of language, most accounts of self-monitoring assume that it is based on comprehension. Here we outline and develop the alternative account proposed by Pickering and Garrod (2013), in which speakers construct forward models of their upcoming utterances and compare them with the utterance as they produce them. We propose that speakers compute inverse models derived from the discrepancy (error) between the utterance and the predicted utterance and use that to modify their production command or (occasionally) begin anew. We then propose that comprehenders monitor other people's speech by simulating their utterances using covert imitation and forward models, and then comparing those forward models with what they hear. They use the discrepancy to compute inverse models and modify their representation of the speaker's production command, or realize that their representation is incorrect and may develop a new production command. We then discuss monitoring in dialogue, paying attention to sequential contributions, concurrent feedback, and the relationship between monitoring and alignment.
Design and Implementation of a Modern Automatic Deformation Monitoring System
NASA Astrophysics Data System (ADS)
Engel, Philipp; Schweimler, Björn
2016-03-01
The deformation monitoring of structures and buildings is an important task field of modern engineering surveying, ensuring the stability and reliability of the supervised objects over long periods. Several commercial hardware and software solutions for the realization of such monitoring measurements are available on the market. In addition to them, a research team at the University of Applied Sciences in Neubrandenburg (NUAS) is actively developing a software package for monitoring purposes in geodesy and geotechnics, which is distributed under an open source licence and free of charge. The task of managing an open source project is well-known in computer science, but it is fairly new in a geodetic context. This paper contributes to that issue by detailing applications, frameworks, and interfaces for the design and implementation of open hardware and software solutions for sensor control, sensor networks, and data management in automatic deformation monitoring. The paper also discusses how the development effort of networked applications can be reduced by using free programming tools, cloud computing technologies, and rapid prototyping methods.
A Spectral Method for Spatial Downscaling
Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.
2014-01-01
Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
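The estimator itself is not reproduced in the abstract; the sketch below illustrates only the underlying idea of comparing model output and monitoring data scale by scale, using a simple 2-D FFT low-pass/high-pass split with an assumed cutoff.

import numpy as np

def scale_split(field, cutoff_frac=0.1):
    # Split a 2-D field into large-scale (low-frequency) and small-scale components.
    F = np.fft.fft2(field)
    fy = np.fft.fftfreq(field.shape[0])[:, None]
    fx = np.fft.fftfreq(field.shape[1])[None, :]
    low = np.sqrt(fx ** 2 + fy ** 2) <= cutoff_frac
    large = np.real(np.fft.ifft2(F * low))
    return large, field - large

def scalewise_correlation(model_field, obs_field, cutoff_frac=0.1):
    # Correlation between model output and observations at large and small scales.
    (ml, ms), (ol, osmall) = scale_split(model_field, cutoff_frac), scale_split(obs_field, cutoff_frac)
    corr = lambda a, b: float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
    return {"large_scale": corr(ml, ol), "small_scale": corr(ms, osmall)}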
40 CFR 146.84 - Area of review and corrective action.
Code of Federal Regulations, 2013 CFR
2013-07-01
... activity. The area of review is delineated using computational modeling that accounts for the physical and... characterization, monitoring and operational data, and computational modeling, the projected lateral and vertical...
40 CFR 146.84 - Area of review and corrective action.
Code of Federal Regulations, 2012 CFR
2012-07-01
... activity. The area of review is delineated using computational modeling that accounts for the physical and... characterization, monitoring and operational data, and computational modeling, the projected lateral and vertical...
40 CFR 146.84 - Area of review and corrective action.
Code of Federal Regulations, 2011 CFR
2011-07-01
... activity. The area of review is delineated using computational modeling that accounts for the physical and... characterization, monitoring and operational data, and computational modeling, the projected lateral and vertical...
40 CFR 146.84 - Area of review and corrective action.
Code of Federal Regulations, 2014 CFR
2014-07-01
... activity. The area of review is delineated using computational modeling that accounts for the physical and... characterization, monitoring and operational data, and computational modeling, the projected lateral and vertical...
Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping
2014-01-01
EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804
Johnson, Timothy C.; Versteeg, Roelof J.; Ward, Andy; Day-Lewis, Frederick D.; Revil, André
2010-01-01
Electrical geophysical methods have found wide use in the growing discipline of hydrogeophysics for characterizing the electrical properties of the subsurface and for monitoring subsurface processes in terms of the spatiotemporal changes in subsurface conductivity, chargeability, and source currents they govern. Presently, multichannel and multielectrode data collection systems can collect large data sets in relatively short periods of time. Practitioners, however, often are unable to fully utilize these large data sets and the information they contain because of standard desktop-computer processing limitations. These limitations can be addressed by utilizing the storage and processing capabilities of parallel computing environments. We have developed a parallel distributed-memory forward and inverse modeling algorithm for analyzing resistivity and time-domain induced polarization (IP) data. The primary components of the parallel computations include distributed computation of the pole solutions in forward mode, distributed storage and computation of the Jacobian matrix in inverse mode, and parallel execution of the inverse equation solver. We have tested the corresponding parallel code in three efforts: (1) resistivity characterization of the Hanford 300 Area Integrated Field Research Challenge site in Hanford, Washington, U.S.A., (2) resistivity characterization of a volcanic island in the southern Tyrrhenian Sea in Italy, and (3) resistivity and IP monitoring of biostimulation at a Superfund site in Brandywine, Maryland, U.S.A. Inverse analysis of each of these data sets would be limited or impossible in a standard serial computing environment, which underscores the need for parallel high-performance computing to fully utilize the potential of electrical geophysical methods in hydrogeophysical applications.
NASA Technical Reports Server (NTRS)
Schlegel, Todd T. (Inventor); Arenare, Brian (Inventor)
2008-01-01
Cardiac electrical data are received from a patient, manipulated to determine various useful aspects of the ECG signal, and displayed and stored in a useful form using a computer. The computer monitor displays various useful information, and in particular graphically displays various permutations of reduced amplitude zones and kurtosis that increase the rapidity and accuracy of cardiac diagnoses. New criteria for reduced amplitude zones are defined that enhance the sensitivity and specificity for detecting cardiac abnormalities.
Wetland mapping from digitized aerial photography. [Sheboygen Marsh, Sheboygen County, Wisconsin
NASA Technical Reports Server (NTRS)
Scarpace, F. L.; Quirk, B. K.; Kiefer, R. W.; Wynn, S. L.
1981-01-01
Computer assisted interpretation of small scale aerial imagery was found to be a cost effective and accurate method of mapping complex vegetation patterns if high resolution information is desired. This type of technique is suited for problems such as monitoring changes in species composition due to environmental factors and is a feasible method of monitoring and mapping large areas of wetlands. The technique has the added advantage of being in a computer compatible form which can be transformed into any georeference system of interest.
Ayres-de-Campos, Diogo; Rei, Mariana; Nunes, Inês; Sousa, Paulo; Bernardes, João
2017-01-01
SisPorto 4.0 is the most recent version of a program for the computer analysis of cardiotocographic (CTG) signals and ST events, which has been adapted to the 2015 International Federation of Gynaecology and Obstetrics (FIGO) guidelines for intrapartum foetal monitoring. This paper provides a detailed description of the analysis performed by the system, including the signal-processing algorithms involved in identification of basic CTG features and the resulting real-time alerts.
Goshima, Yoshio; Hida, Tomonobu; Gotoh, Toshiyuki
2012-01-01
Axonal transport plays a crucial role in neuronal morphogenesis, survival and function. Despite its importance, however, the molecular mechanisms of axonal transport remain mostly unknown because a simple and quantitative assay system for monitoring this cellular process has been lacking. In order to better characterize the mechanisms involved in axonal transport, we formulate a novel computer-assisted monitoring system of axonal transport. Potential uses of this system and implications for future studies will be discussed.
Geography Students Assess Their Learning Using Computer-Marked Tests.
ERIC Educational Resources Information Center
Hogg, Jim
1997-01-01
Reports on a pilot study designed to assess the potential of computer-marked tests for allowing students to monitor their learning. Students' answers to multiple choice tests were fed into a computer that provided a full analysis of their strengths and weaknesses. Students responded favorably to the feedback. (MJP)
ERIC Educational Resources Information Center
Openshaw, Peter
1988-01-01
Presented are five ideas for A-level biology experiments using a laboratory computer interface. Topics investigated include photosynthesis, yeast growth, animal movements, pulse rates, and oxygen consumption and production by organisms. Includes instructions specific to the BBC computer system. (CW)
A Novel Use of Computer Simulation in an Applied Pharmacokinetics Course.
ERIC Educational Resources Information Center
Sullivan, Timothy J.
1982-01-01
The use of a package of interactive computer programs designed to simulate pharmacokinetic monitoring of drug therapy in a required undergraduate applied pharmacokinetics course is described. Students were assigned the problem of maintaining therapeutic drug concentrations in a computer generated "patient" as an adjunct to classroom instruction.…
Metacognitive Support Accelerates Computer Assisted Learning for Novice Programmers
ERIC Educational Resources Information Center
Rum, Siti Nurulain Mohd; Ismail, Maizatul Akmar
2017-01-01
Computer programming is a part of the curriculum in computer science education, and high drop rates for this subject are a universal problem. Development of metacognitive skills, including the conceptual framework provided by socio-cognitive theories that afford reflective thinking, such as actively monitoring, evaluating, and modifying one's…
Computational Electrocardiography: Revisiting Holter ECG Monitoring.
Deserno, Thomas M; Marx, Nikolaus
2016-08-05
Since 1942, when Goldberger introduced the 12-lead electrocardiography (ECG), this diagnostic method has not changed. After 70 years of technologic developments, we revisit Holter ECG from recording to understanding. A fundamental change is foreseen towards "computational ECG" (CECG), where continuous monitoring produces big data volumes that are impossible to inspect conventionally and require efficient computational methods. We draw parallels between CECG and computational biology, in particular with respect to computed tomography, computed radiology, and computed photography. From that, we identify the technology and methodology needed for CECG. Real-time transfer of raw data into meaningful parameters that are tracked over time will allow prediction of serious events, such as sudden cardiac death. Evolved from Holter's technology, portable smartphones with Bluetooth-connected textile-embedded sensors will capture noisy raw data (recording), process meaningful parameters over time (analysis), and transfer them to cloud services for sharing (handling), predicting serious events, and raising alarms (understanding). To make this happen, the following fields need more research: (i) signal processing, (ii) cycle decomposition, (iii) cycle normalization, (iv) cycle modeling, (v) clinical parameter computation, (vi) physiological modeling, and (vii) event prediction. We shall start immediately developing methodology for CECG analysis and understanding.
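As a rough sketch of the first two steps listed above (signal processing and cycle decomposition), the following Python code band-pass filters a single-lead ECG and splits it into beats around detected R peaks. The filter band, peak thresholds, and function names are illustrative assumptions, not the authors' methodology.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def bandpass(sig, fs, lo=0.5, hi=40.0, order=3):
    # Step (i) signal processing: suppress baseline wander and high-frequency noise
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

def decompose_cycles(sig, fs):
    # Step (ii) cycle decomposition: locate R peaks and split the record into beats
    clean = bandpass(sig, fs)
    peaks, _ = find_peaks(clean, distance=int(0.4 * fs),
                          height=np.percentile(clean, 95))
    beats = [clean[a:b] for a, b in zip(peaks[:-1], peaks[1:])]
    rr_intervals = np.diff(peaks) / fs   # candidate clinical parameters (step v)
    return beats, rr_intervals
```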
A portable toolbox to monitor and evaluate signal operations.
DOT National Transportation Integrated Search
2011-10-01
Researchers from the Texas Transportation Institute developed a portable tool consisting of a field-hardened computer interfacing with the traffic signal cabinet through special enhanced Bus Interface Units. The toolbox consisted of a monitoring t...
Measured energy savings and performance of power-managed personal computers and monitors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nordman, B.; Piette, M.A.; Kinney, K.
1996-08-01
Personal computers and monitors are estimated to use 14 billion kWh/year of electricity, with power management potentially saving $600 million/year by the year 2000. The effort to capture these savings is led by the US Environmental Protection Agency's Energy Star program, which specifies a 30W maximum demand for the computer and for the monitor when in a "sleep" or idle mode. In this paper the authors discuss measured energy use and estimated savings for power-managed (Energy Star compliant) PCs and monitors. They collected electricity use measurements of six power-managed PCs and monitors in their office and five from two other research projects. The devices are diverse in machine type, use patterns, and context. The analysis method estimates the time spent in each system operating mode (off, low-, and full-power) and combines these with real power measurements to derive hours of use per mode, energy use, and energy savings. Three schedules are explored in the "As-operated," "Standardized," and "Maximum" savings estimates. Energy savings are established by comparing the measurements to a baseline with power management disabled. As-operated energy savings for the eleven PCs and monitors ranged from zero to 75 kWh/year. Under the standard operating schedule (on 20% of nights and weekends), the savings are about 200 kWh/year. An audit of power management features and configurations for several dozen Energy Star machines found only 11% of CPUs fully enabled and about two thirds of monitors successfully power managed. The highest priority for greater power management savings is to enable monitors, as opposed to CPUs, since they are generally easier to configure, less likely to interfere with system operation, and have greater savings. The difficulty of properly configuring PCs and monitors is the largest current barrier to achieving the savings potential from power management.
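The mode-based accounting described above reduces to simple arithmetic: hours per mode multiplied by measured demand per mode, compared against a baseline with power management disabled. The numbers in the sketch below are illustrative placeholders, not the paper's measurements.

```python
# Hours per year in each mode and demand per mode (illustrative numbers only;
# the paper's measured values differ by device).
hours = {"full": 2000, "low": 1500, "off": 5260}          # 8760 h/year total
power_managed_w = {"full": 55, "low": 25, "off": 2}       # PC + monitor, watts
power_baseline_w = {"full": 55, "low": 55, "off": 2}      # power management disabled

def annual_kwh(hours_by_mode, watts_by_mode):
    return sum(hours_by_mode[m] * watts_by_mode[m] for m in hours_by_mode) / 1000.0

savings = annual_kwh(hours, power_baseline_w) - annual_kwh(hours, power_managed_w)
print(f"Estimated savings: {savings:.0f} kWh/year")   # 45 kWh/year with these numbers
```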
An overview of wireless structural health monitoring for civil structures.
Lynch, Jerome Peter
2007-02-15
Wireless monitoring has emerged in recent years as a promising technology that could greatly impact the field of structural monitoring and infrastructure asset management. This paper is a summary of research efforts that have resulted in the design of numerous wireless sensing unit prototypes explicitly intended for implementation in civil structures. Wireless sensing units integrate wireless communications and mobile computing with sensors to deliver a relatively inexpensive sensor platform. A key design feature of wireless sensing units is the collocation of computational power and sensors; the tight integration of computing with a wireless sensing unit provides sensors with the opportunity to self-interrogate measurement data. In particular, there is strong interest in using wireless sensing units to build structural health monitoring systems that interrogate structural data for signs of damage. After the hardware and the software designs of wireless sensing units are completed, the Alamosa Canyon Bridge in New Mexico is utilized to validate their accuracy and reliability. To improve the ability of low-cost wireless sensing units to detect the onset of structural damage, the wireless sensing unit paradigm is extended to include the capability to command actuators and active sensors.
Integrated control and health management. Orbit transfer rocket engine technology program
NASA Technical Reports Server (NTRS)
Holzmann, Wilfried A.; Hayden, Warren R.
1988-01-01
To ensure controllability of the baseline design for a 7500 pound thrust, 10:1 throttleable, dual expanded cycle, Hydrogen-Oxygen, orbit transfer rocket engine, an Integrated Controls and Health Monitoring concept was developed. This included: (1) dynamic engine simulations using a TUTSIM-derived computer code; (2) analysis of various control methods; (3) a Failure Modes Analysis to identify critical sensors; (4) a survey of applicable sensor technology; and (5) a study of Health Monitoring philosophies. The engine design was found to be controllable over the full throttling range by using 13 valves, including an oxygen turbine bypass valve to control mixture ratio and a hydrogen turbine bypass valve, used in conjunction with the oxygen bypass, to control thrust. Classic feedback control methods are proposed along with specific requirements for valves, sensors, and the controller. Expanding on the control system, a Health Monitoring system is proposed, including suggested computing methods and the following recommended sensors: (1) fiber optic and silicon bearing deflectometers; (2) capacitive shaft displacement sensors; and (3) hot spot thermocouple arrays. Further work is needed to refine and verify the dynamic simulations and control algorithms, to advance sensor capabilities, and to develop the Health Monitoring computational methods.
Manage habitat, monitor species [Chapter 10
Michael K. Schwartz; Jamie S. Sanderlin; William M. Block
2015-01-01
Monitoring is the collection of data over time. We monitor many things: temperatures at local weather stations, daily changes in sea level along the coastline, annual prevalence of specific diseases, sunspot cycles, unemployment rates, inflation, commodity futures-the list is virtually endless. In wildlife biology, we also conduct a lot of monitoring, most commonly...
NASA Astrophysics Data System (ADS)
Chen, B.; Harp, D. R.; Lin, Y.; Keating, E. H.; Pawar, R.
2017-12-01
Monitoring is a crucial aspect of geologic carbon sequestration (GCS) risk management. It has gained importance as a means to ensure CO2 is safely and permanently stored underground throughout the lifecycle of a GCS project. Three issues are often involved in a monitoring project: (i) where is the optimal location to place the monitoring well(s), (ii) what type of data (pressure, rate and/or CO2 concentration) should be measured, and (iii) what is the optimal frequency to collect the data. In order to address these important issues, a filtering-based data assimilation procedure is developed to perform the monitoring optimization. The optimal monitoring strategy is selected based on the uncertainty reduction of the objective of interest (e.g., cumulative CO2 leak) for all potential monitoring strategies. To reduce the computational cost of the filtering-based data assimilation process, two machine-learning algorithms, Support Vector Regression (SVR) and Multivariate Adaptive Regression Splines (MARS), are used to develop computationally efficient reduced-order models (ROMs) from full numerical simulations of CO2 and brine flow. The proposed framework for GCS monitoring optimization is demonstrated with two examples: a simple 3D synthetic case and a real field case, the Rock Spring Uplift carbon storage site in southwestern Wyoming.
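A minimal sketch of the reduced-order-model idea is given below: a Support Vector Regression surrogate is fitted to a handful of simulator input/output pairs and then queried in place of the full simulation inside the data-assimilation loop. The input variables, synthetic response function, and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for full-physics runs: inputs are (permeability, injection
# rate, monitoring-well distance); output is a cumulative-leak proxy. Illustrative only.
rng = np.random.default_rng(0)
X = rng.uniform([1e-14, 0.5, 100.0], [1e-12, 5.0, 2000.0], size=(200, 3))
y = 1e3 * X[:, 0] ** 0.3 * X[:, 1] / np.log(X[:, 2])

rom = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.001))
rom.fit(X[:150], y[:150])

# The ROM replaces the expensive simulator when evaluating candidate monitoring strategies
rel_err = np.abs(rom.predict(X[150:]) - y[150:]) / np.abs(y[150:])
print("median relative error on held-out runs:", np.median(rel_err))
```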
Improved Real-Time Monitoring Using Multiple Expert Systems
NASA Technical Reports Server (NTRS)
Schwuttke, Ursula M.; Angelino, Robert; Quan, Alan G.; Veregge, John; Childs, Cynthia
1993-01-01
Monitor/Analyzer of Real-Time Voyager Engineering Link (MARVEL) computer program implements combination of techniques of both conventional automation and artificial intelligence to improve monitoring of complicated engineering system. Designed to support ground-based operations of Voyager spacecraft, also adapted to other systems. Enables more-accurate monitoring and analysis of telemetry, enhances productivity of monitoring personnel, reduces required number of such personnel by performing routine monitoring tasks, and helps ensure consistency in face of turnover of personnel. Programmed in C language and includes commercial expert-system software shell also written in C.
User guide to a command and control system; a part of a prelaunch wind monitoring program
NASA Technical Reports Server (NTRS)
Cowgill, G. R.
1976-01-01
This document, intended as a user manual, describes a set of programs called the Command and Control System (CCS) for operation by the personnel supporting the wind monitoring portion of the launch mission. Wind data obtained by tracking balloons are sent electronically over telephone lines to other locations. Steering commands for the on-board computer are computed by a system called ADDJUST, which relays this data. Data are received and automatically stored in a microprocessor, then transferred via a real-time program to the UNIVAC 1100/40 computer. At this point the data are available to be used by the Command and Control System.
The image-interpretation-workstation of the future: lessons learned
NASA Astrophysics Data System (ADS)
Maier, S.; van de Camp, F.; Hafermann, J.; Wagner, B.; Peinsipp-Byma, E.; Beyerer, J.
2017-05-01
In recent years, professionally used workstations have become increasingly complex and multi-monitor systems are more and more common. Novel interaction techniques like gesture recognition were developed but are used mostly for entertainment and gaming purposes. These human-computer interfaces are not yet widely used in professional environments where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation-workstation of the future, a multi-monitor workplace comprised of four screens. Each screen is dedicated to a special task in the image interpreting process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool and a reporting tool. To further support the complex task of image interpreting, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies like touchscreens, face identification and speech recognition. A set of experiments was conducted to evaluate the usability of the different interaction systems. Two typical extensive tasks of image interpreting were devised and approved by military personnel. They were then tested with a current setup of an image interpreting workstation using only keyboard and mouse against our image-interpretation-workstation of the future. To get a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head pose estimation and the face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.
NASA Astrophysics Data System (ADS)
Ukwatta, E.; Awad, J.; Ward, A. D.; Samarabandu, J.; Krasinski, A.; Parraga, G.; Fenster, A.
2011-03-01
Three-dimensional ultrasound (3D US) vessel wall volume (VWV) measurements provide high measurement sensitivity and reproducibility for the monitoring and assessment of carotid atherosclerosis. In this paper, we describe a semiautomated approach based on the level set method to delineate the media-adventitia and lumen boundaries of the common carotid artery from 3D US images to support the computation of VWV. Due to the presence of plaque and US image artifacts, the carotid arteries are challenging to segment using image information alone. Our segmentation framework combines several image cues with domain knowledge and limited user interaction. Our method was evaluated with respect to manually outlined boundaries on 430 2D US images extracted from 3D US images of 30 patients who have carotid stenosis of 60% or more. The VWV given by our method differed from that given by manual segmentation by 6.7% +/- 5.0%. For the media-adventitia and lumen segmentations, respectively, our method yielded Dice coefficients of 95.2% +/- 1.6%, 94.3% +/- 2.6%, mean absolute distances of 0.3 +/- 0.1 mm, 0.2 +/- 0.1 mm, maximum absolute distances of 0.8 +/- 0.4 mm, 0.6 +/- 0.3 mm, and volume differences of 4.2% +/- 3.1%, 3.4% +/- 2.6%. The realization of a semi-automated segmentation method will accelerate the translation of 3D carotid US to clinical care for the rapid, non-invasive, and economical monitoring of atherosclerotic disease progression and regression during therapy.
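For reference, the overlap and volume metrics reported above can be computed from binary masks as in the following sketch (the mask generation and voxel size are illustrative; this is not the authors' evaluation code).

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice overlap between two binary masks (e.g., algorithm vs. manual outline)."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_difference_pct(seg_a, seg_b, voxel_volume=1.0):
    va, vb = seg_a.sum() * voxel_volume, seg_b.sum() * voxel_volume
    return 100.0 * abs(va - vb) / vb

# Toy example: two overlapping discs on a 2D grid stand in for segmentation masks
yy, xx = np.mgrid[0:100, 0:100]
m1 = (xx - 50) ** 2 + (yy - 50) ** 2 < 30 ** 2
m2 = (xx - 55) ** 2 + (yy - 50) ** 2 < 30 ** 2
print(round(dice_coefficient(m1, m2), 3), round(volume_difference_pct(m1, m2), 2))
```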
Six steps to a successful dose-reduction strategy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, M.
1995-03-01
The increased importance of demonstrating achievement of the ALARA principle has helped produce a proliferation of dose-reduction ideas. Across a company there may be many dose-reduction items being pursued in a variety of areas. However, companies have limited resources; ensuring that funding is directed to the items which will produce the most benefit, and that all areas apply a common policy, requires a dose-reduction strategy. Six steps were identified in formulating the dose-reduction strategy for Rolls-Royce and Associates (RRA): (1) collating the ideas; (2) quantitatively evaluating them on a common basis; (3) prioritizing the ideas in terms of cost benefit; (4) implementing the highest priority items; (5) monitoring their success; (6) periodically reviewing the strategy. Inherent in producing the dose-reduction strategy has been a comprehensive dose database and the RRA-developed dose management computer code DOMAIN, which allows prediction of dose rates and dose. The database enabled high task dose items to be identified, assisted in evaluating dose benefits, and monitored dose trends once items had been implemented. The DOMAIN code was used both in quantifying some of the project dose benefits and in providing results, such as dose contours, that were used in some of the dose-reduction items themselves. In all, over fifty dose-reduction items were evaluated in the strategy process and the items which will give greatest benefit are being implemented. The strategy has been successful in giving renewed impetus and direction to dose-reduction management.
Domestic Violence against Men: Know the Signs
... complete call and texting history. Use your home computer cautiously. Your abuser might use spyware to monitor ... and the websites you visit. Consider using a computer at work, at the library or at a ...
Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets
NASA Astrophysics Data System (ADS)
Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter
2017-06-01
The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and is composed of two major parts: one is the calibration procedure, which establishes the relationship between the camera system and the theodolite system; the other is automatic target detection in the image by various methods of photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30-times magnification of the telescope. The calibration consists of 7 parameters to estimate. We use coded targets, which are common tools in photogrammetry for orientation, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.
Radioactive release during nuclear accidents in Chernobyl and Fukushima
NASA Astrophysics Data System (ADS)
Nur Ain Sulaiman, Siti; Mohamed, Faizal; Rahim, Ahmad Nabil Ab
2018-01-01
Nuclear accidents that occurred in Chernobyl and Fukushima have initiated many research efforts to understand the cause and mechanism of radioactive release within the reactor compound and to the environment. The most common types of radionuclide released are the fission products from the irradiated fuel rods themselves. In the case of a nuclear accident, monitoring focuses mostly on the release of noble gases, I-131 and Cs-137. As these are the only accidents that have been rated at International Nuclear Event Scale (INES) Level 7, the radioactive release to the environment was one of the critical aspects to be monitored. It was estimated that the release of radioactive material to the atmosphere due to the Fukushima accident was approximately 10% of that of the Chernobyl accident. By referring to previous reports using computational code systems to model the release rate, the release activity of I-131 and Cs-137 in Chernobyl was significantly higher compared to Fukushima. The simulation codes also showed that Chernobyl had a higher release rate of both radionuclides on the day of the accident. Other factors affecting the radioactive release in the Fukushima and Chernobyl accidents, such as the current reactor technology and safety measures, are also compared for discussion.
Bai, Yong; Sow, Daby; Vespa, Paul; Hu, Xiao
2016-01-01
Continuous high-volume and high-frequency brain signals such as intracranial pressure (ICP) and electroencephalographic (EEG) waveforms are commonly collected by bedside monitors in neurocritical care. While such signals often carry early signs of neurological deterioration, detecting these signs in real time with conventional data processing methods mainly designed for retrospective analysis has been extremely challenging. Such methods are not designed to handle the large volumes of waveform data produced by bedside monitors. In this pilot study, we address this challenge by building a prototype system using the IBM InfoSphere Streams platform, a scalable stream computing platform, to detect unstable ICP dynamics in real time. The system continuously receives electrocardiographic and ICP signals and analyzes ICP pulse morphology looking for deviations from a steady state. We also designed a Web interface to display in real time the result of this analysis in a Web browser. With this interface, physicians are able to ubiquitously check on the status of their patients and gain direct insight into and interpretation of the patient's state in real time. The prototype system has been successfully tested prospectively on live hospitalized patients.
Corsalini, Massimo; Pettini, Francesco; Di Venere, Daniela; Ballini, Andrea; Chiatante, Giuseppe; Lamberti, Luciano; Pappalettere, Carmine; Fiorentino, Michele; Uva, Antonio E.; Monno, Giuseppe; Boccaccio, Antonio
2016-01-01
Endocanalar posts are necessary to build up and retain coronal restorations but they do not reinforce dental roots. It was observed that the dislodgement of post-retained restorations commonly occurs after several years of function and long-term retention may be influenced by various factors such as temperature changes. Temperature changes, in fact, produce micrometric deformations of post and surrounding tissues/materials that may generate high stress concentrations at the interface, thus leading to failure. In this study we present an optical system based on the projection moiré technique that has been utilized to monitor the displacement field of endocanalar glass-fibre posts subjected to temperature changes. Measurements were performed on forty samples and the average displacement values registered at the apical and middle region were determined for six different temperature levels. A total of 480 displacement measurements were hence performed. The values of the standard deviation computed for each of the tested temperatures over the forty samples appear reasonably small, which proves the robustness and the reliability of the proposed optical technique. The possible implications for the use of the system in the applicative context were discussed. PMID:27990186
"Data Day" and "Data Night" Definitions - Towards Producing Seamless Global Satellite Imagery
NASA Astrophysics Data System (ADS)
Schmaltz, J. E.
2017-12-01
For centuries, the art and science of cartography has struggled with the challenge of mapping the round earth on to a flat page, or a flat computer monitor. Earth observing satellites with continuous monitoring of our planet have added the additional complexity of the time dimension to this procedure. The most common current practice is to segment this data by 24-hour Coordinated Universal Time (UTC) day and then split the day into sun side "Data Day" and shadow side "Data Night" global imagery that spans from dateline to dateline. Due to the nature of satellite orbits, simply binning the data by UTC date produces significant discontinuities at the dateline for day images and at Greenwich for night images. Instead, imagery could be generated in a fashion that follows the spatial and temporal progression of the satellite which would produce seamless imagery everywhere on the globe for all times. This presentation will explore approaches to produce such imagery but will also address some of the practical and logistical difficulties in implementing such changes. Topics will include composites versus granule/orbit based imagery, day/night versus ascending/descending definitions, and polar versus global projections.
NASA Astrophysics Data System (ADS)
Sim, Sung-Han; Spencer, Billie F., Jr.; Park, Jongwoong; Jung, Hyungjo
2012-04-01
Wireless Smart Sensor Networks (WSSNs) facilitate a new paradigm for structural identification and monitoring of civil infrastructure. Conventional monitoring systems based on wired sensors and centralized data acquisition and processing have been considered challenging and costly due to cabling and expensive equipment and maintenance costs. WSSNs have emerged as a technology that can overcome such difficulties, making deployment of a dense array of sensors on large civil structures both feasible and economical. However, as opposed to wired sensor networks in which centralized data acquisition and processing is common practice, WSSNs require decentralized computing algorithms to reduce data transmission due to the limitations associated with wireless communication. Thus, several system identification methods have been implemented to process sensor data and extract essential information, including the Natural Excitation Technique with the Eigensystem Realization Algorithm, Frequency Domain Decomposition (FDD), and the Random Decrement Technique (RDT); however, Stochastic Subspace Identification (SSI) has not been fully utilized in WSSNs, although SSI has strong potential to enhance system identification. This study presents decentralized system identification using SSI in WSSNs. The approach is implemented on MEMSIC's Imote2 sensor platform and experimentally verified using a 5-story shear building model.
Compact spectrometer for precision studies of multimode behavior in an extended-cavity diode laser
NASA Astrophysics Data System (ADS)
Roach, Timothy; Golemi, Josian; Krueger, Thomas
2016-05-01
We have built a compact, inexpensive, high-precision spectrometer and used it to investigate the tuning behavior of a grating stabilized extended-cavity diode laser (ECDL). A common ECDL design uses a laser chip with an uncoated (partially reflecting) front facet, and the laser output exhibits a complicated pattern of mode hops as the frequency is tuned, in some cases even showing chaotic dynamics. Our grating spectrometer (based on a design by White & Scholten) monitors a span of 4000 GHz (8 nm at 780 nm) with a linewidth of 3 GHz, which with line-splitting gives a precision of 0.02 GHz in determining the frequency of a laser mode. We have studied multimode operation of the ECDL, tracking two or three simultaneous chip cavity modes (spacing ~ 30 GHz) during tuning via current or piezo control of the external cavity. Simultaneous output on adjacent external cavity modes (spacing ~ 5 GHz) is monitored by measuring an increase in the spectral linewidth. Computer-control of the spectrometer (for line-fitting and averaging) and of the ECDL (electronic tuning) allows rapid collection of spectral data sets, which we will use to test mathematical simulation models of the non-linear laser cavity interactions.
Historical data recording for process computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, J.C.; Sellars, H.L.
1981-11-01
Computers have been used to monitor and control chemical and refining processes for more than 15 years. During this time, there has been a steady growth in the variety and sophistication of the functions performed by these process computers. Early systems were limited to maintaining only current operating measurements, available through crude operator's consoles or noisy teletypes. The value of retaining a process history, that is, a collection of measurements over time, became apparent, and early efforts produced shift and daily summary reports. The need for improved process historians which record, retrieve and display process information has grown as process computers assume larger responsibilities in plant operations. This paper describes newly developed process historian functions that have been used on several of its in-house process monitoring and control systems in Du Pont factories. 3 refs.
Processing Diabetes Mellitus Composite Events in MAGPIE.
Brugués, Albert; Bromuri, Stefano; Barry, Michael; Del Toro, Óscar Jiménez; Mazurkiewicz, Maciej R; Kardas, Przemyslaw; Pegueroles, Josep; Schumacher, Michael
2016-02-01
The focus of this research is the definition of programmable expert Personal Health Systems (PHS) to monitor patients affected by chronic diseases, using agent-oriented programming and mobile computing to represent the interactions happening amongst the components of the system. The paper also discusses issues of knowledge representation within the medical domain when dealing with temporal patterns concerning the physiological values of the patient. In the presented agent-based PHS the doctors can personalize for each patient monitoring rules that can be defined in a graphical way. Furthermore, to achieve better scalability, the computations for monitoring the patients are distributed among their devices rather than being performed in a centralized server. The system is evaluated using data from 21 diabetic patients to detect temporal patterns according to a set of defined monitoring rules. The system's scalability is evaluated by comparing it with a centralized approach. The evaluation concerning the detection of temporal patterns highlights the system's ability to monitor chronic patients affected by diabetes. Regarding scalability, the results show that an approach exploiting mobile computing is more scalable than a centralized approach and is therefore more likely to satisfy the needs of next-generation PHSs. PHSs are becoming an adopted technology to deal with the surge of patients affected by chronic illnesses. This paper discusses architectural choices to make an agent-based PHS more scalable by using a distributed mobile computing approach. It also discusses how to model the medical knowledge in the PHS in such a way that it is modifiable at run time. The evaluation highlights the necessity of distributing the reasoning to the mobile part of the system and shows that modifiable rules are able to deal with changes in the lifestyle of patients affected by chronic illnesses.
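The following sketch shows one way a graphically defined, run-time-modifiable monitoring rule might be represented in code: a temporal pattern that fires when a physiological value stays above a threshold for a sustained period. The rule structure, threshold, and duration are illustrative assumptions, not the MAGPIE agent language.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Tuple

@dataclass
class SustainedHighRule:
    """Fires when a physiological value stays above `threshold` for `duration`.
    The defaults are illustrative and would be personalized per patient."""
    threshold: float = 10.0                 # e.g. glucose in mmol/L
    duration: timedelta = timedelta(hours=2)

    def check(self, samples: List[Tuple[datetime, float]]) -> bool:
        start = None
        for t, value in sorted(samples):
            if value > self.threshold:
                start = start or t
                if t - start >= self.duration:
                    return True              # temporal pattern detected -> alert
            else:
                start = None                 # run broken, reset
        return False

rule = SustainedHighRule()
now = datetime(2016, 2, 1, 8, 0)
readings = [(now + timedelta(minutes=30 * i), 11.0) for i in range(6)]
print(rule.check(readings))   # True: above threshold for at least 2 hours
```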
Automating slope monitoring in mines with terrestrial lidar scanners
NASA Astrophysics Data System (ADS)
Conforti, Dario
2014-05-01
Static terrestrial laser scanners (TLS) have been an important component of slope monitoring for some time, and many solutions for monitoring the progress of a slide have been devised over the years. However, all of these solutions have required users to operate the lidar equipment in the field, creating a high cost in time and resources, especially if the surveys must be performed very frequently. This paper presents a new solution for monitoring slides, developed using a TLS and an automated data acquisition, processing and analysis system. In this solution, a TLS is permanently mounted within sight of the target surface and connected to a control computer. The control software on the computer automatically triggers surveys according to a user-defined schedule, parses data into point clouds, and compares data against a baseline. The software can base the comparison against either the original survey of the site or the most recent survey, depending on whether the operator needs to measure the total or recent movement of the slide. If the displacement exceeds a user-defined safety threshold, the control computer transmits alerts via SMS text messaging and/or email, including graphs and tables describing the nature and size of the displacement. The solution can also be configured to trigger the external visual/audio alarm systems. If the survey areas contain high-traffic areas such as roads, the operator can mark them for exclusion in the comparison to prevent false alarms. To improve usability and safety, the control computer can connect to a local intranet and allow remote access through the software's web portal. This enables operators to perform most tasks with the TLS from their office, including reviewing displacement reports, downloading survey data, and adjusting the scan schedule. This solution has proved invaluable in automatically detecting and alerting users to potential danger within the monitored areas while lowering the cost and work required for monitoring. An explanation of the entire system and a post-acquisition data demonstration will be presented.
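The compare-against-baseline step can be sketched as a nearest-neighbour distance check between the latest point cloud and the baseline cloud, as below. The thresholds, alarm criterion, and synthetic data are assumptions; a real deployment would also mask excluded zones (e.g., roads) and call the SMS/email alert hooks described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def displacement_report(baseline_xyz, latest_xyz, threshold_m=0.05):
    """For each point in the latest scan, distance to the nearest baseline point."""
    tree = cKDTree(baseline_xyz)
    dist, _ = tree.query(latest_xyz)
    moved = dist > threshold_m
    return {
        "max_displacement_m": float(dist.max()),
        "fraction_over_threshold": float(moved.mean()),
        "alarm": bool(moved.mean() > 0.01),   # assumed criterion: >1% of surface moved
    }

# Toy demonstration: a flat baseline surface and a new scan with a local bulge
rng = np.random.default_rng(1)
base = rng.uniform(0, 10, size=(5000, 3)); base[:, 2] = 0.0
scan = base.copy()
bulge = (scan[:, 0] - 5) ** 2 + (scan[:, 1] - 5) ** 2 < 1.0
scan[bulge, 2] += 0.2
print(displacement_report(base, scan))
```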
Assessment of toxic metals in waste personal computers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolias, Konstantinos; Hahladakis, John N., E-mail: john_chach@yahoo.gr; Gidarakos, Evangelos, E-mail: gidarako@mred.tuc.gr
Highlights: • Waste personal computers were collected and dismantled into their main parts. • Motherboards, monitors and plastic housing were examined for their metal content. • Concentrations measured were compared to the RoHS Directive, 2002/95/EC. • Pb in motherboards and funnel glass of devices released <2006 was above the limit. • Waste personal computers need to be recycled and managed in an environmentally sound way. - Abstract: Considering the enormous production of waste personal computers nowadays, it is obvious that the study of their composition is necessary in order to regulate their management and prevent any environmental contamination caused by their inappropriate disposal. This study aimed at determining the toxic metals content of motherboards (printed circuit boards), monitor glass and monitor plastic housing of two Cathode Ray Tube (CRT) monitors, three Liquid Crystal Display (LCD) monitors, one LCD touch screen monitor and six motherboards, all of which were discarded. In addition, concentrations of chromium (Cr), cadmium (Cd), lead (Pb) and mercury (Hg) were compared with the respective limits set by the RoHS 2002/95/EC Directive, that was recently renewed by the 2012/19/EU recast, in order to verify manufacturers' compliance with the regulation. The research included disassembly, pulverization, digestion and chemical analyses of all the aforementioned devices. The toxic metals content of all samples was determined using Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). The results demonstrated that concentrations of Pb in motherboards and funnel glass of devices with release dates before 2006, that is when the RoHS Directive came into force, exceeded the permissible limit. In general, apart from Pb, higher metal concentrations were detected in motherboards in comparison with plastic housing and glass samples. Finally, the results of this work were encouraging, since concentrations of the metals referred to in the RoHS Directive were found at lower levels than the legislative limits.
13 point video tape quality guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaunt, R.
1997-05-01
Until high definition television (ATV) arrives, in the U.S. we must still contend with the National Television Systems Committee (NTSC) video standard (or PAL or SECAM, depending on your country). NTSC, a 40-year-old standard designed for transmission of color video camera images over a small bandwidth, is not well suited for the sharp, full-color images that today's computers are capable of producing. PAL and SECAM also suffer from many of NTSC's problems, but to varying degrees. Video professionals, when working with computer graphic (CG) images, use two monitors: a computer monitor for producing CGs and an NTSC monitor to view how a CG will look on video. More often than not, the NTSC image will differ significantly from the CG image, and outputting it to NTSC as an artist works enables him or her to see the image as others will see it. Below are thirteen guidelines designed to increase the quality of computer graphics recorded onto video tape. Viewing your work in NTSC and attempting to follow the tips below will enable you to create higher quality videos. No video is perfect, so don't expect to abide by every guideline every time.
Infrared imaging based hyperventilation monitoring through respiration rate estimation
NASA Astrophysics Data System (ADS)
Basu, Anushree; Routray, Aurobinda; Mukherjee, Rashmi; Shit, Suprosanna
2016-07-01
A change in the skin temperature is used as an indicator of physical illness which can be detected through infrared thermography. Thermograms or thermal images can be used as an effective diagnostic tool for monitoring and diagnosis of various diseases. This paper describes an infrared thermography based approach for detecting hyperventilation caused by stress and anxiety in human beings by computing their respiration rates. The work employs computer vision techniques for tracking the region of interest from thermal video to compute the breath rate. Experiments have been performed on 30 subjects. Corner feature extraction using the Minimum Eigenvalue (Shi-Tomasi) algorithm and registration using the Kanade-Lucas-Tomasi algorithm have been used here. The thermal signature around the extracted region is detected and subsequently filtered through a band pass filter to compute the respiration profile of an individual. If the respiration profile shows an unusual pattern and exceeds the threshold, we conclude that the person is stressed and tending to hyperventilate. Results obtained are compared with standard contact-based methods, which have shown significant correlations. It is envisaged that the thermal image based approach not only will help in detecting hyperventilation but can also assist in regular stress monitoring as it is a non-invasive method.
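A minimal sketch of the final stage, estimating breaths per minute from the mean intensity of the tracked region of interest, is shown below. The band limits, hyperventilation threshold, and synthetic signal are assumptions, and the Shi-Tomasi/KLT tracking step is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def respiration_rate_bpm(roi_means, fps, low_hz=0.1, high_hz=0.85):
    """Breaths per minute from the mean thermal intensity of the tracked nostril
    ROI over time. The band roughly covers 6-50 breaths/min (assumed limits)."""
    b, a = butter(2, [low_hz / (fps / 2), high_hz / (fps / 2)], btype="band")
    resp = filtfilt(b, a, roi_means - np.mean(roi_means))
    peaks, _ = find_peaks(resp, distance=int(fps / high_hz))
    return 60.0 * len(peaks) / (len(roi_means) / fps)

# Synthetic 60 s recording at 30 fps with a 0.3 Hz (18 breaths/min) rhythm
fps = 30
t = np.arange(0, 60, 1 / fps)
signal = 0.5 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)
rate = respiration_rate_bpm(signal, fps)
print(f"{rate:.1f} breaths/min; hyperventilating: {rate > 25}")  # 25 bpm is an assumed cutoff
```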
From computers to ubiquitous computing by 2010: health care.
Aziz, Omer; Lo, Benny; Pansiot, Julien; Atallah, Louis; Yang, Guang-Zhong; Darzi, Ara
2008-10-28
Over the past decade, miniaturization and cost reduction in semiconductors have led to computers smaller in size than a pinhead with powerful processing abilities that are affordable enough to be disposable. Similar advances in wireless communication, sensor design and energy storage have meant that the concept of a truly pervasive 'wireless sensor network', used to monitor environments and objects within them, has become a reality. The need for a wireless sensor network designed specifically for human body monitoring has led to the development of wireless 'body sensor network' (BSN) platforms composed of tiny integrated microsensors with on-board processing and wireless data transfer capability. The ubiquitous computing abilities of BSNs offer the prospect of continuous monitoring of human health in any environment, be it home, hospital, outdoors or the workplace. This pervasive technology comes at a time when Western world health care costs have sharply risen, reflected by increasing expenditure on health care as a proportion of gross domestic product over the last 20 years. Drivers of this rise include an ageing post 'baby boom' population, higher incidence of chronic disease and the need for earlier diagnosis. This paper outlines the role of pervasive health care technologies in providing more efficient health care.
Structural health monitoring for DOT using magnetic shape memory alloy cables in concrete
NASA Astrophysics Data System (ADS)
Davis, Allen; Mirsayar, Mirmilad; Sheahan, Emery; Hartl, Darren
2018-03-01
Embedding shape memory alloy (SMA) wires in concrete components offers the potential to monitor their structural health via external magnetic field sensing. Currently, structural health monitoring (SHM) is dominated by acoustic emission and vibration-based methods. Thus, it is attractive to pursue alternative damage sensing techniques that may lower the cost or increase the accuracy of SHM. In this work, SHM via magnetic field detection applied to embedded magnetic shape memory alloy (MSMA) is demonstrated both experimentally and using computational models. A concrete beam containing iron-based MSMA wire is subjected to a 3-point bend test where structural damage is induced, thereby resulting in a localized phase change of the MSMA wire. Magnetic field lines passing through the embedded MSMA domain are altered by this phase change and can thus be used to detect damage within the structure. A good correlation is observed between the computational and experimental results. Additionally, the implementation of stranded MSMA cables in place of the MSMA wire is assessed through similar computational models. The combination of these computational models and their subsequent experimental validation provide sufficient support for the feasibility of SHM using magnetic field sensing via MSMA embedded components.
A New Approach to Monitoring Coastal Marshes for Persistent Flooding
NASA Astrophysics Data System (ADS)
Kalcic, M. T.; Underwood, L. W.; Fletcher, R. M.
2012-12-01
Many areas in coastal Louisiana are below sea level and protected from flooding by a system of natural and man-made levees. Flooding is common when the levees are overtopped by storm surge or rising rivers. Many levees in this region are further stressed by erosion and subsidence. The floodwaters can become constricted by levees and trapped, causing prolonged inundation. Vegetative communities in coastal regions, from fresh swamp forest to saline marsh, can be negatively affected by inundation and changes in salinity. As saltwater persists, it can have a toxic effect upon marsh vegetation causing die off and conversion to open water types, destroying valuable species habitats. The length of time the water persists and the average annual salinity are important variables in modeling habitat switching (cover type change). Marsh type habitat switching affects fish, shellfish, and wildlife inhabitants, and can affect the regional ecosystem and economy. There are numerous restoration and revitalization projects underway in the coastal region, and their effects on the entire ecosystem need to be understood. For these reasons, monitoring persistent saltwater intrusion and inundation is important. For this study, persistent flooding in Louisiana coastal marshes was mapped using MODIS (Moderate Resolution Imaging Spectroradiometer) time series of a Normalized Difference Water Index (NDWI). The time series data were derived for 2000 through 2009, including flooding due to Hurricane Rita in 2005 and Hurricane Ike in 2008. Using the NDWI, duration and extent of flooding can be inferred. The Time Series Product Tool (TSPT), developed at NASA SSC, is a suite of software developed in MATLAB® that enables improved-quality time series images to be computed using advanced temporal processing techniques. This software has been used to compute time series for monitoring temporal changes in environmental phenomena (e.g., NDVI time series from MODIS), and was modified and used to compute the NDWI indices and also the Normalized Difference Soil Index (NDSI). Coastwide Reference Monitoring System (CRMS) water levels from various hydrologic monitoring stations and aerial photography were used to optimize thresholds for MODIS-derived time series of NDWI and to validate resulting flood maps. In most of the profiles produced for post-hurricane assessment, the increase in the NDWI index (from storm surge) is accompanied by a decrease in the vegetation index (NDVI) and then a period of declining water. The NDSI index represents non-green or dead vegetation and increases after the hurricane's destruction of the marsh vegetation. Behavior of these indices over time is indicative of which areas remain flooded, which areas recover to their former levels of vegetative vigor, and which areas are stressed or in transition. Tracking these indices over time shows the recovery rate of vegetation and the relative behavior to inundation persistence. The results from this study demonstrated that identification of persistent marsh flooding, utilizing the tools developed in this study, provided an approximate 70-80 percent accuracy rate when compared to the actual days flooded at the CRMS stations.
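A simplified sketch of the per-pixel flood-persistence logic is shown below: NDWI is computed for each scene and the longest run of consecutive "wet" scenes is counted per pixel. The band handling and NDWI threshold are illustrative assumptions and would need tuning against CRMS gauge data, as described above.

```python
import numpy as np

def ndwi(green, nir):
    # Normalized Difference Water Index per pixel; bands as reflectance arrays
    return (green - nir) / (green + nir + 1e-9)

def persistent_flood_days(green_stack, nir_stack, threshold=0.0):
    """Longest run of consecutive scenes classified as water for each pixel.
    green_stack/nir_stack: (time, rows, cols) reflectance stacks.
    The NDWI threshold is an assumed value, not the study's calibrated one."""
    wet = ndwi(green_stack, nir_stack) > threshold
    longest = np.zeros(wet.shape[1:], dtype=int)
    current = np.zeros_like(longest)
    for t in range(wet.shape[0]):
        current = np.where(wet[t], current + 1, 0)
        longest = np.maximum(longest, current)
    return longest

# Toy stack: 10 scenes, 4x4 pixels, one pixel flooded in scenes 2-7
g = np.full((10, 4, 4), 0.1)
n = np.full((10, 4, 4), 0.3)
g[2:8, 1, 1], n[2:8, 1, 1] = 0.3, 0.1
print(persistent_flood_days(g, n)[1, 1])   # 6 consecutive wet scenes
```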
Computer Use and Computer Anxiety in Older Korean Americans.
Yoon, Hyunwoo; Jang, Yuri; Xie, Bo
2016-09-01
Responding to the limited literature on computer use in ethnic minority older populations, the present study examined predictors of computer use and computer anxiety in older Korean Americans. Separate regression models were estimated for computer use and computer anxiety with the common sets of predictors: (a) demographic variables (age, gender, marital status, and education), (b) physical health indicators (chronic conditions, functional disability, and self-rated health), and (c) sociocultural factors (acculturation and attitudes toward aging). Approximately 60% of the participants were computer-users, and they had significantly lower levels of computer anxiety than non-users. A higher likelihood of computer use and lower levels of computer anxiety were commonly observed among individuals with younger age, male gender, advanced education, more positive ratings of health, and higher levels of acculturation. In addition, positive attitudes toward aging were found to reduce computer anxiety. Findings provide implications for developing computer training and education programs for the target population. © The Author(s) 2015.
Visualization assisted by parallel processing
NASA Astrophysics Data System (ADS)
Lange, B.; Rey, H.; Vasques, X.; Puech, W.; Rodriguez, N.
2011-01-01
This paper discusses the experimental results of our visualization model for data extracted from sensors. The objective is to find a computationally efficient method to produce a real-time rendering visualization for a large amount of data. We develop a visualization method to monitor the temperature variance of a data center. Sensors are placed on three layers and do not cover the whole room. We use a particle paradigm to interpolate the sensor data; particles model the "space" of the room. In this work we partition the particle set using two mathematical methods, Delaunay triangulation and Voronoi cells, both presented by Avis and Bhattacharya. Particles provide information on the room temperature at different coordinates over time. To locate and update particle data we define a computational cost function. To solve this function in an efficient way, we use a client-server paradigm: the server computes the data and the client displays it on different kinds of hardware. This paper is organized as follows. The first part presents related algorithms used to visualize large flows of data. The second part presents the different platforms and methods used, which were evaluated in order to determine the best solution for the proposed task. The benchmark uses the computational cost of our algorithm, based on locating particles relative to sensors and on updating particle values. The benchmark was run on a personal computer using CPU, multi-core, GPU and hybrid GPU/CPU programming. GPU programming is a growing method in the research field; it allows real-time rendering instead of precomputed rendering. To improve our results, we also ran our algorithm on a High Performance Computing (HPC) cluster; this benchmark was used to improve the multi-core method. HPC is commonly used in data visualization (astronomy, physics, etc.) to improve rendering and achieve real time.
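A CPU-only sketch of the interpolation idea, assigning a temperature to each particle from sparse sensor readings via Delaunay-based linear interpolation with a nearest-neighbour fallback, is shown below. The room dimensions and sensor layout are invented for illustration, and the GPU and client-server machinery is omitted.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator, NearestNDInterpolator

def interpolate_room(sensor_xyz, sensor_temp, particle_xyz):
    """Assign a temperature to every particle from sparse sensor readings.
    Linear interpolation is built on a Delaunay triangulation of the sensor
    positions; particles outside the convex hull fall back to the nearest sensor."""
    linear = LinearNDInterpolator(sensor_xyz, sensor_temp)
    nearest = NearestNDInterpolator(sensor_xyz, sensor_temp)
    temp = linear(particle_xyz)
    outside = np.isnan(temp)
    temp[outside] = nearest(particle_xyz[outside])
    return temp

# Toy data-centre volume: 12 sensors on 3 layers, 10,000 particles filling the room
rng = np.random.default_rng(2)
sensors = rng.uniform([0, 0, 0], [10, 6, 3], size=(12, 3))
temps = 20 + 2 * sensors[:, 2] + rng.normal(0, 0.3, 12)   # warmer near the ceiling
particles = rng.uniform([0, 0, 0], [10, 6, 3], size=(10000, 3))
field = interpolate_room(sensors, temps, particles)
print(field.min(), field.max())
```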
Computational analysis of integrated biosensing and shear flow in a microfluidic vascular model
NASA Astrophysics Data System (ADS)
Wong, Jeremy F.; Young, Edmond W. K.; Simmons, Craig A.
2017-11-01
Fluid flow and flow-induced shear stress are critical components of the vascular microenvironment commonly studied using microfluidic cell culture models. Microfluidic vascular models mimicking the physiological microenvironment also offer great potential for incorporating on-chip biomolecular detection. In spite of this potential, however, there are few examples of such functionality. Detection of biomolecules released by cells under flow-induced shear stress is a significant challenge due to severe sample dilution caused by the fluid flow used to generate the shear stress, frequently to the extent where the analyte is no longer detectable. In this work, we developed a computational model of a vascular microfluidic cell culture model that integrates physiological shear flow and on-chip monitoring of cell-secreted factors. Applicable to multilayer device configurations, the computational model was applied to a bilayer configuration, which has been used in numerous cell culture applications including vascular models. Guidelines were established that allow cells to be subjected to a wide range of physiological shear stress while ensuring optimal rapid transport of analyte to the biosensor surface and minimized biosensor response times. These guidelines therefore enable the development of microfluidic vascular models that integrate cell-secreted factor detection while addressing flow constraints imposed by physiological shear stress. Ultimately, this work will result in the addition of valuable functionality to microfluidic cell culture models that further fulfill their potential as labs-on-chips.
Probabilistic Prognosis of Non-Planar Fatigue Crack Growth
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Newman, John A.; Warner, James E.; Leser, William P.; Hochhalter, Jacob D.; Yuan, Fuh-Gwo
2016-01-01
Quantifying the uncertainty in model parameters for the purpose of damage prognosis can be accomplished utilizing Bayesian inference and damage diagnosis data from sources such as non-destructive evaluation or structural health monitoring. The number of samples required to solve the Bayesian inverse problem through common sampling techniques (e.g., Markov chain Monte Carlo) renders high-fidelity finite element-based damage growth models unusable due to prohibitive computation times. However, these types of models are often the only option when attempting to model complex damage growth in real-world structures. Here, a recently developed high-fidelity crack growth model is used which, when compared to finite element-based modeling, has demonstrated reductions in computation times of three orders of magnitude through the use of surrogate models and machine learning. The model is flexible in that only the expensive computation of the crack driving forces is replaced by the surrogate models, leaving the remaining parameters accessible for uncertainty quantification. A probabilistic prognosis framework incorporating this model is developed and demonstrated for non-planar crack growth in a modified, edge-notched, aluminum tensile specimen. Predictions of remaining useful life are made over time for five updates of the damage diagnosis data, and prognostic metrics are utilized to evaluate the performance of the prognostic framework. Challenges specific to the probabilistic prognosis of non-planar fatigue crack growth are highlighted and discussed in the context of the experimental results.
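A toy version of the sampling loop is sketched below: a Metropolis-Hastings sampler whose likelihood calls a cheap stand-in for the surrogate crack-growth model. The model form, priors, and noise level are assumptions chosen only to make the example self-contained; they do not reflect the paper's surrogate or specimen.

```python
import numpy as np

def surrogate_crack_length(theta, cycles):
    # Stand-in for the surrogate crack-growth model: Paris-law-like growth (assumed form)
    C, m = theta
    return 1.0 + C * cycles ** m

def log_posterior(theta, cycles, observed, sigma=0.05):
    C, m = theta
    if not (0 < C < 1e-2 and 0.5 < m < 2.0):      # crude uniform priors (assumed)
        return -np.inf
    resid = observed - surrogate_crack_length(theta, cycles)
    return -0.5 * np.sum((resid / sigma) ** 2)

def metropolis(cycles, observed, n_iter=20000, step=(1e-5, 0.02)):
    rng = np.random.default_rng(3)
    theta = np.array([1e-4, 1.0])
    lp = log_posterior(theta, cycles, observed)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0, step)
        lp_prop = log_posterior(prop, cycles, observed)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# Synthetic "diagnosis" data from assumed true parameters, then posterior sampling
cycles = np.linspace(0, 5e4, 20)
obs = surrogate_crack_length((2e-4, 1.1), cycles) + np.random.default_rng(4).normal(0, 0.05, 20)
chain = metropolis(cycles, obs)
print(chain[10000:].mean(axis=0))   # posterior means of (C, m) after burn-in
```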
Beats: Video Monitors and Cameras.
ERIC Educational Resources Information Center
Worth, Frazier
1996-01-01
Presents a method to teach the concept of beats as a generalized phenomenon rather than teaching it only in the context of sound. Involves using a video camera to film a computer terminal, 16-mm projector, or TV monitor. (JRH)
Tuning Forks and Monitor Screens.
ERIC Educational Resources Information Center
Harrison, M. A. T.
2000-01-01
Defines the vibrations of a tuning fork against a computer monitor screen as a pattern that can illustrate or explain physical concepts like wave vibrations, wave forms, and phase differences. Presents background information and demonstrates the experiment. (Author/YDS)
Bianca N. I. Eskelson; Hailemariam Temesgen; Valerie Lemay; Tara M. Barrett; Nicholas L. Crookston; Andrew T. Hudak
2009-01-01
Almost universally, forest inventory and monitoring databases are incomplete, ranging from missing data for only a few records and a few variables, common for small land areas, to missing data for many observations and many variables, common for large land areas. For a wide variety of applications, nearest neighbor (NN) imputation methods have been developed to fill in...
Avatar - a multi-sensory system for real time body position monitoring.
Jovanov, E; Hanish, N; Courson, V; Stidham, J; Stinson, H; Webb, C; Denny, K
2009-01-01
Virtual reality and computer-assisted physical rehabilitation applications require unobtrusive and inexpensive real-time monitoring systems. Existing systems are usually complex and expensive and based on infrared monitoring. In this paper we propose Avatar, a hybrid system consisting of off-the-shelf components and sensors. Absolute positioning of a few reference points is determined using an infrared diode on the subject's body and a set of Wii Remotes as optical sensors. Individual body segments are monitored by intelligent inertial sensor nodes (iSense). A network of inertial nodes is controlled by a master node that serves as a gateway for communication with a capture device. Each sensor features a 3D accelerometer and a 2-axis gyroscope. The Avatar system is used for control of avatars in Virtual Reality applications, but could be used in a variety of augmented reality, gaming, and computer-assisted physical rehabilitation applications.
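As an illustration of how one inertial node's measurements might be fused, the sketch below combines integrated gyroscope rate with accelerometer-derived tilt using a complementary filter. The sampling rate, filter constant, and synthetic motion are assumptions, not details of the Avatar implementation.

```python
import numpy as np

def complementary_tilt(acc, gyro_rate, dt=0.01, alpha=0.98):
    """Fuse accelerometer tilt (noisy, absolute) with integrated gyro rate
    (smooth, drifting) for one body-segment angle, in degrees.

    acc: (N, 3) accelerations in g; gyro_rate: (N,) angular rate about the
    pitch axis in deg/s. The filter constant alpha is an assumed tuning value."""
    angle = 0.0
    out = np.empty(len(gyro_rate))
    for i, (a, w) in enumerate(zip(acc, gyro_rate)):
        acc_angle = np.degrees(np.arctan2(a[0], a[2]))      # tilt from gravity
        angle = alpha * (angle + w * dt) + (1 - alpha) * acc_angle
        out[i] = angle
    return out

# Synthetic body segment slowly pitching from 0 to 30 degrees over 5 seconds
n, dt = 500, 0.01
true = np.linspace(0, 30, n)
acc = np.c_[np.sin(np.radians(true)), np.zeros(n), np.cos(np.radians(true))]
acc += np.random.default_rng(5).normal(0, 0.02, acc.shape)
gyro = np.gradient(true, dt) + np.random.default_rng(6).normal(0, 0.5, n)
est = complementary_tilt(acc, gyro, dt)
print(round(est[-1], 1))   # close to 30
```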
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh
2015-01-01
Recently, a novel and computationally efficient method, based on a vector covering approach, for designing optimal control places has been reported, together with an iterative approach that computes the reachability graph to obtain a maximally permissive liveness-enforcing supervisor for FMS (flexible manufacturing systems). However, the relationship between the structure of the net and the minimal number of monitors required remains unclear. This paper develops a theory showing that the minimal number of monitors required cannot be less than the number of basic siphons in α-S3PR (systems of simple sequential processes with resources). This confirms that two of the three controlled systems reported by Chen et al. have a minimal monitor configuration, since they belong to α-S3PR and the number of monitors in each example equals the number of basic siphons.
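As an illustration of the structural objects the result counts, siphons of a small Petri net can be enumerated by brute force: a non-empty place set S is a siphon when every transition that produces into S also consumes from S (•S ⊆ S•). The net below is a made-up two-process, one-resource example, not one of the α-S3PR nets analysed in the paper:

```python
from itertools import combinations

def siphons(places, pre, post):
    """Enumerate all non-empty siphons of a Petri net.

    pre[t]  : set of input places of transition t  (t consumes from these)
    post[t] : set of output places of transition t (t produces into these)
    S is a siphon if every transition with an output place in S also has
    an input place in S, i.e. the preset of S is contained in its postset.
    """
    found = []
    for r in range(1, len(places) + 1):
        for S in combinations(places, r):
            S = set(S)
            preset_S = {t for t in pre if post[t] & S}   # transitions feeding S
            postset_S = {t for t in pre if pre[t] & S}   # transitions fed by S
            if preset_S <= postset_S:
                found.append(S)
    return found

# Tiny made-up net: two cyclic processes sharing one resource place r.
pre  = {"t1": {"p1", "r"}, "t2": {"p2"}, "t3": {"p3", "r"}, "t4": {"p4"}}
post = {"t1": {"p2"}, "t2": {"p1", "r"}, "t3": {"p4"}, "t4": {"p3", "r"}}
print(siphons(["p1", "p2", "p3", "p4", "r"], pre, post))
```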
Understanding Monitoring Technologies for Adults With Pain: Systematic Literature Review.
Rodríguez, Iyubanit; Herskovic, Valeria; Gerea, Carmen; Fuentes, Carolina; Rossel, Pedro O; Marques, Maíra; Campos, Mauricio
2017-10-27
Monitoring of patients may decrease treatment costs and improve quality of care. Pain is the most common health problem that people seek help for in hospitals. Therefore, monitoring patients with pain may have a significant impact in improving treatment. Several studies have examined factors affecting pain; however, no previous study has reviewed the contextual information that a monitoring system may capture to characterize a patient's situation. The objective of this study was to conduct a systematic review to (1) determine what types of technologies have been used to monitor adults with pain, and (2) construct a model of the context information that may be used to implement apps and devices aimed at monitoring adults with pain. A literature search (2005-2015) was conducted in electronic databases pertaining to medical and computer science literature (PubMed, Science Direct, ACM Digital Library, and IEEE Xplore) using a defined search string. Article selection was done through a process of removing duplicates, analyzing title and abstract, and then reviewing the full text of the article. In the final analysis, 87 articles were included, and 53 of them (61%) used technologies to collect contextual information. A total of 49 types of context information were found, and a five-dimension (activity, identity, wellness, environment, physiological) model of context information to monitor adults with pain was proposed, expanding on a previous model. Most technological interfaces for pain monitoring were wearable, possibly because they can be used in more realistic contexts. Few studies focused on older adults, creating a relevant avenue of research on how to create devices for users that may have impaired cognitive skills or low digital literacy. The design of monitoring devices and interfaces for adults with pain must deal with the challenge of selecting contextual information relevant to understanding the user's situation without overburdening or inconveniencing users with information requests. A model of contextual information may be used by researchers to choose possible contextual information that may be monitored during studies on adults with pain.
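The five dimensions of the proposed model map naturally onto a simple record type. In this sketch the field names under each dimension are illustrative examples only, not the 49 context items catalogued in the review:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PainMonitoringContext:
    """Illustrative grouping of captured data along the review's five
    dimensions: activity, identity, wellness, environment, physiological."""
    # activity
    step_count: Optional[int] = None
    posture: Optional[str] = None
    # identity
    age: Optional[int] = None
    diagnosis: Optional[str] = None
    # wellness
    self_reported_pain: Optional[int] = None   # e.g. 0-10 scale
    mood: Optional[str] = None
    # environment
    location: Optional[str] = None
    ambient_temperature_c: Optional[float] = None
    # physiological
    heart_rate_bpm: Optional[int] = None
    skin_conductance_us: Optional[float] = None
```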
DOT National Transportation Integrated Search
2014-08-01
This report describes the instrumentation and data acquisition for a three-span continuous, curved post-tensioned box-girder bridge in Connecticut. The computer-based remote monitoring system was developed to collect information on the deformations...
DOT National Transportation Integrated Search
2014-08-01
This report describes the instrumentation and data acquisition for a continuous curved steel box-girder composite bridge in Connecticut. The computer-based remote monitoring system was installed in 2001, with accelerometers, tilt meters and tempe...
DOT National Transportation Integrated Search
2014-08-01
This report describes the instrumentation and data acquisition for an eleven-span segmental, post-tensioned box-girder bridge in Connecticut. Based on a request from the designers, the computer-based remote monitoring system was developed to coll...