Study on an agricultural environment monitoring server system using Wireless Sensor Networks.
Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun
2010-01-01
This paper proposes an agricultural environment monitoring server system for monitoring information on an outdoor agricultural production environment using Wireless Sensor Network (WSN) technology. The proposed system collects outdoor environmental and soil information through WSN-based environmental and soil sensors, image information through CCTVs, and location information through GPS modules. The collected information is stored in a database by the agricultural environment monitoring server, which consists of a sensor manager that manages information collected from the WSN sensors, an image information manager that manages image information collected from the CCTVs, and a GPS manager that processes the location information of the system, and is then provided to producers. In addition, a solar cell-based power supply is implemented so that the server system can be used in agricultural environments with insufficient power infrastructure. The system can monitor outdoor environmental information remotely, and its use can be expected to contribute to increasing crop yields and improving quality in agriculture by supporting crop producers' decision making through analysis of the collected information.
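The manager-based dispatch described in this abstract can be sketched as follows; all class and field names are illustrative assumptions, not the authors' implementation:

```python
# Minimal sketch of a monitoring server that routes each record type
# (WSN sensor, CCTV image, GPS fix) to its own manager and a shared store.

class AgriMonitoringServer:
    """Routes incoming records to per-type managers backed by one store."""

    def __init__(self):
        self.database = {"sensor": [], "image": [], "gps": []}

    def handle_sensor(self, node_id, temperature_c, soil_moisture_pct):
        # Sensor manager: environmental/soil readings from WSN nodes.
        self.database["sensor"].append(
            {"node": node_id, "temp": temperature_c, "soil": soil_moisture_pct})

    def handle_image(self, camera_id, frame_ref):
        # Image information manager: CCTV frames stored by reference.
        self.database["image"].append({"camera": camera_id, "frame": frame_ref})

    def handle_gps(self, lat, lon):
        # GPS manager: location of the monitoring unit.
        self.database["gps"].append({"lat": lat, "lon": lon})

server = AgriMonitoringServer()
server.handle_sensor("node-1", 21.5, 34.0)
server.handle_gps(34.95, 127.48)
```

A producer-facing front end would then query `server.database` to render the collected information.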
The Development of a Remote Patient Monitoring System using Java-enabled Mobile Phones.
Kogure, Y; Matsuoka, H; Kinouchi, Y; Akutagawa, M
2005-01-01
A remote patient monitoring system is described. The system monitors information on multiple patients in the ICU/CCU via 3G mobile phones. Conventionally, various patient information, such as vital signs, is collected and stored on patient information systems. In the proposed system, the patient information is re-collected by a remote information server and transported to mobile phones. The server works as a gateway between the hospital intranet and public networks. The information provided by the server consists of graphs and text data. Doctors can browse patients' information on their mobile phones via the server, using custom Java application software. In this study, the information server and Java application were developed, and communication between the server and mobile phone in a model environment was confirmed. Applying this system to practical patient information system products is future work.
Advanced Pulse Oximetry System for Remote Monitoring and Management
Pak, Ju Geon; Park, Kee Hyun
2012-01-01
Pulse oximetry data such as saturation of peripheral oxygen (SpO2) and pulse rate are vital signals for early diagnosis of heart disease. Therefore, various pulse oximeters have been developed continuously. However, some of the existing pulse oximeters are not equipped with communication capabilities, and consequently, the continuous monitoring of patient health is restricted. Moreover, even though certain oximeters have been built as network models, they focus on exchanging only pulse oximetry data, and they do not provide sufficient device management functions. In this paper, we propose an advanced pulse oximetry system for remote monitoring and management. The system consists of a networked pulse oximeter and a personal monitoring server. The proposed pulse oximeter measures a patient's pulse oximetry data and transmits the data to the personal monitoring server. The personal monitoring server then analyzes the received data and displays the results to the patient. Furthermore, for device management purposes, operational errors that occur in the pulse oximeter are reported to the personal monitoring server, and the system configurations of the pulse oximeter, such as thresholds and measurement targets, are modified by the server. We verify that the proposed pulse oximetry system operates efficiently and that it is appropriate for monitoring and managing a pulse oximeter in real time. PMID:22933841
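The analysis-and-management loop this abstract describes (server-side threshold checks plus configuration pushed back to the device) can be sketched as below; the threshold names and message shapes are assumptions for illustration, not the authors' protocol:

```python
# Hedged sketch of a personal monitoring server's analysis and
# device-management functions for a networked pulse oximeter.

DEFAULT_CONFIG = {"spo2_low": 90, "pulse_high": 120}

def analyze(sample, config):
    """Return a list of alerts for one pulse-oximetry sample."""
    alerts = []
    if sample["spo2"] < config["spo2_low"]:
        alerts.append("low SpO2")
    if sample["pulse"] > config["pulse_high"]:
        alerts.append("high pulse rate")
    if sample.get("error"):  # device reports operational errors to the server
        alerts.append("device error: " + sample["error"])
    return alerts

def update_config(config, **changes):
    """Server side: push new thresholds/measurement targets to the device."""
    new = dict(config)
    new.update(changes)
    return new

alerts = analyze({"spo2": 88, "pulse": 110}, DEFAULT_CONFIG)
cfg = update_config(DEFAULT_CONFIG, spo2_low=92)
```

The same `analyze` function would run on each received sample, so tightening `spo2_low` takes effect on the next transmission.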
Integrated technologies for solid waste bin monitoring system.
Arebey, Maher; Hannan, M A; Basri, Hassan; Begum, R A; Abdullah, Huda
2011-06-01
The integration of communication technologies such as radio frequency identification (RFID), the global positioning system (GPS), the general packet radio service (GPRS), and a geographic information system (GIS) with a camera is presented for a solid waste bin monitoring system. The aim is to improve responses to customers' inquiries and emergency cases and to estimate the amount of solid waste without any involvement of the truck driver. The proposed system consists of an RFID tag mounted on the bin, an RFID reader in the truck, GPRS/GSM as the link to a web server, and GIS serving as map server, database server, and control server. The tracking devices mounted in the trucks collect location information in real time via GPS. This information is transferred continuously through GPRS to a central database. Users are able to view the current location of each truck during the collection stage via a web-based application and thereby manage the fleet. The trucks' positions and trash bin information are displayed on a digital map made available by a map server. Thus, the solid waste of both the bin and the truck is monitored using the developed system.
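The truck-tracking flow (GPS pings forwarded over GPRS into a central store, with the web map showing the latest fix) might look like the following sketch; the identifiers and coordinates are invented for illustration:

```python
# Toy model of the central database side of the truck-tracking flow.
from collections import defaultdict

positions = defaultdict(list)  # truck_id -> list of (lat, lon) pings

def report_position(truck_id, lat, lon):
    """A GPS ping from a truck, forwarded over GPRS to the central database."""
    positions[truck_id].append((lat, lon))

def current_position(truck_id):
    """What the web-based map displays: the most recent ping for a truck."""
    return positions[truck_id][-1] if positions[truck_id] else None

report_position("truck-7", 2.92, 101.78)
report_position("truck-7", 2.93, 101.79)
```

A fleet view would simply call `current_position` for every known truck and hand the results to the map server.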
Introduction to the Space Weather Monitoring System at KASI
NASA Astrophysics Data System (ADS)
Baek, J.; Choi, S.; Kim, Y.; Cho, K.; Bong, S.; Lee, J.; Kwak, Y.; Hwang, J.; Park, Y.; Hwang, E.
2014-05-01
We have developed the Space Weather Monitoring System (SWMS) at the Korea Astronomy and Space Science Institute (KASI). Since 2007, the system has continuously evolved. The SWMS consists of several subsystems: applications that acquire and process observational data, servers that run the applications, data storage, and display facilities that show the space weather information. The applications collect solar and space weather data from domestic and overseas sites. The collected data are converted to other formats and/or visualized in real time as graphs and illustrations. We manage three data acquisition and processing servers, a file service server, a web server, and three sets of storage systems. We have developed 30 applications for a variety of data, and the volume of data is about 5.5 GB per day. We provide our customers with space weather content displayed at the Space Weather Monitoring Lab (SWML) using web services.
Ubiquitous-health (U-Health) monitoring systems for elders and caregivers
NASA Astrophysics Data System (ADS)
Moon, Gyu; Lim, Kyung-won; Yoo, Young-min; An, Hye-min; Lee, Ki Seop; Szu, Harold
2011-06-01
This paper presents two affordable low-tech systems for household biomedical wellness monitoring. The first system, JIKIMI (pronounced "caregiver" in Korean), is a remote monitoring system that analyzes the behavior patterns of elders who live alone. JIKIMI is composed of an in-house sensing system: a set of wireless sensor nodes containing a pyroelectric infrared sensor to detect the motion of elders, an emergency button, and a magnetic sensor that detects the opening and closing of doors. The system is also equipped with a server system comprising a database and web server; the server provides web-based monitoring to caregivers. The second system, Reader of Bottle Information (ROBI), is an assistant system that announces the contents of bottles to elders. ROBI is composed of bottles with attached RFID tags and an advice system consisting of a wireless RFID reader, a gateway, and a remote database server. The RFID tags attached to the caps of the bottles are used in conjunction with the advice system. These systems have been in use for three years and have proven useful in helping caregivers provide more efficient and effective care services.
Xu, Xiu; Zhang, Honglei; Li, Yiming; Li, Bin
2015-07-01
We developed an information centralization and integrated management system for monitors of different brands and models, using wireless sensor network technologies such as wireless location and wireless communication on top of the existing wireless network. With an adaptive implementation and low cost, the system, which offers real-time, efficient, fine-grained operation, is able to collect the status and data of the monitors, locate the monitors, and provide services through a web server, a video server, and a locating server on the local network. Using an intranet computer, clinical and device management staff can access the status and parameters of the monitors. Application of this system provides convenience and saves human resources for clinical departments, and promotes efficiency, accuracy, and precision in device management. This system provides a solution for the integrated, fine-grained management of mobile devices, including ventilators and infusion pumps.
NASA Technical Reports Server (NTRS)
Deb, Somnath (Inventor); Ghoshal, Sudipto (Inventor); Malepati, Venkata N. (Inventor); Kleinman, David L. (Inventor); Cavanaugh, Kevin F. (Inventor)
2004-01-01
A network-based diagnosis server for monitoring and diagnosing a system, the server being remote from the system it observes, comprises a sensor for generating signals indicative of a characteristic of a component of the system; a network-interfaced sensor agent coupled to the sensor for receiving signals from it; a broker module coupled to the network for sending signals to and receiving signals from the sensor agent; a handler application connected to the broker module for transmitting signals to and receiving signals from it; and a reasoner application in communication with the handler application for processing and responding to signals received from the handler application. The sensor agent, broker module, handler application, and reasoner application operate simultaneously relative to each other, such that the diagnosis server performs continuous monitoring and diagnosis of the components of the system in real time. The diagnosis server is readily adaptable to various different systems.
A daily living activity remote monitoring system for solitary elderly people.
Maki, Hiromichi; Ogawa, Hidekuni; Matsuoka, Shingo; Yonezawa, Yoshiharu; Caldwell, W Morton
2011-01-01
A daily living activity remote monitoring system has been developed for supporting solitary elderly people. The monitoring system consists of a tri-axis accelerometer, six low-power active filters, a low-power 8-bit microcontroller (MC), a 1 GB SD memory card (SDMC) and a 2.4 GHz low-transmitting-power mobile phone (PHS). The tri-axis accelerometer attached to the subject's chest can simultaneously measure the dynamic and static acceleration forces produced by heart sound, respiration, posture and behavior. The heart rate, respiration rate, activity, posture and behavior are detected from these dynamic and static acceleration forces. The data are stored on the SDMC, and the MC sends them to the server computer every hour. The server computer stores the data and produces a graphic chart from them. When the caregiver calls the server computer from his/her mobile phone, the server computer sends the graphical chart via the PHS, and the caregiver's mobile phone displays the chart graphically.
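Deriving posture from the static (gravity) component of a chest-worn tri-axis accelerometer can be illustrated with a toy classifier; the axis convention and tilt thresholds below are assumptions, not the paper's values:

```python
# Toy posture classifier from the static acceleration vector (in g).
# Assumption: ax points down the body axis when the subject stands upright,
# so the gravity component along ax gives the trunk tilt angle.
import math

def posture(ax, ay, az):
    """Classify trunk posture from the static acceleration components."""
    # Clamp to [-1, 1] before acos to guard against sensor noise.
    tilt_deg = math.degrees(math.acos(max(-1.0, min(1.0, ax))))
    if tilt_deg < 30:
        return "upright"
    if tilt_deg > 60:
        return "lying"
    return "leaning"

p = posture(1.0, 0.0, 0.0)  # gravity fully along the body axis
```

Dynamic components (heart sound, respiration) would be separated from this static signal by the band-pass filters mentioned in the abstract.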
Client-Server Connection Status Monitoring Using Ajax Push Technology
NASA Technical Reports Server (NTRS)
Lamongie, Julien R.
2008-01-01
This paper describes how simple client-server connection status monitoring can be implemented using Ajax (Asynchronous JavaScript and XML), JSF (Java Server Faces) and ICEfaces technologies. This functionality is required for NASA LCS (Launch Control System) displays used in the firing room for the Constellation project. Two separate implementations based on two distinct approaches are detailed and analyzed.
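The connection-status idea behind an Ajax push display can be modeled without any framework as a heartbeat monitor; this is not the ICEfaces implementation, and the timeout value and client identifiers are assumptions:

```python
# Minimal heartbeat model of client-server connection status: the server
# records each client's last heartbeat and derives the status that a pushed
# update would show on the display.

class ConnectionMonitor:
    def __init__(self, timeout_s=5.0):
        self.timeout = timeout_s
        self.last_seen = {}  # client_id -> timestamp of last heartbeat

    def heartbeat(self, client_id, now):
        self.last_seen[client_id] = now

    def status(self, client_id, now):
        """Status a pushed update would report for one display client."""
        t = self.last_seen.get(client_id)
        if t is None:
            return "unknown"
        return "connected" if now - t <= self.timeout else "disconnected"

mon = ConnectionMonitor(timeout_s=5.0)
mon.heartbeat("display-1", now=100.0)
```

In an Ajax push setup, a status transition (e.g. to "disconnected") is what triggers the server to push a refresh to all subscribed displays, rather than waiting for them to poll.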
The development of a tele-monitoring system for physiological parameters based on the B/S model.
Shuicai, Wu; Peijie, Jiang; Chunlan, Yang; Haomin, Li; Yanping, Bai
2010-01-01
A new physiological multi-parameter remote monitoring system was developed based on the browser/server (B/S) model. The system consists of a server monitoring center, the Internet, and PC-based multi-parameter monitors. Using the B/S model, clients can browse web pages via the server monitoring center and download and install ActiveX controls. The physiological multi-parameters are collected, displayed and remotely transmitted. The experimental results show that the system is stable, reliable and operates in real time. The system is suitable for physiological multi-parameter remote monitoring in family and community healthcare.
DOT National Transportation Integrated Search
1997-06-01
The Geographic Information System-Transportation (GIS-T) ISTEA Management Systems Server Net Prototype Pooled Fund Study represents the first national cooperative effort in the transportation industry to address the management and monitoring systems ...
Application of Aquaculture Monitoring System Based on CC2530
NASA Astrophysics Data System (ADS)
Chen, H. L.; Liu, X. Q.
In order to improve the intelligence level of aquaculture technology, this paper puts forward a remote wireless monitoring system based on ZigBee technology, GPRS technology and the Android mobile phone platform. The system is composed of a wireless sensor network (WSN), a GPRS module, a PC server and an Android client. The WSN was set up with CC2530 chips based on the ZigBee protocol to collect water quality parameters such as water level, temperature, pH and dissolved oxygen. The GPRS module realizes remote communication between the WSN and the PC server. The Android client communicates with the server to monitor water quality. PID (proportional-integral-derivative) control is adopted in the control part: control commands from the Android mobile phone are sent to the server, and the server forwards them to the lower machine to control the water level regulating valve and the oxygenation pump. In practical testing of the system in Liyang, Jiangsu province, China, temperature measurement accuracy reached 0.5°C, pH measurement accuracy reached 0.3, water level control precision was within ±3 cm, and dissolved oxygen control precision was within ±0.3 mg/L. All indexes met the requirements, so the system is well suited for aquaculture.
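The PID loop used for the water-level control can be sketched in a few lines; the gains and setpoint below are arbitrary illustrative values, not those tuned for the Liyang deployment:

```python
# Textbook discrete PID controller, as would drive the regulating valve.

class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def step(self, measurement, dt):
        """One control step; returns the actuator command."""
        error = self.setpoint - measurement
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Positive output = open the water-level regulating valve further.
        return self.kp * error + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=50.0)  # target level: 50 cm
u = pid.step(measurement=47.0, dt=1.0)
```

In the system described, this step would run on the lower machine, with the setpoint updated remotely from the Android client via the server.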
Hu, Peter F; Yang, Shiming; Li, Hsiao-Chi; Stansbury, Lynn G; Yang, Fan; Hagegeorge, George; Miller, Catriona; Rock, Peter; Stein, Deborah M; Mackenzie, Colin F
2017-01-01
Research and practice based on automated electronic patient monitoring and data collection systems are significantly limited by system downtime. We asked whether a triple-redundant Monitor of Monitors System (MoMs), collecting and summarizing key information from system-wide data sources, could achieve high fault tolerance, early diagnosis of system failure, and improved data collection rates. In our Level I trauma center, patient vital signs (VS) monitors were networked to collect real-time physiologic data streams from 94 bed units in our resuscitation, operating, and critical care units. To minimize the impact of collection-server failure, three BedMaster® VS servers were used in parallel to collect data from all bed units. To locate and diagnose system failures, we summarized critical information from the high-throughput data streams in real time in a dashboard viewer, and compared the pre- and post-MoMs phases in terms of data availability time, active collection rates, and gap duration, occurrence, and categories. Single-server collection rates in the 3-month period before MoMs deployment ranged from 27.8% to 40.5%, with a combined collection rate of 79.1%. Reasons for gaps included collection-server failure, software instability, inconsistent individual bed settings, and monitor servicing. In the 6-month post-MoMs deployment period, the average collection rate was 99.9%. A triple-redundant patient data collection system with real-time diagnostic information summarization and representation improved the reliability of massive clinical data collection to nearly 100% in a Level I trauma center. Such a data collection framework may also increase the automation of hospital-wide information aggregation for optimal allocation of health care resources.
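Why triple redundancy lifts the collection rate can be shown with a toy model: a time slot is captured if any of the three servers recorded it. The slot granularity and numbers below are illustrative, not the study's data:

```python
# Toy model of merging redundant collection streams: the combined record
# for a time slot exists if ANY server captured that slot.

def combined_rate(server_slots, total_slots):
    """server_slots: one set of captured slot indices per server."""
    captured = set().union(*server_slots)
    return len(captured) / total_slots

s1 = {0, 1, 2, 3}      # each server alone misses many slots...
s2 = {2, 3, 4, 5}
s3 = {5, 6, 7}
rate = combined_rate([s1, s2, s3], total_slots=10)
```

With real, largely independent failure modes (server crashes, bed-setting errors, servicing windows), the union covers far more of the timeline than any single stream, which is the effect behind the 79.1% to 99.9% improvement reported above.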
Implementation experience of a patient monitoring solution based on end-to-end standards.
Martinez, I; Fernandez, J; Galarraga, M; Serrano, L; de Toledo, P; Escayola, J; Jimenez-Fernandez, S; Led, S; Martinez-Espronceda, M; Garcia, J
2007-01-01
This paper presents a proof-of-concept design of a patient monitoring solution for Intensive Care Unit (ICU). It is end-to-end standards-based, using ISO/IEEE 11073 (X73) in the bedside environment and EN13606 to communicate the information to an Electronic Healthcare Record (EHR) server. At the bedside end a plug-and-play sensor network is implemented, which communicates with a gateway that collects the medical information and sends it to a monitoring server. At this point the server transforms the data frame into an EN13606 extract, to be stored on the EHR server. The presented system has been tested in a laboratory environment to demonstrate the feasibility of this end-to-end standards-based solution.
NASA Astrophysics Data System (ADS)
Ibrahim, Maslina Mohd; Yussup, Nolida; Haris, Mohd Fauzi; Soh @ Shaari, Syirrazie Che; Azman, Azraf; Razalim, Faizal Azrin B. Abdul; Yapp, Raymond; Hasim, Harzawardi; Aslan, Mohd Dzul Aiman
2017-01-01
One of the applications of radiation detectors is area monitoring, which is crucial for safety, especially at places where radiation sources are involved. An environmental radiation monitoring system is a professional system that combines flexibility and ease of use for data collection and monitoring. Nowadays, with the growth of technology, devices and equipment can be connected to the network and the Internet to enable online data acquisition. This technology enables data from area monitoring devices to be transmitted to any location directly and quickly. In Nuclear Malaysia, area radiation monitors are located at several selected locations such as laboratories and radiation facilities. The system uses Ethernet as the communication medium for acquiring area radiation levels from the radiation detectors and stores the data on a server for recording and analysis. This paper discusses the design and development of a website that enables all users in Nuclear Malaysia to access and monitor the radiation level of each radiation detector online in real time. The web design also includes a feature for querying historical data from various locations online. The communication between the server's software and the web server is discussed in detail in this paper.
Preliminary Results on Design and Implementation of a Solar Radiation Monitoring System
Balan, Mugur C.; Damian, Mihai; Jäntschi, Lorentz
2008-01-01
The paper presents a solar radiation monitoring system using two scientific pyranometers and an on-line, home-made computer data acquisition system. The first pyranometer measures the global solar radiation and the other, which is shaded, measures the diffuse radiation. The values of total and diffuse solar radiation are continuously stored in a database on a server. Original software was created for data acquisition and for interrogating the system. The server application acquires the data from the pyranometers and stores it in the database at a rate of one record every 50 seconds. The client-server application queries the database and provides descriptive statistics. A web interface allows any user to define inclusion criteria and obtain the results. In terms of results, the system is able to provide direct, diffuse and total radiation intensities as time series. Our client-server application also computes derived heat quantities. The ability of the system to evaluate the local solar energy potential is highlighted. PMID:27879746
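The direct-beam time series the system reports follows directly from the two pyranometer channels: on a horizontal surface, direct radiation is the global reading minus the diffuse reading from the shaded instrument. A minimal helper (the clamp to zero guards against sensor noise at low sun; values are illustrative):

```python
# Direct (beam) component on the horizontal plane from the two channels.

def direct_radiation(global_wm2, diffuse_wm2):
    """Direct component in W/m^2; clamped at zero for noisy readings."""
    return max(0.0, global_wm2 - diffuse_wm2)

d = direct_radiation(global_wm2=820.0, diffuse_wm2=140.0)
```

Applying this record-by-record to the 50-second database samples yields the direct-radiation time series alongside the stored global and diffuse series.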
Deploying Server-side File System Monitoring at NERSC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uselton, Andrew
2009-05-01
The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.
Environmental Monitoring Using Sensor Networks
NASA Astrophysics Data System (ADS)
Yang, J.; Zhang, C.; Li, X.; Huang, Y.; Fu, S.; Acevedo, M. F.
2008-12-01
Environmental observatories, consisting of a variety of sensor systems, computational resources and informatics, are important for us to observe, model, predict, and ultimately help preserve the health of nature. The commoditization and proliferation of coin-to-palm sized wireless sensors will allow environmental monitoring with unprecedentedly fine spatial and temporal resolution. Once scattered around, these sensors can identify themselves, locate their positions, describe their functions, and self-organize into a network. They communicate through a wireless channel with nearby sensors and transmit data through multi-hop protocols to a gateway, which can forward information to a remote data server. In this project, we describe an environmental observatory called Texas Environmental Observatory (TEO) that incorporates a sensor network system with intertwined wired and wireless sensors. We are enhancing and expanding the existing wired weather stations to include wireless sensor networks (WSNs) and telemetry using solar-powered cellular modems. The new WSNs will monitor soil moisture and support long-term hydrologic modeling. Hydrologic models are helpful in predicting how changes in land cover translate into changes in the stream flow regime. These models require inputs that are difficult to measure over large areas, especially variables related to storm events, such as soil moisture antecedent conditions and rainfall amount and intensity. This will also contribute to improving rainfall estimations from meteorological radar data and enhancing hydrological forecasts. Sensor data are transmitted from each monitoring site to a Central Data Collection (CDC) Server. We incorporate a GPRS modem for wireless telemetry, a single-board computer (SBC) as Remote Field Gateway (RFG) Server, and a WSN for distributed soil moisture monitoring.
The RFG provides effective control, management, and coordination of two independent sensor systems, i.e., a traditional datalogger-based wired sensor system and the WSN-based wireless sensor system. The RFG also supports remote manipulation of the devices in the field such as the SBC, datalogger, and WSN. Sensor data collected from the distributed monitoring stations are stored in a database (DB) Server. The CDC Server acts as an intermediate component to hide the heterogeneity of different devices and support data validation required by the DB Server. Daemon programs running on the CDC Server pre-process the data before it is inserted into the database, and periodically perform synchronization tasks. A SWE-compliant data repository is installed to enable data exchange, accepting data from both internal DB Server and external sources through the OGC web services. The web portal, i.e. TEO Online, serves as a user-friendly interface for data visualization, analysis, synthesis, modeling, and K-12 educational outreach activities. It also provides useful capabilities for system developers and operators to remotely monitor system status and remotely update software and system configuration, which greatly simplifies the system debugging and maintenance tasks. We also implement Sensor Observation Services (SOS) at this layer, conforming to the SWE standard to facilitate data exchange. The standard SensorML/O&M data representation makes it easy to integrate our sensor data into the existing Geographic Information Systems (GIS) web services and exchange the data with other organizations.
Horton, John J.
2006-04-11
A system and method of maintaining communication between a computer and a server, the server being in communication with the computer via xDSL service or dial-up modem service, with xDSL service being the default mode of communication. The method includes sending a request to the server via xDSL service, to which the server should respond, and determining whether a response has been received. If no response has been received, a message is displayed on the computer (i) indicating that xDSL service has failed and (ii) offering to establish communication between the computer and the server via the dial-up modem, and the default mode of communication between the computer and the server is thereafter changed to dial-up modem service. In a preferred embodiment, an xDSL service provider monitors dial-up modem communications and determines whether the computer dialing in normally establishes communication with the server via xDSL service. The xDSL service provider can thus quickly and easily detect xDSL failures.
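The failover decision the abstract describes reduces to a small piece of logic; the probe callback and mode names below are assumptions for illustration:

```python
# Sketch of the xDSL-to-dial-up failover decision: probe the default
# path, and if the server does not answer, fall back and make dial-up
# the new default mode.

def choose_mode(probe_xdsl, default_mode="xdsl"):
    """probe_xdsl() returns True if the server answered over xDSL."""
    if default_mode == "xdsl" and probe_xdsl():
        return "xdsl"
    # No response over the default path: fall back to dial-up.
    return "dialup"

mode = choose_mode(lambda: False)  # simulate an xDSL outage
```

A real client would also surface the user-facing message described above before switching, and persist the returned mode as the new default.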
Remote online monitoring and measuring system for civil engineering structures
NASA Astrophysics Data System (ADS)
Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł
2009-06-01
In this paper a distributed intelligent system for the on-line measurement, remote monitoring, and data archiving of civil engineering structures is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously, the server performs checks ordered by the operator, which may in turn result in an alert or a specific action. The structure of the whole system is analyzed, along with a discussion of possible fields of application and ways to provide relevant security during data transport. Finally, a working implementation consisting of fringe projection, geometrical moiré, digital image correlation and grating interferometry sensors and an Oracle XE database is presented. Results from the database used for on-line monitoring of a threshold strain value in an exemplary area of interest on an engineering structure are presented and discussed.
HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation
NASA Astrophysics Data System (ADS)
Jin, Jin; Zhang, Jianguo; Chen, Xiaomeng; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Feng, Jie; Sheng, Liwei; Huang, H. K.
2006-03-01
As a governmental regulation, the Health Insurance Portability and Accountability Act (HIPAA) was issued to protect the privacy of health information that identifies individuals, whether living or deceased. HIPAA requires security services supporting the implementation features: access control, audit controls, authorization control, data authentication, and entity authentication. Among these controls, proposed in the HIPAA Security Standards, our focus here is audit trails. Audit trails can be used for surveillance, to detect when interesting events might be happening that warrant further investigation, or forensically, after the detection of a security breach, to determine what went wrong and who or what was at fault. In order to provide these security control services and to achieve high, continuous availability, we designed a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation. The system consists of two parts: monitoring agents running on each PACS component computer, and a Monitor Server running on a remote computer. Monitoring agents are deployed on all computer nodes in the RIS-integrated PACS system to collect the audit trail messages defined by Supplement 95 of the DICOM standard: Audit Trail Messages. The Monitor Server then gathers all audit messages and processes them to provide security information at three levels: system resources, PACS/RIS applications, and user/patient data access. RIS-integrated PACS managers can monitor and control the entire RIS-integrated PACS operation through a web service provided by the Monitor Server. This paper presents the design of a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation, and gives preliminary results obtained with this monitoring system on a clinical RIS-integrated PACS.
Automatic analysis of attack data from distributed honeypot network
NASA Astrophysics Data System (ADS)
Safarik, Jakub; Voznak, Miroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel
2013-05-01
There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them brings valuable information about actual attacks and the techniques used by hackers. The article describes a distributed topology of honeypots developed with a strong orientation toward monitoring IP telephony traffic. IP telephony servers can easily be exposed to various types of attacks, and without protection this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system for gathering information from all honeypots, it is possible to work with all the information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyzes the data from each honeypot. The results of this analysis, along with other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm that classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side for analysis of the gathered data.
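A rule-based classifier in the spirit of the VoIP-attack classification described above might look like the following sketch; the rules, field names, and labels are assumptions, not the authors' algorithm:

```python
# Illustrative server-side classification of parsed honeypot log entries.

def classify(event):
    """event: one parsed log entry collected from a honeypot node."""
    # Repeated failed SIP REGISTER attempts suggest credential guessing.
    if event.get("method") == "REGISTER" and event.get("auth_failures", 0) > 3:
        return "registration brute force"
    # SIP OPTIONS probes are commonly used to enumerate live servers.
    if event.get("method") == "OPTIONS":
        return "scanning/enumeration"
    # An abnormal call rate from one source suggests flooding.
    if event.get("calls_per_min", 0) > 50:
        return "call flooding"
    return "other"

label = classify({"method": "OPTIONS", "src": "203.0.113.9"})
```

The centralized server would run such rules over every honeypot's data and aggregate the labels into the per-network statistics shown in the web interface.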
Home medical monitoring network based on embedded technology
NASA Astrophysics Data System (ADS)
Liu, Guozhong; Deng, Wenyi; Yan, Bixi; Lv, Naiguang
2006-11-01
A remote medical monitoring network for long-term monitoring of physiological variables would be helpful for the recovery of patients, as people are monitored in more comfortable conditions. Furthermore, long-term monitoring would be beneficial for investigating slowly developing deterioration in the wellness status of a subject and providing medical treatment as soon as possible. The home monitor runs on an embedded microcomputer, the Rabbit 3000, and interfaces with different medical monitoring modules through serial ports. A network based on asymmetric digital subscriber line (ADSL) or local area network (LAN) is established, and a client-server model, in which each embedded home medical monitor is a client and the monitoring center is the server, is applied to the system design. A client provides its information to the server once its connection request is accepted. The monitoring center focuses on the management of communications, the acquisition of medical data, and the visualization and analysis of the data. A diagnostic model of sleep apnea syndrome is built based on ECG, heart rate, respiration wave, blood pressure, oxygen saturation, and air temperature of the mouth or nasal cavity, so sleep status can be analyzed from physiological data acquired while the subject sleeps. A remote medical monitoring network based on embedded micro-internetworking technology has the advantages of low price, convenience and feasibility, which have been tested with the prototype.
Rajan, J Pandia; Rajan, S Edward
2018-01-01
Designing a wireless physiological signal monitoring system with secured data communication in the health care system is an important and dynamic process. We propose a signal monitoring system using NI myRIO connected to a wireless body sensor network through a multi-channel signal acquisition method. After server-side validation of the signal, the data held on the local server are updated to the cloud. An Internet of Things (IoT) architecture is used to give healthcare service providers mobility and fast access to patient data. This research work proposes a novel architecture for a wireless physiological signal monitoring system using ubiquitous healthcare services via a virtual Internet of Things. We show an improvement in the method of access and in real-time dynamic monitoring of physiological signals for this remote monitoring system using the virtual Internet of Things approach. The remote monitoring and access system is evaluated against conventional measures. The proposed system is envisioned for modern smart health care, with high utility and user-friendliness in clinical applications. We claim that the proposed scheme significantly improves the accuracy of the remote monitoring system compared to other wireless communication methods in clinical systems.
A mobile phone-based ECG monitoring system.
Iwamoto, Junichi; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Hahn, Allen W; Caldwell, W Morton
2006-01-01
We have developed a telemedicine system for monitoring a patient's electrocardiogram during daily activities. The recording system consists of three ECG chest electrodes, a variable-gain instrumentation amplifier, a low-power 8-bit single-chip microcomputer, a 256 KB EEPROM and a 2.4 GHz low-transmitting-power mobile phone (PHS). The complete system is mounted on a single, lightweight, chest electrode array. When heart discomfort is felt, the patient pushes the data transmission switch on the recording system. The system sends the recorded ECG waveforms of the two minutes before and the two minutes after the switch is pressed directly to the hospital server computer via the PHS. The server computer sends the data to the physician on call. The data are displayed on the doctor's Java mobile phone LCD (Liquid Crystal Display), so he or she can monitor the ECG regardless of location. The developed ECG monitoring system is not only applicable to at-home patients, but should also be useful for monitoring hospital patients.
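The "two minutes before / two minutes after the switch" capture described above is naturally implemented with a rolling pre-trigger buffer. The following Python sketch illustrates the idea; the sample rate and class names are assumptions, not details from the paper.

```python
from collections import deque

SAMPLE_RATE_HZ = 100                       # assumed; not stated in the abstract
WINDOW_SAMPLES = 2 * 60 * SAMPLE_RATE_HZ   # two minutes of samples

class EcgEventRecorder:
    """Rolling pre-trigger buffer plus post-trigger capture, sketching the
    'two minutes before / two minutes after the switch' behaviour."""

    def __init__(self):
        self.pre = deque(maxlen=WINDOW_SAMPLES)  # oldest samples drop off
        self.post = []
        self.triggered = False

    def add_sample(self, sample):
        if not self.triggered:
            self.pre.append(sample)
        elif len(self.post) < WINDOW_SAMPLES:
            self.post.append(sample)
        # samples beyond the post-trigger window are dropped in this sketch

    def trigger(self):
        """Called when the patient presses the data transmission switch."""
        self.triggered = True

    def payload(self):
        """Waveform that would be sent to the hospital server."""
        return list(self.pre) + self.post
```

The `deque` with `maxlen` silently discards the oldest samples, so the pre-trigger window always holds the most recent two minutes without explicit bookkeeping.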
Personalized professional content recommendation
Xu, Songhua
2015-10-27
A personalized content recommendation system includes a client interface configured to automatically monitor a user's information data stream transmitted on the Internet. A hybrid contextual behavioral and collaborative personal interest inference engine resident to a non-transient media generates automatic predictions about the interests of individual users of the system. A database server retains the user's personal interest profile based on a plurality of monitored information. The system also includes a server programmed to filter items in an incoming information stream with the personal interest profile and is further programmed to identify only those items of the incoming information stream that substantially match the personal interest profile.
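The filtering step described in the last sentence can be illustrated with a simple keyword-overlap score. This Python sketch is a hypothetical stand-in for the patent's interest inference engine; the overlap metric, item format, and threshold are assumptions.

```python
def matches_profile(item_keywords, profile, threshold=0.5):
    """True if enough of the item's keywords overlap the user's personal
    interest profile. The overlap metric and threshold are illustrative
    assumptions, not the patent's inference engine."""
    keywords = set(item_keywords)
    if not keywords:
        return False
    return len(keywords & set(profile)) / len(keywords) >= threshold

def filter_stream(items, profile):
    """Keep only the stream items that substantially match the profile."""
    return [it for it in items if matches_profile(it["keywords"], profile)]
```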
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
2004-03-22
ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications, accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies such as Apache as web server, PHP as server-side scripting language and OpenPBS as queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
Cheon, Gyeongwoo; Shin, Il Hyung; Jung, Min Yang; Kim, Hee Chan
2009-01-01
We developed a gateway server to support various types of bio-signal monitoring devices for ubiquitous emergency healthcare in a reliable, effective, and scalable way. The server provides multiple channels supporting real-time N-to-N client connections. We applied our system to four types of health monitoring devices, including a 12-channel electrocardiograph (ECG), an oxygen saturation (SpO2) monitor, and medical imaging devices (an ultrasonograph and a digital skin microscope). Different types of telecommunication networks were tested: WiBro, CDMA, wireless LAN, and wired internet. We measured the performance of our system in terms of transmission rate and number of simultaneous connections. The results show that the proposed network communication strategy can be successfully applied to the ubiquitous emergency healthcare service, providing rates fast enough for real-time video transmission and multiple connections among patients and medical personnel.
Application-level regression testing framework using Jenkins
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budiardja, Reuben; Bouvet, Timothy; Arnold, Galen
2017-09-26
Monitoring and testing for regression of large-scale systems such as NCSA's Blue Waters supercomputer are challenging tasks. In this paper, we describe the solution we devised to perform those tasks. The goal was to find an automated solution for running user-level regression tests to evaluate system usability and performance. Jenkins, an automation server, was chosen for its versatility, large user base, and multitude of plugins, including ones for collecting data and plotting test results over time. We also describe our Jenkins deployment to launch and monitor jobs on a remote HPC system, perform authentication with one-time passwords, and integrate with our LDAP server for authorization. We show some use cases and describe our best practices for successfully using Jenkins as a user-level, system-wide regression testing and monitoring framework for large supercomputer systems.
A Cyber-Physical System for Girder Hoisting Monitoring Based on Smartphones.
Han, Ruicong; Zhao, Xuefeng; Yu, Yan; Guan, Quanhua; Hu, Weitong; Li, Mingchu
2016-07-07
Offshore design and construction is much more difficult than land-based design and construction, particularly due to hoisting operations. Real-time monitoring of the orientation and movement of a hoisted structure is thus required for operators' safety. In recent years, the rapid development of the smartphone market has made it possible for everyone to carry a mini personal computer integrated with sensors, an operating system and a communication system, which can act as an effective aid for cyber-physical systems (CPS) research. In this paper, a CPS for hoisting monitoring using smartphones is proposed, comprising a phone collector, a controller and a server. This system uses smartphones equipped with internal sensors to obtain girder movement information, which is uploaded to a server and then returned to controller users. An alarm is raised on the controller phone once the returned data exceed a threshold. The proposed monitoring system is used to monitor the movement and orientation of a girder in real time during hoisting on a cross-sea bridge. The results show the convenience and feasibility of the proposed system.
NASA Technical Reports Server (NTRS)
Douard, Stephane
1994-01-01
The system presented, known as the Graphic Server, was designed for the control ground segment of the Telecom 2 satellites. It is a tool used to dynamically display telemetry data within graphic pages, also known as views. The views are created off-line through various utilities and then, at the operator's request, displayed and animated in real time as data are received. The system was designed as an independent component and is installed in different Telecom 2 operational control centers. It enables operators to monitor changes in the platform and satellite payloads in real time. It has been in operation since December 1991.
VoIP attacks detection engine based on neural network
NASA Astrophysics Data System (ADS)
Safarik, Jakub; Slachta, Jiri
2015-05-01
Security is crucial for any system nowadays, especially communications. One of the most successful protocols in the field of communication over IP networks is the Session Initiation Protocol (SIP). It is an open standard used by different kinds of applications, both open-source and proprietary. High penetration and its text-based principle have made SIP the number one target in IP telephony infrastructure, so the security of SIP servers is essential. To keep up with hackers and to detect potential malicious attacks, a security administrator needs to monitor and evaluate SIP traffic in the network. But monitoring and subsequent evaluation can easily overwhelm the security administrator, typically in networks with many SIP servers and users and with logically or geographically separated segments. The proposed solution lies in automatic attack detection systems. The article covers detection of VoIP attacks through a distributed network of nodes. An aggregation server then analyzes the gathered data with an artificial neural network, a multilayer perceptron trained with a set of collected attacks. Attack data can also be preprocessed and verified with a self-organizing map. The source data are detected by a distributed network of detection nodes. Each node contains a honeypot application and a traffic monitoring mechanism. Aggregation of data from each node creates the input for the neural network. Automatic classification on a centralized server with low false-positive detection reduces the cost of attack detection resources. The detection system uses a modular design for easy deployment in the final infrastructure. The centralized server collects and processes the detected traffic, and also maintains all detection nodes.
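The classification stage can be illustrated with a minimal multilayer perceptron forward pass. The weights, layer sizes, and activation below are illustrative assumptions; in the article the network is trained on collected attack data.

```python
import math

def mlp_forward(x, w_hidden, w_out):
    """One-hidden-layer multilayer perceptron forward pass, the classifier
    type the aggregation server uses. Weights here are illustrative;
    training on collected attacks is omitted."""
    def act(v):
        return 1.0 / (1.0 + math.exp(-v))   # logistic activation

    hidden = [act(sum(wi * xi for wi, xi in zip(row, x))) for row in w_hidden]
    out = [act(sum(wi * hi for wi, hi in zip(row, hidden))) for row in w_out]
    return out.index(max(out))               # index of the predicted attack class
```

With hand-picked weights that pass each input feature through its own hidden unit, the network acts as a simple per-feature detector, which is enough to show the data flow.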
Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei
2011-07-01
The aim of this study is to develop a therapeutic drug monitoring (TDM) network server of tacrolimus for Chinese renal transplant patients, which can help doctors manage patients' information and provides three levels of prediction. The database management system MySQL was employed to build and manage the database of patient and doctor information, and hypertext markup language (HTML) and JavaServer Pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above programming languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. The network server is shown to have the basic functions of database management and three levels of prediction to help doctors optimize the tacrolimus regimen for Chinese renal transplant patients.
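The Bayesian step, minimizing an objective built from the likelihood of the observed concentrations plus a prior penalty around the population value, can be sketched as follows. The one-compartment model, all parameter values, and the grid-search optimizer are illustrative assumptions, not the paper's tacrolimus population model.

```python
import math

def map_objective(cl, times, conc_obs, dose=5.0, v=50.0,
                  sigma=0.05, cl_pop=10.0, omega=0.3):
    """Negative log posterior (up to a constant) for clearance CL in a
    one-compartment IV-bolus model. All values are illustrative."""
    nll = 0.0
    for t, c_obs in zip(times, conc_obs):
        c_pred = (dose / v) * math.exp(-cl / v * t)
        nll += (c_obs - c_pred) ** 2 / (2 * sigma ** 2)      # likelihood term
    nll += (math.log(cl / cl_pop)) ** 2 / (2 * omega ** 2)   # log-normal prior
    return nll

def estimate_cl(times, conc_obs):
    """Grid search stands in for the paper's optimization algorithm."""
    grid = [cl / 100 for cl in range(200, 3000)]
    return min(grid, key=lambda cl: map_objective(cl, times, conc_obs))
```

The estimate is shrunk toward the population value, which is the hallmark of this kind of Bayesian individual prediction: with noise-free data generated at CL = 12, the MAP estimate lands between the prior mode (10) and the data optimum (12).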
GSM module for wireless radiation monitoring system via SMS
NASA Astrophysics Data System (ADS)
Rahman, Nur Aira Abd; Hisyam Ibrahim, Noor; Lombigit, Lojius; Azman, Azraf; Jaafar, Zainudin; Arymaswati Abdullah, Nor; Hadzir Patai Mohamad, Glam
2018-01-01
A customised Global System for Mobile communications (GSM) module is designed for wireless radiation monitoring through the Short Message Service (SMS). This module is able to receive serial data from radiation monitoring devices such as a survey meter or area monitor and transmit the data as a text SMS to a host server. It provides two-way communication for data transmission, status queries, and configuration setup. The module hardware consists of a GSM module, a voltage level shifter, a SIM circuit and an ATmega328P microcontroller. The microcontroller provides control for sending, receiving and AT-command processing for the GSM module. The firmware is responsible for handling tasks related to communication between the device and the host server. It processes all incoming SMS messages, extracts and stores new configuration from the host, transmits an alert/notification SMS when the radiation data reach or exceed the threshold value, and transmits SMS data at fixed intervals according to the configuration. Integration of this module with a radiation survey/monitoring device will create a mobile, wireless radiation monitoring system with prompt emergency alerts at high radiation levels.
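The firmware's threshold-alert and configuration-SMS handling might look roughly like this Python sketch (the real firmware runs in C on the microcontroller; the message formats, units, and configuration keys here are assumptions).

```python
def handle_reading(reading_usv_h, threshold_usv_h, station_id):
    """Return an alert SMS body if the reading reaches the threshold,
    else None. Format and units (uSv/h) are illustrative assumptions."""
    if reading_usv_h >= threshold_usv_h:
        return (f"ALERT {station_id}: dose rate {reading_usv_h:.2f} uSv/h "
                f">= threshold {threshold_usv_h:.2f} uSv/h")
    return None

def apply_config_sms(text, config):
    """Parse a host-to-module configuration SMS of the assumed form
    'SET <KEY> <VALUE>' and update the stored configuration."""
    parts = text.split()
    if len(parts) == 3 and parts[0] == "SET":
        config[parts[1].lower()] = float(parts[2])
        return True
    return False
```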
Developing Control System of Electrical Devices with Operational Expense Prediction
NASA Astrophysics Data System (ADS)
Sendari, Siti; Wahyu Herwanto, Heru; Rahmawati, Yuni; Mukti Putranto, Dendi; Fitri, Shofiana
2017-04-01
The purpose of this research is to develop a system that can monitor and record home electrical devices' electricity usage. The system is able to control electrical devices remotely and predict the operational expense. It was developed using microcontrollers and WiFi modules connected to a PC server, with communication between modules arranged by the server via WiFi. Besides reading the electricity usage of home electrical devices, the unique point of the proposed system is the ability of the microcontrollers to send electricity data to the server, which records the usage of each device. Testing was done with the black-box method to verify the functionality of the system; the system ran correctly with a 0% error rate.
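The operational-expense prediction reduces to an energy-times-tariff calculation. A minimal sketch, assuming a flat per-kWh tariff (the abstract does not specify the pricing model):

```python
def operational_expense(power_w, hours_per_day, days, tariff_per_kwh):
    """Predict the operating cost of a device from its measured power draw.
    A flat per-kWh tariff is assumed for illustration."""
    energy_kwh = power_w / 1000 * hours_per_day * days
    return energy_kwh * tariff_per_kwh
```

For example, a 100 W device used 10 hours a day for 30 days consumes 30 kWh, so its predicted expense is 30 times the tariff.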
The event notification and alarm system for the Open Science Grid operations center
NASA Astrophysics Data System (ADS)
Hayashi, S.; Teige, S.; Quick, R.
2012-12-01
The Open Science Grid Operations (OSG) Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent to, for example, Email, SMS, Twitter, an Instant Message Server, etc. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
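The severity-based routing of error conditions to channels such as Email, SMS, or instant messaging can be sketched as a small dispatch table. The severity levels and channel names below are illustrative assumptions, not OSG's actual configuration.

```python
def dispatch(event, routes):
    """Pick notification channels for an event based on its severity.
    `routes` is a list of (minimum severity, channels) pairs; the highest
    matching level wins. Levels and channel names are assumptions."""
    for min_severity, channels in sorted(routes, reverse=True):
        if event["severity"] >= min_severity:
            return channels
    return []
```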
Development of HIHM (Home Integrated Health Monitor) for ubiquitous home healthcare.
Kim, Jung Soo; Kim, Beom Oh; Park, Kwang Suk
2007-01-01
The Home Integrated Health Monitor (HIHM) was developed for ubiquitous home healthcare. From quantitative analysis, we elicited a model of the chair. The HIHM can detect the electrocardiogram (ECG) and photoplethysmography (PPG) non-intrusively. It can also estimate blood pressure (BP) non-intrusively and measure blood glucose and ear temperature. The detected signals and information are transmitted to a home gateway and home server through ZigBee communication technology. The home server forwards them to the healthcare center, where specialists such as medical doctors can monitor them over the Internet. There is also a feedback system. This device has potential for studies of ubiquitous home healthcare.
A Server-Based Mobile Coaching System
Baca, Arnold; Kornfeind, Philipp; Preuschl, Emanuel; Bichler, Sebastian; Tampier, Martin; Novatchkov, Hristo
2010-01-01
A prototype system for monitoring, transmitting and processing performance data in sports for the purpose of providing feedback has been developed. During training, athletes are equipped with a mobile device and wireless sensors using the ANT protocol in order to acquire biomechanical, physiological and other sports specific parameters. The measured data is buffered locally and forwarded via the Internet to a server. The server provides experts (coaches, biomechanists, sports medicine specialists etc.) with remote data access, analysis and (partly automated) feedback routines. In this way, experts are able to analyze the athlete’s performance and return individual feedback messages from remote locations. PMID:22163490
A new mobile phone-based ECG monitoring system.
Iwamoto, Junichi; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Hahn, Allen W; Caldwell, W Morton
2007-01-01
We have developed a system for monitoring a patient's electrocardiogram (ECG) and movement during daily activities. The complete system is mounted on chest electrodes and continuously samples the ECG and three-axis accelerations. When the patient feels heart discomfort, he or she pushes the data transmission switch on the recording system, and the system sends the recorded ECG waveforms and three-axis accelerations of the two minutes before and the two minutes after the switch is pressed. The data go directly to a hospital server computer via a 2.4 GHz low-power mobile phone. These data are stored on the server computer and downloaded to the physician's Java mobile phone, which can display them on its liquid crystal display.
Privacy-Preserving Electrocardiogram Monitoring for Intelligent Arrhythmia Detection.
Son, Junggab; Park, Juyoung; Oh, Heekuck; Bhuiyan, Md Zakirul Alam; Hur, Junbeom; Kang, Kyungtae
2017-06-12
Long-term electrocardiogram (ECG) monitoring, as a representative application of cyber-physical systems, facilitates the early detection of arrhythmia. A considerable number of previous studies have explored monitoring techniques and the automated analysis of sensing data. However, ensuring patient privacy or confidentiality has not been a primary concern in ECG monitoring. We propose an intelligent heart monitoring system, which involves a patient-worn ECG sensor (e.g., a smartphone) and a remote monitoring station, as well as a decision support server that interconnects these components. The decision support server analyzes the heart activity, using the Pan-Tompkins algorithm to detect heartbeats and a decision tree to classify them. Our system protects sensing data and user privacy, an essential attribute of dependability, by adopting signal scrambling and anonymous identity schemes. We also employ a public-key cryptosystem to enable secure communication between the entities. Simulations using data from the MIT-BIH arrhythmia database demonstrate that our system achieves a 95.74% success rate in heartbeat detection and 96.63% accuracy in heartbeat classification, while successfully preserving privacy and securing communications among the involved entities. PMID:28604628
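The heartbeat-detection step can be illustrated with a greatly simplified, Pan-Tompkins-style detector: differentiate, square, integrate over a moving window, then threshold. The band-pass filtering and adaptive thresholds of the real algorithm are omitted, and all constants here are assumptions.

```python
def detect_beats(signal, fs, threshold_ratio=0.5):
    """Toy R-peak detector in the spirit of Pan-Tompkins.
    `fs` is the sampling rate in Hz; constants are illustrative."""
    diff = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    sq = [d * d for d in diff]                        # emphasize steep slopes
    win = max(1, int(0.15 * fs))                      # ~150 ms integration window
    integ = [sum(sq[max(0, i - win):i + 1]) for i in range(len(sq))]
    thr = threshold_ratio * max(integ)                # fixed, not adaptive
    refractory = int(0.2 * fs)                        # ~200 ms lockout
    peaks, last = [], -10**9
    for i, v in enumerate(integ):
        if v >= thr and i - last > refractory:
            peaks.append(i)
            last = i
    return peaks
```

On a synthetic trace of isolated spikes the detector recovers one peak per spike; real ECG would additionally need the algorithm's filtering and adaptive thresholding.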
Self-Powered WSN for Distributed Data Center Monitoring.
Brunelli, Davide; Passerone, Roberto; Rizzon, Luca; Rossi, Maurizio; Sartori, Davide
2016-01-02
Monitoring environmental parameters in data centers is nowadays gathering increasing attention from industry, due to the need for high energy efficiency in cloud services. We present the design and characterization of an energy-neutral embedded wireless system, prototyped to perpetually monitor environmental parameters in servers and racks. It is powered by an energy harvesting module based on thermoelectric generators, which converts the heat dissipated by the servers. Starting from the empirical characterization of the energy harvester, we present a power conditioning circuit optimized for the specific application. The whole system has been enhanced with several sensors. An ultra-low-power microcontroller stacked over the energy harvester provides efficient power management. Performance has been assessed and compared with the analytical model for validation. PMID:26729135
Web-Accessible Scientific Workflow System for Performance Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roelof Versteeg; Roelof Versteeg; Trevor Rowe
2006-03-01
We describe the design and implementation of a web-accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic Javascript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using web services. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.
Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure
NASA Astrophysics Data System (ADS)
Antolik, L.; Shiro, B.; Friberg, P. A.
2016-12-01
The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) virtualization of AQMS physical servers; 2) migration of server operating systems from Solaris to Linux; 3) consolidation of AQMS real-time and post-processing services to a single server; 4) upgrading the database from Oracle 10 to Oracle 12; and 5) upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, it will provide HVO with the latest and most secure software available while making the software easier to deploy and support.
Request queues for interactive clients in a shared file system of a parallel computing system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin
Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks, and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on a predefined policy and/or predefined rules.
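The merging of the interactive and batch queues into one prioritized metadata queue can be sketched with a heap. The strict interactive-first policy below is an illustrative stand-in for the allocation decided by the virtual machine monitor.

```python
import heapq
import itertools

class MetadataQueue:
    """Single metadata queue fed from an interactive-client queue and a
    batch-client queue, with interactive requests served first. The
    priority policy is an assumption, not the patent's exact rules."""

    PRIORITY = {"interactive": 0, "batch": 1}

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break within a class

    def push(self, client_class, request):
        heapq.heappush(self._heap,
                       (self.PRIORITY[client_class], next(self._seq), request))

    def pop(self):
        return heapq.heappop(self._heap)[2]
```

The sequence counter keeps arrival order within each class, so batch requests are never reordered among themselves, only deferred behind interactive ones.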
Test-bed for the remote health monitoring system for bridge structures using FBG sensors
NASA Astrophysics Data System (ADS)
Lee, Chin-Hyung; Park, Ki-Tae; Joo, Bong-Chul; Hwang, Yoon-Koog
2009-05-01
This paper reports on a test-bed for a long-term health monitoring system for bridge structures employing fiber Bragg grating (FBG) sensors, which is remotely accessible via the web, to provide real-time quantitative information on a bridge's response to live loading and environmental changes, and fast prediction of the structure's integrity. The sensors are attached at several locations on the structure and connected to a data acquisition system permanently installed onsite. The system can be accessed through remote communication over an optical cable network, through which the bridge's behavior under live loading can be evaluated far away from the field. Live structural data are transmitted continuously to the server computer at the central office. The server computer is connected securely to the internet, where data can be retrieved, processed and stored for remote web-based health monitoring. The test-bed showed that remote health monitoring technology will enable practical, cost-effective, and reliable condition assessment and maintenance of bridge structures.
Landslide and Flood Warning System Prototypes based on Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Hloupis, George; Stavrakas, Ilias; Triantis, Dimos
2010-05-01
Wireless sensor networks (WSNs) are one of the emerging areas that received great attention during the last few years. This is mainly due to the fact that WSNs have provided scientists with the capability of developing real-time monitoring systems equipped with sensors based on Micro-Electro-Mechanical Systems (MEMS). WSNs have great potential for many applications in environmental monitoring since the sensor nodes that comprised from can host several MEMS sensors (such as temperature, humidity, inertial, pressure, strain-gauge) and transducers (such as position, velocity, acceleration, vibration). The resulting devices are small and inexpensive but with limited memory and computing resources. Each sensor node contains a sensing module which along with an RF transceiver. The communication is broadcast-based since the network topology can change rapidly due to node failures [1]. Sensor nodes can transmit their measurements to central servers through gateway nodes without any processing or they make preliminary calculations locally in order to produce results that will be sent to central servers [2]. Based on the above characteristics, two prototypes using WSNs are presented in this paper: A Landslide detection system and a Flood warning system. Both systems sent their data to central processing server where the core of processing routines exists. Transmission is made using Zigbee and IEEE 802.11b protocol but is capable to use VSAT communication also. Landslide detection system uses structured network topology. Each measuring node comprises of a columnar module that is half buried to the area under investigation. Each sensing module contains a geophone, an inclinometer and a set of strain gauges. Data transmitted to central processing server where possible landslide evolution is monitored. Flood detection system uses unstructured network topology since the failure rate of sensor nodes is expected higher. 
Each sensing module contains a custom water level sensor (based on plastic optical fiber). Data are transmitted directly to the server, where early warning algorithms monitor the water level variations in real time. Both sensor nodes use power harvesting techniques to extend their battery life as much as possible. [1] Yick, J.; Mukherjee, B.; Ghosal, D. Wireless sensor network survey. Comput. Netw. 2008, 52, 2292-2330. [2] Garcia, M.; Bri, D.; Boronat, F.; Lloret, J. A new neighbor selection strategy for group-based wireless sensor networks. In The Fourth International Conference on Networking and Services (ICNS 2008), Gosier, Guadeloupe, March 16-21, 2008.
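The abstract above does not give the early-warning algorithm itself; the following is a minimal illustrative sketch of the kind of server-side check it describes, combining an absolute water-level alarm with a rate-of-rise warning. All thresholds, names, and units here are assumptions, not values from the paper.

```python
from collections import deque

class FloodWarning:
    """Sketch of a flood early-warning check over a stream of level readings."""

    def __init__(self, level_limit_cm=300.0, rise_limit_cm_per_min=5.0, window=5):
        self.level_limit = level_limit_cm        # absolute water-level alarm
        self.rise_limit = rise_limit_cm_per_min  # rate-of-rise warning
        self.samples = deque(maxlen=window)      # recent (minute, level) history

    def update(self, minute, level_cm):
        """Add one reading and return a warning string, or None if all is normal."""
        self.samples.append((minute, level_cm))
        if level_cm >= self.level_limit:
            return "ALARM: water level above limit"
        if len(self.samples) >= 2:
            (t0, l0), (t1, l1) = self.samples[0], self.samples[-1]
            rate = (l1 - l0) / (t1 - t0)         # cm per minute over the window
            if rate >= self.rise_limit:
                return "WARNING: rapid water-level rise"
        return None
```

In use, the server would call `update()` once per incoming sensor packet; a sliding window keeps the rate estimate robust to single noisy samples.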
[Automated anesthesia record system].
Zhu, Tao; Liu, Jin
2005-12-01
Based on a Client/Server architecture, automated anesthesia record system software running under the Windows operating system on a network has been developed and programmed with Microsoft Visual C++ 6.0, Visual Basic 6.0, and SQL Server. The system manages the patient's information throughout anesthesia. It can automatically collect and integrate, in real time, data from several kinds of medical equipment, such as monitors, infusion pumps, and anesthesia machines. The system then generates the anesthesia sheets automatically. The record system makes the anesthesia record more accurate and complete and can improve the anesthesiologist's working efficiency.
The HydroServer Platform for Sharing Hydrologic Data
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.
2010-12-01
The CUAHSI Hydrologic Information System (HIS) is an internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture comprises servers for publishing and sharing data, a centralized catalog to support cross-server data discovery, and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. The CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration.
It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts in advancing hydrologic research. Details of the CUAHSI HIS can be found at http://his.cuahsi.org, and the HydroServer code at http://hydroserver.codeplex.com.
Remote vibration monitoring system using wireless internet data transfer
NASA Astrophysics Data System (ADS)
Lemke, John
2000-06-01
Vibrations from construction activities can affect infrastructure projects in several ways. Within the general vicinity of a construction site, vibrations can result in damage to existing structures, disturbance to people, damage to sensitive machinery, and degraded performance of precision instrumentation or motion-sensitive equipment. Current practice for monitoring vibrations in the vicinity of construction sites commonly consists of measuring free-field or structural motions using velocity transducers connected to a portable data acquisition unit via cables. This paper describes an innovative way to collect, process, transmit, and analyze vibration measurements obtained at construction sites. The system described measures vibration at the sensor location, performs the necessary signal conditioning and digitization, and sends data to a Web server using wireless data transmission and Internet protocols. A Servlet program running on the Web server accepts the transmitted data and incorporates them into a project database. Two-way interaction between the Web client and the Web server is accomplished through the use of a Servlet program and a Java Applet running inside a browser on the Web client's computer. Advantages of this system over conventional vibration data logging systems include continuous unattended monitoring, reduced costs associated with field data collection, instant access to data files and graphs by project team members, and the ability to remotely modify data sampling schemes.
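As a hedged illustration of the "process at the sensor, then transmit" step described above, the sketch below reduces one digitized velocity record to its peak particle velocity (a common construction-vibration metric) and packages it as JSON for upload. The field names and the choice of metric are assumptions for illustration, not taken from the paper.

```python
import json

def summarize_record(samples_mm_s, sensor_id, timestamp):
    """Reduce one vibration record to its peak particle velocity (PPV)
    and return a JSON payload ready for upload to a project database."""
    ppv = max(abs(v) for v in samples_mm_s)   # largest absolute velocity sample
    return json.dumps({
        "sensor": sensor_id,                  # hypothetical field names
        "time": timestamp,
        "ppv_mm_s": round(ppv, 3),
        "n_samples": len(samples_mm_s),
    })
```

Summarizing at the field unit, as here, is what makes wireless upload cheap: one small record replaces the full waveform.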
Mobile cloud-computing-based healthcare service by noncontact ECG monitoring.
Fong, Ee-May; Chung, Wan-Young
2013-12-02
The noncontact electrocardiogram (ECG) measurement technique has gained popularity owing to its noninvasive nature and convenience in daily life. This paper presents mobile cloud computing for a healthcare system in which a noncontact ECG measurement method is employed to capture biomedical signals from users. The healthcare service continuously collects biomedical signals from multiple locations. To observe and analyze the ECG signals in real time, a mobile device is used as a mobile monitoring terminal. In addition, a personalized healthcare assistant is installed on the mobile device; several healthcare features, such as health status summaries, medication QR code scanning, and reminders, are integrated into the mobile application. Health data are synchronized to the healthcare cloud computing service (Web server system and Web server dataset) to ensure seamless healthcare monitoring anytime and anywhere a network connection is available. Together with a Web page application, medical data are easily accessed by medical professionals or family members. A Web page performance evaluation was conducted to ensure minimal Web server latency. The system demonstrates better availability of off-site and up-to-the-minute patient data, which can help detect health problems early and keep elderly patients out of the emergency room, thus providing a better and more comprehensive healthcare cloud computing service.
Mobile Cloud-Computing-Based Healthcare Service by Noncontact ECG Monitoring
Fong, Ee-May; Chung, Wan-Young
2013-01-01
The noncontact electrocardiogram (ECG) measurement technique has gained popularity owing to its noninvasive nature and convenience in daily life. This paper presents mobile cloud computing for a healthcare system in which a noncontact ECG measurement method is employed to capture biomedical signals from users. The healthcare service continuously collects biomedical signals from multiple locations. To observe and analyze the ECG signals in real time, a mobile device is used as a mobile monitoring terminal. In addition, a personalized healthcare assistant is installed on the mobile device; several healthcare features, such as health status summaries, medication QR code scanning, and reminders, are integrated into the mobile application. Health data are synchronized to the healthcare cloud computing service (Web server system and Web server dataset) to ensure seamless healthcare monitoring anytime and anywhere a network connection is available. Together with a Web page application, medical data are easily accessed by medical professionals or family members. A Web page performance evaluation was conducted to ensure minimal Web server latency. The system demonstrates better availability of off-site and up-to-the-minute patient data, which can help detect health problems early and keep elderly patients out of the emergency room, thus providing a better and more comprehensive healthcare cloud computing service. PMID:24316562
Operational Experience with the Frontier System in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter
2012-06-20
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
Operational Experience with the Frontier System in CMS
NASA Astrophysics Data System (ADS)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen
2012-12-01
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
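The load-reduction idea in the Frontier/Squid architecture can be shown with a toy caching front end: identical queries are answered from a nearby cache, so only first-time (miss) queries reach the central database. This is a simplified sketch of the general technique, not Frontier's or Squid's actual implementation; names here are hypothetical.

```python
class CachingProxy:
    """Toy HTTP-cache analogue: serve repeated queries without touching
    the backend, counting hits and misses for monitoring."""

    def __init__(self, backend):
        self.backend = backend   # callable: query string -> payload
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, query):
        if query in self.cache:
            self.hits += 1       # served locally, backend untouched
            return self.cache[query]
        self.misses += 1
        payload = self.backend(query)   # only misses reach the database
        self.cache[query] = payload
        return payload
```

With thousands of identical conditions queries per job batch, the hit/miss counters make visible why the worldwide caches absorb over 700 million requests per day while the central servers see only millions.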
Honey Bee Colonies Remote Monitoring System.
Gil-Lebrero, Sergio; Quiles-Latorre, Francisco Javier; Ortiz-López, Manuel; Sánchez-Ruiz, Víctor; Gámiz-López, Victoria; Luna-Rodríguez, Juan Jesús
2016-12-29
Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to carry out such information gathering as non-intrusively as possible to avoid modifying the bees' working conditions and to obtain more reliable data. To meet these requirements, we designed a wireless sensor network based remote monitoring system (called WBee) built on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server. WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data at each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive at three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time.
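WBee's "save the data at each level if there are failures in communication" is a store-and-forward pattern. The sketch below shows that pattern in its simplest form: buffer readings locally, drain the backlog whenever the uplink works again. This is an assumed illustration of the general idea, not WBee's code; all names are hypothetical.

```python
class BufferedUplink:
    """Store-and-forward buffer: keep readings locally while the link
    is down, drain the backlog in order once it comes back."""

    def __init__(self, send):
        self.send = send      # callable that raises ConnectionError on link failure
        self.backlog = []

    def push(self, reading):
        """Queue one reading and try to flush; return how many were delivered."""
        self.backlog.append(reading)
        delivered = 0
        while self.backlog:
            try:
                self.send(self.backlog[0])
            except ConnectionError:
                break                 # keep the backlog; retry on next push
            self.backlog.pop(0)
            delivered += 1
        return delivered
```

Applied at every level (node, local server, cloud), no reading is lost to a transient outage, which matches the behavior the abstract claims.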
Honey Bee Colonies Remote Monitoring System
Gil-Lebrero, Sergio; Quiles-Latorre, Francisco Javier; Ortiz-López, Manuel; Sánchez-Ruiz, Víctor; Gámiz-López, Victoria; Luna-Rodríguez, Juan Jesús
2016-01-01
Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to carry out such information gathering as non-intrusively as possible to avoid modifying the bees' working conditions and to obtain more reliable data. To meet these requirements, we designed a wireless sensor network based remote monitoring system (called WBee) built on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server. WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data at each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive at three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time. PMID:28036061
BIO-Plex Information System Concept
NASA Technical Reports Server (NTRS)
Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)
1999-01-01
This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers, which perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate, and complete data. Data describing the system configuration and the real-time processes can be collected, checked, reconciled, analyzed, and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.
A Remote Health Monitoring System for the Elderly Based on Smart Home Gateway
Shao, Minggang
2017-01-01
This paper proposes a remote health monitoring system for the elderly based on a smart home gateway. The proposed system consists of three parts: the smart clothing, the smart home gateway, and the health care server. The smart clothing collects the elderly user's electrocardiogram (ECG) and motion signals. The home gateway is used for data transmission. The health care server provides data storage and user information management services; it is constructed on the Windows-Apache-MySQL-PHP (WAMP) platform and is tested on the Ali Cloud platform. To resolve the issues of data overload and network congestion at the home gateway, an ECG compression algorithm is applied. A system demonstration shows that the ECG and motion signals of the elderly can be monitored. Evaluation of the compression algorithm shows that it has a high compression ratio, low distortion, and low processing time, making it suitable for home gateways. The proposed system has good scalability and is simple to operate. It has the potential to provide long-term, continuous home health monitoring services for the elderly. PMID:29204258
A Remote Health Monitoring System for the Elderly Based on Smart Home Gateway.
Guan, Kai; Shao, Minggang; Wu, Shuicai
2017-01-01
This paper proposes a remote health monitoring system for the elderly based on a smart home gateway. The proposed system consists of three parts: the smart clothing, the smart home gateway, and the health care server. The smart clothing collects the elderly user's electrocardiogram (ECG) and motion signals. The home gateway is used for data transmission. The health care server provides data storage and user information management services; it is constructed on the Windows-Apache-MySQL-PHP (WAMP) platform and is tested on the Ali Cloud platform. To resolve the issues of data overload and network congestion at the home gateway, an ECG compression algorithm is applied. A system demonstration shows that the ECG and motion signals of the elderly can be monitored. Evaluation of the compression algorithm shows that it has a high compression ratio, low distortion, and low processing time, making it suitable for home gateways. The proposed system has good scalability and is simple to operate. It has the potential to provide long-term, continuous home health monitoring services for the elderly.
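The paper does not state which ECG compression algorithm the gateway uses; the following is only a minimal delta-encoding sketch of why ECG streams compress well: successive samples differ by small amounts, so the differences need far fewer bits than the raw values. Function names and sample values are illustrative.

```python
def delta_encode(samples):
    """Keep the first sample verbatim, then store successive differences."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(encoded):
    """Invert delta_encode by cumulative summation."""
    out = [encoded[0]]
    for d in encoded[1:]:
        out.append(out[-1] + d)
    return out
```

A real gateway codec would follow the differencing stage with entropy coding (and possibly a lossy transform) to reach the high compression ratios the abstract reports; the round-trip property shown here is what "low distortion" requires of the lossless stage.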
Sensor Fusion for Nuclear Proliferation Activity Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghanem, Adel
2007-03-30
The objective of Phase 1 of this STTR project is to demonstrate a Proof-of-Concept (PoC) of the Geo-Rad system that integrates a location-aware SmartTag (made by ZonTrak) and a radiation detector (developed by LLNL). It also includes the ability to transmit the collected radiation data and location information to the ZonTrak server (ZonService). The collected data is further transmitted to a central server at LLNL (the Fusion Server) to be processed in conjunction with overhead imagery to generate location estimates of nuclear proliferation and radiation sources.
EVAcon: a protein contact prediction evaluation service
Graña, Osvaldo; Eyrich, Volker A.; Pazos, Florencio; Rost, Burkhard; Valencia, Alfonso
2005-01-01
Here we introduce EVAcon, an automated web service that evaluates the performance of contact prediction servers. Currently, EVAcon monitors nine servers, four of which are specialized in contact prediction and five of which are general structure prediction servers. Results are compared for all newly determined experimental structures deposited into the PDB (∼5–50 per week). EVAcon allows for a precise comparison of the results based on a system of common protein subsets and the commonly accepted evaluation criteria that are also used in the corresponding category of the CASP assessment. EVAcon is a new service added to the functionality of the EVA system for the continuous evaluation of protein structure prediction servers. The new service is accessible from any of the three EVA mirrors: PDG (CNB-CSIC, Madrid); CUBIC (Columbia University, NYC); and Sali Lab (UCSF, San Francisco). PMID:15980486
CrossVit: enhancing canopy monitoring management practices in viticulture.
Matese, Alessandro; Vaccari, Francesco Primo; Tomasi, Diego; Di Gennaro, Salvatore Filippo; Primicerio, Jacopo; Sabatini, Francesco; Guidoni, Silvia
2013-06-13
A new wireless sensor network (WSN), called CrossVit and based on MEMSIC products, has been tested for two growing seasons in two vineyards in Italy. The aims are to evaluate the monitoring performance of the new WSN directly in the vineyard and to collect air temperature, air humidity, and solar radiation data to support vineyard management practices. The WSN consists of several levels: the Master/Gateway level coordinates the WSN and performs data aggregation; the Farm/Server level takes care of storing data on a server, data processing, and graphic rendering; and the Nodes level is based on a network of peripheral nodes, each consisting of an MDA300 sensor board and an Iris module equipped with thermistors for air temperature, photodiodes for global and diffuse solar radiation, and an HTM2500LF sensor for relative humidity. The communication levels are: WSN links between gateways and sensor nodes via ZigBee, and long-range GSM/GPRS links between the gateways and the server farm level. The system was able to monitor the agrometeorological parameters in the vineyard (solar radiation, air temperature, and air humidity), detecting the differences between the canopy treatments applied. The performance of CrossVit, in terms of monitoring and reliability, has been evaluated considering its handiness, cost-effectiveness, non-invasive dimensions, and low power consumption.
CrossVit: Enhancing Canopy Monitoring Management Practices in Viticulture
Matese, Alessandro; Vaccari, Francesco Primo; Tomasi, Diego; Di Gennaro, Salvatore Filippo; Primicerio, Jacopo; Sabatini, Francesco; Guidoni, Silvia
2013-01-01
A new wireless sensor network (WSN), called CrossVit and based on MEMSIC products, has been tested for two growing seasons in two vineyards in Italy. The aims are to evaluate the monitoring performance of the new WSN directly in the vineyard and to collect air temperature, air humidity, and solar radiation data to support vineyard management practices. The WSN consists of several levels: the Master/Gateway level coordinates the WSN and performs data aggregation; the Farm/Server level takes care of storing data on a server, data processing, and graphic rendering; and the Nodes level is based on a network of peripheral nodes, each consisting of an MDA300 sensor board and an Iris module equipped with thermistors for air temperature, photodiodes for global and diffuse solar radiation, and an HTM2500LF sensor for relative humidity. The communication levels are: WSN links between gateways and sensor nodes via ZigBee, and long-range GSM/GPRS links between the gateways and the server farm level. The system was able to monitor the agrometeorological parameters in the vineyard (solar radiation, air temperature, and air humidity), detecting the differences between the canopy treatments applied. The performance of CrossVit, in terms of monitoring and reliability, has been evaluated considering its handiness, cost-effectiveness, non-invasive dimensions, and low power consumption. PMID:23765273
Remote health monitoring system for detecting cardiac disorders.
Bansal, Ayush; Kumar, Sunil; Bajpai, Anurag; Tiwari, Vijay N; Nayak, Mithun; Venkatesan, Shankar; Narayanan, Rangavittal
2015-12-01
A remote health monitoring system with a clinical decision support system as a key component could quicken the response of medical specialists to critical health emergencies experienced by their patients. A monitoring system specifically designed for cardiac care, with electrocardiogram (ECG) signal analysis as the core diagnostic technique, could play a vital role in early detection of a wide range of cardiac ailments, from a simple arrhythmia to life-threatening conditions such as myocardial infarction. The system that the authors have developed consists of three major components: (a) a mobile gateway, deployed on the patient's mobile device, that receives 12-lead ECG signals from any ECG sensor; (b) a remote server component that hosts algorithms for accurate annotation and analysis of the ECG signal; and (c) the doctor's point-of-care device, which receives a diagnostic report from the server based on the analysis of the ECG signals. In the present study, the authors' focus has been on developing a system capable of detecting critical cardiac events well in advance using an advanced remote monitoring system. A system of this kind is expected to have applications ranging from tracking wellness/fitness to detection of symptoms leading to fatal cardiac events.
Web-based remote monitoring of infant incubators in the ICU.
Shin, D I; Huh, S J; Lee, T S; Kim, I Y
2003-09-01
A web-based real-time operating, management, and monitoring system for checking temperature and humidity within infant incubators over the hospital Intranet has been developed and installed in the infant Intensive Care Unit (ICU). We have created a pilot system in which each incubator has a temperature and humidity sensor and a measuring module connected to a web-server board via an RS485 port. The system transmits signals using standard web-based TCP/IP, so users can access it from any Internet-connected personal computer in the hospital. The system gathers the temperature and humidity data transmitted from the measuring modules via the RS485 port on the web-server board and creates a web document containing these data. The system manager can maintain centralized supervisory monitoring of all incubators from a work space within the infant ICU equipped with a personal computer. The system can be set to monitor unusual circumstances and to emit an alarm signal, expressed as a sound or a light, on the measuring module connected to the affected incubator. If the system is configured with a large number of incubators connected to a centralized supervisory monitoring station, it will improve convenience and assure meaningful improvement in response to incidents that require intervention.
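The "monitor unusual circumstances and emit an alarm" behavior above amounts to a per-reading range check. The sketch below shows that check in its simplest form; the limit values are illustrative placeholders, not clinical settings from the paper.

```python
def check_incubator(temp_c, humidity_pct,
                    temp_range=(34.0, 37.5), hum_range=(40.0, 70.0)):
    """Return a list of alarm strings for out-of-range readings.
    An empty list means the incubator reading is within limits."""
    alarms = []
    if not temp_range[0] <= temp_c <= temp_range[1]:
        alarms.append(f"temperature out of range: {temp_c} C")
    if not hum_range[0] <= humidity_pct <= hum_range[1]:
        alarms.append(f"humidity out of range: {humidity_pct} %")
    return alarms
```

In the described system, a nonempty result would trigger the sound or light on the measuring module attached to the affected incubator and flag it on the supervisory web page.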
A FPGA embedded web server for remote monitoring and control of smart sensors networks.
Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique
2013-12-27
This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose and configurable RISC processor embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server serving dynamic pages through the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network and to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and another interface to control the network. This protocol is widely used to connect smart sensors, actuators, and microsystems in embedded real-time systems in different application domains (e.g., industrial, automotive, domotic), although it can easily be replaced by any other protocol because of the inherent characteristics of the FPGA-based technology.
A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks
Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique
2014-01-01
This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose and configurable RISC processor embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server serving dynamic pages through the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network and to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and another interface to control the network. This protocol is widely used to connect smart sensors, actuators, and microsystems in embedded real-time systems in different application domains (e.g., industrial, automotive, domotic), although it can easily be replaced by any other protocol because of the inherent characteristics of the FPGA-based technology. PMID:24379047
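A CGI dynamic page of the kind Boa serves is just a program that reads the request's query string and prints an HTTP response. The sketch below (in Python, for illustration only; on the FPGA these scripts would target μCLinux) shows the shape of such a handler. The sensor table and parameter names are hypothetical stand-ins for the TTP/A network query on the real device.

```python
import os
from urllib.parse import parse_qs

SENSORS = {"temp0": 21.4, "hum0": 48.0}   # hypothetical cached readings

def render(query_string):
    """Build a CGI response (header block + HTML body) for one request."""
    params = parse_qs(query_string)
    name = params.get("sensor", ["temp0"])[0]   # ?sensor=... , default temp0
    value = SENSORS.get(name)
    body = (f"<html><body>{name} = {value}</body></html>"
            if value is not None
            else "<html><body>unknown sensor</body></html>")
    # CGI: headers, blank line, then the document
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    # The web server passes the query string via the CGI environment.
    print(render(os.environ.get("QUERY_STRING", "")))
```

The same pattern, with the dictionary lookup replaced by a TTP/A transaction, is what lets a browser act as the "Internet interface" to the sensor network.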
Feasibility of interactive biking exercise system for telemanagement in elderly.
Finkelstein, Joseph; Jeong, In Cheol
2013-01-01
Inexpensive cycling equipment is widely available for home exercise; however, its use is hampered by the lack of tools supporting real-time monitoring of cycling exercise in the elderly and coordination with a clinical care team. To address these barriers, we developed a low-cost mobile system aimed at facilitating safe and effective home-based cycling exercise. The system used a miniature wireless 3-axis accelerometer that transmitted the cycling acceleration data to a tablet PC integrated with a multi-component disease management system. An exercise dashboard was presented to the patient, allowing real-time graphical visualization of exercise progress. The system was programmed to alert patients when exercise intensity exceeded the levels recommended by the patient's care providers and to exchange information with a central server. The feasibility of the system was assessed by testing the accuracy of cycling speed monitoring and the reliability of alerts generated by the system. Our results demonstrated high validity of the system for both upper and lower extremity exercise monitoring, as well as reliable data transmission between the home unit and the central server.
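One plausible way (not necessarily the authors' method) to turn a raw accelerometer trace into a cycling-speed estimate is to count threshold crossings per pedal revolution and compare the resulting cadence against a prescribed limit. The sketch below illustrates that idea; the threshold, sampling rate, and limit are assumed values.

```python
def cadence_rpm(accel, sample_hz, threshold=1.5):
    """Estimate pedaling cadence by counting upward crossings of `threshold`
    (one per revolution, under the stated assumption) in the acceleration trace."""
    crossings = sum(
        1 for a, b in zip(accel, accel[1:]) if a < threshold <= b
    )
    minutes = len(accel) / sample_hz / 60.0
    return crossings / minutes if minutes > 0 else 0.0

def too_intense(rpm, limit_rpm=90.0):
    """Flag exercise intensity above the care provider's prescribed cadence."""
    return rpm > limit_rpm
```

A dashboard loop would evaluate `too_intense(cadence_rpm(...))` on each window of samples and raise the patient-facing alert when it returns True.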
2000-04-01
be an extension of Utah’s nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server...Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling
[Implementation of ECG Monitoring System Based on Internet of Things].
Lu, Liangliang; Chen, Minya
2015-11-01
In order to expand the capabilities of hospitals' traditional ECG devices and enhance medical staff's work efficiency, an ECG monitoring system based on the Internet of Things is introduced. The system can monitor ECG signals in real time and analyze the data using ECG sensors, PDAs, and Web servers, built with embedded C, Android, .NET, wireless networking, and other technologies. Experiments showed that the system has high reliability and stability and brings convenience to medical staff.
Ahmed, Mobyen Uddin; Björkman, Mats; Lindén, Maria
2015-01-01
Sensor data travel from the sensors to a remote server, are analyzed remotely in a distributed manner, and the health status of a user is presented in real time. This paper presents a generic system-level framework for a self-served health monitoring system through the Internet of Things (IoT) to facilitate efficient sensor data management.
Design of Deformation Monitoring System for Volcano Mitigation
NASA Astrophysics Data System (ADS)
Islamy, M. R. F.; Salam, R. A.; Munir, M. M.; Irsyam, M.; Khairurrijal
2016-08-01
Indonesia has many active volcanoes that are potentially disastrous. Good mitigation systems are needed to reduce casualties from potential disasters caused by volcanic eruptions. Therefore, a system to monitor volcano deformation was built. This system employs telemetry combining Radio Frequency (RF) communication via XBee and General Packet Radio Service (GPRS) communication via SIM900. There are two types of modules in this system: the coordinator, acting as a parent, and the nodes, acting as children. Each node is connected to the coordinator, forming a Wireless Sensor Network (WSN) with a star topology, and has an inclinometer-based sensor, a Global Positioning System (GPS) receiver, and an XBee module. The coordinator collects data from each node, one at a time, to prevent data collisions between nodes, saves the data to an SD card, and transmits the data to a web server via GPRS. The inclinometer was calibrated with a self-built calibrator and tested in a high-temperature environment to check its durability. The GPS was tested by displaying its position on the web server via the Google Maps Application Programming Interface (API v3). It was shown that the coordinator can receive data from every node and transmit it to the web server very well, and that the system works well in a high-temperature environment.
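The collision-avoidance scheme above (the coordinator querying nodes one at a time) can be sketched as follows; the class, method, and node names are hypothetical:

```python
# Illustrative sketch of a star-topology coordinator that polls each
# node in turn, so only one node transmits at a time, then logs the
# readings locally before forwarding them upstream.

class Coordinator:
    def __init__(self, nodes):
        self.nodes = nodes          # node_id -> callable returning a reading
        self.log = []               # stands in for the SD-card store

    def poll_round(self):
        """One polling round: query nodes one at a time, in order."""
        readings = {}
        for node_id, read in self.nodes.items():
            readings[node_id] = read()      # exclusive channel access
        self.log.append(readings)           # persist locally (SD card)
        return readings                     # would be sent on via GPRS

nodes = {"n1": lambda: 1.5, "n2": lambda: -0.3}
coord = Coordinator(nodes)
round1 = coord.poll_round()
```

Polling serializes channel access by construction, which is why the star topology avoids collisions without any extra arbitration.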
Computerized procedures system
Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.
2010-10-12
An online, data-driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data, and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users, and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges, and revisions are version controlled. The procedures run on a server that is platform-independent of the user workstations it interfaces with, and the user interface supports diverse procedural views.
Remote Control and Monitoring of VLBI Experiments by Smartphones
NASA Astrophysics Data System (ADS)
Ruztort, C. H.; Hase, H.; Zapata, O.; Pedreros, F.
2012-12-01
For the remote control and monitoring of VLBI operations, we developed a software optimized for smartphones. This is a new tool based on a client-server architecture with a Web interface optimized for smartphone screens and cellphone networks. The server uses variables of the Field System and its station specific parameters stored in the shared memory. The client running on the smartphone by a Web interface analyzes and visualizes the current status of the radio telescope, receiver, schedule, and recorder. In addition, it allows commands to be sent remotely to the Field System computer and displays the log entries. The user has full access to the entire operation process, which is important in emergency cases. The software also integrates a webcam interface.
NASA Astrophysics Data System (ADS)
Kamiya, Toshiyuki; Numano, Nagisa; Yagyu, Hiroyuki; Shimazu, Hideo
This paper describes a mobile phone-based data logging system for monitoring the growing status of Satsuma mandarin, a type of citrus fruit, in the field. The system can provide various feedback to farm producers based on the collected data, such as visualization of related data as a timeline chart or advice on the necessity of watering crops. It is important to collect information on environmental conditions, plant status, and product quality, to analyze it, and to provide it as feedback to farm producers to aid their operations. This paper proposes a novel framework of field monitoring and feedback for open-field farming. For field monitoring, it combines a low-cost plant status monitoring method using a simple apparatus with a Field Server for environmental condition monitoring. Each field worker has a simple apparatus to measure fruit firmness and records the data with a mobile phone. The logged data are stored in the system's database on the server. The system analyzes the stored data for each field and can show the necessity of watering to the user in five levels. The system can also show various stored data in timeline chart form. The user and coach can compare or analyze these data via a web interface. A test site was built at a Satsuma mandarin field at Kumano in Mie Prefecture, Japan using the framework, and farm workers in the area used and evaluated the system.
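A minimal sketch of the five-level watering advice could look like the following. The dryness score, the equal-width thresholds, and the level names are invented for illustration; the paper does not publish its model:

```python
# Hedged sketch: map a soil-dryness score in [0, 1] to five advice
# levels, mirroring the "necessity of watering in five levels" above.

LEVELS = ["no watering", "low", "moderate", "high", "urgent"]

def watering_level(dryness):
    """dryness in [0, 1]; returns one of five advice levels."""
    dryness = min(max(dryness, 0.0), 1.0)   # clamp out-of-range input
    index = min(int(dryness * 5), 4)        # 5 equal-width bands
    return LEVELS[index]
```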
The Battle Command Sustainment Support System: Initial Analysis Report
2016-09-01
diagnostic monitoring, asynchronous commits, and others. The other components of the NEDP include a main forwarding gateway/web server and one or more...NATIONAL ENTERPRISE DATA PORTAL ANALYSIS The NEDP is comprised of an Oracle Database 10g referred to as the National Data Server and several other...data forwarding gateways (DFG). Together, with the Oracle Database 10g, these components provide a heterogeneous data source that aligns various data
Automatic Response to Intrusion
2002-10-01
Computing Corporation Sidewinder Firewall [18] SRI EMERALD Basic Security Module (BSM) and EMERALD File Transfer Protocol (FTP) Monitors...the same event TCP Wrappers [24] Internet Security Systems RealSecure [31] SRI EMERALD IDIP monitor NAI Labs Generic Software Wrappers Prototype...included EMERALD , NetRadar, NAI Labs UNIX wrappers, ARGuE, MPOG, NetRadar, CyberCop Server, Gauntlet, RealSecure, and the Cyber Command System
Rotor Smoothing and Vibration Monitoring Results for the US Army VMEP
2009-06-01
individual component CI detection thresholds, and development of models for diagnostics, prognostics, and anomaly detection. Figure 16 VMEP Server...and prognostics are of current interest. Development of those systems requires large amounts of data (collection, monitoring, manipulation) to capture...development of automated systems and for continuous updating of algorithms to improve detection, classification, and prognostic performance. A test
Exploring No-SQL alternatives for ALMA monitoring system
NASA Astrophysics Data System (ADS)
Shen, Tzu-Chiang; Soto, Ruben; Merino, Patricio; Peña, Leonel; Bartsch, Marcelo; Aguirre, Alvaro; Ibsen, Jorge
2014-07-01
The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located on the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. This paper describes the experience gained after several years working with the monitoring system, which has a strong requirement of collecting and storing up to 150K variables at a maximum sampling rate of 20.8 kHz. The original design was built on top of a cluster of relational database servers and network-attached storage with a Fibre Channel interface. As the number of monitoring points increased with the number of antennas included in the array, the current monitoring system proved able to handle the increased data rate in the collection and storage area (only one month of data), but the data query interface showed serious performance degradation. A solution based on a NoSQL platform was explored as an alternative to the current long-term storage system. Among several alternatives, MongoDB was selected. In the data flow, intermediate cache servers based on Redis were introduced to allow faster streaming of the most recently acquired data to web-based charts and applications for online data analysis.
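The two-tier flow above (Redis caching recent samples, MongoDB holding the long-term archive) can be sketched with plain Python containers standing in for both stores; all names are illustrative:

```python
# Sketch of a write path that feeds both a bounded recent-data cache
# (stand-in for Redis) and an unbounded long-term store (stand-in for
# MongoDB), so live charts never query the archive.

from collections import deque

class MonitoringStore:
    def __init__(self, cache_size=1000):
        self.cache = deque(maxlen=cache_size)  # most recent samples only
        self.archive = []                      # long-term document store

    def insert(self, point, value, timestamp):
        doc = {"point": point, "value": value, "t": timestamp}
        self.cache.append(doc)    # fast path for live streaming
        self.archive.append(doc)  # durable path for historical queries

    def recent(self, n):
        """Serve live dashboards from the cache, not the archive."""
        return list(self.cache)[-n:]
```

The bounded deque mirrors the design choice in the abstract: the cache holds only the freshest data, keeping the hot query path independent of archive size.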
Real Time Monitor of Grid job executions
NASA Astrophysics Data System (ADS)
Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.
2010-04-01
In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. It is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer, and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data includes not only the job state (i.e., Scheduled, Waiting, Running, or Done) along with timing information but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on the web server. This decouples the RTM server database from the clients, removing the bottleneck caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java-based client, with live job data either overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
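The enquirer's decoupling step (reading job rows from the database and publishing a static XML snapshot for clients) can be sketched as below; the field names are illustrative, not the RTM schema:

```python
# Minimal sketch: turn job rows into an XML snapshot that a web server
# can publish, so clients never touch the database directly.

import xml.etree.ElementTree as ET

def jobs_to_xml(jobs):
    """jobs: list of dicts with id/state/vo -> XML string."""
    root = ET.Element("jobs")
    for job in jobs:
        el = ET.SubElement(root, "job", id=job["id"])
        ET.SubElement(el, "state").text = job["state"]
        ET.SubElement(el, "vo").text = job["vo"]
    return ET.tostring(root, encoding="unicode")

snapshot = jobs_to_xml([
    {"id": "42", "state": "Running", "vo": "cms"},
])
```

Regenerating this snapshot once a minute, as the abstract describes, bounds database load at one reader regardless of how many clients poll the web server.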
NASA Astrophysics Data System (ADS)
Pulok, Md Kamrul Hasan
Intelligent and effective monitoring of power system stability in control centers is one of the key issues in smart grid technology to prevent unwanted power system blackouts. Voltage stability analysis is one of the most important requirements for control center operation in the smart grid era. With the advent of Phasor Measurement Unit (PMU), or synchrophasor, technology, real-time monitoring of power system voltage stability is now a reality. This work utilizes real-time PMU data to derive a voltage stability index to monitor voltage-stability-related contingencies in power systems. The developed tool uses PMU data to calculate a voltage stability index whose numerical value indicates the relative closeness to instability. The IEEE 39-bus New England power system was modeled and run on a Real-Time Digital Simulator that streams PMU data over the Internet using the IEEE C37.118 protocol. A phasor data concentrator (PDC) was set up that receives the streaming PMU data and stores it in a Microsoft SQL database server. The developed voltage stability monitoring (VSM) tool then retrieves the phasor measurements from the SQL server, performs real-time state estimation of the whole network, calculates the voltage stability index, ranks the most vulnerable transmission lines, and shows all the results in a graphical user interface. All these actions are done in near real time. Using this tool, control centers can easily monitor the system's condition and take precautionary actions if needed.
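The abstract does not reproduce the index itself, so the sketch below substitutes a deliberately simple stand-in (per-bus deviation from nominal voltage) just to illustrate the index-then-rank flow; all names and values are invented:

```python
# Illustrative only: a toy stability indicator and vulnerability
# ranking. The actual VSM tool uses a proper voltage stability index
# derived from state estimation, not this deviation measure.

def deviation_index(v_pu, v_nominal=1.0):
    """0 = nominal; larger = further from nominal (toy indicator)."""
    return abs(v_nominal - v_pu) / v_nominal

def rank_buses(voltages):
    """voltages: {bus: V in per-unit} -> buses, most stressed first."""
    return sorted(voltages, key=lambda b: deviation_index(voltages[b]),
                  reverse=True)

ranking = rank_buses({"bus1": 1.01, "bus2": 0.93, "bus3": 0.98})
```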
Intelligent Production Supervision Platform Based on RFID Smart Electricity Meters
NASA Astrophysics Data System (ADS)
Chen, Xiangqun; Huang, Rui; Shen, Liman; Chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng
2018-03-01
This project develops an RFID smart electricity meter production supervision and project management system. The system is designed to meet the schedule, quality, and cost information management requirements of RFID smart meter production supervision, to provide more comprehensive, timely, and accurate quantitative information for the management decisions of supervision engineers and project managers, and to provide technical documentation for the product manufacturing stage. The development of the system is discussed from the perspectives of requirements analysis, design, implementation, and testing. Reflecting the main business applications and management mode at this stage, the paper focuses on the functions for monitoring the progress, quality, and cost information of RFID smart meters. It introduces the design scheme of the system: an overall client/server architecture, a generic graphical client user interface for supervision project management and interactive display of transaction information, and a server implementing the main programs. The system is programmed in C# on the .NET runtime; the client and server platforms use the Windows operating system, and the database server software uses Oracle. The overall platform supports mainstream information standards and has good scalability.
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-01-01
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values match given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree), for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of the moving objects in order to improve system performance in terms of wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over existing methods. PMID:26393613
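For reference, the result set that a CM range monitoring query maintains can be defined by a naive filter like the one below; the GQR-tree exists to avoid recomputing this from scratch at every object update. Object layout and names are illustrative:

```python
# Naive definition of a CM range query: keep objects whose attributes
# contain all query values AND whose position lies in the query window.

def cm_range_query(objects, attr_values, x_range, y_range):
    """objects: {oid: (attrs, (x, y))} -> set of matching oids."""
    (x1, x2), (y1, y2) = x_range, y_range
    return {
        oid for oid, (attrs, (x, y)) in objects.items()
        if attr_values.issubset(attrs) and x1 <= x <= x2 and y1 <= y <= y2
    }

objs = {
    "o1": ({"taxi", "vacant"}, (3, 4)),
    "o2": ({"taxi"}, (3, 4)),          # attribute mismatch
    "o3": ({"taxi", "vacant"}, (9, 9)),  # outside spatial range
}
hits = cm_range_query(objs, {"taxi", "vacant"}, (0, 5), (0, 5))
```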
Shen, Qinghua; Liang, Xiaohui; Shen, Xuemin; Lin, Xiaodong; Luo, Henry Y
2014-03-01
In this paper, we propose an e-health monitoring system with minimum service delay and privacy preservation by exploiting geo-distributed clouds. In the system, the resource allocation scheme enables the distributed cloud servers to cooperatively assign servers to the requesting users under a load-balance condition, minimizing the service delay for users. In addition, a traffic-shaping algorithm is proposed that converts user health data traffic into non-health data traffic, largely reducing the capability of traffic-analysis attacks. Through numerical analysis, we show the efficiency of the proposed traffic-shaping algorithm in terms of service delay and privacy preservation. Furthermore, through simulations, we demonstrate that the proposed resource allocation scheme significantly reduces the service delay compared to two alternatives that jointly use the shortest queue and a distributed control law.
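The load-balancing idea can be sketched with a join-the-shortest-queue rule; this is a simplified stand-in for the paper's cooperative scheme and deliberately ignores geo-distribution and privacy aspects:

```python
# Toy assignment policy: route each request to the server with the
# fewest pending requests, then account for the new arrival.

def assign(request, queues):
    """queues: {server: pending count}; returns the chosen server."""
    server = min(queues, key=queues.get)   # join the shortest queue
    queues[server] += 1                    # the request now occupies it
    return server

queues = {"dc-east": 4, "dc-west": 2, "dc-north": 7}
chosen = assign("req-1", queues)
```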
A study of smart card for radiation exposure history of patient.
Rehani, Madan M; Kushi, Joseph F
2013-04-01
The purpose of this article is to undertake a study on developing a prototype of a smart card that, when swiped in a system with access to the radiation exposure monitoring server, will locate the patient's radiation exposure history from that institution or set of associated institutions to which it has database access. Like the ATM or credit card, the card acts as a secure unique "token" rather than having cash, credit, or dose data on the card. The system provides the requested radiation history report, which then can be printed or sent by e-mail to the patient. The prototype system is capable of extending outreach to wherever the radiation exposure monitoring server extends, at county, state, or national levels. It is anticipated that the prototype shall pave the way for quick availability of patient exposure history for use in clinical practice for strengthening radiation protection of patients.
Hayashi, Takashi; Iwai, Mitsuhiro; Takahashi, Katsuhiko; Takeda, Satoshi; Tateishi, Toshiki; Kaneko, Rumi; Ogasawara, Yoko; Yonezawa, Kazuya; Hanada, Akiko
2011-01-01
Using a 3D-image-creation server and IP-VPN network services, we began delivering 3D images to a remote institution. A display trial of the primary images, a rotation trial of a 3D image, and a reproducibility trial were conducted in order to examine the practicality of using the system on a real network between Hakodate and Sapporo (a communication distance of about 150 km). In these trials, basic data (time and received data volume) were measured for every variation of QF (quality factor) and monitor resolution. Analyzing the results of the system using the 3D image delivery server of our hospital under varying QF settings and monitor resolutions, we concluded that this system is practical for remote radiogram interpretation work, even if the access point of the region has a line speed of 6 Mbps.
DICOM-compliant PACS with CD-based image archival
NASA Astrophysics Data System (ADS)
Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.
1998-07-01
This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers, allowing viewing of locally stored image data directly in a WWW browser without the need for data transfer to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house.
The system has enabled filmless operation in CT, MR, and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.
Evaluation of an electrocardiogram on QR code.
Nakayama, Masaharu; Shimokawa, Hiroaki
2013-01-01
An electrocardiogram (ECG) is an indispensable tool to diagnose cardiac diseases, such as ischemic heart disease, myocarditis, arrhythmia, and cardiomyopathy. Since ECG patterns vary depending on patient status, the ECG is also used to monitor patients during treatment, and comparison with previous ECGs is important for accurate diagnosis. However, such comparison requires a connection to the ECG data server in a hospital, and data connectivity among hospitals is limited. To improve the portability and availability of ECG data regardless of server connection, we here introduce the conversion of ECG data into 2D barcodes as text data and the decoding of the QR code to draw the ECG with the Google Chart API. Fourteen cardiologists and six general physicians evaluated the system using an iPhone and iPad. Overall, they were satisfied with the system's usability and with the accuracy of the decoded ECG compared to the original ECG. This new coding system may be useful for utilizing ECG data irrespective of server connections.
Report #12-P-0836, September 20, 2012. EPA's OEI is not managing key system management documentation, system administration functions, the granting and monitoring of privileged accounts, and the application of security controls associated with its DSS.
Kim, Dong Seong; Park, Jong Sou
2014-01-01
It is important to assess the availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependencies between different subcomponents (e.g., between physical host failure and the VMM) in a virtualized server system. We also show numerical analyses of steady-state availability, downtime in hours per year, transaction loss, and sensitivity. This model provides a new finding on how to increase system availability by combining software rejuvenation at the VM and VMM levels in a wise manner. PMID:25165732
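The full SRN model is beyond the scope of an abstract, but the headline quantities it reports (steady-state availability and downtime hours per year) follow from mean time to failure and mean time to repair in the simplest two-state case:

```python
# Two-state up/down sketch, not the paper's SRN model: availability is
# the long-run fraction of time in the "up" state.

def availability(mttf_hours, mttr_hours):
    """Steady-state availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

def downtime_hours_per_year(mttf_hours, mttr_hours):
    """Expected unavailable hours over a 8760-hour year."""
    return (1.0 - availability(mttf_hours, mttr_hours)) * 8760.0

a = availability(1000.0, 1.0)
```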
A web-based approach for electrocardiogram monitoring in the home.
Magrabi, F; Lovell, N H; Celler, B G
1999-05-01
A Web-based electrocardiogram (ECG) monitoring service, in which a longitudinal clinical record is used for the management of patients, is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components, including the Web server, the Web page, the specialised client-side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve and submit clinical data to and from the server. An intelligent software agent on the server is activated whenever new ECG data are sent from the home. The agent compares historical data with the newly acquired data. Using this method, an optimum patient-care strategy can be evaluated, and a summarised report, along with reminders and suggestions for action, is sent to the doctor and patient by email.
Popova, A Yu; Kuzkin, B P; Demina, Yu V; Dubyansky, V M; Kulichenko, A N; Maletskaya, O V; Shayakhmetov, O Kh; Semenko, O V; Nazarenko, Yu V; Agapitov, D S; Mezentsev, V M; Kharchenko, T V; Efremenko, D V; Oroby, V G; Klindukhov, V P; Grechanaya, T V; Nikolaevich, P N; Tesheva, S Ch; Rafeenko, G K
2015-01-01
To improve sanitary and epidemiological surveillance at the Olympic Games, a GIS system for monitoring objects and situations in the Sochi region was developed. The system is based on the ArcGIS software package, server version 10.2, with web objects, an Apache web server, and software developed in Java. During execution, the following tasks were solved: stratification of the Olympic region by individual and aggregate epidemiological risk of OCI of various etiologies, ranking of epidemiologically important facilities by sanitary and hygienic conditions, and monitoring of infectious diseases (in real time, according to preliminary diagnoses). GIS monitoring has shown its effectiveness: information received from various sources was focused on one portal and was available in real time to all the specialists involved in ensuring epidemiological well-being during the Olympic Games in Sochi.
NASA Astrophysics Data System (ADS)
Adamczewski-Musch, Joern; Linev, Sergey
2015-12-01
The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the embeddable Civetweb HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented in HTML/JavaScript, based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor the objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.
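THttpServer exposes each registered object under a URL path, with the representation selected by the requested file name (for example, a JSON view of an object). The helper below only assembles such URLs on the client side; the host, port, and item path are illustrative assumptions, not values from the paper:

```python
# Hypothetical client-side helper: build the data URL for one object
# served by an HTTP object server, given its item path and a format
# name such as "root.json".

def object_url(host, port, item_path, fmt="root.json"):
    """Assemble http://host:port/<item_path>/<fmt>."""
    return "http://{}:{}/{}/{}".format(host, port, item_path.strip("/"), fmt)

url = object_url("localhost", 8080, "Objects/subfolder/hist", "root.json")
```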
Development of Geomagnetic Monitoring System Using a Magnetometer for the Field
NASA Astrophysics Data System (ADS)
Lee, Young-Cheol; Kim, Sung-Wook; Choi, Eun-Kyeong; Kim, In-Soo
2014-05-01
Three institutes, KMA (Korea Meteorological Administration), KSWC (Korean Space Weather Center) of NRRA (National Radio Research Agency), and KIGAM (Korea Institute of Geoscience and Mineral Resources), are now operating magnetic observatories. Those observatories observe the total intensity and three components of the geomagnetic field. This paper presents a magnetic monitoring system, now under development, that uses a magnetometer designed for field surveys. In monitoring magnetic variations in areas such as active faults or volcanic regions, more reliable results can be obtained when an array of several magnetometers is used rather than a single magnetometer. In order to establish and operate a magnetometer array, factors such as expense and the convenience of establishing and operating the array should be taken into account. This study has produced a magnetic monitoring system complete with a field-survey magnetometer of our own design. The system is composed of two parts: a field part and a data part. The field part consists of a magnetometer, an external memory module, a power supply, and a set of data transmission equipment. The data part is a data server which stores the data transmitted from the field part, analyzes the data, and provides a web service. This study developed an external memory module for the ENVI-MAG (Scintrex Ltd.) using an embedded Cortex-M3 board, which can be programmed and to which other functional devices (SD memory cards, GPS antennas for time synchronization, Ethernet cards, and so forth) can be attached. The board thus developed can store up to 8 GB of magnetic measurements, synchronize with GPS time, and transmit the measurements to the data server, which is now under development. A monitoring system of our own design was installed on Jeju Island, taking measurements for Korea.
Other parts, including a data transfer module, a server, and a power supply using solar power, will continue to be developed in the days to come. Acknowledgments: This work was funded by the Korea Meteorological Administration Research and Development Program under Grant CATER 2006-5074.
Rawstorn, Jonathan C; Gant, Nicholas; Warren, Ian; Doughty, Robert Neil; Lever, Nigel; Poppe, Katrina K; Maddison, Ralph
2015-03-20
Remote telemonitoring holds great potential to augment management of patients with coronary heart disease (CHD) and atrial fibrillation (AF) by enabling regular physiological monitoring during physical activity. Remote physiological monitoring may improve home and community exercise-based cardiac rehabilitation (exCR) programs and could improve assessment of the impact and management of pharmacological interventions for heart rate control in individuals with AF. Our aim was to evaluate the measurement validity and data transmission reliability of a remote telemonitoring system comprising a wireless multi-parameter physiological sensor, custom mobile app, and middleware platform, among individuals in sinus rhythm and AF. Participants in sinus rhythm and with AF undertook simulated daily activities and low-, moderate-, and/or high-intensity exercise. Remote monitoring system heart rate and respiratory rate were compared to reference measures (12-lead ECG and indirect calorimeter). Wireless data transmission loss was calculated between the sensor, mobile app, and remote Internet server. Median heart rate (-0.30 to 1.10 b·min⁻¹) and respiratory rate (-1.25 to 0.39 br·min⁻¹) measurement biases were small, yet statistically significant (all P≤.003) due to the large number of observations. Measurement reliability was generally excellent (rho=.87-.97, all P<.001; intraclass correlation coefficient [ICC]=.94-.98, all P<.001; coefficient of variation [CV]=2.24-7.94%), although respiratory rate measurement reliability was poor among AF participants (rho=.43, P<.001; ICC=.55, P<.001; CV=16.61%). Data loss was minimal (<5%) when all system components were active; however, instability of the network hosting the remote data capture server resulted in data loss at the remote Internet server during some trials. System validity was sufficient for remote monitoring of heart and respiratory rates across a range of exercise intensities.
Remote exercise monitoring has potential to augment current exCR and heart rate control management approaches by enabling the provision of individually tailored care to individuals outside traditional clinical environments. ©Jonathan C Rawstorn, Nicholas Gant, Ian Warren, Robert Neil Doughty, Nigel Lever, Katrina K Poppe, Ralph Maddison. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 20.03.2015.
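Two of the agreement measures quoted above can be computed as follows; this is a minimal sketch (the study also reports medians, Spearman's rho, and ICCs) with invented sample values:

```python
# Device-vs-reference agreement: mean bias and coefficient of
# variation (sample standard deviation as a percentage of the mean).

def mean_bias(device, reference):
    """Average signed difference between paired measurements."""
    return sum(d - r for d, r in zip(device, reference)) / len(device)

def coefficient_of_variation(values):
    """CV in percent, using the sample (n-1) standard deviation."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / (len(values) - 1)
    return 100.0 * var ** 0.5 / mean

bias = mean_bias([61, 72, 80], [60, 70, 81])
```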
The future of remote ECG monitoring systems.
Guo, Shu-Li; Han, Li-Na; Liu, Hong-Wei; Si, Quan-Jin; Kong, De-Feng; Guo, Fu-Su
2016-09-01
Remote ECG monitoring systems are becoming commonplace medical devices for remote heart monitoring. In recent years, remote ECG monitoring systems have been applied to the monitoring of various kinds of heart disease, and the quality of the transmission and reception of ECG signals during the remote process has kept advancing. However, challenges remain. This report focuses on the three components of the remote ECG monitoring system: the patient (the end user), the doctor workstation, and the remote server, reviewing and evaluating the imminent challenges in wearable systems, packet loss in remote transmission, portable ECG monitoring systems, patient ECG data collection systems, and ECG signal transmission, including real-time processing of the ST segment, R wave, RR interval, QRS wave, etc. This paper tries to clarify the future development strategies of remote ECG monitoring, which can help guide its research and development.
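The real-time QRS/RR processing mentioned above can be illustrated with a naive detector; production systems use far more robust algorithms (e.g., Pan-Tompkins), so treat this purely as a sketch with invented sample data:

```python
# Naive R-peak detection by thresholded local maxima, followed by
# RR-interval computation in milliseconds.

def r_peaks(signal, threshold):
    """Indices of local maxima above threshold (candidate R waves)."""
    return [
        i for i in range(1, len(signal) - 1)
        if signal[i] > threshold
        and signal[i] >= signal[i - 1]
        and signal[i] > signal[i + 1]
    ]

def rr_intervals_ms(peaks, sample_rate_hz):
    """Milliseconds between consecutive detected R peaks."""
    return [(b - a) * 1000.0 / sample_rate_hz
            for a, b in zip(peaks, peaks[1:])]

sig = [0, 0, 5, 0, 0, 0, 6, 0, 0]   # toy trace sampled at 4 Hz
peaks = r_peaks(sig, threshold=1)
```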
Self-Organizing Peer-To-Peer Middleware for Healthcare Monitoring in Real-Time
Kim, Hyun Ho; Jo, Hyeong Gon
2017-01-01
As the number of elderly persons with chronic illnesses increases, a new public infrastructure for their care is becoming increasingly necessary. In particular, technologies that can monitor bio-signals in real time have been receiving significant attention. Currently, most healthcare monitoring services are implemented by wireless carriers through centralized servers. These services are vulnerable to data concentration because all data are sent to a remote server. To solve these problems, we propose self-organizing P2P middleware for healthcare monitoring that enables real-time streaming of multiple bio-signals without any central server by directly connecting the caregiver and the care recipient. To verify the performance of the proposed middleware, we evaluated the monitoring service matching time based on a monitoring request. We also confirmed that it is possible to provide an effective monitoring service by evaluating peer-to-peer connectivity and average jitter. PMID:29149045
Self-Organizing Peer-To-Peer Middleware for Healthcare Monitoring in Real-Time.
Kim, Hyun Ho; Jo, Hyeong Gon; Kang, Soon Ju
2017-11-17
As the number of elderly persons with chronic illnesses increases, a new public infrastructure for their care is becoming increasingly necessary. In particular, technologies that can monitor bio-signals in real time have been receiving significant attention. Currently, most healthcare monitoring services are implemented by wireless carriers through centralized servers. These services are vulnerable to data concentration because all data are sent to a remote server. To solve these problems, we propose self-organizing P2P middleware for healthcare monitoring that enables real-time streaming of multiple bio-signals without any central server by directly connecting the caregiver and the care recipient. To verify the performance of the proposed middleware, we evaluated the monitoring service matching time based on a monitoring request. We also confirmed that it is possible to provide an effective monitoring service by evaluating peer-to-peer connectivity and average jitter.
Research on cloud-based remote measurement and analysis system
NASA Astrophysics Data System (ADS)
Gao, Zhiqiang; He, Lingsong; Su, Wei; Wang, Can; Zhang, Changfan
2015-02-01
The promising potential of cloud computing and its convergence with technologies such as cloud storage, cloud push, and mobile computing allows for the creation and delivery of new types of cloud services. Building on the idea of cloud computing, this paper presents a cloud-based remote measurement and analysis system. The system consists of three main parts: a signal acquisition client, a web server deployed as a cloud service, and a remote client. The system is a website developed using ASP.NET and Flex RIA technology, which resolves the trade-off between the two monitoring modes, B/S (browser/server) and C/S (client/server). The platform, deployed on the cloud server, supplies condition monitoring and data analysis services to customers over the Internet. The signal acquisition device is responsible for data collection (sensor data, audio, video, etc.) and regularly pushes the monitoring data to the cloud storage database. Data acquisition equipment in this system needs only data collection and networking capability, as found in a smartphone or smart sensor. The system's scale adjusts dynamically according to the number of applications and users, so resources are not wasted. As a representative case study, we developed a prototype system based on the Ali cloud service using a rotor test rig as the research object. Experimental results demonstrate that the proposed system architecture is feasible.
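The acquisition-client role described above (buffer readings locally, push them to cloud storage at a regular interval) can be sketched as follows. The class, payload layout, and interval are our own assumptions for illustration, not the paper's actual API:

```python
# Minimal sketch of a signal-acquisition client that buffers sensor readings
# and periodically serializes them for upload to a cloud storage database.
import json

class AcquisitionClient:
    def __init__(self, device_id, push_interval=60.0):
        self.device_id = device_id
        self.push_interval = push_interval  # seconds between uploads
        self.buffer = []
        self.last_push = 0.0

    def sample(self, sensor_value, timestamp):
        """Store one reading locally until the next push."""
        self.buffer.append({"t": timestamp, "value": sensor_value})

    def due(self, now):
        """True when a push to the cloud store is due."""
        return now - self.last_push >= self.push_interval

    def build_payload(self):
        """Serialize buffered readings; in production this body would be
        POSTed to the cloud database, then the buffer is cleared."""
        payload = json.dumps({"device": self.device_id,
                              "readings": self.buffer})
        self.buffer = []
        return payload

client = AcquisitionClient("rotor-rig-01")
client.sample(0.42, 1.0)
client.sample(0.44, 2.0)
print(client.build_payload())
```

The same skeleton fits a smartphone or smart sensor, since only collection and networking capability is required.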
Implementation of remote monitoring and managing switches
NASA Astrophysics Data System (ADS)
Leng, Junmin; Fu, Guo
2010-12-01
In order to strengthen the safety performance of the network and provide greater convenience and efficiency for operators and managers, a system for remote monitoring and managing of switches has been designed and implemented using advanced network technology and existing network resources. A fast Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and supports a number of protocols. The Motion-JPEG image compression algorithm is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and switches, and the control and administration software is designed accordingly. The dynamic Java Server Pages (JSP) web development platform is utilized in the system. An SQL (Structured Query Language) Server database is used to store and access image information, network messages, and user data. The reliability and security of the system are further strengthened by access control. The software is cross-platform, so multiple operating systems (UNIX, Linux, and Windows) are supported. Application of the system can greatly reduce manpower costs and allows problems to be found and solved quickly.
The Development of the Puerto Rico Lightning Detection Network for Meteorological Research
NASA Technical Reports Server (NTRS)
Legault, Marc D.; Miranda, Carmelo; Medin, J.; Ojeda, L. J.; Blakeslee, Richard J.
2011-01-01
A land-based Puerto Rico Lightning Detection Network (PR-LDN) dedicated to academic research on meteorological phenomena has been developed. Five Boltek StormTracker PCI receivers with LTS-2 GPS timestamp cards and lightning detectors were integrated into Pentium III PC workstations running the CentOS Linux operating system. The Boltek detector Linux driver was compiled under CentOS, modified, and thoroughly tested. These PC workstations with integrated lightning detectors were installed at five of the University of Puerto Rico (UPR) campuses distributed around the island of PR. The PC workstations are left on permanently in order to monitor lightning activity at all times. Each is networked to its campus network backbone, permitting quasi-instantaneous data transfer to a central server at the UPR-Bayamón campus. Information generated by each lightning detector is managed by a C program we developed called the LDN-client. The LDN-client maintains an open connection to the central server running the LDN-server program, where data are sent in real time for analysis and archival. The LDN-client also manages the storing of data on the PC workstation's hard disk. The LDN-server software (also an in-house effort) analyses the data from each client and performs event triangulations. Time-of-arrival (TOA) and related hybrid algorithms, as well as lightning-type and event-discriminating routines, are also implemented in the LDN-server software. We have also developed software to visually monitor, in real time, lightning events from all clients and the triangulated events. We are currently monitoring and studying the spatial, temporal, and type distribution of lightning strikes associated with electrical storms and tropical cyclones in the vicinity of Puerto Rico.
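Time-of-arrival (TOA) triangulation, as performed by the LDN-server, can be illustrated with a toy grid search that matches observed arrival-time differences against predictions. The station coordinates and the exhaustive search are illustrative simplifications, not the network's actual hybrid algorithms:

```python
import math

C = 299792.458  # propagation speed (speed of light), km/s

def locate(stations, arrivals, grid):
    """Find the grid point whose predicted TOA differences best match the
    observed ones. stations: [(x, y)] in km; arrivals: times in s."""
    best, best_err = None, float("inf")
    for gx, gy in grid:
        # predicted propagation time from candidate point to each station
        t = [math.hypot(gx - sx, gy - sy) / C for sx, sy in stations]
        # compare differences relative to the first station, which removes
        # the unknown emission time of the lightning stroke
        err = sum(((t[i] - t[0]) - (arrivals[i] - arrivals[0])) ** 2
                  for i in range(1, len(stations)))
        if err < best_err:
            best, best_err = (gx, gy), err
    return best

stations = [(0, 0), (100, 0), (0, 100), (100, 100)]  # four campuses (km)
true_pt = (30, 60)
arrivals = [math.hypot(true_pt[0] - sx, true_pt[1] - sy) / C
            for sx, sy in stations]
grid = [(x, y) for x in range(0, 101, 10) for y in range(0, 101, 10)]
print(locate(stations, arrivals, grid))
```

Real TOA solvers invert the hyperbolic equations directly rather than searching a grid, but the residual being minimized is the same.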
YODA++: A proposal for a semi-automatic space mission control
NASA Astrophysics Data System (ADS)
Casolino, M.; de Pascale, M. P.; Nagni, M.; Picozza, P.
YODA++ is a proposal for a semi-automated data handling and analysis system for the PAMELA space experiment. The core routines have been developed to process a stream of raw data downlinked from the Resurs DK1 satellite (housing PAMELA) to the ground station in Moscow. Raw data consist of scientific data complemented by housekeeping information. Housekeeping information will be analyzed within a short time of download (1 h) in order to monitor the status of the experiment and to plan mission acquisition. A prototype for data visualization will run on an Apache Tomcat web application server, providing an off-line analysis tool usable from a browser and part of the code for system maintenance. Data retrieval development is in the production phase, while a GUI for human-friendly monitoring is in a preliminary phase, as is a JavaServer Pages/JavaServer Faces (JSP/JSF) web application facility. On a longer timescale (1-3 h from download), scientific data are analyzed. The data storage core will be a mix of CERN's ROOT file structure and MySQL as a relational database. YODA++ is currently being used in the integration and testing on ground of PAMELA data.
Database architectures for Space Telescope Science Institute
NASA Astrophysics Data System (ADS)
Lubow, Stephen
1993-08-01
At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).
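The three-process pattern described here (application client, STDB/NET server, DBMS server) can be sketched minimally. In this hypothetical sketch, sqlite3 stands in for the vendor DBMS, and the class and method names are ours, not the actual STDB/NET programming interface:

```python
# Intermediate-tier sketch: the application issues generic query requests,
# and this layer converts them into the specific requirements of the DBMS.
import sqlite3

class GenericDBServer:
    """Accepts generic requests and talks to the real DBMS on their behalf."""
    def __init__(self, dbms_connection):
        self.conn = dbms_connection

    def select(self, table, columns, where=None, params=()):
        # build vendor-specific SQL from the generic request
        sql = f"SELECT {', '.join(columns)} FROM {table}"
        if where:
            sql += f" WHERE {where}"
        # pass results back to the application client
        return self.conn.execute(sql, params).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE observations (target TEXT, exposure REAL)")
conn.execute("INSERT INTO observations VALUES ('M31', 120.0), ('M42', 45.0)")
server = GenericDBServer(conn)
print(server.select("observations", ["target"], "exposure > ?", (60.0,)))
```

Because the application only sees the generic interface, the vendor DBMS behind the intermediate tier can be swapped without touching client code, which is the point of the architecture.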
Active Cyber Defense: Enhancing National Cyber Defense
2011-12-01
The Information Warfare Monitor (IWM) discovered that GhostNet had infected 1,295 computers in 103 countries. As many as thirty percent of these... By monitoring the computers in Dharamsala and at various Tibetan missions, IWM was able to determine the IP addresses of the servers hosting Gh0st
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-09-18
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree), for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods.
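The two conditions of a CM range monitoring query map directly onto a simple predicate. A minimal sketch follows; the attribute names and the rectangle representation are assumptions for illustration, and the GQR-tree index itself is beyond this snippet:

```python
def matches_cm_range_query(obj, query_attrs, query_range):
    """True when obj (i) matches all non-spatial query values and
    (ii) lies inside the rectangular spatial range.
    query_range: (xmin, ymin, xmax, ymax); obj: dict with 'x' and 'y'."""
    attrs_ok = all(obj.get(k) == v for k, v in query_attrs.items())
    xmin, ymin, xmax, ymax = query_range
    inside = xmin <= obj["x"] <= xmax and ymin <= obj["y"] <= ymax
    return attrs_ok and inside

# e.g. "vacant taxis currently inside the downtown rectangle"
taxi = {"x": 3.0, "y": 4.0, "status": "vacant"}
print(matches_cm_range_query(taxi, {"status": "vacant"}, (0, 0, 10, 10)))
```

An index such as the GQR-tree exists to avoid evaluating this predicate against every object on the server for every location update.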
Fault-Tolerant Local-Area Network
NASA Technical Reports Server (NTRS)
Morales, Sergio; Friedman, Gary L.
1988-01-01
Local-area network (LAN) for computers prevents single-point failure from interrupting communication between nodes of network. Includes two complete cables, LAN 1 and LAN 2. Microprocessor-based slave switches link cables to such network-node devices as workstations, print servers, and file servers. Slave switches respond to commands from master switch, connecting nodes to two cable networks or disconnecting them so they are completely isolated. System monitor and control computer (SMC) acts as gateway, allowing nodes on either cable to communicate with each other and ensuring that LAN 1 and LAN 2 are fully used when functioning properly. Network monitors and controls itself, automatically routes traffic for efficient use of resources, and isolates and corrects its own faults, with potential dramatic reduction in time out of service.
A Wireless MEMS-Based Inclinometer Sensor Node for Structural Health Monitoring
Ha, Dae Woong; Park, Hyo Seon; Choi, Se Woon; Kim, Yousok
2013-01-01
This paper proposes a wireless inclinometer sensor node for structural health monitoring (SHM) that can be applied to civil engineering and building structures subjected to various loadings. The inclinometer used in this study employs a method for calculating the tilt based on the difference between the static acceleration and the acceleration due to gravity, using a micro-electro-mechanical system (MEMS)-based accelerometer. A wireless sensor node was developed through which tilt measurement data are wirelessly transmitted to a monitoring server. This node consists of a slave node that uses a short-distance wireless communication system (RF 2.4 GHz) and a master node that uses a long-distance telecommunication system (code division multiple access—CDMA). The communication distance limitation, which is recognized as an important issue in wireless monitoring systems, has been resolved via these two wireless communication components. The reliability of the proposed wireless inclinometer sensor node was verified experimentally by comparing the values measured by the inclinometer and subsequently transferred to the monitoring server via wired and wireless transfer methods to permit a performance evaluation of the wireless communication sensor nodes. The experimental results indicated that the two systems (wired and wireless transfer systems) yielded almost identical values at a tilt angle greater than 1°, and a uniform difference was observed at a tilt angle less than 0.42° (approximately 0.0032° corresponding to 0.76% of the tilt angle, 0.42°) regardless of the tilt size. This result was deemed to be within the allowable range of measurement error in SHM. Thus, the wireless transfer system proposed in this study was experimentally verified for practical application in a structural health monitoring system. PMID:24287533
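The tilt calculation described above compares the static acceleration along an axis with the acceleration due to gravity. A common single-axis form is tilt = asin(a/g); whether the sensor node uses exactly this relation is our assumption:

```python
import math

G = 9.80665  # standard gravity, m/s^2

def tilt_degrees(static_acceleration):
    """Tilt of one axis from horizontal, from its static acceleration
    component (m/s^2) as measured by a MEMS accelerometer."""
    # clamp against sensor noise pushing |a|/g slightly above 1
    ratio = max(-1.0, min(1.0, static_acceleration / G))
    return math.degrees(math.asin(ratio))

# a perfectly tilted axis at 1 degree reads g*sin(1 deg) statically:
print(round(tilt_degrees(G * math.sin(math.radians(1.0))), 6))
```

Near horizontal, asin is nearly linear, so small accelerometer biases translate almost directly into tilt error, consistent with the sub-degree discrepancies reported above.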
A system for monitoring the radiation effects of a proton linear accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skorkin, V. M., E-mail: skorkin@inr.ru; Belyanski, K. L.; Skorkin, A. V.
2016-12-15
The system for real-time monitoring of radioactivity of a high-current proton linear accelerator detects secondary neutron emission from proton beam losses in transport channels and measures the activity of radionuclides in gas and aerosol emissions and the radiation background in the environment affected by a linear accelerator. The data provided by gamma, beta, and neutron detectors are transferred over a computer network to the central server. The system allows one to monitor proton beam losses, the activity of gas and aerosol emissions, and the radiation emission level of a linear accelerator in operation.
A Case Study in Software Adaptation
2002-01-01
Giuseppe Valetto, Telecom Italia Lab, Turin, Italy. ...configuration of the service; monitoring of database connectivity from within the service; monitoring of crashes and shutdowns of IM servers; monitoring of... Replicas of the IM server all share a relational database and a common runtime state repository, which make up the backend tier, and allow replicas to
Report: Information Security Series: Security Practices Safe Drinking Water Information System
Report #2006-P-00021, March 30, 2006. We found that the Office of Water (OW) substantially complied with many of the information security controls reviewed and had implemented practices to ensure production servers are monitored.
PREDICT: Privacy and Security Enhancing Dynamic Information Monitoring
2015-08-03
consisting of global server-side probabilistic assignment by an untrusted server using cloaked locations, followed by feedback-loop guided local... [12]. These methods achieve high sensing coverage with low cost using cloaked locations [3]. In follow-on work, the issue of mobility is addressed. Task
A Modular IoT Platform for Real-Time Indoor Air Quality Monitoring.
Benammar, Mohieddine; Abdaoui, Abderrazak; Ahmad, Sabbir H M; Touati, Farid; Kadri, Abdullah
2018-02-14
The impact of air quality on health and on life comfort is well established. In many societies, vulnerable elderly and young populations spend most of their time indoors. Therefore, indoor air quality monitoring (IAQM) is of great importance to human health. Engineers and researchers are increasingly focusing their efforts on the design of real-time IAQM systems using wireless sensor networks. This paper presents an end-to-end IAQM system enabling measurement of CO₂, CO, SO₂, NO₂, O₃, Cl₂, ambient temperature, and relative humidity. In IAQM systems, remote users usually use a local gateway to connect wireless sensor nodes in a given monitoring site to the external world for ubiquitous access of data. In this work, the role of the gateway in processing collected air quality data and its reliable dissemination to end-users through a web-server is emphasized. A mechanism for the backup and the restoration of the collected data in the case of Internet outage is presented. The system is adapted to an open-source Internet-of-Things (IoT) web-server platform, called Emoncms, for live monitoring and long-term storage of the collected IAQM data. A modular IAQM architecture is adopted, which results in a smart scalable system that allows seamless integration of various sensing technologies, wireless sensor networks (WSNs) and smart mobile standards. The paper gives full hardware and software details of the proposed solution. Sample IAQM results collected in various locations are also presented to demonstrate the abilities of the system.
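The backup-and-restore mechanism for Internet outages can be sketched as a store-and-forward buffer at the gateway. This is one plausible design in the spirit of the paper, not its exact implementation, and the reading format is invented:

```python
class OutageBuffer:
    """Gateway-side buffer: readings go upstream when online and are backed
    up locally during an outage, then restored in order on reconnection."""
    def __init__(self, send):
        self.send = send          # callable that posts one reading upstream
        self.backlog = []

    def submit(self, reading, online):
        if online:
            # flush any backlog first so readings arrive in order
            while self.backlog:
                self.send(self.backlog.pop(0))
            self.send(reading)
        else:
            self.backlog.append(reading)

sent = []
buf = OutageBuffer(sent.append)
buf.submit({"co2": 450}, online=True)
buf.submit({"co2": 900}, online=False)   # outage: backed up locally
buf.submit({"co2": 910}, online=True)    # restored: backlog flushed first
print(sent)
```

In a real deployment the backlog would live on persistent storage and the `send` callable would POST to the Emoncms web server.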
IMIS desktop & smartphone software solutions for monitoring spacecrafts' payload from anywhere
NASA Astrophysics Data System (ADS)
Baroukh, J.; Queyrut, O.; Airaud, J.
In past years, the demand for satellite remote operations has increased, driven on the one hand by the will to reduce operations costs (on-call operators outside business hours), and on the other hand by the development of cooperative space missions resulting in a worldwide distribution of engineers and science team members. Only a few off-the-shelf solutions exist to fulfill the need for remote payload monitoring, and they mainly use proprietary devices. The recent advent of mobile technologies (laptops, smartphones, and tablets), as well as the worldwide deployment of broadband networks (3G, Wi-Fi hotspots), has opened a technical window that brings new options. As part of the Mars Science Laboratory (MSL) mission, the Centre National d'Etudes Spatiales (CNES, the French space agency) has developed a new software solution for monitoring spacecraft payloads. The Instrument Monitoring Interactive Software (IMIS) offers state-of-the-art operational features for payload monitoring and can be accessed remotely. It was conceived as a generic tool that can be used for heterogeneous payloads and missions. IMIS was designed as a classical client/server architecture. The server is hosted at CNES and acts as a data provider, while two different kinds of clients are available depending on the level of mobility required. The first is a rich client application, built on the Eclipse framework, which can be installed on the usual operating systems and communicates with the server through the Internet. The second is a smartphone application for any Android platform, connected to the server through the mobile broadband network or a Wi-Fi connection. This second client is mainly devoted to on-call operations and thus contains only a subset of the IMIS functionalities.
This paper describes the operational context, including security aspects, that led to IMIS development, presents the selected software architecture, and details the various features of both clients: the desktop and the smartphone application.
Report #2006-P-00019, March 28, 2006. OSWER implemented practices to ensure production servers were being monitored for known vulnerabilities and that personnel with significant security responsibilities completed the Agency's recommended security training.
A Wireless Physiological Signal Monitoring System with Integrated Bluetooth and WiFi Technologies.
Yu, Sung-Nien; Cheng, Jen-Chieh
2005-01-01
This paper proposes a wireless patient monitoring system that integrates Bluetooth and WiFi wireless technologies. A wireless portable multi-parameter device was designed to acquire physiological signals and transmit them to a local server via Bluetooth wireless technology. Four kinds of monitor units were designed to communicate via WiFi wireless technology: a local monitor unit, a control center, mobile devices (personal digital assistants; PDAs), and a web page. The use of various monitor units is intended to meet the different medical requirements of different medical personnel. The system was demonstrated to promote mobility and flexibility for both patients and medical personnel, further improving the quality of health care.
Real-time ground motions monitoring system developed by Raspberry Pi 3
NASA Astrophysics Data System (ADS)
Chen, P.; Jang, J. P.; Chang, H.; Lin, C. R.; Lin, P. P.; Wang, C. C.
2016-12-01
Ground-motion seismic stations are usually installed in special geological areas, such as areas with a high possibility of landslides, active volcanoes, or areas near faults, to monitor possible geohazards in real time. Based on these demands, three main issues need to be considered: size, low power consumption, and real-time data transmission. The Raspberry Pi 3 has suitable characteristics to meet these requirements, so we developed a real-time ground-motion monitoring system based on it. The Raspberry Pi is a credit-card-sized single-board computer running a programmable Linux-based operating system. Its volume is only 85.6 by 53.98 by 17 mm, with USB and Ethernet interfaces. The power supply requires only 5 V at 2.1 A, so it is easy to power the system with solar panels and to transmit real-time data through Ethernet or over the mobile network through a USB adapter. Because the Raspberry Pi is still a small computer, services, software, and GUIs can be developed very flexibly, such as a basic web server, FTP server, SSH connection, and real-time visualization interface tools. Until now, we have developed ten instruments with on-line, real-time data transmission and have installed them on Taiping Mountain in Taiwan to monitor geohazards such as mudslides.
NASA Astrophysics Data System (ADS)
Sorooshian, S.; Hsu, K. L.; Gao, X.; Imam, B.; Nguyen, P.; Braithwaite, D.; Logan, W. S.; Mishra, A.
2015-12-01
The G-WADI Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) GeoServer has been successfully developed by the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine in collaboration with UNESCO's International Hydrological Programme (IHP) and a number of its international centers. The system employs state-of-the-art technologies in remote sensing and artificial intelligence to estimate precipitation globally from satellite imagery in real time and at high spatiotemporal resolution (4 km, hourly). It offers graphical tools and a data service to help users in emergency planning and management for natural disasters related to hydrological processes. The G-WADI PERSIANN-CCS GeoServer has been upgraded with new user-friendly functionalities. The precipitation data generated by the GeoServer are disseminated to the user community through support provided by ICIWaRM (the International Center for Integrated Water Resources Management), UNESCO, and UC Irvine. Recently, a number of new applications for mobile devices have been developed by our students. The RainMapper app is available on the App Store and Google Play for viewing real-time PERSIANN-CCS observations. A global crowdsourced rainfall reporting system named iRain has also been developed to engage the public globally in providing qualitative information about real-time precipitation at their location, which will be useful in improving the quality of the PERSIANN-CCS data. A number of recent examples of the application and use of the G-WADI PERSIANN-CCS GeoServer information will also be presented.
Efficient monitoring of CRAB jobs at CMS
NASA Astrophysics Data System (ADS)
Silva, J. M. D.; Balcas, J.; Belforte, S.; Ciangottini, D.; Mascheroni, M.; Rupeika, E. A.; Ivanov, T. T.; Hernandez, J. M.; Vaandering, E.
2017-10-01
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
Efficient Monitoring of CRAB Jobs at CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silva, J. M.D.; Balcas, J.; Belforte, S.
CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.
An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image
NASA Astrophysics Data System (ADS)
Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan
2017-10-01
It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the main methods used to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moment features, and a modified entropy feature are extracted from the segmented regions. These characteristics are used to analyze and classify thermal faults and then make efficient energy-saving thermal management decisions such as job migration. For the larger feature space, principal component analysis is employed to reduce the feature dimensions and guarantee high processing speed without losing fault feature information. Finally, different feature vectors are taken as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management; it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
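One of the features above is an entropy measure of a segmented thermal-image region. Plain Shannon entropy over the intensity histogram is sketched below; the paper's 'modified entropy' variant is not specified here, so this is the standard form only:

```python
import math
from collections import Counter

def region_entropy(pixels):
    """Shannon entropy (bits) of a flat list of 0-255 intensity values."""
    n = len(pixels)
    counts = Counter(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform_region = [128] * 64              # flat thermal region
hot_spot = [30] * 32 + [220] * 32        # two equally likely levels
print(region_entropy(uniform_region), region_entropy(hot_spot))
```

A uniformly warm region scores zero entropy, while a region split between cool background and a hot spot scores higher, which is why entropy helps separate normal from faulty thermal patterns.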
Design and Application of a Field Sensing System for Ground Anchors in Slopes
Choi, Se Woon; Lee, Jihoon; Kim, Jong Moon; Park, Hyo Seon
2013-01-01
In a ground anchor system, cables or tendons connected to a bearing plate are used for the stabilization of slopes. The stability of a slope thus depends on maintaining the tension levels in the cables. So far, no research on a strain-based field sensing system for ground anchors has been reported. Therefore, in this study, a practical monitoring system for long-term sensing of tension levels in tendons of anchor-reinforced slopes is proposed. The system is composed of: (1) load cells based on vibrating wire strain gauges (VWSGs); (2) wireless sensor nodes, which receive and process the signals from the load cells and then transmit the results to a master node through local-area communication; (3) master nodes, which transmit the data sent from the sensor nodes to the server through mobile communication; and (4) a server located at the base station. The system was applied to field sensing of ground anchors in a 62 m-long and 26 m-high slope alongside a highway. Based on the long-term monitoring, the safety of the anchor-reinforced slope can be secured by timely re-tensioning of the tendons. PMID:23507820
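Vibrating-wire strain gauges report a resonant frequency from which strain, and hence tendon tension, is recovered. The generic relation strain = G(f² − f0²) is standard for VWSGs, but the gauge factor and the linear tension calibration below are illustrative assumptions, not the study's values:

```python
def vwsg_microstrain(freq_hz, zero_freq_hz, gauge_factor=3.304e-3):
    """Microstrain from the generic vibrating-wire relation G*(f^2 - f0^2).
    The gauge factor is illustrative; real gauges ship batch calibrations."""
    return gauge_factor * (freq_hz ** 2 - zero_freq_hz ** 2)

def tendon_tension_kn(microstrain, kn_per_microstrain=0.05):
    """Hypothetical linear tendon calibration from load-cell strain to kN."""
    return kn_per_microstrain * microstrain

f0, f = 2500.0, 2700.0  # zero-load and current resonant frequencies (Hz)
strain = vwsg_microstrain(f, f0)
print(round(strain, 1), round(tendon_tension_kn(strain), 1))
```

The sensor node would apply a conversion of this shape before forwarding tension values to the master node and server.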
Yang, Shu; Qiu, Yuyan; Shi, Bo
2016-09-01
This paper explores methods of building an Internet of Things for regional ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes the implementation principles of automatic identification of arrhythmia types. It also studies the system architecture and key techniques of the cloud computing platform, including server load balancing technology, reliable storage of massive numbers of small files, and the implementation of a quick search function.
Automated Cryocooler Monitor and Control System Software
NASA Technical Reports Server (NTRS)
Britchcliffe, Michael J.; Conroy, Bruce L.; Anderson, Paul E.; Wilson, Ahmad
2011-01-01
This software is used in an automated cryogenic control system developed to monitor and control the operation of small-scale cryocoolers. The system was designed to automate the cryogenically cooled low-noise amplifier system described in "Automated Cryocooler Monitor and Control System" (NPO-47246), NASA Tech Briefs, Vol. 35, No. 5 (May 2011), page 7a. The software contains algorithms necessary to convert non-linear output voltages from the cryogenic diode-type thermometers and vacuum pressure and helium pressure sensors, to temperature and pressure units. The control function algorithms use the monitor data to control the cooler power, vacuum solenoid, vacuum pump, and electrical warm-up heaters. The control algorithms are based on a rule-based system that activates the required device based on the operating mode. The external interface is Web-based. It acts as a Web server, providing pages for monitor, control, and configuration. No client software from the external user is required.
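The control algorithms above are rule-based: monitor readings trigger the device actions required by the current operating mode. A toy rule table in that spirit follows; the modes, thresholds, and action names are invented for illustration and are not the NPO-47246 rules:

```python
def control_step(mode, temp_k, vacuum_torr):
    """Return the device actions a rule-based controller would take for the
    given operating mode and monitor readings."""
    actions = []
    if mode == "cooldown":
        if vacuum_torr > 1e-3:
            actions.append("run_vacuum_pump")   # pump down before cooling
        else:
            actions.append("enable_cooler_power")
    elif mode == "warmup":
        actions.append("disable_cooler_power")
        if temp_k < 290.0:
            actions.append("enable_warmup_heaters")
    return actions

print(control_step("cooldown", temp_k=80.0, vacuum_torr=5e-3))
```

Keeping the rules as data-driven mode/threshold pairs is what lets such a controller be reconfigured through a web interface without code changes.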
Centralized Fabric Management Using Puppet, Git, and GLPI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William
2012-12-01
Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).
NASA Astrophysics Data System (ADS)
Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.
2005-12-01
Long-term performance monitoring has been identified by DOE, DOD, and EPA as one of the most challenging and costly elements of contaminated-site remedial efforts. Such monitoring should provide timely and actionable information relevant to a multitude of stakeholder needs, and this information should be obtained in a manner that is auditable, cost effective, and transparent. Over the last several years, INL staff have designed and implemented a web-accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition from diverse sensors (geophysical, geochemical, and hydrological) with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using web services. This system has been implemented and is operational for several sites, including the Ruby Gulch Waste Rock Repository (a capped mine waste rock dump on the Gilt Edge Mine Superfund Site), the INL Vadose Zone Research Park, and an alternative cover landfill. Implementations for other vadose zone sites are currently in progress. These systems allow for autonomous performance monitoring through automated data analysis and report generation. This performance monitoring has allowed users to obtain insights into system dynamics, regulatory compliance, and residence times of water. Our system uses modular components for data selection and graphing, and WSDL-compliant web services for external functions such as statistical analyses and model invocations. Thus, implementing this system for novel sites and extending its functionality (e.g., adding novel models) is relatively straightforward.
As system access requires only a standard web browser and the functionality is intuitive, stakeholders with diverse degrees of technical insight can use this system with little or no training.
Ueki, Shigeharu; Kayaba, Hiroyuki; Tomita, Noriko; Kobayashi, Noriko; Takahashi, Tomoe; Obara, Toshikage; Takeda, Masahide; Moritoki, Yuki; Itoga, Masamichi; Ito, Wataru; Ohsaga, Atsushi; Kondoh, Katsuyuki; Chihara, Junichi
2011-04-01
The active involvement of hospital laboratories in surveillance is crucial to the success of nosocomial infection control. The recent dramatic increase of antimicrobial-resistant organisms and their spread into the community suggest that the infection control strategies of independent medical institutions are insufficient. To share clinical data and surveillance in our local medical region, we developed a microbiology data warehouse for networking hospital laboratories in Akita prefecture. This system, named Akita-ReNICS, is an easy-to-use information management system designed to compare, track, and report the occurrence of antimicrobial-resistant organisms. Participating laboratories routinely transfer their coded and formatted microbiology data from their health care systems' clinical computer applications over the internet to the ReNICS server located at Akita University Hospital. We established the system to automate the statistical processes, so that participants can access the server to monitor graphical data in the manner they prefer, using their own computer's browser. Furthermore, our system also provides a document server, a microbiology and antimicrobial database, and space for long-term storage of microbiological samples. Akita-ReNICS could be a next-generation network for quality improvement of infection control.
Implementation of a WAP-based telemedicine system for patient monitoring.
Hung, Kevin; Zhang, Yuan-Ting
2003-06-01
Many parties have already demonstrated telemedicine applications that use cellular phones and the Internet. A current trend in telecommunication is the convergence of wireless communication and computer network technologies, and the emergence of wireless application protocol (WAP) devices is an example. Since WAP will also be a common feature found in future mobile communication devices, it is worthwhile to investigate its use in telemedicine. This paper describes the implementation and experiences with a WAP-based telemedicine system for patient-monitoring that has been developed in our laboratory. It utilizes WAP devices as mobile access terminals for general inquiry and patient-monitoring services. Authorized users can browse the patients' general data, monitored blood pressure (BP), and electrocardiogram (ECG) on WAP devices in store-and-forward mode. The applications, written in wireless markup language (WML), WMLScript, and Perl, resided in a content server. A MySQL relational database system was set up to store the BP readings, ECG data, patient records, clinic and hospital information, and doctors' appointments with patients. A wireless ECG subsystem was built for recording ambulatory ECG in an indoor environment and for storing ECG data into the database. For testing, a WAP phone compliant with WAP 1.1 was used at GSM 1800 MHz by circuit-switched data (CSD) to connect to the content server through a WAP gateway, which was provided by a mobile phone service provider in Hong Kong. Data were successfully retrieved from the database and displayed on the WAP phone. The system shows how WAP can be feasible in remote patient-monitoring and patient data retrieval.
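The store-and-forward pattern described above (readings written to a relational database, then rendered on request) can be sketched as follows; the table and column names are assumptions, and sqlite3 stands in for the MySQL system used in the paper.

```python
import sqlite3

def make_db():
    """In-memory stand-in for the content server's MySQL database."""
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE bp_readings (
        patient_id TEXT, taken_at TEXT,
        systolic INTEGER, diastolic INTEGER)""")
    return db

def store_bp(db, patient_id, taken_at, systolic, diastolic):
    """Store-and-forward step: a reading arrives and is persisted."""
    db.execute("INSERT INTO bp_readings VALUES (?, ?, ?, ?)",
               (patient_id, taken_at, systolic, diastolic))

def latest_bp(db, patient_id):
    """What an inquiry page handler would render for a WAP client."""
    return db.execute(
        "SELECT taken_at, systolic, diastolic FROM bp_readings "
        "WHERE patient_id = ? ORDER BY taken_at DESC LIMIT 1",
        (patient_id,)).fetchone()
```

In the paper this layer sits behind Perl/WMLScript pages, so the phone never touches the database directly; only rendered WML reaches the handset.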
The ICT monitoring system of the ASTRI SST-2M prototype proposed for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Gianotti, F.; Bruno, P.; Tacchini, A.; Conforti, V.; Fioretti, V.; Tanci, C.; Grillo, A.; Leto, G.; Malaguti, G.; Trifoglio, M.
2016-08-01
In the framework of the international Cherenkov Telescope Array (CTA) observatory, the Italian National Institute for Astrophysics (INAF) has developed a dual mirror, small sized, telescope prototype (ASTRI SST-2M), installed in Italy at the INAF observing station located at Serra La Nave, Mt. Etna. The ASTRI SST-2M prototype is the basis of the ASTRI telescopes that will form the mini-array proposed to be installed at the CTA southern site during its preproduction phase. This contribution presents the solutions implemented to realize the monitoring system for the Information and Communication Technology (ICT) infrastructure of the ASTRI SST-2M prototype. The ASTRI ICT monitoring system has been implemented by integrating traditional tools used in computer centers, with specific custom tools which interface via Open Platform Communication Unified Architecture (OPC UA) to the Alma Common Software (ACS) that is used to operate the ASTRI SST-2M prototype. The traditional monitoring tools are based on Simple Network Management Protocol (SNMP) and commercial solutions and features embedded in the devices themselves. They generate alerts by email and SMS. The specific custom tools convert the SNMP protocol into the OPC UA protocol and implement an OPC UA server. The server interacts with an OPC UA client implemented in an ACS component that, through the ACS Notification Channel, sends monitor data and alerts to the central console of the ASTRI SST-2M prototype. The same approach has been proposed also for the monitoring of the CTA onsite ICT infrastructures.
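The protocol-conversion idea above (SNMP monitor values republished through an OPC UA server) reduces to polling a set of OIDs and forwarding each value to a corresponding OPC UA node. A schematic sketch with made-up OIDs and node identifiers; the SNMP read and OPC UA write are injected as callables so the mapping logic stands alone, whereas the real tools wrap SNMP and OPC UA libraries.

```python
# Made-up OIDs and OPC UA node ids, for illustration only.
OID_TO_NODE = {
    "1.3.6.1.4.1.9999.1.1": "ns=2;s=ICT.Rack1.Temperature",
    "1.3.6.1.4.1.9999.1.2": "ns=2;s=ICT.Rack1.FanStatus",
}

def bridge_once(snmp_get, opcua_write):
    """One polling cycle of the SNMP-to-OPC-UA bridge.

    snmp_get(oid) and opcua_write(node_id, value) are injected so
    this can be exercised without a live SNMP agent or OPC UA server."""
    for oid, node_id in OID_TO_NODE.items():
        opcua_write(node_id, snmp_get(oid))
```

An ACS component acting as OPC UA client then subscribes to these nodes and relays monitor points and alerts onto the ACS Notification Channel.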
A web-based quantitative signal detection system on adverse drug reaction in China.
Li, Chanjuan; Xia, Jielai; Deng, Jianxiong; Chen, Wenge; Wang, Suzhen; Jiang, Jing; Chen, Guanquan
2009-07-01
To establish a web-based quantitative signal detection system for adverse drug reactions (ADRs) based on spontaneous reporting to the Guangdong province drug-monitoring database in China. Using Microsoft Visual Basic and Active Server Pages programming languages and SQL Server 2000, a web-based system with three software modules was programmed to perform data preparation and association detection, and to generate reports. Information component (IC), the internationally recognized measure of disproportionality for quantitative signal detection, was integrated into the system, and its capacity for signal detection was tested with ADR reports collected from 1 January 2002 to 30 June 2007 in Guangdong. A total of 2,496 associations including known signals were mined from the test database. Signals (e.g., cefradine-induced hematuria) were found early by using the IC analysis. In addition, 291 drug-ADR associations were alerted for the first time in the second quarter of 2007. The system can be used for the detection of significant associations from the Guangdong drug-monitoring database and could be an extremely useful adjunct to the expert assessment of very large numbers of spontaneously reported ADRs for the first time in China.
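The Information Component referred to above is commonly computed with a shrinkage term that stabilizes estimates from small counts; one widely used formulation is sketched below. Whether the Guangdong system used exactly this variant is not stated in the abstract.

```python
import math

def information_component(n_xy, n_x, n_y, n_total):
    """Shrinkage-stabilized Information Component.

    IC = log2((n_xy + 0.5) / (E_xy + 0.5)), where E_xy = n_x * n_y / N
    is the report count expected if drug x and reaction y occurred
    independently; the +0.5 terms damp spurious signals from small counts.
    A clearly positive IC flags a drug-ADR pair for expert review."""
    expected = n_x * n_y / n_total
    return math.log2((n_xy + 0.5) / (expected + 0.5))
```

For example, 10 reports of a pair expected only once by chance gives IC = log2(10.5/1.5) ≈ 2.8, a strong disproportionality signal.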
A wearable, mobile phone-based respiration monitoring system for sleep apnea syndrome detection.
Ishida, Ryoichi; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Hahn, Allen W; Caldwell, W Morton
2005-01-01
A new wearable respiration monitoring system has been developed for non-invasive detection of sleep apnea syndrome. The system, which is attached to a shirt, consists of a piezoelectric sensor, a low-power 8-bit single-chip microcontroller, EEPROM and a 2.4 GHz low-power transmitting mobile phone (PHS). The piezoelectric sensor, whose electrical polarization voltage is produced by body movements, is installed inside the shirt and closely contacts the patient's chest. The low-frequency components of body movements recorded by the sensor are mainly generated by respiration. The microcontroller sequentially stores the movement signal to the EEPROM for 5 minutes and detects, by time-frequency analysis, whether the patient has breathed during that time. When the patient is apneic for 10 seconds, the microcontroller sends the recorded respiration waveform during, and one minute before and after, the apnea directly to the hospital server computer via the mobile phone. The server computer then creates apnea files automatically for every patient. The system can be used at home and be self-applied by patients. Moreover, the system does not require any extra equipment such as a personal computer, PDA, or Internet connection.
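Detecting the absence of breathing from the low-frequency movement signal amounts to checking whether power in the respiration band falls below a threshold over a window. A simplified sketch of that idea; the sampling rate, band limits, and threshold are assumed values, and the paper's actual time-frequency analysis is not specified in the abstract.

```python
import math

FS = 10.0                  # assumed sampling rate of the movement signal, Hz
RESP_BAND_HZ = (0.1, 0.5)  # assumed adult respiration band, Hz

def respiration_band_power(window, fs=FS, band=RESP_BAND_HZ):
    """Mean-removed DFT power summed over the respiration band."""
    n = len(window)
    mean = sum(window) / n
    x = [v - mean for v in window]
    power = 0.0
    for k in range(1, n // 2 + 1):
        f = k * fs / n
        if band[0] <= f <= band[1]:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def is_apneic(window, threshold=1e-3):
    """True when no breathing movement is detected in this window."""
    return respiration_band_power(window) < threshold
```

On the actual 8-bit microcontroller this would be done incrementally (e.g. with a Goertzel filter) rather than with a full DFT per window.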
Lim, Y A; Kim, H H; Joung, U S; Kim, C Y; Shin, Y H; Lee, S W; Kim, H J
2010-04-01
We developed a web-based program for a national surveillance system to determine baseline data regarding the supply and demand of blood products at sentinel hospitals in South Korea. Sentinel hospitals were invited to participate in a 1-month pilot test. The data for receipts and exports of blood from each hospital information system were converted into comma-separated value files according to a specific conversion rule. The daily data from the sites could be transferred to the web-based program's server using a semi-automated submission procedure: pressing a key allowed the program to automatically compute the blood inventory level as well as other indices, including the minimal inventory ratio (MIR), ideal inventory ratio (IIR), supply index (SI) and utilisation index (UI). The national surveillance system was referred to as the Korean Blood Inventory Monitoring System (KBIMS), and its web-based program as the Blood Inventory Monitoring System (BMS). A total of 30 256 red blood cell (RBC) units were submitted as receipt data; however, only 83% of these units (25 093 RBC units) were submitted to the BMS server as export data. Median values were 2.67 for MIR, 1.08 for IIR, 1.00 for SI, 0.88 for UI and 5.33 for the ideal inventory day. The BMS program was easy to use and is expected to provide a useful tool for monitoring hospital inventory levels. This information will provide baseline data regarding the supply and demand of blood products in South Korea.
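The abstract reports the indices (MIR, IIR, SI, UI) without giving their formulas, so the functions below are illustrative stand-ins that only show the kind of inventory arithmetic such indices involve, not the KBIMS definitions.

```python
# Illustrative stand-ins only: the abstract does not define the KBIMS
# index formulas, so these show representative inventory arithmetic.

def average_daily_use(exported_units, days):
    """Mean number of units issued per day over the period."""
    return exported_units / days

def inventory_days(on_hand_units, exported_units, days):
    """Days the current stock would last at the observed usage rate."""
    return on_hand_units / average_daily_use(exported_units, days)

def supply_index(received_units, exported_units):
    """Ratio of units received to units used; > 1 means net inflow."""
    return received_units / exported_units
```

In a semi-automated submission flow, the server would recompute such ratios per blood group after each daily CSV upload.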
A Modular IoT Platform for Real-Time Indoor Air Quality Monitoring
Abdaoui, Abderrazak; Ahmad, Sabbir H.M.; Touati, Farid; Kadri, Abdullah
2018-01-01
The impact of air quality on health and on life comfort is well established. In many societies, vulnerable elderly and young populations spend most of their time indoors. Therefore, indoor air quality monitoring (IAQM) is of great importance to human health. Engineers and researchers are increasingly focusing their efforts on the design of real-time IAQM systems using wireless sensor networks. This paper presents an end-to-end IAQM system enabling measurement of CO2, CO, SO2, NO2, O3, Cl2, ambient temperature, and relative humidity. In IAQM systems, remote users usually use a local gateway to connect wireless sensor nodes in a given monitoring site to the external world for ubiquitous access of data. In this work, the role of the gateway in processing collected air quality data and its reliable dissemination to end-users through a web-server is emphasized. A mechanism for the backup and the restoration of the collected data in the case of Internet outage is presented. The system is adapted to an open-source Internet-of-Things (IoT) web-server platform, called Emoncms, for live monitoring and long-term storage of the collected IAQM data. A modular IAQM architecture is adopted, which results in a smart scalable system that allows seamless integration of various sensing technologies, wireless sensor networks (WSNs) and smart mobile standards. The paper gives full hardware and software details of the proposed solution. Sample IAQM results collected in various locations are also presented to demonstrate the abilities of the system. PMID:29443893
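The backup-and-restore mechanism for Internet outages can be sketched as a persistent outgoing queue: readings that fail to reach the server stay buffered on disk and are retried later. The `send` callable below would in practice be an HTTP POST to the Emoncms input API; here it is injected so the buffering logic stands alone, and the file format is an assumption.

```python
import json
import os

class BufferedUploader:
    """Gateway-side sketch of the backup/restore mechanism.

    Readings are queued, flushed to the web server, and anything
    unsent is persisted so it survives an outage or a restart.
    `send(reading)` must return True on success."""

    def __init__(self, send, backup_path):
        self.send = send
        self.backup_path = backup_path
        self.queue = []
        if os.path.exists(backup_path):  # restore after outage/restart
            with open(backup_path) as f:
                self.queue = json.load(f)

    def add(self, reading):
        self.queue.append(reading)

    def flush(self):
        # Keep only readings the server did not accept, then back
        # up the remainder to disk.
        self.queue = [r for r in self.queue if not self.send(r)]
        with open(self.backup_path, "w") as f:
            json.dump(self.queue, f)
```

Because the queue is replayed in order once connectivity returns, the long-term store on the server sees no gaps, only delayed arrivals.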
Research and design of smart grid monitoring control via terminal based on iOS system
NASA Astrophysics Data System (ADS)
Fu, Wei; Gong, Li; Chen, Heli; Pan, Guangji
2017-06-01
To address a series of problems with existing smart grid monitoring control terminals, such as high cost, poor portability, simplistic monitoring functions, poor software extensibility, low reliability of information transmission, a limited man-machine interface, and poor security, a smart grid remote monitoring system based on the iOS system has been designed. The system interacts with the smart grid server to acquire grid data over WiFi/3G/4G networks, and monitors the running status of each grid line as well as the operating conditions of power plant equipment. When an exception occurs in the power plant, incident information is sent to the user's iOS terminal in a timely manner, providing troubleshooting information that helps grid staff make the right decisions quickly and avoid further accidents. Field tests have shown that the system realizes integrated grid monitoring functions with low maintenance cost, a friendly interface, and high security and reliability, and that it possesses practical value.
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
The Work Coordination Engine (WCE) is a Java application integrated into the Service Management Database (SMDB), which coordinates the dispatching and monitoring of a work order system. WCE de-queues work orders from SMDB and orchestrates the dispatching of work to a registered set of software worker applications distributed over a set of local, or remote, heterogeneous computing systems. WCE monitors the execution of work orders once dispatched, and accepts the results of the work order by storing to the SMDB persistent store. The software leverages the use of a relational database, Java Messaging System (JMS), and Web Services using Simple Object Access Protocol (SOAP) technologies to implement an efficient work-order dispatching mechanism capable of coordinating the work of multiple computer servers on various platforms working concurrently on different, or similar, types of data or algorithmic processing. Existing (legacy) applications can be wrapped with a proxy object so that no changes to the application are needed to make them available for integration into the work order system as "workers." WCE automatically reschedules work orders that fail to be executed by one server to a different server if available. From initiation to completion, the system manages the execution state of work orders and workers via a well-defined set of events, states, and actions. It allows for configurable work-order execution timeouts by work-order type. This innovation eliminates a current processing bottleneck by providing a highly scalable, distributed work-order system used to quickly generate products needed by the Deep Space Network (DSN) to support space flight operations. WCE is driven by asynchronous messages delivered via JMS indicating the availability of new work or workers. It runs completely unattended in support of the lights-out operations concept in the DSN.
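The dispatch-and-reschedule behavior described above (a work order that fails on one server is retried on another) can be sketched as follows; this toy version uses synchronous callables in place of JMS messaging and registered worker applications.

```python
from collections import deque

class WorkCoordinator:
    """Toy version of WCE's dispatch-and-reschedule behavior.

    Workers are callables (order -> bool, True on success); a failed
    order is retried on the next worker, round-robin, up to
    max_attempts times before being reported as failed."""

    def __init__(self, workers):
        self.workers = deque(workers)

    def run(self, orders, max_attempts=3):
        done, failed = [], []
        for order in orders:
            for _ in range(max_attempts):
                worker = self.workers[0]
                self.workers.rotate(-1)  # spread load across workers
                if worker(order):
                    done.append(order)
                    break
            else:
                failed.append(order)
        return done, failed
```

The real WCE is event-driven (JMS messages announce new work and new workers) and persists each state transition to SMDB, but the retry-on-a-different-server policy is the same idea.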
Modeling And Simulation Of Multimedia Communication Networks
NASA Astrophysics Data System (ADS)
Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.
1989-05-01
In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.
Jo, Byung Wan; Jo, Jun Ho; Khan, Rana Muhammad Asad; Kim, Jung Hoon; Lee, Yun Sung
2018-05-23
Structural Health Monitoring is a topic of great interest for port structures due to the ageing of the structures and the limitations of current evaluation methods. This paper presents a cloud computing-based stability evaluation platform for a pier type port structure using Fiber Bragg Grating (FBG) sensors, in a system consisting of an FBG strain sensor, FBG displacement gauge, FBG angle meter, gateway, and cloud computing-based web server. The sensors were installed on core components of the structure and measurements were taken to evaluate it. The measurement values were transmitted to the web server via the gateway, where all data were analyzed and visualized to evaluate the structure based on the safety evaluation index (SEI). The stability evaluation platform enables efficient monitoring of pier type port structures, which can be carried out easily anytime and anywhere by converging new technologies such as cloud computing and FBG sensors. In addition, the platform has been successfully implemented at "Maryang Harbor", situated in Maryang-Meyon of Korea, to test its durability.
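FBG sensors measure strain through the shift of the reflected Bragg wavelength; the standard relation (neglecting temperature cross-sensitivity) is Δλ/λ₀ = (1 − p_e)·ε, with p_e ≈ 0.22 for silica fiber. A minimal conversion sketch follows; the SEI computation itself is not specified in the abstract, so only this physical step is shown.

```python
P_E = 0.22  # typical effective photo-elastic coefficient of silica fiber

def fbg_strain(lambda_nm, lambda0_nm, p_e=P_E):
    """Strain (dimensionless) from measured and reference Bragg
    wavelengths, using delta_lambda / lambda0 = (1 - p_e) * strain.
    Temperature cross-sensitivity is neglected in this sketch."""
    return (lambda_nm - lambda0_nm) / lambda0_nm / (1.0 - p_e)
```

In practice a reference (strain-isolated) grating or a temperature model is needed to separate thermal wavelength shifts from mechanical strain.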
The USGODAE Monterey Data Server
NASA Astrophysics Data System (ADS)
Sharfstein, P.; Dimitriou, D.; Hankin, S.
2005-12-01
The USGODAE Monterey Data Server (http://www.usgodae.org/) has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. The server is operated with oversight and funding from the Office of Naval Research (ONR). Support of the GODAE Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the GODAE server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data available from the Global Telecommunications System (GTS) and other FTP sites, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. It supports GODAE participants, as well as the broader oceanographic research community, and is becoming a significant node in the international GODAE program. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Presenting data with a consistent interface and ensuring its availability in the maximum number of standard formats is one of the primary challenges in hosting the many diverse formats and broad range of data used by the GODAE community. To this end, all USGODAE data sets are available in their original format via HTTP and FTP. 
In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and GODAE Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC) specifications. USGODAE serves FNMOC GRIB files from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) as OPeNDAP data sets using the GrADS Data Server (GDS). The server also provides several FNMOC custom IEEE binary format high resolution ocean analysis products and model outputs through GDS. These data sets are also made available through LAS. The Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo temperature/salinity profiling float data. The Argo collection includes all available Delayed-Mode (scientific quality controlled and corrected) data. USGODAE Argo data are served through OPeNDAP and LAS, which provide complete integration of the Argo data set into NVODS and the IOOS/DMAC. By providing researchers flexible, easy access to data through standard Internet and oceanographic interfaces, the USGODAE Monterey Data Server has become an invaluable resource for oceanographic research. Also, by promoting the community data serving projects, USGODAE strengthens the community and helps to advance the data serving standards.
Cloud-ECG for real time ECG monitoring and analysis.
Xia, Henian; Asif, Irfan; Zhao, Xiaopeng
2013-06-01
Recent advances in mobile technology and cloud computing have inspired numerous designs of cloud-based health care services and devices. Within the cloud system, medical data can be collected and transmitted automatically to medical professionals from anywhere and feedback can be returned to patients through the network. In this article, we developed a cloud-based system for clients with mobile devices or web browsers. Specifically, we aim to address the issues regarding the usefulness of the ECG data collected from patients themselves. Algorithms for ECG enhancement, ECG quality evaluation and ECG parameters extraction were implemented in the system. The system was demonstrated by a use case, in which ECG data was uploaded to the web server from a mobile phone at a certain frequency and analysis was performed in real time using the server. The system has been proven to be functional, accurate and efficient. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
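One of the ECG parameters extracted server-side is heart rate, which requires locating R peaks. The naive threshold-plus-refractory detector below is a stand-in for the unspecified algorithms in the paper, shown only to make the parameter-extraction step concrete.

```python
def detect_r_peaks(signal, fs, threshold=0.5, refractory_s=0.3):
    """Naive R-peak detector: a local maximum above `threshold`,
    at least `refractory_s` seconds after the previous peak."""
    peaks, last = [], None
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and (last is None or (i - last) / fs >= refractory_s)):
            peaks.append(i)
            last = i
    return peaks

def heart_rate_bpm(peaks, fs):
    """Mean heart rate from successive R-R intervals."""
    if len(peaks) < 2:
        return None
    rr = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(rr) / len(rr))
```

Self-recorded ECG is noisy, which is why the paper pairs extraction with enhancement and quality evaluation; a production detector (e.g. Pan-Tompkins style filtering) would replace the fixed threshold used here.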
SSRL Emergency Response Shore Tool
NASA Technical Reports Server (NTRS)
Mah, Robert W.; Papasin, Richard; McIntosh, Dawn M.; Denham, Douglas; Jorgensen, Charles; Betts, Bradley J.; Del Mundo, Rommel
2006-01-01
The SSRL Emergency Response Shore Tool (wherein SSRL signifies Smart Systems Research Laboratory) is a computer program within a system of communication and mobile-computing software and hardware being developed to increase the situational awareness of first responders at building collapses. This program is intended for use mainly in planning and constructing shores to stabilize partially collapsed structures. The program consists of client and server components, runs in the Windows operating system on commercial off-the-shelf portable computers, and can utilize such additional hardware as digital cameras and Global Positioning System devices. A first responder can enter directly, into a portable computer running this program, the dimensions of a required shore. The shore dimensions, plus an optional digital photograph of the shore site, can then be uploaded via a wireless network to a server. Once on the server, the shore report is time-stamped and made available on similarly equipped portable computers carried by other first responders, including shore wood cutters and an incident commander. The staff in a command center can use the shore reports and photographs to monitor progress and to consult with structural engineers to assess whether a building is in imminent danger of further collapse.
Real-time indoor monitoring system based on wireless sensor networks
NASA Astrophysics Data System (ADS)
Wu, Zhengzhong; Liu, Zilin; Huang, Xiaowei; Liu, Jun
2008-10-01
Wireless sensor networks (WSN) greatly extend our ability to monitor and control the physical world. They can collaborate and aggregate a huge amount of sensed data to provide continuous and spatially dense observation of the environment. The control and monitoring of indoor atmosphere conditions represents an important task with the aim of ensuring suitable working and living spaces for people. However, comprehensive air quality, which includes humidity, temperature, gas concentrations, etc., is not so easy to monitor and control. In this paper an indoor WSN monitoring system was developed. In the system, several sensors, such as a temperature sensor, humidity sensor, and gas sensor, were built into an RF transceiver board for monitoring indoor environment conditions. The indoor environmental monitoring parameters are transmitted wirelessly to a database server and can then be viewed by administrators through a PC or PDA connected to the local area network. The system, which was also field-tested and showed reliable and robust characteristics, is significant and valuable to people.
NASA Astrophysics Data System (ADS)
Neidhardt, Alexander; Kirschbauer, Katharina; Plötz, Christian; Schönberger, Matthias; Böer, Armin; Wettzell VLBI Team
2016-12-01
A first test implementation of an auxiliary data archive is being evaluated at the Geodetic Observatory Wettzell. The software builds on the Wettzell SysMon, extending its database and data sensors with the functionality of a professional monitoring environment named Zabbix. Extensions to the remote control server on the NASA Field System PC enable the inclusion of data from external antennas. The presentation demonstrates the implementation and discusses the current possibilities, in order to encourage other antennas to join the auxiliary archive.
NASA Astrophysics Data System (ADS)
Antony, Joby; Mathuria, D. S.; Chaudhary, Anup; Datta, T. S.; Maity, T.
2017-02-01
Cryogenic networks for linear accelerator operations demand a large number of cryogenic sensors, associated instruments, and other control instrumentation to measure, monitor, and control different cryogenic parameters remotely. Here we describe an alternative approach: six types of newly designed, integrated, intelligent cryogenic instruments called device servers, each combining the complete sensor-specific analog front-end instrumentation with a common digital back-end HTTP server, to form a crateless, PLC-free model of controls and data acquisition. These six sensor-specific instruments, viz. the LHe server, LN2 server, control output server, pressure server, vacuum server, and temperature server, are fully deployed over the LAN for the cryogenic operations of the IUAC linac (Inter University Accelerator Centre linear accelerator), New Delhi. This indigenous design offers salient features such as global connectivity, low cost due to the crateless model, easy signal processing due to the integrated design, less cabling, and device interconnectivity.
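The device-server idea (each instrument embedding its own HTTP back-end that publishes readings directly onto the LAN) can be sketched with the Python standard library; the payload fields and the sensor stub below are assumptions for illustration, not the IUAC design.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def read_sensor():
    """Stub for the sensor-specific analog front-end."""
    return {"sensor": "LN2_level", "value_percent": 72.4}

class DeviceHandler(BaseHTTPRequestHandler):
    """Serves the latest reading as JSON on any GET request."""
    def do_GET(self):
        body = json.dumps(read_sensor()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=8080):
    """Run the device server (blocking); call this on the instrument."""
    HTTPServer(("", port), DeviceHandler).serve_forever()
```

Because every instrument answers plain HTTP, any client on the network (browser, script, SCADA layer) can poll it without a PLC crate or proprietary fieldbus in between.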
NASA Astrophysics Data System (ADS)
Sumarudin, A.; Ghozali, A. L.; Hasyim, A.; Effendi, A.
2016-04-01
Indonesian agriculture has great potential for development, but much of it is not yet based on systematic data collection for soil or plants, even though soil data can be used to analyze soil fertility. We propose an e-agriculture system for soil monitoring. The system monitors soil status using wireless sensor motes that sense soil moisture, humidity, and temperature. Each mote is based on a microcontroller with an XBee radio, and sensed data are sent to a single gateway in a star topology. The gateway is a mini personal computer connected to an XBee module in coordinator mode, running an Apache server that stores data in MySQL; the web application is built with the Yii framework. The system has been implemented and can show soil status in real time. Results show that motes can communicate with each other over 40 meters, with a mote lifetime of 7 hours at a minimum voltage of 7 volts. The system helps farmers monitor soil and make soil-treatment decisions based on data, which can improve the quality of agricultural production and decrease management and farming costs.
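The gateway's storage step (parse a frame received from a mote over the XBee link, then insert it into the database) can be sketched as below; the frame format is an assumption, and sqlite3 stands in for the MySQL database behind the Yii web application.

```python
import sqlite3

def make_db():
    """In-memory stand-in for the gateway's MySQL store."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE soil (mote_id, moisture, humidity, temp)")
    return db

def parse_frame(frame):
    """Assumed comma-separated mote frame: id,moisture,humidity,temp."""
    mote_id, moisture, humidity, temp = frame.strip().split(",")
    return (int(mote_id), float(moisture), float(humidity), float(temp))

def store(db, frame):
    """Parse one frame and persist it for the web front-end to query."""
    db.execute("INSERT INTO soil VALUES (?, ?, ?, ?)", parse_frame(frame))
```

With a star topology there is exactly one such ingest loop, running on the gateway next to the XBee coordinator.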
Recommending personally interested contents by text mining, filtering, and interfaces
Xu, Songhua
2015-10-27
A personalized content recommendation system includes a client interface device configured to monitor a user's information data stream. A collaborative filter remote from the client interface device generates automated predictions about the interests of the user. A database server stores personal behavioral profiles and user's preferences based on a plurality of monitored past behaviors and an output of the collaborative user personal interest inference engine. A programmed personal content recommendation server filters items in an incoming information stream with the personal behavioral profile and identifies only those items of the incoming information stream that substantially matches the personal behavioral profile. The identified personally relevant content is then recommended to the user following some priority that may consider the similarity between the personal interest matches, the context of the user information consumption behaviors that may be shown by the user's content consumption mode.
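The profile-matching step (keep only incoming items that substantially match the behavioral profile) is often implemented as a similarity threshold over term-weight vectors. A minimal cosine-similarity sketch follows; the cutoff value and the sparse-dict representation are assumptions, not details from the patent.

```python
import math

def cosine(a, b):
    """Similarity between an item's term weights and the user profile,
    both given as sparse {term: weight} dicts."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(profile, items, threshold=0.3):
    """Keep only incoming items that substantially match the profile,
    ordered by similarity (the threshold is an assumed cutoff)."""
    scored = [(cosine(profile, terms), item_id) for item_id, terms in items]
    return [item_id for score, item_id in sorted(scored, reverse=True)
            if score >= threshold]
```

In the described system the profile vector itself would be maintained from monitored behaviors and the collaborative inference engine, rather than being static as here.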
NASA Astrophysics Data System (ADS)
Polkowski, Marcin; Grad, Marek
2016-04-01
The passive seismic experiment "13BB Star" has been operating since mid-2013 in northern Poland and consists of 13 broadband seismic stations. One of the elements of this experiment is a dedicated on-line data acquisition system comprising both client (station) side and server side modules, with a web-based interface that allows monitoring of network status and provides tools for preliminary data analysis. The station side is controlled by an ARM Linux board programmed to maintain a 3G/EDGE internet connection, receive data from the digitizer, and send data to the central server along with auxiliary parameters such as temperatures, voltages and electric current measurements. The station-side software is a set of easy-to-install PHP scripts. Data are transmitted securely over the SSH protocol to the central server, a dedicated Linux-based machine whose duty is receiving and processing all data from all stations, including the auxiliary parameters. The server-side software is written in PHP and Python. Additionally, it allows remote station configuration and provides a web-based interface for user-friendly interaction. All collected data can be displayed for each day and station. The interface also allows manual creation of event-oriented plots with different filtering abilities and provides numerous status and statistics views. Our solution is very flexible and easy to modify; in this presentation we share our solution and experience. The National Science Centre, Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
Zhu, Lingyun; Li, Lianjie; Meng, Chunyan
2014-12-01
There have been problems in existing multiple-physiological-parameter real-time monitoring systems, such as insufficient server capacity for physiological data storage and analysis (so that data consistency cannot be guaranteed), poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new solution for multiple physiological parameters based on clustered background data storage and processing on a cloud computing platform. Our studies introduced batch processing for longitudinal analysis of patients' historical data. The process included resource virtualization in the IaaS layer of the cloud platform, construction of a real-time computing platform in the PaaS layer, reception and analysis of the data stream in the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage and analysis of a large amount of physiological information. Simulation test results showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved problems of traditional remote medical services, including long turnaround time, poor real-time analysis performance, and lack of extensibility. It provides technical support for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode moving toward home health monitoring with multiple-physiological-parameter wireless monitoring.
Computation offloading for real-time health-monitoring devices.
Kalantarian, Haik; Sideris, Costas; Tuan Le; Hosseini, Anahita; Sarrafzadeh, Majid
2016-08-01
Among the major challenges in the development of real-time wearable health monitoring systems is to optimize battery life. One of the major techniques with which this objective can be achieved is computation offloading, in which portions of computation can be partitioned between the device and other resources such as a server or cloud. In this paper, we describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data between the wearable device and mobile application as a function of desired classification accuracy.
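The partitioning decision (trading classification accuracy against energy spent on-device versus on the radio) can be caricatured as choosing the cheapest plan that still meets a desired accuracy; the cost model, numbers, and option format below are invented for illustration, not the paper's scheme:

```python
def choose_partition(desired_accuracy, local_options, offload_option):
    """Pick the lowest-energy execution plan meeting the desired accuracy.

    local_options:  (accuracy, energy_mJ) pairs for on-device classifiers.
    offload_option: (accuracy, energy_mJ) for transmitting features to the
                    mobile application instead. All figures are hypothetical.
    """
    candidates = [("local", acc, energy)
                  for acc, energy in local_options
                  if acc >= desired_accuracy]
    off_acc, off_energy = offload_option
    if off_acc >= desired_accuracy:
        candidates.append(("offload", off_acc, off_energy))
    if not candidates:
        raise ValueError("no plan meets the desired accuracy")
    return min(candidates, key=lambda plan: plan[2])  # minimize energy

# Demanding 90% accuracy makes the radio cheaper than the big local model.
plan = choose_partition(0.90, [(0.85, 2.0), (0.92, 6.0)], (0.97, 4.0))
```

Lowering the accuracy requirement flips the decision back to the small on-device classifier, which is the dynamic behavior the paper's scheme exploits.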
A remote data access architecture for home-monitoring health-care applications.
Lin, Chao-Hung; Young, Shuenn-Tsong; Kuo, Te-Son
2007-03-01
With the aging of the population and the increasing patient preference for receiving care in their own homes, remote home care is one of the fastest growing areas of health care in Taiwan and many other countries. Many remote home-monitoring applications have been developed and implemented to enable both formal and informal caregivers to have remote access to patient data so that they can respond instantly to any abnormalities of in-home patients. The aim of this technology is to give both patients and relatives better control of the health care, reduce the burden on informal caregivers and reduce visits to hospitals and thus result in a better quality of life for both the patient and his/her family. To facilitate their widespread adoption, remote home-monitoring systems take advantage of the low-cost features and popularity of the Internet and PCs, but are inherently exposed to several security risks, such as virus and denial-of-service (DoS) attacks. These security threats exist as long as the in-home PC is directly accessible by remote-monitoring users over the Internet. The purpose of the study reported in this paper was to improve the security of such systems, with the proposed architecture aimed at increasing the system availability and confidentiality of patient information. A broker server is introduced between the remote-monitoring devices and the in-home PCs. This topology removes direct access to the in-home PC, and a firewall can be configured to deny all inbound connections while the remote home-monitoring application is operating. This architecture helps to transfer the security risks from the in-home PC to the managed broker server, on which more advanced security measures can be implemented. The pros and cons of this novel architecture design are also discussed and summarized.
A Tale of Two Observing Systems: Interoperability in the World of Microsoft Windows
NASA Astrophysics Data System (ADS)
Babin, B. L.; Hu, L.
2008-12-01
The Louisiana Universities Marine Consortium (LUMCON) and Dauphin Island Sea Lab (DISL) Environmental Monitoring Systems provide a unified coastal ocean observing system. These two systems are mirrored to maintain autonomy while offering an integrated data-sharing environment. Both collect data via Campbell Scientific data loggers, store the data in Microsoft SQL Servers, and disseminate the data in real time on the World Wide Web via Microsoft Internet Information Servers and Active Server Pages (ASP). The use of Microsoft Windows technologies has presented many challenges to these observing systems as open-source tools for interoperability grow, since the current open-source tools often require the installation of additional software. In order to make data available in common standard formats, "home grown" software has been developed; one example is software that generates XML files for transmission to the National Data Buoy Center (NDBC). OOSTethys partners develop, test and implement easy-to-use, open-source, OGC-compliant software, and have created a working prototype of networked, semantically interoperable, real-time data systems. Partnering with OOSTethys, we are developing a cookbook for implementing OGC web services. The implementation will be written in ASP, will run in a Microsoft operating system environment, and will serve data via Sensor Observation Services (SOS). This cookbook will give observing systems running Microsoft Windows the tools to easily participate in the Open Geospatial Consortium (OGC) Oceans Interoperability Experiment (OCEANS IE).
Design and Evaluation of a Proxy-Based Monitoring System for OpenFlow Networks.
Taniguchi, Yoshiaki; Tsutsumi, Hiroaki; Iguchi, Nobukazu; Watanabe, Kenzi
2016-01-01
Software-Defined Networking (SDN) has attracted attention along with the popularization of cloud environments and server virtualization. In SDN, the control plane and the data plane are decoupled so that the logical topology and routing control can be configured dynamically depending on network conditions. To obtain network conditions precisely, a network monitoring mechanism is necessary. In this paper, we focus on OpenFlow, a core technology for realizing SDN. We propose, design, implement, and evaluate a network monitoring system for OpenFlow networks. Our proposed system acts as a proxy between an OpenFlow controller and OpenFlow switches. Through experimental evaluations, we confirm that our proposed system can capture packets and monitor traffic information according to the administrator's configuration. In addition, we show that our proposed system does not cause significant degradation of overall network performance.
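The proxy idea (forwarding controller-switch traffic unchanged while recording statistics on the side) can be sketched abstractly; the message dictionaries below stand in for OpenFlow protocol messages and are not the authors' implementation:

```python
from collections import defaultdict

class MonitoringProxy:
    """Sits between a controller and its switches; forwards every message
    untouched while tallying per-switch packet-in counts. The message
    format here is an invented stand-in for OpenFlow messages."""
    def __init__(self, forward):
        self.forward = forward            # callable delivering msg onward
        self.packet_counts = defaultdict(int)

    def relay(self, msg):
        if msg.get("type") == "packet_in":
            self.packet_counts[msg["switch_id"]] += 1
        self.forward(msg)                 # pass through unchanged

delivered = []
proxy = MonitoringProxy(delivered.append)
proxy.relay({"type": "packet_in", "switch_id": "s1"})
proxy.relay({"type": "packet_in", "switch_id": "s1"})
proxy.relay({"type": "flow_mod", "switch_id": "s2"})
```

Because the proxy only observes and forwards, its overhead is a constant per-message cost, which is consistent with the paper's finding of no significant performance degradation.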
Web Program for Development of GUIs for Cluster Computers
NASA Technical Reports Server (NTRS)
Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward
2003-01-01
WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.
Implementation of Online Promethee Method for Poor Family Change Rate Calculation
NASA Astrophysics Data System (ADS)
Aji, Dhady Lukito; Suryono; Widodo, Catur Edi
2018-02-01
This research developed an online calculation of the rate of change in the number of poor families using the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE). The system is very useful for monitoring poverty in a region as well as for administrative services related to the poverty rate. The system consists of client computers and servers connected via the internet. Poor-family residence data are obtained from the government. In addition, survey data are input through the client computer in each administrative village, covering the 23 criteria established by the government. The PROMETHEE method is used to evaluate the poverty value, and its weights are used to determine poverty status. PROMETHEE output can also be used to rank the poverty of the registered population on the server based on the net flow value. The rate of change is calculated by comparing the current poverty rate to the previous poverty rate. The results can be viewed online and in real time on the server as numbers and graphs. The test results show that the system can classify poverty status, calculate the rate of change of poverty, and determine the poverty value and ranking of each member of the population.
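The net-flow ranking at the heart of PROMETHEE II can be shown compactly with the "usual" preference function, P(d) = 1 if d > 0 and 0 otherwise; the two-alternative example and weights below are invented, and the real system evaluates the 23 government criteria:

```python
def promethee_netflow(alternatives, weights):
    """PROMETHEE II net flows with the 'usual' preference function.

    alternatives: {name: [criterion values]}; weights sum to 1.
    Higher values are assumed preferred on every criterion, which is a
    simplification of the full method (which allows per-criterion
    preference functions and directions)."""
    names = list(alternatives)
    n = len(names)
    flows = {}
    for a in names:
        plus = minus = 0.0
        for b in names:
            if a == b:
                continue
            # Aggregated preference of a over b, and of b over a.
            plus += sum(w for w, va, vb in
                        zip(weights, alternatives[a], alternatives[b]) if va > vb)
            minus += sum(w for w, va, vb in
                         zip(weights, alternatives[a], alternatives[b]) if vb > va)
        flows[a] = (plus - minus) / (n - 1)   # net flow in [-1, 1]
    return flows

flows = promethee_netflow({"A": [3, 1], "B": [1, 2]}, [0.6, 0.4])
```

Ranking alternatives by descending net flow gives the poverty ranking described above; comparing two time periods' rankings yields the rate of change.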
Maltz, Jonathan; C Ng, Thomas; Li, Dustin; Wang, Jian; Wang, Kang; Bergeron, William; Martin, Ron; Budinger, Thomas
2005-01-01
In mass trauma situations, emergency personnel are challenged with the task of prioritizing the care of many injured victims. We propose a trauma patient tracking system (TPTS) where first-responders tag all patients with a wireless monitoring device that continuously reports the location of each patient. The system can be used not only to prioritize patient care, but also to determine the time taken for each patient to receive treatment. This is important in training emergency personnel and in identifying bottlenecks in the disaster response process. In situations where biochemical agents are involved, a TPTS may be employed to determine sites of cross-contamination. In order to track patient location in both outdoor and indoor environments, we employ both Global Positioning System (GPS) and Television/Radio Frequency (TVRF) technologies. Each patient tag employs IEEE 802.11 (Wi-Fi)/TCP/IP networking to communicate with a central server via any available Wi-Fi basestation. A key component to increase TPTS fault-tolerance is a mobile Wi-Fi basestation that employs redundant Internet connectivity to ensure that tags at the disaster scene can send information to the central server even when local infrastructure is unavailable for use. We demonstrate the robustness of the system in tracking multiple patients in a simulated trauma situation in an urban environment.
Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL
NASA Astrophysics Data System (ADS)
Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong
2011-12-01
We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing are outlined. Integration of the systems with our Nagios-based facility monitoring and alerting is also described, and the application characteristics of GUMS and VOMS that enable effective clustering are explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
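The interplay of health monitors and pool routing described above can be sketched as round-robin selection over the members that currently pass their health check; this is a toy model of the concept, not F5's BIG-IP logic:

```python
import itertools

class Pool:
    """Round-robin over pool members that pass a health monitor, so traffic
    is routed around a failed node (e.g. a GUMS server or MySQL replica)."""
    def __init__(self, members, health_check):
        self.members = members
        self.health_check = health_check          # callable: member -> bool
        self._rr = itertools.cycle(range(len(members)))

    def pick(self):
        # Try each member at most once per call, skipping unhealthy ones.
        for _ in range(len(self.members)):
            member = self.members[next(self._rr)]
            if self.health_check(member):
                return member
        raise RuntimeError("no healthy members in pool")

# 'db2' has failed its monitor, so the pool alternates between db1 and db3.
up = {"db1": True, "db2": False, "db3": True}
pool = Pool(["db1", "db2", "db3"], lambda m: up[m])
```

A real load balancer also drains connections and re-admits members when their monitor recovers; this sketch only captures the routing decision.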
Cruz, Márcio Freire; Cavalcante, Carlos Arthur Mattos Teixeira; Sá Barretto, Sérgio Torres
2018-05-30
Health Level Seven (HL7) is one of the standards most used to centralize data from different vital sign monitoring systems. Such solutions, however, significantly limit the data available for historical analysis, because they typically use databases that are not effective at storing large volumes of data. In industry, a specific Big Data historian, known as a Process Information Management System (PIMS), solves this problem. This work proposes the same solution to overcome the restriction on storing vital sign data. The PIMS needs a compatible communication standard to allow storage, and the one most commonly used is OLE for Process Control (OPC). This paper presents an HL7-OPC server that permits communication between vital sign monitoring systems and a PIMS, thus allowing the storage of long historical series of vital signs. In addition, it carries out a review of local and cloud-based Big Medical Data research, followed by an analysis of PIMS in a health IT environment. It then shows the architecture of the HL7 and OPC standards. Finally, it presents the HL7-OPC server and a sequence of tests that proved its full operation and performance.
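Extracting a vital sign from an HL7 v2 pipe-delimited message, as an HL7-OPC gateway must before handing values to the historian, can be sketched as below; the sample message is invented, and only the standard OBX field positions (OBX-3 observation identifier, OBX-5 value, OBX-6 units) are assumed:

```python
def parse_obx(segment):
    """Extract (observation name, value, units) from a simplified
    HL7 v2 OBX segment such as 'OBX|1|NM|HR^Heart Rate||72|bpm'."""
    fields = segment.split("|")
    if fields[0] != "OBX":
        raise ValueError("not an OBX segment")
    name = fields[3].split("^")[-1]   # 'HR^Heart Rate' -> 'Heart Rate'
    return name, float(fields[5]), fields[6]

# Segments in HL7 v2 are separated by carriage returns.
message = "MSH|^~\\&|MONITOR|ICU\rOBX|1|NM|HR^Heart Rate||72|bpm"
vitals = [parse_obx(seg) for seg in message.split("\r")
          if seg.startswith("OBX")]
```

Each parsed tuple would then be written to an OPC tag so the PIMS can store it as a time series; that OPC side is omitted here.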
Web Monitoring of EOS Front-End Ground Operations, Science Downlinks and Level 0 Processing
NASA Technical Reports Server (NTRS)
Cordier, Guy R.; Wilkinson, Chris; McLemore, Bruce
2008-01-01
This paper addresses the efforts undertaken and the technology deployed to aggregate and distribute the metadata characterizing the real-time operations associated with NASA Earth Observing Systems (EOS) high-rate front-end systems and the science data collected at multiple ground stations and forwarded to the Goddard Space Flight Center for level 0 processing. Station operators, mission project management personnel, spacecraft flight operations personnel and data end-users for various EOS missions can retrieve the information at any time from any location having access to the internet. The users are distributed and the EOS systems are distributed but the centralized metadata accessed via an external web server provide an effective global and detailed view of the enterprise-wide events as they are happening. The data-driven architecture and the implementation of applied middleware technology, open source database, open source monitoring tools, and external web server converge nicely to fulfill the various needs of the enterprise. The timeliness and content of the information provided are key to making timely and correct decisions which reduce project risk and enhance overall customer satisfaction. The authors discuss security measures employed to limit access of data to authorized users only.
[Design and implementation of field questionnaire survey system of taeniasis/cysticercosis].
Huan-Zhang, Li; Jing-Bo, Xue; Men-Bao, Qian; Xin-Zhong, Zang; Shang, Xia; Qiang, Wang; Ying-Dan, Chen; Shi-Zhu, Li
2018-04-17
A taeniasis/cysticercosis information management system was designed to achieve dynamic monitoring of the taeniasis/cysticercosis epidemic situation and improve the intelligence level of disease information management. The system includes a three-layer structure (application layer, technical core layer, and data storage layer) and implements data transmission and remote communication in a Browser/Server architecture. The system is expected to promote disease data collection and may provide standardized data for the convenience of data analysis.
2004-01-01
... login identity to the one under which the system call is executed, the parameters of the system call execution (file names including full path) ... COAST-EIMDT: distributed on target hosts, anomaly detection. EMERALD: distributed on target hosts and security servers, signature recognition and anomaly detection ... uses a centralized architecture and employs an anomaly detection technique for intrusion detection. The EMERALD project [80] proposes a ...
Takagaki, Yusaku; Yamamoto, Shuji; Kubo, Mayu; Kunitatsu, Kosei
2014-01-01
Susami is a typical rural town of about 5,000 people with a 40% aging rate, located in the south of Wakayama prefecture. The needs with regard to medical care, nursing care and senior care have been increasing every year. However, there are few staff members involved in such care services. To take better care of our community, we developed the "Susami information sharing system." The subjects consisted of 2,600 people from Susami who provided their consent for their information to be shared. Using the information sharing system, medical information, including prescriptions, infusions, imaging and laboratory data, is automatically extracted from the electronic medical records at Susami hospital. Home nursing information is uploaded via handheld units by nurses at home nursing stations. Senior care information is also shared by care workers as part of the Susami social welfare association. Welfare information, including the results of basic medical examinations, cancer screening and vaccination data, is uploaded by staff of the government office. Infrared motion sensors are installed in the homes of subjects living on their own to monitor their life activities. All information is collected by a shared host server through each information disclosure server, and can be seen in the electronic medical records and on PC monitors. The Susami government office administers this system under an annual budget of 3,800,000 yen, most of which is the maintenance cost of the infrared motion sensors; the annual administration expense for the system's servers is 680,000 yen. Because the maintenance cost is relatively low, it is not difficult for small-scale governments like Susami's to maintain this system. In the near future, we will consider allowing other departments and practitioners to connect to our system. This system has strengthened both mutual understanding and cooperation between patients, health care providers, nurses and caregivers.
NASA Astrophysics Data System (ADS)
Plank, G.; Slater, D.; Torrisi, J.; Presser, R.; Williams, M.; Smith, K. D.
2012-12-01
The Nevada Seismological Laboratory (NSL) manages time-series data and high-throughput IP telemetry for the National Center for Nuclear Security (NCNS) Source Physics Experiment (SPE), underway on the Nevada National Security Site (NNSS). During active-source experiments, SPE's heterogeneous systems record over 350 channels of a variety of data types including seismic, infrasound, acoustic, and electro-magnetic. During the interim periods, broadband and short period instruments record approximately 200 channels of continuous, high-sample-rate seismic data. Frequent changes in sensor and station configurations create a challenging meta-data environment. Meta-data account for complete operational histories, including sensor types, serial numbers, gains, sample rates, orientations, instrument responses, data-logger types etc. To date, these catalogue 217 stations, over 40 different sensor types, and over 1000 unique recording configurations (epochs). Facilities for processing, backup, and distribution of time-series data currently span four Linux servers, 60Tb of disk capacity, and two data centers. Bandwidth, physical security, and redundant power and cooling systems for acquisition, processing, and backup servers are provided by NSL's Reno data center. The Nevada System of Higher Education (NSHE) System Computer Services (SCS) in Las Vegas provides similar facilities for the distribution server. NSL staff handle setup, maintenance, and security of all data management systems. SPE PIs have remote access to meta-data, raw data, and CSS3.0 compilations, via SSL-based transfers such as rsync or secure-copy, as well as shell access for data browsing and limited processing. Meta-data are continuously updated and posted on the Las Vegas distribution server as station histories are better understood and errors are corrected. Raw time series and refined CSS3.0 data compilations with standardized formats are transferred to the Las Vegas data server as available. 
For better data availability and station monitoring, SPE is beginning to leverage NSL's wide-area digital IP network with nine SPE stations and six Rock Valley area stations that stream continuous recordings in real time to the NSL Reno data center. These stations, in addition to eight regional legacy stations supported by National Security Technologies (NSTec), are integrated with NSL's regional monitoring network and constrain a high-quality local earthquake catalog for NNSS. The telemetered stations provide critical capabilities for SPE, and infrastructure for earthquake response on NNSS as well as southern Nevada and the Las Vegas area.
The USGODAE Monterey Data Server
NASA Astrophysics Data System (ADS)
Sharfstein, P. J.; Dimitriou, D.; Hankin, S. C.
2004-12-01
With oversight from the U.S. Global Ocean Data Assimilation Experiment (GODAE) Steering Committee and funding from the Office of Naval Research, the USGODAE Monterey Data Server has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. Support of the Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Because of the broad range and diverse formats of data used by the GODAE community, presenting data with a consistent interface and ensuring its availability in standard formats is a primary challenge faced by the USGODAE Server project. To this end, all USGODAE data sets are available via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and Live Access Server (LAS) from PMEL. 
Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC). To provide surface forcing, fluxes, and boundary conditions for ocean model research, USGODAE serves global data from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and regional data from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Global meteorological data and observational data from the FNMOC Ocean QC process are posted in near real-time to USGODAE. These include T/S profiles, in-situ and satellite sea surface temperature (SST), satellite altimetry, and SSM/I sea ice. They contain all of the unclassified in-situ and satellite observations used to initialize the FNMOC NOGAPS model. Also, the Naval Oceanographic Office provides daily satellite SST and SSH retrievals to USGODAE. The USGODAE Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo T/S profiling float data. USGODAE Argo data are served through OPeNDAP and LAS, providing complete integration into NVODS and the IOOS/DMAC. Due to its high reliability, ease of data access, and increasing breadth of data, the USGODAE Server is becoming an invaluable resource for both the GODAE community and the general oceanographic community. Continued integration of model, forcing, and in-situ data sets from providers throughout the world is making the USGODAE Monterey Data Server a key part of the international GODAE project.
Health monitoring of offshore structures using wireless sensor network: experimental investigations
NASA Astrophysics Data System (ADS)
Chandrasekaran, Srinivasan; Chitambaram, Thailammai
2016-04-01
This paper presents a detailed methodology for deploying a wireless sensor network in offshore structures for structural health monitoring (SHM). Traditional SHM is carried out by visual inspections and wired systems, which are complicated, require larger installation space to deploy, and are tedious to decommission. Wireless sensor networks can advance health monitoring through scalable, dense sensor deployments that consume less space and power. The proposed methodology focuses on determining the serviceability status of large floating platforms under environmental loads using wireless sensors. Servers receiving the acquired data analyze it for exceedance of threshold values. On exceedance, the SHM architecture triggers an alarm or early warning in the form of alert messages to the engineer-in-charge on board; emergency response plans can then be activated, minimizing risk and mitigating the economic losses arising from accidents. In the present study, wired and wireless sensors are installed in an experimental model and the acquired structural responses are compared. The wireless system comprises a Raspberry Pi board programmed to transmit the acquired data to the server using a Wi-Fi adapter. Data are then hosted on a webpage for further post-processing as desired.
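The server-side exceedance check that triggers the early warning might look like the following sketch; the consecutive-sample policy, threshold, and return format are illustrative assumptions rather than the paper's implementation:

```python
def check_exceedance(samples, threshold, consecutive=3):
    """Raise an alert when `consecutive` successive response samples
    exceed the threshold, to avoid alarming on a single noisy reading.
    Returns the sample index at which the alert fired, if any."""
    run = 0
    for i, value in enumerate(samples):
        run = run + 1 if abs(value) > threshold else 0
        if run >= consecutive:
            return {"alert": True, "at_sample": i}
    return {"alert": False, "at_sample": None}

# Three successive structural-response samples above 0.4 trigger the alert.
result = check_exceedance([0.1, 0.5, 0.6, 0.7, 0.2], threshold=0.4)
```

In the architecture above, a `True` result would be what dispatches the alert message to the engineer-in-charge on board.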
Distributed On-line Monitoring System Based on Modem and Public Phone Net
NASA Astrophysics Data System (ADS)
Chen, Dandan; Zhang, Qiushi; Li, Guiru
In order to solve the monitoring problem of urban sewage disposal, a distributed on-line monitoring system is proposed. By introducing modem-based dial-up communication technology, a serial communication program rationally solves the information transmission problem between the master station and the slave stations. The serial communication program is realized with the MSComm control of C++ Builder 6.0. The software includes a real-time data operation part and a history data handling part, which use Microsoft SQL Server 2000 as the database and C++ Builder 6.0 for the user interface. The monitoring center displays a user interface with alarm information for over-standard data and real-time curves. Practical application shows that the system successfully accomplishes real-time data acquisition from the data-gathering stations and stores the data in the terminal database.
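The master station's handling of slave-station data (store each reading, raise an alarm record when a value is over-standard) can be sketched as follows; the frame format, parameter names, and discharge limits are invented for illustration:

```python
# Hypothetical over-standard limits for monitored discharge parameters.
LIMITS = {"COD": 100.0, "PH": 9.0}

def handle_frame(frame, history, alarms):
    """Parse a 'station,parameter,value' text frame from a slave station,
    store the reading, and record an alarm when it exceeds its limit."""
    station, parameter, raw = frame.split(",")
    value = float(raw)
    history.append((station, parameter, value))
    limit = LIMITS.get(parameter)
    if limit is not None and value > limit:
        alarms.append((station, parameter, value, limit))

history, alarms = [], []
handle_frame("st01,COD,128.4", history, alarms)   # over-standard
handle_frame("st01,COD,76.0", history, alarms)    # within limits
```

In the real system the history list corresponds to the SQL Server database and the alarms feed the monitoring center's alarm display.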
A Remote Patient Monitoring System for Congestive Heart Failure
Suh, Myung-kyung; Chen, Chien-An; Woodbridge, Jonathan; Tu, Michael Kai; Kim, Jung In; Nahapetian, Ani; Evangelista, Lorraine S.; Sarrafzadeh, Majid
2011-01-01
Congestive heart failure (CHF) is a leading cause of death in the United States affecting approximately 670,000 individuals. Due to the prevalence of CHF related issues, it is prudent to seek out methodologies that would facilitate the prevention, monitoring, and treatment of heart disease on a daily basis. This paper describes WANDA (Weight and Activity with Blood Pressure Monitoring System); a study that leverages sensor technologies and wireless communications to monitor the health related measurements of patients with CHF. The WANDA system is a three-tier architecture consisting of sensors, web servers, and back-end databases. The system was developed in conjunction with the UCLA School of Nursing and the UCLA Wireless Health Institute to enable early detection of key clinical symptoms indicative of CHF-related decompensation. This study shows that CHF patients monitored by WANDA are less likely to have readings fall outside a healthy range. In addition, WANDA provides a useful feedback system for regulating readings of CHF patients. PMID:21611788
A graphic system for telemetry monitoring and procedure performing at the Telecom SCC
NASA Technical Reports Server (NTRS)
Loubeyre, Jean Philippe
1994-01-01
The increasing number of telemetry parameters and the increasing complexity of the procedures used for in-orbit satellite follow-up have led to the development of new tools for telemetry monitoring and procedure performing. The system presented here, named Graphic Server, provides an advanced graphic representation of the satellite subsystems, including real-time telemetry and alarm display, and powerful decision-making support through on-line contingency procedures. Used for 2.5 years at the TELECOM S.C.C. for procedure performing, it has become an essential part of the S.C.C.
The Standard Autonomous File Server, A Customized, Off-the-Shelf Success Story
NASA Technical Reports Server (NTRS)
Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)
2001-01-01
The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself and providing an automated fail-over process to enhance reliability. This paper describes the unique problems and lessons learned both during the COTS selection and integration into SAFS and during the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.
Data management for biofied building
NASA Astrophysics Data System (ADS)
Matsuura, Kohta; Mita, Akira
2015-03-01
Recently, smart houses have been studied by many researchers to satisfy the individual demands of residents. However, they are not yet feasible, as they are very costly and require many sensors to be embedded into houses. Therefore, we suggest the "Biofied Building". In a Biofied Building, sensor agent robots conduct sensing, actuation, and control in the house. The robots continuously monitor many parameters of residents' lives, such as walking postures and emotion. In this paper, a prototype network system and a data model for practical application in Biofied Buildings are proposed. In the system, the functions of robots and servers are divided according to service flows in Biofied Buildings. The data model is designed to accumulate both building data and residents' data. Data sent from the robots and data analyzed in the servers are automatically registered in the database. Lastly, the feasibility of this system is verified through a lighting control simulation performed in an office space.
Remote Diagnosis of the International Space Station Utilizing Telemetry Data
NASA Technical Reports Server (NTRS)
Deb, Somnath; Ghoshal, Sudipto; Malepati, Venkat; Domagala, Chuck; Patterson-Hine, Ann; Alena, Richard; Norvig, Peter (Technical Monitor)
2000-01-01
Modern systems such as fly-by-wire aircraft, nuclear power plants, manufacturing facilities, battlefields, etc., are all examples of highly connected network-enabled systems. Many of these systems are also mission critical and need to be monitored round the clock. Such systems typically consist of embedded sensors in networked subsystems that can transmit data to central (or remote) monitoring stations. Moreover, many legacy safety systems were originally not designed for real-time onboard diagnosis, but are critical and would benefit from such a solution. Embedding additional software or hardware in such systems is often considered too intrusive and introduces flight safety and validation concerns. Such systems can instead be equipped to transmit the sensor data to a remote processing center for continuous health monitoring. At Qualtech Systems, we are developing a Remote Diagnosis Server (RDS) that can support multiple simultaneous diagnostic sessions from a variety of remote subsystems.
Transaction aware tape-infrastructure monitoring
NASA Astrophysics Data System (ADS)
Nikolaidis, Fotios; Kruse, Daniele Francesco
2014-06-01
Administering a large-scale, multi-protocol, hierarchical tape infrastructure like the CERN Advanced STORage manager (CASTOR)[2], which now stores 100 PB (growing by 25 PB per year), requires an adequate monitoring system for quick spotting of malfunctions, easier debugging and on-demand report generation. The main challenges for such a system are coping with CASTOR's log format diversity and its information scattered among several log files, the need for long-term information archival, the strict reliability requirements and the group-based GUI visualization. For this purpose, we have designed, developed and deployed a centralized system consisting of four independent layers: the Log Transfer layer for collecting log lines from all tape servers to a single aggregation server, the Data Mining layer for combining log data into transaction context, the Storage layer for archiving the resulting transactions and finally the Web UI layer for accessing the information. With flexibility, extensibility and maintainability in mind, each layer is designed to work as a message broker for the next layer, providing a clean and generic interface while ensuring consistency, redundancy and ultimately fault tolerance. This system unifies information previously dispersed over several monitoring tools into a single user interface, using Splunk, which also allows us to provide information visualization based on access control lists (ACL). Since its deployment, it has been successfully used by CASTOR tape operators for quick overview of transactions, performance evaluation and malfunction detection, and by managers for report generation.
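The Data Mining layer's core idea, combining scattered log lines into per-transaction context, can be sketched as below. The one-line-per-event format and field names are assumptions for illustration; CASTOR's real log formats are far more diverse.

```python
from collections import defaultdict

def group_transactions(log_lines):
    """Group assumed 'txid key=value' log lines into one dict per transaction."""
    transactions = defaultdict(dict)
    for line in log_lines:
        txid, _, rest = line.partition(" ")
        key, _, value = rest.partition("=")
        transactions[txid][key] = value
    return dict(transactions)

# Lines from different log files, interleaved, still fold into transactions.
logs = [
    "tx42 mount=drive3",
    "tx43 mount=drive1",
    "tx42 status=ok",
]
tx = group_transactions(logs)
```

The resulting transaction records are what the Storage layer would archive and the Web UI layer would display.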
Design and implementation of streaming media server cluster based on FFMpeg.
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system.
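The dispatch idea in this abstract (users distributed to servers by location, with balance maintained from actively reported load) can be sketched as follows. Server names, regions, and the load metric are illustrative assumptions, not the paper's actual algorithm.

```python
# Load values as they might arrive via active feedback from each server.
servers = {
    "media-east-1": {"region": "east", "load": 0.72},
    "media-east-2": {"region": "east", "load": 0.35},
    "media-west-1": {"region": "west", "load": 0.10},
}

def pick_server(user_region: str) -> str:
    """Return the least-loaded server in the user's region (fall back to any)."""
    candidates = [n for n, s in servers.items() if s["region"] == user_region]
    pool = candidates or list(servers)
    return min(pool, key=lambda n: servers[n]["load"])

best = pick_server("east")
```

A service-redirection step would then hand the chosen server's address back to the streaming client.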
Wide Area Information Servers: An Executive Information System for Unstructured Files.
ERIC Educational Resources Information Center
Kahle, Brewster; And Others
1992-01-01
Describes the Wide Area Information Servers (WAIS) system, an integrated information retrieval system for corporate end users. Discussion covers general characteristics of the system, search techniques, protocol development, user interfaces, servers, selective dissemination of information, nontextual data, access to other servers, and description…
A monitoring system for vegetable greenhouses based on a wireless sensor network.
Li, Xiu-hong; Cheng, Xiao; Yan, Ke; Gong, Peng
2010-01-01
A wireless sensor network-based automatic monitoring system is designed for monitoring the life conditions of greenhouse vegetables. The complete system architecture includes a group of sensor nodes, a base station, and an Internet data center. For the design of the wireless sensor node, the JN5139 microprocessor is adopted as the core component and the Zigbee protocol is used for wireless communication between nodes. With an ARM7 microprocessor and the embedded ZKOS operating system, a proprietary gateway node is developed to achieve data influx, screen display, system configuration and GPRS-based remote data forwarding. Through a client/server mode, the management software of the remote data center achieves real-time data distribution and time-series analysis. In addition, a GSM-short-message-based interface is developed for sending real-time environmental measurements, and for alarming when a measurement is beyond a pre-defined threshold. The whole system has been tested for over one year and satisfactory results have been observed, which indicate that this system is very useful for greenhouse environment monitoring.
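The data center's time-series analysis and the pre-defined-threshold alarm check can be sketched as below. The window size and threshold band are assumed values, not ones from the paper.

```python
def moving_average(values, window=3):
    """Trailing moving average; shorter windows at the start of the series."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def needs_alarm(value, low, high):
    """True when a measurement is beyond the pre-defined threshold band."""
    return not (low <= value <= high)

temps = [24.0, 25.0, 29.0, 35.0]  # hypothetical greenhouse readings
smoothed = moving_average(temps)
alarm = needs_alarm(temps[-1], 15.0, 32.0)
```

In the deployed system, a positive alarm check would trigger the GSM short-message interface.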
Asynchronous data change notification between database server and accelerator controls system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, W.; Morris, J.; Nemesure, S.
2011-10-10
Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMSs which support DCN (such as Oracle and MS SQL Server), some server-side and/or client-side programming may be required to make the DCN system work. This makes the setup of DCN between a database server and interested clients tedious and time consuming. In accelerator control systems, there are many well-established client/server software architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
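The trigger half of the ADCN scheme can be illustrated with SQLite standing in for Oracle/MS SQL Server: a trigger records each change in a notification table, which a reflection-server process drains and pushes to its clients. Table and column names here are assumptions.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE settings (name TEXT PRIMARY KEY, value REAL);
CREATE TABLE change_log (id INTEGER PRIMARY KEY, name TEXT, value REAL);
CREATE TRIGGER notify_update AFTER UPDATE ON settings
BEGIN
    INSERT INTO change_log (name, value) VALUES (NEW.name, NEW.value);
END;
""")
db.execute("INSERT INTO settings VALUES ('magnet_current', 1.0)")
db.execute("UPDATE settings SET value = 2.5 WHERE name = 'magnet_current'")

def drain_notifications(conn):
    """What a reflection server would do: read and clear pending changes."""
    rows = conn.execute("SELECT name, value FROM change_log").fetchall()
    conn.execute("DELETE FROM change_log")
    return rows

pending = drain_notifications(db)
```

The reflection server would then forward each drained change to subscribed clients through the SET/GET API of EPICS, CDEV or ADO.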
Artificial intelligence in the service of system administrators
NASA Astrophysics Data System (ADS)
Haen, C.; Barra, V.; Bonaccorsi, E.; Neufeld, N.
2012-12-01
The LHCb online system relies on a large and heterogeneous IT infrastructure made of thousands of servers on which many different applications are running. They run a great variety of tasks: critical ones such as data taking and secondary ones like web servers. The administration of such a system, and making sure it is working properly, represents a very important workload for the small expert-operator team. Research has been performed to try to automate (some) system administration tasks, starting in 2001 when IBM defined the so-called “self objectives” supposed to lead to “autonomic computing”. In this context, we present a framework that makes use of artificial intelligence and machine learning to monitor and diagnose Linux-based systems and their interaction with software, at a low level and in a non-intrusive way. Moreover, the multi-agent approach we use, coupled with an “object oriented paradigm” architecture, should greatly increase learning speed and highlight relations between problems.
Space Images for NASA JPL Android Version
NASA Technical Reports Server (NTRS)
Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice
2013-01-01
This software addresses the demand for easily accessible NASA JPL images and videos by providing a user friendly and simple graphical user interface that can be run via the Android platform from any location where Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.
Development of a Web-based financial application System
NASA Astrophysics Data System (ADS)
Hasan, M. R.; Ibrahimy, M. I.; Motakabber, S. M. A.; Ferdaus, M. M.; Khan, M. N. H.; Mostafa, M. G.
2013-12-01
The paper describes a technique for developing a web-based financial system following the latest technology and business needs. In the development of web-based applications, both user-friendliness and technology are very important. The ASP.NET MVC 4 platform and SQL Server 2008 are used for the development of the web-based financial system. The entry system and report monitoring of the application are shown to be user-friendly. This paper also highlights critical situations in development, which will help in developing a quality product.
Web Proxy Auto Discovery for the WLCG
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.; Blumenfeld, B.; De Salvo, A.; Dewhurst, A.; Verguilov, V.
2017-10-01
All four of the LHC experiments depend on web proxies (that is, squids) at each grid site to support software distribution by the CernVM FileSystem (CVMFS). CMS and ATLAS also use web proxies for conditions data distributed through the Frontier Distributed Database caching system. ATLAS & CMS each have their own methods for their grid jobs to find out which web proxies to use for Frontier at each site, and CVMFS has a third method. Those diverse methods limit usability and flexibility, particularly for opportunistic use cases, where an experiment’s jobs are run at sites that do not primarily support that experiment. This paper describes a new Worldwide LHC Computing Grid (WLCG) system for discovering the addresses of web proxies. The system is based on an internet standard called Web Proxy Auto Discovery (WPAD). WPAD is in turn based on another standard called Proxy Auto Configuration (PAC). Both the Frontier and CVMFS clients support this standard. The input into the WLCG system comes from squids registered in the ATLAS Grid Information System (AGIS) and CMS SITECONF files, cross-checked with squids registered by sites in the Grid Configuration Database (GOCDB) and the OSG Information Management (OIM) system, and combined with some exceptions manually configured by people from ATLAS and CMS who operate WLCG Squid monitoring. WPAD servers at CERN respond to http requests from grid nodes all over the world with a PAC file that lists available web proxies, based on IP addresses matched from a database that contains the IP address ranges registered to organizations. Large grid sites are encouraged to supply their own WPAD web servers for more flexibility, to avoid being affected by short term long distance network outages, and to offload the WLCG WPAD servers at CERN. The CERN WPAD servers additionally support requests from jobs running at non-grid sites (particularly for LHC@Home) which they direct to the nearest publicly accessible web proxy servers. 
The responses to those requests are geographically ordered based on a separate database that maps IP addresses to longitude and latitude.
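The core of what a WPAD server does, answering a node's request with a PAC file whose proxy list matches the node's site, can be sketched as below. The site table and proxy addresses are invented for illustration; the real system draws them from AGIS, SITECONF, GOCDB and OIM.

```python
import ipaddress

SITE_SQUIDS = {  # hypothetical site registrations: IP range -> squids
    "10.1.0.0/16": ["http://squid1.site-a.example:3128"],
    "192.168.8.0/24": ["http://squid1.site-b.example:3128",
                       "http://squid2.site-b.example:3128"],
}

def pac_for(client_ip: str) -> str:
    """Return a PAC file body listing the squids registered for the client's site."""
    addr = ipaddress.ip_address(client_ip)
    for net, squids in SITE_SQUIDS.items():
        if addr in ipaddress.ip_network(net):
            proxies = "; ".join(f"PROXY {s.split('//')[1]}" for s in squids)
            return ("function FindProxyForURL(url, host) {\n"
                    f'    return "{proxies}; DIRECT";\n}}')
    # Unknown address: no registered site, connect directly.
    return 'function FindProxyForURL(url, host) { return "DIRECT"; }'

pac = pac_for("10.1.2.3")
```

Both the Frontier and CVMFS clients would evaluate the returned `FindProxyForURL` function to pick their web proxy.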
Pichler, Peter; Mazanek, Michael; Dusberger, Frederico; Weilnböck, Lisa; Huber, Christian G; Stingl, Christoph; Luider, Theo M; Straube, Werner L; Köcher, Thomas; Mechtler, Karl
2012-11-02
While the performance of liquid chromatography (LC) and mass spectrometry (MS) instrumentation continues to increase, applications such as analyses of complete or near-complete proteomes and quantitative studies require constant and optimal system performance. For this reason, research laboratories and core facilities alike are recommended to implement quality control (QC) measures as part of their routine workflows. Many laboratories perform sporadic quality control checks. However, successive and systematic longitudinal monitoring of system performance would be facilitated by dedicated automatic or semiautomatic software solutions that aid an effortless analysis and display of QC metrics over time. We present the software package SIMPATIQCO (SIMPle AuTomatIc Quality COntrol) designed for evaluation of data from LTQ Orbitrap, Q-Exactive, LTQ FT, and LTQ instruments. A centralized SIMPATIQCO server can process QC data from multiple instruments. The software calculates QC metrics supervising every step of data acquisition from LC and electrospray to MS. For each QC metric the software learns the range indicating adequate system performance from the uploaded data using robust statistics. Results are stored in a database and can be displayed in a comfortable manner from any computer in the laboratory via a web browser. QC data can be monitored for individual LC runs as well as plotted over time. SIMPATIQCO thus assists the longitudinal monitoring of important QC metrics such as peptide elution times, peak widths, intensities, total ion current (TIC) as well as sensitivity, and overall LC-MS system performance; in this way the software also helps identify potential problems. The SIMPATIQCO software package is available free of charge.
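The "learn the range indicating adequate performance with robust statistics" step can be sketched as a median ± k·MAD band, which tolerates occasional outlier runs. The multiplier k is an assumed tuning choice, not necessarily SIMPATIQCO's actual statistic.

```python
import statistics

def robust_range(values, k=3.0):
    """Return a (low, high) band from the median and median absolute deviation."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med - k * mad, med + k * mad

# Hypothetical peak-width QC metric from successive LC runs; one outlier run.
widths = [12.0, 12.5, 11.8, 12.2, 30.0, 12.1]
low, high = robust_range(widths)
ok = [low <= w <= high for w in widths]
```

A run whose metric falls outside the learned band would be flagged in the web display as a potential system-performance problem.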
Zao, John K.; Gan, Tchin-Tze; You, Chun-Kai; Chung, Cheng-En; Wang, Yu-Te; Rodríguez Méndez, Sergio José; Mullen, Tim; Yu, Chieh; Kothe, Christian; Hsiao, Ching-Teng; Chu, San-Liang; Shieh, Ce-Kuen; Jung, Tzyy-Ping
2014-01-01
EEG-based Brain-computer interfaces (BCI) are facing basic challenges in real-world applications. The technical difficulties in developing truly wearable BCI systems that are capable of making reliable real-time prediction of users' cognitive states in dynamic real-life situations may seem almost insurmountable at times. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies offered promising ways to bridge these chasms. In this paper, we report an attempt to develop a pervasive on-line EEG-BCI system using state-of-art technologies including multi-tier Fog and Cloud Computing, semantic Linked Data search, and adaptive prediction/classification models. To verify our approach, we implement a pilot system by employing wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end Fog Servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end Cloud Servers. We succeeded in conducting synchronous multi-modal global data streaming in March and then running a multi-player on-line EEG-BCI game in September, 2013. We are currently working with the ARL Translational Neuroscience Branch to use our system in real-life personal stress monitoring and the UCSD Movement Disorder Center to conduct in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop the necessary BCI ontology and introduce automatic semantic annotation and progressive model refinement capability to our system. PMID:24917804
NASA Astrophysics Data System (ADS)
Sébastien, Nicolas; Cros, Sylvain; Lallemand, Caroline; Kurzrock, Frederik; Schmutz, Nicolas
2016-04-01
Reunion Island is a French overseas territory located in the Indian Ocean. This tropical island has about 840,000 inhabitants and is visited every year by more than 400,000 tourists. On average, the island has 340 sunny days per year. Beyond these advantageous conditions, the population's exposure to ultraviolet radiation constitutes a public health issue. The number of hospitalisations for skin cancer increased by 50% between 2005 and 2010. Health insurance reimbursements due to ophthalmic anomalies caused by the sun amount to about two million euros. Among the prevention measures recommended by public health policies, access to information on UV radiation is one of the basic needs. Reuniwatt, supported by the Regional Council of La Reunion, is currently developing the project Uveka. Uveka is a solution that provides, in real time and as short-term forecasts (several hours ahead), UV radiation maps of Reunion Island. Accessible via a web interface and a smartphone application, Uveka informs citizens about their UV exposure rate and risk according to their individual characteristics (skin phototype, past exposure to the sun, etc.). The present work describes this initiative through the presentation of the UV radiation monitoring system and the data processing chain toward the end users. The UV radiation monitoring system of Uveka is a network of low-cost UV sensors. Each instrument is equipped with a solar panel and a battery. Moreover, the sensor is able to communicate using the 3G telecommunication network, so the instrument can be installed without AC power or access to a wired communication network. This feature eliminates a site selection constraint. Indeed, with more than 200 microclimates and strong spatial variability of cloud cover, building a representative measurement site network on this island with a limited number of instruments is a real challenge.
In addition to these UV radiation measurements, mapping the surface solar radiation using Meteosat-7 meteorological satellite data makes it possible to fill the gaps. Kriging the punctual measurements using satellite data as spatial weights yields a continuous map with spatially constant quality all over Reunion Island. A significant challenge for this monitoring system is to ensure the temporal continuity of the real-time mapping. Indeed, the autonomous sensors are programmed with our proprietary protocol, leading to smart management of the battery load and telecommunication costs. Measurements are sent to a server with a protocol minimizing the data amount in order to keep telecommunication prices low. The server receives the measurement data and integrates them into a NoSQL database. The server is able to handle long time series, and quality control is routinely performed to ensure data consistency as well as to monitor the state of the instrument fleet. The database can be queried by our geographical information system server through an application programming interface. This configuration permits easy development of a web-based or smartphone application using any external information provided by the user (personal phenotype and exposure experience) or by the user's device (e.g. computing refinements according to its location).
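The interpolation step above spreads punctual station measurements across the island. Kriging itself requires a fitted variogram; as a much-simplified stand-in, this sketch uses inverse-distance weighting, with a satellite-derived factor entering as a multiplicative spatial weight. All coordinates and values are invented.

```python
def idw(stations, x, y, sat_weight=1.0, power=2.0):
    """Inverse-distance-weighted estimate at (x, y), scaled by sat_weight."""
    num = den = 0.0
    for sx, sy, value in stations:
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return value * sat_weight  # exactly on a station
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return sat_weight * num / den

stations = [(0.0, 0.0, 5.0), (1.0, 0.0, 9.0)]  # (x, y, UV index)
uv = idw(stations, 0.5, 0.0)
```

Evaluating such an estimator on a grid of points would produce the kind of continuous UV map the abstract describes.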
Huang, Ean-Wen; Hung, Rui-Suan; Chiou, Shwu-Fen; Liu, Fei-Ying; Liou, Der-Ming
2011-01-01
Information and communication technologies progress rapidly and many novel applications have been developed in many domains of human life. In recent years, the demand for healthcare services has been growing because of the increase in the elderly population. Consequently, a number of healthcare institutions have focused on creating technologies to reduce extraneous work and improve the quality of service. In this study, an information platform for tele-healthcare services was implemented. The architecture of the platform included a web-based application server and a client system. The client system was able to retrieve the blood pressure and glucose levels of a patient stored in measurement instruments through Bluetooth wireless transmission. The web application server assisted staff and clients in analyzing the health conditions of patients. In addition, the server provided face-to-face communications and instructions through remote video devices. The platform deployed a service-oriented architecture, which consisted of HL7 standard messages and web service components. The platform could transfer health records into the HL7 standard Clinical Document Architecture for data exchange with other organizations. The prototype system was pretested and evaluated in the homecare department of a hospital and a community management center for chronic disease monitoring. Based on the results of this study, this system is expected to improve the quality of healthcare services.
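As a rough illustration of packaging a home measurement for exchange, the sketch below builds a CDA-style XML observation in the HL7 v3 namespace. The element layout and attributes are schematic assumptions for illustration only, not a conformant Clinical Document Architecture document or this platform's actual messages.

```python
import xml.etree.ElementTree as ET

HL7_NS = "urn:hl7-org:v3"  # the CDA namespace; the layout below is schematic

def observation_xml(code, display, value, unit):
    """Wrap one home measurement as a CDA-style <observation> fragment."""
    ET.register_namespace("", HL7_NS)
    obs = ET.Element(f"{{{HL7_NS}}}observation", classCode="OBS", moodCode="EVN")
    ET.SubElement(obs, f"{{{HL7_NS}}}code", code=code, displayName=display,
                  codeSystemName="LOINC")
    ET.SubElement(obs, f"{{{HL7_NS}}}value", value=str(value), unit=unit)
    return ET.tostring(obs, encoding="unicode")

# LOINC 8480-6 is systolic blood pressure; mm[Hg] is its UCUM unit.
xml_doc = observation_xml("8480-6", "Systolic blood pressure", 128, "mm[Hg]")
```

A real exchange would embed such observations in a full ClinicalDocument with header metadata (patient, author, custodian) before transmission.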
Monitoring Global Precipitation through UCI CHRS's RainMapper App on Mobile Devices
NASA Astrophysics Data System (ADS)
Nguyen, P.; Huynh, P.; Braithwaite, D.; Hsu, K. L.; Sorooshian, S.
2014-12-01
The Water and Development Information for Arid Lands-a Global Network (G-WADI) Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) GeoServer has been developed through a collaboration between the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine (UCI) and UNESCO's International Hydrological Program (IHP). The G-WADI PERSIANN-CCS GeoServer provides near real-time, high-resolution (0.04°, approx. 4 km) global (60°N-60°S) satellite precipitation estimated by the PERSIANN-CCS algorithm developed by the scientists at CHRS. The G-WADI PERSIANN-CCS GeoServer utilizes the open-source MapServer software from the University of Minnesota to provide user-friendly web-based mapping and visualization of the satellite precipitation data. Recent efforts have been made by the scientists at CHRS to provide free on-the-go access to the PERSIANN-CCS precipitation data through an application named RainMapper for mobile devices. RainMapper provides visualization of global satellite precipitation over the most recent 3, 6, 12, 24, 48 and 72-hour periods, overlaid on various basemaps. RainMapper uses the Google Maps application programming interface (API) and embedded global positioning system (GPS) access for better monitoring of the global precipitation data on mobile devices. Functionalities such as geographical search with voice-recognition technology make it easy for the user to explore near real-time precipitation at a given location. RainMapper also allows the precipitation information and visualizations to be conveniently shared with the public through social networks such as Facebook and Twitter. RainMapper is available for iOS and Android devices and can be downloaded (free) from the App Store and Google Play.
The usefulness of RainMapper was demonstrated through an application in tracking the evolution of Typhoon Rammasun over the Philippines in mid-July 2014.
Master Console System Monitoring and Control Development
NASA Technical Reports Server (NTRS)
Brooks, Russell A.
2013-01-01
The Master Console internship during the spring of 2013 involved the development of firing room displays at the John F. Kennedy Space Center (KSC). This position was with the Master Console Product Group (MCPG) on the Launch Control System (LCS) project. This project is responsible for the System Monitoring and Control (SMC) and Record and Retrieval (R&R) of launch operations data. The Master Console is responsible for: loading the correct software into each of the remaining consoles in the firing room, connecting the proper data paths to and from the launch vehicle and all ground support equipment, and initializing the entire firing room system to begin processing. During my internship, I developed a system health and status display for use by Master Console Operators (MCO) to monitor and verify the integrity of the servers, gateways, network switches, and firewalls used in the firing room.
2012-02-06
Event Interface: Custom ASCII, JSS Client (Spectrum) ... IT Infrastructure Performance Data/Vulnerability Assessment: eHealth, Spectrum NSM ... monitoring of infrastructure servers. The Concord product line (eHealth and Spectrum) can provide both real-time and historical ... Unicenter Network and Systems Management (NSM), Unicenter Asset Management, Spectrum, eHealth, Centennial Discovery. Table 12 summarizes the role of each.
NASA Astrophysics Data System (ADS)
Yussup, F.; Ibrahim, M. M.; Haris, M. F.; Soh, S. C.; Hasim, H.; Azman, A.; Razalim, F. A. A.; Yapp, R.; Ramli, A. A. M.
2016-01-01
With the growth of technology, many devices and pieces of equipment can be connected to the network and the internet to enable online data acquisition for real-time monitoring and control of devices located at remote sites. The centralized radiation monitoring system (CRMS) is a system that enables the area radiation levels at various locations in the Malaysian Nuclear Agency (Nuklear Malaysia) to be monitored centrally using a web browser. The Local Area Network (LAN) in Nuclear Malaysia is utilized in CRMS as the communication medium for acquiring the area radiation levels from the radiation detectors. The development of the system involved device configuration, wiring, network and hardware installation, and software and web development. This paper describes the software upgrade on the system server, which is responsible for acquiring and recording the area radiation readings from the detectors. The recorded readings are then retrieved by the web application and displayed on a website. Besides the main feature, which is acquiring the area radiation levels in Nuclear Malaysia centrally, the upgrade adds new features such as a uniform time interval for data recording and exporting, a warning system, and dose triggering.
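The two new recording features can be sketched as follows: binning irregular detector readings onto a uniform time interval, then flagging bins whose mean dose rate exceeds a trigger level. The interval, threshold value and data layout here are illustrative assumptions, not the agency's actual configuration.

```python
from collections import defaultdict

WARNING_uSv_h = 2.5  # hypothetical trigger level; real setpoints are not given

def resample_uniform(samples, interval_s=600):
    """Average irregular (epoch_seconds, dose_rate) samples into uniform bins,
    mimicking a fixed time interval for data recording and exporting."""
    bins = defaultdict(list)
    for t, dose in samples:
        bins[int(t // interval_s) * interval_s].append(dose)
    return {t0: sum(v) / len(v) for t0, v in sorted(bins.items())}

def warnings(uniform):
    """Return the bin start times whose mean dose rate reaches the trigger level."""
    return [t0 for t0, dose in uniform.items() if dose >= WARNING_uSv_h]
```

In a deployed system the flagged bins would drive the warning subsystem (e.g. an on-screen alert) rather than just being returned.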
ESUMS: a mobile system for continuous home monitoring of rehabilitation patients.
Strisland, Frode; Svagård, Ingrid; Seeberg, Trine M; Mathisen, Bjørn Magnus; Vedum, Jon; Austad, Hanne O; Liverud, Anders E; Kofod-Petersen, Anders; Bendixen, Ole Christian
2013-01-01
The pressure on healthcare services is building up for several reasons: the ageing population trend, the increase in lifestyle-related disease prevalence, and increased treatment capabilities with the general expectations they bring all add pressure. The use of ambient healthcare technologies can alleviate the situation by enabling time- and cost-efficient monitoring and follow-up of patients discharged from hospital care. We report on an ambulatory system developed for monitoring physical rehabilitation patients. The system consists of a wearable multisensor monitoring device; a mobile phone with a client application aggregating the collected data; a service-oriented-architecture-based server solution; and a PC application facilitating patient follow-up by their professional carers. The system has been tested and verified for accuracy in controlled-environment trials on healthy volunteers, and has also been usability-tested by 5 congestive heart failure patients and their nurses. This investigation indicated that patients were able to use the system, and that nurses got an improved basis for patient follow-up.
NASA Astrophysics Data System (ADS)
Licari, Daniele; Calzolari, Federico
2011-12-01
In this paper we introduce a new way to deal with Grid portals, referring to our implementation. L-GRID is a light portal for accessing the EGEE/EGI Grid infrastructure via the Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge of the Grid infrastructure. It provides control over the complete lifecycle of a Grid job, from its submission and status monitoring to the output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client-side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side, and the user needs only his own X.509 personal certificate. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. The X.509 personal certificate never leaves the local machine. The system reduces the time spent on job submission, while granting higher efficiency and a better security level in proxy delegation and management.
EuCliD (European Clinical Database): a database comparing different realities.
Marcelli, D; Kirchgessner, J; Amato, C; Steil, H; Mitteregger, A; Moscardò, V; Carioni, C; Orlandini, G; Gatti, E
2001-01-01
Quality and variability of dialysis practice are gaining more and more importance. Fresenius Medical Care (FMC), as a provider of dialysis, has the duty to continuously monitor and guarantee the quality of care delivered to patients treated in its European dialysis units. Accordingly, a new clinical database called EuCliD has been developed. It is a multilingual and fully codified database, using international standard coding tables as far as possible. EuCliD collects and handles sensitive medical patient data, fully assuring confidentiality. The infrastructure: a Domino server is installed in each country connected to EuCliD. All the centres belonging to a country are connected via modem to the country server, and all the Domino servers are connected via a Wide Area Network to the Headquarters Server in Bad Homburg (Germany). Each country server holds only anonymous data related to that particular country; the only place where all the anonymous data are available is the Headquarters Server. The data collection is strongly supported in each country by "key persons" with solid relationships to their respective national dialysis units. The quality of the data in EuCliD is ensured at different levels. At the end of January 2001, more than 11,000 patients treated in 135 centres located in 7 countries were already included in the system. FMC has put patient care at the centre of its activities for many years and is now able to provide transparency to the community (authorities, nephrologists, patients...), thus demonstrating the quality of the service.
Monitoring Heart Disease and Diabetes with Mobile Internet Communications
Mulvaney, David; Woodward, Bryan; Datta, Sekharjit; Harvey, Paul; Vyas, Anoop; Thakker, Bhaskar; Farooq, Omar; Istepanian, Robert
2012-01-01
A telemedicine system is described for monitoring vital signs and general health indicators of patients with cardiac and diabetic conditions. Telemetry from wireless sensors and readings from other instruments are combined into a comprehensive set of measured patient parameters. Using a combination of mobile device applications and a web browser, the data can be stored, accessed, and displayed via mobile internet communications with the central server. As an extra layer of security in the data transmission, information embedded in the data is used in its verification. The paper highlights features that could be enhanced from previous systems by using alternative components or methods. PMID:23213330
NASA Astrophysics Data System (ADS)
Vucnik, Matevz; Robinson, Johanna; Smolnikar, Miha; Kocman, David; Horvat, Milena; Mohorcic, Mihael
2015-04-01
Key words: portable air quality sensor, CITI-SENSE, participatory monitoring, VESNA-AQ. The emergence of low-cost, easy-to-use portable air quality sensor units is opening new possibilities for individuals to assess their exposure to air pollutants at a specific place and time, and to share this information through an Internet connection. Such portable sensor units are being used in an ongoing citizen science project called CITI-SENSE, which enables citizens to measure and share the data. The project aims, through the creation of citizens' observatories, to empower citizens to contribute to and participate in environmental governance, enabling them to support and influence community and societal priorities as well as associated decision making. An air quality measurement system based on the VESNA sensor platform was primarily designed within the project for use as a portable sensor unit in selected pilot cities (Belgrade, Ljubljana and Vienna) for monitoring outdoor exposure to pollutants. However, functionally the same unit with a different set of sensors could be used, for example, as an indoor platform. The version designed for the pilot studies was equipped with the following sensors: NO2, O3, CO, temperature, relative humidity, pressure and an accelerometer. The personal sensor unit is battery powered and housed in a plastic box. The VESNA-based air quality (AQ) monitoring system comprises the VESNA-AQ portable sensor unit, a smartphone app and the remote server. The personal sensor unit supports wireless connection to an Android smartphone via built-in Wi-Fi. The smartphone in turn also serves as the communication gateway towards the remote server, using any of the available data connections. Besides the gateway functionality, the role of the smartphone is to enrich the data coming from the personal sensor unit with GPS location, timestamps and user-defined context.
This, together with the accelerometer, enables users to better estimate their exposure in relation to physical activities, time and location. The end user can monitor the measured parameters through a smartphone application. The smartphone app implements a custom-developed LCSP (Lightweight Client Server Protocol), which is used to send requests to the VESNA-AQ unit and to exchange information. When data is obtained from the VESNA-AQ unit, the mobile application visualizes it. It also has an option to forward the data to the remote server in a custom JSON structure over an HTTP POST request. The server stores the data in the database and, in parallel, translates the data to WFS and forwards it to the main CITI-SENSE platform via WFS-T in a common XML format over an HTTP POST request. From there the data can be accessed through the Internet and visualised in different forms and web applications developed by the CITI-SENSE project. In the course of the project, the collected data will be made publicly available, enabling citizens to participate in environmental governance. Acknowledgements: CITI-SENSE is a Collaborative Project partly funded by the EU FP7-ENV-2012 under grant agreement no 308524 (www.citi-sense.eu).
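The gateway's enrichment-and-forward step might look like the sketch below. The JSON field names are assumptions, since the project's actual schema (and the LCSP request protocol) are not given in the abstract.

```python
import json
import time

def enrich_reading(raw, lat, lon, context):
    """Wrap a VESNA-AQ reading with GPS position, timestamp and user context,
    as the smartphone gateway does before the HTTP POST. Field names are
    illustrative assumptions, not the project's published schema."""
    return json.dumps({
        "measurements": raw,             # e.g. {"NO2": ..., "O3": ...}
        "gps": {"lat": lat, "lon": lon},
        "timestamp": int(time.time()),
        "context": context,              # user-defined, e.g. "commuting by bike"
    })

# The gateway would then POST this body to the remote server, e.g. via
# urllib.request.Request(server_url, data=body.encode(), method="POST").
```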
[Design of Smart Care Tele-Monitoring System for Mother and Fetus].
Xi, Haiyan; Gan, Guanghui; Zhang, Huilian; Chen, Chaomin
2015-03-01
To study and design a maternal and fetal monitoring system based on cloud computing and the Internet of Things, which can monitor and take smart care of mother and fetus 24 h a day. Using a new kind of wireless fetal monitoring detector and a mobile phone, the doctor can keep in touch with the hospital through the internet. The mobile terminal was developed on the Android system; it accepts the fetal heart rate and uterine contraction data transmitted from the wireless detector, exchanges information with the server, and displays the monitoring data and the doctor's advice in real time. The mobile phone displays the fetal heart rate curve and uterine contraction curve in real time and records the fetus's growth. The system implements real-time communication between the doctor and the user through wireless communication technology. It removes the constraint of the traditional telephone cable, so users can receive remote monitoring from medical institutions at home or in the nearest community at any time, providing a health and safety guarantee for mother and fetus.
A Testbed for Data Fusion for Helicopter Diagnostics and Prognostics
2003-03-01
... and algorithm design and tuning in order to develop advanced diagnostic and prognostic techniques for aircraft health monitoring ... development of models for diagnostics, prognostics, and anomaly detection (Figure 5: VMEP Server Browser Interface) ... detections, and prognostic prediction time horizons. The VMEP system, and in particular the web component, are ideal for performing data collection.
NASA Technical Reports Server (NTRS)
Bailey, Brandon
2015-01-01
Historically, security within organizations was thought of as an IT function (web sites/servers, email, workstation patching, etc.). The threat landscape has evolved (script kiddies, hackers, Advanced Persistent Threats (APT), nation states, etc.) and the attack surface has expanded as networks have become interconnected. Some security posture factors: the network layer (routers, firewalls, etc.); computer network defense (IPS/IDS, sensors, continuous monitoring, etc.); industrial control systems (ICS); and software security (COTS, FOSS, custom, etc.).
Dominguez, Luis A.; Yildirim, Battalgazi; Husker, Allen L.; Cochran, Elizabeth S.; Christensen, Carl; Cruz-Atienza, Victor M.
2015-01-01
Each volunteer computer monitors ground motion and communicates using the Berkeley Open Infrastructure for Network Computing (BOINC; Anderson, 2004). Using a standard short-term average/long-term average (STA/LTA) algorithm (Earle and Shearer, 1994; Cochran, Lawrence, Christensen, Chung, 2009; Cochran, Lawrence, Christensen, and Jakka, 2009), the volunteer computer and sensor systems detect abrupt changes in the acceleration recordings. Each time a possible trigger signal is declared, a small package of information containing sensor and ground-motion information is streamed to one of the QCN servers (Chung et al., 2011). Trigger signals, correlated in space and time, are then processed by the QCN server to look for potential earthquakes.
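The cited short-term/long-term average detector can be sketched in a few lines. The window lengths and trigger threshold below are illustrative choices, not QCN's operational settings.

```python
import numpy as np

def sta_lta_triggers(accel, sta_len=5, lta_len=100, threshold=5.0):
    """Flag samples where the short-term average of |acceleration| exceeds
    `threshold` times the long-term average (the classic STA/LTA detector)."""
    x = np.abs(np.asarray(accel, dtype=float))
    sta = np.convolve(x, np.ones(sta_len) / sta_len, mode="valid")
    lta = np.convolve(x, np.ones(lta_len) / lta_len, mode="valid")
    n = min(len(sta), len(lta))            # align both averages to the trace end
    ratio = sta[-n:] / np.maximum(lta[-n:], 1e-12)
    return np.flatnonzero(ratio > threshold)
```

A quiet trace followed by a sudden burst of strong motion produces trigger indices at the burst onset, which is the small "possible trigger" event a volunteer node would stream to the server.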
Luque, Joaquín; Larios, Diego F; Personal, Enrique; Barbancho, Julio; León, Carlos
2016-05-18
Environmental audio monitoring is a huge area of interest for biologists all over the world. This is why several audio monitoring systems have been proposed in the literature, which can be classified into two different approaches: acquisition and compression of all audio patterns in order to send them as raw data to a main server; or specific recognition systems based on audio patterns. The first approach has the drawback of the large amount of information to be stored on the main server; moreover, this information requires considerable effort to analyze. The second approach has the drawback of poor scalability when new patterns need to be detected. To overcome these limitations, this paper proposes an environmental Wireless Acoustic Sensor Network architecture based on generic descriptors from the MPEG-7 standard. These descriptors are shown to be suitable for the recognition of different patterns, allowing high scalability. The proposed parameters have been tested by recognizing different behaviors of two anuran species that live in Spanish natural parks, the Epidalea calamita and Alytes obstetricans toads, demonstrating high classification performance.
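As a flavor of the generic descriptors involved, the sketch below computes the spectral centroid of an audio frame, one of the simplest MPEG-7-style low-level audio features. It is a simplified stand-in, not the authors' exact descriptor set or the standardised extraction chain.

```python
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Amplitude-weighted mean frequency of one windowed audio frame,
    in the spirit of the MPEG-7 Audio Spectrum Centroid descriptor."""
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float((freqs * spectrum).sum() / max(spectrum.sum(), 1e-12))
```

A classifier for anuran calls could then be trained on sequences of such frame-level descriptors instead of raw audio, which is what makes the approach scalable to new patterns.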
A remote monitor of bed patient cardiac vibration, respiration and movement.
Mukai, Koji; Yonezawa, Yoshiharu; Ogawa, Hidekuni; Maki, Hiromichi; Caldwell, W Morton
2009-01-01
We have developed a remote system for monitoring the heart rate, respiration rate and movement behavior of at-home elderly people who are living alone. The system consists of a 40 kHz ultrasonic transmitter and receiver, linear integrated circuits, a low-power 8-bit single-chip microcomputer and an Internet server computer. The 40 kHz ultrasonic transmitter and receiver are installed in a bed mattress. The transmitted signal diffuses into the bed mattress, and the amplitude of the received ultrasonic wave is modulated by the shape of the mattress and by parameters such as respiration, cardiac vibration and movement. The modulated ultrasonic signal is received and demodulated by an envelope detection circuit. Low-pass, high-pass and band-pass filters separate the respiration, cardiac vibration and movement signals, which are fed into the microcontroller and digitized at a sampling rate of 50 Hz by 8-bit A/D converters. The digitized data are sent to the server computer as a serial signal. This computer stores the data and also creates a graphic chart of the latest hour. The person's family or caregiver can download this chart via the Internet at any time.
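A digital analogue of the band-splitting step can be sketched as below. The authors used analogue low-, high- and band-pass filters ahead of the 50 Hz ADC; this pure-NumPy version, with moving-average low-pass filters and illustrative window lengths, only mimics the idea of separating the slow respiration component from the faster cardiac vibration.

```python
import numpy as np

def moving_average(x, win):
    """Simple FIR low-pass: mean over `win` samples."""
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="same")

def split_bands(signal, fs=50):
    """Illustrative digital band split at a 50 Hz sampling rate: respiration
    as the slow component, cardiac vibration as the smoothed fast residue."""
    respiration = moving_average(signal, win=fs)           # ~1 s window
    cardiac = moving_average(signal - respiration, win=5)  # smooth the residue
    return respiration, cardiac
```

Feeding in a synthetic mixture of a 0.25 Hz breathing wave and a weaker 1.2 Hz cardiac component, the respiration output is dominated by the 0.25 Hz line.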
NASA Astrophysics Data System (ADS)
Soeharwinto; Sinulingga, Emerson; Siregar, Baihaqi
2017-01-01
Accurate information can be useful for authorities in making good policies for prevention and mitigation after a volcanic eruption disaster. Monitoring the environmental parameters of a post-eruption volcano provides important information for the authorities. Such a monitoring system can be developed using Wireless Sensor Network technology, which has already been applied in floods early-warning systems, solar radiation mapping, and watershed monitoring. This paper describes the implementation of a remote environment monitoring system for Mount Sinabung post-eruption. The system monitors three environmental parameters: soil condition, water quality and (outdoor) air quality. Motes equipped with the appropriate sensors, as components of the monitoring system, are placed at sample locations. The measured values from the sensors are periodically sent to a data server using a 3G/GPRS communication module, and the data can be downloaded by the user for further analysis. The measurement and data analysis results generally indicate that the environmental parameters are in the normal/standard range. The sample locations are safe for living and suitable for cultivation, but awareness is strictly required due to the uncertainty of Sinabung's status.
The Application of Wireless Sensor Networks in Management of Orchard
NASA Astrophysics Data System (ADS)
Zhu, Guizhi
A monitoring system based on a wireless sensor network is established, addressing the present difficulty of information acquisition in orchards on hills. Temperature and humidity sensors are deployed around the fruit trees to gather real-time environmental parameters, and self-organizing wireless communication modules, which transmit the data to a remote central server, realize the monitoring function. By setting the parameters for intelligent data analysis and judgment, remote diagnosis and decision-support information can be fed back to users in a timely and effective manner.
Secure Utilization of Beacons and UAVs in Emergency Response Systems for Building Fire Hazard
Seo, Seung-Hyun; Choi, Jung-In; Song, Jinseok
2017-01-01
An intelligent emergency system for hazard monitoring and building evacuation is a very important application area in Internet of Things (IoT) technology. Through the use of smart sensors, such a system can provide more vital and reliable information to first-responders and also reduce the incidents of false alarms. Several smart monitoring and warning systems do already exist, though they exhibit key weaknesses such as a limited monitoring coverage and security, which have not yet been sufficiently addressed. In this paper, we propose a monitoring and emergency response method for buildings by utilizing beacons and Unmanned Aerial Vehicles (UAVs) on an IoT security platform. In order to demonstrate the practicability of our method, we also implement a proof of concept prototype, which we call the UAV-EMOR (UAV-assisted Emergency Monitoring and Response) system. Our UAV-EMOR system provides the following novel features: (1) secure communications between UAVs, smart sensors, the control server and a smartphone app for security managers; (2) enhanced coordination between smart sensors and indoor/outdoor UAVs to expand real-time monitoring coverage; and (3) beacon-aided rescue and building evacuation. PMID:28946659
Cardio-PACs: a new opportunity
NASA Astrophysics Data System (ADS)
Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary
2000-05-01
It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
LHCb Online event processing and filtering
NASA Astrophysics Data System (ADS)
Alessio, F.; Barandela, C.; Brarda, L.; Frank, M.; Franek, B.; Galli, D.; Gaspar, C.; Herwijnen, E. v.; Jacobsson, R.; Jost, B.; Köstner, S.; Moine, G.; Neufeld, N.; Somogyi, P.; Stoica, R.; Suman, S.
2008-07-01
The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards these events are distributed to a large farm of PC-servers using a high-speed Gigabit Ethernet network. Synchronisation and event management is achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event-readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel files are used by various monitoring and calibration processes running within the LHCb Online system. The entire data-flow is controlled and configured by means of a SCADA system and several databases. After an overview of the LHCb data acquisition and its design principles this paper will emphasize the LHCb event filter system, which is now implemented using the final hardware and will be ready for data-taking for the LHC startup. Control, configuration and security aspects will also be discussed.
NASA Astrophysics Data System (ADS)
Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui
2017-01-01
Combine harvesters usually work in sparsely populated areas with harsh environments. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses USB cameras to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The video data are compressed with the JPEG image compression standard and the monitoring picture is transferred to a remote monitoring center over the network for long-range monitoring and management. The paper first describes the necessity of the system's design, then briefly introduces the hardware and software implementation, and then details the configuration and compilation of the embedded Linux operating system and the compilation and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experiments, the remote video monitoring system achieved 30 fps at a resolution of 800x600, and the response delay over the public network was about 40 ms.
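One plausible way to move JPEG-compressed frames over the network is simple length-prefixed framing on top of TCP, sketched below. This framing is an assumption, a common pattern for MJPEG-style streamers rather than the paper's actual protocol.

```python
import struct

def pack_frame(jpeg_bytes):
    """Prefix one JPEG-compressed frame with a 4-byte big-endian length,
    so the receiver can split the TCP byte stream back into frames."""
    return struct.pack(">I", len(jpeg_bytes)) + jpeg_bytes

def unpack_frame(buffer):
    """Return (frame, remaining_bytes), or (None, buffer) if incomplete."""
    if len(buffer) < 4:
        return None, buffer
    (n,) = struct.unpack(">I", buffer[:4])
    if len(buffer) < 4 + n:
        return None, buffer
    return buffer[4:4 + n], buffer[4 + n:]
```

On the harvester side each captured frame would be JPEG-encoded and packed before being written to the socket; the monitoring center loops `unpack_frame` over its receive buffer to display frames as they complete.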
A Monitoring System for Vegetable Greenhouses based on a Wireless Sensor Network
Li, Xiu-hong; Cheng, Xiao; Yan, Ke; Gong, Peng
2010-01-01
A wireless sensor network-based automatic monitoring system is designed for monitoring the life conditions of greenhouse vegetables. The complete system architecture includes a group of sensor nodes, a base station, and an Internet data center. For the wireless sensor node, the JN5139 microprocessor is adopted as the core component and the ZigBee protocol is used for wireless communication between nodes. With an ARM7 microprocessor and the embedded ZKOS operating system, a proprietary gateway node is developed to provide data aggregation, screen display, system configuration and GPRS-based remote data forwarding. Through a client/server mode, the management software of the remote data center achieves real-time data distribution and time-series analysis. In addition, a GSM-short-message-based interface is developed for sending real-time environmental measurements, and for raising an alarm when a measurement exceeds a pre-defined threshold. The whole system has been tested for over one year with satisfactory results, which indicate that the system is very useful for greenhouse environment monitoring. PMID:22163391
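The threshold-based alarming described for the GSM interface can be sketched as a simple range check; the field names and message format below are assumptions for illustration, not the system's actual protocol:

```python
def check_thresholds(reading: dict, limits: dict) -> list:
    """Return alarm messages for any measurement outside its [low, high] range.

    reading: e.g. {"temperature": 35.0}; limits: e.g. {"temperature": (10.0, 30.0)}.
    """
    alarms = []
    for name, value in reading.items():
        low, high = limits[name]
        if not (low <= value <= high):
            alarms.append(f"ALARM {name}={value} outside [{low}, {high}]")
    return alarms
```

In a deployment like the one described, each returned message would be handed to the GSM module for delivery as a short message.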
The Standard Autonomous File Server, a Customized, Off-the-Shelf Success Story
NASA Technical Reports Server (NTRS)
Semancik, Susan K.; Conger, Annette M.; Obenschain, Arthur F. (Technical Monitor)
2001-01-01
The Standard Autonomous File Server (SAFS), which includes both off-the-shelf hardware and software, uses an improved automated file transfer process to provide a quicker, more reliable, prioritized file distribution for customers of near real-time data without interfering with the assets involved in the acquisition and processing of the data. It operates as a stand-alone solution, monitoring itself, and providing an automated fail-over process to enhance reliability. This paper will describe the unique problems and lessons learned both during the COTS selection and integration into SAFS, and during the system's first year of operation in support of NASA's satellite ground network. COTS was the key factor in allowing the two-person development team to deploy systems in less than a year, meeting the required launch schedule. The SAFS system has been so successful that it is becoming a NASA standard resource, leading to its nomination for NASA's Software of the Year Award in 1999.
Development of EPA Protocol Information Enquiry Service System Based on Embedded ARM Linux
NASA Astrophysics Data System (ADS)
Peng, Daogang; Zhang, Hao; Weng, Jiannian; Li, Hui; Xia, Fei
Industrial Ethernet is a new technology for industrial network communications developed in recent years. In the field of industrial automation in China, EPA is the first standard accepted and published by ISO, and has been included as Type 14 in the fourth edition of the IEC 61158 Fieldbus standard. According to the EPA standard, field devices such as industrial field controllers, actuators and other instruments are all able to communicate based on the Ethernet standard. In this paper, the Atmel AT91RM9200 embedded development board and open-source embedded Linux are used to develop an EPA protocol information enquiry service system based on embedded ARM Linux. The system implements an EPA server program for EPA data acquisition, and the EPA information enquiry service is available to programs on local or remote hosts through a socket interface. An EPA client can access the data and information of other EPA devices on the EPA network once it establishes a connection with the monitoring port of the server.
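The enquiry service's socket interface can be sketched as a minimal line-oriented TCP server; the tag names, reply format and data values below are invented for illustration and do not reflect the actual EPA protocol:

```python
import socket
import socketserver
import threading

DATA = {"temp": "21.5", "flow": "3.2"}  # stand-in for EPA field-device data

class InquiryHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one tag name per connection and reply with its value.
        tag = self.rfile.readline().strip().decode()
        self.wfile.write((DATA.get(tag, "unknown") + "\n").encode())

def start_server() -> socketserver.TCPServer:
    """Start the enquiry service on an ephemeral local port."""
    server = socketserver.TCPServer(("127.0.0.1", 0), InquiryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def inquire(port: int, tag: str) -> str:
    """Client side: connect to the monitoring port and ask for one tag."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall((tag + "\n").encode())
        return sock.makefile().readline().strip()
```

The real system would speak the EPA wire format rather than newline-delimited text, but the connect/query/reply structure is the same.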
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking and in the automatic collection of biometric data, such as a person's face clip, for recognition purposes.
Application of Sensor Technology for the Efficient Positioning and Assembling of Ship Blocks
NASA Astrophysics Data System (ADS)
Lee, Sangdon; Eun, Seongbae; Jung, Jai Jin; Song, Hacheol
2010-09-01
This paper proposes the application of sensor technology to assemble ship blocks efficiently. A sensor-based monitoring system is designed and implemented to improve shipbuilding productivity by reducing the labor cost of adjusting the relative positioning of ship blocks during the pre-erection or erection stage. For real-time remote monitoring of the relative distances between two ship blocks, sensor nodes are applied to measure the distances between corresponding target points on the blocks. Highly precise positioning data can be transferred to a monitoring server via a wireless network and analyzed to support the decision on the next construction step: further adjustment or seam welding between the ship blocks. The developed system is expected to be put to practical use and to increase productivity during ship block assembly.
Twin-tailed fail-over for fileservers maintaining full performance in the presence of a failure
Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.
2008-02-12
A method for maintaining full performance of a file system in the presence of a failure is provided. The file system having N storage devices, where N is an integer greater than zero and N primary file servers where each file server is operatively connected to a corresponding storage device for accessing files therein. The file system further having a secondary file server operatively connected to at least one of the N storage devices. The method including: switching the connection of one of the N storage devices to the secondary file server upon a failure of one of the N primary file servers; and switching the connections of one or more of the remaining storage devices to a primary file server other than the failed file server as necessary so as to prevent a loss in performance and to provide each storage device with an operating file server.
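The switching step of the claimed method can be sketched as a reassignment of storage devices when a primary file server fails: one orphaned device moves to the secondary server, and any others are spread over the surviving primaries. This is a simplified reading of the claim; the round-robin redistribution policy is an assumption:

```python
from itertools import cycle

def fail_over(assignment: dict, failed_server: str, secondary: str = "S") -> dict:
    """Reassign every storage device of the failed primary file server.

    assignment maps storage device -> file server. The first orphaned device
    goes to the secondary server; remaining orphans are spread round-robin
    over the surviving primaries so every device keeps an operating server.
    """
    survivors = sorted({s for s in assignment.values() if s != failed_server})
    orphans = [d for d, s in assignment.items() if s == failed_server]
    new_assignment = dict(assignment)
    targets = [secondary] + survivors
    for device, server in zip(orphans, cycle(targets)):
        new_assignment[device] = server
    return new_assignment
```

The point of the twin-tailed wiring is that these reassignments are connection switches, not data copies, so full performance is restored immediately.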
A wirelessly programmable actuation and sensing system for structural health monitoring
NASA Astrophysics Data System (ADS)
Long, James; Büyüköztürk, Oral
2016-04-01
Wireless sensor networks promise to deliver low-cost, low-power and massively distributed systems for structural health monitoring. A key component of these systems, particularly when sampling rates are high, is the capability to process data within the network. Although progress has been made towards this vision, it remains a difficult task to develop and program 'smart' wireless sensing applications. In this paper we present a system which allows data acquisition and computational tasks to be specified in Python, a high-level programming language, and executed within the sensor network. Key features of this system include the ability to execute custom application code without firmware updates, to run multiple users' requests concurrently and to conserve power through adjustable sleep settings. Specific examples of sensor node tasks are given to demonstrate the features of this system in the context of structural health monitoring. The system comprises individual firmware for nodes in the wireless sensor network, and a gateway server and web application through which users can remotely submit their requests.
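Since the paper's tasks are specified in Python, the node-side execution step can be illustrated as running a user-submitted task source against buffered readings without a firmware update. The `process` entry-point name and the tiny allowlist are assumptions, and restricting `exec` this way is not a real security boundary; the paper's actual isolation mechanism is not described in the abstract:

```python
# Minimal names made visible to submitted tasks (illustrative allowlist only).
SAFE_GLOBALS = {"__builtins__": {}, "sum": sum, "len": len, "min": min, "max": max}

def run_task(source: str, readings: list):
    """Execute a user-supplied processing task against buffered sensor readings.

    The task must define a function named `process(readings)`; its return
    value would be sent back to the gateway server.
    """
    namespace = dict(SAFE_GLOBALS)
    exec(source, namespace)  # define the task's functions in the namespace
    return namespace["process"](readings)
```

For example, a user could submit a mean-acceleration task as a string and receive the reduced value instead of the raw high-rate samples, which is the bandwidth-saving point of in-network processing.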
Monitoring activities of daily living based on wearable wireless body sensor network.
Kańtoch, E; Augustyniak, P; Markiewicz, M; Prusak, D
2014-01-01
With recent advances in microprocessor chip technology, wireless communication, and biomedical engineering, it is possible to develop miniaturized ubiquitous health monitoring devices that are capable of recording physiological and movement signals during daily life activities. The aim of this research is to implement and test a prototype health monitoring system. The system consists of a body central unit with a Bluetooth module and wearable sensors: a custom-designed ECG sensor, a temperature sensor, a skin humidity sensor and accelerometers placed on the human body or integrated with clothes, plus a network gateway to forward data to a remote medical server. The system includes a custom-designed transmission protocol and a remote web-based graphical user interface for real-time data analysis. Experimental results for a group of subjects who performed various activities (e.g., working, running) showed a maximum absolute error of 5% compared to certified medical devices. The results are promising and indicate that the developed wireless wearable monitoring system meets the challenges of multi-sensor human health monitoring during daily activities and opens new opportunities for developing novel healthcare services.
ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration
2016-10-01
The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, on all processing stages. As it consists of different components that depend on other applications (such as distributed storage, and different sources of information) we need to monitor the conditions of many heterogeneous subsystems, to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytic and visualization package, provided by CERN IT Department. EventIndex monitoring is used both by the EventIndex team and ATLAS Distributed Computing shifts crew.
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those systems dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.
A Scalability Model for ECS's Data Server
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.; Singhal, Mukesh
1998-01-01
This report presents, in four chapters, a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes whether the planned architecture of the Data Server will support an increase in workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The report includes a summary of the architecture of ECS's Data Server as well as a high-level description of the Ingest and Retrieval operations as they relate to it. This description forms the basis for the development of the scalability model of the Data Server and the methodology used to solve it.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yussup, F., E-mail: nolida@nm.gov.my; Ibrahim, M. M., E-mail: maslina-i@nm.gov.my; Soh, S. C.
With the growth of technology, many devices and equipment can be connected to the network and Internet to enable online data acquisition for real-time monitoring and control from devices located at remote sites. The centralized radiation monitoring system (CRMS) is a system that enables the area radiation level at various locations in the Malaysian Nuclear Agency (Nuklear Malaysia) to be monitored centrally using a web browser. The Local Area Network (LAN) in Nuclear Malaysia is utilized in CRMS as the communication medium for acquiring the area radiation levels from radiation detectors. The development of the system involves device configuration, wiring, network and hardware installation, and software and web development. This paper describes the software upgrade on the system server that acquires and records the area radiation readings from the detectors. The recorded readings are retrieved by a web program to be displayed on a website. Besides the main feature, which is acquiring the area radiation levels in Nuclear Malaysia centrally, the upgrade adds new features such as a uniform time interval for data recording and exporting, a warning system, and dose triggering.
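The uniform-time-interval recording feature can be sketched as resampling irregular detector readings into fixed buckets; averaging within a bucket is an assumed policy, as the abstract does not say how the upgrade aggregates readings:

```python
def resample(readings: list, interval: float) -> dict:
    """Average (timestamp, value) readings into uniform time buckets.

    Returns {bucket_start_time: mean_value}, giving one record per interval
    regardless of how irregularly the detectors report.
    """
    buckets = {}
    for t, value in readings:
        start = int(t // interval) * interval
        buckets.setdefault(start, []).append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}
```

Uniform records of this kind are what make export and comparison across detectors straightforward.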
Research on time synchronization scheme of MES systems in manufacturing enterprise
NASA Astrophysics Data System (ADS)
Yuan, Yuan; Wu, Kun; Sui, Changhao; Gu, Jin
2018-04-01
With the popularity of informatization and automated production in manufacturing enterprises, data interaction between business systems is more and more frequent, so the required accuracy of time is getting higher and higher. However, NTP-based network time synchronization lacks the corresponding redundancy and monitoring mechanisms: when a failure occurs, it can only be compensated for after the event, which greatly affects production data and system interaction. Based on this, the paper proposes an RHCS-based NTP server architecture that automatically detects NTP status and performs failover via scripts.
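The health-check-and-failover logic such a script performs can be sketched as follows; the 100 ms offset threshold and the status representation are assumptions for illustration, not values from the paper:

```python
def select_ntp_server(statuses: dict, current: str):
    """Pick which NTP server the cluster should use.

    statuses maps server name -> (reachable, offset_seconds). The current
    server is kept while healthy; otherwise fail over to the reachable
    server with the smallest clock offset. Returns None if none is usable.
    """
    reachable, offset = statuses.get(current, (False, float("inf")))
    if reachable and abs(offset) < 0.1:  # assumed health threshold: 100 ms
        return current
    healthy = [(abs(off), name) for name, (up, off) in statuses.items() if up]
    return min(healthy)[1] if healthy else None
```

In an RHCS deployment, a resource script would run a check like this periodically and trigger the service relocation when the selected server changes.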
Yang, Chao-Tung; Liao, Chi-Jui; Liu, Jung-Chun; Den, Walter; Chou, Ying-Chyi; Tsai, Jaw-Ji
2014-02-01
Indoor air quality monitoring in healthcare environments has become a critical part of hospital management and policy. Manual air sampling and analysis are cost-inhibitive and do not provide real-time air quality data and response measures. In this month-long study over 14 sampling locations in a public hospital in Taiwan, we observed a positive correlation between CO2 concentration and population, total bacteria, and particulate matter concentrations; thus monitoring CO2 concentration as a general indicator of air quality could be a viable option. Consequently, an intelligent environmental monitoring system consisting of a CO2/temperature/humidity sensor, a digital plug, and a ZigBee router and coordinator was developed and tested. The system also included a backend server that received and analyzed data, and that activated ventilation and air purifiers when the CO2 concentration exceeded a pre-set value. Alert messages can also be delivered to offsite users through mobile devices.
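The backend's set-point actuation can be sketched as an on/off rule; the 1000 ppm set point and the hysteresis band are assumptions added here to avoid rapid toggling, not values reported by the study:

```python
def control_step(co2_ppm: float, limit: float = 1000.0,
                 hysteresis: float = 100.0, ventilating: bool = False) -> bool:
    """Decide whether ventilation/purifiers should be on after one reading.

    Switch on above the CO2 set point; switch off only once the level has
    fallen a hysteresis band below it; otherwise keep the current state.
    """
    if co2_ppm > limit:
        return True
    if co2_ppm < limit - hysteresis:
        return False
    return ventilating
```

The hysteresis band is a standard refinement for threshold actuation: without it, readings hovering near the set point would cycle the equipment on and off.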
Zebra: A striped network file system
NASA Technical Reports Server (NTRS)
Hartman, John H.; Ousterhout, John K.
1992-01-01
The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity updates.
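The parity mechanism behind Zebra's availability claim is the standard RAID-style XOR over the fragments of a stripe; a minimal sketch (fragment layout simplified to equal-length byte strings):

```python
def parity(fragments: list) -> bytes:
    """XOR equal-length stripe fragments into a parity fragment."""
    out = bytearray(len(fragments[0]))
    for fragment in fragments:
        for i, byte in enumerate(fragment):
            out[i] ^= byte
    return bytes(out)

def reconstruct(surviving: list, parity_fragment: bytes) -> bytes:
    """Rebuild the fragment of a failed server from survivors plus parity.

    XORing the survivors with the parity cancels every present fragment,
    leaving exactly the missing one.
    """
    return parity(surviving + [parity_fragment])
```

Because clients write whole stripe fragments sequentially (as in an LFS segment), the parity can be computed once per stripe at write time, which is the "efficient parity computation" advantage the abstract mentions.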
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
NASA Astrophysics Data System (ADS)
Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi
2016-08-01
The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC is comprised of several Linux-based servers. These servers are assigned various roles including: the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels, and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The Telemetry Engineering Display (TED) server is the RTC interface to TMT and other subsystems. The TED server receives all external commands and dispatches them to the rest of the RTC servers and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The Persistent Telemetry Storage (PTS) server contains fault tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).
Integrated Environment for Ubiquitous Healthcare and Mobile IPv6 Networks
NASA Astrophysics Data System (ADS)
Cagalaban, Giovanni; Kim, Seoksoo
The development of Internet technologies based on the IPv6 protocol will allow real-time monitoring of people with health deficiencies and improve the independence of elderly people. This paper proposes a ubiquitous healthcare system for personalized healthcare services with the support of mobile IPv6 networks. Specifically, the paper discusses the integration of ubiquitous healthcare and wireless networks and its functional requirements. This allows an integrated environment where heterogeneous devices such as mobile devices and body sensors can continuously monitor patient status and communicate remotely with healthcare servers, physicians, and family members to effectively deliver healthcare services.
NASA Astrophysics Data System (ADS)
Varadan, Vijay K.; Kumar, Prashanth S.; Oh, Sechang; Mathur, Gyanesh N.; Rai, Pratyush; Kegley, Lauren
2011-04-01
Heart-related ailments have been a major cause of death for both men and women in the United States. Since 1985, more women than men have died of cardiac or cardiovascular ailments, for reasons that are not yet well understood. The lack of a deterministic understanding of this phenomenon makes continuous real-time monitoring of cardiovascular health the best approach for early detection of pathophysiological changes and of events indicative of chronic cardiovascular disease in women. This approach requires sensor systems that can be seamlessly mounted on women's everyday clothing. With this application in focus, this paper describes an e-bra platform with sensors for heart rate monitoring. The sensors, nanomaterial- or textile-based dry electrodes, capture heart activity in the form of an electrocardiogram (ECG) and relay it to a compact, textile-mountable amplifier and wireless transmitter module, which forwards it to a smartphone. The ECG signal acquired on the smartphone can be transmitted to cyberspace for post-processing. As an example, the paper discusses heart rate estimation and heart rate variability. The data flow from sensor to smartphone to server (cyberinfrastructure) is discussed. The cyberinfrastructure-based signal post-processing offers an opportunity for automated emergency response that can be initiated from the server or the smartphone itself. Detailed protocols for both scenarios are presented and their relevance to the present emergency healthcare response system is discussed.
Automatic and continuous landslide monitoring: the Rotolon Web-based platform
NASA Astrophysics Data System (ADS)
Frigerio, Simone; Schenato, Luca; Mantovani, Matteo; Bossi, Giulia; Marcato, Gianluca; Cavalli, Marco; Pasuto, Alessandro
2013-04-01
Mount Rotolon (Eastern Italian Alps) is affected by a complex landslide that, since 1985, has threatened the nearby village of Recoaro Terme. The first written record of a landslide occurrence dates back to 1798. After the last re-activation in November 2010 (637 mm of intense rainfall recorded in the 12 days prior to the event), a mass of approximately 320,000 m3 detached from the south flank of Mount Rotolon and evolved into a fast debris flow that ran for about 3 km along the stream bed. A real-time monitoring system was required to detect early indications of rapid movement, potentially saving lives and property. A web-based platform for automatic and continuous monitoring was designed as a first step in the implementation of an early-warning system. Measurements collected by the automated geotechnical and topographic instrumentation deployed over the landslide body are gathered in a central box station. After the calibration process, they are transmitted by web services to a local server, where graphs, maps, reports and alert announcements are automatically generated and updated. All the processed information is available via web browser with different access rights. The web environment provides the following advantages: 1) data is collected from different data sources and matched in a single server-side frame; 2) a remote user interface allows regular technical maintenance and direct access to the instruments; 3) the data management system is synchronized and automatically tested; 4) a graphical user interface in the browser provides a user-friendly tool for decision-makers to interact with a continuously updated system. On this site two monitoring systems are currently in operation: 1) a GB-InSAR radar interferometer (University of Florence, Department of Earth Science) and 2) an Automated Total Station (ATS) combined with an extensometer network in a web-based solution (CNR-IRPI Padova).
This work deals with details on methodology, services and techniques adopted for the second monitoring solution. The activity directly interfaces with local Civil Protection agency, Regional Geological Service and local authorities with integrated roles and aims.
Development and implementation of a web-based system to study children with malnutrition.
Syed-Mohamad, Sharifah-Mastura
2009-01-01
To develop and implement a collective web-based system to monitor child growth in order to study children with malnutrition. The system was developed using prototyping system development methodology. The implementation was carried out using open-source technologies that include Apache Web Server, PHP scripting, and MySQL database management system. There were four datasets collected by the system: demographic data, measurement data, parent data, and food program data. The system was designed to be used by two groups of users, the clinics and the researchers. The Growth Monitor System was successfully developed and used for the study, "Geoinformation System (GIS) and Remote Sensing in Mapping of Children with Malnutrition." Data collection was implemented in public clinics from two districts in the state of Kelantan, Malaysia. The development of an integrated web-based system, Growth Monitor, for the study of children with malnutrition has been achieved. This system can be expanded to new partners who are involved in the study of children with malnutrition in other parts of Malaysia as well as other countries.
Optimal Self-Tuning PID Controller Based on Low Power Consumption for a Server Fan Cooling System.
Lee, Chengming; Chen, Rongshun
2015-05-20
Recently, saving cooling power in servers by controlling fan speed has attracted considerable attention because of the increasing demand for high-density servers. This paper presents an optimal self-tuning proportional-integral-derivative (PID) controller, combining a PID neural network (PIDNN) with fan-power-based optimization of the transient-state temperature response in the time domain, for a server fan cooling system. Because the thermal model of the cooling system is nonlinear and complex, a server mockup simulating a 1U rack server was constructed and a fan power model was created using a third-order nonlinear curve fit to determine the cooling power consumed under fan speed control. The PIDNN with a time-domain criterion is used to tune all PID gains online and optimally. The proposed controller was validated through step-response experiments in which the server operated from the low- to the high-power state. The results show that up to 14% of a server's fan cooling power can be saved if the fan control permits a slight temperature overshoot in the electronic components, which may provide a power-saving strategy for tuning the PID controller that drives the server fan speed.
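The controller at the core of the paper is a discrete PID loop; a minimal sketch with fixed gains (in the paper the gains would be tuned online by the PIDNN, which is not reproduced here):

```python
class PID:
    """Discrete PID controller: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint: float, measurement: float) -> float:
        """One control step; the return value would drive the fan speed."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Note the sign convention: for cooling, the measured temperature above the set point must map to a higher fan speed, so the raw output is typically negated or the error reversed when driving the fan.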
Mobile real-time data acquisition system for application in preventive medicine.
Neubert, Sebastian; Arndt, Dagmar; Thurow, Kerstin; Stoll, Regina
2010-05-01
This article presents the development of a system for online monitoring of a subject's physiological parameters and subjective workload regardless of location, which enables studies on occupational health. In the occupational health sector, modern acquisition systems are needed that subjects can use during their usual daily routines without being influenced by the presence of an examiner; moreover, the system's influence on the subject should be reduced to a minimum so that the examination yields reliable data. The acquisition system is based on a mobile handheld (or smartphone), which manages both the communication process and the input of dialog data (e.g., questionnaires). A sensor electronics module permits the acquisition of different physiological parameters and their online transmission to the handheld via Bluetooth; the handheld and the sensor electronics module constitute a wireless personal area network. The handheld performs initial analysis, synchronization of the data, and continuous data transfer to a communication server via its integrated mobile radio standards. The communication server stores the incoming data of several subjects in an application-dependent database and allows access from all over the world via a web-based management system. The developed system permits one examiner to monitor the physiological parameters and subjective workload of several subjects in different locations at the same time, while the subjects can move almost freely in any area covered by the mobile network. The mobile handheld can pop up the questionnaires at flexible time intervals; this electronic input of dialog data, compared to manual documentation on paper, is more comfortable for the subject as well as for the examiner during analysis.
A Web-based management application facilitates a continuous remote monitoring of the physiological and the subjective data of the subject.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, X; Liu, L; Xing, L
Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: a web server, an image server and a computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open-source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send and receive HTTP requests; our computation servers were implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform runs on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy.
This system has clearly demonstrated the feasibility of performing image processing and plan evaluation platform through a web browser and exhibited potential for future cloud based radiotherapy.« less
On the optimal use of a slow server in two-stage queueing systems
NASA Astrophysics Data System (ADS)
Papachristos, Ioannis; Pandelis, Dimitrios G.
2017-07-01
We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs, assuming that two servers cannot collaborate on the same job and that preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the downstream dedicated server should not idle, and that the same holds for the upstream one when holding costs are larger there. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.
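The tandem dynamics described above can be illustrated with a small simulation. The sketch below is a simplification, not the paper's model: it keeps only the dedicated FIFO servers (the slow flexible server and the MDP policy are omitted) and uses the Lindley recursion for departure times to estimate the mean sojourn time.

```python
import random

def simulate_tandem(lam, mu1, mu2, n_jobs, seed=1):
    """Estimate mean sojourn time in a two-stage tandem queue with
    Poisson arrivals (rate lam), exponential services (rates mu1, mu2),
    FIFO order, and one dedicated server per stage."""
    rng = random.Random(seed)
    t = 0.0          # arrival clock
    d1 = d2 = 0.0    # last departure times from stages 1 and 2
    total_sojourn = 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)                # next Poisson arrival
        d1 = max(t, d1) + rng.expovariate(mu1)   # stage-1 departure (Lindley recursion)
        d2 = max(d1, d2) + rng.expovariate(mu2)  # stage-2 departure
        total_sojourn += d2 - t
    return total_sojourn / n_jobs

# With lam=0.5 and mu1=mu2=1 each stage behaves as an M/M/1 queue
# (Burke's theorem), so the expected sojourn time is 2 * 1/(1-0.5) = 4.
mean_sojourn = simulate_tandem(0.5, 1.0, 1.0, 200_000)
```

For the symmetric case the estimate should land close to the analytical value of 4, which makes the sketch a useful sanity check before layering a server-assignment policy on top.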
Analysis of practical backoff protocols for contention resolution with multiple servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, L.A.; MacKenzie, P.D.
Backoff protocols are probably the most widely used protocols for contention resolution in multiple access channels. In this paper, we analyze the stochastic behavior of backoff protocols for contention resolution among a set of clients and servers, each server being a multiple access channel that deals with contention like an Ethernet channel. We use the standard model in which each client generates requests for a given server according to a Bernoulli distribution with a specified mean. The client-server request rate of a system is the maximum over all client-server pairs (i, j) of the sum of all request rates associated with either client i or server j. Our main result is that any superlinear polynomial backoff protocol is stable for any multiple-server system with a sub-unit client-server request rate. We confirm the practical relevance of our result by demonstrating experimentally that the average waiting time of requests is very small when such a system is run with reasonably few clients and reasonably small request rates such as those that occur in actual Ethernets. Our result is the first proof of stability for any backoff protocol for contention resolution with multiple servers. It is also the first proof that any weakly acknowledgment-based protocol is stable for contention resolution with multiple servers and such high request rates. Two special cases of our result are of interest. Hastad, Leighton and Rogoff have shown that for a single-server system with a sub-unit client-server request rate, any modified superlinear polynomial backoff protocol is stable. These modified backoff protocols are similar to standard backoff protocols but require more random bits to implement. The special case of our result in which there is only one server extends the result of Hastad, Leighton and Rogoff to standard (practical) backoff protocols. Finally, our result applies to dynamic routing in optical networks.
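A minimal slotted-channel simulation can illustrate the stability behavior the abstract describes. The sketch below is not the authors' model verbatim: it assumes a single channel, at most one outstanding request per client, and a cubic (hence superlinear polynomial) backoff window.

```python
import random

def simulate_backoff(n_clients=10, p=0.01, alpha=3, slots=50_000, seed=7):
    """Slotted single-channel polynomial backoff: after its c-th
    collision a client waits a uniform number of slots in
    [1, (c+1)**alpha] before retransmitting."""
    rng = random.Random(seed)
    pending = [False] * n_clients   # client has a request queued
    wait = [0] * n_clients          # slots left before next attempt
    collisions = [0] * n_clients
    served = backlog = 0
    for _ in range(slots):
        for i in range(n_clients):  # Bernoulli request generation
            if not pending[i] and rng.random() < p:
                pending[i], wait[i], collisions[i] = True, 0, 0
        ready = [i for i in range(n_clients) if pending[i] and wait[i] == 0]
        if len(ready) == 1:         # exactly one sender: success
            pending[ready[0]] = False
            served += 1
        else:                       # collision: every sender backs off
            for i in ready:
                collisions[i] += 1
                wait[i] = rng.randint(1, (collisions[i] + 1) ** alpha)
        for i in range(n_clients):
            if pending[i] and wait[i] > 0:
                wait[i] -= 1
        backlog += sum(pending)
    return served, backlog / slots  # throughput and mean backlog

served, avg_backlog = simulate_backoff()
```

With a sub-unit aggregate request rate the mean backlog stays small over long runs, consistent with the stability result; pushing `p` toward saturation shows the backlog growing.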
A sensor monitoring system for telemedicine, safety and security applications
NASA Astrophysics Data System (ADS)
Vlissidis, Nikolaos; Leonidas, Filippos; Giovanis, Christos; Marinos, Dimitrios; Aidinis, Konstantinos; Vassilopoulos, Christos; Pagiatakis, Gerasimos; Schmitt, Nikolaus; Pistner, Thomas; Klaue, Jirka
2017-02-01
A sensor system capable of medical, safety and security monitoring in avionic and other environments (e.g. homes) is examined. For application inside an aircraft cabin, the system relies on an optical cellular network that connects each seat to a server and uses a set of database applications to process data related to passengers' health, safety and security status. Health monitoring typically encompasses electrocardiogram, pulse oximetry, blood pressure, body temperature and respiration rate, while safety and security monitoring is related to standard flight attendance duties, such as cabin preparation for take-off, landing, flight in regions of turbulence, etc. In contrast to previous related works, this article focuses on the system's modules (medical and safety sensors and associated hardware), the database applications used for the overall control of the monitoring function and the potential use of the system for security applications. Further tests involving medical, safety and security sensing performed in a real A340 mock-up set-up are also described, and reference is made to the possible use of the sensing system in alternative environments and applications, such as health monitoring within other means of transport (e.g. trains or small passenger sea vessels) as well as for remotely located home users, over a wired Ethernet network or the Internet.
2009-01-01
Supported databases: Oracle 9i/10g, MySQL, MS SQL Server. Supported operating systems: Windows 2000 Server, Windows 2003 Server (32-bit). Supported web servers: WebSTAR (Mac OS X), SunONE, Internet Information Services (IIS). ...challenges of Web-based surveys are: 1) identifying the best Commercial Off the Shelf (COTS) Web-based survey packages to serve the particular
Interfaces for Distributed Systems of Information Servers.
ERIC Educational Resources Information Center
Kahle, Brewster; And Others
1992-01-01
Describes two systems--Wide Area Information Servers (WAIS) and Rosebud--that provide protocol-based mechanisms for accessing remote full-text information servers. Design constraints, human interface design, and implementation are examined for five interfaces to these systems developed to run on the Macintosh or Unix terminals. Sample screen…
NASA Astrophysics Data System (ADS)
Anugrah, Wirdah; Suryono; Suseno, Jatmiko Endro
2018-02-01
Management of water resources based on a Geographic Information System (GIS) can provide substantial benefits for water-availability planning. Monitoring potential water levels is needed in the development, agriculture, energy, and other sectors. This research develops a water resource information system that uses a real-time GIS concept for web-based monitoring of an area's potential water level by applying a rule-based system method. The GIS consists of hardware, software, and a database. Following a web-based GIS architecture, this study uses a set of networked computers running the Apache web server and the PHP programming language with a MySQL database. An ultrasonic wireless sensor system is used as the water-level data input; it also supplies time and geographic location information. The GIS maps the five sensor locations, and readings are processed through a rule-based system to determine the area's potential water level. The resulting water-level monitoring information can be displayed on thematic maps by overlaying more than one layer, as well as in tables generated from the database and in graphs based on event time and water-level values.
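The rule-based classification step might look like the following sketch. The thresholds, class names, and sensor identifiers are illustrative assumptions, not values taken from the paper.

```python
def water_level_status(level_cm, normal_max=200, alert_max=300):
    """Hypothetical rule base mapping an ultrasonic water-level
    reading (cm) to a potential-level class for the thematic map."""
    if level_cm <= normal_max:
        return "NORMAL"
    if level_cm <= alert_max:
        return "ALERT"
    return "DANGER"

# One reading per sensor location, keyed by an invented sensor id.
readings = {"sensor-1": 120, "sensor-2": 250, "sensor-3": 340}
status = {sid: water_level_status(v) for sid, v in readings.items()}
```

In a deployment like the one described, each classified reading would be written to the MySQL database together with its timestamp and coordinates, and the thematic map layer would color each of the five sensor sites by its class.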
An Indoor Location-Based Control System Using Bluetooth Beacons for IoT Systems
Huh, Jun-Ho; Seo, Kyungryong
2017-12-19
The indoor location-based control system estimates the indoor position of a user to provide the service he/she requires. The major elements involved in the system are the localization server, the service-provision client, the user application, and the positioning technology. The localization server controls access of terminal devices (e.g., smartphones and other wireless devices) to determine their locations within a specified space first, and then the service-provision client initiates required services such as indoor navigation and monitoring/surveillance. The user application provides the necessary data to let the server localize the devices or to allow the user to receive various services from the client. The major technological elements involved in this system are an indoor space partition method, Bluetooth 4.0, RSSI (Received Signal Strength Indication), and trilateration. The system also employs BLE communication technology when determining the position of the user in an indoor space. The position information obtained is then used to control a specific device(s). These technologies are fundamental in achieving a "Smart Living". An indoor location-based control system that provides services by estimating users' indoor locations has been implemented in this study (first scenario). The algorithm introduced in this study (second scenario) is effective in extracting valid samples from the RSSI dataset, but it has some drawbacks as well. Although we used a range-average algorithm that measures the shortest distance, there are some limitations because the measurement results depend on the sample size and the sample efficiency depends on sampling speeds and environmental changes. However, the Bluetooth system can be implemented at a relatively low cost, so that once the problem of precision is solved, it can be applied to various fields. PMID:29257044
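The RSSI-and-trilateration pipeline mentioned above can be sketched as follows. The log-distance path-loss parameters (RSSI at 1 m, path-loss exponent) are typical illustrative values, not calibrated figures from the study, and the beacon layout is invented for the example.

```python
import math
import numpy as np

def rssi_to_distance(rssi, rssi_at_1m=-59.0, path_loss_n=2.0):
    """Log-distance path-loss model: d = 10**((RSSI0 - RSSI) / (10 n))."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_n))

def trilaterate(beacons, distances):
    """Linearized least-squares position fix from >= 3 beacon ranges:
    subtracting the first circle equation from the others yields a
    linear system A [x, y]^T = b."""
    (x1, y1), d1 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # estimated (x, y)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # invented layout, metres
# Synthesize ideal RSSI values for a device at (3, 4), then invert them.
rssi = [-59.0 - 20 * math.log10(math.hypot(bx - 3, by - 4)) for bx, by in beacons]
dists = [rssi_to_distance(r) for r in rssi]
x, y = trilaterate(beacons, dists)
```

With noise-free inputs the fix recovers (3, 4) exactly; in practice the RSSI averaging the authors describe is what keeps the distance estimates usable.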
Improving INPE'S balloon ground facilities for operation of the protoMIRAX experiment
NASA Astrophysics Data System (ADS)
Mattiello-Francisco, F.; Rinke, E.; Fernandes, J. O.; Cardoso, L.; Cardoso, P.; Braga, J.
2014-10-01
The system requirements for reusing the scientific balloon ground facilities available at INPE were a challenge to the ground system engineers involved in the protoMIRAX X-ray astronomy experiment. A significant software update effort was required for the balloon ground station. Considering that protoMIRAX is a pathfinder for the MIRAX satellite mission, a ground infrastructure compatible with INPE's satellite operation approach was highly recommended to control and monitor the experiment during the balloon flights. This approach makes use of the SATellite Control System (SATCS), a software-based architecture developed at INPE for satellite commanding and monitoring. SATCS complies with the particular operational requirements of different satellites by using several customized object-oriented software elements and frameworks. We present the ground solution designed for protoMIRAX operation, the Control and Reception System (CRS). A new server computer, properly configured with Ethernet, has extended the existing ground station facilities with a switch, converters and new software (OPS/SERVER) in order to support the available uplink and downlink channels, which are mapped to the TCP/IP gateways required by SATCS. Currently, the CRS development is customizing SATCS for the kernel functions of protoMIRAX command and telemetry processing. Design patterns, component-based libraries and metadata are widely used in SATCS in order to extend the frameworks to address the Packet Utilization Standard (PUS) for ground-balloon communication, in compliance with the services provided by the data handling computer onboard the protoMIRAX balloon.
GPS signal loss in the wide area monitoring system: Prevalence, impact, and solution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Wenxuan; Zhou, Dao; Zhan, Lingwei
2017-03-19
Phasor measurement units (PMUs), equipped with Global Positioning System (GPS) receivers for precise time synchronization, provide measurements of voltage and current phasors at different nodes of the wide area monitoring system. However, GPS receivers are likely to lose satellite signals due to various unpredictable factors. The prevalence of GPS signal loss (GSL) on PMUs is first investigated using real PMU data. Historical GSL events are extracted from a phasor data concentrator (PDC) and the FNET/GridEye server. The correlation between GSL and time, spatial location, and solar activity is explored via comprehensive statistical analysis. Furthermore, the impact of GSL on phasor measurement accuracy has been studied via experiments. Finally, several potential solutions to mitigate the impact of GSL on PMUs are discussed and compared.
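Why loss of GPS timing degrades phasor accuracy follows directly from the phase/time relationship: a clock offset of Δt seconds rotates the measured phasor by 360·f·Δt degrees. A minimal sketch (the IEEE C37.118 figure in the comment is the commonly quoted ~1% total vector error limit, not a value from this paper):

```python
def phase_error_degrees(clock_offset_s, grid_freq_hz=60.0):
    """Phase-angle error introduced by a synchronization offset:
    delta_theta = 360 * f * delta_t, in degrees."""
    return 360.0 * grid_freq_hz * clock_offset_s

# A 1 ms clock drift at 60 Hz rotates the phasor by 21.6 degrees,
# far beyond the roughly 0.57 degree phase error corresponding to
# the 1% TVE limit of IEEE C37.118.
err = phase_error_degrees(1e-3)
```

This is why even modest oscillator drift during a GSL event matters: microsecond-level holdover accuracy is needed to keep the angle error within standard limits.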
An Elderly Care System Based on Multiple Information Fusion
He, Zhiwei; Lu, Dongwei; Yang, Yuxiang; Gao, Mingyu
2018-01-01
With the development of the social economy in the 21st century and rising standards of medical care, population aging has become a global trend, yet many elderly people live in an "empty nest" state. To address the high risk of daily life in this group, this paper proposes a method that fuses video images, sound, infrared, pulse, and other information into an elderly care system. The whole system consists of four major components: the main control board, the information acquisition boards, the server, and the client. The control board receives, processes, and analyzes the data collected by the information acquisition boards and uploads the necessary information to the server, where it is saved to the database. When something unexpected happens to the elderly person, the system notifies the relatives through a GPRS (general packet radio service) module. The system also provides an interface for relatives to inquire about the living status of the elderly person through an app. The system can monitor the living status of the elderly with quick response, high accuracy, and low cost, and can be widely applied to elderly care at home. PMID:29599947
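A rule-based fusion of the sensing channels on the main control board might look like the following sketch. All thresholds, the scoring scheme, and the function name are hypothetical illustrations, not taken from the paper.

```python
def fuse_alerts(pulse_bpm, room_temp_c, motion_last_min, sound_db):
    """Hypothetical fusion rule: each channel contributes to an
    alarm score; crossing the score threshold triggers the GPRS
    notification to relatives."""
    score = 0
    if pulse_bpm < 45 or pulse_bpm > 130:
        score += 2   # abnormal pulse is weighted heavily
    if room_temp_c > 45:
        score += 2   # possible fire
    if not motion_last_min:
        score += 1   # no recent infrared/video activity
    if sound_db > 85:
        score += 1   # shouting or a crash
    return "NOTIFY_RELATIVES" if score >= 2 else "OK"

status = fuse_alerts(pulse_bpm=140, room_temp_c=22,
                     motion_last_min=True, sound_db=40)
```

Weighting channels rather than alarming on any single sensor is one simple way to get the low false-alarm rate such a home system needs.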
Process Management inside ATLAS DAQ
NASA Astrophysics Data System (ADS)
Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.
2002-10-01
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors independently of the underlying operating system. Its architecture is designed on the basis of a server-client model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Among the major design challenges of the software agents were achieving the maximum degree of autonomy possible and creating processes that are aware of dynamic conditions in their environment and able to determine corresponding actions. Issues such as the performance of the agents in terms of the time needed for process creation and destruction, the scalability of the system with the final ATLAS configuration in mind, and minimising the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results for the Process Manager system.
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
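A GetMap request to a WMS server of this kind can be sketched as below. The base URL and layer name are invented for illustration, and the lunar CRS code is a typical planetary example rather than a value confirmed by this article; only the core parameter set follows the OGC WMS 1.1.1 protocol named above.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width, height,
                   srs="IAU2000:30100", fmt="image/jpeg"):
    """Build an OGC WMS 1.1.1 GetMap request URL. The Moon-specific
    SRS code and the layer name used below are illustrative, not
    taken from the OnMoon server's actual configuration."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
        "LAYERS": layer, "SRS": srs,
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)

url = wms_getmap_url("http://onmoon.example/wms", "lunar_base_mosaic",
                     (-180, -90, 180, 90), 1024, 512)
```

Client GIS software typically issues exactly such requests tile by tile, which is why the article's tiled-WMS support matters for interactive performance.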
The Ames Power Monitoring System
NASA Technical Reports Server (NTRS)
Osetinsky, Leonid; Wang, David
2003-01-01
The Ames Power Monitoring System (APMS) is a centralized system of power meters, computer hardware, and special-purpose software that collects and stores electrical power data from various facilities at Ames Research Center (ARC). This system is needed because of the large and varying nature of the overall ARC power demand, which has been observed to range from 20 to 200 MW. Large portions of peak demand can be attributed to only three wind tunnels (60, 180, and 100 MW, respectively). The APMS helps ARC avoid or minimize costly demand charges by enabling wind-tunnel operators, test engineers, and the power manager to monitor total demand for the center in real time. These persons receive the information they need to manage and schedule energy-intensive research in advance and to adjust loads in real time to ensure that the overall maximum allowable demand is not exceeded. The APMS (see figure) includes a server computer running the Windows NT operating system and can, in principle, include an unlimited number of power meters and client computers. As configured at the time of reporting the information for this article, the APMS includes more than 40 power meters monitoring all the major research facilities, plus 15 Windows-based client personal computers that display real-time and historical data to users via graphical user interfaces (GUIs). The power meters and client computers communicate with the server using Transmission Control Protocol/Internet Protocol (TCP/IP) on Ethernet networks, variously, through dedicated fiber-optic cables or through the pre-existing ARC local-area network (ARCLAN). The APMS has enabled ARC to achieve significant savings ($1.2 million in 2001) in the cost of power and electric energy by helping personnel to maintain total demand below monthly allowable levels, to manage the overall power factor to avoid low-power-factor penalties, and to use historical system data to identify opportunities for additional energy savings.
The APMS also provides power engineers and electricians with the information they need to plan modifications in advance and perform day-to-day maintenance of the ARC electric-power distribution system.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-24
... Communications System Server Software, Wireless Handheld Devices and Battery Packs; Notice of Investigation..., wireless handheld devices and battery packs by reason of infringement of certain claims of U.S. Patent Nos... certain wireless communications system server software, wireless handheld devices or battery packs that...
NASA Astrophysics Data System (ADS)
Andrade, P.; Fiorini, B.; Murphy, S.; Pigueiras, L.; Santos, M.
2015-12-01
Over the past two years, the operation of the CERN Data Centres went through significant changes with the introduction of new mechanisms for hardware procurement and new services for cloud provisioning and configuration management, among other improvements. These changes resulted in an increase of resources being operated in a more dynamic environment. Today, the CERN Data Centres provide over 11000 multi-core processor servers, 130 PB of disk storage, 100 PB of tape robot capacity, and 150 high performance tape drives. To cope with these developments, an evolution of the data centre monitoring tools was also required. This modernisation was based on a number of guiding rules: sustain the increase of resources, adapt to the new dynamic nature of the data centres, make monitoring data easier to share, give more flexibility to Service Managers in how they publish and consume monitoring metrics and logs, establish a common repository of monitoring data, optimise the handling of monitoring notifications, and replace the previous toolset with new open source technologies that enjoy large adoption and community support. This contribution describes how these improvements were delivered, presents the architecture and technologies of the new monitoring tools, and reviews the experience of their production deployment.
Improvements in multimedia data buffering using master/slave architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheikh, S.; Ganesan, R.
1996-12-31
Advances in networking and multimedia technology have created a need for multimedia servers that are robust and reliable. Existing solutions have direct limitations such as I/O bottlenecks and unreliable data retrieval. The system can store the stream of incoming data if enough buffer space is available or if mass storage is clearing the buffered data faster than the queue input. A single buffer queue is not sufficient to handle large frames: queue sizes are normally several megabytes in length and will in turn introduce a state of overflow. The system should also keep track of rewind, fast-forward, and pause requests, otherwise queue management becomes intricate. In this paper, we present a master/slave approach (a server is designated to monitor the workflow of the complete system; it holds information on every slave in a dynamic table and controls the workload on each system by redistributing requests to others or handling a request itself) which overcomes the limitations of today's storage and also satisfies tomorrow's storage needs. This approach maintains system reliability and yields faster responses by using more storage units in parallel. A network of master/slave servers can handle many requests and synchronize them at all times. Using a dedicated CPU and a common pool of queues, we explain how queues can be controlled and buffer overflow avoided. We propose a layered approach to the buffering problem and provide a read-ahead solution to ensure continuous storage and retrieval of multimedia data.
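The master's dynamic table and workload redistribution can be sketched as follows. The class and method names are invented for illustration; the paper's actual bookkeeping is richer than this least-loaded dispatch rule.

```python
import heapq

class Master:
    """Toy master server: tracks slave workload in a dynamic table
    and dispatches each incoming request to the least-loaded slave."""
    def __init__(self, slave_ids):
        self.load = {s: 0 for s in slave_ids}       # the dynamic table
        self.heap = [(0, s) for s in slave_ids]      # (load, slave id)
        heapq.heapify(self.heap)

    def dispatch(self, cost=1):
        """Assign one request (of the given cost) to the idlest slave."""
        load, slave = heapq.heappop(self.heap)
        self.load[slave] = load + cost
        heapq.heappush(self.heap, (load + cost, slave))
        return slave

master = Master(["slave-a", "slave-b", "slave-c"])
assignments = [master.dispatch() for _ in range(9)]
```

Because each slave is popped before being pushed back with its new load, the heap never holds stale entries, and nine unit-cost requests spread evenly across the three slaves.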
Report #11-P-0597, September 9, 2011. Vulnerability testing of EPA’s directory service system authentication and authorization servers conducted in March 2011 identified authentication and authorization servers with numerous vulnerabilities.
WEBSLIDE: A "Virtual" Slide Projector Based on World Wide Web
NASA Astrophysics Data System (ADS)
Barra, Maria; Ferrandino, Salvatore; Scarano, Vittorio
1999-03-01
We present here the key design concepts of WEBSLIDE, a software project whose objective is to provide a simple, cheap and efficient solution for showing slides during lessons in computer labs. In fact, WEBSLIDE allows the video monitors of several client machines (the "STUDENTS") to be synchronously updated by the actions of a particular client machine, called the "INSTRUCTOR." The system is based on the World Wide Web, and the software components of WEBSLIDE mainly consist of a WWW server, browsers and small CGI-Bin scripts. What makes WEBSLIDE particularly appealing for small educational institutions is that it is built with "off the shelf" products: it does not involve a specifically designed program; any Netscape browser, one of the most popular browsers available on the market, is sufficient. Another possibility is to use our system to implement "guided automatic tours" through several pages, or Intranet internal news bulletins: the company Web server can broadcast relevant information to all employees on their browsers.
Development of a Personal Integrated Environmental Monitoring System
Wong, Man Sing; Yip, Tsan Pong; Mok, Esmond
2014-01-01
Environmental pollution in the urban areas of Hong Kong has become a serious public issue, but most urban inhabitants have no means of judging their own living environment in terms of dangerous thresholds and overall livability. Many low-cost sensors, such as ultraviolet, temperature and air quality sensors, now provide reasonably accurate data. In this paper, the development and evaluation of an Integrated Environmental Monitoring System (IEMS) are illustrated. This system consists of three components: (i) position determination and sensor data collection for real-time geospatial-based environmental monitoring; (ii) on-site data communication and visualization with the aid of an Android-based application; and (iii) data analysis on a web server. The system was shown to work well during field tests on a bus journey and at a construction site. It provides an effective service platform for collecting environmental data in near real-time and raises public awareness of environmental quality in micro-environments. PMID:25420154
Remote monitoring and security alert based on motion detection using mobile
NASA Astrophysics Data System (ADS)
Suganya Devi, K.; Srinivasan, P.
2016-03-01
Background modeling has no fully robust solution and constitutes one of the main problems in surveillance systems. The aim of this paper is to provide mobile-based security for a remote monitoring system through WAP using a GSM modem. The system is designed to provide durability and versatility for a wide variety of indoor and outdoor applications. It is compatible with both narrowband and broadband networks and provides simultaneous image detection. The communicator provides remote control, event-driven recording (including pre-alarm and post-alarm recording) and image motion detection. The webcam can be mounted on either a ceiling or a wall without requiring a bracket. The client system's status can be monitored continuously through the web. If any intruder arrives at the client system, the server sends an alert to the mobile phone (the preset message is sent to the authorized person), and the client can view the image using WAP.
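The image motion detection step can be approximated by naive frame differencing, sketched below on synthetic frames. The pixel and area thresholds are illustrative, and a real system would add the background modeling the abstract calls difficult.

```python
import numpy as np

def motion_detected(prev_frame, frame, pixel_thresh=25, area_thresh=0.01):
    """Naive frame-differencing detector: flag motion when more than
    area_thresh of the pixels changed by more than pixel_thresh
    grey levels between consecutive frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_fraction = (diff > pixel_thresh).mean()
    return bool(changed_fraction > area_thresh)

# Synthetic 100x100 greyscale frames: an "intruder" block appears.
prev_frame = np.zeros((100, 100), dtype=np.uint8)
frame = prev_frame.copy()
frame[40:60, 40:60] = 200   # 4% of the pixels change brightly
alarm = motion_detected(prev_frame, frame)
```

When `alarm` is raised, a server like the one described would capture the frame, push the preset SMS alert through the GSM modem, and make the image available over WAP.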
Web Service Distributed Management Framework for Autonomic Server Virtualization
NASA Astrophysics Data System (ADS)
Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea
Virtualization for the x86 platform has recently imposed itself as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines, real or virtual, to use at a given time, and to add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built also matters, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
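A first-order provisioning rule of the kind such self-optimizing controllers apply can be sketched as follows. The utilization target and workload figures are illustrative assumptions, not values from the paper, which builds on the WSDM management protocol rather than this arithmetic.

```python
import math

def servers_needed(arrival_rate, service_time_s, target_util=0.7):
    """Size a cluster from its offered load: mean busy servers
    (Erlangs) divided by the per-server utilization target."""
    offered_load = arrival_rate * service_time_s   # mean number of busy servers
    return max(1, math.ceil(offered_load / target_util))

# 120 req/s at 50 ms per request is 6 Erlangs; at a 70% utilization
# target the controller would keep ceil(6 / 0.7) = 9 instances running.
n = servers_needed(120, 0.05)
```

An autonomic loop would re-evaluate this target periodically and provision or deprovision virtual machines to close the gap, with headroom (here 30%) absorbing bursts between control intervals.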
Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules. External modules are various input or output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their specific functions: Render Server, Device Server and Control Server. While the Device Server contains external modules requiring event-based communication for the integration, the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.
NASA Astrophysics Data System (ADS)
Dumitrescu, Catalin; Nowack, Andreas; Padhi, Sanjay; Sarkar, Subir
2010-04-01
This paper presents a web-based Job Monitoring framework for individual Grid sites that allows users to follow their jobs in detail in quasi-real time. The framework consists of several independent components: (a) a set of sensors that run on the site CE and worker nodes and update a database, (b) a simple yet extensible web services framework and (c) an Ajax-powered web interface with a look-and-feel and controls similar to a desktop application. The monitoring framework supports LSF, Condor and PBS-like batch systems. This is one of the first monitoring systems where an X.509-authenticated web interface can be seamlessly accessed by both end-users and site administrators. While a site administrator has access to all the possible information, a user can only view the jobs for the Virtual Organizations (VO) he/she is a part of. The monitoring framework design supports several possible deployment scenarios. For a site running a supported batch system, the system may be deployed as a whole, or existing site sensors can be adapted and reused with the web services components. A site may even prefer to build the web server independently and choose to use only the Ajax-powered web interface. Finally, the system is being used to monitor a glideinWMS instance. This broadens the scope significantly, allowing it to monitor jobs over multiple sites.
Gogaert, Stefan; Vande Veegaete, Axel; Scholliers, Annelies; Vandekerckhove, Philippe
2016-10-01
First aid (FA) services are provisioned on-site as a preventive measure at most public events. In Flanders, Belgium, the Belgian Red Cross-Flanders (BRCF) is the major provider of these FA services, with volunteers being deployed at approximately 10,000 public events annually. The BRCF has systematically registered information on the patients treated in FA posts at major events and mass gatherings during the last 10 years. This information has been collected in a web-based client-server system called "MedTRIS" (Medical Triage and Registration Informatics System). MedTRIS contains data on more than 200,000 patients at 335 mass events. This report describes the MedTRIS architecture, the data collected, and how the system operates in the field. This database consolidates different types of information on FA interventions in a standardized way for a variety of public events. MedTRIS allows close monitoring in "real time" of the situation at mass gatherings and immediate intervention when necessary; allows more accurate prediction of the resources needed; allows validation of conceptual and predictive models for medical resources at (mass) public events; and can contribute to the definition of a standardized minimum data set (MDS) for mass-gathering health research and evaluation. Gogaert S, Vande Veegaete A, Scholliers A, Vandekerckhove P. "MedTRIS" (Medical Triage and Registration Informatics System): a web-based client server system for the registration of patients being treated in first aid posts at public events and mass gatherings. Prehosp Disaster Med. 2016;31(5):557-562.
National Medical Terminology Server in Korea
NASA Astrophysics Data System (ADS)
Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee
Interoperable EHRs (Electronic Health Records) necessitate at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server aims to satisfy the need of local primary-to-tertiary hospitals for quality terminology systems. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.
Effect of video server topology on contingency capacity requirements
NASA Astrophysics Data System (ADS)
Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.
1996-03-01
Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
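The "blocking model developed for telephone systems" that the abstract invokes is classically the Erlang B formula. As a hedged illustration (the paper's exact model is not reproduced here), the sketch below computes the blocking probability with the standard stable recurrence and shows the economy of scale the authors describe: at equal per-stream load, a large monolithic server blocks far fewer requests than a small partition.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang B blocking probability via the stable recurrence
    B(0, a) = 1;  B(n, a) = a*B(n-1, a) / (n + a*B(n-1, a))."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Economy of scale: same per-stream utilization, larger pool blocks less.
small = erlang_b(10, 8.0)    # capacity for 10 streams, 8 Erlangs offered
large = erlang_b(100, 80.0)  # 10x the capacity and 10x the load
print(small, large)
```

The illustrative capacities and loads are assumptions; the qualitative result (the larger system's lower blocking probability) is the point the abstract makes about partitioned versus monolithic servers.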
Maitra, Tanmoy; Giri, Debasis
2014-12-01
Medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to visit a doctor in a critical or urgent period can communicate with a doctor through a medical server via the internet from home. An authentication mechanism is needed in TMIS to hide the secret information of both parties, namely a server and a patient. Recent research includes the patient's biometric information as well as a password in the design of remote user authentication schemes, which enhances the security level. In a single-server environment, one server is responsible for providing services to all authorized remote patients. The problem arises when a patient wishes to access several branch servers: he/she needs to register with each branch server individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for multi-server environments. In this paper, we show that in their scheme a non-registered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in a multi-server environment in which patients register only once with a root telecare server, called the registration center (RC), and can then obtain services from all telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.
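The abstract does not give the protocol's equations, so the single-registration idea can only be sketched with generic primitives. In this hypothetical sketch (the names `register`, `login_request`, `verify`, and the key derivation are illustrative, not the authors' actual scheme, and for simplicity the branch server is assumed able to recompute the card secret), a registration center derives a smart-card secret from its master key once, and any branch server can then verify a time-stamped HMAC login:

```python
import hashlib
import hmac
import os
import time

MASTER_KEY = os.urandom(32)  # held by the registration center (RC)

def h(*parts: bytes) -> bytes:
    """Generic hash used for key derivation."""
    return hashlib.sha256(b"|".join(parts)).digest()

def register(patient_id: bytes) -> bytes:
    """One-time registration: RC personalizes the patient's smart card
    with a secret bound to the patient's identity."""
    return h(MASTER_KEY, patient_id)

def login_request(card_secret: bytes, patient_id: bytes, server_id: bytes):
    """Card proves knowledge of its secret to any branch server via an
    HMAC over identity, branch name, and a timestamp."""
    ts = str(time.time()).encode()
    tag = hmac.new(card_secret, patient_id + server_id + ts,
                   hashlib.sha256).digest()
    return patient_id, server_id, ts, tag

def verify(patient_id: bytes, server_id: bytes, ts: bytes, tag: bytes) -> bool:
    """Branch server recomputes the expected tag from the derived secret."""
    expected = hmac.new(h(MASTER_KEY, patient_id),
                        patient_id + server_id + ts, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A real multi-server scheme would avoid sharing the master key with branch servers and would add freshness checks on the timestamp; the sketch only conveys why one registration can serve many servers.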
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.
A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
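The central idea above, an abstract key-value interface that lets any metadata server process requests against a shared store independently, might be sketched as follows (class and method names are hypothetical, not taken from the disclosed system):

```python
from abc import ABC, abstractmethod

class KVMetadataStore(ABC):
    """Abstract key-value interface a metadata server programs against;
    concrete backends could be any persistent key-value store."""
    @abstractmethod
    def put(self, key: bytes, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: bytes):
        """Return the stored value, or None if absent."""

class InMemoryStore(KVMetadataStore):
    """Toy stand-in for a shared low-latency persistent store."""
    def __init__(self):
        self._d = {}
    def put(self, key, value):
        self._d[key] = value
    def get(self, key):
        return self._d.get(key)

class MetadataServer:
    """Each server talks to the shared store only through the abstract
    interface, so any server can serve any request independently."""
    def __init__(self, store: KVMetadataStore):
        self.store = store

    def set_attr(self, path: str, attrs: bytes) -> None:
        self.store.put(path.encode(), attrs)

    def get_attr(self, path: str):
        return self.store.get(path.encode())
```

Because the store is shared, a write through one server instance is immediately visible through another, which is what makes the metadata servers interchangeable.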
Antony, Joby; Mathuria, D S; Datta, T S; Maity, Tanmoy
2015-12-01
The power of Ethernet for control and automation technology has become widely understood by the automation industry in recent times. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today, and is best known for enabling control over the internet from anywhere in the globe. An Ethernet interface with built-in on-chip embedded servers ensures global connectivity for crate-less control and data acquisition systems, which have several advantages over traditional crate-based control architectures for slow applications. This architecture completely eliminates the use of any extra PLC (Programmable Logic Controller) or similar control hardware in the automation network, as the control functions are firmware-coded inside the intelligent meters themselves. Here, we describe the indigenously built cryogenic control system for the linear accelerator at the Inter University Accelerator Centre, known as "CADS," which stands for "Complete Automation of Distribution System." CADS covers the complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet-based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter, called a device-server, which has the control functions and control loops built inside the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercially Off-the-Shelf) SMD (Surface Mount Devices) components, with an analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions.
At present, 24 instruments which run 58 embedded servers inside, each specific to a particular type of sensor-actuator combination for closed loop operations, are now deployed and distributed across control LAN (Local Area Network). A group of six categories of such instruments have been identified for all cryogenic applications required for linac operation which were designed to build this medium-scale cryogenic automation setup. These devices have special features like remote rebooters, daughter boards for PIDs (Proportional Integral Derivative), etc., to operate them remotely in radiation areas and also have emergency switches by which each device can be taken to emergency mode temporarily. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room which has a user-friendly control interface developed using LabVIEW®. This paper discusses the overall hardware, firmware, software design, and implementation for the cryogenics setup.
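The device-server pattern described above, a meter exposing readout as a remote procedure call over HTTP, can be sketched with Python's standard library. The register names and JSON response format below are assumptions for illustration, not CADS's actual interface:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import urlopen

# Hypothetical register map standing in for a sensor/actuator front-end.
REGISTERS = {"temperature_k": 4.2, "heater_pct": 0.0}

class DeviceServer(BaseHTTPRequestHandler):
    """Minimal HTTP readout endpoint: GET /<register> returns JSON."""
    def do_GET(self):
        name = self.path.strip("/")
        if name in REGISTERS:
            body = json.dumps({name: REGISTERS[name]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

if __name__ == "__main__":
    srv = HTTPServer(("127.0.0.1", 0), DeviceServer)  # port 0 = any free port
    Thread(target=srv.serve_forever, daemon=True).start()
    port = srv.server_address[1]
    print(urlopen(f"http://127.0.0.1:{port}/temperature_k").read().decode())
    srv.shutdown()
```

In the real system each embedded meter would run such a server in firmware and also accept writes to actuator registers; this sketch shows only the readout side of the idea.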
Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi
2015-08-01
Radio Frequency Identification (RFID) based solutions are widely used in many healthcare applications, including patient monitoring, object traceability, drug administration systems and telecare medicine information systems (TMIS). In order to reduce malpractice and ensure patient privacy, in 2015, Srivastava et al. proposed a hash-based RFID tag authentication protocol for TMIS. Their protocol uses lightweight hash operations and a synchronized secret value shared between the back-end server and the tag, and is more secure and efficient than other related RFID authentication protocols. Unfortunately, in this paper, we demonstrate that Srivastava et al.'s tag authentication protocol has a serious security problem: an adversary may use a stolen/lost reader to connect to the medical back-end server that stores information associated with tagged objects, and could thus maliciously reveal medical data obtained through the stolen/lost reader. Therefore, we propose a secure and efficient RFID tag authentication protocol to overcome these security flaws and improve system efficiency. Compared with Srivastava et al.'s protocol, the proposed protocol not only inherits the advantages of their authentication protocol for TMIS but also provides better security with high system efficiency.
Web-based DAQ systems: connecting the user and electronics front-ends
NASA Astrophysics Data System (ADS)
Lenzi, Thomas
2016-12-01
Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.
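One concrete piece of the WebSocket standard mentioned above (RFC 6455) is the opening handshake that upgrades an HTTP connection: the server must echo back a digest of the client's key concatenated with a fixed GUID. The sketch below reproduces the worked example from the RFC:

```python
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"  # fixed by RFC 6455

def websocket_accept(sec_websocket_key: str) -> str:
    """Derive the Sec-WebSocket-Accept header value the server returns
    during the HTTP upgrade that opens a WebSocket connection."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode()).digest()
    return base64.b64encode(digest).decode()

# The worked example from RFC 6455, section 1.3:
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))
# s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After this handshake the connection carries framed binary messages in both directions, which is what gives the browser-side DAQ client its low-overhead real-time channel.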
Design and development of an IoT-based web application for an intelligent remote SCADA system
NASA Astrophysics Data System (ADS)
Kao, Kuang-Chi; Chieng, Wei-Hua; Jeng, Shyr-Long
2018-03-01
This paper presents a design of an intelligent remote electrical power supervisory control and data acquisition (SCADA) system based on the Internet of Things (IoT), with Internet Information Services (IIS) for setting up web servers, an ASP.NET model-view-controller (MVC) for establishing a remote electrical power monitoring and control system by using responsive web design (RWD), and a Microsoft SQL Server as the database. With the web browser connected to the Internet, the sensing data is sent to the client by using the TCP/IP protocol, which supports mobile devices with different screen sizes. The users can provide instructions immediately without being present to check the conditions, which considerably reduces labor and time costs. The developed system incorporates a remote measuring function by using a wireless sensor network and utilizes a visual interface to make the human-machine interface (HMI) more instinctive. Moreover, it contains an analog input/output and a basic digital input/output that can be applied to a motor driver and an inverter for integration with a remote SCADA system based on IoT, and thus achieve efficient power management.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.
Cloud computing is a promising technology to manage and improve utilization of computing center resources to deliver various computing and IT services. For the purpose of energy saving there is no need to unnecessarily operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup cost and activation time. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In the paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows estimation of a number of performance measures.
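The hysteresis policy motivated above can be illustrated with a toy discrete-time controller. The thresholds are hypothetical and activation is treated as instantaneous, whereas the paper's model additionally accounts for setup costs and non-instantaneous activation:

```python
def hysteresis_servers(loads, n_min=1, n_max=10, up=0.8, down=0.3, start=1):
    """Threshold-with-hysteresis scaling: add a server only when per-server
    load exceeds `up`, remove one only when it drops below `down`, so the
    cluster ignores brief spikes and dips between the two thresholds."""
    n, trace = start, []
    for load in loads:               # `load` = total offered work per step
        per_server = load / n
        if per_server > up and n < n_max:
            n += 1
        elif per_server < down and n > n_min:
            n -= 1
        trace.append(n)
    return trace

# Rising load grows the cluster; a sustained drop shrinks it again.
print(hysteresis_servers([0.5, 1.0, 1.7, 1.7, 0.4, 0.4, 0.4]))
```

The gap between `up` and `down` is what prevents the oscillation that a single shared threshold would cause, which is the intuition behind using hysteresis in the queueing model.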
LEMON - LHC Era Monitoring for Large-Scale Infrastructures
NASA Astrophysics Data System (ADS)
Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron
2011-12-01
At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate the computer centre resources. As a result, however, the monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) in order to maintain a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
The DICOM-based radiation therapy information system
NASA Astrophysics Data System (ADS)
Law, Maria Y. Y.; Chan, Lawrence W. C.; Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
Similar to DICOM for PACS (Picture Archiving and Communication System), standards for radiotherapy (RT) information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions), which are more than just images. This presentation describes how a DICOM-based RT Information System Server can be built on PACS technology and its data model for web-based distribution. Methods: The RT Information System consists of a Modality Simulator, a data format translator, an RT Gateway, the DICOM RT Server, and the Web-based Application Server. The DICOM RT Server was designed based on a PACS data model and was connected to a Web Application Server for distribution of the RT information, including therapeutic plans, structures, dose distributions, images and records. The various DICOM RT objects of a patient transmitted to the RT Server were routed to the Web Application Server, where the contents of the DICOM RT objects were decoded and mapped to the corresponding locations of the RT data model for display in the specially designed graphical user interface. Non-DICOM objects were first rendered into DICOM RT objects in the translator before being sent to the RT Server. Results: Ten clinical cases were collected from different hospitals for evaluation of the DICOM-based RT Information System. They were successfully routed through the data flow and displayed in the client workstation of the RT Information System. Conclusion: Using the DICOM-RT standards, integration of RT data from different vendors is possible.
NASA Astrophysics Data System (ADS)
Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.
2018-04-01
This paper examines a bulk-arrival, batch-service queueing system with failure of the functioning server and multiple vacations. Customers arrive at the system in bulk according to a Poisson process with rate λ. Arriving customers are served in batches of a minimum of ‘a’ and a maximum of ‘b’ customers, according to the general bulk service rule. If, at a service completion epoch, the queue length is less than ‘a’, the server leaves for a vacation (secondary job) of random length. After a vacation completion, if the queue length is still less than ‘a’, the server leaves for another vacation, and keeps taking vacations until the queue length reaches ‘a’. The server is not reliable at all times and may fail while serving customers. Even if the server fails, the service process is not interrupted: it continues for the current batch of customers at a service rate lower than the regular one. The server is repaired after the completion of the service at the lower rate. The probability generating function of the queue size at an arbitrary time epoch is obtained for the modelled queueing system using the supplementary variable technique. Various performance characteristics are also derived, with suitable numerical illustrations.
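A rough way to sanity-check such a model is discrete-event simulation. The sketch below implements only the general bulk service rule and multiple vacations; it simplifies the paper's model by using single rather than bulk arrivals and omitting server failure, and all parameter values are illustrative:

```python
import random

def simulate(lam=2.0, a=3, b=5, mean_service=1.0, mean_vacation=0.5,
             horizon=10_000.0, seed=1):
    """Toy simulation of the general bulk service rule: serve a batch of
    between a and b waiting customers; if fewer than a are waiting at a
    completion epoch, take a vacation and re-check afterwards."""
    rng = random.Random(seed)
    t = last = area = 0.0
    queue = served = 0
    next_arrival = rng.expovariate(lam)
    while t < horizon:
        if queue >= a:                         # enough work: start a batch
            batch = min(queue, b)
            queue -= batch                     # the batch enters service
            end = t + rng.expovariate(1.0 / mean_service)
        else:                                  # too few waiting: vacation
            batch = 0
            end = t + rng.expovariate(1.0 / mean_vacation)
        while next_arrival < end:              # Poisson arrivals meanwhile
            area += queue * (next_arrival - last)
            last = next_arrival
            queue += 1
            next_arrival += rng.expovariate(lam)
        area += queue * (end - last)
        last = t = end
        served += batch
    return served, area / t                    # completions, mean queue

count, mean_queue = simulate()
print(count, round(mean_queue, 2))
```

Such a simulation gives empirical estimates (throughput, time-averaged queue length) that analytical results like the probability generating function can be checked against.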
SciServer Compute brings Analysis to Big Data in the Cloud
NASA Astrophysics Data System (ADS)
Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara
2016-06-01
SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally, and increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files.
Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.
The European Drought Observatory (EDO): Current State and Future Directions
NASA Astrophysics Data System (ADS)
Vogt, Jürgen; Sepulcre, Guadalupe; Magni, Diego; Valentini, Luana; Singleton, Andrew; Micale, Fabio; Barbosa, Paulo
2013-04-01
Europe has repeatedly been affected by droughts, resulting in considerable ecological and economic damage, and climate change studies indicate a trend towards increasing climate variability, most likely resulting in more frequent drought occurrences in Europe as well. Against this background, the European Commission's Joint Research Centre (JRC) is developing methods and tools for assessing, monitoring and forecasting droughts in Europe, and is developing a European Drought Observatory (EDO) to complement and integrate national activities with a European view. At the core of EDO is a portal, including a map server, a metadata catalogue, a media monitor and analysis tools. The map server presents Europe-wide up-to-date information on the occurrence and severity of droughts, complemented by more detailed information provided by regional, national and local observatories through OGC-compliant web mapping and web coverage services. In addition, time series of historical maps as well as graphs of the temporal evolution of drought indices for individual grid cells and administrative regions in Europe can be retrieved and analysed. Current work focuses on validating the available products, developing combined indicators, improving the functionalities, extending the linkage to additional national and regional drought information systems and testing options for medium-range probabilistic drought forecasting across Europe. Longer-term goals include the development of long-range drought forecasting products, the analysis of drought hazard and risk, the monitoring of drought impact and the integration of EDO in a global drought information system. The talk will provide an overview of the development and state of EDO, the different products, and the ways a wide range of stakeholders (i.e. European, national river basin, and local authorities) are included in the development of the system, as well as an outlook on future developments.
Migration of the CERN IT Data Centre Support System to ServiceNow
NASA Astrophysics Data System (ADS)
Alvarez Alonso, R.; Arneodo, G.; Barring, O.; Bonfillou, E.; Coelho dos Santos, M.; Dore, V.; Lefebure, V.; Fedorko, I.; Grossir, A.; Hefferman, J.; Mendez Lorenzo, P.; Moller, M.; Pera Mira, O.; Salter, W.; Trevisani, F.; Toteva, Z.
2014-06-01
The large potential and flexibility of the ServiceNow infrastructure, based on "best practices" methods, is enabling the migration of some of the ticketing systems traditionally used for monitoring the servers and services at the CERN IT Computer Centre. This migration enables the standardization and globalization of the ticketing and control systems, implementing a generic system extensible to other departments and users. One of the activities of the Service Management project, together with the Computing Facilities group, has been the migration of the ITCM structure from Remedy to ServiceNow within the context of the ITIL process called Event Management. The experience gained during the first months of operation has been instrumental in the migration of other service monitoring systems and databases to ServiceNow. The usage of this structure has also been extended to service tracking at the Wigner Centre in Budapest.
Park, Dae-Heon; Park, Jang-Woo
2011-01-01
Dew condensation on the leaf surfaces of greenhouse crops can promote diseases caused by fungi and bacteria, affecting the growth of the crops. In this paper, we present a WSN (Wireless Sensor Network)-based automatic monitoring system to prevent dew condensation in a greenhouse environment. The system is composed of sensor nodes for collecting data, base nodes for processing the collected data, relay nodes for driving devices that adjust the environment inside the greenhouse, and an environment server for data storage and processing. Using the Barenbrug formula for calculating the dew point on the leaves, the system is designed to prevent dew condensation on the crop surface, an important element in preventing disease infection. We also constructed a physical model resembling a typical greenhouse in order to verify the performance of our system with regard to dew condensation control. PMID:22163813
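The abstract does not reproduce the Barenbrug formula itself, but the control logic it describes can be sketched with the widely used Magnus dew-point approximation standing in for it; the coefficient values and the 0.5 °C safety margin below are illustrative assumptions:

```python
import math

def dew_point_celsius(temp_c, rel_humidity):
    """Approximate dew point (deg C) from air temperature and relative
    humidity (%) using the Magnus formula (a stand-in for Barenbrug)."""
    a, b = 17.27, 237.7  # Magnus coefficients, valid roughly 0-60 deg C
    alpha = math.log(rel_humidity / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * alpha) / (a - alpha)

def condensation_risk(leaf_temp_c, air_temp_c, rel_humidity, margin=0.5):
    """Flag a dew-condensation risk when the leaf surface temperature falls
    within `margin` deg C of the computed dew point; the environment server
    would then drive the relay nodes to adjust the greenhouse climate."""
    return leaf_temp_c <= dew_point_celsius(air_temp_c, rel_humidity) + margin
```

At 100% relative humidity the dew point equals the air temperature, so a leaf at or below air temperature would trigger the risk flag.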
Verifying the secure setup of UNIX client/servers and detection of network intrusion
NASA Astrophysics Data System (ADS)
Feingold, Richard; Bruestle, Harry R.; Bartoletti, Tony; Saroyan, R. A.; Fisher, John M.
1996-03-01
This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today's global 'Infosphere' presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, and applications stores, transmits, and processes information. This information is now recognized as an asset by governments, corporations, and individuals alike, and it must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to check their security configuration. SPI's broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI's use within the Department of Energy and Department of Defense has resulted in more secure systems that are less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on an Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID's retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks on their systems and networks.
UNIX based client/server hospital information system.
Nakamura, S; Sakurai, K; Uchiyama, M; Yoshii, Y; Tachibana, N
1995-01-01
SMILE (St. Luke's Medical Center Information Linkage Environment) is an HIS, a client/server system using UNIX workstations over an open network LAN (FDDI & 10BASE-T). It provides a multivendor environment, high performance at low cost, and a user-friendly GUI. However, the client/server architecture with UNIX workstations does not offer the same OLTP environment (e.g., a TP monitor) as a mainframe. Therefore, our system's problems and the steps taken to solve them were reviewed. Several points that will be necessary for a client/server system with UNIX workstations in the future are presented.
Informatics in radiology (infoRAD): A complete continuous-availability PACS archive server.
Liu, Brent J; Huang, H K; Cao, Fei; Zhou, Michael Z; Zhang, Jianguo; Mogel, Greg
2004-01-01
The operational reliability of the picture archiving and communication system (PACS) server in a filmless hospital environment is always a major concern because server failure could cripple the entire PACS operation. A simple, low-cost, continuous-availability (CA) PACS archive server was designed and developed. The server makes use of a triple modular redundancy (TMR) system with a simple majority voting logic that automatically identifies a faulty module and removes it from service. The remaining two modules continue normal operation with no adverse effects on data flow or system performance. In addition, the server is integrated with two external mass storage devices for short- and long-term storage. Evaluation and testing of the server were conducted with laboratory experiments in which hardware failures were simulated to observe recovery time and the resumption of normal data flow. The server provides maximum uptime (99.999%) for end users while ensuring the transactional integrity of all clinical PACS data. Hardware failure has only minimal impact on performance, with no interruption of clinical data flow or loss of data. As hospital PACS become more widespread, the need for CA PACS solutions will increase. A TMR CA PACS archive server can reliably help achieve CA in this setting. Copyright RSNA, 2004
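The simple majority voting logic of a TMR system can be sketched as follows; the function name and the convention of returning the index of the disagreeing module are illustrative, not the server's actual implementation:

```python
def tmr_vote(a, b, c):
    """Majority vote over three redundant module outputs.

    Returns (agreed_value, faulty_index): the value at least two modules
    agree on, and the index (0, 1, or 2) of the disagreeing module so it
    can be removed from service, or None if all three agree.
    """
    if a == b == c:
        return a, None
    if a == b:
        return a, 2   # module c disagrees -> flag it for removal
    if a == c:
        return a, 1   # module b disagrees
    if b == c:
        return b, 0   # module a disagrees
    # With no two modules agreeing, more than one has failed and the
    # single-fault assumption of TMR no longer holds.
    raise RuntimeError("no majority: more than one module has failed")
```

The remaining two modules continue serving reads and writes; the flagged module can be repaired and re-synchronized without interrupting data flow.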
CLOUDCLOUD : general-purpose instrument monitoring and data managing software
NASA Astrophysics Data System (ADS)
Dias, António; Amorim, António; Tomé, António
2016-04-01
An effective experiment depends on the ability to store and deliver data and information to all participating parties, regardless of their degree of involvement in the specific parts that make the experiment a whole. Having fast, efficient and ubiquitous access to data will increase visibility and discussion, such that the outcome will have already been reviewed several times, strengthening the conclusions. The CLOUD project aims at providing users with a general-purpose data acquisition, management and instrument monitoring platform that is fast, easy to use, lightweight and accessible to all participants of an experiment. This work is now implemented in the CLOUD experiment at CERN and will be fully integrated with the experiment as of 2016. Despite being used in an experiment of the scale of CLOUD, this software can also be used in experiments or monitoring stations of any size, from single computers to large networks of computers, to monitor any sort of instrument output without influencing the individual instrument's DAQ. Instrument data and metadata are stored and accessed via a specially designed database architecture, and any type of instrument output is accepted by our continuously growing parsing application. Multiple databases can be used to separate different data-taking periods, or a single database can be used if, for instance, the experiment is continuous. A simple web-based application gives the user total control over the monitored instruments and their data, allowing data visualization and download, upload of processed data, and the ability to edit existing instruments or add new instruments to the experiment. When in a network, new computers are immediately recognized, added to the system, and able to monitor instruments connected to them.
Automatic computer integration is achieved by a locally running python-based parsing agent that communicates with a main server application guaranteeing that all instruments assigned to that computer are monitored with parsing intervals as fast as milliseconds. This software (server+agents+interface+database) comes in easy and ready-to-use packages that can be installed in any operating system, including Android and iOS systems. This software is ideal for use in modular experiments or monitoring stations with large variability in instruments and measuring methods or in large collaborations, where data requires homogenization in order to be effectively transmitted to all involved parties. This work presents the software and provides performance comparison with previously used monitoring systems in the CLOUD experiment at CERN.
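A minimal sketch of such a locally running parsing agent, assuming a hypothetical "timestamp,name,value" instrument record format and an abstract `send` callable standing in for the communication with the main server application:

```python
import time

def parse_line(line):
    """Parse one 'timestamp,name,value' instrument record.
    The CSV-like format is an illustrative assumption, not CLOUD's."""
    ts, name, value = line.strip().split(",")
    return {"timestamp": ts, "instrument": name, "value": float(value)}

def monitor(read_lines, send, interval_s=0.001, max_cycles=None):
    """Poll an instrument output source and forward parsed records to the
    main server via `send`. The polling interval can be as short as
    milliseconds; `max_cycles` bounds the loop for testing."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for line in read_lines():
            send(parse_line(line))
        time.sleep(interval_s)
        cycles += 1
```

In production, `read_lines` would tail the instrument's output file or serial port and `send` would post to the server's ingestion endpoint.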
Improvements to Autoplot's HAPI Support
NASA Astrophysics Data System (ADS)
Faden, J.; Vandegriff, J. D.; Weigel, R. S.
2017-12-01
Autoplot handles data from a variety of data servers. These servers communicate data in different forms, each somewhat different in capabilities and each needing new software to interface. The Heliophysics Application Programmer's Interface (HAPI) eases this by providing a standard target for clients and servers to meet. Autoplot fully supports reading data from HAPI servers, and support continues to improve as the HAPI server specification matures. This collaboration has already produced robust clients and documentation that would be expensive for groups creating their own protocols to replicate. For example, client-side data caching has been introduced, whereby Autoplot maintains a cache of data for performance and off-line use. This is a feature we considered for previous data systems, but we could never afford the time to study and implement it carefully. Also, Autoplot itself can be used as a server, making the data it can read and the results of its processing available to other data systems. Autoplot's use with other data transmission systems is reviewed as well, outlining the features of each system.
Secure Server Login by Using Third Party and Chaotic System
NASA Astrophysics Data System (ADS)
Abdulatif, Firas A.; zuhiar, Maan
2018-05-01
Servers are widely used by companies, but security threats against them make companies cautious in their use. In this paper, we therefore design a secure login system based on a one-time password and third-party authentication via a smart phone. The proposed system secures the server login process by using a one-time password to authenticate persons who have permission to log in, with the third-party device (smart phone) providing an additional level of security.
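One common way to realize a one-time password shared between a server and a smart phone is a time-based HMAC code in the style of RFC 6238; this is a sketch of that general technique, not the authors' specific chaotic-system scheme:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, t=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238 style). The phone and the
    server derive the same short code from a shared secret and the clock,
    so the code is useless to an eavesdropper after `step` seconds."""
    counter = int((t if t is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret, submitted, t=None):
    """Second login factor: constant-time comparison of the code the user
    typed from the smart phone against the server's own computation."""
    return hmac.compare_digest(totp(secret, t), submitted)
```

A production deployment would also accept codes from adjacent time steps to tolerate clock skew between phone and server.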
NASA Astrophysics Data System (ADS)
Shahzad, Muhammad A.
1999-02-01
With the emergence of data warehousing, decision support systems have evolved considerably. At the core of these warehousing systems lies a good database management system. The database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing, and integration with other servers. Oracle, an early entrant among warehousing servers, provides a wide range of features for facilitating data warehousing. This paper reviews the features of data warehousing, first conceptualizing data warehousing itself and then covering the features of Oracle servers for implementing a data warehouse.
[Radiology information system using HTML, JavaScript, and Web server].
Sone, M; Sasaki, M; Oikawa, H; Yoshioka, K; Ehara, S; Tamakawa, Y
1997-12-01
We have developed a radiology information system using intranet techniques, including hypertext markup language, JavaScript, and Web server. JavaScript made it possible to develop an easy-to-use application, as well as to reduce network traffic and load on the server. The system we have developed is inexpensive and flexible, and its development and maintenance are much easier than with the previous system.
Upgrade to the control system of the reflectometry diagnostic of ASDEX upgrade
NASA Astrophysics Data System (ADS)
Graça, S.; Santos, J.; Manso, M. E.
2004-10-01
The broadband frequency modulation-continuous wave microwave/millimeter wave reflectometer of ASDEX upgrade tokamak (Institut für Plasma Physik (IPP), Garching, Germany) developed by Centro de Fusão Nuclear (Lisboa, Portugal) with the collaboration of IPP, is a complex system with 13 channels (O and X modes) and two types of operation modes (swept and fixed frequency). The control system that ensures remote operation of the diagnostic incorporates VME and CAMAC bus based acquisition/timing systems. Microprocessor input/output boards are used to control and monitor the microwave circuitry and associated electronic devices. The implementation of the control system is based on an object-oriented client/server model: a centralized server manages the hardware and receives input from remote clients. Communication is handled through transmission control protocol/internet protocol sockets. Here we describe recent upgrades of the control system aiming to: (i) accommodate new channels; (ii) adapt to the heterogeneity of computing platforms and operating systems; and (iii) overcome remote access restrictions. Platform and operating system independence was achieved by redesigning the graphical user interface in JAVA. As secure shell is the standard remote access protocol adopted in major fusion laboratories, secure shell tunneling was implemented to allow remote operation of the diagnostic through the existing firewalls.
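The centralized client/server control model can be sketched with TCP/IP sockets as follows; the "STATUS"/"SWEEP" text commands are hypothetical placeholders for the diagnostic's actual protocol:

```python
import socket
import threading

def handle(conn):
    """Serve simple line-oriented text commands from one remote client.
    The commands and replies are illustrative, not the real protocol."""
    with conn:
        for raw in conn.makefile():
            cmd = raw.strip()
            if cmd == "STATUS":
                conn.sendall(b"OK channels=13\n")
            elif cmd.startswith("SWEEP"):
                conn.sendall(b"ACK sweep started\n")
            else:
                conn.sendall(b"ERR unknown command\n")

def serve(host="127.0.0.1", port=0):
    """Centralized control server: manages the hardware and accepts input
    from remote clients over TCP/IP sockets, one thread per client.
    Returns the port actually chosen by the OS (port=0 picks a free one)."""
    srv = socket.create_server((host, port))
    def loop():
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=handle, args=(conn,), daemon=True).start()
    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]
```

For remote operation through firewalls, such a plain TCP channel would be carried inside an SSH tunnel, as the abstract describes.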
Implementing TCP/IP and a socket interface as a server in a message-passing operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hipp, E.; Wiltzius, D.
1990-03-01
The UNICOS 4.3BSD network code and socket transport interface are the basis of an explicit network server for NLTSS, a message passing operating system on the Cray YMP. A BSD socket user library provides access to the network server using an RPC mechanism. The advantages of this server methodology are its modularity and extensibility to migrate to future protocol suites (e.g. OSI) and transport interfaces. In addition, the network server is implemented in an explicit multi-tasking environment to take advantage of the Cray YMP multi-processor platform. 19 refs., 5 figs.
Providing Internet Access to High-Resolution Mars Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
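An OGC WMS GetMap request of the kind the OnMars server answers is just an HTTP URL with standard parameters; the base URL, layer name, and Mars CRS code below are illustrative placeholders:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width, height,
                   crs="IAU2000:49900", fmt="image/jpeg"):
    """Build an OGC WMS 1.3.0 GetMap request URL.

    `bbox` is (min_x, min_y, max_x, max_y) in the given CRS. The default
    CRS code stands in for a Mars-specific coordinate system; the actual
    codes served are an assumption here.
    """
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "CRS": crs,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return base + "?" + urlencode(params)
```

A GIS client issues such URLs tile by tile; changing `FORMAT` selects TIFF, JPEG, or PNG output as described above.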
Smiianov, Vladyslav A; Dryha, Natalia O; Smiianova, Olha I; Obodyak, Victor K; Zudina, Tatyana O
2018-01-01
Introduction: Today, mobile health protection services have no concrete definition. As a research object it is called mHealth, which the Global Observatory for eHealth describes as "medical and public health practice supported by mobile devices (mobile phones or smartphones), patient health monitoring devices, personal computers and other wireless communication devices". The active use of SMS in programs for keeping patients on their treatment regimens was quite predictable. Mobile and electronic devices are only beginning their development in the medical sphere. Thus, to address the problems of reforming the health protection system, a special memorandum on cooperation in creating an e-Health system in Ukraine was signed. The aim: Development of an ICS for monitoring, and optimization of the system for informing patients with non-infectious diseases, at the first level of medical care. Materials and methods: During the research, we used a systematic approach, meta-analysis, design of information-analytical system schemes, and descriptive modeling. For the backend (server side of the site), we used the following technologies: 1) the Apache web server; 2) the PHP programming language; 3) the Yii 2 PHP framework. For the frontend (client side of the site), the following technologies were used: 1) Bootstrap 3; 2) the Vue.js framework. Results and conclusions: The created two-channel "doctor-patient" and "patient-doctor" system will allow ordinary doctors of family medicine (DFM) to carry out interactive dispensary treatment and avoid uncontrolled disease progression. The doctor will monitor the basic physical data of the patient's health and the treatment process. The main goal is to create an automatic system allowing the doctor to regularly send periodic or non-periodic notifications, receive patients' questionnaire answers, and exchange information between doctor and patient, which will optimize the work of DFMs.
Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay
2014-01-01
The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.
Series quartz crystal sensor for remote bacteria population monitoring in raw milk via the Internet.
Chang, Ku-Shang; Jang, Hung-Der; Lee, Ching-Fu; Lee, Yuan-Guey; Yuan, Chiun-Jye; Lee, Sheng-Hsien
2006-02-15
A remote monitoring system based on a series piezoelectric quartz crystal (SPQC) sensor was developed for determining the bacteria population in raw milk. The system employs the Windows XP server operating system, and its programs for data acquisition, display, and transmission were developed in the LabVIEW 7.1 programming language. The circuit design consists of a series piezoelectric quartz crystal and a pair of electrodes. The system provides dynamic data monitoring on a web page via the Internet. Immersing the electrodes in a cell culture inoculated with bacteria resulted in a change of frequency caused by the impedance change due to microbial metabolism and the adherence of bacteria to the surface of the electrodes. The calibration curve of detection times against bacteria density showed a linear correlation coefficient (R^2 = 0.9165) over the range of 70-10^6 CFU ml^-1. The sensor could acquire sufficient data rapidly (within 4 h) and thus enabled real-time monitoring of bacteria growth via the Internet. This system has potential application in detecting the bacteria concentration of milk at dairy farms.
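The reported calibration curve is an ordinary least-squares fit of detection time against the logarithm of bacteria density; a sketch with hypothetical calibration points, since the paper's raw data are not given here:

```python
import math

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x, returning (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

# Hypothetical calibration points: detection time (h) vs log10 density
# (CFU/ml). Higher inoculums are detected sooner, so the slope is negative.
density = [math.log10(d) for d in (1e2, 1e3, 1e4, 1e5, 1e6)]
time_h = [3.8, 3.1, 2.4, 1.8, 1.1]
a, b, r2 = linear_fit(density, time_h)
```

Inverting the fitted line turns a measured detection time into an estimate of the initial bacteria density of the milk sample.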
NASA Astrophysics Data System (ADS)
Sasikala, S.; Indhira, K.; Chandrasekaran, V. M.
2017-11-01
In this paper, we consider an M^X/(a,b)/1 queueing system with server breakdown without interruption, multiple vacations, setup times, and N-policy. After a batch service, if the queue size is ξ (< a), the server immediately takes a vacation. Upon returning from a vacation, if the queue length is less than N, the server takes another vacation. This process continues until the server finds at least N customers in the queue. After a vacation, if the server finds at least N customers waiting for service, the server needs a setup time to start the service. After a batch service, if the number of waiting customers in the queue is ξ (≥ a), the server serves a batch of min(ξ, b) customers, where b ≥ a. We derive the probability generating function of the queue length at an arbitrary time epoch and obtain some important performance measures.
Jiang, Jiehui; Yan, Zhuangzhi; Kandachar, Prabhu; Freudenthal, Adinda
2010-05-01
High blood pressure (BP, hypertension) is a leading chronic condition in China and has become the main risk factor for many high-risk diseases, such as heart attacks. However, a platform for chronic disease measurement and management is still lacking, especially for underserved Chinese populations. To achieve early diagnosis of hypertension, a BP monitoring system has been designed. The proposed design consists of three main parts: a user domain, a server domain, and a channel domain. All three units, their realization, and validation tests of reliability and usability are described in this paper. The conclusion is that the current design concept is feasible and that the system can be developed toward sufficient reliability and affordability with further optimization. The idea might also be extended into a platform for other physiological signals, such as blood sugar and ECG.
Real-Time and Secure Wireless Health Monitoring
Dağtaş, S.; Pekhteryev, G.; Şahinoğlu, Z.; Çam, H.; Challa, N.
2008-01-01
We present a framework for a wireless health monitoring system using wireless networks such as ZigBee. Vital signals are collected and processed using a 3-tiered architecture. The first stage is the mobile device carried on the body that runs a number of wired and wireless probes. This device is also designed to perform some basic processing such as the heart rate and fatal failure detection. At the second stage, further processing is performed by a local server using the raw data transmitted by the mobile device continuously. The raw data is also stored at this server. The processed data as well as the analysis results are then transmitted to the service provider center for diagnostic reviews as well as storage. The main advantages of the proposed framework are (1) the ability to detect signals wirelessly within a body sensor network (BSN), (2) low-power and reliable data transmission through ZigBee network nodes, (3) secure transmission of medical data over BSN, (4) efficient channel allocation for medical data transmission over wireless networks, and (5) optimized analysis of data using an adaptive architecture that maximizes the utility of processing and computational capacity at each platform. PMID:18497866
Pak, JuGeon; Park, KeeHyun
2012-01-01
We propose a smart medication dispenser with a high degree of scalability and remote manageability. The dispenser has an extensible hardware architecture for scalability, and an agent program is installed in it for remote manageability. The dispenser operates as follows: when the real-time clock reaches the predetermined medication time and the user presses the dispense button, the predetermined medication is dispensed from the medication dispensing tray (MDT). In the proposed dispenser, the medication for each patient is stored in an MDT. A smart medication dispenser normally contains one MDT; however, it can be extended with more MDTs in order to support multiple users with one dispenser. For remote management, the proposed dispenser transmits the medication status and the system configuration to the monitoring server. In the case of a specific event such as a shortage of medication, memory overload, a software error, or non-adherence, the event is transmitted immediately. All these operations are performed automatically, without patient intervention, through the agent program installed in the dispenser. Implementation and verification results show that the proposed dispenser operates normally and suitably performs the management operations issued from the medication monitoring server.
Long-Term Animal Observation by Wireless Sensor Networks with Sound Recognition
NASA Astrophysics Data System (ADS)
Liu, Ning-Han; Wu, Chen-An; Hsieh, Shu-Ju
Because wireless sensor networks transmit data wirelessly and can be deployed easily, they are used in the wild to monitor environmental change. However, the lifetime of a sensor is limited by its battery; especially when the monitored data type is audio, the lifetime is very short due to the huge amount of data transmitted. Intuitively, if the sensor mote analyzes the sensed data and decides not to deliver it to the server, the energy expense can be reduced. Nevertheless, the sensor mote is not powerful enough to run complicated methods. Therefore, designing a method that maintains analysis speed and accuracy under restricted memory and processing power is an urgent issue. This research proposes an embedded audio processing module in the sensor mote to extract and analyze audio features in advance. Then, by estimating the likelihood of an observed animal sound from its frequency distribution, only the interesting audio data are sent back to the server. A prototype WSN system was built and tested in the wild to observe frogs. According to the experimental results, the energy consumed by sensors using our method can be reduced effectively, prolonging the observation time of animal-detecting sensors.
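The mote-side screening idea, estimating whether a frame's frequency distribution matches the target call before transmitting, can be sketched as follows; the band limits and threshold are illustrative, not the paper's values:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT magnitude spectrum (bins 0..n/2-1); adequate for the
    short analysis frames a low-power mote would process."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n // 2)]

def is_target_call(samples, rate, band_hz, threshold):
    """Decide on the mote whether a frame likely contains the animal call:
    transmit only if the energy inside the call's frequency band dominates
    the spectrum. Band and threshold are illustrative assumptions."""
    mags = dft_magnitudes(samples)
    hz_per_bin = rate / len(samples)
    lo = int(band_hz[0] / hz_per_bin)
    hi = int(band_hz[1] / hz_per_bin)
    band = sum(m * m for m in mags[lo:hi + 1])
    total = sum(m * m for m in mags) or 1.0
    return band / total >= threshold
```

Frames that fail the test are dropped on the mote, saving the radio transmission that dominates the energy budget.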
The Mayak Worker Dosimetry System (MWDS-2013): Implementation of the Dose Calculations.
Zhdanov, А; Vostrotin, V; Efimov, А; Birchall, A; Puncher, M
2016-07-15
The calculation of internal doses for the Mayak Worker Dosimetry System (MWDS-2013) required extensive computational resources due to the complexity and sheer number of calculations involved. The required output consisted of a set of 1000 hyper-realizations, each consisting of a set (one per worker) of probability distributions of organ doses. This report describes the hardware components and computational approaches required to make the calculation tractable. Together with the software, this system is referred to here as the 'PANDORA system'. It is based on a commercial SQL server database spread across six workstations. A complete run of the entire Mayak worker cohort entailed a huge number of calculations in PANDORA, and due to the relatively slow speed of writing the data into the SQL server, each run took about 47 days. Quality control was monitored by comparing doses calculated in PANDORA with those from a specially modified version of the commercial software 'IMBA Professional Plus'. Suggestions are also made for increasing calculation and storage efficiency in future dosimetry calculations using PANDORA. © The Author 2016. Published by Oxford University Press. All rights reserved.
NASA Astrophysics Data System (ADS)
Weber, K.; Schnase, J. L.; Carroll, M.; Brown, M. E.; Gill, R.; Haskett, G.; Gardner, T.
2013-12-01
In partnership with the Department of Interior's Bureau of Land Management (BLM) and the Idaho Department of Lands (IDL), we are building and evaluating the RECOVER decision support system. RECOVER - which stands for Rehabilitation Capability Convergence for Ecosystem Recovery - is an automatically deployable, context-aware decision support system for savanna wildfires that brings together in a single application the information necessary for post-fire rehabilitation decision-making and long-term ecosystem monitoring. RECOVER uses state-of-the-art cloud-based data management technologies to improve performance, reduce cost, and provide site-specific flexibility for each fire. The RECOVER Server uses Integrated Rule-Oriented Data System (iRODS) data grid technology deployed in the Amazon Elastic Compute Cloud (EC2). The RECOVER Client is an Adobe Flex web map application that is able to provide a suite of convenient GIS analytical capabilities. In a typical use scenario, the RECOVER Server is provided a wildfire name and geospatial extent. The Server then automatically gathers Earth observational data and other relevant products from various geographically distributed data sources. The Server creates a database in the cloud where all relevant information about the wildfire is stored. This information is made available to the RECOVER Client and ultimately to fire managers through their choice of web browser. The Server refreshes the data throughout the burn and subsequent recovery period (3-5 years) with each refresh requiring two minutes to complete. Since remediation plans must be completed within 14 days of a fire's containment, RECOVER has the potential to significantly improve the decision-making process. RECOVER adds an important new dimension to post-fire decision-making by focusing on ecosystem rehabilitation in semiarid savannas. 
A novel aspect of RECOVER's approach involves the use of soil moisture estimates, which are an important but difficult-to-obtain element of post-fire rehabilitation planning. We will use downscaled soil moisture data from three primary observational sources to begin evaluation of soil moisture products and build the technology needed for RECOVER to use future SMAP products. As a result, RECOVER, BLM, and the fire applications community will be ready customers for data flowing out of new NASA missions, such as NPP, LDCM, and SMAP.
Lee, Ren-Guey; Lai, Chien-Chih; Chiang, Shao-Shan; Liu, Hsin-Sheng; Chen, Chun-Chang; Hsieh, Guan-Yu
2006-01-01
In accordance with the home healthcare requirements of chronic patients, this paper proposes a mobile-care system integrated with a variety of vital-sign monitoring, in which all the front-end vital-sign measuring devices are portable and capable of short-range wireless communication. In order to make the system more suitable for home applications, wireless sensor network technology is introduced to transmit the captured vital signs to the residential gateway by means of multi-hop relay. The residential gateway then uploads the data to the care server via the Internet to carry out monitoring of the patient's condition and management of pathological data. Furthermore, an alarm mechanism is added to the system, by which the portable care device is able to immediately perceive a critical condition of the patient and send a warning message to medical and nursing personnel in order to achieve prompt rescue.
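The alarm mechanism described above, where the care device flags a vital sign that leaves its safe range and emits a warning message, might look like this minimal sketch. The threshold values and field names are illustrative assumptions, not clinical or system-specific values:

```python
# Illustrative safe ranges; a real system would use clinically set limits.
THRESHOLDS = {"heart_rate": (40, 140), "spo2": (90, 100)}

def check_vitals(sample):
    """Return a warning message for every vital sign outside its range."""
    alarms = []
    for sign, value in sample.items():
        lo, hi = THRESHOLDS.get(sign, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            alarms.append(f"ALERT: {sign}={value} outside [{lo}, {hi}]")
    return alarms

alarms = check_vitals({"heart_rate": 35, "spo2": 97})
```

In the proposed system a check of this kind would run on the portable device itself, so the warning can be relayed through the gateway without waiting for the care server.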
Process evaluation distributed system
NASA Technical Reports Server (NTRS)
Moffatt, Christopher L. (Inventor)
2006-01-01
The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information; the process evaluation module utilizes a personal digital assistant (PDA). A data display module is in communication with the database server and includes a website for viewing collected process data in a desired metrics form; the data display module also provides for desired editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module minimizes the requirement for manual input of the collected process data.
Parallel Computing Using Web Servers and "Servlets".
ERIC Educational Resources Information Center
Lo, Alfred; Bloor, Chris; Choi, Y. K.
2000-01-01
Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
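A virtual parallel computer of the kind described, a client that partitions work across multiple web servers and combines the partial results, can be illustrated with a local simulation. Here `simulated_server` stands in for an HTTP request to one servlet and is purely hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def simulated_server(chunk):
    """Stand-in for an HTTP request to one worker web server (servlet)."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_servers=4):
    # Partition the data round-robin, one chunk per "server", dispatch
    # the chunks concurrently, then combine the partial sums.
    chunks = [data[i::n_servers] for i in range(n_servers)]
    with ThreadPoolExecutor(max_workers=n_servers) as pool:
        return sum(pool.map(simulated_server, chunks))

result = parallel_sum_of_squares(list(range(1000)))
```

In the multiple-servers, single-client model the paper compares, each chunk would instead be posted to a different server's URL, with the same scatter-gather structure.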
CDC WONDER: a cooperative processing architecture for public health.
Friede, A; Rosen, D H; Reid, J A
1994-01-01
CDC WONDER is an information management architecture designed for public health. It provides access to information and communications without the user's needing to know the location of data or communication pathways and mechanisms. CDC WONDER users have access to extractions from some 40 databases; electronic mail (e-mail); and surveillance data processing. System components include the Remote Client, the Communications Server, the Queue Managers, and Data Servers and Process Servers. The Remote Client software resides in the user's machine; other components are at the Centers for Disease Control and Prevention (CDC). The Remote Client, the Communications Server, and the Applications Server provide access to the information and functions in the Data Servers and Process Servers. The system architecture is based on cooperative processing, and components are coupled via pure message passing, using several protocols. This architecture allows flexibility in the choice of hardware and software. One system limitation is that final results from some subsystems are obtained slowly. Although designed for public health, CDC WONDER could be useful for other disciplines that need flexible, integrated information exchange. PMID:7719813
Experience with Adaptive Security Policies.
1998-03-01
3.1 Introduction ... 3.2 Logical groupings of audited permission checks ... 3.3 Auditing of system servers via microkernel snooping ... performed by servers other than the microkernel. Since altering each server to audit events would complicate the integration of new servers, a ... modification to the microkernel was implemented to allow the microkernel to audit the requests made of other servers. Both methods for enhancing audit ...
2007-11-01
accuracy. FPGA ADC data acquisition is controlled by distributed Java-based software. A Java-based server application sits on each of the acquisition... JNI (Java Native Interface) is used to allow Java indirect control of the USB driver. Fig. 5. Photograph of mobile electronics rack... supplies with the monitor and keyboard. The server application on each of these machines is controlled by a remote client Java-based application
NASA Astrophysics Data System (ADS)
Kwon, Hyeokjun; Oh, Sechang; Kumar, Prashanth S.; Varadan, Vijay K.
2012-10-01
Cardiovascular diseases (CVDs) can lead to sudden cardiac death through irregularities in the cardiac signal caused by abnormalities of the blood vessels and cardiac structure. For the last two decades, research on cardiac disease in men has been under active discussion. As a result, the death rate from cardiac disease in men has been falling gradually, while the death rate in women due to CVD has been relatively increasing [2]. The main reasons for this are a lack of awareness of the seriousness of female CVD and the different symptoms of female CVD compared with those of male CVD. Because women's CVD is usually accompanied by ordinary symptoms that do not suggest a heart abnormality, such as unusual fatigue, sleep disturbances, shortness of breath, anxiety, chest discomfort, and indigestion (dyspepsia), most women with CVD do not realize that these symptoms are related to CVD. Therefore, periodic ECG observation is required for women with cardiac disease. Electrocardiogram (ECG) detection, treadmill/exercise ECG testing, nuclear scans, coronary angiography, and intracoronary ultrasound are used to diagnose heart abnormalities. Among these checkup methods, periodic ECG monitoring is a very effective approach for diagnosing cardiac disease and detecting heart abnormalities early. This paper suggests an effective ECG monitoring system for women in which the system is attached to a woman's brassiere using an augmented chest-lead attachment method. The suggested system consists of an ECG signal transmission system and a server program that displays and analyzes the transmitted ECG. The ECG signal transmission system consists of three parts: an ECG physical-signal detection part with two electrodes built from a gold nanowire structure, a data acquisition part with an AD converter, and a data transmission part using GPRS (General Packet Radio Service) communication.
Usually, Ag/AgCl or gold cup electrodes with conductive gel are used to detect human biosignals. However, the gel can dry out during long-term monitoring. The gold nanowire electrodes, which avoid the discomfort of gel, are attached beneath the chest position of a brassiere and convert the physical ECG signal into a voltage potential signal. This signal is digitized by the AD converter included in the microprocessor, and the converted ECG signal is saved every 1 s to the microprocessor's internal RAM. To transmit the saved data to a server computer at a remote location, the system uses GPRS communication, which can form a wide area network (WAN) without any gateway or repeater; the transmission system operates in GPRS client mode. The remote server runs a program that displays and analyzes the transmitted ECG. To display the ECG data, the program operates in TCP/IP server mode with a static IP address; to analyze the ECG data, the paper suggests a motion-artifact removal algorithm including an adaptive filter with LMS (least mean squares), a baseline detection algorithm using predictability estimation theory, a filter with a moving weighted factor, a low-pass filter, peak-to-peak detection, and interpolation.
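As one illustration of the motion-artifact removal stage, an LMS adaptive filter can be sketched as follows. The tap count, step size, and synthetic test signals are illustrative assumptions, not the parameters used by the authors:

```python
import math

def lms_filter(desired, reference, n_taps=4, mu=0.05):
    """Adaptive noise cancellation: estimate the artifact from the
    reference channel and subtract it from the desired (ECG) channel."""
    w = [0.0] * n_taps
    cleaned = []
    for n in range(len(desired)):
        # Tap-delay line over the reference signal.
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(n_taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))  # artifact estimate
        e = desired[n] - y                        # cleaned ECG sample
        w = [wi + 2 * mu * e * xi for wi, xi in zip(w, x)]
        cleaned.append(e)
    return cleaned

# Synthetic demo: a slow "ECG-like" sine corrupted by a faster artifact.
ecg = [math.sin(2 * math.pi * 0.01 * n) for n in range(2000)]
artifact = [0.5 * math.sin(2 * math.pi * 0.05 * n) for n in range(2000)]
corrupted = [s + a for s, a in zip(ecg, artifact)]
cleaned = lms_filter(corrupted, artifact)
```

In practice the reference channel would come from a motion sensor rather than being the artifact itself, and the step size would be tuned for stability against the measured signal power.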
DIABCARE Quality Network in Europe--a model for quality management in chronic diseases.
Piwernetz, K
2001-04-01
The DIABCARE Q-Net project developed a complete and integrated information technology system to monitor diabetes care according to the gold standards of the St Vincent Declaration Action Program. This is the first telematic platform for standardized documentation of medical quality and evaluation across Europe, and it will serve as a model for other chronic diseases. Quality development starts from the comparison of diabetes services, based on the key data on diabetes care in the basic information sheet. This is a 141-field form, to be completed once a year for each patient under the care of the diabetes team. The system performs an analysis of the local data and compares the data with peer teams by means of telecommunication of anonymous data. These data are collected regionally. At the next level, these regional data are compared on a national basis across Europe using dedicated communication lines. National data can be compared transnationally using the Internet and the DIABCARE benchmarking servers. These different lines are used according to the necessary security standards: medical data are transferred via dedicated lines, aggregated data via the Internet. The architecture follows an open-platform concept in order to allow for heterogeneous technical environments. Already at the start of the project, the necessity of expanding the quality approach to telemedicine methodology was identified and addressed. For each level, specific programs are available to improve the performance of diabetes care delivery: DIABCARE Data as the client, the DIABCARE server at the regional level, and the DIABCARE 'international server' at the transnational level. Functioning pilots were established across all levels, and the clients have been linked to the servers on a routine basis.
According to the open architecture design, the various countries decided on different systems at the entry point: full system--Portugal; fax systems--Italy, Bavaria; implementation into doctor's office systems--Norway; paper forms and chip cards--France. This system can improve the local, regional and national diabetes care. Initiatives in several countries proved the feasibility of the system. The most extensive use, from Portugal, will be reported later in this paper. The exploitation of the DIABCARE Q-Net system will be performed with the DIABCARE International European Economic Interest Grouping as a co-ordinator and several commercial companies as contractors to market the products inside the system. The key project participants are: DIABCARE Office EURO, DIABCARE Portugal, DIABCARE France, DIABCARE Bavaria, DIABCARE UK, DIABCARE Netherlands, DIABCARE Norway, DIABCARE Italy, DIABCARE Sweden, DIABCARE Austria, DIABCARE Spain, GSF Research Centre for Health and Environment, FAST Research Institute for Applied Software Technology, Tromsø University Hospital, Stavanger Technical College, Technical University of Ilmenau, World Health Organisation (WHO), Regional Office for Europe.
Network Upgrade for the SLC: PEP II Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crane, M.; Call, M.; Clark, S.
2011-09-09
The PEP-II control system required a new network to support the system functions. This network, called CTLnet, is an FDDI/Ethernet-based network using only TCP/IP protocols. An upgrade of the SLC control system micro communications to use TCP/IP and SLCNET would allow all PEP-II control system nodes to use TCP/IP. CTLnet is private and separate from the SLAC public network. Access to nodes and control system functions is provided by multi-homed application servers with connections to both the private CTLnet and the SLAC public network. Monitoring and diagnostics are provided using a dedicated system. Future plans and current status information are included.
Integrated Speed Limiter and Fatigue Analyzer System
NASA Astrophysics Data System (ADS)
Pranoto, Hadi; Leman, A. M.; Wahab, Abdi; Sebayang, Darwin
2018-03-01
Traffic accidents increase in line with the growth in the number of vehicles, so safety systems must be developed to reduce accidents. This paper proposes integrating a speed limiter and a fatigue analyzer to improve vehicle safety and to support analysis if an accident occurs. The device and the software application were developed and then integrated into one system. Testing was carried out to prove the integration between the device and the application, and it showed that the system works well. The next improvement for this system is to develop a server that collects data over the internet, so that the driver and the vehicle owner can monitor the system online.
IoT based Growth Monitoring System of Guava (Psidium guajava L.) Fruits
NASA Astrophysics Data System (ADS)
Slamet, W.; Irham, N. M.; Sutan, M. S. A.
2018-05-01
Growth monitoring of plants is important, especially to evaluate the influence of the environment or growing conditions on productivity. One way to monitor plant growth is to measure the radial growth (i.e., the change in circumference) of a certain part of the plant, such as the trunk, a branch, or a fruit. In this study we develop an Internet of Things (IoT) based monitoring system for the radial growth of plants using a low-cost optoelectronic sensor. The system was applied to monitor the radial growth of guava fruits (Psidium guajava L.). The developed sensor is based on an optoelectronic sensor that detects alternating white and black narrow bars printed on reflective tape. The reflective tape was installed encircling the fruit. The movement of the reflective tape follows the radial growth of the fruit, so the infrared sensor in the optoelectronic device responds to the tape's movement. The device is designed to measure the object continuously and to monitor it long-term with minimum maintenance. The data collected by the sensors are then sent to the server and can also be monitored in real time. Based on a field test, at the current stage, the developed sensor could measure the radial growth of the fruits with a maximum error of 2 mm. In terms of data transfer, the success rate of the developed system was 97.54%. The results indicate that the developed system can be used as an effective tool for monitoring plant growth.
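The bar-counting principle (each black/white edge passing the infrared sensor corresponds to one bar width of tape movement, and hence of circumference growth) can be sketched as below. The 1 mm bar width and the edge-to-growth assumption are illustrative, not the paper's calibration:

```python
def count_edges(samples):
    """Count black/white transitions seen by the infrared sensor."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a != b)

def circumference_growth_mm(samples, bar_width_mm=1.0):
    # Assumption: each detected edge means the tape has moved past one
    # bar boundary, i.e. the circumference grew by one bar width.
    return count_edges(samples) * bar_width_mm

readings = [0, 0, 1, 1, 1, 0, 0, 1]  # 0 = black bar, 1 = white bar
growth = circumference_growth_mm(readings)  # 3 edges detected
```

On the device this counting would run continuously in the sensor firmware, with only the accumulated growth value sent to the server.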
A Study on Secure Medical-Contents Strategies with DRM Based on Cloud Computing
Ko, Hoon; Měsíček, Libor; Choi, Jongsun; Hwang, Seogchan
2018-01-01
Many hospitals and medical clinics have been using wearable sensors in their health care systems because wearable sensors, which can measure patients' biometric information, have been developed to analyze patients remotely. The measured information is saved to a server in a medical center, and the server keeps the medical information, which also involves personal information, on a cloud system. The server and network devices operate connected to each other, and sensitive medical records are handled remotely. These days, however, attackers who try to attack the server or the network systems are increasing, and the server and the network system often have weak protection and security policies against such attackers. In this paper, it is suggested that security compliance for medical contents should be followed to improve the level of security, so that the medical contents are kept safe. PMID:29796233
System and Method for Providing a Climate Data Persistence Service
NASA Technical Reports Server (NTRS)
Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)
2018-01-01
A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.
Distributed road assessment system
Beer, N. Reginald; Paglieroni, David W
2014-03-25
A system that detects damage on or below the surface of a paved structure or pavement is provided. A distributed road assessment system includes road assessment pods and a road assessment server. Each road assessment pod includes a ground-penetrating radar antenna array and a detection system that detects road damage from the return signals as the vehicle on which the pod is mounted travels down a road. Each road assessment pod transmits to the road assessment server occurrence information describing each occurrence of road damage that is newly detected on a current scan of a road. The road assessment server maintains a road damage database of occurrence information describing the previously detected occurrences of road damage. After the road assessment server receives occurrence information for newly detected occurrences of road damage for a portion of a road, the road assessment server determines which newly detected occurrences correspond to which previously detected occurrences of road damage.
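The server-side matching step (deciding which newly detected occurrences correspond to previously recorded occurrences of road damage) could be implemented as a nearest-neighbor association within a distance tolerance. This is a simplified sketch, not the patented algorithm, and the tolerance value is an assumption:

```python
def dist(a, b):
    """Euclidean distance between two (x, y) positions in metres."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def match_occurrences(new, previous, tol_m=5.0):
    """Pair each newly detected damage location with the nearest
    previously recorded one within tol_m metres; the rest are new."""
    matches, unmatched = {}, []
    remaining = dict(previous)  # occurrence id -> (x, y)
    for loc in new:
        best = min(remaining.items(), key=lambda kv: dist(loc, kv[1]),
                   default=None)
        if best is not None and dist(loc, best[1]) <= tol_m:
            matches[loc] = best[0]
            remaining.pop(best[0])
        else:
            unmatched.append(loc)
    return matches, unmatched

previous = {1: (0.0, 0.0), 2: (10.0, 0.0)}   # known damage occurrences
new = [(0.5, 0.0), (30.0, 0.0)]              # detections from current scan
matches, unmatched = match_occurrences(new, previous)
```

Unmatched detections would be inserted into the road damage database as new occurrences, while matched ones update the existing records.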
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, H.; Chen, K.; Jusko, M.
The Packaging Certification Program (PCP) of the U.S. Department of Energy (DOE) Environmental Management (EM), Office of Packaging and Transportation (EM-14), has developed a radio frequency identification (RFID) tracking and monitoring system for the management of nuclear materials during storage and transportation. The system, developed by the PCP team at Argonne National Laboratory, consists of hardware (Mk-series sensor tags, fixed and handheld readers, form factors for multiple drum types, seal integrity sensors, and enhanced battery management), software (application programming interface, ARG-US software for local and remote/web applications, secure server and database management), and cellular/satellite communication interfaces for vehicle tracking and item monitoring during transport. The ability of the above system to provide accurate, real-time tracking and monitoring of the status of multiple certified containers of nuclear materials was successfully demonstrated in a week-long, 1,700-mile DEMO performed in April 2008. While the feedback from the approximately fifty (50) stakeholders who participated in and/or observed the DEMO progression was very positive and encouraging, two major areas of further improvement - system integration and web application enhancement - were identified in the post-DEMO evaluation. The principal purpose of the MiniDemo described in this report was to verify these two specific improvements. The MiniDemo was conducted on August 28, 2009. In terms of system integration, a hybrid communication interface, combining the RFID item-monitoring features and a commercial vehicle tracking system by Qualcomm, was developed and implemented. In the MiniDemo, the new integrated system worked well in reporting tag status and vehicle location accurately and promptly. There was no incompatibility of components. The robust commercial communication gear, as expected, helped improve system reliability.
The MiniDemo confirmed that system integration is technically feasible and reliable with the existing RFID and Qualcomm satellite equipment. In terms of web application, improvements in mapping, tracking, data presentation, and post-incident spatial query reporting were implemented in ARG-US, the application software that manages the dataflow among the RFID tags, readers, and servers. These features were tested in the MiniDemo and found to be satisfactory. The resulting web application is both informative and user-friendly. A joint developmental project is being planned between the PCP and the DOE TRANSCOM that uses the Qualcomm gear in vehicles for tracking and communication of radioactive material shipments across the country. Adding an RFID interface to TRANSCOM is a significant enhancement to the DOE infrastructure for tracking and monitoring shipments of radioactive materials.
Online decision support system for surface irrigation management
NASA Astrophysics Data System (ADS)
Wang, Wenchao; Cui, Yuanlai
2017-04-01
Irrigation has played an important role in agricultural production. An irrigation decision support system is developed for irrigation water management, which can raise irrigation efficiency with few added engineering services. An online irrigation decision support system (OIDSS), consisting of in-field sensors and a central computer system, is designed for surface irrigation management in a large irrigation district. Many functions are provided in the OIDSS, such as data acquisition and detection, real-time irrigation forecasting, water allocation decisions, and irrigation information management. The OIDSS contains four parts: data acquisition terminals, a web server, client browsers, and a communication system. The data acquisition terminals are designed to measure paddy water level, soil water content in dry land, pond water levels, groundwater level, and canal water levels. The web server is responsible for collecting meteorological data, weather forecast data, real-time field data, and managers' feedback data; water allocation decisions are made in the web server. The client browser is responsible for a friendly display, interacting with managers, and collecting managers' irrigation intentions. The communication system includes the internet and the GPRS network used by the monitoring stations. The OIDSS's model is based on a water balance approach for both lowland paddy and upland crops. Drawing on a basic database of different crops' water demands over the whole growth period and on irrigation system engineering information, the OIDSS can make efficient water allocation decisions with the help of real-time field water detection and weather forecasts. The system uses technical methods to reduce the requirement for users' specialized knowledge and can also take users' managerial experience into account. As the system is developed on the Browser/Server model, it can make full use of internet resources and serve users at any place where the internet exists.
The OIDSS has been applied in the Zhanghe Irrigation District (central China) to manage the required irrigation deliveries. Two years of application indicate that the proposed OIDSS can achieve promising performance for surface irrigation. Historical data from the rice growing period in 2014 were used to test the OIDSS: it produced three irrigation decisions, consistent with the actual number of irrigations, and the forecast irrigation dates fit the actual situations well; the corresponding total irrigation amount decreased by 15.13% compared with not using the OIDSS.
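A minimal water-balance decision rule of the kind the OIDSS describes (project the field water level forward using forecast rainfall and evapotranspiration, and trigger irrigation when the level is expected to fall below a management limit) might look like this. All limits and rates here are illustrative assumptions, not the OIDSS's calibrated values:

```python
def irrigation_decision(level_mm, rain_forecast_mm, et_mm_per_day,
                        lower_limit_mm=20.0, target_mm=50.0):
    """Project the ponded water depth day by day; if it is forecast to
    drop below the lower limit, recommend irrigating back to the target."""
    for day, rain in enumerate(rain_forecast_mm):
        level_mm += rain - et_mm_per_day
        if level_mm < lower_limit_mm:
            return day, target_mm - level_mm  # (day to irrigate, depth in mm)
    return None, 0.0  # no irrigation needed within the forecast horizon

# Current depth 30 mm, 3-day rain forecast, constant ET of 8 mm/day.
day, amount = irrigation_decision(30.0, [0.0, 5.0, 0.0], 8.0)
```

The real system layers crop-stage water demands and managers' feedback on top of such a balance, but the forecast-then-compare structure is the same.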
Wake-up transceivers for structural health monitoring of bridges
NASA Astrophysics Data System (ADS)
Kumberg, T.; Kokert, J.; Younesi, V.; Koenig, S.; Reindl, L. M.
2016-04-01
In this article we present a wireless sensor network to monitor the structural health of a large-scale highway bridge in Germany. The wireless sensor network consists of several sensor nodes that use wake-up receivers to realize latency-free and low-power communication. The sensor nodes are equipped either with very accurate tilt sensors developed by Northrop Grumman LITEF GmbH or with a NovAtel OEM615 GNSS receiver. Relay nodes are required to forward measurement data to a base station located on the bridge. The base station is a gateway that transmits the local measurement data to a remote server, where they can be further analyzed and processed. Furthermore, we present an energy harvesting system to supply the energy-demanding GNSS sensor nodes and realize long-term monitoring.
ATLAS tile calorimeter cesium calibration control and analysis software
NASA Astrophysics Data System (ADS)
Solovyanov, O.; Solodkov, A.; Starchenko, E.; Karyukhin, A.; Isaev, A.; Shalanda, N.
2008-07-01
An online control system to calibrate and monitor the ATLAS barrel hadronic calorimeter (TileCal) with a movable radioactive source, driven by liquid flow, is described. To read out and control the system, online software has been developed using ATLAS TDAQ components such as DVS (Diagnostic and Verification System) to verify the hardware before running, IS (Information Server) for data and status exchange between networked computers, and other components such as DDC (DCS-to-DAQ Connection) to connect to the PVSS-based slow control systems of the Tile Calorimeter, high voltage and low voltage. A system of scripting facilities, based on the Python language, is used to handle all the calibration and monitoring processes, from the hardware level to final data storage, including various abnormal situations. A Qt-based graphical user interface to display the status of the calibration system during the cesium source scan is described. The software for analysis of the detector response, using online data, is discussed. The performance of the system and first experience from the ATLAS pit are presented.
Seamless personal health information system in cloud computing.
Chung, Wan-Young; Fong, Ee May
2014-01-01
Noncontact ECG measurement has gained popularity due to its noninvasiveness and its convenience for daily-life use. This approach does not require any direct contact between the patient's skin and the sensor for physiological signal measurement. The noncontact ECG measurement is integrated with a mobile healthcare system for health status monitoring. The mobile phone acts as the personal health information system, displaying health status and tracking body mass index (BMI). Besides that, it plays an important role as medical guidance, providing a medical knowledge database including a symptom checker and health fitness guidance. At the same time, the system also features some unique medical functions that cater to the daily needs of patients or users, including regular medication reminders, an alert alarm, medical guidance, and appointment scheduling. Lastly, we demonstrate the mobile healthcare system with a web application for extended uses: health data are stored in the cloud on a web server system and web database. This allows easy remote health status monitoring and thus promotes a cost-effective personal healthcare system.
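The BMI-tracking function mentioned above reduces to a small calculation. The category cut-offs below are the standard WHO adult ranges; the function names are illustrative, not the application's API:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Standard WHO adult cut-offs.
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

status = bmi_category(bmi(70.0, 1.75))  # about 22.9
```

In the described system the phone would compute this locally from user-entered weight and height and sync the history to the web database for trend display.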
Optimal Resource Allocation under Fair QoS in Multi-tier Server Systems
NASA Astrophysics Data System (ADS)
Akai, Hirokazu; Ushio, Toshimitsu; Hayashi, Naoki
Recent developments in network technology have made multi-tier server systems practical, in which several tiers perform functionally different processing requested by clients. It is an important issue to allocate the systems' resources to clients dynamically based on their current requests. Q-RAM has been proposed for resource allocation in real-time systems. In server systems, it is important that the execution results of all applications requested by clients attain the same QoS (quality of service) level. In this paper, we extend Q-RAM to multi-tier server systems and propose a method for optimal resource allocation with fairness of the QoS levels of clients' requests. We also consider the problem of assigning physical machines in each tier to sleep so that the energy consumption is minimized.
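One simple way to equalize QoS levels under a shared resource budget, in the spirit of the fair allocation the paper proposes (though not its actual algorithm), is to binary-search for the highest common QoS level whose total resource cost fits the budget. The cost curves below are hypothetical:

```python
def fair_allocation(cost_funcs, budget, tol=1e-6):
    """Find the highest common QoS level q in [0, 1] such that the total
    resource needed by all clients to reach q stays within the budget.
    cost_funcs maps each client to a nondecreasing resource-cost function."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        q = (lo + hi) / 2
        if sum(f(q) for f in cost_funcs.values()) <= budget:
            lo = q  # q is affordable: try higher
        else:
            hi = q  # too expensive: try lower
    q = lo
    return q, {c: f(q) for c, f in cost_funcs.items()}

# Hypothetical cost curves: resource needed by each client to reach level q.
costs = {"A": lambda q: 10 * q, "B": lambda q: 20 * q ** 2}
q, alloc = fair_allocation(costs, budget=12.0)
```

Because every client is held to the same q, no client's QoS can be raised without lowering another's or exceeding the budget, which is the fairness notion sketched here.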
Global system for hydrological monitoring and forecasting in real time at high resolution
NASA Astrophysics Data System (ADS)
Ortiz, Enrique; De Michele, Carlo; Todini, Ezio; Cifres, Enrique
2016-04-01
This project, presented at EGU 2016, was born of solidarity and the need to dignify the most disadvantaged people living in the poorest countries (in Africa, South America, and Asia), who are continually exposed to changes in the hydrologic cycle, suffering large floods and/or long periods of drought. 2016 is also a special year, the Year of Mercy, in which we must engage with the most disadvantaged of our planet (Gaia), making available to them what we do professionally and scientifically. The non-profit project called "Global system for hydrological monitoring and forecasting in real time at high resolution" aims to provide, at global high resolution (1 km2), hydrological monitoring and forecasting in real time and continuously, coupling weather forecasts from global circulation models, such as GFS-0.25° (deterministic and ensemble runs), forcing a physically based, computationally efficient distributed hydrological model, the latest extended version of the TOPKAPI model, named TOPKAPI-eXtended. Finally, the MCP approach for the proper use of ensembles for predictive uncertainty assessment, essentially based on a multiple regression in the Normal space, can easily be extended to use ensembles to represent the locally (in time) smaller or larger conditional predictive uncertainty as a function of the ensemble spread. In this way, each prediction in time accounts for both the predictive uncertainty of the ensemble mean and that of the ensemble spread. To perform continuous hydrological modeling with the TOPKAPI-X model and obtain a hot start of the hydrological state of watersheds, the system assimilates rainfall and temperature products derived from remote sensing, such as the 3B42RT product of NASA's TRMM, among others. The system will be integrated into a Decision Support System (DSS) platform based on geographical data.
The DSS is a web application (for PC, tablet and mobile phone): it needs no installation (all you need is a web browser and an internet connection) and no updates (all upgrades are deployed on the remote server). The DSS is a classical client-server application. The client side will be an HTML 5/CSS 3 application that runs in any of the most common browsers. The server side consists of: a web server (Apache); a map server (GeoServer); a geographical relational database management system (PostgreSQL + PostGIS); and tools based on the GDAL libraries. A customized web page will be implemented to publish all hydrometeorological information and forecast runs, free of charge, for all users in the world. With this first presentation of the project, all scientific and technical people, universities, and research centers (public or private) who want to collaborate are invited to join, opening a brainstorming to improve the system. References: • Liu, Z. and Todini, E. (2002). Towards a comprehensive physically based rainfall-runoff model. Hydrology and Earth System Sciences (HESS), 6(5):859-881. • Thielen, J., Bartholmes, J., Ramos, M.-H., and de Roo, A. (2009). The European Flood Alert System - Part 1: Concept and development, Hydrol. Earth Syst. Sci., 13, 125-140. • Coccia, C., Mazzetti, C., Ortiz, E., Todini, E. (2010). A different soil conceptualization for the TOPKAPI model application within the DMIP 2. American Geophysical Union Fall Meeting, San Francisco, H21H-07. • Pappenberger, F., Cloke, H. L., Balsamo, G., Ngo-Duc, T., and Oki, T. (2010). Global runoff routing with the hydrological component of the ECMWF NWP system, Int. J. Climatol., 30, 2155-2174. • Coccia, G. and Todini, E. (2011). Recent developments in predictive uncertainty assessment based on the Model Conditional Processor approach. Hydrology and Earth System Sciences, 15, 3253-3274. • Wu, H., Adler, R.
F., Hong, Y., Tian, Y., and Policelli, F. (2012). Evaluation of Global Flood Detection Using Satellite-Based Rainfall and a Hydrologic Model, J. Hydrometeorol., 13, 1268-1284. • Smith, M., et al. (2013). The Distributed Model Intercomparison Project - Phase 2: Experiment Design and Summary Results of the Western Basin Experiments, Journal of Hydrology, 507, 300-329. • Pontificiae Academiae Scientiarvm (2014). Proceedings of the Joint Workshop, 2-6 May 2014: Sustainable Humanity, Sustainable Nature: Our Responsibility. Pontificiae Academiae Scientiarvm Extra Series 41. Vatican City. • Encyclical letter CARITAS IN VERITATE of the Supreme Pontiff Benedict XVI to the bishops, priests and deacons, men and women religious, the lay faithful and all people of good will, on integral human development in charity and truth. Vatican City, 2009. • Encyclical letter LAUDATO SI' of the Holy Father Francis on care for our common home. Vatican City, 2015.
Designing and Implementation of River Classification Assistant Management System
NASA Astrophysics Data System (ADS)
Zhao, Yinjun; Jiang, Wenyuan; Yang, Rujun; Yang, Nan; Liu, Haiyan
2018-03-01
In an earlier publication, we proposed a new Decision Classifier (DCF) for classifying Chinese rivers based on their structures. To expand, enhance and promote the application of the DCF, we built a computer system to support river classification, named the River Classification Assistant Management System. Based on the ArcEngine and ArcServer platforms, this system implements functions such as data management, river network extraction, river classification, and results publication, combining a Client/Server with a Browser/Server framework.
González, Fernando Cornelio Jimènez; Villegas, Osslan Osiris Vergara; Ramírez, Dulce Esperanza Torres; Sánchez, Vianey Guadalupe Cruz; Domínguez, Humberto Ochoa
2014-01-01
Technological innovations in the field of disease prevention and maintenance of patient health have enabled the evolution of fields such as monitoring systems. One of the main advances is the development of real-time monitors that use intelligent and wireless communication technology. In this paper, a system is presented for the remote monitoring of the body temperature and heart rate of a patient by means of a wireless sensor network (WSN) and mobile augmented reality (MAR). The combination of a WSN and MAR provides a novel alternative to remotely measure body temperature and heart rate in real time during patient care. The system is composed of (1) hardware such as Arduino microcontrollers (in the patient nodes), personal computers (for the nurse server), smartphones (for the mobile nurse monitor and the virtual patient file) and sensors (to measure body temperature and heart rate), (2) a network layer using WiFly technology, and (3) software such as LabView, Android SDK, and DroidAR. The results obtained from tests show that the system can perform effectively within a range of 20 m and requires ten minutes to stabilize the temperature sensor to detect hyperthermia, hypothermia or normal body temperature conditions. Additionally, the heart rate sensor can detect conditions of tachycardia and bradycardia. PMID:25230306
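The hyperthermia/hypothermia and tachycardia/bradycardia detection described above amounts to threshold classification of the two vital signs. A minimal sketch follows; the cutoffs (35/38 °C, 60/100 bpm) are typical clinical values assumed for illustration, as the abstract does not state the system's exact thresholds:

```python
# Illustrative thresholds; the paper's exact cutoffs are not given.
def classify_temperature(celsius: float) -> str:
    """Map a body temperature reading to a monitor condition."""
    if celsius < 35.0:
        return "hypothermia"
    if celsius > 38.0:
        return "hyperthermia"
    return "normal"

def classify_heart_rate(bpm: float) -> str:
    """Map a resting heart rate to a monitor condition."""
    if bpm < 60:
        return "bradycardia"
    if bpm > 100:
        return "tachycardia"
    return "normal"
```

In the described system these decisions would run on the nurse server after readings arrive from the patient nodes over the WSN.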
An Optimization of the Basic School Military Occupational Skill Assignment Process
2003-06-01
Corps Intranet (NMCI) supports it. We evaluated the use of Microsoft's SQL Server, but dismissed this after learning that TBS did not possess a SQL ...Server license or a qualified SQL Server administrator. SQL Server would have provided for additional security measures not available in MS...administrator. Although not as powerful as SQL Server, MS Access can handle the multi-user environment necessary for this system. The training
Enhanced networked server management with random remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2003-08-01
In this paper, the model is focused on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure to hook up long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with an unreliable main machine and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of the auxiliary machines changes at each activation in this enhanced model. Analytically tractable results are obtained using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
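The core availability idea can be illustrated by a toy Monte Carlo estimate: the system is up at an activation if the main machine is up or any remote backup can take over. This assumes independence between machines and treats each trial as one activation with its own backup availabilities; the paper's analytical model is richer, so the sketch is illustrative only:

```python
import random

def system_availability(a_main, backup_avail, trials=50_000, seed=7):
    """Monte Carlo estimate of system availability: up if the main
    server is up or at least one remote backup is available.
    Independence between machines is an assumption of this sketch."""
    rng = random.Random(seed)
    up = 0
    for _ in range(trials):  # each trial models one activation
        if rng.random() < a_main or any(rng.random() < a for a in backup_avail):
            up += 1
    return up / trials
```

With independent machines the exact value is 1 − (1 − a_main)·Π(1 − a_i), which the simulation should approach.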
Client - server programs analysis in the EPOCA environment
NASA Astrophysics Data System (ADS)
Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano
1996-09-01
Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer first has to ensure that the implementation behaves correctly, in particular that it is deadlock free. Second, he has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.
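The qualitative deadlock-freedom check that EPOCA automates over GSPN models can be illustrated with a toy reachability search over an untimed Petri net: enumerate reachable markings and report those in which no transition is enabled. The net encoding and place names below are invented for the example, not taken from EPOCA:

```python
from collections import deque

def find_deadlocks(transitions, initial_marking):
    """Exhaustive search of a bounded Petri net's marking graph.
    Each transition is a (consume, produce) pair of dicts keyed by
    place name; a marking is a sorted tuple of (place, tokens).
    Returns the set of reachable 'dead' markings (nothing enabled)."""
    seen, dead = set(), set()
    frontier = deque([initial_marking])
    while frontier:
        marking = frontier.popleft()
        if marking in seen:
            continue
        seen.add(marking)
        tokens = dict(marking)
        enabled = False
        for consume, produce in transitions:
            if all(tokens.get(p, 0) >= n for p, n in consume.items()):
                enabled = True
                nxt = dict(tokens)
                for p, n in consume.items():
                    nxt[p] -= n
                for p, n in produce.items():
                    nxt[p] = nxt.get(p, 0) + n
                frontier.append(tuple(sorted(
                    (p, n) for p, n in nxt.items() if n > 0)))
        if not enabled:
            dead.add(marking)
    return dead
```

For a request-reply protocol where the server returns to idle after replying, the search finds no dead marking; if the server token is consumed and never returned, the search exposes the stuck state.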
Stockburger, D W
1999-05-01
Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
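The grade-and-record cycle the server performs can be sketched in a few lines. Function and field names are illustrative, numeric answers with a tolerance are an assumption, and the original runs as server-side script against a database rather than as Python:

```python
def grade_submission(answer_key, submitted, tol=0.01):
    """Score numeric answers against the key; return (score, feedback).
    The per-item feedback plays the role of the returned answer key."""
    feedback, correct = {}, 0
    for item, expected in answer_key.items():
        got = submitted.get(item)
        ok = got is not None and abs(got - expected) <= tol
        feedback[item] = "correct" if ok else f"expected {expected}"
        correct += ok
    return correct / len(answer_key), feedback

def record_grade(gradebook, student, score):
    """Update the grade book, keeping the best score (a design choice
    of this sketch, not stated in the paper)."""
    gradebook[student] = max(score, gradebook.get(student, 0.0))
    return gradebook
```

Because each student's assignment is individualized, the `answer_key` would itself be generated per student on the server side.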
Verifying the secure setup of Unix client/servers and detection of network intrusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feingold, R.; Bruestle, H.R.; Bartoletti, T.
1995-07-01
This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today's global "Infosphere" presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, applications, and the like store, transmit, and process information. This information is now recognized as an asset to governments, corporations, and individuals alike, and it must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to check on their security configuration. SPI's broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI's use within the Department of Energy and Department of Defense has resulted in more secure systems, less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on an Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID's retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks on their systems and networks.
Designing a scalable video-on-demand server with data sharing
NASA Astrophysics Data System (ADS)
Lim, Hyeran; Du, David H.
2000-12-01
As current disk space and transfer speed increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases the utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources of a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial condition and successfully places videos on the disks in the system. If any copy of a video cannot be placed due to a lack of resources, more servers/disks are added. When all videos are placed on the disks by our algorithm, the final configuration is determined, with an indicator of how tolerant it is to fluctuations in the demand for videos. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
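The flavour of such a placement algorithm can be sketched as: sort videos by expected demand, greedily assign each to the least-loaded disk with free space, and signal when more servers/disks must be added. This is not the paper's exact algorithm (a heap over disks would recover the O(M log M) bound; a linear scan keeps the sketch short), and slot counts and demand figures are invented:

```python
def place_videos(demand, disks, slots_per_disk):
    """Greedy sketch: most-demanded videos first, each placed on the
    least bandwidth-loaded disk that still has a free slot.
    Returns {video: disk} or None when resources run out
    (the caller would then add servers/disks and retry)."""
    load = [0.0] * disks           # expected bandwidth per disk
    free = [slots_per_disk] * disks  # remaining storage slots
    placement = {}
    for video, dem in sorted(demand.items(), key=lambda kv: -kv[1]):
        candidates = [d for d in range(disks) if free[d] > 0]
        if not candidates:
            return None
        d = min(candidates, key=lambda i: load[i])
        placement[video] = d
        load[d] += dem
        free[d] -= 1
    return placement
```

Replication of popular titles would simply enter the same title several times with its per-copy demand share.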
Developments and applications of DAQ framework DABC v2
NASA Astrophysics Data System (ADS)
Adamczewski-Musch, J.; Kurz, N.; Linev, S.
2015-12-01
The Data Acquisition Backbone Core (DABC) is a software framework for distributed data acquisition. In 2013 Version 2 of DABC has been released with several improvements. For monitoring and control, an HTTP web server and a proprietary command channel socket have been provided. Web browser GUIs have been implemented for configuration and control of DABC and MBS DAQ nodes via such HTTP server. Several specific plug-ins, for example interfacing PEXOR/KINPEX optical readout PCIe boards, or HADES trbnet input and hld file output, have been further developed. In 2014, DABC v2 was applied for production data taking of the HADES collaboration's pion beam time at GSI. It fully replaced the functionality of the previous event builder software and added new features concerning online monitoring.
Data Transport Subsystem - The SFOC glue
NASA Technical Reports Server (NTRS)
Parr, Stephen J.
1988-01-01
The design and operation of the Data Transport Subsystem (DTS) for the JPL Space Flight Operation Center (SFOC) are described. The SFOC is the ground data system under development to serve interplanetary space probes; in addition to the DTS, it comprises a ground interface facility, a telemetry-input subsystem, data monitor and display facilities, and a digital TV system. DTS links the other subsystems via an ISO OSI presentation layer and an LAN. Here, particular attention is given to the DTS services and service modes (virtual circuit, datagram, and broadcast), the DTS software architecture, the logical-name server, the role of the integrated AI library, and SFOC as a distributed system.
Kent, Alexander Dale [Los Alamos, NM
2008-09-02
Methods and systems in a data/computer network for authenticating identifying data transmitted from a client to a server through the use of a gateway interface system, to which they are communicatively coupled, are disclosed. An authentication packet transmitted from a client to a server of the data network is intercepted by the interface, wherein the authentication packet is encrypted with a one-time password for transmission from the client to the server. The one-time password associated with the authentication packet can be verified utilizing a one-time password token system. The authentication packet can then be modified for acceptance by the server, and the response packet generated by the server is thereafter intercepted, verified and modified for transmission back to the client in a similar but reverse process.
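The abstract does not specify the token scheme; an HMAC-based one-time password in the style of RFC 4226 (HOTP) is one standard way a gateway could verify the OTP it extracts from an intercepted authentication packet. A sketch, assuming a counter-based token:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, then
    dynamic truncation to a short decimal code."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_otp(secret: bytes, counter: int, candidate: str, window: int = 2):
    """Gateway-side check: accept the OTP if it matches the current
    counter or a small look-ahead window (tolerates skipped counts).
    Returns the matched counter for resynchronization, else None."""
    for c in range(counter, counter + window + 1):
        if hmac.compare_digest(hotp(secret, c), candidate):
            return c
    return None
```

After a successful check, the gateway in the patent would rewrite the packet into a form the server accepts, then reverse the process on the response.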
Analysis of the Appropriateness of the Use of Peltier Cells as Energy Sources.
Hájovský, Radovan; Pieš, Martin; Richtár, Lukáš
2016-05-25
The article describes the possibilities of using Peltier cells as an energy source to power telemetry units, which are used in large-scale monitoring systems as central units ensuring the collection of data from sensors, their processing, and their transmission to the database server. The article describes the various experiments that were carried out, their progress, and their results. Based on the experiments evaluated, the paper also discusses the possibilities of using various types of Peltier cells depending on the temperature difference between the cold and hot sides.
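For sizing such a supply, the standard matched-load estimate for a thermoelectric module is P_max = (S·ΔT)² / 4R, where S is the Seebeck coefficient, ΔT the temperature difference across the cell, and R its internal resistance. The sketch below uses illustrative parameter values, not measurements from the paper:

```python
def peltier_max_power(seebeck_v_per_k: float,
                      internal_resistance_ohm: float,
                      delta_t_k: float) -> float:
    """Maximum electrical power a thermoelectric module delivers into a
    matched load: P = (S * dT)^2 / (4 * R). Illustrative model only;
    real modules vary with temperature and contact resistance."""
    v_open = seebeck_v_per_k * delta_t_k  # open-circuit voltage
    return v_open ** 2 / (4 * internal_resistance_ohm)
```

With an assumed S = 0.05 V/K, R = 2 Ω and ΔT = 20 K, the matched-load output is 0.125 W, which shows why harvested power budgets for telemetry units are tight at small temperature differences.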
2017-06-22
setup” versus “use” issue. What rights does Honeywell need to run the software? We/Honeywell may not know. Discuss with Jabe? We may need to just try...Summary: It appears the direction we are going will be to install new meters in buildings, connect them to the Smart Servers, run the data through...damper blade type (check the appropriate item): Flat plate / Airfoil. Calculate the nominal damper face velocity: Face velocity = Flow rate ÷ Area
Monitoring Moving Queries inside a Safe Region
Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan
2014-01-01
With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves. Therefore, monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region the set of objects of interest to the query do not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region the mobile device has to reevaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known as a difficult problem. To solve this problem, we propose a novel method to monitor the position of the query over time using a linear function based on the direction of the query obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location all the time. This method reduces the costs associated with communications in client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
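Predicting when a linearly moving query leaves its safe region reduces, for a circular region, to solving a quadratic in time along the motion line. This is a simplified sketch of the paper's idea of a linear function fitted from periodic position fixes; the circular region shape is an assumption of the sketch:

```python
import math

def exit_time(pos, vel, center, radius):
    """Time until a point moving as pos + t*vel leaves the circle
    (center, radius), assuming it starts inside. Returns None if the
    query is stationary and therefore never leaves."""
    px, py = pos[0] - center[0], pos[1] - center[1]
    vx, vy = vel
    a = vx * vx + vy * vy
    if a == 0:
        return None
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    disc = b * b - 4 * a * c
    # Starting inside (c < 0) the discriminant is positive and the
    # larger root is the exit time.
    return (-b + math.sqrt(disc)) / (2 * a)
```

The client could schedule its next server contact just before the predicted exit instead of re-evaluating the query continuously.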
Chen, Hung-Ming; Lo, Jung-Wen; Yeh, Chang-Kuo
2012-12-01
The rapidly increased availability of always-on broadband telecommunication environments and lower-cost vital signs monitoring devices brings the advantages of telemedicine directly into the patient's home. Hence, the control of access to remote medical servers' resources has become a crucial challenge. A secure authentication scheme between the medical server and remote users is therefore needed to safeguard data integrity and confidentiality and to ensure availability. Recently, many authentication schemes that use low-cost mobile devices have been proposed to meet these requirements. In contrast to previous schemes, Khan et al. proposed a dynamic ID-based remote user authentication scheme that reduces computational complexity and includes features such as a provision for the revocation of lost or stolen smart cards and a time expiry check for the authentication process. However, Khan et al.'s scheme has some security drawbacks. To remedy these, this study proposes an enhanced authentication scheme that overcomes the weaknesses inherent in Khan et al.'s scheme and demonstrates that this scheme is more secure and robust for use in a telecare medical information system.
Exploiting volatile opportunistic computing resources with Lobster
NASA Astrophysics Data System (ADS)
Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas
2015-12-01
Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.
A complete history of everything
NASA Astrophysics Data System (ADS)
Lanclos, Kyle; Deich, William T. S.
2012-09-01
This paper discusses Lick Observatory's local solution for retaining a complete history of everything. Leveraging our existing deployment of a publish/subscribe communications model that is used to broadcast the state of all systems at Lick Observatory, a monitoring daemon runs on a dedicated server that subscribes to and records all published messages. Our success with this system is a testament to the power of simple, straightforward approaches to complex problems. The solution itself is written in Python, and the initial version required about a week of development time; the data are stored in PostgreSQL database tables using a distinctly simple schema. Over time, we addressed scaling issues as the data set grew, which involved reworking the PostgreSQL database schema on the back-end. We also duplicate the data in flat files to enable recovery or migration of the data from one server to another. This paper will cover both the initial design as well as the solutions to the subsequent deployment issues, the trade-offs that motivated those choices, and the integration of this history database with existing client applications.
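The record-everything daemon can be miniaturized as a subscriber callback that appends each published keyword update to a deliberately simple (time, keyword, value) table, in the spirit of the schema the paper describes. Class and column names below are invented for illustration:

```python
import sqlite3
import time

class HistoryRecorder:
    """Toy version of a subscribe-and-record daemon: every published
    update lands in one flat history table. Uses SQLite here purely
    for self-containment; the observatory system uses PostgreSQL."""

    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS history "
            "(ts REAL, keyword TEXT, value TEXT)")

    def on_message(self, keyword, value, ts=None):
        """Callback invoked for each broadcast message."""
        self.db.execute(
            "INSERT INTO history VALUES (?, ?, ?)",
            (ts if ts is not None else time.time(), keyword, str(value)))
        self.db.commit()

    def history_of(self, keyword):
        """Full time-ordered history of one keyword."""
        cur = self.db.execute(
            "SELECT ts, value FROM history WHERE keyword = ? ORDER BY ts",
            (keyword,))
        return cur.fetchall()
```

The paper's later scaling work (schema rework, flat-file duplication for recovery) starts from essentially this shape.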
Home-based mobile cardio-pulmonary rehabilitation consultant system.
Lee, Hsu-En; Wang, Wen-Chih; Lu, Shao-Wei; Wu, Bo-Yuan; Ko, Li-Wei
2011-01-01
Cardiovascular diseases have recently been the most common cause of death in the world. For postoperative patients, cardiac rehabilitation should still be maintained at home (phase II) to improve cardiac function. However, only one third of outpatients exercise regularly, reflecting the difficulty of home-based healthcare: a lack of monitoring and motivation. Hence, a cardio-pulmonary rehabilitation system is proposed in this research to improve rehabilitation efficiency for a better prognosis. The proposed system is built on a mobile phone and receives the electrocardiograph (ECG) signal from a wireless ECG holter via a Bluetooth connection. Apart from the heart rate (HR) monitor, an ECG-derived respiration (EDR) technique is also included to provide the respiration rate (RR). HR and RR are the most important vital signs during exercise, yet this system uses only one physiological signal recorder. In the clinical test, 15 subjects performed the Bruce protocol (treadmill) to simulate the rehabilitation procedure. The correlation between this system and a commercial product (Custo-Med) was up to 98% for HR and 81% for RR. For the prevention of sudden heart attack, an arrhythmia detection expert system and a healthcare server at the backend were also integrated into this system for comprehensive cardio-pulmonary monitoring whenever and wherever the exercise is done.
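The heart-rate side of such monitoring is just the reciprocal of the mean R-R interval between detected ECG R peaks. A sketch of that step only (R-peak detection itself and the EDR respiration estimate are more involved and omitted; the function name is illustrative):

```python
def heart_rate_bpm(r_peak_times_s):
    """Mean heart rate from successive ECG R-peak timestamps in
    seconds: 60 divided by the mean R-R interval."""
    if len(r_peak_times_s) < 2:
        raise ValueError("need at least two R peaks")
    rr = [b - a for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    return 60.0 / (sum(rr) / len(rr))
```

On the phone, the same R-peak stream would feed both this HR estimate and the arrhythmia checks (e.g., sustained HR outside expected bounds during a Bruce stage).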
EMMNet: sensor networking for electricity meter monitoring.
Lin, Zhi-Ting; Zheng, Jie; Ji, Yu-Sheng; Zhao, Bao-Hua; Qu, Yu-Gui; Huang, Xu-Dong; Jiang, Xiu-Fang
2010-01-01
Smart sensors are emerging as a promising technology for a large number of application domains. This paper presents a collection of requirements and guidelines that serve as a basis for a general smart sensor architecture to monitor electricity meters. It also presents an electricity meter monitoring network, named EMMNet, comprising data collectors, data concentrators, hand-held devices, a centralized server, and clients. EMMNet provides long-distance communication capabilities, which make it suitable for complex urban environments. In addition, the operational cost of EMMNet is low compared with other existing remote meter monitoring systems based on GPRS. A new dynamic tree protocol based on the application requirements, which can significantly improve the reliability of the network, is also proposed. We are currently conducting tests on five networks and investigating network problems for further improvements. Evaluation results indicate that EMMNet enhances the efficiency and accuracy of the reading, recording, and calibration of electricity meters.
Arnold, Robert W; Jacob, Jack; Matrix, Zinnia
2012-01-01
Screening by neonatologists and staging by ophthalmologists is a cost-effective intervention, but inadvertently missed examinations create high liability. Paper tracking, bedside schedule reminders, and a computer scheduling and reminder program were compared for speed of input and retrospective missed-examination rate. A neonatal intensive care unit (NICU) process was then programmed for cloud-based distribution for inpatient and outpatient retinopathy of prematurity monitoring. Over 11 years, 367 premature infants in one NICU were prospectively monitored. The initial paper system missed 11% of potential examinations, the Windows server-based system missed 2%, and the current cloud-based system missed 0% of potential inpatient and outpatient examinations. Computer input of examinations took the same or less time than paper recording. A computer application with a deliberate NICU process improved the proportion of eligible neonates receiving their scheduled eye examinations in a timely manner. Copyright 2012, SLACK Incorporated.
Sowan, Azizeh Khaled; Reed, Charles Calhoun; Staggers, Nancy
2016-09-30
Large datasets of the audit log of modern physiologic monitoring devices have rarely been used for predictive modeling, capturing unsafe practices, or guiding initiatives on alarm systems safety. This paper (1) describes a large clinical dataset using the audit log of the physiologic monitors, (2) discusses benefits and challenges of using the audit log in identifying the most important alarm signals and improving the safety of clinical alarm systems, and (3) provides suggestions for presenting alarm data and improving the audit log of the physiologic monitors. At a 20-bed transplant cardiac intensive care unit, alarm data recorded via the audit log of bedside monitors were retrieved from the server of the central station monitor. Benefits of the audit log are many. They include easily retrievable data at no cost, complete alarm records, easy capture of inconsistent and unsafe practices, and easy identification of bedside monitors missed from a unit change of alarm settings adjustments. Challenges in analyzing the audit log are related to the time-consuming processes of data cleaning and analysis, and limited storage and retrieval capabilities of the monitors. The audit log is a function of current capabilities of the physiologic monitoring systems, monitor's configuration, and alarm management practices by clinicians. Despite current challenges in data retrieval and analysis, large digitalized clinical datasets hold great promise in performance, safety, and quality improvement. Vendors, clinicians, researchers, and professional organizations should work closely to identify the most useful format and type of clinical data to expand medical devices' log capacity.
Opportunities for the Mashup of Heterogeneous Data Servers via Semantic Web Technology
NASA Astrophysics Data System (ADS)
Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; King, Todd; Hughes, John; Fung, Shing; Galkin, Ivan; Hapgood, Michael; Belehaki, Anna
2015-04-01
The European Union ESPAS, the Japanese IUGONET, and the GFZ ISDC data servers were developed for the ingestion, archiving, and distribution of geoscience and space science domain data. The main parts of the data managed by the mentioned data servers are related to near-Earth space and geomagnetic field data. A smart mashup of the data servers would allow seamless browsing of, and access to, data and related context information. However, achieving a high level of interoperability is a challenge because the data servers are based on different data models and software frameworks. This paper focuses on the latest experiments and results for the mashup of the data servers using the semantic Web approach. Besides the mashup of domain and terminological ontologies, especially the options to connect data managed by relational databases using D2R Server and SPARQL technology will be addressed. A successful realization of the data server mashup will have a positive impact not only on the data users of the specific scientific domain but also on related projects, such as the development of a new interoperable version of NASA's Planetary Data System (PDS) or ICSU's World Data System alliance. ESPAS data server: https://www.espas-fp7.eu/portal/ IUGONET data server: http://search.iugonet.org/iugonet/ GFZ ISDC data server (semantic Web based prototype): http://rz-vm30.gfz-potsdam.de/drupal-7.9/ NASA PDS: http://pds.nasa.gov ICSU-WDS: https://www.icsu-wds.org
Filmless PACS in a multiple facility environment
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.
1996-05-01
A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single facility case, requires a means to work effectively and seamlessly across moderate to low speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi-facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World-Wide-Web can be used to generate worklists and patient folders across facilities.
A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.
Understanding Customer Dissatisfaction with Underutilized Distributed File Servers
NASA Technical Reports Server (NTRS)
Riedel, Erik; Gibson, Garth
1996-01-01
An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew file system (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.
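The headline statistic above, 50% of the variation in client response time explained by variation in server CPU use, is the coefficient of determination (R²) of a simple least-squares fit. A minimal sketch of that computation follows; the sample data are invented for illustration, not the AFS measurements from the paper.

```python
def r_squared(xs, ys):
    """Fraction of variance in ys explained by a least-squares linear fit on xs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical samples: (server CPU utilization %, client response time in ms).
cpu = [5, 10, 40, 80, 90]
resp = [20, 25, 60, 130, 150]
explained = r_squared(cpu, resp)  # fraction of response-time variance explained
```

An R² near 0.5, as the paper reports, means half the response-time variance tracks CPU use alone, which is what motivates off-loading server work.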
Remote humidity and temperature real time monitoring system for studying seed biology
NASA Astrophysics Data System (ADS)
Balachandran, Thiruparan
This thesis discusses the design, prototyping, and testing of a remote monitoring system used to study the biology of seeds under various controlled conditions. Seed scientists use air-tight boxes to maintain relative humidity, which influences seed longevity and seed dormancy break. The common practice is to use super-saturated solutions, either of different chemicals or of different concentrations of LiCl, to create various relative humidities. To date, no known system has been developed to remotely monitor the environmental conditions inside these boxes in real time. This thesis discusses the development of a remote monitoring system that can accurately monitor and measure the relative humidity and temperature inside sealed boxes for the study of seed biology. The system allows remote, real-time monitoring of these two parameters in five boxes with different conditions. It functions as a client connected to the internet using Wireless Fidelity (Wi-Fi) technology, while a Google spreadsheet serves as the server for uploading and plotting the data. The system connects directly to the Google server through Wi-Fi and uploads the sensors' values to a Google spreadsheet. Application-specific software was created, and the user can monitor the data in real time and/or download the data into Excel for further analyses. Using the Google Drive app, the data can be viewed on a smartphone or tablet. Furthermore, an electronic mail (e-mail) alert is integrated into the system: whenever measured values go beyond the threshold values, the user receives an e-mail alert.
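The e-mail alert described above amounts to a threshold check on each box's readings before upload. A minimal sketch of that check, with box names and limits invented for illustration; actual e-mail delivery is left to whatever mail hook the system uses:

```python
def check_thresholds(readings, limits):
    """Return alert messages for boxes whose relative humidity or temperature
    lies outside the configured range (all names and limits are illustrative)."""
    alerts = []
    for box, (rh, temp) in readings.items():
        rh_lo, rh_hi, t_lo, t_hi = limits[box]
        if not rh_lo <= rh <= rh_hi:
            alerts.append(f"{box}: RH {rh}% outside [{rh_lo}%, {rh_hi}%]")
        if not t_lo <= temp <= t_hi:
            alerts.append(f"{box}: temp {temp} C outside [{t_lo}, {t_hi}] C")
    return alerts

# Five boxes, each expected at 30-35% RH and 20-25 C (hypothetical limits).
limits = {f"box{i}": (30.0, 35.0, 20.0, 25.0) for i in range(1, 6)}
readings = {f"box{i}": (32.0, 22.0) for i in range(1, 6)}
readings["box3"] = (45.0, 22.0)  # humidity drifted out of range
alerts = check_thresholds(readings, limits)  # one alert, for box3
```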
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Furukawa, Isao; Okumura, Akira; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu; Suzuki, Junji; Matsuya, Shoji; Ishihara, Teruo
2001-08-01
The wide spread of digital technology in the medical field has led to a demand for a high-quality, high-speed, user-friendly digital image presentation system for daily medical conferences. To fulfill this demand, we developed a presentation system for radiological and pathological images. It is composed of a super-high-definition (SHD) imaging system, a radiological image database (R-DB), a pathological image database (P-DB), and the network interconnecting these three. The R-DB consists of a 270 GB RAID, a database server workstation, and a film digitizer. The P-DB includes an optical microscope, a four-million-pixel digital camera, a 90 GB RAID, and a database server workstation. A 100 Mbps Ethernet LAN interconnects all the sub-systems. Web-based system operation software was developed for easy operation. We installed the whole system in NTT East Kanto Hospital to evaluate it in the weekly case conferences. The SHD system could display digital full-color images of 2048 x 2048 pixels on a 28-inch CRT monitor. The doctors evaluated the image quality and size and found them applicable to actual medical diagnosis. They also appreciated the short image-switching time, which contributed to smooth presentation. Thus, we confirmed that the system's characteristics met the requirements.
Consumer server: A UNIX based event distributor in new CDF data acquisition system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, F.; Morita, Y.; Nomachi, M.
1994-12-31
Consumer Server is a program that handles event data and consumer-trigger request I/O between the Level 3 farm and consumer processes in CDF's new data acquisition system. The program uses standard UNIX libraries and commercial network technologies to obtain higher portability. The authors describe the concept and configuration of the Consumer Server and report its performance.
How to securely replicate services
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth
1992-01-01
A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by n servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires fewer than k servers to be corrupt and that is live if at least k+b servers are correct, where b is the assumed maximum total number of corrupt servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service. The practicality of these schemes is illustrated through a discussion of several issues pertinent to their implementation and use, and their intended role in a secure version of the Isis system is also described.
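The acceptance rule above, in which a client accepts a response when at least k servers are correct and fewer than k are corrupt, can be illustrated as a simple vote count over the replies: under those assumptions only the correct value can accumulate k identical votes. This is a toy illustration of the voting idea, not the paper's authenticated protocol.

```python
from collections import Counter

def accept_response(responses, k):
    """Accept the reply value returned identically by at least k servers.

    If at least k servers are correct and fewer than k are corrupt, only the
    correct value can reach the k-vote threshold; otherwise return None."""
    if not responses:
        return None
    value, votes = Counter(responses).most_common(1)[0]
    return value if votes >= k else None

# 5 replicas, k = 3: three correct servers agree, two corrupt ones do not.
replies = ["balance=42", "balance=42", "balance=42", "balance=0", "garbage"]
accepted = accept_response(replies, 3)  # "balance=42"
```

Note that the scheme in the paper goes further: the client accepts without identifying individual servers, using at most two public keys for the whole service.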
NASA Technical Reports Server (NTRS)
Lyle, Stacey D.
2009-01-01
A software package has been developed that uses GPS signal structures to authenticate mobile devices into a network wirelessly and in real time, determining whether a rover is within a set of boundaries or a specific area before granting access to critical geospatial information. The advantage lies in that the system admits into the server only those within designated geospatial boundaries or areas. The Geospatial Authentication software has two parts: Server and Client. The server software is a virtual private network (VPN) developed on the Linux operating system in the Perl programming language. The server can be a stand-alone VPN server or can be combined with other applications and services. The client software is GUI-based Windows CE software, or Mobile Graphical Software, that allows users to authenticate into a network. The purpose of the client software is to pass the needed satellite information to the server for authentication.
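The core server-side decision, admitting a client only if its reported GPS fix falls inside a designated boundary, reduces to a point-in-polygon test. A minimal ray-casting sketch follows (in Python rather than the server's Perl, with an invented rectangular geofence):

```python
def inside_boundary(lon, lat, polygon):
    """Ray-casting point-in-polygon test.

    polygon is a list of (lon, lat) vertices; returns True if the fix
    (lon, lat) lies inside the boundary."""
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Count crossings of a horizontal ray cast from the point.
        if (yi > lat) != (yj > lat) and lon < (xj - xi) * (lat - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Invented rectangular geofence around a work area.
fence = [(-97.5, 27.7), (-97.3, 27.7), (-97.3, 27.8), (-97.5, 27.8)]
authorized = inside_boundary(-97.4, 27.75, fence)  # True: rover inside the fence
denied = inside_boundary(-97.0, 27.75, fence)      # False: rover outside
```

A real deployment would of course also verify the satellite signal structure itself, which is the part that makes the fix hard to spoof; this sketch covers only the boundary check.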
Java RMI Software Technology for the Payload Planning System of the International Space Station
NASA Technical Reports Server (NTRS)
Bryant, Barrett R.
1999-01-01
The Payload Planning System (PPS) supports experiment planning for the International Space Station. The planning process has a number of different aspects that need to be stored in a database, which is then used to generate reports on the planning process in a variety of formats. This process is currently structured as a 3-tier client/server software architecture comprising a Java applet at the front end, a Java server in the middle, and an Oracle database in the third tier. The system presently uses CGI, the Common Gateway Interface, to communicate between the user-interface and server tiers, and Active Data Objects (ADO) to communicate between the server and database tiers. This project investigated other methods and tools for performing the communications between the three tiers of the current system so that both system performance and software development time could be improved. We specifically found that, for the hardware and software platforms that PPS is required to run on, the best solution is to use Java Remote Method Invocation (RMI) for communication between the client and server and SQLJ (Structured Query Language for Java) for server interaction with the database. Prototype implementations showed that RMI combined with SQLJ significantly improved performance and also greatly facilitated construction of the communication software.
Openlobby: an open game server for lobby and matchmaking
NASA Astrophysics Data System (ADS)
Zamzami, E. M.; Tarigan, J. T.; Jaya, I.; Hardi, S. M.
2018-03-01
Online multiplayer is one of the most essential features in modern games. However, while a multiplayer feature can be implemented with simple computer network programming, creating a balanced multiplayer session requires player-management components such as a game lobby and a matchmaking system. Our objective is to develop OpenLobby, a server available to other developers to support their multiplayer applications. The proposed system acts as a lobby and matchmaker where queueing players are matched to other players according to criteria defined by the developer. The solution provides an application programming interface that developers can use to interact with the server. For testing purposes, we developed a game that uses the server as its multiplayer server.
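A matchmaker of the kind described, pairing queued players under a developer-defined criterion, can be sketched as follows. The criterion here, a maximum rating gap, is an invented example standing in for whatever rule a developer would register through the OpenLobby API:

```python
def matchmake(queue, max_gap=100):
    """Pair queued players whose rating difference is within max_gap.

    queue is a list of (name, rating) tuples; returns (matches, leftover).
    The rating-gap criterion and field names are illustrative."""
    waiting = sorted(queue, key=lambda p: p[1])  # sort by rating
    matches, leftover = [], []
    i = 0
    while i < len(waiting):
        if i + 1 < len(waiting) and waiting[i + 1][1] - waiting[i][1] <= max_gap:
            matches.append((waiting[i][0], waiting[i + 1][0]))
            i += 2
        else:
            leftover.append(waiting[i][0])  # stays in the lobby queue
            i += 1
    return matches, leftover

# alice (1000) and bob (1050) are close enough; carol (1500) keeps waiting.
matches, leftover = matchmake([("carol", 1500), ("alice", 1000), ("bob", 1050)])
```

A production matchmaker would typically also widen the allowed gap the longer a player waits, trading match quality for queue time.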
Towards optimizing server performance in an educational MMORPG for teaching computer programming
NASA Astrophysics Data System (ADS)
Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios
2013-10-01
Web-based games have become significantly popular during the last few years. This is due to the gradual increase of internet speed, which has led to ongoing multiplayer game development and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Game (MMORPG) field. In parallel, similar technologies called educational games have started to be developed for use in various educational contexts, resulting in the field of Game-Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM, and CPU capacity. These amounts may be even larger in an educational MMORPG that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall resource load of such games is essential so that server administrators can configure them and ensure the games' proper operation during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provisioned without overloading the system.
NASA Astrophysics Data System (ADS)
Mehring, James W.; Thomas, Scott D.
1995-11-01
The Data Services Segment of the Defense Mapping Agency's Digital Production System provides a digital archive of imagery source data for use by DMA's cartographic users. This system was developed in the mid-1980s and is currently undergoing modernization. This paper addresses the modernization of the imagery buffer function, which was performed by custom hardware in the baseline system and is being replaced by a RAID server based on commercial off-the-shelf (COTS) hardware. The paper briefly describes the baseline DMA image system and the modernization program currently under way. Throughput benchmark measurements were made to support design configuration decisions for a COTS RAID server performing as the system image buffer. The test program began with performance measurements of RAID read and write operations between the RAID arrays and the server CPU for RAID levels 0, 5, and 0+1. Interface throughput measurements were made for the HiPPI interface between the RAID server and the image archive and processing system, as well as for the client-side interface: a custom interface board that connects the internal bus of the RAID server to the Input-Output Processor (IOP) external wideband network currently in place in the DMA system to serve client workstations. End-to-end measurements were taken from the HiPPI interface through the RAID write and read operations to the IOP output interface.
Mathematical defense method of networked servers with controlled remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2006-05-01
The networked server defense model is focused on reliability and availability in security respects. The (remote) backup servers are hooked up by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines," and the system then deals with a main unreliable machine, spare machines, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for backups; information on the system is naturally delayed. An analog of the N-policy restricts the usage of auxiliary machines to some reasonable quantity. The results are demonstrated in the network architecture by using stochastic optimization techniques.
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
…that work well together. 3.2 Simulation Server Details: We ran the simulations on a Dell PowerEdge M520 blade server [8] running Ubuntu Linux 14.04. … To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server. … The MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on …
OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software
NASA Astrophysics Data System (ADS)
Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.
2006-12-01
OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high-performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server, which reads information from various types of data sources and packages the results in DAP objects; and a 'Front-End', which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. The new server's Back-End component uses the server infrastructure developed by HAO for the Earth System Grid II project. Extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run time. Each data handler module need only satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New front-end modules can be written to support different network data access protocols, so the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that is currently in development.
In addition to support for DAP/2.0 and prototypical support for a SOAP interface, the new server includes support for the THREDDS cataloging protocol. THREDDS is tightly integrated into the Front-End of Server4. The Server4 Front-End can make full use of the advanced THREDDS features such as attribute specification and inheritance, custom catalogs which segue into automatically generated catalogs as well as providing a default behavior which requires almost no catalog configuration.
NASA Astrophysics Data System (ADS)
Jain, Madhu; Meena, Rakesh Kumar
2018-03-01
A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is scheduled on the basis of a bi-level threshold policy for the activation of the servers: a server returns to render repair service when a pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, and throughput, are derived to determine the performance of the machining system. To provide computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the servers by minimizing the expected cost incurred on the system. A hybrid soft computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS), and the numerical results obtained by the Runge-Kutta approach are validated against computational results generated by ANFIS.
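Two pieces of the model above are easy to illustrate: the bi-level threshold rule that turns the repairmen on, and a generic fourth-order Runge-Kutta step of the kind used to integrate the governing equations. Both sketches are simplified stand-ins under stated assumptions, not the paper's actual formulation:

```python
def active_repairmen(failed, n1, n2):
    """Bi-level threshold policy: repairman 1 activates once N1 failed
    machines accumulate, repairman 2 once N2 accumulate (assumes n1 <= n2)."""
    return (failed >= n1) + (failed >= n2)

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for the system dy/dt = f(t, y),
    with y given as a list of state values (e.g. state probabilities)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Threshold policy with N1 = 3, N2 = 6: no repairman below 3 failures,
# one from 3 to 5, both from 6 upward.
levels = [active_repairmen(f, 3, 6) for f in (0, 3, 7)]  # [0, 1, 2]
```

In the paper the state vector fed to the integrator would be the transient probabilities of the Markov chain, with f built from the transition-rate equations.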
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be transferable across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model, with a graphical user interface, into an outpatient medical record system known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture, providing both a distributed database and distributed processing, which improves performance. PMID:1807732
IoT/M2M wearable-based activity-calorie monitoring and analysis for elders.
Soraya, Sabrina I; Ting-Hui Chiang; Guo-Jing Chan; Yi-Juan Su; Chih-Wei Yi; Yu-Chee Tseng; Yu-Tai Ching
2017-07-01
With the growth of the aging population, elder care has become an important part of the service industry of the Internet of Things. Activity monitoring is one of the most important services in the field of elderly care. In this paper, we propose a wearable solution that provides caregivers with an activity monitoring service for elders. The system uses wireless signals to estimate calories burned through walking and to perform localization. In addition, it uses wireless motion sensors to recognize physical activities, such as drinking and restroom activity. Overall, the system can be divided into four parts: wearable device, gateway, cloud server, and the caregiver's Android application. The algorithms we propose for drinking activity are the Decision Tree (J48) and Random Forest (RF); for restroom activity, we propose the supervised Reduced Error Pruning (REP) Tree and a Variable Order Hidden Markov Model (VOHMM). We developed a prototype Android app that provides a life log recording the activity sequence, which is useful for the caregiver to monitor the elder's activity and calorie consumption.
Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.
Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio
2008-11-24
Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.
PACS quality control and automatic problem notifier
NASA Astrophysics Data System (ADS)
Honeyman-Buck, Janice C.; Jones, Douglas; Frost, Meryll M.; Staab, Edward V.
1997-05-01
One side effect of installing a clinical PACS is that users become dependent upon the technology, and in some cases it can be very difficult to revert to a film-based system if components fail. The nature of system failures ranges from slow deterioration of function, as seen in the loss of monitor luminance, to sudden catastrophic loss of the entire PACS network. This paper describes the quality control procedures in place at the University of Florida and the automatic notification system that alerts PACS personnel when a failure has happened or is anticipated. The goal is to recover from a failure with a minimum of downtime and no data loss. Routine quality control is practiced on all aspects of PACS, from acquisition, through network routing, through display, and including archiving. Whenever possible, the system components perform self-checks and cross-platform checks for active processes, file system status, errors in log files, and system uptime. When an error is detected or an exception occurs, an automatic page is sent to a pager with a diagnostic code. Documentation on each code, troubleshooting procedures, and repairs is kept on an intranet server accessible only to people involved in maintaining the PACS. In addition to the automatic paging system for error conditions, acquisition is assured by an automatic fax report sent on a daily basis to all technologists acquiring PACS images, used as a cross-check that all studies are archived prior to being removed from the acquisition systems. Daily quality control is performed to assure that studies can be moved from each acquisition system and that contrast adjustment functions correctly. The results of selected quality control reports will be presented. The intranet documentation server will be described along with the automatic pager system. Monitor quality control reports will be described, and the cost of quality control will be quantified.
As PACS is accepted as a clinical tool, the same standards of quality control must be established as are expected on other equipment used in the diagnostic process.
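The paging mechanism described above, detecting an error condition and paging with a diagnostic code, can be sketched as a log scan mapped onto codes. The patterns and codes below are invented placeholders, not the University of Florida's actual diagnostic codes:

```python
# Illustrative error patterns mapped to diagnostic codes (invented, not UF's).
PATTERNS = {
    "disk full": "E01",
    "process not running": "E02",
    "archive timeout": "E03",
}

def scan_log(lines):
    """Return the diagnostic codes for known error conditions found in a log."""
    return [code
            for line in lines
            for pattern, code in PATTERNS.items()
            if pattern in line.lower()]

def page_on_errors(lines, send_page):
    """Send one page per detected condition; send_page is the pager hook."""
    for code in scan_log(lines):
        send_page(code)

sent = []
page_on_errors(["02:14 archiver: Disk full on /pacs", "02:15 display OK"], sent.append)
# sent is now ["E01"]
```

In the deployed system the pager message carries only the code; staff then look the code up on the intranet documentation server for the troubleshooting procedure.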
Lin, Shih-Sung; Hung, Min-Hsiung; Tsai, Chang-Lung; Chou, Li-Ping
2012-12-01
The study aims to provide an easy-to-use approach for senior patients to utilize remote healthcare systems. An easy-to-use remote healthcare system (RHS) architecture using RFID (Radio Frequency Identification) and networking technologies is developed. Specifically, the codes in RFID tags are used to authenticate the patients' IDs, securing and easing the login process. The patient needs only to take one action, i.e., placing an RFID tag on the reader, to automatically log in, start the RHS, and acquire automatic medical services. An easy-to-use emergency monitoring and reporting mechanism is developed as well to monitor and protect the safety of senior patients who must be left alone at home. By just pressing a single button, the RHS can automatically report the patient's emergency information to the clinic side so that the responsible medical personnel can take proper urgent action for the patient. In addition, Web services technology is used to build the Internet communication scheme of the RHS so that interoperability and data transmission security between the home server and the clinical server can be enhanced. A prototype RHS was constructed to validate the effectiveness of our designs. Testing results show that the proposed RHS architecture possesses the characteristics of ease of use, simplicity of operation, promptness of login, and no need to preserve identity information. The proposed RHS architecture can effectively increase the willingness of senior patients who act slowly or are unfamiliar with computer operations to use the RHS. The research results can be used as an add-on for developing future remote healthcare systems.
Server-Based and Server-Less BYOD Solutions to Support Electronic Learning
2016-06-01
…mobile devices, institute mobile device policies and standards, and promote the development and use of DOD mobile and web-enabled applications" (DOD) … with an isolated BYOD web server, properly educated system administrators must carry out and execute the necessary, pre-defined network security …
Ho, C
2008-01-01
(1) Remote monitoring for ambulatory heart failure patients uses an implantable device to record hemodynamic data and transmit it to a central server for continuous assessment. (2) Preliminary evidence from observational studies suggests a potential for reducing hospitalizations with the use of right-ventricle implantable hemodynamic monitoring (IHM). However, although a multicentre randomized controlled trial (COMPASS-HF) showed a reduction in hospitalizations in the IHM group, the results were not statistically significant, and the US Food and Drug Administration panel concluded the trial failed to meet its primary efficacy endpoint. (3) In the COMPASS-HF study, the most common device-related complication was lead dislodgement. (4) Large randomized controlled trials are needed to demonstrate the clinical utility of IHM, particularly in terms of its impact on reducing hospitalization and improving patient outcomes.
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
Liquid volume monitoring based on ultrasonic sensor and Arduino microcontroller
NASA Astrophysics Data System (ADS)
Husni, M.; Siahaan, D. O.; Ciptaningtyas, H. T.; Studiawan, H.; Aliarham, Y. P.
2016-04-01
Incidents of oil leakage and theft from oil tankers happen often. To prevent them, the liquid volume inside the tank needs to be monitored continuously. The aim of this study is to calculate the liquid volume inside an oil tank under any road condition and to send the volume and location data to the user. This research uses ultrasonic sensors (to monitor the fluid height), Bluetooth modules (to send data from the sensors to the Arduino microcontroller), an Arduino microcontroller (to calculate the liquid volume), and a GPS/GPRS/GSM shield module (to obtain the vehicle's location and send the data to the server). The experimental results show that the accuracy of the liquid volume monitoring is 99.33% when the vehicle is on a flat road and 84% when the vehicle is on a road with an elevation angle. Thus, this system can be used to monitor the tanker's position and liquid volume continuously, in any road condition, via a web application, to prevent theft.
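The height-to-volume step can be illustrated roughly as follows; the tank geometry (an upright cylinder) and the sensor placement are assumptions for this sketch, not details from the paper. An ultrasonic sensor at the tank top measures the air gap above the liquid surface, from which the fill height and volume follow:

```python
import math

def echo_distance_m(echo_time_s, speed_of_sound=343.0):
    # Ultrasonic ranging: the pulse travels to the surface and back,
    # so the one-way distance is half of time-of-flight times speed.
    return echo_time_s * speed_of_sound / 2.0

def liquid_volume_litres(echo_time_s, tank_height_m, tank_radius_m):
    # Assumed upright cylindrical tank with the sensor mounted at the top.
    distance = echo_distance_m(echo_time_s)
    fill_height = max(0.0, tank_height_m - distance)
    volume_m3 = math.pi * tank_radius_m ** 2 * fill_height
    return volume_m3 * 1000.0  # 1 m^3 = 1000 L

# A 2.9 ms round trip is about 0.5 m of air gap in a 2 m tall, 1 m radius tank
print(round(liquid_volume_litres(0.0029, 2.0, 1.0)))
```

A tilted road changes the surface geometry, which is presumably why the paper's accuracy drops from 99.33% on flat roads to 84% at an elevation angle; correcting for tilt would need the measured angle and a less simple geometric model.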
Thermal feature extraction of servers in a datacenter using thermal image registration
NASA Astrophysics Data System (ADS)
Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan
2017-09-01
Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
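The standardization step can be imagined along the following lines. This is a deliberately simplified sketch (the paper's actual procedure works on registered thermal/visual image pairs): rescaling each server's temperature samples to a fixed range so that extracted features do not depend on absolute camera response:

```python
def standardize_distribution(temps):
    # Rescale a server's temperature samples to the [0, 1] range so that
    # features extracted from them do not depend on absolute camera gain
    # or viewing geometry. 'temps' is a flat list of pixel temperatures.
    lo, hi = min(temps), max(temps)
    if hi == lo:
        return [0.0] * len(temps)
    return [(t - lo) / (hi - lo) for t in temps]

patch = [24.0, 25.5, 31.0, 28.0]
print(standardize_distribution(patch))  # hottest pixel maps to 1.0
```

Features such as gradients computed on the standardized distribution are then comparable across camera positions, which is the property the paper is after.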
Akiyama, M
2001-01-01
The Hospital Information System (HIS) has been positioned as the hub of the healthcare information management architecture. In Japan, the billing system assigns "insurance disease names" to performed exams based on the diagnosis type. Departmental systems provide localized, departmental services, such as order receipt and diagnostic reporting, but do not provide patient demographic information. This arrangement has many problems. The departmental systems' terminals and the HIS's terminals are not integrated, and duplicate data entry introduces errors and increases workloads. Order and exam data managed by the HIS can be sent to the billing system, but departmental data usually cannot be entered. Additionally, billing systems usually keep departmental data for only a short time before deleting it. The billing system provides payment based on what is entered and is oriented towards diagnoses. Most importantly, the system is geared towards generating billing reports rather than providing high-quality patient care. The role of the application server is that of a mediator between system components. Data and events generated by system components are sent to the application server, which routes them to appropriate destinations. It also records all system events, including state changes to clinical data, access of clinical data, and so on. Finally, the Resource Management System identifies all system resources available to the enterprise. The departmental systems are responsible for managing data and clinical processes at a departmental level. The client interacts with the system via the application server, which provides a general set of system-level functions. The system is implemented using current technologies, CORBA and HTTP. System data is collected by the application server and assembled into XML documents for delivery to clients. Since each department provides an HTTP-compliant web server, clients can access these URLs using standard HTTP clients.
We have implemented an integrated system communicating via CORBA middleware, consisting of an application server, endoscopy departmental server, pathology departmental server and wrappered legacy HIS. We have found this new approach solves the problems outlined earlier. It provides the services needed to ensure that data is never lost and is always available, that events that occur in the hospital are always captured, and that resources are managed and tracked effectively. Finally, it reduces costs, raises efficiency, increases the quality of patient care, and ultimately saves lives. Now, we are going to integrate all remaining hospital departments, and ultimately, all hospital functions.
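The mediator role described above, collecting departmental results and assembling them into an XML document for HTTP clients, can be sketched as follows. The element names and the departmental fields here are illustrative, not the system's actual schema:

```python
import xml.etree.ElementTree as ET

def assemble_patient_document(patient_id, departmental_results):
    # The application server gathers data from departmental systems and
    # assembles a single XML document for delivery to HTTP clients.
    root = ET.Element("patient", id=patient_id)
    for department, result in departmental_results:
        dept = ET.SubElement(root, "result", department=department)
        dept.text = result
    return ET.tostring(root, encoding="unicode")

doc = assemble_patient_document(
    "12345",
    [("endoscopy", "no abnormality detected"),
     ("pathology", "specimen within normal limits")],
)
print(doc)
```

In the described system such documents would be produced on request by the application server, with CORBA carrying the underlying events between the departmental servers and the mediator.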
NASA Astrophysics Data System (ADS)
Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.
2009-12-01
Since April 2004, the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state-of-health data is recorded from the stations. The ANF provides analysts with access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fiber Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment.
Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable, with an average daily real-time data return rate of 92.34% over the entire lifetime of the project to date.
A self-configuring control system for storage and computing departments at the INFN-CNAF Tier1
NASA Astrophysics Data System (ADS)
Gregori, Daniele; Dal Pra, Stefano; Ricci, Pier Paolo; Pezzi, Michele; Prosperini, Andrea; Sapunenko, Vladimir
2015-05-01
The storage and farming departments at the INFN-CNAF Tier1 [1] manage thousands of computing nodes and several hundred servers that provide access to the disk and tape storage. In particular, the storage server machines provide the following services: efficient access to about 15 petabytes of disk space across several GPFS file system clusters; data transfers between LHC Tier sites (Tier0, Tier1 and Tier2) via a GridFTP cluster and the Xrootd protocol; and, finally, write and read operations on the magnetic tape backend. One of the most important and essential points for a reliable service is a control system that can warn when problems arise and that is able to perform automatic recovery operations in case of service interruptions or major failures. Moreover, configurations change during daily operations: for example, the roles of GPFS cluster nodes can be modified, in which case obsolete nodes must be removed from the production control system and new servers added to those already present. Manual management of all these changes is difficult when there are many of them; it can also take a long time and is easily subject to human error or misconfiguration. For these reasons we have developed a control system that is able to reconfigure itself whenever such changes occur. This system has now been in production for about a year at the INFN-CNAF Tier1 with good results and hardly any major drawbacks. There are three key points in this system. The first is a software configuration service (e.g. Quattor or Puppet) for the server machines to be monitored by the control system; this service must ensure the presence of appropriate sensors and custom scripts on the monitored nodes and must be able to install and update software packages on them.
The second key element is a database containing information, in a suitable format, on all the machines in production, able to provide for each of them the principal information, such as the type of hardware, the network switch to which the machine is connected, whether the machine is physical or virtual, the hypervisor to which it belongs, and so on. The last key point is the control system software itself (in our implementation we chose Nagios), capable of assessing the status of servers and services, attempting to restore the working state, restarting or inhibiting software services, and sending suitable alarm messages to the site administrators. The integration of these three elements was achieved with appropriate scripts and custom implementations that allow the system to configure itself according to a decisional logic; the combination of all the above-mentioned components is discussed in depth in this paper.
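The decisional logic that keeps the monitoring configuration in step with the machine database can be sketched like this. It is a simplified illustration, not the site's actual Quattor/Puppet/Nagios code: diff the production database against the hosts currently under monitoring and derive what to register and what to retire:

```python
def sync_monitored_hosts(db_hosts, monitored_hosts):
    # Compare the production database against the current monitoring
    # configuration and compute what to add and what to retire.
    db = set(db_hosts)
    mon = set(monitored_hosts)
    to_add = sorted(db - mon)       # new servers entering production
    to_remove = sorted(mon - db)    # obsolete nodes to drop from checks
    return to_add, to_remove

db_hosts = ["gpfs-01", "gpfs-02", "gridftp-01", "tape-01"]
monitored = ["gpfs-01", "gridftp-01", "old-node-7"]
add, remove = sync_monitored_hosts(db_hosts, monitored)
print(add)     # hosts to register with the control system
print(remove)  # hosts whose checks should be retired
```

Run periodically, a routine like this is what makes the control system self-configuring: human operators change only the machine database, and the monitoring follows.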
Remote Adaptive Communication System
2001-10-25
...manage several different devices using the software tool. A. Client/Server Architecture: The architecture we are proposing is based on the client/server model (see figure 3). We want both client and server to be accessible from anywhere via the internet. The computer acting as a server is ... On the other hand, each of the client applications will act as sender or receiver, depending on the associated interface: user interface or device ...
NASA Astrophysics Data System (ADS)
Osei, Richard
There are many problems associated with operating a data center, including data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and rising energy costs. Energy cost differs by location and, at most locations, fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy cost, data centers can have longer-lasting servers and equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Approaches data centers have used to reduce energy costs include dynamically switching servers on and off based on the number of users and predefined conditions, environmental monitoring sensors, and dynamic voltage and frequency scaling (DVFS), which lets processors run at different combinations of frequency and voltage. This thesis presents another method for reducing energy cost at data centers: applying Ant Colony Optimization (ACO) to a Quadratic Assignment Problem (QAP) formulation of assigning user requests to servers in geo-distributed data centers. Front portals, which handle users' requests, are used as the ants that find cost-effective assignments of user requests to servers in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and that its performance increases as the input data grows.
In a simulation with 3 geo-distributed data centers and user resource requests ranging from 25,000 to 25,000,000, the ACO algorithm reduced energy cost by an average of $0.70 per second. The ACO for Optimal Server Activation and Task Placement algorithm has thus proven to work as an alternative or an improvement for reducing energy cost in geo-distributed data centers.
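The flavour of ACO-based request placement can be conveyed with a heavily simplified sketch. This is not the thesis's algorithm: there are no capacity constraints or server activation decisions here, and the cost matrix is invented. Each "ant" (standing in for a front portal) builds a complete request-to-data-center assignment, biased by pheromone trails that are reinforced along low-cost assignments:

```python
import random

def aco_assign(costs, n_ants=20, n_iters=50, evap=0.5, seed=1):
    # Simplified ant colony optimization for request placement:
    # costs[i][j] is the energy cost of serving request i at data center j.
    rng = random.Random(seed)
    n_req, n_dc = len(costs), len(costs[0])
    pher = [[1.0] * n_dc for _ in range(n_req)]
    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            assign = []
            for i in range(n_req):
                # Desirability: pheromone weighted by inverse cost.
                weights = [pher[i][j] / (1.0 + costs[i][j]) for j in range(n_dc)]
                assign.append(rng.choices(range(n_dc), weights=weights)[0])
            total = sum(costs[i][j] for i, j in enumerate(assign))
            if total < best_cost:
                best, best_cost = assign, total
        # Evaporate, then deposit pheromone along the best-so-far assignment.
        pher = [[p * evap for p in row] for row in pher]
        for i, j in enumerate(best):
            pher[i][j] += 1.0 / best_cost
    return best, best_cost

costs = [[3.0, 1.0, 4.0],   # request 0's energy cost at each data center
         [2.0, 5.0, 1.0],
         [6.0, 2.0, 3.0]]
assignment, cost = aco_assign(costs)
print(assignment, cost)
```

The QAP formulation in the thesis additionally couples assignments to each other (server activation affects every request placed there), which is what makes the real problem hard and ACO attractive.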
The SAMS: Smartphone Addiction Management System and verification.
Lee, Heyoung; Ahn, Heejune; Choi, Samwook; Choi, Wanbok
2014-01-01
While the popularity of smartphones has brought enormous convenience to our lives, their pathological use has created a new mental health concern in the community, and intensive research is being conducted on the etiology and treatment of the condition. However, the traditional clinical approach, based on surveys and interviews, has serious limitations: health professionals cannot perform continual assessment and intervention for the affected group, and the subjectivity of the assessment is questionable. To cope with these limitations, a comprehensive ICT (Information and Communications Technology) system called SAMS (Smartphone Addiction Management System) was developed for objective assessment and intervention. The SAMS system consists of an Android smartphone application and a web application server. The SAMS client monitors the user's application usage together with GPS location and Internet access location, and transmits the data to the SAMS server. The SAMS server stores the usage data and performs key statistical data analysis and usage intervention according to the clinicians' decisions. To verify the reliability and efficacy of the developed system, a comparison study with survey-based screening using the K-SAS (Korean Smartphone Addiction Scale), as well as self-field trials, was performed. The comparison study used usage data from 14 adult users aged 19 to 50 who left at least one week of usage logs and completed the survey questionnaires. The field trial fully verified the accuracy of the time, location, and Internet access information in the usage measurement and the reliability of the system operation over more than 2 weeks. The comparison study showed that daily use counts have a strong correlation with K-SAS scores, whereas daily use times do not correlate strongly for potentially addicted users.
The correlation coefficients of daily use counts and daily use times with the total K-SAS score were CC = 0.62 and CC = 0.07, respectively, and the t-test p-values for the contrast between potential addicts and non-addicts were p = 0.047 and p = 0.507, respectively.
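The reported CC values are Pearson correlation coefficients, which can be computed along these lines; the per-user numbers below are made up for illustration and are not the study's data:

```python
from math import sqrt

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-user values (illustrative, not the study's data):
daily_counts = [40, 85, 120, 150, 200, 260]   # app launches per day
kscores      = [28, 35, 42, 47, 55, 61]       # total K-SAS score
print(round(pearson(daily_counts, kscores), 2))
```

A server-side analysis in SAMS would apply exactly this kind of statistic to the usage logs aggregated from the clients.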
The European Drought Observatory (EDO): Current State and Future Directions
NASA Astrophysics Data System (ADS)
Vogt, J.; Singleton, A.; Sepulcre, G.; Micale, F.; Barbosa, P.
2012-12-01
Europe has repeatedly been affected by droughts, resulting in considerable ecological and economic damage, and climate change studies indicate a trend towards increasing climate variability, most likely resulting in more frequent drought occurrences in Europe as well. Against this background, the European Commission's Joint Research Centre (JRC) is developing methods and tools for assessing, monitoring and forecasting droughts in Europe, as well as a European Drought Observatory (EDO) to complement and integrate national activities with a European view. At the core of EDO is a portal, including a map server, a metadata catalogue, a media monitor and analysis tools. The map server presents Europe-wide up-to-date information on the occurrence and severity of droughts, which is complemented by more detailed information provided by regional, national and local observatories through OGC-compliant web mapping and web coverage services. In addition, time series of historical maps as well as graphs of the temporal evolution of drought indices for individual grid cells and administrative regions in Europe can be retrieved and analysed. Current work is focusing on validating the available products, improving the functionalities, extending the linkage to additional national and regional drought information systems and improving medium- to long-range probabilistic drought forecasting products. Probabilistic forecasts are attractive in that they provide an estimate of the range of uncertainty in a particular forecast. Longer-term goals include the development of long-range drought forecasting products, the analysis of drought hazard and risk, the monitoring of drought impact and the integration of EDO in a global drought information system. The talk will provide an overview of the development and state of EDO, the different products, and the ways to include a wide range of stakeholders (i.e.
European, national river basin, and local authorities) in the development of the system as well as an outlook on the future developments.
Mireskandari, Masoud; Kayser, Gian; Hufnagl, Peter; Schrader, Thomas; Kayser, Klaus
2004-01-01
Eighty pathology cases were sent independently to each of two telepathology servers. Cases were submitted from the Department of Pathology at the University of Kerman in Iran (40 cases) and from the Institute of Pathology in Berlin, Germany (40 cases). The telepathology servers were located in Berlin (the UICC server) and Basel in Switzerland (the iPATH server). A scoring system was developed to quantify the differences between the diagnoses of the referring pathologist and the remote expert. Preparation of the cases, as well as the submission of images, took considerably longer from Kerman than from Berlin; this was independent of the server system. The Kerman delay was mainly associated with a slower transmission rate and longer image preparation. The diagnostic gap between referrers' and experts' diagnoses was greater with the iPATH system, but not significantly so. The experts' response time was considerably shorter for the iPATH system. The results showed that telepathology is feasible for requesting pathologists working in a developing country or in an industrialized country. The key factor in the quality of the service is the work of the experts: they should be selected according to their diagnostic expertise, and their commitment to the provision of telepathology services is critical.
Remotely Accessed Vehicle Traffic Management System
NASA Astrophysics Data System (ADS)
Al-Alawi, Raida
2010-06-01
The ever-increasing number of vehicles in most metropolitan cities around the world, and the limited scope for altering the transportation infrastructure, have led to serious traffic congestion and increased travelling times. In this work we exploit the emergence of novel technologies such as the internet to design an intelligent Traffic Management System (TMS) that can remotely monitor and control a network of traffic light controllers located at different sites. The system is based on Embedded Web Server (EWS) technology to realize a web-based TMS. The EWS located at each intersection uses IP technology to communicate remotely with a Central Traffic Management Unit (CTMU) located at the traffic department authority. Friendly GUI software installed at the CTMU can monitor the sequence of operation of the traffic lights and the presence of traffic at each intersection, as well as remotely control the operation of the signals. The system has been validated by constructing a prototype that resembles the real application.
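The idea of an embedded web server at each intersection answering monitoring and control requests over IP can be sketched with a minimal HTTP handler. The endpoints and signal states here are assumptions for illustration, not the paper's protocol:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Current signal state for this intersection (illustrative).
SIGNAL = {"state": "red"}

class IntersectionHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # /status lets the central unit poll the signal; /set?state=green
        # lets it command a change. A real EWS would add authentication.
        if self.path == "/status":
            body = SIGNAL["state"].encode()
        elif self.path.startswith("/set?state="):
            SIGNAL["state"] = self.path.split("=", 1)[1]
            body = b"ok"
        else:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the embedded server quiet

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), IntersectionHandler).serve_forever()
```

The CTMU's GUI would then be an ordinary HTTP client polling each intersection's `/status` and issuing `/set` commands, which is the attraction of the EWS approach: no custom wire protocol is needed.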
Wi-GIM system: a new wireless sensor network (WSN) for accurate ground instability monitoring
NASA Astrophysics Data System (ADS)
Mucchi, Lorenzo; Trippi, Federico; Schina, Rosa; Fornaciai, Alessandro; Gigli, Giovanni; Nannipieri, Luca; Favalli, Massimiliano; Marturia Alavedra, Jordi; Intrieri, Emanuele; Agostini, Andrea; Carnevale, Ennio; Bertolini, Giovanni; Pizziolo, Marco; Casagli, Nicola
2016-04-01
Landslides are among the most serious and common geologic hazards around the world. Their impact on human life is expected to increase in the near future as a consequence of human-induced climate change as well as population growth in proximity to unstable slopes. Therefore, developing better-performing technologies for monitoring landslides and providing local authorities with new instruments able to help them in the decision-making process is becoming more and more important. Recent progress in Information and Communication Technologies (ICT) allows us to extend the use of wireless technologies in landslide monitoring. In particular, developments in electronic components have made it possible to lower the price of the sensors and, at the same time, to implement more efficient wireless communications. In this work we present a new wireless sensor network (WSN) system, designed and developed for landslide monitoring in the framework of the EU Wireless Sensor Network for Ground Instability Monitoring (Wi-GIM) project (LIFE12 ENV/IT/001033). We show the preliminary performance of the Wi-GIM system after the first period of monitoring on the active Roncovetro landslide and on a large subsiding area in the neighbourhood of Sallent village. The Roncovetro landslide is located in the province of Reggio Emilia (Italy) and moved an inferred volume of about 3 million cubic meters. Sallent village is located at the centre of the Catalan evaporitic basin in Spain. The Wi-GIM WSN monitoring system consists of three levels: 1) the Master/Gateway level, which coordinates the WSN and performs data aggregation and local storage; 2) the Master/Server level, which takes care of acquiring and storing data on a remote server; 3) the node level, based on a mesh of peripheral nodes, each consisting of a sensor board equipped with sensors and a wireless module. The nodes are located along the landslide perimeter and are able to create an ad-hoc WSN.
The location of each sensor on the ground is determined by integrating an ultra-wideband technology with a radar technology; this integration allows the accuracy to be pushed towards the centimetre level. An extended Kalman filter is also used to reduce noise and enhance the accuracy of the measurements. The sensor nodes are organized as a hierarchical cluster, composed of one master and several slave nodes. The landslide movement is detected by comparing the x, y and z coordinates of each node day by day. The 3D movements of each sensor during the monitoring period are represented as vectors and displayed on a Web-GIS accessible at the following link: www.life-wigim.eu.
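The noise-reduction step can be illustrated with a scalar Kalman filter over a single coordinate. Wi-GIM uses an extended Kalman filter on the full UWB/radar ranging problem; this is a simplified linear sketch with assumed noise parameters:

```python
def kalman_1d(measurements, q=1e-4, r=0.04):
    # Scalar Kalman filter for a (nearly) static position:
    # q = process noise variance, r = measurement noise variance.
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                    # predict: position assumed roughly constant
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the new range measurement
        p *= (1 - k)
        estimates.append(x)
    return estimates

# Noisy distance readings (metres) scattered around a true value of 5.0
readings = [5.12, 4.91, 5.30, 4.85, 5.05, 4.97, 5.15, 4.95]
smoothed = kalman_1d(readings)
print(round(smoothed[-1], 2))
```

A small process noise `q` encodes the assumption that a landslide node moves slowly between readings, so the filter averages aggressively; day-by-day displacement comparisons then operate on the smoothed coordinates rather than the raw ranges.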
Group-oriented coordination models for distributed client-server computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Hughes, Craig S.
1994-01-01
This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
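The decompose/dispatch/combine cycle described above is essentially a scatter-gather pattern. A minimal sketch follows, using threads in place of networked servers; the server functions are stand-ins, not the paper's middleware API:

```python
from concurrent.futures import ThreadPoolExecutor

def scatter_gather(request, servers):
    # Decompose the client request into per-server subtasks, dispatch
    # them concurrently, then combine the partial results into one reply.
    with ThreadPoolExecutor(max_workers=len(servers)) as pool:
        futures = [pool.submit(server, request) for server in servers]
        partials = [f.result() for f in futures]  # gather in dispatch order
    return [row for partial in partials for row in partial]

# Stand-ins for independent database servers answering the same query.
def db_east(query):
    return [("east", query, 12)]

def db_west(query):
    return [("west", query, 7)]

print(scatter_gather("SELECT count(*)", [db_east, db_west]))
```

The control models in the paper make this coordination transparent to the client, which sees a single request and a single combined response even though multiple servers participated.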
An ECG storage and retrieval system embedded in client server HIS utilizing object-oriented DB.
Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S
1996-02-01
At the University of Tokyo Hospital, the improved client-server HIS has been applied to clinical practice: physicians can directly order prescriptions, laboratory examinations, ECG examinations, radiographic examinations, etc., and read the results of these examinations, except medical signal waves, schemas and images, on UNIX workstations. Recently, we designed and developed an ECG storage and retrieval system embedded in the client-server HIS, utilizing an object-oriented database, as a first step towards handling digitized signal, schema and image data and showing waves, graphics and images directly to physicians through the client-server HIS. The system was developed based on object-oriented analysis and design, and implemented with an object-oriented database management system (OODBMS) and the C++ programming language. In this paper, we describe the ECG data model, the functions of the storage and retrieval system, the features of the user interface, and the results of its implementation in the HIS.
Deceit: A flexible distributed file system
NASA Technical Reports Server (NTRS)
Siegel, Alex; Birman, Kenneth; Marzullo, Keith
1989-01-01
Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.
Building climate adaptation capabilities through technology and community
NASA Astrophysics Data System (ADS)
Murray, D.; McWhirter, J.; Intsiful, J. D.; Cozzini, S.
2011-12-01
To effectively plan for adaptation to changes in climate, decision makers require infrastructure and tools that will provide them with timely access to current and future climate information. For example, climate scientists and operational forecasters need to access global and regional model projections and current climate information that they can use to prepare monitoring products and reports and then publish these for the decision makers. Through the UNDP African Adaptation Programme, an infrastructure is being built across Africa that will provide multi-tiered access to such information. Web-accessible servers running RAMADDA, an open-source content management system for geoscience information, will provide access to the information at many levels: from the raw and processed climate model output to real-time climate conditions and predictions to documents and presentations for government officials. Output from regional climate models (e.g. RegCM4) and downscaled global climate models will be accessible through RAMADDA. The Integrated Data Viewer (IDV) is being used by scientists to create visualizations that assist the understanding of climate processes and projections, using data on these as well as external servers. Since RAMADDA is more than a data server, it is also being used as a publishing platform for the generated material, which will be available and searchable by the decision makers. Users can wade through the enormous volumes of information and extract subsets for their region or project of interest. Participants from 20 countries attended workshops at ICTP during 2011. They received training on setting up and installing the servers and necessary software and are now working on deploying the systems in their respective countries. This is the first time an integrated and comprehensive approach to climate change adaptation has been widely applied in Africa.
It is expected that this infrastructure will enhance North-South collaboration and improve the delivery of technical support and services. This improved infrastructure will enhance the capacity of countries to provide a wide range of robust products and services in a timely manner.
An analysis of the low-earth-orbit communications environment
NASA Astrophysics Data System (ADS)
Diersing, Robert Joseph
Advances in microprocessor technology and availability of launch opportunities have caused interest in low-earth-orbit satellite based communications systems to increase dramatically during the past several years. In this research the capabilities of two low-cost, store-and-forward LEO communications satellites operating in the public domain are examined: PACSAT-1 (operated by the Radio Amateur Satellite Corporation) and UoSAT-3 (operated by the University of Surrey, England, Electrical Engineering Department). The file broadcasting and file transfer facilities are examined in detail and a simulation model of the downlink traffic pattern is developed. The simulator will aid the assessment of changes in design and implementation for other systems. The development of the downlink traffic simulator is based on three major parts. First is a characterization of the low-earth-orbit operating environment, along with preliminary measurements of the PACSAT-1 and UoSAT-3 systems, including: satellite visibility constraints on communications, monitoring equipment configuration, link margin computations, determination of block and bit error rates, and establishing typical data capture rates for ground stations using computer-pointed directional antennas and fixed omni-directional antennas. Second, arrival rates for successful and unsuccessful file server connections are established, along with transaction service times. Downlink traffic has been further characterized by measuring: frame and byte counts for all data-link layer traffic; 30-second interval average response time for all traffic and for file server traffic only; file server response time on a per-connection basis; and retry rates for information and supervisory frames. Finally, the model is verified by comparison with measurements of actual traffic not previously used in the model-building process. The simulator is then used to predict operation of the PACSAT-1 satellite with modifications to the original design.
Migration of legacy mumps applications to relational database servers.
O'Kane, K C
2001-07-01
An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs into operating-system-independent, standard C code for subsequent compilation into fully stand-alone binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry-standard, networked relational database management servers (RDBMS), thus freeing Mumps applications from dependence on vendor-specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.
Network issues for large mass storage requirements
NASA Technical Reports Server (NTRS)
Perdue, James
1992-01-01
File servers and supercomputing environments need high-performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution that permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput matching that of emerging high-performance disk technologies such as RAID, parallel head transfer devices, and software striping; support for standard network and file system applications using the sockets-based application program interface, such as FTP, rcp, rdump, etc.; access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.
Log-less metadata management on metadata server for parallel file systems.
Liao, Jianwei; Xiao, Guoqiang; Peng, Xiaoning
2014-01-01
This paper presents a novel metadata management mechanism on the metadata server (MDS) for parallel and distributed file systems. In this technique, the client file system backs up the metadata requests it has sent and the metadata server has handled, so that the MDS does not need to log metadata changes to nonvolatile storage to achieve highly available metadata service, while also improving metadata processing performance. Because the client file system backs up these sent metadata requests in its own memory, the overhead of handling the backups is much smaller than the overhead the metadata server would incur by adopting logging or journaling to provide highly available metadata service. The experimental results show that the proposed mechanism can significantly improve the speed of metadata processing and deliver better I/O throughput than conventional metadata management schemes, that is, logging or journaling on the MDS. Moreover, complete metadata recovery can be achieved by replaying the backup logs cached by all involved clients when the metadata server has crashed or unexpectedly entered a non-operational state.
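The backup-and-replay idea can be sketched as follows. The class and method names are invented for illustration and are not taken from the paper's implementation: clients retain acknowledged requests in memory, and a recovering MDS rebuilds its namespace by replaying them.

```python
class ClientFS:
    """Client keeps an in-memory backup of acknowledged metadata
    requests so the MDS can skip journaling; after an MDS crash the
    backups are replayed to rebuild lost state (simplified sketch)."""
    def __init__(self):
        self.backup = []                      # sent-and-acked requests

    def send(self, mds, op, path):
        mds.apply(op, path)                   # MDS applies without logging
        self.backup.append((op, path))        # client backs up the request

class MDS:
    def __init__(self):
        self.namespace = set()

    def apply(self, op, path):
        if op == "create":
            self.namespace.add(path)
        elif op == "unlink":
            self.namespace.discard(path)

    def recover(self, clients):
        """Rebuild metadata by replaying every client's backup log."""
        self.namespace = set()
        for c in clients:
            for op, path in c.backup:
                self.apply(op, path)

mds, client = MDS(), ClientFS()
client.send(mds, "create", "/a")
client.send(mds, "create", "/b")
client.send(mds, "unlink", "/a")
crashed = MDS()                               # fresh MDS after a crash
crashed.recover([client])
print(sorted(crashed.namespace))  # ['/b']
```

A real system would also need to order replayed requests across clients; this sketch assumes a single client for simplicity.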
NASA Astrophysics Data System (ADS)
Rao, Hanumantha; Kumar, Vasanta; Srinivasa Rao, T.; Srinivasa Kumar, B.
2018-04-01
In this paper, we examine a two-stage queueing system in which arrivals are Poisson with a rate that depends on the state of the server, namely: vacation, pre-service, operational, or breakdown. The service station is liable to breakdowns and to delays in repair owing to non-availability of the repair facility. Service is in two basic stages: the first is bulk service to all of the customers waiting in line, and the second is individual service to each of them. The server operates under an N-policy and needs a preliminary (startup) time to begin batch service after a vacation period. Startup times, uninterrupted service times, the length of each vacation period, delay times, and service times follow exponential distributions. Closed-form expressions for the mean system size under the different server states are derived. Numerical investigations are conducted to study the impact of the system parameters on the optimal threshold N and the minimum expected unit cost.
Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC clients and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertain to an individual EOC's jurisdiction are stored on the EOC's local server.
Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.
An implementation of wireless medical image transmission system on mobile devices.
Lee, SangBock; Lee, Taesoo; Jin, Gyehwan; Hong, Juhyun
2008-12-01
Advances in computing technology have been followed by rapid improvements in medical instrumentation and patient record management systems. Typical examples are the hospital information system (HIS) and the picture archiving and communication system (PACS), which computerized the management of medical records and images in hospitals. Because these systems are built and used within hospitals, doctors outside the hospital have difficulty accessing them immediately in emergent cases. To solve this problem, this paper describes the realization of a system that transmits images acquired by medical imaging systems in the hospital to remote doctors' handheld PDAs over a CDMA cellular phone network. The system consists of a server and a PDA client. The server was developed to manage the accounts of doctors and patients and to allocate patient images to each doctor. The PDA client was developed to display patient images through a remote server connection. To authenticate individual users, the remote data access (RDA) method was used for PDA access to the server database, and the file transfer protocol (FTP) was used to download patient images from the remote server. In laboratory experiments, transmitting thirty images of 832 x 488 resolution, 24-bit depth, and 0.37 MB size took about ninety seconds. This result shows that the developed system enables remote doctors to receive and review patient images immediately in emergent cases.
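Reading the reported 0.37 MB figure as a per-image size (the abstract is ambiguous on this point), the implied per-image latency and link throughput work out as follows:

```python
def per_image_stats(n_images, total_seconds, image_mb):
    """Back-of-envelope throughput from the reported figures,
    assuming the 0.37 MB is the per-image (compressed) size."""
    secs_per_image = total_seconds / n_images
    throughput = image_mb / secs_per_image      # MB/s over the CDMA link
    return secs_per_image, throughput

secs, rate = per_image_stats(30, 90, 0.37)
print(f"{secs:.0f} s/image, {rate:.3f} MB/s")  # 3 s/image, 0.123 MB/s
```

About 0.12 MB/s (roughly 1 Mbit/s) is plausible for the CDMA networks of that era, which supports the per-image reading of the size figure.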
An unreliable group arrival queue with k stages of service, retrial under variant vacation policy
NASA Astrophysics Data System (ADS)
Radha, J.; Indhira, K.; Chandrasekaran, V. M.
2017-11-01
In this research work we considered a repairable retrial queue with group arrivals in which the server takes variant vacations. The server gives service in k stages. If an arriving group of units finds the server free, one unit from the group enters the first stage of service and the rest join the orbit. After completing the i-th stage of service, the customer may choose the (i+1)-th stage of service with probability θi, may rejoin the orbit as a feedback customer with probability pi, or may leave the system with probability qi = 1 - pi - θi for i = 1, 2, ..., k-1, and qk = 1 - pk. If the orbit is empty at the completion of any stage of service, the server takes a modified vacation until at least one customer is present in the orbit when the server returns from vacation. The busy server may break down, in which case the service channel fails for a short interval of time. Using the supplementary variable method, the steady-state probability generating function for the system size and some system performance measures are derived.
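At each stage the three outcomes (continue, feed back to the orbit, leave) must be exhaustive, which fixes the leaving probabilities. A small check with made-up values, not figures from the paper:

```python
def leave_prob(theta, p, k):
    """q_i = 1 - p_i - theta_i for i < k and q_k = 1 - p_k, so that at
    each stage 'continue', 'feedback to orbit' and 'leave' sum to one.
    theta has k-1 entries (no stage to continue to after stage k)."""
    q = [1 - p[i] - theta[i] for i in range(k - 1)] + [1 - p[k - 1]]
    for i in range(k):
        t = theta[i] if i < k - 1 else 0.0   # no (k+1)-th stage
        assert abs(t + p[i] + q[i] - 1) < 1e-12
    return q

theta = [0.3, 0.2]          # illustrative values only
p = [0.1, 0.1, 0.4]
print([round(x, 3) for x in leave_prob(theta, p, 3)])  # [0.6, 0.7, 0.6]
```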
Laboratory Information Systems.
Henricks, Walter H
2015-06-01
Laboratory information systems (LISs) supply mission-critical capabilities for the vast array of information-processing needs of modern laboratories. LIS architectures include mainframe, client-server, and thin client configurations. The LIS database software manages a laboratory's data. LIS dictionaries are database tables that a laboratory uses to tailor an LIS to the unique needs of that laboratory. Anatomic pathology LIS (APLIS) functions play key roles throughout the pathology workflow, and laboratories rely on LIS management reports to monitor operations. This article describes the structure and functions of APLISs, with emphasis on their roles in laboratory operations and their relevance to pathologists.
Multi stage unreliable retrial Queueing system with Bernoulli vacation
NASA Astrophysics Data System (ADS)
Radha, J.; Indhira, K.; Chandrasekaran, V. M.
2017-11-01
In this work we considered Bernoulli vacations in group-arrival retrial queues with an unreliable server. Here, the server provides service in k stages. If an arriving group of units finds the server free, one unit from the group enters the first stage of service and the rest join the orbit. After completion of the i-th (i = 1, 2, ..., k) stage of service, the customer may go to the (i+1)-th stage with probability θi, or leave the system with probability qi = 1 - θi for i = 1, 2, ..., k-1, and qk = 1. The server may take a vacation (whether the orbit is empty or not) with probability v after finishing a service, or continue serving with probability 1 - v. After finishing the vacation, the server searches for a customer in the orbit with probability θ or remains idle awaiting a new arrival with probability 1 - θ. The system is analyzed using the method of supplementary variables.
Adaptable radiation monitoring system and method
Archer, Daniel E [Livermore, CA; Beauchamp, Brock R [San Ramon, CA; Mauger, G Joseph [Livermore, CA; Nelson, Karl E [Livermore, CA; Mercer, Michael B [Manteca, CA; Pletcher, David C [Sacramento, CA; Riot, Vincent J [Berkeley, CA; Schek, James L [Tracy, CA; Knapp, David A [Livermore, CA
2006-06-20
A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
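The triggering step can be sketched as a threshold test on per-bin gross counts, treating the counts as Poisson so the background fluctuation scale is roughly the square root of the mean. The threshold rule and the numbers below are illustrative assumptions, not the patented algorithm:

```python
import statistics

def find_triggers(bin_counts, n_sigma=5.0, background=None):
    """Flag time bins whose gross counts exceed background by n_sigma,
    treating counts as Poisson (sigma ~ sqrt(mean))."""
    bg = background if background is not None else statistics.mean(bin_counts)
    threshold = bg + n_sigma * bg ** 0.5
    return [i for i, c in enumerate(bin_counts) if c > threshold]

# ~150 ms bins: steady background ~100 counts, a source drive-by at bins 6-7
counts = [98, 103, 101, 97, 102, 99, 240, 310, 104, 100]
print(find_triggers(counts, background=100))  # [6, 7]
```

Short bins are what make fast-moving sources detectable at all: a source passing in half a second is invisible when averaged into a 10 s bin but stands far above background in a 150 ms one.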
Ng, Curtise K C; White, Peter; McKay, Janice C
2009-04-01
Increasingly, the use of web database portfolio systems is noted in medical and health education and for continuing professional development (CPD). However, the functions of existing systems are not always aligned with the corresponding pedagogy, and hence reflection is often lost. This paper presents the development of a tailored web database portfolio system with Picture Archiving and Communication System (PACS) connectivity, based on portfolio pedagogy. Following a pre-determined portfolio framework, a system model is proposed with the components of web, database and mail servers, server-side scripts, and a Query/Retrieve (Q/R) broker for conversion between Hypertext Transfer Protocol (HTTP) requests and the Q/R service class of the Digital Imaging and Communications in Medicine (DICOM) standard. The system was piloted with seventy-seven volunteers. A tailored web database portfolio system (http://radep.hti.polyu.edu.hk) was developed. Technological arrangements for reinforcing portfolio pedagogy include popup windows (reminders) with guidelines and probing questions to 'collect', 'select' and 'reflect' on evidence of development/experience, a limit on the number of files (evidence) that can be uploaded, an 'Evidence Insertion' function to link individual uploaded artifacts with reflective writing, the capacity to accommodate a diversity of contents, and convenient interfaces for reviewing portfolios and for communication. Evidence to date suggests that the system supports users in building their portfolios with sound hypertext reflection under a facilitator's guidance, and allows reviewers to monitor students' progress and provide feedback and comments online across a whole programme.
Resource Management Scheme Based on Ubiquitous Data Analysis
Lee, Heung Ki; Jung, Jaehee
2014-01-01
Resource management of the main memory and process handler is critical to enhancing the system performance of a web server. Owing to the transaction delay time that affects incoming requests from web clients, web server systems utilize several web processes to anticipate future requests. This procedure is able to decrease the web generation time because there are enough processes to handle the incoming requests from web browsers. However, inefficient process management results in low service quality for the web server system. Proper pregenerated process mechanisms are required for dealing with the clients' requests. Unfortunately, it is difficult to predict how many requests a web server system is going to receive. If a web server system builds too many web processes, it wastes a considerable amount of memory space, and thus performance is reduced. We propose an adaptive web process manager scheme based on the analysis of web log mining. In the proposed scheme, the number of web processes is controlled through prediction of incoming requests, and accordingly, the web process management scheme consumes the least possible web transaction resources. In experiments, real web trace data were used to prove the improved performance of the proposed scheme. PMID:25197692
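A minimal sketch of such an adaptive process manager, sizing the pool of pregenerated web processes from a moving average of recent request counts. The window and margin parameters are invented for illustration; the paper's scheme is driven by web log mining rather than this simple average.

```python
from collections import deque

class AdaptiveProcessManager:
    """Size the pool of pregenerated web processes from a moving
    average of recent request counts, with a safety margin, so that
    idle processes (wasted memory) stay low (illustrative sketch)."""
    def __init__(self, window=5, margin=1.2, min_procs=2, max_procs=64):
        self.history = deque(maxlen=window)
        self.margin, self.min, self.max = margin, min_procs, max_procs

    def observe(self, requests_this_interval):
        self.history.append(requests_this_interval)

    def target_processes(self):
        if not self.history:
            return self.min
        predicted = sum(self.history) / len(self.history)
        return max(self.min, min(self.max, round(predicted * self.margin)))

mgr = AdaptiveProcessManager()
for load in [10, 12, 9, 11, 40]:   # request counts per interval
    mgr.observe(load)
print(mgr.target_processes())      # 20
```

The clamp between min_procs and max_procs captures the trade-off the abstract describes: too few processes delay clients, too many waste memory.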
Analysis of the Appropriateness of the Use of Peltier Cells as Energy Sources
Hájovský, Radovan; Pieš, Martin; Richtár, Lukáš
2016-01-01
The article describes the possibilities of using Peltier cells as an energy source to power telemetry units, which are used in large-scale monitoring systems as central units ensuring the collection of data from sensors, their processing, and their transmission to the database server. The article describes the various experiments that were carried out, their progress, and their results. Based on the evaluated experiments, the paper also discusses the suitability of various types of Peltier cells depending on the temperature difference between the cold and hot sides. PMID:27231913
AMS data production facilities at science operations center at CERN
NASA Astrophysics Data System (ADS)
Choutko, V.; Egorov, A.; Eline, A.; Shan, B.
2017-10-01
The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment on board the International Space Station (ISS). This paper presents the hardware and software facilities of the Science Operation Center (SOC) at CERN. Data production is built around a production server, a scalable distributed service which links together a set of different programming modules for science data transformation and reconstruction. The server has the capacity to manage 1,000 parallel job producers, i.e. up to 32K logical processors. A monitoring and management tool with a production GUI is also described.
NASA Astrophysics Data System (ADS)
Despa, D.; Nama, G. F.; Muhammad, M. A.; Anwar, K.
2018-04-01
Electrical quantities such as voltage, current, power, power factor, energy, and frequency in an electrical power system tend to fluctuate as a result of load changes, disturbances, or other abnormal states. Changes of state in these electrical quantities should be identified immediately; otherwise they can lead to serious problems for the whole system. It is therefore necessary to determine the condition of the electrical system quickly and appropriately in order to make effective decisions. Online monitoring of the power distribution system based on Internet of Things (IoT) technology was deployed and implemented at the Department of Mechanical Engineering, University of Lampung (Unila), in particular at the three-phase main distribution panel of the H-building. The measurement system involves multiple sensors, such as current sensors and voltage sensors, while data processing is conducted by an Arduino; the measurement data are stored in a database server and shown in real time through a web-based application. This measurement system has several important features, especially real-time monitoring, robust data acquisition and logging, and system reporting, so it produces information that can be used for various purposes of future power analysis, such as estimation and planning. The results of this research show that the electrical power system at the H-building operates under unbalanced load, which often leads to voltage-drop conditions.
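The electrical quantities listed above can be computed from synchronized voltage and current samples, much as a monitoring node would. A sketch with a synthetic 50 Hz waveform and the current lagging the voltage by 60 degrees (so the expected power factor is cos 60° = 0.5):

```python
import math

def power_metrics(v_samples, i_samples):
    """RMS voltage/current, real power and power factor from
    synchronized waveform samples over whole cycles."""
    n = len(v_samples)
    vrms = math.sqrt(sum(v * v for v in v_samples) / n)
    irms = math.sqrt(sum(i * i for i in i_samples) / n)
    p = sum(v * i for v, i in zip(v_samples, i_samples)) / n   # real power
    pf = p / (vrms * irms)                                     # power factor
    return vrms, irms, p, pf

# one 50 Hz cycle, 311 V and 14 A peaks, current lagging by 60 degrees
n, f = 1000, 50.0
t = [k / (n * f) for k in range(n)]
v = [311 * math.sin(2 * math.pi * f * tk) for tk in t]
i = [14 * math.sin(2 * math.pi * f * tk - math.pi / 3) for tk in t]
vrms, irms, p, pf = power_metrics(v, i)
print(f"Vrms={vrms:.0f} V, Irms={irms:.1f} A, PF={pf:.2f}")  # Vrms=220 V, Irms=9.9 A, PF=0.50
```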
Scalable Integrated Multi-Mission Support System Simulator Release 3.0
NASA Technical Reports Server (NTRS)
Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis
2012-01-01
The Scalable Integrated Multi-mission Support System (SIMSS) is a tool that performs a variety of test activities related to spacecraft simulations and ground segment checks. SIMSS is a distributed, component-based, plug-and-play client-server system useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations and is designed to be user-configurable or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as significantly reducing a mission's integration time and risk.
Mobile Monitoring Stations and Web Visualization of Biotelemetric System - Guardian II
NASA Astrophysics Data System (ADS)
Krejcar, Ondrej; Janckulik, Dalibor; Motalova, Leona; Kufel, Jan
The main area of interest of our project is to provide a solution that can be used in different areas of health care and that will be available through PDAs (Personal Digital Assistants), web browsers, or desktop clients. The realized system deals with an ECG sensor connected to mobile equipment, such as a PDA/Embedded device, based on the Microsoft Windows Mobile operating system. The whole system is based on the architecture of the .NET Compact Framework and Microsoft SQL Server. Visualization possibilities for the web interface and ECG data are also discussed, and a Microsoft Silverlight solution is finally suggested, along with screenshots of the implemented solution. The project was successfully tested in a real environment in a cryogenic room (-136 °C).
Smartphone-based Continuous Blood Pressure Measurement Using Pulse Transit Time.
Gholamhosseini, Hamid; Meintjes, Andries; Baig, Mirza; Linden, Maria
2016-01-01
The increasing availability of low-cost and easy-to-use personalized medical monitoring devices has opened the door for new and innovative methods of health monitoring to emerge. Cuff-less and continuous methods of measuring blood pressure are particularly attractive, as blood pressure is one of the most important measurements of long-term cardiovascular health. Current methods of noninvasive blood pressure measurement are based on inflation and deflation of a cuff, with some effects on the arteries where blood pressure is being measured. This inflation can also cause patient discomfort and alter the measurement results. In this work, a mobile application was developed to collate the photoplethysmogram (PPG) waveform provided by a pulse oximeter and the electrocardiogram (ECG) for calculating the pulse transit time. This information is then indirectly related to the user's systolic blood pressure. The developed application successfully connects to the PPG and ECG monitoring devices using a Bluetooth wireless connection and stores the data on an online server. The pulse transit time is estimated in real time, and the user's systolic blood pressure can be estimated after the system has been calibrated. The synchronization between the two devices was found to pose a challenge to this method of continuous blood pressure monitoring. However, the implemented continuous blood pressure monitoring system effectively serves as a proof of concept. This, combined with the benefits that an accurate and robust continuous blood pressure monitoring system would provide, indicates that it is worthwhile to develop this system further.
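A minimal sketch of the pulse-transit-time calculation and the kind of linear calibration it implies: PTT is measured per beat from the ECG R-peak to the foot of the following PPG pulse. The calibration constants and the event times below are made up for illustration; the paper does not publish its calibration.

```python
def pulse_transit_time(r_peak_times, ppg_foot_times):
    """PTT per beat: delay from each ECG R-peak to the foot of the
    following PPG pulse (times in seconds)."""
    ptts = []
    for r in r_peak_times:
        feet = [f for f in ppg_foot_times if f > r]
        if feet:
            ptts.append(min(feet) - r)
    return ptts

def systolic_bp(ptt, a=-100.0, b=150.0):
    """Linear calibration SBP = a*PTT + b; a and b are per-user
    constants found at calibration time (values here are invented)."""
    return a * ptt + b

r_peaks = [0.00, 0.80, 1.60]            # ECG R-peak times, seconds
ppg_feet = [0.25, 1.04, 1.86]           # PPG pulse-foot times, seconds
ptts = pulse_transit_time(r_peaks, ppg_feet)
print([round(p, 2) for p in ptts])      # [0.25, 0.24, 0.26]
print(round(systolic_bp(sum(ptts) / len(ptts)), 1))  # 125.0
```

The negative slope encodes the physiology: stiffer, higher-pressure arteries transmit the pulse faster, so shorter PTT maps to higher systolic pressure.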
Integrating RFID technique to design mobile handheld inventory management system
NASA Astrophysics Data System (ADS)
Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung
2008-04-01
An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.
Distributed PACS using distributed file system with hierarchical meta data servers.
Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato
2012-01-01
In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs existing in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. Because operations need not access the entire file, tasks such as finding files and changing titles can be performed at high speed. At the same time, since a distributed file system is utilized, access to image files achieves both high speed and high fault tolerance. A further advantage of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To overcome this defect, hierarchical metadata servers are introduced; this mechanism increases both fault tolerance and the scalability of file access. To evaluate the proposed system, a prototype using Gfarm was implemented, and file search times on Gfarm and NFS were compared.
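The metadata/image separation can be sketched as two cooperating stores: a metadata server that answers searches without touching pixel data, and an image store standing in for the distributed file system. Names and structures here are illustrative, not the paper's implementation.

```python
import hashlib

class MetaServer:
    """Holds only searchable DICOM-style metadata; operations such as
    find and rename never touch the bulky pixel data."""
    def __init__(self):
        self.records = {}                 # uid -> metadata dict

    def store(self, uid, meta, image_ref):
        self.records[uid] = dict(meta, image_ref=image_ref)

    def find(self, **criteria):
        return [uid for uid, m in self.records.items()
                if all(m.get(k) == v for k, v in criteria.items())]

class ImageStore:
    """Stands in for the distributed file system holding pixel data."""
    def __init__(self):
        self.blobs = {}

    def put(self, data):
        ref = hashlib.sha256(data).hexdigest()[:12]   # content-addressed ref
        self.blobs[ref] = data
        return ref

meta, images = MetaServer(), ImageStore()
ref = images.put(b"\x00" * 512 * 512)                 # fake pixel data
meta.store("1.2.3", {"patient": "A-001", "modality": "CT"}, ref)
print(meta.find(modality="CT"))  # ['1.2.3']
```

Federating several PACSs then amounts to merging only the small `records` tables, while image blobs stay where they are, which is the integration simplicity the abstract claims.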
NASA Astrophysics Data System (ADS)
Joshi, Ramesh; Singh, Manoj; Jadav, H. M.; Misra, Kishor; Kulkarni, S. V.; ICRH-RF Group
2010-02-01
Ion Cyclotron Resonance Heating (ICRH) is a promising heating method for a fusion device owing to its localized power deposition profile, direct ion heating at high density, and established technology for high-power RF generation and transmission at low cost. It is planned to integrate multiple analog pulses with different duty cycles within a master digital pulse in the data acquisition and control system for the steady-state RF ICRH system (RF ICRH DAC), to be used for operating the RF generator on the Aditya tokamak; the current control system software supports only single-digital-pulse operation of the RF source. The experiment is based on the idea of using a single RF generator to energize an antenna inside the tokamak to radiate power twice: the first analog pulse produces pre-ionization and the second produces heating, supporting both startup and heating experiments with RF power of different levels and durations. The task of the RF ICRH DAC is to control and acquire all ICRH system operations, including all control loops, with post-analysis of the acquired data through a Java-based tool. The whole system is based on standard client-server technology using the TCP/IP protocol. The DAC server software is based on the Linux operating system for highly reliable, secure, and stable fail-safe operation, in a real-time environment similar to vxWorks; the client is based on a Tcl/Tk-like toolkit for the user interface with a C/C++ environment, widely used for stand-alone system operation. The paper focuses on the data acquisition, monitoring, and interlocking software for the Aditya RF ICRH system, with the analog pulses in slave mode and the digital pulse in master mode.
The World-Wide Web and Mosaic: An Overview for Librarians.
ERIC Educational Resources Information Center
Morgan, Eric Lease
1994-01-01
Provides an overview of the Internet's World-Wide Web (Web), a hypertext system. Highlights include the client/server model; Uniform Resource Locator; examples of software; Web servers versus Gopher servers; HyperText Markup Language (HTML); converting files; Common Gateway Interface; organizing Web information; and the role of librarians in…
Server Level Analysis of Network Operation Utilizing System Call Data
2010-09-25
[Fragment of a numbered table of attack payloads from the report, partially recoverable: Server DLL Inject; Executable Download and Execute; Execute Command; Execute net user /ADD; PassiveX ActiveX Inject Meterpreter Payload; PassiveX ActiveX Inject VNC Server Payload; PassiveX ActiveX Injection Payload; Recv Tag Findsock Meterpreter; Recv Tag Findsock]
Naughton, Felix
2016-05-28
Smoking lapses early on during a quit attempt are highly predictive of failing to quit. A large proportion of these lapses are driven by cravings brought about by situational and environmental cues. Use of cognitive-behavioral lapse prevention strategies to combat cue-induced cravings is associated with a reduced risk of lapse, but evidence is lacking on how these strategies can be effectively promoted. Unlike most traditional methods of delivering behavioral support, mobile phones can in principle deliver automated support, including lapse prevention strategy recommendations, Just-In-Time (JIT) for when a smoker is most vulnerable, and thereby prevent early lapse. JIT support can be activated by smokers themselves (user-triggered), by prespecified rules (server-triggered), or through sensors that dynamically monitor a smoker's context and trigger support when a high-risk environment is sensed (context-triggered), also known as a Just-In-Time Adaptive Intervention (JITAI). However, research suggests that user-triggered JIT cessation support is seldom used, and existing server-triggered JIT support is likely to lack sufficient accuracy to effectively target high-risk situations in real time. Evaluations of mobile phone cessation interventions that include user- and/or server-triggered JIT support have yet to adequately assess whether this improves management of high-risk situations. While context-triggered systems have the greatest potential to deliver JIT support, there are, as yet, no impact evaluations of such systems. Although it may soon be feasible to learn about and monitor a smoker's context unobtrusively using their smartphone without burdensome data entry, there are several potential advantages to involving the smoker in data collection. This commentary describes the current knowledge on the potential for mobile phones to deliver automated support to help smokers manage or cope with high-risk environments or situations for smoking, known as JIT support.
The article categorizes JIT support into three main types: user-triggered, server-triggered, and context-triggered. For each type of JIT support, a description of the evidence and their potential to effectively target specific high risk environments or situations is described. The concept of unobtrusive sensing without user data entry to inform the delivery of JIT support is finally discussed in relation to potential advantages and disadvantages for behavior change.
Information resources assessment of a healthcare integrated delivery system.
Gadd, C. S.; Friedman, C. P.; Douglas, G.; Miller, D. J.
1999-01-01
While clinical healthcare systems may have lagged behind computer applications in other fields in the shift from mainframes to client-server architectures, the rapid deployment of newer applications is closing that gap. Organizations considering the transition to client-server must identify and position themselves to provide the resources necessary to implement and support the infrastructure requirements of client-server architectures and to manage the accelerated complexity at the desktop, including hardware and software deployment, training, and maintenance needs. This paper describes an information resources assessment of the recently aligned Pennsylvania regional Veterans Administration Stars and Stripes Health Network (VISN4), in anticipation of the shift from a predominantly mainframe to a client-server information systems architecture in its well-established VistA clinical information system. The multimethod assessment study is described here to demonstrate this approach and its value to regional healthcare networks undergoing organizational integration and/or significant information technology transformations. PMID:10566414
A Design of a Network Model to the Electric Power Trading System Using Web Services
NASA Astrophysics Data System (ADS)
Maruo, Tomoaki; Matsumoto, Keinosuke; Mori, Naoki; Kitayama, Masashi; Izumi, Yoshio
Web services are regarded as a new application paradigm in the world of the Internet. At the same time, many business models for power trading systems have been proposed that aim at load reduction through consumers cooperating with electric power suppliers in an electric power market. In this paper, we therefore propose a network model of a power trading system using Web services. The adaptability of Web services to a power trading system was checked in a prototype of our network model, with good results. Each server provides its functions as a SOAP server, and the servers are loosely coupled with each other through SOAP. Embedding SOAP messages in HTTP packets establishes a communication path that passes through firewalls transparently. Dynamic server switching is possible by rewriting the server endpoint information in the WSDL when a fault occurs.
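The firewall-transparent mechanism described above, a SOAP envelope carried in an ordinary HTTP POST, can be sketched as follows. This is a minimal illustration: the operation name `RequestLoadReduction`, the host, and the path are hypothetical, not taken from the paper.

```python
# Sketch: embedding a SOAP message in an HTTP POST body. Because the request
# travels over HTTP like any web traffic, intermediate firewalls pass it.
# Operation, namespace usage, host, and path are illustrative assumptions.

def build_soap_request(operation, params):
    """Build a minimal SOAP 1.1 envelope for the given operation."""
    body = "".join(f"<{k}>{v}</{k}>" for k, v in params.items())
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body><{operation}>{body}</{operation}></soap:Body>"
        "</soap:Envelope>"
    )

def build_http_post(host, path, soap_payload):
    """Wrap the SOAP envelope in a plain HTTP POST request."""
    return (
        f"POST {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: text/xml; charset=utf-8\r\n"
        f"Content-Length: {len(soap_payload)}\r\n\r\n"
        f"{soap_payload}"
    )

envelope = build_soap_request("RequestLoadReduction", {"consumerId": "C42", "kw": 15})
request = build_http_post("trading.example.org", "/power/soap", envelope)
```

Endpoint switching as described in the abstract would then amount to the client re-reading the service address from the WSDL before building the POST.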
Performance Monitoring of Residential Hot Water Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Anna; Lanzisera, Steven; Lutz, Jim
Current water distribution systems are designed such that users need to run the water for some time to achieve the desired temperature, wasting energy and water in the process. We developed a wireless sensor network for large-scale, long time-series monitoring of residential water end use. Our system consists of flow meters connected to wireless motes transmitting data to a central manager mote, which in turn posts data to our server via the internet. This project also demonstrates a reliable and flexible data collection system that could be configured for various other forms of end-use metering in buildings. The purpose of this study was to determine water and energy use and waste in hot water distribution systems in California residences. We installed meters at every end-use point and the water heater in 20 homes and collected 1 s flow and temperature data over an 8-month period. For typical shower and dishwasher events, approximately half the energy is wasted. This relatively low efficiency highlights the importance of further examining the energy and water waste in hot water distribution systems.
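From 1 s flow and temperature samples like those collected here, the wasted fraction of a draw can be estimated by counting energy delivered in water that arrived too cool to use. The 40 °C usability threshold, the 15 °C cold-water temperature, and the sample values below are illustrative assumptions, not figures from the study.

```python
# Sketch: estimating energy wasted while waiting for hot water, from 1 s
# samples of (flow in L/s, delivery temperature in deg C). Water below a
# usable threshold counts as waste. Threshold and samples are assumptions.

RHO_CP = 4186.0  # J/(kg*K); 1 L of water is ~1 kg

def wasted_energy_joules(samples, t_cold=15.0, t_usable=40.0):
    """Sum the energy put into water that arrived below the usable temperature."""
    waste = 0.0
    for flow_l_per_s, temp_c in samples:  # one sample per second
        if temp_c < t_usable:
            waste += flow_l_per_s * RHO_CP * max(temp_c - t_cold, 0.0)
    return waste

# A short draw: three seconds of cooled pipe water, then usable hot water.
draw = [(0.1, 20.0), (0.1, 30.0), (0.1, 39.0), (0.1, 45.0), (0.1, 50.0)]
waste_j = wasted_energy_joules(draw)
```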
A uniqueness-and-anonymity-preserving remote user authentication scheme for connected health care.
Chang, Ya-Fen; Yu, Shih-Hui; Shiao, Ding-Rui
2013-04-01
Connected health care provides new opportunities for improving financial and clinical performance. Many connected health care applications, such as telecare medicine information systems, personally controlled health records systems, and patient monitoring, have been proposed. Correct, high-quality care is the goal of connected health care, and user authentication can ensure the legality of patients. After reviewing authentication schemes for connected health care applications, we find that many of them cannot protect patient privacy, so that others can trace users/patients through the transmitted data. Moreover, the verification tokens these schemes use to authenticate users or servers are only passwords, smart cards, and RFID tags; such tokens are not unique and are easy to copy. Biometric characteristics, by contrast, such as iris, face, voiceprint, and fingerprint, are unique, easy to verify, and hard to copy. In this paper, a biometrics-based user authentication scheme is proposed to ensure uniqueness and anonymity at the same time. With the proposed scheme, only the legal user/patient can access the remote server, and no one can trace him/her through the transmitted data.
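The untraceability property targeted here, that transmitted data cannot be linked to a patient across sessions, can be illustrated with a much simpler stand-in than the paper's biometric scheme: send a fresh pseudonym `H(id || nonce)` each session, which only the server, holding the registry, can resolve. This is an illustrative simplification, not the authors' protocol.

```python
# Sketch of the anonymity property only: a per-session pseudonym keeps an
# eavesdropper from linking two logins to the same patient. Simplified
# illustration; the paper's actual scheme is biometrics-based.
import hashlib
import secrets

def make_login_token(user_id):
    """Client side: fresh nonce -> unlinkable pseudonym for this session."""
    nonce = secrets.token_hex(16)
    pseudonym = hashlib.sha256(f"{user_id}:{nonce}".encode()).hexdigest()
    return pseudonym, nonce

def server_verify(pseudonym, nonce, registered_ids):
    """Server side: trial-hash each registered id; return the match or None."""
    for uid in registered_ids:
        if hashlib.sha256(f"{uid}:{nonce}".encode()).hexdigest() == pseudonym:
            return uid
    return None

p1, n1 = make_login_token("patient-007")
p2, n2 = make_login_token("patient-007")  # same patient, different pseudonym
```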
Networked Instructional Chemistry: Using Technology To Teach Chemistry
NASA Astrophysics Data System (ADS)
Smith, Stanley; Stovall, Iris
1996-10-01
Networked multimedia microcomputers provide new ways to help students learn chemistry and to help instructors manage the learning environment. This technology is used to replace some traditional laboratory work, collect on-line experimental data, enhance lectures and quiz sections with multimedia presentations, provide prelaboratory training for the beginning non-chemistry-major organic laboratory, provide electronic homework for organic chemistry students, give graduate students access to real NMR data for analysis, and provide access to molecular modeling tools. The integration of all of these activities into an active learning environment is made possible by a client-server network of hundreds of computers. This requires not only instructional software but also classroom and course management software, computers, networking, and room management. Combining computer-based work with traditional course material is made possible with software management tools that allow the instructor to monitor the progress of each student and make available an on-line gradebook so students can see their grades and class standing. This client-server based system extends the capabilities of the earlier mainframe-based PLATO system, which was used for instructional computing. This paper outlines the components of a technology center used to support over 5,000 students per semester.
Aviation System Analysis Capability Quick Response System Report Server User’s Guide.
1996-10-01
The primary data sources for the QRS Report Server include United States Department of Transportation airline service quality performance data. The guide also lists its typographic conventions: distinct styles are used for quoted text messages from WWW pages, for WWW page and section titles, for cross-references and links to another document or another section of the same document, and ALL CAPS is used to indicate Report Server variables.
NASA Astrophysics Data System (ADS)
Baudel, S.; Blanc, F.; Jolibois, T.; Rosmorduc, V.
2004-12-01
The Products and Services (P&S) department in the Space Oceanography Division at CLS is in charge of distributing and promoting altimetry and operational oceanography data. P&S is thus involved in the Aviso satellite altimetry project, in the Mercator ocean operational forecasting system, and in the European Godae/Mersea ocean portal. Aiming at standardisation and a common vision and management of all these ocean data, these projects led to the implementation of several OPeNDAP/LAS Internet servers. OPeNDAP allows the user to extract, via client software (such as IDL, Matlab, or Ferret), only the data of interest, avoiding downloads of full data files: a geographic area, a time period, an oceanic variable, and an output format can all be specified. LAS is an OPeNDAP data access web server whose special feature is unifying, in a single view, access to multiple types of data from distributed data sources. The LAS can make requests to different remote OPeNDAP servers, which enables comparisons or statistics across several different data types. Aviso is the CNES/CLS service which has distributed altimetry products since 1993. The Aviso LAS distributes several Ssalto/Duacs altimetry products such as delayed-time and near-real-time mean sea level anomaly, absolute dynamic topography, absolute geostrophic velocities, gridded significant wave height, and gridded wind speed modulus. Mercator-Ocean is a French operational oceanography centre which distributes its products by several means, among them LAS/OPeNDAP servers, as part of the Mercator Mersea-strand1 contribution. 3D ocean descriptions (temperature, salinity, current, and other oceanic variables) of the North Atlantic and Mediterranean are available in real time and updated weekly. The LAS feature of making requests to several remote data centres with the same OPeNDAP configuration is particularly fitted to the Mersea strand-1 setting.
This European project (June 2003 to June 2004), sponsored by the European Commission, was the first experience of an integrated operational oceanography project. The objective was the assessment of several existing operational in situ and satellite monitoring and numerical forecasting systems for the future elaboration (Mersea Integrated Project, 2004-2008) of an integrated system able to deliver, operationally, information products (physical, chemical, biological) to end users in several domains related to environment, security, and safety. Five forecasting ocean models with data assimilation, fed by operational in situ or satellite data centres, were intercompared. The main difficulty of this LAS implementation lay in the definition of ocean model metrics and the adoption of a common file format, which required the model teams to produce the same datasets in the same formats (NetCDF, COARDS/CF convention). Notice that this was a pioneering approach and that it has been adopted by Godae standards (see F. Blanc's paper in this session). Continuing with these web technologies and moving towards a more user-oriented service, perspectives include the implementation of a Map Server, an open-source GIS server which will communicate with the OPeNDAP server. The Map Server will be able to manipulate raster and vector multidisciplinary remote data simultaneously. The aim is to construct a complete web-based oceanic data distribution service, and the projects in which we are involved allow us to progress towards it.
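The OPeNDAP access pattern described above works by appending a constraint expression of index ranges to the dataset URL, so the server returns only the requested subset. A minimal URL builder sketch follows; the host, dataset name, and variable name are hypothetical, not actual Aviso or Mercator endpoints.

```python
# Sketch: building an OPeNDAP subset URL with a hyperslab constraint
# expression, variable[start:stride:stop] per dimension. Names are invented.

def opendap_url(base, variable, index_ranges, fmt="ascii"):
    """index_ranges: one (start, stride, stop) tuple per dimension,
    e.g. time, latitude, longitude for a gridded sea level anomaly field."""
    constraint = "".join(f"[{a}:{s}:{b}]" for a, s, b in index_ranges)
    return f"{base}.{fmt}?{variable}{constraint}"

# One time step of a hypothetical SLA grid, over a lat/lon index box.
url = opendap_url(
    "http://opendap.example.org/data/msla.nc",
    "sla",
    [(0, 1, 0), (100, 1, 120), (200, 1, 240)],
)
```

A client such as Ferret or Matlab would fetch this URL and receive only the boxed subset rather than the full file.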
Model of load balancing using reliable algorithm with multi-agent system
NASA Astrophysics Data System (ADS)
Afriansyah, M. F.; Somantri, M.; Riyadi, M. A.
2017-04-01
Massive technology development tracks the growth of internet users, which increases network traffic activity and, with it, the load on the system. The use of a reliable algorithm and mobile agents in distributed load balancing is a viable solution to handle the load issue on a large-scale system. A mobile agent collects resource information and can migrate according to a given task. We propose a reliable load balancing algorithm using least time first byte (LFB) combined with information from the mobile agent. In the system overview, the methodology consisted of defining the identification system, the specification requirements, the network topology, and the design of the system infrastructure. The simulation sent 1800 requests over 10 s from users to the server and collected the data for analysis. The software simulation was based on Apache JMeter, observing the response time and reliability of each server and comparing them with an existing method. Results of the performed simulation show that the LFB method with mobile agents balances load efficiently across all backend servers without bottlenecks, with a low risk of server overload, and reliably.
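The core of an LFB-style policy can be sketched in a few lines: agents report each backend's recent time-to-first-byte (TTFB), the balancer keeps a smoothed estimate per server, and the next request goes to the server with the lowest estimate. The smoothing factor, server names, and numbers below are illustrative assumptions, not values from the paper.

```python
# Sketch of least-time-first-byte selection fed by agent reports.
# Exponential smoothing keeps one slow probe from dominating the estimate.

def update_estimate(estimates, server, ttfb_ms, alpha=0.3):
    """Blend the new agent-reported TTFB into the running estimate."""
    prev = estimates.get(server, ttfb_ms)  # first report seeds the estimate
    estimates[server] = (1 - alpha) * prev + alpha * ttfb_ms
    return estimates

def pick_backend(estimates):
    """Route the next request to the backend with the least estimated TTFB."""
    return min(estimates, key=estimates.get)

estimates = {}
for server, ttfb in [("backend-1", 42.0), ("backend-2", 17.5), ("backend-3", 88.3)]:
    update_estimate(estimates, server, ttfb)
target = pick_backend(estimates)
```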
Analysis of bulk arrival queueing system with batch size dependent service and working vacation
NASA Astrophysics Data System (ADS)
Niranjan, S. P.; Indhira, K.; Chandrasekaran, V. M.
2018-04-01
This paper concentrates on a single-server bulk arrival queueing system with batch-size-dependent service and working vacation. The server provides service in two modes depending upon the queue length: single service if the queue length is at least `a', and fixed batch service, with batch size `k', if the queue length is at least `k' (k > a). After completion of service, if the queue length is less than `a', the server leaves for a working vacation, during which customers are served at a lower rate than the regular service rate; service during working vacation also has the two modes. For the proposed model, the probability generating function of the queue length at an arbitrary time is obtained by the supplementary variable technique. Some performance measures are also presented with suitable numerical illustrations.
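The two-threshold service discipline described above can be written as simple control logic. This sketch covers only the mode-selection rule with thresholds `a` and `k`; it does not reproduce the paper's probabilistic analysis.

```python
# Sketch of the service discipline: batch of fixed size k when at least k
# customers wait, single service when at least a wait, otherwise the server
# departs on a working vacation (where it serves at a lower rate).

def next_action(queue_len, a, k):
    """Return (mode, customers_taken) for the given queue length."""
    if queue_len >= k:
        return ("batch_service", k)       # fixed batch size k
    if queue_len >= a:
        return ("single_service", 1)
    return ("working_vacation", 0)        # below threshold a

# Example with a = 2, k = 5.
actions = [next_action(q, 2, 5) for q in (7, 3, 1)]
```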
NASA Astrophysics Data System (ADS)
Keshet, Aviv; Ketterle, Wolfgang
2013-01-01
Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
New data model with better functionality for VLab
NASA Astrophysics Data System (ADS)
da Silveira, P. R.; Wentzcovitch, R. M.; Karki, B. B.
2009-12-01
The VLab infrastructure and architecture were further developed to allow for several new features. First, workflows for first-principles calculations of thermodynamic properties and static elasticity, programmed in Java as Web Services, can now be executed by multiple users. Second, jobs generated by these workflows can now be executed in batch on multiple servers. A simple internal scheduler was implemented to handle hundreds of execution packages generated by multiple users and avoid overloading the servers. Third, a new data model was implemented to guarantee the integrity of a project (workflow execution) in case of failure, which can happen in an execution package or in a workflow phase. By recording all executed steps of a project, its execution can be resumed after dynamic alteration of parameters through the VLab Portal. Fourth, batch jobs can also be monitored through the portal. Better and faster interaction with servers is now achieved using Ajax technology. Finally, plots are now created on the VLab server using Gnuplot 4.2.2. Research supported by NSF grant ATM 0428774 (VLab). VLab is hosted by the Minnesota Supercomputing Institute.
Tsui, Fu-Chiang; Espino, Jeremy U; Weng, Yan; Choudary, Arvinder; Su, Hoah-Der; Wagner, Michael M
2005-01-01
The National Retail Data Monitor (NRDM) has monitored over-the-counter (OTC) medication sales in the United States since December 2002. The NRDM collects data from over 18,600 retail stores and processes over 0.6 million sales records per day. This paper describes key architectural features that we have found necessary for a data utility component in a national biosurveillance system. These elements include an event-driven architecture to provide analyses of data in near real time, multiple levels of caching to improve query response time, high availability through the use of clustered servers, scalable data storage through the use of storage area networks, and a web-service function for interoperation with affiliated systems. The methods and architectural principles are relevant to the design of any production data utility for public health surveillance: systems that collect data from multiple sources in near real time for use by analytic programs and user interfaces that have substantial requirements for time-series data aggregated in multiple dimensions.
A mobile system for skin cancer diagnosis and monitoring
NASA Astrophysics Data System (ADS)
Gu, Yanliang; Tang, Jinshan
2014-05-01
In this paper, we propose a mobile system for aiding doctors in skin cancer diagnosis and other persons in skin cancer monitoring. The basic idea is to use image retrieval techniques to help users find similar skin cancer cases stored in a database by using smart phones. The query image can be taken by a smart phone from a patient or can be uploaded from other resources. The shapes of the skin lesions are used for matching two skin lesions, which are segmented from skin images using the skin lesion extraction method developed in [1]. The features used in the proposed system are obtained by Fourier descriptors. A prototype application has been developed and can be installed on an iPhone. In this application, iPhone users can use the phone as a diagnosis tool to find potential skin lesions on a person's skin and compare the lesions detected by the iPhone with those stored in a database on a remote server.
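Shape matching with Fourier descriptors, as used above, treats boundary points as complex numbers and compares the magnitudes of their Fourier coefficients: dropping the DC term gives translation invariance, taking magnitudes gives rotation and starting-point invariance, and dividing by the first harmonic gives scale invariance. A dependency-free sketch with a hand-rolled DFT (our illustration, not the paper's implementation, which starts from segmented lesion boundaries):

```python
# Sketch: translation-, rotation-, and scale-invariant shape signature from
# Fourier descriptor magnitudes of a closed boundary. Pure-Python DFT;
# assumes a non-degenerate boundary (non-zero first harmonic).
import cmath
import math

def fourier_descriptor(boundary, n_coeffs=8):
    pts = [complex(x, y) for x, y in boundary]
    n = len(pts)
    mags = []
    for k in range(1, n_coeffs + 1):  # skip k=0 (DC) for translation invariance
        fk = sum(p * cmath.exp(-2j * math.pi * k * t / n)
                 for t, p in enumerate(pts))
        mags.append(abs(fk))
    return [m / mags[0] for m in mags]  # scale-normalize by first harmonic

def shape_distance(d1, d2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

# A circle and a scaled copy should produce nearly identical descriptors.
circle = [(math.cos(2 * math.pi * i / 32), math.sin(2 * math.pi * i / 32))
          for i in range(32)]
scaled = [(3 * x, 3 * y) for x, y in circle]
d_circle = fourier_descriptor(circle)
d_scaled = fourier_descriptor(scaled)
```

The retrieval step then reduces to returning the database lesions with the smallest `shape_distance` to the query descriptor.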
Implementation of Medical Information Exchange System Based on EHR Standard
Han, Soon Hwa; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong
2010-01-01
Objectives: To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. Methods: To assure that patients' medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop essential security systems for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents, and network security. Results: The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of related hospitals. Conclusions: This research chooses transfer items and defines document standards that follow CDA standards, such that exchange of CDA documents between different systems became possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information. PMID:21818447
NASA Astrophysics Data System (ADS)
Xu, Chong-Yao; Zheng, Xin; Xiong, Xiao-Ming
2017-02-01
With the development of the Internet of Things (IoT) and the popularity of intelligent mobile terminals, smart home systems have come into view. However, due to high cost, complex installation, inconvenience, and network security issues, smart home systems have not been widely adopted. In this paper, combining Wi-Fi technology, the Android system, a cloud server, and the SSL security protocol, a new smart home system is designed with low cost, easy operation, and high security and stability. The system consists of Wi-Fi smart nodes (WSNs), an Android client, and a cloud server. To reduce system cost and installation complexity, the Wi-Fi transceiver, appliance control logic, and data conversion in each WSN are handled by a single chip. In addition, all the data of the WSN can be uploaded to the server through the home router, without having to transit through a gateway. All appliance status information and environmental information are preserved in the cloud server. Furthermore, to ensure the security of information, the Secure Sockets Layer (SSL) protocol is used in the WSN's communication with the server. Finally, to improve comfort and simplify operation, the Android client is designed with a room-based layout, making home appliance control more realistic and convenient.
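The node-to-cloud security layer described above, a TLS/SSL-wrapped socket with server certificate verification, can be sketched with Python's standard `ssl` module. The host name and port are placeholders; a real deployment would also need certificate provisioning on the node side.

```python
# Sketch: a client (node or test harness) wrapping its TCP connection in TLS
# before talking to the cloud server. Host/port are illustrative placeholders.
import socket
import ssl

def make_tls_connection(host, port):
    """Open a verified TLS connection to the cloud server."""
    ctx = ssl.create_default_context()  # verifies server cert and host name
    raw = socket.create_connection((host, port), timeout=5)
    return ctx.wrap_socket(raw, server_hostname=host)

# Building the context alone (no network needed) already fixes the policy:
ctx = ssl.create_default_context()
```

The defaults of `create_default_context` require a valid certificate chain and a matching host name, which is the property that keeps appliance traffic from being intercepted on the path through the home router.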
Implementation of Medical Information Exchange System Based on EHR Standard.
Han, Soon Hwa; Lee, Min Ho; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong
2010-12-01
To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. To assure that patient's medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop essential security systems for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents, and network security. The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of related hospitals. This research chooses transfer items and defines document standards that follow CDA standards, such that exchange of CDA documents between different systems became possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information.
Selection of Server-Side Technologies for an E-Business Curriculum
ERIC Educational Resources Information Center
Sandvig, J. Christopher
2007-01-01
The rapid growth of e-business and e-commerce has made server-side programming an increasingly important topic in information systems (IS) and computer science (CS) curricula. This article presents an overview of the major features of several popular server-side programming technologies and discusses the factors that influence the selection of…
From Server to Desktop: Capital and Institutional Planning for Client/Server Technology.
ERIC Educational Resources Information Center
Mullig, Richard M.; Frey, Keith W.
1994-01-01
Beginning with a request for an enhanced system for decision/strategic planning support, the University of Chicago's biological sciences division has developed a range of administrative client/server tools, instituted a capital replacement plan for desktop technology, and created a planning and staffing approach enabling rapid introduction of new…
Internet-Based Solutions for a Secure and Efficient Seismic Network
NASA Astrophysics Data System (ADS)
Bhadha, R.; Black, M.; Bruton, C.; Hauksson, E.; Stubailo, I.; Watkins, M.; Alvarez, M.; Thomas, V.
2017-12-01
The Southern California Seismic Network (SCSN), operated by Caltech and USGS, leverages modern Internet-based computing technologies to provide timely earthquake early warning for damage reduction, event notification, ShakeMap, and other data products. Here we present recent and ongoing innovations in telemetry, security, cloud computing, virtualization, and data analysis that have allowed us to develop a network that runs securely and efficiently. Earthquake early warning systems must process seismic data within seconds of being recorded, and SCSN maintains a robust and resilient network of more than 350 digital strong motion and broadband seismic stations to achieve this goal. We have continued to improve the path diversity and fault tolerance within our network, and have also developed new tools for latency monitoring and archiving. Cyberattacks are in the news almost daily, and with most of our seismic data streams running over the Internet, it is only a matter of time before SCSN is targeted. To ensure system integrity and availability across our network, we have implemented strong security, including encryption and Virtual Private Networks (VPNs). SCSN operates its own data center at Caltech, but we have also installed real-time servers on Amazon Web Services (AWS), to provide an additional level of redundancy, and eventually to allow full off-site operations continuity for our network.
Our AWS systems receive data from Caltech-based import servers and directly from field locations, and are able to process the seismic data, calculate earthquake locations and magnitudes, and distribute earthquake alerts, directly from the cloud. We have also begun a virtualization project at our Caltech data center, allowing us to serve data from Virtual Machines (VMs), making efficient use of high-performance hardware and increasing the flexibility and scalability of our data processing systems. Finally, we have developed new monitoring of average noise levels at most stations. Noise monitoring is effective at identifying anthropogenic noise sources and malfunctioning acquisition equipment. We have built a dynamic display of results with sorting and mapping capabilities that allows us to quickly identify problematic sites and areas with elevated noise.
PEM public key certificate cache server
NASA Astrophysics Data System (ADS)
Cheung, T.
1993-12-01
Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework, which provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications: the directory contains information about objects and provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server, which manages a cache of certificates and a cache of Certificate Revocation Lists (CRLs) for PEM applications. PEM applications contact the server to obtain and store certificates and CRLs. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked with this software because ELROS uses the transport layer functions provided by 'libisode.a'. The X.500 DAP library included with the ELROS distribution also has to be linked in, since the server uses the DAP library functions to communicate with directory servers.
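The cache server's core behavior, serve from cache, fall back to a directory fetch, and refuse anything on a cached CRL, can be sketched independently of the C/ELROS implementation. The directory lookup is stubbed out here; the real server speaks DAP to X.500 directories as described above.

```python
# Sketch: a certificate cache with CRL checking. The directory_lookup
# callable stands in for the server's X.500 DAP queries; serial numbers and
# certificate blobs are illustrative.

class CertificateCache:
    def __init__(self, directory_lookup):
        self._cache = {}        # subject name -> (serial, certificate blob)
        self._revoked = set()   # serial numbers from cached CRLs
        self._lookup = directory_lookup

    def load_crl(self, serial_numbers):
        """Merge a fetched CRL's serial numbers into the revocation set."""
        self._revoked.update(serial_numbers)

    def get(self, subject):
        if subject not in self._cache:
            self._cache[subject] = self._lookup(subject)  # fetch once, then cache
        serial, blob = self._cache[subject]
        if serial in self._revoked:
            raise ValueError(f"certificate for {subject} is revoked")
        return blob

fetches = []
def fake_directory(subject):            # stub for the DAP directory query
    fetches.append(subject)
    return (len(fetches), f"CERT<{subject}>")

cache = CertificateCache(fake_directory)
```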
Park, Hyo Seon; Shin, Yunah; Choi, Se Woon; Kim, Yousok
2013-01-01
In this study, a practical and integrative SHM system was developed and applied to a large-scale irregular building under construction, where many challenging issues exist. In the proposed sensor network, customized energy-efficient wireless sensing units (sensor nodes, repeater nodes, and master nodes) were employed, and comprehensive communications from the sensor node to the remote monitoring server were conducted wirelessly. The long-term (13-month) monitoring results recorded from a large number of sensors (75 vibrating wire strain gauges, 10 inclinometers, and three laser displacement sensors) indicated that the construction event exhibiting the largest influence on structural behavior was the removal of bents that had been temporarily installed to support the free end of the cantilevered members during their construction. The safety of each member could be confirmed based on the quantitative evaluation of each response. Furthermore, it was also confirmed that the relation between these responses (i.e., deflection, strain, and inclination) can provide information about the global behavior of structures induced by specific events. Analysis of the measurement results demonstrates that the proposed sensor network system is capable of automatic and real-time monitoring and can be applied and utilized for both the safety evaluation and precise implementation of buildings under construction. PMID:23860317
A Scalable Monitoring for the CMS Filter Farm Based on Elasticsearch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andre, J.M.; et al.
2015-12-23
A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm, thus providing natural horizontal scaling. A separate central es cluster is used to collect and index aggregated information. The fine-grained information, all the way to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
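The document flow described above, a process emits a small JSON monitoring document which is then indexed into the local elasticsearch cluster over its REST API, can be sketched as follows. The index name and field names are illustrative, not the actual CMS schema; the HTTP POST itself is shown only as the (url, body) pair it would use.

```python
# Sketch: building a JSON monitoring document and the elasticsearch index
# request for it (POST /<index>/_doc). Index and fields are invented.
import json

def make_monitoring_doc(run, process_id, events_in, events_out, timestamp):
    """One small schema-free document, as the abstract describes."""
    return {
        "run": run,
        "pid": process_id,
        "events_in": events_in,
        "events_out": events_out,
        "ts": timestamp,
    }

def index_request(host, index, doc):
    """Return the (url, body) pair for indexing the document via REST."""
    return (f"http://{host}:9200/{index}/_doc", json.dumps(doc))

doc = make_monitoring_doc(283171, 1042, 5000, 180, "2015-10-01T12:00:00Z")
url, body = index_request("leaf-es.local", "hlt-monitoring", doc)
# An HTTP client would POST `body` to `url`; because es is schema-free, new
# fields can be added to the document without changing any server-side schema.
```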
P43-S Computational Biology Applications Suite for High-Performance Computing (BioHPC.net)
Pillardy, J.
2007-01-01
One of the challenges of high-performance computing (HPC) is user accessibility. At the Cornell University Computational Biology Service Unit, which is also a Microsoft HPC institute, we have developed a computational biology application suite that allows researchers from biological laboratories to submit their jobs to the parallel cluster through an easy-to-use Web interface. Through this system, we are providing users with popular bioinformatics tools including BLAST, HMMER, InterproScan, and MrBayes. The system is flexible and can be easily customized to include other software. It is also scalable; the installation on our servers currently processes approximately 8500 job submissions per year, many of them requiring massively parallel computations. It also has a built-in user management system, which can limit software and/or database access to specified users. TAIR, the major database of the plant model organism Arabidopsis, and SGN, the international tomato genome database, are both using our system for storage and data analysis. The system consists of a Web server running the interface (ASP.NET C#), Microsoft SQL server (ADO.NET), compute cluster running Microsoft Windows, ftp server, and file server. Users can interact with their jobs and data via a Web browser, ftp, or e-mail. The interface is accessible at http://cbsuapps.tc.cornell.edu/.
Defense strategies for cloud computing multi-site server infrastructures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rao, Nageswara S.; Ma, Chris Y. T.; He, Fei
We consider cloud computing server infrastructures for big data applications, which consist of multiple server sites connected over a wide-area network. The sites house a number of servers, network elements and local-area connections, and the wide-area network plays a critical, asymmetric role of providing vital connectivity between them. We model this infrastructure as a system of systems, wherein the sites and wide-area network are represented by their cyber and physical components. These components can be disabled by cyber and physical attacks, and also can be protected against them using component reinforcements. The effects of attacks propagate within the systems, and also beyond them via the wide-area network. We characterize these effects using correlations at two levels: (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual site or network, and (b) first-order differential conditions on system survival probabilities that characterize the component-level correlations within individual systems. We formulate a game between an attacker and a provider using utility functions composed of survival probability and cost terms. At Nash equilibrium, we derive expressions for the expected capacity of the infrastructure, given by the number of operational servers connected to the network, for sum-form, product-form and composite utility functions.
Experience with ATLAS MySQL PanDA database service
NASA Astrophysics Data System (ADS)
Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.
2010-04-01
The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.
An Array Library for Microsoft SQL Server with Astrophysical Applications
NASA Astrophysics Data System (ADS)
Dobos, L.; Szalay, A. S.; Blakeley, J.; Falck, B.; Budavári, T.; Csabai, I.
2012-09-01
Today's scientific simulations produce output on the 10-100 TB scale. This unprecedented amount of data requires data handling techniques that are beyond what is used for ordinary files. Relational database systems have been successfully used to store and process scientific data, but the new requirements constantly generate new challenges. Moving terabytes of data among servers on a timely basis is a tough problem, even with the newest high-throughput networks. Thus, moving the computations as close to the data as possible and minimizing the client-server overhead are absolutely necessary. At least data subsetting and preprocessing have to be done inside the server process. Out-of-the-box commercial database systems perform very well in scientific applications from the perspective of data storage optimization, data retrieval, and memory management, but lack basic functionality like handling scientific data structures or enabling advanced math inside the database server. The most important gap in Microsoft SQL Server is the lack of a native array data type. Fortunately, the technology exists to extend the database server with custom-written code that addresses these problems. We present the prototype of a custom-built extension to Microsoft SQL Server that adds array handling functionality to the database system. With our Array Library, fixed-size arrays of all basic numeric data types can be created and manipulated efficiently. The library is also designed to integrate seamlessly with the most common math libraries, such as BLAS, LAPACK, FFTW, etc. With the help of these libraries, complex operations, such as matrix inversions or Fourier transformations, can be done on the fly, from SQL code, inside the database server process.
We are currently testing the prototype with two different scientific data sets: The Indra cosmological simulation will use it to store particle and density data from N-body simulations, and the Milky Way Laboratory project will use it to store galaxy simulation data.
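The Array Library itself is server-side SQL CLR code, but the underlying idea, serializing a fixed-size numeric array to a binary blob that a database column can hold, is language-agnostic. A minimal sketch using only the Python standard library (the varbinary framing is an assumption, not the library's actual on-disk format):

```python
import struct

def pack_doubles(values):
    """Serialize a fixed-size array of doubles to little-endian bytes,
    as one might store it in a varbinary column."""
    return struct.pack("<%dd" % len(values), *values)

def unpack_doubles(blob):
    """Recover the array from its binary representation (8 bytes per double)."""
    n = len(blob) // 8
    return list(struct.unpack("<%dd" % n, blob))

arr = [1.5, -2.25, 3.0]
blob = pack_doubles(arr)
assert unpack_doubles(blob) == arr
```

Keeping the element count and type implicit in the blob length is what makes fixed-size arrays cheap to subset and preprocess inside the server process, which is the design goal the abstract emphasizes.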
I/O performance evaluation of a Linux-based network-attached storage device
NASA Astrophysics Data System (ADS)
Sun, Zhaoyan; Dong, Yonggui; Wu, Jinglian; Jia, Huibo; Feng, Guanping
2002-09-01
In a Local Area Network (LAN), clients are permitted to access the files on high-density optical disks via a network server. However, the read service offered by a conventional server is unsatisfactory because the server performs multiple functions and serves too many callers at once. This paper develops a Linux-based Network-Attached Storage (NAS) server. The Operating System (OS), composed of an optimized kernel and a miniaturized file system, is stored in a flash memory. After initialization, the NAS device is connected to the LAN. The administrator and users can configure and access the server through web pages, respectively. In order to improve access performance, the management of the buffer cache in the file system is optimized. Several benchmark programs were performed to evaluate the I/O performance of the NAS device. Since data recorded on optical disks are usually accessed for reading, our attention is focused on the reading throughput of the device. The experimental results indicate that the I/O performance of our NAS device is excellent.
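The abstract does not specify how the buffer cache management was optimized; a common choice for read-mostly media such as optical disks is a least-recently-used block cache. The following is a hypothetical sketch of that idea, not the paper's actual design:

```python
from collections import OrderedDict

class LRUBlockCache:
    """Minimal LRU cache for disk blocks (illustrative, not the paper's code)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()          # block_id -> data, oldest first

    def get(self, block_id):
        if block_id not in self.blocks:
            return None                      # cache miss: caller reads the disk
        self.blocks.move_to_end(block_id)    # mark as most recently used
        return self.blocks[block_id]

    def put(self, block_id, data):
        self.blocks[block_id] = data
        self.blocks.move_to_end(block_id)
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block

cache = LRUBlockCache(2)
cache.put(1, b"a")
cache.put(2, b"b")
cache.get(1)          # touch block 1 so it survives the next eviction
cache.put(3, b"c")    # capacity exceeded: block 2 is evicted
```

For sequential optical-disk reads, a real implementation would likely add read-ahead on top of this, but the eviction policy above captures the core mechanism.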
Control and Information Systems for the National Ignition Facility
Brunton, Gordon; Casey, Allan; Christensen, Marvin; ...
2017-03-23
Orchestration of every National Ignition Facility (NIF) shot cycle is managed by the Integrated Computer Control System (ICCS), which uses a scalable software architecture running code on more than 1950 front-end processors, embedded controllers, and supervisory servers. The ICCS operates laser and industrial control hardware containing 66 000 control and monitor points to ensure that all of NIF’s laser beams arrive at the target within 30 ps of each other and are aligned to a pointing accuracy of less than 50 μm root-mean-square, while ensuring that a host of diagnostic instruments record data in a few billionths of a second. NIF’s automated control subsystems are built from a common object-oriented software framework that distributes the software across the computer network and achieves interoperation between different software languages and target architectures. A large suite of business and scientific software tools supports experimental planning, experimental setup, facility configuration, and post-shot analysis. Standard business services using open-source software, commercial workflow tools, and database and messaging technologies have been developed. An information technology infrastructure consisting of servers, network devices, and storage provides the foundation for these systems. This work is an overview of the control and information systems used to support a wide variety of experiments during the National Ignition Campaign.
Sung, Wen-Tsai; Lin, Jia-Syun
2013-01-01
This work aims to develop a smart LED lighting system which is remotely controlled by Android apps on handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of the system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS-232/485 and a human-computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing of the sensed data is performed by a self-adaptive weighted data fusion algorithm. Low variation in the fused data together with high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by commands given on the human-computer interface, and the reading on a multimeter can be displayed thereon via the server. The proposed smart LED lighting system can be remotely controlled, and its self-learning mode enabled, by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.
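The abstract does not detail the self-adaptive weighted fusion algorithm; the classic form of adaptive weighted data fusion weights each sensor inversely to its estimated noise variance, which minimizes the variance of the fused estimate. A sketch under that assumption (the weighting scheme is inferred, not taken from the paper):

```python
def fuse(readings, variances):
    """Inverse-variance weighted fusion of redundant sensor readings.
    Weights w_i = (1/var_i) / sum_j(1/var_j), so noisier sensors count less
    and the fused estimate has minimal variance."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [w / total for w in inv]
    return sum(w * x for w, x in zip(weights, readings))

# two sensors measuring the same quantity; the noisier one (variance 3.0)
# contributes only a quarter of the weight
value = fuse([10.0, 14.0], [1.0, 3.0])   # weights 0.75 and 0.25 -> 11.0
```

The "self-adaptive" part would come from re-estimating each sensor's variance online from recent residuals, so the weights track changing sensor quality.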
Web-based home telemedicine system for orthopedics
NASA Astrophysics Data System (ADS)
Lau, Christopher; Churchill, Sean; Kim, Janice; Matsen, Frederick A., III; Kim, Yongmin
2001-05-01
Traditionally, telemedicine systems have been designed to improve access to care by allowing physicians to consult a specialist about a case without sending the patient to another location, which may be difficult or time-consuming to reach. The cost of the equipment and network bandwidth needed for this consultation has restricted telemedicine use to contact between physicians instead of between patients and physicians. Recently, however, the wide availability of Internet connectivity and of client and server software for e-mail, the World Wide Web, and conferencing has made low-cost telemedicine applications feasible. In this work, we present a web-based system for asynchronous multimedia messaging between shoulder replacement surgery patients at home and their surgeons. A web browser plug-in was developed to simplify the process of capturing video and transferring it to a web site. The video capture plug-in can be used as a template to construct a plug-in that captures and transfers any type of data to a web server. For example, readings from home biosensor instruments (e.g., blood glucose meters and spirometers) that can be connected to a computing platform can be transferred to a home telemedicine web site. Both patients and doctors can access this web site to monitor progress longitudinally. The system has been tested with 3 subjects for the past 7 weeks, and we plan to continue testing in the foreseeable future.
Use of a secure Internet Web site for collaborative medical research.
Marshall, W W; Haley, R W
2000-10-11
Researchers who collaborate on clinical research studies from diffuse locations need a convenient, inexpensive, secure way to record and manage data. The Internet, with its World Wide Web, provides a vast network that enables researchers with diverse types of computers and operating systems anywhere in the world to log data through a common interface. Development of a Web site for scientific data collection can be organized into 10 steps, including planning the scientific database, choosing a database management software system, setting up database tables for each collaborator's variables, developing the Web site's screen layout, choosing a middleware software system to tie the database software to the Web site interface, embedding data editing and calculation routines, setting up the database on the central server computer, obtaining a unique Internet address and name for the Web site, applying security measures to the site, and training staff who enter data. Ensuring the security of an Internet database requires limiting the number of people who have access to the server, setting up the server on a stand-alone computer, requiring user-name and password authentication for server and Web site access, installing a firewall computer to prevent break-ins and block bogus information from reaching the server, verifying the identity of the server and client computers with certification from a certificate authority, encrypting information sent between server and client computers to avoid eavesdropping, establishing audit trails to record all accesses into the Web site, and educating Web site users about security techniques. When these measures are carefully undertaken, in our experience, information for scientific studies can be collected and maintained on Internet databases more efficiently and securely than through conventional systems of paper records protected by filing cabinets and locked doors. JAMA. 2000;284:1843-1849.
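Of the security measures the article lists, user-name/password authentication is the most self-contained to illustrate. A minimal salted-hash sketch using only the Python standard library; this is a generic illustration of the technique, not the JAMA authors' implementation, and a production system would store the salt and digest per user in the database.

```python
import hashlib
import hmac
import secrets

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with 100k iterations."""
    if salt is None:
        salt = secrets.token_bytes(16)       # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the digest and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")       # stored server-side at enrollment
```

Storing only the salted digest means a compromised server does not directly leak passwords, which complements the article's other measures (firewall, stand-alone server, TLS encryption, audit trails).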
NASA Astrophysics Data System (ADS)
Mauntz, M.; Peuser, J.
2017-05-01
The demand for wind energy is growing at an exponential rate. At the same time, improved reliability and reduced operation and maintenance costs are the key priorities in wind turbine maintenance strategies [1]. This paper presents a novel online oil condition monitoring system that addresses these priorities. The presented sensor system enables damage prevention for the wind turbine gearbox through advanced warning of critical operating conditions and an extended oil exchange interval, realized by precise measurement of the electrical conductivity, the relative permittivity and the oil temperature. A new parameter, the WearSens® Index (WSi), is introduced. The mathematical model of the WSi combines all measured values and their gradients into one single parameter for comprehensive monitoring to protect wind turbines from damage. Furthermore, the WSi enables a long-term prognosis of the next oil change through 24/7 server data logging. Corrective procedures and/or maintenance can be carried out before actual damage occurs. First WSi results from an onshore wind turbine installation, compared to traditional vibration monitoring, are shown.
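The exact WearSens® model is not given in the abstract; as an illustration only of the stated idea, combining measured values and their gradients into one scalar, here is a hypothetical weighted-sum sketch. Every weight, unit, and feature choice below is an assumption, not the actual WSi formula.

```python
def wear_index(conductivity, permittivity, temperature,
               d_conductivity, d_permittivity,
               weights=(0.4, 0.3, 0.1, 0.15, 0.05)):
    """Combine oil parameters and the magnitudes of their gradients into a
    single scalar index (hypothetical weighting, NOT the actual WSi model)."""
    features = (conductivity, permittivity, temperature,
                abs(d_conductivity), abs(d_permittivity))
    return sum(w * f for w, f in zip(weights, features))

# identical oil state, but a rising conductivity gradient pushes the index up,
# which is what allows warning before absolute limits are reached
baseline = wear_index(100.0, 2.2, 60.0, 0.0, 0.0)
alarmed  = wear_index(100.0, 2.2, 60.0, 5.0, 0.1)
```

The key property reproduced here is that gradients enter the index directly, so a fast-deteriorating oil charge raises an alarm even while its absolute readings are still within limits.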
A portable, inexpensive, wireless vital signs monitoring system.
Kaputa, David; Price, David; Enderle, John D
2010-01-01
The University of Connecticut, Department of Biomedical Engineering has developed a device to be used by patients to collect physiological data outside of a medical facility. This device facilitates modes of data collection that would be expensive, inconvenient, or impossible to obtain by traditional means within the medical facility. Data can be collected on specific days, at specific times, during specific activities, or while traveling. The device uses biosensors to obtain information such as pulse oximetry (SpO2), heart rate, electrocardiogram (ECG), non-invasive blood pressure (NIBP), and weight which are sent via Bluetooth to an interactive monitoring device. The data can then be downloaded to an electronic storage device or transmitted to a company server, physician's office, or hospital. The data collection software is usable on any computer device with Bluetooth capability, thereby removing the need for special hardware for the monitoring device and reducing the total cost of the system. The modular biosensors can be added or removed as needed without changing the monitoring device software. The user is prompted by easy-to-follow instructions written in non-technical language. Additional features, such as screens with large buttons and large text, allow for use by those with limited vision or limited motor skills.
Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks
Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio
2008-01-01
Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper. PMID:27873941
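The architecture lets users set high-level policies that adapt the resolution of information collected at the sensors. A minimal sketch of one such policy, shrinking the sampling interval when readings leave a target band and relaxing it otherwise; the band, bounds, and doubling/halving rule are all assumptions for illustration:

```python
def next_interval(reading, low, high, interval, min_s=10, max_s=600):
    """Halve the sampling interval (seconds) when the reading leaves the
    target band [low, high]; double it while the reading stays inside.
    Illustrative policy, not the paper's actual algorithm."""
    if reading < low or reading > high:
        return max(min_s, interval // 2)   # sample faster near anomalies
    return min(max_s, interval * 2)        # conserve energy when normal

interval = 120
interval = next_interval(35.0, 10.0, 30.0, interval)   # out of band -> faster
interval = next_interval(20.0, 10.0, 30.0, interval)   # back in band -> slower
```

Running this logic at the sensor (rather than the server) is what saves energy and mesh bandwidth, since uninteresting periods generate far fewer packets.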
A secure online image trading system for untrusted cloud environments.
Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi
2015-01-01
In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
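The matching step relies on the DC component of the DCT, which for a pixel block is proportional to the block mean and survives visual protection of the AC coefficients. A heavily simplified sketch of DC-based matching (the mean stands in for the DC coefficient, and the paper's moment-invariant feature is omitted):

```python
def dc_component(pixels):
    """The DC coefficient of a 2-D DCT is proportional to the block mean,
    so use the mean as a stand-in DC feature (simplification of the paper)."""
    flat = [p for row in pixels for p in row]
    return sum(flat) / len(flat)

def match(thumbnail, protected_images):
    """Return the key of the protected image whose DC feature is closest
    to the query thumbnail's DC feature."""
    q = dc_component(thumbnail)
    return min(protected_images,
               key=lambda k: abs(dc_component(protected_images[k]) - q))

# toy repository of 2x2 "protected images" (hypothetical data)
repo = {"img_a": [[10, 12], [11, 13]], "img_b": [[200, 210], [205, 215]]}
```

In the full framework a second feature (moment invariants) disambiguates images with similar DC values, and the returned image stays visually scrambled until the buyer is granted a key.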
Proactive Byzantine Quorum Systems
NASA Astrophysics Data System (ADS)
Alchieri, Eduardo A. P.; Bessani, Alysson Neves; Pereira, Fernando Carlos; da Silva Fraga, Joni
Byzantine Quorum Systems are a replication technique used to ensure availability and consistency of replicated data even in the presence of arbitrary faults. This paper presents a Byzantine Quorum Systems protocol that provides atomic semantics despite the existence of Byzantine clients and servers. Moreover, this protocol is integrated with a protocol for proactive recovery of servers. In that way, the system tolerates any number of failures during its lifetime, provided that no more than f out of n servers fail during the small interval of time between recoveries. All solutions proposed in this paper can be used in asynchronous systems, as they require no timing assumptions. The proposed quorum system read and write protocols have been implemented, and their efficiency is demonstrated through experiments carried out on the Emulab platform.
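For context on the f-out-of-n bound, dissemination Byzantine quorum systems (Malkhi and Reiter) for self-verifying data require n ≥ 3f + 1 servers and quorums of size ⌈(n + f + 1)/2⌉, so any two quorums intersect in at least f + 1 servers and every intersection contains a correct one. The paper's own protocol may use different parameters; this sketch only checks the classic sizing arithmetic:

```python
import math

def dissemination_quorum(n, f):
    """Quorum size for a dissemination Byzantine quorum system:
    requires n >= 3f + 1; two quorums of this size intersect in
    2q - n >= f + 1 servers, guaranteeing a correct server in common."""
    if n < 3 * f + 1:
        raise ValueError("need n >= 3f + 1 servers")
    return math.ceil((n + f + 1) / 2)
```

Proactive recovery strengthens this: because recoveries reset servers faster than f of them can be corrupted, the f-bound only has to hold per recovery window rather than over the whole system lifetime.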
SciServer: An Online Collaborative Environment for Big Data in Research and Education
NASA Astrophysics Data System (ADS)
Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr
2017-01-01
For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences. SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. SciServer Compute’s virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called “SciServer Courseware,” which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways. SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey.
A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast cross-matching between very large astronomical datasets.Demos, documentation, and more information about all these resources can be found at www.sciserver.org.
Operator Interface for the ALMA Observing System
NASA Astrophysics Data System (ADS)
Grosbøl, P.; Schilling, M.
2009-09-01
The Atacama Large Millimeter/submillimeter Array (ALMA) is a major new ground-based radio-astronomical facility being constructed in Chile in an international collaboration between Europe, Japan and North America in cooperation with the Republic of Chile. The facility will include 54 12-m and 12 7-m antennas at the Altiplano de Chajnantor and will be operated from the Operations Support Facilities (OSF) near San Pedro. This paper describes the design and baseline implementation of the Graphical User Interface (GUI) used by operators to monitor and control the observing facility. It is written in Java and provides a simple plug-in interface which allows different subsystems to add their own panels to the GUI. The design is based on a client/server concept and supports multiple operators sharing or monitoring operations.
NASA Astrophysics Data System (ADS)
Oya, I.; Anguner, E. A.; Behera, B.; Birsin, E.; Fuessling, M.; Lindemann, R.; Melkumyan, D.; Schlenstedt, S.; Schmidt, T.; Schwanke, U.; Sternberger, R.; Wegner, P.; Wiesand, S.
2014-07-01
The Cherenkov Telescope Array (CTA) will be the next-generation ground-based very-high-energy gamma-ray observatory. CTA will consist of two arrays: one in the Northern hemisphere composed of about 20 telescopes, and one in the Southern hemisphere composed of about 100 telescopes, both arrays containing telescopes of different sizes and types and, in addition, numerous auxiliary devices. In order to provide a test ground for CTA array control, the steering software of the 12-m medium size telescope (MST) prototype deployed in Berlin has been implemented using the tools and design concepts under consideration for the control of the CTA array. The prototype control system is based on the Atacama Large Millimeter/submillimeter Array (ALMA) Common Software (ACS) control middleware, with components implemented in Java, C++ and Python. The interfacing to the hardware is standardized via the Object Linking and Embedding for Process Control Unified Architecture (OPC UA). In order to access the OPC UA servers from the ACS framework in a uniform way, a library has been developed that ties OPC UA server nodes, methods and events to their equivalents in ACS components. The front end of the archive system is able to identify the deployed components and to sample the monitoring points of each component following time and value-change triggers according to the selected configurations. The back end of the archive system of the prototype is composed of two databases: MySQL and MongoDB. MySQL has been selected as storage for the system configurations, while MongoDB provides efficient storage of device monitoring data, CCD images, logging and alarm information. In this contribution, the details and conclusions of the implementation of the control software of the MST prototype are presented.
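The archive front end samples each monitoring point on two triggers, elapsed time and value change. A minimal sketch of that trigger logic; the function name, parameters, and thresholds are assumptions, not the prototype's actual code:

```python
def should_sample(now, value, last_time, last_value, period, delta):
    """Record a monitoring point when either the sampling period has elapsed
    since the last record, or the value changed by more than delta.
    Illustrative trigger logic, not the MST prototype's implementation."""
    return (now - last_time) >= period or abs(value - last_value) > delta

# period 60 s, value-change trigger 0.5 units (hypothetical configuration)
time_only   = should_sample(120, 5.0, 50, 5.1, 60, 0.5)   # time trigger fires
value_only  = should_sample(100, 6.0, 50, 5.1, 60, 0.5)   # value trigger fires
no_trigger  = should_sample(100, 5.0, 50, 5.1, 60, 0.5)   # neither fires
```

Combining both triggers keeps slowly varying points cheap to archive (time trigger only) while still catching fast transients (value trigger), which suits MongoDB's role as the high-volume monitoring store.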
The MSG Central Facility - A Mission Control System for Windows NT
NASA Astrophysics Data System (ADS)
Thompson, R.
The MSG Central Facility, being developed by Science Systems for EUMETSAT, represents the first of a new generation of satellite mission control systems based on the Windows NT operating system. The system makes use of a range of new technologies to provide an integrated environment for the planning, scheduling, control and monitoring of the entire Meteosat Second Generation mission. It supports packetised TM/TC and uses Science Systems' Space UNiT product to provide automated operations support at both Schedule (Timeline) and Procedure levels. Flexible access to historical data is provided through an operations archive based on ORACLE Enterprise Server, hosted on a large RAID array and an off-line tape jukebox. Event-driven real-time data distribution is based on the CORBA standard. Operations preparation and configuration control tools form a fully integrated element of the system.
The ATLAS PanDA Monitoring System and its Evolution
NASA Astrophysics Data System (ADS)
Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.
2011-12-01
The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
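The migration's central design choice is separating data preparation from presentation: server-side Python prepares plain data structures, and the browser renders JSON fetched via AJAX. A framework-free sketch of that separation (field names and the summary shape are assumptions, not PanDA's actual schema):

```python
import json

def prepare_job_summary(jobs):
    """Data-preparation layer: aggregate raw job records into status counts,
    independent of how they will be displayed."""
    summary = {}
    for job in jobs:
        summary[job["status"]] = summary.get(job["status"], 0) + 1
    return summary

def to_json_response(data):
    """Presentation layer: serialize prepared data for the AJAX front end.
    In the real system a Django view would return this as an HttpResponse."""
    return json.dumps(data, sort_keys=True)

jobs = [{"status": "running"}, {"status": "finished"}, {"status": "running"}]
body = to_json_response(prepare_job_summary(jobs))
```

Because the server no longer emits HTML, the same JSON endpoint can serve the browser front end and external systems alike, which is one of the maintenance benefits the abstract cites.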
Client-Server: What Is It and Are We There Yet?
ERIC Educational Resources Information Center
Gershenfeld, Nancy
1995-01-01
Discusses client-server architecture in dumb terminals, personal computers, local area networks, and graphical user interfaces. Focuses on functions offered by client personal computers: individualized environments; flexibility in running operating systems; advanced operating system features; multiuser environments; and centralized data…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orvis, W.J.
1993-11-03
The Computer Incident Advisory Capability (CIAC) operates two information servers for the DOE community, FELICIA (formerly FELIX) and IRBIS. FELICIA is a computer Bulletin Board System (BBS) that can be accessed by telephone with a modem. IRBIS is an anonymous ftp server that can be accessed on the Internet. Both of these servers contain all of the publicly available CIAC, CERT, NIST, and DDN bulletins, virus descriptions, the VIRUS-L moderated virus bulletin board, copies of public domain and shareware virus-detection/protection software, and copies of useful public domain and shareware utility programs. This guide describes how to connect to these systems and obtain files from them.
Real-time Web GIS to monitor marine water quality using wave glider
NASA Astrophysics Data System (ADS)
Maneesa Amiruddin, Siti
2016-06-01
In the past decade, Malaysia has experienced unprecedented economic development and associated socioeconomic changes. As environmentalists anticipate that these changes could have negative impacts on the marine and coastal environment, a comprehensive, continuous and long-term marine water quality monitoring programme needs to be strengthened to reflect the government's aggressive mindset of enhancing its authority in the protection, preservation, management and enrichment of the vast resources of the ocean. The Wave Glider, an autonomous, unmanned marine vehicle, provides continuous ocean monitoring at all times and is durable in any weather condition. Geographic Information System (GIS) technology is ideally suited as a tool for the presentation of data derived from continuous monitoring of locations, and is used to support and deliver information to environmental managers and the public. Combined with GeoEvent Processor, an extension of ArcGIS for Server, it extends Web GIS capabilities by providing real-time data from the monitoring activities. There is therefore a growing need for Web GIS for easy and fast dissemination, sharing, display and processing of spatial information, which in turn helps decision making for various natural-resource-based applications.
Volume serving and media management in a networked, distributed client/server environment
NASA Technical Reports Server (NTRS)
Herring, Ralph H.; Tefend, Linda L.
1993-01-01
The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.
PlaIMoS: A Remote Mobile Healthcare Platform to Monitor Cardiovascular and Respiratory Variables
Miramontes, Ramses; Aquino, Raúl; Flores, Arturo; Rodríguez, Guillermo; Anguiano, Rafael; Ríos, Arturo; Edwards, Arthur
2017-01-01
The number of elderly and chronically ill patients has grown significantly over the past few decades as life expectancy has increased worldwide, leading to increased demands on the health care system and significantly taxing traditional health care practices. Consequently, there is an urgent need to use technology to innovate and more constantly and intensely monitor, report and analyze critical patient physiological parameters beyond conventional clinical settings in a more efficient and cost effective manner. This paper presents a technological platform called PlaIMoS which consists of wearable sensors, a fixed measurement station, a network infrastructure that employs IEEE 802.15.4 and IEEE 802.11 to transmit data with security mechanisms, a server to analyze all information collected and apps for iOS, Android and Windows 10 mobile operating systems to provide real-time measurements. The developed architecture, designed primarily to record and report electrocardiogram and heart rate data, also monitors parameters associated with chronic respiratory illnesses, including patient blood oxygen saturation and respiration rate, body temperature, fall detection and galvanic resistance. PMID:28106832
Web-based video monitoring of CT and MRI procedures
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Dahlbom, Magdalena; Kho, Hwa T.; Valentino, Daniel J.; McCoy, J. Michael
2000-05-01
A web-based video transmission of images from CT and MRI consoles was implemented in an Intranet environment for real-time monitoring of ongoing procedures. Images captured from the consoles are compressed to video resolution and broadcast through a web server. When called upon, the attending radiologists can view these live images on any computer within the secured Intranet network. With adequate compression, these images can be displayed simultaneously in different locations at a rate of 2 to 5 images/sec over a standard LAN. Although the image quality is insufficient for diagnostic purposes, our user survey showed that the images were suitable for supervising a procedure, positioning the imaging slices, and performing routine quality checks before completion of a study. The system was implemented at UCLA to monitor 9 CTs and 6 MRIs distributed across 4 buildings. This system significantly improved the radiologists' productivity by saving precious time spent in trips between reading rooms and examination rooms. It also improved patient throughput by reducing the waiting time for the radiologists to come check a study before moving the patient from the scanner.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... Market Maker Standard quote server as a gateway for communicating eQuotes to MIAX. Because of the... connect the Limited Service Ports to independent servers that host their eQuote and purge functionality... same server for all of their Market Maker quoting activity. Currently, Market Makers in the MIAX System...
Embedded controller for GEM detector readout system
NASA Astrophysics Data System (ADS)
Zabołotny, Wojciech M.; Byszuk, Adrian; Chernyshova, Maryna; Cieszewski, Radosław; Czarski, Tomasz; Dominik, Wojciech; Jakubowska, Katarzyna L.; Kasprowicz, Grzegorz; Poźniak, Krzysztof; Rzadkiewicz, Jacek; Scholz, Marek
2013-10-01
This paper describes the embedded controller used for the multichannel readout system for the GEM detector. The controller is based on the embedded Mini ITX mainboard, running the GNU/Linux operating system. The controller offers two interfaces to communicate with the FPGA based readout system. FPGA configuration and diagnostics is controlled via low speed USB based interface, while high-speed setup of the readout parameters and reception of the measured data is handled by the PCI Express (PCIe) interface. Hardware access is synchronized by the dedicated server written in C. Multiple clients may connect to this server via TCP/IP network, and different priority is assigned to individual clients. Specialized protocols have been implemented both for low level access on register level and for high level access with transfer of structured data with "msgpack" protocol. High level functionalities have been split between multiple TCP/IP servers for parallel operation. Status of the system may be checked, and basic maintenance may be performed via web interface, while the expert access is possible via SSH server. System was designed with reliability and flexibility in mind.
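The register-level access path described in the GEM controller abstract (a dedicated server serializing hardware access for multiple prioritized TCP/IP clients) can be sketched as follows. This is a minimal illustration, not the actual implementation: the real system uses a msgpack-based protocol over TCP/IP, whereas here the request format, register map, and JSON encoding are assumptions for readability.

```python
import json
import threading

class RegisterFile:
    """Thread-safe access to a (simulated) FPGA register space.

    A lock stands in for the dedicated C server that synchronizes
    hardware access among multiple connected clients.
    """
    def __init__(self, size=256):
        self._regs = [0] * size
        self._lock = threading.Lock()   # serializes hardware access

    def handle(self, request: bytes) -> bytes:
        """Process one JSON-encoded read/write request and return a reply."""
        msg = json.loads(request)
        with self._lock:
            if msg["op"] == "write":
                self._regs[msg["reg"]] = msg["val"]
                reply = {"status": "ok"}
            elif msg["op"] == "read":
                reply = {"status": "ok", "val": self._regs[msg["reg"]]}
            else:
                reply = {"status": "error", "reason": "unknown op"}
        return json.dumps(reply).encode()

regs = RegisterFile()
print(regs.handle(b'{"op": "write", "reg": 7, "val": 42}'))
print(regs.handle(b'{"op": "read", "reg": 7}'))
```

In the real controller this handler would sit behind a TCP/IP listener, with client priority deciding the order in which queued requests acquire the lock.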
Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V
2001-06-01
In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing fiber-channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admissions, discharges, and transfers (ADT)/demographics, orders, appointment notifications, doctor updates, and results.
Tablet PC as a mobile PACS terminal using wireless LAN
NASA Astrophysics Data System (ADS)
Tsao, Bo-Shen; Ching, Yu-Tai; Lee, Wen-Jeng; Chen, Shyh-Jye; Chang, Chia-Hung; Chen, Chien-Jung; Yen, York; Lee, Yuan-Ten
2003-05-01
A PACS mobile terminal has applications in ward rounds, the emergency room, and remote teleradiology consultation. Personal Digital Assistants (PDAs) have the highest mobility and are used for many medical applications. However, their role is limited in the field of radiology due to small screen size. In this study, we built a wireless PACS terminal using a hand-held tablet PC. A tablet PC (X-pilot, LEO Systems, Taiwan) running the WinCE operating system was used as our mobile PACS terminal. This device is equipped with a 10.4-inch, 800×600-resolution TFT monitor. The tablet PC connected to the server via wireless LAN (IEEE 802.11b).
Yu, Kaijun
2010-07-01
This paper analyzes the design goals of a medical instrumentation standard information retrieval system. Based on the B/S (browser/server) structure, we established a medical instrumentation standard retrieval system in the .NET environment, using the ASP.NET C# programming language, the IIS web server, and a SQL Server 2000 database. The paper also introduces the system structure, the retrieval system modules, the system development environment, and the detailed design of the system.
The distributed annotation system.
Dowell, R D; Jokerst, R M; Day, A; Eddy, S R; Stein, L
2001-01-01
Currently, most genome annotation is curated by centralized groups with limited resources. Efforts to share annotations transparently among multiple groups have not yet been satisfactory. Here we introduce a concept called the Distributed Annotation System (DAS). DAS allows sequence annotations to be decentralized among multiple third-party annotators and integrated on an as-needed basis by client-side software. The communication between client and servers in DAS is defined by the DAS XML specification. Annotations are displayed in layers, one per server. Any client or server adhering to the DAS XML specification can participate in the system; we describe a simple prototype client and server example. The DAS specification is being used experimentally by Ensembl, WormBase, and the Berkeley Drosophila Genome Project. Continued success will depend on the readiness of the research community to adopt DAS and provide annotations. All components are freely available from the project website http://www.biodas.org/.
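The DAS abstract describes client-side integration of annotation layers returned as XML by third-party servers. A toy client-side parser is sketched below; the response fragment is simplified and illustrative (element names loosely follow the DAS features response, but the authoritative format is the DAS XML specification at biodas.org).

```python
import xml.etree.ElementTree as ET

# Simplified, DAS-style annotation response from one hypothetical server.
das_response = """<?xml version="1.0"?>
<DASGFF>
  <GFF version="1.0">
    <SEGMENT id="chr1" start="1" stop="1000">
      <FEATURE id="exon1"><TYPE id="exon">exon</TYPE>
        <START>100</START><END>200</END></FEATURE>
      <FEATURE id="exon2"><TYPE id="exon">exon</TYPE>
        <START>400</START><END>520</END></FEATURE>
    </SEGMENT>
  </GFF>
</DASGFF>"""

def parse_layer(xml_text):
    """Return (feature_id, start, end) tuples from one server's response."""
    root = ET.fromstring(xml_text)
    feats = []
    for f in root.iter("FEATURE"):
        feats.append((f.get("id"),
                      int(f.find("START").text),
                      int(f.find("END").text)))
    return feats

print(parse_layer(das_response))
```

Each annotation server contributes one such layer; a DAS client queries several servers for the same segment and overlays the parsed layers for display.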
New web technologies for astronomy
NASA Astrophysics Data System (ADS)
Sprimont, P.-G.; Ricci, D.; Nicastro, L.
2014-12-01
Thanks to the new HTML5 capabilities and the huge improvements of the JavaScript language, it is now possible to design very complex and interactive web user interfaces. On top of that, the once monolithic and file-server-oriented web servers are evolving into easily programmable server applications capable of coping with the complex interactions made possible by the new generation of browsers. We believe that the whole community of amateur and professional astronomers can benefit from the potential of these new technologies. New web interfaces can be designed to provide the user with much more intuitive and interactive tools. Accessing astronomical data archives; scheduling, controlling and monitoring observatories, in particular robotic telescopes; and supervising data reduction pipelines are all capabilities that can now be implemented in a JavaScript web application. In this paper we describe the Sadira package we are implementing to exactly this aim.
Recommendation System Based On Association Rules For Distributed E-Learning Management Systems
NASA Astrophysics Data System (ADS)
Mihai, Gabroveanu
2015-09-01
Traditional Learning Management Systems are installed on a single server where learning materials and user data are kept. To increase performance, a Learning Management System can be installed on multiple servers; learning materials and user data can be distributed across these servers, producing a Distributed Learning Management System. In this paper, we propose a prototype recommendation system based on association rules for a Distributed Learning Management System. Information from LMS databases is analyzed using distributed data-mining algorithms in order to extract association rules. The extracted rules are then used as inference rules to provide personalized recommendations. The quality of the recommendations is improved because the rules used to make the inferences are more accurate, since they aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.
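The aggregation step the abstract relies on (rules mined locally at each LMS node, then merged so confidences reflect all servers) can be sketched as follows. The rule representation, support counts, and course names are illustrative assumptions, not the paper's actual data model.

```python
from collections import defaultdict

def merge_rules(per_server_rules):
    """Aggregate (antecedent, consequent) -> confidence across servers.

    Each server reports a rule as (hits, total): how often the
    consequent followed the antecedent, out of how many chances.
    """
    agg = defaultdict(lambda: [0, 0])
    for rules in per_server_rules:
        for (ante, cons), (hits, total) in rules.items():
            agg[(ante, cons)][0] += hits
            agg[(ante, cons)][1] += total
    return {k: hits / total for k, (hits, total) in agg.items()}

def recommend(seen, merged, min_conf=0.5):
    """Recommend unseen items implied by the learner's history."""
    return sorted({cons for (ante, cons), conf in merged.items()
                   if ante in seen and cons not in seen and conf >= min_conf})

# Two hypothetical LMS nodes reporting locally mined rules.
server_a = {("intro", "quiz1"): (80, 100)}
server_b = {("intro", "quiz1"): (30, 60), ("intro", "slides2"): (10, 40)}
merged = merge_rules([server_a, server_b])
print(recommend({"intro"}, merged))   # "quiz1" clears min_conf, "slides2" does not
```

Merging raw counts rather than per-server confidences is what makes the aggregated rules more accurate than any single node's view.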
Disaster recovery plan for HANDI 2000 business management system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, D.E.
The BMS production implementation will be complete by October 1, 1998, and the server environment will comprise two types of platforms. The PassPort Supply and PeopleSoft Financials applications will reside on UNIX servers, and PeopleSoft Human Resources and Payroll will reside on Microsoft NT servers. Because of the wide scope and the requirement for the COTS products to run in various environments, backup and recovery responsibilities are divided between two groups in Technical Operations. The Central Computer Systems Management group provides support for the UNIX/NT backup in the Data Center, and the Network Infrastructure Systems group provides support for the NT Application Server backup outside the Data Center. The disaster recovery process depends on a good backup and recovery process. Information and integrated system data for determining the disaster recovery process are identified from the Fluor Daniel Hanford (FDH) Risk Assessment Plan, Contingency Plan, Backup and Recovery Plan, and Backup Form for HANDI 2000 BMS.
Audit Trail Management System in Community Health Care Information Network.
Nakamura, Naoki; Nakayama, Masaharu; Nakaya, Jun; Tominaga, Teiji; Suganuma, Takuo; Shiratori, Norio
2015-01-01
After the Great East Japan Earthquake we constructed a community health care information network system. Focusing on the authentication server and portal server capable of SAML&ID-WSF, we proposed an audit trail management system to look over audit events in a comprehensive manner. Through implementation and experimentation, we verified the effectiveness of our proposed audit trail management system.
Distributed control system for demand response by servers
NASA Astrophysics Data System (ADS)
Hall, Joseph Edward
Within the broad topical designation of smart grid, research in demand response, or demand-side management, focuses on investigating possibilities for electrically powered devices to adapt their power consumption patterns to better match generation and more efficiently integrate intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change the power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic usage case scenario of an online video streaming service in which there are workloads with deadlines (high-priority) and workloads without deadlines (low-priority). The testbed is implemented with real servers, estimates the power imbalance from the grid frequency with real-time measurements of the live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video transcoder workloads. Analysis of the system explains and justifies multiple design choices, compares the significance of the system in relation to similar publications in the literature, and explores the potential impact of the system.
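The core control idea in the demand-response thesis (under-frequency signals a grid generation deficit, so servers shed deferrable load; over-frequency lets them ramp back up) can be sketched in a few lines. The gain, power envelope, and nominal frequency below are illustrative assumptions, not values from the thesis.

```python
NOMINAL_HZ = 60.0             # nominal grid frequency (North America)
P_MIN, P_MAX = 100.0, 400.0   # assumed server power envelope, watts
GAIN = 1000.0                 # assumed droop gain, watts per Hz of deviation

def power_setpoint(freq_hz, current_w):
    """Proportional response: shed load when frequency sags, clamped
    to the server's feasible power range."""
    target = current_w + GAIN * (freq_hz - NOMINAL_HZ)
    return min(P_MAX, max(P_MIN, target))

print(power_setpoint(59.95, 300.0))  # under-frequency: sheds roughly 50 W
print(power_setpoint(60.02, 300.0))  # over-frequency: ramps up roughly 20 W
```

In the testbed this setpoint would then be realized by throttling low-priority video-transcoding workloads while high-priority, deadline-bound workloads proceed untouched.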
Development of a novel SCADA system for laboratory testing.
Patel, M; Cole, G R; Pryor, T L; Wilmot, N A
2004-07-01
This document summarizes the supervisory control and data acquisition (SCADA) system that allows communication with, and control of the output of, various I/O devices in the renewable energy systems and components test facility RESLab. This SCADA system differs from traditional SCADA systems in that it supports a continuously changing operating environment depending on the test to be performed. The SCADA system is based on the concept of having one master I/O server and multiple client computer systems. This paper describes the main features and advantages of this dynamic SCADA system, the connections of various field devices to the master I/O server, the device servers, and numerous software features used in the system. The system is based on the graphical programming language "LabVIEW" and its "Datalogging and Supervisory Control" (DSC) module. The DSC module supports a real-time database called the "tag engine," which performs the I/O operations with all field devices attached to the master I/O server and communicates with the other tag engines running on the client computers connected via a local area network. Generic and detailed communication block diagrams illustrating the hierarchical structure of this SCADA system are presented. The flow diagram outlining a complete test performed using this system in one of its standard configurations is described.
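The "tag engine" at the heart of the SCADA abstract (a real-time database of named I/O points whose updates propagate to client computers) can be caricatured as a key-value store with change notification. This is a conceptual sketch only; the actual system is built on LabVIEW's DSC module, and the tag name used here is invented.

```python
class TagEngine:
    """Toy real-time tag database with change callbacks standing in
    for the DSC module's network publication to client tag engines."""
    def __init__(self):
        self._tags = {}
        self._subs = []

    def subscribe(self, callback):
        self._subs.append(callback)

    def write(self, name, value):
        # Notify clients only on actual value changes.
        if self._tags.get(name) != value:
            self._tags[name] = value
            for cb in self._subs:
                cb(name, value)

    def read(self, name):
        return self._tags[name]

engine = TagEngine()
engine.subscribe(lambda n, v: print(f"update: {n} = {v}"))
engine.write("pv_array.voltage", 48.2)   # hypothetical field-device tag
print(engine.read("pv_array.voltage"))
```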
Towards Direct Manipulation and Remixing of Massive Data: The EarthServer Approach
NASA Astrophysics Data System (ADS)
Baumann, P.
2012-04-01
Complex analytics on "big data" is one of the core challenges of current Earth science, generating strong requirements for on-demand processing and filtering of massive data sets. Issues under discussion include flexibility, performance, scalability, and the heterogeneity of the information types involved. In other domains, high-level query languages (such as those offered by database systems) have proven successful in the quest for flexible, scalable data access interfaces to massive amounts of data. However, due to the lack of support for many of the Earth science data structures, database systems are only used for registries and catalogs, but not for the bulk of spatio-temporal data. One core information category in this field is given by coverage data. ISO 19123 defines coverages, simplifying, as a representation of a "space-time varying phenomenon". This model can express a large class of Earth science data structures, including rectified and non-rectified rasters, curvilinear grids, point clouds, TINs, general meshes, trajectories, surfaces, and solids. This abstract definition, which is too high-level to establish interoperability, is concretized by the OGC GML 3.2.1 Application Schema for Coverages Standard into an interoperable representation. The OGC Web Coverage Processing Service (WCPS) Standard defines a declarative query language on multi-dimensional raster-type coverages, such as 1D in-situ sensor timeseries, 2D EO imagery, 3D x/y/t image time series and x/y/z geophysical data, 4D x/y/z/t climate and ocean data. Hence, important ingredients for versatile coverage retrieval are given - however, this potential has not been fully unleashed by service architectures up to now. The EU FP7-INFRA project EarthServer, launched in September 2011, aims at enabling standards-based on-demand analytics over the Web for Earth science data based on an integration of W3C XQuery for alphanumeric data and OGC-WCPS for raster data.
Ultimately, EarthServer will support all OGC coverage types. The platform used by EarthServer is the rasdaman raster database system. To exploit heterogeneous multi-parallel platforms, automatic request distribution and orchestration is being established. Client toolkits are under development which will allow to quickly compose bespoke interactive clients, ranging from mobile devices over Web clients to high-end immersive virtual reality. The EarthServer platform has been deployed in six large-scale data centres with the aim of setting up Lighthouse Applications addressing all Earth Sciences, including satellite and airborne earth observation as well as use cases from atmosphere, ocean, snow, and ice monitoring, and geology on Earth and Mars. These services, each of which will ultimately host at least 100 TB, will form a peer cloud with distributed query processing for arbitrarily mixing database and in-situ access. With its ability to directly manipulate, analyze and remix massive data, the goal of EarthServer is to lift the data providers' semantic level from data stewardship to service stewardship.
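To make the declarative-query idea concrete, here is an illustrative construction of a WCPS request. The endpoint URL and the coverage name "temperature_4d" are hypothetical; the for/return/encode query form follows the OGC WCPS standard, and the key-value-pair binding shown is an assumption modeled on the WCS processing extension used by rasdaman.

```python
from urllib.parse import urlencode

# Hypothetical WCPS query: slice a 4D coverage along its time axis
# and encode the result as a PNG image, server-side.
query = 'for $c in (temperature_4d) return encode($c[ansi("2011-06")], "png")'

endpoint = "https://example.org/rasdaman/ows"   # assumed service URL
url = endpoint + "?" + urlencode({
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": query,
})
print(url)
```

The point of the query language is visible even in this toy: the filtering and encoding happen on the server, so only the derived product, not the massive source coverage, crosses the network.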
Transparent Proxy for Secure E-Mail
NASA Astrophysics Data System (ADS)
Michalák, Juraj; Hudec, Ladislav
2010-05-01
The paper deals with the security of e-mail messages and e-mail server implementation by means of a transparent SMTP proxy. The security features include encryption and signing of transported messages. The goal is to design and implement a software proxy for secure e-mail including its monitoring, administration, encryption and signing keys administration. In particular, we focus on automatic public key on-the-fly encryption and signing of e-mail messages according to S/MIME standard by means of an embedded computer system whose function can be briefly described as a brouter with transparent SMTP proxy.
High performance network and channel-based storage
NASA Technical Reports Server (NTRS)
Katz, Randy H.
1991-01-01
In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.
Real-time, interactive animation of deformable two- and three-dimensional objects
Desbrun, Mathieu; Schroeder, Peter; Meyer, Mark; Barr, Alan H.
2003-06-03
A method of updating in real-time the locations and velocities of mass points of a two- or three-dimensional object represented by a mass-spring system. A modified implicit Euler integration scheme is employed to determine the updated locations and velocities. In an optional post-integration step, the updated locations are corrected to preserve angular momentum. A processor readable medium and a network server each tangibly embodying the method are also provided. A system comprising a processor in combination with the medium, and a system comprising the server in combination with a client for accessing the server over a computer network, are also provided.
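The backward-Euler idea underlying the patent can be illustrated in one dimension. This is not the patented modified scheme (which handles full mass-spring networks and a momentum-preserving post-step); it is the textbook implicit update for a single mass on a linear spring, where evaluating the force at the end of the step reduces to a small linear solve.

```python
def implicit_euler_step(x, v, h, k=10.0, m=1.0):
    """One implicit Euler step for m*x'' = -k*x.

    Solves  v_new = v + h*(-k/m)*x_new,  x_new = x + h*v_new
    in closed form (linear spring, so no Newton iteration needed).
    """
    x_new = (x + h * v) / (1.0 + h * h * k / m)
    v_new = (x_new - x) / h
    return x_new, v_new

x, v = 1.0, 0.0          # stretched spring, released from rest
for _ in range(1000):
    x, v = implicit_euler_step(x, v, h=0.05)
print(abs(x) < 1.0)      # True: implicit Euler is stable and damps the motion
```

The unconditional stability is what makes large time steps, and hence real-time interaction, feasible; the artificial damping it introduces is exactly the kind of side effect the patent's angular-momentum correction step is meant to counteract.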
Service Management Database for DSN Equipment
NASA Technical Reports Server (NTRS)
Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Wolgast, Paul; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed
2009-01-01
This data- and event-driven persistent storage system leverages the use of commercial software provided by Oracle for portability, ease of maintenance, scalability, and ease of integration with embedded, client-server, and multi-tiered applications. In this role, the Service Management Database (SMDB) is a key component of the overall end-to-end process involved in the scheduling, preparation, and configuration of the Deep Space Network (DSN) equipment needed to perform the various telecommunication services the DSN provides to its customers worldwide. SMDB makes efficient use of triggers, stored procedures, queuing functions, e-mail capabilities, data management, and Java integration features provided by the Oracle relational database management system. SMDB uses a third normal form schema design that allows for simple data maintenance procedures and thin layers of integration with client applications. The software provides an integrated event logging system with ability to publish events to a JMS messaging system for synchronous and asynchronous delivery to subscribed applications. It provides a structured classification of events and application-level messages stored in database tables that are accessible by monitoring applications for real-time monitoring or for troubleshooting and analysis over historical archives.
An Interoperable, Agricultural Information System Based on Satellite Remote Sensing Data
NASA Technical Reports Server (NTRS)
Teng, William; Chiu, Long; Doraiswamy, Paul; Kempler, Steven; Liu, Zhong; Pham, Long; Rui, Hualan
2005-01-01
Monitoring global agricultural crop conditions during the growing season and estimating potential seasonal production are critically important for market development of U.S. agricultural products and for global food security. The Goddard Space Flight Center Earth Sciences Data and Information Services Center Distributed Active Archive Center (GES DISC DAAC) is developing an Agricultural Information System (AIS), evolved from an existing TRMM Online Visualization and Analysis System (TOVAS), which will operationally provide satellite remote sensing data products (e.g., rainfall) and services. The data products will include crop condition and yield prediction maps, generated from a crop growth model with satellite data inputs, in collaboration with the USDA Agricultural Research Service. The AIS will enable the remote, interoperable access to distributed data, by using the GrADS-DODS Server (GDS) and by being compliant with Open GIS Consortium standards. Users will be able to download individual files, perform interactive online analysis, as well as receive operational data flows. AIS outputs will be integrated into existing operational decision support systems for global crop monitoring, such as those of the USDA Foreign Agricultural Service and the U.N. World Food Program.
Development of a Smart Mobile Data Module for Fetal Monitoring in E-Healthcare.
Houzé de l'Aulnoit, Agathe; Boudet, Samuel; Génin, Michaël; Gautier, Pierre-François; Schiro, Jessica; Houzé de l'Aulnoit, Denis; Beuscart, Régis
2018-03-23
The fetal heart rate (FHR) is a marker of fetal well-being in utero (when monitoring maternal and/or fetal pathologies) and during labor. Here, we developed a smart mobile data module for the remote acquisition and transmission (via a Wi-Fi or 4G connection) of FHR recordings, together with a web-based viewer for displaying the FHR datasets on a computer, smartphone or tablet. In order to define the features required by users, we modelled the fetal monitoring procedure (in home and hospital settings) via semi-structured interviews with midwives and obstetricians. Using this information, we developed a mobile data transfer module based on a Raspberry Pi. When connected to a standalone fetal monitor, the module acquires the FHR signal and sends it (via a Wi-Fi or a 3G/4G mobile internet connection) to a secure server within our hospital information system. The archived, digitized signal data are linked to the patient's electronic medical records. An HTML5/JavaScript web viewer converts the digitized FHR data into easily readable and interpretable graphs for viewing on a computer (running Windows, Linux or MacOS) or a mobile device (running Android, iOS or Windows Phone OS). The data can be viewed in real time or offline. The application includes tools required for correct interpretation of the data (signal loss calculation, scale adjustment, and precise measurements of the signal's characteristics). We performed a proof-of-concept case study of the transmission, reception and visualization of FHR data for a pregnant woman at 30 weeks of amenorrhea. She was hospitalized in the pregnancy assessment unit and FHR data were acquired three times a day with a Philips Avalon® FM30 fetal monitor. The prototype (Raspberry Pi) was connected to the fetal monitor's RS232 port. 
The transmission and reception of prerecorded signals were tested; the web server correctly received the signals, and the FHR recording was visualized in real time on a computer, a tablet and smartphones (running Android and iOS) via the web viewer. This process did not perturb the hospital's computer network. There was no data delay or loss during a 60-min test. The web viewer was tested successfully in the various usage situations. The system was as user-friendly as expected, and enabled rapid, secure archiving. We have developed a system for the acquisition, transmission, recording and visualization of FHR data. Healthcare professionals can view the FHR data remotely on their computer, tablet or smartphone. Integration of FHR data into a hospital information system enables optimal, secure, long-term data archiving.
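Among the interpretation tools the FHR web viewer provides is a "signal loss calculation". One plausible reading of that metric is sketched below: the fraction of samples in a window with no valid FHR value. The encoding of a dropped sample as 0 bpm is an assumption for illustration; the actual encoding on the fetal monitor's RS232 output may differ.

```python
def signal_loss(samples):
    """Percentage of samples in a window with no valid FHR reading
    (0 is assumed to mark a dropped sample)."""
    if not samples:
        return 0.0
    lost = sum(1 for s in samples if s == 0)
    return 100.0 * lost / len(samples)

fhr = [140, 142, 0, 0, 138, 141, 139, 0]   # bpm; 0 = signal lost
print(f"{signal_loss(fhr):.1f}% signal loss")
```

A viewer would compute this over a sliding window and flag traces whose loss rate makes interpretation unreliable.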
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-16
... the Exchange in order that they may locate their electronic servers in close physical proximity to the... execution systems through the same order gateway regardless of whether the sender is co-located in the... scheduled at least 1 day in advance. Rack and Stack Installation of one $200 per server. server in User's...
A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System
ERIC Educational Resources Information Center
Chim, Hung; Deng, Xiaotie
2008-01-01
We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…
Solid waste information and tracking system server conversion project management plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
MAY, D.L.
1999-04-12
This Project Management Plan (PMP) governs the conversion of the Solid Waste Information and Tracking System (SWITS) to a client-server architecture. The PMP describes the background, planning, and management of the SWITS conversion. Requirements and specification documentation needed for the SWITS conversion will be released as supporting documents.
A low-cost wireless system for autonomous generation of road safety alerts
NASA Astrophysics Data System (ADS)
Banks, B.; Harms, T.; Sedigh Sarvestani, S.; Bastianini, F.
2009-03-01
This paper describes an autonomous wireless system that generates road safety alerts, in the form of SMS and email messages, and sends them to motorists subscribed to the service. Drivers who regularly traverse a particular route are the main beneficiaries of the proposed system, which is intended for sparsely populated rural areas, where information available to drivers about road safety, especially bridge conditions, is very limited. At the heart of this system is the SmartBrick, a wireless system for remote structural health monitoring that has been presented in our previous work. Sensors on the SmartBrick network regularly collect data on water level, temperature, strain, and other parameters important to the safety of a bridge. This information is stored on the device and reported to a remote server over the GSM cellular infrastructure. The system generates alerts indicating hazardous road conditions when the data exceed thresholds that can be changed remotely. The remote server and any number of designated authorities can be notified by email, FTP, and SMS. Drivers can view road conditions and subscribe to SMS and/or email alerts through a web page. The subscription-only form of alert generation was deliberately selected to mitigate privacy concerns. The proposed system can significantly increase the safety of travel through rural areas. Real-time availability of information to transportation authorities and law enforcement officials facilitates early or proactive reaction to road hazards, and direct notification of drivers further increases the system's utility in protecting the traveling public.
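The threshold-based alerting described above can be sketched in a few lines. All parameter names, threshold values, and the message format here are illustrative assumptions, not the actual SmartBrick firmware interface:

```python
# Minimal sketch of threshold-based alert generation for a structural
# health monitoring node. Names and thresholds are hypothetical.
THRESHOLDS = {"water_level_m": 3.5, "strain_microstrain": 1200.0}

def check_readings(readings, thresholds=THRESHOLDS):
    """Return the parameters whose readings exceed their thresholds."""
    return [name for name, value in readings.items()
            if name in thresholds and value > thresholds[name]]

def format_alert(bridge_id, exceeded, readings):
    """Build a short SMS/email-style alert for the exceeded parameters."""
    details = ", ".join(f"{p}={readings[p]}" for p in exceeded)
    return f"ALERT bridge {bridge_id}: {details}"

readings = {"water_level_m": 4.1, "strain_microstrain": 900.0}
exceeded = check_readings(readings)
if exceeded:
    print(format_alert("BR-042", exceeded, readings))
```

Because the thresholds live in a plain mapping, they can be updated remotely without redeploying the checking logic, mirroring the paper's remotely changeable thresholds.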
NASA Astrophysics Data System (ADS)
Egeland, R.; Huang, C.-H.; Rossman, P.; Sundarrajan, P.; Wildish, T.
2012-12-01
PhEDEx is the data-transfer management solution written by CMS. It consists of agents running at each site, a website for presentation of information, and a web-based data-service for scripted access to information. The website allows users to monitor the progress of data-transfers, the status of site agents and links between sites, and the overall status and behaviour of everything about PhEDEx. It also allows users to make and approve requests for data-transfers and for deletion of data. It is the main point-of-entry for all users wishing to interact with PhEDEx. For several years, the website has consisted of a single perl program with about 10K SLOC. This program has limited capabilities for exploring the data, with only coarse filtering capabilities and no context-sensitive awareness. Graphical information is presented as static images, generated on the server, with no interactivity. It is also not well connected to the rest of the PhEDEx codebase, since much of it was written before the data-service was developed. All this makes it hard to maintain and extend. We are re-implementing the website to address these issues. The UI is being rewritten in Javascript, replacing most of the server-side code. We are using the YUI toolkit to provide advanced features and context-sensitive interaction, and will adopt a Javascript charting library for generating graphical representations client-side. This relieves the server of much of its load, and automatically improves server-side security. The Javascript components can be re-used in many ways, allowing custom pages to be developed for specific uses. In particular, standalone test-cases using small numbers of components make it easier to debug the Javascript than it is to debug a large server program. Information about PhEDEx is accessed through the PhEDEx data-service, since direct SQL is not available from the clients’ browser. 
This provides consistent semantics with other, externally written monitoring tools, which already use the data-service. It also reduces redundancy in the code, yielding a simpler, consolidated codebase. In this talk we describe our experience of re-factoring this monolithic server-side program into a lighter client-side framework. We describe some of the techniques that worked well for us, and some of the mistakes we made along the way. We present the current state of the project, and its future direction.
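The pattern described above, where a browser client consumes structured responses from a data-service rather than issuing SQL, can be sketched as follows. The payload shape is an invented stand-in, not the real PhEDEx data-service schema:

```python
# Sketch of client-side consumption of a data-service-style JSON response.
# The nested structure below is a hypothetical example, not PhEDEx's schema.
import json

SAMPLE_RESPONSE = json.dumps({
    "phedex": {
        "link": [
            {"from": "T1_US_FNAL", "to": "T2_CH_CSCS", "done_bytes": 5000000},
            {"from": "T1_US_FNAL", "to": "T2_DE_DESY", "done_bytes": 12000000},
        ]
    }
})

def transfer_volumes(raw):
    """Flatten the nested response into (source, destination, bytes) rows,
    the kind of tabular input a client-side charting library consumes."""
    links = json.loads(raw)["phedex"]["link"]
    return [(link["from"], link["to"], link["done_bytes"]) for link in links]

for row in transfer_volumes(SAMPLE_RESPONSE):
    print(row)
```

Flattening on the client keeps the server stateless and lets the same endpoint serve both the website and external monitoring tools, which is the consistency benefit the abstract describes.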
Lawrence, J. F.; Cochran, E.S.; Chung, A.; Kaiser, A.; Christensen, C. M.; Allen, R.; Baker, J.W.; Fry, B.; Heaton, T.; Kilb, Debi; Kohler, M.D.; Taufer, M.
2014-01-01
We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN) that connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved for tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.
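The server-side correlation step, declaring an event when enough station triggers cluster in time, can be sketched as below. The window length and minimum station count are assumptions for illustration, not QCN's actual detection parameters:

```python
# Hypothetical sketch of correlating incoming station triggers: declare a
# detection when at least min_stations triggers fall in a short time window.
def detect_event(trigger_times, window_s=5.0, min_stations=4):
    """Return the start time of the first window containing at least
    min_stations triggers, or None if no such cluster exists."""
    times = sorted(trigger_times)
    for i, t0 in enumerate(times):
        # Count triggers falling inside [t0, t0 + window_s].
        in_window = [t for t in times[i:] if t - t0 <= window_s]
        if len(in_window) >= min_stations:
            return t0
    return None
```

Requiring multiple stations suppresses single-sensor false triggers (a bumped desk, for example), which is why a volunteer-hosted network can detect real events reliably.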
Testnodes: a Lightweight Node-Testing Infrastructure
NASA Astrophysics Data System (ADS)
Fay, R.; Bland, J.
2014-06-01
A key aspect of ensuring optimum cluster reliability and productivity lies in keeping worker nodes in a healthy state. Testnodes is a lightweight node testing solution developed at Liverpool. While Nagios has been used locally for general monitoring of hosts and services, Testnodes is optimised to answer one question: is there any reason this node should not be accepting jobs? This tight focus enables Testnodes to inspect nodes frequently with minimal impact and to provide a comprehensive and easily extended check with each inspection. On the server side, Testnodes, implemented in Python, interoperates with the Torque batch server to control the nodes' production status. Testnodes remotely and in parallel executes client-side test scripts and processes the return codes and output, adjusting each node's online/offline status accordingly to preserve the integrity of the overall batch system. Testnodes reports via log, email, and Nagios, allowing a quick overview of node status to be reviewed and specific node issues to be identified and resolved quickly. This presentation will cover the design and implementation of Testnodes, together with the results of its use in production at Liverpool, and future development plans.
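The decision step described above, mapping each node's test return code and output to an online/offline action, might look like the following sketch. The node names, return codes, and action tuples are illustrative assumptions, not Testnodes' actual interface:

```python
# Hypothetical sketch of the Testnodes decision step: a non-zero return
# code from a node's test script marks that node offline, with the test
# output preserved as the reason string for the batch server note.
def decide_actions(results):
    """results maps node name -> (return_code, output).
    Returns node name -> ("online"|"offline", reason)."""
    actions = {}
    for node, (rc, output) in results.items():
        if rc == 0:
            actions[node] = ("online", "")
        else:
            actions[node] = ("offline", output.strip())
    return actions

results = {"node001": (0, ""), "node002": (2, "disk full on /tmp\n")}
print(decide_actions(results))
```

In a real deployment the "offline" action would translate to a batch-server command (for Torque, something in the spirit of `pbsnodes -o`), keeping the scheduler from dispatching jobs to an unhealthy node.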
Nakamura, R; Sasaki, M; Oikawa, H; Harada, S; Tamakawa, Y
2000-03-01
To use an intranet technique to develop an information system that simultaneously supports both diagnostic reports and radiotherapy planning images. Using a file server as the gateway, a radiation oncology LAN was connected to an already operative RIS LAN. Dose-distribution images were saved in tagged-image-file format by way of a screen dump to the file server. X-ray simulator images and portal images were saved in encapsulated PostScript format on the file server and automatically converted to portable document format. The files on the file server were automatically registered to the Web server by the search engine and were available for searching and browsing with a Web browser. Registering planning images took less than a minute; for clients, searching and browsing a file took less than 3 seconds. Over 150,000 reports and 4,000 images from a six-month period were accessible. Because an intranet technique was used, construction and maintenance were completed without specialist expertise. This system has made prompt access to essential information about radiotherapy possible, and it promotes wider access to radiotherapy planning information, which may improve the quality of treatment.
Things That Go "Bump": in the Virtual Night.
ERIC Educational Resources Information Center
Fore, Julie A.
1997-01-01
Introduces concepts of server security and includes articles and sidebars with firsthand accounts of the consequences of not devoting enough time to security measures. Outlines the following factors to consider when evaluating a server's risk potential: confidentiality/reproducibility of the data; complexity of the system; backup system and hardware…
Interfaces for Distributed Systems of Information Servers.
ERIC Educational Resources Information Center
Kahle, Brewster M.; And Others
1993-01-01
Describes five interfaces to remote, full-text databases accessed through distributed systems of servers. These are WAIStation for the Macintosh, XWAIS for X-Windows, GWAIS for Gnu-Emacs, SWAIS for dumb terminals, and Rosebud for the Macintosh. Sixteen illustrations provide examples of display screens. Problems and needed improvements are…
ARIANE: integration of information databases within a hospital intranet.
Joubert, M; Aymard, S; Fieschi, D; Volot, F; Staccini, P; Robert, J J; Fieschi, M
1998-05-01
Large information systems handle massive volumes of data stored in heterogeneous sources. Each server has its own model for representing concepts, suited to its aims. One of the main problems end-users encounter when accessing different servers is matching their own viewpoint on biomedical concepts with the various representations made in the database servers. The aim of the ARIANE project is to provide end-users with an easy-to-use and natural means of accessing and querying heterogeneous information databases. The objectives of this research work are to build a conceptual interface using Internet technology inside an enterprise Intranet and to propose a method for realizing it. This method is based on the knowledge sources provided by the Unified Medical Language System (UMLS) project of the US National Library of Medicine. Experiments concern queries to three different information servers: PubMed, a Medline server of the NLM; Thériaque, a French database on drugs implemented in the hospital Intranet; and a Web site dedicated to Internet resources in gastroenterology and nutrition, located at the Faculty of Medicine of Nice (France). Access to each of these servers differs according to the kind of information delivered and the technology used to query it. With the health care professional's workstation in mind, the authors introduced quality criteria into the ARIANE project in order to build, in a homogeneous and efficient way, a query system that can be integrated into existing information systems and can integrate existing and new information sources.
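The UMLS-based mediation idea, resolving a user's free-text term to a unified concept identifier and then to each server's own vocabulary, can be sketched as follows. The tiny mapping tables are invented for illustration; the real system draws on the full UMLS Metathesaurus:

```python
# Hedged sketch of UMLS-style concept mediation. The mappings below are
# hypothetical miniatures standing in for the Metathesaurus.
UMLS_CUI = {
    "heart attack": "C0027051",
    "myocardial infarction": "C0027051",
}
SERVER_TERMS = {
    "PubMed": {"C0027051": "Myocardial Infarction"},
    "Theriaque": {"C0027051": "infarctus du myocarde"},
}

def translate(term, server):
    """Resolve a free-text term to the target server's preferred term,
    going through the unified concept identifier (CUI)."""
    cui = UMLS_CUI.get(term.lower())
    if cui is None:
        return None
    return SERVER_TERMS.get(server, {}).get(cui)

print(translate("Heart attack", "PubMed"))
```

Pivoting through a shared concept identifier is what lets one query reach servers with different vocabularies (including different languages), which is the heterogeneity problem the abstract describes.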
Metrics for Assessing the Reliability of a Telemedicine Remote Monitoring System
Fox, Mark; Papadopoulos, Amy; Crump, Cindy
2013-01-01
Objective: The goal of this study was to assess, using new metrics, the reliability of a real-time health monitoring system in the homes of older adults. Materials and Methods: The “MobileCare Monitor” system was installed in the homes of nine older adults >75 years of age for a 2-week period. The system consisted of a wireless wristwatch-based monitoring system containing sensors for location, temperature, and impacts and a “panic” button, connected through a mesh network to third-party wireless devices (blood pressure cuff, pulse oximeter, weight scale, and a survey-administering device). To assess system reliability, daily phone calls instructed participants to conduct system tests and reminded them to fill out surveys and daily diaries. Phone reports and participant diary entries were checked against data received at a secure server. Results: Reliability metrics assessed overall system reliability, data concurrence, study effectiveness, and system usability. Except for the pulse oximeter, system reliability metrics varied between 73% and 92%. Data concurrence for proximal and distal readings exceeded 88%. System usability following the pulse oximeter firmware update varied between 82% and 97%. An estimate of watch-wearing adherence within the home was quite high, about 80%, although given the inability to assess watch-wearing when a participant left the house, adherence likely exceeded the requested 10 h/day. In total, 3,436 of 3,906 potential measurements were obtained, indicating a study effectiveness of 88%. Conclusions: The system was quite effective in providing accurate remote health data, and the different reliability measures identify important error sources in remote monitoring systems. PMID:23611640
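The study-effectiveness figure quoted above is a simple ratio of obtained to potential measurements; the following one-liner just reproduces that arithmetic from the reported counts:

```python
# Study effectiveness as a percentage of potential measurements obtained.
def effectiveness(obtained, potential):
    """Percentage of potential measurements actually obtained."""
    return 100.0 * obtained / potential

print(round(effectiveness(3436, 3906)))  # 3436/3906 rounds to 88
```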