Sample records for environment monitoring server

  1. Study on an agricultural environment monitoring server system using Wireless Sensor Networks.

    PubMed

    Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun

    2010-01-01

    This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoor agricultural production environment utilizing Wireless Sensor Network (WSN) technology. The proposed system collects outdoor environmental and soil information through WSN-based environmental and soil sensors, image information through CCTVs, and location information through GPS modules. The collected information is converted into a database by the agricultural environment monitoring server, which consists of a sensor manager that manages information collected from the WSN sensors, an image information manager that manages image information collected from CCTVs, and a GPS manager that processes location information of the system, and is provided to producers. In addition, a solar cell-based power supply is implemented for the server system so that it can be used in agricultural environments with insufficient power infrastructure. The system can monitor outdoor environmental information remotely, and its use can be expected to contribute to increasing crop yields and improving quality in the agricultural field by supporting the decision making of crop producers through analysis of the collected information.
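
    As a sketch of the manager-based design just described, the following minimal Python fragment routes each incoming message to a sensor, image, or GPS manager and stores it in a database. All class names, message fields, and the schema are our own illustration (the paper publishes no code), with sqlite3 standing in for the real database.

    ```python
    import sqlite3

    # Minimal sketch of the three-manager server design described above.
    # Class names, message fields, and the schema are hypothetical.

    class SensorManager:                      # WSN environmental/soil readings
        def handle(self, db, m):
            db.execute("INSERT INTO sensor VALUES (?,?,?)",
                       (m["node"], m["kind"], m["value"]))

    class ImageManager:                       # CCTV image metadata
        def handle(self, db, m):
            db.execute("INSERT INTO image VALUES (?,?)", (m["camera"], m["path"]))

    class GPSManager:                         # location fixes
        def handle(self, db, m):
            db.execute("INSERT INTO gps VALUES (?,?)", (m["lat"], m["lon"]))

    MANAGERS = {"sensor": SensorManager(), "image": ImageManager(), "gps": GPSManager()}

    def dispatch(db, m):
        MANAGERS[m["type"]].handle(db, m)     # route by message source type

    db = sqlite3.connect(":memory:")
    db.executescript("""CREATE TABLE sensor(node, kind, value);
                        CREATE TABLE image(camera, path);
                        CREATE TABLE gps(lat, lon);""")
    dispatch(db, {"type": "sensor", "node": 3, "kind": "soil_moisture", "value": 0.31})
    ```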

  2. Implementation experience of a patient monitoring solution based on end-to-end standards.

    PubMed

    Martinez, I; Fernandez, J; Galarraga, M; Serrano, L; de Toledo, P; Escayola, J; Jimenez-Fernandez, S; Led, S; Martinez-Espronceda, M; Garcia, J

    2007-01-01

    This paper presents a proof-of-concept design of a patient monitoring solution for Intensive Care Unit (ICU). It is end-to-end standards-based, using ISO/IEEE 11073 (X73) in the bedside environment and EN13606 to communicate the information to an Electronic Healthcare Record (EHR) server. At the bedside end a plug-and-play sensor network is implemented, which communicates with a gateway that collects the medical information and sends it to a monitoring server. At this point the server transforms the data frame into an EN13606 extract, to be stored on the EHR server. The presented system has been tested in a laboratory environment to demonstrate the feasibility of this end-to-end standards-based solution.
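
    The gateway's transformation step can be pictured with a minimal sketch: a bedside data frame is reshaped into an EHR-bound extract. The field names below are purely illustrative; real ISO/IEEE 11073 frames and EN13606 extracts are far richer structures.

    ```python
    # Sketch of the gateway step described above: a bedside data frame is
    # reshaped into an EHR-bound "extract". Field names are illustrative only.

    def frame_to_extract(frame: dict) -> dict:
        return {
            "subject": frame["patient_id"],
            "time": frame["timestamp"],
            "entries": [
                {"code": obs["metric"], "value": obs["value"], "unit": obs["unit"]}
                for obs in frame["observations"]
            ],
        }

    frame = {"patient_id": "icu-07", "timestamp": "2007-05-01T10:00:00",
             "observations": [{"metric": "HR", "value": 72, "unit": "bpm"}]}
    print(frame_to_extract(frame))
    ```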

  3. The Development of a Remote Patient Monitoring System using Java-enabled Mobile Phones.

    PubMed

    Kogure, Y; Matsuoka, H; Kinouchi, Y; Akutagawa, M

    2005-01-01

    A remote patient monitoring system is described. The system monitors information on multiple patients in the ICU/CCU via 3G mobile phones. Conventionally, various patient information, such as vital signs, is collected and stored on patient information systems. In the proposed system, the patient information is re-collected by a remote information server and transported to mobile phones. The server works as a gateway between the hospital intranet and public networks. The information provided by the server consists of graphs and text data. Doctors can browse patients' information on their mobile phones via the server. A custom Java application is used to browse these data. In this study, the information server and the Java application were developed, and communication between the server and a mobile phone was confirmed in a model environment. Applying this system to practical patient information system products is future work.

  4. Web-Accessible Scientific Workflow System for Performance Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roelof Versteeg; Trevor Rowe

    2006-03-01

    We describe the design and implementation of a web-accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition with server-side data management and information visualization through flexible browser-based data access tools. Component technologies include a rich browser-based client (using dynamic JavaScript and HTML/CSS) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third-party applications which are invoked by the back-end using web services. This environment allows for reproducible, transparent result generation by a diverse user base. It has been implemented for several monitoring systems with different degrees of complexity.

  5. The Fluke Security Project

    DTIC Science & Technology

    2000-04-01

    be an extension of Utah’s nascent Quarks system, oriented to closely coupled cluster environments. However, the grant did not actually begin until... Intel x86, implemented ten virtual machine monitors and servers, including a virtual memory manager, a checkpointer, a process manager, a file server... Fluke, we developed a novel hierarchical processor scheduling framework called CPU inheritance scheduling [5]. This is a framework for scheduling

  6. Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments

    PubMed Central

    Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo

    2015-01-01

    A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613
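
    For illustration, the qualifying condition of a CM range monitoring query can be written as a simple predicate: an object qualifies only when its non-spatial attributes match the query values and its position falls inside the query range. The names below are ours, and the GQR-tree index itself is not reproduced.

    ```python
    # Illustrative predicate for a content-matched (CM) range monitoring query:
    # an object qualifies only if its non-spatial attributes match the query
    # values AND it lies inside the rectangular query range.

    def matches(obj, query):
        content_ok = all(obj["attrs"].get(k) == v for k, v in query["attrs"].items())
        (x1, y1), (x2, y2) = query["range"]
        x, y = obj["pos"]
        return content_ok and x1 <= x <= x2 and y1 <= y <= y2

    taxi = {"attrs": {"type": "taxi", "vacant": True}, "pos": (3.0, 4.5)}
    q = {"attrs": {"type": "taxi"}, "range": ((0, 0), (10, 10))}
    print(matches(taxi, q))  # True: content matched and inside the range
    ```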

  7. Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments.

    PubMed

    Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo

    2015-09-18

    A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods.

  8. Monitoring Evolution at CERN

    NASA Astrophysics Data System (ADS)

    Andrade, P.; Fiorini, B.; Murphy, S.; Pigueiras, L.; Santos, M.

    2015-12-01

    Over the past two years, the operation of the CERN Data Centres went through significant changes with the introduction of new mechanisms for hardware procurement and new services for cloud provisioning and configuration management, among other improvements. These changes resulted in an increase of resources being operated in a more dynamic environment. Today, the CERN Data Centres provide over 11000 multi-core processor servers, 130 PB disk servers, 100 PB tape robots, and 150 high-performance tape drives. To cope with these developments, an evolution of the data centre monitoring tools was also required. This modernisation was based on a number of guiding rules: sustain the increase of resources, adapt to the new dynamic nature of the data centres, make monitoring data easier to share, give Service Managers more flexibility in how they publish and consume monitoring metrics and logs, establish a common repository of monitoring data, optimise the handling of monitoring notifications, and replace the previous toolset with new open-source technologies with large adoption and community support. This contribution describes how these improvements were delivered, presents the architecture and technologies of the new monitoring tools, and reviews the experience of their production deployment.

  9. Design of Deformation Monitoring System for Volcano Mitigation

    NASA Astrophysics Data System (ADS)

    Islamy, M. R. F.; Salam, R. A.; Munir, M. M.; Irsyam, M.; Khairurrijal

    2016-08-01

    Indonesia has many active volcanoes that are potentially disastrous. Good mitigation systems are needed to protect people and reduce casualties from potential disasters caused by volcanic eruptions. Therefore, a system to monitor the deformation of volcanoes was built. This system employed telemetry combining Radio Frequency (RF) communication via XBEE with General Packet Radio Service (GPRS) communication via SIM900. There are two types of modules in this system: the coordinator, acting as a parent, and the nodes, acting as children. Each node was connected to the coordinator, forming a Wireless Sensor Network (WSN) with a star topology, and carried an inclinometer-based sensor, a Global Positioning System (GPS) receiver, and an XBEE module. The coordinator collects data from each node, one at a time, to prevent data collisions between nodes, saves the data to an SD card, and transmits the data to a web server via GPRS, as sketched below. The inclinometer was calibrated with a self-built calibrator and tested in a high-temperature environment to check its durability. The GPS was tested by displaying its position on a web server via the Google Maps API (v3). It was shown that the coordinator can receive data from every node and transmit it to the web server very well, and that the system works well in a high-temperature environment.
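
    A minimal sketch of the coordinator's one-node-at-a-time polling loop follows; poll_node() stands in for the XBEE request/reply exchange and the commented upload step for the SIM900 GPRS post, both of which are our assumptions.

    ```python
    import time

    # Sketch of the collision-avoidance strategy described above: nodes are
    # polled strictly one at a time rather than transmitting freely.

    NODES = ["node-1", "node-2", "node-3"]

    def poll_node(node_id):
        # Placeholder for "send a request over XBEE, wait for that node's reply".
        return {"node": node_id, "tilt_deg": 0.7, "lat": -7.54, "lon": 110.44}

    def coordinator_loop(rounds=2):
        log = []
        for _ in range(rounds):
            for node in NODES:          # one node at a time: replies cannot collide
                reading = poll_node(node)
                log.append(reading)     # stands in for the SD card write
                # upload(reading)       # then forward to the web server via GPRS
            time.sleep(0.1)             # pacing between polling rounds
        return log

    print(len(coordinator_loop()))      # 6 readings, collected without collisions
    ```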

  10. Mobile Phone-Based Field Monitoring for Satsuma Mandarin and Its Application to Watering Advice System

    NASA Astrophysics Data System (ADS)

    Kamiya, Toshiyuki; Numano, Nagisa; Yagyu, Hiroyuki; Shimazu, Hideo

    This paper describes a mobile phone-based data logging system for monitoring the growing status of Satsuma mandarin, a type of citrus fruit, in the field. The system can provide various feedback to farm producers from the collected data, such as visualization of related data as a timeline chart or advice on the necessity of watering crops. It is important to collect information on environmental conditions, plant status, and product quality, to analyze it, and to provide it as feedback to farm producers to aid their operations. This paper proposes a novel framework of field monitoring and feedback for open-field farming. For field monitoring, it combines a low-cost plant status monitoring method using a simple apparatus with a Field Server for environmental condition monitoring. Each field worker has a simple apparatus to measure fruit firmness and records the data with a mobile phone. The logged data are stored in the system's database on the server. The system analyzes the stored data for each field and is able to show the necessity of watering to the user on a five-level scale. The system is also able to show the various stored data in timeline chart form. The user and coach can compare or analyze these data via a web interface. A test site was built at a Satsuma mandarin field at Kumano in Mie Prefecture, Japan, using the framework, and farm workers in the area used and evaluated the system.

  11. Integrated Environment for Ubiquitous Healthcare and Mobile IPv6 Networks

    NASA Astrophysics Data System (ADS)

    Cagalaban, Giovanni; Kim, Seoksoo

    The development of Internet technologies based on the IPv6 protocol will allow real-time monitoring of people with health deficiencies and improve the independence of elderly people. This paper proposes a ubiquitous healthcare system for personalized healthcare services with the support of mobile IPv6 networks. Specifically, this paper discusses the integration of ubiquitous healthcare and wireless networks and its functional requirements. This allows an integrated environment where heterogeneous devices such as mobile devices and body sensors can continuously monitor patient status and communicate remotely with healthcare servers, physicians, and family members to deliver healthcare services effectively.

  12. Advanced Pulse Oximetry System for Remote Monitoring and Management

    PubMed Central

    Pak, Ju Geon; Park, Kee Hyun

    2012-01-01

    Pulse oximetry data such as saturation of peripheral oxygen (SpO2) and pulse rate are vital signals for early diagnosis of heart disease. Therefore, various pulse oximeters have been developed continuously. However, some of the existing pulse oximeters are not equipped with communication capabilities, and consequently, the continuous monitoring of patient health is restricted. Moreover, even though certain oximeters have been built as network models, they focus on exchanging only pulse oximetry data, and they do not provide sufficient device management functions. In this paper, we propose an advanced pulse oximetry system for remote monitoring and management. The system consists of a networked pulse oximeter and a personal monitoring server. The proposed pulse oximeter measures a patient's pulse oximetry data and transmits the data to the personal monitoring server. The personal monitoring server then analyzes the received data and displays the results to the patient. Furthermore, for device management purposes, operational errors that occur in the pulse oximeter are reported to the personal monitoring server, and the system configurations of the pulse oximeter, such as thresholds and measurement targets, are modified by the server. We verify that the proposed pulse oximetry system operates efficiently and that it is appropriate for monitoring and managing a pulse oximeter in real time. PMID:22933841

  13. Advanced pulse oximetry system for remote monitoring and management.

    PubMed

    Pak, Ju Geon; Park, Kee Hyun

    2012-01-01

    Pulse oximetry data such as saturation of peripheral oxygen (SpO(2)) and pulse rate are vital signals for early diagnosis of heart disease. Therefore, various pulse oximeters have been developed continuously. However, some of the existing pulse oximeters are not equipped with communication capabilities, and consequently, the continuous monitoring of patient health is restricted. Moreover, even though certain oximeters have been built as network models, they focus on exchanging only pulse oximetry data, and they do not provide sufficient device management functions. In this paper, we propose an advanced pulse oximetry system for remote monitoring and management. The system consists of a networked pulse oximeter and a personal monitoring server. The proposed pulse oximeter measures a patient's pulse oximetry data and transmits the data to the personal monitoring server. The personal monitoring server then analyzes the received data and displays the results to the patient. Furthermore, for device management purposes, operational errors that occur in the pulse oximeter are reported to the personal monitoring server, and the system configurations of the pulse oximeter, such as thresholds and measurement targets, are modified by the server. We verify that the proposed pulse oximetry system operates efficiently and that it is appropriate for monitoring and managing a pulse oximeter in real time.
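
    The two management flows described above can be sketched as follows: the monitoring server checks each reading against server-settable thresholds, while operational errors travel on a separate device-management path. The threshold values and message shapes are illustrative only.

    ```python
    # Sketch of the two flows described above: monitoring (threshold checks on
    # readings) and device management (error reports, server-set configuration).

    config = {"spo2_min": 90, "pulse_min": 40, "pulse_max": 140}  # server-settable

    def analyze(sample):
        alerts = []
        if sample["spo2"] < config["spo2_min"]:
            alerts.append("low SpO2")
        if not config["pulse_min"] <= sample["pulse"] <= config["pulse_max"]:
            alerts.append("abnormal pulse rate")
        return alerts

    def handle_device_message(msg):
        if msg["kind"] == "error":               # device-management path
            return f"device error reported: {msg['detail']}"
        return analyze(msg)                      # monitoring path

    print(handle_device_message({"kind": "data", "spo2": 87, "pulse": 55}))
    print(handle_device_message({"kind": "error", "detail": "probe disconnected"}))
    ```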

  14. Real-time indoor monitoring system based on wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Wu, Zhengzhong; Liu, Zilin; Huang, Xiaowei; Liu, Jun

    2008-10-01

    Wireless sensor networks (WSN) greatly extend our ability to monitor and control the physical world. They can collaboratively aggregate a huge amount of sensed data to provide continuous and spatially dense observation of the environment. The control and monitoring of indoor atmospheric conditions represents an important task with the aim of ensuring suitable working and living spaces for people. However, comprehensive air quality, which includes humidity, temperature, gas concentrations, etc., is not easy to monitor and control. In this paper an indoor WSN monitoring system is developed. In the system, several sensors, such as temperature, humidity, and gas sensors, were built into an RF transceiver board for monitoring indoor environmental conditions. The indoor environmental monitoring parameters are transmitted wirelessly to a database server and can then be viewed by administrators through a PC or PDA connected to the local area network. The system, which was also field-tested and showed reliable and robust behaviour, is significant and valuable to people.

  15. Client-Server Connection Status Monitoring Using Ajax Push Technology

    NASA Technical Reports Server (NTRS)

    Lamongie, Julien R.

    2008-01-01

    This paper describes how simple client-server connection status monitoring can be implemented using Ajax (Asynchronous JavaScript and XML), JSF (Java Server Faces) and ICEfaces technologies. This functionality is required for NASA LCS (Launch Control System) displays used in the firing room for the Constellation project. Two separate implementations based on two distinct approaches are detailed and analyzed.

  16. A system for monitoring the radiation effects of a proton linear accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skorkin, V. M., E-mail: skorkin@inr.ru; Belyanski, K. L.; Skorkin, A. V.

    2016-12-15

    The system for real-time monitoring of radioactivity of a high-current proton linear accelerator detects secondary neutron emission from proton beam losses in transport channels and measures the activity of radionuclides in gas and aerosol emissions and the radiation background in the environment affected by a linear accelerator. The data provided by gamma, beta, and neutron detectors are transferred over a computer network to the central server. The system allows one to monitor proton beam losses, the activity of gas and aerosol emissions, and the radiation emission level of a linear accelerator in operation.

  17. An efficient biometric and password-based remote user authentication using smart card for Telecare Medical Information Systems in multi-server environment.

    PubMed

    Maitra, Tanmoy; Giri, Debasis

    2014-12-01

    Medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to go to a doctor in a critical or urgent period can communicate with a doctor through a medical server via the Internet from home. An authentication mechanism is needed in TMIS to hide the secret information of both parties, namely the server and the patient. Recent research includes the patient's biometric information as well as a password in the design of remote user authentication schemes to enhance the security level. In a single-server environment, one server is responsible for providing services to all the authorized remote patients. However, a problem arises if a patient wishes to access several branch servers: he/she needs to register with each branch server individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for the multi-server environment. In this paper, we show that in their scheme, a non-registered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in the multi-server environment, in which patients register once with a root telecare server called the registration center (RC) to get services from all the telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.
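
    The single-registration idea can be illustrated with a deliberately simplified sketch: a registration center derives a per-patient credential from one master key, and any branch server holding that key can verify the credential without a separate registration. This is our toy illustration only; the paper's actual scheme additionally involves passwords, biometrics, and smart-card secrets.

    ```python
    import hmac, hashlib, os

    # Toy illustration of register-once, use-everywhere: the RC derives a
    # per-patient credential from a master key shared with all branch servers.
    # This is NOT the paper's protocol, just the structural idea.

    MASTER_KEY = os.urandom(32)          # shared by RC and all branch servers

    def rc_issue(patient_id: str) -> bytes:
        """RC writes this credential to the patient's smart card, once."""
        return hmac.new(MASTER_KEY, patient_id.encode(), hashlib.sha256).digest()

    def branch_verify(patient_id: str, credential: bytes) -> bool:
        expected = hmac.new(MASTER_KEY, patient_id.encode(), hashlib.sha256).digest()
        return hmac.compare_digest(expected, credential)

    card = rc_issue("patient-42")
    print(branch_verify("patient-42", card))   # True at any branch server
    ```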

  18. Using OPC and HL7 Standards to Incorporate an Industrial Big Data Historian in a Health IT Environment.

    PubMed

    Cruz, Márcio Freire; Cavalcante, Carlos Arthur Mattos Teixeira; Sá Barretto, Sérgio Torres

    2018-05-30

    Health Level Seven (HL7) is one of the standards most used to centralize data from different vital sign monitoring systems. This solution significantly limits the data available for historical analysis, because it typically uses databases that are not effective at storing large volumes of data. In industry, a specific Big Data historian, known as a Process Information Management System (PIMS), solves this problem. This work proposes the same solution to overcome the restriction on storing vital sign data. The PIMS needs a compatible communication standard to allow storage, and the one most commonly used is OLE for Process Control (OPC). This paper presents an HL7-OPC server that permits communication between vital sign monitoring systems and a PIMS, thus allowing the storage of long historical series of vital signs. In addition, it carries out a review of local and cloud-based Big Medical Data research, followed by an analysis of the PIMS in a health IT environment. It then presents the architecture of the HL7 and OPC standards. Finally, it presents the HL7-OPC server and a sequence of tests that proved its full operation and performance.
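
    To make the bridging idea concrete, the sketch below pulls one vital sign out of an HL7 v2 OBX segment and re-emits it as the kind of tag/value/timestamp triple a historian stores. The field positions follow common HL7 v2 usage (OBX-3 identifier, OBX-5 value, OBX-6 units), but this is our simplified illustration, not the paper's server; real HL7 parsing and OPC writes are considerably more involved.

    ```python
    # Simplified HL7->historian mapping: parse one pipe-delimited OBX segment
    # and emit a tag/value/timestamp triple. Illustration only.

    def obx_to_tag(segment: str, timestamp: str):
        f = segment.split("|")
        assert f[0] == "OBX", "expected an OBX segment"
        code = f[3].split("^")[1] if "^" in f[3] else f[3]   # readable name
        return {"tag": f"vitals/{code}", "value": float(f[5]),
                "unit": f[6], "ts": timestamp}

    seg = "OBX|1|NM|8867-4^HeartRate||72|/min|||||F"
    print(obx_to_tag(seg, "2018-05-30T10:00:00"))
    ```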

  19. Improving STEM Education and Workforce Development by the Inclusion of Research Experiences in the Curriculum at SWC

    DTIC Science & Technology

    2016-06-08

    server environment. While the college's two Cisco blade-servers are located in separate buildings, these units now work as one unit. Critical databases and software packages are...

  20. Current Status of an Implementation of a System Monitoring for Seamless Auxiliary Data at the Geodetic Observatory Wettzell

    NASA Astrophysics Data System (ADS)

    Neidhardt, Alexander; Kirschbauer, Katharina; Plötz, Christian; Schönberger, Matthias; Böer, Armin; Wettzell VLBI Team

    2016-12-01

    A first test implementation of an auxiliary data archive is being tested at the Geodetic Observatory Wettzell. It is software which follows on from the Wettzell SysMon, extending the database and data sensors with the functionality of a professional monitoring environment named Zabbix. Some extensions to the remote control server on the NASA Field System PC enable the inclusion of data from external antennas. The presentation demonstrates the implementation and discusses the current possibilities, to encourage other antennas to join the auxiliary archive.

  1. Automatic analysis of attack data from distributed honeypot network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Voznak, Miroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel

    2013-05-01

    There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them bring us valuable information about actual attacks and the techniques used by hackers. The article describes a distributed topology of honeypots, which was developed with a strong orientation towards monitoring IP telephony traffic. IP telephony servers can be easily exposed to various types of attacks, and without protection, this situation can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system gathering information from all honeypots, it is possible to work with all the information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyses the data from each honeypot. The results of this analysis, as well as other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm which classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side for analysis of the gathered data.
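
    A toy version of the server-side classification step might bucket parsed honeypot records into rough VoIP attack types by simple signature rules, as below; the categories, thresholds, and record fields are entirely our own illustration, and the paper's algorithm is more involved.

    ```python
    # Toy classifier for aggregated honeypot records; first matching rule wins.
    # Categories and thresholds are illustrative, not the paper's algorithm.

    RULES = {
        "REGISTER flood": lambda r: r["method"] == "REGISTER" and r["rate"] > 50,
        "extension scan": lambda r: r["method"] == "REGISTER" and r["distinct_users"] > 20,
        "INVITE spam":    lambda r: r["method"] == "INVITE",
    }

    def classify(record):
        for label, rule in RULES.items():
            if rule(record):
                return label
        return "unclassified"

    records = [
        {"src": "198.51.100.7", "method": "REGISTER", "rate": 120, "distinct_users": 3},
        {"src": "203.0.113.9",  "method": "INVITE",   "rate": 2,   "distinct_users": 1},
    ]
    for r in records:
        print(r["src"], "->", classify(r))
    ```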

  2. Wireless Sensor Network-Based Greenhouse Environment Monitoring and Automatic Control System for Dew Condensation Prevention

    PubMed Central

    Park, Dae-Heon; Park, Jang-Woo

    2011-01-01

    Dew condensation on the leaf surface of greenhouse crops can promote diseases caused by fungi and bacteria, affecting the growth of the crops. In this paper, we present a WSN (Wireless Sensor Network)-based automatic monitoring system to prevent dew condensation in a greenhouse environment. The system is composed of sensor nodes for collecting data, base nodes for processing the collected data, relay nodes for driving devices that adjust the environment inside the greenhouse, and an environment server for data storage and processing. Using the Barenbrug formula for calculating the dew point on the leaves, the system is realized to prevent dew condensation on the crop’s surface, an important element in the prevention of disease infection. We also constructed a physical model resembling a typical greenhouse in order to verify the performance of our system with regard to dew condensation control. PMID:22163813

  3. Wireless sensor network-based greenhouse environment monitoring and automatic control system for dew condensation prevention.

    PubMed

    Park, Dae-Heon; Park, Jang-Woo

    2011-01-01

    Dew condensation on the leaf surface of greenhouse crops can promote diseases caused by fungi and bacteria, affecting the growth of the crops. In this paper, we present a WSN (Wireless Sensor Network)-based automatic monitoring system to prevent dew condensation in a greenhouse environment. The system is composed of sensor nodes for collecting data, base nodes for processing the collected data, relay nodes for driving devices that adjust the environment inside the greenhouse, and an environment server for data storage and processing. Using the Barenbrug formula for calculating the dew point on the leaves, the system is realized to prevent dew condensation on the crop's surface, an important element in the prevention of disease infection. We also constructed a physical model resembling a typical greenhouse in order to verify the performance of our system with regard to dew condensation control.
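
    The decision the relay nodes act on can be approximated compactly. As a stand-in for the Barenbrug formula the authors use, the sketch below applies the common Magnus-Tetens dew point approximation; the safety margin and trigger are likewise our own illustration.

    ```python
    import math

    # Dew forms when the leaf cools to (or below) the air's dew point. The
    # Magnus-Tetens approximation here is a stand-in for the Barenbrug formula
    # the authors cite; margin and thresholds are illustrative.

    def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
        a, b = 17.27, 237.7
        gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
        return b * gamma / (a - gamma)

    def condensation_risk(leaf_temp_c, air_temp_c, rh_pct, margin_c=0.5):
        return leaf_temp_c <= dew_point_c(air_temp_c, rh_pct) + margin_c

    if condensation_risk(leaf_temp_c=16.2, air_temp_c=20.0, rh_pct=85.0):
        print("risk of dew: start heating/ventilation")  # relay-node actuation
    ```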

  4. Home medical monitoring network based on embedded technology

    NASA Astrophysics Data System (ADS)

    Liu, Guozhong; Deng, Wenyi; Yan, Bixi; Lv, Naiguang

    2006-11-01

    A remote medical monitoring network for long-term monitoring of physiological variables would be helpful for the recovery of patients, as people are monitored under more comfortable conditions. Furthermore, long-term monitoring would be beneficial for investigating slowly developing deterioration in the wellness status of a subject and providing medical treatment as soon as possible. The home monitor runs on an embedded microcomputer, the Rabbit 3000, and interfaces with different medical monitoring modules through serial ports. A network based on asymmetric digital subscriber line (ADSL) or local area network (LAN) is established, and a client-server model, in which each embedded home medical monitor is a client and the monitoring center is the server, is applied to the system design. A client is able to provide its information to the server when the client's request for connection to the server is permitted. The monitoring center focuses on the management of the communications, the acquisition of medical data, and the visualization and analysis of the data, etc. A diagnostic model of sleep apnea syndrome is built based on ECG, heart rate, respiration wave, blood pressure, oxygen saturation, and air temperature of the mouth or nasal cavity, so sleep status can be analyzed from physiological data acquired during sleep. A remote medical monitoring network based on embedded micro-internetworking technology has the advantages of lower price, convenience, and feasibility, which have been tested with the prototype.

  5. Hardware Assisted Stealthy Diversity (CHECKMATE)

    DTIC Science & Technology

    2013-09-01

    applicable across multiple architectures. Figure 29 shows an example of an attack against an interpreted environment with a Java executable. CHECKMATE can... a user executes “/usr/bin/wget... Server 1 - Administration; Server 2 - Database (mySQL); Server 3 - Web server (Mongoose); Server 4 - File server (SSH); Server 5 - Email server

  6. xDSL connection monitor

    DOEpatents

    Horton, John J.

    2006-04-11

    A system and method of maintaining communication between a computer and a server, the server being in communication with the computer via xDSL service or dial-up modem service, with xDSL service being the default mode of communication, the method including sending a request to the server via xDSL service to which the server should respond and determining if a response has been received. If no response has been received, displaying on the computer a message (i) indicating that xDSL service has failed and (ii) offering to establish communication between the computer and the server via the dial-up modem, and thereafter changing the default mode of communication between the computer and the server to dial-up modem service. In a preferred embodiment, an xDSL service provider monitors dial-up modem communications and determines if the computer dialing in normally establishes communication with the server via xDSL service. The xDSL service provider can thus quickly and easily detect xDSL failures.
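
    The fallback logic reads naturally as a probe-and-switch routine; a minimal sketch follows, with an illustrative probe URL and a placeholder where the real dial-up path would be driven.

    ```python
    import urllib.request

    # Probe-and-switch sketch of the patent's fallback logic: probe the server
    # over the default (xDSL) route, and on failure make dial-up the default.
    # PROBE_URL is illustrative and dial_up_connect() is only a placeholder.

    PROBE_URL = "http://example.com/ping"
    default_mode = "xdsl"

    def xdsl_alive(timeout=5):
        try:
            urllib.request.urlopen(PROBE_URL, timeout=timeout)
            return True
        except OSError:
            return False

    def ensure_connection():
        global default_mode
        if default_mode == "xdsl" and not xdsl_alive():
            print("xDSL service has failed; offering dial-up instead")
            # dial_up_connect()          # placeholder for the modem path
            default_mode = "dialup"      # dial-up becomes the new default
        return default_mode
    ```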

  7. THttpServer class in ROOT

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, Joern; Linev, Sergey

    2015-12-01

    The new THttpServer class in ROOT implements an HTTP server for arbitrary ROOT applications. It is based on the Civetweb embeddable HTTP server and provides direct access to all objects registered with the server. Object data can be provided in different formats: binary, XML, GIF/PNG, and JSON. A generic user interface for THttpServer has been implemented with HTML/JavaScript based on the JavaScript ROOT development. With any modern web browser one can list, display, and monitor objects available on the server. THttpServer is used in the Go4 framework to provide an HTTP interface to the online analysis.

  8. Centralized Fabric Management Using Puppet, Git, and GLPI

    NASA Astrophysics Data System (ADS)

    Smith, Jason A.; De Stefano, John S., Jr.; Fetzko, John; Hollowell, Christopher; Ito, Hironori; Karasawa, Mizuki; Pryor, James; Rao, Tejas; Strecker-Kellogg, William

    2012-12-01

    Managing the infrastructure of a large and complex data center can be extremely difficult without taking advantage of recent technological advances in administrative automation. Puppet is a seasoned open-source tool that is designed for enterprise class centralized configuration management. At the RHIC and ATLAS Computing Facility (RACF) at Brookhaven National Laboratory, we use Puppet along with Git, GLPI, and some custom scripts as part of our centralized configuration management system. In this paper, we discuss how we use these tools for centralized configuration management of our servers and services, change management requiring authorized approval of production changes, a complete version controlled history of all changes made, separation of production, testing and development systems using puppet environments, semi-automated server inventory using GLPI, and configuration change monitoring and reporting using the Puppet dashboard. We will also discuss scalability and performance results from using these tools on a 2,000+ node cluster and 400+ infrastructure servers with an administrative staff of approximately 25 full-time employees (FTEs).

  9. Self-Organizing Peer-To-Peer Middleware for Healthcare Monitoring in Real-Time

    PubMed Central

    Kim, Hyun Ho; Jo, Hyeong Gon

    2017-01-01

    As the number of elderly persons with chronic illnesses increases, a new public infrastructure for their care is becoming increasingly necessary. In particular, technologies that can monitor bio-signals in real time have been receiving significant attention. Currently, most healthcare monitoring services are implemented by wireless carriers through centralized servers. These services are vulnerable to data concentration because all data are sent to a remote server. To solve these problems, we propose self-organizing P2P middleware for healthcare monitoring that enables real-time multi-bio-signal streaming without any central server by connecting the caregiver and care recipient. To verify the performance of the proposed middleware, we evaluated the monitoring service matching time for a monitoring request. We also confirmed that it is possible to provide an effective monitoring service by evaluating the peer-to-peer connectivity and average jitter. PMID:29149045

  10. Self-Organizing Peer-To-Peer Middleware for Healthcare Monitoring in Real-Time.

    PubMed

    Kim, Hyun Ho; Jo, Hyeong Gon; Kang, Soon Ju

    2017-11-17

    As the number of elderly persons with chronic illnesses increases, a new public infrastructure for their care is becoming increasingly necessary. In particular, technologies that can monitor bio-signals in real time have been receiving significant attention. Currently, most healthcare monitoring services are implemented by wireless carriers through centralized servers. These services are vulnerable to data concentration because all data are sent to a remote server. To solve these problems, we propose self-organizing P2P middleware for healthcare monitoring that enables real-time multi-bio-signal streaming without any central server by connecting the caregiver and care recipient. To verify the performance of the proposed middleware, we evaluated the monitoring service matching time for a monitoring request. We also confirmed that it is possible to provide an effective monitoring service by evaluating the peer-to-peer connectivity and average jitter.

  11. UNIX based client/server hospital information system.

    PubMed

    Nakamura, S; Sakurai, K; Uchiyama, M; Yoshii, Y; Tachibana, N

    1995-01-01

    SMILE (St. Luke's Medical Center Information Linkage Environment) is a HIS built as a client/server system using UNIX workstations on an open network, a LAN (FDDI & 10BASE-T). It provides a multivendor environment, high performance at low cost, and a user-friendly GUI. However, the client/server architecture with a UNIX workstation does not have the same OLTP environment (e.g., a TP monitor) as the mainframe. So, our system's problems and the steps used to solve them were reviewed. Several points that will be necessary for a client/server system with a UNIX workstation in the future are presented.

  12. An Improvement of Robust Biometrics-Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards.

    PubMed

    Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho

    2015-01-01

    In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user's management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.'s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.'s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.'s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties.

  13. A daily living activity remote monitoring system for solitary elderly people.

    PubMed

    Maki, Hiromichi; Ogawa, Hidekuni; Matsuoka, Shingo; Yonezawa, Yoshiharu; Caldwell, W Morton

    2011-01-01

    A daily living activity remote monitoring system has been developed for supporting solitary elderly people. The monitoring system consists of a tri-axis accelerometer, six low-power active filters, a low-power 8-bit microcontroller (MC), a 1 GB SD memory card (SDMC) and a 2.4 GHz low-transmitting-power mobile phone (PHS). The tri-axis accelerometer attached to the subject's chest can simultaneously measure the dynamic and static acceleration forces produced by heart sound, respiration, posture and behavior. The heart rate, respiration rate, activity, posture and behavior are detected from these dynamic and static acceleration forces. These data are stored on the SDMC. The MC sends the data to the server computer every hour. The server computer stores the data and makes a graphic chart from them. When the caregiver calls the server computer from his/her mobile phone, the server computer sends the graphical chart via the PHS, and the caregiver's mobile phone displays the chart graphically.

  14. Naver: a PC-cluster-based VR system

    NASA Astrophysics Data System (ADS)

    Park, ChangHoon; Ko, HeeDong; Kim, TaiYun

    2003-04-01

    In this paper, we present a new framework, NAVER, for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. The goal of NAVER is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of 3D virtual space with external modules. External modules are various input or output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their specific functions: the Render Server, the Device Server and the Control Server. While the Device Server contains external modules requiring event-based communication for the integration, the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: the Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and the integration with external modules on remote servers.

  15. Do you see what I hear: experiments in multi-channel sound and 3D visualization for network monitoring?

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Hall, David L.

    2010-04-01

    Detection of intrusions is a continuing problem in network security. Due to the large volumes of data recorded in Web server logs, analysis is typically forensic, taking place only after a problem has occurred. This paper describes a novel method of representing Web log information through multi-channel sound, while simultaneously visualizing network activity using a 3-D immersive environment. We are exploring the detection of intrusion signatures and patterns, utilizing human aural and visual pattern recognition ability to detect intrusions as they occur. IP addresses and return codes are mapped to an informative and unobtrusive listening environment to act as a situational sound track of Web traffic. Web log data is parsed and formatted using Python, then read as a data array by the synthesis language SuperCollider [1], which renders it as a sonification. This can be done either for the study of pre-existing data sets or in monitoring Web traffic in real time. Components rendered aurally include IP address, geographical information, and server return codes. Users can interact with the data, speeding or slowing the rate of representation (for pre-existing data sets) or "mixing" sound components to optimize intelligibility for tracking suspicious activity.
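
    The abstract notes that the Python stage parses and formats the log before SuperCollider renders it. One plausible shape for that stage is sketched below: a common-log-format line is mapped to sonification parameters, with pitch driven by the return code and stereo pan by the source address. The mapping constants and field choices are our own guesses, not the paper's.

    ```python
    import re

    # Map one common-log-format line to sonification parameters: pitch from the
    # HTTP return code, pan from the last IP octet. Mapping is illustrative.

    LOG_RE = re.compile(r'(\S+) \S+ \S+ \[.*?\] "(\S+) (\S+) \S+" (\d{3})')

    CODE_PITCH = {"200": 60, "304": 62, "404": 48, "500": 36}  # MIDI notes

    def to_sound_event(line):
        m = LOG_RE.match(line)
        if not m:
            return None
        ip, method, path, code = m.groups()
        return {
            "pitch": CODE_PITCH.get(code, 30),            # errors sit low and dark
            "pan": int(ip.split(".")[-1]) / 255 * 2 - 1,  # spread hosts across stereo
            "label": f"{method} {path} -> {code}",
        }

    line = '203.0.113.9 - - [10/Apr/2010:13:55:36 -0700] "GET /index.html HTTP/1.0" 404'
    print(to_sound_event(line))
    ```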

  16. The Trauma Patient Tracking System: implementing a wireless monitoring infrastructure for emergency response.

    PubMed

    Maltz, Jonathan; C Ng, Thomas; Li, Dustin; Wang, Jian; Wang, Kang; Bergeron, William; Martin, Ron; Budinger, Thomas

    2005-01-01

    In mass trauma situations, emergency personnel are challenged with the task of prioritizing the care of many injured victims. We propose a trauma patient tracking system (TPTS) where first-responders tag all patients with a wireless monitoring device that continuously reports the location of each patient. The system can be used not only to prioritize patient care, but also to determine the time taken for each patient to receive treatment. This is important in training emergency personnel and in identifying bottlenecks in the disaster response process. In situations where biochemical agents are involved, a TPTS may be employed to determine sites of cross-contamination. In order to track patient location in both outdoor and indoor environments, we employ both Global Positioning System (GPS) and Television/ Radio Frequency (TVRF) technologies. Each patient tag employs IEEE 802.11 (Wi-Fi)/TCP/IP networking to communicate with a central server via any available Wi-Fi basestation. A key component to increase TPTS fault-tolerance is a mobile Wi-Fi basestation that employs redundant Internet connectivity to ensure that tags at the disaster scene can send information to the central server even when local infrastructure is unavailable for use. We demonstrate the robustness of the system in tracking multiple patients in a simulated trauma situation in an urban environment.

  17. A visualization environment for supercomputing-based applications in computational mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  18. Introduction to the Space Weather Monitoring System at KASI

    NASA Astrophysics Data System (ADS)

    Baek, J.; Choi, S.; Kim, Y.; Cho, K.; Bong, S.; Lee, J.; Kwak, Y.; Hwang, J.; Park, Y.; Hwang, E.

    2014-05-01

    We have developed the Space Weather Monitoring System (SWMS) at the Korea Astronomy and Space Science Institute (KASI). Since 2007, the system has continuously evolved into a better system. The SWMS consists of several subsystems: applications which acquire and process observational data, servers which run the applications, data storage, and display facilities which show the space weather information. The applications collect solar and space weather data from domestic and overseas sites. The collected data are converted to other formats and/or visualized in real time as graphs and illustrations. We manage 3 data acquisition and processing servers, a file service server, a web server, and 3 sets of storage systems. We have developed 30 applications for a variety of data, and the volume of data is about 5.5 GB per day. We provide our customers with space weather contents displayed at the Space Weather Monitoring Lab (SWML) using web services.

  19. A Case Study in Software Adaptation

    DTIC Science & Technology

    2002-01-01

    A Case Study in Software Adaptation (Giuseppe Valetto, Telecom Italia Lab)... configuration of the service; monitoring of database connectivity from within the service; monitoring of crashes and shutdowns of IM servers; monitoring of... of the IM server all share a relational database and a common runtime state repository, which make up the backend tier, and allow replicas to

  20. Integrated technologies for solid waste bin monitoring system.

    PubMed

    Arebey, Maher; Hannan, M A; Basri, Hassan; Begum, R A; Abdullah, Huda

    2011-06-01

    Communication technologies such as radio frequency identification (RFID), the global positioning system (GPS), the general packet radio system (GPRS), and a geographic information system (GIS) are integrated with a camera to construct a solid waste monitoring system. The aim is to improve the way of responding to customers' inquiries and emergency cases and to estimate the solid waste amount without any involvement of the truck driver. The proposed system consists of an RFID tag mounted on the bin, an RFID reader in the truck, GPRS/GSM as the web server, and GIS as the map server, database server, and control server. The tracking devices mounted in the trucks collect location information in real time via GPS. This information is transferred continuously through GPRS to a central database. Users are able to view the current location of each truck during the collection stage via a web-based application and thereby manage the fleet. The truck positions and trash bin information are displayed on a digital map, which is made available by the map server. Thus, the solid waste bin and the truck are monitored using the developed system.

  1. An Improvement of Robust Biometrics-Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards

    PubMed Central

    Moon, Jongho; Choi, Younsung; Jung, Jaewook; Won, Dongho

    2015-01-01

    In multi-server environments, user authentication is a very important issue because it provides the authorization that enables users to access their data and services; furthermore, remote user authentication schemes for multi-server environments have solved the problem that has arisen from user’s management of different identities and passwords. For this reason, numerous user authentication schemes that are designed for multi-server environments have been proposed over recent years. In 2015, Lu et al. improved upon Mishra et al.’s scheme, claiming that their remote user authentication scheme is more secure and practical; however, we found that Lu et al.’s scheme is still insecure and incorrect. In this paper, we demonstrate that Lu et al.’s scheme is vulnerable to outsider attack and user impersonation attack, and we propose a new biometrics-based scheme for authentication and key agreement that can be used in multi-server environments; then, we show that our proposed scheme is more secure and supports the required security properties. PMID:26709702

  2. PREDICT: Privacy and Security Enhancing Dynamic Information Monitoring

    DTIC Science & Technology

    2015-08-03

    consisting of global server-side probabilistic assignment by an untrusted server using cloaked locations, followed by feedback-loop guided local... these methods achieve high sensing coverage with low cost using cloaked locations [3]. In follow-on work, the issue of mobility is addressed. Task

  3. Construction and application of an intelligent air quality monitoring system for healthcare environment.

    PubMed

    Yang, Chao-Tung; Liao, Chi-Jui; Liu, Jung-Chun; Den, Walter; Chou, Ying-Chyi; Tsai, Jaw-Ji

    2014-02-01

    Indoor air quality monitoring in healthcare environment has become a critical part of hospital management and policy. Manual air sampling and analysis are cost-inhibitive and do not provide real-time air quality data and response measures. In this month-long study over 14 sampling locations in a public hospital in Taiwan, we observed a positive correlation between CO(2) concentration and population, total bacteria, and particulate matter concentrations, thus monitoring CO(2) concentration as a general indicator for air quality could be a viable option. Consequently, an intelligent environmental monitoring system consisting of a CO(2)/temperature/humidity sensor, a digital plug, and a ZigBee Router and Coordinator was developed and tested. The system also included a backend server that received and analyzed data, as well as activating ventilation and air purifiers when CO(2) concentration exceeded a pre-set value. Alert messages can also be delivered to offsite users through mobile devices.
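
    The actuation rule amounts to a simple threshold check on each sensor report; in the sketch below, the 1000 ppm set point and the plug-control stub are our assumptions, not the study's configuration.

    ```python
    # Threshold-driven actuation sketch: when the CO2 reading exceeds a pre-set
    # value, switch the digital plug powering ventilation and air purifiers.
    # Set point and function names are illustrative only.

    CO2_LIMIT_PPM = 1000

    def actuate_plug(on: bool):
        print("ventilation/purifier", "ON" if on else "OFF")  # digital plug stub

    def on_sensor_report(co2_ppm, temp_c, rh_pct):
        if co2_ppm > CO2_LIMIT_PPM:
            actuate_plug(True)
            return f"alert: CO2 {co2_ppm} ppm exceeds {CO2_LIMIT_PPM} ppm"
        actuate_plug(False)
        return "air quality normal"

    print(on_sensor_report(co2_ppm=1250, temp_c=24.5, rh_pct=55))
    ```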

  4. [The Development of Information Centralization and Management Integration System for Monitors Based on Wireless Sensor Network].

    PubMed

    Xu, Xiu; Zhang, Honglei; Li, Yiming; Li, Bin

    2015-07-01

    We developed an information centralization and management integration system for monitors of different brands and models, using wireless sensor network technologies such as wireless location and wireless communication, based on the existing wireless network. With an adaptive implementation and low cost, the system, which possesses the advantages of real-time operation, efficiency and elaboration, is able to collect the status and data of the monitors, locate the monitors, and provide services through a web server, a video server and a locating server via the local network. Using an intranet computer, clinical and device management staff can access the status and parameters of the monitors. Applications of this system provide convenience and save human resources for clinical departments, as well as promote efficiency, accuracy and elaboration in device management. The successful achievement of this system provides a solution for the integrated and elaborated management of mobile devices, including ventilators and infusion pumps.

  5. Remote diagnosis server

    NASA Technical Reports Server (NTRS)

    Deb, Somnath (Inventor); Ghoshal, Sudipto (Inventor); Malepati, Venkata N. (Inventor); Kleinman, David L. (Inventor); Cavanaugh, Kevin F. (Inventor)

    2004-01-01

    A network-based diagnosis server for monitoring and diagnosing a system, the server being remote from the system it is observing, comprises a sensor for generating signals indicative of a characteristic of a component of the system, a network-interfaced sensor agent coupled to the sensor for receiving signals therefrom, a broker module coupled to the network for sending signals to and receiving signals from the sensor agent, a handler application connected to the broker module for transmitting signals to and receiving signals therefrom, a reasoner application in communication with the handler application for processing, and responding to signals received from the handler application, wherein the sensor agent, broker module, handler application, and reasoner applications operate simultaneously relative to each other, such that the present invention diagnosis server performs continuous monitoring and diagnosing of said components of the system in real time. The diagnosis server is readily adaptable to various different systems.
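
    The claim's pipeline of simultaneously operating parts can be miniaturized as follows, with queues standing in for the network hops between sensor agent, broker, and handler, and a one-rule reasoner that is purely illustrative.

    ```python
    import queue, threading

    # Miniature of the pipeline: sensor agent -> broker -> handler -> reasoner,
    # each part running concurrently. Queues stand in for the network hops.

    to_broker, to_handler = queue.Queue(), queue.Queue()

    def sensor_agent():                      # forwards raw sensor signals
        to_broker.put({"component": "pump-1", "vibration": 0.9})

    def broker():                            # relays between agents and handlers
        to_handler.put(to_broker.get())

    def reasoner(signal):                    # one illustrative diagnosis rule
        return "worn bearing?" if signal["vibration"] > 0.8 else "nominal"

    def handler():                           # hands signals to the reasoner
        signal = to_handler.get()
        print(signal["component"], "->", reasoner(signal))

    threads = [threading.Thread(target=f) for f in (handler, broker, sensor_agent)]
    for t in threads: t.start()
    for t in threads: t.join()
    ```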

  6. KFC Server: interactive forecasting of protein interaction hot spots.

    PubMed

    Darnell, Steven J; LeGault, Laura; Mitchell, Julie C

    2008-07-01

    The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model, a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user-submitted protein-protein or protein-DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots and predicts if the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer able to quickly highlight predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org.

  7. KFC Server: interactive forecasting of protein interaction hot spots

    PubMed Central

    Darnell, Steven J.; LeGault, Laura; Mitchell, Julie C.

    2008-01-01

    The KFC Server is a web-based implementation of the KFC (Knowledge-based FADE and Contacts) model, a machine learning approach for the prediction of binding hot spots, or the subset of residues that account for most of a protein interface's binding free energy. The server facilitates the automated analysis of a user-submitted protein–protein or protein–DNA interface and the visualization of its hot spot predictions. For each residue in the interface, the KFC Server characterizes its local structural environment, compares that environment to the environments of experimentally determined hot spots and predicts if the interface residue is a hot spot. After the computational analysis, the user can visualize the results using an interactive job viewer able to quickly highlight predicted hot spots and surrounding structural features within the protein structure. The KFC Server is accessible at http://kfc.mitchell-lab.org. PMID:18539611

  8. Network Consumption and Storage Needs when Working in a Full-Time Routine Digital Environment in a Large Nonacademic Training Hospital.

    PubMed

    Nap, Marius

    2016-01-01

    Digital pathology is indisputably connected with high demands on data traffic and storage. As a consequence, control of the logistic process and insight into the management of both traffic and storage are essential. We monitored data traffic from the scanners to the server and from the server to the workstations, and registered the storage needs for diagnostic images and additional projects. The results showed that data traffic inside the hospital network (1 Gbps) never exceeded 80 Mbps for scanner-to-server activity, and activity from the server to the workstation took at most 5 Mbps. Data storage per image increased from 300 MB to an average of 600 MB as a result of camera and software updates, and, due to the increased scanning speed, the scanning time was reduced by almost 8 h/day. The introduction of a storage policy of only 12 months for diagnostic images, with rescanning if needed, resulted in a manageable storage window of 45 TB for the period of 1 year. Using simple registration tools allowed the transition of digital pathology into a concise package that allows planning and control. Incorporating the retrieval of such information from scanning and storage devices will reduce management's fear of losing control when introducing digital pathology into the daily routine. © 2016 S. Karger AG, Basel.
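
    A quick back-of-the-envelope check of those figures, assuming decimal units: at roughly 600 MB per image, a 45 TB twelve-month window corresponds to about 75,000 slides per year, or roughly 205 slides per day.

    ```python
    # Rough check of the storage figures quoted above (decimal units assumed).
    MB_PER_SLIDE = 600           # average image size after camera/software updates
    WINDOW_TB = 45               # reported 12-month storage window

    slides_per_year = WINDOW_TB * 1_000_000 / MB_PER_SLIDE
    print(f"{slides_per_year:,.0f} slides/year")       # 75,000 slides/year
    print(f"{slides_per_year / 365:,.1f} slides/day")  # ~205.5 slides/day
    ```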

  9. Intellectual Production Supervision Perform based on RFID Smart Electricity Meter

    NASA Astrophysics Data System (ADS)

    Chen, Xiangqun; Huang, Rui; Shen, Liman; Chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng

    2018-03-01

    This project develops an RFID smart electricity meter production supervision and project management system. The system is designed to meet the requirements for managing project schedule, quality and cost information in the supervision of RFID smart meter production, to provide more comprehensive, timely and accurate quantitative information for the management decisions of supervision engineers and project managers, and to provide technical information for product manufacturing stage files. The development of the system is discussed from the angles of scheme analysis, design, implementation and testing. Focusing on the development of the system, combined with the main business applications and management mode at this stage, the paper concentrates on the functions for monitoring progress, quality and cost information in RFID smart meter production. The paper introduces the design scheme of the system: an overall client/server architecture, a general graphical user interface for the client that presents supervision project management and interactive transaction information, and a server that realizes the main program. The system is programmed in the C# language on the .NET runtime environment; the client and server platforms use the Windows operating system, and the database server software uses Oracle. The overall platform supports mainstream information standards and has good scalability.

  10. A Tale of Two Observing Systems: Interoperability in the World of Microsoft Windows

    NASA Astrophysics Data System (ADS)

    Babin, B. L.; Hu, L.

    2008-12-01

    Louisiana Universities Marine Consortium's (LUMCON) and Dauphin Island Sea Lab's (DISL) environmental monitoring systems provide a unified coastal ocean observing system. These two systems are mirrored to maintain autonomy while offering an integrated data sharing environment. Both systems collect data via Campbell Scientific data loggers, store the data in Microsoft SQL Servers, and disseminate the data in real time on the World Wide Web via Microsoft Internet Information Servers and Active Server Pages (ASP). The utilization of Microsoft Windows technologies presented many challenges to these observing systems as open source tools for interoperability grew, since the current open source tools often require the installation of additional software. In order to make data available in common standards-based formats, "home grown" software has been developed; one example is software to generate XML files for transmission to the National Data Buoy Center (NDBC). OOSTethys partners develop, test and implement easy-to-use, open-source, OGC-compliant software, and have created a working prototype of networked, semantically interoperable, real-time data systems. Partnering with OOSTethys, we are developing a cookbook to implement OGC web services. The implementation will be written in ASP, will run in a Microsoft operating system environment, and will serve data via Sensor Observation Services (SOS). This cookbook will give observing systems running Microsoft Windows the tools to easily participate in the Open Geospatial Consortium (OGC) Oceans Interoperability Experiment (OCEANS IE).
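
    The original "home grown" software was ASP-based; purely as a rough illustration, a Python sketch of generating an observation XML file with hypothetical element names (the actual NDBC schema is not shown in the abstract).

      # Sketch: build a small observation XML document for transmission to a
      # data center. Element and attribute names here are hypothetical.
      import xml.etree.ElementTree as ET
      from datetime import datetime, timezone

      obs = ET.Element("observation", station="LUMCON-01")
      ET.SubElement(obs, "time").text = datetime.now(timezone.utc).isoformat()
      ET.SubElement(obs, "water_temp_c").text = "24.6"
      ET.SubElement(obs, "salinity_psu").text = "18.2"

      ET.ElementTree(obs).write("obs.xml", encoding="utf-8", xml_declaration=True)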

  11. Efficient monitoring of CRAB jobs at CMS

    NASA Astrophysics Data System (ADS)

    Silva, J. M. D.; Balcas, J.; Belforte, S.; Ciangottini, D.; Mascheroni, M.; Rupeika, E. A.; Ivanov, T. T.; Hernandez, J. M.; Vaandering, E.

    2017-10-01

    CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.

  12. Efficient Monitoring of CRAB Jobs at CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Silva, J. M.D.; Balcas, J.; Belforte, S.

    CRAB is a tool used for distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) with a single request. CRAB uses a client-server architecture, where a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN. As with most complex software, good monitoring tools are crucial for efficient use and long-term maintainability. This work gives an overview of the monitoring tools developed to ensure the CRAB server and infrastructure are functional, help operators debug user problems, and minimize overhead and operating cost. This work also illustrates the design choices and gives a report on our experience with the tools we developed and the external ones we used.

  13. An Efficient Algorithm for Server Thermal Fault Diagnosis Based on Infrared Image

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Xie, Ting; Ran, Jian; Gao, Shan

    2017-10-01

    It is essential for a data center to maintain server security and stability. Long-time overload operation or high room temperature may cause service disruption or even a server crash, which would result in great economic loss for business. Currently, the methods to avoid server outages are monitoring and forecasting. A thermal camera can provide fine texture information for monitoring and intelligent thermal management in a large data center. This paper presents an efficient method for server thermal fault monitoring and diagnosis based on infrared images. Initially, the thermal distribution of the server is standardized and the regions of interest in the image are segmented manually. Then texture features, Hu moment features and a modified entropy feature are extracted from the segmented regions. These characteristics are used to analyze and classify thermal faults and then make efficient energy-saving thermal management decisions such as job migration. Because the feature space is large, principal component analysis is employed to reduce the feature dimensions, guaranteeing high processing speed without losing fault-relevant information. Finally, the feature vectors are used as input for SVM training, and thermal fault diagnosis is performed with the optimized SVM classifier. This method supports suggestions for optimizing data center management; it can improve air conditioning efficiency and reduce the energy consumption of the data center. The experimental results show that the maximum detection accuracy is 81.5%.
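
    A minimal sketch of the PCA-plus-SVM classification stage described above, with random stand-in data in place of the extracted texture, Hu-moment and entropy features.

      # Sketch: reduce a large per-region feature vector with PCA, then train
      # an SVM to label thermal faults. The data below are random stand-ins.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 64))      # 64 features per segmented region
      y = rng.integers(0, 2, size=200)    # 0 = normal, 1 = thermal fault

      model = make_pipeline(
          StandardScaler(),
          PCA(n_components=10),           # keep the dominant components only
          SVC(kernel="rbf", C=1.0),
      )
      model.fit(X[:150], y[:150])
      print("held-out accuracy:", model.score(X[150:], y[150:]))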

  14. Remote Monitoring of Post-eruption Volcano Environment Based-On Wireless Sensor Network (WSN): The Mount Sinabung Case

    NASA Astrophysics Data System (ADS)

    Soeharwinto; Sinulingga, Emerson; Siregar, Baihaqi

    2017-01-01

    Accurate information can help authorities make good policies for prevention and mitigation after a volcano eruption disaster. Monitoring the environmental parameters of a post-eruption volcano provides important information for the authorities. Such a monitoring system can be developed using Wireless Sensor Network technology, which has already been applied to flood early warning systems, solar radiation mapping, and watershed monitoring. This paper describes the implementation of a remote environment monitoring system for post-eruption Mount Sinabung. The system monitors three environmental parameters: soil condition, water quality and (outdoor) air quality. Motes equipped with the appropriate sensors, as components of the monitoring system, are placed at sample locations. The measured values from the sensors are periodically sent to a data server using a 3G/GPRS communication module, and the data can be downloaded by the user for further analysis. The measurement and data analysis results generally indicate that the environmental parameters are within the range of normal/standard conditions. The sample locations are safe for living and suitable for cultivation, but awareness is strictly required due to the uncertainty of Sinabung's status.
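
    A sketch of the periodic reporting loop described above, assuming a hypothetical HTTP ingest endpoint and placeholder sensor drivers.

      # Sketch: a station periodically posts its readings to the data server
      # over the cellular uplink. URL and payload fields are hypothetical.
      import json
      import time
      import urllib.request

      SERVER = "http://example.org/sinabung/ingest"   # hypothetical endpoint

      def read_sensors():
          # Placeholder for real soil, water and air quality sensor drivers.
          return {"soil_ph": 6.4, "water_turbidity_ntu": 3.1, "co_ppm": 0.4}

      while True:
          payload = json.dumps({"station": "S01", "ts": time.time(), **read_sensors()})
          req = urllib.request.Request(SERVER, data=payload.encode(),
                                       headers={"Content-Type": "application/json"})
          urllib.request.urlopen(req, timeout=30)     # send over the 3G/GPRS link
          time.sleep(600)                             # report every 10 minutes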

  15. An assessment of burn prevention knowledge in a high burn-risk environment: restaurants.

    PubMed

    Piazza-Waggoner, Carrie; Adams, C D; Goldfarb, I W; Slater, H

    2002-01-01

    Our facility has seen an increase in the number of cases of children burned in restaurants. Fieldwork has revealed many unsafe serving practices in restaurants in our tristate area. The current research targets what appears to be an underexamined burn-risk environment, restaurants, to examine servers' knowledge about burn prevention and burn care for customers. Participants included 71 local restaurant servers and 53 servers from various restaurants who were recruited from undergraduate courses. All participants completed a brief demographic form as well as a Burn Knowledge Questionnaire. Server knowledge was found to be low (i.e., less than 50% accuracy). Yet most servers reported that they felt customer burn safety was important enough to change the way that they serve. Additionally, length of time employed as a server was a significant predictor of servers' burn knowledge (i.e., more years serving was associated with higher knowledge). Finally, individual items were examined to identify potential targets for developing prevention programs.

  16. Ubiquitous-health (U-Health) monitoring systems for elders and caregivers

    NASA Astrophysics Data System (ADS)

    Moon, Gyu; Lim, Kyung-won; Yoo, Young-min; An, Hye-min; Lee, Ki Seop; Szu, Harold

    2011-06-01

    This paper presents two affordable, low-tech systems for household biomedical wellness monitoring. The first system, JIKIMI (pronounced "caregiver" in Korean), is a remote monitoring system that analyzes the behavior patterns of elders who live alone. JIKIMI is composed of an in-house sensing system: a set of wireless sensor nodes containing a pyroelectric infrared sensor to detect the motion of elders, an emergency button, and a magnetic sensor that detects the opening and closing of doors. The system is also equipped with a server system comprising a database and web server; the server provides the mechanism for web-based monitoring by caregivers. The second system, Reader of Bottle Information (ROBI), is an assistant system which reads out the contents of bottles for elders. ROBI is composed of bottles with attached RFID tags and an advice system composed of a wireless RFID reader, a gateway and a remote database server. The RFID tags, attached to the caps of the bottles, are used in conjunction with the advice system. These systems have been in use for three years and have proven useful for caregivers in providing more efficient and effective care services.

  17. Measurement and Data Transmission Validity of a Multi-Biosensor System for Real-Time Remote Exercise Monitoring Among Cardiac Patients.

    PubMed

    Rawstorn, Jonathan C; Gant, Nicholas; Warren, Ian; Doughty, Robert Neil; Lever, Nigel; Poppe, Katrina K; Maddison, Ralph

    2015-03-20

    Remote telemonitoring holds great potential to augment management of patients with coronary heart disease (CHD) and atrial fibrillation (AF) by enabling regular physiological monitoring during physical activity. Remote physiological monitoring may improve home and community exercise-based cardiac rehabilitation (exCR) programs and could improve assessment of the impact and management of pharmacological interventions for heart rate control in individuals with AF. Our aim was to evaluate the measurement validity and data transmission reliability of a remote telemonitoring system comprising a wireless multi-parameter physiological sensor, custom mobile app, and middleware platform, among individuals in sinus rhythm and AF. Participants in sinus rhythm and with AF undertook simulated daily activities and low, moderate, and/or high intensity exercise. Remote monitoring system heart rate and respiratory rate were compared to reference measures (12-lead ECG and indirect calorimeter). Wireless data transmission loss was calculated between the sensor, mobile app, and remote Internet server. Median heart rate (-0.30 to 1.10 b∙min⁻¹) and respiratory rate (-1.25 to 0.39 br∙min⁻¹) measurement biases were small, yet statistically significant (all P≤.003) due to the large number of observations. Measurement reliability was generally excellent (rho=.87-.97, all P<.001; intraclass correlation coefficient [ICC]=.94-.98, all P<.001; coefficient of variation [CV]=2.24-7.94%), although respiratory rate measurement reliability was poor among AF participants (rho=.43, P<.001; ICC=.55, P<.001; CV=16.61%). Data loss was minimal (<5%) when all system components were active; however, instability of the network hosting the remote data capture server resulted in data loss at the remote Internet server during some trials. System validity was sufficient for remote monitoring of heart and respiratory rates across a range of exercise intensities. Remote exercise monitoring has potential to augment current exCR and heart rate control management approaches by enabling the provision of individually tailored care to individuals outside traditional clinical environments. ©Jonathan C Rawstorn, Nicholas Gant, Ian Warren, Robert Neil Doughty, Nigel Lever, Katrina K Poppe, Ralph Maddison. Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 20.03.2015.
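
    For illustration, a sketch of two of the agreement statistics reported above (bias and coefficient of variation) on made-up paired readings; the study's exact statistical formulations may differ.

      # Sketch: agreement between a wearable sensor and a reference instrument.
      # The paired arrays are illustrative stand-ins, not study data.
      import numpy as np

      sensor = np.array([72.0, 95.5, 121.3, 140.2, 158.8])     # b/min, device
      reference = np.array([71.5, 96.0, 120.8, 141.0, 159.5])  # b/min, 12-lead ECG

      diff = sensor - reference
      bias = np.median(diff)                                   # median measurement bias
      # One common CV formulation: SD of the differences over the reference mean.
      cv = 100 * np.std(diff, ddof=1) / np.mean(reference)
      print(f"bias = {bias:+.2f} b/min, CV = {cv:.2f}%")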

  18. [The therapeutic drug monitoring network server of tacrolimus for Chinese renal transplant patients].

    PubMed

    Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei

    2011-07-01

    This study aims to develop a therapeutic drug monitoring (TDM) network server of tacrolimus for Chinese renal transplant patients, which can help doctors manage patients' information and provides three levels of prediction. The database management system MySQL was employed to build and manage the database of patients' and doctors' information, and hypertext markup language (HTML) and JavaServer Pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above programming languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. It is shown that the network server has the basic functions for database management and the three levels of prediction needed to help doctors optimize the tacrolimus regimen for Chinese renal transplant patients.
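
    A minimal sketch of the individual-level (Bayesian) estimation step, assuming a one-compartment model and illustrative prior values; the published population model is not reproduced here.

      # Sketch: maximize the posterior of an individual clearance given sparse
      # concentration observations and a lognormal population prior.
      # All numbers are illustrative, not the study's model parameters.
      import numpy as np
      from scipy.optimize import minimize_scalar

      dose, v = 5.0, 50.0                 # mg dose; volume of distribution (L)
      t = np.array([2.0, 12.0])           # sampling times (h)
      c_obs = np.array([0.085, 0.038])    # observed concentrations (mg/L)
      cl_pop, omega = 3.0, 0.3            # prior: population clearance, log-SD
      sigma = 0.005                       # residual error SD (mg/L)

      def neg_log_posterior(cl):
          c_pred = (dose / v) * np.exp(-(cl / v) * t)   # one-compartment, IV bolus
          loglik = -0.5 * np.sum(((c_obs - c_pred) / sigma) ** 2)
          logprior = -0.5 * (np.log(cl / cl_pop) / omega) ** 2
          return -(loglik + logprior)

      cl_map = minimize_scalar(neg_log_posterior, bounds=(0.1, 20.0),
                               method="bounded").x
      print(f"MAP clearance estimate: {cl_map:.2f} L/h")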

  19. An Optimization of the Basic School Military Occupational Skill Assignment Process

    DTIC Science & Technology

    2003-06-01

    Corps Intranet (NMCI)23 supports it. We evaluated the use of Microsoft’s SQL Server, but dismissed this after learning that TBS did not possess a SQL ...Server license or a qualified SQL Server administrator.24 SQL Server would have provided for additional security measures not available in MS...administrator. Although not as powerful as SQL Server, MS Access can handle the multi-user environment necessary for this system.25 The training

  20. Development of a Personal Integrated Environmental Monitoring System

    PubMed Central

    Wong, Man Sing; Yip, Tsan Pong; Mok, Esmond

    2014-01-01

    Environmental pollution in the urban areas of Hong Kong has become a serious public issue, but most urban inhabitants have no means of judging their own living environment in terms of danger thresholds and overall livability. There currently exist many low-cost sensors, such as ultraviolet, temperature and air quality sensors, that provide reasonably accurate data. In this paper, the development and evaluation of an Integrated Environmental Monitoring System (IEMS) are illustrated. This system consists of three components: (i) position determination and sensor data collection for real-time geospatial-based environmental monitoring; (ii) on-site data communication and visualization with the aid of an Android-based application; and (iii) data analysis on a web server. The system has been shown to work well during field tests on a bus journey and at a construction site. It provides an effective service platform for collecting environmental data in near real time, and raises public awareness of environmental quality in micro-environments. PMID:25420154

  1. Application of Aquaculture Monitoring System Based on CC2530

    NASA Astrophysics Data System (ADS)

    Chen, H. L.; Liu, X. Q.

    In order to improve the intelligence level of aquaculture technology, this paper puts forward a remote wireless monitoring system based on ZigBee technology, GPRS technology and the Android mobile phone platform. The system is composed of a wireless sensor network (WSN), a GPRS module, a PC server, and an Android client. The WSN was set up with CC2530 chips based on the ZigBee protocol to realize the collection of water quality parameters such as water level, temperature, pH and dissolved oxygen. The GPRS module realizes remote communication between the WSN and the PC server, and the Android client communicates with the server to monitor the water quality. PID (proportion, integration, differentiation) control is adopted in the control part: control commands from the Android mobile phone are sent to the server, which forwards them to the lower machine to control the water level regulating valve and the oxygenation pump. After practical testing of the system in Liyang, Jiangsu province, China, temperature measurement accuracy reached 0.5°C, pH measurement accuracy reached 0.3, water level control precision was within ±3 cm, and dissolved oxygen control precision was within ±0.3 mg/L. All the indexes meet the requirements, so this system is well suited for aquaculture.
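
    A minimal sketch of the PID loop described above, with hypothetical gains and stubbed sensor/actuator drivers standing in for the deployed lower-machine interface.

      # Sketch: regulate water level with a PID controller actuating a valve.
      # Gains, setpoint and the stubs below are illustrative assumptions.
      import time

      KP, KI, KD = 2.0, 0.1, 0.5          # illustrative PID gains
      SETPOINT_CM = 80.0                  # target water level

      def read_water_level_cm():
          return 78.5                     # stub: replace with the real sensor driver

      def set_valve_opening(pct):
          print(f"valve opening -> {pct:.1f}%")   # stub: replace with real actuator

      integral, prev_error = 0.0, 0.0
      for _ in range(5):                  # the deployed loop would run indefinitely
          error = SETPOINT_CM - read_water_level_cm()
          integral += error
          derivative = error - prev_error
          output = KP * error + KI * integral + KD * derivative
          set_valve_opening(max(0.0, min(100.0, output)))   # clamp to 0-100 %
          prev_error = error
          time.sleep(1.0)                 # 1 Hz control period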

  2. Delivering "Just-In-Time" Smoking Cessation Support Via Mobile Phones: Current Knowledge and Future Directions.

    PubMed

    Naughton, Felix

    2016-05-28

    Smoking lapses early on during a quit attempt are highly predictive of failing to quit. A large proportion of these lapses are driven by cravings brought about by situational and environmental cues. Use of cognitive-behavioral lapse prevention strategies to combat cue-induced cravings is associated with a reduced risk of lapse, but evidence is lacking in how these strategies can be effectively promoted. Unlike most traditional methods of delivering behavioral support, mobile phones can in principle deliver automated support, including lapse prevention strategy recommendations, Just-In-Time (JIT) for when a smoker is most vulnerable, and prevent early lapse. JIT support can be activated by smokers themselves (user-triggered), by prespecified rules (server-triggered) or through sensors that dynamically monitor a smoker's context and trigger support when a high risk environment is sensed (context-triggered), also known as a Just-In-Time Adaptive Intervention (JITAI). However, research suggests that user-triggered JIT cessation support is seldom used and existing server-triggered JIT support is likely to lack sufficient accuracy to effectively target high-risk situations in real time. Evaluations of mobile phone cessation interventions that include user and/or server-triggered JIT support have yet to adequately assess whether this improves management of high risk situations. While context-triggered systems have the greatest potential to deliver JIT support, there are, as yet, no impact evaluations of such systems. Although it may soon be feasible to learn about and monitor a smoker's context unobtrusively using their smartphone without burdensome data entry, there are several potential advantages to involving the smoker in data collection. This commentary describes the current knowledge on the potential for mobile phones to deliver automated support to help smokers manage or cope with high risk environments or situations for smoking, known as JIT support. The article categorizes JIT support into three main types: user-triggered, server-triggered, and context-triggered. For each type of JIT support, a description of the evidence and their potential to effectively target specific high risk environments or situations is described. The concept of unobtrusive sensing without user data entry to inform the delivery of JIT support is finally discussed in relation to potential advantages and disadvantages for behavior change. © The Author 2016. Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. The development of a tele-monitoring system for physiological parameters based on the B/S model.

    PubMed

    Shuicai, Wu; Peijie, Jiang; Chunlan, Yang; Haomin, Li; Yanping, Bai

    2010-01-01

    A new physiological multi-parameter remote monitoring system was developed based on the browser/server (B/S) model. The system consists of a server monitoring center, the Internet, and PC-based multi-parameter monitors. Using the B/S model, clients can browse web pages via the server monitoring center and download and install ActiveX controls. The physiological multi-parameters are collected, displayed and remotely transmitted. The experimental results show that the system is stable, reliable and operates in real time. The system is suitable for physiological multi-parameter remote monitoring in family and community healthcare. Copyright © 2010 Elsevier Ltd. All rights reserved.

  4. Client-Server: What Is It and Are We There Yet?

    ERIC Educational Resources Information Center

    Gershenfeld, Nancy

    1995-01-01

    Discusses client-server architecture in dumb terminals, personal computers, local area networks, and graphical user interfaces. Focuses on functions offered by client personal computers: individualized environments; flexibility in running operating systems; advanced operating system features; multiuser environments; and centralized data…

  5. Performance of a distributed superscalar storage server

    NASA Technical Reports Server (NTRS)

    Finestead, Arlan; Yeager, Nancy

    1993-01-01

    The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high-speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be developed, which would allow for a cost-effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding Ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster across a higher-bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck; UniTree sites requiring a high ratio of file creations and deletions to reads and writes would run into it. It is possible to improve UniTree Name Server performance by bypassing the UniTree LibUnix library altogether, communicating directly with the UniTree Name Server and optimizing creations. Although testing was performed in a less than ideal environment, the performance statistics stated in this paper should give end-users a realistic idea of what performance they can expect in this type of setup.

  6. Web design and development for centralize area radiation monitoring system in Malaysian Nuclear Agency

    NASA Astrophysics Data System (ADS)

    Ibrahim, Maslina Mohd; Yussup, Nolida; Haris, Mohd Fauzi; Soh @ Shaari, Syirrazie Che; Azman, Azraf; Razalim, Faizal Azrin B. Abdul; Yapp, Raymond; Hasim, Harzawardi; Aslan, Mohd Dzul Aiman

    2017-01-01

    One of the applications for radiation detectors is area monitoring, which is crucial for safety, especially at places where radiation sources are involved. An environmental radiation monitoring system is a professional system that combines flexibility and ease of use for data collection and monitoring. Nowadays, with the growth of technology, devices and equipment can be connected to the network and the Internet to enable online data acquisition, which allows data from the area monitoring devices to be transmitted to any location directly and quickly. In Nuclear Malaysia, area radiation monitoring devices are located at several selected locations such as laboratories and radiation facilities. The system utilizes Ethernet as the communication medium for acquiring area radiation levels from the radiation detectors and stores the data on a server for recording and analysis. This paper discusses the design and development of a website that enables all users in Nuclear Malaysia to access and monitor the radiation level of each radiation detector online in real time. The web design also includes a query feature for historical data from various locations. The communication between the server software and the web server is discussed in detail in this paper.

  7. Design details of Intelligent Instruments for PLC-free Cryogenic measurements, control and data acquisition

    NASA Astrophysics Data System (ADS)

    Antony, Joby; Mathuria, D. S.; Chaudhary, Anup; Datta, T. S.; Maity, T.

    2017-02-01

    Cryogenic networks for linear accelerator operations demand a large number of cryogenic sensors, associated instruments and other control instrumentation to measure, monitor and control different cryogenic parameters remotely. Here we describe an alternative approach: six types of newly designed integrated intelligent cryogenic instruments, called device servers, each of which combines the complete sensor-specific front-end analog instrumentation with a common digital back-end HTTP server, yielding a crateless, PLC-free model of control and data acquisition. These sensor-specific instruments, viz. the LHe server, LN2 server, control output server, pressure server, vacuum server and temperature server, are fully deployed over LAN for the cryogenic operations of the IUAC linac (Inter University Accelerator Centre linear accelerator), New Delhi. This indigenous design offers salient features such as global connectivity, low cost due to the crateless model, easy signal processing due to the integrated design, less cabling, and device interconnectivity.

  8. Optimal Service Capacities in a Competitive Multiple-Server Queueing Environment

    NASA Astrophysics Data System (ADS)

    Ching, Wai-Ki; Choi, Sin-Man; Huang, Min

    The study of the economic behavior of service providers in a competitive environment is an important and interesting research issue. A two-server queueing model was proposed in Kalai et al. [11] for this purpose. Their model aims at studying the role and impact of service capacity in capturing a larger market share so as to maximize the long-run expected profit. They formulate the problem as a two-person strategic game and analyze the equilibrium solutions. The main aim of this paper is to extend the results of the two-server queueing model in [11] to the case of multiple servers. We focus only on the case in which the queueing system is stable.

  9. Long-Term Animal Observation by Wireless Sensor Networks with Sound Recognition

    NASA Astrophysics Data System (ADS)

    Liu, Ning-Han; Wu, Chen-An; Hsieh, Shu-Ju

    Because wireless sensor networks can transmit data wirelessly and can be deployed easily, they are used in the wild to monitor environmental change. However, the lifetime of a sensor is limited by its battery; in particular, when the monitored data type is audio, the lifetime is very short due to the huge amount of data transmission. Intuitively, if a sensor mote analyzes the sensed data itself and decides not to deliver them to the server, it can reduce its energy expenditure. Nevertheless, a sensor mote is not powerful enough to run complicated methods, so it is an urgent issue to design a method that maintains analysis speed and accuracy within the restricted memory and processor budget. This research proposes an embedded audio processing module in the sensor mote that extracts and analyzes audio features in advance. Then, by estimating the likelihood of an observed animal sound from its frequency distribution, only the interesting audio data are sent back to the server. A prototype of the WSN system was built and examined in the wild to observe frogs. According to the experimental results, our method effectively reduces the energy consumed by the sensors and prolongs the observing time of animal-detecting sensors.
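
    A sketch of the on-mote filtering idea: keep a clip only if enough of its spectral energy falls in an assumed call band (band limits and threshold are illustrative, not from the paper).

      # Sketch: decide whether to transmit an audio clip based on how much of
      # its spectral energy lies in the band where the target species calls.
      import numpy as np

      FS = 8000                      # sampling rate (Hz)
      CALL_BAND = (800, 2500)        # assumed frog-call band (Hz)
      THRESHOLD = 0.6                # fraction of energy required in band

      def should_transmit(clip):
          spectrum = np.abs(np.fft.rfft(clip)) ** 2
          freqs = np.fft.rfftfreq(len(clip), d=1.0 / FS)
          in_band = (freqs >= CALL_BAND[0]) & (freqs <= CALL_BAND[1])
          return spectrum[in_band].sum() / spectrum.sum() >= THRESHOLD

      # Example: a synthetic 1 kHz tone falls inside the band and would be sent.
      t = np.arange(FS) / FS
      print(should_transmit(np.sin(2 * np.pi * 1000 * t)))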

  10. Implementation of a WAP-based telemedicine system for patient monitoring.

    PubMed

    Hung, Kevin; Zhang, Yuan-Ting

    2003-06-01

    Many parties have already demonstrated telemedicine applications that use cellular phones and the Internet. A current trend in telecommunication is the convergence of wireless communication and computer network technologies, and the emergence of wireless application protocol (WAP) devices is an example. Since WAP will also be a common feature found in future mobile communication devices, it is worthwhile to investigate its use in telemedicine. This paper describes the implementation and experiences with a WAP-based telemedicine system for patient-monitoring that has been developed in our laboratory. It utilizes WAP devices as mobile access terminals for general inquiry and patient-monitoring services. Authorized users can browse the patients' general data, monitored blood pressure (BP), and electrocardiogram (ECG) on WAP devices in store-and-forward mode. The applications, written in wireless markup language (WML), WMLScript, and Perl, resided in a content server. A MySQL relational database system was set up to store the BP readings, ECG data, patient records, clinic and hospital information, and doctors' appointments with patients. A wireless ECG subsystem was built for recording ambulatory ECG in an indoor environment and for storing ECG data into the database. For testing, a WAP phone compliant with WAP 1.1 was used at GSM 1800 MHz by circuit-switched data (CSD) to connect to the content server through a WAP gateway, which was provided by a mobile phone service provider in Hong Kong. Data were successfully retrieved from the database and displayed on the WAP phone. The system shows how WAP can be feasible in remote patient-monitoring and patient data retrieval.

  11. Implementation of an Enterprise Information Portal (EIP) in the Loyola University Health System

    PubMed Central

    Price, Ronald N.; Hernandez, Kim

    2001-01-01

    Loyola University Chicago Stritch School of Medicine and Loyola University Medical Center have long histories in the development of applications to support the institutions' missions of education, research and clinical care. In late 1998, the institutions' application development group undertook an ambitious program to re-architect more than 10 years of legacy application development (30+ core applications) into a unified World Wide Web (WWW) environment. The primary project objectives were to construct an environment that would support the rapid development of n-tier, web-based applications while providing standard methods for user authentication/validation, security/access control and definition of a user's organizational context. The project's efforts resulted in Loyola's Enterprise Information Portal (EIP), which meets the aforementioned objectives. This environment: 1) allows access to other vertical Intranet portals (e.g., electronic medical record, patient satisfaction information and faculty effort); 2) supports end-user desktop customization; and 3) provides a means for standardized application “look and feel.” The portal was constructed utilizing readily available hardware and software. Server hardware consists of multiprocessor (Intel Pentium 500 MHz) Compaq 6500 servers with one gigabyte of random access memory and 75 gigabytes of hard disk storage. Microsoft SQL Server was selected to house the portal's internal security data structures, Netscape Enterprise Server was selected for the web server component of the environment, and Allaire's ColdFusion was chosen for the access and application tiers. Total costs for the portal environment were less than $40,000. User data storage is accomplished through two Microsoft SQL Servers and an existing SUN Microsystems enterprise server with eight processors and 750 gigabytes of disk storage running the Sybase relational database manager. Total storage capacity for all systems exceeds one terabyte. In the past 12 months, the EIP has supported the development of more than 88 applications and is utilized by more than 2,200 users.

  12. Design and Evaluation of a Proxy-Based Monitoring System for OpenFlow Networks.

    PubMed

    Taniguchi, Yoshiaki; Tsutsumi, Hiroaki; Iguchi, Nobukazu; Watanabe, Kenzi

    2016-01-01

    Software-Defined Networking (SDN) has attracted attention along with the popularization of cloud environments and server virtualization. In SDN, the control plane and the data plane are decoupled so that the logical topology and routing control can be configured dynamically depending on network conditions. To obtain network conditions precisely, a network monitoring mechanism is necessary. In this paper, we focus on OpenFlow, a core technology for realizing SDN. We propose, design, implement, and evaluate a network monitoring system for OpenFlow networks. Our proposed system acts as a proxy between an OpenFlow controller and OpenFlow switches. Through experimental evaluations, we confirm that our proposed system can capture packets and monitor traffic information according to the administrator's configuration. In addition, we show that our proposed system does not cause significant degradation of overall network performance.
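
    A rough sketch of the proxy placement only: a plain TCP relay between switches and controller that can observe every message in transit. Real OpenFlow message parsing is omitted, and the addresses are hypothetical.

      # Sketch: a transparent TCP relay in the controller-switch path.
      # Only raw byte counts are observed here; a real monitor would parse
      # OpenFlow headers. Addresses below are hypothetical.
      import socket
      import threading

      CONTROLLER = ("127.0.0.1", 6653)   # upstream OpenFlow controller
      LISTEN = ("0.0.0.0", 16653)        # switches connect here instead

      def pipe(src, dst, label):
          while data := src.recv(4096):
              print(f"{label}: {len(data)} bytes")   # monitoring hook
              dst.sendall(data)

      srv = socket.create_server(LISTEN)
      while True:
          switch_conn, _ = srv.accept()
          ctrl_conn = socket.create_connection(CONTROLLER)
          threading.Thread(target=pipe, args=(switch_conn, ctrl_conn, "sw->ctl"),
                           daemon=True).start()
          threading.Thread(target=pipe, args=(ctrl_conn, switch_conn, "ctl->sw"),
                           daemon=True).start()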

  13. EMMNet: sensor networking for electricity meter monitoring.

    PubMed

    Lin, Zhi-Ting; Zheng, Jie; Ji, Yu-Sheng; Zhao, Bao-Hua; Qu, Yu-Gui; Huang, Xu-Dong; Jiang, Xiu-Fang

    2010-01-01

    Smart sensors are emerging as a promising technology for a large number of application domains. This paper presents a collection of requirements and guidelines that serve as a basis for a general smart sensor architecture to monitor electricity meters. It also presents an electricity meter monitoring network, named EMMNet, comprised of data collectors, data concentrators, hand-held devices, a centralized server, and clients. EMMNet provides long-distance communication capabilities, which make it suitable for complex urban environments. In addition, the operational cost of EMMNet is low compared with other existing remote meter monitoring systems based on GPRS. A new dynamic tree protocol based on the application requirements, which can significantly improve the reliability of the network, is also proposed. We are currently conducting tests on five networks and investigating network problems for further improvements. Evaluation results indicate that EMMNet enhances the efficiency and accuracy of the reading, recording, and calibration of electricity meters.

  14. EMMNet: Sensor Networking for Electricity Meter Monitoring

    PubMed Central

    Lin, Zhi-Ting; Zheng, Jie; Ji, Yu-Sheng; Zhao, Bao-Hua; Qu, Yu-Gui; Huang, Xu-Dong; Jiang, Xiu-Fang

    2010-01-01

    Smart sensors are emerging as a promising technology for a large number of application domains. This paper presents a collection of requirements and guidelines that serve as a basis for a general smart sensor architecture to monitor electricity meters. It also presents an electricity meter monitoring network, named EMMNet, comprised of data collectors, data concentrators, hand-held devices, a centralized server, and clients. EMMNet provides long-distance communication capabilities, which make it suitable for complex urban environments. In addition, the operational cost of EMMNet is low compared with other existing remote meter monitoring systems based on GPRS. A new dynamic tree protocol based on the application requirements, which can significantly improve the reliability of the network, is also proposed. We are currently conducting tests on five networks and investigating network problems for further improvements. Evaluation results indicate that EMMNet enhances the efficiency and accuracy of the reading, recording, and calibration of electricity meters. PMID:22163551

  15. Design and Evaluation of a Proxy-Based Monitoring System for OpenFlow Networks

    PubMed Central

    Taniguchi, Yoshiaki; Tsutsumi, Hiroaki; Iguchi, Nobukazu; Watanabe, Kenzi

    2016-01-01

    Software-Defined Networking (SDN) has attracted attention along with the popularization of cloud environments and server virtualization. In SDN, the control plane and the data plane are decoupled so that the logical topology and routing control can be configured dynamically depending on network conditions. To obtain network conditions precisely, a network monitoring mechanism is necessary. In this paper, we focus on OpenFlow, a core technology for realizing SDN. We propose, design, implement, and evaluate a network monitoring system for OpenFlow networks. Our proposed system acts as a proxy between an OpenFlow controller and OpenFlow switches. Through experimental evaluations, we confirm that our proposed system can capture packets and monitor traffic information according to the administrator's configuration. In addition, we show that our proposed system does not cause significant degradation of overall network performance. PMID:27006977

  16. Reliable Collection of Real-Time Patient Physiologic Data from less Reliable Networks: a "Monitor of Monitors" System (MoMs).

    PubMed

    Hu, Peter F; Yang, Shiming; Li, Hsiao-Chi; Stansbury, Lynn G; Yang, Fan; Hagegeorge, George; Miller, Catriona; Rock, Peter; Stein, Deborah M; Mackenzie, Colin F

    2017-01-01

    Research and practice based on automated electronic patient monitoring and data collection systems are significantly limited by system downtime. We asked whether a triple-redundant Monitor of Monitors System (MoMs), collecting and summarizing key information from system-wide data sources, could achieve high fault tolerance, early diagnosis of system failure, and improved data collection rates. In our Level I trauma center, patient vital signs (VS) monitors were networked to collect real-time physiologic data streams from 94 bed units in our various resuscitation, operating, and critical care units. To minimize the impact of collection-server failure, three BedMaster® VS servers were used in parallel to collect data from all bed units. To locate and diagnose system failures, we summarized critical information from the high-throughput data streams in real time in a dashboard viewer, and compared the pre- and post-MoMs phases to evaluate data collection performance in terms of availability time, active collection rates, and gap duration, occurrence, and categories. Single-server collection rates in the 3-month period before MoMs deployment ranged from 27.8% to 40.5%, with a combined collection rate of 79.1%. Reasons for gaps included collection server failure, software instability, inconsistent individual bed settings, and monitor servicing. In the 6-month post-MoMs deployment period, average collection rates were 99.9%. A triple-redundant patient data collection system with real-time diagnostic information summarization and representation improved the reliability of massive clinical data collection to nearly 100% in a Level I trauma center. Such a data collection framework may also increase the automation level of hospital-wide information aggregation for optimal allocation of health care resources.
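
    A sketch of the redundancy idea: de-duplicating the same bed unit's samples collected by three parallel servers, with a hypothetical record format.

      # Sketch: merge redundant collection streams, keeping one copy of each
      # (bed, timestamp) sample regardless of which server captured it.
      def merge_redundant(streams):
          """streams: iterable of lists of (bed_id, timestamp, value) tuples."""
          merged = {}
          for stream in streams:                 # one list per collection server
              for bed_id, ts, value in stream:
                  merged.setdefault((bed_id, ts), value)   # first copy wins
          return merged

      server_a = [("bed07", 1000, 72), ("bed07", 1001, 73)]
      server_b = [("bed07", 1001, 73), ("bed07", 1002, 74)]  # overlaps with A
      server_c = []                                          # this collector was down
      vitals = merge_redundant([server_a, server_b, server_c])
      print(f"{len(vitals)} unique samples recovered")       # 3, despite the outage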

  17. Geographic information systems - transportation ISTEA management systems server net prototype pooled fund study : phase B - summary

    DOT National Transportation Integrated Search

    1997-06-01

    The Geographic Information System-Transportation (GIS-T) ISTEA Management Systems Server Net Prototype Pooled Fund Study represents the first national cooperative effort in the transportation industry to address the management and monitoring systems ...

  18. A sensor monitoring system for telemedicine, safety and security applications

    NASA Astrophysics Data System (ADS)

    Vlissidis, Nikolaos; Leonidas, Filippos; Giovanis, Christos; Marinos, Dimitrios; Aidinis, Konstantinos; Vassilopoulos, Christos; Pagiatakis, Gerasimos; Schmitt, Nikolaus; Pistner, Thomas; Klaue, Jirka

    2017-02-01

    A sensor system capable of medical, safety and security monitoring in avionic and other environments (e.g. homes) is examined. For application inside an aircraft cabin, the system relies on an optical cellular network that connects each seat to a server and uses a set of database applications to process data related to passengers' health, safety and security status. Health monitoring typically encompasses electrocardiogram, pulse oximetry, blood pressure, body temperature and respiration rate, while safety and security monitoring is related to the standard flight attendance duties, such as cabin preparation for take-off, landing, flight in regions of turbulence, etc. In contrast to previous related works, this article focuses on the system's modules (medical and safety sensors and associated hardware), the database applications used for the overall control of the monitoring function and the potential use of the system for security applications. Further tests involving medical, safety and security sensing performed in a real A340 mock-up are also described, and reference is made to the possible use of the sensing system in alternative environments and applications, such as health monitoring within other means of transport (e.g. trains or small passenger sea vessels) as well as for remotely located home users, over a wired Ethernet network or the Internet.

  19. Networked Instructional Chemistry: Using Technology To Teach Chemistry

    NASA Astrophysics Data System (ADS)

    Smith, Stanley; Stovall, Iris

    1996-10-01

    Networked multimedia microcomputers provide new ways to help students learn chemistry and to help instructors manage the learning environment. This technology is used to replace some traditional laboratory work, collect on-line experimental data, enhance lectures and quiz sections with multimedia presentations, provide prelaboratory training for the beginning non-chemistry-major organic laboratory, provide electronic homework for organic chemistry students, give graduate students access to real NMR data for analysis, and provide access to molecular modeling tools. The integration of all of these activities into an active learning environment is made possible by a client-server network of hundreds of computers. This requires not only instructional software but also classroom and course management software, computers, networking, and room management. Combining computer-based work with traditional course material is made possible with software management tools that allow the instructor to monitor the progress of each student and make available an on-line gradebook so students can see their grades and class standing. This client-server based system extends the capabilities of the earlier mainframe-based PLATO system, which was used for instructional computing. This paper outlines the components of a technology center used to support over 5,000 students per semester.

  20. Development of HIHM (Home Integrated Health Monitor) for ubiquitous home healthcare.

    PubMed

    Kim, Jung Soo; Kim, Beom Oh; Park, Kwang Suk

    2007-01-01

    The Home Integrated Health Monitor (HIHM) was developed for ubiquitous home healthcare. From quantitative analysis, we elicited a model of the chair. The HIHM can detect the electrocardiogram (ECG) and photoplethysmogram (PPG) non-intrusively; it can also estimate blood pressure (BP) non-intrusively and measure blood glucose and ear temperature. The detected signals and information are transmitted to a home gateway and home server through Zigbee communication technology. The home server carries them to a healthcare center, where specialists such as medical doctors can monitor them over the Internet. There is also a feedback system. This device has potential for the study of ubiquitous home healthcare.

  1. Deploying Server-side File System Monitoring at NERSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselton, Andrew

    2009-05-01

    The Franklin Cray XT4 at the NERSC center was equipped with the server-side I/O monitoring infrastructure Cerebro/LMT, which is described here in detail. Insights gained from the data produced include a better understanding of instantaneous data rates during file system testing, file system behavior during regular production time, and long-term average behaviors. Information and insights gleaned from this monitoring support efforts to proactively manage the I/O infrastructure on Franklin. A simple model for I/O transactions is introduced and compared with the 250 million observations sent to the LMT database from August 2008 to February 2009.

  2. ORBIT: an integrated environment for user-customized bioinformatics tools.

    PubMed

    Bellgard, M I; Hiew, H L; Hunter, A; Wiebrands, M

    1999-10-01

    A large number of computational programs are freely available to bioinformaticians via a client/server, web-based environment. However, the client interface to these tools (typically an HTML form page) cannot be customized from the client side, as it is created by the service provider. The form page is usually generic enough to cater for a wide range of users, which implies that a user cannot set advanced program parameters as defaults on the form or customize the interface to his/her specific requirements or preferences. Currently, there is a lack of end-user interface environments that can be modified by the user when accessing computer programs available on a remote server running on an intranet or over the Internet. We have implemented a client/server system called ORBIT (Online Researcher's Bioinformatics Interface Tools) in which individual clients can have interfaces created and customized to command-line-driven, server-side programs. Thus, Internet-based interfaces can be tailored to a user's specific bioinformatic needs. As interfaces are created on the client machine independently of the server, there can be different interfaces to the same server-side program to cater for different parameter settings. Interface customization is relatively quick (between 10 and 60 min), and all client interfaces are integrated into a single modular environment which will run on any computer platform supporting Java. The system has been designed to allow for a number of future enhancements and features. ORBIT represents an important advance in the way researchers gain access to bioinformatics tools on the Internet.

  3. Environmental Monitoring Using Sensor Networks

    NASA Astrophysics Data System (ADS)

    Yang, J.; Zhang, C.; Li, X.; Huang, Y.; Fu, S.; Acevedo, M. F.

    2008-12-01

    Environmental observatories, consisting of a variety of sensor systems, computational resources and informatics, are important for us to observe, model, predict, and ultimately help preserve the health of the natural environment. The commoditization and proliferation of coin-to-palm sized wireless sensors will allow environmental monitoring with unprecedented fine spatial and temporal resolution. Once deployed, these sensors can identify themselves, locate their positions, describe their functions, and self-organize into a network. They communicate through a wireless channel with nearby sensors and transmit data through multi-hop protocols to a gateway, which can forward information to a remote data server. In this project, we describe an environmental observatory called Texas Environmental Observatory (TEO) that incorporates a sensor network system with intertwined wired and wireless sensors. We are enhancing and expanding the existing wired weather stations to include wireless sensor networks (WSNs) and telemetry using solar-powered cellular modems. The new WSNs will monitor soil moisture and support long-term hydrologic modeling. Hydrologic models are helpful in predicting how changes in land cover translate into changes in the stream flow regime. These models require inputs that are difficult to measure over large areas, especially variables related to storm events, such as soil moisture antecedent conditions and rainfall amount and intensity. This will also contribute to improving rainfall estimates from meteorological radar data and enhancing hydrological forecasts. Sensor data are transmitted from the monitoring sites to a Central Data Collection (CDC) Server. We incorporate a GPRS modem for wireless telemetry, a single-board computer (SBC) as Remote Field Gateway (RFG) Server, and a WSN for distributed soil moisture monitoring. The RFG provides effective control, management, and coordination of two independent sensor systems, i.e., a traditional datalogger-based wired sensor system and the WSN-based wireless sensor system. The RFG also supports remote manipulation of the devices in the field such as the SBC, datalogger, and WSN. Sensor data collected from the distributed monitoring stations are stored in a database (DB) Server. The CDC Server acts as an intermediate component to hide the heterogeneity of different devices and support data validation required by the DB Server. Daemon programs running on the CDC Server pre-process the data before it is inserted into the database, and periodically perform synchronization tasks. A SWE-compliant data repository is installed to enable data exchange, accepting data from both the internal DB Server and external sources through the OGC web services. The web portal, TEO Online, serves as a user-friendly interface for data visualization, analysis, synthesis, modeling, and K-12 educational outreach activities. It also provides useful capabilities for system developers and operators to remotely monitor system status and remotely update software and system configuration, which greatly simplifies system debugging and maintenance. We also implement Sensor Observation Services (SOS) at this layer, conforming to the SWE standard to facilitate data exchange. The standard SensorML/O&M data representation makes it easy to integrate our sensor data into existing Geographic Information Systems (GIS) web services and exchange the data with other organizations.
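
    A sketch of the daemon's validation step before database insertion, with hypothetical field names and an in-memory database standing in for the DB Server.

      # Sketch: a CDC-style daemon validates raw records before inserting them.
      import sqlite3   # stand-in for the production DB server

      def validate(record):
          # Reject physically implausible soil-moisture readings (0-100 %).
          return 0.0 <= record["soil_moisture_pct"] <= 100.0

      db = sqlite3.connect(":memory:")
      db.execute("CREATE TABLE obs (station TEXT, ts REAL, soil_moisture_pct REAL)")

      raw = [{"station": "TEO-3", "ts": 1.0, "soil_moisture_pct": 23.5},
             {"station": "TEO-3", "ts": 2.0, "soil_moisture_pct": -9.0}]  # bad row
      for rec in raw:
          if validate(rec):
              db.execute("INSERT INTO obs VALUES (?, ?, ?)",
                         (rec["station"], rec["ts"], rec["soil_moisture_pct"]))
      print(db.execute("SELECT COUNT(*) FROM obs").fetchone()[0], "rows kept")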

  4. Preliminary Results on Design and Implementation of a Solar Radiation Monitoring System

    PubMed Central

    Balan, Mugur C.; Damian, Mihai; Jäntschi, Lorentz

    2008-01-01

    The paper presents a solar radiation monitoring system using two scientific pyranometers and an on-line, home-made computer data acquisition system. The first pyranometer measures the global solar radiation and the other one, which is shaded, measures the diffuse radiation. The values of total and diffuse solar radiation are continuously stored in a database on a server. Original software was created for data acquisition and interrogation of the system. The server application acquires the data from the pyranometers and stores it in the database at a rate of one record every 50 seconds. The client-server application queries the database and provides descriptive statistics, and a web interface allows any user to define inclusion criteria and obtain the results. In terms of results, the system is able to provide direct, diffuse and total radiation intensities as time series; the client-server application also computes derived heat quantities. The ability of the system to evaluate the local solar energy potential is highlighted. PMID:27879746
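
    The global/shaded pyranometer pair implies the usual decomposition, sketched below with illustrative values: the direct (horizontal) component is the global reading minus the diffuse reading.

      # Sketch: derive the direct horizontal component from the two
      # pyranometer readings. The values are illustrative.
      global_wm2 = 742.0    # unshaded pyranometer (total on horizontal plane)
      diffuse_wm2 = 168.0   # shaded pyranometer (diffuse only)

      direct_wm2 = global_wm2 - diffuse_wm2
      print(f"direct horizontal irradiance: {direct_wm2:.0f} W/m^2")  # 574 W/m^2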

  5. Implementing TCP/IP and a socket interface as a server in a message-passing operating system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hipp, E.; Wiltzius, D.

    1990-03-01

    The UNICOS 4.3BSD network code and socket transport interface are the basis of an explicit network server for NLTSS, a message passing operating system on the Cray YMP. A BSD socket user library provides access to the network server using an RPC mechanism. The advantages of this server methodology are its modularity and extensibility to migrate to future protocol suites (e.g. OSI) and transport interfaces. In addition, the network server is implemented in an explicit multi-tasking environment to take advantage of the Cray YMP multi-processor platform. 19 refs., 5 figs.

  6. Abnormal Condition Monitoring of Workpieces Based on RFID for Wisdom Manufacturing Workshops.

    PubMed

    Zhang, Cunji; Yao, Xifan; Zhang, Jianming

    2015-12-03

    Radio Frequency Identification (RFID) technology has been widely used in many fields. However, previous studies have mainly focused on product life cycle tracking, and there are few studies on real-time status monitoring of workpieces in manufacturing workshops. In this paper, a wisdom manufacturing model is introduced, a sensing-aware environment for a wisdom manufacturing workshop is constructed, and RFID event models are defined. A synthetic data cleaning method is applied to clean the raw RFID data, and Complex Event Processing (CEP) technology is adopted to monitor abnormal conditions of workpieces in real time. The RFID data cleaning method and data mining technology are examined by simulation and physical experiments. The results show that the synthetic data cleaning method preprocesses the data well, and that CEP based on the Rifidi® Edge Server technology accomplishes abnormal condition monitoring of workpieces in real time. This paper reveals the importance of RFID spatial and temporal data analysis in real-time status monitoring of workpieces in wisdom manufacturing workshops.
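
    A sketch of one complex-event rule of the kind such a system might use: flag a workpiece whose consecutive RFID reads show it dwelling at one station beyond a threshold (event fields and threshold are hypothetical).

      # Sketch: detect workpieces that dwell too long at a station, from a
      # time-ordered stream of RFID reads. All names below are hypothetical.
      MAX_DWELL_S = 300   # assumed upper bound for a normal processing step

      def detect_overdue(events):
          """events: time-ordered (timestamp_s, tag_id, station) RFID reads."""
          first_seen = {}
          alerts = []
          for ts, tag, station in events:
              key = (tag, station)
              first_seen.setdefault(key, ts)
              if ts - first_seen[key] > MAX_DWELL_S:
                  alerts.append((tag, station, ts - first_seen[key]))
          return alerts

      reads = [(0, "WP-17", "milling"), (120, "WP-17", "milling"),
               (480, "WP-17", "milling")]            # still there after 8 minutes
      print(detect_overdue(reads))                   # [('WP-17', 'milling', 480)]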

  7. Abnormal Condition Monitoring of Workpieces Based on RFID for Wisdom Manufacturing Workshops

    PubMed Central

    Zhang, Cunji; Yao, Xifan; Zhang, Jianming

    2015-01-01

    Radio Frequency Identification (RFID) technology has been widely used in many fields. However, previous studies have mainly focused on product life cycle tracking, and there are few studies on real-time status monitoring of workpieces in manufacturing workshops. In this paper, a wisdom manufacturing model is introduced, a sensing-aware environment for a wisdom manufacturing workshop is constructed, and RFID event models are defined. A synthetic data cleaning method is applied to clean the raw RFID data, and Complex Event Processing (CEP) technology is adopted to monitor abnormal conditions of workpieces in real time. The RFID data cleaning method and data mining technology are examined by simulation and physical experiments. The results show that the synthetic data cleaning method preprocesses the data well, and that CEP based on the Rifidi® Edge Server technology accomplishes abnormal condition monitoring of workpieces in real time. This paper reveals the importance of RFID spatial and temporal data analysis in real-time status monitoring of workpieces in wisdom manufacturing workshops. PMID:26633418

  8. Usage of Thin-Client/Server Architecture in Computer Aided Education

    ERIC Educational Resources Information Center

    Cimen, Caghan; Kavurucu, Yusuf; Aydin, Halit

    2014-01-01

    With the advances of technology, thin-client/server architecture has become popular in multi-user, single-network environments. A thin client is a user terminal on which the user can log in to a domain and run programs by connecting to a remote server. Recent developments in network and hardware technologies (cloud computing, virtualization, etc.)…

  9. SciServer: An Online Collaborative Environment for Big Data in Research and Education

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Souter, Barbara; Lemson, Gerard; Taghizadeh-Popp, Manuchehr

    2017-01-01

    For the past year, SciServer Compute (http://compute.sciserver.org) has offered access to big data resources running within server-side Docker containers. Compute has allowed thousands of researchers to bring advanced analysis to big datasets like the Sloan Digital Sky Survey and others, while keeping the analysis close to the data for better performance and easier read/write access. SciServer Compute is just one part of the SciServer system being developed at Johns Hopkins University, which provides an easy-to-use collaborative research environment for astronomy and many other sciences. SciServer enables these collaborative research strategies using Jupyter notebooks, in which users can write their own Python and R scripts and execute them on the same server as the data. We have written special-purpose libraries for querying, reading, and writing data. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. SciServer Compute’s virtual research environment has grown with the addition of task management and access control functions, allowing collaborators to share both data and analysis scripts securely across the world. These features also open up new possibilities for education, allowing instructors to share datasets with students and students to write analysis scripts to share with their instructors. We are leveraging these features into a new system called “SciServer Courseware,” which will allow instructors to share assignments with their students, allowing students to engage with big data in new ways. SciServer has also expanded to include more datasets beyond the Sloan Digital Sky Survey. A part of that growth has been the addition of the SkyQuery component, which allows for simple, fast cross-matching between very large astronomical datasets. Demos, documentation, and more information about all these resources can be found at www.sciserver.org.

  10. Cryptanalysis and improvement of a biometrics-based authentication and key agreement scheme for multi-server environments.

    PubMed

    Yang, Li; Zheng, Zhiming

    2018-01-01

    According to advancements in wireless technologies, the study of biometrics-based multi-server authenticated key agreement schemes has acquired a lot of momentum. Recently, Wang et al. presented a three-factor authentication protocol with key agreement and claimed that their scheme was resistant to several prominent attacks. Unfortunately, this paper indicates that their protocol is still vulnerable to the user impersonation attack, privileged insider attack and server spoofing attack. Furthermore, their protocol cannot provide perfect forward secrecy. As a remedy for these problems, we propose a biometrics-based authentication and key agreement scheme for multi-server environments. Compared with various related schemes, our protocol achieves stronger security and provides more functionality properties. Besides, the proposed protocol shows satisfactory performance in terms of storage requirement, communication overhead and computational cost. Thus, our protocol is suitable for expert systems and other multi-server architectures. Consequently, the proposed protocol is more appropriate for distributed networks.

  11. Cryptanalysis and improvement of a biometrics-based authentication and key agreement scheme for multi-server environments

    PubMed Central

    Zheng, Zhiming

    2018-01-01

    According to advancements in wireless technologies, the study of biometrics-based multi-server authenticated key agreement schemes has acquired a lot of momentum. Recently, Wang et al. presented a three-factor authentication protocol with key agreement and claimed that their scheme was resistant to several prominent attacks. Unfortunately, this paper indicates that their protocol is still vulnerable to the user impersonation attack, privileged insider attack and server spoofing attack. Furthermore, their protocol cannot provide perfect forward secrecy. As a remedy for these problems, we propose a biometrics-based authentication and key agreement scheme for multi-server environments. Compared with various related schemes, our protocol achieves stronger security and provides more functionality properties. Besides, the proposed protocol shows satisfactory performance in terms of storage requirement, communication overhead and computational cost. Thus, our protocol is suitable for expert systems and other multi-server architectures. Consequently, the proposed protocol is more appropriate for distributed networks. PMID:29534085

  12. An extensible and lightweight architecture for adaptive server applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorton, Ian; Liu, Yan; Trivedi, Nihar

    2008-07-10

    Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality to scale an image based on the load of the server and network connection speed. The experimental evaluation demonstrates the performance gains possible by adaptive behavior and the low overhead introduced by ASF.
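
    The separation ASF enforces between control components and business logic can be pictured with a short sketch. The Python below is illustrative only (ASF itself is not a Python framework), and the class names, load threshold, and degradation policy are assumptions.

        class ImageService:
            """Business logic only: render an image at a resolution/quality."""
            def render(self, image, resolution, quality):
                return f"{image}@{resolution}p,q={quality}"

        class AdaptiveController:
            """Control component: picks parameters from observed conditions."""
            def __init__(self, service):
                self.service = service

            def handle(self, image, server_load, net_speed_mbps):
                # Illustrative policy: degrade output under high load or a
                # slow connection, leaving ImageService itself untouched.
                resolution = 480 if server_load > 0.8 or net_speed_mbps < 2 else 1080
                quality = 60 if server_load > 0.8 else 90
                return self.service.render(image, resolution, quality)

        controller = AdaptiveController(ImageService())
        print(controller.handle("photo.jpg", server_load=0.9, net_speed_mbps=1.5))
        # -> photo.jpg@480p,q=60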

  13. Implementation of an Embedded Web Server Application for Wireless Control of Brain Computer Interface Based Home Environments.

    PubMed

    Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan

    2016-01-01

    Brain Computer Interface (BCI) based environment control systems can facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. As well as easy usage, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in internet based wireless control of a BCI based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment to caregivers, as well as to individuals in the initial stages of neuromuscular disease. The input of the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system is evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server successfully enables internet based wireless control of electrical home appliances through BCIs.
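
    As a minimal sketch of the kind of web interface such an embedded server exposes, the following Python stand-in toggles named appliances over plain HTTP. The endpoint layout and appliance map are assumptions; the actual device is a purpose-built low-power embedded server.

        from http.server import BaseHTTPRequestHandler, HTTPServer

        APPLIANCES = {"lamp": False, "tv": False, "fan": False}  # hypothetical devices

        class ControlHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                # e.g. GET /toggle/lamp flips the lamp relay
                parts = self.path.strip("/").split("/")
                if len(parts) == 2 and parts[0] == "toggle" and parts[1] in APPLIANCES:
                    APPLIANCES[parts[1]] = not APPLIANCES[parts[1]]
                    self.send_response(200)
                    body = f"{parts[1]} -> {'on' if APPLIANCES[parts[1]] else 'off'}"
                else:
                    self.send_response(404)
                    body = "unknown command"
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(body.encode())

        if __name__ == "__main__":
            # Both the BCI classifier and a caregiver's browser would issue these GETs.
            HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()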

  14. SciServer Compute brings Analysis to Big Data in the Cloud

    NASA Astrophysics Data System (ADS)

    Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara

    2016-06-01

    SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally - but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.

  15. Remote online monitoring and measuring system for civil engineering structures

    NASA Astrophysics Data System (ADS)

    Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł

    2009-06-01

    In this paper a distributed intelligent system for on-line measurement, remote monitoring, and data archiving of civil engineering structures is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously, the server performs checks ordered by the operator, which may in turn result in an alert or a specific action. The structure of the whole system is analyzed, along with a discussion of possible fields of application and of ways to provide relevant security during data transport. Finally, a working implementation consisting of fringe projection, geometrical moiré, digital image correlation and grating interferometry sensors and an Oracle XE database is presented. Results from the database, used for on-line monitoring of a threshold strain value for an exemplary area of interest on the engineering structure, are presented and discussed.
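
    The server's scheduled-task loop can be sketched as follows: take a reading, archive it, and compare it against an operator-defined threshold. This is a minimal Python sketch, with sqlite3 and a random number standing in for the Oracle XE database and the optical strain sensors; the threshold value is an assumption.

        import random
        import sqlite3
        import time

        STRAIN_THRESHOLD = 1.5e-3          # illustrative alert limit

        def read_strain_sensor():
            return random.gauss(1.0e-3, 3.0e-4)   # stand-in for a real sensor

        db = sqlite3.connect("monitoring.db")
        db.execute("CREATE TABLE IF NOT EXISTS strain (t REAL, value REAL)")

        for _ in range(5):                 # one scheduled reading per cycle
            value = read_strain_sensor()
            db.execute("INSERT INTO strain VALUES (?, ?)", (time.time(), value))
            db.commit()                    # archive in the central database
            if value > STRAIN_THRESHOLD:   # operator-ordered check
                print(f"ALERT: strain {value:.2e} exceeds {STRAIN_THRESHOLD:.2e}")
            time.sleep(1)                  # stand-in for the real schedule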

  16. Real Time Monitor of Grid job executions

    NASA Astrophysics Data System (ADS)

    Colling, D. J.; Martyniak, J.; McGough, A. S.; Křenek, A.; Sitera, J.; Mulač, M.; Dvořák, F.

    2010-04-01

    In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer and an Apache web server which is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job related information and storing this in a local database. Job related data includes not only job state (i.e. Scheduled, Waiting, Running or Done) along with timing information but also other attributes such as Virtual Organization and Computing Element (CE) queue - if known. The job data stored in the RTM database is read by the enquirer every minute and converted to an XML format which is stored on a web server. This decouples the RTM server database from the client, removing the bottleneck problem caused by many clients simultaneously accessing the database. This information can be visualized through either a 2D or 3D Java based client, with live job data either being overlaid on a 2-dimensional map of the world or rendered in 3 dimensions over a globe map using OpenGL.
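
    A sketch of the enquirer step, assuming a simple relational layout: job rows are read from the RTM database and written out as an XML snapshot for the web server. The table schema and XML tag names below are assumptions; the abstract does not specify them.

        import sqlite3
        import xml.etree.ElementTree as ET

        def export_jobs(db_path="rtm.db", out_path="jobs.xml"):
            """Read job rows and publish an XML snapshot for web clients."""
            db = sqlite3.connect(db_path)
            root = ET.Element("jobs")
            for job_id, state, vo, ce in db.execute(
                    "SELECT id, state, vo, ce_queue FROM jobs"):
                ET.SubElement(root, "job", id=str(job_id), state=str(state),
                              vo=str(vo), ce=str(ce))
            ET.ElementTree(root).write(out_path, encoding="utf-8",
                                       xml_declaration=True)

        # Run once a minute (e.g. from cron) so clients read the XML file
        # instead of querying the database, avoiding the bottleneck above.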

  17. Environmental influences on alcohol consumption practices of alcoholic beverage servers.

    PubMed

    Nusbaumer, Michael R; Reiling, Denise M

    2002-11-01

    Public drinking establishments have long been associated with heavy drinking among both their patrons and servers. Whether these environments represent locations where heavy drinking is learned (learning hypothesis) or simply places where already-heavy drinkers gather in a supportive environment (selection hypothesis) remains an important question. A sample of licensed alcoholic beverage servers in the state of Indiana, USA, was surveyed to better understand the drinking behaviors of servers within the alcohol service industry. Responses (N = 938) to a mailed questionnaire were analyzed to assess the relative influence of environmental and demographic factors on the drinking behavior of servers. Stepwise regression revealed "drinking on the job" as the most influential environmental factor on heavy drinking behaviors, followed by age and gender as influential demographic factors. Support was found for the selection hypothesis, but not for the learning hypothesis. Policy implications are discussed.

  18. VoIP attacks detection engine based on neural network

    NASA Astrophysics Data System (ADS)

    Safarik, Jakub; Slachta, Jiri

    2015-05-01

    Security is crucial for any system nowadays, especially communications. One of the most successful protocols in the field of communication over IP networks is the Session Initiation Protocol (SIP), an open standard used by different kinds of applications, both open-source and proprietary. High penetration and its text-based design have made SIP the number one target in IP telephony infrastructure, so the security of the SIP server is essential. To keep up with hackers and to detect potential malicious attacks, a security administrator needs to monitor and evaluate SIP traffic in the network. But monitoring and the evaluation that follows could easily overwhelm the security administrator, particularly in networks with a number of SIP servers, many users, and logically or geographically separated segments. The proposed solution lies in automatic attack detection systems. The article covers detection of VoIP attacks through a distributed network of nodes. An aggregation server then analyzes the gathered data with an artificial neural network, a multilayer perceptron trained with a set of collected attacks. Attack data can also be preprocessed and verified with a self-organizing map. The source data are gathered by a distributed network of detection nodes, each containing a honeypot application and a traffic monitoring mechanism. Aggregation of the data from each node creates the input for the neural network. Automatic classification on a centralized server with a low false positive rate reduces the cost of attack detection resources. The detection system uses a modular design for easy deployment in the final infrastructure. The centralized server collects and processes the detected traffic, and also maintains all detection nodes.
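
    A compact sketch of the classification step, with scikit-learn's multilayer perceptron standing in for the authors' own network. The feature layout (rates of SIP INVITE/REGISTER messages, failed authentications, distinct source IPs) and the labels are illustrative assumptions.

        from sklearn.neural_network import MLPClassifier

        # Each row: [INVITEs/min, REGISTERs/min, failed auths/min, distinct IPs]
        X_train = [[200, 0, 5, 3],    # flood-like traffic
                   [2, 150, 80, 1],   # registration scan
                   [1, 1, 0, 1],      # benign
                   [180, 5, 2, 4],
                   [0, 90, 60, 2],
                   [2, 0, 1, 1]]
        y_train = ["flood", "scan", "benign", "flood", "scan", "benign"]

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        clf.fit(X_train, y_train)
        print(clf.predict([[220, 3, 4, 5]]))   # expected: ['flood'] on this toy data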

  19. ESUMS: a mobile system for continuous home monitoring of rehabilitation patients.

    PubMed

    Strisland, Frode; Svagård, Ingrid; Seeberg, Trine M; Mathisen, Bjørn Magnus; Vedum, Jon; Austad, Hanne O; Liverud, Anders E; Kofod-Petersen, Anders; Bendixen, Ole Christian

    2013-01-01

    The pressure on healthcare services is building up for several reasons: the ageing population trend, the increase in lifestyle-related disease prevalence, and increased treatment capabilities, with the general expectations that accompany them, all add pressure. The use of ambient healthcare technologies can alleviate the situation by enabling time- and cost-efficient monitoring and follow-up of patients discharged from hospital care. We report on an ambulatory system developed for monitoring physical rehabilitation patients. The system consists of a wearable multisensor monitoring device; a mobile phone with a client application aggregating the collected data; a service-oriented-architecture based server solution; and a PC application facilitating patient follow-up by their health professional carers. The system has been tested and verified for accuracy in controlled-environment trials on healthy volunteers, and has also been usability tested by 5 congestive heart failure patients and their nurses. This investigation indicated that patients were able to use the system, and that nurses got an improved basis for patient follow-up.

  20. A monitoring system for vegetable greenhouses based on a wireless sensor network.

    PubMed

    Li, Xiu-hong; Cheng, Xiao; Yan, Ke; Gong, Peng

    2010-01-01

    A wireless sensor network-based automatic monitoring system is designed for monitoring the life conditions of greenhouse vegetables. The complete system architecture includes a group of sensor nodes, a base station, and an internet data center. For the design of the wireless sensor nodes, the JN5139 micro-processor is adopted as the core component and the Zigbee protocol is used for wireless communication between nodes. With an ARM7 microprocessor and the embedded ZKOS operating system, a proprietary gateway node is developed to achieve data influx, screen display, system configuration and GPRS based remote data forwarding. Through a client/server mode, the management software for the remote data center achieves real-time data distribution and time-series analysis. In addition, a GSM-short-message-based interface is developed for sending real-time environmental measurements, and for alarming when a measurement exceeds a pre-defined threshold. The whole system has been tested for over one year and satisfactory results have been observed, which indicate that this system is very useful for greenhouse environment monitoring.

  1. Adaptive proxy map server for efficient vector spatial data rendering

    NASA Astrophysics Data System (ADS)

    Sayar, Ahmet

    2013-01-01

    The rapid transmission of vector map data over the Internet is becoming a bottleneck of spatial data delivery and visualization in web-based environments because of increasing data volumes and limited network bandwidth. In order to improve both the transmission and rendering performance of vector spatial data over the Internet, we propose a proxy map server enabling parallel vector data fetching as well as caching to improve the performance of web-based map servers in a dynamic environment. The proxy map server is placed seamlessly anywhere between the client and the final services, intercepting users' requests. It employs an efficient parallelization technique based on spatial proximity and data density in cases where distributed replicas exist for the same spatial data. The effectiveness of the proposed technique is demonstrated at the end of the article by an application that creates map images enriched with earthquake seismic data records.
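
    The proxy's two key ideas, caching and parallel fetching of spatially partitioned requests, can be sketched briefly in Python; the partition URLs and the byte-level merge below are illustrative assumptions.

        from concurrent.futures import ThreadPoolExecutor
        import urllib.request

        cache = {}  # partition URL -> vector payload fetched earlier

        def fetch_partition(url):
            if url not in cache:                    # serve repeats from cache
                with urllib.request.urlopen(url) as resp:
                    cache[url] = resp.read()
            return cache[url]

        def fetch_map(partition_urls):
            """Fetch all spatial partitions in parallel, then merge."""
            with ThreadPoolExecutor(max_workers=len(partition_urls)) as pool:
                return b"".join(pool.map(fetch_partition, partition_urls))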

  2. From honeybees to Internet servers: biomimicry for distributed management of Internet hosting centers.

    PubMed

    Nakrani, Sunil; Tovey, Craig

    2007-12-01

    An Internet hosting center hosts services on its server ensemble. The center must allocate servers dynamically amongst services to maximize revenue earned from hosting fees. The finite server ensemble, unpredictable request arrival behavior and server reallocation cost make server allocation optimization difficult. Server allocation closely resembles honeybee forager allocation amongst flower patches to optimize nectar influx. The resemblance inspires a honeybee biomimetic algorithm. This paper describes details of the honeybee self-organizing model in terms of information flow and feedback, analyzes the homology between the two problems and derives the resulting biomimetic algorithm for hosting centers. The algorithm is assessed for effectiveness and adaptiveness by comparative testing against benchmark and conventional algorithms. Computational results indicate that the new algorithm is highly adaptive to widely varying external environments and quite competitive against benchmark assessment algorithms. Other swarm intelligence applications are briefly surveyed, and some general speculations are offered regarding their various degrees of success.
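
    A toy rendering of the biomimetic idea: servers ("foragers") re-pick services ("flower patches") with probability proportional to recently observed revenue, mimicking recruitment on the bee dance floor. The update rule below is an illustrative assumption, not the paper's exact algorithm.

        import random

        # Hypothetical revenue-per-request observed for each hosted service.
        services = {"web": 0.0, "video": 0.0, "mail": 0.0}

        def reallocate(observed_revenue, n_servers=30):
            """Each server independently re-picks a service with probability
            proportional to its revenue share, like bees following dances."""
            services.update(observed_revenue)
            total = sum(services.values()) or 1.0
            names = list(services)
            weights = [services[s] / total for s in names]
            counts = {s: 0 for s in names}
            for _ in range(n_servers):
                counts[random.choices(names, weights)[0]] += 1
            return counts

        print(reallocate({"web": 5.0, "video": 1.0, "mail": 0.5}))
        # e.g. {'web': 23, 'video': 5, 'mail': 2}: allocation tracks revenue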

  3. A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation

    NASA Astrophysics Data System (ADS)

    Watanabe, Toru; Koizumi, Hisao

    In this paper, we propose a general purpose connections type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging, in which the CTI server communicates with an IP-PBX using SIP (the Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between an extension IP telephone and a VoIP gateway connected to outside line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function which can provide various CTI services, such as a web telephone directory, via a web browser to PCs, cellular telephones or smart-phones in mobile environments.

  4. Secure entanglement distillation for double-server blind quantum computation.

    PubMed

    Morimae, Tomoyuki; Fujii, Keisuke

    2013-07-12

    Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, and program. If the client interacts with only a single server, the client has to have some minimum quantum power, such as the ability of emitting randomly rotated single-qubit states or the ability of measuring states. If the client interacts with two servers who share Bell pairs but cannot communicate with each other, the client can be completely classical. For such a double-server scheme, two servers have to share clean Bell pairs, and therefore the entanglement distillation is necessary in a realistic noisy environment. In this Letter, we show that it is possible to perform entanglement distillation in the double-server scheme without degrading the security of blind quantum computing.

  5. An analysis of the low-earth-orbit communications environment

    NASA Astrophysics Data System (ADS)

    Diersing, Robert Joseph

    Advances in microprocessor technology and availability of launch opportunities have caused interest in low-earth-orbit satellite based communications systems to increase dramatically during the past several years. In this research the capabilities of two low-cost, store-and-forward LEO communications satellites operating in the public domain are examined--PACSAT-1 (operated by the Radio Amateur Satellite Corporation) and UoSAT-3 (operated by the University of Surrey, England, Electrical Engineering Department). The file broadcasting and file transfer facilities are examined in detail and a simulation model of the downlink traffic pattern is developed. The simulator will aid the assessment of changes in design and implementation for other systems. The development of the downlink traffic simulator is based on three major parts. First is a characterization of the low-earth-orbit operating environment along with preliminary measurements of the PACSAT-1 and UoSAT-3 systems including: satellite visibility constraints on communications, monitoring equipment configuration, link margin computations, determination of block and bit error rates, and establishing typical data capture rates for ground stations using computer-pointed directional antennas and fixed omni-directional antennas. Second, arrival rates for successful and unsuccessful file server connections are established along with transaction service times. Downlink traffic has been further characterized by measuring: frame and byte counts for all data-link layer traffic; 30-second interval average response time for all traffic and for file server traffic only; file server response time on a per-connection basis; and retry rates for information and supervisory frames. Finally, the model is verified by comparison with measurements of actual traffic not previously used in the model building process. The simulator is then used to predict operation of the PACSAT-1 satellite with modifications to the original design.

  6. Client-server program analysis in the EPOCA environment

    NASA Astrophysics Data System (ADS)

    Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano

    1996-09-01

    Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer first has to ensure that the implementation behaves correctly, in particular that it is deadlock free. Second, he has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.

  7. [Design and implementation of a medical instrument standard information retrieval system based on ASP.NET].

    PubMed

    Yu, Kaijun

    2010-07-01

    This paper analyzes the design goals of a medical instrumentation standard information retrieval system. Based on the B/S structure, we established a medical instrumentation standard retrieval system in the .NET environment, using the ASP.NET C# programming language, an IIS web server, and an SQL Server 2000 database. The paper also introduces the system structure, the retrieval system modules, the system development environment, and the detailed design of the system.

  8. Sensor Fusion for Nuclear Proliferation Activity Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adel Ghanem, Ph D

    2007-03-30

    The objective of Phase 1 of this STTR project is to demonstrate a Proof-of-Concept (PoC) of the Geo-Rad system that integrates a location-aware SmartTag (made by ZonTrak) and a radiation detector (developed by LLNL). It also includes the ability to transmit the collected radiation data and location information to the ZonTrak server (ZonService). The collected data is further transmitted to a central server at LLNL (the Fusion Server) to be processed in conjunction with overhead imagery to generate location estimates of nuclear proliferation and radiation sources.

  9. HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation

    NASA Astrophysics Data System (ADS)

    Jin, Jin; Zhang, Jianguo; Chen, Xiaomeng; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Feng, Jie; Sheng, Liwei; Huang, H. K.

    2006-03-01

    As a governmental regulation, the Health Insurance Portability and Accountability Act (HIPAA) was issued to protect the privacy of health information that identifies individuals who are living or deceased. HIPAA requires security services supporting implementation features: access control; audit controls; authorization control; data authentication; and entity authentication. The controls proposed in the HIPAA Security Standards are realized here through audit trails. Audit trails can be used for surveillance purposes, to detect when interesting events might be happening that warrant further investigation, or they can be used forensically, after the detection of a security breach, to determine what went wrong and who or what was at fault. In order to provide security control services and to achieve high and continuous availability, we designed a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation. The system consists of two parts: monitoring agents running in each PACS component computer, and a Monitor Server running on a remote computer. Monitoring agents are deployed on all computer nodes in the RIS-integrated PACS system to collect the audit trail messages defined by Supplement 95 of the DICOM standard: Audit Trail Messages. The Monitor Server then gathers all audit messages and processes them to provide security information on three levels: system resources, PACS/RIS applications, and user/patient data access. RIS-integrated PACS managers can now monitor and control the entire RIS-integrated PACS operation through a web service provided by the Monitor Server. This paper presents the design of a HIPAA-compliant automatic monitoring system for RIS-integrated PACS operation, and gives the preliminary results produced by this monitoring system on a clinical RIS-integrated PACS.

  10. A web accessible scientific workflow system for vadose zone performance monitoring: design and implementation examples

    NASA Astrophysics Data System (ADS)

    Mattson, E.; Versteeg, R.; Ankeny, M.; Stormberg, G.

    2005-12-01

    Long term performance monitoring has been identified by DOE, DOD and EPA as one of the most challenging and costly elements of contaminated site remedial efforts. Such monitoring should provide timely and actionable information relevant to a multitude of stakeholder needs. This information should be obtained in a manner which is auditable, cost effective and transparent. Over the last several years INL staff has designed and implemented a web accessible scientific workflow system for environmental monitoring. This workflow environment integrates distributed, automated data acquisition from diverse sensors (geophysical, geochemical and hydrological) with server side data management and information visualization through flexible browser based data access tools. Component technologies include a rich browser-based client (using dynamic javascript and html/css) for data selection, a back-end server which uses PHP for data processing, user management, and result delivery, and third party applications which are invoked by the back-end using webservices. This system has been implemented and is operational for several sites, including the Ruby Gulch Waste Rock Repository (a capped mine waste rock dump on the Gilt Edge Mine Superfund Site), the INL Vadose Zone Research Park and an alternative cover landfill. Implementations for other vadose zone sites are currently in progress. These systems allow for autonomous performance monitoring through automated data analysis and report generation. This performance monitoring has allowed users to obtain insights into system dynamics, regulatory compliance and residence times of water. Our system uses modular components for data selection and graphing and WSDL compliant webservices for external functions such as statistical analyses and model invocations. Thus, implementing this system for novel sites and extending functionality (e.g. adding novel models) is relatively straightforward. As system access requires a standard web browser and uses intuitive functionality, stakeholders with diverse degrees of technical insight can use this system with little or no training.

  11. Switching the JLab Accelerator Operations Environment from an HP-UX Unix-based to a PC/Linux-based environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcguckin, Theodore

    2008-10-01

    The Jefferson Lab Accelerator Controls Environment (ACE) was predominantly based on the HP-UX Unix platform from 1987 through the summer of 2004. During this period the Accelerator Machine Control Center (MCC) underwent a major renovation which included introducing Red Hat Enterprise Linux machines, first as specialized process servers and then gradually as general login servers. As the computer programs and scripts required to run the accelerator were modified, and inherent problems with the HP-UX platform compounded, more development tools became available for use with Linux and the MCC began to be converted over. In May 2008 the last HP-UX Unix login machine was removed from the MCC, leaving only a few Unix-based remote-login servers still available. This presentation explores the process of converting an operational control room environment from the HP-UX to the Linux platform, as well as the many hurdles that had to be overcome throughout the transition period, including a discussion of …

  12. Using ant colony optimization on the quadratic assignment problem to achieve low energy cost in geo-distributed data centers

    NASA Astrophysics Data System (ADS)

    Osei, Richard

    There are many problems associated with operating a data center. Some of these problems include data security, system performance, increasing infrastructure complexity, increasing storage utilization, keeping up with data growth, and increasing energy costs. Energy cost differs by location, and at most locations fluctuates over time. The rising cost of energy makes it harder for data centers to function properly and provide a good quality of service. With reduced energy cost, data centers will have longer lasting servers/equipment, higher availability of resources, better quality of service, a greener environment, and reduced service and software costs for consumers. Some of the ways that data centers have tried to reduce energy costs include dynamically switching servers on and off based on the number of users and some predefined conditions, the use of environmental monitoring sensors, and the use of dynamic voltage and frequency scaling (DVFS), which enables processors to run at different combinations of frequencies and voltages to reduce energy cost. This thesis presents another method by which energy cost at data centers can be reduced. This method involves the use of Ant Colony Optimization (ACO) on a Quadratic Assignment Problem (QAP) in assigning user requests to servers in geo-distributed data centers. In this approach, front portals, which handle users' requests, are used as ants to find cost-effective ways to assign user requests to servers in heterogeneous geo-distributed data centers. The simulation results indicate that the ACO for Optimal Server Activation and Task Placement algorithm reduces energy cost for both small and large numbers of user requests in a geo-distributed data center, and its performance improves as the input data grows. In a simulation with 3 geo-distributed data centers and user resource requests ranging from 25,000 to 25,000,000, the ACO algorithm was able to reduce energy cost by an average of $0.70 per second. The ACO for Optimal Server Activation and Task Placement algorithm has proven to work as an alternative or improvement in reducing energy cost in geo-distributed data centers.
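
    A stripped-down sketch of the core idea: artificial ants repeatedly build request-to-data-center assignments biased by pheromone and by energy price, and cheap assignments are reinforced. The prices, request sizes, and update constants below are made-up inputs, not the thesis' parameters.

        import random

        PRICES = [0.08, 0.12, 0.05]      # $/kWh at three data centers
        REQUESTS = [3, 1, 4, 2, 5]       # energy units per request batch
        pheromone = [[1.0] * len(PRICES) for _ in REQUESTS]

        def build_assignment():
            cost, tour = 0.0, []
            for r, load in enumerate(REQUESTS):
                weights = [pheromone[r][d] / PRICES[d] for d in range(len(PRICES))]
                d = random.choices(range(len(PRICES)), weights)[0]
                tour.append(d)
                cost += load * PRICES[d]
            return tour, cost

        best = None
        for _ in range(200):                 # ant iterations
            tour, cost = build_assignment()
            if best is None or cost < best[1]:
                best = (tour, cost)
            for r, d in enumerate(tour):     # evaporate, then deposit
                for k in range(len(PRICES)):
                    pheromone[r][k] *= 0.95
                pheromone[r][d] += 1.0 / cost

        print("best assignment:", best)      # tends toward the $0.05/kWh center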

  13. Enhanced networked server management with random remote backups

    NASA Astrophysics Data System (ADS)

    Kim, Song-Kyoo

    2003-08-01

    In this paper, the model is focused on available server management in network environments. The (remote) backup servers are connected by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure to connect long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with unreliable main machines and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used as backups during idle periods. Unlike other existing models, the availability of auxiliary machines changes with each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.

  14. The Development of the Puerto Rico Lightning Detection Network for Meteorological Research

    NASA Technical Reports Server (NTRS)

    Legault, Marc D.; Miranda, Carmelo; Medin, J.; Ojeda, L. J.; Blakeslee, Richard J.

    2011-01-01

    A land-based Puerto Rico Lightning Detection Network (PR-LDN) dedicated to the academic research of meteorological phenomena has been developed. Five Boltek StormTracker PCI receivers with LTS-2 GPS timestamp cards and lightning detectors were integrated into Pentium III PC workstations running the CentOS Linux operating system. The Boltek detector Linux driver was compiled under CentOS, modified, and thoroughly tested. These PC workstations with integrated lightning detectors were installed at five of the University of Puerto Rico (UPR) campuses distributed around the island of PR. The PC workstations are left on permanently in order to monitor lightning activity at all times. Each is connected to its campus network backbone, permitting quasi-instantaneous data transfer to a central server at the UPR-Bayamón campus. Information generated by each lightning detector is managed by a C program we developed called the LDN-client. The LDN-client maintains an open connection to the central server running the LDN-server program, where data is sent in real time for analysis and archival. The LDN-client also manages the storing of data on the PC workstation hard disk. The LDN-server software (also an in-house effort) analyses the data from each client and performs event triangulations. Time-of-arrival (TOA) and related hybrid algorithms, along with lightning-type and event-discriminating routines, are also implemented in the LDN-server software. We have also developed software to visually monitor, in real time, the lightning events from all clients and the triangulated events. We are currently monitoring and studying the spatial, temporal, and type distribution of lightning strikes associated with electrical storms and tropical cyclones in the vicinity of Puerto Rico.

  15. Video personalization for usage environment

    NASA Astrophysics Data System (ADS)

    Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.

    2002-07-01

    A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.

  16. Implementation of a real-time multi-channel gateway server in ubiquitous integrated biotelemetry system for emergency care (UIBSEC).

    PubMed

    Cheon, Gyeongwoo; Shin, Il Hyung; Jung, Min Yang; Kim, Hee Chan

    2009-01-01

    We developed a gateway server to support various types of bio-signal monitoring devices for ubiquitous emergency healthcare in a reliable, effective, and scalable way. The server provides multiple channels supporting real-time N-to-N client connections. We applied our system to four types of health monitoring devices including a 12-channel electrocardiograph (ECG), oxygen saturation (SpO(2)), and medical imaging devices (an ultrasonograph and a digital skin microscope). Different types of telecommunication networks were tested: WIBRO, CDMA, wireless LAN, and wired internet. We measured the performance of our system in terms of the transmission rate and the number of simultaneous connections. The results show that the proposed network communication strategy can be successfully applied to the ubiquitous emergency healthcare service, providing a rate fast enough for real-time video transmission and multiple connections among patients and medical personnel.

  17. Real-time Web GIS to monitor marine water quality using wave glider

    NASA Astrophysics Data System (ADS)

    Maneesa Amiruddin, Siti

    2016-06-01

    In the past decade, Malaysia has experienced unprecedented economic development and associated socioeconomic changes. As environmentalists anticipate that these changes could have negative impacts on the marine and coastal environment, a comprehensive, continuous and long term marine water quality monitoring programme needs to be strengthened, reflecting the government's aggressive mindset of enhancing its authority in the protection, preservation, management and enrichment of the vast resources of the ocean. The Wave Glider, an autonomous, unmanned marine vehicle, provides continuous ocean monitoring at all times and is durable in any weather condition. Geographic Information System (GIS) technology is ideally suited as a tool for the presentation of data derived from continuous monitoring of locations, and is used to support and deliver information to environmental managers and the public. Combined with the GeoEvent Processor, an extension of ArcGIS for Server, it extends Web GIS capabilities to providing real-time data from the monitoring activities. Therefore, there is a growing need for Web GIS for easy and fast dissemination, sharing, display and processing of spatial information, which in turn helps decision making for various natural-resource based applications.

  18. Personalized professional content recommendation

    DOEpatents

    Xu, Songhua

    2015-10-27

    A personalized content recommendation system includes a client interface configured to automatically monitor a user's information data stream transmitted on the Internet. A hybrid contextual behavioral and collaborative personal interest inference engine resident to a non-transient media generates automatic predictions about the interests of individual users of the system. A database server retains the user's personal interest profile based on a plurality of monitored information. The system also includes a server programmed to filter items in an incoming information stream with the personal interest profile and is further programmed to identify only those items of the incoming information stream that substantially match the personal interest profile.
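
    The filtering step claimed here can be sketched as scoring each incoming item against the stored profile and keeping strong matches. The bag-of-words profile, cosine scoring, and threshold below are illustrative assumptions, not the patent's specification.

        from collections import Counter
        from math import sqrt

        profile = Counter({"sensor": 5, "network": 4, "energy": 2})  # learned weights

        def cosine(a, b):
            dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
            na = sqrt(sum(v * v for v in a.values()))
            nb = sqrt(sum(v * v for v in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        def filter_stream(items, threshold=0.3):
            """Yield only items that substantially match the interest profile."""
            for text in items:
                if cosine(profile, Counter(text.lower().split())) >= threshold:
                    yield text

        incoming = ["wireless sensor network energy harvesting",
                    "celebrity gossip roundup"]
        print(list(filter_stream(incoming)))   # keeps only the first item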

  19. Self-Powered WSN for Distributed Data Center Monitoring

    PubMed Central

    Brunelli, Davide; Passerone, Roberto; Rizzon, Luca; Rossi, Maurizio; Sartori, Davide

    2016-01-01

    Monitoring environmental parameters in data centers is nowadays gathering increasing attention from industry, due to the need for high energy efficiency in cloud services. We present the design and characterization of an energy-neutral embedded wireless system, prototyped to perpetually monitor environmental parameters in servers and racks. It is powered by an energy harvesting module based on Thermoelectric Generators, which converts the heat dissipated by the servers. Starting from the empirical characterization of the energy harvester, we present a power conditioning circuit optimized for the specific application. The whole system has been enhanced with several sensors. An ultra-low-power micro-controller stacked over the energy harvester provides efficient power management. Performance has been assessed and compared with the analytical model for validation. PMID:26729135

  20. Self-Powered WSN for Distributed Data Center Monitoring.

    PubMed

    Brunelli, Davide; Passerone, Roberto; Rizzon, Luca; Rossi, Maurizio; Sartori, Davide

    2016-01-02

    Monitoring environmental parameters in data centers is nowadays gathering increasing attention from industry, due to the need for high energy efficiency in cloud services. We present the design and characterization of an energy-neutral embedded wireless system, prototyped to perpetually monitor environmental parameters in servers and racks. It is powered by an energy harvesting module based on Thermoelectric Generators, which converts the heat dissipated by the servers. Starting from the empirical characterization of the energy harvester, we present a power conditioning circuit optimized for the specific application. The whole system has been enhanced with several sensors. An ultra-low-power micro-controller stacked over the energy harvester provides efficient power management. Performance has been assessed and compared with the analytical model for validation.

  1. Distributed metadata servers for cluster file systems using shared low latency persistent key-value metadata store

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.

    A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
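
    The abstract storage interface can be pictured as a thin layer that all metadata servers share, so the backing key-value store is swappable. The Python below is a minimal sketch; the class names and the key format are assumptions, not the patent's implementation.

        class KeyValueStore:
            """Interface expected of any shared persistent key-value store."""
            def put(self, key, value): raise NotImplementedError
            def get(self, key): raise NotImplementedError

        class InMemoryStore(KeyValueStore):
            """Stand-in for a low-latency shared store during testing."""
            def __init__(self): self._d = {}
            def put(self, key, value): self._d[key] = value
            def get(self, key): return self._d[key]

        class MetadataServer:
            def __init__(self, store):
                self.store = store                       # shared by many servers

            def set_attr(self, path, attr, value):
                self.store.put(f"{path}#{attr}", value)  # key-value format

            def get_attr(self, path, attr):
                return self.store.get(f"{path}#{attr}")

        shared = InMemoryStore()
        mds1, mds2 = MetadataServer(shared), MetadataServer(shared)
        mds1.set_attr("/data/file1", "size", b"4096")
        print(mds2.get_attr("/data/file1", "size"))      # b'4096' via the shared store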

  2. GSM module for wireless radiation monitoring system via SMS

    NASA Astrophysics Data System (ADS)

    Rahman, Nur Aira Abd; Hisyam Ibrahim, Noor; Lombigit, Lojius; Azman, Azraf; Jaafar, Zainudin; Arymaswati Abdullah, Nor; Hadzir Patai Mohamad, Glam

    2018-01-01

    A customised Global System for Mobile communication (GSM) module is designed for wireless radiation monitoring through the Short Messaging Service (SMS). This module is able to receive serial data from radiation monitoring devices such as a survey meter or area monitor and transmit the data as text SMS to a host server. It provides two-way communication for data transmission, status queries, and configuration setup. The module hardware consists of a GSM module, a voltage level shifter, a SIM circuit and an Atmega328P microcontroller. The microcontroller provides control for sending, receiving and AT command processing for the GSM module. The firmware is responsible for handling tasks related to communication between the device and the host server. It processes all incoming SMS, extracts and stores new configurations from the host, transmits alert/notification SMS when the radiation data reaches or exceeds the threshold value, and transmits SMS data at fixed intervals according to the configuration. Integration of this module with a radiation survey/monitoring device creates a mobile and wireless radiation monitoring system with prompt emergency alerts at high radiation levels.
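
    The firmware's reporting logic can be sketched in Python with pySerial standing in for the Atmega328P firmware: poll a dose value and push an SMS through the GSM module with the standard AT commands (AT+CMGF, AT+CMGS). The serial port, threshold, and phone number are assumptions.

        import time
        import serial  # pySerial; the port name below is an assumption

        THRESHOLD = 2.5  # uSv/h, illustrative alert level
        gsm = serial.Serial("/dev/ttyS0", 9600, timeout=2)

        def send_sms(number, text):
            gsm.write(b"AT+CMGF=1\r")                  # select SMS text mode
            time.sleep(0.5)
            gsm.write(f'AT+CMGS="{number}"\r'.encode())
            time.sleep(0.5)
            gsm.write(text.encode() + b"\x1a")         # Ctrl+Z terminates the SMS

        def report(dose_rate):
            if dose_rate >= THRESHOLD:                 # prompt emergency alert
                send_sms("+60123456789", f"ALERT dose={dose_rate:.2f} uSv/h")
            else:                                      # routine interval report
                send_sms("+60123456789", f"dose={dose_rate:.2f} uSv/h")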

  3. CrossVit: enhancing canopy monitoring management practices in viticulture.

    PubMed

    Matese, Alessandro; Vaccari, Francesco Primo; Tomasi, Diego; Di Gennaro, Salvatore Filippo; Primicerio, Jacopo; Sabatini, Francesco; Guidoni, Silvia

    2013-06-13

    A new wireless sensor network (WSN), called CrossVit and based on MEMSIC products, has been tested for two growing seasons in two vineyards in Italy. The aims are to evaluate the monitoring performance of the new WSN directly in the vineyard and to collect air temperature, air humidity and solar radiation data to support vineyard management practices. The WSN consists of various levels: the Master/Gateway level coordinates the WSN and performs data aggregation; the Farm/Server level takes care of storing data on a server, data processing and graphic rendering; the Nodes level is based on a network of peripheral nodes, each consisting of an MDA300 sensor board and an Iris module and equipped with thermistors for air temperature, photodiodes for global and diffuse solar radiation, and an HTM2500LF sensor for relative humidity. The communication levels are: WSN links between gateways and sensor nodes via ZigBee, and long-range GSM/GPRS links between gateways and the server farm level. The system was able to monitor the agrometeorological parameters in the vineyard (solar radiation, air temperature and air humidity), detecting the differences between the canopy treatments applied. The performance of CrossVit, in terms of monitoring and reliability of the system, has been evaluated considering its handiness, cost-effectiveness, non-invasive dimensions and low power consumption.

  4. CrossVit: Enhancing Canopy Monitoring Management Practices in Viticulture

    PubMed Central

    Matese, Alessandro; Vaccari, Francesco Primo; Tomasi, Diego; Di Gennaro, Salvatore Filippo; Primicerio, Jacopo; Sabatini, Francesco; Guidoni, Silvia

    2013-01-01

    A new wireless sensor network (WSN), called CrossVit and based on MEMSIC products, has been tested for two growing seasons in two vineyards in Italy. The aims are to evaluate the monitoring performance of the new WSN directly in the vineyard and to collect air temperature, air humidity and solar radiation data to support vineyard management practices. The WSN consists of various levels: the Master/Gateway level coordinates the WSN and performs data aggregation; the Farm/Server level takes care of storing data on a server, data processing and graphic rendering; the Nodes level is based on a network of peripheral nodes, each consisting of an MDA300 sensor board and an Iris module and equipped with thermistors for air temperature, photodiodes for global and diffuse solar radiation, and an HTM2500LF sensor for relative humidity. The communication levels are: WSN links between gateways and sensor nodes via ZigBee, and long-range GSM/GPRS links between gateways and the server farm level. The system was able to monitor the agrometeorological parameters in the vineyard (solar radiation, air temperature and air humidity), detecting the differences between the canopy treatments applied. The performance of CrossVit, in terms of monitoring and reliability of the system, has been evaluated considering its handiness, cost-effectiveness, non-invasive dimensions and low power consumption. PMID:23765273

  5. A web-based approach for electrocardiogram monitoring in the home.

    PubMed

    Magrabi, F; Lovell, N H; Celler, B G

    1999-05-01

    A Web-based electrocardiogram (ECG) monitoring service in which a longitudinal clinical record is used for the management of patients is described. The Web application is used to collect clinical data from the patient's home. A database on the server acts as a central repository where this clinical information is stored. A Web browser provides access to the patient's records and ECG data. We discuss the technologies used to automate the retrieval and storage of clinical data from a patient database, and the recording and reviewing of clinical measurement data. On the client's Web browser, ActiveX controls embedded in the Web pages provide a link between the various components: the Web server, the Web page, the specialised client-side ECG review and acquisition software, and the local file system. The ActiveX controls also implement FTP functions to retrieve and submit clinical data to and from the server. An intelligent software agent on the server is activated whenever new ECG data is sent from the home. The agent compares historical data with the newly acquired data. Using this method, an optimum patient care strategy can be evaluated, and a summarised report, along with reminders and suggestions for action, is sent to the doctor and patient by email.
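
    The agent's comparison of new and historical data might look like the following sketch, which flags a new mean heart rate that departs from the patient's baseline. The statistic and the two-sigma threshold are illustrative assumptions; the paper does not specify the agent's rules.

        from statistics import mean, stdev

        def review(new_hr, history_hr):
            """Flag the new mean heart rate if it departs from the baseline."""
            mu, sigma = mean(history_hr), stdev(history_hr)
            z = (new_hr - mu) / sigma if sigma else 0.0
            if abs(z) > 2:
                return f"REVIEW: HR {new_hr} vs baseline {mu:.0f}+/-{sigma:.0f}"
            return f"OK: HR {new_hr} within baseline {mu:.0f}+/-{sigma:.0f}"

        history = [72, 75, 70, 74, 73, 71]
        print(review(96, history))   # -> REVIEW: ... (then e-mailed to the doctor)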

  6. Disaster recovery plan for HANDI 2000 business management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, D.E.

    The BMS production implementation will be complete by October 1, 1998, and the server environment will be comprised of two types of platforms. The PassPort Supply and PeopleSoft Financials systems will reside on UNIX servers, and the PeopleSoft Human Resources and Payroll systems will reside on Microsoft NT servers. Because of the wide scope, and the requirements of the COTS products to run in various environments, backup and recovery responsibilities are divided between two groups in Technical Operations. The Central Computer Systems Management group provides support for the UNIX/NT backup data center, and the Network Infrastructure Systems group provides support for the NT application server backup outside the data center. The disaster recovery process is dependent on a good backup and recovery process. Information and integrated system data for determining the disaster recovery process are identified from the Fluor Daniel Hanford (FDH) Risk Assessment Plan, Contingency Plan, Backup and Recovery Plan, and Backup Form for HANDI 2000 BMS.

  7. A Server-Based Mobile Coaching System

    PubMed Central

    Baca, Arnold; Kornfeind, Philipp; Preuschl, Emanuel; Bichler, Sebastian; Tampier, Martin; Novatchkov, Hristo

    2010-01-01

    A prototype system for monitoring, transmitting and processing performance data in sports for the purpose of providing feedback has been developed. During training, athletes are equipped with a mobile device and wireless sensors using the ANT protocol in order to acquire biomechanical, physiological and other sports specific parameters. The measured data is buffered locally and forwarded via the Internet to a server. The server provides experts (coaches, biomechanists, sports medicine specialists etc.) with remote data access, analysis and (partly automated) feedback routines. In this way, experts are able to analyze the athlete’s performance and return individual feedback messages from remote locations. PMID:22163490

  8. An efficient and secure dynamic ID-based authentication scheme for telecare medical information systems.

    PubMed

    Chen, Hung-Ming; Lo, Jung-Wen; Yeh, Chang-Kuo

    2012-12-01

    The rapidly increased availability of always-on broadband telecommunication environments and lower-cost vital signs monitoring devices bring the advantages of telemedicine directly into the patient's home. Hence, the control of access to remote medical servers' resources has become a crucial challenge. A secure authentication scheme between the medical server and remote users is therefore needed to safeguard data integrity and confidentiality and to ensure availability. Recently, many authentication schemes that use low-cost mobile devices have been proposed to meet these requirements. In contrast to previous schemes, Khan et al. proposed a dynamic ID-based remote user authentication scheme that reduces computational complexity and includes features such as a provision for the revocation of lost or stolen smart cards and a time expiry check for the authentication process. However, Khan et al.'s scheme has some security drawbacks. To remedy these, this study proposes an enhanced authentication scheme that overcomes the weaknesses inherent in Khan et al.'s scheme, and demonstrates that the scheme is more secure and robust for use in a telecare medical information system.
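
    To make the dynamic-ID idea concrete, the generic sketch below (not Khan et al.'s scheme nor the proposed one) derives a fresh login identifier from the identity, a shared secret, a nonce, and a timestamp, so the same user never presents the same identifier twice on the wire.

        import hashlib
        import os
        import time

        def dynamic_id(identity, shared_secret):
            """Return (digest, nonce, timestamp) for one login attempt."""
            nonce, ts = os.urandom(16), str(time.time()).encode()
            digest = hashlib.sha256(
                identity.encode() + shared_secret + nonce + ts).hexdigest()
            # The server, which knows the registered identity and secret,
            # recomputes the digest from (nonce, ts) and compares.
            return digest, nonce, ts

        did1 = dynamic_id("patient42", b"secret-key")[0]
        did2 = dynamic_id("patient42", b"secret-key")[0]
        print(did1 != did2)   # True: the transmitted ID changes every login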

  9. Exploiting volatile opportunistic computing resources with Lobster

    NASA Astrophysics Data System (ADS)

    Woodard, Anna; Wolf, Matthias; Mueller, Charles; Tovar, Ben; Donnelly, Patrick; Hurtado Anampa, Kenyi; Brenner, Paul; Lannon, Kevin; Hildreth, Mike; Thain, Douglas

    2015-12-01

    Analysis of high energy physics experiments using the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC) can be limited by availability of computing resources. As a joint effort involving computer scientists and CMS physicists at Notre Dame, we have developed an opportunistic workflow management tool, Lobster, to harvest available cycles from university campus computing pools. Lobster consists of a management server, file server, and worker processes which can be submitted to any available computing resource without requiring root access. Lobster makes use of the Work Queue system to perform task management, while the CMS specific software environment is provided via CVMFS and Parrot. Data is handled via Chirp and Hadoop for local data storage and XrootD for access to the CMS wide-area data federation. An extensive set of monitoring and diagnostic tools have been developed to facilitate system optimisation. We have tested Lobster using the 20 000-core cluster at Notre Dame, achieving approximately 8-10k tasks running simultaneously, sustaining approximately 9 Gbit/s of input data and 340 Mbit/s of output data.

  10. The Czech National Grid Infrastructure

    NASA Astrophysics Data System (ADS)

    Chudoba, J.; Křenková, I.; Mulač, M.; Ruda, M.; Sitera, J.

    2017-10-01

    The Czech National Grid Infrastructure is operated by MetaCentrum, a CESNET department responsible for coordinating and managing activities related to distributed computing. CESNET, as the Czech National Research and Education Network (NREN), provides many e-infrastructure services, which are used by 94% of the scientific and research community in the Czech Republic. Computing and storage resources owned by different organizations are connected by a sufficiently fast network to provide transparent access to all resources. We describe in more detail the computing infrastructure, which is based on several different technologies and covers grid, cloud and map-reduce environments. While the largest part of the CPUs is still accessible via distributed Torque servers providing an environment for long batch jobs, part of the infrastructure is available via standard EGI tools, a subset of NGI resources is provided to the EGI FedCloud environment with a cloud interface, and there is also a Hadoop cluster provided by the same e-infrastructure. A broad spectrum of computing servers is offered; users can choose from standard 2-CPU servers to large SMP machines with up to 6 TB of RAM or servers with GPU cards. Different groups have different priorities on various resources, and resource owners can even have exclusive access. The software is distributed via AFS. Storage servers offering up to tens of terabytes of disk space to individual users are connected via NFS4 on top of GPFS, and access to long-term HSM storage with petabyte capacity is also provided. An overview of available resources and recent usage statistics is given.

  11. Real-Time Data Management, IP Telemetry, Data Integration, and Data Center Operations for the Source Physics Experiment (SPE), Nevada National Security Site

    NASA Astrophysics Data System (ADS)

    Plank, G.; Slater, D.; Torrisi, J.; Presser, R.; Williams, M.; Smith, K. D.

    2012-12-01

    The Nevada Seismological Laboratory (NSL) manages time-series data and high-throughput IP telemetry for the National Center for Nuclear Security (NCNS) Source Physics Experiment (SPE), underway on the Nevada National Security Site (NNSS). During active-source experiments, SPE's heterogeneous systems record over 350 channels of a variety of data types including seismic, infrasound, acoustic, and electro-magnetic. During the interim periods, broadband and short period instruments record approximately 200 channels of continuous, high-sample-rate seismic data. Frequent changes in sensor and station configurations create a challenging meta-data environment. Meta-data account for complete operational histories, including sensor types, serial numbers, gains, sample rates, orientations, instrument responses, data-logger types, etc. To date, these catalogue 217 stations, over 40 different sensor types, and over 1000 unique recording configurations (epochs). Facilities for processing, backup, and distribution of time-series data currently span four Linux servers, 60 TB of disk capacity, and two data centers. Bandwidth, physical security, and redundant power and cooling systems for acquisition, processing, and backup servers are provided by NSL's Reno data center. The Nevada System of Higher Education (NSHE) System Computer Services (SCS) in Las Vegas provides similar facilities for the distribution server. NSL staff handle setup, maintenance, and security of all data management systems. SPE PIs have remote access to meta-data, raw data, and CSS3.0 compilations, via SSL-based transfers such as rsync or secure-copy, as well as shell access for data browsing and limited processing. Meta-data are continuously updated and posted on the Las Vegas distribution server as station histories are better understood and errors are corrected. Raw time series and refined CSS3.0 data compilations with standardized formats are transferred to the Las Vegas data server as available. For better data availability and station monitoring, SPE is beginning to leverage NSL's wide-area digital IP network with nine SPE stations and six Rock Valley area stations that stream continuous recordings in real time to the NSL Reno data center. These stations, in addition to eight regional legacy stations supported by National Security Technologies (NSTec), are integrated with NSL's regional monitoring network and constrain a high-quality local earthquake catalog for NNSS. The telemetered stations provide critical capabilities for SPE, and infrastructure for earthquake response on NNSS as well as southern Nevada and the Las Vegas area.
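
    Pulls of this kind are easily automated with a small script. A sketch using rsync over SSH follows; the host and paths are hypothetical placeholders, not the actual SPE servers.

        import subprocess

        # Hypothetical mirror of refined CSS3.0 compilations; real host/paths differ.
        SRC = "user@data.example.edu:/archive/spe/css3.0/"
        DST = "/local/spe/css3.0/"

        # -a preserves attributes, -v is verbose, -z compresses; rsync only
        # transfers files that changed since the last pull.
        subprocess.run(["rsync", "-avz", SRC, DST], check=True)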

  12. LEMON - LHC Era Monitoring for Large-Scale Infrastructures

    NASA Astrophysics Data System (ADS)

    Marian, Babik; Ivan, Fedorko; Nicholas, Hook; Hector, Lansdale Thomas; Daniel, Lenkes; Miroslav, Siket; Denis, Waldron

    2011-12-01

    At the present time computer centres are facing a massive rise in virtualization and cloud computing, as these solutions bring advantages to service providers and consolidate the computer centre resources. As a result, however, monitoring complexity is increasing. Computer centre management requires not only monitoring servers, network equipment and associated software, but also collecting additional environment and facilities data (e.g. temperature, power consumption, cooling efficiency, etc.) to maintain a good overview of the infrastructure performance. The LHC Era Monitoring (Lemon) system addresses these requirements for a very large scale infrastructure. The Lemon agent, which collects data on every client and forwards the samples to the central measurement repository, provides a flexible interface that allows rapid development of new sensors. The system can also report on behalf of remote devices such as switches and power supplies. Online and historical data can be visualized via a web-based interface or retrieved via command-line tools. The Lemon Alarm System component can be used for notifying the operator about error situations. In this article, an overview of Lemon monitoring is provided together with a description of the CERN LEMON production instance. No direct comparison is made with other monitoring tools.
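
    The agent-to-repository flow can be sketched in a few lines: sample local metrics and forward them as timestamped JSON datagrams to a central collector. This is a generic illustration, not the actual Lemon agent or its wire protocol; the collector address is a hypothetical placeholder.

        import json, os, shutil, socket, time

        COLLECTOR = ("collector.example.org", 9099)   # hypothetical repository address

        def sample() -> dict:
            load1, _, _ = os.getloadavg()             # Unix-only load average
            disk = shutil.disk_usage("/")
            return {"host": socket.gethostname(),
                    "ts": int(time.time()),
                    "load1": load1,
                    "disk_used_pct": 100.0 * disk.used / disk.total}

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(json.dumps(sample()).encode(), COLLECTOR)
            time.sleep(60)                            # one sample per minute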

  13. IMIS desktop & smartphone software solutions for monitoring spacecrafts' payload from anywhere

    NASA Astrophysics Data System (ADS)

    Baroukh, J.; Queyrut, O.; Airaud, J.

    In recent years, the demand for remote satellite operations has increased, driven on the one hand by the will to reduce operations costs (on-call operators outside business hours), and on the other hand by the development of cooperative space missions resulting in a worldwide distribution of engineers and science team members. Only a few off-the-shelf solutions exist to fulfill the need for remote payload monitoring, and they mainly use proprietary devices. The recent advent of mobile technologies (laptops, smartphones and tablets), as well as the worldwide deployment of broadband networks (3G, Wi-Fi hotspots), has opened a technical window that brings new options. As part of the Mars Science Laboratory (MSL) mission, the Centre National d'Etudes Spatiales (CNES, the French space agency) has developed a new software solution for monitoring spacecraft payloads. The Instrument Monitoring Interactive Software (IMIS) offers state-of-the-art operational features for payload monitoring and can be accessed remotely. It was conceived as a generic tool that can be used for heterogeneous payloads and missions. IMIS was designed as a classical client/server architecture. The server is hosted at CNES and acts as a data provider, while two different kinds of clients are available depending on the level of mobility required. The first one is a rich client application, built on the Eclipse framework, which can be installed on usual operating systems and communicates with the server through the Internet. The second one is a smartphone application for any Android platform, connected to the server via the mobile broadband network or a Wi-Fi connection. This second client is mainly devoted to on-call operations and thus only contains a subset of the IMIS functionalities. This paper describes the operational context, including security aspects, that led to the development of IMIS, presents the selected software architecture and details the various features of both clients: the desktop and the smartphone applications.

  14. Software Re-Engineering of the Human Factors Analysis and Classification System - (Maintenance Extension) Using Object Oriented Methods in a Microsoft Environment

    DTIC Science & Technology

    2001-09-01

    ...replication) -- all from Visual Basic and VBA. In fact, we found that the SQL Server engine actually had a plethora of options, most formidable of... versions of Microsoft products? Specifically, the pending release of Microsoft Office 2002, the new SQL Server 2000 database engine, and Microsoft Visual Basic.NET. This thesis describes our use of the Spiral Development Model to...

  15. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    NASA Astrophysics Data System (ADS)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large Scientific Equipments are controlled by Computer Systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those systems dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose Multi-layer System.

  16. Experiences with DCE: the pro7 communication server based on OSF-DCE functionality.

    PubMed

    Schulte, M; Lordieck, W

    1997-01-01

    The pro7 communication server is a new approach to managing communication between different applications on different hardware platforms in a hospital environment. Its most important features are the use of OSF/DCE for realising remote procedure calls between different platforms, the use of an SQL-92-compatible relational database, and the design of a new software development tool (called the protocol definition language compiler) for describing the interface of a new application that is to be integrated into a hospital environment.

  17. ClusterControl: a web interface for distributing and monitoring bioinformatics applications on a Linux cluster.

    PubMed

    Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko

    2004-03-22

    ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables the integration of command-line-oriented programs into the application framework of ClusterControl. The system facilitates the integration of different applications, accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies like Apache as the web server, PHP as the server-side scripting language and OpenPBS as the queuing system, and is available free of charge for academic and non-profit institutions. http://genome.tugraz.at/Software/ClusterControl
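
    Integrating a command-line-oriented program into a queuing system typically means generating a job script and handing it to the scheduler. A minimal sketch for a PBS/OpenPBS-style queue follows; the job name, resource request and the analysis command are placeholders, not ClusterControl's actual wrappers.

        import subprocess, tempfile

        JOB = """#!/bin/sh
        #PBS -N blast_job
        #PBS -l nodes=1:ppn=2
        cd $PBS_O_WORKDIR
        ./run_analysis --input sample.fasta
        """

        with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
            f.write(JOB)
            script = f.name

        # qsub prints the assigned job ID, which a web front end can store
        # in order to poll the job's status later.
        job_id = subprocess.run(["qsub", script], capture_output=True,
                                text=True, check=True).stdout.strip()
        print("submitted:", job_id)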

  18. Graphic Server: A real time system for displaying and monitoring telemetry data of several satellites

    NASA Technical Reports Server (NTRS)

    Douard, Stephane

    1994-01-01

    Known as a Graphic Server, the system presented was designed for the control ground segment of the Telecom 2 satellites. It is a tool used to dynamically display telemetry data within graphic pages, also known as views. The views are created off-line through various utilities and then, on the operator's request, displayed and animated in real time as data is received. The system was designed as an independent component, and is installed in different Telecom 2 operational control centers. It enables operators to monitor changes in the platform and satellite payloads in real time. It has been in operation since December 1991.

  19. Plotting a New Course for Metasearch

    ERIC Educational Resources Information Center

    Breeding, Marshall

    2005-01-01

    Today's world demands an expansive search environment. The universe of information resources is immense and is growing rapidly. The content needed for research and scholarship is dispersed among publishers, aggregators, repositories, library catalogs, e-print servers, and servers throughout the Web. Users do not want to jump from one interface to…

  20. MDA-image: an environment of networked desktop computers for teleradiology/pathology.

    PubMed

    Moffitt, M E; Richli, W R; Carrasco, C H; Wallace, S; Zimmerman, S O; Ayala, A G; Benjamin, R S; Chee, S; Wood, P; Daniels, P

    1991-04-01

    MDA-Image, a project of The University of Texas M. D. Anderson Cancer Center, is an environment of networked desktop computers for teleradiology/pathology. Radiographic film is digitized with a film scanner and histopathologic slides are digitized using a red, green, and blue (RGB) video camera connected to a microscope. Digitized images are stored on a data server connected to the institution's computer communication network (Ethernet) and can be displayed from authorized desktop computers connected to Ethernet. Images are digitized for cases presented at the Bone Tumor Management Conference, a multidisciplinary conference in which treatment options are discussed among clinicians, surgeons, radiologists, pathologists, radiotherapists, and medical oncologists. These radiographic and histologic images are shown on a large screen computer monitor during the conference. They are available for later review for follow-up or representation.

  1. Automatic fall detection using wearable biomedical signal measurement terminal.

    PubMed

    Nguyen, Thuy-Trang; Cho, Myeong-Chan; Lee, Tae-Soo

    2009-01-01

    In our study, we developed a mobile waist-mounted device which can monitor the subject's acceleration signal, detect fall events in real time with high accuracy, and automatically send an emergency message to a remote server via a CDMA module. When a fall event happens, the system also generates an alarm sound at 50 Hz to alert other people until the subject can sit up or stand up. A Kionix KXM52-1050 tri-axial accelerometer and a Bellwave BSM856 CDMA standalone modem were used to detect and manage fall events. We used not only a simple threshold algorithm but also some supporting methods to increase the accuracy of our system (nearly 100% in a laboratory environment). Timely fall detection can prevent regrettable deaths due to the long-lie effect, and therefore increases the independence of elderly people in an unsupervised living environment.
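
    A simple threshold algorithm of the kind mentioned can be sketched as follows: compute the acceleration magnitude and flag a free-fall phase (magnitude well below 1 g) followed shortly by an impact spike. The thresholds and window below are illustrative, not the paper's tuned values.

        import math

        G = 9.81
        FREE_FALL = 0.4 * G      # magnitude below this suggests free fall
        IMPACT = 2.5 * G         # spike above this suggests impact
        WINDOW = 20              # samples allowed between the two phases

        def detect_fall(samples):
            """samples: iterable of (ax, ay, az) in m/s^2 from the tri-axial sensor."""
            last_free_fall = None
            for i, (ax, ay, az) in enumerate(samples):
                mag = math.sqrt(ax*ax + ay*ay + az*az)
                if mag < FREE_FALL:
                    last_free_fall = i
                elif (mag > IMPACT and last_free_fall is not None
                        and i - last_free_fall <= WINDOW):
                    return True   # free fall followed by impact: report fall event
            return False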

  2. Operational Experience with the Frontier System in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter

    2012-06-20

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.

  3. Operational Experience with the Frontier System in CMS

    NASA Astrophysics Data System (ADS)

    Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen

    2012-12-01

    The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.

  4. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.

    PubMed

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-11-24

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  5. A Monitoring System for Vegetable Greenhouses based on a Wireless Sensor Network

    PubMed Central

    Li, Xiu-hong; Cheng, Xiao; Yan, Ke; Gong, Peng

    2010-01-01

    A wireless sensor network-based automatic monitoring system is designed for monitoring the life conditions of greenhouse vegetables. The complete system architecture includes a group of sensor nodes, a base station, and an internet data center. For the design of the wireless sensor nodes, the JN5139 microprocessor is adopted as the core component and the Zigbee protocol is used for wireless communication between nodes. With an ARM7 microprocessor and the embedded ZKOS operating system, a proprietary gateway node is developed to achieve data influx, screen display, system configuration and GPRS-based remote data forwarding. Through a client/server mode, the management software for the remote data center achieves real-time data distribution and time-series analysis. In addition, a GSM short-message-based interface is developed for sending real-time environmental measurements, and for alarming when a measurement is beyond some pre-defined threshold. The whole system has been tested for over one year and satisfactory results have been observed, which indicate that this system is very useful for greenhouse environment monitoring. PMID:22163391
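
    The threshold-based alarming reduces to a simple check at the data center: compare each incoming measurement against pre-defined bounds and emit an alert message. Here the alert is printed as a stand-in for the GSM short-message interface; the bounds and field names are illustrative.

        # Pre-defined thresholds per measured quantity (illustrative values).
        THRESHOLDS = {"air_temp_C": (10.0, 35.0),
                      "humidity_pct": (40.0, 90.0),
                      "soil_moisture_pct": (20.0, 80.0)}

        def check(node_id: str, reading: dict) -> list:
            alerts = []
            for key, value in reading.items():
                lo, hi = THRESHOLDS[key]
                if not lo <= value <= hi:
                    # Stand-in for sending a GSM short message.
                    alerts.append(f"ALERT {node_id}: {key}={value} outside [{lo}, {hi}]")
            return alerts

        print(check("node-07", {"air_temp_C": 38.2, "humidity_pct": 55.0,
                                "soil_moisture_pct": 18.5}))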

  6. A mobile phone-based ECG monitoring system.

    PubMed

    Iwamoto, Junichi; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Hahn, Allen W; Caldwell, W Morton

    2006-01-01

    We have developed a telemedicine system for monitoring a patient's electrocardiogram during daily activities. The recording system consists of three ECG chest electrodes, a variable gain instrumentation amplifier, a low power 8-bit single-chip microcomputer, a 256 KB EEPROM and a 2.4 GHz low transmitting power mobile phone (PHS). The complete system is mounted on a single, lightweight, chest electrode array. When heart discomfort is felt, the patient pushes the data transmission switch on the recording system. The system sends the ECG waveforms recorded during the two minutes before and the two minutes after the switch is pressed directly to the hospital server computer via the PHS. The server computer sends the data to the physician on call. The data is displayed on the doctor's Java mobile phone LCD (Liquid Crystal Display), so he or she can monitor the ECG regardless of location. The developed ECG monitoring system is not only applicable to at-home patients, but should also be useful for monitoring hospital patients.
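
    Capturing "two minutes before and two minutes after the button press" is naturally implemented with a ring buffer that always holds the most recent pre-event window. A minimal sketch follows; the sampling rate is an assumed value, not taken from the paper.

        from collections import deque

        FS = 250                        # assumed ECG sampling rate (samples/s)
        TWO_MIN = 2 * 60 * FS

        pre = deque(maxlen=TWO_MIN)     # always holds the most recent two minutes
        post = []
        event = False

        def on_button_press():
            global event, post
            event, post = True, []      # freeze the pre-event window

        def on_sample(x):
            """Feed each ECG sample; returns the 4-minute episode once complete."""
            global event
            if not event:
                pre.append(x)
                return None
            post.append(x)
            if len(post) == TWO_MIN:
                event = False
                return list(pre) + post  # episode to transmit to the hospital server
            return None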

  7. Informatics in radiology (infoRAD): A complete continuous-availability PACS archive server.

    PubMed

    Liu, Brent J; Huang, H K; Cao, Fei; Zhou, Michael Z; Zhang, Jianguo; Mogel, Greg

    2004-01-01

    The operational reliability of the picture archiving and communication system (PACS) server in a filmless hospital environment is always a major concern because server failure could cripple the entire PACS operation. A simple, low-cost, continuous-availability (CA) PACS archive server was designed and developed. The server makes use of a triple modular redundancy (TMR) system with a simple majority voting logic that automatically identifies a faulty module and removes it from service. The remaining two modules continue normal operation with no adverse effects on data flow or system performance. In addition, the server is integrated with two external mass storage devices for short- and long-term storage. Evaluation and testing of the server were conducted with laboratory experiments in which hardware failures were simulated to observe recovery time and the resumption of normal data flow. The server provides maximum uptime (99.999%) for end users while ensuring the transactional integrity of all clinical PACS data. Hardware failure has only minimal impact on performance, with no interruption of clinical data flow or loss of data. As hospital PACS become more widespread, the need for CA PACS solutions will increase. A TMR CA PACS archive server can reliably help achieve CA in this setting. Copyright RSNA, 2004
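
    The TMR voting logic can be sketched directly: three modules process the same request, a simple majority vote selects the output, and any disagreeing module is flagged for removal from service. This is a conceptual illustration under assumed interfaces, not the product's implementation.

        def tmr_vote(outputs):
            """outputs: results from the three redundant modules."""
            a, b, c = outputs
            if a == b or a == c:
                majority = a
            elif b == c:
                majority = b
            else:
                raise RuntimeError("no majority: triple disagreement")
            # A module that disagrees with the majority is taken out of service;
            # the remaining two continue normal operation.
            faulty = [i for i, out in enumerate(outputs) if out != majority]
            return majority, faulty

        result, faulty = tmr_vote(["stored", "stored", "I/O error"])
        print(result, "faulty modules:", faulty)   # stored faulty modules: [2]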

  8. Request queues for interactive clients in a shared file system of a parallel computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin

    Interactive requests are processed from users of log-in nodes. A metadata server node is provided for use in a file system shared by one or more interactive nodes and one or more batch nodes. The interactive nodes comprise interactive clients to execute interactive tasks and the batch nodes execute batch jobs for one or more batch clients. The metadata server node comprises a virtual machine monitor; an interactive client proxy to store metadata requests from the interactive clients in an interactive client queue; a batch client proxy to store metadata requests from the batch clients in a batch client queue; and a metadata server to store the metadata requests from the interactive client queue and the batch client queue in a metadata queue based on an allocation of resources by the virtual machine monitor. The metadata requests can be prioritized, for example, based on one or more of a predefined policy and predefined rules.
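
    The proxy/queue arrangement can be illustrated as a weighted drain of two queues into one metadata queue, where the resource allocation decides how many interactive requests are served per batch request. The weight and request strings below are illustrative, not from the patent.

        from collections import deque

        interactive_q = deque()   # filled by the interactive client proxy
        batch_q = deque()         # filled by the batch client proxy
        metadata_q = deque()      # drained by the metadata server

        def schedule(interactive_per_batch: int = 4):
            """Move requests into the metadata queue, favoring interactive
            clients according to the allocated resources."""
            while interactive_q or batch_q:
                for _ in range(interactive_per_batch):
                    if interactive_q:
                        metadata_q.append(interactive_q.popleft())
                if batch_q:
                    metadata_q.append(batch_q.popleft())

        interactive_q.extend(["stat /home/a", "open /home/b"])
        batch_q.extend(["create /scratch/job1.out"])
        schedule()
        print(list(metadata_q))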

  9. The software analysis project for the Office of Human Resources

    NASA Technical Reports Server (NTRS)

    Tureman, Robert L., Jr.

    1994-01-01

    There were two major sections of the project for the Office of Human Resources (OHR). The first section was a planning study to analyze software use, with the goal of recommending software purchases and determining whether the need exists for a file server. The second section was analysis and distribution planning for a retirement planning computer program entitled VISION, provided by NASA Headquarters. The software planning study was developed to help OHR analyze the current administrative desktop computing environment and make decisions regarding software acquisition and implementation. Three major areas were addressed by the study: the current environment, new software requirements, and strategies regarding the implementation of a server in the Office. To gather data on the current environment, employees were surveyed and an inventory of computers was produced. The surveys were compiled and analyzed by the ASEE fellow, with interpretation help from OHR staff. New software requirements represented a compilation and analysis of the surveyed requests of OHR personnel. Finally, the information on the use of a server represents research done by the ASEE fellow and analysis of survey data to determine software requirements for a server. This included selection of a methodology to estimate the number of copies of each software program required, given current use and estimated growth. The report presents the results of the computing survey, a description of the current computing environment, recommendations for changes in the computing environment, current software needs, management advantages of using a server, and management considerations in the implementation of a server. In addition, detailed specifications were presented for the hardware and software recommendations to offer a complete picture to OHR management. The retirement planning computer program available to NASA employees will aid in long-range retirement planning. The intended audience is the NASA civil service employee with several years until retirement. The employee enters current salary and savings information as well as goals concerning salary at retirement, assumptions on inflation, and the return on investments. The program produces a picture of the employee's retirement income from all sources based on the assumptions entered. A session showing features of the program was conducted for key personnel at the Center. After analysis, it was decided to offer the program through the Learning Center starting in August 1994.

  10. Verifying the secure setup of UNIX client/servers and detection of network intrusion

    NASA Astrophysics Data System (ADS)

    Feingold, Richard; Bruestle, Harry R.; Bartoletti, Tony; Saroyan, R. A.; Fisher, John M.

    1996-03-01

    This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today's global 'Infosphere' presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, applications, and the like stores, transmits, and processes information. This information is now recognized as an asset to governments, corporations, and individuals alike, and it must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to check their security configuration. SPI's broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI's use within the Department of Energy and Department of Defense has resulted in more secure systems, less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on an Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID's retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks on their systems and networks.

  11. EVAcon: a protein contact prediction evaluation service

    PubMed Central

    Graña, Osvaldo; Eyrich, Volker A.; Pazos, Florencio; Rost, Burkhard; Valencia, Alfonso

    2005-01-01

    Here we introduce EVAcon, an automated web service that evaluates the performance of contact prediction servers. Currently, EVAcon is monitoring nine servers, four of which are specialized in contact prediction and five of which are general structure prediction servers. Results are compared for all newly determined experimental structures deposited into PDB (∼5–50 per week). EVAcon allows for a precise comparison of the results based on a system of common protein subsets and the commonly accepted evaluation criteria that are also used in the corresponding category of the CASP assessment. EVAcon is a new service added to the functionality of the EVA system for the continuous evaluation of protein structure prediction servers. The new service is accessible from any of the three EVA mirrors: PDG (CNB-CSIC, Madrid); CUBIC (Columbia University, NYC); and Sali Lab (UCSF, San Francisco). PMID:15980486

  12. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution which permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput that matches the emerging high performance disk technologies, such as RAID, parallel head transfer devices and software striping; support for standard network and file system applications using a SOCKETS-based application program interface, such as FTP, rcp, rdump, etc.; support for access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  13. Providing the Persistent Data Storage in a Software Engineering Environment Using Java/CORBA and a DBMS

    NASA Technical Reports Server (NTRS)

    Dhaliwal, Swarn S.

    1997-01-01

    An investigation was undertaken to build the software foundation for the WHERE (Web-based Hyper-text Environment for Requirements Engineering) project. The TCM (Toolkit for Conceptual Modeling) was chosen as the foundation software for the WHERE project, which aims to provide an environment for facilitating collaboration among geographically distributed people involved in the Requirements Engineering process. The TCM is a collection of diagram and table editors and has been implemented in the C++ programming language. The C++ implementation of the TCM was translated into Java in order to allow the editors to be used for building various functionality of the WHERE project; the WHERE project intends to use the Web as its communication backbone. One of the limitations of the translated software (TcmJava), which militated against its use in the WHERE project, was the persistent data management mechanism it inherited from the original TCM, which was designed to be used in standalone applications. Before TcmJava editors could be used as part of the multi-user, geographically distributed applications of the WHERE project, a persistent storage mechanism had to be built which would allow data communication over the Internet, using the capabilities of the Web. An approach involving features of Java, CORBA (Common Object Request Broker Architecture), the Web, a middleware (Java Relational Binding (JRB)), and a database server was used to build the persistent data management infrastructure for the WHERE project. The developed infrastructure allows a TcmJava editor to be downloaded and run from a network host by using a JDK 1.1 (Java Developer's Kit) compatible Web browser. The aforementioned editor establishes a connection with a server by using the ORB (Object Request Broker) software and stores/retrieves data in/from the server. The server consists of one or more CORBA objects, depending upon whether the data is to be made persistent on a single server or multiple servers. The CORBA object providing the persistent data server is implemented using the Java programming language. It uses the JRB to store/retrieve data in/from a relational database server. The persistent data management system provides transaction and user management facilities which allow multi-user, distributed access to the stored data in a secure manner.

  14. The Battle Command Sustainment Support System: Initial Analysis Report

    DTIC Science & Technology

    2016-09-01

    ...diagnostic monitoring, asynchronous commits, and others. The other components of the NEDP include a main forwarding gateway/web server and one or more... NATIONAL ENTERPRISE DATA PORTAL ANALYSIS: The NEDP is comprised of an Oracle Database 10g, referred to as the National Data Server, and several other... data forwarding gateways (DFG). Together with the Oracle Database 10g, these components provide a heterogeneous data source that aligns various data...

  15. Privacy-Preserving Electrocardiogram Monitoring for Intelligent Arrhythmia Detection.

    PubMed

    Son, Junggab; Park, Juyoung; Oh, Heekuck; Bhuiyan, Md Zakirul Alam; Hur, Junbeom; Kang, Kyungtae

    2017-06-12

    Long-term electrocardiogram (ECG) monitoring, as a representative application of cyber-physical systems, facilitates the early detection of arrhythmia. A considerable number of previous studies have explored monitoring techniques and the automated analysis of sensing data. However, ensuring patient privacy or confidentiality has not been a primary concern in ECG monitoring. First, we propose an intelligent heart monitoring system, which involves a patient-worn ECG sensor (e.g., a smartphone) and a remote monitoring station, as well as a decision support server that interconnects these components. The decision support server analyzes the heart activity, using the Pan-Tompkins algorithm to detect heartbeats and a decision tree to classify them. Our system protects sensing data and user privacy, which is an essential attribute of dependability, by adopting signal scrambling and anonymous identity schemes. We also employ a public key cryptosystem to enable secure communication between the entities. Simulations using data from the MIT-BIH arrhythmia database demonstrate that our system achieves a 95.74% success rate in heartbeat detection and almost a 96.63% accuracy in heartbeat classification, while successfully preserving privacy and securing communications among the involved entities.
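
    The Pan-Tompkins pipeline (band-pass filtering, derivative, squaring, moving-window integration, adaptive thresholding) can be sketched in simplified form. The version below uses a fixed threshold instead of the adaptive one and illustrative window sizes; it is not the paper's implementation.

        def detect_beats(ecg, fs=360):
            """Simplified Pan-Tompkins-style R-peak detection; ecg is a list of floats."""
            # The derivative emphasizes the steep QRS slopes.
            deriv = [ecg[i + 1] - ecg[i - 1] for i in range(1, len(ecg) - 1)]
            squared = [d * d for d in deriv]          # squaring rectifies and amplifies
            w = int(0.15 * fs)                        # ~150 ms integration window
            integ = [sum(squared[max(0, i - w):i + 1]) / w
                     for i in range(len(squared))]
            thresh = 0.5 * max(integ)                 # fixed threshold (real PT adapts it)
            refractory = int(0.2 * fs)                # ignore re-triggers within 200 ms
            beats, last = [], -refractory
            for i, v in enumerate(integ):
                if v > thresh and i - last > refractory:
                    beats.append(i)
                    last = i
            return beats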

  16. Privacy-Preserving Electrocardiogram Monitoring for Intelligent Arrhythmia Detection †

    PubMed Central

    Son, Junggab; Park, Juyoung; Oh, Heekuck; Bhuiyan, Md Zakirul Alam; Hur, Junbeom; Kang, Kyungtae

    2017-01-01

    Long-term electrocardiogram (ECG) monitoring, as a representative application of cyber-physical systems, facilitates the early detection of arrhythmia. A considerable number of previous studies have explored monitoring techniques and the automated analysis of sensing data. However, ensuring patient privacy or confidentiality has not been a primary concern in ECG monitoring. First, we propose an intelligent heart monitoring system, which involves a patient-worn ECG sensor (e.g., a smartphone) and a remote monitoring station, as well as a decision support server that interconnects these components. The decision support server analyzes the heart activity, using the Pan–Tompkins algorithm to detect heartbeats and a decision tree to classify them. Our system protects sensing data and user privacy, which is an essential attribute of dependability, by adopting signal scrambling and anonymous identity schemes. We also employ a public key cryptosystem to enable secure communication between the entities. Simulations using data from the MIT-BIH arrhythmia database demonstrate that our system achieves a 95.74% success rate in heartbeat detection and almost a 96.63% accuracy in heartbeat classification, while successfully preserving privacy and securing communications among the involved entities. PMID:28604628

  17. An Internet of Things based physiological signal monitoring and receiving system for virtual enhanced health care network.

    PubMed

    Rajan, J Pandia; Rajan, S Edward

    2018-01-01

    Designing a wireless physiological signal monitoring system with secure data communication is an important and dynamic problem in health care. We propose a signal monitoring system using NI myRIO connected to a wireless body sensor network through a multi-channel signal acquisition method. After server-side validation of the signal, the data held by the local server is updated to the cloud. The Internet of Things (IoT) architecture is used to give healthcare service providers mobile, fast access to patient data. This research work proposes a novel architecture for a wireless physiological signal monitoring system providing ubiquitous healthcare services through a virtual Internet of Things. We show improvements in the method of access and in real-time dynamic monitoring of physiological signals with this remote monitoring system using the virtual Internet of Things approach, and we evaluate the system against conventional methods. The proposed system is envisioned for modern smart health care, offering high utility and user friendliness in clinical applications. We claim that the proposed scheme significantly improves the accuracy of remote monitoring compared to other wireless communication methods in clinical systems.
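
    The local-server-to-cloud update can be sketched as a plain HTTPS POST of the validated measurement. The endpoint URL and payload fields below are hypothetical placeholders, not the paper's actual cloud interface.

        import json, urllib.request

        ENDPOINT = "https://cloud.example.org/api/v1/signals"   # hypothetical

        def push_to_cloud(patient_id: str, channel: str, samples: list) -> int:
            payload = json.dumps({"patient": patient_id,
                                  "channel": channel,
                                  "samples": samples}).encode()
            req = urllib.request.Request(ENDPOINT, data=payload,
                                         headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status   # success status expected on ingest

        # push_to_cloud("p-001", "ecg_lead_II", [0.12, 0.18, 0.31])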

  18. "Pack[superscript2]": VM Resource Scheduling for Fine-Grained Application SLAs in Highly Consolidated Environment

    ERIC Educational Resources Information Center

    Sukwong, Orathai

    2013-01-01

    Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…

  19. On delay adjustment for dynamic load balancing in distributed virtual environments.

    PubMed

    Deng, Yunhua; Lau, Rynson W H

    2012-04-01

    Distributed virtual environments (DVEs) have become very popular in recent years, due to the rapid growth of applications such as massively multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, the change in server load due to network delay affects the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that our proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead.

  20. Development and application of remote video monitoring system for combine harvester based on embedded Linux

    NASA Astrophysics Data System (ADS)

    Chen, Jin; Wang, Yifan; Wang, Xuelei; Wang, Yuehong; Hu, Rui

    2017-01-01

    Combine harvesters usually work in sparsely populated areas with harsh environments. In order to achieve remote real-time video monitoring of the working state of a combine harvester, a remote video monitoring system based on ARM11 and embedded Linux was developed. The system uses a USB camera to capture video of the working state of the main parts of the combine harvester, including the granary, threshing drum, cab and cutting table. The JPEG image compression standard is used to compress the video data, which is then transferred over the network to a remote monitoring center for long-range monitoring and management. The paper first describes the motivation for the system, then briefly introduces the realization of the hardware and software, and then details the configuration and compilation of the embedded Linux operating system and the compiling and porting of the video server program. Finally, the equipment was installed and commissioned on a combine harvester and the system was tested. In the experimental testing, the remote video monitoring system achieved 30 fps at a resolution of 800x600, with a response delay over the public network of about 40 ms.
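
    The capture-compress-transmit loop can be sketched with OpenCV as a stand-in for the embedded V4L2/C pipeline described in the paper. The camera index, server address, JPEG quality and length-prefixed framing are illustrative assumptions.

        import socket, struct
        import cv2  # pip install opencv-python

        SERVER = ("monitor.example.org", 8000)   # hypothetical monitoring center

        cap = cv2.VideoCapture(0)                # USB camera
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)

        sock = socket.create_connection(SERVER)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # JPEG-compress each frame, then send it length-prefixed.
            ok, jpg = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, 80])
            if ok:
                data = jpg.tobytes()
                sock.sendall(struct.pack(">I", len(data)) + data)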

  1. Robust biometrics based authentication and key agreement scheme for multi-server environments using smart cards.

    PubMed

    Lu, Yanrong; Li, Lixiang; Yang, Xing; Yang, Yixian

    2015-01-01

    Biometrics-authenticated schemes using smart cards have attracted much attention in multi-server environments. Several schemes of this type were proposed in the past. However, many of them were found to have design flaws. This paper concentrates on the security weaknesses of the three-factor authentication scheme by Mishra et al. After careful analysis, we find that their scheme does not really resist replay attacks and fails to provide an efficient password change phase. We further propose an improvement of Mishra et al.'s scheme with the purpose of preventing the security threats to their scheme. We demonstrate that the proposed scheme provides strong authentication against several attacks, including the attacks shown against the original scheme. In addition, we compare its performance and functionality with other multi-server authenticated key agreement schemes.

  2. Robust Biometrics Based Authentication and Key Agreement Scheme for Multi-Server Environments Using Smart Cards

    PubMed Central

    Lu, Yanrong; Li, Lixiang; Yang, Xing; Yang, Yixian

    2015-01-01

    Biometrics-authenticated schemes using smart cards have attracted much attention in multi-server environments. Several schemes of this type were proposed in the past. However, many of them were found to have design flaws. This paper concentrates on the security weaknesses of the three-factor authentication scheme by Mishra et al. After careful analysis, we find that their scheme does not really resist replay attacks and fails to provide an efficient password change phase. We further propose an improvement of Mishra et al.'s scheme with the purpose of preventing the security threats to their scheme. We demonstrate that the proposed scheme provides strong authentication against several attacks, including the attacks shown against the original scheme. In addition, we compare its performance and functionality with other multi-server authenticated key agreement schemes. PMID:25978373

  3. Application-level regression testing framework using Jenkins

    DOE PAGES

    Budiardja, Reuben; Bouvet, Timothy; Arnold, Galen

    2017-09-26

    Monitoring and testing for regression of large-scale systems such as the NCSA's Blue Waters supercomputer are challenging tasks. In this paper, we describe the solution we came up with to perform those tasks. The goal was to find an automated solution for running user-level regression tests to evaluate system usability and performance. Jenkins, an automation server software, was chosen for its versatility, large user base, and multitude of plugins, including collecting data and plotting test results over time. We also describe our Jenkins deployment to launch and monitor jobs on a remote HPC system, perform authentication with one-time passwords, and integrate with our LDAP server for its authorization. We show some use cases and describe our best practices for successfully using Jenkins as a user-level system-wide regression testing and monitoring framework for large supercomputer systems.

  4. Application-level regression testing framework using Jenkins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budiardja, Reuben; Bouvet, Timothy; Arnold, Galen

    Monitoring and testing for regression of large-scale systems such as the NCSA's Blue Waters supercomputer are challenging tasks. In this paper, we describe the solution we came up with to perform those tasks. The goal was to find an automated solution for running user-level regression tests to evaluate system usability and performance. Jenkins, an automation server software, was chosen for its versatility, large user base, and multitude of plugins, including collecting data and plotting test results over time. We also describe our Jenkins deployment to launch and monitor jobs on a remote HPC system, perform authentication with one-time passwords, and integrate with our LDAP server for its authorization. We show some use cases and describe our best practices for successfully using Jenkins as a user-level system-wide regression testing and monitoring framework for large supercomputer systems.

  5. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data such as a person's face clip for recognition purposes.

  6. The HydroServer Platform for Sharing Hydrologic Data

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.

    2010-12-01

    The CUAHSI Hydrologic Information System (HIS) is an internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS system architecture is comprised of servers for publishing and sharing data, a centralized catalog to support cross-server data discovery, and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. The CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open-source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts in advancing hydrologic research. Details of the CUAHSI HIS can be found at http://his.cuahsi.org, and at the HydroServer CodePlex site http://hydroserver.codeplex.com.

  7. System and Method for Providing a Climate Data Persistence Service

    NASA Technical Reports Server (NTRS)

    Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)

    2018-01-01

    A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standards (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.

  8. Deterministic entanglement distillation for secure double-server blind quantum computation.

    PubMed

    Sheng, Yu-Bo; Zhou, Lan

    2015-01-15

    Blind quantum computation (BQC) provides an efficient method for a client who does not have sophisticated enough technology and knowledge to perform universal quantum computation. The single-server BQC protocol requires the client to have some minimum quantum ability, while the double-server BQC protocol makes the client's device completely classical, resorting to the pure and clean Bell state shared by two servers. Here, we provide a deterministic entanglement distillation protocol in a practical noisy environment for the double-server BQC protocol. This protocol can obtain the pure maximally entangled Bell state, and the success probability can reach 100% in principle. The distilled maximally entangled states can be retained to perform the BQC protocol subsequently. The parties who perform the distillation protocol do not need to exchange classical information, and they learn nothing from the client. This makes the protocol unconditionally secure and suitable for the future BQC protocol.

  9. Deterministic entanglement distillation for secure double-server blind quantum computation

    PubMed Central

    Sheng, Yu-Bo; Zhou, Lan

    2015-01-01

    Blind quantum computation (BQC) provides an efficient method for a client who does not have sophisticated enough technology and knowledge to perform universal quantum computation. The single-server BQC protocol requires the client to have some minimum quantum ability, while the double-server BQC protocol makes the client's device completely classical, resorting to the pure and clean Bell state shared by two servers. Here, we provide a deterministic entanglement distillation protocol in a practical noisy environment for the double-server BQC protocol. This protocol can obtain the pure maximally entangled Bell state, and the success probability can reach 100% in principle. The distilled maximally entangled states can be retained to perform the BQC protocol subsequently. The parties who perform the distillation protocol do not need to exchange classical information, and they learn nothing from the client. This makes the protocol unconditionally secure and suitable for the future BQC protocol. PMID:25588565

  10. The effective use of virtualization for selection of data centers in a cloud computing environment

    NASA Astrophysics Data System (ADS)

    Kumar, B. Santhosh; Parthiban, Latha

    2018-04-01

    Data centers are facilities consisting of networks of remote servers used to store, access and process data. Cloud computing is a technology in which users worldwide submit tasks and service providers direct the requests to the data centers responsible for executing them. The servers in the data centers need to employ virtualization so that multiple tasks can be executed simultaneously. In this paper we propose an algorithm for data center selection based on the energy of the virtual machines created on each server. The virtualization energy of each server is calculated, and the total energy of the data center is obtained by summing the individual server energies. Submitted tasks are routed to the data center with the least energy consumption, which minimizes the operational expenses of the service provider.
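
    The selection rule reduces to a sum and an argmin. A minimal sketch follows; the energy figures are illustrative placeholders, not measured values.

        def datacenter_energy(servers):
            """Total energy of a data center: the sum, over its servers, of the
            energies of the virtual machines created on each server."""
            return sum(sum(vm_energies) for vm_energies in servers)

        def select_datacenter(datacenters):
            """Route the task to the data center with the least energy consumption."""
            return min(datacenters,
                       key=lambda name: datacenter_energy(datacenters[name]))

        dcs = {"dc-east": [[1.2, 0.8], [2.0]],      # per-server lists of VM energies
               "dc-west": [[0.9], [1.1, 0.7]]}
        print(select_datacenter(dcs))               # -> dc-west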

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perry, Marcia

    The IRCD is an IRC server that was originally distributed by the IRCD Hybrid developer team for use as a server for IRC messaging over the public Internet. By supporting the IRC protocol defined in the IRC RFC, IRCD allows users to create and join channels for group or one-to-one text-based instant messaging. It stores information about channels (e.g., whether a channel is public, secret, or invite-only, the topic set, and the membership) and users (who is online and what channels they are members of). It receives messages for a specific user or channel and forwards these messages to the targeted destination. Since server-to-server communication is also supported, these targeted destinations may be connected to different IRC servers. Messages are exchanged over TCP connections that remain open between the client and the server. The IRCD is being used within the Pervasive Computing Collaboration Environment (PCCE) as the 'chat server' for message exchange over public and private channels. After an LBNLSecureMessaging (PCCE chat) client has been authenticated, the client connects to IRCD with its assigned nickname or 'nick.' The client can then create or join channels for group discussions or one-to-one conversations. These channels can have an initial mode of public or invite-only, and the mode may be changed after creation. If a channel is public, anyone online can join the discussion; if a channel is invite-only, users can only join if existing members of the channel explicitly invite them. Users can be invited to any type of channel, and users may be members of multiple channels simultaneously. For use within the PCCE environment, the IRCD application (which was written in C) was ported to Linux and has been tested and installed under Red Hat Linux 7.2. The source code was also modified to use SSL so that all messages exchanged over the network are encrypted. This modified IRC server also verifies with an authentication server that the client is who he or she claims to be and that this user is authorized to gain access to the IRCD.
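
    As a rough illustration of the client side of the IRC protocol described above (the NICK, USER, JOIN, and PRIVMSG commands are standard RFC 1459; the host, port, and nick are placeholders, and the PCCE deployment additionally wraps this exchange in SSL and authentication):

        import socket

        # Minimal raw IRC exchange: register, join a channel, send a message.
        sock = socket.create_connection(("irc.example.org", 6667))

        def send(line):
            sock.sendall((line + "\r\n").encode("utf-8"))

        send("NICK alice")                     # choose a nickname
        send("USER alice 0 * :Alice Example")  # register the connection
        send("JOIN #pcce")                     # create or join a channel
        send("PRIVMSG #pcce :hello, channel")  # message the whole channel
        print(sock.recv(4096).decode("utf-8", "replace"))  # server replies
        sock.close()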

  12. Pre-Clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices

    DTIC Science & Technology

    2007-11-01

    accuracy. FPGA ADC data acquisition is controlled by distributed Java-based software. A Java-based server application sits on each of the acquisition... JNI (Java Native Interface) is used to allow Java indirect control of the USB driver. Fig. 5. Photograph of mobile electronics rack... supplies with the monitor and keyboard. The server application on each of these machines is controlled by a remote client Java-based application

  13. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    ERIC Educational Resources Information Center

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  14. Security Framework for Pervasive Healthcare Architectures Utilizing MPEG-21 IPMP Components.

    PubMed

    Fragopoulos, Anastasios; Gialelis, John; Serpanos, Dimitrios

    2009-01-01

    In modern ubiquitous computing environments, the deployment of pervasive healthcare architectures is more imperative than ever. In these architectures the patient is the central point, surrounded by different types of small embedded computing devices that measure sensitive physical indications and interact with hospital databases, thus allowing urgent medical response in critical situations. Such environments must be developed to satisfy the basic security requirements of real-time secure data communication, protection of sensitive medical data and measurements, data integrity and confidentiality, and protection of the monitored patient's privacy. In this work, we argue that the MPEG-21 Intellectual Property Management and Protection (IPMP) components can be used to protect transmitted medical information and enhance the patient's privacy, since access to the medical data sent toward the hospital's servers is selective and controlled.

  15. Remote vibration monitoring system using wireless internet data transfer

    NASA Astrophysics Data System (ADS)

    Lemke, John

    2000-06-01

    Vibrations from construction activities can affect infrastructure projects in several ways. Within the general vicinity of a construction site, vibrations can result in damage to existing structures, disturbance to people, damage to sensitive machinery, and degraded performance of precision instrumentation or motion sensitive equipment. Current practice for monitoring vibrations in the vicinity of construction sites commonly consists of measuring free field or structural motions using velocity transducers connected to a portable data acquisition unit via cables. This paper describes an innovative way to collect, process, transmit, and analyze vibration measurements obtained at construction sites. The system described measures vibration at the sensor location, performs necessary signal conditioning and digitization, and sends data to a Web server using wireless data transmission and Internet protocols. A Servlet program running on the Web server accepts the transmitted data and incorporates it into a project database. Two-way interaction between the Web-client and the Web server is accomplished through the use of a Servlet program and a Java Applet running inside a browser located on the Web client's computer. Advantages of this system over conventional vibration data logging systems include continuous unattended monitoring, reduced costs associated with field data collection, instant access to data files and graphs by project team members, and the ability to remotely modify data sampling schemes.
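
    The paper does not give the Servlet's endpoint or message format; the sketch below only illustrates the general pattern of a field unit pushing digitized vibration samples to a Web server over HTTP (the URL and JSON keys are hypothetical):

        import json, time
        from urllib import request

        # Hypothetical endpoint standing in for the data-collection Servlet.
        URL = "http://example.org/vibration/upload"

        def post_samples(sensor_id, samples):
            # Package one block of digitized velocity samples with a timestamp.
            payload = json.dumps({
                "sensor": sensor_id,
                "t": time.time(),
                "samples": samples,   # e.g. velocity values after A/D conversion
            }).encode("utf-8")
            req = request.Request(URL, data=payload,
                                  headers={"Content-Type": "application/json"})
            with request.urlopen(req) as resp:
                return resp.status    # 200 on success

        # post_samples("geo-01", [0.02, 0.05, -0.01])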

  16. Mobile cloud-computing-based healthcare service by noncontact ECG monitoring.

    PubMed

    Fong, Ee-May; Chung, Wan-Young

    2013-12-02

    The noncontact electrocardiogram (ECG) measurement technique has gained popularity owing to its noninvasive features and convenience in daily life. This paper presents mobile cloud computing for a healthcare system in which a noncontact ECG measurement method is employed to capture biomedical signals from users. The healthcare service continuously collects biomedical signals from multiple locations. To observe and analyze the ECG signals in real time, a mobile device is used as a mobile monitoring terminal. In addition, a personalized healthcare assistant is installed on the mobile device; several healthcare features such as health status summaries, medication QR code scanning, and reminders are integrated into the mobile application. Health data are synchronized to the healthcare cloud computing service (Web server system and Web server dataset) to ensure a seamless healthcare monitoring system with anytime, anywhere network coverage. Together with a Web page application, medical data are easily accessed by medical professionals or family members. Web page performance evaluation was conducted to ensure minimal Web server latency. The system demonstrates better availability of off-site and up-to-the-minute patient data, which can help detect health problems early and keep elderly patients out of the emergency room, thus providing a better and more comprehensive healthcare cloud computing service.

  17. Mobile Cloud-Computing-Based Healthcare Service by Noncontact ECG Monitoring

    PubMed Central

    Fong, Ee-May; Chung, Wan-Young

    2013-01-01

    The noncontact electrocardiogram (ECG) measurement technique has gained popularity owing to its noninvasive features and convenience in daily life. This paper presents mobile cloud computing for a healthcare system in which a noncontact ECG measurement method is employed to capture biomedical signals from users. The healthcare service continuously collects biomedical signals from multiple locations. To observe and analyze the ECG signals in real time, a mobile device is used as a mobile monitoring terminal. In addition, a personalized healthcare assistant is installed on the mobile device; several healthcare features such as health status summaries, medication QR code scanning, and reminders are integrated into the mobile application. Health data are synchronized to the healthcare cloud computing service (Web server system and Web server dataset) to ensure a seamless healthcare monitoring system with anytime, anywhere network coverage. Together with a Web page application, medical data are easily accessed by medical professionals or family members. Web page performance evaluation was conducted to ensure minimal Web server latency. The system demonstrates better availability of off-site and up-to-the-minute patient data, which can help detect health problems early and keep elderly patients out of the emergency room, thus providing a better and more comprehensive healthcare cloud computing service. PMID:24316562

  18. Applications of the pipeline environment for visual informatics and genomics computations

    PubMed Central

    2011-01-01

    Background Contemporary informatics and genomics research require efficient, flexible and robust management of large heterogeneous data, advanced computational tools, powerful visualization, reliable hardware infrastructure, interoperability of computational resources, and detailed data and analysis-protocol provenance. The Pipeline is a client-server distributed computational environment that facilitates the visual graphical construction, execution, monitoring, validation and dissemination of advanced data analysis protocols. Results This paper reports on the applications of the LONI Pipeline environment to address two informatics challenges - graphical management of diverse genomics tools, and the interoperability of informatics software. Specifically, this manuscript presents the concrete details of deploying general informatics suites and individual software tools to new hardware infrastructures, the design, validation and execution of new visual analysis protocols via the Pipeline graphical interface, and integration of diverse informatics tools via the Pipeline eXtensible Markup Language syntax. We demonstrate each of these processes using several established informatics packages (e.g., miBLAST, EMBOSS, mrFAST, GWASS, MAQ, SAMtools, Bowtie) for basic local sequence alignment and search, molecular biology data analysis, and genome-wide association studies. These examples demonstrate the power of the Pipeline graphical workflow environment to enable integration of bioinformatics resources, which provide a well-defined syntax for dynamic specification of the input/output parameters and the run-time execution controls. Conclusions The LONI Pipeline environment http://pipeline.loni.ucla.edu provides a flexible graphical infrastructure for efficient biomedical computing and distributed informatics research. The interactive Pipeline resource manager enables the utilization and interoperability of diverse types of informatics resources. The Pipeline client-server model provides computational power to a broad spectrum of informatics investigators - experienced developers and novice users, users with or without access to advanced computational resources (e.g., Grid, data), as well as basic and translational scientists. The open development, validation and dissemination of computational networks (pipeline workflows) facilitates the sharing of knowledge, tools, protocols and best practices, and enables the unbiased validation and replication of scientific findings by the entire community. PMID:21791102

  19. Test-bed for the remote health monitoring system for bridge structures using FBG sensors

    NASA Astrophysics Data System (ADS)

    Lee, Chin-Hyung; Park, Ki-Tae; Joo, Bong-Chul; Hwang, Yoon-Koog

    2009-05-01

    This paper reports on a test-bed for a long-term health monitoring system for bridge structures employing fiber Bragg grating (FBG) sensors, which is remotely accessible via the web, to provide real-time quantitative information on a bridge's response to live loading and environmental changes, and fast prediction of the structure's integrity. The sensors are attached at several locations on the structure and connected to a data acquisition system permanently installed onsite. The system can be accessed through remote communication over an optical cable network, through which the bridge's behavior under live loading can be evaluated at places far away from the field. Live structural data are transmitted continuously to the server computer at the central office. The server computer is connected securely to the internet, where data can be retrieved, processed and stored for remote web-based health monitoring. The test-bed showed that remote health monitoring technology will enable practical, cost-effective, and reliable condition assessment and maintenance of bridge structures.

  20. Process Management inside ATLAS DAQ

    NASA Astrophysics Data System (ADS)

    Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.

    2002-10-01

    The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors independent of the underlying operating system. Its architecture is designed on the basis of a client-server model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Major design challenges for the software agents were to achieve the maximum possible degree of autonomy and to create processes that are aware of dynamic conditions in their environment and able to determine corresponding actions. Issues such as the performance of the agents in terms of the time needed for process creation and destruction, the scalability of the system with respect to the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results for the Process Manager system.
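
    The real implementation uses C++ agents communicating over CORBA; purely as an analogy, the agent's basic job-control surface (start, stop, poll status) can be sketched with operating-system process primitives:

        import signal, subprocess

        # Toy process-manager agent: start, stop, and poll DAQ components.
        class Agent:
            def __init__(self):
                self.procs = {}                   # component name -> handle

            def start(self, name, argv):
                self.procs[name] = subprocess.Popen(argv)

            def status(self, name):
                rc = self.procs[name].poll()      # None while still running
                return "running" if rc is None else f"exited({rc})"

            def stop(self, name):
                self.procs[name].send_signal(signal.SIGTERM)
                self.procs[name].wait(timeout=10)

        # agent = Agent(); agent.start("reader", ["sleep", "60"])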

  1. HPC in a HEP lab: lessons learned from setting up cost-effective HPC clusters

    NASA Astrophysics Data System (ADS)

    Husejko, Michal; Agtzidis, Ioannis; Baehler, Pierre; Dul, Tadeusz; Evans, John; Himyr, Nils; Meinhard, Helge

    2015-12-01

    In this paper we present our findings gathered during the evaluation and testing of Windows Server High-Performance Computing (Windows HPC) in view of potentially using it as a production HPC system for engineering applications. The Windows HPC package, an extension of Microsoft's Windows Server product, provides all essential interfaces, utilities and management functionality for creating, operating and monitoring a Windows-based HPC cluster infrastructure. The evaluation and test phase focused on verifying the functionality of Windows HPC, its performance, its support of commercial tools, and its integration with the users' work environment. We describe constraints imposed by the way the CERN Data Centre is operated, licensing for engineering tools, and the scalability and behaviour of the HPC engineering applications used at CERN. We present an initial set of requirements, which were created based on the above constraints and requests from the CERN engineering user community. We explain how we have configured Windows HPC clusters to provide the job scheduling functionality required to support the CERN engineering user community, quality of service, user- and project-based priorities, and fair access to limited resources. Finally, we present several performance tests we carried out to verify Windows HPC performance and scalability.

  2. Real time monitoring to the odour of excrement for health of infants and elderly completely bedridden

    NASA Astrophysics Data System (ADS)

    Ye, Jiancheng; Huang, Guoliang

    2017-01-01

    In the domain of biomedical signal measurement, monitoring human physiological parameters is an important issue. With the rapid development of wireless body area networks, monitoring, transmitting and recording physiological parameters has become faster and more convenient. Infants and the completely bedridden elderly are two special groups in society who need extra medical care. A survey of current research and market products shows that detection of physiological parameters from excrement is rare. However, urine and faeces carry a large amount of physiological information that is highly relevant to health. The odour emitted from urine is mainly ammonia (NH3), and the odour from faeces is mainly H2S; both can be detected by gas sensors. In this paper, we introduce the design and implementation of a portable wireless device, based on a body area network, for real-time monitoring of the odour of excrement for the health of infants and the completely bedridden elderly. The device not only monitors in real time the odour emitted from faeces and urine for health analysis, but also measures body temperature and environment humidity, and sends the data to the mobile phones of paramedics for alarms or to the server for storage and processing. It thus has promise for monitoring infants and the paralysed elderly.

  3. A Cyber-Physical System for Girder Hoisting Monitoring Based on Smartphones.

    PubMed

    Han, Ruicong; Zhao, Xuefeng; Yu, Yan; Guan, Quanhua; Hu, Weitong; Li, Mingchu

    2016-07-07

    Offshore design and construction is much more difficult than land-based design and construction, particularly due to hoisting operations. Real-time monitoring of the orientation and movement of a hoisted structure is thus required for operators' safety. In recent years, the rapid development of the smartphone market has made it possible for everyone to carry a mini personal computer integrated with sensors, an operating system and a communication system, which can act as an effective aid for cyber-physical systems (CPS) research. In this paper, a CPS for hoisting monitoring using smartphones is proposed, consisting of a phone collector, a controller and a server. This system uses smartphones equipped with internal sensors to obtain girder movement information, which is uploaded to a server and then returned to controller users. An alarm is raised on the controller phone once the returned data exceed a threshold. The proposed monitoring system is used to monitor the movement and orientation of a girder in real time during hoisting on a cross-sea bridge. The results show the convenience and feasibility of the proposed system.
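
    A condensed sketch of the alarm rule the record describes, where the controller raises an alarm once returned motion data exceed a threshold (the 5-degree tilt limit and field names are invented for illustration):

        # Controller-side check on girder orientation data from the server.
        TILT_LIMIT_DEG = 5.0      # example threshold, not from the paper

        def check_reading(reading):
            # reading: roll/pitch angles in degrees from the phone's sensors.
            worst = max(abs(reading["roll"]), abs(reading["pitch"]))
            if worst > TILT_LIMIT_DEG:
                return f"ALARM: tilt {worst:.1f} deg exceeds {TILT_LIMIT_DEG} deg"
            return "ok"

        print(check_reading({"roll": 2.0, "pitch": -7.5}))  # -> ALARM ...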

  4. Database architectures for Space Telescope Science Institute

    NASA Astrophysics Data System (ADS)

    Lubow, Stephen

    1993-08-01

    At STScI nearly all large applications require database support. A general purpose architecture has been developed and is in use that relies upon an extended client-server paradigm. Processing is in general distributed across three processes, each of which generally resides on its own processor. Database queries are evaluated on one such process, called the DBMS server. The DBMS server software is provided by a database vendor. The application issues database queries and is called the application client. This client uses a set of generic DBMS application programming calls through our STDB/NET programming interface. Intermediate between the application client and the DBMS server is the STDB/NET server. This server accepts generic query requests from the application and converts them into the specific requirements of the DBMS server. In addition, it accepts query results from the DBMS server and passes them back to the application. Typically the STDB/NET server is local to the DBMS server, while the application client may be remote. The STDB/NET server provides additional capabilities such as database deadlock restart and performance monitoring. This architecture is currently in use for some major STScI applications, including the ground support system. We are currently investigating means of providing ad hoc query support to users through the above architecture. Such support is critical for providing flexible user interface capabilities. The Universal Relation advocated by Ullman, Kernighan, and others appears to be promising. In this approach, the user sees the entire database as a single table, thereby freeing the user from needing to understand the detailed schema. A software layer provides the translation between the user and detailed schema views of the database. However, many subtle issues arise in making this transformation. We are currently exploring this scheme for use in the Hubble Space Telescope user interface to the data archive system (DADS).

  5. The event notification and alarm system for the Open Science Grid operations center

    NASA Astrophysics Data System (ADS)

    Hayashi, S.; Teige, S.; Quick, R.

    2012-12-01

    The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems, or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability and software error events. Once detected, an error condition generates a message sent, for example, to email, SMS, Twitter, or an instant-message server. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
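
    The paper does not detail the dispatch logic, but the fan-out it describes (one detected error condition, several notification channels) can be sketched as follows; the channel functions are placeholders for real gateway integrations:

        # Fan-out from one error event to several notification channels.
        def send_email(msg): print("email:", msg)   # placeholder senders
        def send_sms(msg):   print("sms:", msg)
        def send_im(msg):    print("im:", msg)

        CHANNELS = {"email": send_email, "sms": send_sms, "im": send_im}

        def alarm(event, severity, route=("email", "sms", "im")):
            # A prioritized, configurable system would choose channels by
            # severity; here every listed channel receives the message.
            msg = f"[{severity}] {event}"
            for name in route:
                CHANNELS[name](msg)

        alarm("service availability probe failed", "critical")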

  6. Landslide and Flood Warning System Prototypes based on Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hloupis, George; Stavrakas, Ilias; Triantis, Dimos

    2010-05-01

    Wireless sensor networks (WSNs) are one of the emerging areas that have received great attention during the last few years. This is mainly because WSNs give scientists the capability of developing real-time monitoring systems equipped with sensors based on Micro-Electro-Mechanical Systems (MEMS). WSNs have great potential for many applications in environmental monitoring, since each sensor node can host several MEMS sensors (such as temperature, humidity, inertial, pressure and strain-gauge sensors) and transducers (for position, velocity, acceleration and vibration). The resulting devices are small and inexpensive but have limited memory and computing resources. Each sensor node contains a sensing module along with an RF transceiver. Communication is broadcast-based, since the network topology can change rapidly due to node failures [1]. Sensor nodes can transmit their measurements to central servers through gateway nodes without any processing, or they can make preliminary calculations locally in order to produce results that are then sent to the central servers [2]. Based on the above characteristics, two prototypes using WSNs are presented in this paper: a landslide detection system and a flood warning system. Both systems send their data to a central processing server where the core processing routines reside. Transmission uses the ZigBee and IEEE 802.11b protocols, but VSAT communication can also be used. The landslide detection system uses a structured network topology. Each measuring node comprises a columnar module that is half buried in the area under investigation. Each sensing module contains a geophone, an inclinometer and a set of strain gauges. Data are transmitted to the central processing server where possible landslide evolution is monitored. The flood detection system uses an unstructured network topology, since the failure rate of sensor nodes is expected to be higher. Each sensing module contains a custom water level sensor (based on plastic optical fiber). Data are transmitted directly to the server, where the early warning algorithms monitor the water level variations in real time. Both sensor nodes use power harvesting techniques in order to extend their battery life as much as possible. [1] Yick, J.; Mukherjee, B.; Ghosal, D. Wireless sensor network survey. Comput. Netw. 2008, 52, 2292-2330. [2] Garcia, M.; Bri, D.; Boronat, F.; Lloret, J. A new neighbor selection strategy for group-based wireless sensor networks. In The Fourth International Conference on Networking and Services (ICNS 2008), Gosier, Guadeloupe, March 16-21, 2008.

  7. Modernization of the USGS Hawaiian Volcano Observatory Seismic Processing Infrastructure

    NASA Astrophysics Data System (ADS)

    Antolik, L.; Shiro, B.; Friberg, P. A.

    2016-12-01

    The USGS Hawaiian Volcano Observatory (HVO) operates a Tier 1 Advanced National Seismic System (ANSS) seismic network to monitor, characterize, and report on volcanic and earthquake activity in the State of Hawaii. Upgrades at the observatory since 2009 have improved the digital telemetry network, computing resources, and seismic data processing with the adoption of the ANSS Quake Management System (AQMS). HVO aims to build on these efforts by further modernizing its seismic processing infrastructure and strengthening its ability to meet ANSS performance standards. Most notably, this will also allow HVO to support redundant systems, both onsite and offsite, in order to provide better continuity of operation during intermittent power and network outages. We are in the process of implementing a number of upgrades and improvements to HVO's seismic processing infrastructure, including: 1) virtualization of AQMS physical servers; 2) migration of server operating systems from Solaris to Linux; 3) consolidation of AQMS real-time and post-processing services to a single server; 4) upgrading the database from Oracle 10 to Oracle 12; and 5) upgrading to the latest Earthworm and AQMS software. These improvements will make server administration more efficient, minimize the hardware resources required by AQMS, simplify the Oracle replication setup, and provide better integration with HVO's existing state-of-health monitoring tools and backup system. Ultimately, it will provide HVO with the latest and most secure software available while making the software easier to deploy and support.

  8. A FPGA embedded web server for remote monitoring and control of smart sensors networks.

    PubMed

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2013-12-27

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose and configurable RISC processor embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other, which controls the network. This protocol is widely used for connecting smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive and domotic, although it can easily be replaced by any other protocol because of the inherent characteristics of FPGA-based technology.

  9. A FPGA Embedded Web Server for Remote Monitoring and Control of Smart Sensors Networks

    PubMed Central

    Magdaleno, Eduardo; Rodríguez, Manuel; Pérez, Fernando; Hernández, David; García, Enrique

    2014-01-01

    This article describes the implementation of a web server using an embedded Altera NIOS II IP core, a general-purpose and configurable RISC processor embedded in a Cyclone FPGA. The processor uses the μCLinux operating system to support a Boa web server of dynamic pages using the Common Gateway Interface (CGI). The FPGA is configured to act as the master node of a network, and also to control and monitor a network of smart sensors or instruments. In order to develop a fully functional system, the FPGA also includes an implementation of the time-triggered protocol (TTP/A). Thus, the implemented master node has two interfaces: the web server, which acts as an Internet interface, and the other, which controls the network. This protocol is widely used for connecting smart sensors, actuators and microsystems in embedded real-time systems in different application domains, e.g., industrial, automotive and domotic, although it can easily be replaced by any other protocol because of the inherent characteristics of FPGA-based technology. PMID:24379047

  10. Chemical Inventory Management at NASA Lewis Research Center

    NASA Technical Reports Server (NTRS)

    Kraft, Shirley S.; Homan, Joseph R.; Bajorek, Michael J.; Dominguez, Manuel B.; Smith, Vanessa L.

    1997-01-01

    The Chemical Management System (CMS) is a client/server application developed with PowerBuilder and Sybase for the Lewis Research Center (LeRC). PowerBuilder is a client-server application development tool; Sybase is a relational database management system. The entire LeRC community can access the CMS from any desktop environment. The multiple functions and benefits of the CMS are addressed.

  11. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    PubMed

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service such as asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is built on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure that supports high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. To verify the suggested platform, scalability under increasing numbers of concurrent lookups was evaluated in a real test bed. Tracking latency and the traffic load ratio of the proposed tracking architecture were also evaluated.

  12. Developments and applications of DAQ framework DABC v2

    NASA Astrophysics Data System (ADS)

    Adamczewski-Musch, J.; Kurz, N.; Linev, S.

    2015-12-01

    The Data Acquisition Backbone Core (DABC) is a software framework for distributed data acquisition. In 2013, version 2 of DABC was released with several improvements. For monitoring and control, an HTTP web server and a proprietary command-channel socket are provided. Web browser GUIs have been implemented for configuration and control of DABC and MBS DAQ nodes via this HTTP server. Several specific plug-ins, for example those interfacing PEXOR/KINPEX optical readout PCIe boards, or HADES trbnet input and hld file output, have been further developed. In 2014, DABC v2 was used for production data taking during the HADES collaboration's pion beam time at GSI. It fully replaced the functionality of the previous event builder software and added new features for online monitoring.

  13. Remote Control and Monitoring of VLBI Experiments by Smartphones

    NASA Astrophysics Data System (ADS)

    Ruztort, C. H.; Hase, H.; Zapata, O.; Pedreros, F.

    2012-12-01

    For the remote control and monitoring of VLBI operations, we developed software optimized for smartphones. This is a new tool based on a client-server architecture with a Web interface optimized for smartphone screens and cellphone networks. The server uses variables of the Field System and its station-specific parameters stored in shared memory. The client, running on the smartphone through a Web interface, analyzes and visualizes the current status of the radio telescope, receiver, schedule, and recorder. In addition, it allows commands to be sent remotely to the Field System computer and displays the log entries. The user has full access to the entire operation process, which is important in emergency cases. The software also integrates a webcam interface.

  14. A new mobile phone-based ECG monitoring system.

    PubMed

    Iwamoto, Junichi; Yonezawa, Yoshiharu; Ogawa, Hiromichi; Maki, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Hahn, Allen W; Caldwell, W Morton

    2007-01-01

    We have developed a system for monitoring a patient's electrocardiogram (ECG) and movement during daily activities. The complete system is mounted on chest electrodes and continuously samples the ECG and three-axis accelerations. When the patient feels heart discomfort, he or she pushes the data transmission switch on the recording system, and the system sends the ECG waveforms and three-axis accelerations recorded during the two minutes before and the two minutes after the switch is pressed. The data go directly to a hospital server computer via a 2.4 GHz low-power mobile phone. The data are stored on the server computer and downloaded to the physician's Java mobile phone. The physician can display the data on the phone's liquid crystal display.

  15. Fault-Tolerant Local-Area Network

    NASA Technical Reports Server (NTRS)

    Morales, Sergio; Friedman, Gary L.

    1988-01-01

    Local-area network (LAN) for computers prevents single-point failure from interrupting communication between nodes of network. Includes two complete cables, LAN 1 and LAN 2. Microprocessor-based slave switches link cables to such network-node devices as workstations, print servers, and file servers. Slave switches respond to commands from master switch, connecting nodes to two cable networks or disconnecting them so they are completely isolated. System monitor and control computer (SMC) acts as gateway, allowing nodes on either cable to communicate with each other and ensuring that LAN 1 and LAN 2 are fully used when functioning properly. Network monitors and controls itself, automatically routes traffic for efficient use of resources, and isolates and corrects its own faults, with potential dramatic reduction in time out of service.

  16. Monitoring Moving Queries inside a Safe Region

    PubMed Central

    Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan

    2014-01-01

    With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves. Therefore, monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region the set of objects of interest to the query do not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region the mobile device has to reevaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known as a difficult problem. To solve this problem, we propose a novel method to monitor the position of the query over time using a linear function based on the direction of the query obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location all the time. This method reduces the costs associated with communications in client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
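
    A small sketch of the linear-prediction idea: from two periodic position fixes, extrapolate the query's path and estimate when it will cross the boundary of the safe region (a circular region and these names are simplifying assumptions, not the paper's exact formulation):

        import math

        def time_to_exit(p0, p1, dt, centre, radius):
            # p0, p1: previous and current position fixes, dt seconds apart;
            # the query is assumed to be inside a circular safe region.
            vx, vy = (p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt  # velocity
            px, py = p1[0] - centre[0], p1[1] - centre[1]        # offset
            # Solve |p + v*t|^2 = radius^2 for the smallest positive t.
            a = vx * vx + vy * vy
            if a == 0:
                return math.inf                                  # not moving
            b = 2 * (px * vx + py * vy)
            c = px * px + py * py - radius * radius
            t = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)    # outward root
            return max(t, 0.0)

        # Moving right at 1 unit/s from the centre of a radius-10 region:
        print(time_to_exit((0, 0), (1, 0), 1.0, (0, 0), 10.0))   # -> 9.0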

  17. Evaluating and Implementing Learning Environments: A United Kingdom Experience.

    ERIC Educational Resources Information Center

    Ingraham, Bruce; Watson, Barbara; McDowell, Liz; Brockett, Adrian; Fitzpatrick, Simon

    2002-01-01

    Reports on ongoing work at five universities in northeastern England that have been evaluating and implementing online learning environments known as virtual learning environments (VLEs) or managed learning environments (MLEs). Discusses do-it-yourself versus commercial systems; transferability; Web-based versus client-server; integration with…

  18. Verifying the secure setup of Unix client/servers and detection of network intrusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feingold, R.; Bruestle, H.R.; Bartoletti, T.

    1995-07-01

    This paper describes our technical approach to developing and delivering Unix host- and network-based security products to meet the increasing challenges in information security. Today's global "Infosphere" presents us with a networked environment that knows no geographical, national, or temporal boundaries, and no ownership, laws, or identity cards. This seamless aggregation of computers, networks, databases, applications, and the like store, transmit, and process information. This information is now recognized as an asset to governments, corporations, and individuals alike. This information must be protected from misuse. The Security Profile Inspector (SPI) performs static analyses of Unix-based clients and servers to check on their security configuration. SPI's broad range of security tests and flexible usage options support the needs of novice and expert system administrators alike. SPI's use within the Department of Energy and Department of Defense has resulted in more secure systems, less vulnerable to hostile intentions. Host-based information protection techniques and tools must also be supported by network-based capabilities. Our experience shows that a weak link in a network of clients and servers presents itself sooner or later, and can be more readily identified by dynamic intrusion detection techniques and tools. The Network Intrusion Detector (NID) is one such tool. NID is designed to monitor and analyze activity on an Ethernet broadcast Local Area Network segment and produce transcripts of suspicious user connections. NID's retrospective and real-time modes have proven invaluable to security officers faced with ongoing attacks to their systems and networks.

  19. Web-based video monitoring of CT and MRI procedures

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Dahlbom, Magdalena; Kho, Hwa T.; Valentino, Daniel J.; McCoy, J. Michael

    2000-05-01

    A web-based video transmission of images from CT and MRI consoles was implemented in an Intranet environment for real-time monitoring of ongoing procedures. Images captured from the consoles are compressed to video resolution and broadcast through a web server. When called upon, the attending radiologists can view these live images on any computer within the secured Intranet network. With adequate compression, these images can be displayed simultaneously in different locations at a rate of 2 to 5 images/sec over a standard LAN. Although the quality of the images is insufficient for diagnostic purposes, our user survey showed that they were suitable for supervising a procedure, positioning the imaging slices, and routine quality checking before completion of a study. The system was implemented at UCLA to monitor 9 CTs and 6 MRIs distributed in 4 buildings. This system significantly improved the radiologists' productivity by saving precious time spent in trips between reading rooms and examination rooms. It also improved patient throughput by reducing the waiting time for the radiologists to come and check a study before moving the patient from the scanner.

  20. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks

    PubMed Central

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-01-01

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper. PMID:27873941

  1. GLobal Integrated Design Environment (GLIDE): A Concurrent Engineering Application

    NASA Technical Reports Server (NTRS)

    McGuire, Melissa L.; Kunkel, Matthew R.; Smith, David A.

    2010-01-01

    The GLobal Integrated Design Environment (GLIDE) is a client-server software application purpose-built to mitigate issues associated with real-time data sharing in concurrent engineering environments and to facilitate discipline-to-discipline interaction between multiple engineers and researchers. GLIDE is implemented in multiple programming languages utilizing standardized web protocols to enable secure parameter data sharing between engineers and researchers across the Internet in closed and/or widely distributed working environments. A well-defined, HyperText Transfer Protocol (HTTP) based Application Programming Interface (API) to the GLIDE client/server environment enables users to interact with GLIDE, and each other, within common and familiar tools. One such common tool, Microsoft Excel (Microsoft Corporation), paired with its add-in API for GLIDE, is discussed in this paper. The top-level examples given demonstrate how this interface improves the efficiency of the design process of a concurrent engineering study while reducing potential errors associated with manually sharing information between study participants.

  2. Developing Control System of Electrical Devices with Operational Expense Prediction

    NASA Astrophysics Data System (ADS)

    Sendari, Siti; Wahyu Herwanto, Heru; Rahmawati, Yuni; Mukti Putranto, Dendi; Fitri, Shofiana

    2017-04-01

    The purpose of this research is to develop a system that can monitor and record the electricity usage of home electrical devices. The system can control electrical devices remotely and predict operational expense. It was developed using microcontrollers and WiFi modules connected to a PC server, with communication between modules arranged by the server via WiFi. Beyond reading the electricity usage of home electrical devices, the unique point of the proposed system is the ability of the microcontrollers to send electricity data to the server for recording the usage of electrical devices. Testing was done with the black-box method to verify the functionality of the system. The system ran well with 0% error.
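
    The paper's prediction formula is not given; a plausible minimal sketch is measured energy (kWh) priced at a flat tariff and extrapolated over a billing period (the tariff and device figures below are example values):

        # Illustrative operational-expense prediction.
        TARIFF = 0.12  # currency units per kWh (example value)

        def predict_expense(avg_power_watts, hours_per_day, days=30):
            kwh = avg_power_watts / 1000.0 * hours_per_day * days
            return kwh * TARIFF

        # A 60 W device running 8 h/day over a 30-day month:
        print(f"{predict_expense(60, 8):.2f}")  # -> 1.73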

  3. DIABCARE Quality Network in Europe--a model for quality management in chronic diseases.

    PubMed

    Piwernetz, K

    2001-04-01

    The DIABCARE Q-Net project developed a complete and integrated information technology system to monitor diabetes care, according to the gold standards of the St Vincent Declaration Action Program. This is the first Telematic platform for standardized documentation on medical quality and evaluation across Europe, which will serve as a model for other chronic diseases. Quality development starts from the comparison of diabetes services, based on the key data on diabetes care in the basic information sheet. This is a 141-field form, which is to be completed once a year for each patient under the care of the diabetes team. The system performs an analysis of the local data and compares the data with peer teams by means of telecommunication of anonymous data. These data are collected regionally. At the next level these regional data are compared on a national basis across Europe using dedicated communication lines. National data can be compared transnationally by the use of the Internet and the DIABCARE benchmarking servers. These different lines are used according to the necessary security standards. Medical data are transferred via dedicated lines, aggregated data via the Internet. The architecture follows the open-platform concept in order to allow for heterogeneous technical environments. Already at the start of the project, the need to expand the quality approach to telemedicine methodology was identified and addressed. For each level, specific programs are available to improve the performance of diabetes care delivery: DIABCARE Data as the client, the DIABCARE Server at the regional level, and the DIABCARE 'international server' at the transnational level. Functioning pilots were established across all levels. The clients have been linked to the servers on a routine basis. According to the open architecture design, the various countries decided on different systems at the entry point: full system--Portugal; fax systems--Italy, Bavaria; implementation into doctor's office systems--Norway; paper forms and chip cards--France. This system can improve local, regional and national diabetes care. Initiatives in several countries proved the feasibility of the system. The most extensive use, from Portugal, will be reported later in this paper. The exploitation of the DIABCARE Q-Net system will be performed with the DIABCARE International European Economic Interest Grouping as a co-ordinator and several commercial companies as contractors to market the products inside the system. The key project participants are: DIABCARE Office EURO, DIABCARE Portugal, DIABCARE France, DIABCARE Bavaria, DIABCARE UK, DIABCARE Netherlands, DIABCARE Norway, DIABCARE Italy, DIABCARE Sweden, DIABCARE Austria, DIABCARE Spain, GSF Research Centre for Health and Environment, FAST Research Institute for Applied Software Technology, Tromsø University Hospital, Stavanger Technical College, Technical University of Ilmenau, World Health Organisation (WHO), Regional Office for Europe.

  4. The EarthServer project: Exploiting Identity Federations, Science Gateways and Social and Mobile Clients for Big Earth Data Analysis

    NASA Astrophysics Data System (ADS)

    Barbera, Roberto; Bruno, Riccardo; Calanducci, Antonio; Messina, Antonio; Pappalardo, Marco; Passaro, Gianluca

    2013-04-01

    The EarthServer project (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, aims at establishing open access and ad-hoc analytics on extreme-size Earth Science data, based on and extending leading-edge Array Database technology. The core idea is to use database query languages as the client/server interface to achieve barrier-free "mix & match" access to multi-source, any-size, multi-dimensional space-time data -- in short: "Big Earth Data Analytics" -- based on the open standards of the Open Geospatial Consortium Web Coverage Processing Service (OGC WCPS) and the W3C XQuery. EarthServer combines both, thereby achieving a tight data/metadata integration. Further, the rasdaman Array Database System (www.rasdaman.com) is extended with additional space-time coverage data types. On the server side, highly effective optimizations - such as parallel and distributed query processing - ensure scalability to Exabyte volumes. Six Lighthouse Applications are being established in EarthServer, each of which poses distinct challenges for Earth Data Analytics: Cryospheric Science, Airborne Science, Atmospheric Science, Geology, Oceanography, and Planetary Science. Altogether, they cover all Earth Science domains; the Planetary Science use case has been added to challenge concepts and standards in non-standard environments. In addition, EarthLook (maintained by Jacobs University) showcases the use of OGC standards in 1D through 5D use cases. In this contribution we report on the first applications integrated in the EarthServer Science Gateway and on the clients for mobile appliances developed to access them. We also show how federated and social identity services can allow Big Earth Data providers to expose their data in a distributed environment while keeping strict and fine-grained control over user authentication and authorisation. The degree to which the EarthServer implementation fulfils the recommendations made in the recent TERENA Study on AAA Platforms For Scientific Resources in Europe (https://confluence.terena.org/display/aaastudy/AAA+Study+Home+Page) is also assessed.

  5. Security enhanced anonymous multiserver authenticated key agreement scheme using smart cards and biometrics.

    PubMed

    Choi, Younsung; Nam, Junghyun; Lee, Donghoon; Kim, Jiye; Jung, Jaewook; Won, Dongho

    2014-01-01

    An anonymous user authentication scheme allows a user who wants to access a remote application server to achieve mutual authentication and session key establishment with the server in an anonymous manner. To enhance the security of such authentication schemes, recent research has combined users' biometrics with passwords. However, these authentication schemes are designed for single-server environments, so when a user wants to access different application servers, he or she has to register many times. To solve this problem, Chuang and Chen proposed an anonymous multiserver authenticated key agreement scheme using smart cards together with passwords and biometrics. Chuang and Chen claimed that their scheme not only supports multiple servers but also achieves various security requirements. However, we show that this scheme is vulnerable to a masquerade attack, a smart card attack, a user impersonation attack, and a DoS attack, and does not achieve perfect forward secrecy. We also propose a security-enhanced anonymous multiserver authenticated key agreement scheme which addresses all the weaknesses identified in Chuang and Chen's scheme.

  6. Design and evaluation of web-based image transmission and display with different protocols

    NASA Astrophysics Data System (ADS)

    Tan, Bin; Chen, Kuangyi; Zheng, Xichuan; Zhang, Jianguo

    2011-03-01

    There are many Web-based image access technologies used in the medical imaging area, such as component-based (ActiveX control) thick-client Web display, zero-footprint thin-client Web viewers (also called server-side processing Web viewers), Flash Rich Internet Applications (RIA), and HTML5-based Web display. Different Web display methods perform differently in different network environments. In this presentation, we evaluate two Web-based image display systems we developed. The first is used for thin-client Web display; it works between a PACS Web server with a WADO interface and a thin client, with the PACS Web server providing JPEG-format images to HTML pages. The second is for thick-client Web display; it works between a PACS Web server with a WADO interface and a thick client running in browsers containing an ActiveX control, a Flash RIA program, or HTML5 scripts, with the PACS Web server providing native DICOM-format images or a JPIP stream for these clients.
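
    For context, a WADO-URI retrieval of the kind these servers accept is a plain HTTP GET whose query parameters name the study, series, and object (DICOM PS3.18); the host and UIDs below are placeholders:

        from urllib import parse, request

        # Build a WADO-URI request; host and UIDs are placeholders.
        params = {
            "requestType": "WADO",
            "studyUID": "1.2.840.99999.1",        # study instance UID
            "seriesUID": "1.2.840.99999.1.2",     # series instance UID
            "objectUID": "1.2.840.99999.1.2.3",   # SOP instance UID
            "contentType": "image/jpeg",          # ask for a JPEG rendering
        }
        url = "http://pacs.example.org/wado?" + parse.urlencode(params)
        # with request.urlopen(url) as resp:
        #     jpeg_bytes = resp.read()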

  7. Honey Bee Colonies Remote Monitoring System.

    PubMed

    Gil-Lebrero, Sergio; Quiles-Latorre, Francisco Javier; Ortiz-López, Manuel; Sánchez-Ruiz, Víctor; Gámiz-López, Victoria; Luna-Rodríguez, Juan Jesús

    2016-12-29

    Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to gather such information as non-intrusively as possible to avoid modifying the bees' working conditions and to obtain more reliable data. Wireless sensor networks meet these requirements. We designed a remote monitoring system (called WBee) based on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server. WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data at each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive in three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time.

  8. Honey Bee Colonies Remote Monitoring System

    PubMed Central

    Gil-Lebrero, Sergio; Quiles-Latorre, Francisco Javier; Ortiz-López, Manuel; Sánchez-Ruiz, Víctor; Gámiz-López, Victoria; Luna-Rodríguez, Juan Jesús

    2016-01-01

    Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to gather such information as non-intrusively as possible to avoid modifying the bees' working conditions and to obtain more reliable data. Wireless sensor networks meet these requirements. We designed a remote monitoring system (called WBee) based on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server. WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data at each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive in three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time. PMID:28036061

  9. Performance enhancement of a web-based picture archiving and communication system using commercial off-the-shelf server clusters.

    PubMed

    Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) has thoroughly changed the way medical information is communicated and managed. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.

  10. Assessment of Risk Communication about Undercooked Hamburgers by Restaurant Servers.

    PubMed

    Thomas, Ellen M; Binder, Andrew R; McLaughlin, Anne; Jaykus, Lee-Ann; Hanson, Dana; Powell, Douglas; Chapman, Benjamin

    2016-12-01

    According to the U.S. Food and Drug Administration 2013 Model Food Code, it is the duty of a food establishment to disclose and remind consumers of the risk when they order undercooked food such as ground beef. The purpose of this study was to explore the actual risk communication behaviors of food establishment servers. Secret shoppers visited 265 restaurants in seven geographic locations across the United States, ordered medium rare burgers, and collected and coded risk information from chain and independent restaurant menus and from server responses. The majority of servers reported an unreliable method of doneness (77%) or other incorrect information (66%) related to burger doneness and safety. These results indicate major gaps in server knowledge and risk communication, and the current risk communication language in the Model Food Code does not sufficiently fill these gaps. The question is whether servers should even be acting as risk communicators. There are numerous challenges associated with this practice, including high turnover rates, limited education, and a high-stress environment centered on pleasing the customer. If servers are designated as risk communicators, food establishment staff should be adequately trained and provided with consumer advisory messages that are accurate, audience appropriate, and delivered in a professional manner so that customers can make informed food safety decisions.

  11. Performance Enhancement of a Web-Based Picture Archiving and Communication System Using Commercial Off-the-Shelf Server Clusters

    PubMed Central

    Chang, Shu-Jun; Wu, Jay

    2014-01-01

    The rapid development of picture archiving and communication systems (PACSs) has thoroughly changed the way medical information is communicated and managed. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment. PMID:24701580

  12. A secure online image trading system for untrusted cloud environments.

    PubMed

    Munadi, Khairul; Arnia, Fitri; Syaryadhi, Mohd; Fujiyoshi, Masaaki; Kiya, Hitoshi

    2015-01-01

    In conventional image trading systems, images are usually stored unprotected on a server, rendering them vulnerable to untrusted server providers and malicious intruders. This paper proposes a conceptual image trading framework that enables secure storage and retrieval over Internet services. The process involves three parties: an image publisher, a server provider, and an image buyer. The aim is to facilitate secure storage and retrieval of original images for commercial transactions, while preventing untrusted server providers and unauthorized users from gaining access to the true contents. The framework exploits the Discrete Cosine Transform (DCT) coefficients and the moment invariants of images. Original images are visually protected in the DCT domain and stored on a repository server. Small representations of the original images, called thumbnails, are generated and made publicly accessible for browsing. When a buyer is interested in a thumbnail, he/she sends a query to retrieve the visually protected image. The thumbnails and protected images are matched using the DC component of the DCT coefficients and the moment invariant feature. After the matching process, the server returns the corresponding protected image to the buyer. However, the image remains visually protected unless a key is granted. Our target application is the online market, where publishers sell their stock images over the Internet using public cloud servers.
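
    The matching step lends itself to a small numerical sketch. Since the DC coefficient of an 8x8 DCT block is proportional to the block's mean intensity, a thumbnail can be compared against candidate protected images through their per-block means even when the AC coefficients are scrambled. The snippet below illustrates only this DC-based part (the moment-invariant feature is omitted) and uses equal-sized synthetic images for simplicity; it is not the paper's implementation.

      import numpy as np

      BLOCK = 8

      def dc_map(img):
          """Per-block means of a grayscale image (proportional to DCT DC terms)."""
          h, w = [d - d % BLOCK for d in img.shape]
          blocks = img[:h, :w].reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK)
          return blocks.mean(axis=(1, 3))

      def best_match(query, candidates):
          """Index of the candidate whose DC map is closest in L2 distance."""
          dists = [np.linalg.norm(dc_map(query) - dc_map(c)) for c in candidates]
          return int(np.argmin(dists))

      rng = np.random.default_rng(0)
      images = [rng.integers(0, 256, (64, 64)).astype(float) for _ in range(3)]
      query = images[1] + rng.normal(0, 2.0, (64, 64))  # slightly perturbed copy
      print(best_match(query, images))                  # prints 1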

  13. Automatic and continuous landslide monitoring: the Rotolon Web-based platform

    NASA Astrophysics Data System (ADS)

    Frigerio, Simone; Schenato, Luca; Mantovani, Matteo; Bossi, Giulia; Marcato, Gianluca; Cavalli, Marco; Pasuto, Alessandro

    2013-04-01

    Mount Rotolon (Eastern Italian Alps) is affected by a complex landslide that has been threatening the nearby village of Recoaro Terme since 1985. The first written record of a landslide occurrence dates back to 1798. After the last re-activation in November 2010 (637 mm of intense rainfall recorded in the 12 days prior to the event), a mass of approximately 320,000 m3 detached from the south flank of Mount Rotolon and evolved into a fast debris flow that ran for about 3 km along the stream bed. A real-time monitoring system was required to detect early indications of rapid movement, potentially saving lives and property. A web-based platform for automatic and continuous monitoring was designed as a first step in the implementation of an early-warning system. Measurements collected by the automated geotechnical and topographic instrumentation, deployed over the landslide body, are gathered in a central box station. After the calibration process, they are transmitted by web services to a local server, where graphs, maps, reports and alert announcements are automatically generated and updated. All the processed information is available via web browser with different access rights. The web environment provides the following advantages: 1) data is collected from different data sources and matched in a single server-side frame; 2) a remote user interface allows regular technical maintenance and direct access to the instruments; 3) the data management system is synchronized and automatically tested; 4) a graphical user interface in the browser provides a user-friendly tool for decision-makers to interact with a continuously updated system. Two monitoring systems are currently in operation at this site: 1) a GB-InSAR radar interferometer (University of Florence - Department of Earth Science) and 2) an Automated Total Station (ATS) combined with an extensometer network in a web-based solution (CNR-IRPI Padova). This work details the methodology, services and techniques adopted for the second monitoring solution. The activity directly interfaces with the local Civil Protection agency, the Regional Geological Service and local authorities with integrated roles and aims.

  14. Conducting and Supporting a Goal-Based Scenario Learning Environment.

    ERIC Educational Resources Information Center

    Montgomery, Joel; And Others

    1994-01-01

    Discussion of goal-based scenario (GBS) learning environments focuses on a training module designed to prepare consultants with new skills in managing clients, designing user-friendly graphical computer interfaces, and working in a client/server computing environment. Transforming the environment from teaching focused to learning focused is…

  15. [Design of an anesthesia and micro-environment information management system in mobile operating room].

    PubMed

    Wang, Xianwen; Liu, Zhiguo; Zhang, Wenchang; Wu, Qingfu; Tan, Shulin

    2013-08-01

    We have designed a mobile operating room information management system. The system is composed of a client and a server. The client, consisting of a PC, medical equipment, a PLC, and sensors, provides the acquisition and processing of anesthesia and micro-environment data. The server is a powerful computer that stores the system's data. The client gathers the medical device data using the C/S mode and analyzes the obtained HL7 messages through class library calls. The client collects the micro-environment information with the PLC and completes the data reading with OPC technology. Experimental results showed that the designed system manages patient anesthesia and micro-environment information well and improves the efficiency of the doctors' work and the level of digitization of the mobile operating room.
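
    To make the HL7 analysis step concrete, here is a minimal sketch of extracting numeric observations from an HL7 v2 ORU^R01 message. The sample message and field positions follow general HL7 v2 conventions; the actual class library and message profiles used by the system are not specified in the paper.

      # Segments are separated by carriage returns, fields by '|'; each OBX
      # segment carries one observation (field 2 = type, 3 = id, 5 = value,
      # 6 = units).
      SAMPLE = "\r".join([
          "MSH|^~\\&|MONITOR|OR1|ANES|HOSP|20130801120000||ORU^R01|1|P|2.3",
          "OBX|1|NM|HR^HeartRate||72|bpm|||||F",
          "OBX|2|NM|SPO2^OxygenSaturation||98|%|||||F",
      ])

      def observations(message):
          for segment in message.split("\r"):
              fields = segment.split("|")
              if fields[0] == "OBX" and fields[2] == "NM":  # numeric observation
                  code = fields[3].split("^")[0]
                  yield code, float(fields[5]), fields[6]

      for code, value, units in observations(SAMPLE):
          print(code, value, units)   # HR 72.0 bpm / SPO2 98.0 %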

  16. Performance of the High Sensitivity Open Source Multi-GNSS Assisted GNSS Reference Server.

    NASA Astrophysics Data System (ADS)

    Sarwar, Ali; Rizos, Chris; Glennon, Eamonn

    2015-06-01

    The Open Source GNSS Reference Server (OSGRS) exploits the GNSS Reference Interface Protocol (GRIP) to provide assistance data to GPS receivers. Assistance can be in terms of signal acquisition and in the processing of the measurement data. The data transfer protocol is based on an Extensible Mark-up Language (XML) schema. The first version of the OSGRS required a direct hardware connection to a GPS device to acquire the data necessary to generate the appropriate assistance. Scenarios of interest for OSGRS users are weak signal strength indoors, obstructed outdoors, or heavy multipath environments. This paper describes an improved version of the OSGRS that provides alternative assistance support from a number of Global Navigation Satellite Systems (GNSS). The underlying protocol to transfer GNSS assistance data from global casters is the Networked Transport of RTCM (Radio Technical Commission for Maritime Services) over Internet Protocol (NTRIP), and/or the RINEX (Receiver Independent Exchange) format. This expands the assistance and support model of the OSGRS to globally available GNSS data servers connected via internet casters. A variety of formats and versions of RINEX and RTCM streams become available, which strengthens the assistance provisioning capability of the OSGRS platform. The prime motivation for this work was to enhance the system architecture of the OSGRS to take advantage of globally available GNSS data sources. Open source software architectures and assistance models provide acquisition and data processing assistance for GNSS receivers operating in weak signal environments. This paper describes test scenarios to benchmark the OSGRSv2 performance against other Assisted-GNSS solutions. Benchmarking devices include the SPOT satellite messenger, MS-Based & MS-Assisted GNSS, HSGNSS (SiRFstar-III) and Wireless Sensor Networks Assisted-GNSS. Benchmarked parameters include the number of tracked satellites, the Time To First Fix (TTFF), navigation availability, and accuracy. Three different configurations of Multi-GNSS assistance servers were used, namely Cloud-Client-Server, Demilitarized Zone (DMZ) Client-Server, and PC-Client-Server, according to the connectivity location of client and server. The impact of server and/or client initiation, hardware capability, network latency, processing delay, and computation time, together with storage, scalability, processing, and load-sharing capabilities, was analysed. The performance of the OSGRS is compared against commercial GNSS, Assisted-GNSS, and WSN-enabled GNSS devices. The OSGRS system demonstrated lower TTFF and higher availability.
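
    Since NTRIP underlies the caster connections described above, a bare NTRIP v1 request is sketched below; the caster address, mount point, and credentials are placeholders. On success the caster replies with "ICY 200 OK" and then streams raw RTCM frames.

      import base64, socket

      CASTER, PORT, MOUNT = "caster.example.org", 2101, "MOUNTPOINT"
      AUTH = base64.b64encode(b"user:password").decode()

      # NTRIP v1 piggybacks on an HTTP/1.0-style GET to the mount point.
      request = ("GET /%s HTTP/1.0\r\n"
                 "User-Agent: NTRIP pyClient/1.0\r\n"
                 "Authorization: Basic %s\r\n\r\n" % (MOUNT, AUTH))

      with socket.create_connection((CASTER, PORT), timeout=10) as sock:
          sock.sendall(request.encode())
          header = sock.recv(4096)
          if header.startswith(b"ICY 200 OK"):
              rtcm = sock.recv(4096)  # first chunk of the RTCM correction stream
              print("received %d bytes of RTCM data" % len(rtcm))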

  17. Enriching the Web Processing Service

    NASA Astrophysics Data System (ADS)

    Wosniok, Christoph; Bensmann, Felix; Wössner, Roman; Kohlus, Jörn; Roosmann, Rainer; Heidmann, Carsten; Lehfeldt, Rainer

    2014-05-01

    The OGC Web Processing Service (WPS) provides a standard for implementing geospatial processes in service-oriented networks. In its current version 1.0.0 it provides the operations GetCapabilities, DescribeProcess and Execute, which can be used to offer custom processes based on single or multiple sub-processes. A large range of ready-to-use, fine-granular, fundamental geospatial processes has been developed by the GIS community in the past. However, modern use cases or whole workflow processes demand specifications of lifecycle management and service orchestration. Orchestrating smaller sub-processes is a task towards interoperability; comprehensive documentation using appropriate metadata is also required. Though different approaches were tested in the past, developing complex WPS applications still requires programming skills, knowledge about the software libraries in use, and a lot of integration effort. Our toolset RichWPS aims at providing a better overall experience by setting up two major components. The RichWPS ModelBuilder enables the graphics-aided design of workflow processes based on existing local and distributed processes and geospatial services. Once tested by the RichWPS Server, a composition can be deployed for production use on the RichWPS Server. The ModelBuilder obtains the necessary processes and services from a directory service, the RichWPS semantic proxy. It manages the lifecycle and is able to visualize results and debugging information. One aim will be to generate reproducible results; the workflow should be documented by metadata that can be integrated into Spatial Data Infrastructures. The RichWPS Server provides a set of interfaces to the ModelBuilder for, among others, testing composed workflow sequences, estimating their performance, and publishing them as common processes. The server is therefore oriented towards the upcoming WPS 2.0 standard and its ability to transactionally deploy and undeploy processes making use of a WPS-T interface. In order to deal with the results of these processing workflows, a server-side extension enables the RichWPS Server and its clients to use WPS presentation directives (WPS-PD), a content-related enhancement of the standardized WPS schema. We identified essential requirements for the components of our toolset by applying two use cases. The first enables the simplified comparison of modelled and measured data, a common task in hydro-engineering to validate the accuracy of a model. An implementation of the workflow includes reading, harmonizing, and comparing two datasets in NetCDF format. 2D water level data from the German Bight can be chosen, presented, and evaluated in a web client with interactive plots. The second use case is motivated by the Marine Strategy Directive (MSD) of the EU, which demands monitoring, action plans, and an evaluation of the ecological situation in the marine environment. Information techniques adapted to those of INSPIRE should be used. One of the parameters monitored and evaluated for the MSD is the expansion and quality of seagrass fields. With a view towards other evaluation parameters, we decompose the complex process of seagrass evaluation into reusable process steps and implement those packages as configurable WPS.
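
    For readers unfamiliar with the WPS operations named above (GetCapabilities, DescribeProcess, Execute), the sketch below drives them from Python with OWSLib; the endpoint URL, process identifier, and input names are placeholders, not the actual RichWPS services.

      from owslib.wps import WebProcessingService

      # GetCapabilities: discover the processes offered by a WPS endpoint.
      wps = WebProcessingService("http://example.org/wps", skip_caps=True)
      wps.getcapabilities()
      for process in wps.processes:
          print(process.identifier, "-", process.title)

      # Execute: run a (hypothetical) composed workflow process with literal inputs.
      execution = wps.execute("workflow:CompareDatasets",
                              inputs=[("modelled", "run42"), ("measured", "gauge7")])
      print(execution.status)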

  18. Cloudy Solar Software - Enhanced Capabilities for Finding, Pre-processing, and Visualizing Solar Data

    NASA Astrophysics Data System (ADS)

    Istvan Etesi, Laszlo; Tolbert, K.; Schwartz, R.; Zarro, D.; Dennis, B.; Csillaghy, A.

    2010-05-01

    In our project "Extending the Virtual Solar Observatory (VSO)" we have combined some of the features available in Solar Software (SSW) to produce an integrated environment for data analysis, supporting the complete workflow from data location, retrieval, preparation, and analysis to creating publication-quality figures. Our goal is an integrated analysis experience in IDL, easy-to-use but flexible enough to allow more sophisticated procedures such as multi-instrument analysis. To that end, we have made the transition from a locally oriented setting where all the analysis is done on the user's computer, to an extended analysis environment where IDL has access to services available on the Internet. We have implemented a form of Cloud Computing that uses the VSO search and a new data retrieval and pre-processing server (PrepServer) that provides remote execution of instrument-specific data preparation. We have incorporated the interfaces to the VSO search and the PrepServer into an IDL widget (SHOW_SYNOP) that provides user-friendly searching and downloading of raw solar data and optionally sends search results for pre-processing to the PrepServer prior to downloading the data. The raw and pre-processed data can be displayed with our plotting suite, PLOTMAN, which can handle different data types (light curves, images, and spectra) and perform basic data operations such as zooming, image overlays, solar rotation, etc. PLOTMAN is highly configurable and suited for visual data analysis and for creating publishable figures. PLOTMAN and SHOW_SYNOP work hand-in-hand for a convenient working environment. Our environment supports a growing number of solar instruments that currently includes RHESSI, SOHO/EIT, TRACE, SECCHI/EUVI, HINODE/XRT, and HINODE/EIS.

  19. Supply Chain Collaboration: Information Sharing in a Tactical Operating Environment

    DTIC Science & Technology

    2013-06-01

    architecture, there are four tiers: Client (Web Application Clients), Presentation (Web-Server), Processing (Application-Server), Data (Database... organization in each period. This data will be collected for analysis. i) Analyses and Validation: We will do a statistics test on this data, Pareto... notes, outstanding deliveries, and inventory. i) Analyses and Validation: We will do a statistics test on this data, Pareto analyses, and confirmation

  20. Mobile Monitoring Stations and Web Visualization of Biotelemetric System - Guardian II

    NASA Astrophysics Data System (ADS)

    Krejcar, Ondrej; Janckulik, Dalibor; Motalova, Leona; Kufel, Jan

    The main area of interest of our project is to provide a solution which can be used in different areas of health care and which will be available through PDAs (Personal Digital Assistants), web browsers, or desktop clients. The realized system deals with an ECG sensor connected to mobile equipment, such as a PDA/embedded device, based on the Microsoft Windows Mobile operating system. The whole system is based on the architecture of the .NET Compact Framework and Microsoft SQL Server. Visualization possibilities of the web interface and ECG data are also discussed, and a final recommendation is made for a Microsoft Silverlight solution, along with current screenshots of the implemented solution. The project was successfully tested in a real environment in a cryogenic room (-136 °C).

  1. Migration of legacy mumps applications to relational database servers.

    PubMed

    O'Kane, K C

    2001-07-01

    An extended implementation of the Mumps language is described that facilitates vendor-neutral migration of legacy Mumps applications to SQL-based relational database servers. Implemented as a compiler, this system translates Mumps programs to operating-system-independent, standard C code for subsequent compilation to fully stand-alone, binary executables. Added built-in functions and support modules extend the native hierarchical Mumps database with access to industry standard, networked, relational database management servers (RDBMS), thus freeing Mumps applications from dependence upon vendor specific, proprietary, unstandardized database models. Unlike Mumps systems that have added captive, proprietary RDBMS access, the programs generated by this development environment can be used with any RDBMS system that supports common network access protocols. Additional features include a built-in web server interface and the ability to interoperate directly with programs and functions written in other languages.

  2. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  3. A system for distributed intrusion detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snapp, S.R.; Brentano, J.; Dias, G.V.

    1991-01-01

    The study of providing security in computer networks is a rapidly growing area of interest because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to solving this problem is the intrusion-detection concept, whose basic premise is that not only is abandoning the existing, huge infrastructure of possibly insecure computer and network systems impossible, but replacing them with totally secure systems may not be feasible or cost-effective either. Previous work on intrusion-detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, or some of them could be LAN managers, like the network security monitor (NSM) of our previous work. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager in each host; a LAN manager for monitoring each LAN in the system; and a central manager which is placed at a single secure location and which receives reports from various host and LAN managers to process these reports, correlate them, and detect intrusions.

  4. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
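
    A schematic version of the registration step, written with OpenCV, is shown below. It assumes enough shared texture is detectable in both modalities; the paper derives its correspondences from textural characteristics of datacenter scenes, so the ORB features here are only an illustrative stand-in.

      import cv2
      import numpy as np

      def register(thermal_gray, visual_gray):
          """Warp an 8-bit thermal image into the visual camera's frame."""
          orb = cv2.ORB_create(1000)
          kp1, des1 = orb.detectAndCompute(thermal_gray, None)
          kp2, des2 = orb.detectAndCompute(visual_gray, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
          src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          # Robustly estimate the homography and map thermal pixels onto the
          # visual image so per-server regions line up across both modalities.
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          h, w = visual_gray.shape
          return cv2.warpPerspective(thermal_gray, H, (w, h))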

  5. A Remote Health Monitoring System for the Elderly Based on Smart Home Gateway

    PubMed Central

    Shao, Minggang

    2017-01-01

    This paper proposes a remote health monitoring system for the elderly based on a smart home gateway. The proposed system consists of three parts: the smart clothing, the smart home gateway, and the health care server. The smart clothing collects the elderly's electrocardiogram (ECG) and motion signals. The home gateway is used for data transmission. The health care server provides services of data storage and user information management; it is constructed on the Windows-Apache-MySQL-PHP (WAMP) platform and is tested on the Ali Cloud platform. To resolve the issues of data overload and network congestion of the home gateway, an ECG compression algorithm is applied. System demonstration shows that the ECG signals and motion signals of the elderly can be monitored. Evaluation of the compression algorithm shows that it has a high compression ratio and low distortion and consumes little time, which is suitable for home gateways. The proposed system has good scalability, and it is simple to operate. It has the potential to provide long-term and continuous home health monitoring services for the elderly. PMID:29204258

  6. A Remote Health Monitoring System for the Elderly Based on Smart Home Gateway.

    PubMed

    Guan, Kai; Shao, Minggang; Wu, Shuicai

    2017-01-01

    This paper proposes a remote health monitoring system for the elderly based on a smart home gateway. The proposed system consists of three parts: the smart clothing, the smart home gateway, and the health care server. The smart clothing collects the elderly's electrocardiogram (ECG) and motion signals. The home gateway is used for data transmission. The health care server provides services of data storage and user information management; it is constructed on the Windows-Apache-MySQL-PHP (WAMP) platform and is tested on the Ali Cloud platform. To resolve the issues of data overload and network congestion of the home gateway, an ECG compression algorithm is applied. System demonstration shows that the ECG signals and motion signals of the elderly can be monitored. Evaluation of the compression algorithm shows that it has a high compression ratio and low distortion and consumes little time, which is suitable for home gateways. The proposed system has good scalability, and it is simple to operate. It has the potential to provide long-term and continuous home health monitoring services for the elderly.

  7. The personal receiving document management and the realization of email function in OAS

    NASA Astrophysics Data System (ADS)

    Li, Biqing; Li, Zhao

    2017-05-01

    This software is an independent system suitable for small and medium enterprises; it provides personal office, scientific research project management, and system management functions, runs independently in the relevant environment, and addresses practical needs. It is developed using the currently popular B/S (browser/server) structure and ASP.NET technology, with the Windows 7 operating system, Microsoft Visual Studio 2008, and an SQL Server 2005 database as the development platform.

  8. Measuring, managing and maximizing performance of mineral processing plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bascur, O.A.; Kennedy, J.P.

    1995-12-31

    The implementation of continuous quality improvement is the confluence of Total Quality Management, People Empowerment, Performance Indicators and Information Engineering. The supporting information technologies allow a mineral processor to narrow the gap between management business objectives and the process control level. One of the most important contributors is the user friendliness and flexibility of the personal computer in a client/server environment. This synergistic combination, when used for real-time performance monitoring, translates into production cost savings, improved communications and enhanced decision support. Other savings come from the reduced time to collect data and perform tedious calculations, the ability to act quickly on fresh new data, and the ability to generate and validate data to be used by others. This paper presents an integrated view of plant management. The selection of the proper tools for continuous quality improvement is described. The process of selecting critical performance monitoring indices for improved plant performance is discussed. The importance of well-balanced technological improvement, personnel empowerment, total quality management and organizational assets is stressed.

  9. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure.

    PubMed

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-20

    This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without reducing system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events.

  10. An integrated gateway for various PHDs in U-healthcare environments.

    PubMed

    Park, KeeHyun; Pak, JuGeon

    2012-01-01

    We propose an integrated gateway for various personal health devices (PHDs). This gateway receives measurements from various PHDs and conveys them to a remote monitoring server (MS). It provides two kinds of transmission modes: immediate transmission and integrated transmission. The former mode operates if a measurement exceeds a predetermined threshold or in the case of an emergency. In the latter mode, the gateway retains the measurements instead of forwarding them. When the reporting time comes, the gateway extracts all the stored measurements, integrates them into one message, and transmits the integrated message to the MS. Through this mechanism, the transmission overhead can be reduced. On the basis of the proposed gateway, we construct a u-healthcare system comprising an activity monitor, a medication dispenser, and a pulse oximeter. The evaluation results show that the size of separate messages from various PHDs is reduced through the integration process, and the process does not require much time; the integration time is negligible.
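
    The two transmission modes can be captured in a few lines; the threshold value, device names, and message format below are invented for illustration and do not reflect the paper's message schema.

      import json, time

      THRESHOLDS = {"pulse_oximeter_spo2": 90.0}  # below this -> emergency
      buffer = []

      def send(message):  # placeholder for the link to the monitoring server (MS)
          print("->", json.dumps(message))

      def on_measurement(device, value):
          # Immediate transmission for threshold violations and emergencies.
          if device in THRESHOLDS and value < THRESHOLDS[device]:
              send({"mode": "immediate", "device": device, "value": value})
          else:
              buffer.append({"ts": time.time(), "device": device, "value": value})

      def on_reporting_time():
          # Integrated transmission: all retained measurements in one message,
          # reducing the per-message transmission overhead.
          if buffer:
              send({"mode": "integrated", "measurements": buffer[:]})
              buffer.clear()

      on_measurement("activity_monitor_steps", 5200)
      on_measurement("pulse_oximeter_spo2", 88.0)  # emergency -> sent at once
      on_reporting_time()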

  11. An Integrated Gateway for Various PHDs in U-Healthcare Environments

    PubMed Central

    Park, KeeHyun; Pak, JuGeon

    2012-01-01

    We propose an integrated gateway for various personal health devices (PHDs). This gateway receives measurements from various PHDs and conveys them to a remote monitoring server (MS). It provides two kinds of transmission modes: immediate transmission and integrated transmission. The former mode operates if a measurement exceeds a predetermined threshold or in the case of an emergency. In the latter mode, the gateway retains the measurements instead of forwarding them. When the reporting time comes, the gateway extracts all the stored measurements, integrates them into one message, and transmits the integrated message to the MS. Through this mechanism, the transmission overhead can be reduced. On the basis of the proposed gateway, we construct a u-healthcare system comprising an activity monitor, a medication dispenser, and a pulse oximeter. The evaluation results show that the size of separate messages from various PHDs is reduced through the integration process, and the process does not require much time; the integration time is negligible. PMID:22899891

  12. BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.

    PubMed

    Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron

    2009-06-01

    BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a Creative Commons license, along with additional documentation and a tutorial, from http://bioinf.nuigalway.ie.

  13. Application of wireless networks-peer-to-peer information sharing

    NASA Astrophysics Data System (ADS)

    Ellappan, Vijayan; Chaki, Suchismita; Kumar, AVN

    2017-11-01

    Peer-to-peer (P2P) communication and its applications have become a commonplace architecture in the wired network environment, yet they have not been successfully adapted to the wireless environment. Unlike the traditional client-server framework, in a P2P framework each node can play the role of client as well as server simultaneously and exchange data or information with others. We aim to design an application which can adapt to wireless ad-hoc networks. Peer-to-peer communication can help people share their files (information, images, audio, video, and so on) and communicate with each other without relying on a particular network infrastructure or limited data plans. Here there is a central server with the help of which the peers can obtain information about the other peers in the network. Indeed, even without the Internet, devices have the potential to allow users to connect and communicate in an ad hoc manner through short-range wireless protocols such as Wi-Fi.

  14. GUI implementation of image encryption and decryption using Open CV-Python script on secured TFTP protocol

    NASA Astrophysics Data System (ADS)

    Reddy, K. Rasool; Rao, Ch. Madhava

    2018-04-01

    Currently, security is one of the primary concerns in the transmission of images due to their increasing use in industrial applications, so it is necessary to protect image data from unauthorized individuals. Various strategies have been investigated to secure such data, and among them encryption is one of the most prominent methods. This paper presents a sophisticated Rijndael (AES) algorithm to protect the data from unauthorized people. An Exponential Key Exchange (EKE) concept is also introduced to exchange the key between client and server. Data are exchanged over the network between client and server through a simple protocol known as the Trivial File Transfer Protocol (TFTP). This protocol is used mainly in embedded servers to transfer data and can also protect the data if protection capabilities are integrated. In this paper, we implement a GUI environment for image encryption and decryption. All experiments were carried out in a Linux environment using an OpenCV-Python script.
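
    As a minimal sketch of the encryption stage (the EKE key exchange and the TFTP transfer itself are out of scope here), the snippet below encrypts raw image pixels with AES in CBC mode using PyCryptodome; a synthetic image keeps it self-contained.

      import numpy as np
      from Crypto.Cipher import AES
      from Crypto.Random import get_random_bytes
      from Crypto.Util.Padding import pad, unpad

      image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
      key = get_random_bytes(16)  # in the paper this would come from the key exchange
      iv = get_random_bytes(16)

      # Sender: pad the pixel buffer to the AES block size and encrypt.
      ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(
          pad(image.tobytes(), AES.block_size))

      # Receiver: decrypt, unpad, and restore the original array shape.
      plain = unpad(AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext), AES.block_size)
      restored = np.frombuffer(plain, dtype=np.uint8).reshape(image.shape)
      assert np.array_equal(image, restored)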

  15. Resource Allocation in Dynamic Environments

    DTIC Science & Technology

    2012-10-01

    Utility Curve for the TOC Camera; Figure 20: Utility Curves for Ground Vehicle Camera and Squad Camera; Figure 21: Facial-Recognition Utility... A Facial-Recognition Server (FRS) can receive images from smartphones the squads use, compare them to a local database, and then return the... fallback. In addition, each squad has the ability to capture images with a smartphone and send them to a Facial-Recognition Server in the TOC to

  16. Research on Information Sharing Method for Future C2 in Network Centric Environment

    DTIC Science & Technology

    2011-06-01

    subscription (or search) request. Then, some of the information service nodes for future C2 deal with these users’ requests, locate, federated search the... federated search server is responsible for resolving the search requests sent out from the users, and executing the federated search. The information... federated search server, information filtering model, or information subscription matching algorithm (such as users subscribe the target information at two

  17. Agent-Based Framework for Discrete Entity Simulations

    DTIC Science & Technology

    2006-11-01

    Postgres database server for environment queries of neighbors and continuum data. As expected for raw database queries (no database optimizations in...form. Eventually the code was ported to GNU C++ on the same single Intel Pentium 4 CPU running RedHat Linux 9.0 and Postgres database server...Again Postgres was used for environmental queries, and the tool remained relatively slow because of the immense number of queries necessary to assess

  18. Development of water environment information management and water pollution accident response system

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Ruan, H.

    2009-12-01

    In recent years, many water pollution accidents have occurred along with rapid economic development. In this study, a water environment information management and water pollution accident response system was developed based on geographic information system (GIS) techniques. The system integrates a spatial database, an attribute database, a hydraulic model, and a water quality model under a user-friendly interface in a GIS environment. The system runs on both Client/Server (C/S) and Browser/Server (B/S) platforms, which focus on modeling and inquiry, respectively. It provides spatial and attribute data inquiry, water quality evaluation, statistics, water pollution accident response case management (opening a reservoir, etc.), and 2D and 3D visualization functions, and supplies supporting information for decision-making on water pollution accident response. A polluted plume in the Huaihe River was selected to simulate the transport of pollutants.

  19. Development of a Cloud Computing-Based Pier Type Port Structure Stability Evaluation Platform Using Fiber Bragg Grating Sensors.

    PubMed

    Jo, Byung Wan; Jo, Jun Ho; Khan, Rana Muhammad Asad; Kim, Jung Hoon; Lee, Yun Sung

    2018-05-23

    Structural Health Monitoring is a topic of great interest in port structures due to the ageing of the structures and the limitations of evaluating them. This paper presents a cloud computing-based stability evaluation platform for a pier type port structure using Fiber Bragg Grating (FBG) sensors, in a system consisting of an FBG strain sensor, an FBG displacement gauge, an FBG angle meter, a gateway, and a cloud computing-based web server. The sensors were installed on core components of the structure and measurements were taken to evaluate it. The measurement values were transmitted to the web server via the gateway for analysis and visualization. All data were analyzed and visualized in the web server to evaluate the structure based on the safety evaluation index (SEI). The stability evaluation platform for pier type port structures enables efficient monitoring of the structures, which can be carried out easily anytime and anywhere by converging new technologies such as cloud computing and FBG sensors. In addition, the platform has been successfully implemented at “Maryang Harbor”, situated in Maryang-Meyon of Korea, to test its durability.

  20. YODA++: A proposal for a semi-automatic space mission control

    NASA Astrophysics Data System (ADS)

    Casolino, M.; de Pascale, M. P.; Nagni, M.; Picozza, P.

    YODA++ is a proposal for a semi-automated data handling and analysis system for the PAMELA space experiment. The core routines have been developed to process a stream of raw data downlinked from the Resurs DK1 satellite (housing PAMELA) to the ground station in Moscow. Raw data consist of scientific data complemented by housekeeping information. Housekeeping information will be analyzed within a short time from download (1 h) in order to monitor the status of the experiment and to support mission acquisition planning. A prototype for the data visualization will run on an Apache Tomcat web application server, providing an off-line analysis tool usable from a browser and part of the code for system maintenance. Data retrieval development is in the production phase, while a GUI interface for human-friendly monitoring is in a preliminary phase, as is a JavaServer Pages/JavaServer Faces (JSP/JSF) web application facility. On a longer timescale (1-3 h from download) scientific data are analyzed. The data storage core will be a mix of CERN's ROOT file structure and MySQL as a relational database. YODA++ is currently being used in the on-ground integration and testing of PAMELA data.

  1. Active Cyber Defense: Enhancing National Cyber Defense

    DTIC Science & Technology

    2011-12-01

    Prevention System ISP Internet Service Provider IT Information Technology IWM Information Warfare Monitor LOAC Law of Armed Conflict NATO... the Information Warfare Monitor (IWM) discovered that GhostNet had infected 1,295 computers in 103 countries. As many as thirty percent of these... By monitoring the computers in Dharamsala and at various Tibetan missions, IWM was able to determine the IP addresses of the servers hosting Gh0st

  2. [Using modern information technology in the practice of the sanitary-epidemiological surveiliance during the XXII Olympic Winter Games and XI Paralympic Winter Games in Sochi].

    PubMed

    Popova, A Yu; Kuzkin, B P; Demina, Yu V; Dubyansky, V M; Kulichenko, A N; Maletskaya, O V; Shayakhmetov, O Kh; Semenko, O V; Nazarenko, Yu V; Agapitov, D S; Mezentsev, V M; Kharchenko, T V; Efremenko, D V; Oroby, V G; Klindukhov, V P; Grechanaya, T V; Nikolaevich, P N; Tesheva, S Ch; Rafeenko, G K

    2015-01-01

    To improve sanitary and epidemiological surveillance at the Olympic Games, a GIS system was developed for monitoring facilities and situations in the Sochi region. The system is based on the ArcGIS for Server software package, version 10.2, with web Java objects, the Apache web server, and software developed in Java. During execution, the following tasks were solved: stratification of the Olympic Games region by individual and aggregate epidemiological risk of OCI of various etiologies, ranking of epidemiologically important facilities by their sanitary and hygienic conditions, and monitoring of infectious diseases (in real time, according to the preliminary diagnosis). GIS monitoring demonstrated its effectiveness: information was received from various sources but consolidated on a single portal, and it was available in real time to all the specialists involved in ensuring epidemiological well-being, who used it in their work during the Olympic Games in Sochi.

  3. IoT based Growth Monitoring System of Guava (Psidium guajava L.) Fruits

    NASA Astrophysics Data System (ADS)

    Slamet, W.; Irham, N. M.; Sutan, M. S. A.

    2018-05-01

    Growth monitoring of plants is important, especially to evaluate the influence of the environment or growing conditions on productivity. One way to monitor plant growth is by measuring the radial growth (i.e., the change of circumference) of certain parts of the plant, such as the trunk, branches, and fruit. In this study we develop an Internet of Things (IoT)-based monitoring system for the radial growth of plants using a low-cost optoelectronic sensor. The system was applied to monitor the radial growth of guava fruits (Psidium guajava L.). The sensor's principle is based on an optoelectronic sensor that detects alternating white and black narrow bars printed on a reflective tape. The reflective tape is installed encircling the fruit. The movement of the reflective tape follows the radial growth of the fruit, so the infrared sensor in the optoelectronic unit responds to the tape's movement. The device is designed to measure objects continuously and to monitor them long-term with minimal maintenance. The data collected by the sensors are sent to the server and can also be monitored in real time. Based on field tests, at the current stage, the developed sensor could measure the radial growth of the fruits with a maximum error of 2 mm. In terms of data transfer, the success rate of the developed system was 97.54%. The results indicate that the developed system can be used as an effective tool for monitoring plant growth.
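
    The conversion from counted bar transitions to growth is simple arithmetic, sketched below; the 1 mm bar pitch is an assumed figure for illustration, not a specification from the paper.

      import math

      BAR_PITCH_MM = 1.0  # assumed width of one printed black/white bar

      def growth(count_start, count_end):
          """Circumference and diameter change implied by the tape moving N bars."""
          dc = (count_end - count_start) * BAR_PITCH_MM
          return dc, dc / math.pi

      dc, dd = growth(120, 138)
      print("circumference +%.1f mm, diameter +%.2f mm" % (dc, dd))
      # With a 1 mm pitch, the reported 2 mm maximum error corresponds to
      # miscounting two bar edges over the monitoring period.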

  4. Deceit: A flexible distributed file system

    NASA Technical Reports Server (NTRS)

    Siegel, Alex; Birman, Kenneth; Marzullo, Keith

    1989-01-01

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

  5. A Rich Client-Server Based Framework for Convenient Security and Management of Mobile Applications

    NASA Astrophysics Data System (ADS)

    Badan, Stephen; Probst, Julien; Jaton, Markus; Vionnet, Damien; Wagen, Jean-Frédéric; Litzistorf, Gérald

    Contact lists, Emails, SMS or custom applications on a professional smartphone could hold very confidential or sensitive information. What could happen in case of theft or accidental loss of such devices? Such events could be detected by the separation between the smartphone and a Bluetooth companion device. This event should typically block the applications and delete personal and sensitive data. Here, a solution is proposed based on a secured framework application running on the mobile phone as a rich client connected to a security server. The framework offers strong and customizable authentication and secured connectivity. A security server manages all security issues. User applications are then loaded via the framework. User data can be secured, synchronized, pushed or pulled via the framework. This contribution proposes a convenient although secured environment based on a client-server architecture using external authentications. Several features of the proposed system are exposed and a practical demonstrator is described.

  6. Development of Data Processing Software for NBI Spectroscopic Analysis System

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaodan; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Wu, Deyun; Cui, Qinglong

    2015-04-01

    A set of data processing software is presented in this paper for processing NBI spectroscopic data. For better and more scientific management and querying of these data, they are managed uniformly by the NBI data server. The data processing software offers the functions of uploading beam spectral original and analytic data to the data server manually and automatically, querying and downloading all the NBI data, as well as dealing with local LZO data. The software set is composed of a server program and a client program. The server software is programmed in C/C++ under a CentOS development environment. The client software is developed under a VC 6.0 platform, which offers convenient operational human interfaces. The network communications between the server and the client are based on TCP. With the help of this software set, the NBI spectroscopic analysis system realizes unattended automatic operation, and the clear interface also makes it much more convenient to offer beam intensity distribution data and beam power data to operators for operational decision-making. Supported by the National Natural Science Foundation of China (No. 11075183) and the Chinese Academy of Sciences Knowledge Innovation

  7. "Just Another Tool for Online Studies” (JATOS): An Easy Solution for Setup and Management of Web Servers Supporting Online Studies

    PubMed Central

    Lange, Kristian; Kühn, Simone; Filevich, Elisa

    2015-01-01

    We present here “Just Another Tool for Online Studies” (JATOS): an open source, cross-platform web application with a graphical user interface (GUI) that greatly simplifies setting up and communicating with a web server to host online studies that are written in JavaScript. JATOS is easy to install in all three major platforms (Microsoft Windows, Mac OS X, and Linux), and seamlessly pairs with a database for secure data storage. It can be installed on a server or locally, allowing researchers to try the application and feasibility of their studies within a browser environment, before engaging in setting up a server. All communication with the JATOS server takes place via a GUI (with no need to use a command line interface), making JATOS an especially accessible tool for researchers without a strong IT background. We describe JATOS’ main features and implementation and provide a detailed tutorial along with example studies to help interested researchers to set up their online studies. JATOS can be found under the Internet address: www.jatos.org. PMID:26114751

  8. 2MASS Catalog Server Kit Version 2.1

    NASA Astrophysics Data System (ADS)

    Yamauchi, C.

    2013-10-01

    The 2MASS Catalog Server Kit is open source software for easily constructing a high performance search server for important astronomical catalogs. This software utilizes the open source RDBMS PostgreSQL; therefore, any user can set up the database on a local computer by following the step-by-step installation guide. The kit provides highly optimized stored functions for positional searches similar to those of SDSS SkyServer. Together with these, the powerful SQL environment of PostgreSQL will meet various users' demands. We released 2MASS Catalog Server Kit version 2.1 in 2012 May, which supports the latest WISE All-Sky catalog (563,921,584 rows) and 9 major all-sky catalogs. Local databases are often indispensable for observatories with unstable or narrow-band networks or heavy use, such as retrieving large numbers of records within a short period of time. This software is best suited for such purposes, and the increased catalog support and improvements of version 2.1 cover a wider range of applications, including advanced calibration systems, scientific studies using complicated SQL queries, etc. Official page: http://www.ir.isas.jaxa.jp/~cyamauch/2masskit/
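
    Once the kit's database is installed, a positional search reduces to a single SQL call. The sketch below is SkyServer-style, as the abstract suggests; the function name, database name, and credentials are placeholders rather than the kit's documented API.

      import psycopg2

      conn = psycopg2.connect(dbname="twomass", user="reader")
      cur = conn.cursor()

      # Cone search: sources within 0.05 deg of (RA, Dec) = (266.417, -29.008),
      # delegated to a hypothetical server-side stored function.
      cur.execute("SELECT * FROM fGetNearbyObjEq(%s, %s, %s) LIMIT 10",
                  (266.417, -29.008, 0.05))
      for row in cur.fetchall():
          print(row)

      cur.close()
      conn.close()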

  9. Security Enhanced Anonymous Multiserver Authenticated Key Agreement Scheme Using Smart Cards and Biometrics

    PubMed Central

    Choi, Younsung; Nam, Junghyun; Lee, Donghoon; Kim, Jiye; Jung, Jaewook; Won, Dongho

    2014-01-01

    An anonymous user authentication scheme allows a user, who wants to access a remote application server, to achieve mutual authentication and session key establishment with the server in an anonymous manner. To enhance the security of such authentication schemes, recent researches combined user's biometrics with a password. However, these authentication schemes are designed for single server environment. So when a user wants to access different application servers, the user has to register many times. To solve this problem, Chuang and Chen proposed an anonymous multiserver authenticated key agreement scheme using smart cards together with passwords and biometrics. Chuang and Chen claimed that their scheme not only supports multiple servers but also achieves various security requirements. However, we show that this scheme is vulnerable to a masquerade attack, a smart card attack, a user impersonation attack, and a DoS attack and does not achieve perfect forward secrecy. We also propose a security enhanced anonymous multiserver authenticated key agreement scheme which addresses all the weaknesses identified in Chuang and Chen's scheme. PMID:25276847

  10. AMS data production facilities at science operations center at CERN

    NASA Astrophysics Data System (ADS)

    Choutko, V.; Egorov, A.; Eline, A.; Shan, B.

    2017-10-01

    The Alpha Magnetic Spectrometer (AMS) is a high energy physics experiment on board the International Space Station (ISS). This paper presents the hardware and software facilities of the Science Operation Center (SOC) at CERN. Data production is built around the production server - a scalable distributed service which links together a set of different programming modules for science data transformation and reconstruction. The server has the capacity to manage 1000 parallel job producers, i.e. up to 32K logical processors. A monitoring and management tool with a production GUI is also described.

  11. [Automated anesthesia record system].

    PubMed

    Zhu, Tao; Liu, Jin

    2005-12-01

    Based on a client/server architecture, automated anesthesia record software running under the Windows operating system on a network has been developed and programmed with Microsoft Visual C++ 6.0, Visual Basic 6.0, and SQL Server. The system manages the patient's information throughout anesthesia. It can collect and integrate the data from several kinds of medical equipment, such as monitors, infusion pumps, and anesthesia machines, automatically and in real time. After that, the system generates the anesthesia sheets automatically. The record system makes the anesthesia record more accurate and complete and can raise the anesthesiologist's working efficiency.

  12. Arc4nix: A cross-platform geospatial analytical library for cluster and cloud computing

    NASA Astrophysics Data System (ADS)

    Tang, Jingyin; Matyas, Corene J.

    2018-02-01

    Big Data in geospatial technology is a grand challenge for processing capacity. The ability to use a GIS for geospatial analysis on Cloud Computing and High Performance Computing (HPC) clusters has emerged as a new approach to provide feasible solutions. However, users lack the ability to migrate existing research tools to a Cloud Computing or HPC-based environment because of the incompatibility between the market-dominating ArcGIS software stack and the Linux operating system. This manuscript details a cross-platform geospatial library, "arc4nix", to bridge this gap. Arc4nix provides an application programming interface compatible with ArcGIS and its Python library "arcpy". Arc4nix uses a decoupled client-server architecture that permits geospatial analytical functions to run on the remote server while other functions run in the native Python environment. It uses functional programming and metaprogramming to dynamically construct Python code containing the actual geospatial calculations, send it to a server, and retrieve the results. Arc4nix allows users to employ their arcpy-based scripts in a Cloud Computing and HPC environment with minimal or no modification. It also supports parallelizing tasks across multiple CPU cores and nodes for large-scale analyses. A case study of geospatial processing of a numerical weather model's output shows that arc4nix scales linearly in a distributed environment. Arc4nix is open-source software.
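
    The code-construction idea can be sketched in a few lines of Python. The proxy class below is an illustrative assumption, not arc4nix's actual implementation: it turns attribute calls into arcpy source text that a remote server process could then execute.

        # Minimal sketch of the decoupled pattern: a local proxy records a tool
        # call and emits equivalent "arcpy" source code for remote execution.
        class RemoteToolProxy:
            """Turn attribute calls into Python source strings."""

            def __init__(self, module="arcpy"):
                self.module = module

            def __getattr__(self, tool_name):
                def make_code(*args, **kwargs):
                    rendered = [repr(a) for a in args]
                    rendered += [f"{k}={v!r}" for k, v in kwargs.items()]
                    return f"{self.module}.{tool_name}({', '.join(rendered)})"
                return make_code

        arcpy = RemoteToolProxy()
        code = arcpy.Buffer_analysis("storm_track.shp", "out.shp", "50 Kilometers")
        print(code)  # arcpy.Buffer_analysis('storm_track.shp', 'out.shp', '50 Kilometers')
        # arc4nix would ship such generated code to the server and fetch results.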

  13. Process Integrated Mechanism for Human-Computer Collaboration and Coordination

    DTIC Science & Technology

    2012-09-12

    system we implemented the TAFLib library that provides the communication with TAF. The data received from the TAF server is collected in a data structure...send new commands and flight plans for the UAVs to the TAF server. Test scenarios Several scenarios have been implemented to test and prove our...areas. Shooting Enemies The basic scenario proved the successful integration of PIM and the TAF simulation environment. Subsequently we improved the CP

  14. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1991-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LANs) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers running UNIX-based operating systems make up most LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File Server (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS), and its requirements are described here. An application, or functional, description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.

  15. Fully Distributed Monitoring Architecture Supporting Multiple Trackees and Trackers in Indoor Mobile Asset Management Application

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-01-01

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished using the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure that supports high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can limit scalability and cause traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes a real-time architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform. In order to verify the suggested platform, scalability performance under increasing numbers of concurrent lookups was evaluated in a real test bed. Tracking latency and the traffic load ratio in the proposed tracking architecture were also evaluated. PMID:24662407

  16. An Efficient Audio Coding Scheme for Quantitative and Qualitative Large Scale Acoustic Monitoring Using the Sensor Grid Approach

    PubMed Central

    Gontier, Félix; Lagrange, Mathieu; Can, Arnaud; Lavandier, Catherine

    2017-01-01

    The spreading of urban areas and the growth of the human population worldwide raise societal and environmental concerns. To better address these concerns, monitoring of the acoustic environment in urban as well as rural or wilderness areas is an important matter. Building on the recent development of low-cost hardware acoustic sensors, we propose in this paper a sensor grid approach to tackle this issue. In this kind of approach, the crucial question is the nature of the data that are transmitted from the sensors to the processing and archival servers. To this end, we propose an efficient audio coding scheme based on a third-octave band spectral representation that allows: (1) the estimation of standard acoustic indicators; and (2) the recognition of acoustic events at state-of-the-art performance rates. The former is useful for providing quantitative information about the acoustic environment, while the latter is useful for gathering qualitative information and building perceptually motivated indicators using, for example, the emergence of a given sound source. The coding scheme is also demonstrated to transmit spectrally encoded data that, reverted to the time domain using state-of-the-art techniques, are not intelligible, thus protecting the privacy of citizens. PMID:29186021
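
    The band-level half of such a scheme can be sketched directly: reduce an audio frame to third-octave band powers. The frame length, sampling rate, and band range below are assumptions for demonstration, not the authors' exact parameters.

        # Sketch: third-octave band levels for one audio frame (base-2 bands).
        import numpy as np

        def third_octave_levels(frame, fs, bands=range(-16, 14)):
            """Return (center_freqs, levels_dB) for one audio frame."""
            spectrum = np.abs(np.fft.rfft(frame)) ** 2
            freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
            centers, levels = [], []
            for n in bands:
                fc = 1000.0 * 2.0 ** (n / 3.0)                 # band center
                lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # band edges
                if hi > fs / 2:
                    break
                power = spectrum[(freqs >= lo) & (freqs < hi)].sum()
                centers.append(fc)
                levels.append(10.0 * np.log10(power + 1e-12))
            return np.array(centers), np.array(levels)

        fs = 32000
        t = np.arange(fs) / fs
        frame = np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone
        centers, levels = third_octave_levels(frame, fs)
        print(centers[np.argmax(levels)])          # ~1000.0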

  17. Research on TCP/IP network communication based on Node.js

    NASA Astrophysics Data System (ADS)

    Huang, Jing; Cai, Lixiong

    2018-04-01

    Under big data, long-lived connections, and high concurrency, conventional TCP/IP network services suffer performance bottlenecks due to their blocking, multi-threaded service model. This paper presents an approach to TCP/IP network communication based on Node.js. After analyzing the characteristics of the Node.js architecture and its asynchronous, non-blocking I/O model, the principle behind its efficiency is discussed; the TCP/IP communication model is then compared and analyzed to explain why the TCP/IP protocol stack is so widely used in network communication. Finally, to handle the large data volumes and high concurrency of large-scale grape-growing environment monitoring, a TCP server based on Node.js is designed. The results show that the example runs stably and efficiently.
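
    Since the code examples in this document use Python, the event-loop, non-blocking service model that the paper attributes to Node.js is sketched here with asyncio; the line-oriented sensor payload is an assumption.

        # Non-blocking TCP sensor server: each connection is a coroutine,
        # so no thread sits blocked while waiting for the next reading.
        import asyncio

        async def handle_sensor(reader, writer):
            while data := await reader.readline():
                print("reading:", data.decode().strip())   # e.g. "temp=23.4"
                writer.write(b"ACK\n")
                await writer.drain()
            writer.close()
            await writer.wait_closed()

        async def main():
            server = await asyncio.start_server(handle_sensor, "0.0.0.0", 9000)
            async with server:
                await server.serve_forever()

        if __name__ == "__main__":
            asyncio.run(main())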

  18. Climatic Data Integration and Analysis - Regional Approaches to Climate Change for Pacific Northwest Agriculture (REACCH PNA)

    NASA Astrophysics Data System (ADS)

    Seamon, E.; Gessler, P. E.; Flathers, E.; Sheneman, L.; Gollberg, G.

    2013-12-01

    The Regional Approaches to Climate Change for Pacific Northwest Agriculture (REACCH PNA) is a five-year USDA/NIFA-funded coordinated agriculture project to examine the sustainability of cereal crop production systems in the Pacific Northwest in relationship to ongoing climate change. As part of this effort, an extensive data management system has been developed to enable researchers, students, and the public to upload, manage, and analyze various data. The REACCH PNA data management team has developed three core systems to encompass cyberinfrastructure and data management needs: 1) the reacchpna.org portal (https://www.reacchpna.org) is the entry point for all public and secure information, with secure access by REACCH PNA members for data analysis, uploading, and informational review; 2) the REACCH PNA Data Repository is a replicated, redundant database server environment that allows for file and database storage and access to all core data; and 3) the REACCH PNA Libraries are functional groupings of data for REACCH PNA members and the public, based on their access level. These libraries are accessible through the https://www.reacchpna.org portal. The developed system is structured in a virtual server environment (data, applications, web) that includes a geospatial database/geospatial web server for web mapping services (ArcGIS Server), use of ESRI's Geoportal Server for data discovery and metadata management (under the ISO 19115-2 standard), Thematic Realtime Environmental Distributed Data Services (THREDDS) for data cataloging, and an interactive Python notebook server (IPython) for data analysis. REACCH systems are housed and maintained by the Northwest Knowledge Network project (www.northwestknowledge.net), which provides data management services to support research. Initial project data harvesting and meta-tagging efforts have resulted in the interrogation and loading of over 10 terabytes of climate model output, regional entomological data, agricultural and atmospheric information, as well as imagery, publications, videos, and other soft content. In addition, the outlined data management approach has focused on the integration and interconnection of hard data (raw data output) with associated publications, presentations, or other narrative documentation through metadata lineage associations. This harvest-and-consume data management methodology could additionally be applied to other research team environments that involve large and divergent data.

  19. Unobstructive Body Area Networks (BAN) for efficient movement monitoring.

    PubMed

    Felisberto, Filipe; Costa, Nuno; Fdez-Riverola, Florentino; Pereira, António

    2012-01-01

    Technological advances in medical sensors, low-power microelectronics and miniaturization, and wireless communications and networks have enabled the appearance of a new generation of wireless sensor networks: the so-called wireless body area networks (WBAN). These networks can be used for continuous monitoring of vital parameters, movement, and the surrounding environment. The data gathered by these networks contribute to improving users' quality of life and allow the creation of a knowledge database, using learning techniques, useful for inferring abnormal behaviour. In this paper we present a wireless body area network architecture to recognize human movement, identify human postures, and detect harmful activities in order to prevent risk situations. The WBAN was created using tiny, cheap, low-power nodes with inertial and physiological sensors strategically placed on the human body in as unobtrusive a way as possible, ensuring that its impact on the user's daily actions is minimal. The information collected by these sensors is transmitted to a central server capable of analysing and processing the data. The proposed system creates movement profiles based on the data sent by the WBAN's nodes, detects any abnormal movement in real time, and allows for monitored rehabilitation of the user.
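
    As a toy illustration of profile-based anomaly detection (not the authors' classifier), the sketch below summarizes normal acceleration magnitudes and flags samples that deviate strongly; the 3-sigma threshold and sample values are assumptions.

        # Flag acceleration samples that deviate from a learned profile.
        from statistics import mean, stdev

        def build_profile(training_magnitudes):
            """Summarize normal movement as mean/std of |acceleration|."""
            return mean(training_magnitudes), stdev(training_magnitudes)

        def is_abnormal(sample, profile, k=3.0):
            mu, sigma = profile
            return abs(sample - mu) > k * sigma

        normal = [9.7, 9.9, 10.1, 9.8, 10.3, 9.6, 10.0]   # m/s^2, near gravity
        profile = build_profile(normal)
        print(is_abnormal(10.2, profile))   # False: ordinary movement
        print(is_abnormal(25.0, profile))   # True: possible fall/harmful event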

  20. New data model with better functionality for VLab

    NASA Astrophysics Data System (ADS)

    da Silveira, P. R.; Wentzcovitch, R. M.; Karki, B. B.

    2009-12-01

    The VLab infrastructure and architecture were further developed to allow for several new features. First, workflows for first-principles calculations of thermodynamic properties and static elasticity, programmed in Java as Web Services, can now be executed by multiple users. Second, jobs generated by these workflows can now be executed in batch on multiple servers. A simple internal scheduler was implemented to handle hundreds of execution packages generated by multiple users and to avoid overloading the servers. Third, a new data model was implemented to guarantee the integrity of a project (workflow execution) in case of failure; failures can occur in an execution package or in a workflow phase. By recording all executed steps of a project, its execution can be resumed after dynamic alteration of parameters through the VLab Portal. Fourth, batch jobs can also be monitored through the portal. Better and faster interaction with servers is now achieved using Ajax technology. Finally, plots are now created on the VLab server using Gnuplot 4.2.2. Research supported by NSF grant ATM 0428774 (VLab). VLab is hosted by the Minnesota Supercomputing Institute.
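
    The internal scheduler can be pictured as a small queue that caps per-server load. The sketch below is a guess at the general pattern, with server names and slot counts invented for illustration.

        # Toy internal scheduler: queue execution packages, release at most
        # max_slots jobs per server, always filling the least-loaded server.
        from collections import deque

        class SimpleScheduler:
            def __init__(self, servers, max_slots=4):
                self.queue = deque()
                self.running = {s: 0 for s in servers}
                self.max_slots = max_slots

            def submit(self, job):
                self.queue.append(job)

            def dispatch(self):
                placed = []
                while self.queue:
                    server = min(self.running, key=self.running.get)
                    if self.running[server] >= self.max_slots:
                        break              # all servers saturated; jobs wait
                    self.running[server] += 1
                    placed.append((self.queue.popleft(), server))
                return placed

        sched = SimpleScheduler(["node01", "node02"])
        for i in range(10):
            sched.submit(f"package-{i}")
        print(sched.dispatch())   # 8 packages placed, 2 remain queued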

  1. Reducing Time to Science: Unidata and JupyterHub Technology Using the Jetstream Cloud

    NASA Astrophysics Data System (ADS)

    Chastang, J.; Signell, R. P.; Fischer, J. L.

    2017-12-01

    Cloud computing can accelerate scientific workflows, discovery, and collaborations by reducing research and data friction. We describe the deployment of Unidata and JupyterHub technologies on the NSF-funded XSEDE Jetstream cloud. With the aid of virtual machines and Docker technology, we deploy a Unidata JupyterHub server co-located with a Local Data Manager (LDM), THREDDS data server (TDS), and RAMADDA geoscience content management system. We provide Jupyter Notebooks and the pre-built Python environments needed to run them. The notebooks can be used for instruction and as templates for scientific experimentation and discovery. We also supply a large quantity of NCEP forecast model results to allow data-proximate analysis and visualization. In addition, users can transfer data using Globus command line tools, and perform their own data-proximate analysis and visualization with Notebook technology. These data can be shared with others via a dedicated TDS server for scientific distribution and collaboration. There are many benefits of this approach. Not only is the cloud computing environment fast, reliable and scalable, but scientists can analyze, visualize, and share data using only their web browser. No local specialized desktop software or a fast internet connection is required. This environment will enable scientists to spend less time managing their software and more time doing science.

  2. A user-friendly, dynamic web environment for remote data browsing and analysis of multiparametric geophysical data within the MULTIMO project

    NASA Astrophysics Data System (ADS)

    Carniel, Roberto; Di Cecca, Mauro; Jaquet, Olivier

    2006-05-01

    In the framework of the EU-funded project "Multi-disciplinary monitoring, modelling and forecasting of volcanic hazard" (MULTIMO), multiparametric data have been recorded at the MULTIMO station in Montserrat. Moreover, several other long time series, recorded at Montserrat and at other volcanoes, have been acquired in order to test stochastic and deterministic methodologies under development. Creating a general framework to handle data efficiently is a considerable task even for homogeneous data. In the case of heterogeneous data, this becomes a major issue. A need for a consistent way of browsing such a heterogeneous dataset in a user-friendly way therefore arose. Additionally, a framework for applying the calculation of the developed dynamical parameters on the data series was also needed in order to easily keep these parameters under control, e.g. for monitoring, research or forecasting purposes. The solution which we present is completely based on Open Source software, including the Linux operating system, the MySql database management system, the Apache web server, the Zope application server, the Scilab math engine, the Plone content management framework, and the Unified Modelling Language. From the user point of view, the main advantage is the possibility of browsing through datasets recorded on different volcanoes, with different instruments, with different sampling frequencies, stored in different formats, all via a consistent, user-friendly interface that transparently runs queries to the database, gets the data from the main storage units, generates the graphs, and produces dynamically generated web pages to interact with the user. The involvement of third parties to continue the development in the Open Source philosophy and/or extend the application fields is now sought.

  3. Information Collection using Handheld Devices in Unreliable Networking Environments

    DTIC Science & Technology

    2014-06-01

    different types of mobile devices that connect wirelessly to a database server. The actual backend database is not important to the mobile clients...Google's infrastructure and local servers with MySQL and PostgreSQL on the backend (ODK 2014b). (2) Google Fusion Tables are used to do basic link...how we conduct business. Our requirements to share information do not change simply because there is little or no existing infrastructure in our

  4. Web-based remote monitoring of infant incubators in the ICU.

    PubMed

    Shin, D I; Huh, S J; Lee, T S; Kim, I Y

    2003-09-01

    A web-based real-time operating, management, and monitoring system for checking temperature and humidity within infant incubators over the intranet has been developed and installed in the infant Intensive Care Unit (ICU). We have created a pilot system in which each incubator has a temperature and humidity sensor and a measuring module connected to a web-server board via an RS485 port. The system transmits signals using standard web-based TCP/IP so that users can access it from any Internet-connected personal computer in the hospital. The system gathers the temperature and humidity data transmitted from the measuring modules via the RS485 port on the web-server board and creates a web document containing these data. The system manager can maintain centralized supervisory monitoring of the situation in all incubators from a work space within the infant ICU equipped with a personal computer. The system can be set to watch for unusual circumstances and to emit an alarm signal, expressed as a sound or a light, on the measuring module connected to the affected incubator. If the system is configured with a large number of incubators connected to a centralized supervisory monitoring station, it will improve convenience and meaningfully improve the response to incidents that require intervention.
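
    A minimal sketch of such a supervisory loop is shown below in Python; read_module() stands in for the real RS485 query to a measuring module, and the alarm thresholds are illustrative assumptions.

        # Supervisory sweep: poll each incubator's module, raise alarms on
        # out-of-range readings. read_module() is a stand-in for RS485 I/O.
        import random, time

        LIMITS = {"temp_c": (34.0, 38.0), "humidity_pct": (40.0, 70.0)}

        def read_module(incubator_id):
            # Placeholder for an RS485 request/response to the module.
            return {"temp_c": random.uniform(33.0, 39.0),
                    "humidity_pct": random.uniform(35.0, 75.0)}

        def check(incubator_id):
            sample = read_module(incubator_id)
            for key, (lo, hi) in LIMITS.items():
                if not lo <= sample[key] <= hi:
                    print(f"ALARM incubator {incubator_id}: "
                          f"{key}={sample[key]:.1f}")
            return sample

        for cycle in range(3):           # one sweep per second, per incubator
            for incubator in range(1, 5):
                check(incubator)
            time.sleep(1)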

  5. Automatic Response to Intrusion

    DTIC Science & Technology

    2002-10-01

    Computing Corporation Sidewinder Firewall [18] SRI EMERALD Basic Security Module (BSM) and EMERALD File Transfer Protocol (FTP) Monitors...the same event TCP Wrappers [24] Internet Security Systems RealSecure [31] SRI EMERALD IDIP monitor NAI Labs Generic Software Wrappers Prototype...included EMERALD, NetRadar, NAI Labs UNIX wrappers, ARGuE, MPOG, NetRadar, CyberCop Server, Gauntlet, RealSecure, and the Cyber Command System

  6. Rotor Smoothing and Vibration Monitoring Results for the US Army VMEP

    DTIC Science & Technology

    2009-06-01

    individual component CI detection thresholds, and development of models for diagnostics, prognostics, and anomaly detection. Figure 16 VMEP Server...and prognostics are of current interest. Development of those systems requires large amounts of data (collection, monitoring, manipulation) to capture...development of automated systems and for continuous updating of algorithms to improve detection, classification, and prognostic performance. A test

  7. Exploring No-SQL alternatives for ALMA monitoring system

    NASA Astrophysics Data System (ADS)

    Shen, Tzu-Chiang; Soto, Ruben; Merino, Patricio; Peña, Leonel; Bartsch, Marcelo; Aguirre, Alvaro; Ibsen, Jorge

    2014-07-01

    The Atacama Large Millimeter/submillimeter Array (ALMA) will be a unique research instrument composed of at least 66 reconfigurable high-precision antennas, located at the Chajnantor plain in the Chilean Andes at an elevation of 5000 m. This paper describes the experience gained after several years of working with the monitoring system, which must collect and store up to 150K variables at sampling rates of up to 20.8 kHz. The original design was built on top of a cluster of relational database servers and network-attached storage with a Fibre Channel interface. As the number of monitoring points increased with the number of antennas included in the array, the current monitoring system proved able to handle the increased data rate in the collection and storage areas (holding only one month of data), but the data query interface showed serious performance degradation. A solution based on a NoSQL platform was explored as an alternative to the current long-term storage system. Among several alternatives, mongoDB was selected. In the data flow, intermediate cache servers based on Redis were introduced to allow faster streaming of the most recently acquired data to web-based charts and applications for online data analysis.
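
    The resulting dual write path can be sketched with the standard pymongo and redis-py clients: each sample is archived in MongoDB and pushed to a capped, expiring Redis list for fast streaming. Key names, database names, and the one-hour cache horizon are assumptions, not ALMA's actual configuration.

        # Dual write path sketch: long-term archive in MongoDB, recent
        # samples in a capped Redis list (assumes local servers running).
        import time
        import redis
        from pymongo import MongoClient

        cache = redis.Redis(host="localhost", port=6379)
        archive = MongoClient("mongodb://localhost:27017")["alma"]["monitor_points"]

        def store_sample(antenna, point, value):
            doc = {"antenna": antenna, "point": point,
                   "value": value, "ts": time.time()}
            archive.insert_one(doc)            # long-term store
            key = f"recent:{antenna}:{point}"
            cache.lpush(key, value)            # newest first
            cache.ltrim(key, 0, 9999)          # cap recent history
            cache.expire(key, 3600)            # drop stale series

        store_sample("DV01", "CRYOSTAT_TEMP", 4.2)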

  8. Remote health monitoring system for detecting cardiac disorders.

    PubMed

    Bansal, Ayush; Kumar, Sunil; Bajpai, Anurag; Tiwari, Vijay N; Nayak, Mithun; Venkatesan, Shankar; Narayanan, Rangavittal

    2015-12-01

    A remote health monitoring system with a clinical decision support system as a key component could potentially quicken the response of medical specialists to critical health emergencies experienced by their patients. A monitoring system specifically designed for cardiac care, with electrocardiogram (ECG) signal analysis as the core diagnostic technique, could play a vital role in early detection of a wide range of cardiac ailments, from a simple arrhythmia to life-threatening conditions such as myocardial infarction. The system that the authors have developed consists of three major components, namely: (a) a mobile gateway, deployed on the patient's mobile device, that receives 12-lead ECG signals from any ECG sensor; (b) a remote server component that hosts algorithms for accurate annotation and analysis of the ECG signal; and (c) the doctor's point-of-care device, which receives a diagnostic report from the server based on the analysis of the ECG signals. In the present study, the focus has been on developing a system capable of detecting critical cardiac events well in advance using an advanced remote monitoring system. A system of this kind is expected to have applications ranging from tracking wellness/fitness to detection of symptoms leading to fatal cardiac events.

  9. Micro-Environmental Signature of The Interactions between Druggable Target Protein, Dipeptidyl Peptidase-IV, and Anti-Diabetic Drugs.

    PubMed

    Chakraborty, Chiranjib; Mallick, Bidyut; Sharma, Ashish Ranjan; Sharma, Garima; Jagga, Supriya; Doss, C George Priya; Nam, Ju-Suk; Lee, Sang-Soo

    2017-01-01

    The druggability of a target protein depends on the interacting micro-environment between the target protein and drugs. A precise knowledge of this interacting micro-environment is therefore requisite for the drug discovery process. To understand such a micro-environment, we performed an in silico interaction analysis between a human target protein, Dipeptidyl Peptidase-IV (DPP-4), and three anti-diabetic drugs (saxagliptin, linagliptin, and vildagliptin). During the theoretical and bioinformatics analysis of micro-environmental properties, we performed a drug-likeness study, protein active-site predictions, docking analysis, and analysis of residual interactions at the protein-drug interface. Micro-environmental landscape properties were evaluated through various parameters such as binding energy, intermolecular energy, electrostatic energy, van der Waals + H-bond + desolvation energy (E_VHD), and ligand efficiency (LE) using different in silico methods. For this study, we used several servers and software packages, such as the Molsoft prediction server, the CASTp server, AutoDock software, and the LIGPLOT server. Through the micro-environmental study, the highest log P value was observed for linagliptin (1.07). The lowest binding energy with DPP-4 was also observed for linagliptin. We also identified the number of H-bonds and the residues involved in the hydrophobic interactions between DPP-4 and the anti-diabetic drugs: two H-bonds and nine residues for saxagliptin, two H-bonds and eleven residues for linagliptin, and four H-bonds and nine residues for vildagliptin. Our in silico data on drug-target interactions and the micro-environmental signature demonstrate that linagliptin is the most stably interacting drug among the tested anti-diabetic medicines.

  10. HARMONY: a server for the assessment of protein structures

    PubMed Central

    Pugalenthi, G.; Shameer, K.; Srinivasan, N.; Sowdhamini, R.

    2006-01-01

    Protein structure validation is an important step in computational modeling and structure determination. Stereochemical assessment of protein structures examines internal parameters, such as bond lengths and Ramachandran (φ,ψ) angles. Gross structure prediction methods, such as inverse folding procedures, and structure determination, especially at low resolution, can sometimes give rise to models that are incorrect due to assignment of misfolds or mistracing of electron density maps. Such errors are not reflected as strain in internal parameters. HARMONY is a procedure that examines the compatibility between the sequence and the structure of a protein by assigning scores to individual residues and their amino acid exchange patterns after considering their local environments. Local environments are described by the backbone conformation, solvent accessibility, and hydrogen bonding patterns. We are now providing HARMONY through a web server such that users can submit their protein structure files and, if required, the alignment of homologous sequences. Scores are mapped onto the structure for subsequent examination, which is also useful for recognizing regions of possible local errors in protein structures. The HARMONY server is available online. PMID:16844999

  11. A performance analysis of advanced I/O architectures for PC-based network file servers

    NASA Astrophysics Data System (ADS)

    Huynh, K. D.; Khoshgoftaar, T. M.

    1994-12-01

    In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of the same type, data capacity, and cost) operating independently, not in parallel as in a disk array.

  12. An Analysis of Database Replication Technologies with Regard to Deep Space Network Application Requirements

    NASA Technical Reports Server (NTRS)

    Connell, Andrea M.

    2011-01-01

    The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.

  13. Parallel sort with a ranged, partitioned key-value store in a high performance computing environment

    DOEpatents

    Bent, John M.; Faibish, Sorin; Grider, Gary; Torres, Aaron; Poole, Stephen W.

    2016-01-26

    Improved sorting techniques are provided that perform a parallel sort using a ranged, partitioned key-value store in a high performance computing (HPC) environment. A plurality of input data files comprising unsorted key-value data in a partitioned key-value store are sorted. The partitioned key-value store comprises a range server for each of a plurality of ranges. Each input data file has an associated reader thread. Each reader thread reads the unsorted key-value data in the corresponding input data file and performs a local sort of the unsorted key-value data to generate sorted key-value data. A plurality of sorted, ranged subsets of each of the sorted key-value data are generated based on the plurality of ranges. Each sorted, ranged subset corresponds to a given one of the ranges and is provided to one of the range servers corresponding to the range of the sorted, ranged subset. Each range server sorts the received sorted, ranged subsets and provides a sorted range. A plurality of the sorted ranges are concatenated to obtain a globally sorted result.
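
    The data flow of the claimed technique can be sketched in Python (keys, ranges, and the in-process "servers" below are invented for illustration): each reader locally sorts and partitions its file, each range server merges its sorted subsets, and concatenating the ranges yields the global order.

        # Ranged parallel sort sketch: local sort -> range partition ->
        # per-range merge -> concatenation gives a globally sorted result.
        import bisect
        from heapq import merge

        range_bounds = ["g", "n", "t"]        # 4 ranges: <g, g..n, n..t, >=t
        inputs = [
            [("zeta", 1), ("alpha", 2), ("mike", 3)],    # unsorted file 0
            [("kilo", 4), ("bravo", 5), ("tango", 6)],   # unsorted file 1
        ]

        def reader(records):
            """Local sort, then partition the sorted run by key range."""
            run = sorted(records)
            parts = [[] for _ in range(len(range_bounds) + 1)]
            for kv in run:
                parts[bisect.bisect_right(range_bounds, kv[0])].append(kv)
            return parts

        per_range = [[] for _ in range(len(range_bounds) + 1)]
        for records in inputs:                # one "reader thread" per file
            for i, part in enumerate(reader(records)):
                per_range[i].append(part)

        # Each "range server" merges its sorted subsets.
        result = [kv for subsets in per_range for kv in merge(*subsets)]
        print([k for k, _ in result])
        # ['alpha', 'bravo', 'kilo', 'mike', 'tango', 'zeta']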

  14. BIO-Plex Information System Concept

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)

    1999-01-01

    This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers, which perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate, and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed, and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.

  15. Generation of multiple analog pulses with different duty cycles within VME control system for ICRH Aditya system

    NASA Astrophysics Data System (ADS)

    Joshi, Ramesh; Singh, Manoj; Jadav, H. M.; Misra, Kishor; Kulkarni, S. V.; ICRH-RF Group

    2010-02-01

    Ion Cyclotron Resonance Heating (ICRH) is a promising heating method for a fusion device due to its localized power deposition profile, direct ion heating at high density, and established technology for high-power RF generation and transmission at low cost. The data acquisition and control system (DAC) for the steady-state RF ICRH system operates the RF generator on the Aditya tokamak, and its control software was originally based on a single digital pulse for the RF source. It is planned to integrate multiple analog pulses with different duty cycles, slaved to a master digital pulse, supporting pre-ionization startup as well as heating experiments that use RF power of different levels and durations. The idea is to use a single RF generator to energize the antenna inside the tokamak twice: the first analog pulse produces pre-ionization and the second produces heating. The task of the RF ICRH DAC is to control and acquire data for all ICRH system operations, including all control loops, and to support post-analysis of the data with a Java-based tool. The whole system is based on standard client-server technology using the TCP/IP protocol. The DAC software runs on the Linux operating system for highly reliable, secure, and stable fail-safe operation. The client is built with a Tcl/Tk-like toolkit for the user interface in a C/C++-like environment, a reliable and widely used combination for stand-alone system operation, while the server runs in a VxWorks-like real-time operating-system environment. The paper focuses on the data acquisition and monitoring software of the Aditya RF ICRH system, with analog pulses in slave mode and the digital pulse in master mode for control, acquisition, monitoring, and interlocking.

  16. Computerized procedures system

    DOEpatents

    Lipner, Melvin H.; Mundy, Roger A.; Franusich, Michael D.

    2010-10-12

    An online, data-driven computerized procedures system that guides an operator through a complex process facility's operating procedures. The system monitors plant data, processes the data, and then, based upon this processing, presents the status of the current procedure step and/or substep to the operator. The system supports multiple users, and a single procedure definition supports several interface formats that can be tailored to the individual user. Layered security controls access privileges, and revisions are version-controlled. The procedures run on a server that is platform-independent of the user workstations it interfaces with, and the user interface supports diverse procedural views.

  17. Exploiting geo-distributed clouds for an e-health monitoring system with minimum service delay and privacy preservation.

    PubMed

    Shen, Qinghua; Liang, Xiaohui; Shen, Xuemin; Lin, Xiaodong; Luo, Henry Y

    2014-03-01

    In this paper, we propose an e-health monitoring system with minimum service delay and privacy preservation that exploits geo-distributed clouds. In the system, the resource allocation scheme enables the distributed cloud servers to cooperatively assign servers to requesting users under a load-balancing condition, so the service delay for users is minimized. In addition, a traffic-shaping algorithm is proposed that converts the user health data traffic into nonhealth data traffic, largely reducing the capability of traffic analysis attacks. Through numerical analysis, we show the efficiency of the proposed traffic-shaping algorithm in terms of service delay and privacy preservation. Furthermore, through simulations, we demonstrate that the proposed resource allocation scheme significantly reduces the service delay compared to two other alternatives that jointly use the shortest-queue and distributed control laws.

  18. Recommending personally interested contents by text mining, filtering, and interfaces

    DOEpatents

    Xu, Songhua

    2015-10-27

    A personalized content recommendation system includes a client interface device configured to monitor a user's information data stream. A collaborative filter remote from the client interface device generates automated predictions about the interests of the user. A database server stores personal behavioral profiles and users' preferences based on a plurality of monitored past behaviors and on the output of the collaborative user personal interest inference engine. A programmed personal content recommendation server filters items in an incoming information stream against the personal behavioral profile and identifies only those items of the incoming information stream that substantially match the profile. The identified personally relevant content is then recommended to the user according to a priority that may consider the similarity of the personal-interest matches and the context of the user's information consumption behavior, as reflected by his or her content consumption mode.
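
    A toy sketch of the filtering step (not the patented engine) is shown below: items are scored by cosine similarity against a profile of term weights, and only strong matches are recommended. The profile contents and the 0.5 cutoff are assumptions.

        # Score incoming items against a behavioral profile of term weights.
        import math

        profile = {"sensor": 0.9, "network": 0.7, "cooking": 0.1}

        def score(item_terms, profile):
            """Cosine similarity between an item's terms and the profile."""
            overlap = sum(profile.get(t, 0.0) for t in item_terms)
            norm = math.sqrt(len(item_terms)) * math.sqrt(
                sum(w * w for w in profile.values()))
            return overlap / norm if norm else 0.0

        stream = [
            ("WSN field deployment notes", {"sensor", "network"}),
            ("Weeknight cooking ideas", {"cooking"}),
        ]
        recommended = []
        for title, terms in stream:
            s = score(terms, profile)
            if s > 0.5:                     # keep only strong matches
                recommended.append((title, round(s, 2)))
        print(recommended)                  # only the WSN item survives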

  19. A study of smart card for radiation exposure history of patient.

    PubMed

    Rehani, Madan M; Kushi, Joseph F

    2013-04-01

    The purpose of this article is to describe the development of a prototype smart card that, when swiped in a system with access to the radiation exposure monitoring server, locates the patient's radiation exposure history from that institution or the set of associated institutions to which it has database access. Like an ATM or credit card, the card acts as a secure, unique "token" rather than holding cash, credit, or dose data on the card itself. The system provides the requested radiation history report, which can then be printed or sent by e-mail to the patient. The prototype system is capable of extending its reach wherever the radiation exposure monitoring server extends, at the county, state, or national level. It is anticipated that the prototype will pave the way for quick availability of patient exposure history for use in clinical practice, strengthening the radiation protection of patients.

  20. [A study of the transport of three dimensional medical images to remote institutions for telediagnosis].

    PubMed

    Hayashi, Takashi; Iwai, Mitsuhiro; Takahashi, Katsuhiko; Takeda, Satoshi; Tateishi, Toshiki; Kaneko, Rumi; Ogasawara, Yoko; Yonezawa, Kazuya; Hanada, Akiko

    2011-01-01

    Using a server with 3D-image-creation functions and network services over IP-VPN, we began delivering 3D images to a remote institution. A display trial of the primary images, a rotation trial of a 3D image, and a reproducibility trial were conducted in order to examine the practicality of using the system on a real network between Hakodate and Sapporo (a communication distance of about 150 km). In these trials, basic data (time and received data volume) were measured for every variation of QF (quality factor) and monitor resolution. Analyzing the results obtained with our hospital's 3D image delivery server under varying QF settings and monitor resolutions, we concluded that this system is practical for remote radiogram interpretation, even when the regional access point has a line speed of 6 Mbps.

  1. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography.

    PubMed

    Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young

    2016-01-01

    Biometric-based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and their associated constraints. Lu et al. recently proposed a robust biometric-based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation in this paper proves that Lu et al.'s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks, and clock synchronization problems. In addition, this paper proposes an enhanced biometric-based authentication with key-agreement protocol for multi-server architectures based on elliptic curve cryptography using smart cards. We prove that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and the performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols.

  2. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as a HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  3. Advanced Engineering Environment FY09/10 pilot project.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamph, Jane Ann; Kiba, Grant W.; Pomplun, Alan R.

    2010-06-01

    The Advanced Engineering Environment (AEE) project identifies emerging engineering environment tools and assesses their value to Sandia National Laboratories and our partners in the Nuclear Security Enterprise (NSE) by testing them in our design environment. This project accomplished several pilot activities, including: the preliminary definition of an engineering bill of materials (BOM) based product structure in the Windchill PDMLink 9.0 application; an evaluation of the Mentor Graphics Data Management System (DMS) application for electrical computer-aided design (ECAD) library administration; and implementation and documentation of a Windchill 9.1 application upgrade. The project also supported the migration of legacy data from existing corporate product lifecycle management systems into new classified and unclassified Windchill PDMLink 9.0 systems. The project included two infrastructure modernization efforts: the replacement of two aging AEE development servers with reliable platforms for ongoing AEE project work; and the replacement of four critical application and license servers that support design and engineering work at the Sandia National Laboratories/California site.

  4. A Study on Partnering Mechanism in B to B EC Server for Global Supply Chain Management

    NASA Astrophysics Data System (ADS)

    Kaihara, Toshiya

    B to B Electronic Commerce (EC) technology is now advancing and is regarded as an information infrastructure for global business. As the number and diversity of EC participants grows in this agile environment, the complexity of purchasing from a vast and dynamic array of goods and services needs to be hidden from the end user. Putting the complexity into the EC system instead means providing a flexible auction server that enables commerce among different business units. A market mechanism can solve the product distribution problem in the auction server by allocating the scheduled resources according to market prices. In this paper, we propose a partnering mechanism for B to B EC with market-oriented programming that mediates among diverse, unspecified companies in a trade, and we demonstrate the applicability of economic analysis to this framework after constructing a primitive EC server. The proposed mechanism facilitates sophisticated B to B EC, which yields a Pareto-optimal solution for all the participating business units in the coming agile era.
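
    A toy sketch of market-oriented allocation is shown below: a bisection search on a unit price until declared demand meets supply. The demand and supply curves are invented for illustration and are not Kaihara's actual mechanism.

        # Tatonnement-style price search: adjust a unit price until excess
        # demand is (numerically) zero; the curves below are assumptions.
        def demand(price):          # buyers order less as the price rises
            return max(0.0, 100.0 - 2.0 * price)

        def supply(price):          # sellers offer more as the price rises
            return 3.0 * price

        def clear_market(lo=0.0, hi=100.0, tol=1e-6):
            while hi - lo > tol:
                mid = (lo + hi) / 2.0
                if demand(mid) > supply(mid):
                    lo = mid        # excess demand: price must rise
                else:
                    hi = mid
            return (lo + hi) / 2.0

        price = clear_market()
        print(round(price, 2), round(demand(price), 1))  # 20.0, 60.0 traded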

  5. Monitoring meteorological spatial variability in viticulture using a low-cost Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Matese, Alessandro; Crisci, Alfonso; Di Gennaro, Filippo; Primicerio, Jacopo; Tomasi, Diego; Guidoni, Silvia

    2014-05-01

    In a long-term perspective, the current global agricultural scenario will be characterized by critical issues in water resource management and environmental protection. The concept of sustainable agriculture becomes crucial for reducing waste and matching the use of pesticides and fertilizers to crops' real needs. This can be achieved through fine-scale monitoring of the crop's physiological status and of the environmental parameters that characterize the microclimate. Viticulture is often subject to high variability within the same vineyard, so it becomes important to monitor this heterogeneity to allow site-specific management and maximize the sustainability and quality of production. Meteorological variability expressed both at the vineyard scale (mesoclimate) and at the single-plant level (microclimate) plays an important role during the grape ripening process. The aim of this work was to compare temperature, humidity, and solar radiation measurements at different spatial scales. The measurements were taken over two seasons (2011, 2012) in two vineyards in the Veneto region (North-East Italy), planted with Pinot gris and Cabernet Sauvignon, using a specially designed and developed Wireless Sensor Network (WSN). The WSN consists of several levels: the Master/Gateway level coordinates the WSN and performs data aggregation; the Farm/Server level takes care of storing data on a server, data processing, and graphic rendering; and the node level is based on a network of peripheral nodes, each consisting of a sensor board equipped with sensors and a wireless module. The system was able to monitor the agrometeorological parameters in the vineyard: solar radiation, air temperature, and air humidity. Different sources of spatial variation were studied, from meso-scale to micro-scale. A widespread investigation was conducted, building a factorial design able to reveal the role played by each factor influencing the physical environment in the vineyard, such as the surrounding climate, canopy management, and relative position inside the vineyard. The results highlighted that the impact of variability in the agrometeorological parameters is predominantly determined by differences between within-field and external-field conditions. These results may provide support for crop production and disease model simulations, where data are usually taken from an agrometeorological station not representative of actual field conditions. Finally, the WSN performance, in terms of monitoring capability and system reliability, was evaluated considering handiness, cost-effectiveness, non-invasive dimensions, and low power consumption.

  6. Experimental Internet Environment Software Development

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.

    1998-01-01

    Geographically distributed project teams need an Internet based collaborative work environment or "Intranet." The Virtual Research Center (VRC) is an experimental Intranet server that combines several services such as desktop conferencing, file archives, on-line publishing, and security. Using the World Wide Web (WWW) as a shared space paradigm, the Graphical User Interface (GUI) presents users with images of a lunar colony. Each project has a wing of the colony and each wing has a conference room, library, laboratory, and mail station. In FY95, the VRC development team proved the feasibility of this shared space concept by building a prototype using a Netscape commerce server and several public domain programs. Successful demonstrations of the prototype resulted in approval for a second phase. Phase 2, documented by this report, will produce a seamlessly integrated environment by introducing new technologies such as Java and Adobe Web Links to replace less efficient interface software.

  7. Evaluation of an electrocardiogram on QR code.

    PubMed

    Nakayama, Masaharu; Shimokawa, Hiroaki

    2013-01-01

    An electrocardiogram (ECG) is an indispensable tool to diagnose cardiac diseases, such as ischemic heart disease, myocarditis, arrhythmia, and cardiomyopathy. Since ECG patterns vary depending on patient status, the ECG is also used to monitor patients during treatment, and comparison with previous ECGs is important for accurate diagnosis. However, such comparison requires a connection to a hospital's ECG data server, and data connectivity among hospitals is limited. To improve the portability and availability of ECG data regardless of server connection, we here introduce the conversion of ECG data into 2D barcodes as text data and the decoding of the QR code to draw the ECG with the Google Chart API. Fourteen cardiologists and six general physicians evaluated the system using an iPhone and an iPad. Overall, they were satisfied with the system in terms of usability and the accuracy of the decoded ECG compared with the original. This new coding system may be useful for utilizing ECG data irrespective of server connections.
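
    The encoding step can be sketched with the third-party Python package qrcode (pip install qrcode[pil]); the text layout of the samples below is an assumption, not the authors' exact format.

        # Encode ECG samples as text inside a QR code image.
        import qrcode

        samples_mv = [0.02, 0.05, 1.10, 0.40, -0.15, 0.00]   # toy beat values
        payload = "ECG;lead=II;fs=250;" + ",".join(f"{v:.2f}" for v in samples_mv)

        img = qrcode.make(payload)     # returns a PIL image of the 2D barcode
        img.save("ecg_qr.png")
        # A reader app decodes the text, splits on ',', and redraws the
        # waveform (e.g. with a charting API) with no server connection.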

  8. New web technologies for astronomy

    NASA Astrophysics Data System (ADS)

    Sprimont, P.-G.; Ricci, D.; Nicastro, L.

    2014-12-01

    Thanks to the new HTML5 capabilities and the huge improvements in the JavaScript language, it is now possible to design very complex and interactive web user interfaces. On top of that, the once monolithic and file-server-oriented web servers are evolving into easily programmable server applications capable of coping with the complex interactions made possible by the new generation of browsers. We believe that the whole community of amateur and professional astronomers can benefit from the potential of these new technologies. New web interfaces can be designed to provide the user with a wealth of much more intuitive and interactive tools. Accessing astronomical data archives; scheduling, controlling, and monitoring observatories, in particular robotic telescopes; and supervising data reduction pipelines are all capabilities that can now be implemented in a JavaScript web application. In this paper we describe the Sadira package, which we are implementing to exactly this aim.

  9. Feasibility of interactive biking exercise system for telemanagement in elderly.

    PubMed

    Finkelstein, Joseph; Jeong, In Cheol

    2013-01-01

    Inexpensive cycling equipment is widely available for home exercise; however, its use is hampered by the lack of tools supporting real-time monitoring of cycling exercise in the elderly and coordination with a clinical care team. To address these barriers, we developed a low-cost mobile system aimed at facilitating safe and effective home-based cycling exercise. The system used a miniature wireless 3-axis accelerometer that transmitted the cycling acceleration data to a tablet PC integrated with a multi-component disease management system. An exercise dashboard was presented to the patient, allowing real-time graphical visualization of exercise progress. The system was programmed to alert patients when exercise intensity exceeded the levels recommended by the patient's care providers and to exchange information with a central server. The feasibility of the system was assessed by testing the accuracy of cycling speed monitoring and the reliability of alerts generated by the system. Our results demonstrated high validity of the system for both upper- and lower-extremity exercise monitoring, as well as reliable data transmission between the home unit and the central server.

  10. IPG Job Manager v2.0 Design Documentation

    NASA Technical Reports Server (NTRS)

    Hu, Chaumin

    2003-01-01

    This viewgraph presentation provides a high-level design of the IPG Job Manager, and satisfies its Master Requirement Specification v2.0 Revision 1.0, 01/29/2003. The presentation includes a Software Architecture/Functional Overview with the following: Job Model; Job Manager Client/Server Architecture; Job Manager Client (Job Manager Client Class Diagram and Job Manager Client Activity Diagram); Job Manager Server (Job Manager Client Class Diagram and Job Manager Client Activity Diagram); Development Environment; Project Plan; Requirement Traceability.

  11. KoBaMIN: a knowledge-based minimization web server for protein structure refinement.

    PubMed

    Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav

    2012-07-01

    The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin.

  12. Design of SIP transformation server for efficient media negotiation

    NASA Astrophysics Data System (ADS)

    Pack, Sangheon; Paik, Eun Kyoung; Choi, Yanghee

    2001-07-01

    Voice over IP (VoIP) is one of the advanced services supported by next-generation mobile communication. VoIP should support the various media formats and terminals that exist together. This heterogeneous environment may prevent diverse users from establishing VoIP sessions among themselves, so an efficient media negotiation mechanism is required. In this paper, we propose an efficient media negotiation architecture using a transformation server and an Intelligent Location Server (ILS). The transformation server is an extended Session Initiation Protocol (SIP) proxy server: it can modify an unacceptable session INVITE message into an acceptable one using the ILS. The ILS is a directory server based on the Lightweight Directory Access Protocol (LDAP) that keeps users' location information and available media information. The proposed architecture eliminates the unnecessary response and re-INVITE messages of the standard SIP architecture. It takes only 1.5 round-trip times to negotiate two different media types, while the standard media negotiation mechanism takes 2.5 round-trip times, and the extra processing time in message handling is negligible in comparison to the reduced round-trip time. The experimental results show that the session setup time in the proposed architecture is less than the setup time in standard SIP. These results verify that the proposed media negotiation mechanism is more efficient in solving diversity problems.
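
    The transformation step can be pictured as a capability intersection against the directory. The Python sketch below invents the ILS contents and message shape for illustration; it is not the paper's implementation.

        # Rewrite an offered media list so only mutually supported codecs
        # survive, using an ILS-like directory of callee capabilities.
        ILS = {  # hypothetical directory: user -> supported audio codecs
            "alice@example.com": ["PCMU", "G729"],
            "bob@example.com":   ["G729", "AMR"],
        }

        def transform_invite(caller, callee, offered_codecs):
            acceptable = [c for c in offered_codecs if c in ILS.get(callee, [])]
            if not acceptable:
                raise ValueError("488 Not Acceptable Here: no common media")
            return {"from": caller, "to": callee, "media": acceptable}

        invite = transform_invite("alice@example.com", "bob@example.com",
                                  offered_codecs=["PCMU", "G729"])
        print(invite)   # one modified INVITE; no extra negotiation round trip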

  13. DICOM-compliant PACS with CD-based image archival

    NASA Astrophysics Data System (ADS)

    Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.

    1998-07-01

    This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers, allowing locally stored image data to be viewed directly in a WWW browser without transferring the data to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house. The system has enabled filmless operation in CT, MR and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.
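
    For a flavour of the DICOM query/retrieve services such short-term storage servers expose, here is a minimal C-FIND sketch using the pynetdicom library; the host, port, AE title and patient identifiers are hypothetical.

      from pydicom.dataset import Dataset
      from pynetdicom import AE
      from pynetdicom.sop_class import PatientRootQueryRetrieveInformationModelFind

      ae = AE(ae_title="WORKSTATION")
      ae.add_requested_context(PatientRootQueryRetrieveInformationModelFind)

      query = Dataset()
      query.QueryRetrieveLevel = "STUDY"
      query.PatientName = "DOE^JOHN"    # hypothetical patient
      query.StudyInstanceUID = ""       # empty: ask the server to return it

      assoc = ae.associate("pacs.example.org", 104)  # hypothetical short-term store
      if assoc.is_established:
          for status, identifier in assoc.send_c_find(
                  query, PatientRootQueryRetrieveInformationModelFind):
              if status and status.Status in (0xFF00, 0xFF01):  # pending = match
                  print(identifier.StudyInstanceUID)
          assoc.release()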

  14. A Generic System-Level Framework for Self-Serve Health Monitoring System through Internet of Things (IoT).

    PubMed

    Ahmed, Mobyen Uddin; Björkman, Mats; Lindén, Maria

    2015-01-01

    This paper presents a generic system-level framework for a self-served health monitoring system through the Internet of Things (IoT) to facilitate efficient sensor data management: sensor data travel from sensors to a remote server, are analyzed remotely in a distributed manner, and the health status of a user is presented in real time.

  15. Development of Geomagnetic Monitoring System Using a Magnetometer for the Field

    NASA Astrophysics Data System (ADS)

    Lee, Young-Cheol; Kim, Sung-Wook; Choi, Eun-Kyeong; Kim, In-Soo

    2014-05-01

    Three institutes, KMA (Korea Meteorological Administration), KSWC (Korean Space Weather Center) of NRRA (National Radio Research Agency) and KIGAM (Korea Institute of Geoscience and Mineral Resources), currently operate magnetic observatories in Korea. Those observatories observe the total intensity and three components of the geomagnetic field. This paper presents a magnetic monitoring system, now under development, that uses a magnetometer designed for field survey. When monitoring magnetic variations in areas such as active faults or volcanic regions, more reliable results can be obtained with an array of several magnetometers than with a single magnetometer. In establishing and operating a magnetometer array, factors such as cost and the convenience of setting up and running the array should be taken into account. This study has therefore developed a magnetic monitoring system built around a field-survey magnetometer of our own design. The monitoring system is composed of two parts: a field part and a data part. The field part consists of a magnetometer, an external memory module, a power supply and a set of data transmission equipment. The data part is a data server which stores the data transmitted from the field part, analyzes the data and serves it to the web. This study has developed an external memory module for the ENVI-MAG (Scintrex Ltd.) using an embedded Cortex-M3 board, which can be programmed and extended with other functional devices (SD memory cards, GPS antennas for time synchronization, Ethernet cards and so forth). The board thus developed can store up to 8 GB of magnetic measurements, synchronize with GPS time and transmit the measurements to the data server, which is still under development. The monitoring system was installed on Jeju Island and takes measurements covering Korea. The remaining parts, including a data transfer module, a server and a solar power supply, will be developed in future work. Acknowledgments: This work was funded by the Korea Meteorological Administration Research and Development Program under Grant CATER 2006-5074.

  16. An Intelligent System for Document Retrieval in Distributed Office Environments.

    ERIC Educational Resources Information Center

    Mukhopadhyay, Uttam; And Others

    1986-01-01

    MINDS (Multiple Intelligent Node Document Servers) is a distributed system of knowledge-based query engines for efficiently retrieving multimedia documents in an office environment of distributed workstations. By learning document distribution patterns and user interests and preferences during system usage, it customizes document retrievals for…

  17. [The Key Technology Study on Cloud Computing Platform for ECG Monitoring Based on Regional Internet of Things].

    PubMed

    Yang, Shu; Qiu, Yuyan; Shi, Bo

    2016-09-01

    This paper explores methods of building a regional internet of things for ECG monitoring, focusing on the implementation of an ECG monitoring center based on a cloud computing platform. It analyzes the implementation principles of automatic identification of arrhythmia types. It also studies the system architecture and key techniques of the cloud computing platform, including server load balancing, reliable storage of massive numbers of small files, and the implementation of a quick search function.
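
    One widely used way to spread massive numbers of small files reliably across storage servers is consistent hashing; the following Python sketch illustrates the idea with hypothetical server names (the paper does not state which load-balancing scheme its platform uses).

      import bisect
      import hashlib

      class ConsistentHashRing:
          """Map ECG record keys to storage servers; adding a server
          only remaps a small fraction of the existing keys."""
          def __init__(self, servers, replicas=100):
              self._ring = []                    # sorted list of (hash, server)
              for s in servers:
                  for i in range(replicas):      # virtual nodes smooth the load
                      h = int(hashlib.md5(f"{s}#{i}".encode()).hexdigest(), 16)
                      self._ring.append((h, s))
              self._ring.sort()

          def server_for(self, key: str) -> str:
              h = int(hashlib.md5(key.encode()).hexdigest(), 16)
              i = bisect.bisect(self._ring, (h, "")) % len(self._ring)
              return self._ring[i][1]

      ring = ConsistentHashRing(["ecg-store-1", "ecg-store-2", "ecg-store-3"])
      print(ring.server_for("patient42/2016-09-01T10:00.ecg"))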

  18. [Implementation of ECG Monitoring System Based on Internet of Things].

    PubMed

    Lu, Liangliang; Chen, Minya

    2015-11-01

    In order to expand the capabilities of a hospital's traditional ECG devices and enhance medical staff's work efficiency, an ECG monitoring system based on the internet of things is introduced. The system monitors ECG signals in real time and analyzes the data using ECG sensors, PDAs and Web servers, combining embedded C, Android, .NET, wireless networking and other technologies. Experiments show that the system has high reliability and stability and brings convenience to medical staff.

  19. Report: Information Security Series: Security Practices Safe Drinking Water Information System

    EPA Pesticide Factsheets

    Report #2006-P-00021, March 30, 2006. We found that the Office of Water (OW) substantially complied with many of the information security controls reviewed and had implemented practices to ensure production servers are monitored.

  20. EuCliD (European Clinical Database): a database comparing different realities.

    PubMed

    Marcelli, D; Kirchgessner, J; Amato, C; Steil, H; Mitteregger, A; Moscardò, V; Carioni, C; Orlandini, G; Gatti, E

    2001-01-01

    Quality and variability of dialysis practice are gaining more and more importance. Fresenius Medical Care (FMC), as a provider of dialysis, has the duty to continuously monitor and guarantee the quality of care delivered to patients treated in its European dialysis units. Accordingly, a new clinical database called EuCliD has been developed. It is a multilingual and fully codified database, using international standard coding tables as far as possible. EuCliD collects and handles sensitive medical patient data, fully assuring confidentiality. The infrastructure: a Domino server is installed in each country connected to EuCliD. All the centres belonging to a country are connected via modem to the country server. All the Domino servers are connected via a Wide Area Network to the headquarters server in Bad Homburg (Germany). Inside each country server, only anonymous data related to that particular country are available. The only place where all the anonymous data are available is the headquarters server. The data collection is strongly supported in each country by "key persons" with solid relationships to their respective national dialysis units. The quality of the data in EuCliD is ensured at different levels. At the end of January 2001, more than 11,000 patients treated in 135 centres located in 7 countries were already included in the system. FMC has put patient care at the centre of its activities for many years and is now able to provide transparency to the community (authorities, nephrologists, patients and others), thus demonstrating the quality of the service.

  1. The PhEDEx next-gen website

    NASA Astrophysics Data System (ADS)

    Egeland, R.; Huang, C.-H.; Rossman, P.; Sundarrajan, P.; Wildish, T.

    2012-12-01

    PhEDEx is the data-transfer management solution written by CMS. It consists of agents running at each site, a website for presentation of information, and a web-based data-service for scripted access to information. The website allows users to monitor the progress of data-transfers, the status of site agents and links between sites, and the overall status and behaviour of everything about PhEDEx. It also allows users to make and approve requests for data-transfers and for deletion of data. It is the main point-of-entry for all users wishing to interact with PhEDEx. For several years, the website has consisted of a single perl program with about 10K SLOC. This program has limited capabilities for exploring the data, with only coarse filtering capabilities and no context-sensitive awareness. Graphical information is presented as static images, generated on the server, with no interactivity. It is also not well connected to the rest of the PhEDEx codebase, since much of it was written before the data-service was developed. All this makes it hard to maintain and extend. We are re-implementing the website to address these issues. The UI is being rewritten in Javascript, replacing most of the server-side code. We are using the YUI toolkit to provide advanced features and context-sensitive interaction, and will adopt a Javascript charting library for generating graphical representations client-side. This relieves the server of much of its load, and automatically improves server-side security. The Javascript components can be re-used in many ways, allowing custom pages to be developed for specific uses. In particular, standalone test-cases using small numbers of components make it easier to debug the Javascript than it is to debug a large server program. Information about PhEDEx is accessed through the PhEDEx data-service, since direct SQL is not available from the client's browser. This provides consistent semantics with other, externally written monitoring tools, which already use the data-service. It also reduces redundancy in the code, yielding a simpler, consolidated codebase. In this talk we describe our experience of refactoring this monolithic server-side program into a lighter client-side framework. We describe some of the techniques that worked well for us, and some of the mistakes we made along the way. We present the current state of the project, and its future direction.
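
    For scripted access of the kind the data-service provides, a hedged Python sketch follows; the URL pattern mirrors the publicly documented PhEDEx data-service conventions, but the instance and dataset name are assumptions (and the service has since been retired).

      import json
      import urllib.request

      # Hypothetical query: JSON-formatted block-replica data from the
      # PhEDEx data-service (URL pattern an assumption for illustration).
      url = ("https://cmsweb.cern.ch/phedex/datasvc/json/prod/"
             "blockreplicas?dataset=/Hypothetical/Dataset/RAW")
      with urllib.request.urlopen(url) as resp:
          payload = json.load(resp)
      print(payload["phedex"].keys())   # responses arrive in a "phedex" envelope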

  2. Realtime Gas Emission Monitoring at Hazardous Sites Using a Distributed Point-Source Sensing Infrastructure

    PubMed Central

    Manes, Gianfranco; Collodi, Giovanni; Gelpi, Leonardo; Fusco, Rosanna; Ricci, Giuseppe; Manes, Antonio; Passafiume, Marco

    2016-01-01

    This paper describes a distributed point-source monitoring platform for gas level and leakage detection in hazardous environments. The platform, based on a wireless sensor network (WSN) architecture, is organised into sub-networks to be positioned in the plant's critical areas; each sub-net includes a gateway unit wirelessly connected to the WSN nodes, hence providing an easily deployable, stand-alone infrastructure featuring a high degree of scalability and reconfigurability. Furthermore, the system provides automated calibration routines which can be accomplished by non-specialized maintenance operators without reducing system reliability. Internet connectivity is provided via TCP/IP over GPRS (Internet standard protocols over mobile networks) gateways at a one-minute sampling rate. Environmental and process data are forwarded to a remote server and made available to authenticated users through a user interface that provides data rendering in various formats and multi-sensor data fusion. The platform is able to provide real-time plant management with an effective, accurate tool for immediate warning in case of critical events. PMID:26805832

  3. Design of Accelerator Online Simulator Server Using Structured Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Guobao; Chu, Chungming

    2012-07-06

    Model based control plays an important role for a modern accelerator during beam commissioning, beam study, and even daily operation. With a realistic model, beam behaviour can be predicted and therefore effectively controlled. The approach used by most current high level application environments is to use a built-in simulation engine and feed a realistic model into that simulation engine. Instead of this traditional monolithic structure, a new approach using a client-server architecture is under development. An on-line simulator server is accessed via network accessible structured data. With this approach, a user can easily access multiple simulation codes. This paper describes the design, implementation, and current status of PVData, which defines the structured data, and PVAccess, which provides network access to the structured data.

  4. The Orthanc Ecosystem for Medical Imaging.

    PubMed

    Jodogne, Sébastien

    2018-05-03

    This paper reviews the components of Orthanc, a free and open-source, highly versatile ecosystem for medical imaging. At the core of the Orthanc ecosystem, the Orthanc server is a lightweight vendor-neutral archive that provides PACS managers with a powerful environment to automate and optimize the imaging flows that are very specific to each hospital. The Orthanc server can be extended with plugins that provide solutions for teleradiology, digital pathology, or enterprise-ready databases. It is shown how software developers and research engineers can easily develop external software or Web portals dealing with medical images, with minimal knowledge of the DICOM standard, thanks to the advanced programming interface of the Orthanc server. The paper concludes by introducing the Stone of Orthanc, an innovative toolkit for the cross-platform rendering of medical images.
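
    As a taste of that programming interface, the short Python sketch below lists the studies stored in an Orthanc server through its REST API; the server location and credentials are the common defaults and should be treated as assumptions.

      import requests

      # Sketch against a local Orthanc server (default port 8042);
      # credentials and stored studies are assumptions for illustration.
      base = "http://localhost:8042"
      auth = ("orthanc", "orthanc")

      studies = requests.get(f"{base}/studies", auth=auth).json()
      for study_id in studies:
          info = requests.get(f"{base}/studies/{study_id}", auth=auth).json()
          print(study_id, info["MainDicomTags"].get("StudyDescription", ""))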

  5. Grid-based International Network for Flu observation (g-INFO).

    PubMed

    Doan, Trung-Tung; Bernard, Aurélien; Da-Costa, Ana Lucia; Bloch, Vincent; Le, Thanh-Hoa; Legre, Yannick; Maigne, Lydia; Salzemann, Jean; Sarramia, David; Nguyen, Hong-Quang; Breton, Vincent

    2010-01-01

    The 2009 H1N1 outbreak has demonstrated that continuing vigilance, planning, and strong public health research capability are essential defenses against emerging health threats. Molecular epidemiology of influenza virus strains provides scientists with clues about the temporal and geographic evolution of the virus. In the present paper, researchers from France and Vietnam propose a global surveillance network based on grid technology: the goal is to federate influenza data servers and automatically deploy molecular epidemiology studies. A first prototype based on AMGA and the WISDOM Production Environment extracts influenza H1N1 sequence data daily from NCBI; the data are processed through a phylogenetic analysis pipeline deployed on the EGEE and AuverGrid e-infrastructures. The analysis results are displayed on a web portal (http://g-info.healthgrid.org) for epidemiologists to monitor H1N1 pandemics.

  6. The MSG Central Facility - A Mission Control System for Windows NT

    NASA Astrophysics Data System (ADS)

    Thompson, R.

    The MSG Central Facility, being developed by Science Systems for EUMETSAT, represents the first of a new generation of satellite mission control systems based on the Windows NT operating system. The system makes use of a range of new technologies to provide an integrated environment for the planning, scheduling, control and monitoring of the entire Meteosat Second Generation mission. It supports packetised telemetry and telecommand (TM/TC) and uses Science Systems' Space UNiT product to provide automated operations support at both Schedule (Timeline) and Procedure levels. Flexible access to historical data is provided through an operations archive based on ORACLE Enterprise Server, hosted on a large RAID array and off-line tape jukebox. Event driven real-time data distribution is based on the CORBA standard. Operations preparation and configuration control tools form a fully integrated element of the system.

  7. Spatial decision supporting for winter wheat irrigation and fertilizer optimizing in North China Plain

    NASA Astrophysics Data System (ADS)

    Yang, Xiaodong; Yang, Hao; Dong, Yansheng; Yu, Haiyang

    2014-11-01

    Production management of winter wheat is more complicated than that of other crops, since its growth period spans all four seasons and its growth environment is complex, with freeze injury, drought, and insect and disease damage, among other stresses. In traditional irrigation and fertilizer management, agricultural technicians or farmers make decisions mainly on the basis of phenology and planting experience. For example, experience says that wheat needs more nitrogen fertilizer at the jointing and booting stages, so when the wheat reaches these two growth stages the farmer fertilizes whether the crop needs it or not. We developed a WebGIS-based spatial decision support system for optimizing irrigation and fertilizer measures, which monitors winter wheat growth and soil moisture content by combining a crop model, remote sensing data and wireless sensor data, and then derives a professional management schedule from an expert knowledge warehouse. The system is developed with ArcIMS and IDL on the server side and jQuery, the Google Maps API and ASP.NET on the client side. All computing tasks run on the server side; for example, 11 standard vegetation indices (NDVI, NDWI, NDWI2, NRI, NSI, WI, G_SWIR, G_SWIR2, SPSI, TVDI, VSWI) and custom indices are computed from remote sensing images with IDL, while map configuration files are built and thematic maps generated in real time by ArcIMS.
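
    As a small illustration of the server-side index computation, the following Python/NumPy sketch evaluates NDVI, the first vegetation index in the list above, on toy reflectance arrays standing in for the red and near-infrared bands of the imagery.

      import numpy as np

      def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
          """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
          nir = nir.astype("float64")
          red = red.astype("float64")
          return (nir - red) / np.maximum(nir + red, 1e-9)  # guard zero division

      # Toy 2x2 reflectance bands standing in for a remote-sensing scene.
      nir = np.array([[0.60, 0.50], [0.40, 0.30]])
      red = np.array([[0.10, 0.20], [0.20, 0.25]])
      print(ndvi(nir, red))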

  8. Validating metal binding sites in macromolecule structures using the CheckMyMetal web server

    PubMed Central

    Zheng, Heping; Chordia, Mahendra D.; Cooper, David R.; Chruszcz, Maksymilian; Müller, Peter; Sheldrick, George M.

    2015-01-01

    Metals play vital roles in both the mechanism and architecture of biological macromolecules. Yet structures of metal-containing macromolecules where metals are misidentified and/or suboptimally modeled are abundant in the Protein Data Bank (PDB). This shows the need for a diagnostic tool to identify and correct such modeling problems with metal binding environments. The "CheckMyMetal" (CMM) web server (http://csgid.org/csgid/metal_sites/) is a sophisticated, user-friendly web-based method to evaluate metal binding sites in macromolecular structures with respect to 7,350 metal binding sites observed in a benchmark dataset of 2,304 high resolution crystal structures. The protocol outlines how the CMM server can be used to detect geometric and other irregularities in the structures of metal binding sites and alert researchers to potential errors in metal assignment. The protocol also gives practical guidelines for correcting problematic sites by modifying the metal binding environment and/or redefining metal identity in the PDB file. Several examples where this has led to meaningful results are described in the anticipated results section. CMM was designed for a broad audience (biomedical researchers studying metal-containing proteins and nucleic acids) but is equally well suited for structural biologists to validate new structures during modeling or refinement. The CMM server takes the coordinates of a metal-containing macromolecule structure in the PDB format as input and responds within a few seconds for a typical protein structure modeled with a few hundred amino acids. PMID:24356774
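
    In the same spirit as CMM's geometric checks, the toy Python sketch below flags metal-ligand distances that fall outside a plausible window; the distance ranges are rough illustrative assumptions, not CMM's actual validation criteria.

      import numpy as np

      # Illustrative only: flag metal-ligand bonds outside a plausible range.
      # The windows below are rough assumptions, not CMM's criteria.
      EXPECTED = {("ZN", "N"): (1.9, 2.3), ("ZN", "O"): (1.8, 2.3), ("ZN", "S"): (2.2, 2.5)}

      def check_site(metal, metal_xyz, ligands):
          metal_xyz = np.asarray(metal_xyz)
          for element, xyz in ligands:
              d = float(np.linalg.norm(metal_xyz - np.asarray(xyz)))
              lo, hi = EXPECTED.get((metal, element), (1.8, 2.6))
              flag = "ok" if lo <= d <= hi else "suspicious"
              print(f"{metal}-{element}: {d:.2f} A ({flag})")

      check_site("ZN", (0.0, 0.0, 0.0),
                 [("S", (2.3, 0.0, 0.0)), ("N", (0.0, 2.05, 0.0)), ("O", (0.0, 0.0, 3.1))])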

  9. Designing a Virtual-Reality-Based, Gamelike Math Learning Environment

    ERIC Educational Resources Information Center

    Xu, Xinhao; Ke, Fengfeng

    2016-01-01

    This exploratory study examined the design issues related to a virtual-reality-based, gamelike learning environment (VRGLE) developed via OpenSimulator, an open-source virtual reality server. The researchers collected qualitative data to examine the VRGLE's usability, playability, and content integration for math learning. They found it important…

  10. Real-Time Access to Altimetry and Operational Oceanography Products via OPeNDAP/LAS Technologies : the Example of Aviso, Mercator and Mersea Projects

    NASA Astrophysics Data System (ADS)

    Baudel, S.; Blanc, F.; Jolibois, T.; Rosmorduc, V.

    2004-12-01

    The Products and Services (P&S) department in the Space Oceanography Division at CLS is in charge of distributing and promoting altimetry and operational oceanography data. P&S is thus involved in the Aviso satellite altimetry project, in the Mercator ocean operational forecasting system, and in the European Godae/Mersea ocean portal. Aiming at standardisation and a common vision and management of all these ocean data, these projects led to the implementation of several OPeNDAP/LAS Internet servers. OPeNDAP allows the user to extract, via client software (such as IDL, Matlab or Ferret), only the data of interest, avoiding the download of complete data files. An OPeNDAP request can specify a geographic area, a time period, an oceanic variable, and an output format. LAS is an OPeNDAP data access web server whose special feature is unifying, in a single view, access to multiple types of data from distributed data sources: the LAS can issue requests to different remote OPeNDAP servers, which makes it possible to compare or compute statistics upon several different data types. Aviso is the CNES/CLS service which has distributed altimetry products since 1993. The Aviso LAS distributes several Ssalto/Duacs altimetry products such as delayed-time and near-real-time mean sea level anomaly, absolute dynamic topography, absolute geostrophic velocities, gridded significant wave height and gridded wind speed modulus. Mercator-Ocean is a French operational oceanography centre which distributes its products by several means, among them LAS/OPeNDAP servers, as part of the Mercator Mersea-strand1 contribution. 3D ocean descriptions (temperature, salinity, current and other oceanic variables) of the North Atlantic and Mediterranean are available in real time and updated weekly. The LAS's ability to make requests to several remote data centres with the same OPeNDAP configuration is particularly suited to the Mersea strand-1 problem. This European project (June 2003 to June 2004), sponsored by the European Commission, was the first integrated operational oceanography project of its kind. The objective was the assessment of several existing operational in situ and satellite monitoring and numerical forecasting systems, in preparation for the future elaboration (Mersea Integrated Project, 2004-2008) of an integrated system able to deliver, operationally, information products (physical, chemical, biological) to end-users in several domains related to environment, security and safety. Five forecasting ocean models with data assimilation, fed by operational in situ or satellite data centres, have been intercompared. The main difficulty of this LAS implementation lay in the definition of ocean model metrics and the adoption of a common file format, which required the model teams to produce the same datasets in the same formats (NetCDF, COARDS/CF convention). Note that this was a pioneering approach and that it has been adopted by Godae standards (see F. Blanc's paper in this session). Building on these web technologies and moving toward more user-oriented services, future work includes the implementation of a Map Server, an open-source GIS server which will communicate with the OPeNDAP server and be able to manipulate raster and vector multidisciplinary remote data simultaneously. The aim is to construct a complete web-based oceanic data distribution service, and the projects in which we are involved allow us to progress towards that goal.
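
    To make the subsetting idea concrete, here is a hedged Python sketch of an OPeNDAP request using the netCDF4 library (when built with DAP support); the endpoint URL, variable name and index ranges are hypothetical.

      from netCDF4 import Dataset  # netCDF4 built with OPeNDAP (DAP) support

      # Hypothetical endpoint and variable names, for illustration only.
      url = "http://opendap.example.org/thredds/dodsC/duacs/msla_merged.nc"
      ds = Dataset(url)
      # Only this slice travels over the network: one time step, a small box.
      sla = ds.variables["sla"][0, 100:120, 200:240]
      print(sla.shape)
      ds.close()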

  11. Kentucky geotechnical database.

    DOT National Transportation Integrated Search

    2005-03-01

    Development of a comprehensive dynamic, geotechnical database is described. Computer software selected to program the client/server application in windows environment, components and structure of the geotechnical database, and primary factors cons...

  12. Report: Information Security Series: Security Practices Comprehensive Environmental Response, Compensation, and Liability Information System

    EPA Pesticide Factsheets

    Report #2006-P-00019, March 28, 2006. OSWER implemented practices to ensure that production servers were being monitored for known vulnerabilities and that personnel with significant security responsibilities completed the Agency's recommended security training.

  13. Development of real-time voltage stability monitoring tool for power system transmission network using Synchrophasor data

    NASA Astrophysics Data System (ADS)

    Pulok, Md Kamrul Hasan

    Intelligent and effective monitoring of power system stability in control centers is one of the key issues in smart grid technology for preventing unwanted power system blackouts. Voltage stability analysis is one of the most important requirements for control center operation in the smart grid era. With the advent of Phasor Measurement Unit (PMU), or Synchrophasor, technology, real-time monitoring of power system voltage stability is now a reality. This work utilizes real-time PMU data to derive a voltage stability index that monitors voltage-stability-related contingency situations in power systems. The developed tool uses PMU data to calculate a voltage stability index whose numerical value indicates the relative closeness of instability. The IEEE 39-bus New England power system was modeled and run on a Real-Time Digital Simulator that streams PMU data over the Internet using the IEEE C37.118 protocol. A phasor data concentrator (PDC) is set up to receive the streaming PMU data and store it in a Microsoft SQL database server. The developed voltage stability monitoring (VSM) tool then retrieves the phasor measurements from the SQL server, performs state estimation of the whole network, calculates the voltage stability index, ranks the most vulnerable transmission lines, and finally shows all the results in a graphical user interface. All these actions are performed in near real time. Control centers can easily monitor the system's condition using this tool and take precautionary actions if needed.
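
    As a sketch of one classical PMU-based index (an impedance-matching criterion, not necessarily the index used in this work), the Python fragment below estimates the Thevenin equivalent seen from a load bus from two phasor snapshots and compares it with the load impedance; the ratio approaches 1 as the bus nears voltage collapse. The phasor values are hypothetical.

      # From V = E - Z_th * I at two snapshots: Z_th = (V2 - V1) / (I1 - I2).
      def thevenin_index(v1, i1, v2, i2):
          z_th = (v2 - v1) / (i1 - i2)
          z_load = v2 / i2
          return abs(z_th) / abs(z_load)  # < 1 stable, -> 1 approaching collapse

      # Hypothetical per-unit voltage/current phasors at a load bus.
      v1, i1 = complex(1.00, 0.00), complex(0.80, -0.20)
      v2, i2 = complex(0.97, -0.02), complex(0.90, -0.25)
      print(f"stability index: {thevenin_index(v1, i1, v2, i2):.3f}")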

  14. A clinical evaluation of a remote mobility monitoring system based on SMS messaging.

    PubMed

    Dalton, Anthony F; Ní Scanaill, Cliodhna; Carew, Sheila; Lyons, Declan; OLaighin, Gearóid

    2007-01-01

    The objective of this work was to evaluate the accuracy and viability of a mobility telemonitoring system, based on the short message service (SMS), to monitor the functional mobility of elderly subjects in an unsupervised environment. A clinical trial was conducted with six elderly subjects, 3 male and 3 female (mean age: 81.7 years, SD: 5.09). Mobility was monitored using an accelerometer-based portable unit worn by each monitored subject for eleven hours. Every 15 minutes the mobility of the subject was summarized and transmitted as an SMS message from the portable unit to a remote server for long-term analysis. The activPAL Trio Professional physical activity logger was used simultaneously for comparison with the portable unit. On conclusion of the trial each subject completed a questionnaire detailing their satisfaction with the portable unit and any recommendations for improvements. Overall, a percentage difference of 2.31% was found between the activPAL Trio and the portable unit for the detection of sitting. For the combined postures of standing and walking the percentage difference was 2.9%. A bivariate correlation and regression analysis was performed on the entire data set of one subject. Strong positive correlations were found for the detection of sitting (r = 0.996) and for the combined postures of standing and walking (r = 0.994). Subjects suggested that a lighter, smaller and wireless unit would be more effective.

  15. Pathogen transfer through environment-host contact: an agent-based queueing theoretic framework.

    PubMed

    Chen, Shi; Lenhart, Suzanne; Day, Judy D; Lee, Chihoon; Dulin, Michael; Lanzas, Cristina

    2017-11-02

    Queueing theory studies the properties of waiting queues and has been applied to investigate direct host-to-host transmitted disease dynamics, but its potential in modelling environmentally transmitted pathogens has not been fully explored. In this study, we provide a flexible and customizable queueing theory modelling framework with three major subroutines to study the in-hospital contact processes between environments and hosts and potential nosocomial pathogen transfer, where environments are servers and hosts are customers. Two types of servers with different parameters but the same utilization are investigated. We consider various forms of transfer functions that map contact duration to the amount of pathogen transfer based on existing literature. We propose a case study of simulated in-hospital contact processes and apply stochastic queues to analyse the amount of pathogen transfer under different transfer functions, and assume that pathogen amount decreases during the inter-arrival time. Different host behaviour (feedback and non-feedback) as well as initial pathogen distribution (whether in environment and/or in hosts) are also considered and simulated. We assess pathogen transfer and circulation under these various conditions and highlight the importance of the nonlinear interactions among contact processes, transfer functions and pathogen demography during the contact process. Our modelling framework can be readily extended to more complicated queueing networks to simulate more realistic situations by adjusting parameters such as the number and type of servers and customers, and adding extra subroutines. © The authors 2017. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
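
    A minimal Python sketch of the environment-as-server idea described here follows: hosts arrive at a contaminated surface with exponential inter-arrival times, contact duration plays the role of service time, an assumed saturating transfer function maps contact duration to the amount of pathogen transferred, and the surface load decays between contacts. All rates and the transfer function are illustrative assumptions, not the paper's parameterization.

      import math
      import random

      random.seed(1)

      def transfer(duration, surface_load, rate=0.3):
          # Hypothetical saturating transfer function: longer contact moves
          # a larger fraction of the surface load onto the host.
          return surface_load * (1.0 - math.exp(-rate * duration))

      surface_load, decay, t = 100.0, 0.05, 0.0   # arbitrary illustrative values
      for _ in range(10):
          gap = random.expovariate(1.0)            # inter-arrival time (hours)
          surface_load *= math.exp(-decay * gap)   # pathogen die-off while idle
          contact = random.expovariate(2.0)        # contact (service) duration
          picked_up = transfer(contact, surface_load)
          surface_load -= picked_up
          t += gap + contact
          print(f"t={t:6.2f} h  host acquired {picked_up:6.2f}, "
                f"surface now {surface_load:6.2f}")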

  16. Final Scientific Report: A Scalable Development Environment for Peta-Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karbach, Carsten; Frings, Wolfgang

    2013-02-22

    This document is the final scientific report of the project DE-SC000120 (A Scalable Development Environment for Peta-Scale Computing). The objective of this project is the extension of the Parallel Tools Platform (PTP) for application to peta-scale systems. PTP is an integrated development environment for parallel applications. It comprises code analysis, performance tuning, parallel debugging and system monitoring. The contribution of the Juelich Supercomputing Centre (JSC) aims to provide a scalable solution for system monitoring of supercomputers. This includes the development of a new communication protocol for exchanging status data between the target remote system and the client running PTP. The communication has to work under high latency. PTP needs to be implemented robustly and should hide the complexity of the supercomputer's architecture in order to provide transparent access to various remote systems via a uniform user interface. This simplifies the porting of applications to different systems, because PTP functions as an abstraction layer between the parallel application developer and the compute resources. The common requirement for all PTP components is that they have to interact with the remote supercomputer: applications are built remotely, performance tools are attached to job submissions and their output data resides on the remote system, status data has to be collected by evaluating outputs of the remote job scheduler, and the parallel debugger needs to control an application executed on the supercomputer. The challenge is to provide this functionality for peta-scale systems in real time. The client-server architecture of the established monitoring application LLview, developed by the JSC, can be applied to PTP's system monitoring. LLview provides a well-arranged overview of the supercomputer's current status: a set of statistics, a list of running and queued jobs, and a node display mapping running jobs to their compute resources form the user display of LLview. These monitoring features have to be integrated into the development environment. Besides showing the current status, PTP's monitoring also needs to allow for submitting and canceling user jobs. Monitoring peta-scale systems especially means presenting the large amount of status data in a useful manner. Users need to select arbitrary levels of detail: the monitoring views have to provide a quick overview of the system state, but also need to allow for zooming into the specific parts of the system in which the user is interested. At present, the major batch systems running on supercomputers are PBS, TORQUE, ALPS and LoadLeveler, which have to be supported by both the monitoring and the job controlling component. Finally, PTP needs to be designed as generically as possible, so that it can be extended for future batch systems.

  17. iScreen: world's first cloud-computing web server for virtual screening and de novo drug design based on TCM database@Taiwan

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian

    2011-06-01

    The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the top 200 TCM compounds from the docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of interest to the user. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.

  19. An Enhanced Biometric Based Authentication with Key-Agreement Protocol for Multi-Server Architecture Based on Elliptic Curve Cryptography

    PubMed Central

    Reddy, Alavalapati Goutham; Das, Ashok Kumar; Odelu, Vanga; Yoo, Kee-Young

    2016-01-01

    Biometric based authentication protocols for multi-server architectures have gained momentum in recent times due to advancements in wireless technologies and associated constraints. Lu et al. recently proposed a robust biometric based authentication with key agreement protocol for a multi-server environment using smart cards. They claimed that their protocol is efficient and resistant to prominent security attacks. The careful investigation of this paper proves that Lu et al.'s protocol does not provide user anonymity or perfect forward secrecy and is susceptible to server and user impersonation attacks, man-in-the-middle attacks and clock synchronization problems. In addition, this paper proposes an enhanced biometric based authentication with key-agreement protocol for multi-server architecture based on elliptic curve cryptography using smartcards. We proved that the proposed protocol achieves mutual authentication using Burrows-Abadi-Needham (BAN) logic. The formal security of the proposed protocol is verified using the AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that our protocol can withstand active and passive attacks. The formal and informal security analyses and the performance analysis demonstrate that the proposed protocol is robust and efficient compared to Lu et al.'s protocol and existing similar protocols. PMID:27163786

  20. Digital Earth Watch (DEW): How Mobile Apps Are Paving The Way Towards A Federated Web-Services Architecture For Citizen Science

    NASA Astrophysics Data System (ADS)

    Carrera, F.; Schloss, A. L.; Guerin, S.; Beaudry, J.; Pickle, J.

    2011-12-01

    Dozens of web-based initiatives allow citizens to provide information to programs that monitor the health of our environment. A concerned citizen can participate on-line as a weather "spotter", provide important phenological information to national databases, update bird counts in the area, or record the freezing of ponds, and much more. Many of these programs are developing mobile apps as companion tools to their web sites. Our group was involved in the development of one such companion app as an adjunct to the Picture Post project web site. Digital Earth Watch (DEW) and the Picture Post network support environmental monitoring through repeat digital photography and satellite imagery. A Picture Post is an eight-sided platform on a stand-alone post for taking a panoramic series of photographs. By taking pictures on a regular basis at Picture Post sites and by sharing these pictures on the program's web site (housed at the University of New Hampshire), citizen scientists are creating a photographic library of change-over-time in their local area and contributing to national monitoring programs. Our DEW Android application simplifies participation by allowing users to upload pictures instantly from their smart phone. The app also removes the constraint of the physical picture post, by allowing users to create a virtual post anywhere in the world. Posts have been set up to monitor trails, forests, water, wetlands, gardens and landscapes. The app uses the phone's GPS to position the virtual post in its geographic location and guides the user through the orientations thanks to the internal accelerometers and compass. To aid in the before-and-after comparison of images taken from the same orientation, the DEW app displays an "onionskin" of the prior image overlaid onto the camera viewfinder. With the transparent onionskin as a guide, the user can align the images more accurately, thus allowing differences between pictures to be detectable and measurable. The app interacts with the UNH server via APIs (Application Programming Interfaces) that were created to allow bi-directional machine-to-machine interaction between the mobile device and the web site. Thus, the principal functions that a user can perform on the web site, such as finding post sites on a map and viewing and adding picture sets, are available on the smartphone. The development of the APIs makes it now possible not only to communicate with our own mobile app, but, more importantly, it opens the door for other computer systems to directly interact with our server. Our ongoing discussions with the National Phenology Network and Project Budburst have highlighted the potential (and perhaps the need) for the creation of a distributed web-service architecture whereby each national program exposes its key functionalities not only to its own mobile phone apps, but also to other organizations, in a federated system of servers, all supporting citizen-based digital earth watch programs.
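
    A hedged Python sketch of the kind of API surface described here follows; the base URL, endpoint paths and parameters are hypothetical stand-ins, not the actual Picture Post API.

      import json
      import urllib.request

      # Hypothetical endpoints illustrating the machine-to-machine API:
      # find posts near a location, then add a picture to one of them.
      BASE = "https://picturepost.example.edu/api"

      def nearby_posts(lat, lon, radius_km=5):
          url = f"{BASE}/posts?lat={lat}&lon={lon}&radius={radius_km}"
          with urllib.request.urlopen(url) as resp:
              return json.load(resp)

      def upload_picture(post_id, orientation, jpeg_bytes):
          req = urllib.request.Request(
              f"{BASE}/posts/{post_id}/pictures?orientation={orientation}",
              data=jpeg_bytes,
              headers={"Content-Type": "image/jpeg"},
              method="POST",
          )
          with urllib.request.urlopen(req) as resp:
              return json.load(resp)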

  1. A WebGIS-based system for analyzing and visualizing air quality data for Shanghai Municipality

    NASA Astrophysics Data System (ADS)

    Wang, Manyi; Liu, Chaoshun; Gao, Wei

    2014-10-01

    An online visual analytical system based on Java Web and WebGIS for air quality data in Shanghai Municipality was designed and implemented to quantitatively analyze and qualitatively visualize air quality data. After analyzing the architectures of WebGIS and Java Web, we first designed the overall system architecture, then specified the software and hardware environment and determined the main function modules of the system. The visual system was built with the DIV + CSS layout method combined with JSP, JavaScript and other programming languages on the Java platform. Moreover, the Struts, Spring and Hibernate frameworks (SSH) were integrated into the system for ease of maintenance and expansion. To provide mapping services and spatial analysis functions, we selected ArcGIS for Server as the GIS server. We also used an Oracle database and an ESRI file geodatabase to store spatial and non-spatial data in order to ensure data security. In addition, the response data from the Web server are resampled to enable rapid visualization in the browser. Experimental results indicate that the system responds quickly to users' requests and efficiently returns accurate processing results.

  2. Applications of Multi-Channel Safety Authentication Protocols in Wireless Networks.

    PubMed

    Chen, Young-Long; Liau, Ren-Hau; Chang, Liang-Yu

    2016-01-01

    People can use their web browsers or mobile devices to access web services and applications hosted on remote servers. Users have to input their identity and password to log in to the server. The identity and password may be appropriated by hackers when the network environment is not safe. Multi-channel secure authentication protocols can improve the security of the network environment: mobile devices can pass authentication messages through Wi-Fi or 3G networks to serve as a second communication channel. However, the number of messages transmitted is not considered in existing multi-channel secure authentication protocols, and the more messages are transmitted, the easier they are for hackers to collect and decode. In this paper, we propose two schemes which allow the server to validate the user while reducing the number of messages using the XOR operation. Our schemes improve the security of the authentication protocol. The experimental results show that our proposed authentication protocols are more secure and effective. Regarding applications of second authentication communication channels to smart access control systems, identity identification and E-wallets, our proposed authentication protocols can ensure the safety of persons and property, and achieve more effective security management mechanisms.
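
    To illustrate the XOR building block (not the paper's exact protocol), here is a toy Python challenge-response exchange in which XOR masks the response so that a single message suffices for verification:

      import hashlib
      import os

      # Illustrative XOR-masked one-time response: the password digest
      # never travels in the clear, and each nonce yields a fresh message.
      def digest(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def xor(a: bytes, b: bytes) -> bytes:
          return bytes(x ^ y for x, y in zip(a, b))

      password = b"correct horse"
      shared = digest(password)           # both sides can derive this secret

      nonce = os.urandom(32)              # server -> client challenge
      response = xor(digest(shared + nonce), nonce)   # client -> server

      # Server side: unmask with the nonce and compare in one step.
      assert xor(response, nonce) == digest(shared + nonce)
      print("client authenticated")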

  3. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  4. Web-accessible molecular modeling with Rosetta: The Rosetta Online Server that Includes Everyone (ROSIE).

    PubMed

    Moretti, Rocco; Lyskov, Sergey; Das, Rhiju; Meiler, Jens; Gray, Jeffrey J

    2018-01-01

    The Rosetta molecular modeling software package provides a large number of experimentally validated tools for modeling and designing proteins, nucleic acids, and other biopolymers, with new protocols being added continually. While freely available to academic users, external usage is limited by the need for expertise in the Unix command line environment. To make Rosetta protocols available to a wider audience, we previously created a web server called Rosetta Online Server that Includes Everyone (ROSIE), which provides a common environment for hosting web-accessible Rosetta protocols. Here we describe a simplification of the ROSIE protocol specification format, one that permits easier implementation of Rosetta protocols. Whereas the previous format required creating multiple separate files in different locations, the new format allows specification of the protocol in a single file. This new, simplified protocol specification has more than doubled the number of Rosetta protocols available under ROSIE. These new applications include pKa determination, lipid accessibility calculation, ribonucleic acid redesign, protein-protein docking, protein-small molecule docking, symmetric docking, antibody docking, cyclic toxin docking, critical binding peptide determination, and mapping small molecule binding sites. ROSIE is freely available to academic users at http://rosie.rosettacommons.org. © 2017 The Protein Society.

  5. Portable air quality sensor unit for participatory monitoring: an end-to-end VESNA-AQ based prototype

    NASA Astrophysics Data System (ADS)

    Vucnik, Matevz; Robinson, Johanna; Smolnikar, Miha; Kocman, David; Horvat, Milena; Mohorcic, Mihael

    2015-04-01

    Key words: portable air quality sensor, CITI-SENSE, participatory monitoring, VESNA-AQ. The emergence of low-cost, easy-to-use portable air quality sensor units is opening new possibilities for individuals to assess their exposure to air pollutants at a specific place and time, and to share this information over the Internet. Such portable sensor units are being used in an ongoing citizen science project called CITI-SENSE, which enables citizens to measure and share the data. Through the creation of 'citizens' observatories', the project aims to empower citizens to contribute to and participate in environmental governance, enabling them to support and influence community and societal priorities as well as associated decision making. An air quality measurement system based on the VESNA sensor platform was designed within the project primarily for use as a portable sensor unit in selected pilot cities (Belgrade, Ljubljana and Vienna) for monitoring outdoor exposure to pollutants; functionally, however, the same unit with a different set of sensors could be used, for example, as an indoor platform. The version designed for the pilot studies was equipped with the following sensors: NO2, O3, CO, temperature, relative humidity, pressure and an accelerometer. The personal sensor unit is battery powered and housed in a plastic box. The VESNA-based air quality (AQ) monitoring system comprises the VESNA-AQ portable sensor unit, a smartphone app and the remote server. The personal sensor unit supports wireless connection to an Android smartphone via built-in Wi-Fi. The smartphone in turn also serves as the communication gateway towards the remote server, using any of the available data connections. Besides the gateway functionality, the role of the smartphone is to enrich the data coming from the personal sensor unit with the GPS location, timestamps and user-defined context. This, together with the accelerometer, enables users to better estimate their exposure in relation to physical activities, time and location. The end user can monitor the measured parameters through a smartphone application. The smartphone app implements a custom-developed Lightweight Client Server Protocol (LCSP), which is used to send requests to the VESNA-AQ unit and to exchange information. When data are obtained from the VESNA-AQ unit, the mobile application visualizes them; it also has an option to forward the data to the remote server as a custom JSON structure over an HTTP POST request. The server stores the data in the database and in parallel translates the data to WFS and forwards them to the main CITI-SENSE platform over WFS-T, in a common XML format, over an HTTP POST request. From there the data can be accessed through the Internet and visualised in various forms and web applications developed by the CITI-SENSE project. In the course of the project, the collected data will be made publicly available, enabling citizens to participate in environmental governance. Acknowledgements: CITI-SENSE is a Collaborative Project partly funded by the EU FP7-ENV-2012 under grant agreement no 308524 (www.citi-sense.eu).
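
    A minimal Python sketch of the gateway-to-server step is shown below: a sensor reading enriched with location and time context is forwarded as JSON over an HTTP POST. The URL and field names are hypothetical, not the CITI-SENSE schema.

      import requests

      # Hypothetical payload: one enriched reading from the portable unit.
      reading = {
          "unit_id": "vesna-aq-07",
          "timestamp": "2015-04-01T10:15:00Z",
          "location": {"lat": 46.0569, "lon": 14.5058},
          "no2_ppb": 21.4,
          "o3_ppb": 33.0,
          "temperature_c": 12.8,
      }
      resp = requests.post("https://aq.example.org/observations",
                           json=reading, timeout=10)
      resp.raise_for_status()  # surface transport errors to the gateway app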

  6. Upgrading a CD-ROM Network for Multimedia Applications.

    ERIC Educational Resources Information Center

    Sylvia, Margaret

    1995-01-01

    Addresses issues to consider when upgrading library CD-ROM networks for multimedia applications. Topics include security issues; workstation requirements such as soundboards and monitors; local area network configurations that avoid bottlenecks: Asynchronous Transfer Mode, Ethernet, and Integrated Services Digital Network; server performance…

  7. Supervising simulations with the Prodiguer Messaging Platform

    NASA Astrophysics Data System (ADS)

    Greenslade, Mark; Carenton, Nicolas; Denvil, Sebastien

    2015-04-01

    At any one moment in time, researchers affiliated with the Institut Pierre Simon Laplace (IPSL) climate modeling group are running hundreds of global climate simulations. These simulations execute upon a heterogeneous set of High Performance Computing (HPC) environments spread throughout France. The IPSL's simulation execution runtime is called libIGCM (library for IPSL Global Climate Modeling group). libIGCM has recently been enhanced to support real-time operational use cases such as simulation monitoring, data publication, environment metrics collection and automated simulation control. At the core of this enhancement is the Prodiguer messaging platform. libIGCM now emits information, in the form of messages, for remote processing at IPSL servers in Paris. The remote message processing takes several forms, for example: 1. Persisting message content to database(s); 2. Notifying an operator of changes in a simulation's execution status; 3. Launching rollback jobs upon simulation failure; 4. Dynamically updating controlled vocabularies; 5. Notifying downstream applications such as the Prodiguer web portal. We will describe how the messaging platform has been implemented from a technical perspective and demonstrate the Prodiguer web portal receiving real-time notifications.
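
    As a hedged sketch of the message-emission side of such a platform, the following Python fragment publishes a simulation-status event to an AMQP broker using the pika library; the broker location, exchange name and message fields are illustrative assumptions.

      import json
      import pika  # assumes a RabbitMQ/AMQP broker is reachable

      connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
      channel = connection.channel()
      channel.exchange_declare(exchange="simulation-status", exchange_type="topic")

      # Hypothetical event: a running simulation reports its state.
      message = {"simulation": "IPSL-CM-piControl", "state": "running"}
      channel.basic_publish(
          exchange="simulation-status",
          routing_key="simulation.status.running",
          body=json.dumps(message),
      )
      connection.close()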

  8. Unobstructive Body Area Networks (BAN) for Efficient Movement Monitoring

    PubMed Central

    Felisberto, Filipe; Costa, Nuno; Fdez-Riverola, Florentino; Pereira, António

    2012-01-01

    The technological advances in medical sensors, low-power microelectronics and miniaturization, wireless communications and networks have enabled the appearance of a new generation of wireless sensor networks: the so-called wireless body area networks (WBAN). These networks can be used for continuous monitoring of vital parameters, movement, and the surrounding environment. The data gathered by these networks contribute to improving users' quality of life and allow the creation of a knowledge database, using learning techniques, useful for inferring abnormal behaviour. In this paper we present a wireless body area network architecture to recognize human movement, identify human postures and detect harmful activities in order to prevent risk situations. The WBAN was created using tiny, cheap and low-power nodes with inertial and physiological sensors, strategically placed on the human body. Doing so in as ubiquitous a way as possible ensures that its impact on the users' daily actions is minimal. The information collected by these sensors is transmitted to a central server capable of analysing and processing their data. The proposed system creates movement profiles based on the data sent by the WBAN's nodes, and is able to detect any abnormal movement in real time and allows for a monitored rehabilitation of the user. PMID:23112726

  9. Internet-based distributed collaborative environment for engineering education and design

    NASA Astrophysics Data System (ADS)

    Sun, Qiuli

    2001-07-01

    This research investigates the use of the Internet for engineering education, design, and analysis through the presentation of a Virtual City environment. The main focus of this research was to provide an infrastructure for engineering education, test the concept of distributed collaborative design and analysis, develop and implement the Virtual City environment, and assess the environment's effectiveness in the real world. A three-tier architecture was adopted in the development of the prototype, which contains an online database server, a Web server as well as multi-user servers, and client browsers. The environment is composed of five components: a 3D virtual world, multiple Internet-based multimedia modules, an online database, a collaborative geometric modeling module, and a collaborative analysis module. The environment was designed using multiple Internet-based technologies, such as Shockwave, Java, Java 3D, VRML, Perl, ASP, SQL, and a database. These various technologies together formed the basis of the environment and were programmed to communicate smoothly with each other. Three assessments were conducted over a period of three semesters. The Virtual City is open to the public at www.vcity.ou.edu. The online database was designed to manage the changeable data related to the environment. The virtual world was used to implement 3D visualization and tie the multimedia modules together. Students are allowed to build segments of the 3D virtual world upon completion of appropriate undergraduate courses in civil engineering. The end result is a complete virtual world that contains designs from all of their coursework and is viewable on the Internet. The environment is a content-rich educational system, which can be used to teach multiple engineering topics with the help of 3D visualization, animations, and simulations. The concept of collaborative design and analysis using the Internet was investigated and implemented. Geographically dispersed users can build the same geometric model simultaneously over the Internet and communicate with each other through a chat room. They can also conduct finite element analysis collaboratively on the same object over the Internet. They can mesh the same object, apply and edit the same boundary conditions and forces, obtain the same analysis results, and then discuss the results through the Internet.

  10. Internet Distribution of Spacecraft Telemetry Data

    NASA Technical Reports Server (NTRS)

    Specht, Ted; Noble, David

    2006-01-01

    Remote Access Multi-mission Processing and Analysis Ground Environment (RAMPAGE) is a Java-language server computer program that enables near-real-time display of spacecraft telemetry data on any authorized client computer that has access to the Internet and is equipped with Web-browser software. In addition to providing a variety of displays of the latest available telemetry data, RAMPAGE can deliver notification of an alarm by electronic mail. Subscribers can then use RAMPAGE displays to determine the state of the spacecraft and formulate a response to the alarm, if necessary. A user can query spacecraft mission data in either binary or comma-separated-value format by use of a Web form or a Practical Extraction and Reporting Language (PERL) script to automate the query process. RAMPAGE runs on Linux and Solaris server computers in the Ground Data System (GDS) of NASA's Jet Propulsion Laboratory and includes components designed specifically to make it compatible with legacy GDS software. The client/server architecture of RAMPAGE and the use of the Java programming language make it possible to utilize a variety of competitive server and client computers, thereby also helping to minimize costs.
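
    The abstract mentions automating queries with Perl scripts against a Web form; the equivalent idea in Python might look like the sketch below, where the endpoint URL and parameter names are invented for illustration and the response is assumed to be comma-separated values.

        import csv
        import io
        import urllib.parse
        import urllib.request

        # Hypothetical query endpoint and parameters; the real RAMPAGE form
        # fields are not specified in the abstract.
        params = urllib.parse.urlencode({
            "channel": "BATT_VOLTAGE",
            "start": "2006-01-01T00:00:00",
            "stop": "2006-01-02T00:00:00",
            "format": "csv",
        })
        url = "https://rampage.example.org/query?" + params

        with urllib.request.urlopen(url) as resp:
            rows = list(csv.reader(io.TextIOWrapper(resp, encoding="utf-8")))

        for timestamp, value in rows:
            print(timestamp, value)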

  11. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme.

    PubMed

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric authentication schemes applied in the multi-server environment have become more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme based on the cryptanalysis of Mishra et al.'s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks.

  12. Implications of the Java language on computer-based patient records.

    PubMed

    Pollard, D; Kucharz, E; Hammond, W E

    1996-01-01

    The growth of the utilization of the World Wide Web (WWW) as a medium for the delivery of computer-based patient records (CBPR) has created a new paradigm in which clinical information may be delivered. Until recently, the authoring tools and environment for application development on the WWW have been limited to Hyper Text Markup Language (HTML) utilizing common gateway interface scripts. While, at times, this provides an effective medium for the delivery of CBPRs, it is a less than optimal solution. The server-centric dynamics and low levels of interactivity do not provide the robustness required of an application in a clinical environment. The emergence of Sun Microsystems' Java language is a solution to this problem. In this paper we examine the Java language and its implications for CBPRs. A quantitative and qualitative assessment was performed. The Java environment is compared to HTML and Telnet CBPR environments. Qualitative comparisons include level of interactivity, server load, client load, ease of use, and application capabilities. Quantitative comparisons include data transfer time delays. The Java language has demonstrated promise for delivering CBPRs.

  13. CIS4/403: Design and Implementation of an Intranet-based system for Real-Time Tele-Consultation in Oncology

    PubMed Central

    Eccher, C; Berloffa, F; Demichelis, F; Larcher, B; Galvagni, M; Sboner, A; Graiff, A; Forti, S

    1999-01-01

    Introduction This study describes a tele-consultation system (TCS) developed to provide a computing environment over a Wide Area Network (WAN) in North Italy (Province of Trento) that can be used by two or more physicians to share medical data and work co-operatively on medical records. A pilot study has been carried out in oncology to assess the effectiveness of the system. The aim of this project is to facilitate the management of oncology patients by improving communication among the specialists of central and district hospitals. Methods and Results The TCS is an Intranet-based solution. The Intranet is based on a PC WAN with Windows NT Server, Microsoft SQL Server, and Internet Information Server. TCS is composed of native and custom applications developed in the Microsoft Windows (9x and NT) environment. The basic component of the system is the multimedia digital medical record, structured as a collection of HTML and ASP pages. A distributed relational database will allow users to store and retrieve medical records, accessed by a dedicated Web browser via the Web server. The medical data to be stored and the presentation architecture of the clinical record were determined in close collaboration with the clinicians involved in the project. TCS will allow a multi-point tele-consultation (TC) among two or more participants on remote computers, providing synchronized surfing through the clinical report. A set of collaborative and personal tools (whiteboard with drawing tools, point-to-point digital audio-conference, chat, local notepad, e-mail service) is integrated in the system to provide a user-friendly environment. TCS has been developed as a client-server architecture. The client part of the system is based on the Microsoft Web Browser control and provides the user interface and the tools described above. The server part, running at all times on a dedicated computer, accepts connection requests and manages the connections among the participants in a TC, allowing multiple TCs to run simultaneously. TCS has been developed in the Visual C++ environment using the MFC library and COM technology; ActiveX controls have been written in Visual Basic to perform dedicated tasks from inside the HTML clinical report. Before deploying the system in the hospital departments involved in the project, TCS was tested in our laboratory by clinicians involved in the project to evaluate its usability. Discussion TCS has the potential to support a "multi-disciplinary distributed virtual oncological meeting". The specialists of different departments and different hospitals can attend "virtual meetings" and interactively discuss medical data. An expected benefit of the "virtual meeting" is the possibility of providing remote expert advice from oncologists to peripheral cancer units in formulating treatment plans, conducting follow-up sessions and supporting clinical research.
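
    The real server part is Visual C++/MFC/COM; purely as an illustration of the connection-management pattern it describes (accept participants, relay each one's updates to the others), here is a minimal Python sketch with an invented port number.

        import socket
        import threading

        participants = []          # connected client sockets for one tele-consultation
        lock = threading.Lock()

        def relay(client):
            # Forward every message from one participant to all the others,
            # which is how synchronized surfing/whiteboard updates can be shared.
            try:
                while data := client.recv(4096):
                    with lock:
                        for other in participants:
                            if other is not client:
                                other.sendall(data)
            finally:
                with lock:
                    participants.remove(client)
                client.close()

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("0.0.0.0", 9000))   # hypothetical port
        server.listen()
        while True:
            client, _ = server.accept()
            with lock:
                participants.append(client)
            threading.Thread(target=relay, args=(client,), daemon=True).start()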

  14. Broadband network on-line data acquisition system with web based interface for control and basic analysis

    NASA Astrophysics Data System (ADS)

    Polkowski, Marcin; Grad, Marek

    2016-04-01

    The passive seismic experiment "13BB Star" has been operated since mid-2013 in northern Poland and consists of 13 broadband seismic stations. One of the elements of this experiment is a dedicated on-line data acquisition system comprised of both client (station) side and server side modules, with a web based interface that allows monitoring of network status and provides tools for preliminary data analysis. The station side is controlled by an ARM Linux board that is programmed to maintain the 3G/EDGE internet connection, receive data from the digitizer, and send data to the central server along with additional auxiliary parameters like temperatures, voltages and electric current measurements. The station side software is a set of easy-to-install PHP scripts. Data is transmitted securely over the SSH protocol to the central server, a dedicated Linux based machine whose duty is receiving and processing all data from all stations, including the auxiliary parameters. The server side software is written in PHP and Python. Additionally, it allows remote station configuration and provides a web based interface for user-friendly interaction. All collected data can be displayed for each day and station. It also allows manual creation of event-oriented plots with different filtering abilities and provides numerous status and statistic information. Our solution is very flexible and easy to modify. In this presentation we would like to share our solution and experience. National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
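
    The station-side scripts are PHP, but the secure-transfer step can be sketched in a few lines of Python, assuming the third-party paramiko SSH library; the host name, credentials and paths below are placeholders.

        import paramiko

        # Hypothetical host, credentials and paths; the real station scripts are PHP.
        HOST, USER, KEY = "acq.example.org", "station13", "/home/pi/.ssh/id_rsa"

        def push_to_server(local_path, remote_path):
            # Transmit one data file securely over SSH (SFTP), as the stations do.
            ssh = paramiko.SSHClient()
            ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            ssh.connect(HOST, username=USER, key_filename=KEY)
            try:
                sftp = ssh.open_sftp()
                sftp.put(local_path, remote_path)
                sftp.close()
            finally:
                ssh.close()

        # Auxiliary parameters (temperature, voltage, current) can ride along
        # in a small sidecar file next to the waveform data.
        push_to_server("/data/2016-04-01.mseed", "/incoming/st13/2016-04-01.mseed")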

  15. Computation offloading for real-time health-monitoring devices.

    PubMed

    Kalantarian, Haik; Sideris, Costas; Tuan Le; Hosseini, Anahita; Sarrafzadeh, Majid

    2016-08-01

    Among the major challenges in the development of real-time wearable health monitoring systems is optimizing battery life. One of the major techniques with which this objective can be achieved is computation offloading, in which computation is partitioned between the device and other resources such as a server or cloud. In this paper, we describe a novel dynamic computation offloading scheme for real-time wearable health monitoring devices that adjusts the partitioning of data between the wearable device and the mobile application as a function of desired classification accuracy.
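
    The abstract does not detail the partitioning policy, so the sketch below is only a hedged illustration of a dynamic offloading decision: choose where one classification step runs from the desired accuracy, the battery level, and connectivity. The thresholds and the policy itself are hypothetical.

        def choose_placement(desired_accuracy, battery_pct, link_up):
            """Decide where one classification step should run.

            Hypothetical policy: cheap on-device models when accuracy demands
            are modest or connectivity is poor; offload otherwise.
            """
            if not link_up:
                return "device"            # no connectivity: must compute locally
            if desired_accuracy > 0.95:
                return "server"            # heavy model only the server can run
            if battery_pct < 20:
                return "server"            # preserve battery by offloading
            return "device"

        for case in [(0.90, 80, True), (0.99, 80, True), (0.90, 10, True), (0.99, 50, False)]:
            print(case, "->", choose_placement(*case))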

  16. Self-adaptive Fault-Tolerance of HLA-Based Simulations in the Grid Environment

    NASA Astrophysics Data System (ADS)

    Huang, Jijie; Chai, Xudong; Zhang, Lin; Li, Bo Hu

    The objects of an HLA-based simulation can access model services to update their attributes. However, the grid server may become overloaded and refuse to let the model service handle object accesses. Because these objects accessed this model service during the last simulation loop and their intermediate state is stored on this server, such a refusal may terminate the simulation. A fault-tolerance mechanism must therefore be introduced into simulations. The traditional fault-tolerance methods cannot meet this need, because the transmission latency between a federate and the RTI in a grid environment varies from several hundred milliseconds to several seconds. By adding model service URLs to the OMT and expanding the HLA services and model services with some interfaces, this paper proposes a self-adaptive fault-tolerance mechanism for simulations according to the characteristics of federates accessing model services. Benchmark experiments indicate that the expanded HLA/RTI can make simulations run self-adaptively in the grid environment.
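
    The paper's mechanism involves OMT extensions and state handling that the abstract only outlines; the fragment below illustrates just the failover idea, namely trying each model service URL in turn when a server refuses or times out. The URLs are placeholders, and the migration of an object's intermediate state is deliberately omitted.

        import urllib.error
        import urllib.request

        # Hypothetical: each model service is known by several URLs listed in
        # the OMT; try them in order and fail over when a server refuses.
        MODEL_SERVICE_URLS = [
            "http://grid-a.example.org/model/updateAttributes",
            "http://grid-b.example.org/model/updateAttributes",
        ]

        def call_model_service(payload: bytes) -> bytes:
            last_error = None
            for url in MODEL_SERVICE_URLS:
                try:
                    req = urllib.request.Request(url, data=payload)
                    with urllib.request.urlopen(req, timeout=5) as resp:
                        return resp.read()
                except (urllib.error.URLError, TimeoutError) as err:
                    last_error = err   # overloaded/refusing server: try the next one
            raise RuntimeError(f"all model service replicas failed: {last_error}")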

  17. A network-based training environment: a medical image processing paradigm.

    PubMed

    Costaridou, L; Panayiotakis, G; Sakellaropoulos, P; Cavouras, D; Dimopoulos, J

    1998-01-01

    The capability of interactive multimedia and Internet technologies is investigated with respect to the implementation of a distance learning environment. The system is built according to a client-server architecture, based on the Internet infrastructure, and composed of server nodes conceptually modelled as WWW sites. Sites are implemented by customization of available components. The environment integrates network-delivered interactive multimedia courses, network-based tutoring, SIG support, information databases of professional interest, as well as course and tutoring management. This capability has been demonstrated by means of an implemented system, validated with digital image processing content, specifically image enhancement. Image enhancement methods are theoretically described and applied to mammograms. Emphasis is given to the interactive presentation of the effects of algorithm parameters on images. End-user access to the system depends on available bandwidth; high-speed access can be achieved via LAN or local ISDN connections. Network-based training offers new means of improved access to, and sharing of, learning resources and expertise, a promising supplement to conventional training.

  18. Filmless PACS in a multiple facility environment

    NASA Astrophysics Data System (ADS)

    Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.

    1996-05-01

    A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single facility case, requires a means to work effectively and seamlessly across moderate to low speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi-facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World-Wide-Web can be used to generate worklists and patient folders across facilities. A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.
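
    As a loose, hypothetical illustration of the "local view of the global database" idea, the sketch below consults a local replica of the system-wide study index to learn which facility's server holds an examination, then retrieves it from that server. Endpoints and field names are invented; a real implementation would use DICOM query/retrieve rather than plain HTTP.

        import urllib.request

        # Hypothetical local replica of the system-wide study index.
        GLOBAL_INDEX = {
            "EX-1029": {"facility": "clinic-west", "server": "http://pacs-west.example.org"},
            "EX-2210": {"facility": "main-hospital", "server": "http://pacs-main.example.org"},
        }

        def locate(exam_id):
            # The query recovers where the examination resides...
            return GLOBAL_INDEX[exam_id]["server"]

        def retrieve(exam_id):
            # ...and retrieval then goes straight to the hosting server.
            url = f"{locate(exam_id)}/exams/{exam_id}"   # hypothetical REST path
            with urllib.request.urlopen(url) as resp:
                return resp.read()

        print(locate("EX-1029"))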

  19. Highway rock slope management program.

    DOT National Transportation Integrated Search

    2001-06-30

    Development of a comprehensive geotechnical database for risk management of highway rock slope problems is described. Computer software selected to program the client/server application in windows environment, components and structure of the geote...

  20. 25 CFR 543.2 - What are the definitions for this part?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... system, including an electronic or technological aid (not limited to terminals, player stations... that are effectively random. Server. A computer which controls one or more applications or environments...

  1. 25 CFR 543.2 - What are the definitions for this part?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... system, including an electronic or technological aid (not limited to terminals, player stations... that are effectively random. Server. A computer which controls one or more applications or environments...

  2. Wireless chest wearable vital sign monitoring platform for hypertension.

    PubMed

    Janjua, G; Guldenring, D; Finlay, D; McLaughlin, J

    2017-07-01

    Hypertension, a silent killer, is one of the biggest challenges of the 21st century for public health agencies worldwide [1]. World Health Organization (WHO) statistics show that hypertension causes 9.4 million deaths per year and accounts for 55.3% of total deaths in cardiovascular (CV) patients [2]. Early detection and prevention of hypertension can significantly reduce CV mortality. We present a wireless chest-wearable vital sign monitoring platform. It measures Electrocardiogram (ECG), Photoplethysmogram (PPG) and Ballistocardiogram (BCG) signals and sends the data over Bluetooth Low Energy (BLE) to a mobile phone, which acts as a gateway. A custom Android application relays the data to a ThingSpeak server, where MATLAB-based offline analysis estimates the blood pressure. The server reports the subject's health status to friends and family on social media (Twitter). The chest provides a natural position for the sensor to capture reliable signals for the hypertension condition. We have performed a clinical technical evaluation of prototypes on 11 normotensive subjects (9 males, 2 females).
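
    ThingSpeak exposes a simple HTTP update API, so the relay step can be sketched as follows; the write API key and the assignment of vitals to channel fields are placeholders, and the real system sends raw ECG/PPG/BCG data for offline MATLAB analysis rather than derived values.

        import urllib.parse
        import urllib.request

        # ThingSpeak's public update endpoint; the write API key and the
        # channel field layout here are placeholders.
        API_KEY = "XXXXXXXXXXXXXXXX"

        def relay_vitals(heart_rate_bpm, systolic_mmhg):
            params = urllib.parse.urlencode({
                "api_key": API_KEY,
                "field1": heart_rate_bpm,    # hypothetical channel layout
                "field2": systolic_mmhg,
            })
            url = "https://api.thingspeak.com/update?" + params
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode()  # ThingSpeak returns the new entry id

        print(relay_vitals(72, 118))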

  3. A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets

    PubMed Central

    Kim, Dong Seong; Park, Jong Sou

    2014-01-01

    It is important to assess the availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM, etc.) in a virtualized servers system. We also show numerical analysis on steady-state availability, downtime in hours per year, transaction loss, and sensitivity analysis. This model provides a new finding on how to increase system availability by judiciously combining software rejuvenation at both the VM and VMM levels. PMID:25165732
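
    The SRN model is far richer than this, but the steady-state availability arithmetic underlying such analyses can be illustrated directly: a component that alternates between up and down states has availability MTTF / (MTTF + MTTR). The sketch below applies this to a toy host/VMM/VM stack under the simplifying, hypothetical assumption of independent layers in series; all MTTF/MTTR numbers are invented.

        def availability(mttf_hours, mttr_hours):
            # Steady-state availability of a two-state (up/down) component.
            return mttf_hours / (mttf_hours + mttr_hours)

        # Hypothetical MTTF/MTTR values for illustration only.
        host = availability(mttf_hours=8760, mttr_hours=8)
        vmm  = availability(mttf_hours=2190, mttr_hours=1)
        vm   = availability(mttf_hours=730,  mttr_hours=0.5)

        # If the layers are assumed independent and all must be up (series
        # system), overall availability is the product; downtime follows.
        overall = host * vmm * vm
        print(f"availability = {overall:.5f}")
        print(f"downtime     = {(1 - overall) * 8760:.1f} hours/year")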

  4. Some Programs Should Not Run on Laptops - Providing Programmatic Access to Applications Via Web Services

    NASA Astrophysics Data System (ADS)

    Gupta, V.; Gupta, N.; Gupta, S.; Field, E.; Maechling, P.

    2003-12-01

    Modern laptop computers, and personal computers, can provide capabilities that are, in many ways, comparable to workstations or departmental servers. However, this doesn't mean we should run all computations on our local computers. We have identified several situations in which it is preferable to implement our seismological application programs in a distributed, server-based, computing model. In this model, application programs on the user's laptop, or local computer, invoke programs that run on an organizational server, and the results are returned to the invoking system. Situations in which a server-based architecture may be preferred include: (a) a program is written in a language, or written for an operating environment, that is unsupported on the local computer, (b) software libraries or utilities required to execute a program are not available on the user's computer, (c) a computational program is physically too large, or computationally too expensive, to run on a user's computer, (d) a user community wants to enforce a consistent method of performing a computation by standardizing on a single implementation of a program, and (e) the computational program may require current information that is not available to all client computers. Until recently, distributed, server-based, computational capabilities were implemented using client/server architectures. In these architectures, client programs were often written in the same language, and they executed in the same computing environment, as the servers. Recently, a new distributed computational model, called Web Services, has been developed. Web Services are based on Internet standards such as XML, SOAP, WSDL, and UDDI. Web Services offer the promise of platform- and language-independent distributed computing. To investigate this new computational model, and to provide useful services to the SCEC Community, we have implemented several computational and utility programs using a Web Service architecture. We have hosted these Web Services as a part of the SCEC Community Modeling Environment (SCEC/CME) ITR Project (http://www.scec.org/cme). We have implemented Web Services for several of the reasons cited previously. For example, we implemented a FORTRAN-based Earthquake Rupture Forecast (ERF) as a Web Service for use by client computers that don't support a FORTRAN runtime environment. We implemented a Generic Mapping Tool (GMT) Web Service for use by systems that don't have local access to GMT. We implemented a Hazard Map Calculator Web Service to execute hazard calculations that are too computationally intensive to run on a local system. We implemented a Coordinate Conversion Web Service to enforce a standard and consistent method for converting between UTM and Lat/Lon. Our experience developing these services indicates both strengths and weaknesses in current Web Service technology. Client programs that utilize Web Services typically need network access, a significant disadvantage at times. Programs with simple input and output parameters were the easiest to implement as Web Services, while programs with complex parameter types required a significant amount of additional development. We also noted that Web Services are very data-oriented, and adapting object-oriented software into the Web Service model proved problematic. Also, the Web Service approach of converting data types into XML format for network transmission has significant inefficiencies for some data sets.
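
    As a rough illustration of what invoking such a service involves, the sketch below posts a hand-built SOAP envelope to a hypothetical coordinate-conversion endpoint; the URL, XML namespace and operation name are invented, and a real client would normally be generated from the service's WSDL description.

        import urllib.request

        # Hypothetical endpoint and operation; real ones would come from the WSDL.
        ENDPOINT = "http://services.example.org/CoordConversion"
        ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <LatLonToUTM xmlns="http://example.org/coord">
              <lat>34.0522</lat>
              <lon>-118.2437</lon>
            </LatLonToUTM>
          </soap:Body>
        </soap:Envelope>"""

        req = urllib.request.Request(
            ENDPOINT,
            data=ENVELOPE.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8",
                     "SOAPAction": "http://example.org/coord/LatLonToUTM"},
        )
        with urllib.request.urlopen(req) as resp:
            print(resp.read().decode())   # XML response carrying the UTM coordinates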

  5. Experience with PACS in an ATM/Ethernet switched network environment.

    PubMed

    Pelikan, E; Ganser, A; Kotter, E; Schrader, U; Timmermann, U

    1998-03-01

    Legacy local area network (LAN) technologies based on shared media concepts are not adequate for the growth of a large-scale picture archiving and communication system (PACS) in a client-server architecture. First, an asymmetric network load, due to the requests of a large number of PACS clients for only a few main servers, should be compensated by communication links to the servers with a higher bandwidth compared to the clients. Secondly, as the number of PACS nodes increases, network throughput should not measurably cut into production. These requirements can easily be fulfilled using switching technologies. Here asynchronous transfer mode (ATM) is clearly one of the hottest topics in networking, because the ATM architecture provides integrated support for a variety of communication services and it supports virtual networking. On the other hand, most of the imaging modalities are not yet ready for integration into a native ATM network. For the many nodes already attached to an Ethernet, a cost-effective and pragmatic way to benefit from the switching concept would be a combined ATM/Ethernet switching environment. This incorporates an incremental migration strategy with the immediate benefits of high-speed, high-capacity ATM (for servers and highly sophisticated display workstations), while preserving elements of the existing network technologies. In addition, Ethernet switching instead of shared media Ethernet improves the performance considerably. The LAN emulation (LANE) specification by the ATM Forum defines mechanisms that allow ATM networks to coexist with legacy systems using any data networking protocol. This paper points out the suitability of this network architecture in accordance with an appropriate system design.

  6. The Earthscope USArray Array Network Facility (ANF): Evolution of Data Acquisition, Processing, and Storage Systems

    NASA Astrophysics Data System (ADS)

    Davis, G. A.; Battistuz, B.; Foley, S.; Vernon, F. L.; Eakins, J. A.

    2009-12-01

    Since April 2004 the Earthscope USArray Transportable Array (TA) network has grown to over 400 broadband seismic stations that stream multi-channel data in near real-time to the Array Network Facility in San Diego. In total, over 1.7 terabytes per year of 24-bit, 40 samples-per-second seismic and state-of-health data is recorded from the stations. The ANF provides analysts access to real-time and archived data, as well as state-of-health data, metadata, and interactive tools for station engineers and the public via a website. Additional processing and recovery of missing data from on-site recorders (balers) at the stations is performed before the final data is transmitted to the IRIS Data Management Center (DMC). Assembly of the final data set requires additional storage and processing capabilities to combine the real-time data with baler data. The infrastructure supporting these diverse computational and storage needs currently consists of twelve virtualized Sun Solaris Zones executing on nine physical server systems. The servers are protected against failure by redundant power, storage, and networking connections. Storage needs are provided by a hybrid iSCSI and Fibre Channel Storage Area Network (SAN) with access to over 40 terabytes of RAID 5 and 6 storage. Processing tasks are assigned to systems based on parallelization and floating-point calculation needs. On-site buffering at the data-loggers provides protection in case of short-term network or hardware problems, while backup acquisition systems at the San Diego Supercomputer Center and the DMC protect against catastrophic failure of the primary site. Configuration management and monitoring of these systems is accomplished with open-source (Cfengine, Nagios, Solaris Community Software) and commercial tools (Intermapper). In the evolution from a single server to multiple virtualized server instances, Sun Cluster software was evaluated and found to be unstable in our environment. Shared filesystem architectures using PxFS and QFS were found to be incompatible with our software architecture, so sharing of data between systems is accomplished via traditional NFS. Linux was found to be limited in terms of deployment flexibility and consistency between versions. Despite the experimentation with various technologies, our current virtualized architecture is stable to the point of an average daily real-time data return rate of 92.34% over the entire lifetime of the project to date.

  7. Assessment of feasibility of running RSNA's MIRC on a Raspberry Pi: a cost-effective solution for teaching files in radiology.

    PubMed

    Pereira, Andre; Atri, Mostafa; Rogalla, Patrik; Huynh, Thien; O'Malley, Martin E

    2015-11-01

    The value of a teaching case repository in radiology training programs is immense. The allocation of resources for putting one together is a complex issue, given the factors that have to be coordinated: hardware, software, infrastructure, administration, and ethics. Costs may be significant, and cost-effective solutions are desirable. We chose the Medical Imaging Resource Center (MIRC), offered by RSNA for free, to build our teaching file. For the hardware, we chose the Raspberry Pi, developed by the Raspberry Pi Foundation: a small control board developed as a low-cost computer for schools, also used in alternative projects such as robotics and environmental data collection. Its performance and reliability as a file server were unknown to us. For the operating system, we chose Raspbian, a variant of Debian Linux, along with Apache (web server), MySQL (database server) and PHP, which enhance the functionality of the server. A USB hub and an external hard drive completed the setup. Installation of the software was smooth. The Raspberry Pi handled very well the task of hosting the teaching file repository for our division. Uptime was logged at 100%, and loading times were similar to other MIRC sites available online. We set up two servers (one for backup), each costing just below $200.00 including external storage and USB hub. It is feasible to run RSNA's MIRC off a low-cost control board (Raspberry Pi). Performance and reliability are comparable to full-size servers for the intended purpose of hosting a teaching file within an intranet environment.

  8. Report: EPA Should Improve Management Practices and Security Controls for Its Network Directory Service System and Related Servers

    EPA Pesticide Factsheets

    Report #12-P-0836, September 20, 2012. EPA's OEI is not managing key system management documentation, system administration functions, the granting and monitoring of privileged accounts, and the application of security controls associated with its DSS.

  9. The PhEDEx next-gen website

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Egeland, R.; Huang, C. H.; Rossman, P.

    PhEDEx is the data-transfer management solution written by CMS. It consists of agents running at each site, a website for presentation of information, and a web-based data-service for scripted access to information. The website allows users to monitor the progress of data-transfers, the status of site agents and links between sites, and the overall status and behaviour of everything about PhEDEx. It also allows users to make and approve requests for data-transfers and for deletion of data. It is the main point-of-entry for all users wishing to interact with PhEDEx. For several years, the website has consisted of a single perl program with about 10K SLOC. This program has limited capabilities for exploring the data, with only coarse filtering capabilities and no context-sensitive awareness. Graphical information is presented as static images, generated on the server, with no interactivity. It is also not well connected to the rest of the PhEDEx codebase, since much of it was written before the data-service was developed. All this makes it hard to maintain and extend. We are re-implementing the website to address these issues. The UI is being rewritten in Javascript, replacing most of the server-side code. We are using the YUI toolkit to provide advanced features and context-sensitive interaction, and will adopt a Javascript charting library for generating graphical representations client-side. This relieves the server of much of its load, and automatically improves server-side security. The Javascript components can be re-used in many ways, allowing custom pages to be developed for specific uses. In particular, standalone test-cases using small numbers of components make it easier to debug the Javascript than it is to debug a large server program. Information about PhEDEx is accessed through the PhEDEx data-service, since direct SQL is not available from the client's browser. This provides consistent semantics with other, externally written monitoring tools, which already use the data-service. It also reduces redundancy in the code, yielding a simpler, consolidated codebase. In this talk we describe our experience of re-factoring this monolithic server-side program into a lighter client-side framework. We describe some of the techniques that worked well for us, and some of the mistakes we made along the way. We present the current state of the project, and its future direction.

  10. The ICT monitoring system of the ASTRI SST-2M prototype proposed for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Gianotti, F.; Bruno, P.; Tacchini, A.; Conforti, V.; Fioretti, V.; Tanci, C.; Grillo, A.; Leto, G.; Malaguti, G.; Trifoglio, M.

    2016-08-01

    In the framework of the international Cherenkov Telescope Array (CTA) observatory, the Italian National Institute for Astrophysics (INAF) has developed a dual mirror, small sized, telescope prototype (ASTRI SST-2M), installed in Italy at the INAF observing station located at Serra La Nave, Mt. Etna. The ASTRI SST-2M prototype is the basis of the ASTRI telescopes that will form the mini-array proposed to be installed at the CTA southern site during its preproduction phase. This contribution presents the solutions implemented to realize the monitoring system for the Information and Communication Technology (ICT) infrastructure of the ASTRI SST-2M prototype. The ASTRI ICT monitoring system has been implemented by integrating traditional tools used in computer centers with specific custom tools which interface via Open Platform Communication Unified Architecture (OPC UA) to the Alma Common Software (ACS) that is used to operate the ASTRI SST-2M prototype. The traditional monitoring tools are based on the Simple Network Management Protocol (SNMP), commercial solutions, and features embedded in the devices themselves; they generate alerts by email and SMS. The specific custom tools convert the SNMP protocol into the OPC UA protocol and implement an OPC UA server. The server interacts with an OPC UA client implemented in an ACS component that, through the ACS Notification Channel, sends monitor data and alerts to the central console of the ASTRI SST-2M prototype. The same approach has also been proposed for the monitoring of the CTA onsite ICT infrastructures.
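
    A compressed sketch of the SNMP-to-OPC-UA bridging idea, assuming the third-party pysnmp and python-opcua packages: poll a device over SNMP and republish the value through an OPC UA server that an ACS-side client can subscribe to. The device address, OID choice and node names are placeholders; the actual ASTRI tools are considerably more elaborate.

        import time
        from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                  ContextData, ObjectType, ObjectIdentity, getCmd)
        from opcua import Server

        def snmp_get_uptime(host):
            # Poll sysUpTime (OID 1.3.6.1.2.1.1.3.0) from a network device.
            error, status, _, var_binds = next(getCmd(
                SnmpEngine(), CommunityData("public"),
                UdpTransportTarget((host, 161)), ContextData(),
                ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0"))))
            if error or status:
                raise RuntimeError(str(error or status))
            return int(var_binds[0][1])

        server = Server()
        server.set_endpoint("opc.tcp://0.0.0.0:4840/ict/")   # hypothetical endpoint
        idx = server.register_namespace("example.ict.monitor")
        node = server.get_objects_node().add_object(idx, "Switch01")
        uptime = node.add_variable(idx, "SysUpTime", 0)
        server.start()
        try:
            while True:              # republish SNMP data as OPC UA variables
                uptime.set_value(snmp_get_uptime("10.0.0.5"))
                time.sleep(30)
        finally:
            server.stop()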

  11. Federal Emergency Management Information System (FEMIS) system administration guide. Version 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burford, M.J.; Burnett, R.A.; Downing, T.R.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and analysis tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide defines FEMIS hardware and software requirements and gives instructions for installing the FEMIS software package. This document also contains information on the following: software installation for the FEMIS data servers, communication server, mail server, and the emergency management workstations; distribution media loading and FEMIS installation validation and troubleshooting; and system management of FEMIS users, login, privileges, and usage. The system administration utilities (tools), available in the FEMIS client software, are described for user accounts and site profile. This document also describes the installation and use of system and database administration utilities that will assist in keeping the FEMIS system running in an operational environment.

  12. Development of a Personal Digital Assistant (PDA) based client/server NICU patient data and charting system.

    PubMed

    Carroll, A E; Saluja, S; Tarczy-Hornoch, P

    2001-01-01

    Personal Digital Assistants (PDAs) offer clinicians the ability to enter and manage critical information at the point of care. Although PDAs have always been designed to be intuitive and easy to use, recent advances in technology have made them even more accessible. The ability to link data on a PDA (client) to a central database (server) allows for near-unlimited potential in developing point of care applications and systems for patient data management. Although many stand-alone systems exist for PDAs, none are designed to work in an integrated client/server environment. This paper describes the design, software and hardware selection, and preliminary testing of a PDA based patient data and charting system for use in the University of Washington Neonatal Intensive Care Unit (NICU). This system will be the subject of a subsequent study to determine its impact on patient outcomes and clinician efficiency.

  13. Cryptanalysis and Improvement of a Biometric-Based Multi-Server Authentication and Key Agreement Scheme

    PubMed Central

    Wang, Chengqi; Zhang, Xiao; Zheng, Zhiming

    2016-01-01

    With the growing security requirements of networks, biometric authentication schemes applied in the multi-server environment have become more crucial and widely deployed. In this paper, we propose a novel biometric-based multi-server authentication and key agreement scheme based on the cryptanalysis of Mishra et al.’s scheme. Informal and formal security analyses of our scheme are given, which demonstrate that our scheme satisfies the desirable security requirements. The presented scheme provides a variety of significant functionalities, including features not considered in most existing authentication schemes, such as user revocation or re-registration and biometric information protection. Compared with several related schemes, our scheme has more secure properties and lower computation cost, making it more appropriate for practical applications in remote distributed networks. PMID:26866606

  14. A robust anonymous biometric-based authenticated key agreement scheme for multi-server environments

    PubMed Central

    Huang, Yuanfei; Ma, Fangchao

    2017-01-01

    In order to improve the security in remote authentication systems, numerous biometric-based authentication schemes using smart cards have been proposed. Recently, Moon et al. presented an authentication scheme to remedy the flaws of Lu et al.’s scheme, and claimed that their improved protocol supports the required security properties. Unfortunately, we found that Moon et al.’s scheme still has weaknesses. In this paper, we show that Moon et al.’s scheme is vulnerable to insider attack, server spoofing attack, user impersonation attack and guessing attack. Furthermore, we propose a robust anonymous multi-server authentication scheme using public key encryption to remove the aforementioned problems. From the subsequent formal and informal security analysis, we demonstrate that our proposed scheme provides strong mutual authentication and satisfies the desirable security requirements. The functional and performance analysis shows that the improved scheme offers the most secure functionality and is computationally efficient. PMID:29121050

  15. A robust anonymous biometric-based authenticated key agreement scheme for multi-server environments.

    PubMed

    Guo, Hua; Wang, Pei; Zhang, Xiyong; Huang, Yuanfei; Ma, Fangchao

    2017-01-01

    In order to improve the security in remote authentication systems, numerous biometric-based authentication schemes using smart cards have been proposed. Recently, Moon et al. presented an authentication scheme to remedy the flaws of Lu et al.'s scheme, and claimed that their improved protocol supports the required security properties. Unfortunately, we found that Moon et al.'s scheme still has weaknesses. In this paper, we show that Moon et al.'s scheme is vulnerable to insider attack, server spoofing attack, user impersonation attack and guessing attack. Furthermore, we propose a robust anonymous multi-server authentication scheme using public key encryption to remove the aforementioned problems. From the subsequent formal and informal security analysis, we demonstrate that our proposed scheme provides strong mutual authentication and satisfies the desirable security requirements. The functional and performance analysis shows that the improved scheme offers the most secure functionality and is computationally efficient.

  16. Requirements for a network storage service

    NASA Technical Reports Server (NTRS)

    Kelly, Suzanne M.; Haynes, Rena A.

    1992-01-01

    Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LANs) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Services (NSS), and its requirements are described in this paper. The next section gives an application, or functional, description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.

  17. Implementation of Medical Information Exchange System Based on EHR Standard

    PubMed Central

    Han, Soon Hwa; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong

    2010-01-01

    Objectives To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. Methods To assure that patients' medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop the security systems essential for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents security, and network security. Results The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of the related hospitals. Conclusions This research chooses transfer items and defines document standards that follow CDA standards, such that exchange of CDA documents between different systems becomes possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information. PMID:21818447
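
    A minimal, invented-endpoint sketch of the registry/repository split described above: the registry server is asked where a CDA document lives, and the CDA Transfer Server of that repository is then asked for the document itself. The real MIES exchanges ebXML messages rather than plain REST calls.

        import json
        import urllib.request

        REGISTRY = "http://registry.example.org"   # hypothetical registry server

        def locate_cda(document_id):
            # Ask the registry which repository holds the CDA document.
            with urllib.request.urlopen(f"{REGISTRY}/cda/{document_id}/location") as r:
                return json.load(r)["repository_url"]

        def fetch_cda(document_id):
            # Retrieve the document from its repository's transfer server.
            repo = locate_cda(document_id)
            with urllib.request.urlopen(f"{repo}/cda/{document_id}") as r:
                return r.read()          # CDA XML, ready for the CDA viewer

        xml = fetch_cda("DOC-2010-000123")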

  18. Volume serving and media management in a networked, distributed client/server environment

    NASA Technical Reports Server (NTRS)

    Herring, Ralph H.; Tefend, Linda L.

    1993-01-01

    The E-Systems Modular Automated Storage System (EMASS) is a family of hierarchical mass storage systems providing complete storage/'file space' management. The EMASS volume server provides the flexibility to work with different clients (file servers), different platforms, and different archives with a 'mix and match' capability. The EMASS design considers all file management programs as clients of the volume server system. System storage capacities are tailored to customer needs ranging from small data centers to large central libraries serving multiple users simultaneously. All EMASS hardware is commercial off the shelf (COTS), selected to provide the performance and reliability needed in current and future mass storage solutions. All interfaces use standard commercial protocols and networks suitable to service multiple hosts. EMASS is designed to efficiently store and retrieve in excess of 10,000 terabytes of data. Current clients include CRAY's YMP Model E based Data Migration Facility (DMF), IBM's RS/6000 based Unitree, and CONVEX based EMASS File Server software. The VolSer software provides the capability to accept client or graphical user interface (GUI) commands from the operator's console and translate them to the commands needed to control any configured archive. The VolSer system offers advanced features to enhance media handling and particularly media mounting such as: automated media migration, preferred media placement, drive load leveling, registered MediaClass groupings, and drive pooling.

  19. Implementation of Medical Information Exchange System Based on EHR Standard.

    PubMed

    Han, Soon Hwa; Lee, Min Ho; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong

    2010-12-01

    To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. To assure that patients' medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop the security systems essential for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents security, and network security. The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of the related hospitals. This research chooses transfer items and defines document standards that follow CDA standards, such that exchange of CDA documents between different systems becomes possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information.

  20. Procedure: Ensuring EPA Public Content in the EPA Web Environment

    EPA Pesticide Factsheets

    This document outlines the procedures for ensuring access to EPA information by hosting EPA data and information on the epa.gov server. Additionally, it provides the procedures for obtaining waivers of this requirement.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritz, David J.; Harrison, Christopher B.; Perr, C. W.

    Choreographer is a "moving target defense system", designed to protect against attacks aimed at IP addresses without corresponding domain name system (DNS) lookups. It coordinates actions between a DNS server and a Network Address Translation (NAT) device to regularly change which publicly available IP addresses' traffic will be routed to the protected device versus routed to a honeypot. More details about how Choreographer operates can be found in Section 2: Introducing Choreographer. Operational considerations for the successful deployment of Choreographer can be found in Section 3. The Testing & Evaluation (T&E) for Choreographer involved 3 phases: Pre-testing, Code Analysis, andmore » Operational Testing. Pre-testing, described in Section 4, involved installing and configuring an instance of Choreographer and verifying it would operate as expected for a simple use case. Our findings were that it was simple and straightforward to prepare a system for a Choreographer installation as well as configure Choreographer to work in a representative environment. Code Analysis, described in Section 5, consisted of running a static code analyzer (HP Fortify) and conducting dynamic analysis tests using the Valgrind instrumentation framework. Choreographer performed well, such that only a few errors that might possibly be problematic in a given operating situation were identified. Operational Testing, described in Section 6, involved operating Choreographer in a representative environment created through Emulytics TM . Depending upon the amount of server resources dedicated to Choreographer vis-á-vis the amount of client traffic handled, Choreographer had varying degrees of operational success. In an environment with a poorly resourced Choreographer server and as few as 50-100 clients, Choreographer failed to properly route traffic over half the time. Yet, with a well-resourced server, Choreographer handled over 1000 clients without missrouting. Choreographer demonstrated sensitivity to low-latency connections as well as high volumes of traffic. In addition, depending upon the frequency of new connection requests and the size of the address range that Choreographer has to work with, it is possible for all benefits of Choreographer to be ameliorated by its need to allow DNS servers rather than the end client to make DNS requests. Conclusions and Recommendations, listed in Section 7, address the need to understand the specific use case where Choreographer would be deployed to assess whether there would be problems resulting from the operational considerations described in Section 3 or performance concerns from the results of Operational Testing in Section 6. Deployed in an appropriate architecture with sufficiently light traffic volumes and a well-provisioned server, it is quite likely that Choreographer would perform satisfactorily. Thus, we recommend further detailed testing, to potentially include Red Team testing, at such time a specific use case is identified« less

  2. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

    This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system is able to make a realistic composition of what is really happening in the area of the wildfire with dynamic 3D objects and the location of human and material resources in real time, providing a new perspective from which to analyze the wildfire information. The user is able to simulate and visualize the propagation of a fire on the terrain, integrating at the same time spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecasting using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire evolution on the landscape. A Level Of Detail (LOD) strategy contributes to improving the performance of the visualization system.
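
    A hypothetical sketch of the client-to-web-service hand-off: the application collects the user's parameters and posts them to the simulation service, which runs FARSITE and returns fire perimeters for 3D rendering. The endpoint and parameter names are invented.

        import json
        import urllib.request

        # Hypothetical web-service endpoint and parameter names; the application
        # described above forwards similar inputs to a FARSITE-based service.
        SIMULATION_URL = "http://fire.example.org/simulate"

        params = {
            "ignition_point": {"lat": 28.27, "lon": -16.64},
            "wind_speed_kmh": 35,
            "wind_direction_deg": 220,
            "fuel_model": "shrubland",
            "hours": 6,
        }

        req = urllib.request.Request(
            SIMULATION_URL,
            data=json.dumps(params).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            perimeters = json.load(resp)  # time-stamped fire perimeters to render in 3D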

  3. Convolutional neural network-based classification system design with compressed wireless sensor network images.

    PubMed

    Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil

    2018-01-01

    With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite for applying CNNs to real world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has a direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy of the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well-classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
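
    The pre-processing step can be sketched with the Pillow imaging library: spatial downsampling followed by palette-based color quantization. The resolution and palette size below are illustrative; the paper tunes them so that classification accuracy at the backend CNN is maintained.

        from PIL import Image   # Pillow; assumed available on the embedded device

        def compress_for_transmission(path, size=(64, 64), colors=16):
            """Shrink and color-quantize an image before sending it to the server.

            The exact resolution and palette size here are illustrative only.
            """
            img = Image.open(path).convert("RGB")
            img = img.resize(size)                   # spatial downsampling
            img = img.quantize(colors=colors)        # palette-based color quantization
            img.save("compressed.png", optimize=True)
            return "compressed.png"

        compress_for_transmission("capture.jpg")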

  4. VRML and Collaborative Environments: New Tools for Networked Visualization

    NASA Astrophysics Data System (ADS)

    Crutcher, R. M.; Plante, R. L.; Rajlich, P.

    We present two new applications that engage the network as a tool for astronomical research and/or education. The first is a VRML server which allows users over the Web to interactively create three-dimensional visualizations of FITS images contained in the NCSA Astronomy Digital Image Library (ADIL). The server's Web interface allows users to select images from the ADIL, fill in processing parameters, and create renderings featuring isosurfaces, slices, contours, and annotations; the often extensive computations are carried out on an NCSA SGI supercomputer server without the user having an individual account on the system. The user can then download the 3D visualizations as VRML files, which may be rotated and manipulated locally on virtually any class of computer. The second application is the ADILBrowser, a part of the NCSA Horizon Image Data Browser Java package. ADILBrowser allows a group of participants to browse images from the ADIL within a collaborative session. The collaborative environment is provided by the NCSA Habanero package which includes text and audio chat tools and a white board. The ADILBrowser is just an example of a collaborative tool that can be built with the Horizon and Habanero packages. The classes provided by these packages can be assembled to create custom collaborative applications that visualize data either from local disk or from anywhere on the network.

  5. System level traffic shaping in disk servers with heterogeneous protocols

    NASA Astrophysics Data System (ADS)

    Cano, Eric; Kruse, Daniele Francesco

    2014-06-01

    Disk access and tape migrations compete for network bandwidth in CASTOR's disk servers, over various protocols: RFIO, Xroot, root and GridFTP. As there are a limited number of tape drives, it is important to keep them busy all the time, at their nominal speed. With potentially hundreds of user read streams per server, the bandwidth for the tape migrations has to be guaranteed at a controlled level, and not left to the fair share the system gives by default. Xroot provides a prioritization mechanism, but using it would imply moving exclusively to the Xroot protocol, which is not possible in the short to mid-term time frame, as users are equally using all protocols. The greatest commonality of all those protocols is nothing more than the usage of TCP/IP. We therefore investigated the Linux kernel traffic shaper to control TCP/IP bandwidth. The performance and limitations of the traffic shaper have been understood in a test environment, and a satisfactory working point has been found for production. Notably, the negative impact of TCP offload engines on traffic shaping, and the limitations on the length of the traffic shaping rules, were discovered and measured. A suitable working point has been found and the traffic shaping is now successfully deployed in the CASTOR production systems at CERN. This system level approach could be transposed easily to other environments.
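
    A minimal sketch of system-level shaping with the Linux tc tool, driven here from Python. The interface name, rates, classification rule and port are examples only; the production CASTOR configuration is more involved.

        import subprocess

        IFACE = "eth0"   # example interface

        def sh(cmd):
            print("+", cmd)
            subprocess.run(cmd.split(), check=True)

        # HTB root qdisc: unclassified traffic falls into class 1:20.
        sh(f"tc qdisc add dev {IFACE} root handle 1: htb default 20")
        # Guaranteed bandwidth for tape migration streams...
        sh(f"tc class add dev {IFACE} parent 1: classid 1:10 htb rate 400mbit ceil 1000mbit")
        # ...and a capped share for the many user read streams.
        sh(f"tc class add dev {IFACE} parent 1: classid 1:20 htb rate 100mbit ceil 600mbit")
        # Steer traffic into the migration class by destination port (illustrative).
        sh(f"tc filter add dev {IFACE} parent 1: protocol ip u32 "
           f"match ip dport 5001 0xffff flowid 1:10")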

  6. Cross-standard user description in mobile, medical oriented virtual collaborative environments

    NASA Astrophysics Data System (ADS)

    Ganji, Rama Rao; Mitrea, Mihai; Joveski, Bojan; Chammem, Afef

    2015-03-01

    By combining four different open standards from ISO/IEC JTC1/SC29 WG11 (a.k.a. MPEG) and W3C, this paper advances an architecture for mobile, medical-oriented virtual collaborative environments. The various users are represented according to MPEG-UD (MPEG User Description), while security issues are addressed by deploying the WebID principles. On the server side, irrespective of their elementary types (text, image, video, 3D, …), the medical data are aggregated into hierarchical, interactive multimedia scenes represented in either the MPEG-4 BiFS or the HTML5 standard. This way, each type of content can be optimally encoded according to its particular constraints (semantics, medical practice, network conditions, etc.). The mobile device is responsible only for displaying the content (inside an MPEG player or an HTML5 browser) and capturing the user interaction. The overall architecture is implemented and tested under the framework of the MEDUSA European project, in partnership with medical institutions. The testbed considers a server emulated by a PC and heterogeneous user devices (tablets, smartphones, laptops) running the iOS, Android and Windows operating systems. The connection between the users and the server is ensured alternately by WiFi and 3G/4G networks.

  7. Indoor Navigation Design Integrated with Smart Phones and Rfid Devices

    NASA Astrophysics Data System (ADS)

    Ortakci, Y.; Demiral, E.; Atila, U.; Karas, I. R.

    2015-10-01

    High-rise, complex and huge buildings in cities are almost small cities in themselves, with their tens of floors, hundreds of corridors and rooms, and many passages. Due to the size and complexity of these buildings, people need guidance to find their way to a destination inside them. In this study, a mobile application is developed to visualize a pedestrian's indoor position in 3D on a smartphone, and RFID technology is used to detect the position of the pedestrian. While the pedestrian walks along the route, the smartphone guides him or her by displaying photos of the indoor environment along the way. An RFID (Radio-Frequency Identification) device is integrated into the system; the pedestrian carries it during the tour of the building, and it sends position data directly to the server every two seconds. On the other side, the pedestrian simply selects the destination point in the mobile application on the smartphone, which sends it to the server. A script on the server computes the shortest path from the pedestrian's position to the destination point and sends the environment photo of the first node on that path to the client as an indoor navigation module.
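
    The abstract does not name the routing algorithm used by the server-side script; Dijkstra's algorithm is a common choice for such indoor graphs. A minimal Python sketch, assuming the building is modelled as an adjacency list of node distances:

      import heapq

      def shortest_path(graph, start, goal):
          """Dijkstra's algorithm; graph: {node: [(neighbor, distance), ...]}."""
          queue = [(0.0, start, [start])]
          visited = set()
          while queue:
              cost, node, path = heapq.heappop(queue)
              if node == goal:
                  return cost, path
              if node in visited:
                  continue
              visited.add(node)
              for neighbor, dist in graph.get(node, []):
                  if neighbor not in visited:
                      heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
          return float("inf"), []

      # Hypothetical corridor graph: distances in meters between RFID nodes
      corridor = {"A": [("B", 5.0)], "B": [("A", 5.0), ("C", 3.0)], "C": [("B", 3.0)]}
      cost, path = shortest_path(corridor, "A", "C")  # -> (8.0, ['A', 'B', 'C'])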

  8. Perspectives of Telemedicine Development in Ukraine

    DTIC Science & Technology

    2001-05-01

    National Register of the individuals who are suffering from consequences of the Chernobyl disaster. This Register monitors the health of more than 700,000...integrate the Chernobyl Register net into ’UkrMedNet’ and to create a WWW server containing the Chernobyl Register information. To provide access to the

  9. UC Irvine CHRS Real-time Global Satellite Precipitation Monitoring System (G-WADI PERSIANN-CCS GeoServer) for Hydrometeorological Applications

    NASA Astrophysics Data System (ADS)

    Sorooshian, S.; Hsu, K. L.; Gao, X.; Imam, B.; Nguyen, P.; Braithwaite, D.; Logan, W. S.; Mishra, A.

    2015-12-01

    The G-WADI Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) GeoServer has been successfully developed by the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine in collaboration with UNESCO's International Hydrological Programme (IHP) and a number of its international centers. The system employs state-of-the-art technologies in remote sensing and artificial intelligence to estimate precipitation globally from satellite imagery in real time and at high spatiotemporal resolution (4 km, hourly). It offers graphical tools and data services to help users in emergency planning and management for natural disasters related to hydrological processes. The G-WADI PERSIANN-CCS GeoServer has been upgraded with new user-friendly functionalities, and the precipitation data generated by the GeoServer are disseminated to the user community through support provided by ICIWaRM (the International Center for Integrated Water Resources Management), UNESCO, and UC Irvine. Recently, a number of new applications for mobile devices have been developed by our students: RainMapper, available on the App Store and Google Play, provides the real-time PERSIANN-CCS observations, and a global crowd-sourced rainfall reporting system named iRain engages the public worldwide in providing qualitative information about real-time precipitation at their locations, which helps improve the quality of the PERSIANN-CCS data. A number of recent examples of the application and use of the G-WADI PERSIANN-CCS GeoServer information will also be presented.

  10. Monitoring Global Precipitation through UCI CHRS's RainMapper App on Mobile Devices

    NASA Astrophysics Data System (ADS)

    Nguyen, P.; Huynh, P.; Braithwaite, D.; Hsu, K. L.; Sorooshian, S.

    2014-12-01

    The Water and Development Information for Arid Lands-a Global Network (G-WADI) Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Cloud Classification System (PERSIANN-CCS) GeoServer has been developed through a collaboration between the Center for Hydrometeorology and Remote Sensing (CHRS) at the University of California, Irvine (UCI) and UNESCO's International Hydrological Programme (IHP). The G-WADI PERSIANN-CCS GeoServer provides near-real-time, high-resolution (0.04°, approximately 4 km) global (60°N-60°S) satellite precipitation estimated by the PERSIANN-CCS algorithm developed by the scientists at CHRS. The G-WADI PERSIANN-CCS GeoServer utilizes the open-source MapServer software from the University of Minnesota to provide user-friendly, web-based mapping and visualization of the satellite precipitation data. Recent efforts have been made by the scientists at CHRS to provide free on-the-go access to the PERSIANN-CCS precipitation data through an application named RainMapper for mobile devices. RainMapper provides visualization of global satellite precipitation for the most recent 3, 6, 12, 24, 48, and 72-hour periods, overlaid on various basemaps. RainMapper uses the Google Maps application programming interface (API) and embedded global positioning system (GPS) access to better monitor the global precipitation data on mobile devices. Functionalities such as geographical search with voice recognition technology make it easy for the user to explore near-real-time precipitation at a given location. RainMapper also allows the user to conveniently share precipitation information and visualizations with the public through social networks such as Facebook and Twitter. RainMapper is available for iOS and Android devices and can be downloaded (free) from the App Store and Google Play. The usefulness of RainMapper was demonstrated through an application in tracking the evolution of Typhoon Rammasun over the Philippines in mid-July 2014.

  11. A web service framework for astronomical remote observation in Antarctica by using satellite link

    NASA Astrophysics Data System (ADS)

    Jia, M.-h.; Chen, Y.-q.; Zhang, G.-y.; Jiang, P.; Zhang, H.; Wang, J.

    2018-07-01

    Many telescopes are deployed in Antarctica because it offers excellent astronomical observing conditions. However, because the Antarctic environment is harsh for humans, remote operation of telescopes is necessary for observation. Furthermore, communication with devices in Antarctica over a satellite link with low bandwidth and high latency limits the effectiveness of remote observation. This paper introduces a web service framework for remote astronomical observation in Antarctica. The framework is based on Python Tornado, with RTS2-HTTPD and REDIS used as the access interface to the telescope control system in Antarctica. The web service provides real-time updates through WebSocket. To improve the user experience and control effectiveness under poor satellite-link conditions, an agent server is deployed on the mainland to synchronize the Antarctic server's data and serve it to domestic users in China; the agent server forwards requests from domestic users to the Antarctic master server. The web service was deployed and tested on the Bright Star Survey Telescope (BSST) in Antarctica. Results show that the service meets the demands of real-time, multiuser remote observation, and that domestic users have a better experience of remote operation.
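
    Since the framework is based on Python Tornado with WebSocket push, a minimal sketch of how such real-time updates could be served is shown below; the endpoint path, port, and broadcast pattern are illustrative assumptions, not the BSST implementation.

      import tornado.ioloop
      import tornado.web
      import tornado.websocket

      class StatusSocket(tornado.websocket.WebSocketHandler):
          """Pushes telescope status updates to all connected browsers."""
          clients = set()

          def open(self):
              StatusSocket.clients.add(self)

          def on_close(self):
              StatusSocket.clients.discard(self)

          @classmethod
          def broadcast(cls, message):
              for client in cls.clients:
                  client.write_message(message)

      def main():
          app = tornado.web.Application([(r"/telescope/status", StatusSocket)])
          app.listen(8888)
          # a periodic callback could poll RTS2-HTTPD/REDIS and call StatusSocket.broadcast()
          tornado.ioloop.IOLoop.current().start()

      if __name__ == "__main__":
          main()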

  12. Development of a web geoservices platform for School of Environmental Sciences, Mahatma Gandhi University, Kerala, India

    NASA Astrophysics Data System (ADS)

    Satheendran, S.; John, C. M.; Fasalul, F. K.; Aanisa, K. M.

    2014-11-01

    Web geoservices are the natural graduation of Geographic Information Systems into a distributed environment accessed through a simple browser. They enable organizations to share domain-specific, rich, and dynamic spatial information over the web. The present study designed and developed a web-enabled GIS application for the School of Environmental Sciences, Mahatma Gandhi University, Kottayam, Kerala, India to publish various geographical databases to the public through its website. The development of this project is based on open-source tools and techniques, and the resulting portal site is platform independent. The premier WebGIS framework Geomoose is utilized, with Apache as the web server and UMN MapServer as the map server. The portal provides various customized tools to query the geographical database in different ways and to search for facilities in the geographical area such as banks, attractive places, hospitals, and hotels. The portal site was tested with the output geographical databases of two of the School's projects: 1) the Tourism Information System for the Malabar region of Kerala State, covering the five northern districts, and 2) the geoenvironmental appraisal of the Athirappilly Hydroelectric Project, covering the entire Chalakkudy river basin.

  13. Quality of service policy control in virtual private networks

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Wang, Hongbin; Zhou, Zhi; Zhou, Dongru

    2004-04-01

    This paper studies the QoS of VPNs in an environment where the public network prices connection-oriented services based on source, destination, and grade of service, and advertises these prices to its VPN customers (users). Because different QoS technologies produce different levels of service, they come with correspondingly different traffic classification rules and priority rules, and the Internet service provider (ISP) may need to build complex mechanisms separately for each node. To reduce the burden of network configuration, we design policy control technologies, considering mainly the directory server, policy server, policy manager, and policy enforcers. The policy decision point (PDP) makes control decisions according to policy rules, and in the network each policy enforcement point (PEP) applies those decisions to the network unit it controls. For IntServ and DiffServ, we adopt different policy control methods: (1) in IntServ, traffic uses the Resource Reservation Protocol (RSVP) to guarantee network resources; (2) in DiffServ, the policy server controls the DiffServ code points and per-hop behaviors (PHBs), and its PDP distributes this information to each network node. The policy server provides information search, a decision mechanism, decision delivery, and auto-configuration. To demonstrate the effectiveness of QoS policy control, we present corresponding simulations.
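
    In the DiffServ case, policy enforcement ultimately amounts to marking packets with DiffServ code points that map to per-hop behaviors. As a small illustration (not taken from the paper), an application can request such a marking by setting the ToS byte on its socket; the EF code point and the destination are assumptions:

      import socket

      # DSCP value for Expedited Forwarding (EF = 46); the DSCP occupies the
      # upper six bits of the ToS byte, hence the shift by two.
      DSCP_EF = 46 << 2

      sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
      sock.connect(("example.net", 80))  # illustrative destination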

  14. Wireless structural monitoring for homeland security applications

    NASA Astrophysics Data System (ADS)

    Kiremidjian, Garo K.; Kiremidjian, Anne S.; Lynch, Jerome P.

    2004-07-01

    This paper addresses the development of a robust, low-cost, low-power, and high-performance autonomous wireless monitoring system for civil assets such as large facilities, new construction, bridges, dams, and commercial buildings. The role of the system is to identify the onset, development, location, and severity of structural vulnerability and damage. The proposed system represents an enabling infrastructure for addressing structural vulnerabilities specifically associated with homeland security. The system concept is based on dense networks of "intelligent" wireless sensing units. The fundamental properties of a wireless sensing unit include: (a) interfaces to multiple sensors for measuring structural and environmental data (such as acceleration, displacement, pressure, strain, material degradation, temperature, gas agents, biological agents, humidity, corrosion, etc.); (b) processing of sensor data with embedded algorithms for assessing damage and environmental conditions; (c) peer-to-peer wireless communications for information exchange among units (thus enabling joint "intelligent" processing coordination) and storage of data and processed information in servers for information fusion; (d) ultra-low-power operation; (e) cost-effectiveness and compact size through the use of low-cost, small-size, off-the-shelf components. An integral component of the overall system concept is a decision support environment for the interpretation and dissemination of information to various decision makers.

  15. The EVER-EST portal as support for the Sea Monitoring Virtual Research Community, through the sharing of resources, enabling dynamic collaboration and promoting community engagement

    NASA Astrophysics Data System (ADS)

    Foglini, Federica; Grande, Valentina; De Leo, Francesco; Mantovani, Simone; Ferraresi, Sergio

    2017-04-01

    EVER-EST offers a framework based on advanced services delivered both at the e-infrastructure and the domain-specific level, with the objective of supporting each phase of the Earth Science research and information lifecycle. It provides innovative e-research services to Earth Science user communities for communication, cross-validation and the sharing of knowledge and science outputs. The project follows a user-centric approach: real use cases taken from pre-selected Virtual Research Communities (VRC) covering different Earth Science research scenarios drive the implementation of the Virtual Research Environment (VRE) services and capabilities. The Sea Monitoring community is involved in the evaluation of the EVER-EST infrastructure. The community of potential users is wide and heterogeneous, including both multi-disciplinary scientists and national/international agencies and authorities (e.g., MPA directors, technicians from regional agencies such as ARPA in Italy, and technicians working for the Ministry of the Environment) concerned with adopting better ways of measuring the quality of the environment. The scientific community has the main role of assessing the best criteria and indicators for defining Good Environmental Status (GES) in their own subregions, and of implementing methods, protocols and tools for monitoring the GES descriptors. According to the Marine Strategy Framework Directive (MSFD), the environmental status of marine waters is defined by 11 descriptors, with a proposed set of 29 associated criteria and 56 different indicators. The objective of the Sea Monitoring VRC is to provide useful and applicable contributions to the evaluation of the descriptors D1.Biodiversity, D2.Non-indigenous species and D6.Seafloor Integrity (http://ec.europa.eu/environment/marine/good-environmental-status/index_en.htm). The main challenges for the community members are: 1. discovery of existing data and products distributed among different infrastructures; 2. sharing methodologies for GES evaluation and monitoring; 3. working on the same workflows and data; 4. adopting shared, powerful tools for data processing (e.g., software and servers). The Sea Monitoring portal provides the VRC users with tools and services aimed at enhancing their ability to interoperate and share knowledge, experience and methods for GES assessment and monitoring, such as: • digital information services for data management, exploitation and preservation (accessibility of heterogeneous data sources including associated documentation); • e-collaboration services to communicate and share knowledge, ideas, protocols and workflows; • e-learning services to facilitate the use of common workflows for assessing GES indicators; • e-research services for workflow management, validation and verification, as well as visualization and interactive services. The current study is co-financed by the European Union's Horizon 2020 research and innovation programme under the EVER-EST project (Grant Agreement No. 674907).

  16. Proposal for a new CAPE-OPEN Object Model

    EPA Science Inventory

    Process simulation applications require the exchange of significant amounts of data between the flowsheet environment, unit operation model, and thermodynamic server. Packing and unpacking various data types and exchanging data using structured text-based architectures, including...

  17. SAME4HPC: A Promising Approach in Building a Scalable and Mobile Environment for High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karthik, Rajasekar

    2014-01-01

    In this paper, an architecture for building a Scalable And Mobile Environment for High-Performance Computing with spatial capabilities, called SAME4HPC, is described using cutting-edge technologies and standards such as Node.js, HTML5, ECMAScript 6, and PostgreSQL 9.4. Mobile devices are increasingly becoming powerful enough to run high-performance apps. At the same time, a significant number of low-end and older devices still rely heavily on the server or cloud infrastructure to do the heavy lifting. Our architecture aims to support both types of devices, providing high performance and a rich user experience. A cloud infrastructure consisting of OpenStack with Ubuntu and GeoServer, together with high-performance JavaScript frameworks, is among the key open-source and industry-standard technologies adopted in this architecture.

  18. Virtualizing access to scientific applications with the Application Hosting Environment

    NASA Astrophysics Data System (ADS)

    Zasada, S. J.; Coveney, P. V.

    2009-12-01

    The growing power and number of high performance computing resources made available through computational grids present major opportunities as well as a number of challenges to the user. At issue is how these resources can be accessed and how their power can be effectively exploited. In this paper we first present our views on the usability of contemporary high-performance computational resources. We introduce the concept of grid application virtualization as a solution to some of the problems with grid-based HPC usability. We then describe a middleware tool that we have developed to realize the virtualization of grid applications, the Application Hosting Environment (AHE), and describe the features of the new release, AHE 2.0, which provides access to a common platform of federated computational grid resources in standard and non-standard ways. Finally, we describe a case study showing how AHE supports clinical use of whole brain blood flow modelling in a routine and automated fashion.

    Program summary:
    Program title: Application Hosting Environment 2.0
    Catalogue identifier: AEEJ_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEJ_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU Public Licence, Version 2
    No. of lines in distributed program, including test data, etc.: not applicable
    No. of bytes in distributed program, including test data, etc.: 1 685 603 766
    Distribution format: tar.gz
    Programming language: Perl (server), Java (client)
    Computer: x86
    Operating system: Linux (server), Linux/Windows/MacOS (client)
    RAM: 134 217 728 bytes (server), 67 108 864 bytes (client)
    Classification: 6.5
    External routines: VirtualBox (server), Java (client)
    Nature of problem: The middleware that makes grid computing possible has been found by many users to be too unwieldy, and presents an obstacle to use rather than providing assistance [1,2]. Such problems are compounded when one attempts to harness the power of a grid, or a federation of different grids, rather than just a single resource on the grid.
    Solution method: To address the above problem, we have developed AHE, a lightweight interface designed to simplify the process of running scientific codes on a grid of HPC and local resources. AHE does this by introducing a layer of middleware between the user and the grid, which encapsulates much of the complexity associated with launching grid applications.
    Unusual features: The server is distributed as a VirtualBox virtual machine. VirtualBox (http://www.virtualbox.org) must be downloaded and installed in order to run the AHE server virtual machine. Details of how to do this are given in the AHE 2.0 Quick Start Guide.
    Running time: Not applicable
    References: [1] J. Chin, P.V. Coveney, Towards tractable toolkits for the grid: A plea for lightweight, useable middleware, NeSC Technical Report, 2004, http://nesc.ac.uk/technical_papers/UKeS-2004-01.pdf. [2] P.V. Coveney, R.S. Saksena, S.J. Zasada, M. McKeown, S. Pickles, The Application Hosting Environment: Lightweight middleware for grid-based computational science, Computer Physics Communications 176 (2007) 406-418.

  19. Efficient Byzantine Fault Tolerance for Scalable Storage and Services

    DTIC Science & Technology

    2009-07-01

    most critical applications must survive in ever harsher environments. Less synchronous networking delivers packets unreliably and unpredictably, and... synchronous environments to allowing asynchrony, and from tolerating crashes to tolerating some corruptions through ad-hoc consistency checks. Ad-hoc...servers are responsive. To support this thesis statement, this dissertation takes the following steps. First, it develops a new cryptographic primitive

  20. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped 3D terrain models from stereo image pairs; these terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  1. Casimage project: a digital teaching files authoring environment.

    PubMed

    Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman

    2004-04-01

    The goal of the Casimage project is to offer an authoring and editing environment, integrated with Picture Archiving and Communication Systems (PACS), for creating image-based electronic teaching files. This software is based on a client/server architecture allowing remote access of users to a central database. The authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools needed to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections into the RSNA web network dedicated to teaching files. This software can be installed on any PACS workstation, allowing users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including a thoracic imaging collection that was subsequently made available on CD-ROM, on our web site, and through the MIRC network for public access.

  2. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser

    PubMed Central

    Almeida, Jonas S.; Iriabho, Egiebade E.; Gorrepati, Vijaya L.; Wilkinson, Sean R.; Grüneberg, Alexander; Robbins, David E.; Hackney, James R.

    2012-01-01

    Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local “download and installation”. PMID:22934238

  3. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser.

    PubMed

    Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R

    2012-01-01

    Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  4. Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Burnett, R.A.; Carter, R.J.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides the information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of the FEMIS software, referred to as the FEMIS Application Software, resides on the PC clients and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertain to an individual EOC's jurisdiction are stored on the EOC's local server, and information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.

  5. WMT: The CSDMS Web Modeling Tool

    NASA Astrophysics Data System (ADS)

    Piper, M.; Hutton, E. W. H.; Overeem, I.; Syvitski, J. P.

    2015-12-01

    The Community Surface Dynamics Modeling System (CSDMS) has a mission to enable model use and development for research in earth surface processes. CSDMS strives to expand the use of quantitative modeling techniques, promotes best practices in coding, and advocates for the use of open-source software. To streamline and standardize access to models, CSDMS has developed the Web Modeling Tool (WMT), a RESTful web application with a client-side graphical interface and a server-side database and API that allows users to build coupled surface dynamics models in a web browser on a personal computer or a mobile device, and run them in a high-performance computing (HPC) environment. With WMT, users can: design a model from a set of components; edit component parameters; save models to a web-accessible server; share saved models with the community; submit runs to an HPC system; and download simulation results. The WMT client is an Ajax application written in Java with GWT, which allows developers to employ object-oriented design principles and development tools such as Ant, Eclipse and JUnit. For deployment on the web, the GWT compiler translates Java code to optimized and obfuscated JavaScript. The WMT client is supported on Firefox, Chrome, Safari, and Internet Explorer. The WMT server, written in Python and SQLite, is a layered system, with each layer exposing a web service API: wmt-db, a database of component, model, and simulation metadata and output; wmt-api, which configures and connects components; and wmt-exe, which launches simulations on remote execution servers. The database server provides, as JSON-encoded messages, the metadata for users to couple model components, including descriptions of component exchange items, uses and provides ports, and input parameters. Execution servers are network-accessible computational resources, ranging from HPC systems to desktop computers, containing the CSDMS software stack for running a simulation. Once a simulation completes, its output, in NetCDF, is packaged and uploaded to a data server where it is stored and from which a user can download it as a single compressed archive file.
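
    As a hedged illustration of this layered web-service design, the sketch below queries component metadata and submits a run over HTTP using the Python requests library. The base URL, resource paths, and payload fields are hypothetical stand-ins chosen for illustration, not the documented WMT API.

      import requests

      BASE = "https://csdms.example.edu/wmt"  # hypothetical server

      # wmt-db layer: fetch JSON metadata describing available components
      components = requests.get(f"{BASE}/components").json()

      # wmt-api layer: couple two components via their uses/provides ports
      # (payload fields are invented for illustration)
      model = {"name": "delta-growth",
               "links": [{"provides": "hydrotrend.discharge", "uses": "cem.discharge"}]}
      model_id = requests.post(f"{BASE}/models", json=model).json()["id"]

      # wmt-exe layer: launch the saved model on a remote execution server
      run = requests.post(f"{BASE}/runs", json={"model": model_id, "host": "hpc.example.edu"})
      print(run.json())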

  6. Phased development of a web-based PACS viewer

    NASA Astrophysics Data System (ADS)

    Gidron, Yoad; Shani, Uri; Shifrin, Mark

    2000-05-01

    The Web browser is an excellent environment for the rapid development of an effective and inexpensive PACS viewer. In this paper we share our experience in developing a browser-based viewer, from the inception and prototype stages to its current state of maturity. There are many operational advantages to a browser-based viewer, even when native viewers already exist in the system (with multiple and/or high-resolution screens): (1) it can be used on existing personal workstations throughout the hospital; (2) it is easy to make the service available from physicians' homes; (3) the viewer is extremely portable and platform independent. There is a wide variety of means available for implementing the browser-based viewer, since each file sent to the client by the server can perform some end-user or client/server interaction. These means range from HTML (HyperText Markup Language) files, through JavaScript, to Java applets. Some data types may also invoke plug-in code in the client; although this reduces the portability of the viewer, it provides needed efficiency in critical places. On the server side the range of means is also very rich: (1) a set of files: HTML, JavaScript, Java applets, etc.; (2) extensions of the server via cgi-bin programs; (3) extensions of the server via servlets; (4) any other helper application residing and working with the server to access the DICOM archive. The viewer architecture consists of two basic parts: the first performs query and navigation through the DICOM archive image folders; the second handles image access and display. While the first part deals with low data traffic, it involves many database transactions. The second part is simple as far as access transactions are concerned, but requires much more data traffic and display functionality. Our web-based viewer has gone through three development stages, characterized by the complexity of the means and tools employed on both the client and server sides.

  7. A vision-based tool for the control of hydraulic structures in sewer systems

    NASA Astrophysics Data System (ADS)

    Nguyen, L.; Sage, D.; Kayal, S.; Jeanbourquin, D.; Rossi, L.

    2009-04-01

    During rain events, the total amount of the wastewater/stormwater mixture cannot be treated in the wastewater treatment plant; the overflow goes directly into the environment (lakes, rivers, streams) via devices called combined sewer overflows (CSOs). This water is untreated and is recognized as an important source of pollution. In most cases, the quantity of overflowed water is unknown due to high hydraulic turbulence during rain events, and this quantity is often significant. For this reason, monitoring the water flow and the water level is of crucial environmental importance. Robust monitoring of sewer systems is a challenging task to achieve. Indeed, the environment inside sewer systems is inherently harsh and hostile: constant humidity of 100%, fast and large water level changes, a corrosive atmosphere, the presence of gas, difficult access, and solid debris inside the flow. Flow monitoring based on traditional probes placed inside the water (such as a Doppler flow meter) is difficult to conduct because of the solid material transported by the flow. Probes placed outside the flow, such as ultrasonic water level probes, are often used; however, the measurement is generally made at only one particular point. Experience has shown that the water level in CSOs during rain events is far from constant due to hydraulic turbulence, so such probes output uncertain information, and checking the reliability of the data is impossible. The HydroPix system proposes a novel approach to the monitoring of sewers based on video images, without contact with the water flow. The goal of this system is to provide a monitoring tool for wastewater system managers (end users). The hardware was chosen to suit the harsh conditions of sewer systems: the cameras are 100% waterproof and corrosion-resistant; infrared LED illumination is used (waterproof, low power consumption); and a waterproof case contains the recording and communication system. The monitoring software has the following requirements: visual analysis of particular hydraulic behavior, automatic vision-based flow measurements, an automatic alarm system for particular events (overflows, risk of flooding, etc.), a database for data management (images, events, measurements, etc.), and the ability to be controlled remotely. The software is implemented in a modular server/client architecture under the LabVIEW development system. We have conducted conclusive in situ tests in various sewer configurations (CSOs, stormwater sewerage, WWTP); they have shown the ability of HydroPix to perform accurate monitoring of hydraulic structures. Visual information demonstrated a better understanding of the flow behavior in a complex and difficult environment.

  8. Assessment of risk for asthma initiation and cancer and heart disease deaths among patrons and servers due to secondhand smoke exposure in restaurants and bars

    PubMed Central

    Liu, Ruiling; Bohac, David L; Gundel, Lara A; Hewett, Martha J; Apte, Michael G; Hammond, S Katharine

    2014-01-01

    Background: Despite efforts to reduce exposure to secondhand smoke (SHS), only 5% of the world's population enjoy smoke-free restaurants and bars. Methods: The lifetime excess risk (LER) of cancer death, ischaemic heart disease (IHD) death, and asthma initiation among non-smoking restaurant and bar servers and patrons in Minnesota and the US was estimated using weighted field measurements of SHS constituents in Minnesota, existing data on tobacco use, and multiple dose-response models. Results: A continuous approach estimated a LER of lung cancer death (LCD) of 18×10⁻⁶ (95% CI 13 to 23×10⁻⁶) for patrons visiting only designated non-smoking sections, 80×10⁻⁶ (95% CI 66 to 95×10⁻⁶) for patrons visiting only smoking venues/sections, and 802×10⁻⁶ (95% CI 658 to 936×10⁻⁶) for servers in smoking-permitted venues. An attributable-risk (exposed/non-exposed) approach estimated a similar LER of LCD, a LER of IHD death of about 10⁻² for non-smokers with average SHS exposure from all sources, and a LER of asthma initiation of about 5% for servers with SHS exposure at work only. These risks correspond to 214 LCDs and 3001 IHD deaths among the general non-smoking population and 1420 new asthma cases among non-smoking servers in the US each year due to SHS exposure in restaurants and bars alone. Conclusions: Health risks for patrons and servers from SHS exposure in restaurants and bars alone are well above the acceptable level. Restaurants and bars should be a priority for governments' efforts to create smoke-free environments and should not be exempt from smoking bans. PMID:23407112

  9. A WebGIS system on the base of satellite data processing system for marine application

    NASA Astrophysics Data System (ADS)

    Gong, Fang; Wang, Difeng; Huang, Haiqing; Chen, Jianyu

    2007-10-01

    From 2002 to 2004, a satellite data processing system for marine applications was built at the State Key Laboratory of Satellite Ocean Environment Dynamics (Second Institute of Oceanography, State Oceanic Administration). The system receives satellite data from TERRA, AQUA, NOAA-12/15/16/17/18, and FY-1D and automatically generates Level 3 and Level 4 products (single-orbit and merged multi-orbit products) derived from Level 0 data, under the control of an operational control subsystem. The products created by this system currently play an important role in marine environment monitoring, disaster monitoring, and research. A distribution platform has now been developed on this foundation: a WebGIS system for querying and browsing oceanic remote sensing data. The system is built on the Oracle database system, using the ArcSDE spatial database engine and other middleware for database operations; the J2EE framework was adopted as the development model, with Oracle 9.2 as the back-end DBMS and server. Using a standard browser (such as IE 6.0), users can browse the public service information provided by the system, including oceanic remote sensing data, with functions for zooming, panning, refreshing, roaming, further data inquiry, attribute search, data download, and so on. The system is still under test. It will become an important distribution platform for Chinese satellite ocean environment products by topic and category (including sea surface temperature, chlorophyll concentration, and so on), improving the utilization of satellite products and promoting data sharing and oceanic remote sensing research.

  10. A Wireless Physiological Signal Monitoring System with Integrated Bluetooth and WiFi Technologies.

    PubMed

    Yu, Sung-Nien; Cheng, Jen-Chieh

    2005-01-01

    This paper proposes a wireless patient monitoring system that integrates Bluetooth and WiFi wireless technologies. A wireless, portable, multi-parameter device was designed to acquire physiological signals and transmit them to a local server via Bluetooth. Four kinds of monitor units were designed to communicate via WiFi: a local monitor unit, a control center, mobile devices (personal digital assistants; PDAs), and a web page. The use of various monitor units is intended to meet the different medical requirements of different medical personnel. The system was demonstrated to improve mobility and flexibility for both patients and medical personnel, which further improves the quality of health care.

  11. Research on cloud-based remote measurement and analysis system

    NASA Astrophysics Data System (ADS)

    Gao, Zhiqiang; He, Lingsong; Su, Wei; Wang, Can; Zhang, Changfan

    2015-02-01

    The promising potential of cloud computing and its convergence with technologies such as cloud storage, cloud push, and mobile computing allows for the creation and delivery of new types of cloud services. Building on the ideas of cloud computing, this paper presents a cloud-based remote measurement and analysis system. The system consists of three parts: a signal acquisition client, a web server deployed on the cloud service, and a remote client. The system is a website developed using ASP.NET and Flex RIA technology, which resolves the trade-off between the two monitoring modes, B/S and C/S. The platform, deployed on the cloud server, supplies customer condition monitoring and data analysis services over the Internet. The signal acquisition device is responsible for collecting data (sensor data, audio, video, etc.) and regularly pushes the monitoring data to the cloud storage database. Data acquisition equipment in this system needs only data collection and networking functions, as found in smartphones and smart sensors. The system's scale can adjust dynamically according to the number of applications and users, so resources are not wasted. As a representative case study, we developed a prototype system based on the Ali cloud service, using a rotor test rig as the research object. Experimental results demonstrate that the proposed system architecture is feasible.

  12. Implementation of remote monitoring and managing switches

    NASA Astrophysics Data System (ADS)

    Leng, Junmin; Fu, Guo

    2010-12-01

    In order to strengthen the safety performance of the network and provide greater convenience and efficiency for operators and managers, a system for remote monitoring and management of switches has been designed and implemented using advanced network technology and existing network resources. A fast Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and supports a number of protocols. The Motion-JPEG image compression algorithm is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and management system is designed and implemented according to the current infrastructure of the network and switches, and the control and administration software is designed accordingly. The Java Server Pages (JSP) dynamic web development platform is utilized in the system, and a SQL (Structured Query Language) Server database is used to store and access image information, network messages, and user data. The reliability and security of the system are further strengthened by access control. The software is made cross-platform, so that multiple operating systems (UNIX, Linux, and Windows) are supported. The application of the system can greatly reduce manpower costs and can quickly find and solve problems.

  13. Real-time ground motions monitoring system developed by Raspberry Pi 3

    NASA Astrophysics Data System (ADS)

    Chen, P.; Jang, J. P.; Chang, H.; Lin, C. R.; Lin, P. P.; Wang, C. C.

    2016-12-01

    Ground-motion seismic stations are usually installed in special geological areas, such as areas with high landslide potential, active volcanoes, or the vicinity of faults, to monitor possible geohazards in real time. Based on these demands, three main issues need to be considered: size, low power consumption, and real-time data transmission. The Raspberry Pi 3 has suitable characteristics for these requirements, so we developed a real-time ground-motion monitoring system based on it. The Raspberry Pi is a credit-card-sized single-board computer running a programmable Linux operating system. It measures only 85.6 by 53.98 by 17 mm, provides USB and Ethernet interfaces, and requires only a 5 V, 2.1 A power supply. It can easily be powered by solar panels and can transmit real-time data through Ethernet or over a mobile connection via a USB adapter. Because the Raspberry Pi is a small computer, services, software, and GUIs can be developed very flexibly, such as a basic web server, an FTP server, SSH access, and real-time visualization tools. So far, we have developed ten instruments with online, real-time data transmission and installed them on Taiping Mountain in Taiwan to monitor geohazards such as mudslides.

  14. Health monitoring of offshore structures using wireless sensor network: experimental investigations

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, Srinivasan; Chitambaram, Thailammai

    2016-04-01

    This paper presents a detailed methodology for deploying a wireless sensor network in offshore structures for structural health monitoring (SHM). Traditional SHM is carried out through visual inspections and wired systems, which are complicated, require larger installation space, and are tedious to decommission. Wireless sensor networks can enhance health monitoring through the deployment of scalable, dense sensor networks that consume less space and less power. The proposed methodology focuses on determining the serviceability status of large floating platforms under environmental loads using wireless sensors. Servers analyze the acquired data for exceedance of threshold values. On exceedance, the SHM architecture triggers an alarm or early warning in the form of alert messages to the engineer-in-charge on board; emergency response plans can then be activated, minimizing the risk involved and mitigating the economic losses arising from accidents. In the present study, wired and wireless sensors are installed in an experimental model and the acquired structural responses are compared. The wireless system comprises a Raspberry Pi board, which is programmed to transmit the acquired data to the server using a WiFi adapter. The data are then hosted on a webpage for further post-processing, as desired.

  15. The Red Atrapa Sismos (Quake Catcher Network in Mexico): assessing performance during large and damaging earthquakes.

    USGS Publications Warehouse

    Dominguez, Luis A.; Yildirim, Battalgazi; Husker, Allen L.; Cochran, Elizabeth S.; Christensen, Carl; Cruz-Atienza, Victor M.

    2015-01-01

    Each volunteer computer monitors ground motion and communicates using the Berkeley Open Infrastructure for Network Computing (BOINC; Anderson, 2004). Using a standard short-term average/long-term average (STA/LTA) algorithm (Earle and Shearer, 1994; Cochran, Lawrence, Christensen, Chung, 2009; Cochran, Lawrence, Christensen, and Jakka, 2009), the volunteer computer and sensor systems detect abrupt changes in the acceleration recordings. Each time a possible trigger is declared, a small package containing sensor and ground-motion information is streamed to one of the QCN servers (Chung et al., 2011). Trigger signals, correlated in space and time, are then processed by the QCN server to look for potential earthquakes.
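
    A minimal NumPy sketch of a classic STA/LTA trigger of the kind described is given below; the window lengths and threshold are illustrative defaults, not QCN's operational settings.

      import numpy as np

      def sta_lta_trigger(accel, fs, sta_win=1.0, lta_win=30.0, threshold=3.0):
          """Return sample indices where the STA/LTA ratio exceeds the threshold."""
          sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
          energy = accel.astype(float) ** 2
          # moving averages over short and long trailing windows
          sta = np.convolve(energy, np.ones(sta_n) / sta_n, mode="valid")
          lta = np.convolve(energy, np.ones(lta_n) / lta_n, mode="valid")
          sta = sta[lta_n - sta_n:]             # align both series on window end time
          ratio = sta / np.maximum(lta, 1e-12)  # guard against division by zero
          return np.nonzero(ratio > threshold)[0] + lta_n - 1

      # Example: triggers = sta_lta_trigger(accel_samples, fs=50.0)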

  16. Modeling And Simulation Of Multimedia Communication Networks

    NASA Astrophysics Data System (ADS)

    Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.

    1989-05-01

    In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations and to simultaneously support real-time voice communications. Storage and transmission of original raster-scanned images and of images compressed according to pyramid data structures are considered. Different types of browsing, as well as various image sizes and bit rates in the DQDB MAN, are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter used to evaluate system performance. Simulation results show that image browsing can be supported by the DQDB MAN.

  17. A Proposal of TLS Implementation for Cross Certification Model

    NASA Astrophysics Data System (ADS)

    Kaji, Tadashi; Fujishiro, Takahiro; Tezuka, Satoru

    Today, TLS is widely used to achieve secure communication, and it relies on PKI for server and/or client authentication. However, its usual PKI environment, known as the “multiple trust anchors environment,” creates the problem that the verifier has to maintain a huge number of CA certificates in the ubiquitous network, because the growing number of terminals connected to the network brings a growing number of CAs. Most terminals in the ubiquitous network will not have enough memory to hold such a huge number of CA certificates, so another PKI environment, the “cross certification environment,” is useful for the ubiquitous network. But because current TLS is designed for the multiple trust anchors model, it cannot work efficiently in the cross-certification model. This paper proposes a TLS implementation method that supports the cross certification model efficiently. Our proposal reduces the size of the messages exchanged between the TLS client and the TLS server during the handshake process, and is therefore suitable for implementing TLS in terminals that do not have much computing power or memory in the ubiquitous network.
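
    For context (this is not the paper's implementation), the difference between the two trust models shows up in verifier configuration: a multiple-trust-anchors verifier must load many CA certificates, whereas a cross-certification verifier keeps only its own CA and relies on cross-certificates presented in the peer's chain. A hedged Python sketch with hypothetical file and host names:

      import socket
      import ssl

      # Cross-certification model: the verifier trusts only its own CA; the
      # server's chain must include a cross-certificate bridging to this anchor.
      context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
      context.load_verify_locations(cafile="own_ca.pem")  # hypothetical single trust anchor

      with socket.create_connection(("server.example", 443)) as raw:
          with context.wrap_socket(raw, server_hostname="server.example") as tls:
              print(tls.getpeercert()["subject"])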

  18. The future of remote ECG monitoring systems.

    PubMed

    Guo, Shu-Li; Han, Li-Na; Liu, Hong-Wei; Si, Quan-Jin; Kong, De-Feng; Guo, Fu-Su

    2016-09-01

    Remote ECG monitoring systems are becoming commonplace medical devices for remote heart monitoring. In recent years, remote ECG monitoring systems have been applied to the monitoring of various kinds of heart disease, and the quality of transmission and reception of the ECG signals during the remote process has kept advancing. However, challenges remain. This report focuses on the three components of a remote ECG monitoring system: the patient (the end user), the doctor workstation, and the remote server, reviewing and evaluating the imminent challenges in wearable systems, packet loss in remote transmission, portable ECG monitoring systems, patient ECG data collection systems, and ECG signal transmission, including real-time processing of the ST segment, R wave, RR interval, and QRS complex. The paper tries to clarify future development strategies for remote ECG monitoring, which can be helpful in guiding its research and development.
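
    As an illustration of the kind of real-time QRS processing mentioned above, the following sketch is a much-simplified variant of the classic Pan-Tompkins approach (not any specific system reviewed here): it detects R peaks from the smoothed squared derivative, from which RR intervals follow; the thresholds are illustrative.

      import numpy as np

      def detect_r_peaks(ecg, fs, threshold_factor=0.6, refractory_s=0.25):
          """Simplified R-peak detector: squared derivative, smoothing, fixed threshold."""
          energy = np.diff(ecg) ** 2                 # emphasize steep QRS slopes
          win = max(1, int(0.15 * fs))               # ~150 ms integration window
          smooth = np.convolve(energy, np.ones(win) / win, mode="same")
          threshold = threshold_factor * smooth.max()
          peaks, last = [], -int(refractory_s * fs)
          for i in range(1, len(smooth) - 1):
              is_local_max = smooth[i] >= smooth[i - 1] and smooth[i] >= smooth[i + 1]
              if smooth[i] > threshold and is_local_max and i - last > refractory_s * fs:
                  peaks.append(i)  # accept peak, then enforce the refractory period
                  last = i
          return np.array(peaks)

      # RR intervals in seconds: np.diff(detect_r_peaks(ecg, fs)) / fs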

  19. A remote data access architecture for home-monitoring health-care applications.

    PubMed

    Lin, Chao-Hung; Young, Shuenn-Tsong; Kuo, Te-Son

    2007-03-01

    With the aging of the population and the increasing patient preference for receiving care in their own homes, remote home care is one of the fastest growing areas of health care in Taiwan and many other countries. Many remote home-monitoring applications have been developed and implemented to enable both formal and informal caregivers to have remote access to patient data so that they can respond instantly to any abnormalities of in-home patients. The aim of this technology is to give both patients and relatives better control of the health care, reduce the burden on informal caregivers and reduce visits to hospitals and thus result in a better quality of life for both the patient and his/her family. To facilitate their widespread adoption, remote home-monitoring systems take advantage of the low-cost features and popularity of the Internet and PCs, but are inherently exposed to several security risks, such as virus and denial-of-service (DoS) attacks. These security threats exist as long as the in-home PC is directly accessible by remote-monitoring users over the Internet. The purpose of the study reported in this paper was to improve the security of such systems, with the proposed architecture aimed at increasing the system availability and confidentiality of patient information. A broker server is introduced between the remote-monitoring devices and the in-home PCs. This topology removes direct access to the in-home PC, and a firewall can be configured to deny all inbound connections while the remote home-monitoring application is operating. This architecture helps to transfer the security risks from the in-home PC to the managed broker server, on which more advanced security measures can be implemented. The pros and cons of this novel architecture design are also discussed and summarized.

  20. A Wireless MEMS-Based Inclinometer Sensor Node for Structural Health Monitoring

    PubMed Central

    Ha, Dae Woong; Park, Hyo Seon; Choi, Se Woon; Kim, Yousok

    2013-01-01

    This paper proposes a wireless inclinometer sensor node for structural health monitoring (SHM) that can be applied to civil engineering and building structures subjected to various loadings. The inclinometer used in this study employs a method for calculating the tilt based on the difference between the static acceleration and the acceleration due to gravity, using a micro-electro-mechanical system (MEMS)-based accelerometer. A wireless sensor node was developed through which tilt measurement data are wirelessly transmitted to a monitoring server. This node consists of a slave node that uses a short-distance wireless communication system (RF 2.4 GHz) and a master node that uses a long-distance telecommunication system (code division multiple access, CDMA). The communication distance limitation, which is recognized as an important issue in wireless monitoring systems, has been resolved via these two wireless communication components. The reliability of the proposed wireless inclinometer sensor node was verified experimentally by comparing the values measured by the inclinometer and subsequently transferred to the monitoring server via wired and wireless transfer methods, permitting a performance evaluation of the wireless communication sensor nodes. The experimental results indicated that the two transfer systems (wired and wireless) yielded almost identical values at tilt angles greater than 1°, and a uniform difference of approximately 0.0032° (0.76% of the 0.42° tilt angle) was observed at tilt angles less than 0.42°, regardless of the tilt size. This result was deemed to be within the allowable range of measurement error in SHM. Thus, the wireless transfer system proposed in this study was experimentally verified for practical application in a structural health monitoring system. PMID:24287533
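
    The tilt computation described above reduces to comparing the measured static acceleration against gravity: for a single axis, θ = arcsin(a/g). A worked sketch with a hypothetical accelerometer reading:

        # Tilt angle from a MEMS accelerometer's static acceleration.
        # The reading below is hypothetical.
        import math

        G = 9.81          # gravitational acceleration, m/s^2
        a_axis = 0.172    # measured static acceleration along one axis, m/s^2

        tilt_rad = math.asin(a_axis / G)
        tilt_deg = math.degrees(tilt_rad)
        print(f"tilt = {tilt_deg:.3f} deg")   # ~1.005 deg for this reading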

  1. High-throughput neuroimaging-genetics computational infrastructure

    PubMed Central

    Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics, and associations that are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web services. These pipeline workflows are represented as portable XML objects, which transfer the execution instructions and user specifications from the client machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
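
    The portable XML workflow objects mentioned above can be pictured with a toy example. The element names below are hypothetical stand-ins, not the actual LONI Pipeline schema:

        # Build a tiny, portable XML workflow description and serialize it;
        # the resulting string is what would travel to a remote pipeline server.
        import xml.etree.ElementTree as ET

        wf = ET.Element("workflow", name="skull-strip-then-segment")
        s1 = ET.SubElement(wf, "step", id="1", tool="brain_extract")
        ET.SubElement(s1, "input").text = "subject_T1.nii"
        s2 = ET.SubElement(wf, "step", id="2", tool="tissue_segment", after="1")
        ET.SubElement(s2, "param", name="classes").text = "3"

        xml_payload = ET.tostring(wf, encoding="unicode")
        print(xml_payload)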

  2. Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks

    PubMed Central

    Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu

    2017-01-01

    Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments, issues such as the small size, limited capabilities, and limited data-processing capacity of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of WBSN devices, data security is one of the prevailing issues that affect the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users’ medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing the security of data managed on remote servers has necessarily become an integral requirement of WBSN applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users in storing and sharing data, thus exhibiting the potential for WBSN deployments within community environments. Furthermore, our scheme enriches the user experience by offering public verifiability, forward security mechanisms, and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM), outperforming existing cloud-assisted WBSN models. PMID:28475110
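
    The audit flow itself can be pictured with a deliberately simplified stand-in: challenge randomly chosen blocks and verify a keyed digest for each. The paper's actual scheme is a certificateless construction with cryptographic proofs and public verifiability; this sketch only conveys the challenge-response shape:

        # Simplified keyed challenge-response audit over stored blocks.
        import hashlib
        import hmac
        import os
        import secrets

        def make_tags(blocks, key):
            # client keeps per-block tags; the server keeps the blocks
            return [hmac.new(key, f"{i}".encode() + b, hashlib.sha256).digest()
                    for i, b in enumerate(blocks)]

        def audit(blocks, tags, key, n_challenged=2):
            idxs = secrets.SystemRandom().sample(range(len(blocks)), n_challenged)
            for i in idxs:  # server returns block i; auditor recomputes the tag
                proof = hmac.new(key, f"{i}".encode() + blocks[i],
                                 hashlib.sha256).digest()
                if not hmac.compare_digest(proof, tags[i]):
                    return False
            return True

        data = [os.urandom(64) for _ in range(8)]
        k = os.urandom(32)
        tags = make_tags(data, k)
        print(audit(data, tags, k))   # True while the server is honest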

  3. Public Auditing with Privacy Protection in a Multi-User Model of Cloud-Assisted Body Sensor Networks.

    PubMed

    Li, Song; Cui, Jie; Zhong, Hong; Liu, Lu

    2017-05-05

    Wireless Body Sensor Networks (WBSNs) are gaining importance in the era of the Internet of Things (IoT). The modern medical system is a particular area where WBSN techniques are being increasingly adopted for various fundamental operations. Despite such increasing deployments, issues such as the small size, limited capabilities, and limited data-processing capacity of the sensor devices restrain their adoption in resource-demanding applications. Though providing computing and storage supplements from cloud servers can potentially enrich the capabilities of WBSN devices, data security is one of the prevailing issues that affect the reliability of cloud-assisted services. Sensitive applications such as modern medical systems demand assurance of the privacy of the users' medical records stored in distant cloud servers. Since it is economically impossible to set up private cloud servers for every client, auditing the security of data managed on remote servers has necessarily become an integral requirement of WBSN applications relying on public cloud servers. To this end, this paper proposes a novel certificateless public auditing scheme with integrated privacy protection. The multi-user model in our scheme supports groups of users in storing and sharing data, thus exhibiting the potential for WBSN deployments within community environments. Furthermore, our scheme enriches the user experience by offering public verifiability, forward security mechanisms, and revocation of illegal group members. Experimental evaluations demonstrate the security effectiveness of our proposed scheme under the Random Oracle Model (ROM), outperforming existing cloud-assisted WBSN models.

  4. [Construction and analysis of a monitoring system with remote real-time multiple physiological parameters based on cloud computing].

    PubMed

    Zhu, Lingyun; Li, Lianjie; Meng, Chunyan

    2014-12-01

    Existing multiple-physiological-parameter real-time monitoring systems have problems such as insufficient server capacity for physiological data storage and analysis, so that data consistency cannot be guaranteed, poor real-time performance, and other issues caused by the growing scale of data. We therefore proposed a new solution based on cloud computing, with clustered background storage and processing of multiple physiological parameters. Our studies introduced batch processing for longitudinal analysis of patients' historical data. The process covered resource virtualization in the IaaS layer of the cloud platform, construction of a real-time computing platform in the PaaS layer, reception and analysis of the data stream in the SaaS layer, and the bottleneck problem of multi-parameter data transmission. The result was real-time transmission, storage, and analysis of a large amount of physiological information. Simulation test results showed that the remote multiple-physiological-parameter monitoring system based on the cloud platform had obvious advantages in processing time and load balancing over the traditional server model. This architecture solved problems of traditional remote medical services, including long turnaround time, poor real-time analysis performance, and lack of extensibility. It provides technical support for a "wearable wireless sensor plus mobile wireless transmission plus cloud computing service" mode of home health monitoring with multiple wirelessly monitored physiological parameters.

  5. Agentless Cloud-Wide Monitoring of Virtual Disk State

    DTIC Science & Technology

    2015-10-01

    packages include Apache, MySQL, PHP, Ruby on Rails, Java Application Servers, and many others. Figure 2.12 shows the results of a run of the Software... Linux, Apache, MySQL, PHP (LAMP) set of applications. Thus, many file-level update logs will contain the same versions of files repeated across many

  6. Wireless Sensor-Based Smart-Clothing Platform for ECG Monitoring

    PubMed Central

    Lin, Chung-Chih; Yu, Yan-Shuo

    2015-01-01

    The goal of this study is to use wireless sensor technologies to develop a smart clothes service platform for health monitoring. Our platform consists of smart clothes, a sensor node, a gateway server, and a health cloud. The smart clothes have fabric electrodes to detect electrocardiography (ECG) signals. The sensor node improves the accuracy of QRS-complex detection through morphology analysis and reduces power consumption through its power-saving transmission functionality. The gateway server provides a reconfigurable finite state machine (RFSM) software architecture for abnormal ECG detection to support online updating. Most normal ECG can be filtered out, and abnormal ECG is further analyzed in the health cloud. Three experiments were conducted to evaluate the platform's performance. The results demonstrate that the signal-to-noise ratio (SNR) of the smart clothes exceeds 37 dB, which is within the “very good signal” interval. The average QRS sensitivity and positive prediction are above 99.5%. Power-saving transmission reduces power consumption by a factor of nearly 1980 in the best-case analysis. PMID:26640512
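
    The headline figures follow from standard definitions: SNR(dB) = 10·log10(P_signal/P_noise), sensitivity Se = TP/(TP+FN), and positive prediction +P = TP/(TP+FP). A worked sketch with hypothetical counts and powers chosen to land near the reported values:

        # Worked example of the reported metrics; all inputs are hypothetical.
        import math

        p_signal, p_noise = 5.3, 0.001            # arbitrary power units
        snr_db = 10 * math.log10(p_signal / p_noise)
        print(f"SNR = {snr_db:.1f} dB")           # ~37.2 dB, a "very good signal"

        tp, fn, fp = 1995, 5, 5                   # QRS detections on a test record
        sensitivity = tp / (tp + fn)
        positive_prediction = tp / (tp + fp)
        print(f"Se = {sensitivity:.2%}, +P = {positive_prediction:.2%}")  # 99.75%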

  7. Wireless Sensor-Based Smart-Clothing Platform for ECG Monitoring.

    PubMed

    Wang, Jie; Lin, Chung-Chih; Yu, Yan-Shuo; Yu, Tsang-Chu

    2015-01-01

    The goal of this study is to use wireless sensor technologies to develop a smart clothes service platform for health monitoring. Our platform consists of smart clothes, a sensor node, a gateway server, and a health cloud. The smart clothes have fabric electrodes to detect electrocardiography (ECG) signals. The sensor node improves the accuracy of QRS-complex detection through morphology analysis and reduces power consumption through its power-saving transmission functionality. The gateway server provides a reconfigurable finite state machine (RFSM) software architecture for abnormal ECG detection to support online updating. Most normal ECG can be filtered out, and abnormal ECG is further analyzed in the health cloud. Three experiments were conducted to evaluate the platform's performance. The results demonstrate that the signal-to-noise ratio (SNR) of the smart clothes exceeds 37 dB, which is within the "very good signal" interval. The average QRS sensitivity and positive prediction are above 99.5%. Power-saving transmission reduces power consumption by a factor of nearly 1980 in the best-case analysis.

  8. Technical Manual for the Geospatial Stream Flow Model (GeoSFM)

    USGS Publications Warehouse

    Asante, Kwabena O.; Artan, Guleid A.; Pervez, Md Shahriar; Bandaragoda, Christina; Verdin, James P.

    2008-01-01

    The monitoring of wide-area hydrologic events requires the use of geospatial and time series data available in near-real time. These data sets must be manipulated into information products that speak to the location and magnitude of the event. Scientists at the U.S. Geological Survey Earth Resources Observation and Science (USGS EROS) Center have implemented a hydrologic modeling system which consists of an operational data processing system and the Geospatial Stream Flow Model (GeoSFM). The data processing system generates daily forcing evapotranspiration and precipitation data from various remotely sensed and ground-based data sources. To allow for rapid implementation in data scarce environments, widely available terrain, soil, and land cover data sets are used for model setup and initial parameter estimation. GeoSFM performs geospatial preprocessing and postprocessing tasks as well as hydrologic modeling tasks within an ArcView GIS environment. The integration of GIS routines and time series processing routines is achieved seamlessly through the use of dynamically linked libraries (DLLs) embedded within Avenue scripts. GeoSFM is run operationally to identify and map wide-area streamflow anomalies. Daily model results including daily streamflow and soil water maps are disseminated through Internet map servers, flood hazard bulletins and other media.

  9. Unidata Cyberinfrastructure in the Cloud

    NASA Astrophysics Data System (ADS)

    Ramamurthy, M. K.; Young, J. W.

    2016-12-01

    Data services, software, and user support are critical components of geosciences cyber-infrastructure that help researchers advance science. With the maturity of and significant advances in cloud computing, it has recently emerged as an alternative new paradigm for developing and delivering a broad array of services over the Internet. Cloud computing is now mature enough for use in many areas of science and education, bringing the benefits of virtualized and elastic remote services to infrastructure, software, computation, and data. Cloud environments reduce the amount of time and money spent to procure, install, and maintain new hardware and software, and reduce costs through resource pooling and shared infrastructure. Given the enormous potential of cloud-based services, Unidata has been moving to augment its software, services, and data delivery mechanisms to align with the cloud-computing paradigm. To realize the above vision, Unidata has worked toward: * Providing access to many types of data from a cloud (e.g., via the THREDDS Data Server, RAMADDA and EDEX servers); * Deploying data-proximate tools to easily process, analyze, and visualize those data in a cloud environment for consumption by anyone, on any device, from anywhere, at any time; * Developing and providing a range of pre-configured and well-integrated tools and services that can be deployed by any university in their own private or public cloud settings. Specifically, Unidata has developed Docker images for "containerized applications", making them easy to deploy. Docker helps to create "disposable" installs and eliminates many configuration challenges. Containerized applications include tools for data transport, access, analysis, and visualization: THREDDS Data Server, Integrated Data Viewer, GEMPAK, Local Data Manager, RAMADDA Data Server, and Python tools; * Leveraging Jupyter as a central platform and hub with its powerful set of interlinking tools to connect interactively data servers, Python scientific libraries, scripts, and workflows; * Exploring end-to-end modeling and prediction capabilities in the cloud; * Partnering with NOAA and public cloud vendors (e.g., Amazon and OCC) on the NOAA Big Data Project to harness their capabilities and resources for the benefit of the academic community.

  10. A Virtual Research Environment for a Secondary Ion Mass Spectrometer (SIMS)

    NASA Astrophysics Data System (ADS)

    Wiedenbeck, M.; Schäfer, L.; Klump, J.; Galkin, A.

    2013-12-01

    Overview: This poster describes the development of a Virtual Research Environment for the Secondary Ion Mass Spectrometer (SIMS) at GFZ Potsdam. Background: Secondary Ion Mass Spectrometers (SIMS) are extremely sensitive instruments for analyzing the surfaces of solid and thin-film samples. These instruments are rare and expensive, and experienced operators are highly sought after. As such, measurement time is a precious commodity, until now only accessible to small numbers of researchers. The challenge: The Virtual SIMS Project aims to set up a Virtual Research Environment for the operation of the CAMECA IMS 1280-HR instrument at GFZ Potsdam. The objective of the VRE is to provide SIMS access not only to researchers locally present in Potsdam but also to scientists working with SIMS cooperation partners in, e.g., South Africa, Brazil, or India. The requirements: The system should address the complete spectrum of laboratory procedures, from online application for measurement time, to remote access for data acquisition, to data archiving for subsequent publication and future reuse. The approach: The targeted Virtual SIMS Environment will consist of: 1. a Web Server running the Virtual SIMS website, providing general information about the project, lab access proposal forms, and a calendar for the timing of project-related tasks; 2. a LIMS Server, responsible for scheduling procedures, data management and, if applicable, accounting and billing; 3. a Remote SIMS Tool, devoted to the operation of the experiment within a remote control environment; 4. a Publishing System, which supports the publication of results in cooperation with the GFZ Library services; 5. a Training Simulator, which offers the opportunity to rehearse experiments and to prepare for possible events such as power outages or interruptions to broadband services. First results: The SIMS Virtual Research Environment will be mainly based on open-source software, the only exception being the CAMECA IMS 1280-HR SIMS operating under LabView. The Publishing System will be based on eSciDoc, which is already successfully used by the GFZ scientific library. For the LIMS Server we are currently testing various options. The challenge, however, is the successful integration of all the various components and, where necessary, the definition of useful interfaces between the modules.

  11. A Modular IoT Platform for Real-Time Indoor Air Quality Monitoring.

    PubMed

    Benammar, Mohieddine; Abdaoui, Abderrazak; Ahmad, Sabbir H M; Touati, Farid; Kadri, Abdullah

    2018-02-14

    The impact of air quality on health and on life comfort is well established. In many societies, vulnerable elderly and young populations spend most of their time indoors. Therefore, indoor air quality monitoring (IAQM) is of great importance to human health. Engineers and researchers are increasingly focusing their efforts on the design of real-time IAQM systems using wireless sensor networks. This paper presents an end-to-end IAQM system enabling measurement of CO₂, CO, SO₂, NO₂, O₃, Cl₂, ambient temperature, and relative humidity. In IAQM systems, remote users usually use a local gateway to connect wireless sensor nodes in a given monitoring site to the external world for ubiquitous access to data. In this work, the role of the gateway in processing collected air quality data and its reliable dissemination to end-users through a web server is emphasized. A mechanism for the backup and restoration of the collected data in the case of an Internet outage is presented. The system is adapted to an open-source Internet-of-Things (IoT) web-server platform, called Emoncms, for live monitoring and long-term storage of the collected IAQM data. A modular IAQM architecture is adopted, which results in a smart scalable system that allows seamless integration of various sensing technologies, wireless sensor networks (WSNs), and smart mobile standards. The paper gives full hardware and software details of the proposed solution. Sample IAQM results collected in various locations are also presented to demonstrate the abilities of the system.
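
    Emoncms exposes an HTTP input API for exactly this kind of gateway-to-server push. A minimal sketch, assuming a hypothetical local server, node name, and API key; the exact parameters accepted can vary by Emoncms version, so treat this as illustrative:

        # Push one IAQM reading to an Emoncms server via its HTTP input API.
        import json
        import urllib.parse
        import urllib.request

        EMONCMS = "http://emoncms.example.org"    # hypothetical local server
        APIKEY = "YOUR_WRITE_APIKEY"              # placeholder write key

        reading = {"co2": 612, "co": 0.4, "no2": 0.021, "temp": 24.3, "rh": 41.0}
        qs = urllib.parse.urlencode({
            "node": "iaqm-lab1",
            "fulljson": json.dumps(reading),
            "apikey": APIKEY,
        })
        with urllib.request.urlopen(f"{EMONCMS}/input/post?{qs}", timeout=10) as r:
            print(r.read().decode())              # Emoncms replies "ok" on success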

  12. ATM LAN Emulation: Getting from Here to There.

    ERIC Educational Resources Information Center

    Learn, Larry L., Ed.

    1995-01-01

    Discusses current LAN (local area network) configuration and explains ATM (asynchronous transfer mode) as the future telecommunications transport. Highlights include LAN emulation, which enables the interconnection of legacy LANs and the new ATM environment; virtual LANs; broadcast servers; and standards. (LRW)

  13. Medical Information Management System (MIMS) CareWindows.

    PubMed Central

    Stiphout, R. M.; Schiffman, R. M.; Christner, M. F.; Ward, R.; Purves, T. M.

    1991-01-01

    The demonstration of MIMS/CareWindows will include: (1) a review of the application environment and development history, (2) a demonstration of a very large, comprehensive clinical information system with a cost effective graphic user server and communications interface. PMID:1807755

  14. Beating the tyranny of scale with a private cloud configured for Big Data

    NASA Astrophysics Data System (ADS)

    Lawrence, Bryan; Bennett, Victoria; Churchill, Jonathan; Juckes, Martin; Kershaw, Philip; Pepler, Sam; Pritchard, Matt; Stephens, Ag

    2015-04-01

    The Joint Analysis System, JASMIN, consists of five significant hardware components: a batch computing cluster, a hypervisor cluster, bulk disk storage, high-performance disk storage, and access to a tape robot. Each of the computing clusters consists of a heterogeneous set of servers, supporting a range of possible data analysis tasks, and a unique network environment makes it relatively trivial to migrate servers between the two clusters. The high-performance disk storage will include the world's largest (publicly visible) deployment of the Panasas parallel disk system. Initially deployed in April 2012, JASMIN has already undergone two major upgrades, culminating in a system which, by April 2015, will have in excess of 16 PB of disk and 4000 cores. Layered on the basic hardware are a range of services, from managed services, such as the curated archives of the Centre for Environmental Data Archival or the data analysis environment for the National Centres for Atmospheric Science and Earth Observation, to a generic Infrastructure as a Service (IaaS) offering for the UK environmental science community. Here we present examples of some of the big data workloads being supported in this environment, ranging from data management tasks, such as checksumming 3 PB of data held in over one hundred million files, to science tasks, such as re-processing satellite observations with new algorithms, or calculating new diagnostics on petascale climate simulation outputs. We will demonstrate how the provision of a cloud environment closely coupled to a batch computing environment, all sharing the same high-performance disk system, allows massively parallel processing without the necessity to shuffle data excessively, even as it supports many different virtual communities, each with guaranteed performance. We will discuss the advantages of having a heterogeneous range of servers with available memory from tens of GB at the low end to (currently) two TB at the high end. The JASMIN environment has some limitations: the high-performance disk system is not fully available in the IaaS environment, and a planned ability to burst compute-heavy jobs into the public cloud is not yet fully available. There are load balancing and performance issues that need to be understood. We will conclude with projections for future usage, and our plans to meet those requirements.
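
    A checksumming pass over a huge file collection, as in the data-management workload above, is embarrassingly parallel. A minimal sketch using a process pool; the archive path and worker count are hypothetical:

        # Parallel SHA-256 checksumming of a directory tree.
        import hashlib
        from multiprocessing import Pool
        from pathlib import Path

        def sha256_of(path: Path) -> tuple:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                    h.update(chunk)
            return str(path), h.hexdigest()

        if __name__ == "__main__":
            files = [p for p in Path("/archive/cmip5").rglob("*") if p.is_file()]
            with Pool(processes=16) as pool:
                for name, digest in pool.imap_unordered(sha256_of, files,
                                                        chunksize=64):
                    print(digest, name)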

  15. Mobile collaborative medical display system.

    PubMed

    Park, Sanghun; Kim, Wontae; Ihm, Insung

    2008-03-01

    Because of recent advances in wireless communication technologies, the world of mobile computing is flourishing with a variety of applications. In this study, we present an integrated architecture for a personal digital assistant (PDA)-based mobile medical display system that supports collaborative work between remote users. We aim to develop a system that enables users in different regions to share a working environment for collaborative visualization with the potential for exploring huge medical datasets. Our system consists of three major components: mobile client, gateway, and parallel rendering server. The mobile client serves as a front end and enables users to choose the visualization and control parameters interactively and cooperatively. The gateway handles requests and responses between mobile clients and the rendering server for efficient communication. Through the gateway, it is possible to share working environments between users, allowing them to work together in computer supported cooperative work (CSCW) mode. Finally, the parallel rendering server is responsible for performing heavy visualization tasks. Our experience indicates that some features currently available to our mobile clients for collaborative scientific visualization are limited due to the poor performance of mobile devices and the low bandwidth of wireless connections. However, as mobile devices and wireless network systems are experiencing considerable elevation in their capabilities, we believe that our methodology will be utilized effectively in building quite responsive, useful mobile collaborative medical systems in the very near future.

  16. Flexible software architecture for user-interface and machine control in laboratory automation.

    PubMed

    Arutunian, E B; Meldrum, D R; Friedman, N A; Moody, S E

    1998-10-01

    We describe a modular, layered software architecture for automated laboratory instruments. The design consists of a sophisticated user interface, a machine controller and multiple individual hardware subsystems, each interacting through a client-server architecture built entirely on top of open Internet standards. In our implementation, the user-interface components are built as Java applets that are downloaded from a server integrated into the machine controller. The user-interface client can thereby provide laboratory personnel with a familiar environment for experiment design through a standard World Wide Web browser. Data management and security are seamlessly integrated at the machine-controller layer using QNX, a real-time operating system. This layer also controls hardware subsystems through a second client-server interface. This architecture has proven flexible and relatively easy to implement and allows users to operate laboratory automation instruments remotely through an Internet connection. The software architecture was implemented and demonstrated on the Acapella, an automated fluid-sample-processing system that is under development at the University of Washington.

  17. Intelligent open-architecture controller using knowledge server

    NASA Astrophysics Data System (ADS)

    Nacsa, Janos; Kovacs, George L.; Haidegger, Geza

    2001-12-01

    In an ideal scenario of intelligent machine tools [22], the human machinist is almost entirely replaced by the controller. During the last decade many efforts have been made to get closer to this ideal scenario, but the way information is processed within the CNC has not changed much. The paper summarizes the requirements of an intelligent CNC, evaluating the different research efforts in this field that use different artificial intelligence (AI) methods. The need for an open CNC architecture has emerged in many places around the world; the second part of the paper introduces and briefly compares these efforts. In the third part, a low-cost concept for intelligent and open systems, named Knowledge Server for Controllers (KSC), is introduced. It allows multiple devices to meet their intelligent-processing needs using the same server, which is capable of processing intelligent data. In the final part, the KSC concept is used in an open CNC environment to build up some elements of an intelligent CNC. Preliminary results of the implementation are also presented.

  18. Fieldservers and Sensor Service Grid as Real-time Monitoring Infrastructure for Ubiquitous Sensor Networks

    PubMed Central

    Honda, Kiyoshi; Shrestha, Aadit; Witayangkurn, Apichon; Chinnachodteeranun, Rassarin; Shimamura, Hiroshi

    2009-01-01

    The fieldserver is an Internet-based observation robot that provides an outdoor solution for monitoring environmental parameters in real time. The data from its sensors can be collected on a central server infrastructure and published on the Internet. The information from the sensor network will contribute to monitoring and modeling of various environmental issues in Asia, including agriculture, food, pollution, disasters, and climate change. An initiative called Sensor Asia is developing an infrastructure called the Sensor Service Grid (SSG), which integrates fieldservers and Web GIS to realize easy and low-cost installation and operation of ubiquitous field sensor networks. PMID:22574018

  19. Report on the Installation and Testing of the Advanced Weather Interactive Processing System (AWIPS II) for U.S. Navy Applications

    DTIC Science & Technology

    2018-04-24

    Environment (CAVE). The report also details NRL's work in extending AWIPS II EDEX to ingest and decode Navy movement report instructions (MOVREP... phase of this work involved obtaining a copy of the AWIPS II client, the Common Access Visualization Environment (CAVE), as well as a copy of the server... assess the development environment of CAVE for supporting a Navy-specific application. In consultation with FWC-San Diego we chose to work with

  20. Integrated multimodal human-computer interface and augmented reality for interactive display applications

    NASA Astrophysics Data System (ADS)

    Vassiliou, Marius S.; Sundareswaran, Venkataraman; Chen, S.; Behringer, Reinhold; Tam, Clement K.; Chan, M.; Bangayan, Phil T.; McGee, Joshua H.

    2000-08-01

    We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eyetracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
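
    The 'modality server' pattern (wrap an engine behind a small socket protocol so clients stay vendor-independent) can be sketched minimally as follows; the port, command set, and canned reply are hypothetical:

        # A line-oriented socket wrapper around a hypothetical speech engine.
        import socketserver

        class SpeechModalityHandler(socketserver.StreamRequestHandler):
            def handle(self):
                for line in self.rfile:              # one request per line
                    cmd = line.decode().strip()
                    if cmd == "RECOGNIZE":
                        # a real server would call into the wrapped engine here
                        self.wfile.write(b"RESULT open the map\n")
                    else:
                        self.wfile.write(b"ERROR unknown command\n")

        if __name__ == "__main__":
            with socketserver.TCPServer(("0.0.0.0", 9099),
                                        SpeechModalityHandler) as srv:
                srv.serve_forever()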

  1. Federal Emergency Management Information System (FEMIS) System Administration Guide for FEMIS Version 1.4.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Bower, J.C.; Burnett, R.A.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  2. Federal Emergency Management Information System (FEMIS), Installation Guide for FEMIS 1.4.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Burnett, R.A.; Carter, R.J.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  3. Federal Emergency Management Information System (FEMIS) system administration guide. Version 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Burnett, R.A.; Downing, T.R.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and analysis tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide defines FEMIS hardware and software requirements and gives instructions for installing the FEMIS software package. This document also contains information on the following: software installation for the FEMIS data servers, communication server, mail server, and the emergency management workstations; distribution media loading and FEMIS installation validation and troubleshooting; and system management of FEMIS users, login privileges, and usage. The system administration utilities (tools), available in the FEMIS client software, are described for user accounts and site profiles. This document also describes the installation and use of system and database administration utilities that will assist in keeping the FEMIS system running in an operational environment. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via telecommunications links.

  4. Federal Emergency Management Information System (FEMIS) Data Management Guide for FEMIS Version 1.4.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angel, L.K.; Bower, J.C.; Burnett, R.A.

    1999-06-29

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.

  5. Development of a novel SCADA system for laboratory testing.

    PubMed

    Patel, M; Cole, G R; Pryor, T L; Wilmot, N A

    2004-07-01

    This document summarizes the supervisory control and data acquisition (SCADA) system that allows communication with, and control of the output of, various I/O devices in the renewable energy systems and components test facility, RESLab. This SCADA system differs from traditional SCADA systems in that it supports a continuously changing operating environment, depending on the test to be performed. The SCADA system is based on the concept of having one master I/O server and multiple client computer systems. This paper describes the main features and advantages of this dynamic SCADA system, the connections of various field devices to the master I/O server, the device servers, and numerous software features used in the system. The system is based on the graphical programming language "LabVIEW" and its "Datalogging and Supervisory Control" (DSC) module. The DSC module supports a real-time database called the "tag engine," which performs the I/O operations with all field devices attached to the master I/O server and handles communications with the other tag engines running on the client computers connected via a local area network. Generic and detailed communication block diagrams illustrating the hierarchical structure of this SCADA system are presented. The flow diagram outlining a complete test performed using this system in one of its standard configurations is also described.
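
    The tag-engine idea (a real-time table of named values refreshed by I/O polling and shared with clients) can be sketched outside LabVIEW as well. A minimal stand-in, with hypothetical tag names and scan rate:

        # A toy thread-safe tag table refreshed by a polling loop.
        import threading
        import time

        class TagEngine:
            def __init__(self):
                self._tags = {}
                self._lock = threading.Lock()

            def update(self, name, value):
                with self._lock:
                    self._tags[name] = value

            def read(self, name):
                with self._lock:
                    return self._tags[name]

        def poll_field_devices(engine):
            while True:
                # a real system would query the master I/O server's devices here
                engine.update("pv_array.voltage", 48.7)
                engine.update("inverter.power_kw", 3.2)
                time.sleep(1.0)   # 1 s scan rate

        engine = TagEngine()
        threading.Thread(target=poll_field_devices, args=(engine,),
                         daemon=True).start()
        time.sleep(1.5)
        print(engine.read("pv_array.voltage"))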

  6. Cooperative runtime monitoring

    NASA Astrophysics Data System (ADS)

    Hallé, Sylvain

    2013-11-01

    Requirements on message-based interactions can be formalised as an interface contract that specifies constraints on the sequence of possible messages that can be exchanged by multiple parties. At runtime, each peer can monitor incoming messages and check that the contract is being followed correctly by their respective senders. We introduce cooperative runtime monitoring, where a recipient 'delegates' its monitoring task to the sender, which is required to provide evidence that the message it sends complies with the contract. In turn, this evidence can be quickly checked by the recipient, which is then assured of the sender's compliance with the contract without doing the monitoring computation by itself. A particular application of this concept is shown on web services, where service providers can monitor and enforce contract compliance of third-party clients at a small cost on the server side, without having to certify or digitally sign them.
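
    The key asymmetry is that the sender carries the monitoring state and the recipient performs only a constant-time check per message. A minimal sketch with the contract given as a finite-state automaton; the contract itself (login before query, then logout) is a hypothetical example:

        # Contract as a DFA: (state, message) -> next state.
        TRANSITIONS = {
            ("start", "login"): "ready",
            ("ready", "query"): "ready",
            ("ready", "logout"): "start",
        }

        def check_step(claimed_state, message):
            """O(1) recipient-side check: valid next state, or None."""
            return TRANSITIONS.get((claimed_state, message))

        # The sender tracks its own state and ships (message, state) pairs
        # as evidence; the recipient only validates one transition each time.
        state = "start"
        for msg in ["login", "query", "query", "logout"]:
            nxt = check_step(state, msg)
            assert nxt is not None, f"contract violated at {msg!r}"
            state = nxt
        print("trace compliant")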

  7. The Other Infrastructure: Distance Education's Digital Plant.

    ERIC Educational Resources Information Center

    Boettcher, Judith V.; Kumar, M. S. Vijay

    2000-01-01

    Suggests a new infrastructure--the digital plant--for supporting flexible Web campus environments. Describes four categories which make up the infrastructure: personal communication tools and applications; network of networks for the Web campus; dedicated servers and software applications; software applications and services from external…

  8. Networks: The Telecommunications Infrastructure and Impacts of Change.

    ERIC Educational Resources Information Center

    Learn, Larry L.

    1988-01-01

    This overview of the telecommunications environment discusses: (1) influences of technology, economics, politics, and government; (2) legal separations and jurisdictions; (3) pricing policies; and (4) bypass of local facilities. Probable changes and impacts on national and local information servers, local telecommunications carriers, and…

  9. Designing an autonomous environment for mission critical operation of the EUVE satellite

    NASA Technical Reports Server (NTRS)

    Abedini, Annadiana; Malina, Roger F.

    1994-01-01

    Since the launch of NASA's Extreme Ultraviolet Explorer (EUVE) satellite in 1992, there have been only a handful of occurrences that have warranted manual intervention in the EUVE Science Operations Center (ESOC). So, in an effort to reduce costs, the current environment is being redesigned to utilize a combination of off-the-shelf packages and recently developed artificial intelligence (AI) software to automate the monitoring of the science payload and ground systems. The successful implementation of systemic automation would allow the ESOC to evolve from a seven day/week, three-shift operation to a seven day/week, one-shift operation. First, it was necessary to identify all areas considered mission critical. These were defined as follows: (1) The telemetry stream must be monitored autonomously and anomalies identified. (2) Duty personnel must be automatically paged and informed of the occurrence of an anomaly. (3) The 'basic' state of the ground system must be assessed. (4) Monitors should check that the systems and processes needed to continue in a 'healthy' operational mode are working at all times. (5) Network loads should be monitored to ensure that they stay within established limits. (6) Connectivity to Goddard Space Flight Center (GSFC) systems should be monitored as well, not just for connectivity of the network itself but also for the ability to transfer files. (7) All necessary peripheral devices should be monitored. This would include the disks, routers, tape drives, printers, tape carousel, and power supplies. (8) System daemons such as the archival daemon, the Sybase server, the payload monitoring software, and any other necessary processes should be monitored to ensure that they are operational. (9) The monitoring system needs to be redundant so that the failure of a single machine will not paralyze the monitors. (10) Notification should be done by looking through a table of the pager numbers for current 'on call' personnel. The software should be capable of dialing out to notify, sending email, and producing error logs. (11) The system should have knowledge of when real-time passes and tape recorder dumps will occur and should confirm that these passes and data transmissions are successful. Once the design criteria were established, the design team split into two groups: one that addressed the tracking, commanding, and health and safety of the science payload and another group that addressed the ground systems and communications aspects of the overall system.
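
    Items (2), (8), and (10) of this list amount to a monitor-and-page loop: probe the critical daemons and, on failure, page the on-call person from a lookup table. A minimal sketch, with hypothetical process names, pager table, and a stand-in paging gateway:

        # Probe critical daemons and page on-call staff on failure.
        import subprocess
        import time

        ON_CALL = {"weekday": "555-0101", "weekend": "555-0202"}   # pager table
        CRITICAL = ["archival_daemon", "sybase_server", "payload_monitor"]

        def is_running(proc):
            # 'pgrep -x' exits 0 if a process with that exact name exists
            return subprocess.run(["pgrep", "-x", proc],
                                  capture_output=True).returncode == 0

        def page(number, text):
            # stand-in for a dial-out/email notification gateway
            print(f"PAGE {number}: {text}")

        while True:
            for proc in CRITICAL:
                if not is_running(proc):
                    day = time.strftime("%a")
                    key = "weekend" if day in ("Sat", "Sun") else "weekday"
                    page(ON_CALL[key], f"{proc} is down")
            time.sleep(60)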

  10. Handheld Devices with Wide-Area Wireless Connectivity: Applications in Astronomy Educational Technology and Remote Computational Control

    NASA Astrophysics Data System (ADS)

    Budiardja, R. D.; Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    Wireless technology implemented with handheld devices has attractive features because of the potential to access large amounts of data and the prospect of on-the-fly computational analysis from a device that can be carried in a shirt pocket. We shall describe applications of such technology to the general paradigm of making digital wireless connections from the field to upload information and queries to network servers, executing (potentially complex) programs and controlling data analysis and/or database operations on fast network computers, and returning real-time information from this analysis to the handheld device in the field. As illustration, we shall describe several client/server programs that we have written for applications in teaching introductory astronomy. For example, one program allows static and dynamic properties of astronomical objects to be accessed in a remote observation laboratory setting using a digital cell phone or PDA. Another implements interactive quizzing over a cell phone or PDA using a 700-question introductory astronomy quiz database, thus permitting students to study for astronomy quizzes in any environment in which they have a few free minutes and a digital cell phone or wireless PDA. Another allows one to control and monitor a computation done on a Beowulf cluster by changing the parameters of the computation remotely and retrieving the result when the computation is done. The presentation will include hands-on demonstrations with real devices. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  11. The IRI/LDEO Climate Data Library: Helping People use Climate Data

    NASA Astrophysics Data System (ADS)

    Blumenthal, M. B.; Grover-Kopec, E.; Bell, M.; del Corral, J.

    2005-12-01

    The IRI Climate Data Library (http://iridl.ldeo.columbia.edu/) is a library of datasets. By library we mean a collection of things, collected from both near and far, designed to make them more accessible for the library's users. Our datasets come from many different sources, many different "data cultures", many different formats. By dataset we mean a collection of data organized as multidimensional dependent variables, independent variables, and sub-datasets, along with the metadata (particularly use-metadata) that makes it possible to interpret the data in a meaningful manner. Ingrid, which provides the infrastructure for the Data Library, is an environment that lets one work with datasets: read, write, request, serve, view, select, calculate, transform, and so on. It hides an extraordinary amount of technical detail from the user, letting the user think in terms of manipulations to datasets rather than manipulations of files of numbers. Among other things, this hidden technical detail could be accessing data on servers in other places, doing only the small needed portion of an enormous calculation, or translating to and from a variety of formats and between "data cultures". These operations are presented as a collection of virtual directories and documents on a web server, so that an ordinary web client can instantiate a calculation simply by requesting the resulting document or image. Building on this infrastructure, we (and others) have created collections of dynamically updated images to facilitate monitoring aspects of the climate system, as well as linking these images to the underlying data. We have also created specialized interfaces to address the particular needs of user groups that IRI needs to support.
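
    The 'virtual document' idea (an ordinary HTTP GET instantiates the calculation behind the URL) can be sketched with the standard library. The dataset and operation names below are hypothetical:

        # GET /dataset/mean computes and returns the result on demand.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        import statistics

        DATASET = [21.4, 22.0, 20.9, 23.1, 22.6]          # toy "dataset"
        OPS = {"mean": statistics.fmean, "max": max, "min": min}

        class VirtualDoc(BaseHTTPRequestHandler):
            def do_GET(self):
                op = self.path.rsplit("/", 1)[-1]
                if op in OPS:
                    self.send_response(200)
                    body = f"{OPS[op](DATASET)}\n".encode()
                else:
                    self.send_response(404)
                    body = b"unknown operation\n"
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)

        HTTPServer(("127.0.0.1", 8080), VirtualDoc).serve_forever()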

  12. Map-IT! A Web-Based GIS Tool for Watershed Science Education.

    ERIC Educational Resources Information Center

    Curtis, David H.; Hewes, Christopher M.; Lossau, Matthew J.

    This paper describes the development of a prototypic, Web-accessible GIS solution for K-12 science education and citizen-based watershed monitoring. The server side consists of ArcView IMS running on an NT workstation. The client is built around MapCafe. The client interface, which runs through a standard Web browser, supports standard MapCafe…

  13. MOD Tool (Microwave Optics Design Tool)

    NASA Technical Reports Server (NTRS)

    Katz, Daniel S.; Borgioli, Andrea; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.

    1999-01-01

    The Jet Propulsion Laboratory (JPL) is currently designing and building a number of instruments that operate in the microwave and millimeter-wave bands. These include MIRO (Microwave Instrument for the Rosetta Orbiter), MLS (Microwave Limb Sounder), and IMAS (Integrated Multispectral Atmospheric Sounder). These instruments must be designed and built to meet key design criteria (e.g., beamwidth, gain, pointing) obtained from the scientific goals for the instrument. These criteria are frequently functions of the operating environment (both thermal and mechanical). To design and build instruments which meet these criteria, it is essential to be able to model the instrument in its environments. Currently, a number of modeling tools exist. Commonly used tools at JPL include: FEMAP (meshing), NASTRAN (structural modeling), TRASYS and SINDA (thermal modeling), MACOS/IMOS (optical modeling), and POPO (physical optics modeling). Each of these tools is used by an analyst, who models the instrument in one discipline. The analyst then provides the results of this modeling to another analyst, who continues the overall modeling in another discipline. There is a large reengineering task in place at JPL to automate and speed up the structural and thermal modeling disciplines, which does not include MOD Tool. The focus of MOD Tool (and of this paper) is on the fields unique to microwave and millimeter-wave instrument design. These include initial design and analysis of the instrument without thermal or structural loads, the automation of the transfer of this design to a high-end CAD tool, and the analysis of the structurally deformed instrument (due to structural and/or thermal loads). MOD Tool is a distributed tool, with a database of design information residing on a server, physical optics analysis being performed on a variety of supercomputer platforms, and a graphical user interface (GUI) residing on the user's desktop computer. The MOD Tool client is being developed using Tcl/Tk, which allows the user to work on a choice of platforms (PC, Mac, or Unix) after downloading the Tcl/Tk binary, which is readily available on the web. The MOD Tool server is written using Expect, and it resides on a Sun workstation. Client/server communications are performed over a socket; upon a connection from a client to the server, the server spawns a child which is dedicated to communicating with that client. The server communicates with other machines, such as supercomputers, using Expect, with the username and password being provided by the user on the client.
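
    The child-per-client behavior described for the MOD Tool server maps directly onto a forking TCP server. A minimal Python sketch of the same pattern (the real server is written in Expect; the port and echo protocol here are hypothetical, and forking servers are Unix-only):

        # One dedicated child process per connected client.
        import socketserver

        class ClientHandler(socketserver.StreamRequestHandler):
            def handle(self):
                # runs in a child process spawned for this client alone
                for line in self.rfile:
                    self.wfile.write(b"ack: " + line)

        if __name__ == "__main__":
            with socketserver.ForkingTCPServer(("0.0.0.0", 7070),
                                               ClientHandler) as srv:
                srv.serve_forever()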

  14. Creation of a Web-Based GIS Server and Custom Geoprocessing Tools for Enhanced Hydrologic Applications

    NASA Astrophysics Data System (ADS)

    Welton, B.; Chouinard, K.; Sultan, M.; Becker, D.; Milewski, A.; Becker, R.

    2010-12-01

    Rising populations in the arid and semi-arid parts of the world are increasing the demand for fresh water supplies worldwide. Many data sets needed for the assessment of hydrologic applications across vast regions of the world are expensive, unpublished, difficult to obtain, or at varying scales, which complicates their use. Fortunately, this situation is changing with the development of global remote sensing datasets and web-based platforms such as GIS Server. GIS provides a cost-effective vehicle for comparing, analyzing, and querying a variety of spatial datasets as geographically referenced layers. We have recently constructed a web-based GIS that incorporates all relevant geological, geochemical, geophysical, and remote sensing data sets that were readily used to identify reservoir types and potential well locations on local and regional scales in various tectonic settings including: (1) an extensional environment (Red Sea rift), (2) a transcurrent fault system (Najd Fault in the Arabian-Nubian Shield), and (3) compressional environments (Himalayas). The web-based GIS could also be used to detect spatial and temporal trends in precipitation, recharge, and runoff in large watersheds on local, regional, and continental scales. These applications were enabled through the construction of a web-based ArcGIS Server with a Google Maps interface and the development of customized geoprocessing tools. ArcGIS Server provides out-of-the-box setups that are generic in nature. This platform includes all of the standard web-based GIS tools (e.g., pan, zoom, identify, search, data querying, and measurement). In addition to the standard suite of tools provided by ArcGIS Server, an additional set of advanced data manipulation and display tools was also developed to allow for a more complete and customizable view of the area of interest. The most notable addition to the standard GIS Server tools are the custom on-demand geoprocessing tools (e.g., graph, statistical functions, custom raster creation, profile, TRMM). The generation of a wide range of derivative maps (e.g., buffer zone, contour map, graphs, temporal rainfall distribution maps) from various map layers (e.g., geologic maps, geophysics, satellite images) allows for more user flexibility. The use of these tools, along with the Google Maps API, which enables the website user to utilize high-quality GeoEye 2 images provided by Google in conjunction with our data, creates a more complete image of the area being observed and allows custom derivative maps to be created in the field and viewed immediately on the web, processes that were previously restricted to offline databases.

  15. Risk Assessment of the Naval Postgraduate School Gigabit Network

    DTIC Science & Technology

    2004-09-01

    [Truncated server-inventory excerpt from the source document: Management Server (1), RAS Server (1), Remedy Server (1), Samba Servers (2), SQL Servers (3), Web Servers (3), WINS Server (1), plus library and departmental hosts running Microsoft Windows 2000 Advanced Server with SQL 2000 and LANDesk, each listed with a responsible administrator.]

  16. Analyzing Cyber-Physical Threats on Robotic Platforms.

    PubMed

    Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J

    2018-05-21

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is open to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The PeopleBot™ robot platform was used for the physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.

  17. Analyzing Cyber-Physical Threats on Robotic Platforms †

    PubMed Central

    2018-01-01

    Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is open to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. The threats target the integrity, availability, and confidentiality security requirements of the robotic platforms, which use MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging, and experimenting on MobileRobots/ActivMedia platforms and their environments. The PeopleBot™ robot platform was used for the physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause Denial-of-Service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403

  18. ROME (Request Object Management Environment)

    NASA Astrophysics Data System (ADS)

    Kong, M.; Good, J. C.; Berriman, G. B.

    2005-12-01

    Most current astronomical archive services are based on an HTML/CGI architecture in which users submit HTML forms via a browser and CGI programs operating under a web server process the requests. Most services return an HTML result page with URL links to the result files or, for longer jobs, return a message indicating that email will be sent when the job is done. This paradigm has a few serious shortcomings. First, it is all too common for something to go wrong and for the user never to hear about the job again. Second, for long and complicated jobs there is often important intermediate information that would allow the user to adjust the processing. Finally, unless some sort of custom queueing mechanism is used, background jobs are started immediately upon receipt of the CGI request. When there are many such requests, the server machine can easily be overloaded and either slow to a crawl or crash. The Request Object Management Environment (ROME) is a collection of middleware components being developed under the National Virtual Observatory project to provide a mechanism for managing long jobs such as computationally intensive statistical analysis requests or the generation of large-scale mosaic images. Written as EJB objects within the open-source JBoss application server, ROME receives processing requests via a servlet interface, stores them in a DBMS using JDBC, distributes the processing (via queueing mechanisms) across multiple machines and environments (including Grid resources), manages real-time messages from the processing modules, and ensures proper user notification. The request processing modules are identical in structure to standard CGI programs -- though they can optionally implement status messaging -- and can be written in any language. ROME will persist these jobs across failures of processing modules, network outages, and even downtime of ROME and the DBMS, restarting them as necessary.
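
    The persist-then-process lifecycle at the heart of ROME can be sketched compactly. ROME itself uses EJBs, JBoss, and JDBC; the standalone Python/SQLite analogue below is only a hedged illustration of how persisted requests survive restarts and are retried:

        # Hedged sketch of the ROME idea: requests are persisted in a database
        # before processing, so they survive crashes and can be restarted.
        # ROME itself is built on EJBs/JBoss/JDBC; this Python/SQLite analogue
        # only illustrates the persist-then-process lifecycle.
        import sqlite3
        import subprocess

        db = sqlite3.connect("requests.db")
        db.execute("""CREATE TABLE IF NOT EXISTS jobs (
                          id INTEGER PRIMARY KEY,
                          command TEXT,
                          status TEXT DEFAULT 'queued')""")

        def submit(command: str) -> None:
            # Persist the request before any processing happens.
            with db:
                db.execute("INSERT INTO jobs (command) VALUES (?)", (command,))

        def run_pending() -> None:
            # On restart, anything not yet 'done' is picked up and retried.
            rows = db.execute(
                "SELECT id, command FROM jobs WHERE status != 'done'").fetchall()
            for job_id, command in rows:
                with db:
                    db.execute("UPDATE jobs SET status='running' WHERE id=?",
                               (job_id,))
                # Stand-in for the CGI-like processing module.
                subprocess.run(command, shell=True, check=False)
                with db:
                    db.execute("UPDATE jobs SET status='done' WHERE id=?",
                               (job_id,))

        submit("echo mosaic-job")
        run_pending()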

  19. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P.; Dimitriou, D.; Hankin, S.

    2005-12-01

    The USGODAE Monterey Data Server (http://www.usgodae.org/) has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. The server is operated with oversight and funding from the Office of Naval Research (ONR). Support of the GODAE Monterey Data Server is accomplished through a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the ongoing development of the GODAE server and the support of a collaborative network of GODAE assimilation groups. This server hosts near-real-time in-situ oceanographic data available from the Global Telecommunications System (GTS) and other FTP sites, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. It supports GODAE participants, as well as the broader oceanographic research community, and is becoming a significant node in the international GODAE program. GODAE is envisioned as a global system of observations, communications, modeling, and assimilation that will deliver regular, comprehensive information on the state of the oceans in a way that promotes wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Presenting data with a consistent interface and ensuring its availability in the maximum number of standard formats is one of the primary challenges in hosting the many diverse formats and broad range of data used by the GODAE community. To this end, all USGODAE data sets are available in their original format via HTTP and FTP. In addition, USGODAE data are served using the Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and the GODAE Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC) specifications. USGODAE serves FNMOC GRIB files from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) as OPeNDAP data sets using the GrADS Data Server (GDS). The server also provides several FNMOC custom IEEE binary format high-resolution ocean analysis products and model outputs through GDS. These data sets are also made available through LAS. The server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo temperature/salinity profiling float data. The Argo collection includes all available delayed-mode (scientific quality-controlled and corrected) data. USGODAE Argo data are served through OPeNDAP and LAS, which provide complete integration of the Argo data set into NVODS and the IOOS/DMAC. By providing researchers flexible, easy access to data through standard Internet and oceanographic interfaces, the USGODAE Monterey Data Server has become an invaluable resource for oceanographic research. Also, by promoting community data-serving projects, USGODAE strengthens the community and helps advance data-serving standards.

  20. The Application of Wireless Sensor Networks in Management of Orchard

    NASA Astrophysics Data System (ADS)

    Zhu, Guizhi

    A monitoring system based on a wireless sensor network is established to address the current difficulty of acquiring information in hillside orchards. Temperature and humidity sensors are deployed around fruit trees to gather real-time environmental parameters, and self-organizing wireless communication modules, which transmit the data to a remote central server, realize the monitoring function. By setting the parameters for intelligent data analysis and judgment, remote-diagnosis and decision-support information can be fed back to users in a timely and effective manner.
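
    A hedged sketch of the node-to-server reporting loop described above (the real system uses self-organizing wireless modules; the endpoint URL and sensor driver here are hypothetical):

        # Hedged sketch of a node's report loop: read sensors, post to the
        # central server, sleep. The endpoint and node ID are hypothetical,
        # and read_sensors() stands in for real temperature/humidity drivers.
        import time
        import random
        import requests

        SERVER_URL = "http://example.org/orchard/readings"  # hypothetical

        def read_sensors() -> dict:
            # Stand-in for real sensor hardware.
            return {"temperature_c": round(random.uniform(15, 35), 1),
                    "humidity_pct": round(random.uniform(40, 90), 1),
                    "node_id": "tree-017"}

        while True:
            sample = read_sensors()
            # Server-side logic applies the intelligent-analysis thresholds.
            requests.post(SERVER_URL, json=sample, timeout=10)
            time.sleep(60)  # one reading per minute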

  1. A price and performance comparison of three different storage architectures for data in cloud-based systems

    NASA Astrophysics Data System (ADS)

    Gallagher, J. H. R.; Jelenak, A.; Potter, N.; Fulker, D. W.; Habermann, T.

    2017-12-01

    Providing data services based on cloud computing technology that are equivalent to those developed for traditional computing and storage systems is critical for successful migration to cloud-based architectures for data production, scientific analysis, and storage. OPeNDAP Web-service capabilities (comprising the Data Access Protocol (DAP) specification plus open-source software for realizing DAP in servers and clients) are among the most widely deployed means for achieving data-as-a-service functionality in the Earth sciences. OPeNDAP services are especially common in traditional data center environments where servers offer access to datasets stored in (very large) file systems, and a preponderance of the source data for these services is stored in the Hierarchical Data Format Version 5 (HDF5). Three candidate architectures for serving NASA satellite Earth science HDF5 data via Hyrax running on Amazon Web Services (AWS) were developed, and their performance was examined for a set of representative use cases, in terms of both runtime and incurred cost. The three architectures differ in how HDF5 files are stored in the Amazon Simple Storage Service (S3) and how the Hyrax server (as an EC2 instance) retrieves their data. Results for both serial and parallel access to HDF5 data in S3 will be presented. While the study focused on HDF5 data, OPeNDAP, and the Hyrax data server, the architectures are generic, and the analysis can be extrapolated to many different data formats, web APIs, and data servers.
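
    One building block common to such architectures is a ranged read of an HDF5 object in S3, so that the server fetches only the bytes it needs. A hedged sketch using boto3, with a hypothetical bucket, key, and byte range:

        # Hedged sketch: reading a byte range of an HDF5 file stored as a
        # single S3 object, so only the needed bytes cross the network.
        # The bucket, key, and offsets are hypothetical.
        import boto3

        s3 = boto3.client("s3")
        resp = s3.get_object(
            Bucket="example-earthdata-bucket",
            Key="granules/MODIS/sample.h5",
            Range="bytes=4096-8191",  # e.g., one chunk of one variable
        )
        chunk = resp["Body"].read()
        print(len(chunk), "bytes retrieved")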

  2. Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)

    NASA Technical Reports Server (NTRS)

    Pham, Long; Eng, Eunice; Sweatman, Paul

    2003-01-01

    As one of the largest providers of Earth science data from the Earth Observing System, the GDAAC provides the latest data products from the Moderate Resolution Imaging Spectroradiometer (MODIS), the Atmospheric Infrared Sounder (AIRS), and the Solar Radiation and Climate Experiment (SORCE) via the GDAAC's data pool (50 TB of disk cache). In order to make this huge volume of data more accessible to the public and the science communities, the GDAAC offers multiple data access tools and services: the Open Source Project for a Network Data Access Protocol (OPeNDAP), the Grid Analysis and Display System/Distributed Oceanographic Data System server (GrADS/DODS, or GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS), and Near Archive Data Mining (NADM). The objective is to assist users in electronically retrieving a smaller, usable portion of the data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS, and binary data formats. The GrADS/DODS server is capable of serving the same data formats as OPeNDAP, with the additional feature of server-side analysis: users can analyze data on the server, thereby decreasing the computational load on their client systems. The LAS is a flexible server that allows users to graphically visualize data on the fly, to request different file formats, and to compare variables from distributed locations. Users of LAS have the option of using other available graphics viewers such as IDL, Matlab, or GrADS. WMS is based on OPeNDAP and serves geospatial information; it supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another access route to the GDAAC's data pool. NADM gives users the capability to use a browser to upload their C, FORTRAN, or IDL algorithms, test the algorithms, and mine data in the data pool. With NADM, the GDAAC provides an environment physically close to the data source, benefiting users whose mining or data-reduction algorithms reduce large volumes of data before transmission over the network.
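
    Server-side subsetting via OPeNDAP can be illustrated with a short, hedged sketch: slicing a remote variable causes the server, not the client, to extract the requested portion. The URL and variable name are hypothetical, and netCDF4 must be built with DAP support:

        # Hedged sketch of OPeNDAP server-side subsetting: slicing the remote
        # variable makes the server return only that window of the granule.
        # The dataset URL and variable name are hypothetical assumptions.
        from netCDF4 import Dataset

        URL = "https://example.gsfc.nasa.gov/opendap/airs/sample_granule.hdf"
        with Dataset(URL) as ds:
            # Only the requested 10 x 10 window crosses the network.
            subset = ds.variables["Temperature_A"][0, :10, :10]
            print(subset.shape)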

  3. A strategy for providing electronic library services to members of the AGATE Consortium

    NASA Technical Reports Server (NTRS)

    Thompson, J. Garth

    1995-01-01

    In November 1992, NASA Administrator Daniel Goldin established a Task Force to evaluate conditions which have led to the precipitous decline of the US General Aviation System and to recommend actions needed to re-establish US leadership in General Aviation. The Task Force report and a report by Dr. Bruce J. Holmes, Manager of the General Aviation/Commuter Office at NASA Langley Research Center, provided the directions for the formation of the Advanced General Aviation Transport Experiments (AGATE), a consortium of government, industry, and universities committed to the revitalization of the US General Aviation industry. One of the recommendations of the Task Force report was that 'a central repository of information should be created to disseminate NASA research as well as other domestic and foreign aeronautical research that has been accomplished, is ongoing or is planned... A user friendly environment should be created.' This paper describes technical and logistic issues and recommends a plan for providing technical information to members of the AGATE Consortium. It is recommended that the General Aviation office establish and maintain an electronic literature page on the AGATE server. This page should provide a user-friendly interface to the existing technical report and index servers identified in the report and listed in the Recommendations section. A page should also be provided which gives links to Web resources; a list of specific resources is provided in the Recommendations section. Links should also be provided to a page with tips on searching and to a form for feedback and user suggestions of other resources. Finally, a page should be maintained which provides pointers to other resources, such as the LaRCsim workstation simulation software, which is available from LaRC at no cost. The development of the Web is very dynamic; these developments should be monitored regularly by the GA staff, and links to additional resources should be added to the server as they become available. A recommendation should be made to NASA Headquarters to establish logically central access to all of the NASA technical libraries, to make these resources available both to all NASA employees and to the AGATE Consortium.

  4. Datacube as a Service to Exploit the Full Potential of Data Cloudy Distributed

    NASA Astrophysics Data System (ADS)

    Mantovani, S.; Natali, S.; Barboni, D.; Hogan, P.; Baumann, P.; Clements, O.

    2017-12-01

    For almost half a century, satellite platforms devoted to Earth observation have made it possible to create a complete description of the global environment, generating hundreds of petabytes of data. The continuous increase in data availability (and the corresponding data volume), together with raised awareness of climate change issues, has made people of all kinds (from citizens to decision makers to scientists) sensitive to environmental threats, improving their inclination to invest in monitoring and mitigation activities. Recently, the term "datacube" has received increasing attention for its potential to simplify the provision of "Big Earth Data" services by making massive spatio-temporal data available in an analysis-ready way. A number of datacube-ready platforms have emerged to enable a new collaborative approach to analysing the vast amount of satellite imagery and other Earth observation data, making it quicker and easier to explore a time series of images stored in global or regional datacubes. In this context, the European Space Agency and European Commission H2020-funded projects ([1], [2]) are bringing together multiple organisations in Europe, Australia, and the United States to allow federated data holdings to be analysed using web-based access to petabytes of multidimensional geospatial datasets. In this study, we provide an overview of the existing datacubes (EarthServer-2 datacubes, the Sentinel datacube, the European and Australian Landsat datacubes, …), how the regional datacube structures differ from each other, how datacube interoperability is achieved through the OpenSearch and Web Coverage Service (WCS) standards, and finally how datacube contents can be visualized on a virtual globe (ESA-NASA WebWorldWind) based on a WC(P)S query and manipulated on the fly through web-based interfaces such as the Jupyter notebook. The current study is co-financed by the European Space Agency under the MaaS project (ESRIN Contract No. 4000114186/15/I-LG) and the European Union's Horizon 2020 research and innovation programme under the EarthServer-2 project (Grant Agreement No. 654367). [1] MEA as a Service (http://eodatacube.eu) [2] EarthServer-2 (http://www.earthserver.eu)
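
    A hedged sketch of datacube access through a WC(P)S-style query is shown below; the endpoint, coverage name, and the exact key-value binding of the request are assumptions modeled on EarthServer-style deployments, not a documented interface of the projects above:

        # Hedged sketch: requesting a datacube slice through a WCPS endpoint.
        # The service URL, coverage name, axis names, and KVP binding are all
        # assumptions; the query string follows OGC WCPS syntax.
        import requests

        ENDPOINT = "https://example.earthserver.eu/rasdaman/ows"
        wcps = ('for c in (S2_NDVI_CUBE) return encode('
                'c[Lat(43.0:44.0), Long(11.0:12.0), ansi("2017-06-01")], "png")')

        resp = requests.get(ENDPOINT,
                            params={"service": "WCS",
                                    "version": "2.0.1",
                                    "request": "ProcessCoverages",
                                    "query": wcps},
                            timeout=60)
        resp.raise_for_status()
        # Save the returned PNG rendering of the requested spatial subset.
        with open("ndvi_subset.png", "wb") as f:
            f.write(resp.content)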

  5. A proposed UAV for indoor patient care.

    PubMed

    Todd, Catherine; Watfa, Mohamed; El Mouden, Yassine; Sahir, Sana; Ali, Afrah; Niavarani, Ali; Lutfi, Aoun; Copiaco, Abigail; Agarwal, Vaibhavi; Afsari, Kiyan; Johnathon, Chris; Okafor, Onyeka; Ayad, Marina

    2015-09-10

    Indoor flight, obstacle avoidance, and client-server communication of an Unmanned Aerial Vehicle (UAV) raise several unique research challenges. This paper examines current methods and associated technologies adopted in the literature toward autonomous UAV flight, for consideration in a proposed system for indoor healthcare administration with a quadcopter. We introduce Healthbuddy, a unique research initiative toward overcoming challenges associated with indoor navigation, collision detection and avoidance, stability, wireless drone-server communications, and automated decision support for patient care in a GPS-denied environment. To address the identified research deficits, a drone-based solution is presented. The solution is preliminary, as we develop and refine the suggested algorithms and hardware system to achieve the research objectives.

  6. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    PubMed

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low-cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. This information is centralized on a server machine and structured using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting clients over an intranet and transformed via the eXtensible Stylesheet Language (XSL) to be visualized in a uniform way on commercial browsers. The core server operation software has been developed in the PHP Hypertext Preprocessor scripting language, which is very versatile and useful for crafting a dynamic Web environment.
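
    The XML-plus-XSL pattern BIRD relies on is easy to demonstrate. BIRD's server side is written in PHP; the following lxml-based Python sketch, with an invented record and stylesheet, only illustrates how an XML referral record is transformed into uniform HTML:

        # Hedged sketch of the XML-to-browser pattern described above: an XML
        # referral record is transformed with an XSL stylesheet into HTML.
        # Record contents and stylesheet are illustrative inventions.
        from lxml import etree

        xml_doc = etree.fromstring(
            "<referral><patient>Jane Roe</patient>"
            "<image href='mri_001.jpg'/></referral>")

        xslt_doc = etree.fromstring("""\
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/referral">
            <html><body>
              <h1><xsl:value-of select="patient"/></h1>
              <img src="{image/@href}"/>
            </body></html>
          </xsl:template>
        </xsl:stylesheet>""")

        transform = etree.XSLT(xslt_doc)
        print(str(transform(xml_doc)))  # uniform HTML for any browser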

  7. Improving the Capture and Re-Use of Data with Wearable Computers

    NASA Technical Reports Server (NTRS)

    Pfarr, Barbara; Fating, Curtis C.; Green, Daniel; Powers, Edward I. (Technical Monitor)

    2001-01-01

    At the Goddard Space Flight Center, members of the Real-Time Software Engineering Branch are developing a wearable, wireless, voice-activated computer for use in a wide range of crosscutting space applications that would benefit from instant Internet, network, and computer access with complete mobility and hands-free operation. These applications span many fields and disciplines, including spacecraft fabrication, integration and testing (including environmental testing), and astronaut on-orbit control and monitoring of experiments with ground-based experimenters. To satisfy the needs of NASA customers, this wearable computer needs to be connected to a wireless network, to transmit and receive real-time video over the network, and to receive updated documents via the Internet or NASA servers. The voice-activated computer, with a unique vocabulary, will allow users to access documentation hands-free and interact in real time with remote users. We will discuss wearable computer development, hardware and software issues, wireless network limitations, video/audio solutions, and difficulties in language development.

  8. ICCE/ICCAI 2000 Full & Short Papers (Creative Learning).

    ERIC Educational Resources Information Center

    2000

    This document contains the following full and short papers on creative learning from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A Collaborative Learning Support System Based on Virtual Environment Server for Multiple Agents" (Takashi Ohno, Kenji…

  9. High Assurance Models for Secure Systems

    ERIC Educational Resources Information Center

    Almohri, Hussain M. J.

    2013-01-01

    Despite the recent advances in systems and network security, attacks on large enterprise networks consistently impose serious challenges to maintaining data privacy and software service integrity. We identify two main problems that contribute to increasing the security risk in a networked environment: (i) vulnerable servers, workstations, and…

  10. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1994-01-01

    Envision is an interactive environment that provides researchers in the earth sciences with convenient ways to manage, browse, and visualize large observed or model data sets. Its main features are support for the netCDF and HDF file formats, an easy-to-use X/Motif user interface, a client-server configuration, and portability to many UNIX workstations. The Envision package also provides new ways to view and change metadata in a set of data files. It permits a scientist to conveniently and efficiently manage large data sets consisting of many data files. It also provides links to popular visualization tools so that data can be quickly browsed. Envision is a public-domain package, freely available to the scientific community. Envision software (binaries and source code) and documentation can be obtained from either of these servers: ftp://vista.atmos.uiuc.edu/pub/envision/ and ftp://csrp.tamu.edu/pub/envision/. Detailed descriptions of Envision capabilities and operations can be found in the User's Guide and Reference Manuals distributed with Envision software.
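
    The kind of metadata viewing and editing Envision provides through its GUI can be sketched with the netCDF4 Python library; the file and attribute names below are hypothetical:

        # Hedged sketch of viewing and changing netCDF metadata, the kind of
        # operation Envision exposes through its GUI. File, variable, and
        # attribute names are hypothetical.
        from netCDF4 import Dataset

        with Dataset("model_run.nc", "r+") as ds:
            # Browse global attributes and per-variable metadata.
            print(dict(ds.__dict__))
            for name, var in ds.variables.items():
                print(name, var.dimensions, getattr(var, "units", "?"))
            # Correct a units attribute in place.
            ds.variables["temperature"].units = "K"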

  11. CANEapp: a user-friendly application for automated next generation transcriptomic data analysis.

    PubMed

    Velmeshev, Dmitry; Lally, Patrick; Magistri, Marco; Faghihi, Mohammad Ali

    2016-01-13

    Next generation sequencing (NGS) technologies are indispensable for molecular biology research, but data analysis represents the bottleneck in their application. Users need to be familiar with computer terminal commands, the Linux environment, and various software tools and scripts. Analysis workflows have to be optimized and experimentally validated to extract biologically meaningful data. Moreover, as larger datasets are being generated, their analysis requires the use of high-performance servers. To address these needs, we developed CANEapp (application for Comprehensive automated Analysis of Next-generation sequencing Experiments), a unique suite that combines a Graphical User Interface (GUI) and an automated server-side analysis pipeline that is platform-independent, making it suitable for any server architecture. The GUI runs on a PC or Mac and seamlessly connects to the server to provide full GUI control of RNA-sequencing (RNA-seq) project analysis. The server-side analysis pipeline contains a framework that is implemented on a Linux server through completely automated installation of software components and reference files. Analysis with CANEapp is also fully automated and performs differential gene expression analysis and novel noncoding RNA discovery through alternative workflows (Cuffdiff and the R packages edgeR and DESeq2). We compared CANEapp to other similar tools, and it significantly improves on previous developments. We experimentally validated CANEapp's performance by applying it to data derived from different experimental paradigms and confirming the results with quantitative real-time PCR (qRT-PCR). CANEapp adapts to any server architecture by effectively using available resources and thus handles large amounts of data efficiently. CANEapp performance has been experimentally validated on various biological datasets. CANEapp is available free of charge at http://psychiatry.med.miami.edu/research/laboratory-of-translational-rna-genomics/CANE-app . We believe that CANEapp will serve both biologists with no computational experience and bioinformaticians as a simple, time-saving but accurate and powerful tool for analyzing large RNA-seq datasets, and that it will provide foundations for future development of integrated and automated high-throughput genomics data analysis tools. Due to its inherently standardized pipeline and its combination of automated analysis and platform-independence, CANEapp is ideal for large-scale collaborative RNA-seq projects between different institutions and research groups.

  12. Cardio-PACs: a new opportunity

    NASA Astrophysics Data System (ADS)

    Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary

    2000-05-01

    It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below.
    Image display: (1) image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based display with color monitor, and physician-friendly imaging software; (2) performance specifications: acquire 30 frames/sec, replay 15 frames/sec, access to file server in 5 seconds and to archive in 5 minutes; (3) compatibility of image file, transmission, and processing formats; (4) image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) user-friendly control of image review.
    Image distribution: (1) standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) non-proprietary formats; (3) bidirectional distribution.
    Image storage: (1) CD-ROM vs. disk vs. tape; (2) verification of data integrity; (3) user-designated storage capacity for catheterization laboratory, file server, and long-term archive.
    Costs: (1) image acquisition equipment, file server, long-term archive; (2) network infrastructure; (3) review stations and software; (4) maintenance and administration; (5) future upgrades and expansion; (6) personnel.

  13. Enhancing the AliEn Web Service Authentication

    NASA Astrophysics Data System (ADS)

    Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping

    2011-12-01

    Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems. Web Services are becoming the de facto standard that enables interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: central services, deployed once per organization; site services, deployed at each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, but in the grid environment the common credential is the proxy certificate, used for providing restricted proxies and delegation. An authentication framework was developed for AliEn2 web services to give the Apache web server the ability to accept both X.509 certificates and proxy certificates from the client side. The authentication framework also allows the generation of access control policies to limit access to the AliEn2 web services.
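
    The underlying requirement, a server that demands and verifies a client certificate during the TLS handshake, can be sketched with the Python standard library. Note that this handles plain X.509 only; accepting proxy certificates, as the AliEn2 framework does, requires additional validation logic. File paths are hypothetical:

        # Hedged sketch of mutual-TLS authentication: the server requires and
        # verifies a client certificate during the handshake. Plain X.509
        # only; proxy-certificate support needs extra logic. Paths are
        # hypothetical.
        import socket
        import ssl

        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        context.load_cert_chain(certfile="server.pem", keyfile="server.key")
        context.load_verify_locations(cafile="grid-ca.pem")
        context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a cert

        with socket.create_server(("", 8443)) as server:
            with context.wrap_socket(server, server_side=True) as tls_server:
                conn, addr = tls_server.accept()  # handshake happens here
                print("client subject:", conn.getpeercert()["subject"])
                conn.close()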

  14. Integrating Fingerprint Verification into the Smart Card-Based Healthcare Information System

    NASA Astrophysics Data System (ADS)

    Moon, Daesung; Chung, Yongwha; Pan, Sung Bum; Park, Jin-Won

    2009-12-01

    As VLSI technology has improved, smart cards employing 32-bit processors have been released, and more personal information, such as medical and financial data, can be stored in the card. Thus, it becomes important to protect the personal information stored in the card. Verification of the card holder's identity using a fingerprint has advantages over the present practices of Personal Identification Numbers (PINs) and passwords. However, the computational workload of fingerprint verification is much heavier than that of the typical PIN-based solution. In this paper, we consider three strategies for implementing fingerprint verification in a smart card environment and for distributing the modules of fingerprint verification between the smart card and the card reader. We first evaluate the number of instructions in each step of a typical fingerprint verification algorithm, and estimate the execution time of several cryptographic algorithms needed to guarantee the security/privacy of the fingerprint data transmitted in the smart card within the client-server environment. Based on the evaluation results, we analyze each scenario with respect to the security level and the real-time execution requirements in order to implement fingerprint verification in the smart card within the client-server environment.

  15. Clinical Digital Libraries Project: design approach and exploratory assessment of timely use in clinical environments*

    PubMed Central

    MacCall, Steven L.

    2006-01-01

    Objective: The paper describes and evaluates the use of Clinical Digital Libraries Project (CDLP) digital library collections in terms of their facilitation of timely clinical information seeking. Design: A convenience sample of CDLP Web server log activity over a twelve-month period (7/2002 to 6/2003) was analyzed for evidence of timely information seeking after users were referred to digital library clinical topic pages from Web search engines. Sample searches were limited to those originating from medical schools (26% North American and 19% non-North American) and from hospitals or clinics (51% North American and 4% non-North American). Measurement: Timeliness was determined based on a calculation of the difference between the timestamps of the first and last Web server log “hit” during each search in the sample. The calculated differences were mapped into one of three ranges: less than one minute, one to three minutes, and three to five minutes. Results: Of the 864 searches analyzed, 48% were less than 1 minute, 41% were 1 to 3 minutes, and 11% were 3 to 5 minutes. These results were further analyzed by environment (medical schools versus hospitals or clinics) and by geographic location (North American versus non-North American). Searches reflected a consistent pattern of less than 1 minute in these environments. Though the results were not consistent on a month-by-month basis over the entire time period, data for 8 of 12 months showed that searches shorter than 1 minute predominated, and data for 1 month showed an equal number of less-than-1-minute and 1-to-3-minute searches. Conclusions: The CDLP digital library collections provided timely access to high-quality Web clinical resources when used for information seeking in medical education and hospital or clinic environments from North American and non-North American locations, and they consistently provided access to the sought information within the documented two-minute standard. Because of the limitations of Web server log data, this assessment is exploratory. The findings also suggest the need for further investigation in the area of timely digital library collection services to clinical environments. PMID:16636712

  16. Clinical Digital Libraries Project: design approach and exploratory assessment of timely use in clinical environments.

    PubMed

    Maccall, Steven L

    2006-04-01

    The paper describes and evaluates the use of Clinical Digital Libraries Project (CDLP) digital library collections in terms of their facilitation of timely clinical information seeking. A convenience sample of CDLP Web server log activity over a twelve-month period (7/2002 to 6/2003) was analyzed for evidence of timely information seeking after users were referred to digital library clinical topic pages from Web search engines. Sample searches were limited to those originating from medical schools (26% North American and 19% non-North American) and from hospitals or clinics (51% North American and 4% non-North American). Timeliness was determined based on a calculation of the difference between the timestamps of the first and last Web server log "hit" during each search in the sample. The calculated differences were mapped into one of three ranges: less than one minute, one to three minutes, and three to five minutes. Of the 864 searches analyzed, 48% were less than 1 minute, 41% were 1 to 3 minutes, and 11% were 3 to 5 minutes. These results were further analyzed by environment (medical schools versus hospitals or clinics) and by geographic location (North American versus non-North American). Searches reflected a consistent pattern of less than 1 minute in these environments. Though the results were not consistent on a month-by-month basis over the entire time period, data for 8 of 12 months showed that searches shorter than 1 minute predominated, and data for 1 month showed an equal number of less-than-1-minute and 1-to-3-minute searches. The CDLP digital library collections provided timely access to high-quality Web clinical resources when used for information seeking in medical education and hospital or clinic environments from North American and non-North American locations, and they consistently provided access to the sought information within the documented two-minute standard. Because of the limitations of Web server log data, this assessment is exploratory. The findings also suggest the need for further investigation in the area of timely digital library collection services to clinical environments.
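
    The timeliness measure used in both records above is straightforward to reproduce; the hedged sketch below assumes a simplified timestamp format and maps first-to-last hit differences into the three reported ranges:

        # Hedged sketch of the timeliness measure: the difference between the
        # first and last log timestamps of a search is mapped into one of
        # three ranges. The log timestamp format is a simplifying assumption.
        from datetime import datetime

        def bucket(first_hit: str, last_hit: str) -> str:
            fmt = "%Y-%m-%d %H:%M:%S"
            seconds = (datetime.strptime(last_hit, fmt)
                       - datetime.strptime(first_hit, fmt)).total_seconds()
            if seconds < 60:
                return "< 1 minute"
            if seconds <= 180:
                return "1-3 minutes"
            return "3-5 minutes"

        print(bucket("2003-03-01 10:15:02", "2003-03-01 10:15:40"))  # < 1 minute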

  17. Sensor node for remote monitoring of waterborne disease-causing bacteria.

    PubMed

    Kim, Kyukwang; Myung, Hyun

    2015-05-05

    A sensor node for sampling water and checking for the presence of harmful bacteria such as E. coli in water sources was developed in this research. A chromogenic enzyme substrate assay method was used to easily detect coliform bacteria by monitoring the color change of the sampled water mixed with a reagent. Live webcam images streamed from the Wi-Fi-connected sensor node to the end user's web browser show the water color changes in real time. The liquid can be manipulated through the web-based user interface and observed via the webcam feed. The image streaming and web console servers run on an embedded processor with an expansion board. The UART channel of the expansion board is connected to an external Arduino board and a motor driver to control self-priming water pumps that sample the water, mix the reagent, and remove the water sample after the test is completed. The sensor node can repeat water testing until the test reagent is depleted. The authors anticipate that the use of the sensor node developed in this research can decrease the cost and labor required for testing samples in factory environments and for checking the quality of local water sources in developing countries.
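
    A hedged sketch of the gateway-to-Arduino control path described above: single-character commands over the expansion board's UART drive the pumps. The device name, baud rate, and command protocol are hypothetical:

        # Hedged sketch of the UART control path: the gateway sends commands
        # to the Arduino/motor driver to run a test cycle. Device name, baud
        # rate, timings, and the one-character command set are hypothetical.
        import time
        import serial  # pyserial

        uart = serial.Serial("/dev/ttyS1", 9600, timeout=1)

        def run_test_cycle() -> None:
            uart.write(b"S")      # sample: pump water into the chamber
            time.sleep(30)
            uart.write(b"R")      # inject the chromogenic reagent
            time.sleep(4 * 3600)  # incubation; webcam streams the color change
            uart.write(b"D")      # drain the chamber for the next cycle

        run_test_cycle()
        uart.close()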

  18. Web Based Prognostics and 24/7 Monitoring

    NASA Technical Reports Server (NTRS)

    Strautkalns, Miryam; Robinson, Peter

    2013-01-01

    We created a general framework for analysts to store and view data in a way that removes the boundaries created by operating systems, programming languages, and proximity. With the advent of HTML5 and CSS3 with JavaScript, the distribution of information is limited only to those who lack a browser. We created a framework based on the methodology: one server, one web-based application. An additional benefit is increased opportunity for collaboration. Today the idea of a group in a single room is antiquated. Groups communicate and collaborate with others from other universities and organizations, as well as from other continents across time zones. There are many varieties of data-gathering and condition-monitoring software available, as well as companies who specialize in customizing software for individual applications. A single group may depend on multiple languages, environments, and computers to oversee recording and to collaborate within a single lab. The heterogeneous nature of such a system creates challenges for the seamless exchange of data and ideas between members. To address these limitations we designed a framework that gives users seamless access to their data. Our framework was deployed using the data feed on the NASA Ames planetary rover testbed. Our paper demonstrates the process and implementation we followed on the rover.

  19. [Development of a microbiology data warehouse (Akita-ReNICS) for networking hospitals in a medical region].

    PubMed

    Ueki, Shigeharu; Kayaba, Hiroyuki; Tomita, Noriko; Kobayashi, Noriko; Takahashi, Tomoe; Obara, Toshikage; Takeda, Masahide; Moritoki, Yuki; Itoga, Masamichi; Ito, Wataru; Ohsaga, Atsushi; Kondoh, Katsuyuki; Chihara, Junichi

    2011-04-01

    The active involvement of hospital laboratories in surveillance is crucial to the success of nosocomial infection control. The recent dramatic increase of antimicrobial-resistant organisms and their spread into the community suggest that the infection control strategies of independent medical institutions are insufficient. To share clinical data and surveillance in our local medical region, we developed a microbiology data warehouse for networking hospital laboratories in Akita prefecture. This system, named Akita-ReNICS, is an easy-to-use information management system designed to compare, track, and report the occurrence of antimicrobial-resistant organisms. Participating laboratories routinely transfer their coded and formatted microbiology data from their health care systems' clinical computer applications over the Internet to the ReNICS server located at Akita University Hospital. We designed the system to automate the statistical processes, so that participants can access the server to monitor graphical data in the manner they prefer, using their own computer's browser. Furthermore, the system also provides a document server, a microbiology and antimicrobial database, and space for long-term storage of microbiological samples. Akita-ReNICS could be a next-generation network for quality improvement of infection control.

  20. Web Program for Development of GUIs for Cluster Computers

    NASA Technical Reports Server (NTRS)

    Czikmantory, Akos; Cwik, Thomas; Klimeck, Gerhard; Hua, Hook; Oyafuso, Fabiano; Vinyard, Edward

    2003-01-01

    WIGLAF (a Web Interface Generator and Legacy Application Facade) is a computer program that provides a Web-based, distributed, graphical-user-interface (GUI) framework that can be adapted to any of a broad range of application programs, written in any programming language, that are executed remotely on any cluster computer system. WIGLAF enables the rapid development of a GUI for controlling and monitoring a specific application program running on the cluster and for transferring data to and from the application program. The only prerequisite for the execution of WIGLAF is a Web-browser program on a user's personal computer connected with the cluster via the Internet. WIGLAF has a client/server architecture: The server component is executed on the cluster system, where it controls the application program and serves data to the client component. The client component is an applet that runs in the Web browser. WIGLAF utilizes the Extensible Markup Language to hold all data associated with the application software, Java to enable platform-independent execution on the cluster system and the display of a GUI generator through the browser, and the Java Remote Method Invocation software package to provide simple, effective client/server networking.

  1. SSRL Emergency Response Shore Tool

    NASA Technical Reports Server (NTRS)

    Mah, Robert W.; Papasin, Richard; McIntosh, Dawn M.; Denham, Douglas; Jorgensen, Charles; Betts, Bradley J.; Del Mundo, Rommel

    2006-01-01

    The SSRL Emergency Response Shore Tool (wherein SSRL signifies Smart Systems Research Laboratory) is a computer program within a system of communication and mobile-computing software and hardware being developed to increase the situational awareness of first responders at building collapses. This program is intended for use mainly in planning and constructing shores to stabilize partially collapsed structures. The program consists of client and server components, runs in the Windows operating system on commercial off-the-shelf portable computers, and can utilize such additional hardware as digital cameras and Global Positioning System devices. A first responder can enter, directly into a portable computer running this program, the dimensions of a required shore. The shore dimensions, plus an optional digital photograph of the shore site, can then be uploaded via a wireless network to a server. Once on the server, the shore report is time-stamped and made available on similarly equipped portable computers carried by other first responders, including shore wood cutters and an incident commander. The staff in a command center can use the shore reports and photographs to monitor progress and to consult with structural engineers to assess whether a building is in imminent danger of further collapse.

  2. The Anatomy of a Grid portal

    NASA Astrophysics Data System (ADS)

    Licari, Daniele; Calzolari, Federico

    2011-12-01

    In this paper we introduce a new way to deal with Grid portals, with reference to our implementation. L-GRID is a light portal to access the EGEE/EGI Grid infrastructure via the Web, allowing users to submit their jobs from a common Web browser in a few minutes, without any knowledge of the Grid infrastructure. It provides control over the complete lifecycle of a Grid job, from its submission and status monitoring to the output retrieval. The system, implemented as a client-server architecture, is based on the Globus Grid middleware. The client-side application is based on a Java applet; the server relies on a Globus User Interface. There is no need for user registration on the server side, and the user needs only his own X.509 personal certificate. The system is user-friendly, secure (it uses the SSL protocol and mechanisms for dynamic delegation and identity creation in public key infrastructures), highly customizable, open source, and easy to install. The X.509 personal certificate never leaves the local machine. The system reduces the time spent on job submission while granting higher efficiency and a better security level in proxy delegation and management.

  3. Implementation of Online Promethee Method for Poor Family Change Rate Calculation

    NASA Astrophysics Data System (ADS)

    Aji, Dhady Lukito; Suryono; Widodo, Catur Edi

    2018-02-01

    This research implements online calculation of the poor-family change rate using the Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE). This system is very useful for monitoring poverty in a region as well as for administrative services related to the poverty rate. The system consists of client computers and servers connected via the Internet. Poor-family residence data are obtained from the government. In addition, survey data are entered through the client computer in each administrative village, covering the 23 input criteria established by the government. The PROMETHEE method is used to evaluate the poverty value, and its weighting is used to determine poverty status. PROMETHEE output can also be used to rank the poverty of the population registered on the server, based on the net flow value. The poverty change rate is calculated by comparing the current poverty rate with the previous poverty rate. The results can be viewed online and in real time on the server as numbers and graphs. The test results show that the system can classify poverty status, calculate the poverty change rate, and determine the poverty value and ranking of each resident.
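
    The PROMETHEE II net-flow computation that drives the ranking can be shown compactly; the weights and scores below are toy values, and the "usual" preference function stands in for whatever functions the deployed system configures per criterion:

        # Hedged sketch of the PROMETHEE II net-flow computation used for
        # ranking. Weights and scores are toy values; the "usual" preference
        # function (1 if the difference is positive, else 0) is an assumption.
        import numpy as np

        scores = np.array([[3.0, 7.0, 2.0],    # family A over 3 criteria
                           [5.0, 4.0, 6.0],    # family B
                           [1.0, 9.0, 4.0]])   # family C
        weights = np.array([0.5, 0.3, 0.2])    # must sum to 1

        n = scores.shape[0]
        pi = np.zeros((n, n))                  # aggregated preference matrix
        for a in range(n):
            for b in range(n):
                diff = scores[a] - scores[b]
                pi[a, b] = np.sum(weights * (diff > 0))

        phi_plus = pi.sum(axis=1) / (n - 1)    # positive outranking flow
        phi_minus = pi.sum(axis=0) / (n - 1)   # negative outranking flow
        net_flow = phi_plus - phi_minus        # higher net flow = better off
        print(net_flow, np.argsort(-net_flow)) # values and resulting ranking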

  4. A network monitor for HTTPS protocol based on proxy

    NASA Astrophysics Data System (ADS)

    Liu, Yangxin; Zhang, Lingcui; Zhou, Shuguang; Li, Fenghua

    2016-10-01

    With the explosive growth of harmful Internet information such as pornography, violence, and hate messages, network monitoring is essential. Traditional network monitors are based mainly on bypass monitoring; however, bypass monitoring cannot filter network traffic. Meanwhile, only a few studies focus on network monitoring for the HTTPS protocol, because HTTPS data travels in encrypted traffic, which makes it difficult to monitor. This paper proposes a network monitor for the HTTPS protocol based on a proxy. We adopt OpenSSL to establish TLS secure tunnels between clients and servers. Epoll is used to handle a large number of concurrent client connections. We also adopt the Knuth-Morris-Pratt string searching algorithm (KMP algorithm) to speed up the search process. Besides, we modify request packets to reduce the risk of errors and modify response packets to improve security. Experiments show that our proxy can monitor the content of all tested HTTPS websites efficiently with little loss of network performance.
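
    For reference, a standard implementation of the KMP search used to scan decrypted payloads for keywords (textbook formulation, not the authors' code):

        # Textbook Knuth-Morris-Pratt search, the algorithm the monitor uses
        # to scan decrypted payloads for keywords; not the authors' own code.
        def kmp_search(text: str, pattern: str) -> int:
            # Build the failure table: longest proper prefix-suffix lengths.
            fail = [0] * len(pattern)
            k = 0
            for i in range(1, len(pattern)):
                while k > 0 and pattern[i] != pattern[k]:
                    k = fail[k - 1]
                if pattern[i] == pattern[k]:
                    k += 1
                fail[i] = k
            # Scan the text without ever backing up in it.
            k = 0
            for i, ch in enumerate(text):
                while k > 0 and ch != pattern[k]:
                    k = fail[k - 1]
                if ch == pattern[k]:
                    k += 1
                if k == len(pattern):
                    return i - k + 1   # index of the first match
            return -1                  # no match

        print(kmp_search("harmless harmful content", "harmful"))  # 9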

  5. The development of a national surveillance system for monitoring blood use and inventory levels at sentinel hospitals in South Korea.

    PubMed

    Lim, Y A; Kim, H H; Joung, U S; Kim, C Y; Shin, Y H; Lee, S W; Kim, H J

    2010-04-01

    We developed a web-based program for a national surveillance system to determine baseline data regarding the supply and demand of blood products at sentinel hospitals in South Korea. Sentinel hospitals were invited to participate in a 1-month pilot test. The data for receipts and exports of blood from each hospital information system were converted into comma-separated value files according to a specific conversion rule. The daily data from the sites could be transferred to the web-based program server using a semi-automated submission procedure: pressing a key allowed the program to automatically compute the blood inventory level as well as other indices, including the minimal inventory ratio (MIR), ideal inventory ratio (IIR), supply index (SI), and utilisation index (UI). The national surveillance system was referred to as the Korean Blood Inventory Monitoring System (KBIMS), and the web-based program for KBIMS was referred to as the Blood Inventory Monitoring System (BMS). A total of 30 256 red blood cell (RBC) units were submitted as receipt data; however, only 83% of the receipt data were submitted to the BMS server as export data (25 093 RBC units). Median values were 2.67 for MIR, 1.08 for IIR, 1.00 for SI, 0.88 for UI, and 5.33 for the ideal inventory day. The BMS program was easy to use and is expected to provide a useful tool for monitoring hospital inventory levels. This information will provide baseline data regarding the supply and demand of blood products in South Korea.

  6. A Modular IoT Platform for Real-Time Indoor Air Quality Monitoring

    PubMed Central

    Abdaoui, Abderrazak; Ahmad, Sabbir H.M.; Touati, Farid; Kadri, Abdullah

    2018-01-01

    The impact of air quality on health and on life comfort is well established. In many societies, vulnerable elderly and young populations spend most of their time indoors. Therefore, indoor air quality monitoring (IAQM) is of great importance to human health. Engineers and researchers are increasingly focusing their efforts on the design of real-time IAQM systems using wireless sensor networks. This paper presents an end-to-end IAQM system enabling measurement of CO2, CO, SO2, NO2, O3, Cl2, ambient temperature, and relative humidity. In IAQM systems, remote users usually use a local gateway to connect wireless sensor nodes in a given monitoring site to the external world for ubiquitous access to data. In this work, the role of the gateway in processing collected air quality data and its reliable dissemination to end users through a web server is emphasized. A mechanism for the backup and the restoration of the collected data in the case of Internet outage is presented. The system is adapted to an open-source Internet-of-Things (IoT) web-server platform, called Emoncms, for live monitoring and long-term storage of the collected IAQM data. A modular IAQM architecture is adopted, which results in a smart scalable system that allows seamless integration of various sensing technologies, wireless sensor networks (WSNs), and smart mobile standards. The paper gives full hardware and software details of the proposed solution. Sample IAQM results collected in various locations are also presented to demonstrate the abilities of the system. PMID:29443893
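
    The backup-and-restore mechanism for Internet outages can be sketched as a simple store-and-forward loop; the endpoint and payload below are hypothetical, and delivery is at-least-once (duplicates are possible if a flush is interrupted):

        # Hedged sketch of outage backup-and-restore: readings that cannot be
        # delivered are appended to a local file and flushed when connectivity
        # returns. Endpoint and payload are hypothetical; delivery is
        # at-least-once, so duplicates are possible after a failed flush.
        import json
        import os
        import requests

        SERVER = "https://example.org/emoncms/input/post"  # hypothetical
        BACKLOG = "iaq_backlog.jsonl"

        def flush_backlog() -> None:
            # Replay any readings stored during an outage.
            if not os.path.exists(BACKLOG):
                return
            with open(BACKLOG) as f:
                pending = [json.loads(line) for line in f]
            for reading in pending:
                requests.post(SERVER, json=reading, timeout=10).raise_for_status()
            os.remove(BACKLOG)

        def send(reading: dict) -> None:
            try:
                flush_backlog()
                requests.post(SERVER, json=reading, timeout=10).raise_for_status()
            except requests.RequestException:
                with open(BACKLOG, "a") as f:   # outage: store locally
                    f.write(json.dumps(reading) + "\n")

        send({"co2_ppm": 480, "temp_c": 23.5})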

  7. ICCE/ICCAI 2000 Full & Short Papers (Virtual Lab/Classroom/School).

    ERIC Educational Resources Information Center

    2000

    This document contains the following full and short papers on virtual laboratories, classrooms, and schools from ICCE/ICCAI 2000 (International Conference on Computers in Education/International Conference on Computer-Assisted Instruction): (1) "A Collaborative Learning Support System Based on Virtual Environment Server for Multiple…

  8. MOOville: The Writing Project's Own "Private Idaho".

    ERIC Educational Resources Information Center

    Conlon, Michael

    1997-01-01

    Describes how a computerized environment supplemented traditional undergraduate courses in English literature and composition at the University of Florida, and was developed with a grant from IBM. Highlights include the use of MOO (multi-user, object-oriented) space; student assignments; the client-server setting; and student and teacher…

  9. iRODS-Based Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Astrophysics Data System (ADS)

    Schnase, J. L.; Duffy, D. Q.; Tamkin, G. S.; Strong, S.; Ripley, D.; Gill, R.; Sinno, S. S.; Shen, Y.; Carriere, L. E.; Brieger, L.; Moore, R.; Rajasekar, A.; Schroeder, W.; Wan, M.

    2011-12-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of specialized virtual climate data servers (vCDSs), repetitive cloud provisioning, image-based deployment and distribution, and virtualization-as-a-service. A vCDS is an OAIS-compliant, iRODS-based data server designed to support a particular type of scientific data collection. iRODS is data-grid middleware that provides policy-based control over collection-building, managing, querying, accessing, and preserving large scientific data sets. We have developed prototype vCDSs to manage netCDF, HDF, and GeoTIFF data products. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into these virtualized resources, multiple vCDSs can use iRODS's federation and realized-object capabilities to create an integrated ecosystem of data servers that can scale and adapt to changing requirements. This approach enables platform- or software-as-a-service deployment of the vCDSs and allows the NCCS to offer virtualization-as-a-service, a capacity to respond in an agile way to new customer requests for data services, and a path for migrating existing services into the cloud. We have registered MODIS Atmosphere data products in a vCDS that contains 54 million registered files, 630 TB of data, and over 300 million metadata values. We are now assembling IPCC AR5 data into a production vCDS that will provide the platform upon which NCCS's Earth System Grid (ESG) node publishes to the extended science community. In this talk, we describe our approach, experiences, lessons learned, and plans for the future.

  10. Uveka: a UV exposure monitoring system using autonomous instruments network for Reunion Island citizens

    NASA Astrophysics Data System (ADS)

    Sébastien, Nicolas; Cros, Sylvain; Lallemand, Caroline; Kurzrock, Frederik; Schmutz, Nicolas

    2016-04-01

    Reunion Island is a French overseas territory located in the Indian Ocean. This tropical island has about 840,000 inhabitants and is visited every year by more than 400,000 tourists. On average, 340 sunny days occur on the island each year. Beyond these advantageous conditions, exposure of the population to ultraviolet radiation constitutes a public health issue: the number of hospitalisations for skin cancer increased by 50% between 2005 and 2010, and health insurance reimbursements due to ophthalmic anomalies caused by the sun amount to about two million euros. Among the prevention measures recommended by public health policies, access to information on UV radiation is one of the basic needs. Reuniwatt, supported by the Regional Council of La Reunion, is currently developing the Uveka project. Uveka is a solution providing, in real time and as short-term forecasts (several hours ahead), UV radiation maps of Reunion Island. Accessible via a web interface and a smartphone application, Uveka informs citizens about the UV exposure rate and its risk according to their individual characteristics (skin phototype, past exposure to the sun, etc.). The present work describes this initiative through a presentation of the UV radiation monitoring system and the data processing chain toward the end users. The UV radiation monitoring system of Uveka is a network of low-cost UV sensors. Each instrument is equipped with a solar panel and a battery, and the sensor is able to communicate using the 3G telecommunication network. The instrument can therefore be installed without AC power or access to a wired communication network, which eliminates a site-selection constraint. Indeed, with more than 200 microclimates and strong spatial variability in cloud cover, building a representative measurement network on this island with a limited number of instruments is a real challenge. In addition to these UV radiation measurements, mapping of the surface solar radiation using data from the meteorological satellite Meteosat-7 fills the gaps. Kriging the point measurements using satellite data as spatial weights yields a continuous map of spatially constant quality over the whole of Reunion Island. A significant challenge for this monitoring system is ensuring the temporal continuity of the real-time mapping. The autonomous sensors are programmed with our proprietary protocol, which manages the battery load and telecommunication costs intelligently, and measurements are sent to a server with a protocol that minimizes the data volume in order to keep telecommunication costs low. The server receives the measurement data and integrates them into a NoSQL database. The server is able to handle long time series, and quality control is routinely performed to ensure data consistency as well as to monitor the state of the instrument fleet. The database can be queried by our geographical information system server through an application programming interface. This configuration permits easy development of a web-based or smartphone application using any external information provided by users (personal phototype and exposure history) or their device (e.g., computing refinements according to its location).

  11. Spacecraft operations automation: Automatic alarm notification and web telemetry display

    NASA Astrophysics Data System (ADS)

    Short, Owen G.; Leonard, Robert E.; Bucher, Allen W.; Allen, Bryan

    1999-11-01

    In these times of Faster, Better, Cheaper (FBC) spacecraft, operations automation is an area targeted by many operations teams. To meet the challenges of the FBC environment, the Mars Global Surveyor (MGS) Operations Team designed and quickly implemented two new low-cost technologies: one which monitors spacecraft telemetry, checks its status, and contacts technical experts by pager when any telemetry data points exceed alarm limits, and a second which allows quick and convenient remote access to data displays. The first new technology is Automatic Alarm Notification (AAN). AAN monitors spacecraft telemetry and automatically notifies engineers if any telemetry is received which creates an alarm condition. The second new technology is Web Telemetry Display (WTD). WTD captures telemetry displays generated by the flight telemetry system and makes them available to the project web server. This allows engineers to check the health and status of the spacecraft from any computer capable of connecting to the global Internet, without the specialized hardware and software normally required. Both of these technologies have greatly reduced operations costs by alleviating the need to have operations engineers monitor spacecraft performance 24 hours per day, 7 days per week from a central Mission Support Area. This paper gives details on the design and implementation of AAN and WTD, discusses their limitations, and lists the ongoing benefits which have accrued to MGS Flight Operations since their implementation in late 1996.
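
    The core of an AAN-style monitor is a limit check on each incoming telemetry point with a notification hook. The sketch below shows that idea in outline; the channel names, limits, and pager callback are invented for illustration and do not reflect the MGS implementation.

        # Hypothetical alarm limits per telemetry channel: (low, high).
        ALARM_LIMITS = {'BATT_TEMP_C': (-10.0, 40.0), 'BUS_VOLTS': (24.0, 32.0)}

        def check_frame(frame, notify):
            """Compare one telemetry frame against limits; page on violations."""
            for channel, value in frame.items():
                low, high = ALARM_LIMITS.get(channel,
                                             (float('-inf'), float('inf')))
                if not low <= value <= high:
                    notify(f"ALARM {channel}={value} outside [{low}, {high}]")

        # Toy usage: print stands in for the pager interface.
        check_frame({'BATT_TEMP_C': 47.2, 'BUS_VOLTS': 28.1}, notify=print)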

  12. A Testbed for Data Fusion for Helicopter Diagnostics and Prognostics

    DTIC Science & Technology

    2003-03-01

    ...and algorithm design and tuning in order to develop advanced diagnostic and prognostic techniques for aircraft health monitoring. Here a... and development of models for diagnostics, prognostics, and anomaly detection. (Figure 5: VMEP Server Browser Interface) ...detections, and prognostic prediction time horizons. The VMEP system, and in particular the web component, are ideal for performing data collection...

  13. Addressing Software Security

    NASA Technical Reports Server (NTRS)

    Bailey, Brandon

    2015-01-01

    Historically, security within organizations was thought of as an IT function (web sites/servers, email, workstation patching, etc.). The threat landscape has since evolved (script kiddies, hackers, Advanced Persistent Threats (APT), nation states, etc.), and the attack surface has expanded as networks have become interconnected. Factors in an organization's security posture include the network layer (routers, firewalls, etc.), computer network defense (IPS/IDS, sensors, continuous monitoring, etc.), industrial control systems (ICS), and software security (COTS, FOSS, custom code, etc.).

  14. Real-Time and Secure Wireless Health Monitoring

    PubMed Central

    Dağtaş, S.; Pekhteryev, G.; Şahinoğlu, Z.; Çam, H.; Challa, N.

    2008-01-01

    We present a framework for a wireless health monitoring system using wireless networks such as ZigBee. Vital signals are collected and processed using a 3-tiered architecture. The first stage is the mobile device carried on the body, which runs a number of wired and wireless probes. This device is also designed to perform some basic processing, such as heart rate computation and fatal-failure detection. At the second stage, further processing is performed by a local server using the raw data transmitted continuously by the mobile device. The raw data is also stored at this server. The processed data, as well as the analysis results, are then transmitted to the service provider center for diagnostic review and storage. The main advantages of the proposed framework are (1) the ability to detect signals wirelessly within a body sensor network (BSN), (2) low-power and reliable data transmission through ZigBee network nodes, (3) secure transmission of medical data over the BSN, (4) efficient channel allocation for medical data transmission over wireless networks, and (5) optimized analysis of data using an adaptive architecture that maximizes the utility of processing and computational capacity at each platform. PMID:18497866
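
    As an example of the "basic processing" done at the first tier, heart rate can be estimated on the mobile device by counting peaks in a short window of the cardiac signal. The sketch below uses a naive threshold-crossing detector; the sampling rate and threshold are assumptions for illustration, not parameters from the paper.

        def heart_rate_bpm(samples, fs=100, threshold=0.6):
            """Estimate heart rate from one window of a normalized signal."""
            beats = 0
            above = False
            for x in samples:
                if x > threshold and not above:   # rising edge = one beat
                    beats += 1
                    above = True
                elif x < threshold:
                    above = False
            return 60.0 * beats * fs / len(samples)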

  15. Evaluation of MPEG-7-Based Audio Descriptors for Animal Voice Recognition over Wireless Acoustic Sensor Networks.

    PubMed

    Luque, Joaquín; Larios, Diego F; Personal, Enrique; Barbancho, Julio; León, Carlos

    2016-05-18

    Environmental audio monitoring is a huge area of interest for biologists all over the world. This is why several audio monitoring systems have been proposed in the literature, which can be classified into two different approaches: acquisition and compression of all audio patterns in order to send them as raw data to a main server, or specific recognition systems based on audio patterns. The first approach has the drawback that a high amount of information must be stored on a main server, and this information requires considerable effort to analyze. The second approach has the drawback of a lack of scalability when new patterns need to be detected. To overcome these limitations, this paper proposes an environmental Wireless Acoustic Sensor Network architecture built on generic descriptors based on the MPEG-7 standard. These descriptors prove suitable for recognizing different patterns, allowing high scalability. The proposed parameters have been tested in recognizing different behaviors of two anuran species that live in Spanish natural parks, the Epidalea calamita and Alytes obstetricans toads, demonstrating high classification performance.
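
    MPEG-7's low-level audio descriptors are frame-wise spectral statistics. As one concrete example, a centroid-style descriptor in the spirit of AudioSpectrumCentroid can be computed per frame as below (NumPy only); this is a simplified illustration, not the normative MPEG-7 definition or the paper's full feature set.

        import numpy as np

        def spectral_centroid(frames, fs):
            """Frequency 'center of mass' of each audio frame (row-wise)."""
            spectrum = np.abs(np.fft.rfft(frames, axis=1))
            freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / fs)
            return ((spectrum * freqs).sum(axis=1)
                    / np.maximum(spectrum.sum(axis=1), 1e-12))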

  16. Construction of a smart medication dispenser with high degree of scalability and remote manageability.

    PubMed

    Pak, JuGeon; Park, KeeHyun

    2012-01-01

    We propose a smart medication dispenser with a high degree of scalability and remote manageability. We construct the dispenser with an extensible hardware architecture to achieve scalability, and we install an agent program in it to achieve remote manageability. The dispenser operates as follows: when the real-time clock reaches the predetermined medication time and the user presses the dispense button, the predetermined medication is dispensed from the medication dispensing tray (MDT). In the proposed dispenser, the medication for each patient is stored in an MDT. One smart medication dispenser normally contains one MDT; however, the dispenser can be extended with additional MDTs so that multiple users can share one dispenser. For remote management, the proposed dispenser transmits the medication status and the system configuration to the monitoring server. In the case of a specific event such as a shortage of medication, memory overload, software error, or non-adherence, the event is transmitted immediately. All these operations are performed automatically, without the intervention of patients, through the agent program installed in the dispenser. Results of implementation and verification show that the proposed dispenser operates normally and suitably performs the management operations requested by the medication monitoring server.
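
    The agent's event reporting can be pictured as a small JSON POST from the dispenser to the monitoring server whenever a noteworthy condition occurs. The sketch below uses only the Python standard library; the endpoint URL, device name, and message fields are hypothetical, since the paper does not specify its wire format.

        import json
        import urllib.request

        def report_event(event_type, detail,
                         url='http://monitor.example.org/dispenser/events'):
            """Send one dispenser event (e.g., shortage) to the server."""
            payload = json.dumps({'device': 'dispenser-01',
                                  'event': event_type,
                                  'detail': detail}).encode('utf-8')
            req = urllib.request.Request(
                url, data=payload,
                headers={'Content-Type': 'application/json'})
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.status

        # Example: report_event('shortage', 'MDT slot 3 empty')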

  17. A remote monitor of bed patient cardiac vibration, respiration and movement.

    PubMed

    Mukai, Koji; Yonezawa, Yoshiharu; Ogawa, Hidekuni; Maki, Hiromichi; Caldwell, W Morton

    2009-01-01

    We have developed a remote system for monitoring the heart rate, respiration rate and movement behavior of at-home elderly people who are living alone. The system consists of a 40 kHz ultrasonic transmitter and receiver, linear integrated circuits, a low-power 8-bit single-chip microcomputer and an Internet server computer. The 40 kHz ultrasonic transmitter and receiver are installed in a bed mattress. The transmitted signal diffuses into the mattress, and the amplitude of the received ultrasonic wave is modulated by the shape of the mattress and by parameters such as respiration, cardiac vibration and movement. The modulated ultrasonic signal is received and demodulated by an envelope detection circuit. Low-pass, high-pass and band-pass filters separate the respiration, cardiac vibration and movement signals, which are fed into the microcontroller and digitized at a sampling rate of 50 Hz by 8-bit A/D converters. The digitized data are sent to the server computer as a serial signal. This computer stores the data and also creates a graphic chart of the latest hour. The person's family or caregiver can download this chart via the Internet at any time.
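
    The separation of respiration, cardiac vibration, and movement from the demodulated envelope can be reproduced digitally with standard filters at the stated 50 Hz sampling rate. The sketch below uses SciPy Butterworth filters; the cutoff frequencies are plausible guesses, since the paper gives only the filter types.

        import numpy as np
        from scipy.signal import butter, filtfilt

        FS = 50  # sampling rate from the paper, in Hz

        def separate(envelope):
            """Split the demodulated envelope into three components."""
            # Assumed cutoffs: respiration below 0.5 Hz,
            # cardiac 0.8-10 Hz, movement above 10 Hz.
            b_lo, a_lo = butter(4, 0.5, btype='low', fs=FS)
            b_bp, a_bp = butter(4, [0.8, 10.0], btype='bandpass', fs=FS)
            b_hi, a_hi = butter(4, 10.0, btype='high', fs=FS)
            respiration = filtfilt(b_lo, a_lo, envelope)
            cardiac = filtfilt(b_bp, a_bp, envelope)
            movement = filtfilt(b_hi, a_hi, envelope)
            return respiration, cardiac, movement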

  18. A web-based quantitative signal detection system on adverse drug reaction in China.

    PubMed

    Li, Chanjuan; Xia, Jielai; Deng, Jianxiong; Chen, Wenge; Wang, Suzhen; Jiang, Jing; Chen, Guanquan

    2009-07-01

    To establish a web-based quantitative signal detection system for adverse drug reactions (ADRs) based on spontaneous reporting to the Guangdong province drug-monitoring database in China. Using the Microsoft Visual Basic and Active Server Pages programming languages and SQL Server 2000, a web-based system with three software modules was programmed to perform data preparation and association detection, and to generate reports. The information component (IC), an internationally recognized measure of disproportionality for quantitative signal detection, was integrated into the system, and its capacity for signal detection was tested with ADR reports collected from 1 January 2002 to 30 June 2007 in Guangdong. A total of 2,496 associations, including known signals, were mined from the test database. Signals (e.g., cefradine-induced hematuria) were found early by using IC analysis. In addition, 291 drug-ADR associations were flagged for the first time in the second quarter of 2007. The system can be used to detect significant associations in the Guangdong drug-monitoring database and could be an extremely useful adjunct to the expert assessment of very large numbers of spontaneously reported ADRs; it is the first such system in China.
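
    The information component compares the observed count of a drug-ADR pair with the count expected under independence. A minimal version of the shrinkage estimator commonly used in disproportionality analysis is sketched below; the exact BCPNN formulation used by the Guangdong system may differ, and the counts shown are toy values.

        import math

        def information_component(n_drug_adr, n_drug, n_adr, n_total):
            """IC = log2(observed / expected), with +0.5 shrinkage."""
            expected = n_drug * n_adr / n_total
            return math.log2((n_drug_adr + 0.5) / (expected + 0.5))

        # Toy counts: 40 reports of the pair, 200 for the drug, 300 for
        # the ADR, 100,000 reports overall => IC well above 0 (a signal).
        print(information_component(40, 200, 300, 100_000))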

  19. Experience with Multi-Tier Grid MySQL Database Service Resiliency at BNL

    NASA Astrophysics Data System (ADS)

    Wlodek, Tomasz; Ernst, Michael; Hover, John; Katramatos, Dimitrios; Packard, Jay; Smirnov, Yuri; Yu, Dantong

    2011-12-01

    We describe the use of F5's BIG-IP smart switch technology (3600 Series and Local Traffic Manager v9.0) to provide load balancing and automatic fail-over to multiple Grid services (GUMS, VOMS) and their associated back-end MySQL databases. This resiliency is introduced in front of the external application servers and also for the back-end database systems, which is what makes it "multi-tier". The combination of solutions chosen to ensure high availability of the services, in particular the database replication and fail-over mechanism, is discussed in detail. The paper explains the design and configuration of the overall system, including virtual servers, machine pools, and health monitors (which govern routing), as well as the master-slave database scheme and fail-over policies and procedures. Pre-deployment planning and stress testing are outlined, integration of the systems with our Nagios-based facility monitoring and alerting is described, and the application characteristics of GUMS and VOMS that enable effective clustering are explained. We then summarize our practical experiences and real-world scenarios resulting from operating a major US Grid center, and assess the applicability of our approach to other Grid services in the future.
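
    The routing behavior the BIG-IP health monitors provide can be summarized as: periodically probe each pool member, and direct new connections only to members that pass. The sketch below illustrates that logic generically in Python; it is a conceptual model with hypothetical hostnames, not F5 configuration syntax.

        import socket

        POOL = [('gums1.example.org', 8443), ('gums2.example.org', 8443)]

        def healthy(member, timeout=2.0):
            """A TCP connect probe, standing in for a BIG-IP health monitor."""
            try:
                with socket.create_connection(member, timeout=timeout):
                    return True
            except OSError:
                return False

        def pick_member():
            """Return the first healthy member; traffic fails over to it."""
            for member in POOL:
                if healthy(member):
                    return member
            raise RuntimeError('no healthy pool members')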

  20. A wearable, mobile phone-based respiration monitoring system for sleep apnea syndrome detection.

    PubMed

    Ishida, Ryoichi; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Hahn, Allen W; Caldwell, W Morton

    2005-01-01

    A new wearable respiration monitoring system has been developed for non-invasive detection of sleep apnea syndrome. The system, which is attached to a shirt, consists of a piezoelectric sensor, a low-power 8-bit single-chip microcontroller, EEPROM and a 2.4 GHz low-power transmitting mobile phone (PHS). The piezoelectric sensor, whose electrical polarization voltage is produced by body movements, is installed inside the shirt and closely contacts the patient's chest. The low-frequency components of the body movements recorded by the sensor are mainly generated by respiration. The microcontroller sequentially stores the movement signal in the EEPROM for 5 minutes and detects, by time-frequency analysis, whether the patient has breathed during that time. When the patient is apneic for 10 seconds, the microcontroller sends the respiration waveform recorded during the apnea, and for one minute before and after it, directly to the hospital server computer via the mobile phone. The server computer then creates apnea files automatically for every patient. The system can be used at home and can be self-applied by patients. Moreover, the system does not require any extra equipment such as a personal computer, PDA, or Internet connection.
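
    A simple stand-in for the microcontroller's breath detection is to track the short-term energy of the movement signal and flag an apnea when it stays below a threshold for 10 seconds. The sketch below assumes a sampling rate and threshold for illustration; the paper's time-frequency method is more sophisticated.

        import numpy as np

        def apnea_detected(signal, fs=20, window_s=1.0, quiet_s=10.0,
                           thresh=0.05):
            """Flag an apnea if the RMS of a NumPy signal array stays
            under thresh for quiet_s consecutive seconds."""
            win = int(fs * window_s)
            rms = np.array([np.sqrt(np.mean(signal[i:i + win] ** 2))
                            for i in range(0, len(signal) - win, win)])
            needed = int(quiet_s / window_s)  # consecutive quiet windows
            quiet = 0
            for r in rms:
                quiet = quiet + 1 if r < thresh else 0
                if quiet >= needed:
                    return True
            return False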

  1. Evaluation of MPEG-7-Based Audio Descriptors for Animal Voice Recognition over Wireless Acoustic Sensor Networks

    PubMed Central

    Luque, Joaquín; Larios, Diego F.; Personal, Enrique; Barbancho, Julio; León, Carlos

    2016-01-01

    Environmental audio monitoring is a huge area of interest for biologists all over the world. This is why several audio monitoring systems have been proposed in the literature, which can be classified into two different approaches: acquisition and compression of all audio patterns in order to send them as raw data to a main server, or specific recognition systems based on audio patterns. The first approach has the drawback that a high amount of information must be stored on a main server, and this information requires considerable effort to analyze. The second approach has the drawback of a lack of scalability when new patterns need to be detected. To overcome these limitations, this paper proposes an environmental Wireless Acoustic Sensor Network architecture built on generic descriptors based on the MPEG-7 standard. These descriptors prove suitable for recognizing different patterns, allowing high scalability. The proposed parameters have been tested in recognizing different behaviors of two anuran species that live in Spanish natural parks, the Epidalea calamita and Alytes obstetricans toads, demonstrating high classification performance. PMID:27213375

  2. Web Monitoring of EOS Front-End Ground Operations, Science Downlinks and Level 0 Processing

    NASA Technical Reports Server (NTRS)

    Cordier, Guy R.; Wilkinson, Chris; McLemore, Bruce

    2008-01-01

    This paper addresses the efforts undertaken and the technology deployed to aggregate and distribute the metadata characterizing the real-time operations associated with NASA Earth Observing System (EOS) high-rate front-end systems and the science data collected at multiple ground stations and forwarded to the Goddard Space Flight Center for level 0 processing. Station operators, mission project management personnel, spacecraft flight operations personnel and data end-users for various EOS missions can retrieve the information at any time from any location having access to the Internet. The users are distributed and the EOS systems are distributed, but the centralized metadata accessed via an external web server provide an effective global and detailed view of enterprise-wide events as they are happening. The data-driven architecture and the implementation of applied middleware technology, an open source database, open source monitoring tools, and an external web server converge nicely to fulfill the various needs of the enterprise. The timeliness and content of the information provided are key to making timely and correct decisions, which reduce project risk and enhance overall customer satisfaction. The authors discuss security measures employed to limit access of data to authorized users only.

  3. Design and Application of a Field Sensing System for Ground Anchors in Slopes

    PubMed Central

    Choi, Se Woon; Lee, Jihoon; Kim, Jong Moon; Park, Hyo Seon

    2013-01-01

    In a ground anchor system, cables or tendons connected to a bearing plate are used for stabilization of slopes, so the stability of a slope depends on maintaining the tension levels in the cables. So far, no research on a strain-based field sensing system for ground anchors has been reported. Therefore, in this study, a practical monitoring system for long-term sensing of tension levels in tendons of anchor-reinforced slopes is proposed. The system is composed of: (1) load cells based on vibrating wire strain gauges (VWSGs), (2) wireless sensor nodes, which receive and process the signals from the load cells and then transmit the result to a master node through local-area communication, (3) master nodes, which transmit the data sent from the sensor nodes to the server through mobile communication, and (4) a server located at the base station. The system was applied to field sensing of ground anchors in a 62 m-long, 26 m-high slope beside a highway. Based on the long-term monitoring, the safety of the anchor-reinforced slope can be secured by timely application of re-tensioning of the tendons. PMID:23507820
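
    Vibrating-wire strain gauges report a resonant frequency, and the tension change follows from the square-law relation between frequency and strain. The sketch below converts a frequency reading to load and raises a re-tensioning alert; the gauge factor, calibration constants, and threshold are hypothetical illustration values, not the paper's calibration.

        def tendon_load_kn(freq_hz, freq0_hz, gauge_factor=3.0e-3,
                           kn_per_microstrain=0.05):
            """VWSG square law: microstrain ~ G * (f^2 - f0^2)."""
            microstrain = gauge_factor * (freq_hz ** 2 - freq0_hz ** 2)
            return microstrain * kn_per_microstrain

        def needs_retension(load_kn, design_kn, loss_fraction=0.15):
            """Alert when tension drops 15% below the design load."""
            return load_kn < design_kn * (1.0 - loss_fraction)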

  4. Work Coordination Engine

    NASA Technical Reports Server (NTRS)

    Zendejas, Silvino; Bui, Tung; Bui, Bach; Malhotra, Shantanu; Chen, Fannie; Kim, Rachel; Allen, Christopher; Luong, Ivy; Chang, George; Sadaqathulla, Syed

    2009-01-01

    The Work Coordination Engine (WCE) is a Java application integrated into the Service Management Database (SMDB), which coordinates the dispatching and monitoring of a work order system. WCE de-queues work orders from SMDB and orchestrates the dispatching of work to a registered set of software worker applications distributed over a set of local, or remote, heterogeneous computing systems. WCE monitors the execution of work orders once dispatched, and accepts the results of the work order by storing to the SMDB persistent store. The software leverages the use of a relational database, Java Messaging System (JMS), and Web Services using Simple Object Access Protocol (SOAP) technologies to implement an efficient work-order dispatching mechanism capable of coordinating the work of multiple computer servers on various platforms working concurrently on different, or similar, types of data or algorithmic processing. Existing (legacy) applications can be wrapped with a proxy object so that no changes to the application are needed to make them available for integration into the work order system as "workers." WCE automatically reschedules work orders that fail to be executed by one server to a different server if available. From initiation to completion, the system manages the execution state of work orders and workers via a well-defined set of events, states, and actions. It allows for configurable work-order execution timeouts by work-order type. This innovation eliminates a current processing bottleneck by providing a highly scalable, distributed work-order system used to quickly generate products needed by the Deep Space Network (DSN) to support space flight operations. WCE is driven by asynchronous messages delivered via JMS indicating the availability of new work or workers. It runs completely unattended in support of the lights-out operations concept in the DSN.
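
    The WCE pattern described above (dequeue a work order, dispatch it to a registered worker, reschedule on failure) can be outlined in a few lines. The sketch below is a generic Python model of that pattern; the real WCE coordinates this over JMS and SOAP with state persisted in SMDB.

        from queue import Queue

        def dispatch_all(work_orders, workers, max_attempts=3):
            """Run each order on some worker, rescheduling failures."""
            pending = Queue()
            for order in work_orders:
                pending.put((order, 0))
            results = {}
            while not pending.empty():
                order, attempts = pending.get()
                # Retry on a different worker each attempt.
                worker = workers[attempts % len(workers)]
                try:
                    results[order] = worker(order)
                except Exception:
                    if attempts + 1 < max_attempts:
                        pending.put((order, attempts + 1))
                    else:
                        results[order] = 'failed'
            return results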

  5. Quality improvement in home life based on EEG signal

    NASA Astrophysics Data System (ADS)

    Wang, Xiaolong; Wu, Shan; Wang, Sen; Liang, Jinhu

    2017-06-01

    This research is based on EEG and environmental signals, collected by different sensors and uploaded wirelessly to the same server. On the one hand, this makes it convenient to store the data and retrieve it at any time; on the other hand, the system can provide health advice, adjust the environment spontaneously, or use the EEG signal to control electrical equipment. People, objects, and the environment are thereby organically combined to create a more comfortable, more suitable living environment.

  6. Testing an Open Source installation and server provisioning tool for the INFN CNAF Tier1 Storage system

    NASA Astrophysics Data System (ADS)

    Pezzi, M.; Favaro, M.; Gregori, D.; Ricci, P. P.; Sapunenko, V.

    2014-06-01

    In large computing centers, such as the INFN CNAF Tier1 [1], it is essential to be able to configure all the machines, depending on their use, in an automated way. For several years the Tier1 has used Quattor [2], a server provisioning tool which is currently in production. Nevertheless, we have recently started a comparison study of other tools able to provide specific server installation and configuration features and to offer a fully customizable solution as an alternative to Quattor. Our choice at the moment fell on an integration of two tools: Cobbler [3] for the installation phase and Puppet [4] for server provisioning and management operations. The tool should provide the following properties in order to replicate and gradually improve the current system's features: a system check for storage-specific constraints, such as a kernel module blacklist at boot time to avoid undesired SAN (Storage Area Network) access during disk partitioning; a simple and effective mechanism for kernel upgrades and downgrades; the ability to set the package provider using yum, rpm or apt; easy-to-use virtual machine installation support, including bonding and specific Ethernet configurations; and scalability for managing thousands of nodes and parallel installations. This paper describes the results of the comparison and the tests carried out to verify that the new system meets the requirements and is suitable for the INFN-T1 environment.

  7. The USGODAE Monterey Data Server

    NASA Astrophysics Data System (ADS)

    Sharfstein, P. J.; Dimitriou, D.; Hankin, S. C.

    2004-12-01

    With oversight from the U.S. Global Ocean Data Assimilation Experiment (GODAE) Steering Committee and funding from the Office of Naval Research, the USGODAE Monterey Data Server has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. Support of the Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Because of the broad range and diverse formats of data used by the GODAE community, presenting data with a consistent interface and ensuring its availability in standard formats is a primary challenge faced by the USGODAE Server project. To this end, all USGODAE data sets are available via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC). To provide surface forcing, fluxes, and boundary conditions for ocean model research, USGODAE serves global data from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and regional data from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Global meteorological data and observational data from the FNMOC Ocean QC process are posted in near real-time to USGODAE. These include T/S profiles, in-situ and satellite sea surface temperature (SST), satellite altimetry, and SSM/I sea ice. They contain all of the unclassified in-situ and satellite observations used to initialize the FNMOC NOGAPS model. Also, the Naval Oceanographic Office provides daily satellite SST and SSH retrievals to USGODAE. The USGODAE Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo T/S profiling float data. USGODAE Argo data are served through OPeNDAP and LAS, providing complete integration into NVODS and the IOOS/DMAC. Due to its high reliability, ease of data access, and increasing breadth of data, the USGODAE Server is becoming an invaluable resource for both the GODAE community and the general oceanographic community. Continued integration of model, forcing, and in-situ data sets from providers throughout the world is making the USGODAE Monterey Data Server a key part of the international GODAE project.
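
    For end users, the practical payoff of serving data through OPeNDAP is programmatic subsetting without downloading whole files. A typical access pattern is sketched below with the netCDF4 library, assuming a build with OPeNDAP support; the dataset URL and variable name are placeholders, not actual USGODAE endpoints.

        from netCDF4 import Dataset

        # Placeholder OPeNDAP URL; a real one would point at a USGODAE set.
        url = 'http://usgodae.example.org/dods/argo_profiles'
        with Dataset(url) as ds:
            # Only the requested slice crosses the network, not the file.
            temperature = ds.variables['temp'][0, :10]
            print(temperature)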

  8. A microprocessor card software server to support the Quebec health microprocessor card project.

    PubMed

    Durant, P; Bérubé, J; Lavoie, G; Gamache, A; Ardouin, P; Papillon, M J; Fortin, J P

    1995-01-01

    The Quebec Health Smart Card Project is advocating the use of a memory card software server [1] (SCAM) to implement a portable medical record (PMR) on a smart card. The PMR is viewed as an object that can be manipulated by SCAM's services; in fact, we can speak of a pseudo-object-oriented approach. This software architecture provides a flexible and evolvable way to manage and optimize the PMR. SCAM is a generic software server; it can manage smart cards as well as optical (laser) cards or other types of memory cards. In the specific case of the Quebec Health Card Project, however, SCAM is used to provide services between physicians' or pharmacists' software and IBM smart card technology. We describe the concepts and techniques used to provide a generic environment for dealing with smart cards (and, more generally, with memory cards), to obtain a dynamic and evolvable PMR, to raise the system's overall security level and data integrity, to significantly optimize the management of the PMR, and to provide statistical information about the use of the PMR.

  9. Secure Nearest Neighbor Query on Crowd-Sensing Data

    PubMed Central

    Cheng, Ke; Wang, Liangmin; Zhong, Hong

    2016-01-01

    Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor from an outsourced cloud server. However, crowd-sensing data has changed the structure of previous big data systems: on the one hand, the sensing terminals acting as data owners are numerous and mutually mistrustful, while, on the other hand, in most cases the terminals find it difficult to perform many security operations due to computation and storage constraints. In light of this Multi-Owner and Multi-User (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on a proxy server architecture, constructed from secure two-party computation protocols and a secure Voronoi diagram algorithm. It not only preserves data confidentiality and query privacy but also effectively resists collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between security and query performance compared to other schemes. PMID:27669253

  10. Secure Nearest Neighbor Query on Crowd-Sensing Data.

    PubMed

    Cheng, Ke; Wang, Liangmin; Zhong, Hong

    2016-09-22

    Nearest neighbor queries are fundamental in location-based services, and secure nearest neighbor queries mainly focus on how to securely and quickly retrieve the nearest neighbor from an outsourced cloud server. However, crowd-sensing data has changed the structure of previous big data systems: on the one hand, the sensing terminals acting as data owners are numerous and mutually mistrustful, while, on the other hand, in most cases the terminals find it difficult to perform many security operations due to computation and storage constraints. In light of this Multi-Owner and Multi-User (MOMU) situation in the crowd-sensing data cloud environment, this paper presents a secure nearest neighbor query scheme based on a proxy server architecture, constructed from secure two-party computation protocols and a secure Voronoi diagram algorithm. It not only preserves data confidentiality and query privacy but also effectively resists collusion between the cloud server and the data owners or users. Finally, extensive theoretical and experimental evaluations are presented to show that our proposed scheme achieves a superior balance between security and query performance compared to other schemes.

  11. Cloud-based Predictive Modeling System and its Application to Asthma Readmission Prediction

    PubMed Central

    Chen, Robert; Su, Hang; Khalilia, Mohammed; Lin, Sizhe; Peng, Yue; Davis, Tod; Hirsh, Daniel A; Searles, Elizabeth; Tejedor-Sojo, Javier; Thompson, Michael; Sun, Jimeng

    2015-01-01

    The predictive modeling process is time consuming and requires clinical researchers to handle complex electronic health record (EHR) data in restricted computational environments. To address this problem, we implemented a cloud-based predictive modeling system via a hybrid setup combining a secure private server with the Amazon Web Services (AWS) Elastic MapReduce platform. EHR data is preprocessed on a private server and the resulting de-identified event sequences are hosted on AWS. Based on user-specified modeling configurations, an on-demand web service launches a cluster of Elastic Compute Cloud (EC2) instances on AWS to perform feature selection and classification algorithms in a distributed fashion. Afterwards, the secure private server aggregates the results and displays them via interactive visualization. We tested the system on a pediatric asthma readmission task on a de-identified EHR dataset of 2,967 patients. We also conducted a larger-scale experiment on the CMS Linkable 2008–2010 Medicare Data Entrepreneurs’ Synthetic Public Use File dataset of 2 million patients, which achieves over 25-fold speedup compared to sequential execution. PMID:26958172

  12. Automatic Web-based Calibration of Network-Capable Shipboard Sensors

    DTIC Science & Technology

    2007-09-01

    Keywords: Server, Java, Applet, and Servlet. (Only front-matter fragments survive in this record, including table-of-contents entries for a Sensor Applet section and a Java Servlet section, Table 1 "Required System Environment Variables for Java Servlet Development," and Table 2 "Payload Data Format of the POST Requests from...")

  13. Architecture-Based Reliability Analysis of Web Services

    ERIC Educational Resources Information Center

    Rahmani, Cobra Mariam

    2012-01-01

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…

  14. Z39.50 and the Scholar's Workstation Concept.

    ERIC Educational Resources Information Center

    Phillips, Gary Lee

    1992-01-01

    Examines the potential application of the American National Standards Institute (ANSI)/National Information Standards Organization (NISO) Z39.50 library networking protocol as a client/server environment for a scholar's workstation. Computer networking models are described, and linking the workstation to an online public access catalog (OPAC) is…

  15. A Virtual Good Idea

    ERIC Educational Resources Information Center

    Bolch, Matt

    2009-01-01

    School districts across the country have always had to do more with less. Funding goes only so far, leaving administrators and IT staff to find innovative ways to save money while maintaining a high level of academic quality. Creating virtual servers accomplishes both tasks, district technology personnel say. Virtual environments not only allow…

  16. Hypertext-based computer vision teaching packages

    NASA Astrophysics Data System (ADS)

    Marshall, A. David

    1994-10-01

    The World Wide Web Initiative has provided a means of delivering hypertext- and multimedia-based information across the whole Internet, and many applications have been developed on HTTP servers. At Cardiff we have developed an HTTP-based hypertext multimedia server, the Cardiff Information Server, using the widely available Mosaic system. The server provides a variety of information, ranging from teaching modules, on-line documentation and timetables for departmental activities to more light-hearted hobby interests. One important and novel development of the server has been its courseware facilities, ranging from on-line lecture notes, exercises and their solutions to more interactive teaching packages. A variety of disciplines have benefitted, notably Computer Vision and Image Processing, but also C programming, X Windows, Computer Graphics and Parallel Computing. This paper addresses the implementation of the Computer Vision and Image Processing packages and the advantages gained from using a hypertext-based system, and relates practical experiences of using the packages in a class environment. The paper addresses how best to provide information in such a hypertext-based system and how interactive image processing packages can be developed and integrated into courseware. The suite of tools developed facilitates a flexible and powerful courseware package that has proved popular in the classroom and over the Internet. The paper also details many possible future developments. One of the key points raised is that Mosaic's hypertext language (HTML) is extremely powerful and yet relatively straightforward to use. It is also possible to link in Unix calls so that programs and shells can be executed, providing a powerful suite of utilities that can be exploited to develop many packages.

  17. CINTEX: International Interoperability Extensions to EOSDIS

    NASA Technical Reports Server (NTRS)

    Graves, Sara J.

    1997-01-01

    A large part of the research under this cooperative agreement involved working with representatives of the DLR, NASDA, EDC, and NOAA-SAA data centers to propose a set of enhancements and additions to the EOSDIS Version 0 Information Management System (V0 IMS) Client/Server Message Protocol. Helen Conover of ITSL led this effort to provide for an additional geographic search specification (WRS Path/Row), data set- and data center-specific search criteria, search by granule ID, specification of data granule subsetting requests, data set-based ordering, and the addition of URLs to result messages. The V0 IMS Server Cookbook is an evolving document, providing resources and information to data centers setting up a V0 IMS Server. Under this Cooperative Agreement, Helen Conover revised, reorganized, and expanded this document, and converted it to HTML. Ms. Conover has also worked extensively with the IRE RAS data center, CPSSI, in Russia. She served as the primary IMS contact for IRE-CPSSI and as IRE-CPSSI's liaison to other members of the IMS and Web Gateway (WG) development teams. Her documentation of IMS problems in the IRE environment (Sun servers and low network bandwidth) led to a general restructuring of the V0 IMS Client message polling system, to the benefit of all IMS participants. In addition to the IMS server software and documentation, which are generally available to CINTEX sites, Ms. Conover also provided database design documentation and consulting, order tracking software, and hands-on testing and debug assistance to IRE. In the final pre-operational phase of IRE-CPSSI development, she also supplied information on configuration management, including ideas and processes in place at the Global Hydrology Resource Center (GHRC), an EOSDIS data center operated by ITSL.

  18. Ecological Momentary Assessment in Behavioral Research: Addressing Technological and Human Participant Challenges

    PubMed Central

    Shiffman, Saul; Music, Edvin; Styn, Mindi A; Kriska, Andrea; Smailagic, Asim; Siewiorek, Daniel; Ewing, Linda J; Chasens, Eileen; French, Brian; Mancino, Juliet; Mendez, Dara; Strollo, Patrick; Rathbun, Stephen L

    2017-01-01

    Background Ecological momentary assessment (EMA) assesses individuals’ current experiences, behaviors, and moods as they occur in real time and in their natural environment. EMA studies, particularly those of longer duration, are complex and require an infrastructure to support the data flow and monitoring of EMA completion. Objective Our objective is to provide a practical guide to developing and implementing an EMA study, with a focus on the methods and logistics of conducting such a study. Methods The EMPOWER study was a 12-month study that used EMA to examine the triggers of lapses and relapse following intentional weight loss. We report on several elements that informed the implementation of the EMPOWER study: (1) a series of pilot studies, (2) the EMPOWER study’s infrastructure, (3) training of study participants in use of smartphones and the EMA protocol, and (4) strategies used to enhance adherence to completing EMA surveys. Results The study enrolled 151 adults and had an 87.4% (132/151) retention rate at 12 months. Our learning experiences in developing the infrastructure to support EMA assessments for the 12-month study spanned several topic areas: the optimal frequency of EMA prompts to maximize data collection without overburdening participants; the timing and scheduling of EMA prompts; technological lessons to support a longitudinal study, such as proper communication between the Android smartphone, the Web server, and the database server; and use of a phone that provided access to the system’s functionality for EMA data collection to avoid loss of data and minimize the impact of loss of network connectivity. These were especially important in a 1-year study with participants who might travel, and they also protected the data collection from any server-side failure. Regular monitoring of participants’ response to EMA prompts was critical, so we built in incentives to enhance completion of EMA surveys. During the first 6 months of the 12-month study interval, adherence to completing EMA surveys was high, with 88.3% (66,978/75,888) completion of random assessments and around 90% (23,411/25,929 and 23,343/26,010) completion of time-contingent assessments, despite the duration of EMA data collection and challenges with implementation. Conclusions This work informed us of the necessary preliminary steps to plan and prepare a longitudinal study using smartphone technology and the critical elements to ensure participant engagement in the potentially burdensome protocol, which spanned 12 months. While this was a technology-supported and -programmed study, it required close oversight to ensure all elements were functioning correctly, particularly once human participants became involved. PMID:28298264

  19. Designing of peptides with desired half-life in intestine-like environment.

    PubMed

    Sharma, Arun; Singla, Deepak; Rashid, Mamoon; Raghava, Gajendra Pal Singh

    2014-08-20

    In the past, a number of peptides have been reported to possess highly diverse properties, ranging from cell-penetrating, tumor-homing, anticancer, anti-hypertensive and antiviral to antimicrobial. Owing to their excellent specificity, low toxicity, rich chemical diversity and availability from natural sources, the FDA has successfully approved a number of peptide-based drugs, and several more are in various stages of drug development. Though peptides are proven good drug candidates, their use is still hindered mainly by their high susceptibility to protease degradation. We have developed an in silico method to predict the half-life of peptides in an intestine-like environment and to design better peptides with optimized physicochemical properties and half-life. In this study, we used 10mer (HL10) and 16mer (HL16) peptide datasets to develop prediction models for peptide half-life in an intestine-like environment. First, SVM-based models were developed on the HL10 dataset, achieving maximum correlations R/R2 of 0.57/0.32, 0.68/0.46, and 0.69/0.47 using amino acid, dipeptide and tripeptide composition, respectively. Secondly, models developed on the HL16 dataset showed maximum R/R2 of 0.91/0.82, 0.90/0.39, and 0.90/0.31 using amino acid, dipeptide and tripeptide composition, respectively. Furthermore, models developed on selected features achieved correlations (R) of 0.70 and 0.98 on the HL10 and HL16 datasets, respectively. Preliminary analysis suggests a role for charged residues and amino acid size in peptide half-life/stability. Based on the above models, we have developed a web server named HLP (Half Life Prediction) for predicting and designing peptides with a desired half-life. The web server provides three facilities: (i) half-life prediction, (ii) calculation of physicochemical properties, and (iii) design of mutant peptides. In summary, this study describes a web server, HLP, developed to assist the scientific community in predicting the intestinal half-life of peptides and in designing mutant peptides with better half-life and physicochemical properties. HLP models were trained using a dataset of peptides whose half-lives have been determined experimentally in a crude intestinal protease preparation. Thus, the HLP server will help in designing peptides with the potential to be administered via the oral route (http://www.imtech.res.in/raghava/hlp/).
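
    The composition-based models in HLP can be emulated in outline: turn each peptide into an amino-acid-composition vector and fit a support vector regressor against measured half-lives. The sketch below uses scikit-learn with made-up toy peptides and values; it mirrors the general approach, not the trained HLP models themselves.

        from sklearn.svm import SVR

        AMINO_ACIDS = 'ACDEFGHIKLMNPQRSTVWY'

        def composition(peptide):
            """20-dimensional amino acid composition feature vector."""
            return [peptide.count(aa) / len(peptide) for aa in AMINO_ACIDS]

        # Toy peptides and half-lives (hours); the real training data came
        # from experiments in crude intestinal protease preparations.
        peptides = ['GLFDIVKKVV', 'ACDKLMNPQR', 'WWRRKKHHLL']
        half_life = [0.6, 1.4, 2.1]

        model = SVR(kernel='rbf').fit(
            [composition(p) for p in peptides], half_life)
        print(model.predict([composition('GIGKFLHSAK')]))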

  20. Utilization of Virtual Server Technology in Mission Operations

    NASA Technical Reports Server (NTRS)

    Felton, Larry; Lankford, Kimberly; Pitts, R. Lee; Pruitt, Robert W.

    2010-01-01

    Virtualization provides the opportunity to continue to do "more with less": more computing power with fewer physical boxes, thus reducing the overall hardware footprint, power and cooling requirements, software licenses, and their associated costs. This paper explores the tremendous advantages, and any disadvantages, of virtualization in all of the environments associated with the software and systems development-to-operations flow. It includes the use and benefits of the Intelligent Platform Management Interface (IPMI) specification, and identifies lessons learned concerning hardware and network configurations. Using the Huntsville Operations Support Center (HOSC) at NASA Marshall Space Flight Center as an example, we demonstrate that deploying virtualized servers as a means of managing computing resources is applicable and beneficial to many areas of application, up to and including flight operations.
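
    One concrete benefit of IPMI in a virtualized server farm is scripted out-of-band health checking of the physical hosts, independent of the operating systems running on them. The sketch below wraps the standard ipmitool CLI from Python; the BMC address and credentials are placeholders, and the paper does not describe this particular tooling.

        import subprocess

        def chassis_status(bmc_host, user, password):
            """Query a host's BMC over the LAN using ipmitool."""
            out = subprocess.run(
                ['ipmitool', '-I', 'lanplus', '-H', bmc_host,
                 '-U', user, '-P', password, 'chassis', 'status'],
                capture_output=True, text=True, check=True)
            return out.stdout

        # Example: print(chassis_status('bmc01.example.org', 'admin', 'pw'))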
