Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-24
... Communications System Server Software, Wireless Handheld Devices and Battery Packs; Notice of Investigation..., wireless handheld devices and battery packs by reason of infringement of certain claims of U.S. Patent Nos... certain wireless communications system server software, wireless handheld devices or battery packs that...
CDC WONDER: a cooperative processing architecture for public health.
Friede, A; Rosen, D H; Reid, J A
1994-01-01
CDC WONDER is an information management architecture designed for public health. It provides access to information and communications without the user's needing to know the location of data or communication pathways and mechanisms. CDC WONDER users have access to extractions from some 40 databases; electronic mail (e-mail); and surveillance data processing. System components include the Remote Client, the Communications Server, the Queue Managers, and Data Servers and Process Servers. The Remote Client software resides in the user's machine; other components are at the Centers for Disease Control and Prevention (CDC). The Remote Client, the Communications Server, and the Applications Server provide access to the information and functions in the Data Servers and Process Servers. The system architecture is based on cooperative processing, and components are coupled via pure message passing, using several protocols. This architecture allows flexibility in the choice of hardware and software. One system limitation is that final results from some subsystems are obtained slowly. Although designed for public health, CDC WONDER could be useful for other disciplines that need flexible, integrated information exchange. PMID:7719813
Horton, John J.
2006-04-11
A system and method of maintaining communication between a computer and a server, the server being in communication with the computer via xDSL service or dial-up modem service, with xDSL service being the default mode of communication, the method including sending a request to the server via xDSL service to which the server should respond and determining whether a response has been received. If no response has been received, the method includes displaying on the computer a message (i) indicating that xDSL service has failed and (ii) offering to establish communication between the computer and the server via the dial-up modem, and thereafter changing the default mode of communication between the computer and the server to dial-up modem service. In a preferred embodiment, an xDSL service provider monitors dial-up modem communications and determines whether a computer dialing in normally establishes communication with the server via xDSL service. The xDSL service provider can thus quickly and easily detect xDSL failures.
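A minimal sketch of the fallback decision described in the abstract above, assuming a hypothetical probe URL and helper names; the timeout and function names are illustrative, not taken from the patent.

```python
import urllib.request
import urllib.error

PROBE_URL = "http://example-isp.test/probe"  # hypothetical health-check endpoint

def xdsl_responds(timeout_s: float = 5.0) -> bool:
    """Send a request over the (default) xDSL link and report whether a reply arrived."""
    try:
        with urllib.request.urlopen(PROBE_URL, timeout=timeout_s):
            return True
    except (urllib.error.URLError, OSError):
        return False

def choose_link() -> str:
    """Return the mode of communication to use, falling back to dial-up on xDSL failure."""
    if xdsl_responds():
        return "xdsl"
    print("xDSL service appears to have failed.")
    print("Offering to establish communication via the dial-up modem instead.")
    return "dialup"  # becomes the new default until xDSL recovers

if __name__ == "__main__":
    print("Selected link:", choose_link())
```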
Java RMI Software Technology for the Payload Planning System of the International Space Station
NASA Technical Reports Server (NTRS)
Bryant, Barrett R.
1999-01-01
The Payload Planning System is for experiment planning on the International Space Station. The planning process has a number of different aspects which need to be stored in a database which is then used to generate reports on the planning process in a variety of formats. This process is currently structured as a 3-tier client/server software architecture comprised of a Java applet at the front end, a Java server in the middle, and an Oracle database in the third tier. This system presently uses CGI, the Common Gateway Interface, to communicate between the user-interface and server tiers and Active Data Objects (ADO) to communicate between the server and database tiers. This project investigated other methods and tools for performing the communications between the three tiers of the current system so that both the system performance and software development time could be improved. We specifically found that for the hardware and software platforms that PPS is required to run on, the best solution is to use Java Remote Method Invocation (RMI) for communication between the client and server and SQLJ (Structured Query Language for Java) for server interaction with the database. Prototype implementations showed that RMI combined with SQLJ significantly improved performance and also greatly facilitated construction of the communication software.
Process evaluation distributed system
NASA Technical Reports Server (NTRS)
Moffatt, Christopher L. (Inventor)
2006-01-01
The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server and provides observation criteria information to it. The process evaluation module is in communication with the database server, obtains the observation criteria information from it, and collects process data based on that information. The process evaluation module utilizes a personal digital assistant (PDA). The data display module, also in communication with the database server, includes a website for viewing collected process data in a desired metrics form and provides editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module minimizes the requirement for manual input of the collected process data.
Liu, Yan-Lin; Shih, Cheng-Ting; Chang, Yuan-Jen; Chang, Shu-Jun; Wu, Jay
2014-01-01
The rapid development of picture archiving and communication systems (PACSs) thoroughly changes the way of medical informatics communication and management. However, as the scale of a hospital's operations increases, the large amount of digital images transferred in the network inevitably decreases system efficiency. In this study, a server cluster consisting of two server nodes was constructed. Network load balancing (NLB), distributed file system (DFS), and structured query language (SQL) duplication services were installed. A total of 1 to 16 workstations were used to transfer computed radiography (CR), computed tomography (CT), and magnetic resonance (MR) images simultaneously to simulate the clinical situation. The average transmission rate (ATR) was analyzed between the cluster and noncluster servers. In the download scenario, the ATRs of CR, CT, and MR images increased by 44.3%, 56.6%, and 100.9%, respectively, when using the server cluster, whereas the ATRs increased by 23.0%, 39.2%, and 24.9% in the upload scenario. In the mix scenario, the transmission performance increased by 45.2% when using eight computer units. The fault tolerance mechanisms of the server cluster maintained the system availability and image integrity. The server cluster can improve the transmission efficiency while maintaining high reliability and continuous availability in a healthcare environment.
Filmless PACS in a multiple facility environment
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.
1996-05-01
A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single facility case, requires a means to work effectively and seamlessly across moderate to low speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi-facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World-Wide-Web can be used to generate worklists and patient folders across facilities. A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.
Naver: a PC-cluster-based VR system
NASA Astrophysics Data System (ADS)
Park, ChangHoon; Ko, HeeDong; Kim, TaiYun
2003-04-01
In this paper, we present NAVER, a new framework for virtual reality applications. NAVER is based on a cluster of low-cost personal computers. Its goal is to provide a flexible, extensible, scalable and re-configurable framework for virtual environments, defined as the integration of a 3D virtual space with external modules. External modules are the various input and output devices and applications on remote hosts. From the system's point of view, the personal computers are divided into three servers according to their specific functions: the Render Server, the Device Server and the Control Server. The Device Server contains external modules requiring event-based communication for the integration, while the Control Server contains external modules requiring synchronous communication every frame. The Render Server consists of five managers: the Scenario Manager, Event Manager, Command Manager, Interaction Manager and Sync Manager. These managers support the declaration and operation of the virtual environment and its integration with external modules on remote servers.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-23
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-706] In the Matter of Certain Wireless Communications System Server Software, Wireless Handheld Devices and Battery Packs: Notice of Commission... handheld devices and battery packs by reason of infringement of certain claims of U.S. Patent Nos. 5,319...
NASA Technical Reports Server (NTRS)
George, Jude (Inventor); Schlecht, Leslie (Inventor); McCabe, James D. (Inventor); LeKashman, John Jr. (Inventor)
1998-01-01
A network management system has SNMP agents distributed at one or more sites, an input output module at each site, and a server module located at a selected site for communicating with input output modules, each of which is configured for both SNMP and HNMP communications. The server module is configured exclusively for HNMP communications, and it communicates with each input output module according to the HNMP. Non-iconified, informationally complete views are provided of network elements to aid in network management.
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V
2001-06-01
In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing fiber-channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admissions, discharges, and transfers (ADT)/demographics, orders, appointment notifications, doctor updates, and results.
An energy-efficient architecture for internet of things systems
NASA Astrophysics Data System (ADS)
De Rango, Floriano; Barletta, Domenico; Imbrogno, Alessandro
2016-05-01
In this paper, some of the motivations for energy-efficient communications in wireless systems are described by highlighting emerging trends and identifying challenges that need to be addressed to enable novel, scalable and energy-efficient communications. An architecture for Internet of Things systems is then presented, whose purpose is to minimize energy consumption by communication devices, protocols, networks, end-user systems and data centers. Electrical devices with multiple communication interfaces, such as RF or WiFi, were designed using open source technology and analyzed under different working conditions. Some devices are programmed to communicate directly with a web server, others to communicate only with a special device that acts as a bridge between those devices and the web server. Communication parameters and device status are changed dynamically according to different scenarios to obtain the greatest benefits in terms of energy cost and battery lifetime. The way devices communicate with the web server or with each other, and the way they obtain the information they need to stay up to date, therefore change dynamically to guarantee the lowest energy consumption, a long battery lifetime, the fastest responses and feedback, and the best quality of service and communication for end users and the devices within the system.
Designing communication and remote controlling of virtual instrument network system
NASA Astrophysics Data System (ADS)
Lei, Lin; Wang, Houjun; Zhou, Xue; Zhou, Wenjian
2005-01-01
In this paper, a virtual instrument network over the LAN, and ultimately remote control of virtual instruments, is realized based on virtual instrument technology and the LabWindows/CVI software platform. The virtual instrument network system is made up of three subsystems: a server subsystem, a telnet client subsystem and a local instrument control subsystem. The paper introduces the virtual instrument network structure based on LabWindows in detail. Essential techniques are introduced, including the application design of virtual instrument network communication, the client/server programming model, the realization of communication between a remote PC and the server, the transfer of workstation control authority, and the server program. The virtual instrument network can also be connected to the Internet. These techniques have been verified to be of practical value through their application in an electronic measurement virtual instrument network that has already been built. Experiments and applications validate that this design is effective.
NASA Technical Reports Server (NTRS)
Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)
1998-01-01
The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery, but it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
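The contrast drawn above between client-server and publish-and-subscribe delivery can be illustrated with a minimal in-process sketch; this is a generic illustration of the pattern, not the NDDS or BridgeVIEW APIs, and the topic names are made up.

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class Broker:
    """Minimal publish-and-subscribe hub: publishers and subscribers never address each other directly."""

    def __init__(self) -> None:
        self._subscribers: DefaultDict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, sample: dict) -> None:
        # Every node subscribed to the topic receives the sample as soon as it is published.
        for callback in self._subscribers[topic]:
            callback(sample)

if __name__ == "__main__":
    broker = Broker()
    broker.subscribe("incinerator/temperature", lambda s: print("controller got", s))
    broker.subscribe("incinerator/temperature", lambda s: print("logger got", s))
    broker.publish("incinerator/temperature", {"celsius": 412.5, "t": 0.1})
```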
Patients’ Data Management System Protected by Identity-Based Authentication and Key Exchange
Rivero-García, Alexandra; Santos-González, Iván; Hernández-Goya, Candelaria; Caballero-Gil, Pino; Yung, Moti
2017-01-01
A secure and distributed framework for the management of patients’ information in emergency and hospitalization services is proposed here in order to seek improvements in efficiency and security in this important area. In particular, confidentiality protection, mutual authentication, and automatic identification of patients are provided. The proposed system is based on two types of devices: Near Field Communication (NFC) wristbands assigned to patients, and mobile devices assigned to medical staff. Two other main elements of the system are an intermediate server to manage the involved data, and a second server with a private key generator to define the information required to protect communications. An identity-based authentication and key exchange scheme is essential to provide confidential communication and mutual authentication between the medical staff and the private key generator through an intermediate server. The identification of patients is carried out through a keyed-hash message authentication code. Thanks to the combination of the aforementioned tools, a secure alternative mobile health (mHealth) scheme for managing patients’ data is defined for emergency and hospitalization services. Different parts of the proposed system have been implemented, including mobile application, intermediate server, private key generator and communication channels. Apart from that, several simulations have been performed, and, compared with the current system, significant improvements in efficiency have been observed. PMID:28362328
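A minimal sketch of the keyed-hash patient identification step mentioned above, using Python's standard hmac module; the key handling and tag format are assumptions for illustration, not the authors' scheme.

```python
import hmac
import hashlib
import secrets

def make_patient_tag(patient_id: str, key: bytes) -> str:
    """Derive the identifier carried by the NFC wristband as HMAC-SHA256 over the patient id."""
    return hmac.new(key, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_patient_tag(patient_id: str, tag: str, key: bytes) -> bool:
    """Constant-time comparison so the check does not leak information about the expected tag."""
    expected = make_patient_tag(patient_id, key)
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    key = secrets.token_bytes(32)          # shared between wristband issuer and verifier
    tag = make_patient_tag("patient-0042", key)
    print(verify_patient_tag("patient-0042", tag, key))  # True
    print(verify_patient_tag("patient-0043", tag, key))  # False
```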
Development of a novel SCADA system for laboratory testing.
Patel, M; Cole, G R; Pryor, T L; Wilmot, N A
2004-07-01
This document summarizes the supervisory control and data acquisition (SCADA) system that allows communication with, and controlling the output of, various I/O devices in the renewable energy systems and components test facility RESLab. This SCADA system differs from traditional SCADA systems in that it supports a continuously changing operating environment depending on the test to be performed. The SCADA System is based on the concept of having one Master I/O Server and multiple client computer systems. This paper describes the main features and advantages of this dynamic SCADA system, the connections of various field devices to the master I/O server, the device servers, and numerous software features used in the system. The system is based on the graphical programming language "LabVIEW" and its "Datalogging and Supervisory Control" (DSC) module. The DSC module supports a real-time database called the "tag engine," which performs the I/O operations with all field devices attached to the master I/O server and communications with the other tag engines running on the client computers connected via a local area network. Generic and detailed communication block diagrams illustrating the hierarchical structure of this SCADA system are presented. The flow diagram outlining a complete test performed using this system in one of its standard configurations is described.
Maitra, Tanmoy; Giri, Debasis
2014-12-01
Medical organizations have introduced the Telecare Medical Information System (TMIS) to provide a reliable facility by which a patient who is unable to visit a doctor in a critical or urgent period can communicate with a doctor through a medical server via the Internet from home. An authentication mechanism is needed in TMIS to hide the secret information of both parties, namely the server and the patient. Recent research uses a patient's biometric information as well as a password to design remote user authentication schemes that enhance the security level. In a single-server environment, one server is responsible for providing services to all authorized remote patients. However, a problem arises if a patient wishes to access several branch servers: he/she needs to register with each branch server individually. In 2014, Chuang and Chen proposed a remote user authentication scheme for a multi-server environment. In this paper, we show that in their scheme an unregistered adversary can successfully log in to the system as a valid patient. To resist these weaknesses, we propose an authentication scheme for TMIS in a multi-server environment in which patients register only once with a root telecare server called the registration center (RC) and then obtain services from all telecare branch servers through their registered smart card. Security analysis and comparison show that our proposed scheme provides better security with low computational and communication cost.
Remote Adaptive Communication System
2001-10-25
manage several different devices using the software tool. A. Client/Server Architecture: The architecture we are proposing is based on the Client...Server model (see figure 3). We want both client and server to be accessible from anywhere via the Internet. The computer, acting as a server, is in... the other hand, each of the client applications will act as sender or receiver, depending on the associated interface: user interface or device
The Development of a Remote Patient Monitoring System using Java-enabled Mobile Phones.
Kogure, Y; Matsuoka, H; Kinouchi, Y; Akutagawa, M
2005-01-01
A remote patient monitoring system is described. The system monitors information on multiple patients in the ICU/CCU via 3G mobile phones. Conventionally, various patient information, such as vital signs, is collected and stored on patient information systems. In the proposed system, the patient information is re-collected by a remote information server and transported to mobile phones. The server works as a gateway between the hospital intranet and public networks. The information provided by the server consists of graphs and text data. Doctors can browse a patient's information on their mobile phones via the server. A custom Java application is used to browse these data. In this study, the information server and Java application are developed, and communication between the server and mobile phone in a model environment is confirmed. Applying this system to practical patient information system products is future work.
A group communication approach for mobile computing mobile channel: An ISIS tool for mobile services
NASA Astrophysics Data System (ADS)
Cho, Kenjiro; Birman, Kenneth P.
1994-05-01
This paper examines group communication as an infrastructure to support mobility of users, and presents a simple scheme to support user mobility by means of switching a control point between replicated servers. We describe the design and implementation of a set of tools, called Mobile Channel, for use with the ISIS system. Mobile Channel is based on a combination of the two replication schemes: the primary-backup approach and the state machine approach. Mobile Channel implements a reliable one-to-many FIFO channel, in which a mobile client sees a single reliable server; servers, acting as a state machine, see multicast messages from clients. Migrations of mobile clients are handled as an intentional primary switch, and hand-offs or server failures are completely masked to mobile clients. To achieve high performance, servers are replicated at a sliding-window level. Our scheme provides a simple abstraction of migration, eliminates complicated hand-off protocols, provides fault-tolerance and is implemented within the existing group communication mechanism.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-14
... Market Maker Standard quote server as a gateway for communicating eQuotes to MIAX. Because of the... connect the Limited Service Ports to independent servers that host their eQuote and purge functionality... same server for all of their Market Maker quoting activity. Currently, Market Makers in the MIAX System...
Wang, Chunliang; Ritter, Felix; Smedby, Orjan
2010-07-01
To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but is still able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added <10 ms of processing time, while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
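A minimal sketch of passing image data between two local processes through shared memory, using Python's multiprocessing.shared_memory; the segment name and array shape are illustrative and this is not the OsiriX-MeVisLab protocol itself.

```python
import numpy as np
from multiprocessing import shared_memory

# Producer side (e.g., the workstation): place an image into a named shared-memory segment.
image = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)  # fake 16-bit slice
shm = shared_memory.SharedMemory(create=True, size=image.nbytes, name="demo_image")
np.ndarray(image.shape, dtype=image.dtype, buffer=shm.buf)[:] = image

# Consumer side (e.g., the processing server): attach to the same segment by name.
shm_view = shared_memory.SharedMemory(name="demo_image")
received = np.ndarray((512, 512), dtype=np.uint16, buffer=shm_view.buf).copy()
print("mean pixel value:", received.mean())

# Clean up: close both handles, then unlink the segment once.
shm_view.close()
shm.close()
shm.unlink()
```

The appeal of this route, as the abstract notes, is that only a small handle (the segment name) crosses the process boundary, so large images avoid serialization and copying through pipes or sockets.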
Implementation experience of a patient monitoring solution based on end-to-end standards.
Martinez, I; Fernandez, J; Galarraga, M; Serrano, L; de Toledo, P; Escayola, J; Jimenez-Fernandez, S; Led, S; Martinez-Espronceda, M; Garcia, J
2007-01-01
This paper presents a proof-of-concept design of a patient monitoring solution for Intensive Care Unit (ICU). It is end-to-end standards-based, using ISO/IEEE 11073 (X73) in the bedside environment and EN13606 to communicate the information to an Electronic Healthcare Record (EHR) server. At the bedside end a plug-and-play sensor network is implemented, which communicates with a gateway that collects the medical information and sends it to a monitoring server. At this point the server transforms the data frame into an EN13606 extract, to be stored on the EHR server. The presented system has been tested in a laboratory environment to demonstrate the feasibility of this end-to-end standards-based solution.
Application of Aquaculture Monitoring System Based on CC2530
NASA Astrophysics Data System (ADS)
Chen, H. L.; Liu, X. Q.
In order to improve the level of intelligence of aquaculture technology, this paper puts forward a remote wireless monitoring system based on ZigBee technology, GPRS technology and the Android mobile phone platform. The system is composed of a wireless sensor network (WSN), a GPRS module, a PC server, and an Android client. The WSN was set up with CC2530 chips based on the ZigBee protocol to realize the collection of water quality parameters such as water level, temperature, pH and dissolved oxygen. The GPRS module realizes remote communication between the WSN and the PC server. The Android client communicates with the server to monitor the water quality. PID (proportional, integral, derivative) control is adopted in the control part: control commands from the Android mobile phone are sent to the server, and the server forwards them to the lower machine to control the water level regulating valve and the oxygenation pump. After practical testing of the system in Liyang, Jiangsu province, China, the temperature measurement accuracy reaches 0.5°C, the pH measurement accuracy reaches 0.3, the water level control precision can be controlled within ±3 cm, and the dissolved oxygen control precision can be controlled within ±0.3 mg/L. All the indexes meet the requirements, so this system is well suited for aquaculture.
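A minimal discrete-time PID controller sketch of the kind referenced above; the gains, setpoint and dissolved-oxygen reading are illustrative values, not those used in the reported system.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float) -> None:
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self._integral = 0.0
        self._prev_error = 0.0

    def update(self, measurement: float, dt: float) -> float:
        error = self.setpoint - measurement
        self._integral += error * dt
        derivative = (error - self._prev_error) / dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

if __name__ == "__main__":
    # Hypothetical dissolved-oxygen loop: drive an oxygenation pump toward 6.0 mg/L.
    controller = PID(kp=2.0, ki=0.1, kd=0.5, setpoint=6.0)
    reading = 4.2                      # current dissolved oxygen in mg/L
    pump_command = controller.update(reading, dt=1.0)
    print("pump command:", pump_command)
```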
Multi-board kernel communication using socket programming for embedded applications
NASA Astrophysics Data System (ADS)
Mishra, Ashish; Girdhar, Neha; Krishnia, Nikita
2016-03-01
In large application projects, there is often a need to communicate between two different processors or two different kernels. The aim of this paper is to communicate between two different kernels using an efficient method. The TCP/IP protocol is implemented to communicate between two boards via the Ethernet port using the lwIP (lightweight IP) stack, a small independent implementation of the TCP/IP stack suitable for use in embedded systems. While retaining TCP/IP functionality, the lwIP stack reduces memory use and code size. In this communication scheme, a Raspberry Pi acts as an active client and a field-programmable gate array (FPGA) board acts as a passive server, and they communicate via Ethernet. Three applications based on TCP/IP client-server network communication have been implemented. The echo server application is used to communicate between the two different kernels on the two boards. Socket programming is used because it is independent of the platform and programming language. TCP transmit and receive throughput test applications are used to measure the maximum data transmission throughput. These applications are based on communication with the open-source tool iperf, which measures the throughput by sending or receiving a constant piece of data to the client or server according to the test application.
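A minimal TCP echo client and server sketch of the kind described above, written with Python's standard socket module rather than lwIP; the host, port and payload are illustrative, and both ends run in one process here only for demonstration.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050  # illustrative address; the boards would use their LAN addresses

def echo_server() -> None:
    """Passive server: accept one connection and echo whatever it receives."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            while data := conn.recv(4096):
                conn.sendall(data)

def echo_client(message: bytes) -> bytes:
    """Active client: connect, send a message, and read back the echo."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(message)
        return cli.recv(4096)

if __name__ == "__main__":
    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)  # give the server a moment to start listening
    print(echo_client(b"hello from the client kernel"))
```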
Group-oriented coordination models for distributed client-server computing
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Hughes, Craig S.
1994-01-01
This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.
An ontology-based telemedicine tasks management system architecture.
Nageba, Ebrahim; Fayn, Jocelyne; Rubel, Paul
2008-01-01
The recent developments in ambient intelligence and ubiquitous computing offer new opportunities for the design of advanced telemedicine systems providing high quality services, anywhere, anytime. In this paper we present an approach for building an ontology-based, task-driven telemedicine system. The architecture is composed of a task management server, a communication server and a knowledge base for enabling decision making that takes into account different telemedical concepts such as actors, resources, services and the Electronic Health Record. The final objective is to provide intelligent management of the different types of available human, material and communication resources.
Secure UNIX socket-based controlling system for high-throughput protein crystallography experiments.
Gaponov, Yurii; Igarashi, Noriyuki; Hiraki, Masahiko; Sasajima, Kumiko; Matsugaki, Naohiro; Suzuki, Mamoru; Kosuge, Takashi; Wakatsuki, Soichi
2004-01-01
A control system for high-throughput protein crystallography experiments has been developed based on multilevel secure (SSL v2/v3) UNIX sockets under the Linux operating system. The main features of protein crystallography experiments (purification, crystallization, loop preparation, data collection, data processing) are dealt with by the software. All information necessary to perform protein crystallography experiments is stored in a relational database (MySQL), except the raw X-ray data, which are stored on a Network File Server. The system consists of several servers and clients. TCP/IP secure UNIX sockets with four predefined behaviors [(a) listening to a request followed by a reply, (b) sending a request and waiting for a reply, (c) listening to a broadcast message, and (d) sending a broadcast message] support communications between all servers and clients, allowing one to control experiments, view data, edit experimental conditions and perform data processing remotely. The interface software is well suited for developing well-organized control software with a hierarchical structure of software units that pass and receive different types of information (Gaponov et al., 1998). All communication is divided into two parts: low and top levels. Large and complicated control tasks are split into several smaller ones, which can be processed by control clients independently. For communicating with experimental equipment (beamline optical elements, robots, specialized experimental equipment, etc.), the STARS server, developed at the Photon Factory, is used (Kosuge et al., 2002). The STARS server allows any application with an open socket to be connected with any other clients that control experimental equipment. The majority of the source code is written in C/C++. GUI modules of the system were built mainly using the Glade user interface builder for GTK+ and GNOME under the Red Hat Linux 7.1 operating system.
NASA Technical Reports Server (NTRS)
Deb, Somnath (Inventor); Ghoshal, Sudipto (Inventor); Malepati, Venkata N. (Inventor); Kleinman, David L. (Inventor); Cavanaugh, Kevin F. (Inventor)
2004-01-01
A network-based diagnosis server for monitoring and diagnosing a system, the server being remote from the system it is observing, comprises a sensor for generating signals indicative of a characteristic of a component of the system, a network-interfaced sensor agent coupled to the sensor for receiving signals therefrom, a broker module coupled to the network for sending signals to and receiving signals from the sensor agent, a handler application connected to the broker module for transmitting signals to and receiving signals therefrom, and a reasoner application in communication with the handler application for processing and responding to signals received from the handler application, wherein the sensor agent, broker module, handler application, and reasoner application operate simultaneously relative to each other, such that the present invention diagnosis server performs continuous monitoring and diagnosing of said components of the system in real time. The diagnosis server is readily adaptable to various different systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bent, John M.; Faibish, Sorin; Pedone, Jr., James M.
A cluster file system is provided having a plurality of distributed metadata servers with shared access to one or more shared low latency persistent key-value metadata stores. A metadata server comprises an abstract storage interface comprising a software interface module that communicates with at least one shared persistent key-value metadata store providing a key-value interface for persistent storage of key-value metadata. The software interface module provides the key-value metadata to the at least one shared persistent key-value metadata store in a key-value format. The shared persistent key-value metadata store is accessed by a plurality of metadata servers. A metadata request can be processed by a given metadata server independently of other metadata servers in the cluster file system. A distributed metadata storage environment is also disclosed that comprises a plurality of metadata servers having an abstract storage interface to at least one shared persistent key-value metadata store.
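A minimal sketch of an abstract key-value storage interface of the kind described above; the class and method names are hypothetical and not taken from the patent.

```python
from abc import ABC, abstractmethod
from typing import Dict, Optional

class KeyValueMetadataStore(ABC):
    """Abstract interface a metadata server uses to talk to any shared key-value back end."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> Optional[bytes]: ...

class InMemoryStore(KeyValueMetadataStore):
    """Stand-in back end; a real deployment would wrap a shared persistent store."""

    def __init__(self) -> None:
        self._data: Dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> Optional[bytes]:
        return self._data.get(key)

class MetadataServer:
    """Each metadata server handles requests independently, against the shared store."""

    def __init__(self, store: KeyValueMetadataStore) -> None:
        self.store = store

    def handle_stat(self, path: str) -> Optional[bytes]:
        return self.store.get(f"meta:{path}")

if __name__ == "__main__":
    shared = InMemoryStore()
    shared.put("meta:/projects/run1", b'{"size": 1024, "owner": "alice"}')
    print(MetadataServer(shared).handle_stat("/projects/run1"))
```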
Improvements to Autoplot's HAPI Support
NASA Astrophysics Data System (ADS)
Faden, J.; Vandegriff, J. D.; Weigel, R. S.
2017-12-01
Autoplot handles data from a variety of data servers. These servers communicate data in different forms, each somewhat different in capabilities and each needing new software to interface. The Heliophysics Application Programmer's Interface (HAPI) attempts to ease this by providing a standard target for clients and servers to meet. Autoplot fully supports reading data from HAPI servers, and support continues to improve as the HAPI server spec matures. This collaboration has already produced robust clients and documentation which would be expensive for groups creating their own protocol. For example, client-side data caching is introduced where Autoplot maintains a cache of data for performance and off-line use. This is a feature we considered for previous data systems, but we could never afford the time to study and implement this carefully. Also, Autoplot itself can be used as a server, making the data it can read and the results of its processing available to other data systems. Autoplot use with other data transmission systems is reviewed as well, outlining features of each system.
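A minimal sketch of the client-side behavior described above: fetch CSV data from a HAPI server's data endpoint and keep a local file cache so repeated requests work offline. The server URL is hypothetical, and the query parameter names (id, time.min, time.max, format) follow the early HAPI convention; a real client should follow the spec version the server reports.

```python
import hashlib
import urllib.parse
import urllib.request
from pathlib import Path

CACHE_DIR = Path.home() / ".hapi_cache"   # illustrative cache location

def fetch_hapi_csv(server: str, dataset: str, start: str, stop: str) -> str:
    """Return CSV text for the request, reading from the local cache when available."""
    query = urllib.parse.urlencode(
        {"id": dataset, "time.min": start, "time.max": stop, "format": "csv"}
    )
    url = f"{server.rstrip('/')}/data?{query}"
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cache_file = CACHE_DIR / (hashlib.sha1(url.encode()).hexdigest() + ".csv")
    if cache_file.exists():                      # cache hit: no network needed
        return cache_file.read_text()
    with urllib.request.urlopen(url) as resp:    # cache miss: fetch and store
        text = resp.read().decode("utf-8")
    cache_file.write_text(text)
    return text

if __name__ == "__main__":
    csv_text = fetch_hapi_csv(
        "https://example-hapi-server.test/hapi",   # hypothetical server
        "demo_dataset", "2017-01-01T00:00:00Z", "2017-01-02T00:00:00Z",
    )
    print(csv_text[:200])
```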
Kent, Alexander Dale [Los Alamos, NM
2008-09-02
Methods and systems in a data/computer network for authenticating identifying data transmitted from a client to a server through use of a gateway interface system, where the client, server, and gateway are communicatively coupled to each other, are disclosed. An authentication packet transmitted from a client to a server of the data network is intercepted by the interface, wherein the authentication packet is encrypted with a one-time password for transmission from the client to the server. The one-time password associated with the authentication packet can be verified utilizing a one-time password token system. The authentication packet can then be modified for acceptance by the server, wherein the response packet generated by the server is thereafter intercepted, verified and modified for transmission back to the client in a similar but reverse process.
Image-based electronic patient records for secured collaborative medical applications.
Zhang, Jianguo; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen; Yao, Yihong; Cai, Weihua; Jin, Jin; Zhang, Guozhen; Sun, Kun
2005-01-01
We developed a Web-based system to interactively display image-based electronic patient records (EPR) for secured intranet and Internet collaborative medical applications. The system consists of four major components: EPR DICOM gateway (EPR-GW), Image-based EPR repository server (EPR-Server), Web Server and EPR DICOM viewer (EPR-Viewer). In the EPR-GW and EPR-Viewer, the security modules of Digital Signature and Authentication are integrated to perform the security processing on the EPR data with integrity and authenticity. The privacy of EPR in data communication and exchanging is provided by SSL/TLS-based secure communication. This presentation gave a new approach to create and manage image-based EPR from actual patient records, and also presented a way to use Web technology and DICOM standard to build an open architecture for collaborative medical applications.
Proposal and Implementation of SSH Client System Using Ajax
NASA Astrophysics Data System (ADS)
Kosuda, Yusuke; Sasaki, Ryoichi
Technology called Ajax gives web applications the functionality and operability of desktop applications. In this study, we propose and implement a Secure Shell (SSH) client system using Ajax, independent of the OS or Java execution environment. In this system, SSH packets are generated on a web browser by using JavaScript, and a web server works as a proxy in communication with an SSH server to realize end-to-end SSH communication. We implemented a prototype program and confirmed by experiment that it runs on several web browsers and mobile phones. This system has enabled secure SSH communication from a PC at an Internet cafe or from any mobile phone. By measuring the processing performance, we verified satisfactory performance for emergency use, although the speed was unsatisfactory in some cases with mobile phones. The system proposed in this study will be effective in various fields of E-Business.
A Browser-Server-Based Tele-audiology System That Supports Multiple Hearing Test Modalities.
Yao, Jianchu Jason; Yao, Daoyuan; Givens, Gregg
2015-09-01
Millions of global citizens suffering from hearing disorders have limited or no access to much needed hearing healthcare. Although tele-audiology presents a solution to alleviate this problem, existing remote hearing diagnosis systems support only pure-tone tests, leaving speech and other test procedures unsolved, due to the lack of software and hardware to enable communication required between audiologists and their remote patients. This article presents a comprehensive remote hearing test system that integrates the two most needed hearing test procedures: a pure-tone audiogram and a speech test. This enhanced system is composed of a Web application server, an embedded smart Internet-Bluetooth(®) (Bluetooth SIG, Kirkland, WA) gateway (or console device), and a Bluetooth-enabled audiometer. Several graphical user interfaces and a relational database are hosted on the application server. The console device has been designed to support the tests and auxiliary communication between the local site and the remote site. The study was conducted at an audiology laboratory. Pure-tone audiogram and speech test results from volunteers tested with this tele-audiology system are comparable with results from the traditional face-to-face approach. This browser-server-based comprehensive tele-audiology offers a flexible platform to expand hearing services to traditionally underserved groups.
Developing Control System of Electrical Devices with Operational Expense Prediction
NASA Astrophysics Data System (ADS)
Sendari, Siti; Wahyu Herwanto, Heru; Rahmawati, Yuni; Mukti Putranto, Dendi; Fitri, Shofiana
2017-04-01
The purpose of this research is to develop a system that can monitor and record home electrical devices' electricity usage. The system can control electrical devices remotely and predict the operational expense. The system was developed using micro-controllers and WiFi modules connected to a PC server. The communication between modules is arranged by the server via WiFi. Besides reading home electrical devices' electricity usage, the unique point of the proposed system is the ability of the micro-controllers to send electricity data to the server so that the usage of electrical devices can be recorded. The functionality of the system was tested using the black-box method; the system ran well with 0% error.
Assessment of Risk Communication about Undercooked Hamburgers by Restaurant Servers.
Thomas, Ellen M; Binder, Andrew R; McLAUGHLIN, Anne; Jaykus, Lee-Ann; Hanson, Dana; Powell, Douglas; Chapman, Benjamin
2016-12-01
According to the U.S. Food and Drug Administration 2013 Model Food Code, it is the duty of a food establishment to disclose and remind consumers of risk when ordering undercooked food such as ground beef. The purpose of this study was to explore actual risk communication behaviors of food establishment servers. Secret shoppers visited 265 restaurants in seven geographic locations across the United States, ordered medium rare burgers, and collected and coded risk information from chain and independent restaurant menus and from server responses. The majority of servers reported an unreliable method of doneness (77%) or other incorrect information (66%) related to burger doneness and safety. These results indicate major gaps in server knowledge and risk communication, and the current risk communication language in the Model Food Code does not sufficiently fill these gaps. The question is "should servers even be acting as risk communicators?" There are numerous challenges associated with this practice, including high turnover rates, limited education, and the high stress environment based on pleasing a customer. If servers are designated as risk communicators, food establishment staff should be adequately trained and provided with consumer advisory messages that are accurate, audience appropriate, and delivered in a professional manner so that customers can make informed food safety decisions.
Fast massive preventive security and information communication systems
NASA Astrophysics Data System (ADS)
Akopian, David; Chen, Philip; Miryakar, Susheel; Kumar, Abhinav
2008-04-01
We present a fast massive information communication system for data collection from distributed sources such as cell phone users. One very important application is preventive notification systems, where timely notification and evidence communication may help to improve safety and security through wide public involvement by ensuring easy-to-access and easy-to-communicate information systems. The technology significantly simplifies the response to events and will help, for example, special agencies to gather crucial information in time and respond as quickly as possible. Cellular phones are nowadays affordable for most residents and have become a common personal accessory. The paper describes several ways to design such systems, including the existing Internet access capabilities of cell phones and downloadable specialized software, and we provide examples of such designs. The main idea is to structure information in a predetermined way and communicate data through a centralized gate-server, which automatically processes the information and forwards it to the proper destination. The gate-server eliminates the need to know contact data and the specific local community infrastructure. All cell phones will have self-localizing capability according to the FCC E911 mandate; thus the communicated information can be further tagged automatically with location and time information.
Integrated technologies for solid waste bin monitoring system.
Arebey, Maher; Hannan, M A; Basri, Hassan; Begum, R A; Abdullah, Huda
2011-06-01
Communication technologies such as radio frequency identification (RFID), global positioning system (GPS), general packet radio system (GPRS), and geographic information system (GIS) are integrated with a camera to construct a solid waste monitoring system. The aim is to improve the way of responding to customers' inquiries and emergency cases and to estimate the solid waste amount without any involvement of the truck driver. The proposed system consists of an RFID tag mounted on the bin, an RFID reader in the truck, GPRS/GSM as the web server, and GIS as the map server, database server, and control server. The tracking devices mounted in the trucks collect location information in real time via GPS. This information is transferred continuously through GPRS to a central database. The users are able to view the current location of each truck in the collection stage via a web-based application and thereby manage the fleet. The truck positions and trash bin information are displayed on a digital map, which is made available by a map server. Thus, the solid waste of the bin and the truck are monitored using the developed system.
Informatics in radiology (infoRAD): A complete continuous-availability PACS archive server.
Liu, Brent J; Huang, H K; Cao, Fei; Zhou, Michael Z; Zhang, Jianguo; Mogel, Greg
2004-01-01
The operational reliability of the picture archiving and communication system (PACS) server in a filmless hospital environment is always a major concern because server failure could cripple the entire PACS operation. A simple, low-cost, continuous-availability (CA) PACS archive server was designed and developed. The server makes use of a triple modular redundancy (TMR) system with a simple majority voting logic that automatically identifies a faulty module and removes it from service. The remaining two modules continue normal operation with no adverse effects on data flow or system performance. In addition, the server is integrated with two external mass storage devices for short- and long-term storage. Evaluation and testing of the server were conducted with laboratory experiments in which hardware failures were simulated to observe recovery time and the resumption of normal data flow. The server provides maximum uptime (99.999%) for end users while ensuring the transactional integrity of all clinical PACS data. Hardware failure has only minimal impact on performance, with no interruption of clinical data flow or loss of data. As hospital PACS become more widespread, the need for CA PACS solutions will increase. A TMR CA PACS archive server can reliably help achieve CA in this setting. Copyright RSNA, 2004
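A minimal sketch of the triple-modular-redundancy majority vote described above; the module interface is hypothetical and greatly simplified relative to a real continuous-availability archive server.

```python
from collections import Counter
from typing import Callable, List, Sequence, Tuple

def tmr_vote(results: Sequence[object]) -> Tuple[object, List[int]]:
    """Return the majority result of three redundant modules and the indexes of faulty ones."""
    assert len(results) == 3, "triple modular redundancy expects exactly three module outputs"
    majority, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: more than one module has failed")
    faulty = [i for i, r in enumerate(results) if r != majority]
    return majority, faulty

if __name__ == "__main__":
    # Three redundant modules answer the same archive query; module 2 returns a corrupt value.
    modules: List[Callable[[], str]] = [lambda: "study-123", lambda: "study-123", lambda: "study-999"]
    value, faulty = tmr_vote([m() for m in modules])
    print("accepted result:", value, "| modules removed from service:", faulty)
```

The appeal of the scheme, as the abstract notes, is that the two agreeing modules keep serving requests with no interruption of clinical data flow while the disagreeing module is taken out of service.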
Deceit: A flexible distributed file system
NASA Technical Reports Server (NTRS)
Siegel, Alex; Birman, Kenneth; Marzullo, Keith
1989-01-01
Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.
Viewing ISS Data in Real Time via the Internet
NASA Technical Reports Server (NTRS)
Myers, Gerry; Chamberlain, Jim
2004-01-01
EZStream is a computer program that enables authorized users at diverse terrestrial locations to view, in real time, data generated by scientific payloads aboard the International Space Station (ISS). The only computation/communication resource needed for use of EZStream is a computer equipped with standard Web-browser software and a connection to the Internet. EZStream runs in conjunction with the TReK software, described in a prior NASA Tech Briefs article, that coordinates multiple streams of data for the ground communication system of the ISS. EZStream includes server components that interact with TReK within the ISS ground communication system and client components that reside in the users' remote computers. Once an authorized client has logged in, a server component of EZStream pulls the requested data from a TReK application-program interface and sends the data to the client. Future EZStream enhancements will include (1) extensions that enable the server to receive and process arbitrary data streams on its own and (2) a Web-based graphical-user-interface-building subprogram that enables a client who lacks programming expertise to create customized display Web pages.
User-Level QoS-Adaptive Resource Management in Server End-Systems
2003-05-01
an experimental evaluation on a microkernel indicate that it achieves end-system overload protection and traffic prioritization, improves insulation... microkernel operating system. The rest of this paper is organized as follows: Section 2 elaborates on the notion of QoS used in this paper. It proposes a...enforcing QoS contracts and their resource allocation. The communication server has been implemented on the Open Group microkernel Mk7.2. It exports a socket
A Design of a Network Model to the Electric Power Trading System Using Web Services
NASA Astrophysics Data System (ADS)
Maruo, Tomoaki; Matsumoto, Keinosuke; Mori, Naoki; Kitayama, Masashi; Izumi, Yoshio
Web services are regarded as a new application paradigm in the world of the Internet. At the same time, many business models for power trading systems have been proposed that aim at load reduction through consumers cooperating with electric power suppliers in an electric power market. In this paper, we therefore propose a network model for a power trading system that uses Web services. The adaptability of Web services to a power trading system was verified with a prototype of our network model, with good results. Each server exposes its functions as a SOAP server, and the servers are loosely coupled with one another through SOAP. Carrying SOAP messages in HTTP packets establishes a communication path that passes through firewalls transparently. Dynamic server switching is possible by rewriting the server endpoint information in the WSDL when a fault occurs.
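To illustrate the loose coupling described above, here is a hedged sketch of a SOAP request carried in an ordinary HTTP POST, which is why it passes firewalls like normal web traffic. The endpoint URL, XML namespace and the GetLoadReductionOffer operation are invented for illustration and do not come from the paper.

    # Sketch: sending a SOAP request over plain HTTP with the standard library.
    # The endpoint, namespace and "GetLoadReductionOffer" operation are hypothetical.
    import urllib.request

    envelope = """<?xml version="1.0" encoding="utf-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetLoadReductionOffer xmlns="http://example.org/power-trading">
          <consumerId>C-1024</consumerId>
        </GetLoadReductionOffer>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://trading.example.org/soap",          # hypothetical SOAP server
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "GetLoadReductionOffer"})

    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))      # SOAP response envelope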
The DICOM-based radiation therapy information system
NASA Astrophysics Data System (ADS)
Law, Maria Y. Y.; Chan, Lawrence W. C.; Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
Similar to DICOM for PACS (Picture Archiving and Communication System), standards for radiotherapy (RT) information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions), which are more than just images. This presentation describes how a DICOM-based RT Information System Server can be built based on PACS technology and its data model for web-based distribution. Methods: The RT Information System consists of a Modality Simulator, a data format translator, an RT Gateway, the DICOM RT Server, and the Web-based Application Server. The DICOM RT Server was designed based on a PACS data model and was connected to a Web Application Server for distribution of the RT information, including therapeutic plans, structures, dose distributions, images and records. The various DICOM RT objects of the patient transmitted to the RT Server were routed to the Web Application Server, where the contents of the DICOM RT objects were decoded and mapped to the corresponding locations of the RT data model for display in the specially designed graphical user interface. Non-DICOM objects were first rendered into DICOM RT objects in the translator before they were sent to the RT Server. Results: Ten clinical cases were collected from different hospitals for evaluation of the DICOM-based RT Information System. They were successfully routed through the data flow and displayed in the client workstation of the RT Information System. Conclusion: Using the DICOM-RT standards, integration of RT data from different vendors is possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, D.W.; Johnston, W.E.; Hall, D.E.
1990-03-01
We describe the use of the Sun Remote Procedure Call and Unix socket interprocess communication mechanisms to provide the network transport for a distributed, client-server based, image handling system. Clients run under Unix or UNICOS and servers run under Unix or MS-DOS. The use of remote procedure calls across local or wide-area networks to make video movies is addressed.
[Design and realization of the communication system for the mobile medical terminal].
Ji, Lei; Guo, Xu; Shi, Huayu
2013-01-01
The goal was to realize wireless communication on handheld devices for medical staff and to provide an instant messaging method. A set of communication protocols and standards was constructed, and software was developed for both the server and the client. An instant messaging system following the customized specification was built; based on Android, the client provides functions such as an address book, messaging and voice services. As an independent module of the mobile medical terminal, the system can provide convenient communication for medical services alongside other mobile business functions.
A proposed UAV for indoor patient care.
Todd, Catherine; Watfa, Mohamed; El Mouden, Yassine; Sahir, Sana; Ali, Afrah; Niavarani, Ali; Lutfi, Aoun; Copiaco, Abigail; Agarwal, Vaibhavi; Afsari, Kiyan; Johnathon, Chris; Okafor, Onyeka; Ayad, Marina
2015-09-10
Indoor flight, obstacle avoidance and client-server communication of an Unmanned Aerial Vehicle (UAV) raise several unique research challenges. This paper examines current methods and associated technologies adapted within the literature toward autonomous UAV flight, for consideration in a proposed system for indoor healthcare administration with a quadcopter. We introduce Healthbuddy, a unique research initiative towards overcoming challenges associated with indoor navigation, collision detection and avoidance, stability, wireless drone-server communications and automated decision support for patient care in a GPS-denied environment. To address the identified research deficits, a drone-based solution is presented. The solution is preliminary as we develop and refine the suggested algorithms and hardware system to achieve the research objectives.
Secure electronic commerce communication system based on CA
NASA Astrophysics Data System (ADS)
Chen, Deyun; Zhang, Junfeng; Pei, Shujun
2001-07-01
In this paper, we introduce the state of electronic commerce security and then analyze the working process and security of the SSL protocol. Finally, we propose a secure electronic commerce communication system based on a certificate authority (CA). The system provides secure services such as encryption, integrity, peer authentication and non-repudiation for the application-layer communication software of browser clients and web servers. The system implements automatic allocation and unified management of keys by setting up the CA in the network.
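The following is a minimal sketch of the kind of CA-based peer authentication the abstract refers to: a TLS/SSL client that verifies the web server against a trusted CA certificate. The host name and CA bundle path are placeholders.

    # Sketch: TLS client that authenticates the web server against a CA certificate.
    # Host name and CA bundle path are placeholders, not from the paper.
    import socket, ssl

    context = ssl.create_default_context(cafile="ca-cert.pem")  # trust this CA only
    with socket.create_connection(("shop.example.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="shop.example.org") as tls:
            print(tls.version())                 # negotiated protocol, e.g. TLSv1.3
            print(tls.getpeercert()["subject"])  # server identity signed by the CA
            tls.sendall(b"GET / HTTP/1.0\r\nHost: shop.example.org\r\n\r\n")
            print(tls.recv(4096).decode("utf-8", "replace"))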
SPACER: server for predicting allosteric communication and effects of regulation
Goncearenco, Alexander; Mitternacht, Simon; Yong, Taipang; Eisenhaber, Birgit; Eisenhaber, Frank; Berezovsky, Igor N.
2013-01-01
The SPACER server provides an interactive framework for exploring allosteric communication in proteins with different sizes, degrees of oligomerization and function. SPACER uses recently developed theoretical concepts based on the thermodynamic view of allostery. It proposes easily tractable and meaningful measures that allow users to analyze the effect of ligand binding on the intrinsic protein dynamics. The server shows potential allosteric sites and allows users to explore communication between the regulatory and functional sites. It is possible to explore, for instance, potential effector binding sites in a given structure as targets for allosteric drugs. As input, the server only requires a single structure. The server is freely available at http://allostery.bii.a-star.edu.sg/. PMID:23737445
The distributed annotation system.
Dowell, R D; Jokerst, R M; Day, A; Eddy, S R; Stein, L
2001-01-01
Currently, most genome annotation is curated by centralized groups with limited resources. Efforts to share annotations transparently among multiple groups have not yet been satisfactory. Here we introduce a concept called the Distributed Annotation System (DAS). DAS allows sequence annotations to be decentralized among multiple third-party annotators and integrated on an as-needed basis by client-side software. The communication between client and servers in DAS is defined by the DAS XML specification. Annotations are displayed in layers, one per server. Any client or server adhering to the DAS XML specification can participate in the system; we describe a simple prototype client and server example. The DAS specification is being used experimentally by Ensembl, WormBase, and the Berkeley Drosophila Genome Project. Continued success will depend on the readiness of the research community to adopt DAS and provide annotations. All components are freely available from the project website http://www.biodas.org/.
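As a hedged illustration of client-server communication in DAS, the sketch below fetches annotations from a DAS source with a plain HTTP GET and reads the FEATURE elements from the returned XML; the server URL, data-source name and segment coordinates are hypothetical.

    # Sketch: fetching annotations from a DAS server with a plain HTTP GET.
    # Server URL, data-source name and segment are hypothetical examples;
    # the real query syntax is defined by the DAS XML specification.
    import urllib.request
    import xml.etree.ElementTree as ET

    url = ("http://das.example.org/das/hg_genes/features"
           "?segment=chr1:100000,200000")
    with urllib.request.urlopen(url) as response:
        document = ET.parse(response)

    # Each annotation layer is returned as FEATURE elements in the DAS XML.
    for feature in document.iter("FEATURE"):
        print(feature.get("id"), feature.get("label"))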
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, X; Liu, L; Xing, L
Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and planning evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. The independent servers communicate with one another through HTTP requests. The web server is the key component: it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, Javascript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform is running on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with a Chrome browser, select a specific patient, visualize images and RT structures belonging to this patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibited potential for future cloud-based radiotherapy.
EPICS Channel Access Server for LabVIEW
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhukov, Alexander P.
It can be challenging to interface National Instruments LabVIEW (http://www.ni.com/labview/) with EPICS (http://www.aps.anl.gov/epics/). Such an interface is required when an instrument control program was developed in LabVIEW but also has to be part of a global control system, a frequent situation in large accelerator facilities. The Channel Access Server is written in LabVIEW, so it works on any hardware/software platform where LabVIEW is available. It provides full server functionality, so any EPICS client can communicate with it.
Kamauu, Aaron W C; DuVall, Scott L; Robison, Reid J; Liimatta, Andrew P; Wiggins, Richard H; Avrin, David E
2006-01-01
Although digital teaching files are important to radiology education, there are no current satisfactory solutions for export of Digital Imaging and Communications in Medicine (DICOM) images from picture archiving and communication systems (PACS) in desktop publishing format. A vendor-neutral digital teaching file, the Radiology Interesting Case Server (RadICS), offers an efficient tool for harvesting interesting cases from PACS without requiring modifications of the PACS configurations. Radiologists push imaging studies from PACS to RadICS via the standard DICOM Send process, and the RadICS server automatically converts the DICOM images into the Joint Photographic Experts Group format, a common desktop publishing format. They can then select key images and create an interesting case series at the PACS workstation. RadICS was tested successfully against multiple unmodified commercial PACS. Using RadICS, radiologists are able to harvest and author interesting cases at the point of clinical interpretation with minimal disruption in clinical work flow. RSNA, 2006
Experience with PACS in an ATM/Ethernet switched network environment.
Pelikan, E; Ganser, A; Kotter, E; Schrader, U; Timmermann, U
1998-03-01
Legacy local area network (LAN) technologies based on shared media concepts are not adequate for the growth of a large-scale picture archiving and communication system (PACS) in a client-server architecture. First, an asymmetric network load, due to the requests of a large number of PACS clients for only a few main servers, should be compensated by communication links to the servers with a higher bandwidth compared to the clients. Secondly, as the number of PACS nodes increases, the network throughput should not measurably cut into production. These requirements can easily be fulfilled using switching technologies. Here asynchronous transfer mode (ATM) is clearly one of the hottest topics in networking because the ATM architecture provides integrated support for a variety of communication services, and it supports virtual networking. On the other hand, most of the imaging modalities are not yet ready for integration into a native ATM network. For the many nodes already attached to an Ethernet, a cost-effective and pragmatic way to benefit from the switching concept would be a combined ATM/Ethernet switching environment. This incorporates an incremental migration strategy with the immediate benefits of high-speed, high-capacity ATM (for servers and highly sophisticated display workstations), while preserving elements of the existing network technologies. In addition, Ethernet switching instead of shared media Ethernet improves the performance considerably. The LAN emulation (LANE) specification by the ATM forum defines mechanisms that allow ATM networks to coexist with legacy systems using any data networking protocol. This paper points out the suitability of this network architecture in accordance with an appropriate system design.
A biometric method to secure telemedicine systems.
Zhang, G H; Poon, Carmen C Y; Li, Ye; Zhang, Y T
2009-01-01
Security and privacy are among the most crucial issues for data transmission in telemedicine systems. This paper proposes a solution for securing wireless data transmission in telemedicine systems, i.e. within a body sensor network (BSN), between the BSN and server as well as between the server and professionals who have access to the server. A unique feature of this solution is the generation of random keys from physiological data (i.e. a biometric approach) for securing communication at all 3 levels. In the performance analysis, the inter-pulse interval of the photoplethysmogram is used as an example to generate these biometric keys to protect wireless data transmission. The results of statistical analysis and computational complexity suggest that this type of key is random enough to make telemedicine systems resistant to attacks.
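The sketch below illustrates the general idea of turning inter-pulse intervals into a symmetric key by quantizing and hashing them; the quantization step and key length are assumptions for illustration, not the authors' actual key-generation scheme.

    # Sketch: deriving a symmetric key from inter-pulse intervals (IPIs).
    # Quantization step and key length are illustrative; the paper's actual
    # key-generation scheme may differ.
    import hashlib

    def key_from_ipis(ipis_ms, step_ms=4):
        # Quantize each interval so sender and receiver tolerate small jitter,
        # then hash the sequence down to a 128-bit key.
        quantized = bytes((int(round(ipi / step_ms)) & 0xFF) for ipi in ipis_ms)
        return hashlib.sha256(quantized).digest()[:16]

    ipis = [812.4, 798.1, 805.6, 820.9, 801.3, 799.7, 810.2, 815.5]
    print(key_from_ipis(ipis).hex())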
CommServer: A Communications Manager For Remote Data Sites
NASA Astrophysics Data System (ADS)
Irving, K.; Kane, D. L.
2012-12-01
CommServer is a software system that manages making connections to remote data-gathering stations, providing a simple network interface to client applications. The client requests a connection to a site by name, and the server establishes the connection, providing a bidirectional channel between the client and the target site if successful. CommServer was developed to manage networks of FreeWave serial data radios with multiple data sites, repeaters, and network-accessed base stations, and has been in continuous operational use for several years. Support for Iridium modems using RUDICS will be added soon, and no changes to the application interface are anticipated. CommServer is implemented on Linux using programs written in bash shell, Python, Perl, AWK, under a set of conventions we refer to as ThinObject.
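A minimal sketch of the client side of such a connection manager is shown below: the client asks for a site by name and, on success, gets back a bidirectional channel. The port number and the one-line request/reply format are assumptions, since the abstract does not specify the CommServer protocol.

    # Sketch: asking a CommServer-style manager for a connection to a named site.
    # The port number and the line-oriented request/reply format are assumptions;
    # the actual CommServer protocol is not specified in the abstract.
    import socket

    def open_site(site_name, host="localhost", port=5070, timeout=30):
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.sendall((site_name + "\n").encode("ascii"))
        reply = sock.makefile().readline().strip()
        if reply != "CONNECTED":
            sock.close()
            raise ConnectionError(f"could not reach {site_name}: {reply}")
        return sock   # now a bidirectional channel to the remote data site

    # channel = open_site("UPPER_KUPARUK_MET")   # hypothetical site name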
LEO Satellite Communication through a LEO Constellation using TCP/IP Over ATM
NASA Technical Reports Server (NTRS)
Foore, Lawrence R.; Konangi, Vijay K.; Wallett, Thomas M.
1999-01-01
The simulated performance characteristics for communication between a terrestrial client and a Low Earth Orbit (LEO) satellite server are presented. The client and server nodes consist of a Transmission Control Protocol /Internet Protocol (TCP/IP) over ATM configuration. The ATM cells from the client or the server are transmitted to a gateway, packaged with some header information and transferred to a commercial LEO satellite constellation. These cells are then routed through the constellation to a gateway on the globe that allows the client/server communication to take place. Unspecified Bit Rate (UBR) is specified as the quality of service (QoS). Various data rates are considered.
Identifying patients for clinical trials using fuzzy ternary logic expressions on HL7 messages.
Majeed, Raphael W; Röhrig, Rainer
2011-01-01
Identifying eligible patients is one of the most critical parts of any clinical trial. The process of recruiting patients for the third phase of any clinical trial is usually done manually, informing relevant physicians or putting notes on bulletin boards. While most necessary information is already available in electronic hospital information systems, required data still has to be looked up individually. Most university hospitals make use of a dedicated communication server to distribute information from independent information systems, e.g. laboratory information systems, electronic health records, surgery planning systems. Thus, a theoretical model is developed to formally describe inclusion and exclusion criteria for each clinical trial using a fuzzy ternary logic expression. These expressions will then be used to process HL7 messages from a communication server in order to identify eligible patients.
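The sketch below illustrates three-valued evaluation of a single eligibility criterion, with None standing for "unknown" when a value was never reported in any HL7 message; the criteria, field names and thresholds are invented examples.

    # Sketch: three-valued (ternary) evaluation of one eligibility criterion.
    # True/False/None stand for met / not met / unknown (value never reported).
    # The criteria and the laboratory field names are invented examples.

    def ternary_and(a, b):
        if a is False or b is False:
            return False
        if a is None or b is None:
            return None
        return True

    def criterion_creatinine_below(patient, limit=1.5):
        value = patient.get("creatinine_mg_dl")    # extracted from an HL7 ORU message
        return None if value is None else value < limit

    def criterion_age_at_least(patient, minimum=18):
        age = patient.get("age_years")             # extracted from an HL7 ADT message
        return None if age is None else age >= minimum

    patient = {"age_years": 54}                    # creatinine never reported
    eligible = ternary_and(criterion_age_at_least(patient),
                           criterion_creatinine_below(patient))
    print(eligible)   # None -> keep the patient on the "needs manual review" list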
Federal Emergency Management Information System (FEMIS) system administration guide. Version 1.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burford, M.J.; Burnett, R.A.; Downing, T.R.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and analysis tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide defines FEMIS hardware and software requirements and gives instructions for installing the FEMIS software package. This document also contains information on the following: software installation for the FEMIS data servers, communication server, mail server, and the emergency management workstations; distribution media loading and FEMIS installation validation and troubleshooting; and system management of FEMIS users, login, privileges, and usage. The system administration utilities (tools), available in the FEMIS client software, are described for user accounts and site profile. This document also describes the installation and use of system and database administration utilities that will assist in keeping the FEMIS system running in an operational environment.
Modeling And Simulation Of Multimedia Communication Networks
NASA Astrophysics Data System (ADS)
Vallee, Richard; Orozco-Barbosa, Luis; Georganas, Nicolas D.
1989-05-01
In this paper, we present a simulation study of a browsing system involving radiological image servers. The proposed IEEE 802.6 DQDB MAN standard is designated as the computer network to transfer radiological images from file servers to medical workstations, and to simultaneously support real time voice communications. Storage and transmission of original raster scanned images and images compressed according to pyramid data structures are considered. Different types of browsing as well as various image sizes and bit rates in the DQDB MAN are also compared. The elapsed time, measured from the time an image request is issued until the image is displayed on the monitor, is the parameter considered to evaluate the system performance. Simulation results show that image browsing can be supported by the DQDB MAN.
Design of Deformation Monitoring System for Volcano Mitigation
NASA Astrophysics Data System (ADS)
Islamy, M. R. F.; Salam, R. A.; Munir, M. M.; Irsyam, M.; Khairurrijal
2016-08-01
Indonesia has many active volcanoes that are potentially disastrous, so good mitigation systems are needed to reduce casualties from potential disasters caused by volcanic eruptions. A system to monitor volcano deformation was therefore built. The system employs telemetry combining Radio Frequency (RF) communication with XBee modules and General Packet Radio Service (GPRS) communication with the SIM900. There are two types of modules in the system: the coordinator, acting as a parent, and the nodes, acting as children. Each node is connected to the coordinator, forming a Wireless Sensor Network (WSN) with a star topology, and carries an inclinometer-based sensor, a Global Positioning System (GPS) receiver, and an XBee module. The coordinator collects data from each node, one at a time, to prevent data collisions between nodes, saves the data to an SD card, and transmits it to a web server via GPRS. The inclinometer was calibrated with a self-built calibrator and tested in a high-temperature environment to check its durability. The GPS was tested by displaying its position on the web server via the Google Maps Application Programming Interface (API v.3). It was shown that the coordinator receives and transmits data from every node to the web server reliably and that the system works well in a high-temperature environment.
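A minimal sketch of the coordinator's collision-free polling loop is given below, using a serial link to the XBee radio; the node identifiers, message framing and serial port are assumptions, not the authors' firmware.

    # Sketch: coordinator polling WSN nodes one at a time over an XBee serial link,
    # then forwarding each reading. Node IDs, framing and the serial port are
    # assumptions, not the authors' firmware.
    import time
    import serial          # pyserial

    NODE_IDS = ["N01", "N02", "N03"]

    def store_and_upload(node, reading):
        print(node, reading)   # placeholder for SD-card logging and GPRS upload

    def poll_forever(port="/dev/ttyUSB0", baud=9600):
        link = serial.Serial(port, baud, timeout=2)
        while True:
            for node in NODE_IDS:                     # one node at a time: no collisions
                link.write(f"REQ,{node}\n".encode("ascii"))
                reading = link.readline().decode("ascii", "replace").strip()
                if reading:
                    store_and_upload(node, reading)
                time.sleep(0.5)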
NASA Astrophysics Data System (ADS)
Ibrahim, Maslina Mohd; Yussup, Nolida; Haris, Mohd Fauzi; Soh @ Shaari, Syirrazie Che; Azman, Azraf; Razalim, Faizal Azrin B. Abdul; Yapp, Raymond; Hasim, Harzawardi; Aslan, Mohd Dzul Aiman
2017-01-01
One of the applications of radiation detectors is area monitoring, which is crucial for safety, especially in places where radiation sources are involved. An environmental radiation monitoring system is a professional system that combines flexibility and ease of use for data collection and monitoring. Nowadays, with the growth of technology, devices and equipment can be connected to the network and the Internet to enable online data acquisition. This technology allows data from area monitoring devices to be transmitted to any location directly and faster. In Nuclear Malaysia, area radiation monitoring devices are located at several selected locations such as laboratories and radiation facilities. The system utilizes Ethernet as the communication medium for acquiring area radiation levels from the radiation detectors and stores the data on a server for recording and analysis. This paper discusses the design and development of a website that enables all users in Nuclear Malaysia to access and monitor the radiation level of each radiation detector online in real time. The web design also includes a query feature for historical data from various locations. The communication between the server software and the web server is discussed in detail.
VoIP attacks detection engine based on neural network
NASA Astrophysics Data System (ADS)
Safarik, Jakub; Slachta, Jiri
2015-05-01
Security is crucial for any system nowadays, especially for communications. One of the most successful protocols for communication over IP networks is the Session Initiation Protocol (SIP). It is an open standard used by many kinds of applications, both open-source and proprietary. Its high penetration and text-based design have made SIP the number-one target in IP telephony infrastructure, so the security of SIP servers is essential. To keep up with hackers and detect potential malicious attacks, a security administrator needs to monitor and evaluate SIP traffic in the network. But monitoring and the subsequent evaluation can easily overwhelm the administrator, typically in networks with many SIP servers, many users, and logically or geographically separated segments. The proposed solution lies in automatic attack detection systems. The article covers detection of VoIP attacks through a distributed network of nodes; the gathered data are then analyzed by an aggregation server with an artificial neural network, a multilayer perceptron trained with a set of collected attacks. Attack data can also be preprocessed and verified with a self-organizing map. The source data are gathered by the distributed network of detection nodes; each node contains a honeypot application and a traffic monitoring mechanism. Aggregating the data from each node creates the input for the neural network. Automatic classification on a centralized server with a low false-positive rate reduces the cost of attack detection resources. The detection system uses a modular design for easy deployment in the final infrastructure. The centralized server collects and processes the detected traffic and also maintains all detection nodes.
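As an illustration of the classification step, the sketch below trains a small multilayer perceptron on aggregated traffic features; the feature vectors and attack labels are invented stand-ins for the features actually collected by the honeypot nodes.

    # Sketch: training a multilayer perceptron on aggregated honeypot features.
    # The feature vectors (request rate, distinct URIs, error ratio) and the
    # attack labels are invented; the paper's real feature set is not shown here.
    from sklearn.neural_network import MLPClassifier

    X_train = [
        [120, 3, 0.90],   # many requests, few URIs, mostly errors -> flood
        [  5, 4, 0.10],   # normal traffic
        [ 40, 1, 0.95],   # repeated REGISTER failures -> bruteforce
        [  7, 5, 0.05],
    ]
    y_train = ["flood", "benign", "bruteforce", "benign"]

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    print(model.predict([[90, 2, 0.85]]))   # classify a new aggregated sample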
Automatic analysis of attack data from distributed honeypot network
NASA Astrophysics Data System (ADS)
Safarik, Jakub; Voznak, MIroslav; Rezac, Filip; Partila, Pavol; Tomala, Karel
2013-05-01
There are many ways of getting real data about malicious activity in a network. One of them relies on masquerading monitoring servers as production ones. These servers are called honeypots, and data about attacks on them bring valuable information about actual attacks and the techniques used by hackers. The article describes a distributed topology of honeypots that was developed with a strong orientation toward monitoring IP telephony traffic. IP telephony servers can easily be exposed to various types of attacks, and without protection this can lead to loss of money and other unpleasant consequences. Using a distributed topology with honeypots placed in different geographical locations and networks provides more valuable and independent results. With an automatic system for gathering information from all honeypots, it is possible to work with all the information at one centralized point. Communication between the honeypots and the centralized data store uses secure SSH tunnels, and the server communicates only with authorized honeypots. The centralized server also automatically analyzes the data from each honeypot. The results of this analysis, along with other statistical data about malicious activity, are easily accessible through a built-in web server. All statistical and analysis reports serve as the information basis for an algorithm that classifies the different types of VoIP attacks used. The web interface then provides a tool for quick comparison and evaluation of actual attacks in all monitored networks. The article describes both the honeypot nodes in the distributed architecture, which monitor suspicious activity, and the methods and algorithms used on the server side to analyze the gathered data.
GSM module for wireless radiation monitoring system via SMS
NASA Astrophysics Data System (ADS)
Rahman, Nur Aira Abd; Hisyam Ibrahim, Noor; Lombigit, Lojius; Azman, Azraf; Jaafar, Zainudin; Arymaswati Abdullah, Nor; Hadzir Patai Mohamad, Glam
2018-01-01
A customised Global System for Mobile communication (GSM) module is designed for wireless radiation monitoring through the Short Messaging Service (SMS). The module receives serial data from radiation monitoring devices such as survey meters or area monitors and transmits the data as text SMS to a host server. It provides two-way communication for data transmission, status queries, and configuration setup. The module hardware consists of a GSM module, a voltage level shifter, a SIM circuit and an ATmega328P microcontroller. The microcontroller controls sending, receiving and AT command processing for the GSM module. The firmware handles the tasks related to communication between the device and the host server: it processes all incoming SMS messages; extracts and stores new configurations from the host; transmits alert/notification SMS when the radiation data reach or exceed the threshold value; and transmits SMS data at a fixed interval according to the configuration. Integrating this module with a radiation survey/monitoring device creates a mobile, wireless radiation monitoring system with prompt emergency alerts at high radiation levels.
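The sketch below shows the standard GSM AT-command sequence for sending a text SMS (AT+CMGF selects text mode, AT+CMGS addresses the message); in the described module this logic runs in the ATmega328P firmware, and the serial port and phone number here are placeholders.

    # Sketch: sending a radiation reading as a text SMS with standard GSM AT
    # commands. Port and phone number are placeholders; in the module itself
    # the equivalent logic is implemented in the ATmega328P firmware.
    import time
    import serial          # pyserial

    def send_sms(port, number, text):
        gsm = serial.Serial(port, 9600, timeout=5)
        for command in ("AT", "AT+CMGF=1", f'AT+CMGS="{number}"'):
            gsm.write((command + "\r").encode("ascii"))
            time.sleep(1)
        gsm.write(text.encode("ascii") + b"\x1a")   # Ctrl-Z terminates the message
        time.sleep(3)
        return gsm.read(gsm.in_waiting or 64).decode("ascii", "replace")

    # print(send_sms("/dev/ttyS0", "+60123456789", "AREA-3 dose rate 2.7 uSv/h"))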
MATREX Leads the Way in Implementing New DOD VV&A Documentation Standards
2007-05-24
Acquisition Operations & Support B C Sustainment FRP Decision Review FOC LRIP/IOT& ECritical Design Review Pre-Systems Acquisition Concept...Communications Human Performance Model • C3GRID – Command & Control, Computer GRID • CES – Communications Effects Server • CMS2 – Comprehensive
Embedded controller for GEM detector readout system
NASA Astrophysics Data System (ADS)
Zabołotny, Wojciech M.; Byszuk, Adrian; Chernyshova, Maryna; Cieszewski, Radosław; Czarski, Tomasz; Dominik, Wojciech; Jakubowska, Katarzyna L.; Kasprowicz, Grzegorz; Poźniak, Krzysztof; Rzadkiewicz, Jacek; Scholz, Marek
2013-10-01
This paper describes the embedded controller used for the multichannel readout system of the GEM detector. The controller is based on an embedded Mini-ITX mainboard running the GNU/Linux operating system. The controller offers two interfaces to communicate with the FPGA-based readout system. FPGA configuration and diagnostics are controlled via a low-speed USB-based interface, while high-speed setup of the readout parameters and reception of the measured data are handled by the PCI Express (PCIe) interface. Hardware access is synchronized by a dedicated server written in C. Multiple clients may connect to this server via a TCP/IP network, and different priorities are assigned to individual clients. Specialized protocols have been implemented both for low-level access at the register level and for high-level access with transfer of structured data using the "msgpack" protocol. High-level functionality has been split between multiple TCP/IP servers for parallel operation. The status of the system may be checked and basic maintenance performed via a web interface, while expert access is possible via an SSH server. The system was designed with reliability and flexibility in mind.
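To illustrate the structured, msgpack-framed access mentioned above, here is a hedged sketch of a client reading one register over TCP; the port, request fields and register address are invented and do not reproduce the GEM controller's actual protocol.

    # Sketch: structured register read over TCP using msgpack framing.
    # The port, request fields and register address are illustrative only;
    # the GEM controller's real protocol is not reproduced here.
    import socket
    import msgpack

    def read_register(address, host="gem-ctrl.local", port=5555):
        request = msgpack.packb({"op": "read", "addr": address})
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(request)
            unpacker = msgpack.Unpacker()
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                unpacker.feed(chunk)
                for reply in unpacker:          # first complete msgpack object
                    return reply
        raise ConnectionError("no reply from readout server")

    # print(read_register(0x0040))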
Virtual reality for spherical images
NASA Astrophysics Data System (ADS)
Pilarczyk, Rafal; Skarbek, Władysław
2017-08-01
This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows the creation of a virtual reality 360-degree video player using standard OpenGL ES rendering methods. The framework provides network methods to connect to a web server acting as the application resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods for an event-driven process of rendering additional content based on the video timestamp and the virtual reality head point of view.
A genetic algorithm for replica server placement
NASA Astrophysics Data System (ADS)
Eslami, Ghazaleh; Toroghi Haghighat, Abolfazl
2012-01-01
Modern distribution systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One of the previous approaches is the greedy algorithm proposed by Qiu et al., which requires knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the greedy algorithm proposed by Qiu et al. and with the optimum algorithm. We found that our approach achieves better results than the greedy algorithm of Qiu et al., but its computation time is higher.
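A minimal sketch of such a genetic algorithm is given below: chromosomes are sets of k candidate sites, fitness is the total client-to-nearest-replica delay, and a random delay matrix stands in for measured topology costs; all parameters are illustrative only.

    # Sketch: a small genetic algorithm for choosing K replica sites out of
    # N_SITES candidates so that total client-to-nearest-replica delay is minimal.
    # The delay matrix is random toy data standing in for measured topology costs.
    import random

    random.seed(1)
    N_SITES, K, CLIENTS = 12, 3, 40
    delay = [[random.uniform(1, 50) for _ in range(N_SITES)] for _ in range(CLIENTS)]

    def cost(placement):
        return sum(min(delay[c][s] for s in placement) for c in range(CLIENTS))

    def crossover(a, b):
        return set(random.sample(sorted(a | b), K))

    def mutate(placement):
        placement = set(placement)
        placement.remove(random.choice(sorted(placement)))
        while len(placement) < K:
            placement.add(random.randrange(N_SITES))
        return placement

    population = [set(random.sample(range(N_SITES), K)) for _ in range(30)]
    for generation in range(100):
        population.sort(key=cost)
        parents = population[:10]                      # elitist selection
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(20)]
        population = parents + children

    best = min(population, key=cost)
    print(sorted(best), round(cost(best), 1))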
Secure data aggregation in heterogeneous and disparate networks using stand off server architecture
NASA Astrophysics Data System (ADS)
Vimalathithan, S.; Sudarsan, S. D.; Seker, R.; Lenin, R. B.; Ramaswamy, S.
2009-04-01
The emerging global reach of technology presents myriad challenges and intricacies as Information Technology teams aim to provide anywhere, anytime and anyone access for service providers and customers alike. The world is fraught with stifling inequalities, from an economic as well as a socio-political perspective. The net result has been large capability gaps between the various organizational locations that need to work together, which has raised new challenges for information security teams. Similar issues arise when mergers and acquisitions among and between organizations take place. While integrating remote business locations with mainstream operations, one or more issues, including lack of application-level support, limited computational capabilities, communication limitations, and legal requirements, pose a serious impediment to completing the integration without violating the organizations' security requirements. Commonly used techniques such as IPSec, tunneling, and Secure Sockets Layer (SSL) may not always be techno-economically feasible. This paper addresses such security issues by introducing an intermediate server, called a stand-off server, between the corporate central server and the remote sites. We present techniques such as break-before-make connection, break connection after transfer, and multiple virtual machine instances with different operating systems using the concept of a stand-off server. Our experiments show that the proposed solution provides sufficient isolation of the central server/site from attacks arising out of weak communication and/or computing links and is simple to implement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...
Code of Federal Regulations, 2012 CFR
2012-10-01
... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...
Code of Federal Regulations, 2010 CFR
2010-10-01
... ENVIRONMENTAL, CONSERVATION, OCCUPATIONAL SAFETY, AND DRUG-FREE WORKPLACE Energy-Efficient Computer Equipment 1523.7002 Waivers. (a) There are several types of computer equipment which technically fall under the... types of equipment: (1) LAN servers, including file servers; application servers; communication servers...
Analysis of data throughput in communication between PLCs and HMI/SCADA systems
NASA Astrophysics Data System (ADS)
Mikolajek, Martin; Koziorek, Jiri
2016-09-01
This paper focuses on the analysis of data throughput in communication between PLCs and HMI/SCADA systems. The first part discusses basic problems of communication between PLC and HMI systems. The next part covers specific types of PLC-HMI communication requests and, for those cases, discusses response time and data throughput [1-3]. A subsequent section contains practical experiments with various data exchanges between a Siemens PLC and an HMI. The communication possibilities described in this article focus on using an OPC server for visualization software, a custom HMI system, and an application created with .NET technology. The last part of the article presents some communication solutions.
An implementation of wireless medical image transmission system on mobile devices.
Lee, SangBock; Lee, Taesoo; Jin, Gyehwan; Hong, Juhyun
2008-12-01
Advances in computing technology have been followed by rapid improvements in medical instrumentation and patient record management systems. Typical examples are the hospital information system (HIS) and the picture archiving and communication system (PACS), which computerized the management of medical records and images in hospitals. Because these systems are built and used inside hospitals, doctors outside the hospital have problems accessing them immediately in emergency cases. To solve this, this paper addresses the realization of a system that transmits images acquired by medical imaging systems in the hospital to remote doctors' handheld PDAs over a CDMA cellular phone network. The system consists of a server and a PDA. The server manages the accounts of doctors and patients and allocates patient images to each doctor. The PDA displays patient images through a remote server connection. To authenticate the user, the remote data access (RDA) method is used when the PDA accesses the server database, and the file transfer protocol (FTP) is used to download patient images from the remote server. In laboratory experiments, it took ninety seconds to transmit thirty images with 832 x 488 resolution, 24-bit depth and 0.37 Mb size. This result showed that the developed system allows remote doctors to receive and review patient images immediately in emergency cases.
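The FTP download path described above can be sketched with Python's standard ftplib as follows; the host, credentials and file names are placeholders, and the real system authenticates the user through RDA before any transfer.

    # Sketch: downloading a patient's images from the hospital server over FTP,
    # as in the download path described above. Host, account and file names are
    # placeholders; the real system also authenticates the user via RDA first.
    from ftplib import FTP

    def fetch_images(host, user, password, remote_dir, filenames):
        with FTP(host) as ftp:
            ftp.login(user, password)
            ftp.cwd(remote_dir)
            for name in filenames:
                with open(name, "wb") as local_file:
                    ftp.retrbinary(f"RETR {name}", local_file.write)

    # fetch_images("pacs.example.org", "dr_lee", "secret",
    #              "/patients/12345", ["img001.jpg", "img002.jpg"])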
SciServer Compute brings Analysis to Big Data in the Cloud
NASA Astrophysics Data System (ADS)
Raddick, Jordan; Medvedev, Dmitry; Lemson, Gerard; Souter, Barbara
2016-06-01
SciServer Compute uses Jupyter Notebooks running within server-side Docker containers attached to big data collections to bring advanced analysis to big data "in the cloud." SciServer Compute is a component in the SciServer Big-Data ecosystem under development at JHU, which will provide a stable, reproducible, sharable virtual research environment. SciServer builds on the popular CasJobs and SkyServer systems that made the Sloan Digital Sky Survey (SDSS) archive one of the most-used astronomical instruments. SciServer extends those systems with server-side computational capabilities and very large scratch storage space, and further extends their functions to a range of other scientific disciplines. Although big datasets like SDSS have revolutionized astronomy research, for further analysis, users are still restricted to downloading the selected data sets locally, but increasing data sizes make this local approach impractical. Instead, researchers need online tools that are co-located with data in a virtual research environment, enabling them to bring their analysis to the data. SciServer supports this using the popular Jupyter notebooks, which allow users to write their own Python and R scripts and execute them on the server with the data (extensions to Matlab and other languages are planned). We have written special-purpose libraries that enable querying the databases and other persistent datasets. Intermediate results can be stored in large scratch space (hundreds of TBs) and analyzed directly from within Python or R with state-of-the-art visualization and machine learning libraries. Users can store science-ready results in their permanent allocation on SciDrive, a Dropbox-like system for sharing and publishing files. Communication between the various components of the SciServer system is managed through SciServer's new Single Sign-on Portal. We have created a number of demos to illustrate the capabilities of SciServer Compute, including Python and R scripts accessing a range of datasets and showing the data flow between storage and compute components. Demos, documentation, and more information can be found at www.sciserver.org. SciServer is funded by the National Science Foundation Award ACI-1261715.
Managing healthcare information using short message service (SMS) in wireless broadband networks
NASA Astrophysics Data System (ADS)
Documet, Jorge; Tsao, Sinchai; Documet, Luis; Liu, Brent J.; Zhou, Zheng; Joseph, Anika O.
2007-03-01
Due to the ubiquity of cell phones, SMS (Short Message Service) has become an ideal means to wirelessly manage a Healthcare environment and in particular PACS (Picture Archival and Communications System) data. SMS is a flexible and mobile method for real-time access and control of Healthcare information systems such as HIS (Hospital Information System) or PACS. Unlike conventional wireless access methods, SMS' mobility is not limited by the presence of a WiFi network or any other localized signal. It provides a simple, reliable yet flexible method to communicate with an information system. In addition, SMS services are widely available for low costs from cellular phone service providers and allows for more mobility than other services such as wireless internet. This paper aims to describe a use case of SMS as a means of remotely communicating with a PACS server. Remote access to a PACS server and its Query-Retrieve services allows for a more convenient, flexible and streamlined radiology workflow. Wireless access methods such as SMS will increase dedicated PACS workstation availability for more specialized DICOM (Digital Imaging and Communications in Medicine) workflow management. This implementation will address potential security, performance and cost issues of applying SMS as part of a healthcare information management system. This is in an effort to design a wireless communication system with optimal mobility and flexibility at minimum material and time costs.
Modular Mount Control System for Telescopes
NASA Astrophysics Data System (ADS)
Mooney, J.; Cleis, R.; Kyono, T.; Edwards, M.
The Space Observatory Control Kit (SpOCK) is the hardware, computers and software used to run small and large telescopes in the RDS division of the Air Force Research Laboratories (AFRL). The system is used to track earth satellites, celestial objects, terrestrial objects and aerial objects. The system will track general targets when provided with state vectors in one of five coordinate systems. Client-to-server and server-to-gimbals communication occurs via human-readable s-expressions that may be evaluated by the computer language called Racket. Software verification is achieved by scripts that exercise these expressions by sending them to the server, and receiving the expressions that the server evaluates. This paper describes the adaptation of a modular mount control system developed primarily for LEO satellite imaging on large and small portable AFRL telescopes with a goal of orbit determination and the generation of satellite metrics.
Xu, Xiu; Zhang, Honglei; Li, Yiming; Li, Bin
2015-07-01
An information centralization and management integration system was developed for monitors of different brands and models, using wireless sensor network technologies such as wireless location and wireless communication on top of the existing wireless network. With an adaptive implementation and low cost, the system offers real-time, efficient and detailed operation: it collects the status and data of the monitors, locates the monitors, and provides services through a web server, a video server and a locating server on the local network. Using an intranet computer, clinical and device management staff can access the status and parameters of the monitors. Applications of this system provide convenience and save human resources for clinical departments, and promote efficiency, accuracy and detail in device management. The system provides a solution for the integrated, detailed management of mobile devices such as ventilators and infusion pumps.
A Fast lattice-based polynomial digital signature system for m-commerce
NASA Astrophysics Data System (ADS)
Wei, Xinzhou; Leung, Lin; Anshel, Michael
2003-01-01
Privacy and data integrity are not guaranteed in current wireless communications due to the security hole inside the Wireless Application Protocol (WAP) version 1.2 gateway. One remedy is to provide end-to-end security in m-commerce by applying application-level security on top of the current WAP 1.2. Traditional security technologies like RSA and ECC, as applied on enterprise servers, are not practical for wireless devices because these devices have relatively weak computation power and limited memory compared with servers. In this paper, we develop a lattice-based polynomial digital signature system based on NTRU's Polynomial Authentication and Signature Scheme (PASS), which makes it feasible to apply high-level security on both the server and the wireless device side.
Systematic plan of building Web geographic information system based on ActiveX control
NASA Astrophysics Data System (ADS)
Zhang, Xia; Li, Deren; Zhu, Xinyan; Chen, Nengcheng
2003-03-01
A systematic plan of building Web Geographic Information System (WebGIS) using ActiveX technology is proposed in this paper. In the proposed plan, ActiveX control technology is adopted in building client-side application, and two different schemas are introduced to implement communication between controls in users' browsers and middle application server. One is based on Distribute Component Object Model (DCOM), the other is based on socket. In the former schema, middle service application is developed as a DCOM object that communicates with ActiveX control through Object Remote Procedure Call (ORPC) and accesses data in GIS Data Server through Open Database Connectivity (ODBC). In the latter, middle service application is developed using Java language. It communicates with ActiveX control through socket based on TCP/IP and accesses data in GIS Data Server through Java Database Connectivity (JDBC). The first one is usually developed using C/C++, and it is difficult to develop and deploy. The second one is relatively easy to develop, but its performance of data transfer relies on Web bandwidth. A sample application is developed using the latter schema. It is proved that the performance of the sample application is better than that of some other WebGIS applications in some degree.
2013-06-01
Communication Applet) UNIGE – D.I.M.E. Using a free application as “MIT APP Inventor” Android Software Development Kit DEGRADED C2 ICCRTS 2013...operate on an Android operating system up-gradable on which will be developed a simplified ACA ( Android Communication Applet) that will call C24U...Server) IP number . . . Portable COTS Devices ACA - C24U ( Android Communication Applet) Sending/receiving SEFL (Simple Exchange
A Browser-Server-Based Tele-audiology System That Supports Multiple Hearing Test Modalities
Yao, Daoyuan; Givens, Gregg
2015-01-01
Introduction: Millions of global citizens suffering from hearing disorders have limited or no access to much needed hearing healthcare. Although tele-audiology presents a solution to alleviate this problem, existing remote hearing diagnosis systems support only pure-tone tests, leaving speech and other test procedures unsolved, due to the lack of software and hardware to enable the communication required between audiologists and their remote patients. This article presents a comprehensive remote hearing test system that integrates the two most needed hearing test procedures: a pure-tone audiogram and a speech test. Materials and Methods: This enhanced system is composed of a Web application server, an embedded smart Internet-Bluetooth® (Bluetooth SIG, Kirkland, WA) gateway (or console device), and a Bluetooth-enabled audiometer. Several graphical user interfaces and a relational database are hosted on the application server. The console device has been designed to support the tests and auxiliary communication between the local site and the remote site. Results: The study was conducted at an audiology laboratory. Pure-tone audiogram and speech test results from volunteers tested with this tele-audiology system are comparable with results from the traditional face-to-face approach. Conclusions: This browser-server-based comprehensive tele-audiology system offers a flexible platform to expand hearing services to traditionally underserved groups. PMID:25919376
Advanced Pulse Oximetry System for Remote Monitoring and Management
Pak, Ju Geon; Park, Kee Hyun
2012-01-01
Pulse oximetry data such as saturation of peripheral oxygen (SpO2) and pulse rate are vital signals for early diagnosis of heart disease. Therefore, various pulse oximeters have been developed continuously. However, some of the existing pulse oximeters are not equipped with communication capabilities, and consequently, the continuous monitoring of patient health is restricted. Moreover, even though certain oximeters have been built as network models, they focus on exchanging only pulse oximetry data, and they do not provide sufficient device management functions. In this paper, we propose an advanced pulse oximetry system for remote monitoring and management. The system consists of a networked pulse oximeter and a personal monitoring server. The proposed pulse oximeter measures a patient's pulse oximetry data and transmits the data to the personal monitoring server. The personal monitoring server then analyzes the received data and displays the results to the patient. Furthermore, for device management purposes, operational errors that occur in the pulse oximeter are reported to the personal monitoring server, and the system configurations of the pulse oximeter, such as thresholds and measurement targets, are modified by the server. We verify that the proposed pulse oximetry system operates efficiently and that it is appropriate for monitoring and managing a pulse oximeter in real time. PMID:22933841
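A hedged sketch of the reporting side of such a networked oximeter is shown below: readings and operational errors are sent to the personal monitoring server as JSON, and the server's reply could carry updated thresholds; the port and message fields are assumptions rather than the paper's protocol.

    # Sketch: a networked pulse oximeter reporting readings and device errors to
    # a personal monitoring server as JSON lines over TCP. Port number and message
    # fields are assumptions, not the paper's actual protocol.
    import json
    import socket

    def report(spo2, pulse_rate, error=None,
               host="monitor.example.org", port=6000):
        message = {"spo2": spo2, "pulse_rate": pulse_rate, "error": error}
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall((json.dumps(message) + "\n").encode("utf-8"))
            return sock.makefile().readline().strip()   # e.g. new threshold settings

    # report(97, 72)                          # normal measurement
    # report(None, None, error="probe-off")   # operational error report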
Process Integrated Mechanism for Human-Computer Collaboration and Coordination
2012-09-12
system we implemented the TAFLib library that provides the communication with TAF . The data received from the TAF server is collected in a data structure...send new commands and flight plans for the UAVs to the TAF server. Test scenarios Several scenarios have been implemented to test and prove our...areas. Shooting Enemies The basic scenario proved the successful integration of PIM and the TAF simulation environment. Subsequently we improved the CP
1994-05-01
can easily envision using a microkernel such as Mach 3.0 [31], the NT executive [20], or QNX [40]. We only need to add a communications server and...of the system software architecture. Both hardware subsytems run the CMU Mach 3.0 microkernel [311: the host has special device drivers to support...in the Mach microkernel and a higher- level driver in the Unix server. The low-level drivers handle interrupts and simple device data transfers to
Implementation of Medical Information Exchange System Based on EHR Standard
Han, Soon Hwa; Kim, Sang Guk; Jeong, Jun Yong; Lee, Bi Na; Choi, Myeong Seon; Kim, Il Kon; Park, Woo Sung; Ha, Kyooseob; Cho, Eunyoung; Kim, Yoon; Bae, Jae Bong
2010-01-01
Objectives To develop effective ways of sharing patients' medical information, we developed a new medical information exchange system (MIES) based on a registry server, which enabled us to exchange different types of data generated by various systems. Methods To assure that patient's medical information can be effectively exchanged under different system environments, we adopted the standardized data transfer methods and terminologies suggested by the Center for Interoperable Electronic Healthcare Record (CIEHR) of Korea in order to guarantee interoperability. Regarding information security, MIES followed the security guidelines suggested by the CIEHR of Korea. This study aimed to develop essential security systems for the implementation of online services, such as encryption of communication, server security, database security, protection against hacking, contents, and network security. Results The registry server managed information exchange as well as the registration information of the clinical document architecture (CDA) documents, and the CDA Transfer Server was used to locate and transmit the proper CDA document from the relevant repository. The CDA viewer showed the CDA documents via connection with the information systems of related hospitals. Conclusions This research chooses transfer items and defines document standards that follow CDA standards, such that exchange of CDA documents between different systems became possible through ebXML. The proposed MIES was designed as an independent central registry server model in order to guarantee the essential security of patients' medical information. PMID:21818447
NASA Astrophysics Data System (ADS)
Xu, Chong-Yao; Zheng, Xin; Xiong, Xiao-Ming
2017-02-01
With the development of the Internet of Things (IoT) and the popularity of intelligent mobile terminals, smart home systems have come into view. However, due to high cost, complex installation and inconvenience, as well as network security issues, smart home systems have not been widely adopted. In this paper, combining Wi-Fi technology, the Android system, a cloud server and the SSL security protocol, a new smart home system is designed with low cost, easy operation, high security and stability. The system consists of Wi-Fi smart nodes (WSNs), an Android client and a cloud server. To reduce system cost and installation complexity, the Wi-Fi transceiver, appliance control logic and data conversion in each WSN are implemented on a single chip. In addition, all WSN data can be uploaded to the server through the home router, without having to transit through a gateway. All appliance status information and environmental information are preserved in the cloud server. Furthermore, to ensure information security, the Secure Sockets Layer (SSL) protocol is used in the WSN's communication with the server. Finally, to improve comfort and simplify operation, the Android client is designed with a room-based layout that makes controlling home appliances more realistic and more convenient.
Registered File Support for Critical Operations Files at SIRTF (Space Infrared Telescope Facility)
NASA Technical Reports Server (NTRS)
Turek, G.; Handley, Tom; Jacobson, J.; Rector, J.
2001-01-01
The SIRTF Science Center's (SSC) Science Operations System (SOS) has to manage nearly one hundred critical operations files through comprehensive file management services. This management is accomplished via the registered file system (otherwise known as TFS), which manages these files in a registered file repository composed of a virtual file system accessible via a TFS server and a file registration database. The TFS server provides controlled, reliable, and secure file transfer and storage by registering all file transactions and metadata in the file registration database. An API is provided for application programs to communicate with TFS servers and the repository. A command-line client implementing this API has been developed as a client tool. This paper describes the architecture and current implementation but, more importantly, the evolution of these services based on evolving community use cases and emerging information system technology.
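A minimal sketch of the registered-file idea follows: every file placed in the repository is also recorded, with its metadata, in a registration database so that the transfer is controlled and auditable. SQLite stands in for the registration database; the paths, table, and column names are invented and not taken from TFS:

```python
# Hedged sketch of a registered file repository: copy the file into the repository and
# record the transaction (name, checksum, size, time, destination) in a registration DB.
import hashlib, shutil, sqlite3, time
from pathlib import Path

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE registrations
              (name TEXT, sha256 TEXT, size INTEGER, registered_at REAL, repo_path TEXT)""")

def register_file(src: str, repo_dir: str) -> None:
    data = Path(src).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    dest = Path(repo_dir) / Path(src).name
    shutil.copyfile(src, dest)                         # place the file in the repository
    db.execute("INSERT INTO registrations VALUES (?, ?, ?, ?, ?)",
               (Path(src).name, digest, len(data), time.time(), str(dest)))
    db.commit()

# register_file("/tmp/ops_schedule.txt", "/srv/tfs_repository")   # hypothetical paths
```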
The USGODAE Monterey Data Server
NASA Astrophysics Data System (ADS)
Sharfstein, P.; Dimitriou, D.; Hankin, S.
2005-12-01
The USGODAE Monterey Data Server (http://www.usgodae.org/) has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. The server is operated with oversight and funding from the Office of Naval Research (ONR). Support of the GODAE Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the GODAE server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data available from the Global Telecommunications System (GTS) and other FTP sites, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. It supports GODAE participants, as well as the broader oceanographic research community, and is becoming a significant node in the international GODAE program. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Presenting data with a consistent interface and ensuring its availability in the maximum number of standard formats is one of the primary challenges in hosting the many diverse formats and broad range of data used by the GODAE community. To this end, all USGODAE data sets are available in their original format via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and GODAE Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC) specifications. USGODAE serves FNMOC GRIB files from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS) as OPeNDAP data sets using the GrADS Data Server (GDS). The server also provides several FNMOC custom IEEE binary format high resolution ocean analysis products and model outputs through GDS. These data sets are also made available through LAS. The Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo temperature/salinity profiling float data. The Argo collection includes all available Delayed-Mode (scientific quality controlled and corrected) data. USGODAE Argo data are served through OPeNDAP and LAS, which provide complete integration of the Argo data set into NVODS and the IOOS/DMAC. By providing researchers flexible, easy access to data through standard Internet and oceanographic interfaces, the USGODAE Monterey Data Server has become an invaluable resource for oceanographic research. 
Also, by promoting community data-serving projects, USGODAE strengthens the community and helps to advance data-serving standards.
Distributed PACS using distributed file system with hierarchical meta data servers.
Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato
2012-01-01
In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs that exist in individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into metadata and image data, which are stored individually. With this mechanism, because the entire file does not always have to be accessed, operations such as finding files and changing titles can be performed at high speed. At the same time, because a distributed file system is used, access to image files also achieves high speed and high fault tolerance. A further significant point of the proposed system is the simplicity of integrating several PACSs: only the metadata servers need to be integrated to construct an integrated system. The system also scales file access with the number and size of files. On the other hand, because the metadata server is centralized, it is the weak point of the system. To address this defect, hierarchical metadata servers are introduced, which increase both fault tolerance and the scalability of file access. To evaluate the proposed system, a prototype using Gfarm was implemented, and the file search times of Gfarm and NFS were compared.
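The metadata/image split can be illustrated with a short sketch: parse a DICOM file, write the bulky pixel payload to an image store, and keep only a small searchable record in a metadata index. pydicom is assumed to be available; the dictionary index, paths, and field names are stand-ins for the hierarchical metadata servers and distributed file system of the paper:

```python
# Illustrative only: split DICOM objects into a small metadata record and a separate
# image payload, so searches touch the metadata index rather than whole files.
import pydicom

metadata_index = {}    # sop_instance_uid -> searchable metadata record

def ingest(dicom_path: str, image_store_dir: str) -> None:
    ds = pydicom.dcmread(dicom_path)
    uid = ds.SOPInstanceUID
    image_path = f"{image_store_dir}/{uid}.raw"
    with open(image_path, "wb") as f:
        f.write(ds.PixelData)               # image payload stored separately
    metadata_index[uid] = {                 # small record, cheap to search or retitle
        "patient_id": str(ds.PatientID),
        "study_date": str(ds.get("StudyDate", "")),
        "modality": str(ds.get("Modality", "")),
        "image_path": image_path,
    }

def find_by_patient(patient_id: str):
    return [rec for rec in metadata_index.values() if rec["patient_id"] == patient_id]
```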
Distributed Name Servers: Naming and Caching in Large Distributed Computing Environments
1985-12-01
transmission rate of the communication medium, transmission over a 56K bps line costs approximately 54r, and similarly, communication over a 9.6K...memories for modern computer systems attempt to maximize the hit ratio for a fixed-size cache by utilizing intelligent cache replacement algorithms
Development of Data Processing Software for NBI Spectroscopic Analysis System
NASA Astrophysics Data System (ADS)
Zhang, Xiaodan; Hu, Chundong; Sheng, Peng; Zhao, Yuanzhe; Wu, Deyun; Cui, Qinglong
2015-04-01
A set of data processing software is presented in this paper for processing NBI spectroscopic data. For better and more systematic management and querying of these data, they are managed uniformly by the NBI data server. The data processing software offers functions for uploading original and analytic beam spectral data to the data server manually and automatically, querying and downloading all NBI data, and dealing with local LZO data. The software consists of a server program and a client program. The server software is programmed in C/C++ under a CentOS development environment. The client software is developed on a VC 6.0 platform, which offers a convenient operational human interface. The network communications between the server and the client are based on TCP. With the help of this software, the NBI spectroscopic analysis system realizes unattended automatic operation, and the clear interface also makes it much more convenient to offer beam intensity distribution data and beam power data to operators for operational decision-making. Supported by National Natural Science Foundation of China (No. 11075183), the Chinese Academy of Sciences Knowledge Innovation
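The TCP upload path mentioned above can be sketched in a few lines of Python. This is not the actual NBI software (which is C/C++ and VC 6.0); the port number, record fields, and length-prefixed JSON framing are assumptions chosen only to show one client sending a record to one server:

```python
# Minimal sketch of a TCP upload: the client sends one analytic-data record as a
# length-prefixed JSON message and the server stores it.
import json, socket, struct, threading

def recv_exact(conn: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def handle_one(listener: socket.socket, store: list) -> None:
    conn, _ = listener.accept()
    with conn:
        size = struct.unpack("!I", recv_exact(conn, 4))[0]
        store.append(json.loads(recv_exact(conn, size)))

def send_record(host: str, port: int, record: dict) -> None:
    body = json.dumps(record).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(struct.pack("!I", len(body)) + body)   # 4-byte length prefix, then JSON

if __name__ == "__main__":
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", 9090))
    listener.listen()
    received = []
    t = threading.Thread(target=handle_one, args=(listener, received))
    t.start()
    send_record("127.0.0.1", 9090, {"shot": 12345, "beam_power_kw": 980.0})
    t.join()
    listener.close()
    print(received)
```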
Image acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Reardon, Frank J.; Salutz, James R.
1991-07-01
The Mayo Clinic and IBM Rochester, Minnesota, have jointly developed a picture archiving, distribution and viewing system for use with Mayo's CT and MRI imaging modalities. Images are retrieved from the modalities and sent over the Mayo city-wide token ring network to optical storage subsystems for archiving, and to server subsystems for viewing on image review stations. Images may also be retrieved from archive and transmitted back to the modalities. The subsystems that interface to the modalities and communicate to the other components of the system are termed Image Acquisition Units (IAUs). The IAUs are IBM Personal System/2 (PS/2) computers with specially developed software. They operate independently in a network of cooperative subsystems and communicate with the modalities, archive subsystems, image review server subsystems, and a central subsystem that maintains information about the content and location of images. This paper provides a detailed description of the function and design of the Image Acquisition Units.
Network Upgrade for the SLC: Control System Modifications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crane, M.; Mackenzie, R.; Sass, R.
2011-09-09
Current communications between the SLAC Linear Collider control system central host and the SLCmicros are built upon the SLAC-developed SLCNET communication hardware and protocols. We will describe how the Internet suite of protocols (TCP/IP) is used to replace the SLCNET protocol interface. The major communication pathways and their individual requirements are described. A proxy server is used to reduce the total number of system TCP/IP connections. The SLCmicros were upgraded to use Ethernet and TCP/IP as well as SLCNET. Design choices and implementation experiences are addressed.
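The connection-reducing role of the proxy can be shown with a toy, in-process model: many micro clients send requests tagged with their identity to the proxy, which multiplexes them over a single upstream path to the central host. Queues stand in for the real sockets; all names here are invented for illustration, not taken from the SLC software:

```python
# Conceptual sketch of connection fan-in through a proxy: the host sees one "connection"
# (the upstream queue) instead of one TCP connection per micro.
import queue

class ConnectionProxy:
    def __init__(self):
        self.upstream = queue.Queue()       # single "connection" to the central host
        self.replies = {}                   # client_id -> reply queue

    def register(self, client_id: str) -> queue.Queue:
        self.replies[client_id] = queue.Queue()
        return self.replies[client_id]

    def request(self, client_id: str, message: str) -> None:
        self.upstream.put((client_id, message))   # tag each message with its origin

    def pump_host(self) -> None:
        """Pretend to be the central host: answer every multiplexed request."""
        while not self.upstream.empty():
            client_id, message = self.upstream.get()
            self.replies[client_id].put(f"ack:{message}")

proxy = ConnectionProxy()
inbox = proxy.register("micro-17")
proxy.request("micro-17", "read-magnet-status")
proxy.pump_host()
print(inbox.get())    # -> ack:read-magnet-status
```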
Medical Information Management System (MIMS) CareWindows.
Stiphout, R. M.; Schiffman, R. M.; Christner, M. F.; Ward, R.; Purves, T. M.
1991-01-01
The demonstration of MIMS/CareWindows will include: (1) a review of the application environment and development history, and (2) a demonstration of a very large, comprehensive clinical information system with a cost-effective graphic user server and communications interface. PMID:1807755
SMART Careplan System for Continuum of Care
Kim, Young Ah; Jang, Seon Young; Ahn, Meejung; Kim, Kyung Duck; Kim, Sung Soo
2015-01-01
Objectives This paper describes the integrated Careplan system, designed to manage and utilize the existing Electronic Medical Record (EMR) system; the system also defines key items for interdisciplinary communication and continuity of patient care. Methods We structured the Careplan system to provide effective interdisciplinary communication for healthcare services. The design of the Careplan system architecture proceeded in four steps: defining target datasets; construction of the conceptual framework and architecture; screen layout and storyboard creation; and screen user interface (UI) design and development, followed by a pilot test and step-by-step deployment. The Careplan system architecture consists of two parts, a server-side and a client-side area. The server side performs data retrieval and storage from the target EMRs, and also sends push notifications to the client depending on the careplan series. In addition, the Careplan system provides various convenient modules for easily entering an individual careplan. Results Currently, Severance Hospital operates the Careplan system and provides a stable service that copes with dynamic changes in the EMR (e.g., domestic medical certification, the Joint Commission International guidelines). Conclusions The Careplan system should go hand in hand with key items for strengthening interdisciplinary communication and information sharing within the EMR environment. A well-designed Careplan system can enhance user satisfaction and performance. PMID:25705559
Mobile collaborative medical display system.
Park, Sanghun; Kim, Wontae; Ihm, Insung
2008-03-01
Because of recent advances in wireless communication technologies, the world of mobile computing is flourishing with a variety of applications. In this study, we present an integrated architecture for a personal digital assistant (PDA)-based mobile medical display system that supports collaborative work between remote users. We aim to develop a system that enables users in different regions to share a working environment for collaborative visualization with the potential for exploring huge medical datasets. Our system consists of three major components: mobile client, gateway, and parallel rendering server. The mobile client serves as a front end and enables users to choose the visualization and control parameters interactively and cooperatively. The gateway handles requests and responses between mobile clients and the rendering server for efficient communication. Through the gateway, it is possible to share working environments between users, allowing them to work together in computer supported cooperative work (CSCW) mode. Finally, the parallel rendering server is responsible for performing heavy visualization tasks. Our experience indicates that some features currently available to our mobile clients for collaborative scientific visualization are limited due to the poor performance of mobile devices and the low bandwidth of wireless connections. However, as mobile devices and wireless network systems are experiencing considerable elevation in their capabilities, we believe that our methodology will be utilized effectively in building quite responsive, useful mobile collaborative medical systems in the very near future.
Federal Emergency Management Information System (FEMIS) system administration guide. Version 1.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Downing, T.R.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and analysis tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide defines FEMIS hardware and software requirements and gives instructions for installing the FEMIS software package. This document also contains information on the following: software installation for the FEMIS data servers, communication server, mail server, and the emergency management workstations; distribution media loading and FEMIS installation validation and troubleshooting; and system management of FEMIS users, login privileges, and usage. The system administration utilities (tools), available in the FEMIS client software, are described for user accounts and site profile. This document also describes the installation and use of system and database administration utilities that will assist in keeping the FEMIS system running in an operational environment. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via telecommunications links.
Server rack for improved data center management
Bermudez Rodriguez, Sergio A.; Hamann, Hendrik F.; Wehle, Hans-Dieter
2018-01-09
Methods and systems for data center management include collecting sensor data from one or more sensors in a rack; determining a location and identifying information for each asset in the rack using a set of asset tags associated with respective assets; communicating the sensor and asset location to a communication module; receiving an instruction from the communication module; and executing the received instruction to change a property of the rack.
Performance of a distributed superscalar storage server
NASA Technical Reports Server (NTRS)
Finestead, Arlan; Yeager, Nancy
1993-01-01
The RS/6000 performed well in our test environment. The potential exists for the RS/6000 to act as a departmental server for a small number of users, rather than as a high speed archival server. Multiple UniTree Disk Servers utilizing one UniTree Name Server could be developed, which would allow for a cost-effective archival system. Our performance tests were clearly limited by the network bandwidth. The performance gathered by the LibUnix testing shows that UniTree is capable of exceeding ethernet speeds on an RS/6000 Model 550. The performance of FTP might be significantly faster if asked to perform across a higher bandwidth network. The UniTree Name Server also showed signs of being a potential bottleneck. UniTree sites that would require a high ratio of file creations and deletions to reads and writes would run into this bottleneck. It is possible to improve the UniTree Name Server performance by bypassing the UniTree LibUnix Library altogether, communicating directly with the UniTree Name Server, and optimizing creations. Although testing was performed in a less than ideal environment, hopefully the performance statistics stated in this paper will give end-users a realistic idea as to what performance they can expect in this type of setup.
Cheon, Gyeongwoo; Shin, Il Hyung; Jung, Min Yang; Kim, Hee Chan
2009-01-01
We developed a gateway server to support various types of bio-signal monitoring devices for ubiquitous emergency healthcare in a reliable, effective, and scalable way. The server provides multiple channels supporting real-time N-to-N client connections. We applied our system to four types of health monitoring devices, including a 12-channel electrocardiograph (ECG), an oxygen saturation (SpO(2)) monitor, and medical imaging devices (an ultrasonograph and a digital skin microscope). Different types of telecommunication networks were tested: WIBRO, CDMA, wireless LAN, and wired internet. We measured the performance of our system in terms of the transmission rate and the number of simultaneous connections. The results show that the proposed network communication strategy can be successfully applied to the ubiquitous emergency healthcare service by providing a rate fast enough for real-time video transmission and multiple connections among patients and medical personnel.
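The multi-channel, N-to-N relay idea can be sketched as a small publish/subscribe gateway: each monitoring device publishes frames to a named channel, and any number of clinician clients subscribed to that channel receive them. The channel names, callback mechanism, and data shapes below are illustrative assumptions, not the published implementation:

```python
# Hedged sketch of an N-to-N gateway: many publishers, many subscribers per channel.
from collections import defaultdict

class GatewayServer:
    def __init__(self):
        self._subscribers = defaultdict(list)   # channel -> list of callback functions

    def subscribe(self, channel: str, callback):
        self._subscribers[channel].append(callback)

    def publish(self, channel: str, frame: dict):
        for callback in self._subscribers[channel]:
            callback(frame)

gateway = GatewayServer()
gateway.subscribe("ecg/patient-7", lambda f: print("nurse station:", f))
gateway.subscribe("ecg/patient-7", lambda f: print("on-call physician:", f))
gateway.publish("ecg/patient-7", {"lead": "II", "samples_mv": [0.12, 0.35, 1.02]})
```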
An analysis of the low-earth-orbit communications environment
NASA Astrophysics Data System (ADS)
Diersing, Robert Joseph
Advances in microprocessor technology and availability of launch opportunities have caused interest in low-earth-orbit satellite based communications systems to increase dramatically during the past several years. In this research the capabilities of two low-cost, store-and-forward LEO communications satellites operating in the public domain are examined--PACSAT-1 (operated by the Radio Amateur Satellite Corporation) and UoSAT-3 (operated by the University of Surrey, England, Electrical Engineering Department). The file broadcasting and file transfer facilities are examined in detail and a simulation model of the downlink traffic pattern is developed. The simulator will aid the assessment of changes in design and implementation for other systems. The development of the downlink traffic simulator is based on three major parts. First, is a characterization of the low-earth-orbit operating environment along with preliminary measurements of the PACSAT-1 and UoSAT-3 systems including: satellite visibility constraints on communications, monitoring equipment configuration, link margin computations, determination of block and bit error rates, and establishing typical data capture rates for ground stations using computer-pointed directional antennas and fixed omni-directional antennas. Second, arrival rates for successful and unsuccessful file server connections are established along with transaction service times. Downlink traffic has been further characterized by measuring: frame and byte counts for all data-link layer traffic; 30-second interval average response time for all traffic and for file server traffic only; file server response time on a per-connection basis; and retry rates for information and supervisory frames. Finally, the model is verified by comparison with measurements of actual traffic not previously used in the model building process. The simulator is then used to predict operation of the PACSAT-1 satellite with modifications to the original design.
Huang, Ean-Wen; Hung, Rui-Suan; Chiou, Shwu-Fen; Liu, Fei-Ying; Liou, Der-Ming
2011-01-01
Information and communication technologies progress rapidly, and many novel applications have been developed in many domains of human life. In recent years, the demand for healthcare services has been growing because of the increase in the elderly population. Consequently, a number of healthcare institutions have focused on creating technologies to reduce extraneous work and improve the quality of service. In this study, an information platform for tele-healthcare services was implemented. The architecture of the platform included a web-based application server and a client system. The client system was able to retrieve the blood pressure and glucose levels of a patient stored in measurement instruments through Bluetooth wireless transmission. The web application server assisted the staff and clients in analyzing the health conditions of patients. In addition, the server provided face-to-face communications and instructions through remote video devices. The platform deployed a service-oriented architecture, which consisted of HL7 standard messages and web service components. The platform could transfer health records into HL7 standard clinical document architecture for data exchange with other organizations. The prototype system was pretested and evaluated in the homecare department of a hospital and a community management center for chronic disease monitoring. Based on the results of this study, this system is expected to improve the quality of healthcare services.
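To give a feel for the record-transformation step mentioned above (turning a home measurement into a CDA-style document for exchange), the sketch below builds a heavily simplified XML fragment. It is not a valid HL7 CDA document; the element names are invented for illustration:

```python
# Very reduced sketch: a home blood-pressure reading serialized as CDA-flavoured XML.
import xml.etree.ElementTree as ET

def reading_to_xml(patient_id: str, systolic: int, diastolic: int, taken_at: str) -> str:
    doc = ET.Element("ClinicalDocument")
    ET.SubElement(doc, "patient", id=patient_id)
    obs = ET.SubElement(doc, "observation", code="blood-pressure", time=taken_at)
    ET.SubElement(obs, "systolic", unit="mmHg").text = str(systolic)
    ET.SubElement(obs, "diastolic", unit="mmHg").text = str(diastolic)
    return ET.tostring(doc, encoding="unicode")

print(reading_to_xml("patient-88", 128, 82, "2011-05-20T09:30:00"))
```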
PEM public key certificate cache server
NASA Astrophysics Data System (ADS)
Cheung, T.
1993-12-01
Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRLs) for PEM applications. PEM applications contact the server to obtain/store certificates and CRLs. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked together with this library because ELROS uses the transport layer functions provided by 'libisode.a'. The X.500 DAP library that is included with the ELROS distribution has to be linked in also, since the server uses the DAP library functions to communicate with directory servers.
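The cache-plus-CRL lookup behaviour can be illustrated with a short, in-memory model: serve a certificate from the cache only if it is present, not expired, and not on a cached revocation list. This is an illustration of the idea only; the field names are assumptions and real PEM/X.509 parsing (which the server does in C and ELROS) is omitted:

```python
# Illustrative certificate/CRL cache: lookup order is "revoked? expired? else serve".
import time

class CertificateCache:
    def __init__(self):
        self._certs = {}      # subject name -> (pem_text, expiry_epoch)
        self._revoked = set() # serial numbers listed on cached CRLs

    def store(self, subject: str, pem_text: str, expiry_epoch: float):
        self._certs[subject] = (pem_text, expiry_epoch)

    def revoke(self, serial: str):
        self._revoked.add(serial)

    def lookup(self, subject: str, serial: str):
        if serial in self._revoked:
            return None                            # on a CRL: do not serve it
        entry = self._certs.get(subject)
        if entry is None or entry[1] < time.time():
            return None                            # missing or expired: caller must fetch anew
        return entry[0]
```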
Web-based system for surgical planning and simulation
NASA Astrophysics Data System (ADS)
Eldeib, Ayman M.; Ahmed, Mohamed N.; Farag, Aly A.; Sites, C. B.
1998-10-01
The growing scientific knowledge and rapid progress in medical imaging techniques have led to an increasing demand for better and more efficient methods of remote access to high-performance computer facilities. This paper introduces a web-based telemedicine project that provides interactive tools for surgical simulation and planning. The presented approach makes use of a client-server architecture based on new internet technology, where clients use an ordinary web browser to view, send, receive, and manipulate patients' medical records, while the server uses the supercomputer facility to generate online semi-automatic segmentation, 3D visualization, surgical simulation/planning, and neuroendoscopic procedure navigation. The supercomputer (SGI ONYX 1000) is located at the Computer Vision and Image Processing Lab, University of Louisville, Kentucky. This system is under development in cooperation with the Department of Neurological Surgery, Alliant Health Systems, Louisville, Kentucky. The server is connected via a network to the Picture Archiving and Communication System at Alliant Health Systems through a DICOM standard interface that enables authorized clients to access patients' images from different medical modalities.
He, Longjun; Ming, Xing; Liu, Qian
2014-04-01
With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can retrieve medical images from the picture archiving and communication system on a mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction, and direct volume rendering, to provide shape, brightness, depth, and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application were also discussed. The results demonstrated that this proposed medical application could provide a smooth interactive experience in WLAN and 3G networks.
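The adaptive render-parameter idea can be sketched with a simple feedback rule: measure how quickly the last frame arrived and raise or lower the requested quality accordingly. The thresholds, parameter names, and step sizes below are made up for illustration and are not the algorithm from the paper:

```python
# Sketch of network-adaptive remote rendering: adjust quality from measured throughput.
def next_render_params(frame_bytes: int, transfer_seconds: float, params: dict) -> dict:
    throughput = frame_bytes / max(transfer_seconds, 1e-6)   # bytes per second
    params = dict(params)
    if throughput < 200_000:          # network is struggling: shrink and compress harder
        params["jpeg_quality"] = max(30, params["jpeg_quality"] - 10)
        params["image_size"] = 256
    elif throughput > 1_000_000:      # plenty of headroom: improve the picture
        params["jpeg_quality"] = min(95, params["jpeg_quality"] + 5)
        params["image_size"] = 512
    return params

params = {"jpeg_quality": 75, "image_size": 512}
params = next_render_params(frame_bytes=150_000, transfer_seconds=1.2, params=params)
print(params)   # quality drops after a slow frame
```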
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Bower, J.C.; Burnett, R.A.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
Aydın, Eda Akman; Bay, Ömer Faruk; Güler, İnan
2016-01-01
Brain Computer Interface (BCI) based environment control systems could facilitate the lives of people with neuromuscular diseases, reduce dependence on their caregivers, and improve their quality of life. In addition to easy usage, low cost, and robust system performance, mobility is an important functionality expected from a practical BCI system in real life. In this study, in order to enhance users' mobility, we propose internet-based wireless communication between the BCI system and the home environment. We designed and implemented a prototype of an embedded low-cost, low-power, easy-to-use web server which is employed in internet-based wireless control of a BCI-based home environment. The embedded web server provides remote access to the environmental control module through BCI and web interfaces. While the proposed system offers BCI users enhanced mobility, it also provides remote control of the home environment by caregivers as well as by individuals in the initial stages of neuromuscular disease. The input of the BCI system is P300 potentials. We used the Region Based Paradigm (RBP) as the stimulus interface. Performance of the BCI system was evaluated on data recorded from 8 non-disabled subjects. The experimental results indicate that the proposed web server enables internet-based wireless control of electrical home appliances successfully through BCIs.
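The web-server side of such a control path can be illustrated with Python's standard http.server: an HTTP endpoint that a BCI front end (or a caregiver's browser) calls to switch appliances. The URL scheme, port, and appliance names are assumptions, and real GPIO control is replaced by a dictionary; this is a sketch of the role, not the embedded implementation described above:

```python
# Minimal sketch of an embedded-style control web server: GET /set/<appliance>/<on|off>.
from http.server import BaseHTTPRequestHandler, HTTPServer

appliances = {"lamp": "off", "fan": "off", "tv": "off"}

class ControlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "set" and parts[1] in appliances:
            appliances[parts[1]] = parts[2]
            body = f"{parts[1]} is now {parts[2]}".encode()
            self.send_response(200)
        else:
            body = b"unknown command"
            self.send_response(404)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ControlHandler).serve_forever()
```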
Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertains to an individual EOC's jurisdiction is stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.
The USGODAE Monterey Data Server
NASA Astrophysics Data System (ADS)
Sharfstein, P. J.; Dimitriou, D.; Hankin, S. C.
2004-12-01
With oversight from the U.S. Global Ocean Data Assimilation Experiment (GODAE) Steering Committee and funding from the Office of Naval Research, the USGODAE Monterey Data Server has been established at the Fleet Numerical Meteorology and Oceanography Center (FNMOC) as an explicit U.S. contribution to GODAE. Support of the Monterey Data Server is accomplished by a cooperative effort between FNMOC and NOAA's Pacific Marine Environmental Laboratory (PMEL) in the on-going development of the server and the support of a collaborative network of GODAE assimilation groups. This server hosts near real-time in-situ oceanographic data, atmospheric forcing fields suitable for driving ocean models, and unique GODAE data sets, including demonstration ocean model products. GODAE is envisioned as a global system of observations, communications, modeling and assimilation, which will deliver regular, comprehensive information on the state of the oceans in a way that will promote and engender wide utility and availability of this resource for maximum benefit to society. It aims to make ocean monitoring and prediction a routine activity in a manner similar to weather forecasting. GODAE will contribute to an information system for the global ocean that will serve interests from climate and climate change to ship routing and fisheries. The USGODAE Server is developed and operated as a prototypical node for this global information system. Because of the broad range and diverse formats of data used by the GODAE community, presenting data with a consistent interface and ensuring its availability in standard formats is a primary challenge faced by the USGODAE Server project. To this end, all USGODAE data sets are available via HTTP and FTP. In addition, USGODAE data are served using Local Data Manager (LDM), THREDDS cataloging, OPeNDAP, and Live Access Server (LAS) from PMEL. Every effort is made to serve USGODAE data through the standards specified by the National Virtual Ocean Data System (NVODS) and the Integrated Ocean Observing System Data Management and Communications (IOOS/DMAC). To provide surface forcing, fluxes, and boundary conditions for ocean model research, USGODAE serves global data from the Navy Operational Global Atmospheric Prediction System (NOGAPS) and regional data from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS). Global meteorological data and observational data from the FNMOC Ocean QC process are posted in near real-time to USGODAE. These include T/S profiles, in-situ and satellite sea surface temperature (SST), satellite altimetry, and SSM/I sea ice. They contain all of the unclassified in-situ and satellite observations used to initialize the FNMOC NOGAPS model. Also, the Naval Oceanographic Office provides daily satellite SST and SSH retrievals to USGODAE. The USGODAE Server functions as one of two Argo Global Data Assembly Centers (GDACs), hosting the complete collection of quality-controlled Argo T/S profiling float data. USGODAE Argo data are served through OPeNDAP and LAS, providing complete integration into NVODS and the IOOS/DMAC. Due to its high reliability, ease of data access, and increasing breadth of data, the USGODAE Server is becoming an invaluable resource for both the GODAE community and the general oceanographic community. Continued integration of model, forcing, and in-situ data sets from providers throughout the world is making the USGODAE Monterey Data Server a key part of the international GODAE project.
Development of a system for transferring images via a network: supporting a regional liaison.
Mihara, Naoki; Manabe, Shiro; Takeda, Toshihiro; Shinichirou, Kitamura; Junichi, Murakami; Kouji, Kiso; Matsumura, Yasushi
2013-01-01
We developed a system that transfers images via a network and started using it in our hospital's PACS (Picture Archiving and Communication System) in 2006. The system has since been redeveloped and put into operation so that it can support a regional liaison in the future. It has become possible to automatically transfer images simply by selecting a destination hospital that is registered in advance at the relay server. The gateway of this system can send images to a multi-center relay management server, which receives the images and resends them. This system has the potential to be useful for image exchange and to serve as a regional medical liaison.
Fault-Tolerant Local-Area Network
NASA Technical Reports Server (NTRS)
Morales, Sergio; Friedman, Gary L.
1988-01-01
Local-area network (LAN) for computers prevents single-point failure from interrupting communication between nodes of network. Includes two complete cables, LAN 1 and LAN 2. Microprocessor-based slave switches link cables to network-node devices as work stations, print servers, and file servers. Slave switches respond to commands from master switch, connecting nodes to two cable networks or disconnecting them so they are completely isolated. System monitor and control computer (SMC) acts as gateway, allowing nodes on either cable to communicate with each other and ensuring that LAN 1 and LAN 2 are fully used when functioning properly. Network monitors and controls itself, automatically routes traffic for efficient use of resources, and isolates and corrects its own faults, with potential dramatic reduction in time out of service.
DICOM-compliant PACS with CD-based image archival
NASA Astrophysics Data System (ADS)
Cox, Robert D.; Henri, Christopher J.; Rubin, Richard K.; Bret, Patrice M.
1998-07-01
This paper describes the design and implementation of a low-cost PACS conforming to the DICOM 3.0 standard. The goal was to provide an efficient image archival and management solution on a heterogeneous hospital network as a basis for filmless radiology. The system follows a distributed, client/server model and was implemented at a fraction of the cost of a commercial PACS. It provides reliable archiving on recordable CD and allows access to digital images throughout the hospital and on the Internet. Dedicated servers have been designed for short-term storage, CD-based archival, data retrieval and remote data access or teleradiology. The short-term storage devices provide DICOM storage and query/retrieve services to scanners and workstations and approximately twelve weeks of 'on-line' image data. The CD-based archival and data retrieval processes are fully automated with the exception of CD loading and unloading. The system employs lossless compression on both short- and long-term storage devices. All servers communicate via the DICOM protocol in conjunction with both local and 'master' SQL-patient databases. Records are transferred from the local to the master database independently, ensuring that storage devices will still function if the master database server cannot be reached. The system features rules-based work-flow management and WWW servers to provide multi-platform remote data access. The WWW server system is distributed on the storage, retrieval and teleradiology servers allowing viewing of locally stored image data directly in a WWW browser without the need for data transfer to a central WWW server. An independent system monitors disk usage, processes, network and CPU load on each server and reports errors to the image management team via email. The PACS was implemented using a combination of off-the-shelf hardware, freely available software and applications developed in-house. The system has enabled filmless operation in CT, MR and ultrasound within the radiology department and throughout the hospital. The use of WWW technology has enabled the development of an intuitive web-based teleradiology and image management solution that provides complete access to image data.
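The local/master database independence described above can be sketched as "always commit locally, then attempt the master update separately". In the sketch below, SQLite stands in for the local database and the master update is a stub that fails; the table and column names are invented for illustration only:

```python
# Sketch of independent local/master record keeping: a failed master update does not
# prevent the local archive from succeeding.
import sqlite3

local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE studies (uid TEXT PRIMARY KEY, patient TEXT, modality TEXT)")

def update_master(record: dict) -> None:
    raise ConnectionError("master database unreachable")   # stand-in for a remote SQL call

def store_study(uid: str, patient: str, modality: str) -> None:
    local.execute("INSERT OR REPLACE INTO studies VALUES (?, ?, ?)", (uid, patient, modality))
    local.commit()
    try:
        update_master({"uid": uid, "patient": patient, "modality": modality})
    except ConnectionError:
        # Local archive still succeeded; the record can be re-sent to the master later.
        print(f"master update deferred for {uid}")

store_study("1.2.840.99.1", "PAT-0042", "CT")
print(local.execute("SELECT * FROM studies").fetchall())
```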
A Lossless Network for Data Acquisition
NASA Astrophysics Data System (ADS)
Jereczek, Grzegorz; Lehmann Miotto, Giovanna; Malone, David; Walukiewicz, Miroslaw
2017-06-01
The bursty many-to-one communication pattern, typical for data acquisition systems, is particularly demanding for commodity TCP/IP and Ethernet technologies. We expand the study of lossless switching in software running on commercial off-the-shelf servers, using the ATLAS experiment as a case study. In this paper, we extend the popular software switch, Open vSwitch, with a dedicated, throughput-oriented buffering mechanism for data acquisition. We compare the performance under heavy congestion on typical Ethernet switches to a commodity server acting as a switch. Our results indicate that software switches with large buffers perform significantly better. Next, we evaluate the scalability of the system when building a larger topology of interconnected software switches, exploiting the integration with software-defined networking technologies. We build an IP-only leaf-spine network consisting of eight software switches running on distinct physical servers as a demonstrator.
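The buffering argument above can be illustrated with a toy model: a burst of packets arrives at a port that drains at a fixed rate, and drops are counted for a small buffer versus a large one. The numbers are arbitrary and not taken from the ATLAS measurements:

```python
# Toy incast model: larger buffers absorb the many-to-one burst instead of dropping it.
def drops_for_buffer(burst_sizes, buffer_capacity, drain_per_tick):
    queued, dropped = 0, 0
    for burst in burst_sizes:
        space = buffer_capacity - queued
        accepted = min(burst, space)
        dropped += burst - accepted
        queued = max(0, queued + accepted - drain_per_tick)
    return dropped

burst = [400] * 8 + [0] * 24            # many-to-one burst, then a quiet period to drain
print("small buffer drops:", drops_for_buffer(burst, buffer_capacity=256, drain_per_tick=100))
print("large buffer drops:", drops_for_buffer(burst, buffer_capacity=4096, drain_per_tick=100))
```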
Yang, Li; Zheng, Zhiming
2018-01-01
With advancements in wireless technologies, the study of biometrics-based multi-server authenticated key agreement schemes has acquired a lot of momentum. Recently, Wang et al. presented a three-factor authentication protocol with key agreement and claimed that their scheme was resistant to several prominent attacks. Unfortunately, this paper indicates that their protocol is still vulnerable to the user impersonation attack, privileged insider attack, and server spoofing attack. Furthermore, their protocol cannot provide perfect forward secrecy. As a remedy to these problems, we propose a biometrics-based authentication and key agreement scheme for multi-server environments. Compared with various related schemes, our protocol achieves stronger security and provides more functionality properties. Besides, the proposed protocol shows satisfactory performance in terms of storage requirements, communication overhead, and computational cost. Thus, our protocol is suitable for expert systems and other multi-server architectures. Consequently, the proposed protocol is more appropriate for distributed networks. PMID:29534085
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It therefore becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure needed to use it in everyday clinical practice, so-called Picture Archiving and Communication Systems (PACS). Developing a distributed PACS is a challenging task for present-day medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
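To show the shape of the MapReduce split proposed above without a Hadoop cluster, the toy below maps each raw tile to a partial result keyed by slice, groups the partials (the shuffle step), and reduces them per slice. The "reconstruction" is a fake sum; slice identifiers and data are invented:

```python
# Toy MapReduce over image tiles in plain Python: map, shuffle (group by key), reduce.
from collections import defaultdict
from functools import reduce

raw_tiles = [("slice-01", [1, 2, 3]), ("slice-02", [4, 5]), ("slice-01", [6])]

def map_phase(tile):
    slice_id, samples = tile
    return slice_id, sum(samples)          # partial result for one tile

def reduce_phase(partials):
    return reduce(lambda a, b: a + b, partials, 0)

grouped = defaultdict(list)                 # shuffle step: group partials by key
for key, value in map(map_phase, raw_tiles):
    grouped[key].append(value)

reconstructed = {key: reduce_phase(vals) for key, vals in grouped.items()}
print(reconstructed)   # {'slice-01': 12, 'slice-02': 9}
```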
Federal Emergency Management Information System (FEMIS), Installation Guide for FEMIS 1.4.6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arp, J.A.; Burnett, R.A.; Carter, R.J.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Angel, L.K.; Bower, J.C.; Burnett, R.A.
1999-06-29
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the U.S. Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment.
Integration of digital gross pathology images for enterprise-wide access
Amin, Milon; Sharma, Gaurav; Parwani, Anil V.; Anderson, Ralph; Kolowitz, Brian J; Piccoli, Anthony; Shrestha, Rasu B.; Lauro, Gonzalo Romero; Pantanowitz, Liron
2012-01-01
Background: Sharing digital pathology images for enterprise-wide use into a picture archiving and communication system (PACS) is not yet widely adopted. We share our solution and 3-year experience of transmitting such images to an enterprise image server (EIS). Methods: Gross pathology images acquired by prosectors were integrated with clinical cases into the laboratory information system's image management module, and stored in JPEG2000 format on a networked image server. Automated daily searches for cases with gross images were used to compile an ASCII text file that was forwarded to a separate institutional Enterprise Digital Imaging and Communications in Medicine (DICOM) Wrapper (EDW) server. Concurrently, an HL7-based image order for these cases was generated, containing the locations of images and patient data, and forwarded to the EDW, which combined data in these locations to generate images with patient data, as required by DICOM standards. The image and data were then "wrapped" according to DICOM standards, transferred to the PACS servers, and made accessible on an institution-wide basis. Results: In total, 26,966 gross images from 9,733 cases were transmitted over the 3-year period from the laboratory information system to the EIS. The average process time for cases with successful automatic uploads (n=9,688) to the EIS was 98 seconds. Only 45 cases (0.5%) failed, requiring manual intervention. Uploaded images were immediately available to institution-wide PACS users. Since inception, user feedback has been positive. Conclusions: Enterprise-wide PACS-based sharing of pathology images is feasible, provides useful services to clinical staff, and utilizes existing information system and telecommunications infrastructure. PACS-shared pathology images, however, require a "DICOM wrapper" for multisystem compatibility. PMID:22530178
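The pipeline described above (daily search for cases with gross images, an order carrying patient data and image locations, combination of the two, and hand-off to the PACS) can be outlined with plain-Python stubs. The real system uses HL7 messages and a DICOM wrapper; here those steps are stand-ins with invented field names and sample data:

```python
# Plain-Python outline of the gross-image publishing pipeline; every step is a stub.
import os, tempfile

def find_cases_with_gross_images(lis_cases):
    # Stand-in for the automated daily search against the laboratory information system.
    return [c for c in lis_cases if c.get("gross_image_paths")]

def build_image_order(case):
    # Stand-in for the HL7-based image order carrying patient data and image locations.
    return {"accession": case["accession"], "patient_id": case["patient_id"],
            "patient_name": case["patient_name"], "image_paths": case["gross_image_paths"]}

def dicom_wrap(order, image_path):
    # Stand-in for the Enterprise DICOM Wrapper: join image bytes with patient data.
    with open(image_path, "rb") as f:
        return {"patient_id": order["patient_id"], "patient_name": order["patient_name"],
                "accession": order["accession"], "payload": f.read()}

def send_to_pacs(wrapped):
    print(f"sent {wrapped['accession']} ({len(wrapped['payload'])} bytes) to PACS")

if __name__ == "__main__":
    fd, path = tempfile.mkstemp(suffix=".jpg")
    os.write(fd, b"\xff\xd8fake-jpeg-bytes\xff\xd9")   # tiny fake image for the demo
    os.close(fd)
    cases = [{"accession": "S21-1234", "patient_id": "MRN-777",
              "patient_name": "DOE^JANE", "gross_image_paths": [path]},
             {"accession": "S21-1235", "patient_id": "MRN-778",
              "patient_name": "DOE^JOHN", "gross_image_paths": []}]
    for case in find_cases_with_gross_images(cases):
        order = build_image_order(case)
        for image_path in order["image_paths"]:
            send_to_pacs(dicom_wrap(order, image_path))
    os.remove(path)
```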
MATREX: A Unifying Modeling and Simulation Architecture for Live-Virtual-Constructive Applications
2007-05-23
[Acquisition life-cycle chart listing Pre-Systems Acquisition, Systems Acquisition, Operations & Support, Milestones B and C, Critical Design Review, LRIP/IOT&E, FRP Decision Review, Deployment, Sustainment, and FOC.] CMS2 – Comprehensive Munitions & Sensor Server • CSAT – C4ISR Static Analysis Tool • C4ISR – Command & Control, Communications, Computers
Development of EPA Protocol Information Enquiry Service System Based on Embedded ARM Linux
NASA Astrophysics Data System (ADS)
Peng, Daogang; Zhang, Hao; Weng, Jiannian; Li, Hui; Xia, Fei
Industrial Ethernet is a new technology for industrial network communications developed in recent years. In the field of industrial automation in China, EPA is the first standard accepted and published by ISO, and it has been included in the fourth edition of IEC 61158 as fieldbus Type 14. According to the EPA standard, field devices such as industrial field controllers, actuators, and other instruments are all able to communicate based on the Ethernet standard. In this paper, the Atmel AT91RM9200 embedded development board and open-source embedded Linux are used to develop an EPA protocol information enquiry service system based on embedded ARM Linux. The system provides an EPA server program for EPA data acquisition, and the EPA information enquiry service is available to programs on local or remote hosts through a socket interface. The EPA client can access the data and information of other EPA devices on the EPA network once it establishes a connection with the monitoring port of the server.
NASA Technical Reports Server (NTRS)
Boorstyn, R. R.
1973-01-01
Research is reported dealing with problems of digital data transmission and computer communications networks. The results of four individual studies are presented which include: (1) signal processing with finite state machines, (2) signal parameter estimation from discrete-time observations, (3) digital filtering for radar signal processing applications, and (4) multiple server queues where all servers are not identical.
Huang, H K; Wong, A W; Zhu, X
1997-01-01
Asynchronous transfer mode (ATM) technology emerges as a leading candidate for medical image transmission in both local area network (LAN) and wide area network (WAN) applications. This paper describes the performance of an ATM LAN and WAN network at the University of California, San Francisco. The measurements were obtained using an intensive care unit (ICU) server connecting to four image workstations (WS) at four different locations of a hospital-integrated picture archiving and communication system (HI-PACS) in a daily regular clinical environment. Four types of performance were evaluated: magnetic disk-to-disk, disk-to-redundant array of inexpensive disks (RAID), RAID-to-memory, and memory-to-memory. Results demonstrate that the transmission rate between two workstations can reach 5-6 Mbytes/s from RAID-to-memory, and 8-10 Mbytes/s from memory-to-memory. When the server has to send images to all four workstations simultaneously, the transmission rate to each WS is about 4 Mbytes/s. Both situations are adequate for radiologic image communications for picture archiving and communication systems (PACS) and teleradiology applications.
Rajan, J Pandia; Rajan, S Edward
2018-01-01
Designing a wireless physiological signal monitoring system with secure data communication is an important and dynamic task in health care. We propose a signal monitoring system using NI myRIO connected to a wireless body sensor network through a multi-channel signal acquisition method. After server-side validation of the signal, the data held on the local server are updated to the cloud. An Internet of Things (IoT) architecture is used to give healthcare service providers mobile, fast access to patient data. This research work proposes a novel architecture for a wireless physiological signal monitoring system using ubiquitous healthcare services through a virtual Internet of Things. We show improved access and real-time dynamic monitoring of physiological signals in this remote monitoring system using the virtual Internet of Things approach, and the system is evaluated against conventional values. The proposed system is envisioned as part of a modern smart healthcare system, offering high utility and user friendliness in clinical applications. We claim that the proposed scheme significantly improves the accuracy of the remote monitoring system compared with other wireless communication methods in clinical systems.
Improving and streamlining the workflow in the graphic arts and printing industry
NASA Astrophysics Data System (ADS)
Tuijn, Chris
2003-01-01
In order to survive in the economy of today, an ever-increasing productivity is required from all the partners participating in a specific business process. This is not different for the printing industry. One of the ways to remain profitable is, on one hand, to reduce costs by automation and aiming for large-scale projects and, on the other hand, to specialize and become an expert in the area in which one is active. One of the ways to realize these goals is by streamlining the communication between the different partners and focusing on the core business. If we look at the graphic arts and printing industry, we can identify different important players that eventually help in the realization of printed material. For the printing company (as is the case for any other company), the most important player is the customer. This role can be adopted by many different players including publishers, companies, non-commercial institutions, private persons etc. Sometimes, the customer will be the content provider as well but this is not always the case. Often, the content is provided by other organizations such as design and prepress agencies, advertising companies etc. In most printing organizations, the customer has one contact person often referred to as the CSR (Customer Service Representative). Other people involved at the printing organization include the sales representatives, prepress operators, printing operators, postpress operators, planners, the logistics department, the financial department etc. In the first part of this article, we propose a solution that will improve the communication between all the different actors in the graphic arts and printing industry considerably and will optimize and streamline the overall workflow as well. This solution consists of an environment in which the customer can communicate with the CSR to ask for a quote based on a specific product intent; the CSR will then (after the approval from the customer's side) organize the work and brief his technical managers to realize the product. Furthermore, the system will allow managers to brief the actors and follow up on the progress. At all times, the CSRs, as well as the customers, will be able to look at the overall status of a specific product. If required, the customers can approve the content over the web; the system will also support local and remote proofing. In the second part of this article, we will focus on the technical environment that can be used to create such a system. To this end, we propose the use of a multi-tier server architecture based on Sun's J2EE platform. Since our system performs a communicating role by nature, it will have to interface in a smart way with a lot of external systems such as prepress systems, MIS systems, mail servers etc. In order to allow a robust communication between the server and its subsystems that avoids a failure of the overall system if one of the components goes down, we have chosen a non-blocking, asynchronous communication method based on queuing systems. In order to support an easy integration with other systems in the graphic industry, we will also describe how our communication server supports the JDF standard, a new standard in the graphic industry established by the CIP4 committee.
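The non-blocking, queue-based communication method mentioned above can be sketched briefly. The snippet below is an illustrative Python stand-in for the described queuing design, not the authors' J2EE implementation; the subsystem name and job fields are hypothetical.

```python
# Illustrative sketch of non-blocking, queue-based messaging: the communication
# server enqueues a job for a subsystem (here a hypothetical prepress system) and
# returns immediately; a worker drains the queue, so a slow or unavailable
# subsystem never blocks the overall system.
import queue
import threading

prepress_queue: "queue.Queue[dict]" = queue.Queue()

def prepress_worker() -> None:
    while True:
        job = prepress_queue.get()          # blocks only this worker thread
        try:
            print(f"forwarding job {job['id']} to the prepress system")  # stand-in delivery
        finally:
            prepress_queue.task_done()

threading.Thread(target=prepress_worker, daemon=True).start()

# Server side: enqueue and return immediately (non-blocking for the caller).
prepress_queue.put({"id": 42, "intent": "brochure, A4, 4/4 colour"})
prepress_queue.join()                        # only for this demo; a real server would not wait
```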
Akiyama, M
2001-01-01
The Hospital Information System (HIS) has been positioned as the hub of the healthcare information management architecture. In Japan, the billing system assigns "insurance disease names" to performed exams based on the diagnosis type. Departmental systems provide localized, departmental services, such as order receipt and diagnostic reporting, but do not provide patient demographic information. The system above has many problems. The departmental systems' terminals and the HIS's terminals are not integrated. Duplicate data entry introduces errors and increases workloads. Order and exam data managed by the HIS can be sent to the billing system, but departmental data cannot usually be entered. Additionally, billing systems usually keep departmental data for only a short time before it is deleted. The billing system provides payment based on what is entered. The billing system is oriented towards diagnoses. Most importantly, the system is geared towards generating billing reports rather than providing high-quality patient care. The role of the application server is that of a mediator between system components. Data and events generated by system components are sent to the application server, which routes them to appropriate destinations. It also records all system events, including state changes to clinical data, access of clinical data and so on. Finally, the Resource Management System identifies all system resources available to the enterprise. The departmental systems are responsible for managing data and clinical processes at a departmental level. The client interacts with the system via the application server, which provides a general set of system-level functions. The system is implemented using current technologies, CORBA and HTTP. System data are collected by the application server and assembled into XML documents for delivery to clients. Clients can access these documents at URLs using standard HTTP clients, since each department provides an HTTP-compliant web server. We have implemented an integrated system communicating via CORBA middleware, consisting of an application server, an endoscopy departmental server, a pathology departmental server and a wrapped legacy HIS. We have found this new approach solves the problems outlined earlier. It provides the services needed to ensure that data is never lost and is always available, that events that occur in the hospital are always captured, and that resources are managed and tracked effectively. Finally, it reduces costs, raises efficiency, increases the quality of patient care, and ultimately saves lives. Now, we are going to integrate all remaining hospital departments, and ultimately, all hospital functions.
Radiology on handheld devices: image display, manipulation, and PACS integration issues.
Raman, Bhargav; Raman, Raghav; Raman, Lalithakala; Beaulieu, Christopher F
2004-01-01
Handheld personal digital assistants (PDAs) have undergone continuous and substantial improvements in hardware and graphics capabilities, making them a compelling platform for novel developments in teleradiology. The latest PDAs have processor speeds of up to 400 MHz and storage capacities of up to 80 Gbytes with memory expansion methods. A Digital Imaging and Communications in Medicine (DICOM)-compliant, vendor-independent handheld image access system was developed in which a PDA server acts as the gateway between a picture archiving and communication system (PACS) and PDAs. The system is compatible with most currently available PDA models. It is capable of both wired and wireless transfer of images and includes custom PDA software and World Wide Web interfaces that implement a variety of basic image manipulation functions. Implementation of this system, which is currently undergoing debugging and beta testing, required optimization of the user interface to efficiently display images on smaller PDA screens. The PDA server manages user work lists and implements compression and security features to accelerate transfer speeds, protect patient information, and regulate access. Although some limitations remain, PDA-based teleradiology has the potential to increase the efficiency of the radiologic work flow, increasing productivity and improving communication with referring physicians and patients. Copyright RSNA, 2004
Landslide and Flood Warning System Prototypes based on Wireless Sensor Networks
NASA Astrophysics Data System (ADS)
Hloupis, George; Stavrakas, Ilias; Triantis, Dimos
2010-05-01
Wireless sensor networks (WSNs) are one of the emerging areas that have received great attention during the last few years. This is mainly due to the fact that WSNs have provided scientists with the capability of developing real-time monitoring systems equipped with sensors based on Micro-Electro-Mechanical Systems (MEMS). WSNs have great potential for many applications in environmental monitoring since the sensor nodes from which they are composed can host several MEMS sensors (such as temperature, humidity, inertial, pressure, strain-gauge) and transducers (such as position, velocity, acceleration, vibration). The resulting devices are small and inexpensive but with limited memory and computing resources. Each sensor node contains a sensing module along with an RF transceiver. The communication is broadcast-based since the network topology can change rapidly due to node failures [1]. Sensor nodes can transmit their measurements to central servers through gateway nodes without any processing, or they can make preliminary calculations locally in order to produce results that will be sent to central servers [2]. Based on the above characteristics, two prototypes using WSNs are presented in this paper: a landslide detection system and a flood warning system. Both systems send their data to a central processing server where the core of the processing routines resides. Transmission is made using the ZigBee and IEEE 802.11b protocols, but VSAT communication can also be used. The landslide detection system uses a structured network topology. Each measuring node comprises a columnar module that is half buried in the area under investigation. Each sensing module contains a geophone, an inclinometer and a set of strain gauges. Data are transmitted to the central processing server, where possible landslide evolution is monitored. The flood detection system uses an unstructured network topology since the failure rate of sensor nodes is expected to be higher. Each sensing module contains a custom water level sensor (based on plastic optical fiber). Data are transmitted directly to the server, where the early warning algorithms monitor the water level variations in real time. Both sensor nodes use power harvesting techniques in order to extend their battery life as much as possible. [1] Yick J.; Mukherjee, B.; Ghosal, D. Wireless sensor network survey. Comput. Netw. 2008, 52, 2292-2330. [2] Garcia, M.; Bri, D.; Boronat, F.; Lloret, J. A new neighbor selection strategy for group-based wireless sensor networks, In The Fourth International Conference on Networking and Services (ICNS 2008), Gosier, Guadalupe, March 16-21, 2008.
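To make the data path concrete, the sketch below shows a sensor node reporting a water level to the central processing server, which applies a simple early-warning threshold. It is only an illustration of the flood-warning flow; UDP and JSON stand in for the ZigBee/IEEE 802.11b transport, and the node name, port, and threshold value are hypothetical.

```python
# Sketch of the flood-warning data path: a sensor node reports a water level to
# the central server, which checks it against a warning threshold. UDP/JSON are
# stand-ins for the real transport; all names and values are hypothetical.
import json
import socket

WARNING_LEVEL_M = 1.5  # hypothetical water-level threshold in metres

def node_report(level_m: float, server=("127.0.0.1", 9999)) -> None:
    """Runs on a sensor node: send one measurement to the central server."""
    msg = json.dumps({"node": "river_03", "water_level_m": level_m}).encode()
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, server)

def server_loop() -> None:
    """Runs on the central server: receive measurements and raise early warnings."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))
    while True:
        data, _ = sock.recvfrom(1024)
        reading = json.loads(data)
        if reading["water_level_m"] > WARNING_LEVEL_M:
            print(f"WARNING: {reading['node']} level {reading['water_level_m']} m")
```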
Privacy-Preserving Electrocardiogram Monitoring for Intelligent Arrhythmia Detection.
Son, Junggab; Park, Juyoung; Oh, Heekuck; Bhuiyan, Md Zakirul Alam; Hur, Junbeom; Kang, Kyungtae
2017-06-12
Long-term electrocardiogram (ECG) monitoring, as a representative application of cyber-physical systems, facilitates the early detection of arrhythmia. A considerable number of previous studies has explored monitoring techniques and the automated analysis of sensing data. However, ensuring patient privacy or confidentiality has not been a primary concern in ECG monitoring. First, we propose an intelligent heart monitoring system, which involves a patient-worn ECG sensor (e.g., a smartphone) and a remote monitoring station, as well as a decision support server that interconnects these components. The decision support server analyzes the heart activity, using the Pan-Tompkins algorithm to detect heartbeats and a decision tree to classify them. Our system protects sensing data and user privacy, which is an essential attribute of dependability, by adopting signal scrambling and anonymous identity schemes. We also employ a public key cryptosystem to enable secure communication between the entities. Simulations using data from the MIT-BIH arrhythmia database demonstrate that our system achieves a 95.74% success rate in heartbeat detection and almost a 96.63% accuracy in heartbeat classification, while successfully preserving privacy and securing communications among the involved entities.
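For illustration, a heavily simplified heartbeat detector in the spirit of the Pan-Tompkins algorithm is sketched below: band-pass filtering, differentiation, squaring, moving-window integration, and peak picking. It assumes NumPy/SciPy and a 360 Hz MIT-BIH-style sampling rate, and it is not the decision support server's actual implementation.

```python
# Simplified Pan-Tompkins-style beat detector: band-pass filter, differentiate,
# square, integrate over ~150 ms, then pick peaks. Sketch only; assumes SciPy.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_beats(ecg: np.ndarray, fs: float = 360.0) -> np.ndarray:
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")   # QRS energy band
    filtered = filtfilt(b, a, ecg)
    energy = np.square(np.diff(filtered))                            # derivative + squaring
    window = int(0.150 * fs)                                         # 150 ms integration window
    integrated = np.convolve(energy, np.ones(window) / window, mode="same")
    threshold = 0.5 * np.max(integrated)                             # crude fixed threshold
    peaks, _ = find_peaks(integrated, height=threshold, distance=int(0.25 * fs))
    return peaks                                                     # sample indices of beats
```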
NASA Astrophysics Data System (ADS)
Yang, Keon Ho; Jung, Haijo; Kang, Won-Suk; Jang, Bong Mun; Kim, Joong Il; Han, Dong Hoon; Yoo, Sun-Kook; Yoo, Hyung-Sik; Kim, Hee-Joung
2006-03-01
Wireless mobile service with a high bit rate using CDMA-1X EVDO is now widely used in Korea, and mobile devices are increasingly being used as a conventional communication mechanism. We have developed a web-based mobile system that communicates patient information and images over CDMA-1X EVDO for emergency diagnosis. It is composed of a mobile web application system using Microsoft Windows 2003 Server and Internet Information Services. A mobile web PACS database for managing patient information and images was developed using Microsoft Access 2003. The wireless mobile emergency patient information and imaging communication system was developed using Microsoft Visual Studio .NET, and a JPEG 2000 ActiveX control for the PDA phone was developed using Microsoft Embedded Visual C++. CDMA-1X EVDO is used for connections between the mobile web servers and the PDA phone. The system allows fast access to the patient information database, storing both medical images and patient information, anytime and anywhere. In particular, images were compressed into JPEG2000 format and transmitted from the mobile web PACS inside the hospital to a radiologist using a PDA phone outside the hospital. The system also shows radiological images as well as physiological signal data, including blood pressure, vital signs and so on, in the web browser of the PDA phone so radiologists can diagnose more effectively. Good results were obtained with an RW-6100 PDA phone in the university hospital system of Sinchon Severance Hospital in Korea.
NASA Astrophysics Data System (ADS)
Lang, Sherman Y. T.; Brooks, Martin; Gauthier, Marc; Wein, Marceli
1993-05-01
A data display system for embedded realtime systems has been developed for use as an operator's user interface and debugging tool. The motivation for development of the On-Line Data Display (ODD) has come from several sources. In particular, the design reflects the needs of researchers developing an experimental mobile robot within our laboratory. A proliferation of specialized user interfaces revealed a need for a flexible communications and graphical data display system. At the same time, the system had to be readily extensible for arbitrary graphical display formats which would be required for the data visualization needs of the researchers. The system defines a communication protocol transmitting 'datagrams' between tasks executing on the realtime system and virtual devices displaying the data in a meaningful way on a graphical workstation. The communication protocol multiplexes logical channels on a single data stream. The current implementation consists of a server for the Harmony realtime operating system and an application written for the Macintosh computer. Flexibility requirements resulted in a highly modular server design, and a layered modular object-oriented design for the Macintosh part of the system. Users assign data types to specific channels at run time. Then devices are instantiated by the user and connected to channels to receive datagrams. The current suite of device types does not provide enough functionality for most users' specialized needs. Instead, the system design allows the creation of new device types with modest programming effort. The protocol, design and use of the system are discussed.
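The channel-multiplexing idea can be illustrated with a tiny framing sketch: each 'datagram' carries a channel identifier and a payload length, so many logical channels share one data stream and each virtual device consumes only its channel. The header layout and channel numbers below are illustrative assumptions, not the ODD protocol's actual wire format.

```python
# Illustration of multiplexing logical channels on one data stream: each datagram
# carries a small header (channel id, payload length) followed by the payload.
# The framing here is illustrative only, not the ODD protocol's wire format.
import struct

HEADER = struct.Struct("!HI")   # network byte order: channel id (u16), length (u32)

def pack_datagram(channel: int, payload: bytes) -> bytes:
    return HEADER.pack(channel, len(payload)) + payload

def unpack_stream(buf: bytes):
    """Yield (channel, payload) tuples from a byte stream of packed datagrams."""
    offset = 0
    while offset + HEADER.size <= len(buf):
        channel, length = HEADER.unpack_from(buf, offset)
        offset += HEADER.size
        yield channel, buf[offset:offset + length]
        offset += length

stream = pack_datagram(1, b"robot pose: x=0.4 y=1.2") + pack_datagram(7, b"sonar: 0.83")
for ch, data in unpack_stream(stream):
    print(ch, data)   # a virtual device subscribed to channel `ch` would render `data`
```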
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.; Guld, Mark O.; Thies, Christian; Fischer, Benedikt; Keysers, Daniel; Kohnen, Michael; Schubert, Henning; Wein, Berthold B.
2003-05-01
Picture archiving and communication systems (PACS) aim to efficiently provide radiologists with all images in a quality suitable for diagnosis. Modern standards for digital imaging and communication in medicine (DICOM) comprise alphanumerical descriptions of study, patient, and technical parameters. Currently, this is the only information used to select relevant images within PACS. Since textual descriptions insufficiently describe the great variety of details in medical images, content-based image retrieval (CBIR) is expected to have a strong impact when integrated into PACS. However, existing CBIR approaches usually are limited to a distinct modality, organ, or diagnostic study. In this state-of-the-art report, we present first results from implementing a general approach to content-based image retrieval in medical applications (IRMA) and discuss its integration into PACS environments. Usually, a PACS consists of a DICOM image server and several DICOM-compliant workstations, which are used by radiologists for reading the images and reporting the findings. Basic IRMA components are the relational database, the scheduler, and the web server, which all may be installed on the DICOM image server, and the IRMA daemons running on distributed machines, e.g., the radiologists' workstations. These workstations can also host the web-based front-ends of IRMA applications. Integrating CBIR and PACS, a special focus is put on (a) location and access transparency for data, methods, and experiments, (b) replication transparency for methods in development, (c) concurrency transparency for job processing and feature extraction, (d) system transparency at method implementation time, and (e) job distribution transparency when issuing a query. Transparent integration will have a certain impact on diagnostic quality supporting both evidence-based medicine and case-based reasoning.
Experiences with DCE: the pro7 communication server based on OSF-DCE functionality.
Schulte, M; Lordieck, W
1997-01-01
The pro7 communication server is a new approach to managing communication between different applications on different hardware platforms in a hospital environment. The most important features are the use of OSF/DCE for realising remote procedure calls between different platforms, the use of an SQL-92 compatible relational database, and the design of a new software development tool (called the protocol definition language compiler) for describing the interface of a new application that is to be integrated into the hospital environment.
Saguaro: A Distributed Operating System Based on Pools of Servers.
1988-03-25
asynchronous message passing, multicast, and semaphores are supported. We have found this flexibility to be very useful for distributed programming. The...variety of communication primitives provided by SR has facilitated the research of Stella Atkins, who was a visiting professor at Arizona during Spring...data bits in a raw communication channel to help keep the source and destination synchronized, Psync explicitly embeds timing information drawn from the
Get the Word Out with List Servers
ERIC Educational Resources Information Center
Goldberg, Laurence
2006-01-01
In this article, the author details the use of an electronic mail list server in their school. In their school district of about 7,300 students in suburban Philadelphia (Abington SD), electronic mail list servers are now being used, along with other methods of communication, to disseminate information quickly and widely. They began by manually maintaining…
OWC with vortex beams in data center networks
NASA Astrophysics Data System (ADS)
Kupferman, Judy; Arnon, Shlomi
2017-10-01
Data centers are a key building block in the rapidly growing area of internet technology. A typical data center has tens of thousands of servers, and communication between them must be flexible and robust. Vortex light beams have orbital angular momentum and can provide a useful and flexible method for optical wireless communication in data centers. Vortex beams can be generated with orbital angular momentum but independent of polarization, and used in a multiplexed system. We propose a multiplexing vortex system to increase the communication capacity using optical wireless communication for data center networks. We then evaluate performance. This paper is intended for use as an engineering guideline for design of vortex multiplexing in data center applications.
DISTRIBUTED CONTROL AND DA FOR ATLAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. SCUDDER; ET AL
1999-05-01
The control system for the Atlas pulsed power generator being built at Los Alamos National Laboratory will utilize a significant level of distributed control. Other principal design characteristics include noise immunity, modularity and use of commercial products wherever possible. The data acquisition system is tightly coordinated with the control system. Both share a common database server and a fiber-optic ethernet communications backbone.
Design and Application of a Field Sensing System for Ground Anchors in Slopes
Choi, Se Woon; Lee, Jihoon; Kim, Jong Moon; Park, Hyo Seon
2013-01-01
In a ground anchor system, cables or tendons connected to a bearing plate are used for stabilization of slopes. Thus, the stability of a slope depends on maintaining the tension levels in the cables. So far, no research on a strain-based field sensing system for ground anchors has been reported. Therefore, in this study, a practical monitoring system for long-term sensing of tension levels in tendons for anchor-reinforced slopes is proposed. The system for anchor-reinforced slopes is composed of: (1) load cells based on vibrating wire strain gauges (VWSGs), (2) wireless sensor nodes which receive and process the signals from the load cells and then transmit the result to a master node through local area communication, (3) master nodes which transmit the data sent from the sensor nodes to the server through mobile communication, and (4) a server located at the base station. The system was applied to field sensing of ground anchors in a 62 m-long and 26 m-high slope at the side of a highway. Based on the long-term monitoring, the safety of the anchor-reinforced slope can be secured by the timely application of re-tensioning processes in the tendons. PMID:23507820
Development of a graphical user interface for the global land information system (GLIS)
Alstad, Susan R.; Jackson, David A.
1993-01-01
The process of developing a Motif Graphical User Interface for the Global Land Information System (GLIS) involved incorporating user requirements, in-house visual and functional design requirements, and Open Software Foundation (OSF) Motif style guide standards. Motif user interface windows have been developed, and the software to support Motif window functions was written using the C programming language. The GLIS architecture was modified to support multiple servers and remote handlers running the X Window System by forming a network of servers and handlers connected by TCP/IP communications. In April 1993, prior to release, the GLIS graphical user interface and system architecture modifications were tested by developers and users located at the EROS Data Center and 11 beta test sites across the country.
Ng, Curtise K C; White, Peter; McKay, Janice C
2009-04-01
Increasingly, the use of web database portfolio systems is noted in medical and health education and for continuing professional development (CPD). However, the functions of existing systems are not always aligned with the corresponding pedagogy, and hence reflection is often lost. This paper presents the development of a tailored web database portfolio system with Picture Archiving and Communication System (PACS) connectivity, which is based on the portfolio pedagogy. Following a pre-determined portfolio framework, a system model is proposed with the components of web, database and mail servers, server-side scripts, and a Query/Retrieve (Q/R) broker for conversion between Hypertext Transfer Protocol (HTTP) requests and the Q/R service class of the Digital Imaging and Communications in Medicine (DICOM) standard. The system was piloted with seventy-seven volunteers. A tailored web database portfolio system (http://radep.hti.polyu.edu.hk) was developed. Technological arrangements for reinforcing portfolio pedagogy include popup windows (reminders) with guidelines and probing questions to 'collect', 'select' and 'reflect' on evidence of development/experience, a limit on the number of files (evidence) to be uploaded, an 'Evidence Insertion' function to link the individual uploaded artifacts with reflective writing, the capability to accommodate a diversity of contents, and convenient interfaces for reviewing portfolios and communication. Evidence to date suggests the system supports users in building their portfolios with sound hypertext reflection under a facilitator's guidance, and supports reviewers in monitoring students' progress and providing feedback and comments online in a programme-wide setting.
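The Q/R broker's core conversion, turning a web request into a DICOM Query/Retrieve operation, can be sketched as follows. This assumes the pydicom and pynetdicom packages and hypothetical PACS host, port, and AE titles; it is an illustration of the idea rather than the system's actual broker.

```python
# Sketch of the Q/R broker's core step: translate a web request's query parameters
# into a DICOM C-FIND against the PACS. Host, port, and AE titles are placeholders.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

def find_studies(patient_id: str, pacs_host="pacs.example.org", pacs_port=104):
    query = Dataset()
    query.QueryRetrieveLevel = "STUDY"
    query.PatientID = patient_id
    query.StudyInstanceUID = ""          # empty value = "return this attribute"

    ae = AE(ae_title="PORTFOLIO_BROKER")
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)
    assoc = ae.associate(pacs_host, pacs_port, ae_title="PACS_SCP")
    results = []
    if assoc.is_established:
        for status, identifier in assoc.send_c_find(
                query, StudyRootQueryRetrieveInformationModelFind):
            if status and status.Status in (0xFF00, 0xFF01) and identifier:
                results.append(identifier.StudyInstanceUID)
        assoc.release()
    return results
```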
Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat
2013-01-22
A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.
BIO-Plex Information System Concept
NASA Technical Reports Server (NTRS)
Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)
1999-01-01
This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers which perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built. Retrofit is extremely difficult and costly.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-26
... rather than forcing them to use their Market Maker Standard quote server as a gateway for communicating e... technical flexibility to connect additional Limited Service Ports to independent servers that host their e... mitigate the risk of using the same server for all of their Market Maker quoting activity. By using the...
Request redirection paradigm in medical image archive implementation.
Dragan, Dinu; Ivetić, Dragan
2012-08-01
It is widely recognized that JPEG2000 addresses key issues in medical imaging: storage, communication, sharing, remote access, interoperability, and presentation scalability. Therefore, JPEG2000 support was added to the DICOM standard in Supplement 61. Two approaches to supporting JPEG2000 medical images are explicitly defined by the DICOM standard: replacing the DICOM image format with the corresponding JPEG2000 codestream, or using the Pixel Data Provider service, DICOM Supplement 106. The latter assumes a two-step retrieval of the medical image: a DICOM request and response from a DICOM server, and then a JPIP request and response from a JPEG2000 server. We propose a novel strategy for the transmission of scalable JPEG2000 images extracted from a single codestream over a DICOM network using the DICOM Private Data Element, without sacrificing system interoperability. It employs the request redirection paradigm: a DICOM request and response from the JPEG2000 server through the DICOM server. The paper presents a programming solution for implementing the request redirection paradigm in a DICOM-transparent manner. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
MO/DSD online information server and global information repository access
NASA Technical Reports Server (NTRS)
Nguyen, Diem; Ghaffarian, Kam; Hogie, Keith; Mackey, William
1994-01-01
Often in the past, standards and new technology information have been available only in hardcopy form, with reproduction and mailing costs proving rather significant. In light of NASA's current budget constraints and in the interest of efficient communications, the Mission Operations and Data Systems Directorate (MO&DSD) New Technology and Data Standards Office recognizes the need for an online information server (OLIS). This server would allow: (1) dissemination of standards and new technology information throughout the Directorate more quickly and economically; (2) online browsing and retrieval of documents that have been published for and by MO&DSD; and (3) searching for current and past study activities on related topics within NASA before issuing a task. This paper explores a variety of available information servers and searching tools, their current capabilities and limitations, and the application of these tools to MO&DSD. Most importantly, the discussion focuses on the way this concept could be easily applied toward improving dissemination of standards and new technologies and improving documentation processes.
Development of HIHM (Home Integrated Health Monitor) for ubiquitous home healthcare.
Kim, Jung Soo; Kim, Beom Oh; Park, Kwang Suk
2007-01-01
The Home Integrated Health Monitor (HIHM) was developed for ubiquitous home healthcare. From quantitative analysis, we derived a model of the chair. The HIHM could detect the electrocardiogram (ECG) and photoplethysmogram (PPG) non-intrusively. It could also estimate blood pressure (BP) non-intrusively and measure blood glucose and ear temperature. Detected signals and information were transmitted to the home gateway and home server through ZigBee communication technology. The home server forwarded them to the healthcare center, where specialists such as medical doctors could monitor them over the Internet. A feedback system was also provided. This device has potential for further study of ubiquitous home healthcare.
Distributing medical images with internet technologies: a DICOM web server and a DICOM java viewer.
Fernàndez-Bayó, J; Barbero, O; Rubies, C; Sentís, M; Donoso, L
2000-01-01
With the advent of filmless radiology, it becomes important to be able to distribute radiologic images digitally throughout an entire hospital. A new approach based on World Wide Web technologies was developed to accomplish this objective. This approach involves a Web server that allows the query and retrieval of images stored in a Digital Imaging and Communications in Medicine (DICOM) archive. The images can be viewed inside a Web browser with use of a small Java program known as the DICOM Java Viewer, which is executed inside the browser. The system offers several advantages over more traditional picture archiving and communication systems (PACS): It is easy to install and maintain, is platform independent, allows images to be manipulated and displayed efficiently, and is easy to integrate with existing systems that are already making use of Web technologies. The system is user-friendly and can easily be used from outside the hospital if a security policy is in place. The simplicity and flexibility of Internet technologies makes them highly preferable to the more complex PACS workstations. The system works well, especially with magnetic resonance and computed tomographic images, and can help improve and simplify interdepartmental relationships in a filmless hospital environment.
Web-based DAQ systems: connecting the user and electronics front-ends
NASA Astrophysics Data System (ADS)
Lenzi, Thomas
2016-12-01
Web technologies are quickly evolving and are gaining in computational power and flexibility, allowing for a paradigm shift in the field of Data Acquisition (DAQ) systems design. Modern web browsers offer the possibility to create intricate user interfaces and are able to process and render complex data. Furthermore, new web standards such as WebSockets allow for fast real-time communication between the server and the user with minimal overhead. Those improvements make it possible to move the control and monitoring operations from the back-end servers directly to the user and to the front-end electronics, thus reducing the complexity of the data acquisition chain. Moreover, web-based DAQ systems offer greater flexibility, accessibility, and maintainability on the user side than traditional applications which often lack portability and ease of use. As proof of concept, we implemented a simplified DAQ system on a mid-range Spartan6 Field Programmable Gate Array (FPGA) development board coupled to a digital front-end readout chip. The system is connected to the Internet and can be accessed from any web browser. It is composed of custom code to control the front-end readout and of a dual soft-core Microblaze processor to communicate with the client.
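A minimal sketch of the WebSocket-based push described above is given below: the server streams front-end readings to the browser as JSON messages with little overhead. It assumes the third-party websockets package (version 10 or later); the register name, port, and randomly generated values are placeholders for real front-end data.

```python
# Sketch of the web-based DAQ idea: push monitored front-end readings to the
# browser over a WebSocket. Assumes the `websockets` package (v10+); the register
# name and random values are placeholders for real electronics data.
import asyncio
import json
import random

import websockets

async def push_readings(websocket):
    """Send a monitored front-end register value to the browser once per second."""
    while True:
        reading = {"register": "FRONTEND_TEMP", "value": round(random.uniform(30, 40), 2)}
        await websocket.send(json.dumps(reading))      # rendered by the browser UI
        await asyncio.sleep(1.0)

async def main():
    async with websockets.serve(push_readings, "0.0.0.0", 8765):
        await asyncio.Future()   # run forever

if __name__ == "__main__":
    asyncio.run(main())
```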
Operational Experience with the Frontier System in CMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter
2012-06-20
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
Operational Experience with the Frontier System in CMS
NASA Astrophysics Data System (ADS)
Blumenfeld, Barry; Dykstra, Dave; Kreuzer, Peter; Du, Ran; Wang, Weizhen
2012-12-01
The Frontier framework is used in the CMS experiment at the LHC to deliver conditions data to processing clients worldwide, including calibration, alignment, and configuration information. Each central server at CERN, called a Frontier Launchpad, uses tomcat as a servlet container to establish the communication between clients and the central Oracle database. HTTP-proxy Squid servers, located close to clients, cache the responses to queries in order to provide high performance data access and to reduce the load on the central Oracle database. Each Frontier Launchpad also has its own reverse-proxy Squid for caching. The three central servers have been delivering about 5 million responses every day since the LHC startup, containing about 40 GB data in total, to more than one hundred Squid servers located worldwide, with an average response time on the order of 10 milliseconds. The Squid caches deployed worldwide process many more requests per day, over 700 million, and deliver over 40 TB of data. Several monitoring tools of the tomcat log files, the accesses of the Squids on the central Launchpad servers, and the availability of remote Squids have been developed to guarantee the performance of the service and make the system easily maintainable. Following a brief introduction of the Frontier framework, we describe the performance of this highly reliable and stable system, detail monitoring concerns and their deployment, and discuss the overall operational experience from the first two years of LHC data-taking.
A Wireless MEMS-Based Inclinometer Sensor Node for Structural Health Monitoring
Ha, Dae Woong; Park, Hyo Seon; Choi, Se Woon; Kim, Yousok
2013-01-01
This paper proposes a wireless inclinometer sensor node for structural health monitoring (SHM) that can be applied to civil engineering and building structures subjected to various loadings. The inclinometer used in this study employs a method for calculating the tilt based on the difference between the static acceleration and the acceleration due to gravity, using a micro-electro-mechanical system (MEMS)-based accelerometer. A wireless sensor node was developed through which tilt measurement data are wirelessly transmitted to a monitoring server. This node consists of a slave node that uses a short-distance wireless communication system (RF 2.4 GHz) and a master node that uses a long-distance telecommunication system (code division multiple access—CDMA). The communication distance limitation, which is recognized as an important issue in wireless monitoring systems, has been resolved via these two wireless communication components. The reliability of the proposed wireless inclinometer sensor node was verified experimentally by comparing the values measured by the inclinometer and subsequently transferred to the monitoring server via wired and wireless transfer methods to permit a performance evaluation of the wireless communication sensor nodes. The experimental results indicated that the two systems (wired and wireless transfer systems) yielded almost identical values at a tilt angle greater than 1°, and a uniform difference was observed at a tilt angle less than 0.42° (approximately 0.0032° corresponding to 0.76% of the tilt angle, 0.42°) regardless of the tilt size. This result was deemed to be within the allowable range of measurement error in SHM. Thus, the wireless transfer system proposed in this study was experimentally verified for practical application in a structural health monitoring system. PMID:24287533
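The tilt calculation, recovering the angle from the static acceleration due to gravity, can be sketched in a few lines. This is an illustration of the principle only, not the device firmware; the axis convention and example reading are assumptions.

```python
# Sketch of the tilt computation: with the node static, the accelerometer measures
# only gravity, so the tilt of the sensing axis follows from the measured components
# of g. Illustrative only; the axis convention and sample reading are assumptions.
import math

def tilt_deg(ax: float, ay: float, az: float) -> float:
    """Angle between the node's z-axis and vertical, from a static 3-axis reading (m/s^2)."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

print(tilt_deg(0.0, 0.017, 9.806))   # ~0.1 degrees for a nearly level node
```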
A packet switched communications system for GRO
NASA Astrophysics Data System (ADS)
Husain, Shabu; Yang, Wen-Hsing; Vadlamudi, Rani; Valenti, Joseph
1993-11-01
This paper describes the packet switched Instrumenters Communication System (ICS) that was developed for the Command Management Facility at GSFC to support the Gamma Ray Observatory (GRO) spacecraft. The GRO ICS serves as a vital science data acquisition link to the GRO scientists to initiate commands for their spacecraft instruments. The system is ready to send and receive messages at any time, 24 hours a day and seven days a week. The system is based on X.25 and the International Standard Organization's (ISO) 7-layer Open Systems Interconnection (OSI) protocol model and has client and server components. The components of the GRO ICS are discussed along with how the Communications Subsystem for Interconnection (CSFI) and Network Control Program Packet Switching Interface (NPSI) software are used in the system.
Triple-server blind quantum computation using entanglement swapping
NASA Astrophysics Data System (ADS)
Li, Qin; Chan, Wai Hong; Wu, Chunhui; Wen, Zhonghua
2014-04-01
Blind quantum computation allows a client who does not have enough quantum resources or technologies to achieve quantum computation on a remote quantum server such that the client's input, output, and algorithm remain unknown to the server. Up to now, single- and double-server blind quantum computation have been considered. In this work, we propose a triple-server blind computation protocol where the client can delegate quantum computation to three quantum servers by the use of entanglement swapping. Furthermore, the three quantum servers can communicate with each other and the client is almost classical since one does not require any quantum computational power, quantum memory, and the ability to prepare any quantum states and only needs to be capable of getting access to quantum channels.
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1991-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), comprises multiple distributed local area networks (LAN's) residing in New Mexico and California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS). Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS) and its requirements are described. An application or functional description of the NSS is given. The final section adds performance, capacity, and access constraints to the requirements.
Design and implementation of a cloud based lithography illumination pupil processing application
NASA Astrophysics Data System (ADS)
Zhang, Youbao; Ma, Xinghua; Zhu, Jing; Zhang, Fang; Huang, Huijie
2017-02-01
Pupil parameters are important for evaluating the quality of a lithography illumination system. In this paper, a cloud-based, full-featured pupil processing application is implemented. A web browser is used for the UI (user interface), the WebSocket protocol and JSON format are used for communication between the client and the server, and the computing part is implemented on the server side, where the application integrates a variety of high-quality professional libraries, such as the image processing libraries libvips and ImageMagick and the automatic reporting system LaTeX, to support the program. The cloud-based framework takes advantage of the server's superior computing power and rich software collections, and the program can run anywhere there is a modern browser due to its web UI design. Compared with the traditional software operation model (purchased, licensed, shipped, downloaded, installed, maintained, and upgraded), the new cloud-based approach, which requires no installation and is easy to use and maintain, opens up a new way of working. Cloud-based applications are probably the future of software development.
Application of wireless networks-peer-to-peer information sharing
NASA Astrophysics Data System (ADS)
ellappan, Vijayan; chaki, suchismita; kumar, avn
2017-11-01
Peer-to-peer communication and its applications have become commonplace in the wired network environment, but they have not yet been successfully adapted to the wireless environment. Unlike the traditional client-server framework, in a P2P framework each node can play the role of client as well as server simultaneously and exchange data or information with others. We aim to design an application which can adapt to wireless ad-hoc networks. Peer-to-peer communication can help people share their files (information, images, audio, video and so on) and communicate with each other without relying on a particular network infrastructure or consuming limited data allowances. A central server lets the peers obtain information about the other peers in the network. Indeed, even without the Internet, devices have the potential to allow users to connect and communicate through short-range wireless protocols such as Wi-Fi.
Requirements for a network storage service
NASA Technical Reports Server (NTRS)
Kelly, Suzanne M.; Haynes, Rena A.
1992-01-01
Sandia National Laboratories provides a high performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LAN's) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers, running UNIX-based operating systems, compose most LAN's. One LAN, operated by the Sandia Corporate Computing Directorate, is a general purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.
Applications of Multi-Channel Safety Authentication Protocols in Wireless Networks.
Chen, Young-Long; Liau, Ren-Hau; Chang, Liang-Yu
2016-01-01
People can use their web browsers or mobile devices to access web services and applications built on remote servers. Users have to input their identity and password to log in to the server. The identity and password may be appropriated by hackers when the network environment is not safe. Multiple secure authentication protocols can improve the security of the network environment. Mobile devices can be used to pass authentication messages through Wi-Fi or 3G networks, serving as a second communication channel. However, the number of messages exchanged is not considered in existing multiple secure authentication protocols, and excessive message transmission is easier for hackers to collect and decode. In this paper, we propose two schemes which allow the server to validate the user and reduce the number of messages by using the XOR operation. Our schemes can improve the security of the authentication protocol. The experimental results show that our proposed authentication protocols are more secure and effective. For applications of second authentication communication channels, such as smart access control systems, identity verification and e-wallets, our proposed authentication protocols can ensure the safety of persons and property and achieve more effective security management mechanisms.
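As a rough illustration of how an XOR operation can let the server validate a user with few messages, the sketch below masks a password digest with a server-issued one-time nonce. This is an illustrative construction, not the paper's exact scheme; the password and key sizes are placeholders.

```python
# Illustration of XOR-based validation: the server issues a one-time nonce; the
# client returns hash(password) XOR nonce; the server unmasks and compares.
# Illustrative only, not the paper's exact scheme; values are placeholders.
import hashlib
import secrets

def digest(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Server side: issue a fresh nonce for this login attempt.
nonce = secrets.token_bytes(32)

# Client side: mask the password digest with the nonce (sent over the second channel).
response = xor(digest("correct horse battery"), nonce)

# Server side: unmask and compare against the stored digest.
stored = digest("correct horse battery")
print("authenticated" if secrets.compare_digest(xor(response, nonce), stored) else "rejected")
```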
The NetQuakes Project - Seeking a Balance Between Science and Citizens.
NASA Astrophysics Data System (ADS)
Luetgert, J. H.; Oppenheimer, D. H.
2012-12-01
The challenge for any system that uses volunteer help to do science is to dependably acquire quality data without unduly burdening the volunteer. The NetQuakes accelerograph and its data acquisition system were created to address the recognized need for more densely sampled strong ground motion recordings in urban areas to provide more accurate ShakeMaps for post-earthquake disaster assessment and to provide data for structural engineers to improve design standards. The recorder has 18 bit resolution with ±3g internal tri-axial MEMS accelerometers. Data are continuously recorded at 200 sps into a 1-2 week ringbuffer. When triggered, a miniSEED file is sent to USGS servers via the Internet. Data can also be recovered from the ringbuffer by a remote request through the NetQuakes servers. Following a power failure, the instrument can run for 36 hours using its internal battery. We rely upon cooperative citizens to host the dataloggers, provide power and Internet connectivity and perform minor servicing. Instrument and battery replacement are simple tasks that can be performed by hosts, thus reducing maintenance costs. Communication with the instrument to acquire data or deliver firmware is accomplished by file transfers using NetQuakes servers. The client instrument initiates all client-server interactions, so it safely resides behind a host's firewall. A connection to the host's LAN, and from there to the public Internet, can be made using WiFi to minimize cabling. Although timing using a cable to an external GPS antenna is possible, it is simpler to use the Network Time Protocol (NTP) to discipline the internal clock. This approach achieves timing accuracy substantially better than a sample interval. Since 2009, we have installed more than 140 NetQuakes instruments in the San Francisco Bay Area and have successfully integrated their data into the near real time data stream of the Northern California Seismic System. An additional 235 NetQuakes instruments have been installed by other regional seismic networks - all communicating via the common NetQuakes servers.
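The client-initiated transfer pattern, in which the instrument always opens the connection so it can sit safely behind the host's firewall, can be pictured with a short sketch. It assumes the third-party requests package and an HTTP POST transport with a hypothetical upload URL, field names, and station code; the actual NetQuakes transfer mechanism is not detailed in the abstract.

```python
# Sketch of client-initiated transfer: the instrument (behind the host's firewall)
# pushes a triggered miniSEED file to the servers, so no inbound connection is
# needed. The endpoint, field names, and station code are hypothetical.
import requests

def upload_trigger(mseed_path: str) -> None:
    with open(mseed_path, "rb") as f:
        resp = requests.post(
            "https://netquakes.example.org/upload",        # hypothetical endpoint
            files={"waveform": ("trigger.mseed", f, "application/octet-stream")},
            data={"station": "NQ.ABC01"},                  # hypothetical station code
            timeout=30,
        )
    resp.raise_for_status()   # the instrument would retry later if the POST fails
```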
A web-server of cell type discrimination system.
Wang, Anyou; Zhong, Yan; Wang, Yanhua; He, Qianchuan
2014-01-01
Discriminating cell types is a daily request for stem cell biologists. However, there is not a user-friendly system available to date for public users to discriminate the common cell types, embryonic stem cells (ESCs), induced pluripotent stem cells (iPSCs), and somatic cells (SCs). Here, we develop WCTDS, a web-server of cell type discrimination system, to discriminate the three cell types and their subtypes like fetal versus adult SCs. WCTDS is developed as a top layer application of our recent publication regarding cell type discriminations, which employs DNA-methylation as biomarkers and machine learning models to discriminate cell types. Implemented by Django, Python, R, and Linux shell programming, run under a Linux-Apache web server, and communicating through MySQL, WCTDS provides a friendly framework to efficiently receive the user input, to run mathematical models for analyzing data, and then to present results to users. This framework is flexible and easy to extend to other applications. Therefore, WCTDS works as a user-friendly framework to discriminate cell types and subtypes, and it can also be extended to detect other cell types like cancer cells.
Tools for Administration of a UNIX-Based Network
NASA Technical Reports Server (NTRS)
LeClaire, Stephen; Farrar, Edward
2004-01-01
Several computer programs have been developed to enable efficient administration of a large, heterogeneous, UNIX-based computing and communication network that includes a variety of computers connected to a variety of subnetworks. One program provides secure software tools for administrators to create, modify, lock, and delete accounts of specific users. This program also provides tools for users to change their UNIX passwords and log-in shells. These tools check for errors. Another program comprises a client and a server component that, together, provide a secure mechanism to create, modify, and query quota levels on a network file system (NFS) mounted by use of the VERITAS File System software. The client software resides on an internal secure computer with a secure Web interface; one can gain access to the client software from any authorized computer capable of running web-browser software. The server software resides on a UNIX computer configured with the VERITAS software system. Directories where VERITAS quotas are applied are NFS-mounted. Another program is a Web-based, client/server Internet Protocol (IP) address tool that facilitates maintenance and lookup of information about IP addresses for a network of computers.
Video personalization for usage environment
NASA Astrophysics Data System (ADS)
Tseng, Belle L.; Lin, Ching-Yung; Smith, John R.
2002-07-01
A video personalization and summarization system is designed and implemented incorporating usage environment to dynamically generate a personalized video summary. The personalization system adopts the three-tier server-middleware-client architecture in order to select, adapt, and deliver rich media content to the user. The server stores the content sources along with their corresponding MPEG-7 metadata descriptions. Our semantic metadata is provided through the use of the VideoAnnEx MPEG-7 Video Annotation Tool. When the user initiates a request for content, the client communicates the MPEG-21 usage environment description along with the user query to the middleware. The middleware is powered by the personalization engine and the content adaptation engine. Our personalization engine includes the VideoSue Summarization on Usage Environment engine that selects the optimal set of desired contents according to user preferences. Afterwards, the adaptation engine performs the required transformations and compositions of the selected contents for the specific usage environment using our VideoEd Editing and Composition Tool. Finally, two personalization and summarization systems are demonstrated for the IBM Websphere Portal Server and for the pervasive PDA devices.
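The selection step described above — choosing the optimal set of contents for a user within the constraints of the usage environment — can be sketched with a simple greedy heuristic. The segment data, preference scores, and greedy rule below are illustrative assumptions; they are not the VideoSue optimization over MPEG-7/MPEG-21 descriptions, only a compact picture of preference-driven selection under a duration budget.

```python
# Hedged sketch: pick video segments for a personalized summary by greedily
# maximizing user-preference score per second under a total-duration budget.
from dataclasses import dataclass

@dataclass
class Segment:
    name: str
    duration_s: float
    preference: float   # how well the segment matches the user's stated preferences

def select_summary(segments, budget_s):
    chosen, used = [], 0.0
    # Greedy on preference density (score per second of playback time).
    for seg in sorted(segments, key=lambda s: s.preference / s.duration_s, reverse=True):
        if used + seg.duration_s <= budget_s:
            chosen.append(seg)
            used += seg.duration_s
    return chosen

if __name__ == "__main__":
    clips = [Segment("goal", 20, 0.9), Segment("interview", 60, 0.4),
             Segment("highlights", 45, 0.8), Segment("credits", 30, 0.1)]
    for seg in select_summary(clips, budget_s=90):
        print(seg.name, seg.duration_s)
```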
1993-08-01
Abstract: We propose a new style of operating system architecture appropriate for microkernel-based operating systems: services are implemented as a... retaining all the modularity advantages of microkernel technology. Since services reside in libraries, an application is free to use the library that... U.S. Government. Keywords: Operating Systems, Microkernel, Network communication, File organization. 1. Introduction: In the
Constraint processing in our extensible language for cooperative imaging system
NASA Astrophysics Data System (ADS)
Aoki, Minoru; Murao, Yo; Enomoto, Hajime
1996-02-01
The extensible WELL (Window-based elaboration language) has been developed using the concept of a common platform, where client and server can communicate with each other with support from a communication manager. This extensible language is based on an object-oriented design that introduces constraint processing. Any kind of service in the extensible language, including imaging, is controlled by the constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation. Necessary service integrations are satisfied with cooperative processes using constraints. Constraints are treated similarly to data, because the system should have flexibility in the execution of many kinds of services. A similar control process is defined using intensional logic. There are two kinds of constraints: temporal and modal constraints. Regarding the constraints, the predicate format, as the relation between attribute values, can be a warrant for entities' validity as data. As an imaging example, a processing procedure of interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system and how the constraints work for generating moving pictures.
EMR-based TeleGeriatric system.
Pallawala, P M; Lun, K C
2001-01-01
As medical services improve due to new technologies and breakthroughs, this has led to an increasingly aging population. There has been much discussion and debate on how to solve the psychological, socio-economic, and medical problems related to aging. Our effort is to implement a feasible telegeriatric medical service that uses state-of-the-art technology to deliver medical services efficiently to remote sites where elderly homes are based. The TeleGeriatric system will lead to rapid decision-making in the presence of acute or subacute emergencies. This triage will also lead to a reduction of unnecessary admissions. It will enable the doctors who visit these elderly homes on a once-a-week basis to improve their geriatric management skills through communication with a geriatric specialist. Nursing skills in geriatric care will also benefit from this system. An integrated electronic medical record (EMR) system will be indispensable in the face of emergency admissions to hospitals. Evolution of the EMR database would lead to future research in telegeriatrics and will help to identify the areas where telegeriatrics can be optimally used. This system is based on current web browsing technology and broadband communication. The TeleGeriatric web-based server is developed using Java technology. The TeleGeriatric database server was developed using Microsoft SQL Server. Both are based at the Medical Informatics Programme, National University of Singapore. Two elderly homes situated in the periphery of Singapore and a leading government hospital in geriatric care have been chosen for the project. These three institutions and the National University of Singapore are connected via ADSL, which supports the high bandwidth necessary for high-quality videoconferencing. Each time a patient needs a teleconsultation, a nurse or a doctor at the remote site sends the patient's record to the TeleGeriatric server. The TeleGeriatric server forwards the request to the Alexandra Hospital for consultation. Geriatrics specialists at the Alexandra Hospital carry out teleward rounds twice weekly and on an on-demand basis. Following the implementation of the system, a trial run has been done. The results have demonstrated a high degree of coordination and cooperation between the remote site and the Alexandra Hospital. Patient compliance is also very high, and patients prefer teleconsultation. Initial results show that the TeleGeriatric system has definite advantages in managing geriatric patients at a remote site. As the system evolves, further research will show the areas where telegeriatrics can be used optimally.
History and structures of telecommunication in pathology, focusing on open access platforms.
Kayser, Klaus; Borkenfeld, Stephan; Djenouni, Amina; Kayser, Gian
2011-11-07
Telecommunication has matured into a broadly applied tool in diagnostic pathology. Contemporary with the development of fast electronic communication lines (integrated services digital network (ISDN), broadband connections, and fibre optics) as well as digital imaging technology (the digital camera), telecommunication in tissue-based diagnosis (telepathology) has matured. Open access (internet) and server-based communication have induced the development of specific medical information platforms, such as iPATH, UICC-TPCC (the telepathology consultation centre of the Union International against Cancer), or the Armed Forces Institute of Pathology (AFIP) teleconsultation system. These have been closed, and are subject to replacement by specific open access forums (the Medical Electronic Expert Communication System (MECES) with embedded virtual slide (VS) technology). MECES uses the PHP language, a database-driven MySQL architecture, X/L-AMPP infrastructure, and browser-friendly W3C-conformant standards. The server-based medical communication systems (AFIP, iPATH, UICC-TPCC) have been reported to be useful and easy-to-handle tools for expert consultation. Correct sampling and evaluation of transmitted still images by experts revealed no or only minor differences from the original images and good practice of the involved experts. β tests with the new-generation medical expert consultation system (MECES) revealed superior results in terms of performance, still image viewing, and system handling, especially as this is closely related to the use of so-called social forums (facebook, youtube, etc.). In addition to the acknowledged advantages of the formerly established systems (assistance of pathologists working in developing countries, diagnosis confirmation, international information exchange, etc.), the new generation offers additional benefits such as acoustic information transfer, assistance in image screening, VS technology, and teaching in diagnostic sampling, judgement, and verification.
2016-09-01
and network. The computing and network hardware are identified and include routers, servers, firewalls, laptops, backup hard drives, smart phones... deployable hardware units will be necessary. This includes the use of ruggedized laptops and desktop computers, a projector system, communications system... Engineering Study and Concept Development for a Humanitarian Aid and Disaster Relief Operations Management Platform, by Julie A. Reed, September
Lange, Kristian; Kühn, Simone; Filevich, Elisa
2015-01-01
We present here “Just Another Tool for Online Studies” (JATOS): an open source, cross-platform web application with a graphical user interface (GUI) that greatly simplifies setting up and communicating with a web server to host online studies that are written in JavaScript. JATOS is easy to install in all three major platforms (Microsoft Windows, Mac OS X, and Linux), and seamlessly pairs with a database for secure data storage. It can be installed on a server or locally, allowing researchers to try the application and feasibility of their studies within a browser environment, before engaging in setting up a server. All communication with the JATOS server takes place via a GUI (with no need to use a command line interface), making JATOS an especially accessible tool for researchers without a strong IT background. We describe JATOS’ main features and implementation and provide a detailed tutorial along with example studies to help interested researchers to set up their online studies. JATOS can be found under the Internet address: www.jatos.org. PMID:26114751
Li, Ya-Pin; Gao, Hong-Wei; Fan, Hao-Jun; Wei, Wei; Xu, Bo; Dong, Wen-Long; Li, Qing-Feng; Song, Wen-Jing; Hou, Shi-Ke
2017-12-01
The objective of this study was to build a database to collect infectious disease information at the scene of a disaster through the use of 128 epidemiological questionnaires and 47 types of options, with rapid acquisition of information regarding infectious disease and rapid questionnaire customization at the scene of disaster relief by use of a personal digital assistant (PDA). SQL Server 2005 (Microsoft Corp, Redmond, WA) was used to create the option database for the infectious disease investigation, to develop a client application for the PDA, and to deploy the application on the server side. The users accessed the server for data collection and questionnaire customization with the PDA. A database with a set of comprehensive options was created and an application system was developed for the Android operating system (Google Inc, Mountain View, CA). On this basis, an infectious disease information collection system was built for use at the scene of disaster relief. The creation of an infectious disease information collection system and rapid questionnaire customization through the use of a PDA was achieved. This system integrated computer technology and mobile communication technology to develop an infectious disease information collection system and to allow for rapid questionnaire customization at the scene of disaster relief. (Disaster Med Public Health Preparedness. 2017;11:668-673).
RIMS: An Integrated Mapping and Analysis System with Applications to Earth Sciences and Hydrology
NASA Astrophysics Data System (ADS)
Proussevitch, A. A.; Glidden, S.; Shiklomanov, A. I.; Lammers, R. B.
2011-12-01
A web-based information and computational system for analysis of spatially distributed Earth system, climate, and hydrologic data has been developed. The system allows visualization, data exploration, querying, manipulation and arbitrary calculations with any loaded gridded or vector polygon dataset. The system's acronym, RIMS, stands for its core functionality as a Rapid Integrated Mapping System. The system can be deployed for global-scale projects as well as for regional hydrology and climatology studies. In particular, the Water Systems Analysis Group of the University of New Hampshire developed the global and regional (Northern Eurasia, pan-Arctic) versions of the system with different map projections and specific data. The system has demonstrated its potential for applications in other fields of Earth sciences and education. The key Web server/client components of the framework include (a) a visualization engine built on Open Source libraries (GDAL, PROJ.4, etc.) that are utilized in a MapServer; (b) multi-level data querying tools built on XML server-client communication protocols that allow downloading map data on-the-fly to a client web browser; and (c) data manipulation and grid cell level calculation tools that mimic desktop GIS software functionality via a web interface. Server-side data management of the system is designed around a simple database of dataset metadata, facilitating mounting of new data to the system and maintaining existing data in an easy manner. RIMS contains "built-in" river network data that allows for query of upstream areas on demand, which can be used for spatial data aggregation and analysis of sub-basin areas. RIMS is an ongoing effort and is currently being used to serve a number of websites hosting a suite of hydrologic, environmental and other GIS data.
SSRL Emergency Response Shore Tool
NASA Technical Reports Server (NTRS)
Mah, Robert W.; Papasin, Richard; McIntosh, Dawn M.; Denham, Douglas; Jorgensen, Charles; Betts, Bradley J.; Del Mundo, Rommel
2006-01-01
The SSRL Emergency Response Shore Tool (wherein SSRL signifies Smart Systems Research Laboratory) is a computer program within a system of communication and mobile-computing software and hardware being developed to increase the situational awareness of first responders at building collapses. This program is intended for use mainly in planning and constructing shores to stabilize partially collapsed structures. The program consists of client and server components, runs in the Windows operating system on commercial off-the-shelf portable computers, and can utilize such additional hardware as digital cameras and Global Positioning System devices. A first responder can enter directly, into a portable computer running this program, the dimensions of a required shore. The shore dimensions, plus an optional digital photograph of the shore site, can then be uploaded via a wireless network to a server. Once on the server, the shore report is time-stamped and made available on similarly equipped portable computers carried by other first responders, including shore wood cutters and an incident commander. The staff in a command center can use the shore reports and photographs to monitor progress and to consult with structural engineers to assess whether a building is in imminent danger of further collapse.
Research on realization scheme of interactive voice response (IVR) system
NASA Astrophysics Data System (ADS)
Jin, Xin; Zhu, Guangxi
2003-12-01
In this paper, a novel interactive voice response (IVR) system is proposed, which differs markedly from traditional designs. Using software operation and network control, the IVR system depends only on software in the server where the system resides and on the hardware of network terminals on the user side, such as a gateway (GW), personal gateway (PG), PC, and so on. The system transmits audio using the real-time transport protocol (RTP) over the internet to the network terminals and controls the flow using a finite state machine (FSM) driven by H.245 messages sent from the user side and by system control factors. Compared with other existing schemes, this IVR system offers several advantages, such as greatly reducing system cost, fully utilizing existing network resources, and enhancing flexibility. The system can be deployed on any service server anywhere on the Internet and is even suitable for wireless applications based on packet-switched communication. The IVR system has been implemented and has passed system testing.
Wu, Vincent Wing-Cheung; Tang, Fuk-hay; Cheung, Wai-kwan; Chan, Kit-chi
2013-02-01
In localisation of the radiotherapy treatment field, the oncologist is present at the simulator to approve treatment details produced by the therapist. Problems may arise if the oncologist is not available and the patient requires urgent treatment. The development of a tele-localisation system is a potential solution, where the oncologist uses a personal digital assistant (PDA) to localise the treatment field on the image sent from the simulator through wireless communication and returns the information to the therapist after his or her approval. Our team developed the first tele-localisation prototype, which consisted of a server workstation (simulator) for the administration of Digital Imaging and Communications in Medicine (DICOM) localisation images, including viewing and communication with the PDA via a Wi-Fi network, and a PDA (oncologist's site) installed with the custom-built programme that synchronises with the server workstation and performs treatment field editing. Trial tests on accuracy and speed of the prototype system were conducted on 30 subjects with the treatment regions covering the neck, skull, chest and pelvis. The average time required to perform the localisation using the PDA was less than 1.5 min, with the blocked field taking longer than the open field. The transmission speed of the four treatment regions was similar. The average physical distortion of the images was within 4.4% and the accuracy of field size indication was within 5.3%. Compared with the manual method, the tele-localisation system presented an average deviation of 5.5%. The prototype system fulfilled the planned objectives of the tele-localisation procedure with reasonable speed and accuracy. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.
Autoconfiguration and Service Discovery
NASA Astrophysics Data System (ADS)
Manner, Jukka
To be useful, IP networking requires various parameters to be set up. A network node needs at least an IP address, routing information, and name services. In a fixed network this configuration is typically done with a centralized scheme, where a server hosts the configuration information and clients query the server with the Dynamic Host Configuration Protocol (DHCP). Companies, university campuses and even home broadband use the DHCP system to configure hosts. This signaling happens in the background, and users seldom need to think about it; only when things are not working properly, manual intervention is needed. The same protocol can be used in mobile networks, where the client device communicates with the access network provider and his DHCP service. The core information provided by DHCP includes a unique IP address for the host, the IP address of the closest IP router for routing messages to other networks, and the location of domain name servers.
A distributed component framework for science data product interoperability
NASA Technical Reports Server (NTRS)
Crichton, D.; Hughes, S.; Kelly, S.; Hardman, S.
2000-01-01
Correlation of science results from multi-disciplinary communities is a difficult task. Traditionally data from science missions is archived in proprietary data systems that are not interoperable. The Object Oriented Data Technology (OODT) task at the Jet Propulsion Laboratory is working on building a distributed product server as part of a distributed component framework to allow heterogeneous data systems to communicate and share scientific results.
ERIC Educational Resources Information Center
Herrera-Ruiz, Octavio
2012-01-01
Peer-to-Peer (P2P) technology has emerged as an important alternative to the traditional client-server communication paradigm to build large-scale distributed systems. P2P enables the creation, dissemination and access to information at low cost and without the need of dedicated coordinating entities. However, existing P2P systems fail to provide…
ERIC Educational Resources Information Center
Villano, Matt
2006-01-01
The benefits of deploying a communications system that runs over the Internet Protocol are well documented. Sending voice over the Internet, a process commonly known as VoIP, has been shown to save money on long distance calls, make voice mail more accessible, and enable users to answer their phones from anywhere. The technology also makes adding…
Securing a web-based teleradiology platform according to German law and "best practices".
Spitzer, Michael; Ullrich, Tobias; Ueckert, Frank
2009-01-01
The Medical Data and Picture Exchange platform (MDPE), as a teleradiology system, facilitates the exchange of digital medical imaging data among authorized users. It features extensive support of the DICOM standard including networking functions. Since MDPE is designed as a web service, security and confidentiality of data and communication pose an outstanding challenge. To comply with demands of German laws and authorities, a generic data security concept considered as "best practice" in German health telematics was adapted to the specific demands of MDPE. The concept features strict logical and physical separation of diagnostic and identity data and thus an all-encompassing pseudonymization throughout the system. Hence, data may only be merged at authorized clients. MDPE's solution of merging data from separate sources within a web browser avoids technically questionable techniques such as deliberate cross-site scripting. Instead, data is merged dynamically by JavaScriptlets running in the user's browser. These scriptlets are provided by one server, while content and method calls are generated by another server. Additionally, MDPE uses encrypted temporary IDs for communication and merging of data.
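The central idea above — identity and diagnostic data held in strictly separated stores and merged only at an authorized client via a pseudonym — can be sketched compactly. The HMAC-based pseudonym derivation, the in-memory stores, and the field names below are illustrative assumptions, not the MDPE implementation; they only show the separation-and-late-merge pattern.

```python
# Sketch of pseudonymized storage: identity data and diagnostic data live in
# separate stores and are linked only by a pseudonym; merging happens at an
# authorized client that can derive the pseudonym.
import hashlib
import hmac

PSEUDONYM_KEY = b"secret-key-held-by-trusted-party"   # hypothetical key management

def pseudonym(patient_id: str) -> str:
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()[:16]

identity_store = {}     # pseudonym -> identifying data (separate server in the real system)
diagnostic_store = {}   # pseudonym -> imaging/diagnostic data (separate server in the real system)

def admit(patient_id: str, name: str, finding: str) -> None:
    p = pseudonym(patient_id)
    identity_store[p] = {"name": name}
    diagnostic_store[p] = {"finding": finding}

def merge_at_client(patient_id: str) -> dict:
    """Only an authorized client that can derive the pseudonym may merge the records."""
    p = pseudonym(patient_id)
    return {**identity_store[p], **diagnostic_store[p]}

if __name__ == "__main__":
    admit("12345", "Jane Doe", "chest CT, no abnormality")
    print(merge_at_client("12345"))
```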
NASA Technical Reports Server (NTRS)
Dhaliwal, Swarn S.
1997-01-01
An investigation was undertaken to build the software foundation for the WHERE (Web-based Hyper-text Environment for Requirements Engineering) project. The TCM (Toolkit for Conceptual Modeling) was chosen as the foundation software for the WHERE project, which aims to provide an environment for facilitating collaboration among geographically distributed people involved in the Requirements Engineering process. The TCM is a collection of diagram and table editors and has been implemented in the C++ programming language. The C++ implementation of the TCM was translated into Java in order to allow the editors to be used for building various functionality of the WHERE project; the WHERE project intends to use the Web as its communication backbone. One of the limitations of the translated software (TcmJava), which militated against its use in the WHERE project, was the persistent data management mechanism it inherited from the original TCM; it was designed to be used in standalone applications. Before TcmJava editors could be used as a part of the multi-user, geographically distributed applications of the WHERE project, a persistent storage mechanism had to be built which would allow data communication over the Internet, using the capabilities of the Web. An approach involving features of Java, CORBA (Common Object Request Broker Architecture), the Web, a middleware (Java Relational Binding (JRB)), and a database server was used to build the persistent data management infrastructure for the WHERE project. The developed infrastructure allows a TcmJava editor to be downloaded and run from a network host by using a JDK 1.1 (Java Developer's Kit) compatible Web browser. The aforementioned editor establishes a connection with a server by using the ORB (Object Request Broker) software and stores/retrieves data in/from the server. The server consists of a CORBA object or objects depending upon whether the data is to be made persistent on a single server or multiple servers. The CORBA object providing the persistent data server is implemented using the Java programming language. It uses the JRB to store/retrieve data in/from a relational database server. The persistent data management system provides transaction and user management facilities which allow multi-user, distributed access to the stored data in a secure manner.
Cruz, Márcio Freire; Cavalcante, Carlos Arthur Mattos Teixeira; Sá Barretto, Sérgio Torres
2018-05-30
Health Level Seven (HL7) is one of the standards most used to centralize data from different vital sign monitoring systems. This solution significantly limits the data available for historical analysis, because it typically uses databases that are not effective in storing large volumes of data. In industry, a specific big data historian, known as a Process Information Management System (PIMS), solves this problem. This work proposes the same solution to overcome the restriction on storing vital sign data. The PIMS needs a compatible communication standard to allow storage, and the one most commonly used is OLE for Process Control (OPC). This paper presents an HL7-OPC Server that permits communication between vital sign monitoring systems and a PIMS, thus allowing the storage of long historical series of vital signs. In addition, it carries out a review of local and cloud-based big medical data research, followed by an analysis of the PIMS in a health IT environment. It then describes the architecture of the HL7 and OPC standards. Finally, it presents the HL7-OPC Server and a sequence of tests that demonstrated its full operation and performance.
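The bridging step described above amounts to translating HL7 observation segments into the tag/value/timestamp records a process historian stores. The following sketch parses OBX segments from a sample HL7 v2 message into such records; the sample message, the tag-naming convention, and the source name are hypothetical, and the OPC/historian write itself is not modeled.

```python
# Hedged sketch: turn HL7 v2 OBX (observation) segments into historian-style
# (tag, value, units, timestamp) records, the kind of mapping an HL7-to-OPC
# bridge would perform before handing data to a PIMS.
SAMPLE_HL7 = (
    "MSH|^~\\&|MONITOR|ICU|HISTORIAN|HOSP|20240101120000||ORU^R01|1|P|2.5\r"
    "OBX|1|NM|HR^HeartRate||82|bpm|||||F|||20240101120000\r"
    "OBX|2|NM|SPO2^OxygenSaturation||97|%|||||F|||20240101120000\r"
)

def hl7_to_records(message: str, source: str = "ICU.Monitor1"):
    records = []
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields and fields[0] == "OBX":
            code = fields[3].split("^")[0]                 # e.g. "HR"
            value, units, timestamp = fields[5], fields[6], fields[14]
            records.append({"tag": f"{source}.{code}", "value": float(value),
                            "units": units, "timestamp": timestamp})
    return records

if __name__ == "__main__":
    for record in hl7_to_records(SAMPLE_HL7):
        print(record)
```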
Jung, Jaewook; Kang, Dongwoo; Lee, Donghoon; Won, Dongho
2017-01-01
Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks and server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, so an incorrect password is not detected at the beginning of the login phase. Third, the password change mechanism is deficient, in that it induces inefficient communication with the server to change a user password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level, but also ensures efficiency.
Kang, Dongwoo; Lee, Donghoon; Won, Dongho
2017-01-01
Nowadays, many hospitals and medical institutes employ an authentication protocol within electronic patient records (EPR) services in order to provide protected electronic transactions in e-medicine systems. In order to establish efficient and robust health care services, numerous studies have been carried out on authentication protocols. Recently, Li et al. proposed a user authenticated key agreement scheme for EPR information systems, arguing that their scheme is able to resist various types of attacks and preserve diverse security properties. However, this scheme possesses critical vulnerabilities. First, the scheme cannot prevent off-line password guessing attacks and server spoofing attacks, and cannot preserve user identity. Second, there is no password verification process, so an incorrect password is not detected at the beginning of the login phase. Third, the password change mechanism is deficient, in that it induces inefficient communication with the server to change a user password. Therefore, we suggest an upgraded version of the user authenticated key agreement scheme that provides enhanced security. Our security and performance analysis shows that, compared to other related schemes, our scheme not only improves the security level, but also ensures efficiency. PMID:28046075
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Kumar, Neeraj
2015-11-01
In the last few years, numerous remote user authentication and session key agreement schemes have been put forward for the Telecare Medical Information System, where the patient and medical server exchange medical information over the Internet. We have found that most of the schemes are not usable for practical applications due to known security weaknesses. It is also worth noting that an unrestricted number of patients across the globe log in to the single medical server. Therefore, the computation and maintenance overhead would be high and the server may fail to provide services. In this article, we have designed a medical system architecture and a standard mutual authentication scheme for a single medical server, where the patient can securely exchange medical data with the doctor(s) via a trusted central medical server over any insecure network. We then explored the security of the scheme and its resilience to attacks. Moreover, we formally validated the proposed scheme through simulation using the Automated Validation of Internet Security Protocols and Applications software, whose outcomes confirm that the scheme is protected against active and passive attacks. The performance comparison demonstrated that the proposed scheme has lower communication cost than the existing schemes in the literature. In addition, the computation cost of the proposed scheme is nearly equal to that of the existing schemes. The proposed scheme is not only resilient to different security attacks, but also provides efficient login, mutual authentication, session key agreement and verification, and password update phases, along with password recovery.
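Mutual authentication with session key agreement, as discussed above, generally follows a nonce-and-proof pattern: each side proves knowledge of a registration secret before both derive the same session key. The sketch below illustrates that pattern with HMAC; it is not the scheme proposed in the article, and the pre-shared secret, labels, and key derivation are illustrative assumptions.

```python
# Generic sketch of mutual authentication with session-key agreement between a
# patient's client and a medical server that share a registration secret.
import hashlib
import hmac
import os

def prf(key: bytes, *parts: bytes) -> bytes:
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

def handshake(shared_secret: bytes) -> bytes:
    # Step 1: client sends a fresh nonce to the server.
    n_client = os.urandom(16)
    # Step 2: server replies with its own nonce and a proof of the secret.
    n_server = os.urandom(16)
    server_proof = prf(shared_secret, b"server", n_client, n_server)
    # Step 3: client recomputes the proof; a mismatch aborts the handshake.
    if not hmac.compare_digest(server_proof, prf(shared_secret, b"server", n_client, n_server)):
        raise ValueError("server authentication failed")
    # Step 4: client returns its proof; the server verifies it the same way.
    client_proof = prf(shared_secret, b"client", n_client, n_server)
    if not hmac.compare_digest(client_proof, prf(shared_secret, b"client", n_client, n_server)):
        raise ValueError("client authentication failed")
    # Both sides can now derive the same session key from the secret and nonces.
    return prf(shared_secret, b"session", n_client, n_server)

if __name__ == "__main__":
    secret = os.urandom(32)   # established during patient registration (assumption)
    print("agreed session key:", handshake(secret).hex())
```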
DDS as middleware of the Southern African Large Telescope control system
NASA Astrophysics Data System (ADS)
Maartens, Deneys S.; Brink, Janus D.
2016-07-01
The Southern African Large Telescope (SALT) software control system1 is realised as a distributed control system, implemented predominantly in National Instruments' LabVIEW. The telescope control subsystems communicate using cyclic, state-based messages. Currently, transmitting a message is accomplished by performing an HTTP PUT request to a WebDAV directory on a centralised Apache web server, while receiving is based on polling the web server for new messages. While the method works, it presents a number of drawbacks; a scalable distributed communication solution with minimal overhead is a better fit for control systems. This paper describes our exploration of the Data Distribution Service (DDS). DDS is a formal standard specification, defined by the Object Management Group (OMG), that presents a data-centric publish-subscribe model for distributed application communication and integration. It provides an infrastructure for platform-independent many-to-many communication. A number of vendors provide implementations of the DDS standard; RTI, in particular, provides a DDS toolkit for LabVIEW. This toolkit has been evaluated against the needs of SALT, and a few deficiencies have been identified. We have developed our own implementation that interfaces LabVIEW to DDS in order to address our specific needs. Our LabVIEW DDS interface implementation is built against the RTI DDS Core component, provided by RTI under their Open Community Source licence. Our needs dictate that the interface implementation be platform independent. Since we have access to the RTI DDS Core source code, we are able to build the RTI DDS libraries for any of the platforms on which we require support. The communications functionality is based on UDP multicasting. Multicasting is an efficient communications mechanism with low overheads which avoids duplicated point-to-point transmission of data on a network where there are multiple recipients of the data. In the paper we present a performance evaluation of DDS against the current HTTP-based implementation as well as the historical DataSocket implementation. We conclude with a summary and describe future work.
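The transport the paper relies on, UDP multicasting, supports exactly the many-to-many, low-overhead delivery described: one sender, any number of receivers, no duplicated point-to-point streams. The following is a bare-bones publish/subscribe sketch over UDP multicast; the multicast group, port, topic name, and message format are arbitrary choices and this is not the RTI DDS API, only the underlying transport pattern.

```python
# Bare-bones UDP multicast publish/subscribe sketch, illustrating the kind of
# many-to-many transport DDS builds on.
import socket
import struct
import sys

GROUP, PORT = "239.255.0.1", 5000

def publish(topic: str, payload: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(f"{topic}|{payload}".encode(), (GROUP, PORT))

def subscribe(topic: str) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the multicast group so the kernel delivers datagrams sent to it.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, _ = sock.recvfrom(4096)
        received_topic, payload = data.decode().split("|", 1)
        if received_topic == topic:
            print("update:", payload)

if __name__ == "__main__":
    if sys.argv[1:] == ["sub"]:
        subscribe("tcs.state")                 # run in one terminal
    else:
        publish("tcs.state", "TRACKING")       # run in another terminal
```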
Online decision support system for surface irrigation management
NASA Astrophysics Data System (ADS)
Wang, Wenchao; Cui, Yuanlai
2017-04-01
Irrigation has played an important role in agricultural production. An irrigation decision support system is developed for irrigation water management, which can raise irrigation efficiency with few added engineering services. An online irrigation decision support system (OIDSS), consisting of in-field sensors and a central computer system, is designed for surface irrigation management in large irrigation districts. Many functions are provided in OIDSS, such as data acquisition and detection, real-time irrigation forecasting, water allocation decisions, and irrigation information management. The OIDSS contains four parts: data acquisition terminals, a web server, a client browser, and a communication system. Data acquisition terminals are designed to measure paddy water level, soil water content in dry land, pond water levels, underground water level, and canal water levels. The web server is responsible for collecting meteorological data, weather forecast data, real-time field data, and managers' feedback data. Water allocation decisions are made in the web server. The client browser is responsible for friendly display, interacting with managers, and collecting managers' irrigation intentions. The communication system includes the internet and the GPRS network used by the monitoring stations. The OIDSS's model is based on a water balance approach for both lowland paddy and upland crops. Drawing on a basic database of different crops' water demands over the whole growth stages and on irrigation system engineering information, the OIDSS can make efficient water allocation decisions with the help of real-time field water detection and weather forecasts. This system uses technical methods to reduce the requirement for users' specialized knowledge and can also take users' managerial experience into account. As the system is developed on the Browser/Server model, it is possible to make full use of internet resources and to serve users at any place where the internet exists. The OIDSS has been applied in Zhanghe Irrigation District (central China) to manage the required irrigation deliveries. Two years' application indicates that the proposed OIDSS can achieve promising performance for surface irrigation. Historical data from the 2014 rice growing period have been applied to test the OIDSS: it gives three irrigation decisions, consistent with the actual irrigation times, and the forecast irrigation dates fit well with the actual situations; the corresponding total amount of irrigation decreases by 15.13% compared to operation without the OIDSS.
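The water balance approach mentioned above can be reduced to a simple daily bookkeeping of ponded water depth. The sketch below updates a paddy field's water depth from rainfall, evapotranspiration, and percolation, and triggers an irrigation recommendation when the depth drops below a lower limit; the thresholds and loss rates are illustrative assumptions, not the OIDSS calibration for Zhanghe Irrigation District.

```python
# Simplified daily water-balance sketch for a paddy field: update ponded water
# depth and recommend irrigation when it falls below a lower threshold.
def daily_water_balance(depth_mm, rain_mm, et_mm, percolation_mm,
                        lower_limit_mm=20.0, target_mm=60.0):
    depth = max(depth_mm + rain_mm - et_mm - percolation_mm, 0.0)
    if depth < lower_limit_mm:
        irrigation = target_mm - depth          # bring the field back to the target depth
    else:
        irrigation = 0.0
    return depth + irrigation, irrigation

if __name__ == "__main__":
    depth = 50.0
    # (rain, evapotranspiration, percolation) in mm/day; made-up forecast values.
    forecast = [(0.0, 6.0, 3.0), (12.0, 5.5, 3.0), (0.0, 7.0, 3.0), (0.0, 6.5, 3.0)]
    for day, (rain, et, perc) in enumerate(forecast, start=1):
        depth, irrigation = daily_water_balance(depth, rain, et, perc)
        print(f"day {day}: depth {depth:5.1f} mm, irrigate {irrigation:5.1f} mm")
```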
[Design and implementation of field questionnaire survey system of taeniasis/cysticercosis].
Huan-Zhang, Li; Jing-Bo, Xue; Men-Bao, Qian; Xin-Zhong, Zang; Shang, Xia; Qiang, Wang; Ying-Dan, Chen; Shi-Zhu, Li
2018-04-17
A taeniasis/cysticercosis information management system was designed to achieve dynamic monitoring of the taeniasis/cysticercosis epidemic situation and to improve the intelligence level of disease information management. The system includes a three-layer structure (application layer, technical core layer, and data storage layer) and implements a data transmission and remote communication pipeline in a Browser/Server architecture. The system is expected to promote disease data collection. Additionally, the system may provide standardized data for convenient analysis.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-06
... Denver, Colorado. Communication Manager is designed to run on a variety of Linux-based media servers.... Some servers are in the form of blades. These are cards (similar to printed circuit cards with...
Jiménez, Felipe; Naranjo, Jose Eugenio; Serradilla, Francisco; Pérez, Elisa; Hernández, María Jose; Ruiz, Trinidad; Anaya, José Javier; Díaz, Alberto
2016-01-01
Inappropriate speed is a relevant concurrent factor in many traffic accidents. Moreover, in recent years, traffic accident numbers in Spain have fallen sharply, but this reduction has not been so significant on single carriageway roads. These infrastructures have less equipment than high-capacity roads; therefore, measures to reduce accidents on them should be implemented in vehicles. This article describes the development and analysis of the impact on the driver of a warning system for the safe speed on each road section in terms of geometry, the presence of traffic jams, weather conditions, type of vehicle and actual driving conditions. This system is based on an application for smartphones and includes knowledge of the vehicle position via the Global Positioning System (GPS), access to intravehicular information from onboard sensors through the Controller Area Network (CAN) bus, vehicle data entry by the driver, access to roadside information (short-range communications) and access to a centralized server with information about the road in the current and following sections of the route (long-range communications). Using this information, the system calculates the safe speed, recommends the appropriate speed in advance in the following sections and provides warnings to the driver. Finally, data are sent from vehicles to a server to generate new information to disseminate to other users or to supervise drivers' behaviour. Tests in a driving simulator have been used to define the system warnings and Human Machine Interface (HMI), and final tests have been performed on real roads in order to analyze the effect of the system on driver behavior. PMID:26805839
A General Purpose Connections type CTI Server Based on SIP Protocol and Its Implementation
NASA Astrophysics Data System (ADS)
Watanabe, Toru; Koizumi, Hisao
In this paper, we propose a general-purpose, connections type CTI (Computer Telephony Integration) server that provides various CTI services such as voice logging, where the CTI server communicates with an IP-PBX using SIP (Session Initiation Protocol) and accumulates the voice packets of external-line telephone calls flowing between an IP extension telephone and a VoIP gateway connected to outside line networks. The CTI server realizes CTI services such as voice logging, telephone conferencing, or IVR (interactive voice response) by accumulating and processing the sampled voice packets. Furthermore, the CTI server incorporates a web server function which can provide various CTI services, such as a Web telephone directory, via a Web browser to PCs, cellular telephones, or smart-phones in mobile environments.
Ubiquitous Multicriteria Clinic Recommendation System.
Chen, Toly
2016-05-01
Advancements in information, communication, and sensor technologies have led to new opportunities in medical care and education. Patients in general prefer visiting the nearest clinic, attempt to avoid waiting for treatment, and have unequal preferences for different clinics and doctors. Therefore, to enable patients to compare multiple clinics, this study proposes a ubiquitous multicriteria clinic recommendation system. In this system, patients can send requests through their cell phones to the system server to obtain a clinic recommendation. Once the patient sends this information to the system, the system server first estimates the patient's speed according to the detection results of a global positioning system. It then applies a fuzzy integer nonlinear programming-ordered weighted average approach to assess four criteria and finally recommends a clinic with maximal utility to the patient. The proposed methodology was tested in a field experiment, and the experimental results showed that it is advantageous over two existing methods in elevating the utilities of recommendations. In addition, such an advantage was shown to be statistically significant.
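The scoring step described above can be illustrated with a plain ordered weighted average (OWA): each clinic's criterion scores are sorted and combined with rank-position weights, and the clinic with the highest utility is recommended. The four criteria, the scores, and the weights below are made-up examples; the cited system derives its weights with a fuzzy integer nonlinear programming model rather than fixing them by hand.

```python
# Plain ordered weighted average (OWA): sort each clinic's criterion scores in
# descending order and take a weighted sum over rank positions.
def owa(scores, weights):
    ranked = sorted(scores, reverse=True)           # weights apply to rank positions
    return sum(w * s for w, s in zip(weights, ranked))

if __name__ == "__main__":
    weights = [0.4, 0.3, 0.2, 0.1]                  # emphasis on a clinic's strongest criteria
    clinics = {
        "Clinic A": [0.9, 0.5, 0.7, 0.6],           # hypothetical scores on four criteria
        "Clinic B": [0.8, 0.8, 0.6, 0.7],           # (e.g. distance, waiting time, clinic
        "Clinic C": [0.6, 0.9, 0.9, 0.5],           #  preference, doctor preference)
    }
    for name, scores in clinics.items():
        print(f"{name}: utility {owa(scores, weights):.2f}")
    best = max(clinics, key=lambda c: owa(clinics[c], weights))
    print("recommend:", best)
```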
Simulator of Space Communication Networks
NASA Technical Reports Server (NTRS)
Clare, Loren; Jennings, Esther; Gao, Jay; Segui, John; Kwong, Winston
2005-01-01
Multimission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) is a suite of software tools that simulates the behaviors of communication networks to be used in space exploration, and predict the performance of established and emerging space communication protocols and services. MACHETE consists of four general software systems: (1) a system for kinematic modeling of planetary and spacecraft motions; (2) a system for characterizing the engineering impact on the bandwidth and reliability of deep-space and in-situ communication links; (3) a system for generating traffic loads and modeling of protocol behaviors and state machines; and (4) a system of user-interface for performance metric visualizations. The kinematic-modeling system makes it possible to characterize space link connectivity effects, including occultations and signal losses arising from dynamic slant-range changes and antenna radiation patterns. The link-engineering system also accounts for antenna radiation patterns and other phenomena, including modulations, data rates, coding, noise, and multipath fading. The protocol system utilizes information from the kinematic-modeling and link-engineering systems to simulate operational scenarios of space missions and evaluate overall network performance. In addition, a Communications Effect Server (CES) interface for MACHETE has been developed to facilitate hybrid simulation of space communication networks with actual flight/ground software/hardware embedded in the overall system.
MODBUS APPLICATION AT JEFFERSON LAB
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yan, Jianxun; Seaton, Chad; Philip, Sarin
Modbus is a client/server communication model. In our applications, the embedded Ethernet device XPort acts as the server and a SoftIOC running the EPICS Modbus support is the client. The SoftIOC builds a Modbus request from parameters contained in a demand sent by the EPICS application to the Modbus client interface. On reception of the Modbus request, the Modbus server activates a local action to read, write, or perform some other operation. The main Modbus server functions are thus to wait for a Modbus request on TCP port 502, process the request, and then build a Modbus response.
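The request/response exchange on TCP port 502 described above can be shown at the wire level. The sketch below builds a raw Modbus/TCP "read holding registers" frame (MBAP header plus PDU), sends it, and decodes the reply; the device address and register map are hypothetical, and a production EPICS setup would use the Modbus driver rather than raw sockets.

```python
# Sketch of a raw Modbus/TCP "read holding registers" (function 0x03) exchange.
import socket
import struct

def read_holding_registers(host, unit_id, start_addr, count, transaction_id=1):
    pdu = struct.pack(">BHH", 0x03, start_addr, count)           # function, address, quantity
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, 502), timeout=2.0) as sock:
        sock.sendall(mbap + pdu)
        header = sock.recv(7)                                     # MBAP header of the response
        _, _, length, _ = struct.unpack(">HHHB", header)
        body = sock.recv(length - 1)                              # function code + data
        if body[0] & 0x80:                                        # exception response
            raise RuntimeError(f"Modbus exception code {body[1]}")
        byte_count = body[1]
        return list(struct.unpack(f">{byte_count // 2}H", body[2:2 + byte_count]))

if __name__ == "__main__":
    # Hypothetical XPort-style device address and register range.
    print(read_holding_registers("192.168.1.50", unit_id=1, start_addr=0, count=4))
```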
Distributed Intrusion Detection for Computer Systems Using Communicating Agents
2000-01-01
Log for a variety of suspicious events (like repeated failed login attempts), and alerts the IDAgent processes immediately via pipes when it finds... UX, IBM LAN Server, Raptor Eagle Firewalls, ANS Interlock Firewalls, and SunOS BSM. This program appears to be robust across many platforms. EMERALD [Neumann, 1999] is a system developed by SRI International with research funding from DARPA. The EMERALD project will be the successor to Next
Secure entanglement distillation for double-server blind quantum computation.
Morimae, Tomoyuki; Fujii, Keisuke
2013-07-12
Blind quantum computation is a new secure quantum computing protocol where a client, who does not have enough quantum technologies at her disposal, can delegate her quantum computation to a server, who has a fully fledged quantum computer, in such a way that the server cannot learn anything about the client's input, output, and program. If the client interacts with only a single server, the client has to have some minimum quantum power, such as the ability of emitting randomly rotated single-qubit states or the ability of measuring states. If the client interacts with two servers who share Bell pairs but cannot communicate with each other, the client can be completely classical. For such a double-server scheme, two servers have to share clean Bell pairs, and therefore the entanglement distillation is necessary in a realistic noisy environment. In this Letter, we show that it is possible to perform entanglement distillation in the double-server scheme without degrading the security of blind quantum computing.
Iotti, Bryan; Valazza, Alberto
2014-10-01
Picture Archiving and Communications Systems (PACS) are among the most needed systems in a modern hospital. As an integral part of the Digital Imaging and Communications in Medicine (DICOM) standard, they are charged with the responsibility for secure storage and accessibility of the diagnostic imaging data. These machines need to offer high performance, stability, and security while proving reliable and ergonomic in the day-to-day and long-term storage and retrieval of the data they safeguard. This paper reports the experience of the authors in developing and installing a compact and low-cost solution based on open-source technologies in the Veterinary Teaching Hospital of the University of Torino, Italy, during the course of the summer of 2012. The PACS server was built on low-cost x86-based hardware and uses an open source operating system derived from Oracle OpenSolaris (Oracle Corporation, Redwood City, CA, USA) to host the DCM4CHEE PACS DICOM server (DCM4CHEE, http://www.dcm4che.org). This solution features very high data security and an ergonomic interface to provide easy access to a large amount of imaging data. The system has been in active use for almost 2 years now and has proven to be a scalable, cost-effective solution for practices ranging from small to very large, where the use of different hardware combinations allows scaling to the different deployments, while the use of paravirtualization allows increased security and easy migrations and upgrades.
Ad Hoc Selection of Voice over Internet Streams
NASA Technical Reports Server (NTRS)
Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)
2014-01-01
A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.
Ad Hoc Selection of Voice over Internet Streams
NASA Technical Reports Server (NTRS)
Macha, Mitchell G. (Inventor); Bullock, John T. (Inventor)
2008-01-01
A method and apparatus for a communication system technique involving ad hoc selection of at least two audio streams is provided. Each of the at least two audio streams is a packetized version of an audio source. A data connection exists between a server and a client where a transport protocol actively propagates the at least two audio streams from the server to the client. Furthermore, software instructions executable on the client indicate a presence of the at least two audio streams, allow selection of at least one of the at least two audio streams, and direct the selected at least one of the at least two audio streams for audio playback.
NASA Astrophysics Data System (ADS)
Ge, Linqiang; Yu, Wei; Shen, Dan; Chen, Genshe; Pham, Khanh; Blasch, Erik; Lu, Chao
2014-06-01
Most enterprise networks are built to operate in a static configuration (e.g., static software stacks, network configurations, and application deployments). Nonetheless, static systems make it easy for a cyber adversary to plan and launch successful attacks. To address static vulnerability, moving target defense (MTD) has been proposed to increase the difficulty for the adversary to launch successful attacks. In this paper, we first present a literature review of existing MTD techniques. We then propose a generic defense framework, which can provision an incentive-compatible MTD mechanism through dynamically migrating server locations. We also present a user-server mapping mechanism, which not only improves system resiliency, but also ensures network performance. We demonstrate a MTD with a multi-user network communication and our data shows that the proposed framework can effectively improve the resiliency and agility of the system while achieving good network timeliness and throughput performance.
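The core of migration-based moving target defense is that the user-to-server mapping changes over time so that an attacker cannot rely on a fixed target. The toy sketch below re-derives a deterministic per-epoch mapping of users to server replicas from a keyed hash; the replica addresses, rotation interval, and hashing choice are arbitrary illustrations, not the incentive-compatible mechanism proposed in the paper.

```python
# Toy sketch of migration-based moving target defense: each epoch, every user is
# re-mapped to a server replica, so the attack surface moves over time while the
# mapping stays stable (and reproducible) within an epoch.
import hashlib

REPLICAS = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]   # hypothetical replica pool

def assign(users, epoch, seed=b"rotation-secret"):
    mapping = {}
    for user in users:
        digest = hashlib.sha256(seed + str(epoch).encode() + user.encode()).digest()
        mapping[user] = REPLICAS[digest[0] % len(REPLICAS)]
    return mapping

if __name__ == "__main__":
    users = ["alice", "bob", "carol"]
    for epoch in range(3):                       # each epoch, users are shuffled to new servers
        print(f"epoch {epoch}:", assign(users, epoch))
```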
Hayashi, Takashi; Iwai, Mitsuhiro; Takahashi, Katsuhiko; Takeda, Satoshi; Tateishi, Toshiki; Kaneko, Rumi; Ogasawara, Yoko; Yonezawa, Kazuya; Hanada, Akiko
2011-01-01
Using a 3D-imaging-creation-function server and network services over IP-VPN, we began to deliver 3D images to a remote institution. An indication trial of the primary image, a rotation trial of a 3D image, and a reproducibility trial were studied in order to examine the practicality of using the system on a real network between Hakodate and Sapporo (a communication distance of about 150 km). In these trials, basic data (time and received data volume) were measured for every variation of QF (quality factor) or monitor resolution. Analyzing the results of the system using our hospital's 3D image delivery server with varied QF settings and monitor resolutions, we concluded that this system is practical for remote radiogram interpretation work, even if the regional access point has a line speed of 6 Mbps.
EMR based telegeriatric system.
Pallawala, P M; Lun, K C
2001-05-01
As medical services improve due to new technologies and breakthroughs, this has led to an increasingly aging population. There has been much discussion and debate on how to solve the psychological, socioeconomic, and medical problems related to aging. Our effort is to implement a feasible telegeriatric medical service that uses state-of-the-art technology to deliver medical services efficiently to remote sites where elderly homes are based. The telegeriatric system will lead to rapid decision-making in the presence of acute or subacute emergencies. This triage will also lead to a reduction of unnecessary admissions. It will enable the doctors who visit these elderly homes on a once-a-week basis to improve their geriatric management skills through communication with a geriatric specialist. Nursing skills in geriatric care will also benefit from this system. An integrated EMR service will be indispensable in the face of emergency admissions to hospitals. Evolution of the EMR database would lead to future research in telegeriatrics and will help to identify the areas where telegeriatrics can be optimally used. This system is based on current web browsing technology and broadband communication. The EMR web-based server is developed using Java technology. The EMR database was developed using Microsoft SQL Server. Both are based at the Medical Informatics Programme, National University of Singapore. Two elderly homes situated in the periphery of Singapore and a leading government hospital in geriatric care have been chosen for the project. These three institutions and the National University of Singapore are connected via ADSL, which supports the high bandwidth necessary for high-quality videoconferencing. Each time a patient needs a teleconsultation, a nurse or doctor at the remote site sends the history to the EMR server. The EMR server forwards the request to the Alexandra Hospital for consultation. Geriatrics specialists at Alexandra Hospital carry out teleward rounds twice weekly and on an on-demand basis. Following the implementation of the system, a trial run has been done. This shows a high degree of coordination and cooperation between the remote site and the Alexandra Hospital. Patient compliance is also very high, and patients prefer teleconsultation. Initial results show that the telegeriatric system has definite advantages in managing geriatric patients at a remote site. As the system evolves, further research will show the areas where telegeriatrics can be used optimally.
Smart Cards and remote entrusting
NASA Astrophysics Data System (ADS)
Aussel, Jean-Daniel; D'Annoville, Jerome; Castillo, Laurent; Durand, Stephane; Fabre, Thierry; Lu, Karen; Ali, Asad
Smart cards are widely used to provide security in end-to-end communication involving servers and a variety of terminals, including mobile handsets or payment terminals. Sometimes, end-to-end server-to-smart-card security is not applicable, and smart cards must communicate directly with an application executing on a terminal, like a personal computer, without communicating with a server. In this case, the smart card must somehow trust the terminal application before performing some secure operation it was designed for. This paper presents a novel method to remotely trust a terminal application from the smart card. For terminals such as personal computers, this method is based on an advanced secure device connected through USB and consisting of a smart card bundled with flash memory. This device, or USB dongle, can be used in the context of remote entrusting to secure portable applications conveyed in the dongle flash memory. White-box cryptography is used to set up the secure channel, and a thumbprint-based mechanism is described to provide external authentication when session keys need to be renewed. Although not as secure as end-to-end server-to-smart-card security, remote entrusting with smart cards is easy to deploy for mass-market applications and can provide a reasonable level of security.
NASA Astrophysics Data System (ADS)
Vernon, F. L.; Eakins, J. A.; Busby, R.
2008-12-01
The USArray Transportable Array has deployed over 600 stations in aggregate over the past four years. All stations communicate in near-real time using IP protocols over a variety of communication links including satellite, cell phone, and DSL. Several different communication providers have been used for each type of communication link. In addition, data are being acquired from several regional networks either directly from a data server or after passing through the IRIS DMC BUD system. We will present results about the latency of data arriving at the UCSD Array Network Facility where the real-time data are acquired. Under normal operating conditions the median data latency is several seconds. We will also examine the data return rates through the near-real-time systems. In addition, we will examine the statistics of over 36,000 events which have automatic event locations and associations. We evaluate the timeliness of these results in the context of seismic early warning systems.
MyDas, an Extensible Java DAS Server
Jimenez, Rafael C.; Quinn, Antony F.; Jenkinson, Andrew M.; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning
2012-01-01
A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information means that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details. PMID:23028496
MyDas, an extensible Java DAS server.
Salazar, Gustavo A; García, Leyla J; Jones, Philip; Jimenez, Rafael C; Quinn, Antony F; Jenkinson, Andrew M; Mulder, Nicola; Martin, Maria; Hunter, Sarah; Hermjakob, Henning
2012-01-01
A large number of diverse, complex, and distributed data resources are currently available in the Bioinformatics domain. The pace of discovery and the diversity of information means that centralised reference databases like UniProt and Ensembl cannot integrate all potentially relevant information sources. From a user perspective however, centralised access to all relevant information concerning a specific query is essential. The Distributed Annotation System (DAS) defines a communication protocol to exchange annotations on genomic and protein sequences; this standardisation enables clients to retrieve data from a myriad of sources, thus offering centralised access to end-users. We introduce MyDas, a web server that facilitates the publishing of biological annotations according to the DAS specification. It deals with the common functionality requirements of making data available, while also providing an extension mechanism in order to implement the specifics of data store interaction. MyDas allows the user to define where the required information is located along with its structure, and is then responsible for the communication protocol details.
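From the client side, the DAS protocol described above boils down to an HTTP request for an XML document describing annotations on a sequence segment. The sketch below fetches and lists features from a DAS "features" command; the base URL and data-source name are placeholders to be replaced with a real MyDas (or other DAS 1.6) deployment, and only the commonly used FEATURE/TYPE/START/END elements are read.

```python
# Sketch of a minimal DAS client request: fetch a "features" document for a
# sequence segment and print the annotations it contains.
import urllib.request
import xml.etree.ElementTree as ET

def das_features(base_url, source, segment, start, stop):
    url = f"{base_url}/das/{source}/features?segment={segment}:{start},{stop}"
    with urllib.request.urlopen(url, timeout=10) as response:
        tree = ET.parse(response)
    for feature in tree.iter("FEATURE"):
        ftype = feature.findtext("TYPE", default="?")
        fstart = feature.findtext("START", default="?")
        fend = feature.findtext("END", default="?")
        print(f"{feature.get('id')}\t{ftype}\t{fstart}-{fend}")

if __name__ == "__main__":
    # Placeholder endpoint; substitute a real DAS server URL and source name.
    das_features("http://example.org", "mysource", "P12345", 1, 200)
```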
On-demand server-side image processing for web-based DICOM image display
NASA Astrophysics Data System (ADS)
Sakusabe, Takaya; Kimura, Michio; Onogi, Yuzo
2000-04-01
Low-cost image delivery is needed in modern networked hospitals. If a hospital has hundreds of clients, the cost of client systems is a big problem. Naturally, a Web-based system is the most effective solution. But a Web browser cannot by itself display medical images with certain image processing, such as a lookup table transformation. We developed a Web-based medical image display system using a Web browser and on-demand server-side image processing. All images displayed on a Web page are generated from DICOM files on a server and delivered on demand. User interaction on the Web page is handled by a client-side scripting technology such as JavaScript. This combination gives the system the look-and-feel of an imaging workstation, in terms of both functionality and speed. Real-time update of images that tracks mouse motion is achieved in the Web browser without any client-side image processing, which would otherwise require plug-in technology such as Java applets or ActiveX. We tested the performance of the system in three cases: a single client, a small number of clients on a fast network, and a large number of clients on a normal-speed network. The results show that the communication overhead is very slight and that the system is very scalable in the number of clients.
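To make the server-side processing step concrete, the sketch below shows a linear lookup-table (window/level) transformation of the kind the abstract describes, implemented with numpy; the window centre and width values and the synthetic slice are illustrative assumptions, not parameters taken from the paper.

    # A minimal sketch, assuming numpy; window centre/width values are examples only.
    import numpy as np

    def window_level(pixels, center, width):
        """Map raw pixel values to 8-bit display values with a linear lookup table."""
        lo = center - width / 2.0
        hi = center + width / 2.0
        out = (pixels.astype(np.float64) - lo) / (hi - lo)   # normalise to [0, 1]
        out = np.clip(out, 0.0, 1.0)
        return (out * 255).astype(np.uint8)

    # e.g. a synthetic 16-bit slice windowed for soft tissue (centre 40, width 400)
    raw = np.random.randint(-1000, 3000, size=(512, 512), dtype=np.int16)
    display = window_level(raw, center=40, width=400)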
Thin client (web browser)-based collaboration for medical imaging and web-enabled data.
Le, Tuong Huu; Malhi, Nadeem
2002-01-01
Utilizing thin client software and open source server technology, a collaborative architecture was implemented allowing for sharing of Digital Imaging and Communications in Medicine (DICOM) and non-DICOM images with real-time markup. Using the Web browser as a thin client integrated with standards-based components, such as DHTML (dynamic hypertext markup language), JavaScript, and Java, collaboration was achieved through a Web server/proxy server combination utilizing Java Servlets and Java Server Pages. A typical collaborative session involved the driver, who directed the navigation of the other collaborators, the passengers, and provided collaborative markups of medical and nonmedical images. The majority of processing was performed on the server side, allowing for the client to remain thin and more accessible.
Centralized Duplicate Removal Video Storage System with Privacy Preservation in IoT.
Yan, Hongyang; Li, Xuan; Wang, Yu; Jia, Chunfu
2018-06-04
In recent years, the Internet of Things (IoT) has found wide application and attracted much attention. Since most of the end-terminals in IoT have limited capabilities for storage and computing, it has become a trend to outsource data from local devices to cloud computing. To further reduce the communication bandwidth and storage space, data deduplication has been widely adopted to eliminate the redundant data. However, since data collected in IoT are sensitive and closely related to users' personal information, the privacy protection of users' information becomes a challenge. As the channels, like the wireless channels between the terminals and the cloud servers in IoT, are public and the cloud servers are not fully trusted, data have to be encrypted before being uploaded to the cloud. However, encryption makes deduplication by the cloud server difficult, because the ciphertexts will differ even if the underlying plaintexts are identical. In this paper, we build a centralized privacy-preserving duplicate removal storage system, which supports both file-level and block-level deduplication. In order to avoid the leakage of statistical information of data, Intel Software Guard Extensions (SGX) technology is utilized to protect the deduplication process on the cloud server. The results of the experimental analysis demonstrate that the new scheme can significantly improve the deduplication efficiency and enhance the security. It is envisioned that the duplicate removal system with privacy preservation will be of great use in the centralized storage environment of IoT.
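As an illustration of the block-level part of such a scheme, the sketch below fingerprints fixed-size blocks with SHA-256 and stores each unique block only once; the block size and the plain in-memory index are assumptions for demonstration, and the encryption and SGX-protected index of the actual system are deliberately omitted.

    # A minimal sketch of block-level deduplication by content fingerprint.
    import hashlib

    BLOCK_SIZE = 4096          # assumed fixed block size
    stored_blocks = {}         # fingerprint -> block (stands in for cloud storage)

    def store_file(data: bytes):
        """Split data into blocks and store only blocks not already present."""
        recipe = []            # fingerprints needed to rebuild the file
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in stored_blocks:       # duplicate blocks are skipped
                stored_blocks[fp] = block
            recipe.append(fp)
        return recipe

    def restore_file(recipe):
        return b"".join(stored_blocks[fp] for fp in recipe)

    r1 = store_file(b"a" * 10000)
    r2 = store_file(b"a" * 10000)             # second copy adds no new blocks
    assert restore_file(r1) == b"a" * 10000
    print(len(stored_blocks), "unique blocks stored")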
Content-based image retrieval on mobile devices
NASA Astrophysics Data System (ADS)
Ahmad, Iftikhar; Abdullah, Shafaq; Kiranyaz, Serkan; Gabbouj, Moncef
2005-03-01
Content-based image retrieval possesses tremendous potential for exploration and utilization, for researchers and industry alike, due to its promising results. Expeditious retrieval of desired images requires indexing of the content in large-scale databases along with extraction of low-level features based on the content of these images. With the recent advances in wireless communication technology and the availability of multimedia-capable phones, it has become vital to enable query operations in image databases and retrieve results based on the image content. In this paper we present a content-based image retrieval system for mobile platforms, providing the capability of content-based query to any mobile device that supports the Java platform. The system consists of a light-weight client application running on a Java-enabled device and a server containing a servlet running inside a Java-enabled web server. The server responds to image queries using efficient native code on the selected image database. The client application, running on a mobile phone, is able to initiate a query request, which is handled by a servlet on the server to find the closest match to the queried image. The retrieved results are transmitted over the mobile network and the images are displayed on the mobile phone. We conclude that such a system serves as a basis for content-based information retrieval on wireless devices and needs to cope with factors such as the constraints of hand-held devices and the reduced network bandwidth available in mobile environments.
Electronic Reference Library: Silverplatter's Database Networking Solution.
ERIC Educational Resources Information Center
Millea, Megan
Silverplatter's Electronic Reference Library (ERL) provides wide area network access to its databases using TCP/IP communications and client-server architecture. ERL has two main components: The ERL clients (retrieval interface) and the ERL server (search engines). ERL clients provide patrons with seamless access to multiple databases on multiple…
Monitoring Heart Disease and Diabetes with Mobile Internet Communications
Mulvaney, David; Woodward, Bryan; Datta, Sekharjit; Harvey, Paul; Vyas, Anoop; Thakker, Bhaskar; Farooq, Omar; Istepanian, Robert
2012-01-01
A telemedicine system is described for monitoring vital signs and general health indicators of patients with cardiac and diabetic conditions. Telemetry from wireless sensors and readings from other instruments are combined into a comprehensive set of measured patient parameters. Using a combination of mobile device applications and web browser, the data can be stored, accessed, and displayed using mobile internet communications to the central server. As an extra layer of security in the data transmission, information embedded in the data is used in its verification. The paper highlights features that could be enhanced from previous systems by using alternative components or methods. PMID:23213330
Barnett, G O; Famiglietti, K T; Kim, R J; Hoffer, E P; Feldman, M J
1998-01-01
DXplain, a computer-based medical education, reference, and decision support system, has been used by thousands of physicians and medical students on stand-alone systems and over communications networks. For the past two years, we have made DXplain available over the Internet in order to provide DXplain's knowledge and analytical capabilities as a resource to other applications within Massachusetts General Hospital (MGH) and at outside institutions. We describe the user experience with two different protocols through which users can access DXplain over the World Wide Web (WWW). The first allows the user to interact directly with all the functionality of DXplain, where the MGH server controls the interaction and the mode of presentation. In the second mode, the MGH server provides the DXplain functionality as a series of services, which can be called independently by the user application program.
3D Visualization for Phoenix Mars Lander Science Operations
NASA Technical Reports Server (NTRS)
Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol
2012-01-01
Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.
Interactive real-time media streaming with reliable communication
NASA Astrophysics Data System (ADS)
Pan, Xunyu; Free, Kevin M.
2014-02-01
Streaming media is a recent technique for delivering multimedia information from a source provider to an end-user over the Internet. The major advantage of this technique is that the media player can start playing a multimedia file even before the entire file is transmitted. Most streaming media applications are currently implemented based on the client-server architecture, where a server system hosts the media file and a client system connects to this server system to download the file. Although the client-server architecture is successful in many situations, it may not be ideal to rely on such a system to provide the streaming service, as users may be required to register an account using personal information in order to use the service. This is troublesome if a user wishes to watch a movie simultaneously while interacting with a friend in another part of the world over the Internet. In this paper, we describe a new real-time media streaming application implemented on a peer-to-peer (P2P) architecture in order to overcome these challenges within a mobile environment. When using the peer-to-peer architecture, streaming media is shared directly between end-users, called peers, with minimal or no reliance on a dedicated server. Based on the proposed software pɛvμa (pronounced [revma]), named for the Greek word meaning stream, we can host a media file on any computer and directly stream it to a connected partner. To accomplish this, pɛvμa utilizes the Microsoft .NET Framework and Windows Presentation Foundation, which are widely available on various types of Windows-compatible personal computers and mobile devices. With specially designed multi-threaded algorithms, the application can stream HD video at speeds upwards of 20 Mbps using the User Datagram Protocol (UDP). Streaming and playback are handled using synchronized threads that communicate with one another once a connection is established. Alteration of playback, such as pausing playback or tracking to a different spot in the media file, is reflected in all media streams. These techniques are designed to allow users at different locations to simultaneously view a full-length HD video and interactively control the media streaming session. To create a sustainable media stream with high quality, our system supports UDP packet loss recovery at high transmission speed using custom File-Buffers. Traditional real-time streaming protocols such as Real-time Transport Protocol/RTP Control Protocol (RTP/RTCP) provide no such error recovery mechanism. Finally, the system also features an Instant Messenger that allows users to perform social interactions with one another while they enjoy a media file. The ultimate goal of the application is to offer users a hassle-free way to watch a media file over long distances without having to upload any personal information into a third-party database. Moreover, the users can communicate with each other and stream media directly from one mobile device to another while maintaining independence from the traditional sign-up required by most streaming services.
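The paper's packet-loss recovery mechanism is not specified in detail in the abstract; as a language-agnostic illustration of the general idea, the Python sketch below frames payloads with a sequence number and lets the receiver detect gaps that would then be requested again over UDP. The header layout, chunk contents and retransmission policy are assumptions, not the described design.

    # A minimal sketch of sequence-numbered framing with receiver-side gap detection.
    import struct

    HEADER = struct.Struct("!I")       # 4-byte big-endian sequence number

    def frame(seq, payload: bytes) -> bytes:
        return HEADER.pack(seq) + payload

    def parse(packet: bytes):
        (seq,) = HEADER.unpack_from(packet)
        return seq, packet[HEADER.size:]

    class Receiver:
        def __init__(self):
            self.expected = 0
            self.missing = set()       # sequence numbers to request again

        def accept(self, packet: bytes):
            seq, payload = parse(packet)
            if seq > self.expected:                    # a gap: earlier packets were lost
                self.missing.update(range(self.expected, seq))
            self.missing.discard(seq)
            self.expected = max(self.expected, seq + 1)
            return seq, payload

    rx = Receiver()
    for s in (0, 1, 3, 4):                             # packet 2 is "lost"
        rx.accept(frame(s, b"chunk"))
    print("retransmit:", sorted(rx.missing))           # -> [2]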
Distributed On-line Monitoring System Based on Modem and Public Phone Net
NASA Astrophysics Data System (ADS)
Chen, Dandan; Zhang, Qiushi; Li, Guiru
In order to solve the monitoring problem of urban sewage disposal, a distributed on-line monitoring system is proposed. By introducing dial-up communication technology based on modems, the serial communication program can rationally solve the information transmission problem between the master station and the slave stations. The realization of the serial communication program is based on the MSComm control of C++ Builder 6.0. The software includes a real-time data operation part and a history data handling part, which use Microsoft SQL Server 2000 as the database and C++ Builder 6.0 for the user interface. The monitoring center displays a user interface with alarm information for out-of-limit data and real-time curves. Practical application shows that the system successfully accomplishes real-time data acquisition from the data-gathering stations and stores the data in the terminal database.
Implementation of the Web-based laboratory
NASA Astrophysics Data System (ADS)
Ying, Liu; Li, Xunbo
2005-12-01
With the rapid development of Internet technologies, remote access and control via the Internet are becoming a reality. A realization of the web-based laboratory (the W-LAB) is presented. The main target of the W-LAB is to allow users to easily access and conduct experiments via the Internet. To realize the remote communication, a system adopting a double client-server architecture was introduced, which gives the system better security and higher functionality. The experimental environment implemented in the W-LAB integrates both a virtual lab and a remote lab. Embedded technology is introduced in the W-LAB system as an economical and efficient way to build the distributed infrastructure network. Furthermore, a user authentication mechanism in the system effectively secures the remote communication.
Kamauu, Aaron W C; DuVall, Scott L; Wiggins, Richard H; Avrin, David E
2008-09-01
In the creation of interesting radiological cases in a digital teaching file, it is necessary to adjust the window and level settings of an image to effectively display the educational focus. The web-based applet described in this paper presents an effective solution for real-time window and level adjustments without leaving the picture archiving and communications system workstation. Optimized images are created, as user-defined parameters are passed between the applet and a servlet on the Health Insurance Portability and Accountability Act-compliant teaching file server.
An application for delivering field results to mobile devices
NASA Astrophysics Data System (ADS)
Kanta, A.; Hloupis, G.; Vallianatos, F.; Rust, D.
2009-04-01
Mobile devices (MDs) such as personal digital assistants (PDAs) and smartphones expand the ability of Internet communication between remote users. In particular, these devices can interact with data centres in order to request and receive information. For field surveys, MDs are used primarily for controlling instruments (in the case of field measurements) or for entering data needed for later processing (e.g. damage description after a natural hazard). It is not unusual for combined measurements to take place in areas of high interest. The results from these measurements are usually stored in data servers and are published mainly through web-based applications. Here we present a client/server application capable of displaying the results of several measurements for a specific area on an MD. More specifically, we develop an application that can present on the screen of the MD the results of existing measurements according to the position of the user. The server side is hosted at the data centre and uses a relational database (including the results), an SMS/MMS gateway and a receiver daemon application waiting for messages from MDs. The client side runs on the MD and is a simple menu-driven application which asks the user to enter the type of requested data and the geographical coordinates. If an embedded GPS receiver is present, coordinates are automatically derived from the receiver. A message is then sent to the server, which responds with the results. In the absence of an Internet connection the application can switch to common Short/Multimedia Messaging Systems: the client requests data using SMS and the server responds with MMS. We demonstrate the application using results from TEM, VES and HVSR measurements. Acknowledgements: Work of authors AK, GH and FV is partially supported by the EU-FP6-SSA in the frame of project "CYCLOPS: CYber-Infrastructure for CiviL protection Operative ProcedureS"
A web service framework for astronomical remote observation in Antarctica by using satellite link
NASA Astrophysics Data System (ADS)
Jia, M.-h.; Chen, Y.-q.; Zhang, G.-y.; Jiang, P.; Zhang, H.; Wang, J.
2018-07-01
Many telescopes are deployed in Antarctica as it offers excellent astronomical observation conditions. However, because Antarctica's environment is harsh to humans, remote operation of telescope is necessary for observation. Furthermore, communication to devices in Antarctica through satellite link with low bandwidth and high latency limits the effectiveness of remote observation. This paper introduces a web service framework for remote astronomical observation in Antarctica. The framework is based on Python Tornado. RTS2-HTTPD and REDIS are used as the access interface to the telescope control system in Antarctica. The web service provides real-time updates through WebSocket. To improve user experience and control effectiveness under the poor satellite link condition, an agent server is deployed in the mainland to synchronize the Antarctic server's data and send it to domestic users in China. The agent server will forward the request of domestic users to the Antarctic master server. The web service was deployed and tested on Bright Star Survey Telescope (BSST) in Antarctica. Results show that the service meets the demands of real-time, multiuser remote observation and domestic users have a better experience of remote operation.
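Since the framework described above is based on Python Tornado with WebSocket push, the sketch below shows, under stated assumptions, how such a service might broadcast telescope status to connected browsers. The /status route, port 8888, the status dictionary and the 5-second period are illustrative only, and the real system would pull state from RTS2-HTTPD/REDIS rather than from the placeholder dictionary used here.

    # A minimal Tornado WebSocket push sketch (assumed route, port and payload).
    import json
    import tornado.ioloop
    import tornado.web
    import tornado.websocket

    clients = set()

    class StatusSocket(tornado.websocket.WebSocketHandler):
        def open(self):
            clients.add(self)            # register a connected browser

        def on_close(self):
            clients.discard(self)

    def broadcast_status():
        # In the real system this would come from the telescope control system.
        status = {"telescope": "BSST", "state": "idle"}
        for client in clients:
            client.write_message(json.dumps(status))

    def make_app():
        return tornado.web.Application([(r"/status", StatusSocket)])

    if __name__ == "__main__":
        make_app().listen(8888)
        # push a status snapshot to all connected clients every 5 seconds
        tornado.ioloop.PeriodicCallback(broadcast_status, 5000).start()
        tornado.ioloop.IOLoop.current().start()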
Architecting Communication Network of Networks for Space System of Systems
NASA Technical Reports Server (NTRS)
Bhasin, Kul B.; Hayden, Jeffrey L.
2008-01-01
The National Aeronautics and Space Administration (NASA) and the Department of Defense (DoD) are planning Space System of Systems (SoS) to address the new challenges of space exploration, defense, communications, navigation, Earth observation, and science. In addition, these complex systems must provide interoperability, enhanced reliability, common interfaces, dynamic operations, and autonomy in system management. Both NASA and the DoD have chosen to meet the new demands with high data rate communication systems and space Internet technologies that bring Internet Protocols (IP), routers, servers, software, and interfaces to space networks to enable as much autonomous operation of those networks as possible. These technologies reduce the cost of operations and, with higher bandwidths, support the expected voice, video, and data needed to coordinate activities at each stage of an exploration mission. In this paper, we discuss, in a generic fashion, how the architectural approaches and processes are being developed and used for defining a hypothetical communication and navigation networks infrastructure to support lunar exploration. Examples are given of the products generated by the architecture development process.
Henri, C J; Cox, R D; Bret, P M
1997-08-01
This article details our experience in developing and operating an ultrasound mini-picture archiving and communication system (PACS). Using software developed in-house, low-end Macintosh computers (Apple Computer Co. Cupertino, CA) equipped with framegrabbers coordinate the entry of patient demographic information, image acquisition, and viewing on each ultrasound scanner. After each exam, the data are transmitted to a central archive server where they can be accessed from anywhere on the network. The archive server also provides web-based access to the data and manages pre-fetch and other requests for data that may no longer be on-line. Archival is fully automatic and is performed on recordable compact disk (CD) without compression. The system has been filmless now for over 18 months. In the meantime, one film processor has been eliminated and the position of one film clerk has been reallocated. Previously, nine ultrasound machines produced approximately 150 sheets of laser film per day (at 14 images per sheet). The same quantity of data are now archived without compression onto a single CD. Start-up costs were recovered within six months, and the project has been extended to include computed tomography (CT) and magnetic resonance imaging (MRI).
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.
1991-01-01
Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are on the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
Space Images for NASA JPL Android Version
NASA Technical Reports Server (NTRS)
Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice
2013-01-01
This software addresses the demand for easily accessible NASA JPL images and videos by providing a user-friendly and simple graphical user interface that can be run via the Android platform from any location where an Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input, which allows for countless combinations of returned images. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking images as user favorites, and searchable image metadata for instant results.
Test-bed for the remote health monitoring system for bridge structures using FBG sensors
NASA Astrophysics Data System (ADS)
Lee, Chin-Hyung; Park, Ki-Tae; Joo, Bong-Chul; Hwang, Yoon-Koog
2009-05-01
This paper reports on a test-bed for a long-term health monitoring system for bridge structures employing fiber Bragg grating (FBG) sensors, which is remotely accessible via the web, to provide real-time quantitative information on a bridge's response to live loading and environmental changes, and fast prediction of the structure's integrity. The sensors are attached at several locations on the structure and connected to a data acquisition system permanently installed onsite. The system can be accessed through remote communication using an optical cable network, through which the evaluation of the bridge behavior under live loading can be performed at places far away from the field. Live structural data are transmitted continuously to the server computer at the central office. The server computer is connected securely to the internet, where data can be retrieved, processed and stored for remote web-based health monitoring. The test-bed revealed that the remote health monitoring technology will enable practical, cost-effective, and reliable condition assessment and maintenance of bridge structures.
Markerless client-server augmented reality system with natural features
NASA Astrophysics Data System (ADS)
Ning, Shuangning; Sang, Xinzhu; Chen, Duo
2017-10-01
A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual reality head-mounted display is adopted to assist the implementation of augmented reality. The viewer is provided with an image in front of their eyes by the head-mounted display. The front-facing camera is used to capture video signals into the workstation. The generated virtual scene is merged with the outside-world information received from the camera. The integrated video is sent to the helmet display system. The distinguishing feature and novelty is the realization of augmented reality with natural features instead of a marker, which addresses the limitations of markers, such as being restricted to black and white, being unsuitable for varying environmental conditions, and in particular failing when the marker is partially blocked. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed and stable native socket communication method is adopted for transmission of the key video stream data, which reduces the calculation burden of the system.
CMS distributed data analysis with CRAB3
NASA Astrophysics Data System (ADS)
Mascheroni, M.; Balcas, J.; Belforte, S.; Bockelman, B. P.; Hernandez, J. M.; Ciangottini, D.; Konstantinov, P. B.; Silva, J. M. D.; Ali, M. A. B. M.; Melo, A. M.; Riahi, H.; Tanasijczuk, A. J.; Yusli, M. N. B.; Wolf, M.; Woodard, A. E.; Vaandering, E.
2015-12-01
The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. The new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.
Network oriented radiological and medical archive
NASA Astrophysics Data System (ADS)
Ferraris, M.; Frixione, P.; Squarcia, S.
2001-10-01
In this paper the basic ideas of NORMA (Network Oriented Radiological and Medical Archive) are discussed. NORMA is an original project built by a team of physicists in collaboration with radiologists in order to select the best treatment planning in radiotherapy. It allows physicians and health physicists working in different places to discuss interesting clinical cases, visualizing the same diagnostic images at the same time and highlighting zones of interest (tumors and organs at risk). NORMA has a client/server architecture in order to be platform independent. Applying World Wide Web technologies, it can be easily used by people with no specific computer knowledge, providing verbose help to guide the user through the right steps of execution. The client side is an applet while the server side is a Java application. In order to optimize execution the project also includes a proprietary protocol, lying over the TCP/IP suite, that organizes data exchanges and control messages. Diagnostic images are retrieved from a relational database or from a standard DICOM (Digital Imaging and Communications in Medicine) PACS through the DICOM-WWW gateway, allowing connection of the usual Web browsers, used by the NORMA system, to DICOM applications via the HTTP protocol. Browser requests are sent to the gateway from the Web server through CGI (Common Gateway Interface). DICOM software translates the requests into DICOM messages and organizes the communication with the remote DICOM application.
A Cyber-Physical System for Girder Hoisting Monitoring Based on Smartphones.
Han, Ruicong; Zhao, Xuefeng; Yu, Yan; Guan, Quanhua; Hu, Weitong; Li, Mingchu
2016-07-07
Offshore design and construction is much more difficult than land-based design and construction, particularly due to hoisting operations. Real-time monitoring of the orientation and movement of a hoisted structure is thus required for operators' safety. In recent years, the rapid development of the commercial smartphone market has made it possible for everyone to carry a mini personal computer integrated with sensors, an operating system, and a communication system, which can act as an effective aid for cyber-physical systems (CPS) research. In this paper, a CPS for hoisting monitoring using smartphones is proposed, including a phone collector, a controller and a server. This system uses smartphones equipped with internal sensors to obtain girder movement information, which is uploaded to a server and then returned to controller users. An alarm is raised on the controller phone once the returned data exceed a threshold. The proposed monitoring system is used to monitor the movement and orientation of a girder in real time during hoisting on a cross-sea bridge. The results show the convenience and feasibility of the proposed system.
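As a rough illustration of the controller-side alarm logic described above, the sketch below derives a tilt angle from a returned accelerometer sample and compares it with a limit; the field names and the 5-degree threshold are assumptions for illustration, not calibrated values from the paper.

    # A minimal sketch of a threshold alarm on phone accelerometer readings.
    import math

    TILT_LIMIT_DEG = 5.0                               # assumed alarm threshold

    def tilt_from_accel(ax, ay, az):
        """Tilt angle relative to gravity, in degrees, from a raw accelerometer sample."""
        return math.degrees(math.acos(az / math.sqrt(ax * ax + ay * ay + az * az)))

    def check_sample(sample):
        tilt = tilt_from_accel(sample["ax"], sample["ay"], sample["az"])
        if tilt > TILT_LIMIT_DEG:
            print(f"ALARM: tilt {tilt:.1f} deg exceeds {TILT_LIMIT_DEG} deg")
        return tilt

    # a reading as it might be returned from the server to the controller phone
    check_sample({"ax": 0.3, "ay": 0.1, "az": 9.7})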
Dominguez, Luis A.; Yildirim, Battalgazi; Husker, Allen L.; Cochran, Elizabeth S.; Christensen, Carl; Cruz-Atienza, Victor M.
2015-01-01
Each volunteer computer monitors ground motion and communicates using the Berkeley Open Infrastructure for Network Computing (BOINC, Anderson, 2004). Using a standard short‐term average, long‐term average (STLA) algorithm (Earle and Shearer, 1994; Cochran, Lawrence, Christensen, Chung, 2009; Cochran, Lawrence, Christensen, and Jakka, 2009), volunteer computer and sensor systems detect abrupt changes in the acceleration recordings. Each time a possible trigger signal is declared, a small package of information containing sensor and ground‐motion information is streamed to one of the QCN servers (Chung et al., 2011). Trigger signals, correlated in space and time, are then processed by the QCN server to look for potential earthquakes.
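For reference, the short-term-average/long-term-average detection principle cited above can be sketched as follows; the window lengths, threshold and synthetic test signal are illustrative choices rather than QCN's operational settings, and the smoothing here is a simple centred moving average rather than the exact published algorithm.

    # A minimal STA/LTA trigger sketch, assuming numpy.
    import numpy as np

    def sta_lta_trigger(accel, sta_len=50, lta_len=1000, threshold=4.0):
        """Return sample indices where the STA/LTA ratio exceeds the threshold."""
        x = np.abs(accel)
        sta = np.convolve(x, np.ones(sta_len) / sta_len, mode="same")
        lta = np.convolve(x, np.ones(lta_len) / lta_len, mode="same")
        ratio = sta / np.maximum(lta, 1e-12)           # avoid division by zero
        return np.flatnonzero(ratio > threshold)

    # synthetic record: background noise with an abrupt arrival at sample 5000
    signal = np.random.normal(0, 0.01, 10000)
    signal[5000:5200] += np.random.normal(0, 0.5, 200)
    picks = sta_lta_trigger(signal)
    print("first trigger near sample", picks[0] if picks.size else None)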
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-09-18
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree), for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods.
NASA Astrophysics Data System (ADS)
Lee, Seungwon; Park, Ilkwon; Kim, Manbae; Byun, Hyeran
2006-10-01
As digital broadcasting technologies have rapidly progressed, users' expectations for realistic and interactive broadcasting services have also increased. As one of such services, 3D multi-view broadcasting has received much attention recently. In general, all the view sequences acquired at the server are transmitted to the client. Then, the user can select a part of the views or all the views according to display capabilities. However, this kind of system requires high processing power at the server as well as the client, thus posing a difficulty in practical applications. To overcome this problem, a relatively simple method is to transmit only the two view-sequences requested by the client in order to deliver a stereoscopic video. In this system, effective communication between the server and the client is one of the important aspects. In this paper, we propose an efficient multi-view system that transmits two view-sequences and their depth maps according to the user's request. The view selection process is integrated into MPEG-21 DIA (Digital Item Adaptation) so that our system is compatible with the MPEG-21 multimedia framework. DIA is generally composed of resource adaptation and descriptor adaptation. One of the merits is that SVA (stereoscopic video adaptation) descriptors defined in the DIA standard are used to deliver users' preferences and device capabilities. Furthermore, multi-view descriptions related to the multi-view camera and system are newly introduced. The syntax of the descriptions and their elements is represented in an XML (eXtensible Markup Language) schema. If the client requests an adapted descriptor (e.g., view numbers) from the server, then the server sends the associated view sequences. Finally, we present a method which can reduce the user's visual discomfort that might occur while viewing stereoscopic video. This phenomenon happens when the view changes, as well as when a stereoscopic image produces excessive disparity caused by a large baseline between two cameras. To address the former, IVR (intermediate view reconstruction) is employed for smooth transition between two stereoscopic view sequences. A disparity adjustment scheme is used for the latter. Finally, through the implementation of a testbed and experiments, we show the value and potential of our system.
Protection of Location Privacy Based on Distributed Collaborative Recommendations
Wang, Peng; Yang, Jing; Zhang, Jian-Pei
2016-01-01
In the existing centralized location services system structure, the server is easily attacked and becomes the communication bottleneck, which can cause the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy that is based on a distributed system. In this strategy, each node establishes profiles of its own location information. When requests for location services appear, the user can obtain the corresponding location services according to the recommendation of the neighboring users' location information profiles. If no suitable recommended location service results are obtained, then the user can send a service request to the server according to the construction of a k-anonymous data set with the centroid position of the neighbors. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users' location information profiles and used generalization and encryption to ensure the safety of the users' location information privacy. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the strategy proposed in this paper is capable of reducing the frequency of access to the location server, providing better location services and better protecting the user's location privacy. PMID:27649308
Protection of Location Privacy Based on Distributed Collaborative Recommendations.
Wang, Peng; Yang, Jing; Zhang, Jian-Pei
2016-01-01
In the existing centralized location services system structure, the server is easily attacked and becomes the communication bottleneck, which can cause the disclosure of users' locations. To address this, we present a new distributed collaborative recommendation strategy that is based on a distributed system. In this strategy, each node establishes profiles of its own location information. When requests for location services appear, the user can obtain the corresponding location services according to the recommendation of the neighboring users' location information profiles. If no suitable recommended location service results are obtained, then the user can send a service request to the server according to the construction of a k-anonymous data set with the centroid position of the neighbors. In this strategy, we designed a new model of distributed collaborative recommendation location service based on the users' location information profiles and used generalization and encryption to ensure the safety of the users' location information privacy. Finally, we used a real location data set for theoretical and experimental analysis. The results show that the strategy proposed in this paper is capable of reducing the frequency of access to the location server, providing better location services and better protecting the user's location privacy.
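As a small illustration of the k-anonymous request construction mentioned above, the sketch below replaces the user's true coordinates with the centroid of a group formed from neighbours' profiled locations; the coordinates, the value of k, and the omission of the generalization and encryption steps are simplifying assumptions.

    # A minimal sketch of building a k-anonymous centroid location.
    def k_anonymous_centroid(own_location, neighbour_locations, k=5):
        """Build the anonymised location sent to the server instead of the real one."""
        group = [own_location] + list(neighbour_locations)[: k - 1]
        if len(group) < k:
            raise ValueError("not enough neighbours to reach k-anonymity")
        lat = sum(p[0] for p in group) / len(group)
        lon = sum(p[1] for p in group) / len(group)
        return (lat, lon)

    me = (48.137, 11.575)
    neighbours = [(48.139, 11.571), (48.135, 11.580), (48.140, 11.577), (48.134, 11.572)]
    print(k_anonymous_centroid(me, neighbours))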
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Lee, Diana D.; Norvig, Peter (Technical Monitor)
2000-01-01
We describe the Object Infrastructure Framework, a system that seeks to simplify the creation of distributed applications by injecting behavior on the communication paths between components. We touch on some of the ilities and services that can be achieved with injector technology, and then focus on the uses of redirecting injectors, injectors that take requests directed at a particular server and generate requests directed at others. We close by noting that OIF is an Aspect-Oriented Programming system, and comparing OIF to related work.
Simple group password-based authenticated key agreements for the integrated EPR information system.
Lee, Tian-Fu; Chang, I-Pin; Wang, Ching-Cheng
2013-04-01
Security and privacy are important issues for electronic patient records (EPRs). The goal of EPRs is to share patients' medical histories, such as diagnosis records, reports and diagnostic image files, among hospitals over the Internet. So the security issue for the integrated EPR information system is essential; that is, to ensure that the information transmitted over the Internet is secure and private. Group password-based authenticated key agreement (GPAKE) allows a group of users, such as doctors, nurses and patients, to establish a common session key by using password authentication. The group of users can then securely communicate by using this session key. Many GPAKE approaches employ a public key infrastructure (PKI) in order to achieve higher security. However, this not only increases users' overheads and requires extra equipment for storing long-term secret keys, but also requires maintaining the public key system. This investigation presents a simple group password-based authenticated key agreement (SGPAKE) protocol for the integrated EPR information system. The proposed SGPAKE protocol does not require the server's or users' public keys. Each user only remembers a weak password shared with a trusted server, and can then obtain a common session key. All users can then securely communicate by using this session key. The proposed SGPAKE protocol not only provides users with convenience, but also offers higher security.
An Efficient Authenticated Key Transfer Scheme in Client-Server Networks
NASA Astrophysics Data System (ADS)
Shi, Runhua; Zhang, Shun
2017-10-01
In this paper, we present a novel authenticated key transfer scheme for client-server networks, which achieves the two security goals of remote user authentication and session key establishment between the remote user and the server. In particular, the proposed scheme can provide two fully different kinds of authentication, identity-based authentication and anonymous authentication, while the remote user holds only a single private key. Furthermore, our scheme only needs to transmit one round of messages from the remote user to the server, so it is very efficient in communication complexity. In addition, the most time-consuming computation in our scheme is elliptic curve scalar point multiplication, so it is also feasible even for mobile devices.
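Since the abstract identifies elliptic curve scalar point multiplication as the dominant cost, the sketch below shows the textbook double-and-add method on a toy short Weierstrass curve. The curve parameters are a small classroom example rather than the scheme's actual curve, and Python 3.8+ is assumed for modular inverses via pow(x, -1, p).

    # Toy double-and-add scalar multiplication on y^2 = x^3 + ax + b (mod p).
    def ec_add(P, Q, a, p):
        if P is None:
            return Q
        if Q is None:
            return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                # point at infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

    def ec_mul(k, P, a, p):
        """Scalar multiplication k*P by double-and-add (the costly step in such schemes)."""
        R, Q = None, P
        while k:
            if k & 1:
                R = ec_add(R, Q, a, p)
            Q = ec_add(Q, Q, a, p)
            k >>= 1
        return R

    p, a, b = 17, 2, 2                                 # small textbook curve, not secure
    G = (5, 1)                                         # a point on the curve
    print(ec_mul(2, G, a, p))                          # -> (6, 3)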
Kilintzis, Vassilis; Beredimas, Nikolaos; Chouvarda, Ioanna
2014-01-01
An integral part of a system that manages medical data is the persistent storage engine. For almost twenty-five years, Relational Database Management Systems (RDBMS) were considered the obvious decision, yet today new technologies have emerged that require our attention as possible alternatives. Triplestores store information in terms of RDF triples without necessarily binding to a specific predefined structural model. In this paper we present an attempt to compare the performance of the Apache JENA-Fuseki and Virtuoso Universal Server 6 triplestores with that of the MySQL 5.6 RDBMS for storing and retrieving medical information that is communicated as RDF/XML ontology instances over a RESTful web service. The results show that the performance, calculated as the average time of storing and retrieving instances, is significantly better using Virtuoso Server, while MySQL performed better than Fuseki.
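The comparison metric used above, the average time to store and retrieve instances over a RESTful interface, can be sketched roughly as below; the endpoint URL, payload and repetition count are hypothetical placeholders (a service would have to be listening there), not the services actually benchmarked in the paper.

    # A minimal timing sketch for average store/retrieve latency over REST, assuming requests.
    import time
    import requests

    BASE = "http://localhost:8080/store"      # hypothetical service under test

    def average_time(op, n=100):
        start = time.perf_counter()
        for _ in range(n):
            op()
        return (time.perf_counter() - start) / n

    store = lambda: requests.post(BASE, data="<rdf:RDF/>",
                                  headers={"Content-Type": "application/rdf+xml"})
    retrieve = lambda: requests.get(BASE + "/instance/1")

    print("avg store   :", average_time(store), "s")
    print("avg retrieve:", average_time(retrieve), "s")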
Autonomous mission planning and scheduling: Innovative, integrated, responsive
NASA Technical Reports Server (NTRS)
Sary, Charisse; Liu, Simon; Hull, Larry; Davis, Randy
1994-01-01
Autonomous mission scheduling, a new concept for NASA ground data systems, is a decentralized and distributed approach to scientific spacecraft planning, scheduling, and command management. Systems and services are provided that enable investigators to operate their own instruments. In autonomous mission scheduling, separate nodes exist for each instrument and one or more operations nodes exist for the spacecraft. Each node is responsible for its own operations which include planning, scheduling, and commanding; and for resolving conflicts with other nodes. One or more database servers accessible to all nodes enable each to share mission and science planning, scheduling, and commanding information. The architecture for autonomous mission scheduling is based upon a realistic mix of state-of-the-art and emerging technology and services, e.g., high performance individual workstations, high speed communications, client-server computing, and relational databases. The concept is particularly suited to the smaller, less complex missions of the future.
Using OPC technology to support the study of advanced process control.
Mahmoud, Magdi S; Sabih, Muhammad; Elshafei, Moustafa
2015-03-01
OPC, originally Object Linking and Embedding (OLE) for Process Control, brings broad communication opportunities between different kinds of control systems. This paper investigates the use of OPC technology for the study of distributed control systems (DCS) as a cost-effective and flexible research tool for the development and testing of advanced process control (APC) techniques in university research centers. A co-simulation environment based on Matlab, LabVIEW and a TCP/IP network is presented here. Several implementation issues and an OPC-based client/server control application have been addressed for the TCP/IP network. A nonlinear boiler model is simulated as an OPC server, and an OPC client is used for closed-loop model identification and to design a model predictive controller. The MPC is able to control the NOx emissions in addition to the drum water level and steam pressure. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Arnold, Corey W; Bui, Alex A T; Morioka, Craig; El-Saden, Suzie; Kangarloo, Hooshang
2007-01-01
The communication of imaging findings to a referring physician is an important role of the radiologist. However, communication between onsite and offsite physicians is a time-consuming process that can obstruct work flow and frequently involves no exchange of visual information, which is especially problematic given the importance of radiologic images for diagnosis and treatment. A prototype World Wide Web-based image documentation and reporting system was developed to support a "communication loop" that is based on the concept of a classic "wet-read" system. The proposed system represents an attempt to address many of the problems seen in current communication work flows by implementing a well-documented and easily accessible communication loop that is adaptable to different types of imaging study evaluation. Images are displayed in native DICOM (Digital Imaging and Communications in Medicine) format with a Java applet, which allows accurate presentation along with the use of various image manipulation tools. The Web-based infrastructure consists of a server that stores imaging studies and reports, with Web browsers that download and install the necessary client software on demand. Application logic consists of a set of PHP (hypertext preprocessor) modules that are accessible with an application programming interface. The system may be adapted to any clinician-specialist communication loop and, because it integrates radiologic standards with Web-based technologies, can more effectively communicate and document imaging data. RSNA, 2007
MedRapid--medical community & business intelligence system.
Finkeissen, E; Fuchs, H; Jakob, T; Wetter, T
2002-01-01
Currently, it takes at least six months for researchers to communicate their results. This delay is caused (a) by a partial lack of machine support for both representation and communication and (b) by media breaks during the communication process. To make integrated communication between researchers and practitioners possible, a general structure for medical content representation has been set up. The procedure for data entry and quality management has been generalized and implemented in a web-based authoring system. The MedRapid system supports medical experts in entering their knowledge into a database. Here, the level of detail is still below that of current medical guideline representations. However, the symmetric structure for an area-wide medical knowledge representation is highly retrievable and thus can quickly be communicated into daily routine for the improvement of treatment quality. In addition, other sources like journal articles and medical guidelines can be referenced within the MedRapid system and thus be communicated into daily routine. The fundamental system for the representation of medical reference knowledge (from reference works/books) is by itself not sufficient for frictionless communication among medical staff. Rather, the process of (a) representing medical knowledge, (b) refereeing the represented knowledge, (c) communicating the represented knowledge, and (d) retrieving the represented knowledge has to be unified. MedRapid will soon support the whole process on one server system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameme, Dan Selorm Kwami; Guttromson, Ross
This report characterizes communications network latency under various network topologies and qualities of service (QoS). The characterizations are probabilistic in nature, allowing deeper analysis of stability for Internet Protocol (IP) based feedback control systems used in grid applications. The work involves the use of Raspberry Pi computers as a proxy for a controlled resource, and an ns-3 network simulator on a Linux server to create an experimental platform (testbed) that can be used to model wide-area grid control network communications in smart grid. Modbus protocol is used for information transport, and Routing Information Protocol is used for dynamic route selection within the simulated network.
A simple tool for neuroimaging data sharing
Haselgrove, Christian; Poline, Jean-Baptiste; Kennedy, David N.
2014-01-01
Data sharing is becoming increasingly common, but despite encouragement and facilitation by funding agencies, journals, and some research efforts, most neuroimaging data acquired today is still not shared due to political, financial, social, and technical barriers to sharing data that remain. In particular, technical solutions are few for researchers that are not a part of larger efforts with dedicated sharing infrastructures, and social barriers such as the time commitment required to share can keep data from becoming publicly available. We present a system for sharing neuroimaging data, designed to be simple to use and to provide benefit to the data provider. The system consists of a server at the International Neuroinformatics Coordinating Facility (INCF) and user tools for uploading data to the server. The primary design principle for the user tools is ease of use: the user identifies a directory containing Digital Imaging and Communications in Medicine (DICOM) data, provides their INCF Portal authentication, and provides identifiers for the subject and imaging session. The user tool anonymizes the data and sends it to the server. The server then runs quality control routines on the data, and the data and the quality control reports are made public. The user retains control of the data and may change the sharing policy as they need. The result is that in a few minutes of the user’s time, DICOM data can be anonymized and made publicly available, and an initial quality control assessment can be performed on the data. The system is currently functional, and user tools and access to the public image database are available at http://xnat.incf.org/. PMID:24904398
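To give a flavour of the client-side workflow described above (anonymise, then upload), the sketch below uses pydicom and requests; the specific tags cleared, the subject and session identifiers, the input file name, and the upload URL are illustrative assumptions and do not reflect the INCF tool's actual behaviour or the xnat.incf.org API.

    # A minimal anonymise-and-upload sketch, assuming pydicom and requests are installed.
    import pydicom
    import requests

    def anonymise(path, out_path, subject_id, session_id):
        ds = pydicom.dcmread(path)
        ds.PatientName = subject_id          # replace identifying fields
        ds.PatientID = subject_id
        ds.PatientBirthDate = ""
        ds.StudyDescription = session_id
        ds.save_as(out_path)
        return out_path

    def upload(out_path, url="https://example.org/incf/upload"):   # hypothetical endpoint
        with open(out_path, "rb") as f:
            requests.post(url, files={"file": f})

    upload(anonymise("scan.dcm", "scan_anon.dcm", "sub-001", "ses-01"))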
NASA Technical Reports Server (NTRS)
Sundermier, Amy (Inventor)
2002-01-01
A method for acquiring and assembling software components at execution time into a client program, where the components may be acquired from remote networked servers is disclosed. The acquired components are assembled according to knowledge represented within one or more acquired mediating components. A mediating component implements knowledge of an object model. A mediating component uses its implemented object model knowledge, acquired component class information and polymorphism to assemble components into an interacting program at execution time. The interactions or abstract relationships between components in the object model may be implemented by the mediating component as direct invocations or indirect events or software bus exchanges. The acquired components may establish communications with remote servers. The acquired components may also present a user interface representing data to be exchanged with the remote servers. The mediating components may be assembled into layers, allowing arbitrarily complex programs to be constructed at execution time.
Analyzing Cyber-Physical Threats on Robotic Platforms.
Ahmad Yousef, Khalil M; AlMajali, Anas; Ghalyon, Salah Abu; Dweik, Waleed; Mohd, Bassam J
2018-05-21
Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is amenable to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. Threats target the integrity, availability and confidentiality security requirements of the robotic platforms, which use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause denial-of-service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications.
Analyzing Cyber-Physical Threats on Robotic Platforms †
2018-01-01
Robots are increasingly involved in our daily lives. Fundamental to robots are the communication link (or stream) and the applications that connect the robots to their clients or users. Such communication links and applications are usually supported through a client/server network connection. This networking system is amenable to attack and vulnerable to security threats. Ensuring security and privacy for robotic platforms is thus critical, as failures and attacks could have devastating consequences. In this paper, we examine several cyber-physical security threats that are unique to robotic platforms, specifically the communication link and the applications. Threats target the integrity, availability and confidentiality security requirements of the robotic platforms, which use the MobileEyes/arnlServer client/server applications. A robot attack tool (RAT) was developed to perform specific security attacks. An impact-oriented approach was adopted to analyze the assessment results of the attacks. Tests and experiments of attacks were conducted in a simulation environment and physically on the robot. The simulation environment was based on MobileSim, a software tool for simulating, debugging and experimenting on MobileRobots/ActivMedia platforms and their environments. The robot platform PeopleBot™ was used for physical experiments. The analysis and testing results show that certain attacks were successful at breaching the robot security. Integrity attacks modified commands and manipulated the robot behavior. Availability attacks were able to cause denial-of-service (DoS), and the robot was not responsive to MobileEyes commands. Integrity and availability attacks caused sensitive information on the robot to be hijacked. To mitigate security threats, we provide possible mitigation techniques and suggestions to raise awareness of threats on robotic platforms, especially when the robots are involved in critical missions or applications. PMID:29883403
An efficient quantum scheme for Private Set Intersection
NASA Astrophysics Data System (ADS)
Shi, Run-hua; Mu, Yi; Zhong, Hong; Cui, Jie; Zhang, Shun
2016-01-01
Private Set Intersection allows a client to privately compute a set intersection with the collaboration of the server, which is one of the most fundamental problems in privacy-preserving multiparty collaborative computation. In this paper, we first present a cheat-sensitive quantum scheme for Private Set Intersection. Compared with classical schemes, our scheme has lower communication complexity, which is independent of the size of the server's set. Therefore, it is very suitable for big data services in the Cloud or in large-scale client-server networks.
How Public Is the Web?: Robots, Access, and Scholarly Communication.
ERIC Educational Resources Information Center
Snyder, Herbert; Rosenbaum, Howard
1998-01-01
Examines the use of Robot Exclusion Protocol (REP) to restrict the access of search engine robots to 10 major United States university Web sites. An analysis of Web site searching and interviews with Web server administrators shows that the decision to use this procedure is largely technical and is typically made by the Web server administrator.…
[The database server for the medical bibliography database at Charles University].
Vejvalka, J; Rojíková, V; Ulrych, O; Vorísek, M
1998-01-01
In the medical community, bibliographic databases are widely accepted as a most important source of information for both theoretical and clinical disciplines. To improve access to medical bibliographic databases at Charles University, a database server (ERL by SilverPlatter) was set up at the 2nd Faculty of Medicine in Prague. The server, accessible over the Internet 24 hours a day, 7 days a week, now hosts 14 years of MEDLINE and 10 years of EMBASE Paediatrics. Two different strategies are available for connecting to the server: a specialized client program that communicates over the Internet (suitable for professional searching) and web-based access that requires no specialized software (except the WWW browser) on the client side. The server is now offered to the academic community to host further databases, possibly subscribed to by consortia whose individual members would not subscribe to them on their own.
Ecoupling server: A tool to compute and analyze electronic couplings.
Cabeza de Vaca, Israel; Acebes, Sandra; Guallar, Victor
2016-07-05
Electron transfer processes are often studied through the evaluation and analysis of the electronic coupling (EC). Since most standard QM codes do not readily provide such a measure, additional user-friendly tools to compute and analyze electronic couplings from external wave functions are of high value. The first server to provide a friendly interface for the evaluation and analysis of electronic couplings under two different approximations (FDC and GMH) is presented in this communication. The Ecoupling server accepts inputs from common QM and QM/MM software and provides useful plots to understand and analyze the results easily. The web server has been implemented in CGI-python using Apache and is accessible at http://ecouplingserver.bsc.es. The Ecoupling server is free and open to all users without login. © 2016 Wiley Periodicals, Inc.
EON: software for long time simulations of atomic scale systems
NASA Astrophysics Data System (ADS)
Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme
2014-07-01
The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
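The client/server split described above, with communication abstracted away from the client, can be pictured with a minimal sketch; this is not EON's actual code, and the class and method names are hypothetical, but it shows the pattern of a client asking a communicator for work, evaluating it, and posting the result while the communicator hides whether the exchange is local, queued, or distributed.

    from abc import ABC, abstractmethod

    class Communicator(ABC):
        """Hides how work units travel between the server and its clients."""

        @abstractmethod
        def get_work(self):
            """Return the next work unit, or None when there is nothing to do."""

        @abstractmethod
        def submit_result(self, result):
            """Send a finished result back to the server."""

    class LocalCommunicator(Communicator):
        """Single-machine backend: the 'server' is just an in-memory queue."""

        def __init__(self, work_units):
            self.pending = list(work_units)
            self.results = []

        def get_work(self):
            return self.pending.pop() if self.pending else None

        def submit_result(self, result):
            self.results.append(result)

    def run_client(comm, evaluate):
        """Generic client loop: fetch, evaluate, report, repeat."""
        while (work := comm.get_work()) is not None:
            comm.submit_result(evaluate(work))

    # Example: 'evaluating' a configuration is just squaring a number here.
    comm = LocalCommunicator([1, 2, 3])
    run_client(comm, lambda x: x * x)
    print(comm.results)  # [9, 4, 1]

Swapping LocalCommunicator for a queue-backed or MPI-backed implementation leaves run_client untouched, which is the point of the abstraction.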
Federal Emergency Management Information System (FEMIS) system administration guide. Version 1.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burford, M.J.; Burnett, R.A.; Curtis, L.M.
The Federal Emergency Management Information System (FEMIS) is an emergency management planning and analysis tool that is being developed under the direction of the US Army Chemical and Biological Defense Command. The FEMIS System Administration Guide defines FEMIS hardware and software requirements and gives instructions for installing the FEMIS system package. System administrators, database administrators, and general users can use this guide to install, configure, and maintain the FEMIS client software package. This document provides a description of the FEMIS environment; distribution media; data, communications, and electronic mail servers; user workstations; and system management.
HTML 5 Displays for On-Board Flight Systems
NASA Technical Reports Server (NTRS)
Silva, Chandika
2016-01-01
During my Internship at NASA in the summer of 2016, I was assigned to a project which dealt with developing a web server that would display telemetry and other system data using HTML 5, JavaScript, and CSS. By doing this, it would be possible to view the data across a variety of screen sizes, and establish a standard that could be used to simplify communication and software development between NASA and other countries. Utilizing a web-based approach allowed us to add in more functionality, as well as make the displays more aesthetically pleasing for the users. When I was assigned to this project my main task was to first establish communication with the current display server. This display server would output data from the on-board systems in XML format. Once communication was established I was then asked to create a dynamic telemetry table web page that would update its header and change as new information came in. After this was completed, certain minor functionalities were added to the table, such as a hide-column and a filter-by-system option. This was for the purpose of making the table more useful for the users, as they can now filter and view relevant data. Finally my last task was to create a graphical system display for all the systems on the spacecraft. This was by far the most challenging part of my internship, as finding a JavaScript library that was both free and contained useful functions to assist me in my task was difficult. In the end I was able to use the JointJs library and accomplish the task. With the help of my mentor and the HIVE lab team, we were able to establish stable communication with the display server. We also succeeded in creating a fully dynamic telemetry table and in developing a graphical system display for the advanced modular power system. Working in JSC for this internship has taught me a lot about coding in JavaScript and HTML 5. I was also introduced to the concept of developing software as a team, and exposed to the different types of programs that are used to simplify team coding, such as GitLab. While in JSC, I took full advantage of and attended the lectures that were held on site. I learned a lot about what it is NASA does and about the interesting projects that are conducted here. One of the lectures I attended was about the selection process and the criteria used to select future astronauts for flight missions. This truly had an impact on my future plans as it showed me that this path was a viable option for me. After this internship I plan on completing my undergraduate course work and moving on to a master's degree. However, during the time in which I will be completing my master's course work, I would like to apply for the NASA Pathways graduate program and, if I am accepted, eventually move on to being a full-time civil servant. Working at NASA has not only been enjoyable, but full of information and great experiences that have motivated me to seek full-time employment here in the near future.
NASA Astrophysics Data System (ADS)
Kwon, Hyeokjun; Oh, Sechang; Kumar, Prashanth S.; Varadan, Vijay K.
2012-10-01
Cardiovascular diseases (CVDs) lead to sudden cardiac death through irregularities of the cardiac signal caused by abnormalities of the blood vessels and cardiac structure. For the last two decades, cardiac disease research in men has been under active discussion. As a result, the death rate from cardiac disease in men has been falling gradually, while the death rate in women due to CVD has been relatively increasing [2]. The main reasons for this are a lack of awareness of the seriousness of female CVD and the fact that the symptoms of female CVD differ from those of male CVD. Because female CVD is usually accompanied by ordinary symptoms that do not signal heart abnormality, such as unusual fatigue, sleep disturbances, shortness of breath, anxiety, chest discomfort, and indigestion (dyspepsia), most female CVD patients do not realize that these symptoms are related to CVD. Therefore, periodic ECG signal observation is required for women with cardiac disease. ElectroCardioGram (ECG) detection, treadmill/exercise ECG, nuclear scans, coronary angiography, and intracoronary ultrasound are used to diagnose heart abnormalities. Among these checkup methods, periodic ECG monitoring is a very effective way to diagnose cardiac disease and detect heart abnormality early. This paper proposes an effective ECG monitoring system for women, attached to the woman's brassiere and using an augmented chest lead attachment method. The proposed system consists of an ECG signal transmission system and a server program that displays and analyzes the transmitted ECG. The ECG signal transmission system consists of three parts: ECG physical signal detection with two electrodes made of a gold nanowire structure, data acquisition with an AD converter, and data transmission using GPRS (General Packet Radio Service) communication. Usually, Ag/AgCl or gold cup electrodes with conductive gel are used to detect human biosignals; however, the gel can dry out during long-term monitoring. The gold nanowire electrodes, which require no gel, are attached beneath the chest position of a brassiere and convert the physical ECG signal into a voltage potential signal. This signal is converted to a digital signal by the AD converter included in the microprocessor and saved every 1 s in the microprocessor's internal RAM. To transmit the saved data to a server computer at a remote location, the system uses GPRS communication technology, which can form a wide area network (WAN) without any gateway or repeater; the transmission system operates in GPRS client mode. The remote server runs a program for displaying and analyzing the transmitted ECG. To display the ECG data, the program operates in TCP/IP server mode with a static IP address. To analyze the ECG data, the paper proposes a motion artifact removal algorithm that includes an adaptive filter with LMS (least mean squares), a baseline detection algorithm using predictability estimation theory, a filter with a moving weighted factor, a low-pass filter, peak-to-peak detection, and interpolation.
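As a rough illustration of the adaptive-filtering step mentioned above, the sketch below implements a generic least-mean-squares (LMS) noise canceller that subtracts a reference signal (for example, a channel correlated with motion) from the measured ECG; the filter length and step size are illustrative choices, not values from the paper.

    import numpy as np

    def lms_cancel(ecg, reference, taps=8, mu=0.01):
        """Generic LMS noise canceller: returns the error signal, i.e. the
        ECG with the component correlated to `reference` removed."""
        w = np.zeros(taps)                    # adaptive filter weights
        out = np.zeros(len(ecg))
        for n in range(taps, len(ecg)):
            x = reference[n - taps:n][::-1]   # most recent reference samples
            y = np.dot(w, x)                  # estimate of the artifact
            e = ecg[n] - y                    # cleaned sample
            w += 2 * mu * e * x               # LMS weight update
            out[n] = e
        return out

    # Toy usage: a sine 'ECG' contaminated by a noisy reference signal.
    t = np.arange(0, 1, 1 / 250.0)
    ref = np.random.randn(t.size)
    ecg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * ref
    clean = lms_cancel(ecg, ref)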
Cardio-PACs: a new opportunity
NASA Astrophysics Data System (ADS)
Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary
2000-05-01
It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
Environmental Monitoring Using Sensor Networks
NASA Astrophysics Data System (ADS)
Yang, J.; Zhang, C.; Li, X.; Huang, Y.; Fu, S.; Acevedo, M. F.
2008-12-01
Environmental observatories, consisting of a variety of sensor systems, computational resources and informatics, are important for us to observe, model, predict, and ultimately help preserve the health of the nature. The commoditization and proliferation of coin-to-palm sized wireless sensors will allow environmental monitoring with unprecedented fine spatial and temporal resolution. Once scattered around, these sensors can identify themselves, locate their positions, describe their functions, and self-organize into a network. They communicate through wireless channel with nearby sensors and transmit data through multi-hop protocols to a gateway, which can forward information to a remote data server. In this project, we describe an environmental observatory called Texas Environmental Observatory (TEO) that incorporates a sensor network system with intertwined wired and wireless sensors. We are enhancing and expanding the existing wired weather stations to include wireless sensor networks (WSNs) and telemetry using solar-powered cellular modems. The new WSNs will monitor soil moisture and support long-term hydrologic modeling. Hydrologic models are helpful in predicting how changes in land cover translate into changes in the stream flow regime. These models require inputs that are difficult to measure over large areas, especially variables related to storm events, such as soil moisture antecedent conditions and rainfall amount and intensity. This will also contribute to improve rainfall estimations from meteorological radar data and enhance hydrological forecasts. Sensor data are transmitted from monitoring site to a Central Data Collection (CDC) Server. We incorporate a GPRS modem for wireless telemetry, a single-board computer (SBC) as Remote Field Gateway (RFG) Server, and a WSN for distributed soil moisture monitoring. The RFG provides effective control, management, and coordination of two independent sensor systems, i.e., a traditional datalogger-based wired sensor system and the WSN-based wireless sensor system. The RFG also supports remote manipulation of the devices in the field such as the SBC, datalogger, and WSN. Sensor data collected from the distributed monitoring stations are stored in a database (DB) Server. The CDC Server acts as an intermediate component to hide the heterogeneity of different devices and support data validation required by the DB Server. Daemon programs running on the CDC Server pre-process the data before it is inserted into the database, and periodically perform synchronization tasks. A SWE-compliant data repository is installed to enable data exchange, accepting data from both internal DB Server and external sources through the OGC web services. The web portal, i.e. TEO Online, serves as a user-friendly interface for data visualization, analysis, synthesis, modeling, and K-12 educational outreach activities. It also provides useful capabilities for system developers and operators to remotely monitor system status and remotely update software and system configuration, which greatly simplifies the system debugging and maintenance tasks. We also implement Sensor Observation Services (SOS) at this layer, conforming to the SWE standard to facilitate data exchange. The standard SensorML/O&M data representation makes it easy to integrate our sensor data into the existing Geographic Information Systems (GIS) web services and exchange the data with other organizations.
NASA Astrophysics Data System (ADS)
Faden, J.; Vandegriff, J. D.; Weigel, R. S.
2016-12-01
Autoplot was introduced in 2008 as an easy-to-use plotting tool for the space physics community. It reads data from a variety of file resources, such as CDF and HDF files, and from a number of specialized data servers, such as the PDS/PPI's DIT-DOS, CDAWeb, and the University of Iowa's RPWG Das2Server. Each of these servers has optimized methods for transmitting data to display in Autoplot, but requires coordination and specialized software to work, limiting Autoplot's ability to access new servers and datasets. Likewise, groups who would like software to access their APIs must either write their own clients or publish a specification document in the hope that people will write clients. The HAPI specification was written so that a simple, standard API could be used by both Autoplot and server implementations, removing these barriers to the free flow of time series data. Autoplot's software for communicating with HAPI servers is presented, showing the user interface scientists will use and how data servers might implement the HAPI specification to provide access to their data. This also includes instructions on how Autoplot is installed and used on desktop computers, and used to view data from the RBSP, Juno, and other missions.
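The shape of a HAPI exchange can be sketched with the standard library alone. The endpoint layout and parameter names below (catalog, info, data, time.min/time.max, CSV output) follow early versions of the HAPI specification and the server URL is hypothetical; check the specification version of any real server before relying on these names.

    import csv
    import io
    import json
    import urllib.parse
    import urllib.request

    def hapi_get(server, endpoint, **params):
        """GET <server>/<endpoint>?<params> and return the body as text."""
        url = f"{server}/{endpoint}"
        if params:
            url += "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    server = "https://example.org/hapi"              # hypothetical HAPI server
    catalog = json.loads(hapi_get(server, "catalog"))
    dataset = catalog["catalog"][0]["id"]            # first advertised dataset
    info = json.loads(hapi_get(server, "info", id=dataset))
    print(info.get("parameters", []))                # parameter metadata, if provided

    params = {"id": dataset,
              "time.min": "2016-01-01T00:00:00Z",
              "time.max": "2016-01-02T00:00:00Z",
              "format": "csv"}
    for row in csv.reader(io.StringIO(hapi_get(server, "data", **params))):
        print(row)

Because the interface is this small, a plotting client and a data provider can be developed independently against the same three endpoints, which is the barrier-removal argument made in the abstract.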
Design and Implementation of Streaming Media Server Cluster Based on FFMpeg
Zhao, Hong; Zhou, Chun-long; Jin, Bao-zhao
2015-01-01
Poor performance and network congestion are commonly observed in the streaming media single server system. This paper proposes a scheme to construct a streaming media server cluster system based on FFMpeg. In this scheme, different users are distributed to different servers according to their locations and the balance among servers is maintained by the dynamic load-balancing algorithm based on active feedback. Furthermore, a service redirection algorithm is proposed to improve the transmission efficiency of streaming media data. The experiment results show that the server cluster system has significantly alleviated the network congestion and improved the performance in comparison with the single server system. PMID:25734187
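The load-balancing idea, choosing a server from load reports actively fed back by cluster members and redirecting clients by region, can be sketched as follows; the inverse-load weighting is an illustrative choice, not the algorithm from the paper.

    import random

    class Balancer:
        """Keeps the latest load report from each media server and redirects
        new clients toward lightly loaded servers in roughly the same region."""

        def __init__(self):
            self.reports = {}   # server -> {"region": str, "load": float in [0, 1]}

        def feedback(self, server, region, load):
            """Called periodically by each server (the 'active feedback')."""
            self.reports[server] = {"region": region, "load": load}

        def redirect(self, client_region):
            # Prefer servers in the client's region; fall back to the whole cluster.
            local = {s: r for s, r in self.reports.items() if r["region"] == client_region}
            pool = local or self.reports
            servers = list(pool)
            # Weight inversely by load so an idle server is picked most often.
            weights = [1.0 - pool[s]["load"] + 1e-6 for s in servers]
            return random.choices(servers, weights=weights, k=1)[0]

    b = Balancer()
    b.feedback("rtmp://s1.example.net", "east", 0.85)
    b.feedback("rtmp://s2.example.net", "east", 0.20)
    b.feedback("rtmp://s3.example.net", "west", 0.10)
    print(b.redirect("east"))   # usually s2, the lightly loaded eastern server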
NASA Astrophysics Data System (ADS)
Silva, Augusto F. d.; Costa, Carlos; Abrantes, Pedro; Gama, Vasco; Den Boer, Ad
1998-07-01
This paper describes an integrated system designed to provide efficient means for DICOM-compliant cardiac imaging archival, transmission, and visualization, based on a communications backbone matching recent enabling telematic technologies such as Asynchronous Transfer Mode (ATM) and switched Local Area Networks (LANs). Within a distributed client-server framework, the system was conceived on a modality-based, bottom-up approach, aiming at ultrafast access to short-term archives and seamless retrieval of cardiac video sequences throughout review stations located at the outpatient referral rooms, intensive and intermediate care units, and operating theaters.
Strum, David P.; Vargas, Luis G.; May, Jerrold H.
1997-01-01
The plans for the Resource Coordination for Surgical Services (RCSS) system incorporate a distributed objectbase with a coordinating server. User-centered information screens are customized for each geographic location in surgical services. User interfaces are designed to mimic paper lists and worksheets used by health care providers. Patient-specific and site-specific data will be entered and maintained by providers at each geographic location, but also rebroadcast and displayed for all providers. Although RCSS is primarily a communications system, it will also support review of surgical utilization and operative scheduling. PMID:9067878
Li, Chun-Ta; Lee, Cheng-Chi; Weng, Chi-Yao; Chen, Song-Jhih
2016-11-01
Secure user authentication schemes in many e-Healthcare applications try to prevent unauthorized users from intruding into the e-Healthcare systems, and a remote user and a medical server can establish session keys for securing the subsequent communications. However, many schemes do not mask the users' identity information while constructing a login session between two or more parties, even though the personal privacy of users is a significant topic for e-Healthcare systems. In order to preserve the personal privacy of users, dynamic identity based authentication schemes hide the user's real identity during network communications, and only the medical server knows the login user's identity. In addition, most of the existing dynamic identity based authentication schemes ignore input verification during login, and this flaw may lead to inefficiency when incorrect inputs are given in the login phase. Regarding the use of secure authentication mechanisms for e-Healthcare systems, this paper presents a new dynamic identity and chaotic maps based authentication scheme, and a secure data protection approach is employed in every session to prevent illegal intrusions. The proposed scheme can not only quickly detect incorrect inputs during the login and password change phases but can also invalidate the future use of a lost/stolen smart card. Compared with other recent authentication schemes in terms of functionality and efficiency, the proposed scheme satisfies desirable security attributes and maintains acceptable efficiency in terms of the computational overheads for e-Healthcare systems.
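Chaotic-map-based schemes of this kind typically rest on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x), which plays the role that modular exponentiation plays in Diffie-Hellman-style key agreement. The snippet below only demonstrates that property numerically; it is not the authentication protocol of the paper.

    import math

    def chebyshev(n, x):
        """T_n(x) = cos(n * arccos(x)) for x in [-1, 1]."""
        return math.cos(n * math.acos(x))

    x = 0.53
    r, s = 7, 11
    # Semigroup property: composing T_r and T_s equals T_{r*s}, in either order.
    lhs = chebyshev(r, chebyshev(s, x))
    rhs = chebyshev(s, chebyshev(r, x))
    both = chebyshev(r * s, x)
    print(lhs, rhs, both)   # all three agree up to floating-point error
    # In a key agreement, each side keeps r (or s) secret and exchanges
    # T_r(x) and T_s(x); both can then compute the shared value T_{rs}(x).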
Wide Area Information Servers: An Executive Information System for Unstructured Files.
ERIC Educational Resources Information Center
Kahle, Brewster; And Others
1992-01-01
Describes the Wide Area Information Servers (WAIS) system, an integrated information retrieval system for corporate end users. Discussion covers general characteristics of the system, search techniques, protocol development, user interfaces, servers, selective dissemination of information, nontextual data, access to other servers, and description…
The D3 Middleware Architecture
NASA Technical Reports Server (NTRS)
Walton, Joan; Filman, Robert E.; Korsmeyer, David J.; Lee, Diana D.; Mak, Ron; Patel, Tarang
2002-01-01
DARWIN is a NASA-developed, Internet-based system for enabling aerospace researchers to securely and remotely access and collaborate on the analysis of aerospace vehicle design data, primarily the results of wind-tunnel testing and numeric (e.g., computational fluid-dynamics) model executions. DARWIN captures, stores and indexes data; manages derived knowledge (such as visualizations across multiple datasets); and provides an environment for designers to collaborate in the analysis of test results. DARWIN is an interesting application because it supports high volumes of data, integrates multiple modalities of data display (e.g., images and data visualizations), and provides non-trivial access control mechanisms. DARWIN enables collaboration by allowing sharing not only of visualizations of data, but also of commentary about and views of data. Here we provide an overview of the architecture of D3, the third generation of DARWIN. Earlier versions of DARWIN were characterized by browser-based interfaces and a hodge-podge of server technologies: CGI scripts, applets, PERL, and so forth. But browsers proved difficult to control, and a proliferation of computational mechanisms proved inefficient and difficult to maintain. D3 substitutes a pure-Java approach for that medley: a Java client communicates (through RMI over HTTPS) with a Java-based application server. Code on the server accesses information from JDBC databases, distributed LDAP security services, and a collaborative information system. D3 is a three-tier architecture, but unlike 'E-commerce' applications, the data usage pattern suggests different strategies than traditional Enterprise Java Beans - we need to move volumes of related data together, considerable processing happens on the client, and the 'business logic' on the server side is primarily data integration and collaboration. With D3, we are extending DARWIN to handle other data domains and to be a distributed system, where a single login allows a user transparent access to test results from multiple servers and authority domains.
Asynchronous data change notification between database server and accelerator controls system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, W.; Morris, J.; Nemesure, S.
2011-10-10
Database data change notification (DCN) is a commonly used feature. Not all database management systems (DBMS) provide an explicit DCN mechanism. Even for those DBMS's which support DCN (such as Oracle and MS SQL server), some server side and/or client side programming may be required to make the DCN system work. This makes the setup of DCN between database server and interested clients tedious and time consuming. In accelerator control systems, there are many well established software client/server architectures (such as CDEV, EPICS, and ADO) that can be used to implement data reflection servers that transfer data asynchronously to any client using the standard SET/GET API. This paper describes a method for using such a data reflection server to set up asynchronous DCN (ADCN) between a DBMS and clients. This method works well for all DBMS systems which provide database trigger functionality. Asynchronous data change notification (ADCN) between database server and clients can be realized by combining the use of a database trigger mechanism, which is supported by major DBMS systems, with server processes that use client/server software architectures that are familiar in the accelerator controls community (such as EPICS, CDEV or ADO). This approach makes the ADCN system easy to set up and integrate into an accelerator controls system. Several ADCN systems have been set up and used in the RHIC-AGS controls system.
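The trigger-plus-reflection-server idea can be sketched roughly as follows, using SQLite and an in-process callback in place of Oracle and an EPICS/CDEV/ADO server: a trigger copies every change into a notification table, and a small server loop drains that table and pushes each change to subscribed clients.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE settings(name TEXT PRIMARY KEY, value REAL);
        CREATE TABLE change_log(id INTEGER PRIMARY KEY AUTOINCREMENT,
                                name TEXT, value REAL);
        -- The trigger is what makes the notification asynchronous: the writer
        -- only touches 'settings'; the log row is produced as a side effect.
        CREATE TRIGGER settings_dcn AFTER UPDATE ON settings
        BEGIN
            INSERT INTO change_log(name, value) VALUES (NEW.name, NEW.value);
        END;
    """)

    subscribers = []   # callables standing in for SET/GET clients

    def reflection_server_poll():
        """Drain the change log and push each change to every subscriber."""
        rows = db.execute("SELECT id, name, value FROM change_log ORDER BY id").fetchall()
        for _, name, value in rows:
            for push in subscribers:
                push(name, value)
        db.execute("DELETE FROM change_log")
        db.commit()

    subscribers.append(lambda n, v: print(f"client notified: {n} = {v}"))
    db.execute("INSERT INTO settings VALUES ('magnet_current', 1.0)")
    db.execute("UPDATE settings SET value = 2.5 WHERE name = 'magnet_current'")
    db.commit()
    reflection_server_poll()   # -> client notified: magnet_current = 2.5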
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-01-01
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613
Personal diabetes management system based on ubiquitous computing technology.
Park, Kyung-Soon; Kim, Nam-Jin; Hong, Joo-Hyun; Park, Mi-Sook; Cha, Eun-Jung; Lee, Tae-Soo
2006-01-01
Assisting diabetes patients to self-manage blood glucose testing and insulin injection is of great importance for their healthcare. This study presents a PDA-based system to manage personal glucose level data, interfaced with a small glucometer through a serial port. The data stored in the PDA can be transmitted by cradle or wireless communication to a remote web server, where further medical analysis and service is provided. This system enables more efficient and systematic management of diabetes patients through self-management and remote medical practice.
Cooperating Expert Systems For Space Station Power Distribution Management
NASA Astrophysics Data System (ADS)
Nguyen, T. A.; Chiou, W. C.
1987-02-01
In a complex system such as the manned Space Station, it is deemed necessary that many expert systems must perform tasks in a concurrent and cooperative manner. An important question that arises is: what cooperative-task-performing models are appropriate for multiple expert systems to jointly perform tasks? The answer to this question will provide a crucial automation design criterion for the Space Station complex systems architecture. Based on a client/server model for performing tasks, we have developed a system that acts as a front end to support loosely coupled communications between expert systems running on multiple Symbolics machines. As an example, we use two ART*-based expert systems to demonstrate the concept of parallel symbolic manipulation for power distribution management and a dynamic load planner/scheduler in the simulated Space Station environment. This ongoing work will also explore other cooperative-task-performing models as alternatives and evaluate inter- and intra-expert-system communication mechanisms. It will serve as a testbed and a benchmarking tool for other Space Station expert subsystem communication and information exchange.
Amin, Ruhul; Islam, S K Hafizul; Biswas, G P; Khan, Muhammad Khurram; Obaidat, Mohammad S
2015-11-01
In order to access a remote medical server, patients generally utilize a smart card to log in to the server. It has been observed that most user (patient) authentication protocols suffer from smart card stolen attacks, which means the attacker can mount several common attacks after extracting the smart card information. Recently, Lu et al. proposed a session key agreement protocol between the patient and remote medical server and claimed that the protocol is secure against relevant security attacks. However, this paper presents several security attacks on Lu et al.'s protocol, such as an identity trace attack, new smart card issue attack, patient impersonation attack and medical server impersonation attack. In order to fix the mentioned security pitfalls, including the smart card stolen attack, this paper proposes an efficient remote mutual authentication protocol using a smart card. We have then simulated the proposed protocol using the widely accepted AVISPA simulation tool, whose results confirm that the protocol is secure against active and passive attacks, including replay and man-in-the-middle attacks. Moreover, rigorous security analysis proves that the proposed protocol provides strong protection against the relevant security attacks, including the smart card stolen attack. We compare the proposed scheme with several related schemes in terms of computation cost and communication cost as well as security functionalities. It has been observed that the proposed scheme is comparatively better than related existing schemes.
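The flavor of such a mutual authentication exchange can be conveyed with a generic nonce-and-HMAC sketch; this illustrates challenge-response between a card holder and a medical server under a shared secret, and is not the protocol proposed in the paper or Lu et al.'s scheme.

    import hmac
    import hashlib
    import os

    def mac(key, *parts):
        return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

    shared_key = os.urandom(32)          # secret assumed to be stored on the smart card
    patient_id = b"patient-42"

    # Patient -> server: identity, a fresh nonce, and a proof of key possession.
    n_p = os.urandom(16)
    msg1 = (patient_id, n_p, mac(shared_key, patient_id, n_p))

    # Server verifies, then answers with its own nonce and its own proof.
    pid, n_p, tag = msg1
    assert hmac.compare_digest(tag, mac(shared_key, pid, n_p))
    n_s = os.urandom(16)
    msg2 = (n_s, mac(shared_key, n_s, n_p, b"server"))

    # Patient verifies the server's proof; both sides then derive a session key
    # from both nonces, so neither party alone controls it.
    n_s, tag_s = msg2
    assert hmac.compare_digest(tag_s, mac(shared_key, n_s, n_p, b"server"))
    session_key = mac(shared_key, n_p, n_s, b"session")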
Mobile real-time data acquisition system for application in preventive medicine.
Neubert, Sebastian; Arndt, Dagmar; Thurow, Kerstin; Stoll, Regina
2010-05-01
In this article, the development of a system for online monitoring of a subject's physiological parameters and subjective workload regardless of location has been presented, which allows for studies on occupational health. In the sector of occupational health, modern acquisition systems are needed. Such systems can be used by the subject during usual daily routines without being influenced by the presence of an examiner. Moreover, the system's influence on the subject should be reduced to a minimum to receive reliable data from the examination. The acquisition system is based on a mobile handheld (or smart phone), which allows both management of the communication process and input of several dialog data (e.g., questionnaires). A sensor electronics module permits the acquisition of different physiological parameters and their online transmission to the handheld via Bluetooth. The mobile handheld and the sensor electronics module constitute a wireless personal area network. The handheld allows the first analysis, the synchronization of the data, and the continuous data transfer to a communication server by the integrated mobile radio standards of the handheld. The communication server stores the incoming data of several subjects in an application-dependent database and allows access from all over the world via a Web-based management system. The developed system permits one examiner to monitor the physiological parameters and the subjective workload of several subjects in different locations at the same time. Thereby the subjects can move almost freely in any area covered by the mobile network. The mobile handheld allows the popping-up of the questionnaires at flexible time intervals. This electronic input of the dialog data, in comparison to the manual documentation on papers, is more comfortable to the subject as well as to the examiner for an analysis. A Web-based management application facilitates a continuous remote monitoring of the physiological and the subjective data of the subject.
Velonakis, E; Mantas, J; Mavrikakis, I
2006-01-01
Occupational health and safety management constitutes a field of increasing interest. Institutions, in cooperation with enterprises, make synchronized efforts to introduce quality management systems to this field. Computer networks can offer such services via TCP/IP, which is a reliable protocol for workflow management between enterprises and institutions. The design of such a network is based on several factors in order to achieve defined criteria and connectivity with other networks. The network will consist of certain nodes responsible for informing executive persons on Occupational Health and Safety. A web database has been planned for inserting and searching documents, and for answering and processing questionnaires. The submission of files to a server and the answers to questionnaires through the web help the experts make corrections and improvements to their activities. Based on the requirements of enterprises, we have constructed a web file server. Files are submitted so that users can retrieve the files they need. Access is limited to authorized users, and digital watermarks authenticate and protect digital objects. The Health and Safety Management System follows ISO 18001, and its implementation through the web site is an aim. The whole application has been developed and implemented on a pilot basis for the health services sector. It is already installed within a hospital, supporting health and safety management among different departments of the hospital and allowing communication through the WEB with other hospitals.
Revolutionize Situational Awareness in Emergencies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hehlen, Markus Peter
This report describes an integrated system that provides real-time actionable information to first responders. LANL will integrate three technologies to form an advanced predictive real-time sensor network, including compact chemical and wind sensors in a low-cost rugged package for outdoor installation; a flexible, robust communication architecture linking sensors in near-real time to globally accessible servers; and the QUIC code, which predicts contamination transport and dispersal in urban environments in near real time.
Integration of a clinical trial database with a PACS
NASA Astrophysics Data System (ADS)
van Herk, M.
2014-03-01
Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data is augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates over HTTP with a gateway server inside the hospital's firewall; 2) on this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) the scripts then collect, anonymize, zip and transmit the selected data to a central trial server; 4) there the data is stored in a DICOM archive which allows authorized ECRF users to view and download the anonymous images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides to use the gateway in passive (receiving) mode or in an active mode going out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.
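Step 3 above (collect, anonymize, zip, transmit) might look roughly like the following, assuming the pydicom package is available on the gateway and the trial server exposes an upload URL; the blanked tags and the URL are illustrative, and a real deployment would follow the site's anonymization policy rather than this short list.

    import io
    import zipfile
    import urllib.request
    import pydicom  # third-party package, assumed installed on the gateway

    def anonymize(path, subject_code):
        """Load one DICOM object and blank the obvious identifying fields."""
        ds = pydicom.dcmread(path)
        ds.PatientName = subject_code
        ds.PatientID = subject_code
        ds.PatientBirthDate = ""      # illustrative subset of identifying tags
        ds.PatientAddress = ""
        return ds

    def ship(paths, subject_code, upload_url):
        """Anonymize, zip in memory, and POST the bundle to the trial server."""
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as z:
            for i, p in enumerate(paths):
                out = io.BytesIO()
                anonymize(p, subject_code).save_as(out)
                z.writestr(f"{subject_code}_{i:04d}.dcm", out.getvalue())
        req = urllib.request.Request(upload_url, data=buf.getvalue(),
                                     headers={"Content-Type": "application/zip"})
        urllib.request.urlopen(req)

    # ship(["/data/study1/img001.dcm"], "TRIAL-0007", "https://trial.example.org/upload")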
Managing Attribute—Value Clinical Trials Data Using the ACT/DB Client—Server Database System
Nadkarni, Prakash M.; Brandt, Cynthia; Frawley, Sandra; Sayward, Frederick G.; Einbinder, Robin; Zelterman, Daniel; Schacter, Lee; Miller, Perry L.
1998-01-01
ACT/DB is a client—server database application for storing clinical trials and outcomes data, which is currently undergoing initial pilot use. It stores most of its data in entity—attribute—value form. Such data are segregated according to data type to allow indexing by value when possible, and binary large object data are managed in the same way as other data. ACT/DB lets an investigator design a study rapidly by defining the parameters (or attributes) that are to be gathered, as well as their logical grouping for purposes of display and data entry. ACT/DB generates customizable data entry. The data can be viewed through several standard reports as well as exported as text to external analysis programs. ACT/DB is designed to encourage reuse of parameters across multiple studies and has facilities for dictionary search and maintenance. It uses a Microsoft Access client running on Windows 95 machines, which communicates with an Oracle server running on a UNIX platform. ACT/DB is being used to manage the data for seven studies in its initial deployment. PMID:9524347
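The entity-attribute-value layout described above can be illustrated with a tiny sketch (generic SQL, not ACT/DB's actual Oracle schema): every observation is one row naming the patient (entity), the parameter (attribute), and the recorded value, and values are segregated by data type so they can still be indexed, so adding a new study parameter requires no schema change.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE attribute(id INTEGER PRIMARY KEY, name TEXT UNIQUE);
        -- One table per broad data type so values can be indexed and compared.
        CREATE TABLE obs_numeric(entity TEXT, attr_id INTEGER, value REAL);
        CREATE TABLE obs_text(entity TEXT, attr_id INTEGER, value TEXT);
    """)

    def attr_id(name):
        """Look up (or register) a parameter in the attribute dictionary."""
        db.execute("INSERT OR IGNORE INTO attribute(name) VALUES (?)", (name,))
        return db.execute("SELECT id FROM attribute WHERE name = ?", (name,)).fetchone()[0]

    # Adding a brand-new parameter is just data, not a schema migration.
    db.execute("INSERT INTO obs_numeric VALUES (?, ?, ?)",
               ("patient-1", attr_id("systolic_bp"), 128))
    db.execute("INSERT INTO obs_text VALUES (?, ?, ?)",
               ("patient-1", attr_id("adverse_event"), "headache"))

    row = db.execute("""SELECT a.name, o.value FROM obs_numeric o
                        JOIN attribute a ON a.id = o.attr_id
                        WHERE o.entity = 'patient-1'""").fetchone()
    print(row)   # ('systolic_bp', 128.0)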
NASA Technical Reports Server (NTRS)
Connell, Andrea M.
2011-01-01
The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.
A remote instruction system empowered by tightly shared haptic sensation
NASA Astrophysics Data System (ADS)
Nishino, Hiroaki; Yamaguchi, Akira; Kagawa, Tsuneo; Utsumiya, Kouichi
2007-09-01
We present a system that realizes an on-line instruction environment among physically separated participants based on a multi-modal communication strategy. In addition to visual and acoustic information, the communication modalities commonly used in network environments, our system provides a haptic channel that intuitively conveys partners' sense of touch. The human touch sensation, however, is very sensitive to delays and jitter in networked virtual reality (NVR) systems, so a method to compensate for such negative factors needs to be provided. We show an NVR architecture that implements a basic framework that can be shared by various applications and effectively deals with these problems. We take a hybrid approach, achieving data consistency with a client-server model and scalability with a peer-to-peer model. As an application system built on the proposed architecture, a remote instruction system targeted at teaching handwritten characters and line patterns over a Korea-Japan high-speed research network is also described.
Yokohama, Noriya
2003-09-01
The author constructed a medical image network system using open source software and taking security into consideration. The system enabled search and browsing with a WWW browser, and images were stored in a DICOM server. In order to realize this function, software was developed in the PHP language to fill the gap between the DICOM protocol and HTTP. The transmission speed was evaluated with respect to the difference between the DICOM and HTTP protocols. Furthermore, an attempt was made to evaluate the convenience of medical image access with a personal information terminal via the Internet through a high-speed mobile communication terminal. The results suggest the feasibility of remote diagnosis and application to emergency care.
NASA Technical Reports Server (NTRS)
Tuey, Richard C.; Collins, Mary; Caswell, Pamela; Haynes, Bob; Nelson, Michael L.; Holm, Jeanne; Buquo, Lynn; Tingle, Annette; Cooper, Bill; Stiltner, Roy
1996-01-01
This evaluation report contains an introduction, seven chapters, and five appendices. The Introduction describes the purpose, conceptual framework, functional description, and technical report server of the STI Electronic Document Distribution (EDD) project. Chapter 1 documents the results of the prototype STI EDD in actual operation. Chapter 2 documents each NASA center's post-processing publication processes. Chapter 3 documents each center's STI software, hardware, and communications configurations. Chapter 7 documents STI EDD policy, practices, and procedures. The appendices, which are contained in Part 2 of this document, consist of (1) STI EDD Project Plan, (2) Team members, (3) Phasing Schedules, (4) Accessing On-line Reports, and (5) Creating an HTML File and Setting Up an xTRS. In summary, Stage 4 of the NASAwide Electronic Publishing System is the final phase of its implementation through the prototyping and gradual integration of each NASA center's electronic printing systems, desktop publishing systems, and technical report servers to be able to provide to NASA's engineers, researchers, scientists, and external users the widest practicable and appropriate dissemination of information concerning its activities and the results thereof to their workstations.
NASA Technical Reports Server (NTRS)
Tuey, Richard C.; Collins, Mary; Caswell, Pamela; Haynes, Bob; Nelson, Michael L.; Holm, Jeanne; Buquo, Lynn; Tingle, Annette; Cooper, Bill; Stiltner, Roy
1996-01-01
This evaluation report contains an introduction, seven chapters, and five appendices. The Introduction describes the purpose, conceptual framework, functional description, and technical report server of the Scientific and Technical Information (STI) Electronic Document Distribution (EDD) project. Chapter 1 documents the results of the prototype STI EDD in actual operation. Chapter 2 documents each NASA center's post-processing publication processes. Chapter 3 documents each center's STI software, hardware, and communications configurations. Chapter 7 documents STI EDD policy, practices, and procedures. The appendices consist of (A) the STI EDD Project Plan, (B) Team members, (C) Phasing Schedules, (D) Accessing On-line Reports, and (E) Creating an HTML File and Setting Up an xTRS. In summary, Stage 4 of the NASAwide Electronic Publishing System is the final phase of its implementation through the prototyping and gradual integration of each NASA center's electronic printing systems, desktop publishing systems, and technical report servers, to be able to provide to NASA's engineers, researchers, scientists, and external users, the widest practicable and appropriate dissemination of information concerning its activities and the results thereof to their workstations.
CMS distributed data analysis with CRAB3
Mascheroni, M.; Balcas, J.; Belforte, S.; ...
2015-12-23
The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool which facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB has been successfully employed by an average of 350 distinct users each week executing about 200,000 jobs per day. CRAB has been significantly upgraded in order to face the new challenges posed by LHC Run 2. Components of the new system include 1) a lightweight client, 2) a central primary server which communicates with the clients through a REST interface, 3) secondary servers which manage user analysis tasks and submit jobs to the CMS resource provisioning system, and 4) a central service to asynchronously move user data from temporary storage in the execution site to the desired storage location. Furthermore, the new system improves the robustness, scalability and sustainability of the service. Here we provide an overview of the new system, operation, and user support, report on its current status, and identify lessons learned from the commissioning phase and production roll-out.
Integrating medical imaging analyses through a high-throughput bundled resource imaging system
NASA Astrophysics Data System (ADS)
Covington, Kelsie; Welch, E. Brian; Jeong, Ha-Kyu; Landman, Bennett A.
2011-03-01
Exploitation of advanced, PACS-centric image analysis and interpretation pipelines provides well-developed storage, retrieval, and archival capabilities along with state-of-the-art data providence, visualization, and clinical collaboration technologies. However, pursuit of integrated medical imaging analysis through a PACS environment can be limiting in terms of the overhead required to validate, evaluate and integrate emerging research technologies. Herein, we address this challenge through presentation of a high-throughput bundled resource imaging system (HUBRIS) as an extension to the Philips Research Imaging Development Environment (PRIDE). HUBRIS enables PACS-connected medical imaging equipment to invoke tools provided by the Java Imaging Science Toolkit (JIST) so that a medical imaging platform (e.g., a magnetic resonance imaging scanner) can pass images and parameters to a server, which communicates with a grid computing facility to invoke the selected algorithms. Generated images are passed back to the server and subsequently to the imaging platform from which the images can be sent to a PACS. JIST makes use of an open application program interface layer so that research technologies can be implemented in any language capable of communicating through a system shell environment (e.g., Matlab, Java, C/C++, Perl, LISP, etc.). As demonstrated in this proof-of-concept approach, HUBRIS enables evaluation and analysis of emerging technologies within well-developed PACS systems with minimal adaptation of research software, which simplifies evaluation of new technologies in clinical research and provides a more convenient use of PACS technology by imaging scientists.
Transitioning to Intel-based Linux Servers in the Payload Operations Integration Center
NASA Technical Reports Server (NTRS)
Guillebeau, P. L.
2004-01-01
The MSFC Payload Operations Integration Center (POIC) is the focal point for International Space Station (ISS) payload operations. The POIC contains the facilities, hardware, software and communication interface necessary to support payload operations. ISS ground system support for processing and display of real-time spacecraft telemetry and command data has been operational for several years. The hardware components were reaching end of life and vendor costs were increasing while ISS budgets were becoming severely constrained. Therefore it has been necessary to migrate the Unix portions of our ground systems to commodity-priced Intel-based Linux servers. The overall migration to Intel-based Linux servers in the control center involves changes to the hardware architecture, including networks, data storage, and highly available resources. This paper will concentrate on the Linux migration implementation for the software portion of our ground system. The migration began with 3.5 million lines of code running on Unix platforms with separate servers for telemetry, command, payload information management systems, web, system control, remote server interface and databases. The Intel-based system is scheduled to be available for initial operational use by August 2004. This paper will address the Linux migration study approach, including the proof of concept, the criticality of customer buy-in and the importance of beginning with POSIX-compliant code. It will focus on the development approach, explaining the software lifecycle. Other aspects of development will be covered, including phased implementation, interim milestones, and metrics measurements and reporting mechanisms. This paper will also address the testing approach covering all levels of testing, including development, development integration, IV&V, user beta testing and acceptance testing. Test results, including performance numbers compared with Unix servers, will be included. An important aspect of the paper will involve challenges and lessons learned. This includes COTS product compatibility, implications of phasing decisions and tracking of dependencies, particularly non-software dependencies. The paper will also discuss scheduling challenges in providing real-time flight support during the migration and the requirement to incorporate in the migration changes being made simultaneously for flight support. Finally, the paper will address the deployment approach, including user involvement in testing and the need for a smooth transition while maintaining real-time support.
Jaschob, Daniel; Riffle, Michael
2012-07-30
Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
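The client-driven pattern described above, where workers poll the server so they can sit behind firewalls or in the cloud and load balancing falls out implicitly, can be sketched like this; the URLs and the JSON job format are hypothetical, not JobCenter's actual API.

    import json
    import subprocess
    import time
    import urllib.request

    SERVER = "https://jobs.example.org"   # hypothetical job server

    def call(path, payload=None):
        """POST a JSON payload to the server and return the decoded reply."""
        data = json.dumps(payload).encode() if payload is not None else None
        req = urllib.request.Request(SERVER + path, data=data,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def worker_loop(capabilities):
        """All communication is initiated by the worker, so it can run anywhere."""
        while True:
            job = call("/next-job", {"can_run": capabilities})
            if not job:                       # nothing to do: back off and retry
                time.sleep(30)
                continue
            proc = subprocess.run(job["command"], capture_output=True, text=True)
            call("/finished", {"job_id": job["id"],
                               "exit_code": proc.returncode,
                               "log": proc.stdout[-10000:]})

    # worker_loop(["blast", "comet"])   # advertise which job types this node accepts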
CrossVit: Enhancing Canopy Monitoring Management Practices in Viticulture
Matese, Alessandro; Vaccari, Francesco Primo; Tomasi, Diego; Di Gennaro, Salvatore Filippo; Primicerio, Jacopo; Sabatini, Francesco; Guidoni, Silvia
2013-01-01
A new wireless sensor network (WSN), called CrossVit, and based on MEMSIC products, has been tested for two growing seasons in two vineyards in Italy. The aims are to evaluate the monitoring performance of the new WSN directly in the vineyard and collect air temperature, air humidity and solar radiation data to support vineyard management practices. The WSN consists of various levels: the Master/Gateway level coordinates the WSN and performs data aggregation; the Farm/Server level takes care of storing data on a server, data processing and graphic rendering; the Nodes level is based on a network of peripheral nodes consisting of a MDA300 sensor board and Iris module and equipped with thermistors for air temperature, photodiodes for global and diffuse solar radiation, and an HTM2500LF sensor for relative humidity. The communication levels are: WSN links between gateways and sensor nodes by ZigBee, and long-range GSM/GPRS links between gateways and the server farm level. The system was able to monitor the agrometeorological parameters in the vineyard: solar radiation, air temperature and air humidity, detecting the differences between the canopy treatments applied. The performance of CrossVit, in terms of monitoring and reliability of the system, has been evaluated considering its handiness, cost-effectiveness, non-invasive dimensions and low power consumption. PMID:23765273
Leveraging the Cloud for Integrated Network Experimentation
2014-03-01
kernel settings, or any of the low-level subcomponents. 3. Scalable Solutions: Businesses can build scalable solutions for their clients, ranging from...values. These values can assume several distributions that include normal, Pareto, uniform, exponential and Poisson, among others [21]. Additionally, D...communication, the web client establishes a connection to the server before traffic begins to flow. Web servers do not initiate connections to clients in
Medication order communication using fax and document-imaging technologies.
Simonian, Armen I
2008-03-15
The implementation of fax and document-imaging technology to electronically communicate medication orders from nursing stations to the pharmacy is described. The evaluation of a commercially available pharmacy order imaging system to improve order communication and to make document retrieval more efficient led to the selection and customization of a system already licensed and used in seven affiliated hospitals. The system consisted of existing fax machines and document-imaging software that would capture images of written orders and send them from nursing stations to a central database server. Pharmacists would then retrieve the images and enter the orders in an electronic medical record system. The pharmacy representatives from all seven hospitals agreed on the configuration and functionality of the custom application. A 30-day trial of the order imaging system was successfully conducted at one of the larger institutions. The new system was then implemented at the remaining six hospitals over a period of 60 days. The transition from a paper-order system to electronic communication via a standardized pharmacy document management application tailored to the specific needs of this health system was accomplished. A health system with seven affiliated hospitals successfully implemented electronic communication and the management of inpatient paper-chart orders by using faxes and document-imaging technology. This standardized application eliminated the problems associated with the hand delivery of paper orders, the use of the pneumatic tube system, and the printing of traditional faxes.
Takizawa, Masaomi; Miyashita, Toyohisa; Murase, Sumio; Kanda, Hirohito; Karaki, Yoshiaki; Yagi, Kazuo; Ohue, Toru
2003-01-01
A real-time telescreening system was developed to detect early-stage diseases in rural-area residents using two types of mobile vans with a portable satellite station. The system consists of a 1.5 Mbps satellite communication system on the JCSAT-1B satellite, a spiral CT van, an ultrasound imaging van with two video conference systems, a DICOM server and a multicast communication unit. The video images and examination image data are transmitted from the van to hospitals and the university simultaneously. A physician in the hospital observes and interprets examination images from the van and watches video images of the position of the ultrasound transducer on the screenee in the van. After observing the images, the physician explains the results of the examination over the video conference system. Seventy lung CT screenings and 203 ultrasound screenings were performed from March to June 2002. The trial of this real-time screening suggested that rural residents can receive better healthcare without visiting the hospital, and that it may open the way to reducing medical costs and the medical divide between urban and rural areas.
An Advanced Sensor Network Design For Subglacial Sensing
NASA Astrophysics Data System (ADS)
Martinez, K.; Hart, J. K.; Elsaify, A.; Zou, G.; Padhy, P.; Riddoch, A.
2006-12-01
In the Glacsweb project a sensor network has been designed to take sensor measurements inside glaciers and send the data back to a web server autonomously. A wide range of experience was gained in the deployment of the earlier systems and this has been used to develop new hardware and software to better meet the needs of glaciologists using the data from the system. The system was reduced in size, new sensors (compass, light sensor) were added and the radio communications system completely changed. The new 173MHz radio system was designed with an antenna tuned to work in ice and a new network algorithm written to provide better data security. Probes can communicate data through each other (ad-hoc network) and store many months of data in a large buffer to cope with long term communications failures. New sensors include a light reflection measurement in order to provide data on the surrounding material. This paper will discuss the design decisions, the effectiveness of the final system and generic outcomes of use to sensor network designers deploying in difficult environments.
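The abstract highlights two deployment-hardening ideas: probes buffer months of readings locally and relay data hop-by-hop when direct links fail. The sketch below illustrates that store-and-forward logic; the buffer size, the record fields and the radio interface are assumptions for illustration, not the Glacsweb implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** One sensor reading from a subglacial probe. */
record Reading(long timestamp, double tempC, double pressure, int lightReflection) {}

/**
 * Store-and-forward sketch: readings go into a large ring buffer and are
 * only removed once a neighbouring probe or the base station acknowledges
 * them, so long radio outages do not lose data.
 */
class ProbeBuffer {
    private final Deque<Reading> buffer = new ArrayDeque<>();
    private final int capacity;

    ProbeBuffer(int capacity) { this.capacity = capacity; }

    void store(Reading r) {
        if (buffer.size() == capacity) buffer.removeFirst(); // overwrite oldest when full
        buffer.addLast(r);
    }

    /** Try to forward buffered readings hop-by-hop; keep anything unacknowledged. */
    void flush(Radio radio) {
        while (!buffer.isEmpty()) {
            Reading oldest = buffer.peekFirst();
            if (!radio.sendToNeighbourOrBase(oldest)) break; // link down: stop, retry later
            buffer.removeFirst();                            // acknowledged: safe to drop
        }
    }

    interface Radio { boolean sendToNeighbourOrBase(Reading r); }
}
```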
Networked Multimedia: Are We There Yet?
ERIC Educational Resources Information Center
Wyman, Bill
1997-01-01
Discusses the technological advances in electronic communication over the last 30 years. Touches on various real-time interactive multimedia communications, including video on demand, videocassettes, laser discs, CD-ROM, a history of networking, terminal/host and client/server networking, intraoperability and interoperability and multimedia…
Computer Ethics and Cyber Laws to Mental Health Professionals
Raveesh, B N; Pande, Sanjay
2004-01-01
The explosive growth of computer and communications technology raises new legal and ethical challenges that reflect tensions between individual rights and societal needs. For instance, should cracking into a computer system be viewed as a petty prank, as trespassing, as theft, or as espionage? Should placing copyrighted material onto a public file server be treated as freedom of expression or as theft? Should ordinary communications be encrypted using codes that make it impossible for law-enforcement agencies to perform wiretaps? As we develop shared understandings and norms of behaviour, we are setting standards that will govern the information society for decades to come. PMID:21408035
Computer ethics and cyber laws to mental health professionals.
Raveesh, B N; Pande, Sanjay
2004-04-01
The explosive growth of computer and communications technology raises new legal and ethical challenges that reflect tensions between individual rights and societal needs. For instance, should cracking into a computer system be viewed as a petty prank, as trespassing, as theft, or as espionage? Should placing copyrighted material onto a public file server be treated as freedom of expression or as theft? Should ordinary communications be encrypted using codes that make it impossible for law-enforcement agencies to perform wiretaps? As we develop shared understandings and norms of behaviour, we are setting standards that will govern the information society for decades to come.
Architecture of the software for LAMOST fiber positioning subsystem
NASA Astrophysics Data System (ADS)
Peng, Xiaobo; Xing, Xiaozheng; Hu, Hongzhuan; Zhai, Chao; Li, Weimin
2004-09-01
The architecture of the software which controls the LAMOST fiber positioning sub-system is described. The software is composed of two parts: a main control program on a computer and a unit controller program stored in the ROM of an MCS51 single-chip microcomputer. The functions of the software include client/server model establishment, observation planning, collision handling, data transmission, pulse generation, CCD control, image capture and processing, and data analysis. Particular attention is paid to the ways in which different parts of the software communicate. Software techniques for multi-threading, socket programming, Microsoft Windows message handling, and serial communications are also discussed.
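As a rough illustration of the socket-based command channel such a main control program might use to talk to a unit controller, the following sketch accepts line-oriented commands over TCP and acknowledges each one. The port number and the "MOVE unit angle" message format are invented for illustration; they are not from the LAMOST system.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

/** Minimal sketch of a unit-controller command endpoint reachable over a socket. */
public class UnitControllerStub {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    String command;
                    while ((command = in.readLine()) != null) {
                        // e.g. "MOVE 117 35.5" -> generate pulses for fiber unit 117
                        out.println("ACK " + command);   // acknowledge each command line
                    }
                }
            }
        }
    }
}
```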
2004-06-01
remote databases, has seen little vendor acceptance. Each database (Oracle, DB2, MySQL, etc.) has its own client-server protocol. Therefore each...existing standards – SQL, X.500/LDAP, FTP, etc. • View information dissemination as selective replication – State-oriented vs. message-oriented...allowing the application to start. The resource management system would serve as a broker to the resources, making sure that resources are not
Intelligent Advanced Communications IP Telephony Feasibility for the U.S. Navy: Phase 2
2009-03-31
PDAs) and smart phones. In addition, it considers how solutions integrate on-premise enterprise functions with the functions of mobile operators...and Control System GIG Global Information Grid GigE Gigabit Ethernet GIPS Global IP Solutions Inc. GMSK Gaussian Minimum Shift Keying GPHY Gigabit...Feasibility for the U.S. Navy – Phase 2 UAC User Agent Client UART Universal Asynchronous Receiver/Transmitter UAS User Agent Server UCR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, G.R.
1986-03-03
Prototypes of components of the Saguaro distributed operating system were implemented and the design of the entire system refined based on the experience. The philosophy behind Saguaro is to support the illusion of a single virtual machine while taking advantage of the concurrency and robustness that are possible in a network architecture. Within the system, these advantages are realized by the use of pools of server processes and decentralized allocation protocols. Potential concurrency and robustness are also made available to the user through low-cost mechanisms to control placement of executing commands and files, and to support semi-transparent file replication and access. Another unique aspect of Saguaro is its extensive use of a type system to describe user data such as files and to specify the types of arguments to commands and procedures. This enables the system to assist in type checking and leads to a user interface in which command-specific templates are available to facilitate command invocation. A mechanism, channels, is also provided to enable users to construct applications containing general graphs of communicating processes.
Process Management inside ATLAS DAQ
NASA Astrophysics Data System (ADS)
Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.
2002-10-01
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping and monitoring the status of those components on the data acquisition processors independent of the underlying operating system. Its architecture is designed on the basis of a server-client model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Some of the major design challenges of the software agents were to achieve the maximum degree of autonomy possible and to create processes aware of dynamic conditions in their environment and with the ability to determine corresponding actions. Issues such as the performance of the agents in terms of time needed for process creation and destruction, the scalability of the system taking into consideration the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results of the Process Manager system.
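The core job-control duty described here (start, stop, monitor processes on behalf of remote clients) can be sketched as follows. The real ATLAS Process Manager uses CORBA-based C++ agents; the sketch below omits the remote-invocation layer entirely and shows only the local job control on top of the operating system, with invented component names.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of a local process-control agent: it starts, monitors and kills
 * operating-system processes identified by a logical name.
 */
public class ProcessAgent {
    private final Map<String, Process> managed = new ConcurrentHashMap<>();

    public void start(String id, String... commandLine) throws Exception {
        Process p = new ProcessBuilder(commandLine).inheritIO().start();
        managed.put(id, p);
    }

    /** Report whether a managed process is still alive. */
    public boolean isRunning(String id) {
        Process p = managed.get(id);
        return p != null && p.isAlive();
    }

    public void stop(String id) {
        Process p = managed.remove(id);
        if (p != null) p.destroy();   // ask the OS to terminate the process
    }

    public static void main(String[] args) throws Exception {
        ProcessAgent agent = new ProcessAgent();
        agent.start("readout-1", "sleep", "60");      // hypothetical DAQ component
        System.out.println("running: " + agent.isRunning("readout-1"));
        agent.stop("readout-1");
    }
}
```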
Implementation of Smart Metering based on Internet of Things
NASA Astrophysics Data System (ADS)
Kaur, Milanpreet; Mathew, Lini, Dr.; Alokdeep; Kumar, Ajay
2018-03-01
From the perspective of saving energy, communication and information technology is being continuously modified to satisfy customer demands. Today, customers demand accurate energy measurement, timely data and good customer service. A promising solution is a smart grid with various cost-effective communication technologies, giving the electrical sector bidirectional communication in which information about electrical energy consumption is shared with consumers as well as with the utility for remote checking. This paper describes the monitoring of energy consumption with an Arduino Uno board and Ethernet using the IoT (Internet of Things) concept. The proposed design reduces human involvement in the management of electricity consumption. Consumers can access information about their energy consumption on their devices via an IP address. The web client code is uploaded to check client information such as location, content, connection, and disconnection against the web server. The proposed system gives reliable and accurate information for an electrical energy management system (EMS) through the Internet of Things (IoT).
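On the consumer side, the access path described above amounts to polling the meter's web endpoint at a known IP address. A minimal sketch of that client is shown below; the address and the plain-text response format are assumptions, not values from the paper.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * Sketch of a consumer device polling an Ethernet-connected smart meter
 * (e.g. an Arduino Uno with an Ethernet shield) for its latest reading.
 */
public class MeterReader {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://192.168.1.50/energy"))   // hypothetical meter address
                .GET().build();

        // The response might look like "kWh=1234.7;W=560" – display it as-is here.
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Current consumption: " + response.body());
    }
}
```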
Implementation of a World Wide Web server for the oil and gas industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, R.E.; Martin, F.D.; Emery, R.
1995-12-31
The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for the purpose of exchanging ideas, data, and technology. The personal computer-based system fosters communication and discussion by linking oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers are provided access to the GO-TECH World Wide Web home page via modem links, as well as Internet. The future GO-TECH applications will include the establishment of "virtual corporations" consisting of consortiums of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations.
Implementation of a World Wide Web server for the oil and gas industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blaylock, R.E.; Martin, F.D.; Emery, R.
1996-10-01
The Gas and Oil Technology Exchange and Communication Highway (GO-TECH) provides an electronic information system for the petroleum community for exchanging ideas, data, and technology. The PC-based system fosters communication and discussion by linking the oil and gas producers with resource centers, government agencies, consulting firms, service companies, national laboratories, academic research groups, and universities throughout the world. The oil and gas producers can access the GO-TECH World Wide Web (WWW) home page through modem links, as well as through the Internet. Future GO-TECH applications will include the establishment of virtual corporations consisting of consortia of small companies, consultants, and service companies linked by electronic information systems. These virtual corporations will have the resources and expertise previously found only in major corporations.
An Indoor Location-Based Control System Using Bluetooth Beacons for IoT Systems.
Huh, Jun-Ho; Seo, Kyungryong
2017-12-19
The indoor location-based control system estimates the indoor position of a user to provide the service he/she requires. The major elements involved in the system are the localization server, the service-provision client, the user application, and the positioning technology. The localization server controls access of terminal devices (e.g., smart phones and other wireless devices) to determine their locations within a specified space first, and then the service-provision client initiates required services such as indoor navigation and monitoring/surveillance. The user application provides the necessary data to let the server localize the devices or allow the user to receive various services from the client. The major technological elements involved in this system are the indoor space partition method, Bluetooth 4.0, RSSI (Received Signal Strength Indication) and trilateration. The system also employs the BLE communication technology when determining the position of the user in an indoor space. The position information obtained is then used to control a specific device(s). These technologies are fundamental in achieving a "Smart Living". An indoor location-based control system that provides services by estimating user's indoor locations has been implemented in this study (First scenario). The algorithm introduced in this study (Second scenario) is effective in extracting valid samples from the RSSI dataset, but it has some drawbacks as well. Although we used a range-average algorithm that measures the shortest distance, there are some limitations because the measurement results depend on the sample size and the sample efficiency depends on sampling speeds and environmental changes. However, the Bluetooth system can be implemented at a relatively low cost, so that once the problem of precision is solved, it can be applied to various fields.
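Two steps sit behind a beacon-based position fix of the kind described: converting RSSI to an estimated range, and trilaterating from beacons with known coordinates. The sketch below uses a standard log-distance path-loss model and a linearized three-anchor solve; the path-loss exponent, the 1 m reference power and the beacon layout are illustrative values, not parameters from the paper.

```java
/** Sketch of RSSI-to-distance conversion and 2-D trilateration from three beacons. */
public class BeaconTrilateration {

    /** Log-distance path-loss model: txPower is the RSSI measured at 1 m. */
    static double rssiToDistance(double rssi, double txPower, double pathLossExponent) {
        return Math.pow(10.0, (txPower - rssi) / (10.0 * pathLossExponent));
    }

    /** Solve for (x, y) from three anchors (xi, yi) and ranges di by linearizing the circle equations. */
    static double[] trilaterate(double[] x, double[] y, double[] d) {
        double a11 = 2 * (x[1] - x[0]), a12 = 2 * (y[1] - y[0]);
        double a21 = 2 * (x[2] - x[0]), a22 = 2 * (y[2] - y[0]);
        double b1 = d[0]*d[0] - d[1]*d[1] + x[1]*x[1] - x[0]*x[0] + y[1]*y[1] - y[0]*y[0];
        double b2 = d[0]*d[0] - d[2]*d[2] + x[2]*x[2] - x[0]*x[0] + y[2]*y[2] - y[0]*y[0];
        double det = a11 * a22 - a12 * a21;              // assumes the beacons are not collinear
        return new double[] { (b1 * a22 - b2 * a12) / det,
                              (a11 * b2 - a21 * b1) / det };
    }

    public static void main(String[] args) {
        double[] d = {
            rssiToDistance(-65, -59, 2.0),   // beacon A
            rssiToDistance(-70, -59, 2.0),   // beacon B
            rssiToDistance(-60, -59, 2.0)    // beacon C
        };
        double[] pos = trilaterate(new double[]{0, 5, 0}, new double[]{0, 0, 5}, d);
        System.out.printf("Estimated position: (%.2f, %.2f)%n", pos[0], pos[1]);
    }
}
```

In practice the RSSI samples would first be filtered (for example with the range-average step the authors mention) before being converted to distances, since raw RSSI is noisy.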
An Indoor Location-Based Control System Using Bluetooth Beacons for IoT Systems
Huh, Jun-Ho; Seo, Kyungryong
2017-01-01
The indoor location-based control system estimates the indoor position of a user to provide the service he/she requires. The major elements involved in the system are the localization server, the service-provision client, the user application, and the positioning technology. The localization server controls access of terminal devices (e.g., smart phones and other wireless devices) to determine their locations within a specified space first, and then the service-provision client initiates required services such as indoor navigation and monitoring/surveillance. The user application provides the necessary data to let the server localize the devices or allow the user to receive various services from the client. The major technological elements involved in this system are the indoor space partition method, Bluetooth 4.0, RSSI (Received Signal Strength Indication) and trilateration. The system also employs the BLE communication technology when determining the position of the user in an indoor space. The position information obtained is then used to control a specific device(s). These technologies are fundamental in achieving a “Smart Living”. An indoor location-based control system that provides services by estimating user’s indoor locations has been implemented in this study (First scenario). The algorithm introduced in this study (Second scenario) is effective in extracting valid samples from the RSSI dataset, but it has some drawbacks as well. Although we used a range-average algorithm that measures the shortest distance, there are some limitations because the measurement results depend on the sample size and the sample efficiency depends on sampling speeds and environmental changes. However, the Bluetooth system can be implemented at a relatively low cost, so that once the problem of precision is solved, it can be applied to various fields. PMID:29257044
Home medical monitoring network based on embedded technology
NASA Astrophysics Data System (ADS)
Liu, Guozhong; Deng, Wenyi; Yan, Bixi; Lv, Naiguang
2006-11-01
A remote medical monitoring network for long-term monitoring of physiological variables would be helpful for the recovery of patients, as people are monitored under more comfortable conditions. Furthermore, long-term monitoring would be beneficial for investigating slowly developing deterioration in the wellness status of a subject and providing medical treatment as soon as possible. The home monitor runs on an embedded microcomputer Rabbit3000 and interfaces with different medical monitoring modules through serial ports. A network based on asymmetric digital subscriber line (ADSL) or local area network (LAN) is established, and a client-server model, in which each embedded home medical monitor is a client and the monitoring center is the server, is applied to the system design. The client is able to provide its information to the server when the client's connection request to the server is accepted. The monitoring center focuses on the management of the communications, the acquisition of medical data, and the visualization and analysis of the data, etc. A diagnostic model of sleep apnea syndrome is built based on ECG, heart rate, respiration wave, blood pressure, oxygen saturation, and air temperature of the mouth or nasal cavity, so sleep status can be analyzed from physiological data acquired while people sleep. A remote medical monitoring network based on embedded micro-internetworking technology has the advantages of low price, convenience and feasibility, which have been verified by the prototype.
Field test of classical symmetric encryption with continuous variables quantum key distribution.
Jouguet, Paul; Kunz-Jacques, Sébastien; Debuisschert, Thierry; Fossier, Simon; Diamanti, Eleni; Alléaume, Romain; Tualle-Brouri, Rosa; Grangier, Philippe; Leverrier, Anthony; Pache, Philippe; Painchault, Philippe
2012-06-18
We report on the design and performance of a point-to-point classical symmetric encryption link with fast key renewal provided by a Continuous Variable Quantum Key Distribution (CVQKD) system. Our system was operational and able to encrypt point-to-point communications during more than six months, from the end of July 2010 until the beginning of February 2011. This field test was the first demonstration of the reliability of a CVQKD system over a long period of time in a server room environment. This strengthens the potential of CVQKD for information technology security infrastructure deployments.
NASA Astrophysics Data System (ADS)
Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.
2016-03-01
Today, subjects' medical data in controlled clinical trials are captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support integration of subjects' image data, although medical imaging is looming large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system in clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images, which have been collected in the study so far, have been correctly identified and successfully integrated into the corresponding subject's eCRF. Using this system, manual steps for the study personnel are reduced, and, therefore, errors, latency and costs decrease. Our approach also increases data security and privacy.
Two-Cloud-Servers-Assisted Secure Outsourcing Multiparty Computation
Wen, Qiaoyan; Zhang, Hua; Jin, Zhengping; Li, Wenmin
2014-01-01
We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-servers scenario. Our main idea is to transform the outsourced data, respectively encrypted by different users' public keys, into data encrypted by the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result, so that all authorized users can recover the desired result while other unauthorized ones, including the two servers, cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both the computation and the communication complexities of each user in our solution are independent of the computed function. PMID:24982949
Two-cloud-servers-assisted secure outsourcing multiparty computation.
Sun, Yi; Wen, Qiaoyan; Zhang, Yudong; Zhang, Hua; Jin, Zhengping; Li, Wenmin
2014-01-01
We focus on how to securely outsource computation tasks to the cloud and propose a secure outsourcing multiparty computation protocol on lattice-based encrypted data in a two-cloud-servers scenario. Our main idea is to transform the outsourced data, respectively encrypted by different users' public keys, into data encrypted by the same two private keys of the two assisting servers, so that it is feasible to operate on the transformed ciphertexts to compute an encrypted result following the function to be computed. In order to keep the privacy of the result, the two servers cooperatively produce a custom-made result for each user that is authorized to get the result, so that all authorized users can recover the desired result while other unauthorized ones, including the two servers, cannot. Compared with previous research, our protocol is completely noninteractive between any users, and both the computation and the communication complexities of each user in our solution are independent of the computed function.
Timing Recovery Strategies in Magnetic Recording Systems
NASA Astrophysics Data System (ADS)
Kovintavewat, Piya
At some point in a digital communications receiver, the received analog signal must be sampled. Good performance requires that these samples be taken at the right times. The process of synchronizing the sampler with the received analog waveform is known as timing recovery. Conventional timing recovery techniques perform well only when operating at high signal-to-noise ratio (SNR). Nonetheless, iterative error-control codes allow reliable communication at very low SNR, where conventional techniques fail. This paper provides a detailed review of the timing recovery strategies based on per-survivor processing (PSP) that are capable of working at low SNR. We also investigate their performance in magnetic recording systems, because magnetic recording is a primary method of storage for a variety of applications, including desktop, mobile, and server systems. Results indicate that the timing recovery strategies based on PSP perform better than the conventional ones and are thus worth employing in magnetic recording systems.
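For context, the conventional baseline that PSP-based schemes improve upon is a decision-directed timing loop. A minimal sketch of one such loop (a Mueller-Müller timing error detector driving a second-order loop filter) is shown below; the loop gains and the binary slicer are illustrative choices, and this is not the PSP algorithm reviewed in the paper.

```java
/**
 * Conventional decision-directed timing loop sketch: a Mueller-Müller
 * timing error detector feeding a proportional-plus-integral update of
 * the sampling-phase estimate.
 */
public class MuellerMullerLoop {
    private double prevSample = 0.0, prevDecision = 0.0;
    private double tau = 0.0;          // current sampling-phase estimate
    private double integrator = 0.0;   // second-order loop state
    private final double kp = 0.01, ki = 0.001;

    /** Feed one symbol-rate sample; returns the updated sampling-phase estimate. */
    public double update(double sample) {
        double decision = sample >= 0 ? 1.0 : -1.0;                    // simple binary slicer
        double error = prevDecision * sample - decision * prevSample;  // M&M timing error
        integrator += ki * error;
        tau += kp * error + integrator;                                // PI loop update
        prevSample = sample;
        prevDecision = decision;
        return tau;
    }
}
```

At low SNR the slicer decisions become unreliable, which is exactly where this kind of loop breaks down and where per-survivor processing, which keeps a timing estimate per surviving detector path, retains its advantage.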
PsychVACS: a system for asynchronous telepsychiatry.
Odor, Alberto; Yellowlees, Peter; Hilty, Donald; Parish, Michelle Burke; Nafiz, Najia; Iosif, Ana-Maria
2011-05-01
To describe the technical development of an asynchronous telepsychiatry application, the Psychiatric Video Archiving and Communication System. A client-server application was developed in Visual Basic.Net with a Microsoft® SQL database as the backend. It includes the capability of storing video-recorded psychiatric interviews and manages the workflow of the system with automated messaging. The Psychiatric Video Archiving and Communication System has been used to conduct the first ever series of asynchronous telepsychiatry consultations worldwide. A review of the software application and the process as part of this project has led to a number of improvements that are being implemented in the next version, which is being written in Java. This is the first description of the use of video-recorded data in an asynchronous telemedicine application. Primary care providers and consulting psychiatrists have found it easy to work with and a valuable resource to increase the availability of psychiatric consultation in remote rural locations.
Development of mobile preventive notification system (PreNotiS)
NASA Astrophysics Data System (ADS)
Kumar, Abhinav; Akopian, David; Chen, Philip
2009-02-01
The tasks achievable by mobile handsets continuously exceed our imagination. Statistics show that mobile phone sales are soaring, rising exponentially year after year, with predictions that they will rise to a billion units in 2009, with a large section of these being smartphones. Mobile service providers, mobile application developers and researchers have been working closely over the past decade to bring about revolutionary hardware and software advancements in handsets, such as embedded digital cameras, large memory capacity, accelerometers, touch-sensitive screens, GPS, Wi-Fi capabilities etc., as well as in the network infrastructure to support these features. Recently we presented a multi-platform, massive data collection system from distributed sources such as cell phone users(1), called PreNotiS. This technology was intended to significantly simplify the response to events and help, e.g., special agencies to gather crucial information in time and respond as quickly as possible to prevent or contain potential emergency situations, and to act as a massive, centralized evidence collection mechanism that effectively exploits the advancements in mobile application development platforms and the existing network infrastructure to present an easy-to-use, fast and effective tool to mobile phone users. We successfully demonstrated the functionality of the client-server application suite to post user information onto the server. This paper presents a new version of the system PreNotiS, with a revised client application and with all new server capabilities. PreNotiS still puts forth the idea of having a fast, efficient client-server based application suite for mobile phones which, through a highly simplified user interface, will collect security/calamity based information in a structured format from first responders and relay that structured information to a central server where this data is sorted into a database in a predefined manner. This information, which includes selections, images and text, will be instantly available to authorities and action forces through a secure web portal, thus helping them to make decisions in a timely and prompt manner. All cell phones have self-localizing capability according to the FCC E911 mandate, thus the communicated information can be further tagged automatically by location and time information at the server, making all this information available through the secure web portal.
NASA Astrophysics Data System (ADS)
Saito, Shoichi; Uehara, Tetsutaro; Izumi, Yutaka; Kunieda, Yoshitoshi
The VPN (Virtual Private Network) technique is becoming more and more popular for protecting message contents and achieving communication that is secure against incidents such as tapping. It is also increasingly common for a VPN server to be used on a sub-network that is part of an office-wide network. However, the PPTP system included in Windows operating systems cannot establish nested VPN links. Moreover, encrypted VPN communication hides the user of the VPN connection. Consequently, network administrators cannot identify the users of a VPN connection at the firewall, nor decide whether a user is legitimate. In order to solve this problem, we developed a multi-step PPTP relay system on a firewall. This system solves all the problems of our previously developed PPTP relay system(1). The new relay system improves security by encrypting the whole end-to-end communication and abolishing prior registration of passwords for the next step. Furthermore, transfer speed is improved, and the restriction on the number of relay steps is abolished. These features make the multi-step PPTP relay system more usable.
Smiianov, Vladyslav A; Dryha, Natalia O; Smiianova, Olha I; Obodyak, Victor K; Zudina, Tatyana O
2018-01-01
Introduction: Today, mobile health care has no single concrete definition. As a research object it is called mHealth, and the Global Observatory for eHealth defines it as medical and public health practice supported by mobile devices (mobile phones or smartphones), patient monitoring devices, personal computers and other wireless communication devices. The active use of SMS in programs supporting patients' adherence to treatment regimens was quite predictable. Mobile and electronic devices are only beginning their development in the medical sphere. Thus, to address the problems of health care system reform, a special memorandum on cooperation in creating an E-Health system in Ukraine was signed. The aim: Development of an information and communication system (ICS) for monitoring and informing patients with non-infectious diseases, to optimize the first level of medical care. Materials and methods: During the research, we used a systematic approach, meta-analysis, design of information-analytical system schemes, and descriptive modeling. For the backend (server part of the site), we used the following technologies: 1) the Apache web server; 2) the PHP programming language; 3) the Yii 2 PHP framework. For the frontend (client part of the site), the following technologies were used: 1) Bootstrap 3; 2) the Vue JS framework. Results and conclusions: The created two-channel "doctor-patient" and "patient-doctor" system will allow doctors of family medicine (DFM) to provide interactive follow-up care and to avoid uncontrolled disease progression. The doctor will monitor the basic physiological data of the patient's health and the treatment process. The main goal is to create an automatic system that allows the doctor to regularly send periodic or non-periodic notifications, receive patients' answers to questionnaires and exchange information between doctor and patient, which will optimize the work of DFMs.
KernPaeP - a web-based pediatric palliative documentation system for home care.
Hartz, Tobias; Verst, Hendrik; Ueckert, Frank
2009-01-01
KernPaeP is a new web-based on- and offline documentation system, which has been developed for pediatric palliative care teams, supporting patient documentation and communication among health care professionals. It provides a reliable system making fast and secure home care documentation possible. KernPaeP is accessible online by registered users using any web browser. Home care teams use an offline version of KernPaeP running on a netbook for patient documentation on site. Identifying and medical patient data are strictly separated and stored on two database servers. The system offers a stable, enhanced two-way algorithm for synchronization between the offline component and the central database servers. KernPaeP is implemented meeting the highest security standards while still maintaining high usability. The web-based documentation system allows ubiquitous and immediate access to patient data. Cumbersome paperwork is replaced by secure and comprehensive electronic documentation. KernPaeP helps save time and improve the quality of documentation. Due to development in close cooperation with pediatric palliative professionals, KernPaeP fulfils the broad needs of home-care documentation. The technique of web-based online and offline documentation is generally applicable to arbitrary home care scenarios.
Twin-tailed fail-over for fileservers maintaining full performance in the presence of a failure
Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Steinmacher-Burow, Burkhard D.
2008-02-12
A method for maintaining full performance of a file system in the presence of a failure is provided. The file system having N storage devices, where N is an integer greater than zero and N primary file servers where each file server is operatively connected to a corresponding storage device for accessing files therein. The file system further having a secondary file server operatively connected to at least one of the N storage devices. The method including: switching the connection of one of the N storage devices to the secondary file server upon a failure of one of the N primary file servers; and switching the connections of one or more of the remaining storage devices to a primary file server other than the failed file server as necessary so as to prevent a loss in performance and to provide each storage device with an operating file server.
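The bookkeeping behind such a fail-over can be sketched as a simple device-to-server mapping that is rewritten when a primary fails. The re-mapping policy shown below (hand the failed server's device to the secondary, leave others alone unless they must move) is an illustrative simplification of the patented method, with invented server names.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of fail-over bookkeeping for a file system with N storage devices:
 * each device normally maps to its own primary file server; on a failure,
 * the affected device is switched to the secondary file server.
 */
public class FailoverMap {
    private final Map<Integer, String> deviceToServer = new HashMap<>();
    private final String secondaryServer;

    public FailoverMap(int devices, String secondaryServer) {
        this.secondaryServer = secondaryServer;
        for (int d = 0; d < devices; d++) deviceToServer.put(d, "primary-" + d);
    }

    /** Re-route the failed server's storage device to the secondary file server. */
    public void onServerFailure(int failedIndex) {
        deviceToServer.put(failedIndex, secondaryServer);
    }

    public String serverFor(int device) { return deviceToServer.get(device); }

    public static void main(String[] args) {
        FailoverMap map = new FailoverMap(4, "secondary");
        map.onServerFailure(2);
        System.out.println("device 2 now served by " + map.serverFor(2)); // -> secondary
    }
}
```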
Wu, Hui-Qun; Lv, Zheng-Min; Geng, Xing-Yun; Jiang, Kui; Tang, Le-Min; Zhou, Guo-Min; Dong, Jian-Cheng
2013-01-01
To address issues in interoperability between different fundus image systems, we proposed a web eye-picture archiving and communication system (PACS) framework in conformance with the digital imaging and communications in medicine (DICOM) and health level 7 (HL7) protocols to realize fundus image and report sharing and communication through the internet. Firstly, a telemedicine-based eye care workflow was established based on the Integrating the Healthcare Enterprise (IHE) Eye Care technical framework. Then, a browser/server architecture eye-PACS system was established in conformance with the web access to DICOM persistent objects (WADO) protocol, consisting of three tiers. From any client system with a web browser installed, clinicians could log in to the eye-PACS to observe fundus images and reports. A structured report, saved as a PDF/HTML multipurpose internet mail extensions (MIME) type with a reference link to the relevant fundus image using the WADO syntax, could provide enough information for clinicians. Some functions provided by the open-source Oviyam viewer could be used to query, zoom, move, measure and view DICOM fundus images. Such a web eye-PACS in compliance with the WADO protocol could be used to store and communicate fundus images and reports, and is therefore of great significance for teleophthalmology.
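The WADO-URI syntax referred to above identifies a DICOM object through query parameters (requestType, studyUID, seriesUID, objectUID, optionally contentType). A hedged sketch of a client fetching a rendered fundus image this way is shown below; the host name and UID values are placeholders, not identifiers from the described system.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

/** Sketch of retrieving a fundus image from a WADO-capable eye-PACS over HTTP. */
public class WadoFetch {
    public static void main(String[] args) throws Exception {
        String wado = "http://eyepacs.example.org/wado"          // hypothetical WADO endpoint
                + "?requestType=WADO"
                + "&studyUID=1.2.840.999.1"
                + "&seriesUID=1.2.840.999.1.1"
                + "&objectUID=1.2.840.999.1.1.1"
                + "&contentType=image/jpeg";                      // ask for a rendered JPEG

        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(wado)).GET().build();
        http.send(request, HttpResponse.BodyHandlers.ofFile(Path.of("fundus.jpg")));
        System.out.println("Saved fundus.jpg");
    }
}
```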
NASA Astrophysics Data System (ADS)
Roach, Colin; Carlsson, Johan; Cary, John R.; Alexander, David A.
2002-11-01
The National Transport Code Collaboration (NTCC) has developed an array of software, including a data client/server. The data server, which is written in C++, serves local data (in the ITER Profile Database format) as well as remote data (by accessing one or several MDS+ servers). The client, a web-invocable Java applet, provides a uniform, intuitive, user-friendly, graphical interface to the data server. The uniformity of the interface relieves the user from the trouble of mastering the differences between different data formats and lets him/her focus on the essentials: plotting and viewing the data. The user runs the client by visiting a web page using any Java capable Web browser. The client is automatically downloaded and run by the browser. A reference to the data server is then retrieved via the standard Web protocol (HTTP). The communication between the client and the server is then handled by the mature, industry-standard CORBA middleware. CORBA has bindings for all common languages and many high-quality implementations are available (both Open Source and commercial). The NTCC data server has been installed at the ITPA International Multi-tokamak Confinement Profile Database, which is hosted by the UKAEA at Culham Science Centre. The installation of the data server is protected by an Internet firewall. To make it accessible to clients outside the firewall some modifications of the server were required. The working version of the ITPA confinement profile database is not open to the public. Authentication of legitimate users is done utilizing built-in Java security features to demand a password to download the client. We present an overview of the NTCC data client/server and some details of how the CORBA firewall-traversal issues were resolved and how the user authentication is implemented.
WebBee: A Platform for Secure Coordination and Communication in Crisis Scenarios
2008-04-16
implemented through database triggers. The Webbee Database Server contains an Information Server, which is a Postgres database with PostGIS [5] extension...sends it to the target user. The heavy lifting for this mechanism is done through an extension of Postgres triggers (Figures 6.1 and 6.2), resulting...in fewer queries and better performance. Trigger support in Postgres is table-based and comparatively primitive: with n table triggers, an update
NASA Astrophysics Data System (ADS)
Gonzalez, F.; Walker, R.; Rutherford, N.; Turner, C.
2011-04-01
This paper provides a review of the state of the art of relevant work on the use of public mobile data networks for aircraft telemetry and control purposes. Moreover, it describes the characterisation for airborne use of the public mobile data communication systems known broadly as 3G. The motivation for this study was to explore how these mature public communication systems could be used for aviation purposes. An experimental system was fitted to a light aircraft to record communication latency, line speed, RF level, packet loss and cell tower identifier. Communications were established using internet protocols and connection was made to a local server. The aircraft was flown in both remote and populous areas at altitudes up to 8500 ft in a region located in South East Queensland, Australia. Results show that the average airborne RF levels are better than those on the ground by 21% and in the order of -77 dBm. Latencies were in the order of 500 ms (1/2 the latency of Iridium), with an average download speed of 0.48 Mb/s, an average uplink speed of 0.85 Mb/s, and a packet loss of 6.5%. The maximum communication range was also observed to be 70 km from a single cell station. The paper also describes possible limitations and the utility of using such a communications architecture for both manned and unmanned aircraft systems.
Information and communication systems for the assistance of carers based on ACTION.
Kraner, M; Emery, D; Cvetkovic, S R; Procter, P; Smythe, C
1999-01-01
Recent advances in telecommunication technologies allow the design of information and communication systems for people who are caring for others in the home as family members or as professionals in health or community centres. The present paper analyses and classifies the information flow and maps it to an information life cycle, which governs the design of the deployed hardware, software and data structures. This is based on the initial findings of ACTION (assisting carers using telematics interventions to meet older persons' needs), a European Union funded project. The proposed information architecture discusses different designs such as centralized or decentralized Web and client-server solutions. A user interface is developed reflecting the special requirements of the targeted user group, which influences the functionality and design of the software, data architecture and the integrated communication system using video-conferencing. ACTION has engineered a system using plain Web technology based on HTML, extended with JavaScript and ActiveX, and a software switch enabling the integration of different types of videoconferencing and other applications, providing manufacturer independence.
A Scalability Model for ECS's Data Server
NASA Technical Reports Server (NTRS)
Menasce, Daniel A.; Singhal, Mukesh
1998-01-01
This report presents in four chapters a model for the scalability analysis of the Data Server subsystem of the Earth Observing System Data and Information System (EOSDIS) Core System (ECS). The model analyzes if the planned architecture of the Data Server will support an increase in the workload with the possible upgrade and/or addition of processors, storage subsystems, and networks. The approaches in the report include a summary of the architecture of ECS's Data server as well as a high level description of the Ingest and Retrieval operations as they relate to ECS's Data Server. This description forms the basis for the development of the scalability model of the data server and the methodology used to solve it.
An Interactive Real-Time Locating System Based on Bluetooth Low-Energy Beacon Network †
Lin, You-Wei
2018-01-01
The ubiquity of Bluetooth-enabled smartphones and peripherals has brought tremendous convenience to our daily life. In recent years, Bluetooth beacons have also been gaining popularity in implementing a variety of innovative location-based services such as self-guided systems in exhibition centers. However, the broadcast-based beacon technology can only provide unidirectional communication. In case smartphone users would like to respond to the beacon messages, they have to rely on their own mobile Internet connections to send the information back to the backend system. Nevertheless, mobile Internet services may not be always available or too costly. In this work, we develop a real-time locating system based only on the Bluetooth low energy (BLE) technology to support interactive communications by combining the broadcast and mesh topology options to extend the applicability of beacon solutions. Specifically, we turn the smartphone into a beacon device and augment the beacon devices with the capability of forming a mesh network. The implementation result shows that our beacon devices can detect the presence of specific users at specific locations, and then the presence state can be sent to the application server via the relay of beacon devices. Moreover, the application server can send personalized location-based messages to the users, again via the relay of beacon devices. With the capability of relaying messages between the beacon devices, it would be convenient for developers to implement a variety of interactive applications such as tracking VIP customers at the airport, or tracking an elder with Alzheimer’s disease in the neighborhood. PMID:29883386
An Interactive Real-Time Locating System Based on Bluetooth Low-Energy Beacon Network †.
Lin, You-Wei; Lin, Chi-Yi
2018-05-21
The ubiquity of Bluetooth-enabled smartphones and peripherals has brought tremendous convenience to our daily life. In recent years, Bluetooth beacons have also been gaining popularity in implementing a variety of innovative location-based services such as self-guided systems in exhibition centers. However, the broadcast-based beacon technology can only provide unidirectional communication. In case smartphone users would like to respond to the beacon messages, they have to rely on their own mobile Internet connections to send the information back to the backend system. Nevertheless, mobile Internet services may not be always available or too costly. In this work, we develop a real-time locating system based only on the Bluetooth low energy (BLE) technology to support interactive communications by combining the broadcast and mesh topology options to extend the applicability of beacon solutions. Specifically, we turn the smartphone into a beacon device and augment the beacon devices with the capability of forming a mesh network. The implementation result shows that our beacon devices can detect the presence of specific users at specific locations, and then the presence state can be sent to the application server via the relay of beacon devices. Moreover, the application server can send personalized location-based messages to the users, again via the relay of beacon devices. With the capability of relaying messages between the beacon devices, it would be convenient for developers to implement a variety of interactive applications such as tracking VIP customers at the airport, or tracking an elder with Alzheimer’s disease in the neighborhood.
Implementation of a WAP-based telemedicine system for patient monitoring.
Hung, Kevin; Zhang, Yuan-Ting
2003-06-01
Many parties have already demonstrated telemedicine applications that use cellular phones and the Internet. A current trend in telecommunication is the convergence of wireless communication and computer network technologies, and the emergence of wireless application protocol (WAP) devices is an example. Since WAP will also be a common feature found in future mobile communication devices, it is worthwhile to investigate its use in telemedicine. This paper describes the implementation and experiences with a WAP-based telemedicine system for patient-monitoring that has been developed in our laboratory. It utilizes WAP devices as mobile access terminals for general inquiry and patient-monitoring services. Authorized users can browse the patients' general data, monitored blood pressure (BP), and electrocardiogram (ECG) on WAP devices in store-and-forward mode. The applications, written in wireless markup language (WML), WMLScript, and Perl, resided in a content server. A MySQL relational database system was set up to store the BP readings, ECG data, patient records, clinic and hospital information, and doctors' appointments with patients. A wireless ECG subsystem was built for recording ambulatory ECG in an indoor environment and for storing ECG data into the database. For testing, a WAP phone compliant with WAP 1.1 was used at GSM 1800 MHz by circuit-switched data (CSD) to connect to the content server through a WAP gateway, which was provided by a mobile phone service provider in Hong Kong. Data were successfully retrieved from the database and displayed on the WAP phone. The system shows how WAP can be feasible in remote patient-monitoring and patient data retrieval.
Upgrade to the control system of the reflectometry diagnostic of ASDEX upgrade
NASA Astrophysics Data System (ADS)
Graça, S.; Santos, J.; Manso, M. E.
2004-10-01
The broadband frequency modulation-continuous wave microwave/millimeter wave reflectometer of ASDEX upgrade tokamak (Institut für Plasma Physik (IPP), Garching, Germany) developed by Centro de Fusão Nuclear (Lisboa, Portugal) with the collaboration of IPP, is a complex system with 13 channels (O and X modes) and two types of operation modes (swept and fixed frequency). The control system that ensures remote operation of the diagnostic incorporates VME and CAMAC bus based acquisition/timing systems. Microprocessor input/output boards are used to control and monitor the microwave circuitry and associated electronic devices. The implementation of the control system is based on an object-oriented client/server model: a centralized server manages the hardware and receives input from remote clients. Communication is handled through transmission control protocol/internet protocol sockets. Here we describe recent upgrades of the control system aiming to: (i) accommodate new channels; (ii) adapt to the heterogeneity of computing platforms and operating systems; and (iii) overcome remote access restrictions. Platform and operating system independence was achieved by redesigning the graphical user interface in JAVA. As secure shell is the standard remote access protocol adopted in major fusion laboratories, secure shell tunneling was implemented to allow remote operation of the diagnostic through the existing firewalls.
Time and Space Efficient Algorithms for Two-Party Authenticated Data Structures
NASA Astrophysics Data System (ADS)
Papamanthou, Charalampos; Tamassia, Roberto
Authentication is increasingly relevant to data management. Data is being outsourced to untrusted servers and clients want to securely update and query their data. For example, in database outsourcing, a client's database is stored and maintained by an untrusted server. Also, in simple storage systems, clients can store very large amounts of data but at the same time, they want to assure their integrity when they retrieve them. In this paper, we present a model and protocol for two-party authentication of data structures. Namely, a client outsources its data structure and verifies that the answers to the queries have not been tampered with. We provide efficient algorithms to securely outsource a skip list with logarithmic time overhead at the server and client and logarithmic communication cost, thus providing an efficient authentication primitive for outsourced data, both structured (e.g., relational databases) and semi-structured (e.g., XML documents). In our technique, the client stores only a constant amount of space, which is optimal. Our two-party authentication framework can be deployed on top of existing storage applications, thus providing an efficient authentication service. Finally, we present experimental results that demonstrate the practical efficiency and scalability of our scheme.
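The essential client-side operation in such a two-party scheme is cheap: the client keeps only a constant-size digest and checks that an answer returned by the untrusted server hashes, through the logarithmic-size proof supplied with it, back to that digest. The sketch below uses a Merkle-style binary hash path purely for illustration; the paper's actual construction is an authenticated skip list with the same logarithmic proof size.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.List;

/** Sketch of verifying a server-supplied answer against a locally stored root digest. */
public class ProofVerifier {

    record ProofStep(byte[] siblingHash, boolean siblingIsLeft) {}

    static byte[] sha256(byte[]... parts) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        for (byte[] p : parts) md.update(p);
        return md.digest();
    }

    /** Recompute the root from the answer plus the proof path, compare with the trusted digest. */
    static boolean verify(String answer, List<ProofStep> proof, byte[] trustedDigest) throws Exception {
        byte[] h = sha256(answer.getBytes(StandardCharsets.UTF_8));
        for (ProofStep step : proof) {
            h = step.siblingIsLeft()
                    ? sha256(step.siblingHash(), h)   // sibling sits to the left of this node
                    : sha256(h, step.siblingHash());  // sibling sits to the right
        }
        return MessageDigest.isEqual(h, trustedDigest);
    }
}
```

The client's storage is just the 32-byte digest, which matches the constant-space property claimed in the abstract; verification work and communication grow only logarithmically with the size of the outsourced structure.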
Planning and management of cloud computing networks
NASA Astrophysics Data System (ADS)
Larumbe, Federico
The evolution of the Internet has a great impact on a big part of the population. People use it to communicate, query information, receive news, work, and as entertainment. Its extraordinary usefulness as a communication medium made the number of applications and technological resources explode. However, that network expansion comes at the cost of significant power consumption. If the power consumption of telecommunication networks and data centers is considered as the power consumption of a country, it would rank at the 5th place in the world. Furthermore, the number of servers in the world is expected to grow by a factor of 10 between 2013 and 2020. This context motivates us to study techniques and methods to allocate cloud computing resources in an optimal way with respect to cost, quality of service (QoS), power consumption, and environmental impact. The results we obtained from our test cases show that besides minimizing capital expenditures (CAPEX) and operational expenditures (OPEX), the response time can be reduced up to 6 times, power consumption by 30%, and CO2 emissions by a factor of 60. Cloud computing provides dynamic access to IT resources as a service. In this paradigm, programs are executed in servers connected to the Internet that users access from their computers and mobile devices. The first advantage of this architecture is to reduce the time of application deployment and interoperability, because a new user only needs a web browser and does not need to install software on local computers with specific operating systems. Second, applications and information are available from everywhere and with any device with an Internet access. Also, servers and IT resources can be dynamically allocated depending on the number of users and workload, a feature called elasticity. This thesis studies the resource management of cloud computing networks and is divided into three main stages. We start by analyzing the planning of cloud computing networks to get a comprehensive vision. The first question to be solved is what are the optimal data center locations. We found that the location of each data center has a big impact on cost, QoS, power consumption, and greenhouse gas emissions. An optimization problem with a multi-criteria objective function is proposed to decide jointly the optimal location of data centers and software components, link capacities, and information routing. Once the network planning has been analyzed, the problem of dynamic resource provisioning in real time is addressed. In this context, virtualization is a key technique in cloud computing because each server can be shared by multiple Virtual Machines (VMs) and the total power consumption can be reduced. In the same line of location problems, we propose a Green Cloud Broker that optimizes VM placement across multiple data centers. In fact, when multiple data centers are considered, response time can be reduced by placing VMs close to users, cost can be minimized, power consumption can be optimized by using energy efficient data centers, and CO2 emissions can be decreased by choosing data centers provided with renewable energy sources. The third stage of the analysis is the short-term management of a cloud data center. In particular, a method is proposed to assign VMs to servers by considering communication traffic among VMs. Cloud data centers receive new applications over time and these applications need on-demand resource provisioning.
Each application is composed of multiple types of VMs that interact among themselves. A program called the scheduler must place each new VM on a server, and that placement impacts QoS and power consumption. Our method places VMs that communicate with each other on servers that are close to each other in the network topology, thus reducing communication delay and increasing the throughput available among VMs. Furthermore, the power consumption of each type of server is considered, and the most efficient servers are chosen to host the VMs. The number of VMs of each application can be changed dynamically to match the workload, and servers not needed in a particular period can be suspended to save energy. The methodology developed is based on Mixed Integer Programming (MIP) models to formalize the problems and on state-of-the-art optimization solvers. Heuristics are then developed to solve cases with more than 1,000 potential data center locations for the planning problem, 1,000 nodes for the cloud broker, and 128,000 servers for the VM placement problem. Solutions with very small optimality gaps, between 0% and 1.95%, are obtained, with execution times on the order of minutes for the planning problem and less than a second for the real-time cases. We consider that this thesis on resource provisioning of cloud computing networks makes important contributions to this research area, and innovative commercial applications based on the proposed methods have a promising future.
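The thesis abstract above describes a scheduler that places communicating VMs on topologically close, power-efficient servers. The following minimal sketch illustrates one greedy heuristic in that spirit only; the cost weights, hop-distance matrix, traffic rates, and server power figures are invented for illustration and are not taken from the thesis.

```python
# Greedy VM placement sketch: favor servers that are power-efficient and
# close (in network hops) to servers hosting VMs this VM talks to.
servers = {            # server -> (idle_power_watts, free_vm_slots), assumed
    "s1": (120, 2),
    "s2": (90, 2),
    "s3": (150, 4),
}
hops = {("s1", "s2"): 1, ("s1", "s3"): 3, ("s2", "s3"): 2}  # hop distances

def dist(a, b):
    return 0 if a == b else hops.get((a, b), hops.get((b, a)))

traffic = {            # (vm_i, vm_j) -> traffic rate (Mbps), hypothetical
    ("web", "app"): 50,
    ("app", "db"): 80,
}
ALPHA, BETA = 1.0, 0.5   # weights for communication cost and power cost

placement = {}
for vm in ["web", "app", "db"]:
    best, best_cost = None, float("inf")
    for srv, (power, slots) in servers.items():
        if slots == 0:
            continue
        # Communication cost to already-placed peers of this VM.
        comm = sum(rate * dist(srv, placement[other])
                   for (a, b), rate in traffic.items()
                   for other in ([b] if a == vm else [a] if b == vm else [])
                   if other in placement)
        cost = ALPHA * comm + BETA * power
        if cost < best_cost:
            best, best_cost = srv, cost
    placement[vm] = best
    power, slots = servers[best]
    servers[best] = (power, slots - 1)

print(placement)   # -> {'web': 's2', 'app': 's2', 'db': 's1'} with these numbers
```

A real formulation would, as the thesis does, cast this as a MIP and fall back to heuristics only at scale; the sketch shows the cost trade-off, not the exact model.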
Honey Bee Colonies Remote Monitoring System.
Gil-Lebrero, Sergio; Quiles-Latorre, Francisco Javier; Ortiz-López, Manuel; Sánchez-Ruiz, Víctor; Gámiz-López, Victoria; Luna-Rodríguez, Juan Jesús
2016-12-29
Bees are very important for terrestrial ecosystems and, above all, for the subsistence of many crops, due to their ability to pollinate flowers. Currently, honey bee populations are decreasing due to colony collapse disorder (CCD). The reasons for CCD are not fully known, and as a result, it is essential to obtain all possible information on the environmental conditions surrounding the beehives. On the other hand, it is important to carry out such information gathering as non-intrusively as possible to avoid modifying the bees' working conditions and to obtain more reliable data. We designed a wireless sensor network that meets these requirements: a remote monitoring system (called WBee) based on a hierarchical three-level model formed by the wireless node, a local data server, and a cloud data server. WBee is a low-cost, fully scalable, easily deployable system with regard to the number and types of sensors and the number of hives and their geographical distribution. WBee saves the data at each of the levels if there are failures in communication. In addition, the nodes include a backup battery, which allows for further data acquisition and storage in the event of a power outage. Unlike other systems that monitor a single point of a hive, the system we present monitors and stores the temperature and relative humidity of the beehive in three different spots. Additionally, the hive is continuously weighed on a weighing scale. Real-time weight measurement is an innovation in wireless beehive-monitoring systems. We designed an adaptation board to facilitate the connection of the sensors to the node. Through the Internet, researchers and beekeepers can access the cloud data server to find out the condition of their hives in real time.
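WBee's three-level hierarchy buffers readings at each level whenever an upstream link is down. The sketch below illustrates only that generic store-and-forward idea; the reading format, the send function, and the failure model are assumptions, not details of the published system.

```python
# Store-and-forward sketch for a hive node: readings are appended to a local
# queue and flushed upstream when the link is available again.
import time, random

def read_sensors():
    # Placeholder for the three temperature/humidity probes and the scale.
    return {"t": time.time(),
            "temp_c": [34.8, 35.1, 33.9],
            "rh_pct": [61.0, 60.4, 62.3],
            "weight_kg": 42.7}

def send_upstream(reading):
    # Stand-in for the real radio/network transfer; fails randomly here.
    if random.random() < 0.3:
        raise ConnectionError("uplink down")

def flush(queue):
    while queue:
        try:
            send_upstream(queue[0])
        except ConnectionError:
            return              # keep buffered data, retry next cycle
        queue.pop(0)

queue = []
for _ in range(5):              # one acquisition cycle per iteration
    queue.append(read_sensors())
    flush(queue)
    time.sleep(0.1)
print(f"{len(queue)} readings still buffered locally")
```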
2012-01-01
Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
Modeling, Simulation and Analysis of Public Key Infrastructure
NASA Technical Reports Server (NTRS)
Liu, Yuan-Kwei; Tuey, Richard; Ma, Paul (Technical Monitor)
1998-01-01
Security is an essential part of network communication. The advances in cryptography have provided solutions to many of the network security requirements. Public Key Infrastructure (PKI) is the foundation of the cryptography applications. The main objective of this research is to design a model to simulate a reliable, scalable, manageable, and high-performance public key infrastructure. We build a model to simulate the NASA public key infrastructure by using SimProcess and MatLab Software. The simulation is from top level all the way down to the computation needed for encryption, decryption, digital signature, and secure web server. The application of secure web server could be utilized in wireless communications. The results of the simulation are analyzed and confirmed by using queueing theory.
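The abstract notes that the simulation results were confirmed with queueing theory. As a hedged illustration of that kind of cross-check (the arrival and service rates below are made up, not NASA's figures), an M/M/1 model gives the expected response time of a certificate-validation server:

```python
# M/M/1 sanity check: mean time in system W = 1 / (mu - lambda),
# mean number in system L = lambda * W (Little's law). Rates are illustrative.
lam = 40.0      # signature-verification requests per second (assumed)
mu = 55.0       # service rate of the PKI server (assumed)

rho = lam / mu                # utilization
W = 1.0 / (mu - lam)          # mean response time (s)
L = lam * W                   # mean number of requests in the system

print(f"utilization={rho:.2f}  mean response={W*1000:.1f} ms  in system={L:.2f}")
```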
Antony, Joby; Mathuria, D S; Datta, T S; Maity, Tanmoy
2015-12-01
The power of Ethernet for control and automation technology is now largely understood by the automation industry. Ethernet with HTTP (Hypertext Transfer Protocol) is one of the most widely accepted communication standards today. Ethernet is best known for enabling control through the Internet from anywhere in the globe. The Ethernet interface with built-in on-chip embedded servers ensures global connections for a crate-less model of control and data acquisition systems, which has several advantages over traditional crate-based control architectures for slow applications. This architecture completely eliminates the use of any extra PLC (Programmable Logic Controller) or similar control hardware in the automation network, as the control functions are coded in firmware inside the intelligent meters themselves. Here, we describe the indigenously built cryogenic control system for the linear accelerator at the Inter University Accelerator Centre, known as "CADS," which stands for "Complete Automation of Distribution System." CADS covers the complete hardware, firmware, and software implementation of the automated linac cryogenic distribution system using many Ethernet-based embedded cryogenic instruments developed in-house. Each instrument works as an intelligent meter, called a device-server, which has the control functions and control loops built into the firmware itself. Dedicated meters with built-in servers were designed out of ARM (Acorn RISC (Reduced Instruction Set Computer) Machine) and ATMEL processors and COTS (Commercial Off-the-Shelf) SMD (Surface Mount Device) components, with an analog sensor front-end and a digital back-end web server implementing remote procedure call over HTTP for digital control and readout functions. At present, 24 instruments running 58 embedded servers, each specific to a particular type of sensor-actuator combination for closed-loop operations, are deployed and distributed across the control LAN (Local Area Network). Six categories of such instruments have been identified for all cryogenic applications required for linac operation and were designed to build this medium-scale cryogenic automation setup. These devices have special features such as remote rebooters and daughter boards for PIDs (Proportional Integral Derivative controllers) to operate them remotely in radiation areas, and also have emergency switches by which each device can be taken into emergency mode temporarily. Finally, all the data are monitored, logged, controlled, and analyzed online at a central control room, which has a user-friendly control interface developed using LabVIEW®. This paper discusses the overall hardware, firmware, and software design and implementation for the cryogenics setup.
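Each CADS instrument exposes its readout and control loops through an embedded web server speaking remote procedure call over HTTP. The following sketch shows how a central client might poll such device-servers; the host names, URL path, query parameters, and JSON payload are purely hypothetical, since the paper does not publish its RPC schema.

```python
# Hypothetical poll of Ethernet device-servers; the endpoint layout is assumed.
from urllib.request import urlopen
import json

DEVICE_SERVERS = [
    "http://cryo-node-01.local",   # assumed host names
    "http://cryo-node-02.local",
]

def read_channel(base_url, channel):
    # Assumed RPC form: GET <base>/rpc?cmd=read&ch=<n> returning JSON.
    with urlopen(f"{base_url}/rpc?cmd=read&ch={channel}", timeout=2) as resp:
        return json.load(resp)

for base in DEVICE_SERVERS:
    try:
        print(base, read_channel(base, 1))
    except OSError as err:
        print(base, "unreachable:", err)
```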
Zhang, P; Aungskunsiri, K; Martín-López, E; Wabnig, J; Lobino, M; Nock, R W; Munns, J; Bonneau, D; Jiang, P; Li, H W; Laing, A; Rarity, J G; Niskanen, A O; Thompson, M G; O'Brien, J L
2014-04-04
We demonstrate a client-server quantum key distribution (QKD) scheme. Large resources such as the laser and detectors are situated at the server side, which is accessible via telecom fiber to a client requiring only an on-chip polarization rotator that may be integrated into a handheld device. The detrimental effects of unstable fiber birefringence are overcome by employing the reference-frame-independent QKD protocol for polarization qubits in polarization-maintaining fiber, where standard QKD protocols fail, as we show for comparison. This opens the way for quantum-enhanced secure communications between companies and members of the general public equipped with handheld mobile devices, via telecom-fiber tethering.
Wearable technology as a booster of clinical care
NASA Astrophysics Data System (ADS)
Jonas, Stephan; Hannig, Andreas; Spreckelsen, Cord; Deserno, Thomas M.
2014-03-01
Wearable technology defines a new class of smart devices that are accessories or clothing equipped with computational power and sensors, like Google Glass. In this work, we propose a novel concept for supporting everyday clinical pathways with wearable technology. In contrast to most prior work, we are not focusing on the omnipresent screen to display patient information or images, but are trying to maintain existing workflows. To achieve this, our system supports clinical staff as a documenting observer, intervening only when problems are detected. Using the example of medication preparation and administration, a task known to be prone to errors, we demonstrate the full potential of the new devices. Patient and medication identifiers are captured with the built-in camera, and the information is sent to a transaction server. The server communicates with the hospital information system to obtain patient records and medication information. The system then analyses the new medication for possible side effects and interactions with already administered drugs. The result is sent to the device while encapsulating all sensitive information, respecting data security and privacy. The user sees only traffic-light-style encoded feedback to avoid distraction. The server can reduce documentation effort and report in real time on possible problems during medication preparation or administration. In conclusion, we designed a secure system around three basic principles with many applications in everyday clinical work: (i) interaction and distraction are kept as low as possible; (ii) no patient data is displayed; and (iii) the device is a pure observer, not part of the workflow. By reducing errors and the documentation burden, our approach has the capability to boost clinical care.
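The transaction server in this concept returns only a traffic-light code to the device so that no patient data is displayed. A minimal sketch of that server-side decision follows; the interaction table, severity scores, and thresholds are invented for illustration and are not clinical data or the authors' implementation.

```python
# Server-side check sketch: score a proposed drug against drugs already
# administered and return only a traffic-light code to the wearable.
INTERACTIONS = {                          # hypothetical severity scores, keys sorted
    ("aspirin", "warfarin"): 0.9,
    ("clarithromycin", "simvastatin"): 0.8,
    ("aspirin", "ibuprofen"): 0.4,
}

def traffic_light(new_drug, administered):
    worst = 0.0
    for given in administered:
        pair = tuple(sorted((new_drug.lower(), given.lower())))
        worst = max(worst, INTERACTIONS.get(pair, 0.0))
    if worst >= 0.7:
        return "red"       # block and alert
    if worst >= 0.3:
        return "yellow"    # warn, require confirmation
    return "green"         # proceed

print(traffic_light("Aspirin", ["Warfarin", "Metformin"]))   # -> red
```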
Network Upgrade for the SLC: PEP II Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crane, M.; Call, M.; Clark, S.
2011-09-09
The PEP-II control system required a new network to support the system functions. This network, called CTLnet, is an FDDI/Ethernet based network using only TCP/IP protocols. An upgrade of the SLC Control System micro communications to use TCP/IP and SLCNET would allow all PEP-II control system nodes to use TCP/IP. CTLnet is private and separate from the SLAC public network. Access to nodes and control system functions is provided by multi-homed application servers with connections to both the private CTLnet and the SLAC public network. Monitoring and diagnostics are provided using a dedicated system. Future plans and current status information is included.
Building a Smartphone Seismic Network
NASA Astrophysics Data System (ADS)
Kong, Q.; Allen, R. M.
2013-12-01
We are exploring how to build a new type of seismic network using smartphones. The accelerometers in smartphones can be used to record earthquakes, the GPS unit can give an accurate location, and the built-in communication unit makes communication easier for this network. In the future, these smartphones may work as a supplementary network to the current traditional networks for scientific research and real-time applications. In order to build this network, we developed an application for Android phones and a server to record acceleration in real time. These records can be sent back to the server in real time and analyzed there. We evaluated the performance of the smartphone as a seismic recording instrument by comparing it with a high-quality accelerometer on controlled shake tables for a variety of tests, and also ran a noise-floor test. Based on the daily human activity data recorded by volunteers and the shake table test data, we also developed an algorithm for the smartphones to distinguish earthquakes from daily human activities. These all form the basis for setting up a new prototype smartphone seismic network in the near future.
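A common way to separate ground shaking from everyday handling of a phone is a short-term-average / long-term-average (STA/LTA) trigger on the acceleration amplitude. The sketch below shows that generic idea only; the window lengths, threshold, and synthetic trace are arbitrary choices, not the authors' detection algorithm.

```python
# Generic STA/LTA trigger sketch on an acceleration-magnitude trace.
def sta_lta(trace, sta_len=10, lta_len=100, threshold=4.0):
    triggers = []
    for i in range(lta_len, len(trace)):
        sta = sum(abs(x) for x in trace[i - sta_len:i]) / sta_len
        lta = sum(abs(x) for x in trace[i - lta_len:i]) / lta_len
        if lta > 0 and sta / lta > threshold:
            triggers.append(i)
    return triggers

quiet = [0.01] * 200                     # background hand/phone noise
shaking = quiet[:150] + [0.5] * 50       # sudden strong motion at sample 150
print(sta_lta(shaking))                  # indices shortly after the onset
```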
Study on an agricultural environment monitoring server system using Wireless Sensor Networks.
Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun
2010-01-01
This paper proposes an agricultural environment monitoring server system for monitoring information concerning an outdoor agricultural production environment utilizing Wireless Sensor Network (WSN) technology. The proposed system collects environmental and soil information outdoors through WSN-based environmental and soil sensors, collects image information through CCTVs, and collects location information using GPS modules. The collected information is stored in a database by the agricultural environment monitoring server, which consists of a sensor manager that handles information collected from the WSN sensors, an image information manager that handles image information collected from the CCTVs, and a GPS manager that processes the location information of the system; the server then provides this information to producers. In addition, a solar cell-based power supply is implemented for the server system so that it can be used in agricultural environments with insufficient power infrastructure. This agricultural environment monitoring server system can monitor outdoor environmental information remotely, and the use of such a system can be expected to contribute to increasing crop yields and improving quality in the agricultural field by supporting the decision making of crop producers through analysis of the collected information.
Zebra: A striped network file system
NASA Technical Reports Server (NTRS)
Hartman, John H.; Ousterhout, John K.
1992-01-01
The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead, it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong to. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of parity updates.
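Zebra's availability rests on keeping a parity fragment per stripe, so a lost fragment can be rebuilt by XOR-ing the survivors. A minimal sketch of that parity idea (fragment size and contents are arbitrary here, and real stripe fragments carry log-structured metadata this sketch ignores):

```python
# XOR parity over equal-length stripe fragments, in the spirit of RAID/Zebra.
def parity(fragments):
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, byte in enumerate(frag):
            out[i] ^= byte
    return bytes(out)

stripe = [b"client A data...", b"client B data...", b"more log bytes.."]
p = parity(stripe)

# Simulate losing fragment 1 and rebuilding it from the others plus parity.
rebuilt = parity([stripe[0], stripe[2], p])
assert rebuilt == stripe[1]
print("recovered:", rebuilt)
```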
Code of Federal Regulations, 2010 CFR
2010-04-01
... server or hard drive. Certificate Policy means a named set of rules that sets forth the applicability of..., prescribe specific performance requirements, practices, formats, communications protocols, etc., for...
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
NASA Astrophysics Data System (ADS)
Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi
2016-08-01
The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC is comprised of several Linux-based servers. These servers are assigned various roles including: the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels, and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands. The Telemetry Engineering Display (TED) server is the RTC interface to TMT and other subsystems. The TED server receives all external commands and dispatches them to the rest of the RTC servers and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The Persistent Telemetry Storage (PTS) server contains fault tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR).
The Development of the Command and Control Centre for Trial Kondari
2010-07-01
the C2 centre inside a blue bubble whose modems have privately assigned IP addresses which are authenticated by Telstra’s radius server. No other sim...cards can communicate on this private network unless authorised by the radius server. The Next IP network is a network bubble within the larger Next...for all machines on the network. EPLRS Network Manager (ENM) radio – authenticates and manages all the EPLRS radios. The basic plan’s final
Optimal Self-Tuning PID Controller Based on Low Power Consumption for a Server Fan Cooling System.
Lee, Chengming; Chen, Rongshun
2015-05-20
Recently, saving cooling power in servers by controlling fan speed has attracted considerable attention because of the increasing demand for high-density servers. This paper presents an optimal self-tuning proportional-integral-derivative (PID) controller, combining a PID neural network (PIDNN) with fan-power-based optimization of the transient-state temperature response in the time domain, for a server fan cooling system. Because the thermal model of the cooling system is nonlinear and complex, a server mockup system simulating a 1U rack server was constructed, and a fan power model was created using a third-order nonlinear curve fit to determine the cooling power consumed under fan speed control. The PIDNN with a time-domain criterion is used to tune all PID gains online in an optimized manner. The proposed controller was validated through step-response experiments in which the server operated from the low- to the high-power state. The results show that up to 14% of a server's fan cooling power can be saved if the fan control permits a slight overshoot in the temperature response of the electronic components, which may provide a time-saving strategy for tuning the PID controller to control the server fan speed during low fan power consumption.
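The controller described above tunes PID gains online and trades fan power against a small temperature overshoot, with fan power modeled by a third-order fit to fan speed. The sketch below shows only a generic discrete PID update plus a cubic power model with invented coefficients; it is not the PIDNN tuning scheme from the paper, and the toy thermal response is an assumption.

```python
# Discrete PID step plus a cubic fan-power model; all constants are made up.
def pid_step(error, state, kp=2.0, ki=0.1, kd=0.5, dt=1.0):
    state["integral"] += error * dt
    derivative = (error - state["prev_error"]) / dt
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

def fan_power(rpm, a3=3e-11, a2=2e-8, a1=5e-5, a0=0.2):
    # Third-order curve of fan power (W) versus speed (RPM), illustrative only.
    return a3 * rpm**3 + a2 * rpm**2 + a1 * rpm + a0

state = {"integral": 0.0, "prev_error": 0.0}
setpoint, temp = 70.0, 78.0        # target and measured component temperature (C)
for _ in range(5):
    error = temp - setpoint
    rpm = max(2000.0, min(12000.0, 4000.0 + 800.0 * pid_step(error, state)))
    temp -= 0.05 * (rpm / 1000.0)  # toy thermal response, not a real model
    print(f"rpm={rpm:5.0f}  power={fan_power(rpm):5.1f} W  temp={temp:4.1f} C")
```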
Communications Effects Server (CES) Model for Systems Engineering Research
2012-01-31
[Model/diagram residue from the CES description; recoverable content: the CES block exposes logical Visualization Tool, HLA Tool, DIS Tool, and STK Tool interfaces and an Execution Kernels module, and interoperates with STK when running simulations; among the GUI components, the Architect represents the main network design and visualization tool; related blocks include third-party visualization, analysis, and text-editor tools, HLA tools, and Analyst/User (Army) roles.]
On the optimal use of a slow server in two-stage queueing systems
NASA Astrophysics Data System (ADS)
Papachristos, Ioannis; Pandelis, Dimitrios G.
2017-07-01
We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs assuming that two servers cannot collaborate to work on the same job and preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the one downstream should not idle, and the same is true for the one upstream when holding costs are larger there. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.
Analysis of practical backoff protocols for contention resolution with multiple servers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, L.A.; MacKenzie, P.D.
Backoff protocols are probably the most widely used protocols for contention resolution in multiple access channels. In this paper, we analyze the stochastic behavior of backoff protocols for contention resolution among a set of clients and servers, each server being a multiple access channel that deals with contention like an Ethernet channel. We use the standard model in which each client generates requests for a given server according to a Bernoulli distribution with a specified mean. The client-server request rate of a system is the maximum over all client-server pairs (i, j) of the sum of all request rates associated with either client i or server j. Our main result is that any superlinear polynomial backoff protocol is stable for any multiple-server system with a sub-unit client-server request rate. We confirm the practical relevance of our result by demonstrating experimentally that the average waiting time of requests is very small when such a system is run with reasonably few clients and reasonably small request rates such as those that occur in actual Ethernets. Our result is the first proof of stability for any backoff protocol for contention resolution with multiple servers. Our result is also the first proof that any weakly acknowledgment-based protocol is stable for contention resolution with multiple servers and such high request rates. Two special cases of our result are of interest. Hastad, Leighton and Rogoff have shown that for a single-server system with a sub-unit client-server request rate any modified superlinear polynomial backoff protocol is stable. These modified backoff protocols are similar to standard backoff protocols but require more random bits to implement. The special case of our result in which there is only one server extends the result of Hastad, Leighton and Rogoff to standard (practical) backoff protocols. Finally, our result applies to dynamic routing in optical networks.
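The result concerns superlinear polynomial backoff, where a client that has collided i times waits a random amount of time drawn from a window that grows polynomially in i. A hedged sketch of one client's retry loop under such a protocol follows; the exponent, slot length, and fake channel model are assumptions, not parameters from the paper.

```python
# Polynomial backoff sketch: after the i-th collision, wait a uniform random
# number of slots in [0, (i+1)**ALPHA). ALPHA > 1 makes the backoff superlinear.
import random

ALPHA = 1.5          # assumed backoff exponent
SLOT = 0.001         # assumed slot length in seconds

def channel_busy():
    # Stand-in for contention on the shared server/channel.
    return random.random() < 0.6

def send_with_backoff(max_attempts=20):
    for attempt in range(max_attempts):
        if not channel_busy():
            return attempt                    # number of collisions before success
        window = (attempt + 1) ** ALPHA
        wait_slots = random.uniform(0, window)
        _ = wait_slots * SLOT                 # time.sleep(...) omitted so the sketch runs instantly
    return None

print("collisions before success:", send_with_backoff())
```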
NASA Technical Reports Server (NTRS)
Fishkind, Stanley; Harris, Richard N.; Pfeiffer, William A.
1996-01-01
The methodologies of the NASA requirements processing system, originally designed to enhance NASA's customer interface and response time, are reviewed. The response of NASA to the problems associated with the system is presented, and it is shown what was done to facilitate the process and to improve customer relations. The requirements generation system (RGS), a computer-supported client-server system, adopted by NASA is presented. The RGS system is configurable on a per-mission basis and can be structured to allow levels of requirements. The details provided concerning the RGS include the recommended configuration, information on becoming an RGS user and network connectivity worksheets for computer users.
NASA Astrophysics Data System (ADS)
Farouk, Ahmed; Zakaria, Magdy; Megahed, Adel; Omara, Fatma A.
2015-11-01
In this paper, we generalize a secured direct communication process between N users with partial and full cooperation of a quantum server. Thus, N - 1 disjoint users u1, u2, …, uN-1 can transmit a secret message of classical bits to a remote user uN by utilizing the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by EPR entangled pairs and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process indicates that N - 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N - 1 users and a remote user uN can communicate without an established quantum channel among them through a full cooperation process. The security analysis of the authentication and communication processes against many types of attacks proves that the attacker cannot gain any information while intercepting either the authentication or the communication process. Hence, the security of the transmitted message among the N users is ensured, as the attacker introduces an error probability irrespective of the sequence of measurement.
Masys, D. R.; Baker, D. B.
1997-01-01
The Internet's World-Wide Web (WWW) provides an appealing medium for the communication of health related information due to its ease of use and growing popularity. But current technologies for communicating data between WWW clients and servers are systematically vulnerable to certain types of security threats. Prominent among these threats are "Trojan horse" programs running on client workstations, which perform some useful and known function for a user, while breaching security via background functions that are not apparent to the user. The Patient-Centered Access to Secure Systems Online (PCASSO) project of SAIC and UCSD is a research, development and evaluation project to exploit state-of-the-art security and WWW technology for health care. PCASSO is designed to provide secure access to clinical data for healthcare providers and their patients using the Internet. PCASSO will be evaluated for both safety and effectiveness, and may provide a model for secure communications via public data networks. PMID:9357644
2009-12-01
Management Pilot Project.” NRSW E-Notes, No. 132, January 30, 2008. http://secnavportal.donhq.navy.mil/portal/server.pt/gateway/PTARGS_0_0_2426… [Fragmentary table-of-contents excerpt: Conservation; Material and Social Incentives Impact Behavior; Support an Integrated Communication Process; Methods.]
CIS3/398: Implementation of a Web-Based Electronic Patient Record for Transplant Recipients
Fritsche, L; Lindemann, G; Schroeter, K; Schlaefer, A; Neumayer, H-H
1999-01-01
Introduction While the "Electronic patient record" (EPR) is a frequently quoted term in many areas of healthcare, only a few working EPR systems are available so far. To justify their use, EPRs must be able to store and display all kinds of medical information in a reliable, secure, time-saving, user-friendly way at an affordable price. Fields in which patients are attended to by a large number of medical specialists over a prolonged period of time are best suited to demonstrate the potential benefits of an EPR. The aim of our project was to investigate the feasibility of an EPR based solely on off-the-shelf software and Internet technology in the field of organ transplantation. Methods The EPR system consists of three main elements: data-storage facilities, a Web server, and a user interface. Data are stored either in a relational database (Sybase Adaptive 11.5, Sybase Inc., CA) or, in the case of pictures (JPEG) and files in application formats (e.g., Word documents), on a Windows NT 4.0 Server (Microsoft Corp., WA). The entire communication of all data is handled by a Web server (IIS 4.0, Microsoft) with an Active Server Pages extension. The database is accessed by ActiveX Data Objects via the ODBC interface. The only software required on the user's computer is Internet Explorer 4.01 (Microsoft); during the first use of the EPR, the ActiveX HTML Layout Control is automatically added. The user can access the EPR via a Local or Wide Area Network or by dial-up connection. If the EPR is accessed from outside the firewall, all communication is encrypted (SSL 3.0, Netscape Comm. Corp., CA). The speed of the EPR system was tested with 50 repeated measurements of the duration of two key functions: 1) display of all lab results for a given day and patient, and 2) automatic composition of a letter containing diagnoses, medication, notes, and lab results. For the test, a 233 MHz Pentium II processor with a 10 Mbit/s Ethernet connection (ping time below 10 ms) over two hubs to the server (400 MHz Pentium II, 256 MB RAM) was used. Results So far the EPR system has been running for eight consecutive months and contains complete records of 673 transplant recipients with an average follow-up of 9.9 (SD: 4.9) years and a total of 1.1 million lab values. Instruction to enable new users to perform basic operations took less than two hours in all cases. The average duration of laboratory access was 0.9 (SD: 0.5) seconds; the automatic composition of a letter took 6.1 (SD: 2.4) seconds. Apart from the database and Windows NT, all other components are available for free. The development of the EPR system required less than two person-years. Conclusion Implementation of an electronic patient record that meets the requirements of comprehensiveness, reliability, security, speed, user-friendliness, and affordability using a combination of off-the-shelf software products can be feasible if the current state-of-the-art Internet technology is applied.
HIS priorities in developing countries.
Amado Espinosa, L
1995-04-01
Looking for a solution to fulfill the requirements that the new global economic system demands, developing countries face a reality of poor communications infrastructure, a delay in applying information technology within organizations, and a semi-closed political system that avoids the necessary reforms. HIS technology has been developed mostly for transactional purposes on mini and mainframe platforms. Administrative modules are the most frequently observed, and physicians are now requiring more support for their activities. The second generation of information systems will take advantage of PC technology, client-server models, and telecommunications to achieve integration. International organizations, academic and industrial, public and private, will play a major role in transferring technology and developing this area.
Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture
NASA Technical Reports Server (NTRS)
Fiene, Bruce F.
1994-01-01
The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.
Modular multiple sensors information management for computer-integrated surgery.
Vaccarella, Alberto; Enquobahrie, Andinet; Ferrigno, Giancarlo; Momi, Elena De
2012-09-01
In the past 20 years, technological advancements have modified the concept of modern operating rooms (ORs) with the introduction of computer-integrated surgery (CIS) systems, which promise to enhance the outcomes, safety and standardization of surgical procedures. With CIS, different types of sensor (mainly position-sensing devices, force sensors and intra-operative imaging devices) are widely used. Recently, the need for a combined use of different sensors raised issues related to synchronization and spatial consistency of data from different sources of information. In this study, we propose a centralized, multi-sensor management software architecture for a distributed CIS system, which addresses sensor information consistency in both space and time. The software was developed as a data server module in a client-server architecture, using two open-source software libraries: Image-Guided Surgery Toolkit (IGSTK) and OpenCV. The ROBOCAST project (FP7 ICT 215190), which aims at integrating robotic and navigation devices and technologies in order to improve the outcome of the surgical intervention, was used as the benchmark. An experimental protocol was designed in order to prove the feasibility of a centralized module for data acquisition and to test the application latency when dealing with optical and electromagnetic tracking systems and ultrasound (US) imaging devices. Our results show that a centralized approach is suitable for minimizing synchronization errors; latency in the client-server communication was estimated to be 2 ms (median value) for tracking systems and 40 ms (median value) for US images. The proposed centralized approach proved to be adequate for neurosurgery requirements. Latency introduced by the proposed architecture does not affect tracking system performance in terms of frame rate and limits US images frame rate at 25 fps, which is acceptable for providing visual feedback to the surgeon in the OR. Copyright © 2012 John Wiley & Sons, Ltd.
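The reported figures (about 2 ms for tracking data and 40 ms per ultrasound frame, median values) come from measuring client-server communication latency. A generic way to take such a measurement over a local socket is sketched below with a throwaway echo server; this is not the IGSTK/OpenCV code used in the study, and the payload size is an arbitrary stand-in for one tracking message.

```python
# Round-trip latency sketch: a thread echoes payloads back over TCP and the
# client records the median round-trip time.
import socket, statistics, threading, time

def echo_server(sock):
    conn, _ = sock.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
payload = b"x" * 1024            # pretend this is one tracking-frame message
samples = []
for _ in range(50):
    t0 = time.perf_counter()
    client.sendall(payload)
    received = 0
    while received < len(payload):
        received += len(client.recv(65536))
    samples.append((time.perf_counter() - t0) * 1000.0)
client.close()
print(f"median round trip: {statistics.median(samples):.3f} ms")
```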
Eccher, C; Berloffa, F; Demichelis, F; Larcher, B; Galvagni, M; Sboner, A; Graiff, A; Forti, S
1999-01-01
Introduction This study describes a tele-consultation system (TCS) developed to provide a computing environment over a Wide Area Network (WAN) in northern Italy (Province of Trento) that can be used by two or more physicians to share medical data and to work co-operatively on medical records. A pilot study has been carried out in oncology to assess the effectiveness of the system. The aim of this project is to facilitate the management of oncology patients by improving communication among the specialists of central and district hospitals. Methods and Results The TCS is an Intranet-based solution. The Intranet is based on a PC WAN with Windows NT Server, Microsoft SQL Server, and Internet Information Server. TCS is composed of native and custom applications developed in the Microsoft Windows (9x and NT) environment. The basic component of the system is the multimedia digital medical record, structured as a collection of HTML and ASP pages. A distributed relational database will allow users to store and retrieve medical records, accessed by a dedicated Web browser via the Web server. The medical data to be stored and the presentation architecture of the clinical record were determined in close collaboration with the clinicians involved in the project. TCS will allow a multi-point tele-consultation (TC) among two or more participants on remote computers, providing synchronized surfing through the clinical report. A set of collaborative and personal tools, a whiteboard with drawing tools, point-to-point digital audio-conferencing, chat, a local notepad, and an e-mail service are integrated in the system to provide a user-friendly environment. TCS has been developed as a client-server architecture. The client part of the system is based on the Microsoft Web Browser control and provides the user interface and the tools described above. The server part, running all the time on a dedicated computer, accepts connection requests and manages the connections among the participants in a TC, allowing multiple TCs to run simultaneously. TCS has been developed in the Visual C++ environment using the MFC library and COM technology; ActiveX controls have been written in Visual Basic to perform dedicated tasks from inside the HTML clinical report. Before deploying the system in the hospital departments involved in the project, TCS was tested in our laboratory by clinicians involved in the project to evaluate its usability. Discussion TCS has the potential to support a "multi-disciplinary distributed virtual oncological meeting". The specialists of different departments and different hospitals can attend "virtual meetings" and interactively discuss medical data. An expected benefit of the "virtual meeting" is the possibility of providing expert remote advice from oncologists to peripheral cancer units in formulating treatment plans, conducting follow-up sessions, and supporting clinical research.
search.bioPreprint: a discovery tool for cutting edge, preprint biomedical research articles
Iwema, Carrie L.; LaDue, John; Zack, Angela; Chattopadhyay, Ansuman
2016-01-01
The time it takes for a completed manuscript to be published traditionally can be extremely lengthy. Article publication delay, which occurs in part due to constraints associated with peer review, can prevent the timely dissemination of critical and actionable data associated with new information on rare diseases or developing health concerns such as Zika virus. Preprint servers are open access online repositories housing preprint research articles that enable authors (1) to make their research immediately and freely available and (2) to receive commentary and peer review prior to journal submission. There is a growing movement of preprint advocates aiming to change the current journal publication and peer review system, proposing that preprints catalyze biomedical discovery, support career advancement, and improve scientific communication. While the number of articles submitted to and hosted by preprint servers are gradually increasing, there has been no simple way to identify biomedical research published in a preprint format, as they are not typically indexed and are only discoverable by directly searching the specific preprint server websites. To address this issue, we created a search engine that quickly compiles preprints from disparate host repositories and provides a one-stop search solution. Additionally, we developed a web application that bolsters the discovery of preprints by enabling each and every word or phrase appearing on any web site to be integrated with articles from preprint servers. This tool, search.bioPreprint, is publicly available at http://www.hsls.pitt.edu/resources/preprint. PMID:27508060
Video streaming technologies using ActiveX and LabVIEW
NASA Astrophysics Data System (ADS)
Panoiu, M.; Rat, C. L.; Panoiu, C.
2015-06-01
The goal of this paper is to present the possibilities for remote image processing through data exchange between two programming technologies: LabVIEW and ActiveX. ActiveX refers to the process of controlling one program from another via an ActiveX component, where one program acts as the client and the other as the server. LabVIEW can be either client or server. Both programs (client and server) exist independently of each other but are able to share information. The client communicates with the ActiveX objects that the server exposes to allow the sharing of information [7]. In the case of video streaming [1] [2], most ActiveX controls can only display the data and are incapable of transforming it into a data type that LabVIEW can process. This becomes problematic when the system is used for remote image processing. The LabVIEW environment itself provides few, if any, possibilities for video streaming, and the methods it does offer are usually not high performance, but it possesses high-performance toolkits and modules specialized in image processing, making it ideal for processing the captured data. Therefore, we chose to use existing software specialized in video streaming along with LabVIEW and to capture the data provided by it for further use within LabVIEW. The software we studied (the ActiveX controls of a series of media players that utilize streaming technology) provides high-quality data and a very small transmission delay, ensuring the reliability of the image processing results.
Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A
2017-04-01
In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of using remote computing power for the parallelization and acceleration of computationally and time intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate the theoretical MGCS performance acceleration and to intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine is a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
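The acceleration analysis above says speedup approaches the number of GPUs when computation dominates memory and transfer operations. A small, hedged version of such an analytical model follows, in the spirit of Amdahl-style reasoning; the split between parallelizable compute time and fixed transfer cost is assumed and is not the authors' exact formula.

```python
# Toy performance model: total time = serial transfer cost + compute / n_gpus.
# The constants are illustrative, not measurements from the paper.
def predicted_time(n_gpus, compute_s=42.0, transfer_s=1.5):
    return transfer_s + compute_s / n_gpus

t1 = predicted_time(1)
for n in (1, 2, 4, 8, 14):
    tn = predicted_time(n)
    print(f"{n:2d} GPUs: {tn:6.2f} s  speedup {t1 / tn:4.1f}x")
```

As the fixed transfer term shrinks relative to compute, the printed speedups approach n, matching the qualitative behavior reported above.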
2009-01-01
[Fragment of a comparison table of COTS Web-based survey packages: database servers supported (Oracle 9i/10g, MySQL, MS SQL Server); operating systems supported (Windows 2000 Server, Windows 2003 Server); Web servers supported (WebStar on Mac OS X, SunOne, Internet Information Services (IIS)).] …challenges of Web-based surveys are: 1) identifying the best Commercial Off the Shelf (COTS) Web-based survey packages to serve the particular…
Interfaces for Distributed Systems of Information Servers.
ERIC Educational Resources Information Center
Kahle, Brewster; And Others
1992-01-01
Describes two systems--Wide Area Information Servers (WAIS) and Rosebud--that provide protocol-based mechanisms for accessing remote full-text information servers. Design constraints, human interface design, and implementation are examined for five interfaces to these systems developed to run on the Macintosh or Unix terminals. Sample screen…
Achieving Reliable Communication in Dynamic Emergency Responses
Chipara, Octav; Plymoth, Anders N.; Liu, Fang; Huang, Ricky; Evans, Brian; Johansson, Per; Rao, Ramesh; Griswold, William G.
2011-01-01
Emergency responses require the coordination of first responders to assess the condition of victims, stabilize their condition, and transport them to hospitals based on the severity of their injuries. WIISARD is a system designed to facilitate the collection of medical information and its reliable dissemination during emergency responses. A key challenge in WIISARD is to deliver data with high reliability as first responders move and operate in a dynamic radio environment fraught with frequent network disconnections. The initial WIISARD system employed a client-server architecture and an ad-hoc routing protocol was used to exchange data. The system had low reliability when deployed during emergency drills. In this paper, we identify the underlying causes of unreliability and propose a novel peer-to-peer architecture that in combination with a gossip-based communication protocol achieves high reliability. Empirical studies show that compared to the initial WIISARD system, the redesigned system improves reliability by as much as 37% while reducing the number of transmitted packets by 23%. PMID:22195075
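The redesigned WIISARD replaces client-server delivery with peer-to-peer, gossip-based dissemination. The following minimal sketch shows only the generic push-gossip idea on a static node set; the fan-out, node count, and termination condition are arbitrary, and the real protocol additionally handles mobility and disconnection.

```python
# Push-gossip sketch: each round, every node that holds the record forwards
# it to a few random peers until all nodes have it. Parameters are illustrative.
import random

NODES = list(range(30))     # responder devices
FANOUT = 3
knows = {0}                 # node 0 observed a new patient record

rounds = 0
while len(knows) < len(NODES):
    rounds += 1
    for node in list(knows):
        for peer in random.sample(NODES, FANOUT):
            knows.add(peer)
print(f"record reached all {len(NODES)} nodes in {rounds} rounds")
```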
Commercialization of Advanced Communications Technology Satellite (ACTS) technology
NASA Astrophysics Data System (ADS)
Plecity, Mark S.; Strickler, Walter M.; Bauer, Robert A.
1996-03-01
In an on-going effort to maintain United States leadership in communication satellite technology, the National Aeronautics and Space Administration (NASA) led the development of the Advanced Communications Technology Satellite (ACTS). NASA's ACTS program provides industry, academia, and government agencies the opportunity to perform both technology and telecommunication service experiments with a leading-edge communication satellite system. Over 80 organizations are using ACTS as a multi-server test bed to establish the communication technologies and services of the future. ACTS was designed to provide demand assigned multiple access (DAMA) digital communications with a minimum switchable circuit bandwidth of 64 Kbps and a maximum channel bandwidth of 900 MHz. It can, therefore, provide service to thin routes as well as connect fiber backbones in supercomputer networks or across oceans, or restore full communications in the event of a natural or man-made disaster. Service can also be provided to terrestrial and airborne mobile users. Commercial applications of ACTS technologies include telemedicine; distance education; Department of Defense operations; mobile communications; aeronautical applications; terrestrial applications; and disaster recovery. This paper briefly describes the ACTS system and the enabling technologies employed by ACTS, including Ka-band hopping spot beams, on-board routing and switching, and rain fade compensation. When used in conjunction with a time division multiple access (TDMA) architecture, these technologies provide a higher capacity, lower cost satellite system. Furthermore, examples of completed user experiments, future experiments, and the plans of organizations to commercialize ACTS technology in their own future offerings are discussed.
The HydroServer Platform for Sharing Hydrologic Data
NASA Astrophysics Data System (ADS)
Tarboton, D. G.; Horsburgh, J. S.; Schreuders, K.; Maidment, D. R.; Zaslavsky, I.; Valentine, D. W.
2010-12-01
The CUAHSI Hydrologic Information System (HIS) is an Internet-based system that supports sharing of hydrologic data. HIS consists of databases connected using the Internet through Web services, as well as software for data discovery, access, and publication. The HIS architecture comprises servers for publishing and sharing data, a centralized catalog to support cross-server data discovery, and a desktop client to access and analyze data. This paper focuses on HydroServer, the component developed for sharing and publishing space-time hydrologic datasets. A HydroServer is a computer server that contains a collection of databases, web services, tools, and software applications that allow data producers to store, publish, and manage the data from an experimental watershed or project site. HydroServer is designed to permit publication of data as part of a distributed national/international system, while still locally managing access to the data. We describe the HydroServer architecture and software stack, including tools for managing and publishing time-series data for fixed-point monitoring sites as well as spatially distributed GIS datasets that describe a particular study area, watershed, or region. HydroServer adopts a standards-based approach to data publication, relying on accepted and emerging standards for data storage and transfer. The CUAHSI-developed HydroServer code is free, with community code development managed through the CodePlex open-source code repository and development system. There is some reliance on widely used commercial software for general-purpose and standard data publication capability. The sharing of data in a common format is one way to stimulate interdisciplinary research and collaboration. It is anticipated that the growing, distributed network of HydroServers will facilitate cross-site comparisons and large-scale studies that synthesize information from diverse settings, making the network as a whole greater than the sum of its parts in advancing hydrologic research. Details of the CUAHSI HIS can be found at http://his.cuahsi.org, and the HydroServer CodePlex site is at http://hydroserver.codeplex.com.
The transition to digital media in biocommunications.
Lynch, P J
1996-01-01
As digital audiovisual media become dominant in biomedical communications, the skills of human interface design and the technology of client-server multimedia data networks will underlie and influence virtually every aspect of biocommunications professional practice. The transition to digital communications media will require financial, organizational, and professional changes in current biomedical communications departments, and will require a multi-disciplinary approach that will blur the boundaries of the current biocommunications professions.
Providing Internet Access to High-Resolution Lunar Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMoon server is a computer program that provides Internet access to high-resolution Lunar images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of the Moon. The OnMoon server implements the Open Geospatial Consortium (OGC) Web Map Service (WMS) server protocol and supports Moon-specific extensions. Unlike other Internet map servers that provide Lunar data using an Earth coordinate system, the OnMoon server supports encoding of data in Moon-specific coordinate systems. The OnMoon server offers access to most of the available high-resolution Lunar image and elevation data. This server can generate image and map files in the tagged image file format (TIFF) or the Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. Full-precision spectral arithmetic processing is also available, by use of a custom SLD extension. This server can dynamically add shaded relief based on the Lunar elevation to any image layer. This server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
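For context, a WMS GetMap request of the kind such a server accepts can be issued with a plain HTTP query. The sketch below is hedged: the endpoint URL, layer name, and coordinate reference system are hypothetical, and a real client would read the valid values from the server's GetCapabilities response.

```python
# Sketch of an OGC WMS 1.1.1 GetMap request to a map server like the one
# described above. Endpoint URL and layer name are hypothetical placeholders.
import requests

WMS_ENDPOINT = "https://example.org/onmoon/wms"   # hypothetical endpoint

params = {
    "SERVICE": "WMS",
    "VERSION": "1.1.1",
    "REQUEST": "GetMap",
    "LAYERS": "lunar_basemap",        # hypothetical layer name
    "STYLES": "",
    "SRS": "EPSG:4326",               # a Moon-specific CRS would be used in practice
    "BBOX": "-180,-90,180,90",
    "WIDTH": "1024",
    "HEIGHT": "512",
    "FORMAT": "image/jpeg",
}

response = requests.get(WMS_ENDPOINT, params=params, timeout=60)
response.raise_for_status()
with open("moon.jpg", "wb") as f:
    f.write(response.content)
```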
NASA Technical Reports Server (NTRS)
Hein, G. F.
1974-01-01
Special-purpose satellites are very cost sensitive to the number of broadcast channels and usually have Poisson arrivals, fairly low utilization (less than 35%), and a very high availability requirement. To determine the effects of limiting C, the number of channels, the Poisson-arrival, infinite-server queueing model is modified to describe the many-server case. The model is predicated on the reproductive property of the Poisson distribution.
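As a hedged illustration of the effect of limiting the number of channels C, the standard Erlang-B (M/M/C/C loss) recursion below computes the blocking probability seen by Poisson arrivals. It is a generic loss-model sketch, not the modified model derived in the report; the offered load value is illustrative.

```python
# Erlang-B blocking probability for an M/M/C/C loss system: how limiting the
# number of channels C affects the probability that an arriving (Poisson)
# request finds all channels busy. Generic illustration only.
def erlang_b(offered_load: float, channels: int) -> float:
    """offered_load = arrival rate / service rate (in Erlangs)."""
    b = 1.0
    for c in range(1, channels + 1):
        b = (offered_load * b) / (c + offered_load * b)
    return b

if __name__ == "__main__":
    load = 3.0  # Erlangs, chosen so per-channel utilization stays low
    for c in (4, 8, 12, 16):
        print(f"C={c:2d}  blocking probability={erlang_b(load, c):.6f}")
```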
A telemedicine system for wireless home healthcare based on Bluetooth and the Internet.
Zhao, Xiaoming; Fei, Ding-Yu; Doarn, Charles R; Harnett, Brett; Merrell, Ronald
2004-01-01
The VitalPoll Telemedicine System (VTS) was designed and developed for wireless home healthcare. The aims of this study were: to design the architecture and communication methods for a telemedicine system; to implement a physiologic routing hub to collect data from different medical devices and sensors; and to evaluate the feasibility of this system for applications in wireless home healthcare. The VTS was built using Bluetooth wireless and Internet technologies with a client/server architecture. Several medical devices that acquire vital signs, such as real-time electrocardiogram signals, heart rate, body temperature, and activity (physical motion), were integrated into the VTS. Medical information and data were transmitted over short-range interfaces (USB, RS232), wireless communication, and the Internet. The medical results were stored in a database and presented using a web browser. The patient's vital signals can be collected, transmitted, and displayed in real time by the VTS. The experiments verified that there was no data loss during Bluetooth and Internet communication. Bluetooth and the Internet provide enough channel bandwidth to transmit these vital signs. The experimental results show that the VTS may be suitable as a practical telemedicine system for home healthcare.
Report #11-P-0597, September 9, 2011. Vulnerability testing of EPA’s directory service system authentication and authorization servers conducted in March 2011 identified authentication and authorization servers with numerous vulnerabilities.
DIABCARE Quality Network in Europe--a model for quality management in chronic diseases.
Piwernetz, K
2001-04-01
The DIABCARE Q-Net project developed a complete and integrated information technology system to monitor diabetes care, according to the gold standards of the St Vincent Declaration Action Program. This is the first Telematic platform for standardized documentation on medical quality and evaluation across Europe, which will serve as a model for other chronic diseases. Quality development starts from the comparison of diabetes services, based on the key data on diabetes care in the basic information sheet. This is a 141 field form, which is to be completed once a year for each patient under the care of the diabetes team. The system performs an analysis of the local data and compares the data with peer teams by means of telecommunication of anonymous data. These data are collected regionally. At the next level these regional data are compared on a national basis across Europe using dedicated communication lines. National data can be compared transnationally by the use of the Internet and the DIABCARE benchmarking servers. These different lines are used according to the necessary security standards. Medical data are transferred via dedicated lines, aggregated data via the Internet. The architecture follows the open-platform concept in order to allow for heterogeneous technical environments. Already at the start of the project, the necessity for expanding the quality approach to telemedicine methodology was identified and included. For each level, specific programs are available to improve the performance of diabetes care delivery: DIABCARE data as client and DIABCARE server as regional and DIABCARE 'international server' as transnational server. Functioning pilots were established across all levels. The clients have been linked to the servers on a routine basis. According to the open architecture design, the various countries decided on different systems at the entry point: full system--Portugal; fax systems--Italy, Bavaria; implementation into doctor's office systems--Norway; paper forms and chip cards--France. This system can improve the local, regional and national diabetes care. Initiatives in several countries proved the feasibility of the system. The most extensive use, from Portugal, will be reported later in this paper. The exploitation of the DIABCARE Q-Net system will be performed with the DIABCARE International European Economic Interest Grouping as a co-ordinator and several commercial companies as contractors to market the products inside the system. The key project participants are: DIABCARE Office EURO, DIABCARE Portugal, DIABCARE France, DIABCARE Bavaria, DIABCARE UK, DIABCARE Netherlands, DIABCARE Norway, DIABCARE Italy, DIABCARE Sweden, DIABCARE Austria, DIABCARE Spain, GSF Research Centre for Health and Environment, FAST Research Institute for Applied Software Technology, Tromsø University Hospital, Stavanger Technical College, Technical University of Ilmenau, World Health Organisation (WHO), Regional Office for Europe.
Neuhaeuser, Jakob; D'Angelo, Lorenzo T
2013-01-01
The goal of the concept and device presented in this contribution is to collect sensor data from wearable sensors directly, automatically, and wirelessly, and to make them available over a wired local area network. Several concepts in e-health and telemedicine make use of portable and wearable sensors to collect movement or activity data. Usually these data are collected either via a wireless personal area network or through a connection to the user's smartphone. However, users might not carry a smartphone with them inside a residential building such as a nursing home or hospital, or even within their own home. In such areas, the use of other wireless communication technologies might also be limited. The presented system is an embedded server that can be deployed in several rooms in order to ensure live data collection in larger buildings. The collection of data batches recorded while out of range is also possible as soon as a connection is established. Both the system concept and the realization are presented.
A complete history of everything
NASA Astrophysics Data System (ADS)
Lanclos, Kyle; Deich, William T. S.
2012-09-01
This paper discusses Lick Observatory's local solution for retaining a complete history of everything. Leveraging our existing deployment of a publish/subscribe communications model that is used to broadcast the state of all systems at Lick Observatory, a monitoring daemon runs on a dedicated server that subscribes to and records all published messages. Our success with this system is a testament to the power of simple, straightforward approaches to complex problems. The solution itself is written in Python, and the initial version required about a week of development time; the data are stored in PostgreSQL database tables using a distinctly simple schema. Over time, we addressed scaling issues as the data set grew, which involved reworking the PostgreSQL database schema on the back-end. We also duplicate the data in flat files to enable recovery or migration of the data from one server to another. This paper will cover both the initial design as well as the solutions to the subsequent deployment issues, the trade-offs that motivated those choices, and the integration of this history database with existing client applications.
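A minimal sketch of this record-everything idea is given below, assuming a (service, keyword, value) message format, a single history table, and psycopg2 for database access; the observatory's actual schema and message source are not reproduced here.

```python
# Sketch of a "record everything" daemon in the spirit described above: each
# published (service, keyword, value) message is appended to a simple
# PostgreSQL table and duplicated to a flat file for recovery or migration.
# Table layout and message source are assumptions for illustration.
import time
import psycopg2

SCHEMA = """
CREATE TABLE IF NOT EXISTS history (
    recorded_at  TIMESTAMPTZ NOT NULL,
    service      TEXT        NOT NULL,
    keyword      TEXT        NOT NULL,
    value        TEXT
);
"""

def record(conn, flat_file, service, keyword, value):
    now = time.time()
    with conn, conn.cursor() as cur:   # transaction commits on exit
        cur.execute(
            "INSERT INTO history VALUES (to_timestamp(%s), %s, %s, %s)",
            (now, service, keyword, value),
        )
    flat_file.write(f"{now}\t{service}\t{keyword}\t{value}\n")

if __name__ == "__main__":
    conn = psycopg2.connect(dbname="history")   # assumed local database
    with conn.cursor() as cur:
        cur.execute(SCHEMA)
    conn.commit()
    with open("history.log", "a") as flat:
        record(conn, flat, "dome", "azimuth", "123.4")
```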
The ICT monitoring system of the ASTRI SST-2M prototype proposed for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Gianotti, F.; Bruno, P.; Tacchini, A.; Conforti, V.; Fioretti, V.; Tanci, C.; Grillo, A.; Leto, G.; Malaguti, G.; Trifoglio, M.
2016-08-01
In the framework of the international Cherenkov Telescope Array (CTA) observatory, the Italian National Institute for Astrophysics (INAF) has developed a dual mirror, small sized, telescope prototype (ASTRI SST-2M), installed in Italy at the INAF observing station located at Serra La Nave, Mt. Etna. The ASTRI SST-2M prototype is the basis of the ASTRI telescopes that will form the mini-array proposed to be installed at the CTA southern site during its preproduction phase. This contribution presents the solutions implemented to realize the monitoring system for the Information and Communication Technology (ICT) infrastructure of the ASTRI SST-2M prototype. The ASTRI ICT monitoring system has been implemented by integrating traditional tools used in computer centers, with specific custom tools which interface via Open Platform Communication Unified Architecture (OPC UA) to the Alma Common Software (ACS) that is used to operate the ASTRI SST-2M prototype. The traditional monitoring tools are based on Simple Network Management Protocol (SNMP) and commercial solutions and features embedded in the devices themselves. They generate alerts by email and SMS. The specific custom tools convert the SNMP protocol into the OPC UA protocol and implement an OPC UA server. The server interacts with an OPC UA client implemented in an ACS component that, through the ACS Notification Channel, sends monitor data and alerts to the central console of the ASTRI SST-2M prototype. The same approach has been proposed also for the monitoring of the CTA onsite ICT infrastructures.
Web Service Distributed Management Framework for Autonomic Server Virtualization
NASA Astrophysics Data System (ADS)
Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea
Virtualization for the x86 platform has imposed itself recently as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, which is an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for applications servers residing on virtual machines.
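The self-optimization step can be sketched as a simple reconciliation loop: estimate the number of instances needed from measured utilization, then provision or deprovision to match. The set point, bounds, and provision/deprovision callbacks below are assumptions; in the paper this logic would sit behind WSDM web-service operations rather than local function calls.

```python
# Sketch of an autonomic scaling decision: compute a target number of virtual
# server instances from measured utilization, then add or remove instances to
# reach it. Callbacks and thresholds are placeholders, not the WSDM framework.
import math

TARGET_UTILIZATION = 0.6          # assumed per-instance set point
MIN_INSTANCES, MAX_INSTANCES = 1, 16

def target_instances(current: int, avg_utilization: float) -> int:
    demand = current * avg_utilization               # total work, in "instances"
    needed = math.ceil(demand / TARGET_UTILIZATION)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

def reconcile(current: int, avg_utilization: float, provision, deprovision) -> int:
    wanted = target_instances(current, avg_utilization)
    while current < wanted:
        provision(); current += 1
    while current > wanted:
        deprovision(); current -= 1
    return current

if __name__ == "__main__":
    # Example: 4 instances at 90% average utilization scale out to 6.
    n = reconcile(4, 0.9, provision=lambda: print("start VM"),
                  deprovision=lambda: print("stop VM"))
    print("instances now:", n)
```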
Access control and privacy in large distributed systems
NASA Technical Reports Server (NTRS)
Leiner, B. M.; Bishop, M.
1986-01-01
Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.
Reducing Interprocessor Dependence in Recoverable Distributed Shared Memory
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. Kent
1994-01-01
Checkpointing techniques in parallel systems use dependency tracking and/or message logging to ensure that a system rolls back to a consistent state. Traditional dependency tracking in distributed shared memory (DSM) systems is expensive because of high communication frequency. In this paper we show that, if designed correctly, a DSM system only needs to consider dependencies due to the transfer of blocks of data, resulting in reduced dependency tracking overhead and reduced potential for rollback propagation. We develop an ownership timestamp scheme to tolerate the loss of block state information and develop a passive server model of execution where interactions between processors are considered atomic. With our scheme, dependencies are significantly reduced compared to the traditional message-passing model.
National Medical Terminology Server in Korea
NASA Astrophysics Data System (ADS)
Lee, Sungin; Song, Seung-Jae; Koh, Soonjeong; Lee, Soo Kyoung; Kim, Hong-Gee
An interoperable EHR (Electronic Health Record) necessitates at least the use of standardized medical terminologies. This paper describes a medical terminology server, LexCare Suite, which houses terminology management applications, such as a terminology editor, and a terminology repository populated with international standard terminology systems such as the Systematized Nomenclature of Medicine (SNOMED). The server is intended to satisfy the need for quality terminology systems in local primary through tertiary hospitals. Our partner general hospitals have used the server to test its applicability. This paper describes the server and the results of the applicability test.
Maltz, Jonathan; C Ng, Thomas; Li, Dustin; Wang, Jian; Wang, Kang; Bergeron, William; Martin, Ron; Budinger, Thomas
2005-01-01
In mass trauma situations, emergency personnel are challenged with the task of prioritizing the care of many injured victims. We propose a trauma patient tracking system (TPTS) where first-responders tag all patients with a wireless monitoring device that continuously reports the location of each patient. The system can be used not only to prioritize patient care, but also to determine the time taken for each patient to receive treatment. This is important in training emergency personnel and in identifying bottlenecks in the disaster response process. In situations where biochemical agents are involved, a TPTS may be employed to determine sites of cross-contamination. In order to track patient location in both outdoor and indoor environments, we employ both Global Positioning System (GPS) and Television/Radio Frequency (TVRF) technologies. Each patient tag employs IEEE 802.11 (Wi-Fi)/TCP/IP networking to communicate with a central server via any available Wi-Fi basestation. A key component to increase TPTS fault-tolerance is a mobile Wi-Fi basestation that employs redundant Internet connectivity to ensure that tags at the disaster scene can send information to the central server even when local infrastructure is unavailable for use. We demonstrate the robustness of the system in tracking multiple patients in a simulated trauma situation in an urban environment.
Effect of video server topology on contingency capacity requirements
NASA Astrophysics Data System (ADS)
Kienzle, Martin G.; Dan, Asit; Sitaram, Dinkar; Tetzlaff, William H.
1996-03-01
Video servers need to assign a fixed set of resources to each video stream in order to guarantee on-time delivery of the video data. If a server has insufficient resources to guarantee the delivery, it must reject the stream request rather than slowing down all existing streams. Large scale video servers are being built as clusters of smaller components, so as to be economical, scalable, and highly available. This paper uses a blocking model developed for telephone systems to evaluate video server cluster topologies. The goal is to achieve high utilization of the components and low per-stream cost combined with low blocking probability and high user satisfaction. The analysis shows substantial economies of scale achieved by larger server images. Simple distributed server architectures can result in partitioning of resources with low achievable resource utilization. By comparing achievable resource utilization of partitioned and monolithic servers, we quantify the cost of partitioning. Next, we present an architecture for a distributed server system that avoids resource partitioning and results in highly efficient server clusters. Finally, we show how, in these server clusters, further optimizations can be achieved through caching and batching of video streams.
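The partitioning cost can be illustrated with the same telephone-style loss model: compare the blocking probability of one monolithic server holding N stream slots with that of k independent partitions of N/k slots each, the offered load split evenly. The numbers below are illustrative only, not the paper's results.

```python
# Sketch of the partitioned-versus-monolithic comparison using the Erlang-B
# loss model borrowed from telephony. Illustrative numbers only.
def erlang_b(load: float, servers: int) -> float:
    b = 1.0
    for c in range(1, servers + 1):
        b = (load * b) / (c + load * b)
    return b

def compare(total_slots: int, total_load: float, partitions: int):
    monolithic = erlang_b(total_load, total_slots)
    partitioned = erlang_b(total_load / partitions, total_slots // partitions)
    return monolithic, partitioned

if __name__ == "__main__":
    mono, part = compare(total_slots=100, total_load=80.0, partitions=4)
    print(f"monolithic blocking:  {mono:.4f}")
    print(f"partitioned blocking: {part:.4f}")   # higher: the cost of partitioning
```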
Design and implementation of GRID-based PACS in a hospital with multiple imaging departments
NASA Astrophysics Data System (ADS)
Yang, Yuanyuan; Jin, Jin; Sun, Jianyong; Zhang, Jianguo
2008-03-01
In an enterprise healthcare environment there are usually multiple clinical departments providing imaging-enabled healthcare services, such as radiology, oncology, pathology, and cardiology. The picture archiving and communication system (PACS) is therefore required not only to support radiology-based image display, workflow, and data flow management, but also to provide more specialized image processing and management tools for other departments performing imaging-guided diagnosis and therapy, and there is an urgent demand to integrate the multiple PACSs to provide patient-oriented imaging services for enterprise collaborative healthcare. In this paper, we give the design method and implementation strategy for developing a grid-based PACS (Grid-PACS) for a hospital with multiple imaging departments or centers. The Grid-PACS functions as middleware between the traditional PACS archiving servers and the workstations or image-viewing clients, and provides DICOM image communication and WADO services to end users. Images can be stored on multiple distributed archiving servers but managed centrally. The grid-based PACS has automatic image backup and disaster recovery services and can provide the best image retrieval path to requesters based on optimization algorithms. The designed grid-based PACS has been implemented in Shanghai Huadong Hospital and has been running smoothly for two years.
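As a hedged illustration of the WADO service such middleware exposes, the snippet below issues a WADO-URI retrieval for a single object; the host name and UIDs are placeholders.

```python
# Example of a DICOM WADO (URI form) retrieval of the kind a Grid-PACS
# middleware would serve. Host name and UIDs are hypothetical placeholders.
import requests

params = {
    "requestType": "WADO",
    "studyUID":  "1.2.840.113619.2.55.3.1",     # placeholder study instance UID
    "seriesUID": "1.2.840.113619.2.55.3.1.2",   # placeholder series instance UID
    "objectUID": "1.2.840.113619.2.55.3.1.2.3", # placeholder SOP instance UID
    "contentType": "image/jpeg",
}
r = requests.get("https://pacs.example.org/wado", params=params, timeout=30)
r.raise_for_status()
open("slice.jpg", "wb").write(r.content)
```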
The Quasar Network Observations in e-VLBI Mode
NASA Astrophysics Data System (ADS)
Bezrukov, I.; Finkelstein, A.; Ipatov, A.; Kaidanovsky, A.; Mikhailov, A.; Salnikov, A.; Yakovlev, V.
2011-07-01
This paper describes the activity of the Institute of Applied Astronomy (IAA) in developing a real-time VLBI system using high-speed digital communication links. Real-time VLBI technology has been under development at the IAA since 2007, when the very first experiment was successfully performed with Haystack Observatory. All observatories of the Quasar VLBI network were connected by "last mile" communication channels and via the Internet at a 100 Mbps rate. Additional UNIX servers were installed for data buffering. Now e-VLBI sessions are carried out routinely within domestic VLBI programs for UT1 determination. Observational data from 1-hour sessions are transmitted simultaneously from the Svetloe, Zelenchukskaya, and Badary observatories to the IAA Data Processing Center in Saint Petersburg through fiber lines at 50-70 Mbps via the Tsunami-UDP protocol. In September 2010 a few scans were successfully transmitted from Quasar-Network observatories to the correlator center at Shanghai Observatory and vice versa, from Shanghai Observatory to the correlator of the RAS. In these experiments, observational data recorded by the Mark 5B recorder are transmitted to the buffer server during the interval when the antenna points from one source to another. This procedure allows us to reduce the total time for obtaining the final result by 30%. An advanced algorithm for automating the data transmission process from the recorder to the correlator is now being developed.
Enhancing the AliEn Web Service Authentication
NASA Astrophysics Data System (ADS)
Zhu, Jianlin; Saiz, Pablo; Carminati, Federico; Betev, Latchezar; Zhou, Daicui; Mendez Lorenzo, Patricia; Grigoras, Alina Gabriela; Grigoras, Costin; Furano, Fabrizio; Schreiner, Steffen; Vladimirovna Datskova, Olga; Sankar Banerjee, Subho; Zhang, Guoping
2011-12-01
Web Services are an XML-based technology that allows applications to communicate with each other across disparate systems. Web Services are becoming the de facto standard that enables interoperability between heterogeneous processes and systems. AliEn2 is a grid environment based on web services. The AliEn2 services can be divided into three categories: Central services, deployed once per organization; Site services, deployed at each of the participating centers; and Job Agents, running automatically on the worker nodes. A security model to protect these services is essential for the whole system. Current web server implementations, such as Apache, are not directly suitable for use within the grid environment: Apache with mod_ssl and OpenSSL supports only X.509 certificates, whereas in the grid environment the common credential is the proxy certificate, used to provide restricted proxies and delegation. An authentication framework was therefore adopted for the AliEn2 web services to give the Apache web server the ability to accept both X.509 certificates and proxy certificates from the client side. The authentication framework also allows the generation of access control policies to limit access to the AliEn2 web services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.
Cloud computing is a promising technology for managing and improving the utilization of computing center resources to deliver various computing and IT services. For energy saving, there is no need to operate many servers under light loads, and they are switched off. On the other hand, some servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. One more fact that should be taken into account is significant server setup costs and activation times. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases of load. That is the main motivation for using queuing systems with hysteresis for cloud computing system modelling. In this paper, we provide a model of a cloud computing system in terms of a multiple-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing steady-state probabilities that allows a number of performance measures to be estimated.
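A hedged sketch of the hysteresis policy the model formalizes is given below: an extra server is activated only when the queue exceeds an upper threshold, deactivated only when it falls below a lower one, and a setup delay is charged before the activated server begins work. The thresholds, rates, and discrete-time framing are illustrative assumptions, not the paper's analytical model.

```python
# Discrete-time sketch of a two-server hysteresis rule: switch the second
# server on above UP (after a setup delay), switch it off below DOWN.
# Parameters are illustrative only.
import random

UP, DOWN = 20, 5          # hysteresis thresholds on queue length
SETUP_TICKS = 10          # non-instantaneous server activation

def simulate(ticks=10_000, arrival_p=0.6, service_p=0.35, seed=1):
    random.seed(seed)
    queue, active, setup_left = 0, 1, 0
    queue_area = 0
    for _ in range(ticks):
        if random.random() < arrival_p:        # Bernoulli arrivals
            queue += 1
        if setup_left > 0:                     # second server still warming up
            setup_left -= 1
            if setup_left == 0:
                active = 2
        for _ in range(active):                # each active server may finish one job
            if queue and random.random() < service_p:
                queue -= 1
        if active == 1 and setup_left == 0 and queue > UP:
            setup_left = SETUP_TICKS           # start switching the server on
        elif active == 2 and queue < DOWN:
            active = 1                         # switch it back off
        queue_area += queue
    return queue_area / ticks

if __name__ == "__main__":
    print("mean queue length:", round(simulate(), 2))
```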
NASA Astrophysics Data System (ADS)
Niranjan, S. P.; Chandrasekaran, V. M.; Indhira, K.
2018-04-01
This paper examines a bulk-arrival, batch-service queueing system with server failure during operation and multiple vacations. Customers arrive in bulk according to a Poisson process with rate λ. Arriving customers are served in batches of minimum size 'a' and maximum size 'b' according to the general bulk service rule. At a service completion epoch, if the queue length is less than 'a', the server leaves for a vacation (secondary job) of random length. After a vacation completion, if the queue length is still less than 'a', the server leaves for another vacation; the server keeps taking vacations until the queue length reaches 'a'. The server is not reliable at all times and may fail while serving customers. Even if the server fails, the service process is not interrupted; it continues for the current batch of customers at a lower service rate than the regular rate. The server is repaired after completing the service at the lower rate. The probability generating function of the queue size at an arbitrary time epoch is obtained for the modelled queueing system using the supplementary variable technique. Moreover, various performance characteristics are derived, with suitable numerical illustrations.
SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhlemann, I; Graduate School for Computing in Medicine and Life Sciences, University of Luebeck; Jauer, P
Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end, force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LBR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact pressure detection, this robot provides an additional 7th link, enhancing the dexterity of the kinematics and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework has to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot's controller. Several special functionalities for robotized ultrasound applications are integrated, and the robot can now be used for real-time control of the image quality by adjusting the transducer position and contact pressure. The framework was evaluated with respect to overall real-time capability for communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability, with a maximum average latency of 3.6 ms (minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking applications, including image quality control and target tracking.
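The latency evaluation described above can be approximated in spirit with a simple round-trip measurement over a local TCP connection; the echo server, port, and command string below are placeholders rather than the robot control framework itself.

```python
# Round-trip latency measurement for a client/server command channel, in the
# spirit of the evaluation above (the real framework drives a robot; here the
# server just echoes). Port and command string are placeholders.
import socket, statistics, threading, time

HOST, PORT = "127.0.0.1", 5050

def echo_server():
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT)); srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            while data := conn.recv(1024):
                conn.sendall(data)                    # echo the command back

def measure(trials=5000):
    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)                                   # let the server start
    latencies = []
    with socket.create_connection((HOST, PORT)) as cli:
        cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for _ in range(trials):
            t0 = time.perf_counter()
            cli.sendall(b"GET_POSE\n")                # placeholder command
            cli.recv(1024)
            latencies.append((time.perf_counter() - t0) * 1000.0)
    print(f"mean round-trip latency: {statistics.mean(latencies):.3f} ms")

if __name__ == "__main__":
    measure()
```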
UNIX based client/server hospital information system.
Nakamura, S; Sakurai, K; Uchiyama, M; Yoshii, Y; Tachibana, N
1995-01-01
SMILE (St. Luke's Medical Center Information Linkage Environment) is a hospital information system (HIS) built as a client/server system using UNIX workstations on an open network LAN (FDDI and 10BASE-T). It provides a multivendor environment, high performance at low cost, and a user-friendly GUI. However, the client/server architecture with UNIX workstations does not have the same OLTP environment (e.g., a TP monitor) as the mainframe. Our system's problems and the steps used to solve them are therefore reviewed. Several points that will be necessary for a future client/server system based on UNIX workstations are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsai, H.; Chen, K.; Jusko, M.
The Packaging Certification Program (PCP) of the U.S. Department of Energy (DOE) Environmental Management (EM), Office of Packaging and Transportation (EM-14), has developed a radio frequency identification (RFID) tracking and monitoring system for the management of nuclear materials during storage and transportation. The system, developed by the PCP team at Argonne National Laboratory, consists of hardware (Mk-series sensor tags, fixed and handheld readers, form factor for multiple drum types, seal integrity sensors, and enhanced battery management), software (application programming interface, ARG-US software for local and remote/web applications, secure server and database management), and cellular/satellite communication interfaces for vehicle tracking and item monitoring during transport. The ability of the above system to provide accurate, real-time tracking and monitoring of the status of multiple, certified containers of nuclear materials has been successfully demonstrated in a week-long, 1,700-mile DEMO performed in April 2008. While the feedback from the approximately fifty (50) stakeholders who participated in and/or observed the DEMO progression was very positive and encouraging, two major areas of further improvement - system integration and web application enhancement - were identified in the post-DEMO evaluation. The principal purpose of the MiniDemo described in this report was to verify these two specific improvements. The MiniDemo was conducted on August 28, 2009. In terms of system integration, a hybrid communication interface - combining the RFID item-monitoring features and a commercial vehicle tracking system by Qualcomm - was developed and implemented. In the MiniDemo, the new integrated system worked well in reporting tag status and vehicle location accurately and promptly. There was no incompatibility of components. The robust commercial communication gear, as expected, helped improve system reliability. The MiniDemo confirmed that system integration is technically feasible and reliable with the existing RFID and Qualcomm satellite equipment. In terms of web application, improvements in mapping, tracking, data presentation, and post-incident spatial query reporting were implemented in ARG-US, the application software that manages the dataflow among the RFID tags, readers, and servers. These features were tested in the MiniDemo and found to be satisfactory. The resulting web application is both informative and user-friendly. A joint developmental project is being planned between the PCP and the DOE TRANSCOM that uses the Qualcomm gear in vehicles for tracking and communication of radioactive material shipments across the country. Adding an RFID interface to TRANSCOM is a significant enhancement to the DOE infrastructure for tracking and monitoring shipments of radioactive materials.
A collaborative platform for consensus sessions in pathology over Internet.
Zapletal, Eric; Le Bozec, Christel; Degoulet, Patrice; Jaulent, Marie-Christine
2003-01-01
The design of valid databases in pathology faces the problem of diagnostic disagreement between pathologists. Organizing consensus sessions between experts to reduce this variability is a difficult task. The TRIDEM platform addresses the issue of organizing consensus sessions in pathology over the Internet. In this paper, we present the basis for such a collaborative platform. On the one hand, the platform integrates the functionality of the IDEM consensus module, which alleviates the consensus task by presenting preliminary computed consensus to pathologists through ergonomic interfaces (automatic step). On the other hand, a set of lightweight interaction tools, such as vocal annotations, is implemented to ease communication between experts as they discuss a case (interactive step). The architecture of the TRIDEM platform is based on a JavaServer Pages web server that communicates with the ObjectStore PSE/PRO database used for object storage. The HTML pages generated by the web server run Java applets to perform the different steps (automatic and interactive) of the consensus. The current limitation of the platform is that it only handles a synchronous process. Moreover, improvements such as rewriting the consensus workflow with a protocol such as BPML are already planned.
Home media server content management
NASA Astrophysics Data System (ADS)
Tokmakoff, Andrew A.; van Vliet, Harry
2001-07-01
With the advent of set-top boxes, the convergence of TV (broadcasting) and PC (Internet) is set to enter the home environment. Currently, a great deal of activity is occurring in developing standards (TV-Anytime Forum) and devices (TiVo) for local storage on Home Media Servers (HMS). These devices lie at the heart of convergence of the triad: communications/networks - content/media - computing/software. Besides massive storage capacity and being a communications 'gateway', the home media server is characterised by the ability to handle metadata and software that provides an easy to use on-screen interface and intelligent search/content handling facilities. In this paper, we describe a research prototype HMS that is being developed within the GigaCE project at the Telematica Instituut. Our prototype demonstrates advanced search and retrieval (video browsing), adaptive user profiling and an innovative 3D component of the Electronic Program Guide (EPG) which represents online presence. We discuss the use of MPEG-7 for representing metadata, the use of MPEG-21 working draft standards for content identification, description and rights expression, and the use of HMS peer-to-peer content distribution approaches. Finally, we outline explorative user behaviour experiments that aim to investigate the effectiveness of the prototype HMS during development.
Feasibility of Using Distributed Wireless Mesh Networks for Medical Emergency Response
Braunstein, Brian; Trimble, Troy; Mishra, Rajesh; Manoj, B. S.; Rao, Ramesh; Lenert, Leslie
2006-01-01
Achieving reliable, efficient data communications networks at a disaster site is a difficult task. Network paradigms, such as Wireless Mesh Network (WMN) architectures, form one exemplar for providing high-bandwidth, scalable data communication for medical emergency response activity. WMNs are created by self-organized wireless nodes that use multi-hop wireless relaying for data transfer. In this paper, we describe our experience using a mesh network architecture we developed for homeland security and medical emergency applications. We briefly discuss the architecture and present the traffic behavioral observations made by a client-server medical emergency application tested during a large-scale homeland security drill. We present our traffic measurements, describe lessons learned, and offer functional requirements (based on field testing) for practical 802.11 mesh medical emergency response networks. With certain caveats, the results suggest that 802.11 mesh networks are feasible and scalable systems for field communications in disaster settings. PMID:17238308
Secure Server Login by Using Third Party and Chaotic System
NASA Astrophysics Data System (ADS)
Abdulatif, Firas A.; zuhiar, Maan
2018-05-01
Servers are widely used by companies, but security threats against them make companies cautious about relying on them. In this paper we design a secure login system based on a one-time password and third-party authentication (a smartphone). The proposed system secures the server login process by using a one-time password to authenticate persons who have permission to log in, with the third-party device (smartphone) providing an additional level of security.
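A hedged sketch of the kind of one-time password check such a login flow relies on is shown below, using the standard HMAC-based TOTP construction (RFC 6238 style). The shared secret and verification window are placeholders, and this is an illustration of the mechanism, not the authors' exact protocol.

```python
# Time-based one-time password (TOTP, RFC 6238 style) generation and check.
# The shared secret would be provisioned to the third-party device
# (smartphone) out of band. Illustration only.
import hashlib, hmac, struct, time

def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
    counter = int(for_time) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    # Accept codes from adjacent time steps to tolerate clock drift.
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))

if __name__ == "__main__":
    secret = b"server-shared-secret"                  # placeholder shared secret
    print(verify(secret, totp(secret, time.time())))  # True
```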
NASA Astrophysics Data System (ADS)
Shahzad, Muhammad A.
1999-02-01
With the emergence of data warehousing, decision support systems have evolved substantially. At the core of these warehousing systems lies a good database management system. The database server used for data warehousing is responsible for providing robust data management, scalability, high-performance query processing, and integration with other servers. Oracle, one of the early entrants among warehousing servers, provides a wide range of features for facilitating data warehousing. This paper reviews the features of data warehousing - first conceptualizing data warehousing itself and, lastly, describing the features of Oracle servers for implementing a data warehouse.
Development of wireless sensor network for landslide monitoring system
NASA Astrophysics Data System (ADS)
Suryadi; Puranto, Prabowo; Adinanta, Hendra; Tohari, Adrin; Priambodo, Purnomo S.
2017-05-01
A wireless sensor network has been developed to periodically monitor soil movement in selected areas. The system consists of four nodes and one gateway installed over an area of 0.2 km2. Each node has two types of sensor: an inclinometer and an extensometer. The inclinometer sensor is used to measure the tilt of a structure, while the extensometer sensor is used to measure the displacement of soil movement. Each node is also supported by a wireless communication device, a solar power supply unit, and a microcontroller unit called the sensor module. The system also includes a gateway module as the main communication unit, consisting of a wireless communication device, a power supply unit, and a rain gauge to measure the rainfall intensity of the observed area. The inclinometer and extensometer sensors are connected to the sensor module by wiring, but the sensor module communicates with the gateway wirelessly. The four nodes are also connected to each other wirelessly, collecting the data from the inclinometer and extensometer sensors. The gateway module transmits an instruction code to each sensor module one by one and collects the data from them. The gateway module is an important part, communicating not only with the sensor modules but also with the server. The wireless system was designed to reduce electric power consumption and is powered by an 80 Wp solar panel and a 55 Ah battery. This system has been implemented in Pangalengan, Bandung, an area with high rainfall intensity, and its data can be viewed on a website.
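The polling cycle described above can be sketched as follows; the radio exchange with a node and the server endpoint are abstracted into placeholders, so this illustrates the control flow rather than the deployed firmware.

```python
# Sketch of the gateway polling cycle: query each sensor node in turn, then
# forward the batch (plus local rainfall) to the remote server. The radio
# exchange and server URL are placeholders.
import json, time, urllib.request

NODE_IDS = [1, 2, 3, 4]
SERVER_URL = "http://monitor.example.org/ingest"   # hypothetical endpoint

def query_node(node_id: int) -> dict:
    """Placeholder for the radio request/response exchange with one node."""
    return {"node": node_id, "tilt_deg": 0.0, "displacement_mm": 0.0}

def poll_cycle(rainfall_mm: float) -> None:
    readings = [query_node(n) for n in NODE_IDS]    # one node at a time
    payload = {"time": time.time(), "rainfall_mm": rainfall_mm,
               "readings": readings}
    req = urllib.request.Request(SERVER_URL, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=30)

if __name__ == "__main__":
    poll_cycle(rainfall_mm=2.5)
```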
[Radiology information system using HTML, JavaScript, and Web server].
Sone, M; Sasaki, M; Oikawa, H; Yoshioka, K; Ehara, S; Tamakawa, Y
1997-12-01
We have developed a radiology information system using intranet techniques, including hypertext markup language, JavaScript, and Web server. JavaScript made it possible to develop an easy-to-use application, as well as to reduce network traffic and load on the server. The system we have developed is inexpensive and flexible, and its development and maintenance are much easier than with the previous system.
MOD Tool (Microwave Optics Design Tool)
NASA Technical Reports Server (NTRS)
Katz, Daniel S.; Borgioli, Andrea; Cwik, Tom; Fu, Chuigang; Imbriale, William A.; Jamnejad, Vahraz; Springer, Paul L.
1999-01-01
The Jet Propulsion Laboratory (JPL) is currently designing and building a number of instruments that operate in the microwave and millimeter-wave bands. These include MIRO (Microwave Instrument for the Rosetta Orbiter), MLS (Microwave Limb Sounder), and IMAS (Integrated Multispectral Atmospheric Sounder). These instruments must be designed and built to meet key design criteria (e.g., beamwidth, gain, pointing) obtained from the scientific goals for the instrument. These criteria are frequently functions of the operating environment (both thermal and mechanical). To design and build instruments which meet these criteria, it is essential to be able to model the instrument in its environments. Currently, a number of modeling tools exist. Commonly used tools at JPL include: FEMAP (meshing), NASTRAN (structural modeling), TRASYS and SINDA (thermal modeling), MACOS/IMOS (optical modeling), and POPO (physical optics modeling). Each of these tools is used by an analyst, who models the instrument in one discipline. The analyst then provides the results of this modeling to another analyst, who continues the overall modeling in another discipline. There is a large reengineering task in place at JPL to automate and speed up the structural and thermal modeling disciplines, which does not include MOD Tool. The focus of MOD Tool (and of this paper) is on the fields unique to microwave and millimeter-wave instrument design. These include initial design and analysis of the instrument without thermal or structural loads, the automation of the transfer of this design to a high-end CAD tool, and the analysis of the structurally deformed instrument (due to structural and/or thermal loads). MOD Tool is a distributed tool, with a database of design information residing on a server, physical optics analysis being performed on a variety of supercomputer platforms, and a graphical user interface (GUI) residing on the user's desktop computer. The MOD Tool client is being developed using Tcl/Tk, which allows the user to work on a choice of platforms (PC, Mac, or Unix) after downloading the Tcl/Tk binary, which is readily available on the web. The MOD Tool server is written using Expect, and it resides on a Sun workstation. Client/server communications are performed over a socket; upon a connection from a client to the server, the server spawns a child process dedicated to communicating with that client. The server communicates with other machines, such as supercomputers, using Expect, with the username and password provided by the user on the client.
Implementing TCP/IP and a socket interface as a server in a message-passing operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hipp, E.; Wiltzius, D.
1990-03-01
The UNICOS 4.3BSD network code and socket transport interface are the basis of an explicit network server for NLTSS, a message passing operating system on the Cray YMP. A BSD socket user library provides access to the network server using an RPC mechanism. The advantages of this server methodology are its modularity and extensibility to migrate to future protocol suites (e.g. OSI) and transport interfaces. In addition, the network server is implemented in an explicit multi-tasking environment to take advantage of the Cray YMP multi-processor platform. 19 refs., 5 figs.
Providing Internet Access to High-Resolution Mars Images
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
The OnMars server is a computer program that provides Internet access to high-resolution Mars images, maps, and elevation data, all suitable for use in geographical information system (GIS) software for generating images, maps, and computational models of Mars. The OnMars server is an implementation of the Open Geospatial Consortium (OGC) Web Map Service (WMS) server. Unlike other Mars Internet map servers that provide Martian data using an Earth coordinate system, the OnMars WMS server supports encoding of data in Mars-specific coordinate systems. The OnMars server offers access to most of the available high-resolution Martian image and elevation data, including an 8-meter-per-pixel uncontrolled mosaic of most of the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) image collection, which is not available elsewhere. This server can generate image and map files in the tagged image file format (TIFF), Joint Photographic Experts Group (JPEG), 8- or 16-bit Portable Network Graphics (PNG), or Keyhole Markup Language (KML) format. Image control is provided by use of the OGC Style Layer Descriptor (SLD) protocol. The OnMars server also implements tiled WMS protocol and super-overlay KML for high-performance client application programs.
CTserver: A Computational Thermodynamics Server for the Geoscience Community
NASA Astrophysics Data System (ADS)
Kress, V. C.; Ghiorso, M. S.
2006-12-01
The CTserver platform is an Internet-based computational resource that provides on-demand services in Computational Thermodynamics (CT) to a diverse geoscience user base. This NSF-supported resource can be accessed at ctserver.ofm-research.org. The CTserver infrastructure leverages a high-quality and rigorously tested software library of routines for computing equilibrium phase assemblages and for evaluating internally consistent thermodynamic properties of materials, e.g. mineral solid solutions and a variety of geological fluids, including magmas. Thermodynamic models are currently available for 167 phases. Recent additions include Duan, Møller and Weare's model for supercritical C-O-H-S, extended to include SO2 and S2 species, and an entirely new associated solution model for O-S-Fe-Ni sulfide liquids. This software library is accessed via the CORBA Internet protocol for client-server communication. CORBA provides a standardized, object-oriented, language and platform independent, fast, low-bandwidth interface to phase property modules running on the server cluster. Network transport, language translation and resource allocation are handled by the CORBA interface. Users access server functionality in two principal ways. Clients written as browser-based Java applets may be downloaded which provide specific functionality such as retrieval of thermodynamic properties of phases, computation of phase equilibria for systems of specified composition, or modeling the evolution of these systems along some particular reaction path. This level of user interaction requires minimal programming effort and is ideal for classroom use. A more universal and flexible mode of CTserver access involves making remote procedure calls from user programs directly to the server public interface. The CTserver infrastructure relieves the user of the burden of implementing and testing the often complex thermodynamic models of real liquids and solids. A pilot application of this distributed architecture involves CFD computation of magma convection at Volcan Villarrica with magma properties and phase proportions calculated at each spatial node and at each time step via distributed function calls to MELTS-objects executing on the CTserver. Documentation and programming examples are provided at http://ctserver.ofm-research.org.
EIR: enterprise imaging repository, an alternative imaging archiving and communication system.
Bian, Jiang; Topaloglu, Umit; Lane, Cheryl
2009-01-01
The enormous number of studies performed at the Nuclear Medicine Department of the University of Arkansas for Medical Sciences (UAMS) generates a huge number of PET/CT images daily. A DICOM workstation had been used as a "mini-PACS" to route all studies, which historically proved to be slow for various reasons. However, replacing the workstation with a commercial PACS server is not only cost-inefficient; more often, the PACS vendors are also reluctant to take responsibility for the final integration of these components. Therefore, in this paper, we propose an alternative imaging archiving and communication system called the Enterprise Imaging Repository (EIR). EIR consists of two distinct components: an image-processing daemon and a user-friendly web interface. EIR not only reduces the overall waiting time for transferring a study from the modalities to radiologists' workstations, but also provides a preferable presentation.
A RESTful image gateway for multiple medical image repositories.
Valente, Frederico; Viana-Ferreira, Carlos; Costa, Carlos; Oliveira, José Luis
2012-05-01
Mobile technologies are increasingly important components in telemedicine systems and are becoming powerful decision support tools. Universal access to data may already be achieved by resorting to the latest generation of tablet devices and smartphones. However, the protocols employed for communicating with image repositories are not suited to exchange data with mobile devices. In this paper, we present an extensible approach to solving the problem of querying and delivering data in a format that is suitable for the bandwidth and graphic capacities of mobile devices. We describe a three-tiered component-based gateway that acts as an intermediary between medical applications and a number of Picture Archiving and Communication Systems (PACS). The interface with the gateway is accomplished using Hypertext Transfer Protocol (HTTP) requests following a Representational State Transfer (REST) methodology, which relieves developers from dealing with complex medical imaging protocols and allows the processing of data on the server side.
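The request style such a gateway exposes can be illustrated as below; the base URL, resource layout, and parameter names are hypothetical, chosen only to show a study query followed by a server-side resized JPEG retrieval suited to a mobile screen.

```python
# Illustration of the request style a REST image gateway like this could
# expose: plain HTTP GETs that name a study and ask for a downscaled JPEG.
# URL layout and parameter names are hypothetical placeholders.
import requests

BASE = "https://gateway.example.org/api"     # hypothetical gateway root

# 1. Query for studies matching a patient identifier.
studies = requests.get(f"{BASE}/studies", params={"patientId": "12345"},
                       timeout=30).json()

# 2. Fetch one image, letting the gateway transcode and resize server-side.
img = requests.get(f"{BASE}/studies/{studies[0]['id']}/images/0",
                   params={"format": "jpeg", "maxWidth": 640},
                   timeout=30)
open("preview.jpg", "wb").write(img.content)
```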
Farouk, Ahmed; Zakaria, Magdy; Megahed, Adel; Omara, Fatma A.
2015-01-01
In this paper, we generalize a secure direct communication process between N users with partial and full cooperation of a quantum server. Thus, N − 1 disjoint users u1, u2, …, uN−1 can transmit a secret message of classical bits to a remote user uN by utilizing the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users is validated by an EPR entangled pair and a CNOT gate. Afterwards, the remaining EPR pairs generate shared GHZ states, which are used for directly transmitting the secret message. The partial cooperation process indicates that N − 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N − 1 users and a remote user uN can communicate without an established quantum channel among them by a full cooperation process. The security analysis of the authentication and communication processes against many types of attacks proved that the attacker cannot gain any information by intercepting either the authentication or the communication process. Hence, the security of the transmitted message among N users is ensured, as the attacker introduces an error probability irrespective of the sequence of measurement. PMID:26577473
NASA Astrophysics Data System (ADS)
Sasikala, S.; Indhira, K.; Chandrasekaran, V. M.
2017-11-01
In this paper, we consider an MX/(a,b)/1 queueing system with server breakdown without service interruption, multiple vacations, setup times, and N-policy. After a batch of service, if the size of the queue is ξ (< a), then the server immediately takes a vacation. Upon returning from a vacation, if the queue length is less than N, the server takes another vacation. This process continues until the server finds at least N customers in the queue. After a vacation, if the server finds at least N customers waiting for service, the server needs a setup time to start the service. After a batch of service, if the number of waiting customers in the queue is ξ (≥ a), then the server serves a batch of min(ξ, b) customers, where b ≥ a. We derive the probability generating function of the queue length at an arbitrary time epoch. Further, we obtain some important performance measures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The system is developed to collect, process, store, and present the information provided by the radio frequency identification (RFID) devices. The system contains three parts: the application software, the database, and the web page. The application software manages multiple RFID devices, such as readers and portals, simultaneously. It communicates with the devices through an application programming interface (API) provided by the device vendor. The application software converts data collected by the RFID readers and portals to readable information. It is capable of encrypting data using the 256-bit Advanced Encryption Standard (AES). The application software has a graphical user interface (GUI). The GUI mimics the configurations of the nuclear material storage sites or transport vehicles. The GUI gives the user and system administrator an intuitive way to read the information and/or configure the devices. The application software is capable of sending the information to a remote, dedicated, and secured web and database server. Two captured screen samples, one for storage and one for transport, are attached. The database is constructed to handle a large number of RFID tag readers and portals. A SQL server is employed for this purpose. An XML script is used to update the database once the information is sent from the application software. The design of the web page imitates the design of the application software. The web page retrieves data from the database and presents it in different panels. The user needs a user name combined with a password to access the web page. The web page is capable of sending e-mail and text messages based on preset criteria, such as when alarm thresholds are exceeded. A captured screen sample is attached. The application software is designed to be installed on a local computer. The local computer is directly connected to the RFID devices and can be controlled locally or remotely. There are multiple local computers managing different sites or transport vehicles. Control from remote sites and transmission of information to a central database server take place over the secured Internet. The information stored in the central database server is shown on the web page. The users can view the web page on the Internet. A dedicated and secured web and database server (https) is used to provide information security.
The Application of Wireless Sensor Networks in Management of Orchard
NASA Astrophysics Data System (ADS)
Zhu, Guizhi
A monitoring system based on a wireless sensor network is established, addressing the present difficulty of information acquisition in hillside orchards. Temperature and humidity sensors are deployed around the fruit trees to gather real-time environmental parameters, and self-organizing wireless communication modules that transmit the data to a remote central server realize the monitoring function. By setting parameters for intelligent analysis and judgment of the data, remote diagnosis and decision-support information can be fed back to users in a timely and effective manner.
Parallel Computing Using Web Servers and "Servlets".
ERIC Educational Resources Information Center
Lo, Alfred; Bloor, Chris; Choi, Y. K.
2000-01-01
Describes parallel computing and presents inexpensive ways to implement a virtual parallel computer with multiple Web servers. Highlights include performance measurement of parallel systems; models for using Java and intranet technology including single server, multiple clients and multiple servers, single client; and a comparison of CGI (common…
Experience with Adaptive Security Policies.
1998-03-01
Section 3 covers logical groupings of audited permission checks and auditing of system servers via microkernel snooping, including permission checks performed by servers other than the microkernel. Since altering each server to audit events would complicate the integration of new servers, a modification to the microkernel was implemented to allow the microkernel to audit the requests made of other servers. Both methods for enhancing audit are discussed.
Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments
NASA Astrophysics Data System (ADS)
Pretto, N.; Poiesi, F.
2017-11-01
We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as an HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.
A Multiserver Biometric Authentication Scheme for TMIS using Elliptic Curve Cryptography.
Chaudhry, Shehzad Ashraf; Khan, Muhammad Tawab; Khan, Muhammad Khurram; Shon, Taeshik
2016-11-01
Recently, several authentication schemes have been proposed for telecare medicine information systems (TMIS). Many such schemes have been shown to be weak against known attacks. Furthermore, numerous such schemes cannot be used in real-time scenarios because they assume a single authentication server for the whole globe. Very recently, Amin et al. (J. Med. Syst. 39(11):180, 2015) designed an authentication scheme for secure communication between a patient and a medical practitioner using a trusted central medical server. They claimed that their scheme satisfies all security requirements and emphasized its efficiency. However, the analysis in this article proves that the scheme designed by Amin et al. is vulnerable to stolen smart card and stolen verifier attacks. Furthermore, their scheme has scalability issues along with inefficient password change and password recovery phases. We then propose an improved scheme. The proposed scheme is more practical, secure and lightweight than Amin et al.'s scheme. The security of the proposed scheme is proved using the popular automated tool ProVerif.
A Framework for Real-Time Collection, Analysis, and Classification of Ubiquitous Infrasound Data
NASA Astrophysics Data System (ADS)
Christe, A.; Garces, M. A.; Magana-Zook, S. A.; Schnurr, J. M.
2015-12-01
Traditional infrasound arrays are generally expensive to install and maintain. There are ~10^3 infrasound channels on Earth today. The amount of data currently provided by legacy architectures can be processed on a modest server. However, the growing availability of low-cost, ubiquitous, and dense infrasonic sensor networks presents a substantial increase in the volume, velocity, and variety of data flow. Initial data from a prototype ubiquitous global infrasound network is already pushing the boundaries of traditional research server and communication systems, in particular when serving data products over heterogeneous, international network topologies. We present a scalable, cloud-based approach for capturing and analyzing large amounts of dense infrasonic data (>10^6 channels). We utilize Akka actors with WebSockets to maintain data connections with infrasound sensors. Apache Spark provides streaming, batch, machine learning, and graph processing libraries which will permit signature classification, cross-correlation, and other analytics in near real time. This new framework and approach provide significant advantages in scalability and cost.
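The ingest layer described above pairs Akka actors with WebSockets; as a rough Python stand-in, the sketch below accepts one JSON sample per WebSocket frame and queues it for a downstream analytics stage (Spark in the paper). The third-party `websockets` package, the message format and the port are assumptions for illustration.

```python
# Minimal WebSocket ingest sketch (assumes the `websockets` package is installed).
import asyncio, json
import websockets

samples = asyncio.Queue()

async def ingest(ws, path=""):
    # One JSON-encoded sensor sample per frame; push it onto the processing queue.
    async for message in ws:
        await samples.put(json.loads(message))

async def main():
    async with websockets.serve(ingest, "0.0.0.0", 8765):
        while True:                             # downstream consumer (Spark job in the paper)
            sample = await samples.get()
            print("station", sample.get("station_id"),
                  "samples", len(sample.get("data", [])))

asyncio.run(main())
```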
New Web Server - the Java Version of Tempest - Produced
NASA Technical Reports Server (NTRS)
York, David W.; Ponyik, Joseph G.
2000-01-01
A new software design and development effort has produced a Java (Sun Microsystems, Inc.) version of the award-winning Tempest software (refs. 1 and 2). In 1999, the Embedded Web Technology (EWT) team received a prestigious R&D 100 Award for Tempest, Java Version. In this article, "Tempest" will refer to the Java version of Tempest, a World Wide Web server for desktop or embedded systems. Tempest was designed at the NASA Glenn Research Center at Lewis Field to run on any platform for which a Java Virtual Machine (JVM, Sun Microsystems, Inc.) exists. The JVM acts as a translator between the native code of the platform and the byte code of Tempest, which is compiled in Java. These byte code files are Java executables with a ".class" extension. Multiple byte code files can be zipped together as a "*.jar" file for more efficient transmission over the Internet. Today's popular browsers, such as Netscape (Netscape Communications Corporation) and Internet Explorer (Microsoft Corporation) have built-in Virtual Machines to display Java applets.
Adaptable radiation monitoring system and method
Archer, Daniel E [Livermore, CA; Beauchamp, Brock R [San Ramon, CA; Mauger, G Joseph [Livermore, CA; Nelson, Karl E [Livermore, CA; Mercer, Michael B [Manteca, CA; Pletcher, David C [Sacramento, CA; Riot, Vincent J [Berkeley, CA; Schek, James L [Tracy, CA; Knapp, David A [Livermore, CA
2006-06-20
A portable radioactive-material detection system capable of detecting radioactive sources moving at high speeds. The system has at least one radiation detector capable of detecting gamma-radiation and coupled to an MCA capable of collecting spectral data in very small time bins of less than about 150 msec. A computer processor is connected to the MCA for determining from the spectral data if a triggering event has occurred. Spectral data is stored on a data storage device, and a power source supplies power to the detection system. Various configurations of the detection system may be adaptably arranged for various radiation detection scenarios. In a preferred embodiment, the computer processor operates as a server which receives spectral data from other networked detection systems, and communicates the collected data to a central data reporting system.
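A hedged sketch of the kind of triggering decision such a system might make on its short time bins is given below; the Poisson threshold rule, the bin counts and the parameters are illustrative assumptions, not the patented algorithm.

```python
# Flag time bins whose gross counts exceed the estimated background by k sigma.
from statistics import mean

def find_triggers(counts_per_bin, n_background=50, k=5.0):
    """counts_per_bin: gross counts in consecutive short (~100 ms) MCA time bins."""
    background = mean(counts_per_bin[:n_background])   # quiet-period estimate
    sigma = background ** 0.5                          # Poisson counting statistics
    threshold = background + k * sigma
    return [i for i, c in enumerate(counts_per_bin) if c > threshold]

# Example: a steady background of ~200 counts/bin with a brief source pass-by.
bins = [200] * 200
bins[120:124] = [420, 610, 580, 390]
print(find_triggers(bins))                             # -> [120, 121, 122, 123]
```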
Design of SIP transformation server for efficient media negotiation
NASA Astrophysics Data System (ADS)
Pack, Sangheon; Paik, Eun Kyoung; Choi, Yanghee
2001-07-01
Voice over IP (VoIP) is one of the advanced services supported by the next generation mobile communication. VoIP should support various media formats and terminals existing together. This heterogeneous environment may prevent diverse users from establishing VoIP sessions among them. To solve the problem an efficient media negotiation mechanism is required. In this paper, we propose the efficient media negotiation architecture using the transformation server and the Intelligent Location Server (ILS). The transformation server is an extended Session Initiation Protocol (SIP) proxy server. It can modify an unacceptable session INVITE message into an acceptable one using the ILS. The ILS is a directory server based on the Lightweight Directory Access Protocol (LDAP) that keeps users' location information and available media information. The proposed architecture can eliminate an unnecessary response and re-INVITE messages of the standard SIP architecture. It takes only 1.5 round trip times to negotiate two different media types while the standard media negotiation mechanism takes 2.5 round trip times. The extra processing time in message handling is negligible in comparison to the reduced round trip time. The experimental results show that the session setup time in the proposed architecture is less than the setup time in the standard SIP. These results verify that the proposed media negotiation mechanism is more efficient in solving diversity problems.
Interactive video audio system: communication server for INDECT portal
NASA Astrophysics Data System (ADS)
Mikulec, Martin; Voznak, Miroslav; Safarik, Jakub; Partila, Pavol; Rozhon, Jan; Mehic, Miralem
2014-05-01
The paper presents the IVAS system within the FP7 EU INDECT project. The INDECT project aims at developing tools for enhancing the security of citizens and protecting the confidentiality of recorded and stored information; it is a part of the Seventh Framework Programme of the European Union. We participate in the INDECT portal and the Interactive Video Audio System (IVAS). The IVAS system provides a communication gateway between police officers working in the dispatching centre and police officers in the field. The officers in the dispatching centre can obtain information about all online police officers in the field, command them via text messages, voice or video calls, and manage multimedia files from CCTV cameras or other sources that may be of interest to officers in the field. The police officers in the field are equipped with smartphones or tablets. Besides common communication, they can receive pictures or videos sent by the commander in the office and respond to commands via text or multimedia messages taken with their devices. Our IVAS system is unique because we are developing it according to special requirements from the Police of the Czech Republic. The IVAS communication system is designed to use modern Voice over Internet Protocol (VoIP) services. The whole solution is based on open-source software, including the Linux and Android operating systems. The technical details of our solution are presented in the paper.
Scalable Integrated Multi-Mission Support System Simulator Release 3.0
NASA Technical Reports Server (NTRS)
Kim, John; Velamuri, Sarma; Casey, Taylor; Bemann, Travis
2012-01-01
The Scalable Integrated Multi-mission Support System (SIMSS) is a tool that performs a variety of test activities related to spacecraft simulations and ground segment checks. SIMSS is a distributed, component-based, plug-and-play client-server system useful for performing real-time monitoring and communications testing. SIMSS runs on one or more workstations and is designed to be user-configurable or to use predefined configurations for routine operations. SIMSS consists of more than 100 modules that can be configured to create, receive, process, and/or transmit data. The SIMSS/GMSEC innovation is intended to provide missions with a low-cost solution for implementing their ground systems, as well as significantly reducing a mission's integration time and risk.
Control of Smart Building Using Advanced SCADA
NASA Astrophysics Data System (ADS)
Samuel, Vivin Thomas
For complete control of a building, a proper SCADA implementation and optimization strategy have to be built, and for better communication and efficiency a proper channel between the communication protocol and SCADA has to be designed. This paper concentrates mainly on the communication protocol and the SCADA implementation, from which better optimization and energy savings are derived for large-scale industrial buildings. The communication channel is used to control the building completely and remotely from a distant place. For efficient results we consider the temperature values and the power ratings of the equipment, so that while controlling the equipment we set threshold values for implementation of the FDD technique. A building management system has become a vital resource for maintaining any building and for safety purposes. Smart buildings refer to various distinct features, where complete automation, office building controls and data center controls are combined. ELCs are used to communicate the load values of the building to the remote server from a far location with the help of an Ethernet communication channel. Based on demand fluctuation and peak voltage, the loads operate differently, increasing the consumption rate and thus the annual consumption bill. Nowadays, saving energy and reducing the consumption bill are essential for any building to operate well over the long term. The equipment is monitored regularly and an optimization strategy is implemented for cost reduction in the automation system, resulting in a reduction of the annual cost and an increase in load lifetime.
A Study on Secure Medical-Contents Strategies with DRM Based on Cloud Computing
Ko, Hoon; Měsíček, Libor; Choi, Jongsun; Hwang, Seogchan
2018-01-01
Many hospitals and medical clinics have been using wearable sensors in their health care systems, because wearable sensors that measure patients' biometric information have been developed to analyze patients remotely. The measured information is saved to a server in a medical center, and the server keeps the medical information, which also involves personal information, on a cloud system. The server and network devices are interconnected, and sensitive medical records are handled remotely. However, attacks on the servers and the network systems are increasing, while the server and the network system have weak protection and security policies against attackers. In this paper, it is suggested that security compliance for medical contents should be followed to improve the level of security, so that the medical contents are kept safe. PMID:29796233
System and Method for Providing a Climate Data Persistence Service
NASA Technical Reports Server (NTRS)
Schnase, John L. (Inventor); Ripley, III, William David (Inventor); Duffy, Daniel Q. (Inventor); Thompson, John H. (Inventor); Strong, Savannah L. (Inventor); McInerney, Mark (Inventor); Sinno, Scott (Inventor); Tamkin, Glenn S. (Inventor); Nadeau, Denis (Inventor)
2018-01-01
A system, method and computer-readable storage devices for providing a climate data persistence service. A system configured to provide the service can include a climate data server that performs data and metadata storage and management functions for climate data objects, a compute-storage platform that provides the resources needed to support a climate data server, provisioning software that allows climate data server instances to be deployed as virtual climate data servers in a cloud computing environment, and a service interface, wherein persistence service capabilities are invoked by software applications running on a client device. The climate data objects can be in various formats, such as International Organization for Standardization (ISO) Open Archival Information System (OAIS) Reference Model Submission Information Packages, Archive Information Packages, and Dissemination Information Packages. The climate data server can enable scalable, federated storage, management, discovery, and access, and can be tailored for particular use cases.
Distributed road assessment system
Beer, N. Reginald; Paglieroni, David W
2014-03-25
A system that detects damage on or below the surface of a paved structure or pavement is provided. A distributed road assessment system includes road assessment pods and a road assessment server. Each road assessment pod includes a ground-penetrating radar antenna array and a detection system that detects road damage from the return signals as the vehicle on which the pod is mounted travels down a road. Each road assessment pod transmits to the road assessment server occurrence information describing each occurrence of road damage that is newly detected on a current scan of a road. The road assessment server maintains a road damage database of occurrence information describing the previously detected occurrences of road damage. After the road assessment server receives occurrence information for newly detected occurrences of road damage for a portion of a road, the road assessment server determines which newly detected occurrences correspond to which previously detected occurrences of road damage.
Wireless-PDA-controlled image workflow from PACS: the next trend in the health care enterprise?
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Documet, Jorge; Zhou, Michael Z.; Cao, Fei; Liu, Brent J.; Mogel, Greg T.; Huang, H. K.
2003-05-01
Image workflow in today's Picture Archiving and Communication Systems (PACS) is controlled from fixed Display Workstations (DW) using proprietary control interfaces. Remote access to the Hospital Information System (HIS) and Radiology Information System (RIS) for urgent patient information retrieval does not exist or is only gradually becoming available. The lack of remote access and workflow control is especially acute for the medical images of a department- or hospital-level PACS. As images become more complex and data sizes expand rapidly with new imaging techniques such as functional MRI, mammography or routine spiral CT, access and manageability become important issues. Long image downloads or incomplete work lists cannot be tolerated in a busy health care environment. In addition, the domain of the PACS is no longer limited to the imaging department; PACS is also being used in the ER and emergency care units. Thus prompt and secure access and manageability, not only for the radiologist but also for the physician, become crucial to utilize the PACS optimally in the health care enterprise of the new millennium. The purpose of this paper is to introduce a concept, and its implementation, for remote access and workflow control of the PACS combining wireless, Internet and Internet2 technologies. A wireless device, the Personal Digital Assistant (PDA), is used to communicate with a PACS web server that acts as a gateway controlling the commands for which the user has access to the PACS server. The commands implemented for this test-bed are query/retrieve of the patient list and study list, including modality, examination, series and image selection, and pushing any list item to a selected DW on the PACS network.
The SAMS: Smartphone Addiction Management System and verification.
Lee, Heyoung; Ahn, Heejune; Choi, Samwook; Choi, Wanbok
2014-01-01
While the popularity of smartphones has given enormous convenience to our lives, their pathological use has created a new mental health concern in the community. Hence, intensive research is being conducted on the etiology and treatment of the condition. However, the traditional clinical approach based on surveys and interviews has serious limitations: health professionals cannot perform continual assessment and intervention for the affected group, and the subjectivity of assessment is questionable. To cope with these limitations, a comprehensive ICT (Information and Communications Technology) system called SAMS (Smartphone Addiction Management System) is developed for objective assessment and intervention. The SAMS system consists of an Android smartphone application and a web application server. The SAMS client monitors the user's application usage together with GPS location and Internet access location, and transmits the data to the SAMS server. The SAMS server stores the usage data and performs key statistical data analysis and usage intervention according to the clinicians' decisions. To verify the reliability and efficacy of the developed system, a comparison study with survey-based screening using the K-SAS (Korean Smartphone Addiction Scale), as well as self-field trials, was performed. The comparison study used usage data from 14 users, adults aged 19 to 50, who left at least one week of usage logs and completed the survey questionnaires. The field trial fully verified the accuracy of the time, location, and Internet access information in the usage measurement and the reliability of the system operation over more than two weeks. The comparison study showed that daily use count has a strong correlation with K-SAS scores, whereas daily use times do not strongly correlate for potentially addicted users. The correlation coefficients of count and time with total K-SAS score are CC = 0.62 and CC = 0.07, respectively, and the t-test comparing potential addicts with non-addicts gave p = 0.047 and p = 0.507, respectively.
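The two headline statistics (the correlation of daily use count with K-SAS score and the contrast between potential addicts and non-addicts) can be reproduced in form with a few lines of SciPy, as sketched below. The numbers are synthetic placeholders rather than the study's data, and the K-SAS cut-off of 55 is an assumption used only to split the example groups.

```python
# Correlation and group contrast on synthetic placeholder data (SciPy assumed available).
from scipy import stats

ksas_total   = [52, 61, 45, 70, 38, 66, 49, 55, 41, 63, 58, 47, 68, 36]   # 14 users
daily_counts = [88, 102, 64, 131, 41, 118, 73, 95, 52, 109, 97, 69, 125, 39]

r, p = stats.pearsonr(daily_counts, ksas_total)
print(f"correlation of use count with K-SAS: r={r:.2f}, p={p:.3f}")

addicts     = [c for c, s in zip(daily_counts, ksas_total) if s >= 55]      # assumed cut-off
non_addicts = [c for c, s in zip(daily_counts, ksas_total) if s < 55]
t, p = stats.ttest_ind(addicts, non_addicts, equal_var=False)
print(f"addicts vs non-addicts (use count): t={t:.2f}, p={p:.3f}")
```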
Observing Consistency in Online Communication Patterns for User Re-Identification.
Adeyemi, Ikuesan Richard; Razak, Shukor Abd; Salleh, Mazleena; Venter, Hein S
2016-01-01
Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical method of anatomizing such a complex system is to examine all of the independent interconnections that constitute the complexity. An examination of the various dimensions of human communication patterns in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas.
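As a rough sketch of the re-identification step, the snippet below trains a logistic regression classifier on synthetic per-session features for 31 simulated users. The feature set, the data generation and the train/test split are illustrative assumptions, not the paper's server-side logs or methodology.

```python
# Logistic-regression user re-identification on synthetic session features (scikit-learn assumed).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, sessions_per_user = 31, 40

# Each simulated user gets a characteristic mean for (requests/min, mean dwell time s,
# fraction of POST requests); sessions are noisy samples around that mean.
centres = rng.uniform([1, 5, 0.0], [20, 120, 0.5], size=(n_users, 3))
X = np.vstack([rng.normal(c, [1.5, 8, 0.05], size=(sessions_per_user, 3)) for c in centres])
y = np.repeat(np.arange(n_users), sessions_per_user)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print("re-identification accuracy:", round(clf.score(X_te, y_te), 3))
```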
An object-oriented, technology-adaptive information model
NASA Technical Reports Server (NTRS)
Anyiwo, Joshua C.
1995-01-01
The primary objective was to develop a computer information system for effectively presenting NASA's technologies to American industries, for appropriate commercialization. To this end a comprehensive information management model, applicable to a wide variety of situations, and immune to computer software/hardware technological gyrations, was developed. The model consists of four main elements: a DATA_STORE, a data PRODUCER/UPDATER_CLIENT and a data PRESENTATION_CLIENT, anchored to a central object-oriented SERVER engine. This server engine facilitates exchanges among the other model elements and safeguards the integrity of the DATA_STORE element. It is designed to support new technologies, as they become available, such as Object Linking and Embedding (OLE), on-demand audio-video data streaming with compression (such as is required for video conferencing), World Wide Web (WWW) and other information services and browsing, fax-back data requests, presentation of information on CD-ROM, and regular in-house database management, regardless of the data model in place. The four components of this information model interact through a system of intelligent message agents which are customized to specific information exchange needs. This model is at the leading edge of modern information management models. It is independent of technological changes and can be implemented in a variety of ways to meet the specific needs of any communications situation. This summer a partial implementation of the model has been achieved. The structure of the DATA_STORE has been fully specified and successfully tested using Microsoft's FoxPro 2.6 database management system. Data PRODUCER/UPDATER and PRESENTATION architectures have been developed and also successfully implemented in FoxPro; and work has started on a full implementation of the SERVER engine. The model has also been successfully applied to a CD-ROM presentation of NASA's technologies in support of Langley Research Center's TAG efforts.
Retrieving high-resolution images over the Internet from an anatomical image database
NASA Astrophysics Data System (ADS)
Strupp-Adams, Annette; Henderson, Earl
1999-12-01
The Visible Human Data set is an important contribution to the national collection of anatomical images. To enhance the availability of these images, the National Library of Medicine has supported the design and development of a prototype object-oriented image database which imports, stores, and distributes high resolution anatomical images in both pixel and voxel formats. One of the key database modules is its client-server Internet interface. This Web interface provides a query engine with retrieval access to high-resolution anatomical images that range in size from 100KB for browser viewable rendered images, to 1GB for anatomical structures in voxel file formats. The Web query and retrieval client-server system is composed of applet GUIs, servlets, and RMI application modules which communicate with each other to allow users to query for specific anatomical structures, and retrieve image data as well as associated anatomical images from the database. Selected images can be downloaded individually as single files via HTTP or downloaded in batch-mode over the Internet to the user's machine through an applet that uses Netscape's Object Signing mechanism. The image database uses ObjectDesign's object-oriented DBMS, ObjectStore, which has a Java interface. The query and retrieval system has been tested with a Java-CDE window system, and on the x86 architecture using Windows NT 4.0. This paper describes the Java applet client search engine that queries the database; the Java client module that enables users to view anatomical images online; the Java application server interface to the database, which organizes data returned to the user; and its distribution engine, which allows users to download image files individually and/or in batch-mode.
Optimal Resource Allocation under Fair QoS in Multi-tier Server Systems
NASA Astrophysics Data System (ADS)
Akai, Hirokazu; Ushio, Toshimitsu; Hayashi, Naoki
Recent developments in network technology have made multi-tier server systems possible, where several tiers perform functionally different processing requested by clients. Allocating the systems' resources to clients dynamically, based on their current requests, is an important issue. Q-RAM has been proposed for resource allocation in real-time systems. In server systems it is important that the execution results of all applications requested by clients attain the same QoS (quality of service) level. In this paper, we extend Q-RAM to multi-tier server systems and propose a method for optimal resource allocation with fairness of the QoS levels of clients' requests. We also consider the assignment of physical machines to be put to sleep in each tier so that the energy consumption is minimized.
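One simple way to picture the fairness objective on discrete QoS levels is to repeatedly raise the worst-off client, tier resources permitting. The sketch below illustrates this idea with an invented per-level resource table and tier capacities; it is not the paper's Q-RAM extension itself.

```python
# Greedy max-min fairness over discrete QoS levels in a three-tier system (illustrative sketch).
import heapq

# demand[level][tier] = resource needed in each tier to serve one client at that QoS level
demand = [(0, 0, 0), (2, 1, 3), (4, 3, 5), (7, 6, 8)]        # levels 0..3 (assumed values)
capacity = [20, 16, 24]                                       # web, application, database tiers
clients = 5

used = [0, 0, 0]
level = [0] * clients
heap = [(0, c) for c in range(clients)]                       # (current level, client id)

while heap:
    lvl, c = heapq.heappop(heap)                              # worst-off client first
    if lvl + 1 >= len(demand):
        continue                                              # already at the top level
    extra = [demand[lvl + 1][t] - demand[lvl][t] for t in range(3)]
    if all(used[t] + extra[t] <= capacity[t] for t in range(3)):
        for t in range(3):
            used[t] += extra[t]
        level[c] = lvl + 1
        heapq.heappush(heap, (lvl + 1, c))                    # may be raised again later

print("per-client QoS levels:", level, "resource use per tier:", used)
```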
NASA Astrophysics Data System (ADS)
Joshi, Rajan L.
2006-03-01
In medical imaging, the popularity of image capture modalities such as multislice CT and MRI is resulting in an exponential increase in the amount of volumetric data that needs to be archived and transmitted. At the same time, the increased data is taxing the interpretation capabilities of radiologists. One of the workflow strategies recommended for radiologists to overcome the data overload is the use of volumetric navigation. This allows the radiologist to seek a series of oblique slices through the data. However, it might be inconvenient for a radiologist to wait until all the slices are transferred from the PACS server to a client, such as a diagnostic workstation. To overcome this problem, we propose a client-server architecture based on JPEG2000 and JPEG2000 Interactive Protocol (JPIP) for rendering oblique slices through 3D volumetric data stored remotely at a server. The client uses the JPIP protocol for obtaining JPEG2000 compressed data from the server on an as needed basis. In JPEG2000, the image pixels are wavelet-transformed and the wavelet coefficients are grouped into precincts. Based on the positioning of the oblique slice, compressed data from only certain precincts is needed to render the slice. The client communicates this information to the server so that the server can transmit only relevant compressed data. We also discuss the use of caching on the client side for further reduction in bandwidth requirements. Finally, we present simulation results to quantify the bandwidth savings for rendering a series of oblique slices.
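The precinct-selection step can be pictured as follows: the client samples the requested oblique plane, maps each sample into volume coordinates, and requests only the precincts that contain at least one sample. The sketch below assumes a fixed precinct size, a simple plane parametrisation and an axial slice-per-code-stream layout, and it omits the actual JPIP request syntax.

```python
# Determine which (axial slice, precinct row, precinct column) triples an oblique slice needs.
import numpy as np

W, H, Z = 512, 512, 300            # volume dimensions: x, y, number of axial slices (assumed)
PRECINCT = 128                      # precinct size in pixels at full resolution (assumed)

def precincts_for_slice(origin, u, v, extent=512, step=4):
    """origin: a point on the plane; u, v: orthonormal in-plane direction vectors."""
    needed = set()
    for s in np.arange(0, extent, step):
        for t in np.arange(0, extent, step):
            x, y, z = origin + s * u + t * v
            if 0 <= x < W and 0 <= y < H and 0 <= z < Z:
                needed.add((int(z), int(y) // PRECINCT, int(x) // PRECINCT))
    return needed

# A 30-degree oblique plane through the middle of the volume.
theta = np.radians(30)
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, np.cos(theta), np.sin(theta)])
req = precincts_for_slice(np.array([0.0, 0.0, 150.0]), u, v)
print(len(req), "precincts across", len({z for z, _, _ in req}), "axial slices")
```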
Flexible distributed architecture for semiconductor process control and experimentation
NASA Astrophysics Data System (ADS)
Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.
1997-01-01
Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server accepts connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
A mobile field-work data collection system for the wireless era of health surveillance.
Forsell, Marianne; Sjögren, Petteri; Renard, Matthew; Johansson, Olle
2011-03-01
In many countries or regions the capacity of health care resources is below the needs of the population and new approaches for health surveillance are needed. Innovative projects, utilizing wireless communication technology, contribute to reliable methods for field-work data collection and reporting to databases. The objective was to describe a new version of a wireless IT-support system for field-work data collection and administration. The system requirements were drawn from the design objective and translated to system functions. The system architecture was based on fieldwork experiences and administrative requirements. The Smartphone devices were HTC Touch Diamond2s, while the system was based on a platform with Microsoft .NET components, and a SQL Server 2005 with Microsoft Windows Server 2003 operating system. The user interfaces were based on .NET programming, and Microsoft Windows Mobile operating system. A synchronization module enabled download of field data to the database, via a General Packet Radio Services (GPRS) to a Local Area Network (LAN) interface. The field-workers considered the here-described applications user-friendly and almost self-instructing. The office administrators considered that the back-office interface facilitated retrieval of health reports and invoice distribution. The current IT-support system facilitates short lead times from fieldwork data registration to analysis, and is suitable for various applications. The advantages of wireless technology, and paper-free data administration need to be increasingly emphasized in development programs, in order to facilitate reliable and transparent use of limited resources.
Designing and Implementation of River Classification Assistant Management System
NASA Astrophysics Data System (ADS)
Zhao, Yinjun; Jiang, Wenyuan; Yang, Rujun; Yang, Nan; Liu, Haiyan
2018-03-01
In an earlier publication, we proposed a new Decision Classifier (DCF) for classifying Chinese rivers based on their structures. To expand, enhance and promote the application of the DCF, we built a computer system to support river classification, named the River Classification Assistant Management System. Based on the ArcEngine and ArcServer platforms, this system implements functions such as data management, river network extraction, river classification and publication of results, within a combined Client/Server and Browser/Server framework.
A Large-scale Distributed Indexed Learning Framework for Data that Cannot Fit into Memory
2015-03-27
learn a classifier. Integrating three learning techniques (online, semi-supervised and active learning) together with selective sampling, with minimal communication between the server and the clients, solved this problem.
An Optimization of the Basic School Military Occupational Skill Assignment Process
2003-06-01
Corps Intranet (NMCI) supports it. We evaluated the use of Microsoft's SQL Server, but dismissed this after learning that TBS did not possess a SQL Server license or a qualified SQL Server administrator. SQL Server would have provided for additional security measures not available in MS … administrator. Although not as powerful as SQL Server, MS Access can handle the multi-user environment necessary for this system. The training
Kuzmak, P. M.; Dayhoff, R. E.
1992-01-01
There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner. PMID:1482906
Enhanced networked server management with random remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2003-08-01
In this paper, the model is focused on available server management in network environments. The (remote) backup servers are hooked up by VPN (Virtual Private Network) and replace broken main servers immediately. A virtual private network (VPN) is a way to use a public network infrastructure to hook up long-distance servers within a single network infrastructure. The servers can be represented as "machines", and the system then deals with an unreliable main machine and random auxiliary spare (remote backup) machines. When the system performs mandatory routine maintenance, auxiliary machines are used for backups during idle periods. Unlike other existing models, the availability of auxiliary machines changes for each activation in this enhanced model. Analytically tractable results are obtained by using several mathematical techniques, and the results are demonstrated in the framework of optimized networked server allocation problems.
Client - server programs analysis in the EPOCA environment
NASA Astrophysics Data System (ADS)
Donatelli, Susanna; Mazzocca, Nicola; Russo, Stefano
1996-09-01
Client-server processing is a popular paradigm for distributed computing. In the development of client-server programs, the designer first has to ensure that the implementation behaves correctly, in particular that it is deadlock free. Second, he has to guarantee that the program meets predefined performance requirements. This paper addresses the issues in the analysis of client-server programs in EPOCA. EPOCA is a computer-aided software engineering (CASE) support system that allows the automated construction and analysis of generalized stochastic Petri net (GSPN) models of concurrent applications. The paper describes, on the basis of a realistic case study, how client-server systems are modelled in EPOCA, and the kind of qualitative and quantitative analysis supported by its tools.
Stockburger, D W
1999-05-01
Active server pages permit a software developer to customize the Web experience for users by inserting server-side script and database access into Web pages. This paper describes applications of these techniques and provides a primer on the use of these methods. Applications include a system that generates and grades individualized homework assignments and tests for statistics students. The student accesses the system as a Web page, prints out the assignment, does the assignment, and enters the answers on the Web page. The server, running on NT Server 4.0, grades the assignment, updates the grade book (on a database), and returns the answer key to the student.
Saving energy for the data collection point in WBAN network
NASA Astrophysics Data System (ADS)
Nguyen-Duc, Toan; Kamioka, Eiji
2017-11-01
Wireless sensor networking (WSN) has developed rapidly and become essential in various domains, including health care systems. Such systems use WSNs to collect real-time medical sensor data, aiming at improving patient safety. For instance, patients suffering from adverse events, i.e., cardiac or respiratory arrests, are monitored so as to prevent them from coming to harm. Sensors are placed on, in or near the patient's body to continuously collect sensing data such as electrocardiograms, blood oxygenation, breathing and heart rate. In this case, the sensors form a subcategory of WSN called a wireless body area network (WBAN). In a WBAN, sensing data are sent to one or more data collection points called personal servers (PS). The role of the PS is important since it forwards sensed data to a medical server via a Bluetooth/WLAN connection in real time to support storage of information and real-time diagnosis; the device can also issue a notification of an emergency status. Since the PS is a battery-powered device, when its battery is empty it disconnects the sensed medical data from the rest of the network. To the best of our knowledge, very few studies focus on saving energy for the PS. To this end, this work investigates the trade-off between the energy consumed for wireless communication and the amount of sensing data. An energy consumption model for wireless communication has been proposed based on direct measurement using a real testbed. According to our findings, it is possible to save energy for the PS by selecting the suitable wireless technology based on the amount of data to be transmitted.
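The trade-off can be captured with a toy model in which each radio has a fixed connection/setup cost plus a per-byte transmission cost, so the cheaper option for the personal server depends on how much sensed data is pending. The energy figures below are illustrative assumptions, not the paper's testbed measurements.

```python
# Pick the radio with the lower total energy for a pending batch of sensed data (toy model).
RADIOS = {
    "bluetooth": {"setup_mJ": 90.0,  "per_kB_mJ": 1.2},   # low setup cost, high per-byte cost
    "wlan":      {"setup_mJ": 350.0, "per_kB_mJ": 0.3},   # high setup cost, low per-byte cost
}

def energy_mJ(radio, kilobytes):
    r = RADIOS[radio]
    return r["setup_mJ"] + r["per_kB_mJ"] * kilobytes

def best_radio(kilobytes):
    return min(RADIOS, key=lambda r: energy_mJ(r, kilobytes))

for kb in (50, 200, 500, 2000):
    print(kb, "kB ->", best_radio(kb))    # small batches favour Bluetooth, large batches WLAN
```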
Implementation of information and communication technologies for health in Bangladesh
Islam, Sheik Mohammed Shariful; Tabassum, Reshman
2015-01-01
Problem: Bangladesh has yet to develop a fully integrated health information system infrastructure that is critical to guiding policy development and planning. Approach: Initial pilot telemedicine and eHealth programmes were not coordinated at national level. However, in 2011, a national eHealth policy was implemented. Local setting: Bangladesh has made substantial improvements to its health system. However, the country still faces public health challenges with limited and inequitable access to health services and lack of adequate resources to meet the demands of the population. Relevant changes: In 2008, eHealth services were introduced, including computerization of health facilities at sub-district levels, internet connections, internet servers and an mHealth service for communicating with health-care providers. Health facilities at sub-district levels were provided with internet connections and servers. In 482 upazila health complexes and district hospitals, an mHealth service was set up where an on-duty doctor is available for patients at all hours to provide consultations by mobile phone. A government-operated telemedicine service was initiated and by 2014, 43 fully equipped centres were in service. These centres provide medical consultations by qualified physicians to patients visiting rural and remote community clinics and union health centres. Lessons learnt: Despite early pilot interventions and successful implementation, progress in adopting eHealth strategies in Bangladesh has been slow. There is a lack of common standards on information technology for health, which causes difficulties in data management and sharing among different databases. Limited internet bandwidth and the high cost of infrastructure and software development are barriers to adoption of these technologies. PMID:26549909
Designing a scalable video-on-demand server with data sharing
NASA Astrophysics Data System (ADS)
Lim, Hyeran; Du, David H.
2000-12-01
As disk space and transfer speed increase, the bandwidth between a server and its disks has become critical for video-on-demand (VOD) services. Our VOD server consists of several hosts sharing data on disks through a ring-based network. Data sharing provided by the spatial-reuse ring network between servers and disks not only increases utilization towards full bandwidth but also improves the availability of videos. Striping and replication methods are introduced in order to improve the efficiency of our VOD server system as well as the availability of videos. We consider two kinds of resources in a VOD server system. Given a representative access profile, our intention is to propose an algorithm that finds an initial configuration, placing videos on the disks in the system successfully. If any copy of a video cannot be placed due to lack of resources, more servers/disks are added. When all videos have been placed on the disks by our algorithm, the final configuration is determined, together with an indicator of how tolerant it is to fluctuations in the demand for videos. Considering that this is an NP-hard problem, our algorithm generates the final configuration in O(M log M) at best, where M is the number of movies.
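A minimal sketch of a demand-driven placement pass is shown below: videos are sorted by expected demand (the O(M log M) step) and each is assigned to the disk with the most remaining streaming bandwidth that still has space. Striping and replication are omitted, the capacity figures are invented, and this is not the paper's algorithm itself.

```python
# Greedy placement of videos onto disks by expected demand (illustrative sketch).
import heapq

def place_videos(videos, disks):
    """videos: (name, size_GB, expected_streams); disks: (space_GB, bandwidth_streams)."""
    # Max-heap on remaining streaming bandwidth: (-bandwidth, disk index, remaining space).
    heap = [(-bw, i, sp) for i, (sp, bw) in enumerate(disks)]
    heapq.heapify(heap)
    placement, failed = {}, []
    for name, size, streams in sorted(videos, key=lambda v: -v[2]):   # hottest videos first
        neg_bw, i, sp = heapq.heappop(heap)
        if sp >= size and -neg_bw >= streams:
            placement[name] = i
            heapq.heappush(heap, (neg_bw + streams, i, sp - size))
        else:
            heapq.heappush(heap, (neg_bw, i, sp))
            failed.append(name)               # would trigger adding another disk/server
    return placement, failed

videos = [("news", 2, 40), ("movie_a", 8, 25), ("movie_b", 8, 10), ("docu", 4, 5)]
disks  = [(16, 50), (16, 30)]
print(place_videos(videos, disks))
```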
NASA Astrophysics Data System (ADS)
Hernandez, C.
2010-09-01
The weakness of small island electrical grids is a handicap for electrical generation from renewable energy sources. With the intention of maximizing the installation of photovoltaic generators in the Canary Islands, the need arises to develop a solar forecasting system that makes it possible to know in advance the amount of PV-generated electricity that will be fed into the grid from the PV power plants installed on the island. The forecasting tools need feedback of real weather data in "real time" from remote weather stations. Nevertheless, the transfer of these data to the computation servers is very complicated with the old point-to-point telecommunication systems, which neither allow the transfer of data from several remote weather stations simultaneously nor permit high-frequency sampling of weather parameters, owing to the slowness of the connection. This project has developed a telecommunications infrastructure that allows sensorized remote stations to send their sensor data, once every minute and simultaneously, to the calculation server running the solar forecasting numerical models. For this, the Canary Islands Institute of Technology has added a sophisticated communications network to its 30 weather stations measuring irradiation at strategic sites, areas with high penetration of photovoltaic generation or with the potential to host grid-connected photovoltaic power plants in the future. In each of the stations, instruments have been installed to measure irradiance on an inclined silicon cell, global radiation on a horizontal surface and ambient temperature. Mobile telephone devices have been installed and programmed in each of the weather stations, allowing the transfer of their data over the UMTS service offered by the local telephone operator. Every minute the computer server running the numerical weather forecasting models receives data inputs from 120 instruments distributed over the 30 radiometric stations. As a result, there now exists a stable, flexible, safe and economical infrastructure of radiometric stations and telecommunications that, on the one hand, provides real-time data from all 30 remote weather stations and, on the other hand, allows communication with them in order to reprogram them and to carry out maintenance work.
Embedded System Implementation on FPGA System With μCLinux OS
NASA Astrophysics Data System (ADS)
Fairuz Muhd Amin, Ahmad; Aris, Ishak; Syamsul Azmir Raja Abdullah, Raja; Kalos Zakiah Sahbudin, Ratna
2011-02-01
Embedded systems are taking on more complicated tasks as the processors involved become more powerful. Embedded systems are widely used in many areas such as industry, automotive applications, medical imaging, communications, speech recognition and computer vision. The complexity requirements in hardware and software nowadays call for a flexible system that allows further enhancement of any design without adding new hardware; otherwise, any change in the system design would require the processor itself to be changed. To overcome this problem, a System On Programmable Chip (SOPC) has been designed and developed using a Field Programmable Gate Array (FPGA). A softcore processor, the 32-bit RISC NIOS II, was utilized as the microprocessor core in the FPGA system together with the embedded operating system μClinux. In this paper, an example of a web server is explained and demonstrated.
An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System
Rampinelli, Mariana.; Covre, Vitor Buback.; de Queiroz, Felippe Mendonça.; Vassallo, Raquel Frizera.; Bastos-Filho, Teodiano Freire.; Mazo, Manuel.
2014-01-01
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization. PMID:25196009
NASA Astrophysics Data System (ADS)
Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.
2012-01-01
Surveillance applications usually require high levels of video quality, resulting in high power consumption. The existence of a well-behaved scheme to balance video quality and power consumption is crucial for the system's performance. In the present work, we adopt the game-theoretic approach of Kalai-Smorodinsky Bargaining Solution (KSBS) to deal with the problem of optimal resource allocation in a multi-node wireless visual sensor network (VSN). In our setting, the Direct Sequence Code Division Multiple Access (DS-CDMA) method is used for channel access, while a cross-layer optimization design, which employs a central processing server, accounts for the overall system efficacy through all network layers. The task assigned to the central server is the communication with the nodes and the joint determination of their transmission parameters. The KSBS is applied to non-convex utility spaces, efficiently distributing the source coding rate, channel coding rate and transmission powers among the nodes. In the underlying model, the transmission powers assume continuous values, whereas the source and channel coding rates can take only discrete values. Experimental results are reported and discussed to demonstrate the merits of KSBS over competing policies.
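For reference, a standard statement of the Kalai-Smorodinsky criterion follows (generic notation, not the paper's exact non-convex formulation): with disagreement point d and ideal (utopia) point u*, the KSBS selects the Pareto-efficient utility vector at which every player attains the same fraction of its maximal possible gain.

% Kalai-Smorodinsky bargaining solution (generic notation).
% d_i : disagreement utility of node i,  u_i^* : ideal (utopia) utility of node i.
\[
  \frac{u_1 - d_1}{u_1^{*} - d_1}
  = \frac{u_2 - d_2}{u_2^{*} - d_2}
  = \dots
  = \frac{u_N - d_N}{u_N^{*} - d_N},
  \qquad u \ \text{on the Pareto frontier of the feasible utility set.}
\]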
Casimage project: a digital teaching files authoring environment.
Rosset, Antoine; Muller, Henning; Martins, Martina; Dfouni, Natalia; Vallée, Jean-Paul; Ratib, Osman
2004-04-01
The goal of the Casimage project is to offer an authoring and editing environment integrated with the Picture Archiving and Communication Systems (PACS) for creating image-based electronic teaching files. This software is based on a client/server architecture allowing users remote access to a central database. This authoring environment allows radiologists to create reference databases and collections of digital images for teaching and research directly from clinical cases being reviewed on PACS diagnostic workstations. The environment includes all tools to create teaching files, including textual description, annotations, and image manipulation. The software also allows users to generate stand-alone CD-ROMs and web-based teaching files to easily share their collections. The system includes a web server compatible with the Medical Imaging Resource Center standard (MIRC, http://mirc.rsna.org) to easily integrate collections in the RSNA web network dedicated to teaching files. This software could be installed on any PACS workstation to allow users to add new cases at any time and anywhere during clinical operations. Several image collections were created with this tool, including a thoracic imaging collection that was subsequently made available on a CD-ROM, on our web site, and through the MIRC network for public access.
Using object-oriented analysis to design a multi-mission ground data system
NASA Technical Reports Server (NTRS)
Shames, Peter
1995-01-01
This paper describes an analytical approach and descriptive methodology that is adapted from Object-Oriented Analysis (OOA) techniques. The technique is described and then used to communicate key issues of system logical architecture. The essence of the approach is to limit the analysis to only service objects, with the idea of providing a direct mapping from the design to a client-server implementation. Key perspectives on the system, such as user interaction, data flow and management, service interfaces, hardware configuration, and system and data integrity are covered. A significant advantage of this service-oriented approach is that it permits mapping all of these different perspectives on the system onto a single common substrate. This services substrate is readily represented diagrammatically, thus making details of the overall design much more accessible.
FISH Oracle 2: a web server for integrative visualization of genomic data in cancer research.
Mader, Malte; Simon, Ronald; Kurtz, Stefan
2014-03-31
A comprehensive view on all relevant genomic data is instrumental for understanding the complex patterns of molecular alterations typically found in cancer cells. One of the most effective ways to rapidly obtain an overview of genomic alterations in large amounts of genomic data is the integrative visualization of genomic events. We developed FISH Oracle 2, a web server for the interactive visualization of different kinds of downstream processed genomics data typically available in cancer research. A powerful search interface and a fast visualization engine provide a highly interactive visualization for such data. High quality image export enables life scientists to easily communicate their results. A comprehensive data administration component allows users to keep track of the available data sets. We applied FISH Oracle 2 to published data and found evidence that, in colorectal cancer cells, the gene TTC28 may be inactivated in two different ways, a fact that has not been published before. The interactive nature of FISH Oracle 2 and the possibility to store, select and visualize large amounts of downstream processed data support life scientists in generating hypotheses. The export of high quality images supports explanatory data visualization, simplifying the communication of new biological findings. A FISH Oracle 2 demo server and the software are available at http://www.zbh.uni-hamburg.de/fishoracle.
Application of SQL database to the control system of MOIRCS
NASA Astrophysics Data System (ADS)
Yoshikawa, Tomohiro; Omata, Koji; Konishi, Masahiro; Ichikawa, Takashi; Suzuki, Ryuji; Tokoku, Chihiro; Uchimoto, Yuka Katsuno; Nishimura, Tetsuo
2006-06-01
MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru telescope. In order to perform near-infrared imaging and spectroscopy with a cold slit mask, MOIRCS contains many device components, which are distributed on an Ethernet LAN. Two PCs wired to the focal plane array electronics operate the two HAWAII2 detectors, and two other PCs are used for integrated control and quick data reduction, respectively. Though most of the devices (e.g., filter and grism turrets, slit exchange mechanism for spectroscopy) are controlled via RS232C interfaces, they are accessible over TCP/IP connections using TCP/IP-to-RS232C converters. Moreover, other devices are also connected to the Ethernet LAN. This network-distributed structure provides flexibility in the hardware configuration. We have constructed an integrated control system for such network-distributed hardware, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS also has a network-distributed software design, applying TCP/IP socket communication to interprocess communication. To support communication between the device interfaces and the user interfaces, we defined three layers in T-LECS: an external layer for user interface applications, an internal layer for device interface applications, and a communication layer, which connects the two layers above. In the communication layer, we store the system's data in an SQL database server: status data, FITS header data, and also metadata such as device configuration data and FITS configuration data. We present our software system design and the database schema used to manage observations of MOIRCS with Subaru.
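As a rough sketch of the kind of tables such a communication layer might hold (the actual T-LECS schema is not given in the abstract; the table and column names below are invented for illustration), status values and FITS header entries can be kept in a small relational store:

# Illustrative only: a toy version of a status/FITS-header store such as the
# one T-LECS keeps in its communication layer. Table and column names are
# invented, not taken from the MOIRCS design.
import sqlite3

conn = sqlite3.connect("tlecs_demo.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS device_status (
    device    TEXT NOT NULL,          -- e.g. 'filter_turret'
    keyword   TEXT NOT NULL,          -- e.g. 'POSITION'
    value     TEXT,
    updated   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS fits_header (
    exposure_id INTEGER NOT NULL,
    keyword     TEXT NOT NULL,        -- FITS keyword, e.g. 'FILTER01'
    value       TEXT,
    comment     TEXT
);
""")
conn.execute("INSERT INTO device_status (device, keyword, value) VALUES (?, ?, ?)",
             ("filter_turret", "POSITION", "J"))
conn.execute("INSERT INTO fits_header (exposure_id, keyword, value, comment) VALUES (?, ?, ?, ?)",
             (1, "FILTER01", "J", "first filter element"))
conn.commit()
print(conn.execute("SELECT device, keyword, value FROM device_status").fetchall())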
System integration and DICOM image creation for PET-MR fusion.
Hsiao, Chia-Hung; Kao, Tsair; Fang, Yu-Hua; Wang, Jiunn-Kuen; Guo, Wan-Yuo; Chao, Liang-Hsiao; Yen, Sang-Hue
2005-03-01
This article demonstrates a gateway system for converting image fusion results to digital imaging and communication in medicine (DICOM) objects. For the purpose of standardization and integration, we have followed the guidelines of the Integrated Healthcare Enterprise technical framework and developed a DICOM gateway. The gateway system combines data from the hospital information system, image fusion results, and information generated by the gateway itself to constitute new DICOM objects. All the mandatory tags defined in a standard DICOM object are generated in the gateway system. The gateway system generates two series of SOP (Service Object Pair) instances for each PET-MR fusion result: one for the reconstructed magnetic resonance (MR) images and the other for the positron emission tomography (PET) images. The size, resolution, spatial coordinates, and number of frames are the same in both series of SOP instances. Every newly generated MR image exactly fits one of the reconstructed PET images. Those DICOM images are stored in the picture archiving and communication system (PACS) server by means of standard DICOM protocols. When those images are retrieved and viewed by standard DICOM viewing systems, both images can be viewed at the same anatomical location. This system is useful for precise diagnosis and therapy.
Opportunities for the Mashup of Heterogeneous Data Servers via Semantic Web Technology
NASA Astrophysics Data System (ADS)
Ritschel, Bernd; Seelus, Christoph; Neher, Günther; Iyemori, Toshihiko; Koyama, Yukinobu; Yatagai, Akiyo; Murayama, Yasuhiro; King, Todd; Hughes, John; Fung, Shing; Galkin, Ivan; Hapgood, Michael; Belehaki, Anna
2015-04-01
The European Union ESPAS, Japanese IUGONET and GFZ ISDC data servers were developed for the ingestion, archiving and distribution of geo- and space-science domain data. Major parts of the data managed by these servers relate to near-Earth space and geomagnetic field data. A smart mashup of the data servers would allow seamless browsing of and access to data and related context information. However, achieving a high level of interoperability is a challenge because the data servers are based on different data models and software frameworks. This paper focuses on the latest experiments and results for the mashup of the data servers using the Semantic Web approach. Besides the mashup of domain and terminological ontologies, the options for connecting data managed by relational databases using D2R server and SPARQL technology will be addressed in particular. A successful realization of the data server mashup will have a positive impact not only on the data users of the specific scientific domain but also on related projects, such as the development of a new interoperable version of NASA's Planetary Data System (PDS) or ICSU's World Data System alliance. ESPAS data server: https://www.espas-fp7.eu/portal/ IUGONET data server: http://search.iugonet.org/iugonet/ GFZ ISDC data server (semantic Web based prototype): http://rz-vm30.gfz-potsdam.de/drupal-7.9/ NASA PDS: http://pds.nasa.gov ICSU-WDS: https://www.icsu-wds.org
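As an illustration of the SPARQL access path mentioned above, a client can query a D2R-published dataset over HTTP. The endpoint URL and the predicate used below are placeholders, not the actual ESPAS/IUGONET/ISDC vocabularies.

# Illustrative SPARQL query against a D2R/SPARQL endpoint.
# The endpoint URL and the predicate are placeholders.
import json
import urllib.parse
import urllib.request

ENDPOINT = "http://example.org/sparql"   # placeholder endpoint
QUERY = """
SELECT ?dataset ?title WHERE {
  ?dataset <http://purl.org/dc/terms/title> ?title .
} LIMIT 10
"""

params = urllib.parse.urlencode({"query": QUERY})
req = urllib.request.Request(
    f"{ENDPOINT}?{params}",
    headers={"Accept": "application/sparql-results+json"},
)
with urllib.request.urlopen(req) as resp:
    results = json.load(resp)

for binding in results["results"]["bindings"]:
    print(binding["dataset"]["value"], "-", binding["title"]["value"])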
DOT National Transportation Integrated Search
1997-06-01
The Geographic Information System-Transportation (GIS-T) ISTEA Management Systems Server Net Prototype Pooled Fund Study represents the first national cooperative effort in the transportation industry to address the management and monitoring systems ...
Understanding Customer Dissatisfaction with Underutilized Distributed File Servers
NASA Technical Reports Server (NTRS)
Riedel, Erik; Gibson, Garth
1996-01-01
An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance, as perceived by users, the response time of distributed operations must improve. In this paper we analyze measurements of an Andrew file system (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server's overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server central processing unit (CPU) use. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.
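The "percentage of variation explained" statistic quoted above is the coefficient of determination from a simple linear regression; a minimal recomputation sketch with invented numbers (not the AFS trace data) looks like this:

# Fraction of variance in client response time explained by server CPU use,
# computed as R^2 of a simple least-squares fit. The numbers are invented.
def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    return 1.0 - ss_res / syy

cpu_busy = [0.02, 0.05, 0.30, 0.60, 0.85, 0.10]   # server CPU utilisation samples
resp_ms  = [12, 15, 45, 120, 300, 20]             # client response times (ms)
print(f"R^2 = {r_squared(cpu_busy, resp_ms):.2f}")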
A Wireless Physiological Signal Monitoring System with Integrated Bluetooth and WiFi Technologies.
Yu, Sung-Nien; Cheng, Jen-Chieh
2005-01-01
This paper proposes a wireless patient monitoring system which integrates Bluetooth and WiFi wireless technologies. A wireless portable multi-parameter device was designed to acquire physiological signals and transmit them to a local server via Bluetooth wireless technology. Four kinds of monitor units were designed to communicate via WiFi wireless technology, including a local monitor unit, a control center, mobile devices (personal digital assistant; PDA), and a web page. The use of various monitor units is intended to meet the different medical requirements of different medical personnel. This system was demonstrated to improve mobility and flexibility for both patients and medical personnel, which further improves the quality of health care.
The Data Acquisition System of the Stockholm Educational Air Shower Array
NASA Astrophysics Data System (ADS)
Hofverberg, P.; Johansson, H.; Pearce, M.; Rydstrom, S.; Wikstrom, C.
2005-12-01
The Stockholm Educational Air Shower Array (SEASA) project is deploying an array of plastic scintillator detector stations on school roofs in the Stockholm area. Signals from GPS satellites are used to time-synchronise signals from the widely separated detector stations, allowing cosmic ray air showers to be identified and studied. A low-cost and highly scalable data acquisition system has been produced using embedded Linux processors which communicate station data to a central server running a MySQL database. Air shower data can be visualised in real-time using a Java-applet client. It is also possible to query the database and manage detector stations from the client. In this paper, the design and performance of the system are described.
Medical Data Transmission Using Cell Phone Networks
NASA Astrophysics Data System (ADS)
Voos, J.; Centeno, C.; Riva, G.; Zerbini, C.; Gonzalez, E.
2011-12-01
A major challenge in telemedicine systems is meeting the technical requirements needed for successful deployment in remote locations where the available hardware and communication infrastructure are not adequate for good medical data transmission. Despite the wide availability of standards, methodologies, applications and systems-integration facilities in telemedicine, in many cases the implementation requirements cannot be met in remote areas of our country. Therefore, this paper presents an alternative for the transmission of messages related to medical studies using the cellular network and the HL7 V3 standard [1] for data modeling. The messages are transmitted to a web server and stored in a centralized database which allows data sharing with other specialists.
A hierarchical distributed control model for coordinating intelligent systems
NASA Technical Reports Server (NTRS)
Adler, Richard M.
1991-01-01
A hierarchical distributed control (HDC) model for coordinating cooperative problem-solving among intelligent systems is described. The model was implemented using SOCIAL, an innovative object-oriented tool for integrating heterogeneous, distributed software systems. SOCIAL embeds applications in 'wrapper' objects called Agents, which supply predefined capabilities for distributed communication, control, data specification, and translation. The HDC model is realized in SOCIAL as a 'Manager' Agent that coordinates interactions among application Agents. The HDC Manager indexes the capabilities of application Agents, routes request messages to suitable server Agents, and stores results in a commonly accessible 'Bulletin-Board'. This centralized control model is illustrated in a fault diagnosis application for launch operations support of the Space Shuttle fleet at the NASA Kennedy Space Center.
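A stripped-down sketch of the Manager pattern described above follows. Class and method names are invented, and SOCIAL itself is a different object system, so this Python rendering is purely illustrative: agents register their capabilities, the manager routes requests to a suitable agent, and results are posted to a shared bulletin board.

# Toy hierarchical-distributed-control manager: capability index, request
# routing, and a shared "bulletin board" for results. Names are invented.
class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def handle(self, request):
        return f"{self.name} handled '{request}'"

class Manager:
    def __init__(self):
        self.agents = []
        self.bulletin_board = {}          # request -> result, readable by all agents

    def register(self, agent):
        self.agents.append(agent)

    def route(self, capability, request):
        for agent in self.agents:         # pick the first agent advertising the capability
            if capability in agent.capabilities:
                result = agent.handle(request)
                self.bulletin_board[request] = result
                return result
        raise LookupError(f"no agent offers capability '{capability}'")

manager = Manager()
manager.register(Agent("diagnoser", {"fault-diagnosis"}))
manager.register(Agent("scheduler", {"planning"}))
print(manager.route("fault-diagnosis", "LOX valve anomaly"))
print(manager.bulletin_board)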
Consumer server: A UNIX based event distributor in new CDF data acquisition system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, F.; Morita, Y.; Nomachi, M.
1994-12-31
The Consumer Server is a program that handles event data and consumer trigger request I/O among the Level 3 farm and consumer processes in the new CDF data acquisition system. This program uses standard UNIX libraries and commercial network technologies to achieve high portability. The authors describe the concept and configuration of the Consumer Server and report its performance.
How to securely replicate services
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth
1992-01-01
A method is presented for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. More precisely, a service is replicated by n servers in such a way that a correct client will accept a correct server's response if, for some prespecified parameter k, at least k servers are correct and fewer than k servers are corrupt. The issue of maintaining causality among client requests is also addressed. A security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service is illustrated. An approach to counter this problem is proposed that requires fewer than k servers to be corrupt and that is live if at least k+b servers are correct, where b is the assumed maximum total number of corrupt servers in any system run. An important and novel feature of these schemes is that the client need not be able to identify or authenticate even a single server. Instead, the client is required only to possess at most two public keys for the service. The practicality of these schemes is illustrated through a discussion of several issues pertinent to their implementation and use, and their intended role in a secure version of the Isis system is also described.
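A minimal sketch of the k-of-n acceptance rule described above, under the simplifying assumption that correct servers return byte-identical answers (the paper's actual protocol also covers authentication and causality, which are omitted here):

# Accept a replicated service's answer once at least k servers agree on it.
# Simplified: real protocols also verify responses and handle causality.
from collections import Counter

def accept_response(replies, k):
    """replies: list of responses from individual servers (some may be corrupt)."""
    counts = Counter(replies)
    answer, votes = counts.most_common(1)[0]
    return answer if votes >= k else None   # None: not enough agreement yet

replies = ["42", "42", "17", "42", "99"]     # two corrupt servers answered wrongly
print(accept_response(replies, k=3))         # -> "42"
print(accept_response(replies, k=4))         # -> None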
Design and implementation of a smart card based healthcare information system.
Kardas, Geylani; Tunali, E Turhan
2006-01-01
Smart cards are used in information technologies as portable integrated devices with data storage and data processing capabilities. As in other fields, smart card use in health systems has become popular due to their increased capacity and performance. Their efficiency, together with easy and fast data access facilities, has led to particularly widespread implementation in security systems. In this paper, a smart card based healthcare information system is developed. The system uses smart cards for personal identification and transfer of health data, and provides data communication via a distributed protocol developed particularly for this study. Two smart card software modules are implemented that run on patient and healthcare professional smart cards, respectively. In addition to personal information, general health information about the patient is also loaded onto the patient smart card. Health care providers use their own smart cards to authenticate to the system and to access data on patient cards. Encryption keys and digital signature keys stored on the system's smart cards are used for secure and authenticated data communication between clients and database servers over a distributed object protocol. The system is developed on the Java platform using an object-oriented architecture and design patterns.
NASA Technical Reports Server (NTRS)
Lyle, Stacey D.
2009-01-01
A software package has been developed that uses GPS signal structures to authenticate mobile devices into a network wirelessly and in real time, allowing access to critical geospatial information only when the rover(s) is/are within a set of boundaries or a specific area. The advantage lies in that the system only allows those within designated geospatial boundaries or areas into the server. The Geospatial Authentication software has two parts, Server and Client. The server software is a virtual private network (VPN) developed on the Linux operating system using the Perl programming language. The server can be a stand-alone VPN server or can be combined with other applications and services. The client software is GUI Windows CE software, or Mobile Graphical Software, that allows users to authenticate into a network. The purpose of the client software is to pass the needed satellite information to the server for authentication.
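The core geospatial decision, whether the reported GPS position lies inside an authorized boundary, can be sketched with a standard point-in-polygon test. The coordinates below are invented placeholders, and the package's actual VPN and GPS-signal authentication logic is not shown.

# Ray-casting point-in-polygon test: is the rover's GPS fix inside the
# authorized area? Boundary coordinates are invented placeholders.
def inside(point, polygon):
    x, y = point
    result = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                result = not result
    return result

authorized_area = [(-97.35, 27.70), (-97.30, 27.70), (-97.30, 27.75), (-97.35, 27.75)]
rover_fix = (-97.32, 27.72)          # (lon, lat) from the GPS receiver
print("grant access" if inside(rover_fix, authorized_area) else "deny access")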
Huang, Ean-Wen; Chiou, Shwu-Fen; Pan, Mei-Lien; Wu, Hua-Huan; Jiang, Jia-Rong; Lu, Yi-De
2017-08-01
Rapid progress in information and communication technologies and the increasing popularity of healthcare-related applications has increased interest in the topic of intelligent medical care. This topic emphasizes the use of information and communication technologies to collect and analyze a variety of data in order to provide physicians and other healthcare professionals with clinical decision support. At present, so-called smart hospitals are the focal point of most intelligent-systems development activity, with little attention currently being focused on long-term care needs. The present article discusses the application of intelligent systems in the field of long-term care, especially in community and home-based models of care. System-implementation components such as the data entry interface components of mobile devices, the data transmission and synchronization components between the mobile device and file server, the data presentation, and the statistics analysis components are also introduced. These components have been used to develop long-term care service-related applications, including home health nursing, home-care services, meals on wheels, and assistive devices rental. We believe that the findings will be useful for the promotion of innovative long-term care services as well as the improvement of healthcare quality and efficiency.
PCASSO: a design for secure communication of personal health information via the internet.
Baker, D B; Masys, D R
1999-05-01
The Internet holds both promise and peril for the communications of person-identifiable health information. Because of technical features designed to promote accessibility and interoperability rather than security, Internet addressing conventions and transport protocols are vulnerable to compromise by malicious persons and programs. In addition, most commonly used personal computer (PC) operating systems currently lack the hardware-based system software protection and process isolation that are essential for ensuring the integrity of trusted applications. Security approaches designed for electronic commerce, that trade known security weaknesses for limited financial liability, are not sufficient for personal health data, where the personal damage caused by unintentional disclosure may be far more serious. To overcome these obstacles, we are developing and evaluating an Internet-based communications system called PCASSO (Patient-centered access to secure systems online) that applies state of the art security to health information. PCASSO includes role-based access control, multi-level security, strong device and user authentication, session-specific encryption and audit trails. Unlike Internet-based electronic commerce 'solutions,' PCASSO secures data end-to-end: in the server; in the data repository; across the network; and on the client. PCASSO is designed to give patients as well as providers access to personal health records via the Internet.
Unluturk, Mehmet S
2012-06-01
A nurse call system is an electrically functioning system by which patients can call for assistance from a bedside station or from a duty station. An intermittent tone is heard and a corridor lamp located outside the room starts blinking at a slower or faster rate depending on the call origination. It is essential to alert nurses on time so that they can offer care and comfort without any delay. There are currently many devices available for a nurse call system to improve communication between nurses and patients, such as pagers, RFID (radio frequency identification) badges, wireless phones and so on. To integrate all these devices into an existing nurse call system and make them communicate with each other, we propose software client applications called bridges in this paper. We also propose a Windows server application called SEE (Supervised Event Executive) that delivers messages among these devices. A single hardware dongle is utilized for authentication and copy protection for SEE. Protecting SEE only with the security provided by the dongle is a weak defense against hackers. In this paper, we develop some defense patterns against hackers, such as calculating checksums at runtime, making calls to the dongle from multiple places in the code, and handling errors properly by logging them into a database.
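One of the anti-tamper patterns mentioned above, recomputing a checksum of the program's own code at run time and comparing it with an expected value, can be sketched as follows. The hashing of the running executable is generic; the actual SEE implementation and its dongle calls are not shown and are not assumed here.

# Runtime self-checksum: hash this file's own bytes and compare with an
# expected digest. In a real deployment the expected value would be embedded
# at build time (and checked from several places in the code), not left None.
import hashlib
import sys

EXPECTED_SHA256 = None   # placeholder; set at build/release time

def current_digest():
    with open(sys.argv[0], "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_integrity():
    digest = current_digest()
    if EXPECTED_SHA256 is not None and digest != EXPECTED_SHA256:
        raise SystemExit("integrity check failed - possible tampering")
    return digest

if __name__ == "__main__":
    print("code digest:", verify_integrity())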
[Design of Smart Care Tele-Monitoring System for Mother and Fetus].
Xi, Haiyan; Gan, Guanghui; Zhang, Huilian; Chen, Chaomin
2015-03-01
This work studies and designs a maternal and fetal monitoring system based on cloud computing and the Internet of Things, which can monitor and take smart care of mother and fetus around the clock. A new kind of wireless fetal monitoring detector and a mobile phone are used, so the doctor can keep in touch with the hospital through the Internet. The mobile terminal was developed on the Android system; it accepts fetal heart rate and uterine contraction data transmitted from the wireless detector, exchanges information with the server, and displays the monitoring data and the doctor's advice in real time. The mobile phone displays the fetal heart rate trace and uterine contraction trace in real time and records the fetus's growth process. The system implements real-time communication between the doctor and the user through wireless communication technology. It removes the constraint of the traditional telephone cable, and users can obtain remote monitoring from medical institutions at home or in the nearest community at any time, providing a health and safety guarantee for mother and fetus.
E-Control: First Public Release of Remote Control Software for VLBI Telescopes
NASA Technical Reports Server (NTRS)
Neidhardt, Alexander; Ettl, Martin; Rottmann, Helge; Ploetz, Christian; Muehlbauer, Matthias; Hase, Hayo; Alef, Walter; Sobarzo, Sergio; Herrera, Cristian; Himwich, Ed
2010-01-01
Automating and remotely controlling observations are important for future operations in a Global Geodetic Observing System (GGOS). At the Geodetic Observatory Wettzell, in cooperation with the Max-Planck-Institute for Radio Astronomy in Bonn, a software extension to the existing NASA Field System has been developed for remote control. It uses the principle of a remotely accessible, autonomous process cell as a server extension for the Field System. The communication is realized for low transfer rates using Remote Procedure Calls (RPC). It uses generative programming with the interface software generator idl2rpc.pl developed at Wettzell. The user interacts with this system over a modern graphical user interface created with wxWidgets. For security reasons the communication is automatically tunneled through a Secure Shell (SSH) session to the telescope. There are already successful test observations with the telescopes at O'Higgins, Concepcion, and Wettzell. At Wettzell the software is already used routinely for weekend observations. Therefore the first public release of the software is now available, which will also be useful for other telescopes.
Lee, Tian-Fu
2014-12-01
Telecare medicine information systems provide a communication platform for accessing remote medical resources through public networks, and help health care workers and medical personnel to rapidly make correct clinical decisions and treatments. An authentication scheme for data exchange in telecare medicine information systems enables legal users in hospitals and medical institutes to establish a secure channel and exchange electronic medical records or electronic health records securely and efficiently. This investigation develops an efficient and secure verifier-based three-party authentication scheme using extended chaotic maps for data exchange in telecare medicine information systems. The proposed scheme does not require the server's public keys and avoids the time-consuming modular exponentiations and elliptic-curve scalar multiplications used in previous related approaches. Additionally, the proposed scheme is proven secure in the random oracle model, and realizes the lower bounds on messages and rounds in communications. Compared to related verifier-based approaches, the proposed scheme not only possesses higher security, but also has lower computational cost and fewer transmissions. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
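For context, the "extended chaotic maps" used in such schemes are usually Chebyshev polynomials, whose semigroup property is what makes a Diffie-Hellman-style key agreement possible; the extended variant enlarges the domain from [-1, 1] to the whole real line. This is generic background, not the paper's exact construction or notation.

% Chebyshev polynomial recurrence and the semigroup property exploited by
% chaotic-map key agreement (generic background, not the paper's notation).
\[
  T_0(x) = 1,\qquad T_1(x) = x,\qquad
  T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x),
\]
\[
  T_r\bigl(T_s(x)\bigr) = T_{rs}(x) = T_s\bigl(T_r(x)\bigr),
  \qquad x \in [-1,1].
\]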
Meng, Philipp; Fehre, Karsten; Rappelsberger, Andrea; Adlassnig, Klaus-Peter
2014-01-01
Antibiotic resistance is a heterogeneous phenomenon. It does not only differ between countries or states, but also between wards of hospitals, where different resistance patterns have been found. To support clinicians in administering empiric antibiotic therapy, we developed software to present information about antibiotic resistance using a mobile concept. A pre-existing infrastructure was deployed as the server component. The system analyzes and aggregates data from laboratory information systems, generating statistical data on antibiotic resistance. The information is presented to the Android client using a Representational State Transfer (REST) interface. Geographical localization is performed using near field communication (NFC) tags. The prototype provides tabulated data concerning antibiotic resistance patterns in the wards of a hospital. Using Android, NFC, and data caching, the usability of the system is estimated to be high. We hypothesize that antibiotic stewardship in hospitals can be supported by this software, thus improving medical monitoring of antibiotic resistance. Future studies in a production environment are needed to measure the impact of the system on the outcome of patient care.
Openlobby: an open game server for lobby and matchmaking
NASA Astrophysics Data System (ADS)
Zamzami, E. M.; Tarigan, J. T.; Jaya, I.; Hardi, S. M.
2018-03-01
Online multiplayer is one of the most essential features in modern games. However, while a multiplayer feature can be implemented with simple network programming, creating a balanced multiplayer session requires additional player-management components such as a game lobby and a matchmaking system. Our objective is to develop OpenLobby, a server that other developers can use to support their multiplayer applications. The proposed system acts as a lobby and matchmaker where queueing players are matched to other players according to criteria defined by the developer. The solution provides an application programming interface that developers can use to interact with the server. For testing purposes, we developed a game that uses the server as its multiplayer server.
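A toy version of the matchmaking step described above is sketched below. The queueing criterion here is a simple rating window; OpenLobby's actual criteria and API are defined by the developer and are not reproduced here.

# Toy matchmaker: pair queued players whose ratings differ by at most a window.
# The criterion and data layout are invented for illustration.
from collections import deque

MAX_RATING_GAP = 100

def match(queue):
    """queue: deque of (player_name, rating). Returns a list of matched pairs."""
    pairs, waiting = [], deque()
    while queue:
        name, rating = queue.popleft()
        for i, (other, other_rating) in enumerate(waiting):
            if abs(rating - other_rating) <= MAX_RATING_GAP:
                pairs.append((name, other))
                del waiting[i]
                break
        else:
            waiting.append((name, rating))
    queue.extend(waiting)                 # unmatched players keep waiting
    return pairs

lobby = deque([("alice", 1500), ("bob", 1580), ("carol", 1200), ("dave", 1170)])
print(match(lobby))                        # -> [('bob', 'alice'), ('dave', 'carol')]
print(list(lobby))                         # players still waiting, if any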
MX: A beamline control system toolkit
NASA Astrophysics Data System (ADS)
Lavender, William M.
2000-06-01
The development of experimental and beamline control systems for two Collaborative Access Teams at the Advanced Photon Source has resulted in the creation of a portable data acquisition and control toolkit called MX. MX consists of a set of servers, application programs and libraries that enable the creation of command line and graphical user interface applications that may be easily retargeted to new and different kinds of motor and device controllers. The source code for MX is written in ANSI C and Tcl/Tk with interprocess communication via TCP/IP. MX is available for several versions of Unix, Windows 95/98/NT and DOS. It may be downloaded from the web site http://www.imca.aps.anl.gov/mx/.
Real-Time Speaker Detection for User-Device Binding
2010-12-01
The roll-out of commercial wireless networks continues to rise worldwide...in a secured facility. It could also be connected to the call server via a Virtual Private Network (VPN) or public lines if security is not a top...communications network [25]. Yet, James Arden Barnett, Jr., Chief of the Public Safety and Homeland Security Bureau, argues that emergency communications
NASA Astrophysics Data System (ADS)
Mehring, James W.; Thomas, Scott D.
1995-11-01
The Data Services Segment of the Defense Mapping Agency's Digital Production System provides a digital archive of imagery source data for use by DMA's cartographic users. This system was developed in the mid-1980s and is currently undergoing modernization. This paper addresses the modernization of the imagery buffer function that was performed by custom hardware in the baseline system and is being replaced by a RAID Server based on commercial off the shelf (COTS) hardware. The paper briefly describes the baseline DMA image system and the modernization program that is currently under way. Throughput benchmark measurements were made to support design configuration decisions for a commercial off the shelf (COTS) RAID Server to perform as the system image buffer. The test program began with performance measurements of the RAID read and write operations between the RAID arrays and the server CPU for RAID levels 0, 5 and 0+1. Interface throughput measurements were made for the HiPPI interface between the RAID Server and the image archive and processing system, as well as the client side interface between a custom interface board, which provides the interface between the internal bus of the RAID Server and the Input-Output Processor (IOP) external wideband network currently in place in the DMA system to service client workstations. End-to-end measurements were taken from the HiPPI interface through the RAID write and read operations to the IOP output interface.
Mathematical defense method of networked servers with controlled remote backups
NASA Astrophysics Data System (ADS)
Kim, Song-Kyoo
2006-05-01
The networked server defense model focuses on reliability and availability from a security perspective. The (remote) backup servers are hooked up by VPN (Virtual Private Network) over a high-speed optical network and replace broken main servers immediately. The networked servers can be represented as "machines", so the system deals with an unreliable main machine, a spare machine, and auxiliary spare machines. During vacation periods, when the system performs mandatory routine maintenance, auxiliary machines are used for back-ups, and information on the system is naturally delayed. An analog of the N-policy restricts the usage of auxiliary machines to some reasonable quantity. The results are demonstrated on the network architecture by using stochastic optimization techniques.
Scaling NS-3 DCE Experiments on Multi-Core Servers
2016-06-15
that work well together. 3.2 Simulation Server Details We ran the simulations on a Dell® PowerEdge M520 blade server[8] running Ubuntu Linux 14.04...To minimize the amount of time needed to complete all of the simulations, we planned to run multiple simulations at the same time on a blade server...MacBook was running the simulation inside a virtual machine (Ubuntu 14.04), while the blade server was running the same operating system directly on
OPeNDAP Server4: Building a High-Performance Server for the DAP by Leveraging Existing Software
NASA Astrophysics Data System (ADS)
Potter, N.; West, P.; Gallagher, J.; Garcia, J.; Fox, P.
2006-12-01
OPeNDAP has been working in conjunction with NCAR/ESSL/HAO to develop a modular, high performance data server that will be the successor to the current OPeNDAP data server. The new server, called Server4, is really two servers: a 'Back-End' data server which reads information from various types of data sources and packages the results in DAP objects; and a 'Front-End' which receives client DAP requests and then decides how to use features of the Back-End data server to build the correct responses. This architecture can be configured in several interesting ways: the Front- and Back-End components can be run on either the same or different machines, depending on security and performance needs; new Front-End software can be written to support other network data access protocols; and local applications can interact directly with the Back-End data server. This new server's Back-End component uses the server infrastructure developed by HAO for the Earth System Grid II project. The extensions needed to use it as part of the new OPeNDAP server were minimal. The HAO server was modified so that it loads 'data handlers' at run-time. Each data handler module only needs to satisfy a simple interface, which both enables the existing data handlers written for the old OPeNDAP server to be used directly and simplifies writing new handlers from scratch. The Back-End server leverages high-performance features developed for the ESG II project, so applications that interact with it directly can read large volumes of data efficiently. The Front-End module of Server4 uses the Java Servlet system in place of the Common Gateway Interface (CGI) used in the past. New front-end modules can be written to support different network data access protocols, so that the same server will ultimately be able to support more than the DAP/2.0 protocol. As an example, we will discuss a SOAP interface that is currently in development. In addition to support for DAP/2.0 and prototypical support for a SOAP interface, the new server includes support for the THREDDS cataloging protocol. THREDDS is tightly integrated into the Front-End of Server4. The Server4 Front-End can make full use of advanced THREDDS features such as attribute specification and inheritance, and custom catalogs which segue into automatically generated catalogs, as well as providing a default behavior which requires almost no catalog configuration.
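For readers unfamiliar with DAP/2.0, the request model the Front-End serves is URL-based: appending .dds, .das, or .dods to a dataset URL returns its structure, attributes, or subsetted binary data, optionally with a constraint expression. A hedged sketch follows; the dataset URL and variable name are placeholders, not an actual Server4 deployment.

# Fetch the structure (DDS) of a hypothetical dataset from a DAP/2.0 server
# and request a subset of one variable. URL and variable names are placeholders.
import urllib.request

DATASET = "http://example.org/opendap/sst.nc"          # placeholder dataset URL

def fetch(url):
    with urllib.request.urlopen(url) as resp:
        return resp.read()

print(fetch(DATASET + ".dds").decode())                         # dataset structure
subset = fetch(DATASET + ".dods?sst[0:1:0][0:1:10][0:1:10]")    # constrained binary response
print(len(subset), "bytes of DAP-encoded data")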
NASA Astrophysics Data System (ADS)
Jain, Madhu; Meena, Rakesh Kumar
2018-03-01
A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is done on the basis of a bi-level threshold policy for the activation of the servers. A server returns to render repair service when a pre-specified workload of failed machines has built up: the first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, throughput, etc., are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred on the system. A hybrid soft computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS). The numerical results obtained by the Runge-Kutta approach are validated against computational results generated by ANFIS.
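The transient analysis mentioned above amounts to integrating the Kolmogorov forward equations dp/dt = pQ with a Runge-Kutta scheme. A generic fourth-order sketch for a small generator matrix is given below; the matrix is a toy placeholder, not the paper's two-server machining model.

# Classical RK4 integration of dp/dt = p*Q for a small continuous-time
# Markov chain. The generator matrix Q here is a toy placeholder, not the
# two-server machining model of the paper.
def rk4_step(p, Q, h):
    def f(state):                       # right-hand side: state * Q
        n = len(state)
        return [sum(state[i] * Q[i][j] for i in range(n)) for j in range(n)]
    k1 = f(p)
    k2 = f([p[i] + 0.5 * h * k1[i] for i in range(len(p))])
    k3 = f([p[i] + 0.5 * h * k2[i] for i in range(len(p))])
    k4 = f([p[i] + h * k3[i] for i in range(len(p))])
    return [p[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(len(p))]

# Toy 3-state generator (rows sum to zero): 0, 1 or 2 machines down.
Q = [[-0.6, 0.6, 0.0],
     [0.8, -1.4, 0.6],
     [0.0, 0.8, -0.8]]
p = [1.0, 0.0, 0.0]                     # all machines up at t = 0
for _ in range(1000):                   # integrate to t = 10 with h = 0.01
    p = rk4_step(p, Q, 0.01)
print([round(x, 4) for x in p])         # approximate state probabilities at t = 10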
Mandellos, George J; Koutelakis, George V; Panagiotakopoulos, Theodor C; Koukias, Andreas M; Koukias, Mixalis N; Lymberopoulos, Dimitrios K
2008-01-01
Early and specialized pre-hospital patient treatment improves outcomes in terms of mortality and morbidity in emergency cases. This paper focuses on the design and implementation of a telemedicine system that supports diverse types of endpoints, including moving transports (MT) (ambulances, ships, planes, etc.), handheld devices and fixed units, using diverse communication networks. The target of this telemedicine system is pre-hospital patient treatment. While vital-sign transmission takes priority over the other services provided by the telemedicine system (videoconference, remote management, voice calls, etc.), a predefined algorithm controls the provision and quality of those services. A distributed database system controlled by a central server aims to manage patient attributes, exams and incidents handled by different Telemedicine Coordination Centers (TCC).
Sung, Wen-Tsai; Lin, Jia-Syun
2013-01-01
This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of the system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human-computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is performed through a self-adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by commands given on the human-computer interface, and the reading on the multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled, and a self-learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.
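Self-adaptive weighted fusion of the kind referenced above is commonly implemented by weighting each sensor inversely to its estimated variance, so that the fused value has minimal variance. A compact sketch with made-up readings (the paper's exact algorithm and numbers are not reproduced):

# Inverse-variance weighted fusion of redundant sensor readings: sensors with
# lower estimated variance get higher weight. Readings below are invented.
def fuse(readings, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    weights = [w / total for w in weights]           # weights sum to 1
    fused = sum(w * r for w, r in zip(weights, readings))
    fused_var = 1.0 / sum(1.0 / v for v in variances)
    return fused, fused_var, weights

readings  = [231.2, 229.8, 230.4]      # e.g. three power readings (W)
variances = [0.9, 0.4, 0.2]            # estimated per-sensor variances
value, var, w = fuse(readings, variances)
print(f"fused = {value:.2f} W, variance = {var:.3f}, weights = {[round(x, 2) for x in w]}")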
Ngarambe, Donart; Pan, Yun-he; Chen, De-ren
2003-01-01
There have been numerous recent attempts to promote technology-based education (Shrestha, 1997) in the poorer third-world countries, but so far none has provided a sustainable solution: they are either centered and controlled from abroad, relying solely on foreign donors for their sustenance, or they are not web-based, which makes distribution problematic, and some are not affordable by most of the local population in these places. In this paper we discuss an application we are developing, the Local College Learning Management System (LoColms), which is both sustainable and economical enough to suit the situation in these countries. The application is a web-based system, and aims at improving the traditional form of education by empowering the local universities. Its economy comes from the fact that it is supported by traditional communication technology, the public switched telephone network (PSTN), which eliminates the need for the packet-switched or dedicated private virtual networks (PVN) usually required in similar situations. At a later stage, we shall incorporate ontology and paging tools to improve resource sharing and storage optimization in the Proxy Caches (ProCa) and LoColms servers. The system is based on the client/server paradigm, and its infrastructure consists of the PSTN and ProCa, with the learning centers accessing the universities by means of the point-to-point protocol (PPP).
Bringing Control System User Interfaces to the Web
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xihui; Kasemir, Kay
With the evolution of web based technologies, especially HTML5 [1], it becomes possible to create web-based control system user interfaces (UI) that are cross-browser and cross-device compatible. This article describes two technologies that facilitate this goal. The first one is the WebOPI [2], which can seamlessly display CSS BOY [3] Operator Interfaces (OPI) in web browsers without modification to the original OPI file. The WebOPI leverages the powerful graphical editing capabilities of BOY and provides the convenience of re-using existing OPI files. On the other hand, it uses generic JavaScript and a generic communication mechanism between the web browser and web server. It is not optimized for a control system, which results in unnecessary network traffic and resource usage. Our second technology is the WebSocket-based Process Data Access (WebPDA) [4]. It is a protocol that provides efficient control system data communication using WebSocket [5], so that users can create web-based control system UIs using standard web page technologies such as HTML, CSS and JavaScript. WebPDA is control system independent, potentially supporting any type of control system.
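A client-side sketch of the subscribe-and-listen pattern such a WebSocket data protocol implies is shown below, using the third-party websockets package. The server URL, process-variable name and message format are placeholders and do not reflect the actual WebPDA protocol.

# Minimal WebSocket subscriber, illustrating the push-style data access a
# WebPDA-like protocol provides. Requires the third-party "websockets"
# package; the URL and message format are placeholders, not the real protocol.
import asyncio
import json
import websockets

async def watch_pv(url, pv_name):
    async with websockets.connect(url) as ws:
        await ws.send(json.dumps({"type": "subscribe", "pv": pv_name}))
        while True:
            update = json.loads(await ws.recv())   # server pushes value changes
            print(pv_name, "=", update.get("value"))

if __name__ == "__main__":
    asyncio.run(watch_pv("ws://example.org/webpda", "BL7:Motor1.RBV"))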
COM1/348: Design and Implementation of a Portal for the Market of the Medical Equipment (MEDICOM)
Palamas, S; Vlachos, I; Panou-Diamandi, O; Marinos, G; Kalivas, D; Zeelenberg, C; Nimwegen, C; Koutsouris, D
1999-01-01
Introduction The MEDICOM system provides the electronic means for medical equipment manufacturers to communicate online with their customers supporting the Purchasing Process and the Post Market Surveillance. The MEDICOM service will be provided over the Internet by the MEDICOM Portal, and by a set of distributed subsystems dedicated to handling structured information related to medical devices. There are three kinds of these subsystems: the Hypermedia Medical Catalogue (HMC), the Virtual Medical Exhibition (VME), which contains information in the form of Virtual Models, and the Post Market Surveillance system (PMS). The Universal Medical Devices Nomenclature System (UMDNS) is used to register all products. This work was partially funded by the ESPRIT Project 25289 (MEDICOM). Methods The Portal provides the end user interface operating as the MEDICOM Portal, acts as the yellow pages for finding both products and providers, providing links to the providers' servers, implements the system management and supports the subsystem database compatibility. The Portal hosts a database system composed of two parts: (a) the Common Database, which describes a set of encoded parameters (like Supported Languages, Geographic Regions, UMDNS Codes, etc.) common to all subsystems and (b) the Short Description Database, which contains summarised descriptions of medical devices, including a text description, the codes of the manufacturer, UMDNS code, attribute values and links to the corresponding HTML pages of the HMC, VME and PMS servers. The Portal provides the MEDICOM user interface including services like end user profiling and registration, end user query forms, creation and hosting of newsgroups, links to online libraries, end user subscription to manufacturers' mailing lists, online information for the MEDICOM system and special messages or advertisements from manufacturers. Results Platform independence and interoperability characterise the system design. A general purpose RDBMS is used for the implementation of the databases. The end user interface is implemented using HTML and Java applets, while the subsystem administration applications are developed using Java. The JDBC interface is used in order to provide database access to these applications. The communication between subsystems is implemented using CORBA objects, and Java servlets are used in subsystem servers for the activation of remote operations. Discussion In the second half of 1999, the MEDICOM Project will enter the phase of evaluation and pilot operation. The benefits of the MEDICOM system are expected to be the establishment of a worldwide accessible marketplace between providers and health care professionals. The latter will achieve the provision of up-to-date and high-quality product information in an easy and friendly way, and the enhancement of the marketing procedures and after-sales support efficiency.
Integrated Environment for Ubiquitous Healthcare and Mobile IPv6 Networks
NASA Astrophysics Data System (ADS)
Cagalaban, Giovanni; Kim, Seoksoo
The development of Internet technologies based on the IPv6 protocol will allow real-time monitoring of people with health deficiencies and improve the independence of elderly people. This paper proposes a ubiquitous healthcare system for personalized healthcare services with the support of mobile IPv6 networks. Specifically, this paper discusses the integration of ubiquitous healthcare and wireless networks and its functional requirements. This allows an integrated environment where heterogeneous devices such as mobile devices and body sensors can continuously monitor patient status and communicate remotely with healthcare servers, physicians, and family members to effectively deliver healthcare services.
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-Server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732
Server-Based and Server-Less BYOD Solutions to Support Electronic Learning
2016-06-01
...mobile devices, institute mobile device policies and standards, and promote the development and use of DOD mobile and web-enabled applications" (DOD...with an isolated BYOD web server, properly educated system administrators must carry out and execute the necessary, pre-defined network security
Client/server approach to image capturing
NASA Astrophysics Data System (ADS)
Tuijn, Chris; Stokes, Earle
1998-01-01
The diversity of the digital image capturing devices on the market today is quite astonishing and ranges from low-cost CCD scanners to digital cameras (for both action and stand-still scenes), mid-end CCD scanners for desktop publishing and pre-press applications, and high-end CCD flatbed scanners and drum scanners with photomultiplier technology. Each device and market segment has its own specific needs, which explains the diversity of the associated scanner applications. What all those applications have in common is the need to communicate with a particular device to import the digital images; after the import, additional image processing might be needed as well as color management operations. Although the specific requirements for all of these applications might differ considerably, a number of image capturing and color management facilities as well as other services are needed which can be shared. In this paper, we propose a client/server architecture for scanning and image editing applications which can be used as a common component for all these applications. One of the principal components of the scan server is the input capturing module. The specification of the input jobs is based on a generic input device model. Through this model we abstract away the specific scanner parameters and define the scan job definitions by a number of absolute parameters. As a result, scan job definitions will be less dependent on a particular scanner and have a more universal meaning. In this context, we also elaborate on the interaction of the generic parameters and the color characterization (i.e., the ICC profile). Other topics that are covered are the scheduling and parallel processing capabilities of the server, the image processing facilities, the interaction with the ICC engine, the communication facilities (both in-memory and over the network) and the different client architectures (stand-alone applications, TWAIN servers, plug-ins, OLE or Apple-event driven applications). This paper is structured as follows. In the introduction, we further motivate the need for a scan server-based architecture. In the second section, we give a brief architectural overview of the scan server and the other components it is connected to. The third section presents the generic model for input devices as well as the image processing model; the fourth section describes the different shapes the scanning applications (or modules) can have. In the last section, we briefly summarize the presented material and point out trends for future development.
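The generic input device model idea, expressing a scan job in absolute, device-independent parameters, can be sketched with a small data structure. The field names below are invented; the paper does not enumerate its actual parameters.

# Illustrative device-independent scan job description. Field names are
# invented; a scanner driver would map these absolute parameters onto its
# own device-specific settings.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ScanJob:
    origin_mm: Tuple[float, float]    # (x, y) of the scan window origin, in millimetres
    size_mm: Tuple[float, float]      # (width, height) of the scan window, in millimetres
    resolution_dpi: int               # requested optical resolution
    bits_per_channel: int             # 8, 12, 16 ...
    color_mode: str                   # "gray", "rgb", ...
    icc_profile: Optional[str]        # characterization profile to attach, if any

job = ScanJob(origin_mm=(10.0, 15.0), size_mm=(100.0, 150.0),
              resolution_dpi=600, bits_per_channel=16,
              color_mode="rgb", icc_profile="scanner_daylight.icc")
print(job)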
The University of Michigan's Computer-Aided Engineering Network.
ERIC Educational Resources Information Center
Atkins, D. E.; Olsen, Leslie A.
1986-01-01
Presents an overview of the Computer-Aided Engineering Network (CAEN) of the University of Michigan. Describes its arrangement of workstations, communication networks, and servers. Outlines the factors considered in hardware and software decision making. Reviews the program's impact on students. (ML)