Sample records for distributed application management

  1. Tools for distributed application management

    NASA Technical Reports Server (NTRS)

    Marzullo, Keith; Cooper, Robert; Wood, Mark; Birman, Kenneth P.

    1990-01-01

    Distributed application management consists of monitoring and controlling an application as it executes in a distributed environment. It encompasses such activities as configuration, initialization, performance monitoring, resource scheduling, and failure response. The Meta system (a collection of tools for constructing distributed application management software) is described. Meta provides the mechanism, while the programmer specifies the policy for application management. The policy is manifested as a control program which is a soft real-time reactive program. The underlying application is instrumented with a variety of built-in and user-defined sensors and actuators. These define the interface between the control program and the application. The control program also has access to a database describing the structure of the application and the characteristics of its environment. Some of the more difficult problems for application management occur when preexisting, nondistributed programs are integrated into a distributed application for which they may not have been intended. Meta allows management functions to be retrofitted to such programs with a minimum of effort.
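
    To make the mechanism/policy split concrete, here is a minimal Python sketch of an instrumented application in the spirit the abstract describes; it is not Meta code, and the sensor, actuator, and polling-loop names are all hypothetical.

    ```python
    import time

    # Hypothetical instrumented application: sensors expose readings,
    # actuators expose control actions. The control program encodes policy.
    sensors = {}    # name -> zero-argument callable returning a reading
    actuators = {}  # name -> one-argument callable applying a control action

    def sensor(name):
        def register(fn):
            sensors[name] = fn
            return fn
        return register

    def actuator(name):
        def register(fn):
            actuators[name] = fn
            return fn
        return register

    @sensor("queue_length")
    def queue_length():
        # In a real application this would probe the running process.
        return 42

    @actuator("spawn_worker")
    def spawn_worker(count):
        print(f"spawning {count} extra worker(s)")

    def control_program(poll_interval=1.0, cycles=3):
        """Soft real-time reactive loop: the policy lives here, the mechanism above."""
        for _ in range(cycles):
            if sensors["queue_length"]() > 10:
                actuators["spawn_worker"](1)
            time.sleep(poll_interval)

    if __name__ == "__main__":
        control_program()
    ```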

  2. Developing Use Cases for Evaluation of ADMS Applications to Accelerate Technology Adoption: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Veda, Santosh; Wu, Hongyu; Martin, Maurice

    Grid modernization for distribution systems comprises the ability to effectively monitor and manage unplanned events while ensuring reliable operations. Integration of Distributed Energy Resources (DERs) and the proliferation of autonomous smart controllers such as microgrids and smart inverters in distribution networks challenge the status quo of distribution system operations. Advanced Distribution Management System (ADMS) technologies are being increasingly deployed to manage the complexities of operating distribution systems. The ability to evaluate ADMS applications in specific utility environments and for future scenarios will accelerate wider adoption of the ADMS and will lower the risks and costs of its implementation. This paper addresses the first step: identifying and defining the use cases for evaluating these applications. The applications selected for this discussion include Volt-VAr Optimization (VVO), Fault Location Isolation and Service Restoration (FLISR), Online Power Flow (OLPF)/Distribution System State Estimation (DSSE), and Market Participation. A technical description and general operational requirements for each of these applications are presented. The test scenarios most relevant to the utility challenges are also addressed.

  3. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for these subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.
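
    As an illustration of the kind of higher-level service described above, the sketch below picks a data-transfer route from monitored link loads. It is a toy stand-in, not MonALISA code; the site names and load figures are invented.

    ```python
    # Hypothetical monitoring snapshot: link -> measured load (0.0-1.0).
    # A MonALISA-style higher-level service would receive such values from
    # distributed monitoring agents; here they are hard-coded for brevity.
    link_load = {
        ("CERN", "FNAL"): 0.82,
        ("CERN", "DESY"): 0.35,
        ("DESY", "FNAL"): 0.20,
    }

    def best_route(src, dst):
        """Pick the direct or one-hop route with the lowest peak link load."""
        candidates = []
        if (src, dst) in link_load:
            candidates.append(([src, dst], link_load[(src, dst)]))
        for (a, b), load in link_load.items():
            if a == src and (b, dst) in link_load:
                candidates.append(([src, b, dst], max(load, link_load[(b, dst)])))
        return min(candidates, key=lambda c: c[1])

    route, load = best_route("CERN", "FNAL")
    print(" -> ".join(route), f"(peak load {load:.2f})")
    ```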

  4. DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, Annabelle; Veda, Santosh; Maitra, Arindam

    Efficient and effective management of the electrical distribution system requires an integrated system approach for Distribution Management Systems (DMS), Distributed Energy Resources (DERs), Distributed Energy Resources Management Systems (DERMS), and microgrids to work in harmony. This paper highlights some of the outcomes from a U.S. Department of Energy (DOE), Office of Electricity (OE) project, including 1) the architecture of these integrated systems, and 2) expanded functions of two example DMS applications, Volt-VAr Optimization (VVO) and Fault Location, Isolation and Service Restoration (FLISR), to accommodate DERs. For these two example applications, the relevant DER Group Functions necessary to support communication between the DMS and a Microgrid Controller (MC) in grid-tied mode are identified.

  5. Research and Design of the Three-tier Distributed Network Management System Based on COM/COM+ and DNA

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Bi, Yushen

    Considering the distributed network management system's demands for distribution, extensibility, and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, adopting software component technology and the N-tier application software framework design idea. We also give a concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.

  6. Towards a Cloud Based Smart Traffic Management Framework

    NASA Astrophysics Data System (ADS)

    Rahimi, M. M.; Hakimpour, F.

    2017-09-01

    Traffic big data has brought many opportunities for traffic management applications. However, several challenges, such as the heterogeneity, storage, management, processing, and analysis of traffic big data, may hinder its efficient and real-time application. All these challenges call for a well-adapted distributed framework for smart traffic management that can efficiently handle big traffic data integration, indexing, query processing, mining, and analysis. In this paper, we present a novel, distributed, scalable, and efficient framework for traffic management applications. The proposed cloud-computing-based framework can answer the technical challenges of efficient and real-time storage, management, processing, and analysis of traffic big data. For evaluation of the framework, we used OpenStreetMap (OSM) real trajectories and road networks on a distributed environment. Our evaluation results indicate that the speed of importing data into this framework exceeds 8000 records per second when the dataset size is near 5 million records. We also evaluated the performance of data retrieval in our proposed framework; the retrieval speed exceeds 15000 records per second at the same dataset size. We have also evaluated the scalability and performance of the framework using parallelisation of a critical pre-analysis step in transportation applications. The results show that the proposed framework achieves considerable performance and efficiency in traffic management applications.
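
    The quoted figures are throughput rates, i.e. records processed divided by wall-clock time. A generic timing harness such as the following sketch (a hypothetical in-memory store, not the paper's framework) shows how such numbers are typically measured.

    ```python
    import time

    def measure_rate(operation, records):
        """Return records-per-second throughput of `operation` over `records`."""
        start = time.perf_counter()
        for record in records:
            operation(record)
        elapsed = time.perf_counter() - start
        return len(records) / elapsed

    store = []  # stand-in for the framework's distributed store
    ingest_rate = measure_rate(store.append, range(1_000_000))
    print(f"ingest: {ingest_rate:,.0f} records/s")
    ```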

  7. A hierarchical distributed control model for coordinating intelligent systems

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1991-01-01

    A hierarchical distributed control (HDC) model for coordinating cooperative problem-solving among intelligent systems is described. The model was implemented using SOCIAL, an innovative object-oriented tool for integrating heterogeneous, distributed software systems. SOCIAL embeds applications in 'wrapper' objects called Agents, which supply predefined capabilities for distributed communication, control, data specification, and translation. The HDC model is realized in SOCIAL as a 'Manager' Agent that coordinates interactions among application Agents. The HDC Manager indexes the capabilities of application Agents, routes request messages to suitable server Agents, and stores results in a commonly accessible 'Bulletin-Board'. This centralized control model is illustrated in a fault diagnosis application for launch operations support of the Space Shuttle fleet at NASA Kennedy Space Center.
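
    The Manager's three duties (capability indexing, request routing, and posting results to a bulletin board) can be caricatured in a few lines of Python; this is an illustrative sketch, not the SOCIAL API.

    ```python
    # Minimal sketch of the HDC coordination pattern described above.
    # Names are illustrative; SOCIAL's actual Agent interfaces are not reproduced.
    class ManagerAgent:
        def __init__(self):
            self.capability_index = {}  # capability -> server agent callable
            self.bulletin_board = {}    # request id -> result

        def register(self, capability, server):
            """Index what each application Agent can do."""
            self.capability_index[capability] = server

        def request(self, request_id, capability, payload):
            """Route a request to a suitable server Agent, post the result."""
            server = self.capability_index[capability]
            self.bulletin_board[request_id] = server(payload)

    manager = ManagerAgent()
    manager.register("diagnose_fault", lambda data: f"fault in {data['subsystem']}")
    manager.request("r1", "diagnose_fault", {"subsystem": "LOX valve"})
    print(manager.bulletin_board["r1"])
    ```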

  8. Information Interaction Study for DER and DMS Interoperability

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Lu, Yiming; Lv, Guangxian; Liu, Peng; Chen, Yu; Zhang, Xinhui

    The Common Information Model (CIM) is an abstract data model that can be used to represent the major objects in Distribution Management System (DMS) applications. Because the CIM does not model Distributed Energy Resources (DERs), it cannot meet the requirements of DER operation and management for advanced DMS applications. DER modeling was studied from a system point of view, and the article initially proposes a CIM-extended information model. By analyzing the basic structure of message interaction between the DMS and DERs, a bidirectional message-mapping method based on data exchange is proposed.

  9. Automatic Management of Parallel and Distributed System Resources

    NASA Technical Reports Server (NTRS)

    Yan, Jerry; Ngai, Tin Fook; Lundstrom, Stephen F.

    1990-01-01

    Viewgraphs on automatic management of parallel and distributed system resources are presented. Topics covered include: parallel applications; intelligent management of multiprocessing systems; performance evaluation of parallel architecture; dynamic concurrent programs; compiler-directed system approach; lattice gaseous cellular automata; and sparse matrix Cholesky factorization.

  10. Wireless remote control clinical image workflow: utilizing a PDA for offsite distribution

    NASA Astrophysics Data System (ADS)

    Liu, Brent J.; Documet, Luis; Documet, Jorge; Huang, H. K.; Muldoon, Jean

    2004-04-01

    Last year we presented at RSNA an application to perform wireless remote control of PACS image distribution utilizing a handheld device such as a Personal Digital Assistant (PDA). This paper describes the clinical experiences, including workflow scenarios, of implementing the PDA application to route exams from the clinical PACS archive server to various locations for offsite distribution of clinical PACS exams. By utilizing this remote control application, radiologists can manage image workflow distribution with a single wireless handheld device without impacting their clinical workflow on diagnostic PACS workstations. A PDA application was designed and developed to perform DICOM Query and C-Move requests by a physician from a clinical PACS archive to a CD-burning device for automatic burning of PACS data for distribution offsite. In addition, it was also used for convenient routing of historical PACS exams to the local web server, local workstations, and teleradiology systems. The application was evaluated by radiologists as well as other clinical staff who need to distribute PACS exams to offsite referring physicians' offices and offsite radiologists. An application for image workflow management utilizing wireless technology was implemented in a clinical environment and evaluated. A PDA application was successfully utilized to perform DICOM Query and C-Move requests from the clinical PACS archive to various offsite exam distribution devices. Clinical staff can utilize the PDA to manage image workflow and PACS exam distribution conveniently for offsite consultations by referring physicians and radiologists. This solution allows radiologists to expand their effectiveness in health care delivery both within the radiology department and offsite by improving their clinical workflow.
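
    For readers unfamiliar with the DICOM services involved, the sketch below shows a C-MOVE request issued with the open-source pynetdicom library, asking an archive to push a study to another application entity. The host, port, and AE titles are hypothetical, and this is a generic illustration rather than the paper's implementation.

    ```python
    from pydicom.dataset import Dataset
    from pynetdicom import AE
    from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelMove

    ae = AE(ae_title="PDA_PROXY")
    ae.add_requested_context(StudyRootQueryRetrieveInformationModelMove)

    # Identifier dataset naming the study to retrieve (values are invented).
    ds = Dataset()
    ds.QueryRetrieveLevel = "STUDY"
    ds.PatientID = "12345"

    assoc = ae.associate("pacs.example.org", 104)
    if assoc.is_established:
        # C-MOVE: ask the archive to push the study to the CD-burner AE.
        responses = assoc.send_c_move(
            ds, "CDBURNER", StudyRootQueryRetrieveInformationModelMove
        )
        for status, identifier in responses:
            print(status)
        assoc.release()
    ```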

  11. Use of EPANET solver to manage water distribution in Smart City

    NASA Astrophysics Data System (ADS)

    Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.

    2018-02-01

    The paper presents a method of using the EPANET solver to support management of a water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application allows the user to perform both single and cyclic simulations with a specified step for changing the values of selected process variables. The architecture of the application is shown in the paper. The application supports the selection of the best device control algorithm using optimization methods. Optimization procedures are possible with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the Modified Powell Method. The article is supplemented by an example of using the developed computer tool.
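
    The optimization loop the abstract describes (wrap a hydraulic simulation in an objective function and hand it to a solver) can be sketched with SciPy's SLSQP, one of the three methods listed. The objective below is a smooth toy stand-in for an actual EPANET run, so the snippet stays self-contained.

    ```python
    from scipy.optimize import minimize

    def pressure_deficit(pump_speed):
        """Stand-in for an EPANET run: the real tool would write the speed
        into the network model, call the EPANET solver, and score the
        resulting pressures. A smooth toy function keeps the sketch runnable."""
        target, achieved = 50.0, 30.0 + 25.0 * pump_speed[0]
        return (target - achieved) ** 2

    # SLSQP, one of the three methods listed above, with bounds on pump speed.
    result = minimize(pressure_deficit, x0=[1.0], method="SLSQP", bounds=[(0.5, 1.5)])
    print(f"best pump speed setting: {result.x[0]:.3f}")
    ```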

  12. DMS Advanced Applications for Accommodating High Penetrations of DERs and Microgrids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pratt, Annabelle; Veda, Santosh; Maitra, Arindam

    Efficient and effective management of the electric distribution system requires an integrated approach to allow various systems to work in harmony, including distribution management systems (DMS), distributed energy resources (DERs), distributed energy resources management systems, and microgrids. This study highlights some outcomes from a recent project sponsored by the US Department of Energy, Office of Electricity Delivery and Energy Reliability, including information about (i) the architecture of these integrated systems and (ii) expanded functions of two example DMS applications to accommodate DERs: volt-var optimisation and fault location, isolation, and service restoration. In addition, the relevant DER group functions necessary to support communications between the DMS and a microgrid controller in grid-tied mode are identified.

  13. How applicable is uneven-age management in northern forest types?

    Treesearch

    Stanley M. Filip

    1977-01-01

    For the proper application and practice of uneven-age management, one must consider residual stocking, maximum tree-size objective, and diameter distribution. All three components are described, and it is shown how they fit into a practical package for application in a timber tract.

  14. Automation of the space station core module power management and distribution system

    NASA Technical Reports Server (NTRS)

    Weeks, David J.

    1988-01-01

    Under the Advanced Development Program for Space Station, Marshall Space Flight Center has been developing advanced automation applications for the Power Management and Distribution (PMAD) system inside the Space Station modules for the past three years. The Space Station Module Power Management and Distribution System (SSM/PMAD) test bed features three artificial intelligence (AI) systems coupled with conventional automation software functioning in an autonomous or closed-loop fashion. The AI systems in the test bed include a baseline scheduler/dynamic rescheduler (LES), a load shedding management system (LPLMS), and a fault recovery and management expert system (FRAMES). This test bed will be part of the NASA Systems Autonomy Demonstration for 1990 featuring cooperating expert systems in various Space Station subsystem test beds. It is concluded that advanced automation technology involving AI approaches is sufficiently mature to begin applying the technology to current and planned spacecraft applications including the Space Station.

  15. Quantum key distribution network for multiple applications

    NASA Astrophysics Data System (ADS)

    Tajima, A.; Kondoh, T.; Ochi, T.; Fujiwara, M.; Yoshino, K.; Iizuka, H.; Sakamoto, T.; Tomita, A.; Shimamura, E.; Asami, S.; Sasaki, M.

    2017-09-01

    The fundamental architecture and functions of secure key management in a quantum key distribution (QKD) network, with enhanced universal interfaces for smooth key sharing between any two nodes and support for multiple secure communication applications, are proposed. The proposed architecture consists of three layers: a quantum layer, a key management layer, and a key supply layer. We explain the functions of each layer, the key formats in each layer, and the key lifecycle for enabling a practical QKD network. A quantum key distribution-advanced encryption standard (QKD-AES) hybrid system and an encrypted smartphone system were developed as secure communication applications on our QKD network. The validity and usefulness of these systems were demonstrated on the Tokyo QKD Network testbed.
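
    A QKD-AES hybrid, as mentioned above, uses quantum-distributed key material with a conventional symmetric cipher. The sketch below illustrates the application side with the Python cryptography package; os.urandom stands in for a key obtained from the key supply layer, and the message content is invented.

    ```python
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # In the architecture above, the key supply layer would hand the
    # application a fresh QKD-derived key; os.urandom stands in for it here.
    qkd_key = os.urandom(32)   # 256-bit key from the key supply layer
    nonce = os.urandom(12)

    aead = AESGCM(qkd_key)
    ciphertext = aead.encrypt(nonce, b"telemetry frame", associated_data=None)
    plaintext = aead.decrypt(nonce, ciphertext, associated_data=None)
    assert plaintext == b"telemetry frame"
    ```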

  16. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    DOEpatents

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
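
    The claimed sequence (suspend, save network state, recreate connections, restart) is easiest to see as a checkpoint/restore pair. The following toy sketch mirrors that flow with plain dictionaries; it is illustrative only and bears no relation to the patent's actual virtualization machinery.

    ```python
    # Toy sketch of the claimed flow: suspend processes, save their network
    # state, then recreate connections and restart. All structures are
    # illustrative stand-ins for the virtualized OS environments in the patent.
    def checkpoint(processes, connections):
        for p in processes:
            p["state"] = "suspended"
        return {"processes": [dict(p) for p in processes],
                "network": list(connections)}

    def restore(snapshot):
        connections = list(snapshot["network"])  # recreate connections first
        processes = [dict(p, state="running") for p in snapshot["processes"]]
        return processes, connections

    procs = [{"pid": 101, "state": "running"}, {"pid": 102, "state": "running"}]
    conns = [(101, 102)]
    snap = checkpoint(procs, conns)
    restored_procs, restored_conns = restore(snap)
    print(restored_procs, restored_conns)
    ```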

  17. CompatPM: enabling energy efficient multimedia workloads for distributed mobile platforms

    NASA Astrophysics Data System (ADS)

    Nathuji, Ripal; O'Hara, Keith J.; Schwan, Karsten; Balch, Tucker

    2007-01-01

    The computation and communication abilities of modern platforms are enabling increasingly capable cooperative distributed mobile systems. An example is distributed multimedia processing of sensor data in robots deployed for search and rescue, where a system manager can exploit the application's cooperative nature to optimize the distribution of roles and tasks in order to successfully accomplish the mission. Because of limited battery capacities, a critical task a manager must perform is online energy management. While support for power management has become common for the components that populate mobile platforms, what is lacking is integration and explicit coordination across the different management actions performed in a variety of system layers. This paper develops an integration approach for distributed multimedia applications, where a global manager specifies both a power operating point and a workload for a node to execute. Surprisingly, when jointly considering power and QoS, experimental evaluations show that using a simple deadline-driven approach to assigning frequencies can be non-optimal. These trends are further affected by certain characteristics of underlying power management mechanisms, which in our research are identified as groupings that classify component power management as "compatible" (VFC) or "incompatible" (VFI) with voltage and frequency scaling. We build on these findings to develop CompatPM, a vertically integrated control strategy for power management in distributed mobile systems. Experimental evaluations of CompatPM indicate average energy improvements of 8% when platform resources are managed jointly rather than independently, demonstrating that previous attempts to maximize battery life by simply minimizing frequency are inappropriate from a platform-level perspective.
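
    The counterintuitive result that minimizing frequency does not maximize battery life follows from static power: dynamic power falls steeply with frequency, but leakage is paid for as long as the task runs. The sketch below works the arithmetic with invented coefficients; real platforms need measured values.

    ```python
    # Why the lowest frequency is not always the most energy-efficient choice:
    # dynamic power scales roughly with f^3 (P = C * V^2 * f, with V ~ f), but
    # static/leakage power accrues for the whole runtime. Toy constants only.
    WORK = 1e9          # cycles required by the multimedia task
    C = 1e-27           # effective switching coefficient (invented)
    P_STATIC = 0.4      # watts of leakage/base platform power (invented)

    def energy(freq_hz):
        runtime = WORK / freq_hz
        p_dynamic = C * freq_hz ** 3
        return (p_dynamic + P_STATIC) * runtime

    for f in (0.4e9, 0.8e9, 1.2e9):
        print(f"{f / 1e9:.1f} GHz -> {energy(f):.3f} J")
    # With these constants, 0.8 GHz beats 0.4 GHz: running slower costs more
    # total energy once static power is accounted for.
    ```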

  18. KeyWare: an open wireless distributed computing environment

    NASA Astrophysics Data System (ADS)

    Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir

    1995-12-01

    Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist for LAN-based applications. A wireless distributed computing environment (KeyWareTM) based on intelligent agents within a multiple-client multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput, and transmission costs. A unified network management paradigm for both wireless and wireline nodes facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services, supplemented by tool-sets matched to the supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.

  19. A resilient and secure software platform and architecture for distributed spacecraft

    NASA Astrophysics Data System (ADS)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, where information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in itself. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objective of this layer.

  1. 77 FR 24984 - Importer of Controlled Substances; Notice of Application; Clinical Supplies Management, Inc.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-26

    ... Application; Clinical Supplies Management, Inc. Pursuant to 21 U.S.C. 958(i), the Attorney General shall... on November 13, 2011, Clinical Supplies Management, Inc., 342 42nd Street South, Fargo, North Dakota... distributing to customers which are qualified clinical sites conducting clinical trials under the auspices of...

  2. Distributed network management in the flat structured mobile communities

    NASA Astrophysics Data System (ADS)

    Balandina, Elena

    2005-10-01

    Delivering proper management into flat structured mobile communities is crucial for improving users' experience and increasing application diversity in mobile networks. The available P2P applications perform application-centric management, but this cannot replace network-wide management, especially when a number of different applications are used simultaneously in the network. Network-wide management is the key element required for a smooth transition from standalone P2P applications to self-organizing mobile communities that maintain various services with quality and security guarantees. Classical centralized network management solutions are not applicable in flat structured mobile communities due to the decentralized nature and high mobility of the underlying networks. The basic network management tasks also have to be revised to take into account the specifics of flat structured mobile communities. Network performance management becomes more dependent on the nodes' current context, which also requires extension of the configuration management functionality. Fault management has to take into account the high mobility of the network nodes. Performance and accounting management mainly target maintaining efficient and fair access to resources within the community; however, they also allow unbalanced resource use by nodes that explicitly permit it, e.g., as a voluntary donation to the community or for professional (commercial) reasons. Security management must implement new trust models, which are based on community feedback, professional authorization, or a mix of both. To fulfill these and other specifics of flat structured mobile communities, a new network management solution is needed. The paper presents a distributed network management solution for flat structured mobile communities. The paper also points out possible network management roles for the different parties (e.g., operators, service-providing hubs/super nodes, etc.) involved in a service-providing chain.

  3. Video fingerprinting for copy identification: from research to industry applications

    NASA Astrophysics Data System (ADS)

    Lu, Jian

    2009-02-01

    Research that began a decade ago in video copy detection has developed into a technology known as "video fingerprinting". Today, video fingerprinting is an essential and enabling tool adopted by the industry for video content identification and management in online video distribution. This paper provides a comprehensive review of video fingerprinting technology and its applications in identifying, tracking, and managing copyrighted content on the Internet. The review includes a survey on video fingerprinting algorithms and some fundamental design considerations, such as robustness, discriminability, and compactness. It also discusses fingerprint matching algorithms, including complexity analysis, and approximation and optimization for fast fingerprint matching. On the application side, it provides an overview of a number of industry-driven applications that rely on video fingerprinting. Examples are given based on real-world systems and workflows to demonstrate applications in detecting and managing copyrighted content, and in monitoring and tracking video distribution on the Internet.

  4. System approach to distributed sensor management

    NASA Astrophysics Data System (ADS)

    Mayott, Gregory; Miller, Gordon; Harrell, John; Hepp, Jared; Self, Mid

    2010-04-01

    Since 2003, the US Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been developing a distributed Sensor Management System (SMS) that utilizes a framework demonstrating application-layer, net-centric sensor management. The core principles of the design support distributed and dynamic discovery of sensing devices and processes through a multi-layered implementation. This results in a sensor management layer that acts as a system with defined interfaces for which the characteristics, parameters, and behaviors can be described. Within the framework, the definition of a protocol is required to establish the rules for how distributed sensors should operate. The protocol defines the behaviors, capabilities, and message structures needed to operate within the functional design boundaries. The protocol definition addresses the requirements for a device (sensor or process) to dynamically join or leave a sensor network, dynamically describe device control and data capabilities, and allow dynamic addressing of publish and subscribe functionality. The message structure is a multi-tiered definition that identifies standard, extended, and payload representations, specifically designed to accommodate the need for standard representations of common functions while supporting the need for feature-based functions that are typically vendor specific. The dynamic qualities of the protocol give a user GUI application the flexibility of mapping widget-level controls to each device based on reported capabilities in real time. The SMS approach is designed to accommodate scalability and flexibility within a defined architecture. The distributed sensor management framework and its application to a tactical sensor network are described in this paper.
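
    The standard/extended/payload tiering described above can be rendered as a simple message type; the field names below are invented for illustration and do not reproduce the SMS protocol.

    ```python
    from dataclasses import dataclass, field
    from typing import Any, Dict

    # Illustrative rendering of the multi-tiered message structure: a standard
    # tier for common functions, an extended tier for vendor-specific features,
    # and an opaque payload. All names are hypothetical.
    @dataclass
    class SensorMessage:
        msg_type: str                 # e.g. "JOIN", "DESCRIBE", "PUBLISH"
        device_id: str
        standard: Dict[str, Any] = field(default_factory=dict)
        extended: Dict[str, Any] = field(default_factory=dict)  # vendor-specific
        payload: bytes = b""

    join = SensorMessage(
        msg_type="JOIN",
        device_id="nv-cam-07",
        standard={"capabilities": ["pan", "tilt", "ir"]},
        extended={"vendor": {"gain_curve": "tableB"}},
    )
    print(join.msg_type, join.standard["capabilities"])
    ```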

  5. Managing Distributed Systems with Smart Subscriptions

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Lee, Diana D.; Swanson, Keith (Technical Monitor)

    2000-01-01

    We describe an event-based, publish-and-subscribe mechanism based on using 'smart subscriptions' to recognize weakly-structured events. We present a hierarchy of subscription languages (propositional, predicate, temporal and agent) and algorithms for efficiently recognizing event matches. This mechanism has been applied to the management of distributed applications.
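
    A minimal sketch of the idea, assuming invented event shapes: a propositional subscription matches exact attribute values, while a predicate subscription evaluates an arbitrary test, mirroring the lower two levels of the subscription-language hierarchy.

    ```python
    # Toy "smart subscriptions" over weakly-structured events (dicts).
    def propositional(**required):
        """Match events whose attributes equal the given values exactly."""
        return lambda event: all(event.get(k) == v for k, v in required.items())

    def predicate(test):
        """Match events satisfying an arbitrary boolean test."""
        return test

    subscriptions = [
        ("ops-console", propositional(kind="node-down")),
        ("pager", predicate(lambda e: e.get("load", 0) > 0.9)),
    ]

    def publish(event):
        for subscriber, matches in subscriptions:
            if matches(event):
                print(f"deliver {event} -> {subscriber}")

    publish({"kind": "node-down", "host": "n14"})
    publish({"kind": "heartbeat", "load": 0.95})
    ```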

  6. Motion/imagery secure cloud enterprise architecture analysis

    NASA Astrophysics Data System (ADS)

    DeLay, John L.

    2012-06-01

    Cloud computing with storage virtualization and new service-oriented architectures brings a new perspective to the aspect of a distributed motion imagery and persistent surveillance enterprise. Our existing research focuses mainly on content management, distributed analytics, and WAN distributed cloud networking performance issues of cloud-based technologies. The potential of leveraging cloud-based technologies for hosting motion imagery, imagery, and analytics workflows for DOD and security applications is relatively unexplored. This paper examines technologies for managing, storing, processing and disseminating motion imagery and imagery within a distributed network environment. Finally, we propose areas for future research in the area of distributed cloud content management enterprises.

  7. A Distributed Dynamic Programming-Based Solution for Load Management in Smart Grids

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Xu, Yinliang; Li, Sisi; Zhou, MengChu; Liu, Wenxin; Xu, Ying

    2018-03-01

    Load management is being recognized as an important option for active user participation in the energy market. Traditional load management methods usually require a centralized powerful control center and a two-way communication network between the system operators and energy end-users. The increasing user participation in smart grids may limit their applications. In this paper, a distributed solution for load management in emerging smart grids is proposed. The load management problem is formulated as a constrained optimization problem aiming at maximizing the overall utility of users while meeting the requirement for load reduction requested by the system operator, and is solved by using a distributed dynamic programming algorithm. The algorithm is implemented via a distributed framework and thus can deliver a highly desired distributed solution. It avoids the required use of a centralized coordinator or control center, and can achieve satisfactory outcomes for load management. Simulation results with various test systems demonstrate its effectiveness.
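
    The underlying recursion can be shown with a small dynamic program: choose a discrete load reduction per user so the total meets the operator's request while maximizing summed utility. The utilities are invented, and unlike the paper's algorithm this toy runs centralized, so it illustrates only the DP formulation, not the distributed solution.

    ```python
    from functools import lru_cache

    REQUEST = 3  # kW of total reduction requested by the system operator

    # utility[user][r] = utility retained if user sheds r kW of load (r = 0..3)
    utility = [
        [10, 8, 5, 1],
        [12, 11, 7, 2],
        [9, 9, 8, 6],
    ]

    def best_allocation(utility, request):
        @lru_cache(maxsize=None)
        def solve(i, need):
            # state: (user index, reduction still needed) -> (best utility, choices)
            if i == len(utility):
                return (0, ()) if need <= 0 else (float("-inf"), ())
            best = (float("-inf"), ())
            for r, u in enumerate(utility[i]):
                value, choices = solve(i + 1, max(0, need - r))
                if value + u > best[0]:
                    best = (value + u, (r,) + choices)
            return best

        return solve(0, request)

    total, per_user = best_allocation(utility, REQUEST)
    print(f"total utility {total}, reductions {per_user}")
    ```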

  8. D-MSR: a distributed network management scheme for real-time monitoring and process control applications in wireless industrial automation.

    PubMed

    Zand, Pouria; Dilo, Arta; Havinga, Paul

    2013-06-27

    Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. To our knowledge, this is the first distributed management scheme based on the IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead.

  9. D-MSR: A Distributed Network Management Scheme for Real-Time Monitoring and Process Control Applications in Wireless Industrial Automation

    PubMed Central

    Zand, Pouria; Dilo, Arta; Havinga, Paul

    2013-01-01

    Current wireless technologies for industrial applications, such as WirelessHART and ISA100.11a, use a centralized management approach where a central network manager handles the requirements of the static network. However, such a centralized approach has several drawbacks. For example, it cannot cope with dynamicity/disturbance in large-scale networks in a real-time manner and it incurs a high communication overhead and latency for exchanging management traffic. In this paper, we therefore propose a distributed network management scheme, D-MSR. It enables the network devices to join the network, schedule their communications, establish end-to-end connections by reserving the communication resources for addressing real-time requirements, and cope with network dynamicity (e.g., node/edge failures) in a distributed manner. To our knowledge, this is the first distributed management scheme based on the IEEE 802.15.4e standard, which guides the nodes in different phases from joining until publishing their sensor data in the network. We demonstrate via simulation that D-MSR can address real-time and reliable communication as well as the high throughput requirements of industrial automation wireless networks, while also achieving higher efficiency in network management than WirelessHART, in terms of delay and overhead. PMID:23807687

  10. Web Application to Monitor Logistics Distribution of Disaster Relief Using the CodeIgniter Framework

    NASA Astrophysics Data System (ADS)

    Jamil, Mohamad; Ridwan Lessy, Mohamad

    2018-03-01

    Disaster management is the responsibility of the central government and local governments. The principles of disaster management include, among others, being quick and precise, setting priorities, coordination and cohesion, and acting in an efficient and effective manner. The help most communities need is logistical assistance covering people's everyday needs, such as food, instant noodles, fast food, blankets, mattresses, etc. Logistical assistance is needed for disaster management, especially in times of disaster. The support of logistical assistance must arrive at the right time, at the right location, and with the right target, quality, quantity, and needs. The purpose of this study is to build a web application to monitor the logistics distribution of disaster relief using the CodeIgniter framework. Through this application, the mechanisms of aid delivery can be easily monitored from and to the disaster site.

  11. Development of Risk Assessment Methodology for Land Application and Distribution and Marketing of Municipal Sludge

    EPA Science Inventory

    This is one of a series of reports that present methodologies for assessing the potential risks to humans or other organisms from the disposal or reuse of municipal sludge. The sludge management practices addressed by this series include land application practices, distribution a...

  12. A Framework for Distributed Mixed Language Scientific Applications

    NASA Astrophysics Data System (ADS)

    Quarrie, D. R.

    The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.

  13. Autonomic Management in a Distributed Storage System

    NASA Astrophysics Data System (ADS)

    Tauber, Markus

    2010-07-01

    This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments, which were carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but seek to be applicable to a wide range of such systems, in particular those exposed to varying conditions. The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori. Furthermore, the system could react to a change in conditions by adopting a more appropriate configuration. Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The experimental evaluations of autonomic management show promising results, and suggest several future research topics. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.

  14. Predictive Anomaly Management for Resilient Virtualized Computing Infrastructures

    DTIC Science & Technology

    2015-05-27

    Publications associated with this project include: "PREC: Practical Root Exploit Containment for Android Devices," ACM Conference on Data and Application Security and Privacy (CODASPY), March 2014; and Hiep Nguyen, Yongmin Tan, Xiaohui Gu, "Propagation-aware Anomaly Localization for Cloud Hosted Distributed Applications," ACM Workshop on Managing Large-Scale Systems via the Analysis of System Logs and the Application of Machine Learning Techniques (SLAML), in conjunction with SOSP.

  15. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    NASA Astrophysics Data System (ADS)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue; control systems that operate in networks especially relate to this issue. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of multi-agent control for the systems in a parallel mode with various degrees of detailed elaboration.

  16. ADMS State of the Industry and Gap Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agalgaonkar, Yashodhan P.; Marinovici, Maria C.; Vadari, Subramanian V.

    2016-03-31

    An Advanced Distribution Management System (ADMS) is a platform for optimized distribution system operational management. This platform comprises distribution management system (DMS) applications, supervisory control and data acquisition (SCADA), an outage management system (OMS), and a distributed energy resource management system (DERMS). One of the primary objectives of this work is to study and analyze several ADMS component and auxiliary systems. All the important component and auxiliary systems, SCADA, GISs, DMSs, AMRs/AMIs, OMSs, and DERMS, are discussed in this report. Their current-generation technologies are analyzed, and their integration (or evolution) with an ADMS technology is discussed. An ADMS technology state-of-the-art and gap analysis is also presented. Two technical gaps are observed. The integration challenge between the component operational systems is the single largest challenge for ADMS design and deployment. Another significant challenge concerns essential ADMS applications, for instance, fault location, isolation, and service restoration (FLISR) and volt-var optimization (VVO). There are relatively few ADMS application developers because the ADMS software platform is not open source. There is another critical gap which, while not technical in nature (compared to the two above), is still important to consider. The data models currently residing in utility GIS systems are either incomplete or inaccurate or both. This data is essential for planning and operations because it is typically one of the primary sources from which power system models are created. To achieve the full potential of an ADMS, the ability to execute accurate power flow solutions is an important prerequisite. These critical gaps are hindering wider utility adoption of ADMS technology. The development of an open architecture platform can eliminate many of these barriers and also aid seamless integration of distribution utility legacy systems with an ADMS.

  17. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

    The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High-performance multicast, large-scale applications, and wide-area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project concerns distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.

  18. NELS 2.0 - A general system for enterprise wide information management

    NASA Technical Reports Server (NTRS)

    Smith, Stephanie L.

    1993-01-01

    NELS, the NASA Electronic Library System, is an information management tool for creating distributed repositories of documents, drawings, and code for use and reuse by the aerospace community. The NELS retrieval engine can load metadata and source files of full-text objects, perform natural language queries to retrieve ranked objects, and create links to connect user interfaces. For flexibility, the NELS architecture has layered interfaces between the application program and the stored library information. The session manager provides the interface functions for development of NELS applications. The data manager is an interface between the session manager and the structured data system. The center of the structured data system is the Wide Area Information Server. This system architecture provides access to information across heterogeneous platforms in a distributed environment. There are presently three user interfaces that connect to the NELS engine: an X-Windows interface, an ASCII interface, and the Spatial Data Management System. This paper describes the design and operation of NELS as an information management tool and repository.

  19. A data and information system for processing, archival, and distribution of data for global change research

    NASA Technical Reports Server (NTRS)

    Graves, Sara J.

    1994-01-01

    Work on this project was focused on information management techniques for Marshall Space Flight Center's EOSDIS Version 0 Distributed Active Archive Center (DAAC). The centerpiece of this effort has been participation in EOSDIS catalog interoperability research, the result of which is a distributed Information Management System (IMS) allowing the user to query the inventories of all the DAAC's from a single user interface. UAH has provided the MSFC DAAC database server for the distributed IMS, and has contributed to definition and development of the browse image display capabilities in the system's user interface. Another important area of research has been in generating value-based metadata through data mining. In addition, information management applications for local inventory and archive management, and for tracking data orders were provided.

  20. Advanced Inverter Functions and Communication Protocols for Distribution Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagarajan, Adarsh; Palmintier, Bryan; Baggu, Murali

    2016-05-05

    This paper aims at identifying the advanced features required by distribution management system (DMS) service providers to bring inverter-connected distributed energy resources into use as an intelligent grid resource. This work explores the standard functions needed in the future DMS for enterprise integration of distributed energy resources (DER). Important DMS functionalities such as DER management in aggregate groups, including the discovery of capabilities, status monitoring, and dispatch of real and reactive power, are addressed in this paper. It is intended to provide the industry with a point of reference for DER integration with other utility applications and to provide guidance to research and standards development organizations.

  1. Integrated Distribution Management System for Alabama

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schatz, Joe

    2013-03-31

    Southern Company Services, under contract with the Department of Energy, along with Alabama Power, Alstom Grid (formerly AREVA T&D) and others moved the work product developed in the first phase of the Integrated Distribution Management System (IDMS) from "proof of concept" to true deployment through the activity described in this Final Report. This project, Integrated Distribution Management Systems in Alabama, advanced earlier proof-of-concept activities into actual implementation and furthermore completed additional requirements to fully realize the benefits of an IDMS. These tasks include development and implementation of a distribution-system-based model that enables data access and enterprise application integration.

  2. Integrating the autonomous subsystems management process

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry R.

    1992-01-01

    Ways in which the ranking of the Space Station Module Power Management and Distribution testbed may be achieved and an individual subsystem's internal priorities may be managed within the complete system are examined. The application of these results in the integration and performance leveling of the autonomously managed system is discussed.

  3. Improving Royal Australian Air Force Strategic Airlift Planning by Application of a Computer Based Management Information System

    DTIC Science & Technology

    1991-12-01

    IMPROVING ROYAL AUSTRALIAN AIR FORCE STRATEGIC AIRLIFT PLANNING BY APPLICATION OF A COMPUTER BASED MANAGEMENT INFORMATION SYSTEM. Thesis presented to the Faculty of the... Master of Science in Information Management. Neil A. Cooper, BBus, Squadron Leader, RAAF, December 1991. Approved for public release; distribution unlimited. ...grateful for the time and honest views given to me by the ADANS manager, Lieutenant Colonel Charlie Davis. For my Canadian research, I relied on the

  4. Integrated Data for Improved Asset Management

    DOT National Transportation Integrated Search

    2016-05-26

    The objective of this research is to demonstrate the potential benefits for agency-wide data integration for VDOT asset management. This objective is achieved through an example application that requires information distributed across multiple databa...

  5. Coordinating complex problem-solving among distributed intelligent agents

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1992-01-01

    A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high-level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.

  6. 75 FR 42177 - Federated Enhanced Treasury Income Fund, et al.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-20

    ... closed-end management investment companies to make periodic distributions of long-term capital gains with..., the ``Current Funds'') and Federated Investment Management Company (``Federated'' or the ``Adviser... Investment Management, Office of Investment Company Regulation). SUPPLEMENTARY INFORMATION: The following is...

  7. Horizon: The Portable, Scalable, and Reusable Framework for Developing Automated Data Management and Product Generation Systems

    NASA Astrophysics Data System (ADS)

    Huang, T.; Alarcon, C.; Quach, N. T.

    2014-12-01

    Capture, curation, and analysis are the typical activities performed at any given Earth Science data center. Modern data management systems must be adaptable to heterogeneous science data formats, scalable to meet the mission's quality-of-service requirements, and able to manage the life cycle of any given science data product. Designing a scalable data management system doesn't happen overnight. It takes countless hours of refining, refactoring, retesting, and re-architecting. The Horizon data management and workflow framework, developed at the Jet Propulsion Laboratory, is a portable, scalable, and reusable framework for developing high-performance data management and product generation workflow systems to automate data capturing, data curation, and data analysis activities. The NASA Physical Oceanography Distributed Active Archive Center (PO.DAAC)'s Data Management and Archive System (DMAS) is its core data infrastructure, which handles capturing and distribution of hundreds of thousands of satellite observations each day around the clock. DMAS is an application of the Horizon framework. The NASA Global Imagery Browse Services (GIBS) is the Earth Observing System Data and Information System (EOSDIS)'s solution for making high-resolution global imagery available to the science communities. The Imagery Exchange (TIE), an application of the Horizon framework, is a core subsystem of GIBS responsible for data capturing and imagery generation automation to support the EOSDIS's 12 distributed active archive centers and 17 Science Investigator-led Processing Systems (SIPS). This presentation discusses our ongoing effort in refining, refactoring, retesting, and re-architecting the Horizon framework to enable data-intensive science and its applications.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liss, W.; Dybel, M.; West, R.

    This report covers the first year's work performed by the Gas Technology Institute and Encorp Inc. under subcontract to the National Renewable Energy Laboratory. The objective of this three-year contract is to develop innovative grid interconnection and control systems. This supports the advancement of distributed generation in the marketplace by making installations more cost-effective and compatible across electric power and energy management systems. Specifically, the goals are: (1) to develop and demonstrate cost-effective distributed power grid interconnection products and software and communication solutions applicable to improving the economics of a broad range of distributed power systems, including existing, emerging, and other power generation technologies; and (2) to enhance the features and capabilities of distributed power products to integrate, interact, and provide operational benefits to the electric power and advanced energy management systems. This includes features and capabilities for participating in resource planning, the provision of ancillary services, and energy management. Specific topics of this report include the development of an advanced controller, a power sensing board, expanded communication capabilities, a revenue-grade meter interface, and a case study of an interconnected distributed power system application that serves as a model for demonstrating the functionality of the advanced controller design.

  9. Scalable collaborative risk management technology for complex critical systems

    NASA Technical Reports Server (NTRS)

    Campbell, Scott; Torgerson, Leigh; Burleigh, Scott; Feather, Martin S.; Kiper, James D.

    2004-01-01

    We describe our project and plans to develop methods, software tools, and infrastructure to address challenges related to geographically distributed software development. Specifically, this work is creating an infrastructure that supports applications working across distributed geographical and organizational domains, and is using this infrastructure to develop a tool that supports project development using risk management and analysis techniques when the participants are not collocated.

  10. Distributed Prognostic Health Management with Gaussian Process Regression

    NASA Technical Reports Server (NTRS)

    Saha, Sankalita; Saha, Bhaskar; Saxena, Abhinav; Goebel, Kai Frank

    2010-01-01

    Distributed prognostics architecture design is an enabling step for efficient implementation of health management systems. A major challenge encountered in such design is the formulation of optimal distributed prognostics algorithms. In this paper, we present a distributed GPR-based prognostics algorithm whose target platform is a wireless sensor network. In addition to the challenges encountered in any distributed implementation, a wireless network poses constraints on communication patterns, making the problem more challenging. The prognostics application used to demonstrate our new algorithms is battery prognostics. In order to present the trade-offs among different prognostic approaches, we present a comparison with a distributed implementation of particle-filter-based prognostics for the same battery data.
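
    As a concrete illustration of the regression step, the sketch below applies plain, single-node Gaussian Process Regression with an RBF kernel to toy battery capacity-fade data; it is not the authors' distributed wireless-sensor-network algorithm, and all data values and hyperparameters are assumed.

    ```python
    # Minimal single-node GPR sketch (illustrative only; not the paper's
    # distributed algorithm). Predicts battery capacity fade vs. cycle count.
    import numpy as np

    def rbf_kernel(a, b, length=60.0, variance=1.0):
        """Squared-exponential covariance between 1-D input arrays a and b."""
        d = a[:, None] - b[None, :]
        return variance * np.exp(-0.5 * (d / length) ** 2)

    def gpr_predict(x_train, y_train, x_test, noise=1e-4):
        """Posterior mean and variance of a zero-mean GP at x_test."""
        K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
        K_s = rbf_kernel(x_train, x_test)
        K_ss = rbf_kernel(x_test, x_test)
        alpha = np.linalg.solve(K, y_train)
        mean = K_s.T @ alpha
        cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
        return mean, np.diag(cov)

    # Hypothetical capacity-fade observations: cycle number -> capacity.
    cycles = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
    capacity = np.array([1.00, 0.96, 0.91, 0.85, 0.78])
    mean, var = gpr_predict(cycles, capacity, np.array([250.0, 300.0]))
    print(mean, np.sqrt(var))   # predicted capacity with uncertainty
    ```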

  11. Applications of space observations to the management and utilization of coastal fishery resources

    NASA Technical Reports Server (NTRS)

    Kemmerer, A. J.; Savastano, K. J.; Faller, K. H.

    1977-01-01

    Information needs of those concerned with the harvest and management of coastal fishery resources can be satisfied in part through applications of satellite remote sensing. Recently completed and ongoing investigations have demonstrated the potential for defining fish distribution patterns from multispectral data, monitoring fishing distribution and effort with synthetic aperture radar systems, forecasting recruitment of certain estuarine-dependent species, and tracking marine mammals. These investigations, which are reviewed in this paper, have relied on sensors supported by Landsat 1 and 2, Skylab-3, and Nimbus-6, as well as sensors carried by aircraft and mounted on surface platforms, to simulate applications from Seasat-A and other future spacecraft systems. None of the systems is operational, as all were designed to identify and demonstrate applications and to aid in the specification of requirements for future spaceborne systems.

  12. MASM: a market architecture for sensor management in distributed sensor networks

    NASA Astrophysics Data System (ADS)

    Viswanath, Avasarala; Mullen, Tracy; Hall, David; Garga, Amulya

    2005-03-01

    Rapid developments in sensor technology and its applications have energized research efforts towards devising a firm theoretical foundation for sensor management. Ubiquitous sensing, wide-bandwidth communications, and distributed processing provide both opportunities and challenges for sensor and process control and optimization. Traditional optimization techniques cannot simultaneously consider the wildly non-commensurate measures involved in sensor management within a single optimization routine. Market-oriented programming provides a valuable and principled paradigm for designing systems to solve this dynamic and distributed resource allocation problem. We have modeled the sensor management scenario as a competitive market, wherein the sensor manager holds a combinatorial auction to sell the various items produced by the sensors and the communication channels. However, standard auction mechanisms have been found not to be directly applicable to the sensor management domain. For this purpose, we have developed a specialized market architecture, MASM (Market Architecture for Sensor Management). In MASM, the mission manager is responsible for deciding task allocations to the consumers and their corresponding budgets, and the sensor manager is responsible for resource allocation to the various consumers. In addition to having a modified combinatorial winner determination algorithm, MASM has specialized sensor network modules that address commensurability issues between consumers and producers in the sensor network domain. A preliminary multi-sensor, multi-target simulation environment has been implemented to test the performance of the proposed system. MASM outperformed an information-theoretic sensor manager in meeting the mission objectives in the simulation experiments.
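
    The abstract does not spell out MASM's modified winner-determination algorithm. As background, the sketch below shows a common greedy approximation for combinatorial-auction winner determination; the bidder names, items, and prices are hypothetical.

    ```python
    # Greedy winner determination for a combinatorial auction (a standard
    # approximation, not MASM's modified algorithm). Each bid offers one
    # price for a bundle of sensor/channel items.
    def greedy_winners(bids):
        """bids: list of (bidder, price, frozenset_of_items).
        Ranks bids by price per item, then awards non-conflicting bids."""
        ranked = sorted(bids, key=lambda b: b[1] / len(b[2]), reverse=True)
        sold, winners = set(), []
        for bidder, price, items in ranked:
            if sold.isdisjoint(items):      # no item may be sold twice
                winners.append((bidder, price))
                sold |= items
        return winners

    bids = [("trackerA", 10.0, frozenset({"radar1", "chan2"})),
            ("trackerB", 7.0, frozenset({"radar1"})),
            ("classifierA", 4.0, frozenset({"eo1"}))]
    # trackerB (7.0/item) outbids trackerA (5.0/item) for radar1.
    print(greedy_winners(bids))
    ```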

  13. 41 CFR 101-28.301 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 28-STORAGE AND DISTRIBUTION 28.3-Customer Supply Centers § 101-28.301 Applicability. This subpart is applicable to all activities that are eligible to use customer supply centers. Eligible activities include executive agencies, elements of the...

  14. SCSODC: Integrating Ocean Data for Visualization Sharing and Application

    NASA Astrophysics Data System (ADS)

    Xu, C.; Li, S.; Wang, D.; Xie, Q.

    2014-02-01

    The South China Sea Ocean Data Center (SCSODC) was founded in 2010 to improve the collection and management of ocean data at the South China Sea Institute of Oceanology (SCSIO). The mission of SCSODC is to ensure the long-term scientific stewardship of ocean data, information and products - collected through research groups, monitoring stations and observation cruises - and to facilitate their efficient use and distribution by potential users. However, data sharing and applications were limited because the data were distributed and heterogeneous, which made them difficult to integrate. To surmount those difficulties, the Data Sharing System has been developed by the SCSODC using the most appropriate information management and information technology. The Data Sharing System uses open standards and tools to promote the capability to integrate ocean data and to interact with other data portals or users, and it includes a full range of processes such as data discovery, evaluation and access, combining C/S and B/S modes. It provides a visualized management interface for data managers and a transparent and seamless data access and application environment for users. Users are allowed to access data using the client software and to access the interactive visualization application interface via a web browser. The architecture, key technologies and functionality of the system are discussed briefly in this paper. It is shown that the SCSODC system is able to implement web-based visualization sharing and seamless access to ocean data in a distributed and heterogeneous environment.

  15. Applications of Ontologies in Knowledge Management Systems

    NASA Astrophysics Data System (ADS)

    Rehman, Zobia; Kifor, Claudiu V.

    2014-12-01

    Enterprises are realizing that their core asset in the 21st century is knowledge. In an organization, knowledge resides in databases, knowledge bases, filing cabinets and people's heads. Organizational knowledge is distributed in nature, and its poor management causes repetition of activities across the enterprise. To get true benefits from this asset, it is important for an organization to "know what it knows". That is why many organizations are investing heavily in managing their knowledge. Artificial intelligence techniques have contributed substantially to organizational knowledge management. In this article we review the applications of ontologies in the knowledge management realm.

  16. The Management and Security Expert (MASE)

    NASA Technical Reports Server (NTRS)

    Miller, Mark D.; Barr, Stanley J.; Gryphon, Coranth D.; Keegan, Jeff; Kniker, Catherine A.; Krolak, Patrick D.

    1991-01-01

    The Management and Security Expert (MASE) is a distributed expert system that monitors the operating systems and applications of a network. It is capable of gleaning the information provided by the different operating systems in order to optimize hardware and software performance; recognize potential hardware and/or software failure, and either repair the problem before it becomes an emergency or notify the systems manager of the problem; and monitor applications and known security holes for indications of an intruder or virus. MASE can eliminate much of the guesswork of system management.

  17. U.S. Geological Survey science for the Wyoming Landscape Conservation Initiative—2014 annual report

    USGS Publications Warehouse

    Bowen, Zachary H.; Aldridge, Cameron L.; Anderson, Patrick J.; Assal, Timothy J.; Bartos, Timothy T.; Biewick, Laura R; Boughton, Gregory K.; Chalfoun, Anna D.; Chong, Geneva W.; Dematatis, Marie K.; Eddy-Miller, Cheryl A.; Garman, Steven L.; Germaine, Stephen S.; Homer, Collin G.; Huber, Christopher; Kauffman, Matthew J.; Latysh, Natalie; Manier, Daniel; Melcher, Cynthia P.; Miller, Alexander; Miller, Kirk A.; Olexa, Edward M.; Schell, Spencer; Walters, Annika W.; Wilson, Anna B.; Wyckoff, Teal B.

    2015-01-01

    Finally, capabilities of the WLCI Web site and the USGS ScienceBase infrastructure were maintained and upgraded to help ensure access to and efficient use of all the WLCI data, products, assessment tools, and outreach materials that have been developed. Of particular note is the completion of three Web applications developed for mapping (1) the 1900-2008 progression of oil and gas development; (2) the predicted distributions of Wyoming's Species of Greatest Conservation Need; and (3) the locations of coal and wind energy production, sage-grouse distribution and core management areas, and alternative routes for transmission lines within the WLCI region. Collectively, these applications provide WLCI planners and managers with powerful tools for better understanding the distributions of wildlife species and potential alternatives for energy development.

  18. FRIEDA: Flexible Robust Intelligent Elastic Data Management Framework

    DOE PAGES

    Ghoshal, Devarshi; Hendrix, Valerie; Fox, William; ...

    2017-02-01

    Scientific applications are increasingly using cloud resources for their data analysis workflows. However, managing data effectively and efficiently over these cloud resources is challenging due to the myriad storage choices with different performance and cost trade-offs, complex application choices, and the complexity associated with elasticity and failure rates in these environments. The different data access patterns of data-intensive scientific applications require a more flexible and robust data management solution than the ones currently in existence. FRIEDA is a Flexible Robust Intelligent Elastic Data Management framework that employs a range of data management strategies in cloud environments. FRIEDA can manage the storage and data lifecycle of applications in cloud environments. There are four stages in the FRIEDA data management lifecycle: (i) storage planning, (ii) provisioning and preparation, (iii) data placement, and (iv) execution. FRIEDA defines a data control plane and an execution plane. The data control plane defines the data partition and distribution strategy, whereas the execution plane manages the execution of the application using a master-worker paradigm. FRIEDA also provides different data management strategies, either to partition the data in real time or to predetermine the data partitions prior to application execution.

  19. FRIEDA: Flexible Robust Intelligent Elastic Data Management Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghoshal, Devarshi; Hendrix, Valerie; Fox, William

    Scientific applications are increasingly using cloud resources for their data analysis workflows. However, managing data effectively and efficiently over these cloud resources is challenging due to the myriad storage choices with different performance and cost trade-offs, complex application choices, and the complexity associated with elasticity and failure rates in these environments. The different data access patterns of data-intensive scientific applications require a more flexible and robust data management solution than the ones currently in existence. FRIEDA is a Flexible Robust Intelligent Elastic Data Management framework that employs a range of data management strategies in cloud environments. FRIEDA can manage the storage and data lifecycle of applications in cloud environments. There are four stages in the FRIEDA data management lifecycle: (i) storage planning, (ii) provisioning and preparation, (iii) data placement, and (iv) execution. FRIEDA defines a data control plane and an execution plane. The data control plane defines the data partition and distribution strategy, whereas the execution plane manages the execution of the application using a master-worker paradigm. FRIEDA also provides different data management strategies, either to partition the data in real time or to predetermine the data partitions prior to application execution.
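
    As a rough illustration of the control-plane/execution-plane split described in the FRIEDA abstracts above, the sketch below predetermines data partitions and then executes them with a master-worker pool; the function names and file inputs are hypothetical, and this is not FRIEDA's actual API.

    ```python
    # Sketch of a control plane (partition planning) feeding an execution
    # plane (master-worker pool). Semantics inferred from the abstract only.
    from multiprocessing import Pool

    def plan_partitions(files, n_workers):
        """Control plane: predetermine data partitions before execution."""
        return [files[i::n_workers] for i in range(n_workers)]

    def analyze(partition):
        """Execution-plane worker: process one partition of input files."""
        return [f"processed:{name}" for name in partition]

    if __name__ == "__main__":
        files = [f"obs_{i}.dat" for i in range(10)]   # hypothetical inputs
        parts = plan_partitions(files, n_workers=4)
        with Pool(4) as pool:                          # master-worker step
            results = pool.map(analyze, parts)
        print(sum(results, []))
    ```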

  20. Knowledge Management System Model for Learning Organisations

    ERIC Educational Resources Information Center

    Amin, Yousif; Monamad, Roshayu

    2017-01-01

    Based on the literature of knowledge management (KM), this paper reports on the progress of developing a new knowledge management system (KMS) model with a component architecture distributed over the widely-recognised socio-technical system (STS) aspects to guide developers in selecting the most applicable components to support their KM…

  1. A Learning Management System Enhanced with Internet of Things Applications

    ERIC Educational Resources Information Center

    Mershad, Khaleel; Wakim, Pilar

    2018-01-01

    A breakthrough in the development of online learning occurred with the utilization of Learning Management Systems (LMS) as a tool for creating, distributing, tracking, and managing various types of educational and training material. Since the appearance of the first LMS, major technological enhancements transformed this tool into a powerful…

  2. 78 FR 64265 - Hours of Service of Drivers: U.S. Department of Defense (DOD); Application for Exemption

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-28

    ...) Military Surface Deployment and Distribution Command (SDDC) an exemption from the minimum 30-minute rest... Surface Deployment and Distribution Command (SDDC) manages the motor carrier industry contracts for the...

  3. Optimal management of stationary lithium-ion battery system in electricity distribution grids

    NASA Astrophysics Data System (ADS)

    Purvins, Arturs; Sumner, Mark

    2013-11-01

    The present article proposes an optimal battery system management model in distribution grids for stationary applications. The main purpose of the management model is to maximise the utilisation of distributed renewable energy resources in distribution grids, preventing situations of reverse power flow in the distribution transformer. Secondly, battery management ensures efficient battery utilisation: charging at off-peak prices and discharging at peak prices when possible. This gives the battery system a shorter payback time. Management of the system requires predictions of residual distribution grid demand (i.e. demand minus renewable energy generation) and electricity price curves (e.g. for 24 h in advance). Results of a hypothetical study in Great Britain in 2020 show that the battery can contribute significantly to storing renewable energy surplus in distribution grids while being highly utilised. In a distribution grid with 25 households and an installed 8.9 kW wind turbine, a battery system with rated power of 8.9 kW and battery capacity of 100 kWh can store 7 MWh of 8 MWh wind energy surplus annually. Annual battery utilisation reaches 235 cycles in per unit values, where one unit is a full charge-depleting cycle depth of a new battery (80% of 100 kWh).
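
    A toy version of the dispatch rule described above might look like the following; the thresholds, time step, and price inputs are assumptions for illustration, and the sketch covers only surplus absorption and peak-price discharge, not the paper's full predictive model.

    ```python
    # Toy battery dispatch rule following the abstract's two goals
    # (illustrative only; parameters and the price forecast are assumed).
    def dispatch(residual_kw, price, peak_price, battery_kwh,
                 capacity_kwh=100.0, rated_kw=8.9, dt_h=1.0):
        """residual_kw: grid demand minus renewable generation. Negative
        values are surplus that would otherwise reverse-flow through the
        distribution transformer."""
        if residual_kw < 0:                        # renewable surplus: charge
            p = min(rated_kw, -residual_kw,
                    (capacity_kwh - battery_kwh) / dt_h)
            return battery_kwh + p * dt_h, f"charge {p:.1f} kW"
        if price >= peak_price and battery_kwh > 0:  # peak price: discharge
            p = min(rated_kw, residual_kw, battery_kwh / dt_h)
            return battery_kwh - p * dt_h, f"discharge {p:.1f} kW"
        return battery_kwh, "idle"

    soc, action = dispatch(residual_kw=-6.0, price=0.08, peak_price=0.15,
                           battery_kwh=40.0)
    print(soc, action)   # charges the 6.0 kW surplus into the battery
    ```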

  4. Technology Requirements for Information Management

    NASA Technical Reports Server (NTRS)

    Graves, Sara; Knoblock, Craig A.; Lannom, Larry

    2002-01-01

    This report provides the results of a panel study of the technology requirements for information management in support of application domains of particular government interest, including digital libraries, mission operations, and scientific research. The panel concluded that it is desirable to have a coordinated program of R&D that pursues a science of information management focused on an environment typified by applications of government interest: highly distributed, with very large amounts of data and a high degree of heterogeneity of sources, data, and users.

  5. The Network Configuration of an Object Relational Database Management System

    NASA Technical Reports Server (NTRS)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) require developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object-relational database management system (DBMS). Through distributed processing, work is split between the database server and client application programs. The DBMS handles all the responsibilities of the server, while the workstations running the database application concentrate on the interpretation and display of data.

  6. The StratusLab cloud distribution: Use-cases and support for scientific applications

    NASA Astrophysics Data System (ADS)

    Floros, E.

    2012-04-01

    The StratusLab project is integrating an open cloud software distribution that enables organizations to set up and provide their own private or public IaaS (Infrastructure as a Service) computing clouds. The StratusLab distribution capitalizes on popular infrastructure virtualization solutions such as KVM, the OpenNebula virtual machine manager, the Claudia service manager, and the SlipStream deployment platform, which are further enhanced and expanded with additional components developed within the project. The StratusLab distribution covers the core aspects of a cloud IaaS architecture, namely computing (life-cycle management of virtual machines), storage, appliance management, and networking. The resulting software stack provides a packaged turn-key solution for deploying cloud computing services. The cloud computing infrastructures deployed using StratusLab can support a wide range of scientific and business use cases. Grid computing has been the primary use case pursued by the project, and for this reason the initial priority has been support for the deployment and operation of fully virtualized production-level grid sites; a goal that has already been achieved by operating such a site as part of EGI's (European Grid Initiative) pan-European grid infrastructure. In this area the project is currently working to provide non-trivial capabilities such as elastic and autonomic management of grid site resources. Although grid computing has been the motivating paradigm, StratusLab's cloud distribution can support a wider range of use cases. To this end, we have developed and currently provide support for setting up general-purpose computing solutions such as Hadoop, MPI, and Torque clusters. Regarding scientific applications, the project is collaborating closely with the bioinformatics community to prepare VM appliances and deploy optimized services for bioinformatics applications. In a similar manner, additional scientific disciplines such as Earth Science can take advantage of StratusLab cloud solutions. Interested users are welcome to join StratusLab's user community by getting access to the reference cloud services deployed by the project and offered to the public.

  7. Methodology and application of combined watershed and ground-water models in Kansas

    USGS Publications Warehouse

    Sophocleous, M.; Perkins, S.P.

    2000-01-01

    Increased irrigation in Kansas and other regions during the last several decades has caused serious water depletion, making the development of comprehensive strategies and tools to resolve such problems increasingly important. This paper makes the case for an intermediate complexity, quasi-distributed, comprehensive, large-watershed model, which falls between the fully distributed, physically based hydrological modeling system of the type of the SHE model and the lumped, conceptual rainfall-runoff modeling system of the type of the Stanford watershed model. This is achieved by integrating the quasi-distributed watershed model SWAT with the fully-distributed ground-water model MODFLOW. The advantage of this approach is the appreciably smaller input data requirements and the use of readily available data (compared to the fully distributed, physically based models), the statistical handling of watershed heterogeneities by employing the hydrologic-response-unit concept, and the significantly increased flexibility in handling stream-aquifer interactions, distributed well withdrawals, and multiple land uses. The mechanics of integrating the component watershed and ground-water models are outlined, and three real-world management applications of the integrated model from Kansas are briefly presented. Three different aspects of the integrated model are emphasized: (1) management applications of a Decision Support System for the integrated model (Rattlesnake Creek subbasin); (2) alternative conceptual models of spatial heterogeneity related to the presence or absence of an underlying aquifer with shallow or deep water table (Lower Republican River basin); and (3) the general nature of the integrated model linkage by employing a watershed simulator other than SWAT (Wet Walnut Creek basin). These applications demonstrate the practicality and versatility of this relatively simple and conceptually clear approach, making public acceptance of the integrated watershed modeling system much easier. This approach also enhances model calibration and thus the reliability of model results. (C) 2000 Elsevier Science B.V.

  8. Interpreting drinking water quality in the distribution system using Dempster-Shafer theory of evidence.

    PubMed

    Sadiq, Rehan; Rodriguez, Manuel J

    2005-04-01

    Interpreting water quality data routinely generated for control and monitoring purposes in water distribution systems is a complicated task for utility managers. In fact, data for diverse water quality indicators (physico-chemical and microbiological) are generated at different times and at different locations in the distribution system. To simplify and improve the understanding and interpretation of water quality, methodologies for the aggregation and fusion of data must be developed. In this paper, the Dempster-Shafer theory, also called the theory of evidence, is introduced as a potential methodology for interpreting water quality data. The conceptual basis of this methodology and the process for its implementation are illustrated by two applications. The first application deals with the spatial fusion of water quality data, while the second deals with the development of a water quality index based on key monitored indicators. Based on the results obtained, the authors discuss the potential contribution of the theory of evidence as a decision-making tool for water quality management.
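
    For readers unfamiliar with the method, the sketch below implements Dempster's rule of combination for two mass functions over a two-state frame of discernment; the indicator names and mass values are hypothetical, not the paper's data.

    ```python
    # Dempster's rule of combination over the frame {Good, Poor}.
    # Focal elements are frozensets; the full frame represents ignorance.
    from itertools import product

    def combine(m1, m2):
        """Combine two basic belief assignments with Dempster's rule."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to empty set
        if conflict >= 1.0:
            raise ValueError("total conflict; sources cannot be combined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    G, P = frozenset({"Good"}), frozenset({"Poor"})
    frame = G | P
    turbidity = {G: 0.6, P: 0.3, frame: 0.1}   # evidence from one indicator
    chlorine  = {G: 0.5, P: 0.2, frame: 0.3}   # evidence from another
    print(combine(turbidity, chlorine))        # fused belief in water quality
    ```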

  9. Integrated Warfighter Biodefense Program (IWBP)

    DTIC Science & Technology

    2011-08-01

    Areas of potential application include health care administration, clinical data analysis, and health care research.

  10. 5 CFR 950.501 - Applicability.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) SOLICITATION OF FEDERAL CIVILIAN AND UNIFORMED SERVICE PERSONNEL FOR CONTRIBUTIONS TO PRIVATE VOLUNTARY ORGANIZATIONS Undesignated Funds § 950.501 Applicability. (a) All undesignated funds shall be distributed to all...

  11. 5 CFR 950.501 - Applicability.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) SOLICITATION OF FEDERAL CIVILIAN AND UNIFORMED SERVICE PERSONNEL FOR CONTRIBUTIONS TO PRIVATE VOLUNTARY ORGANIZATIONS Undesignated Funds § 950.501 Applicability. (a) All undesignated funds shall be distributed to all...

  12. 5 CFR 950.501 - Applicability.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) SOLICITATION OF FEDERAL CIVILIAN AND UNIFORMED SERVICE PERSONNEL FOR CONTRIBUTIONS TO PRIVATE VOLUNTARY ORGANIZATIONS Undesignated Funds § 950.501 Applicability. (a) All undesignated funds shall be distributed to all...

  13. 5 CFR 950.501 - Applicability.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) SOLICITATION OF FEDERAL CIVILIAN AND UNIFORMED SERVICE PERSONNEL FOR CONTRIBUTIONS TO PRIVATE VOLUNTARY ORGANIZATIONS Undesignated Funds § 950.501 Applicability. (a) All undesignated funds shall be distributed to all...

  14. 5 CFR 950.501 - Applicability.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT (CONTINUED) CIVIL SERVICE REGULATIONS (CONTINUED) SOLICITATION OF FEDERAL CIVILIAN AND UNIFORMED SERVICE PERSONNEL FOR CONTRIBUTIONS TO PRIVATE VOLUNTARY ORGANIZATIONS Undesignated Funds § 950.501 Applicability. (a) All undesignated funds shall be distributed to all...

  15. 78 FR 48927 - Hours of Service of Drivers: U.S. Department of Defense (DOD); Application for Exemption

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-12

    .... Department of Defense (DOD) Military Surface Deployment and Distribution Command (SDDC) for an exemption from... effect on July 1, 2013. The Military Surface Deployment and Distribution Command (SDDC) manages the motor...

  16. Coordinated scheduling for dynamic real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei

    1994-01-01

    In this project, we addressed issues in coordinated scheduling for dynamic real-time systems. In particular, we concentrated on the design and implementation of a new distributed real-time system called R-Shell. The design objective of R-Shell is to provide computing support for space programs that have large, complex, fault-tolerant distributed real-time applications. In R-Shell, the approach is based on the concept of scheduling agents, which reside in the application run-time environment and are customized to provide just those resource management functions that are needed by the specific application. With this approach, we avoid the need for a sophisticated OS that provides a variety of generalized functionality, while still not burdening application programmers with heavy responsibility for resource management. In this report, we discuss the R-Shell approach, summarize the achievements of the project, and describe a preliminary prototype of the R-Shell system.

  17. [Research on the Application of Lean Management in Medical Consumables Material Logistics Management].

    PubMed

    Yang, Chai; Zhang, Wei; Gu, Wei; Shen, Aizong

    2016-11-01

    To solve the problems of high cost, low resource utilization, and low quality of medical care in medical consumables logistics management, and to make medical consumables management scientific. The problems existing in domestic hospital medical consumables logistics management were analyzed; based on lean management methods, SPD (Supply, Processing, Distribution) was applied, combined with the HBOS (Hospital Business Operation System) and HIS (Hospital Information System) systems, to medical consumables management. Lean management was achieved in medical consumables purchasing, warehousing, distribution, clinical use, and traceability. Lean management of medical consumables can effectively control logistics costs, optimize the allocation of resources, free up unnecessary demands on medical staff time, and improve the quality of medical care. It is a scientific management method.

  18. Data management in Oceanography at SOCIB

    NASA Astrophysics Data System (ADS)

    Joaquin, Tintoré; March, David; Lora, Sebastian; Sebastian, Kristian; Frontera, Biel; Gómara, Sonia; Pau Beltran, Joan

    2014-05-01

    SOCIB, the Balearic Islands Coastal Ocean Observing and Forecasting System (http://www.socib.es), is a Marine Research Infrastructure: a multiplatform, distributed and integrated system, a facility of facilities that extends from the nearshore to the open sea and provides free, open and quality-controlled data. SOCIB has three major infrastructure components: (1) a distributed multiplatform observing system, (2) a numerical forecasting system, and (3) a data management and visualization system. We present the spatial data infrastructure and applications developed at SOCIB. One of the major goals of the SOCIB Data Centre is to provide users with a system to locate and download the data of interest (near real-time and delayed mode) and to visualize and manage the information. Following SOCIB principles, data need to be (1) discoverable and accessible, (2) freely available, and (3) interoperable and standardized. In consequence, the SOCIB Data Centre Facility is implementing a general data management system to guarantee international standards, quality assurance and interoperability. The combination of different sources and types of information requires appropriate methods to ingest, catalogue, display, and distribute this information. The SOCIB Data Centre is responsible for directing the different stages of data management, ranging from data acquisition to its distribution and visualization through web applications. The system implemented relies on open-source solutions. The data life cycle comprises the following stages: • Acquisition: The data managed by SOCIB mostly come from its own observation platforms, numerical models or information generated from the activities in the SIAS Division. • Processing: Applications developed at SOCIB handle all collected platform data, performing data calibration, derivation, quality control and standardization. • Archival: Storage in netCDF and spatial databases. • Distribution: Data web services using THREDDS, GeoServer and SOCIB's own RESTful services. • Catalogue: Metadata are provided through the ncISO plugin in THREDDS and GeoNetwork. • Visualization: Web and mobile applications to present SOCIB data to different user profiles. SOCIB data services and applications have been developed to respond to science and society needs (e.g. European initiatives such as EMODnet or Copernicus) by targeting different user profiles (e.g. researchers, technicians, policy and decision makers, educators, students, and society in general). For example, SOCIB has developed applications to: 1) allow researchers and technicians to access oceanographic information; 2) provide decision support for oil spill response; 3) disseminate information about the coastal state for tourists and recreational users; 4) present coastal research in educational programs; and 5) offer easy and fast access to marine information through mobile devices. In conclusion, the organizational and conceptual structure of SOCIB's Data Centre and the components developed provide an example of marine information systems within the framework of new ocean observatories and marine research infrastructures.
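
    As an illustration of the distribution stage, the sketch below reads a gridded variable from a THREDDS server over OPeNDAP using the netCDF4 package; the endpoint URL and variable names are hypothetical, not SOCIB's actual catalogue.

    ```python
    # Reading a remote gridded product over OPeNDAP (sketch; URL and
    # variable names are made up). Requires a DAP-enabled netCDF4 build.
    from netCDF4 import Dataset

    url = ("http://thredds.example.org/thredds/dodsC/"
           "ocean/sst_daily.nc")               # hypothetical endpoint
    with Dataset(url) as ds:                    # opens remotely via OPeNDAP
        sst = ds.variables["sst"][0, :, :]      # first time step only
        lat = ds.variables["lat"][:]
        lon = ds.variables["lon"][:]
    print(sst.shape, lat.min(), lon.min())
    ```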

  19. Multi-agent systems and their applications

    DOE PAGES

    Xie, Jing; Liu, Chen-Ching

    2017-07-14

    The number of distributed energy components and devices continues to increase globally. As a result, distributed control schemes are desirable for managing and utilizing these devices, together with the large amount of data. In recent years, agent-based technology has become a powerful tool for engineering applications. As a computational paradigm, multi-agent systems (MASs) provide a good solution for distributed control. In this paper, MASs and their applications are discussed. A state-of-the-art literature survey is conducted on system architectures, consensus algorithms, and multi-agent platforms, frameworks, and simulators. In addition, a distributed under-frequency load shedding (UFLS) scheme is proposed using the MAS. Simulation results for a case study are presented. The future of MASs is discussed in the conclusion.

  20. Multi-agent systems and their applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Jing; Liu, Chen-Ching

    The number of distributed energy components and devices continues to increase globally. As a result, distributed control schemes are desirable for managing and utilizing these devices, together with the large amount of data. In recent years, agent-based technology has become a powerful tool for engineering applications. As a computational paradigm, multi-agent systems (MASs) provide a good solution for distributed control. In this paper, MASs and their applications are discussed. A state-of-the-art literature survey is conducted on system architectures, consensus algorithms, and multi-agent platforms, frameworks, and simulators. In addition, a distributed under-frequency load shedding (UFLS) scheme is proposed using the MAS. Simulation results for a case study are presented. The future of MASs is discussed in the conclusion.

  1. Device Access Abstractions for Resilient Information Architecture Platform for Smart Grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubey, Abhishek; Karsai, Gabor; Volgyesi, Peter

    An open application platform distributes intelligence and control capability to local endpoints (or nodes), reducing total network traffic, improving the speed of local actions by avoiding latency, and improving reliability by reducing dependencies on numerous devices and communication interfaces. The platform must be multi-tasking and able to host multiple applications running simultaneously. Given such a system, the core functions of power grid control systems (grid state determination, low-level control, fault intelligence and reconfiguration, outage intelligence, power quality measurement, remote asset monitoring, configuration management, and power and energy management, including local distributed energy resources such as wind, solar and energy storage) can eventually be distributed. However, making this move requires extensive regression testing of systems to prove out new technologies, such as phasor measurement units (PMUs). Additionally, as the complexity of the systems increases with the inclusion of new functionality (especially at the distribution and consumer levels), hidden coupling becomes a challenge, with possible N-way interactions both known and unknown to device and application developers. Therefore, it is very important to provide core abstractions that ensure uniform operational semantics across such interactions. In this paper, we describe the pattern for abstracting device interactions that we have developed for the RIAPS platform, in the context of a microgrid control application we have developed.

  2. Device Access Abstractions for Resilient Information Architecture Platform for Smart Grid

    DOE PAGES

    Dubey, Abhishek; Karsai, Gabor; Volgyesi, Peter; ...

    2018-06-12

    An open application platform distributes intelligence and control capability to local endpoints (or nodes), reducing total network traffic, improving the speed of local actions by avoiding latency, and improving reliability by reducing dependencies on numerous devices and communication interfaces. The platform must be multi-tasking and able to host multiple applications running simultaneously. Given such a system, the core functions of power grid control systems (grid state determination, low-level control, fault intelligence and reconfiguration, outage intelligence, power quality measurement, remote asset monitoring, configuration management, and power and energy management, including local distributed energy resources such as wind, solar and energy storage) can eventually be distributed. However, making this move requires extensive regression testing of systems to prove out new technologies, such as phasor measurement units (PMUs). Additionally, as the complexity of the systems increases with the inclusion of new functionality (especially at the distribution and consumer levels), hidden coupling becomes a challenge, with possible N-way interactions both known and unknown to device and application developers. Therefore, it is very important to provide core abstractions that ensure uniform operational semantics across such interactions. In this paper, we describe the pattern for abstracting device interactions that we have developed for the RIAPS platform, in the context of a microgrid control application we have developed.

  3. A METHODOLOGY FOR ESTIMATING UNCERTAINTY OF A DISTRIBUTED HYDROLOGIC MODEL: APPLICATION TO POCONO CREEK WATERSHED

    EPA Science Inventory

    Utility of distributed hydrologic and water quality models for watershed management and sustainability studies should be accompanied by rigorous model uncertainty analysis. However, the use of complex watershed models primarily follows the traditional {calibrate/validate/predict}...

  4. The procedure execution manager and its application to Advanced Photon Source operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borland, M.

    1997-06-01

    The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under the control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.
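
    The join/abort pattern for parallel procedures can be illustrated in Python as follows; PEM itself is implemented in Tcl/Tk with dp-tcl, so this is a sketch of the concept rather than PEM's actual API, and the procedure names are invented.

    ```python
    # Conceptual join/abort pattern for parallel operation procedures
    # (not PEM's API; PEM is Tcl-based).
    import threading

    abort_event = threading.Event()     # call abort_event.set() to abort

    def procedure(name, steps):
        """A toy procedure that checks the abort flag between steps."""
        for i in range(steps):
            if abort_event.is_set():
                print(f"{name}: aborted at step {i}")
                return
            print(f"{name}: step {i}")

    # Launch two procedures in parallel, then join (wait for both).
    procs = [threading.Thread(target=procedure, args=("ramp_rf", 3)),
             threading.Thread(target=procedure, args=("set_magnets", 3))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    ```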

  5. Modeling and Simulation in Support of Testing and Evaluation

    DTIC Science & Technology

    1997-03-01

    contains standardized automated test methodology, synthetic stimuli, and environments based on TECOM Ground Truth data and physics. The VPG is a distributed... Systems Acquisition Management (FSAM) coursebook, Defense Systems Management College, January 1994. Crocker, Charles M. "Application of the Simulation

  6. Engineering Management Capstone Project EM 697: Compare and Contrast Risk Management Implementation at NASA and the US Army

    NASA Technical Reports Server (NTRS)

    Brothers, Mary Ann; Safie, Fayssal M. (Technical Monitor)

    2002-01-01

    NASA at Marshall Space Flight Center (MSFC) and the U.S. Army at Redstone Arsenal were analyzed to determine whether they were successful in implementing their risk management programs. Risk management implementation surveys were distributed to aid in this analysis. The scope is limited to NASA S&MA (Safety and Mission Assurance) at MSFC, including applicable support contractors, and the US Army Engineering Directorate, including applicable contractors, located at Redstone Arsenal. NASA has moderately higher risk management implementation survey scores than the Army. Accordingly, the implementation of the risk management program at NASA is considered good, while only two of the five survey categories indicated good risk management implementation at the Army.

  7. Application of Physics Based Distributed Hydrologic Models to Assess Anthropologic Land Disturbance in Watersheds

    NASA Astrophysics Data System (ADS)

    Downer, C. W.; Ogden, F. L.; Byrd, A. R.

    2008-12-01

    The Department of Defense (DoD) manages approximately 200,000 km2 of land within the United States on military installations and flood control and river improvement projects. The Watershed Systems Group (WSG) within the Coastal and Hydraulics Laboratory of the Engineer Research and Development Center (ERDC) supports the US Army and the US Army Corps of Engineers in both military and civil operations through the development, modification, and application of surface and sub-surface hydrologic models. The US Army has a long history of land management and of developing analytical tools to assist with the management of US Army lands, and it has invested heavily in the distributed hydrologic model GSSHA and its predecessor CASC2D. These tools have been applied at numerous military and civil sites to analyze the effects of landscape alteration on hydrologic response and related consequences, including changes in erosion and sediment transport along with associated contaminants. Examples include: impacts of military training and land management activities, impacts of changing land use (urbanization or environmental restoration), and impacts of management practices employed to abate problems, i.e., Best Management Practices (BMPs). Traditional models, such as HSPF and SWAT, are largely conceptual in nature. GSSHA attempts to simulate the physical processes actually occurring in the watershed, allowing the user to explicitly simulate changing parameter values in response to changes in land use, land cover, elevation, etc. Issues of scale raise questions: How do we best include fine-scale land use or management features in models of large watersheds? Do these features have to be represented explicitly through physical processes in the watershed domain? Can a point model, physical or empirical, suffice? Can these features be lumped into coarsely resolved numerical grids or sub-watersheds? In this presentation we discuss the US Army's distributed hydrologic models in terms of how they simulate the relevant processes and present multiple applications of the models for analyzing land management and land use change. Using these applications as a basis, we discuss issues related to the analysis of anthropogenic alterations of the landscape.

  8. Power management and distribution technology

    NASA Astrophysics Data System (ADS)

    Dickman, John Ellis

    Power management and distribution (PMAD) technology is discussed in the context of developing working systems for a piloted Mars nuclear electric propulsion (NEP) vehicle. The discussion is presented in vugraph form. The following topics are covered: applications and systems definitions; high performance components; the Civilian Space Technology Initiative (CSTI) high capacity power program; fiber optic sensors for power diagnostics; high temperature power electronics; 200 °C baseplate electronics; high temperature component characterization; a high temperature coaxial transformer; and a silicon carbide MOSFET.

  9. Power management and distribution technology

    NASA Technical Reports Server (NTRS)

    Dickman, John Ellis

    1993-01-01

    Power management and distribution (PMAD) technology is discussed in the context of developing working systems for a piloted Mars nuclear electric propulsion (NEP) vehicle. The discussion is presented in vugraph form. The following topics are covered: applications and systems definitions; high performance components; the Civilian Space Technology Initiative (CSTI) high capacity power program; fiber optic sensors for power diagnostics; high temperature power electronics; 200 °C baseplate electronics; high temperature component characterization; a high temperature coaxial transformer; and a silicon carbide MOSFET.

  10. Multicast for savings in cache-based video distribution

    NASA Astrophysics Data System (ADS)

    Griwodz, Carsten; Zink, Michael; Liepert, Michael; On, Giwon; Steinmetz, Ralf

    1999-12-01

    Internet video-on-demand (VoD) today streams videos directly from server to clients, because re-distribution is not yet established. Intranet solutions exist but are typically managed centrally. Caching may eliminate these management needs; however, existing web caching strategies are not applicable because they were designed for different conditions. We propose movie distribution by means of caching and study its feasibility from the service provider's point of view. We introduce the combination of our reliable multicast protocol LCRTP for caching hierarchies with our enhancement to the patching technique for bandwidth-friendly True VoD, not depending on network resource guarantees.
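
    A back-of-the-envelope comparison shows why patching saves server bandwidth: a late client joins the ongoing multicast and fetches only the missed prefix as a short unicast patch. The arrival times and movie length below are toy values, and the windowing rule is a simplified version of the general patching idea rather than the authors' exact scheme.

    ```python
    # Toy comparison of plain unicast versus the patching technique.
    def unicast_cost(arrivals, movie_len):
        """Server stream-minutes if every client gets a full unicast."""
        return len(arrivals) * movie_len

    def patching_cost(arrivals, movie_len):
        """Stream-minutes with one full multicast per window plus patches."""
        cost, window_start = 0.0, None
        for t in sorted(arrivals):
            if window_start is None or t - window_start >= movie_len:
                cost += movie_len           # start a fresh full multicast
                window_start = t
            else:
                cost += t - window_start    # unicast patch for missed prefix
        return cost

    arrivals = [0, 5, 12, 30, 95]           # request times in minutes
    print(unicast_cost(arrivals, 90))       # 450 stream-minutes
    print(patching_cost(arrivals, 90))      # 227 stream-minutes
    ```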

  11. Workflow-enabled distributed component-based information architecture for digital medical imaging enterprises.

    PubMed

    Wong, Stephen T C; Tjandra, Donny; Wang, Huili; Shen, Weimin

    2003-09-01

    Few information systems today offer a flexible means to define and manage the automated part of radiology processes, which provide clinical imaging services for the entire healthcare organization. Even fewer provide a coherent architecture that can easily cope with heterogeneity and the inevitable local adaptation of applications, and that can integrate clinical and administrative information to aid better clinical, operational, and business decisions. We describe an innovative enterprise architecture of image information management systems to fill these needs. Such a system is based on the interplay of production workflow management, distributed object computing, Java and Web techniques, and in-depth domain knowledge of radiology operations. Our design adopts the "4+1" architectural view approach. In this new architecture, PACS and RIS become one, while user interaction can be automated by customized workflow processes. Clinical service applications are implemented as active components. They can be reasonably substituted by applications of local adaptation and can be replicated for fault tolerance and load balancing. Furthermore, the workflow-enabled digital radiology system provides powerful query and statistical functions for managing resources and improving productivity. This work can potentially lead to a new direction in image information management. We illustrate the innovative design with examples taken from an implemented system.

  12. Irrigation system management assisted by thermal imagery and spatial statistics

    USDA-ARS?s Scientific Manuscript database

    Thermal imaging has the potential to assist with many aspects of irrigation management including scheduling water application, detecting leaky irrigation canals, and gauging the overall effectiveness of water distribution networks used in furrow irrigation. Many challenges exist for the use of therm...

  13. Managing MDO Software Development Projects

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Salas, A. O.

    2002-01-01

    Over the past decade, the NASA Langley Research Center developed a series of 'grand challenge' applications demonstrating the use of parallel and distributed computation and multidisciplinary design optimization. All but the last of these applications were focused on the high-speed civil transport vehicle; the final application focused on reusable launch vehicles. Teams of discipline experts developed these multidisciplinary applications by integrating legacy engineering analysis codes. As teams became larger and the application development became more complex with increasing levels of fidelity and numbers of disciplines, the need for applying software engineering practices became evident. This paper briefly introduces the application projects and then describes the approaches taken in project management and software engineering for each project; lessons learned are highlighted.

  14. Network Information Management Subsystem

    NASA Technical Reports Server (NTRS)

    Chatburn, C. C.

    1985-01-01

    The Deep Space Network is implementing a distributed database management system in which the data are shared among several applications and the host machines are not totally dedicated to a particular application. Since the data and resources are to be shared, the equipment must be operated carefully so that the resources are shared equitably. The current status of the project is discussed, and policies, roles, and guidelines are recommended for the organizations involved in the project.

  15. The ATLAS TAGS database distribution and management - Operational challenges of a multi-terabyte distributed database

    NASA Astrophysics Data System (ADS)

    Viegas, F.; Malon, D.; Cranshaw, J.; Dimitrov, G.; Nowak, M.; Nairz, A.; Goossens, L.; Gallas, E.; Gamboa, C.; Wong, A.; Vinek, E.

    2010-04-01

    The TAG files store summary event quantities that allow a quick selection of interesting events. These data will be produced at a nominal rate of 200 Hz and uploaded into a relational database for access from websites and other tools. The estimated database volume is 6 TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management in many respects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; and controlling the resource usage of the database, from the user query load to the strategy for cleaning and archiving old TAG data.

  16. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole-system aircraft simulation and whole-system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  17. Evaluation of Microcomputer-Based Operation and Maintenance Management Systems for Army Water/Wastewater Treatment Plant Operation.

    DTIC Science & Technology

    1986-07-01

    [Report front matter: table of contents and list of figures, covering the functions and applications of off-line computer-aided operation management systems, hardware components, plant visits, the systems reviewed for analysis of basic functions, and the progress of software system installation.]

  18. Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture

    NASA Astrophysics Data System (ADS)

    Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel

    2003-11-01

    Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.
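
    The scheduler/worker pattern described above can be sketched minimally as follows; in the real system each worker is a web service and the task flow exploits data and pipeline parallelism, both of which are omitted here, and the task names and priorities are invented.

    ```python
    # Minimal priority scheduler dispatching media-analysis tasks to workers
    # (conceptual sketch only; the deployed system uses web-service workers).
    import itertools, queue, threading

    tasks = queue.PriorityQueue()
    seq = itertools.count()        # tie-breaker so payloads never compare

    def worker(wid):
        """Pull the highest-priority task and run it until shut down."""
        while True:
            _, _, name = tasks.get()
            if name is None:                   # poison pill: shut down
                return
            print(f"worker {wid}: running {name}")

    for prio, task in [(2, "index_audio"), (1, "transcode"), (3, "keyframes")]:
        tasks.put((prio, next(seq), task))

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(2)]
    for t in threads:
        t.start()
    for _ in threads:                          # one pill per worker
        tasks.put((99, next(seq), None))
    for t in threads:
        t.join()
    ```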

  19. Technologies for network-centric C4ISR

    NASA Astrophysics Data System (ADS)

    Dunkelberger, Kirk A.

    2003-07-01

    Three technologies form the heart of any network-centric command, control, communication, intelligence, surveillance, and reconnaissance (C4ISR) system: distributed processing, reconfigurable networking, and distributed resource management. Distributed processing, enabled by automated federation, mobile code, intelligent process allocation, dynamic multiprocessing groups, checkpointing, and other capabilities, creates a virtual peer-to-peer computing network across the force. Reconfigurable networking, consisting of content-based information exchange, dynamic ad-hoc routing, information operations (perception management) and other component technologies, forms the interconnect fabric for fault-tolerant interprocessor and node communication. Distributed resource management, which provides the means for distributed cooperative sensor management, foe sensor utilization, opportunistic collection, symbiotic inductive/deductive reasoning and other applications, provides the canonical algorithms for network-centric enterprises and warfare. This paper introduces these three core technologies and briefly discusses a sampling of their component technologies and their individual contributions to network-centric enterprises and warfare. Based on the implied requirements, two new algorithms are defined and characterized which provide critical building blocks for network centricity: distributed asynchronous auctioning and predictive dynamic source routing. The first provides a reliable, efficient, effective approach to near-optimal assignment problems; the algorithm has been demonstrated to be a viable implementation for ad-hoc command and control, object/sensor pairing, and weapon/target assignment. The second is founded on traditional dynamic source routing (from mobile ad-hoc networking), but leverages the results of ad-hoc command and control (from the contributed auctioning algorithm) to achieve significant increases in connection reliability through forward prediction. Emphasis is placed on the advantages gained from the closed-loop interaction of the multiple technologies in the network-centric application environment.

  20. Intelligent Load Manager (LOADMAN): Application of Expert System Technology to Load Management Problems in Power Generation and Distribution Systems

    DTIC Science & Technology

    1988-08-10

    addressed to it, the wall-receptacle module energizes a relay. Modules can be built to use a triac instead and have the capacity to increase or decrease... modulated by other constraints for a safe, functional and effective power distribution system. 2.2.3 Backup Equipment Alternate power sources are...environments have limited sensor capability and no remote control capability. However, future enhancements to current equipment, such as frequency- modulated

  1. Software Management System

    NASA Technical Reports Server (NTRS)

    1994-01-01

    A software management system, originally developed for Goddard Space Flight Center (GSFC) by Century Computing, Inc., has evolved from a menu- and command-oriented system to a state-of-the-art user interface development system supporting high-resolution graphics workstations. The Transportable Applications Environment (TAE) was initially distributed through COSMIC and backed by a TAE support office at GSFC. In 1993, Century Computing assumed the support and distribution functions and began marketing TAE Plus, the system's latest version. The software is easy to use and does not require programming experience.

  2. Application of Bayesian Classification to Content-Based Data Management

    NASA Technical Reports Server (NTRS)

    Lynnes, Christopher; Berrick, S.; Gopalan, A.; Hua, X.; Shen, S.; Smith, P.; Yang, K-Y.; Wheeler, K.; Curry, C.

    2004-01-01

    The high volume of Earth Observing System data has proven to be challenging to manage for data centers and users alike. At the Goddard Earth Sciences Distributed Active Archive Center (GES DAAC), about 1 TB of new data are archived each day. Distribution to users is also about 1 TB/day. A substantial portion of this distribution is MODIS calibrated radiance data, which has a wide variety of uses. However, much of the data is not useful for a particular user's needs: for example, ocean color users typically need oceanic pixels that are free of cloud and sun-glint. The GES DAAC is using a simple Bayesian classification scheme to rapidly classify each pixel in the scene in order to support several experimental content-based data services for near-real-time MODIS calibrated radiance products (from Direct Readout stations). Content-based subsetting would allow distribution of, say, only clear pixels to the user if desired. Content-based subscriptions would distribute data to users only when they fit the user's usability criteria in their area of interest within the scene. Content-based cache management would retain more useful data on disk for easy online access. The classification may even be exploited in an automated quality assessment of the geolocation product. Though initially to be demonstrated at the GES DAAC, these techniques have applicability in other resource-limited environments, such as spaceborne data systems.
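
    The per-pixel labeling idea above lends itself to a small sketch. The following is a minimal Gaussian naive Bayes classifier; the class names, band values and training data are all hypothetical illustrations, not the GES DAAC implementation, which assumes Gaussian class-conditional band statistics only for this example.

      # Minimal Gaussian naive Bayes pixel classifier (illustrative only).
      import numpy as np

      CLASSES = ["clear_ocean", "cloud", "sun_glint"]      # hypothetical labels

      def fit(train_pixels, train_labels):
          """Estimate per-class priors, band means and band variances."""
          params = {}
          for c in CLASSES:
              x = train_pixels[train_labels == c]
              params[c] = (len(x) / len(train_pixels),
                           x.mean(axis=0), x.var(axis=0) + 1e-9)
          return params

      def classify(pixels, params):
          """Return the most probable class per pixel (log-space for stability)."""
          scores = []
          for c in CLASSES:
              prior, mu, var = params[c]
              loglik = -0.5 * (np.log(2 * np.pi * var)
                               + (pixels - mu) ** 2 / var).sum(axis=1)
              scores.append(np.log(prior) + loglik)
          return np.array(CLASSES)[np.argmax(scores, axis=0)]

      rng = np.random.default_rng(0)                       # synthetic demo data
      X = np.vstack([rng.normal(m, 0.1, (50, 2)) for m in (0.1, 0.8, 0.5)])
      y = np.repeat(CLASSES, 50)
      print(classify(np.array([[0.12, 0.09], [0.79, 0.83]]), fit(X, y)))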

  3. Simplified Distributed Computing

    NASA Astrophysics Data System (ADS)

    Li, G. G.

    2006-05-01

    Distributed computing ranges from high-performance parallel computing and Grid computing to environments in which the idle CPU cycles and storage space of numerous networked systems are harnessed to work together over the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications, based on existing technology and hardware resources. The system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for that job. The job handler pre-processes the job, partitions it into relatively independent tasks, and distributes the tasks into the processing queue. Task handlers pick up the related tasks, process them, and put the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished. A resource manager configures and monitors all participating nodes, and a distributed agent deployed on each participating node manages software downloads and reports status. The processing queue is the key to the success of this distributed system. We use BEA's Weblogic JMS queue in our implementation; it guarantees message delivery and has message priority and retry features, so tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms, handling algorithms and applications developed in any language on any platform. J2EE adaptors are provided to connect existing applications to the system, so that applications and algorithms running on Unix, Linux and Windows can all work together. The system is easy and fast to develop because it is based on well-adopted industry technology, and it is highly scalable and heterogeneous. It is an open system: any number and type of machines can join to provide computational power. This asynchronous message-based system can achieve response times on the order of seconds. For efficiency, communication between distributed tasks is usually done at the start and end of a task, but intermediate task status can also be provided.
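
    The job-handler/task-handler pattern described above can be sketched in a few lines. The version below stands in a local queue.Queue for the JMS processing queue and sums list chunks as its "tasks"; all names and the toy workload are illustrative, not the system's actual code.

      # Sketch of the job-handler / task-handler pattern with a local queue.
      import queue, threading

      task_q, result_q = queue.Queue(), queue.Queue()

      def job_handler(data, n_tasks):
          """Partition a job into independent tasks, enqueue them, assemble results."""
          chunks = [data[i::n_tasks] for i in range(n_tasks)]
          for i, chunk in enumerate(chunks):
              task_q.put((i, chunk))
          results = [result_q.get() for _ in range(n_tasks)]
          return [r for _, r in sorted(results)]        # overall solution

      def task_handler():
          """Pick up tasks, process them, put results back (here: sum each chunk)."""
          while True:
              i, chunk = task_q.get()
              result_q.put((i, sum(chunk)))
              task_q.task_done()

      for _ in range(4):                                # worker pool
          threading.Thread(target=task_handler, daemon=True).start()

      print(job_handler(list(range(100)), n_tasks=4))   # -> four partial sums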

  4. Using ESAP Software for Predicting the Spatial Distributions of NDVI and Transpiration of Cotton

    USDA-ARS?s Scientific Manuscript database

    The normalized difference vegetation index (NDVI) has many applications in agricultural management, including monitoring real-time crop coefficients for estimating crop evapotranspiration (ET). However, frequent monitoring of NDVI as needed in such applications is generally not feasible from aerial ...

  5. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing method based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the cluster's computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing, many computing nodes are called on to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency for different image data volumes and numbers of users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images in an actual Hadoop service system.
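
    The tile-oriented MapReduce idea can be illustrated without a Hadoop cluster. The sketch below mimics a map phase that emits (zoom level, tile) pairs and a reduce phase that groups tiles into pyramid layers; the tile records are invented placeholders, not the paper's actual job.

      # Conceptual MapReduce pass over image tiles (illustrative only).
      from collections import defaultdict

      def map_phase(tiles):
          """Emit (zoom_level, tile) pairs, one per stored image block."""
          for tile in tiles:
              yield tile["zoom"], tile["pixels"]

      def reduce_phase(pairs):
          """Group tiles by zoom level, e.g. to build one pyramid layer per level."""
          layers = defaultdict(list)
          for zoom, pixels in pairs:
              layers[zoom].append(pixels)
          return layers

      tiles = [{"zoom": 0, "pixels": b"..."}, {"zoom": 1, "pixels": b"..."}]
      print({z: len(v) for z, v in reduce_phase(map_phase(tiles)).items()})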

  6. Artificial intelligence and space power systems automation

    NASA Technical Reports Server (NTRS)

    Weeks, David J.

    1987-01-01

    Various applications of artificial intelligence to space electrical power systems are discussed, with an overview of completed, ongoing, and planned knowledge-based system activities. These applications include the Nickel-Cadmium Battery Expert System (NICBES), the expert system interfaced with the Hubble Space Telescope electrical power system test bed; the early work with the Space Station Experiment Scheduler (SSES); the three expert systems under development in the space station advanced development effort in the core module power management and distribution system test bed; planned cooperation of expert systems in the Core Module Power Management and Distribution (CM/PMAD) system breadboard with expert systems for the space station at other research centers; and the intelligent data reduction expert system under development.

  7. Productivity Measurement: An Analytic Approach

    DTIC Science & Technology

    1983-09-01

    Report documentation excerpt: LMDC-TR-83-4, prepared by the Leadership and Management Development Center (AU), Air University, Maxwell Air Force Base, Alabama 36112; Charles R. White, USAFRES; September 1983. Approved for public release; distribution unlimited.

  8. The QuakeSim Project: Web Services for Managing Geophysical Data and Applications

    NASA Astrophysics Data System (ADS)

    Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet

    2008-04-01

    We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.

  9. The Research of Paper Datum Management Information System

    NASA Astrophysics Data System (ADS)

    Zhigang, Ji; Gaifang, Niu; Lingxi, Liu

    Paper management is becoming an important task in many colleges and universities, and the digitization of paper management is a significant part of the informatization of college management. Taking the development of the paper management system as an opportunity, we have studied a universal framework for a comprehensive management system spanning departments and geographical locations. The framework supports building large, complicated distributed applications rapidly, efficiently, extensibly and safely, and it offers a new approach to standardizing paper information management.

  10. Use of the gamma distribution to represent monthly rainfall in Africa for drought monitoring applications

    USGS Publications Warehouse

    Husak, Gregory J.; Michaelsen, Joel C.; Funk, Christopher C.

    2007-01-01

    Evaluating a range of scenarios that accurately reflect precipitation variability is critical for water resource applications. Inputs to these applications can be provided using location- and interval-specific probability distributions. These distributions make it possible to estimate the likelihood of rainfall being within a specified range. In this paper, we demonstrate the feasibility of fitting cell-by-cell probability distributions to grids of monthly interpolated, continent-wide data. Future work will then detail applications of these grids to improved satellite-remote sensing of drought and interpretations of probabilistic climate outlook forum forecasts. The gamma distribution is well suited to these applications because it is fairly familiar to African scientists, and capable of representing a variety of distribution shapes. This study tests the goodness-of-fit using the Kolmogorov–Smirnov (KS) test, and compares these results against another distribution commonly used in rainfall events, the Weibull. The gamma distribution is suitable for roughly 98% of the locations over all months. The techniques and results presented in this study provide a foundation for use of the gamma distribution to generate drivers for various rain-related models. These models are used as decision support tools for the management of water and agricultural resources as well as food reserves by providing decision makers with ways to evaluate the likelihood of various rainfall accumulations and assess different scenarios in Africa. 
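
    As a small illustration of the cell-by-cell procedure, the snippet below fits a gamma distribution to one cell's monthly rainfall and applies the KS test with scipy; the rainfall values are made-up placeholders, not data from the study.

      # Gamma fit plus Kolmogorov-Smirnov goodness-of-fit for one grid cell.
      from scipy import stats

      rain_mm = [12.0, 30.5, 8.2, 55.1, 21.7, 40.3, 17.9, 26.4]   # placeholder data
      shape, loc, scale = stats.gamma.fit(rain_mm, floc=0)        # fix location at 0
      ks_stat, p_value = stats.kstest(rain_mm, "gamma", args=(shape, loc, scale))
      print(f"shape={shape:.2f} scale={scale:.1f} KS p={p_value:.2f}")
      # p above the chosen significance level -> the gamma fit is not rejected;
      # gamma.cdf/gamma.ppf then give rainfall likelihoods for drought thresholds.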

  11. U.S. Air Force Application of a U.S. Army Transportation Capability Assessment Methodology.

    DTIC Science & Technology

    1987-09-01

    Excerpt from the report's reference list: Management Command Transportation Engineering Agency, Newport News VA, July 1986; Lambert, Douglas M. and James R. Stock, Strategic Physical Distribution Management, Homewood IL: Richard D. Irwin, Inc., 1982; Mabe, Capt Richard D. and Lt Col Paul A. Reid, Syllabus and Notetaking Package LOG

  12. Potential and challenges in use of thermal imaging for humid region irrigation system management

    USDA-ARS?s Scientific Manuscript database

    Thermal imaging has shown potential to assist with many aspects of irrigation management including scheduling water application, detecting leaky irrigation canals, and gauging the overall effectiveness of water distribution networks used in furrow irrigation. Many challenges exist for the use of the...

  13. A Performance Support Tool for Cisco Training Program Managers

    ERIC Educational Resources Information Center

    Benson, Angela D.; Bothra, Jashoda; Sharma, Priya

    2004-01-01

    Performance support systems can play an important role in corporations by managing and allowing distribution of information more easily. These systems run the gamut from simple paper job aids to sophisticated computer- and web-based software applications that support the entire corporate supply chain. According to Gery (1991), a performance…

  14. Structural Dynamics of Management Zones for the Site-Specific Control of Tarnished Plant Bugs in Cotton

    USDA-ARS?s Scientific Manuscript database

    Precision-based agricultural application of insecticide relies on a non-random distribution of pests; tarnished plant bugs (Lygus lineolaris) are known to prefer vigorously growing patches of cotton. Management zones for various crops have been readily defined using NDVI (Normalized Difference Vege...

  15. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement because of the contradictions between large-scale spatial data and limited network bandwidth, and between short-lived sessions and long transaction processing. The differences among, and trends of, CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server and a spatial data server. The design and implementation of the GIS client components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (session beans and entity beans) are explained. In addition, experiments on the relationship between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database on the Internet is presented.

  16. Distributed architecture and distributed processing mode in urban sewage treatment

    NASA Astrophysics Data System (ADS)

    Zhou, Ruipeng; Yang, Yuanming

    2017-05-01

    Rural sewage treatment facilities are decentralized over a broad area, which makes their operation and management difficult. Based on an analysis of rural sewage treatment models and in response to these challenges, we describe the principle, structure and function of a distributed remote monitoring system with networking and network communication technologies at its core, and use a case study to explore the system's features in the daily operation and management of decentralized rural sewage treatment facilities. Practice shows that the remote monitoring system provides technical support for the long-term operation and effective supervision of the facilities while reducing operating, maintenance and supervision costs.

  17. Development of a mobile borehole investigation software using augmented reality

    NASA Astrophysics Data System (ADS)

    Son, J.; Lee, S.; Oh, M.; Yun, D. E.; Kim, S.; Park, H. D.

    2015-12-01

    Augmented reality (AR) is one of the fastest-developing technologies in the smartphone and IT areas. While various applications have been developed using AR, few geological applications take advantage of it. In this study, a smartphone application to manage boreholes using AR has been developed. The application consists of three major modules: an AR module, a map module and a data management module. The AR module calculates the orientation of the device and uses it to display nearby boreholes distributed in three dimensions. This module shows the boreholes in a transparent layer over a live camera view, so the user can find them and understand the overall characteristics of the underground geology. The map module displays the boreholes on a 2D map to show their distribution and the location of the user. The database module uses the SQLite library, which has suitable characteristics for mobile platforms, and Binary XML is adopted to allow additional customized data. The application is able to provide underground information in an intuitive and refined form and to reduce the time and equipment required for geological field investigations.

  18. Mobile healthcare information management utilizing Cloud Computing and Android OS.

    PubMed

    Doukas, Charalampos; Pliakas, Thomas; Maglogiannis, Ilias

    2010-01-01

    Cloud Computing provides functionality for managing information data in a distributed, ubiquitous and pervasive manner supporting several platforms, systems and applications. This work presents the implementation of a mobile system that enables electronic healthcare data storage, update and retrieval using Cloud Computing. The mobile application is developed using Google's Android operating system and provides management of patient health records and medical images (supporting DICOM format and JPEG2000 coding). The developed system has been evaluated using the Amazon's S3 cloud service. This article summarizes the implementation details and presents initial results of the system in practice.

  19. One System for Blood Program Information Management

    PubMed Central

    Gero, Michael G.; Klickstein, Judith S.; Hurst, Timm M.

    1980-01-01

    A system which integrates the diverse functions of a Blood Program within one structure is being assembled at the American National Red Cross Blood Services, Northeast Region. When finished, it will provide technical support for collection scheduling, donor recruitment, recordkeeping, laboratory processing, inventory management, HLA typing and matching, distribution, and administration within the Program. By linking these applications, a reporting structure useful to top management will be provided.

  20. Linear Power-Flow Models in Multiphase Distribution Networks: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Dall'Anese, Emiliano

    This paper considers multiphase unbalanced distribution systems and develops approximate power-flow models where bus voltages, line currents, and powers at the point of common coupling are linearly related to the nodal net power injections. The linearization approach is grounded on a fixed-point interpretation of the AC power-flow equations, and it is applicable to distribution systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. The proposed linear models can facilitate the development of computationally affordable optimization and control applications -- from advanced distribution management system settings to online and distributed optimization routines. Performance of the proposed models is evaluated on different test feeders.
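
    A toy single-phase analogue conveys the fixed-point idea behind such linearizations (the paper itself treats multiphase unbalanced systems). In the sketch below, the exact voltages solve V = w + Z conj(S/V), and a linear surrogate is obtained by evaluating the load term at the no-load profile w; the feeder data are invented.

      # Fixed-point (Z-bus) power flow on a 2-load-bus toy feeder, plus the
      # one-step linear surrogate. All numbers are illustrative.
      import numpy as np

      y = 1.0 / (0.01 + 0.02j)                       # per-unit line admittance
      Y_LL = np.array([[2 * y, -y], [-y, y]])        # load buses, slack eliminated
      Y_L0 = np.array([-y, 0.0])                     # coupling to the slack bus
      V0 = 1.0 + 0j                                  # slack voltage
      S = np.array([-0.05 - 0.02j, -0.03 - 0.01j])   # net injections (loads < 0)

      Zbus = np.linalg.inv(Y_LL)
      w = -Zbus @ (Y_L0 * V0)                        # no-load voltage profile

      V = w.copy()
      for _ in range(20):                            # fixed-point iteration
          V = w + Zbus @ np.conj(S / V)

      V_lin = w + Zbus @ np.conj(S / w)              # linear model in S
      print(np.abs(V), np.abs(V_lin))                # exact vs. linearized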

  1. Assessing the Application of Three-Dimensional Collaborative Technologies within an E-Learning Environment

    ERIC Educational Resources Information Center

    McArdle, Gavin; Bertolotto, Michela

    2012-01-01

    Today, the Internet plays a major role in distributing learning material within third level education. Multiple online facilities provide access to educational resources. While early systems relied on webpages, which acted as repositories for learning material, nowadays sophisticated online applications manage and deliver learning resources.…

  2. 78 FR 17235 - Global X Funds, et al.; Notice of Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-20

    ... SECURITIES AND EXCHANGE COMMISSION [Investment Company Act Release No. 30426; 812-14079] Global X... relying on rule 12d1-2 under the 1940 Act to invest in certain financial instruments. Applicants: Global X Funds (``Trust''), Global X Management Company LLC (``Adviser'') and SEI Investment Distribution Co...

  3. Leveraging AMI data for distribution system model calibration and situational awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini

    The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations, which will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. The methods are demonstrated on a university distribution network with a state-of-the-art real-time and historical smart meter data infrastructure.

  4. Leveraging AMI data for distribution system model calibration and situational awareness

    DOE PAGES

    Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini; ...

    2015-01-15

    The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations, which will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. The methods are demonstrated on a university distribution network with a state-of-the-art real-time and historical smart meter data infrastructure.

  5. Wireless remote control of clinical image workflow: using a PDA for off-site distribution and disaster recovery.

    PubMed

    Documet, Jorge; Liu, Brent J; Documet, Luis; Huang, H K

    2006-07-01

    This paper describes a picture archiving and communication system (PACS) tool based on Web technology that remotely manages medical images between a PACS archive and remote destinations. Successfully implemented in a clinical environment and also demonstrated for the past 3 years at the conferences of various organizations, including the Radiological Society of North America, this tool provides a very practical and simple way to manage a PACS, including off-site image distribution and disaster recovery. The application is robust and flexible and can be used on a standard PC workstation or a Tablet PC, but more important, it can be used with a personal digital assistant (PDA). With a PDA, the Web application becomes a powerful wireless and mobile image management tool. The application's quick and easy-to-use features allow users to perform Digital Imaging and Communications in Medicine (DICOM) queries and retrievals with a single interface, without having to worry about the underlying configuration of DICOM nodes. In addition, this frees up dedicated PACS workstations to perform their specialized roles within the PACS workflow. This tool has been used at Saint John's Health Center in Santa Monica, California, for 2 years. The average number of queries per month is 2,021, with 816 C-MOVE retrieve requests. Clinical staff members can use PDAs to manage image workflow and PACS examination distribution conveniently for off-site consultations by referring physicians and radiologists and for disaster recovery. This solution also improves radiologists' effectiveness and efficiency in health care delivery both within radiology departments and for off-site clinical coverage.

  6. AIMES Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katz, Daniel S; Jha, Shantenu; Weissman, Jon

    2017-01-31

    This is the final technical report for the AIMES project. Many important advances in science and engineering are due to large-scale distributed computing. Notwithstanding this reliance, we are still learning how to design and deploy large-scale production Distributed Computing Infrastructures (DCI). This is evidenced by missing design principles for DCI, and an absence of generally acceptable and usable distributed computing abstractions. The AIMES project was conceived against this backdrop, following on the heels of a comprehensive survey of scientific distributed applications. AIMES laid the foundations to address the tripartite challenge of dynamic resource management, integrating information, and portable and interoperable distributed applications. Four abstractions were defined and implemented: skeleton, resource bundle, pilot, and execution strategy. The four abstractions were implemented into software modules and then aggregated into the AIMES middleware. This middleware successfully integrates information across the application layer (skeletons) and resource layer (bundles), derives a suitable execution strategy for the given skeleton and enacts its execution by means of pilots on one or more resources, depending on the application requirements, and resource availabilities and capabilities.

  7. AIMES Final Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weissman, Jon; Katz, Dan; Jha, Shantenu

    2017-01-31

    This is the final technical report for the AIMES project. Many important advances in science and engineering are due to large-scale distributed computing. Notwithstanding this reliance, we are still learning how to design and deploy large-scale production Distributed Computing Infrastructures (DCI). This is evidenced by missing design principles for DCI, and an absence of generally acceptable and usable distributed computing abstractions. The AIMES project was conceived against this backdrop, following on the heels of a comprehensive survey of scientific distributed applications. AIMES laid the foundations to address the tripartite challenge of dynamic resource management, integrating information, and portable and interoperable distributed applications. Four abstractions were defined and implemented: skeleton, resource bundle, pilot, and execution strategy. The four abstractions were implemented into software modules and then aggregated into the AIMES middleware. This middleware successfully integrates information across the application layer (skeletons) and resource layer (bundles), derives a suitable execution strategy for the given skeleton and enacts its execution by means of pilots on one or more resources, depending on the application requirements, and resource availabilities and capabilities.

  8. An access control model with high security for distributed workflow and real-time application

    NASA Astrophysics Data System (ADS)

    Han, Ruo-Fei; Wang, Hou-Xiang

    2007-11-01

    The traditional mandatory access control (MAC) policy is regarded as strict but inflexible. MAC is so restrictive that few information systems adopt it at the cost of convenience, except in particular cases with high security requirements such as military or government applications. However, with increasing requirements for flexibility, even some access control systems in military applications have switched to role-based access control (RBAC), which is well known for its flexibility. RBAC meets the demand for flexibility but is weak in dynamic authorization and consequently does not fit well in workflow management systems. Task-role-based access control (T-RBAC) was introduced to solve this problem; it combines the advantages of RBAC and of task-based access control (TBAC), which uses tasks to manage permissions dynamically. To satisfy the requirements of systems that are distributed, organized around well-defined workflow processes, and critical with respect to timing accuracy, this paper analyzes the spirit of MAC and introduces it into an improved T&RBAC model based on T-RBAC. Finally, a conceptual task-role-based access control model with high security for distributed workflow and real-time applications (A_T&RBAC) is built, and its performance is briefly analyzed.
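
    In the T-RBAC spirit described above, permissions attach to tasks, tasks are assigned to roles, and a grant is only valid while the workflow has activated the task. A minimal sketch follows; all names are illustrative, not the paper's A_T&RBAC model.

      # Minimal task-role permission check in the spirit of T-RBAC.
      ROLE_TASKS = {"operator": {"approve_step"}, "auditor": {"review_log"}}
      TASK_PERMS = {"approve_step": {"write"}, "review_log": {"read"}}
      ACTIVE_TASKS = {"approve_step"}       # set by the workflow engine at runtime

      def can(user_roles, task, perm):
          """Grant perm only if some role holds the task and the task is active."""
          return (task in ACTIVE_TASKS
                  and perm in TASK_PERMS.get(task, set())
                  and any(task in ROLE_TASKS.get(r, set()) for r in user_roles))

      print(can({"operator"}, "approve_step", "write"))   # True while active
      print(can({"auditor"}, "approve_step", "write"))    # False: role lacks task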

  9. Application of distributed optical fiber sensing technologies to the monitoring of leakage and abnormal disturbance of oil pipeline

    NASA Astrophysics Data System (ADS)

    Yang, Xiaojun; Zhu, Xiaofei; Deng, Chi; Li, Junyi; Liu, Cheng; Yu, Wenpeng; Luo, Hui

    2017-10-01

    To improve the management and monitoring of leakage and abnormal disturbances along long-distance oil pipelines, a distributed optical fiber temperature and vibration sensing system was employed to test its feasibility for the health monitoring of a domestic oil pipeline. Simulated leakage and abnormal disturbance events were performed in the experiment. It is demonstrated that leakage and abnormal disturbance events can be monitored and located accurately with the distributed optical fiber sensing system, which exhibits good sensitivity, reliability, and ease of operation and maintenance, and shows good prospects for market application.

  10. OPPORTUNITIES IN NITROGEN MANAGEMENT RESEARCH; IMPROVING APPLICATIONS FOR PROVEN TECHNOLOGIES AND IDENTIFYING NEW TOOLS FOR MANAGING NITROGEN FLUX AND INPUT IN ECOSYSTEMS

    EPA Science Inventory

    The presence and distribution of undesirable quantities of bioavailable nitrogenous compounds in the environment are issues of long-standing concern. Importantly for us today, deleterious effects associated with high levels of nitrogen in the ecosystem are becoming everyday news...

  11. Foundational Report Series: Advanced Distribution Management Systems for Grid Modernization, High-Level Use Cases for DMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui; Lu, Xiaonan; Martino, Sal

    Many distribution management systems (DMS) projects have achieved limited success because the electric utility did not sufficiently plan for actual use of the DMS functions in the control room environment. As a result, end users were not clear on how to use the new application software in actual production environments with existing, well-established business processes. An important first step in the DMS implementation process is development and refinement of the “to be” business processes. Development of use cases for the required DMS application functions is a key activity that leads to the formulation of the “to be” requirements. It is also an important activity that is needed to develop specifications that are used to procure a new DMS.

  12. Study of power management technology for orbital multi-100KWe applications. Volume 2: Study results

    NASA Technical Reports Server (NTRS)

    Mildice, J. W.

    1980-01-01

    The preliminary requirements and technology advances required for cost-effective space power management systems for multi-100 kilowatt requirements were identified. System requirements were defined by establishing a baseline space platform in the 250 kWe range and examining typical user loads and interfaces. The most critical design parameters identified for detailed analysis include: increased distribution voltages and space plasma losses; the choice between ac and dc distribution systems; shuttle servicing effects on reliability; life cycle costs; and frequency impacts on the power management system and payload systems for ac transmission. The first choice for a power management system of this kind and size range is a hybrid ac/dc combination with the following major features: modular design and construction, sized for minimum weight/life-cycle cost; high-voltage transmission (100 Vac RMS); medium-voltage array (≥ 440 Vdc); resonant inversion; transformer rotary joint; high-frequency power transmission line (≥ 20 kHz); energy storage on the array side of the rotary joint; full redundancy; and 10-year life with minimal replacement and repair.

  13. Application of ESE Data and Tools to Air Quality Management: Services for Helping the Air Quality Community use ESE Data (SHAirED)

    NASA Technical Reports Server (NTRS)

    Falke, Stefan; Husar, Rudolf

    2011-01-01

    The goal of this REASoN applications and technology project is to deliver and use Earth Science Enterprise (ESE) data and tools in support of air quality management. Its scope falls within the domain of air quality management and aims to develop a federated air quality information sharing network that includes data from NASA, EPA, US states and others. Project goals were achieved through access to satellite and ground observation data, web services information technology, interoperability standards, and air quality community collaboration. In contributing to a network of NASA ESE data in support of particulate air quality management, the project developed access to distributed data, built Web infrastructure, and created tools for data processing and analysis. The key technologies used in the project include emerging web services for developing self-describing and modular data access and processing tools, and a service-oriented architecture for chaining web services together to assemble customized air quality management applications. The technology and tools required for this project were developed within DataFed.net, a shared infrastructure that supports collaborative atmospheric data sharing and processing web services. Much of the collaboration was facilitated through community interactions in the Federation of Earth Science Information Partners (ESIP) Air Quality Workgroup. The main activities during the project that successfully advanced DataFed, enabled air quality applications and established community-oriented infrastructures were: developing access to distributed data (surface and satellite); building Web infrastructure to support data access, processing and analysis; creating tools for data processing and analysis; and fostering air quality community collaboration and interoperability.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horiike, S.; Okazaki, Y.

    This paper describes a performance estimation tool developed for the modeling and simulation of open distributed energy management systems to support their design. Discrete event simulation with detailed models is used for efficient performance estimation. The tool includes basic models constituting a platform, e.g., Ethernet, communication protocols, and operating systems. Application software is modeled by specifying CPU time, disk access size, communication data size, etc. Different types of system configurations for various system activities can easily be studied. Simulation examples show how the tool is utilized for the efficient design of open distributed energy management systems.
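
    A tiny event-driven skeleton conveys the modeling style described above, where application software is represented only by its CPU and communication delays; the delays and module names below are invented, not the tool's models.

      # Minimal discrete-event loop: modules are modeled purely as timed events.
      import heapq

      events = []          # (time, description) min-heap
      def schedule(t, what): heapq.heappush(events, (t, what))

      schedule(0.0, "EMS app: request meter data")
      schedule(0.004, "protocol stack: frame on Ethernet")   # modeled comm delay
      schedule(0.012, "server: 8 ms CPU burst, reply")       # modeled CPU time

      clock = 0.0
      while events:
          clock, what = heapq.heappop(events)
          print(f"t={clock * 1e3:6.1f} ms  {what}")
      print(f"end-to-end estimate: {clock * 1e3:.1f} ms")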

  15. Master list and index to NASA directives

    NASA Technical Reports Server (NTRS)

    1984-01-01

    All NASA management directives in force as of August 1, 1984 are listed by major subject headings showing number, effective date, title, responsible office, and distribution code. Delegations of authority in print by that date are listed numerically as well as by the installation or office to which special authority is assigned. Other consolidated lists show all management handbooks, directives applicable to the Jet Propulsion Laboratory, directives published in the Code of Federal Regulations, complementary manuals, and NASA safety standards. Distribution policies and instructions for ordering directives are included.

  16. Master list and index to NASA directives

    NASA Technical Reports Server (NTRS)

    1982-01-01

    All NASA management directives in force as of August 1, 1982 are listed by major subject headings showing number, effective date, title, responsible office, and distribution code. Delegations of authority in print by that date are listed numerically as well as by the installation or office to which special authority is assigned. Other consolidated lists show all management handbooks, directives applicable to the Jet Propulsion Laboratory, directives published in the Code of Federal Regulations, complementary manuals, and NASA safety standards. Distribution policies and instructions for ordering directives are included.

  17. Progress in distributed fiber optic temperature sensing

    NASA Astrophysics Data System (ADS)

    Hartog, Arthur H.

    2002-02-01

    The paper reviews the adoption of distributed temperature sensing (DTS) technology based on Raman backscatter. With one company alone having installed more than 400 units, DTS is becoming accepted practice in several applications, notably in energy cable monitoring, specialised fire detection and oil production monitoring. The paper provides case studies in these applications. In each case the benefit (whether economic or safety) is addressed, together with key application engineering issues. The latter range from the selection and installation of the fibre sensor to the specific performance requirements of the opto-electronic equipment and the issues of data management. The paper also addresses advanced applications of distributed sensing, notably the problem of monitoring very long ranges, which applies to subsea DC energy cables or subsea oil wells linked to platforms through very long flowlines (e.g. 30 km). These applications are creating the need for a new generation of DTS systems able to achieve measurements at up to 40 km with very high temperature resolution, without sacrificing spatial resolution. This challenge is likely to drive the development of new concepts in the field of distributed sensing.

  18. A Review of Distributed Optical Fiber Sensors for Civil Engineering Applications

    PubMed Central

    Barrias, António; Casas, Joan R.; Villalba, Sergi

    2016-01-01

    The application of structural health monitoring (SHM) systems to civil engineering structures has been a developing topic of study and practice that has allowed for a better understanding of structures’ conditions and has increasingly led to more cost-effective management of those infrastructures. In this field, the use of fiber optic sensors has been studied, discussed and practiced with encouraging results. The ability to understand and monitor the distributed behavior of extensive stretches of critical structures is an enormous advantage that distributed fiber optic sensing provides to SHM systems. In the past decade, several R & D studies have been performed with the goal of improving the knowledge and developing new techniques associated with the application of distributed optical fiber sensors (DOFS), in order to widen the range of applications of these sensors and to obtain more correct and reliable data. This paper presents, after a brief introduction to the theoretical background of DOFS, the latest developments related to the improvement of these products, presenting a wide range of laboratory experiments as well as an extended review of their diverse applications in civil engineering structures. PMID:27223289

  19. A Review of Distributed Optical Fiber Sensors for Civil Engineering Applications.

    PubMed

    Barrias, António; Casas, Joan R; Villalba, Sergi

    2016-05-23

    The application of structural health monitoring (SHM) systems to civil engineering structures has been a developing topic of study and practice that has allowed for a better understanding of structures' conditions and has increasingly led to more cost-effective management of those infrastructures. In this field, the use of fiber optic sensors has been studied, discussed and practiced with encouraging results. The ability to understand and monitor the distributed behavior of extensive stretches of critical structures is an enormous advantage that distributed fiber optic sensing provides to SHM systems. In the past decade, several R & D studies have been performed with the goal of improving the knowledge and developing new techniques associated with the application of distributed optical fiber sensors (DOFS), in order to widen the range of applications of these sensors and to obtain more correct and reliable data. This paper presents, after a brief introduction to the theoretical background of DOFS, the latest developments related to the improvement of these products, presenting a wide range of laboratory experiments as well as an extended review of their diverse applications in civil engineering structures.

  20. Software Quality Measurement for Distributed Systems. Volume 3. Distributed Computing Systems: Impact on Software Quality.

    DTIC Science & Technology

    1983-07-01

    Excerpt: the report examines the impact of distributed computing systems on software quality, under topics such as "C3I Application", "Space Systems Network", "Need for Distributed Database Management", and "Adaptive Routing". The closing discussion covers data reduction, buffering, encryption, and error detection and correction functions; examples of such data streams include imagery and video data.

  1. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and may be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems that detects and classifies interesting local and global events and disseminates the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture therefore employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. The architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how the monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning; the filtering mechanism is an intrinsic component of the monitoring architecture that reduces the volume of event traffic flow in the system, and thereby the intrusiveness of the monitoring process. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems, surveys existing event filtering mechanisms and their key characteristics, discusses the limitations of existing mechanisms, and outlines how our architecture improves key aspects of event filtering.
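
    The core filtering idea can be sketched as predicate-based subscriptions: a monitor forwards an event only to subscribers whose filter matches, cutting event traffic at the source. The event fields and threshold below are illustrative, not IRI's actual instrumentation.

      # Sketch of predicate-based event filtering for a monitoring service.
      from typing import Callable

      Subscriber = Callable[[dict], None]
      subscriptions: list[tuple[Callable[[dict], bool], Subscriber]] = []

      def subscribe(predicate, handler):
          """Register a filter predicate with a management tool's handler."""
          subscriptions.append((predicate, handler))

      def publish(event):
          """Fan out an event only to subscribers whose filter matches."""
          for predicate, handler in subscriptions:
              if predicate(event):
                  handler(event)

      subscribe(lambda e: e["type"] == "latency" and e["ms"] > 100,
                lambda e: print("alert:", e))
      publish({"type": "latency", "ms": 250, "node": "n7"})   # forwarded
      publish({"type": "latency", "ms": 12, "node": "n2"})    # filtered out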

  2. A Component-based Programming Model for Composite, Distributed Applications

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.

  3. A relational data-knowledge base system and its potential in developing a distributed data-knowledge system

    NASA Technical Reports Server (NTRS)

    Rahimian, Eric N.; Graves, Sara J.

    1988-01-01

    A new approach used in constructing a relational data-knowledge base system is described. The relational database is well suited for distribution due to its property of allowing data fragmentation and fragmentation transparency. An example of a simple relational data-knowledge base is formulated, which may be generalized for use in developing a relational distributed data-knowledge base system. The efficiency and ease of application of such a data-knowledge base management system is briefly discussed. Also discussed are the potentials of the developed model for sharing the data-knowledge base, as well as possible areas of difficulty in implementing the relational data-knowledge base management system.

  4. Telecommunications: Opportunities in the emerging technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schultheis, R.W.

    1994-12-31

    A series of slides presents opportunities for utility telecommunications. The following aspects are covered: (1) technology in a period of revolution; (2) technology management by the energy utility; (3) contemporary telecommunication network architectures; (4) opportunity management; (5) strategic planning for profits and growth; and (6) the energy industry in a period of challenge. Management topics and applications are presented in a matrix covering generation, transmission, distribution, customer service and new business revenue growth.

  5. A Hybrid Key Management Scheme for WSNs Based on PPBR and a Tree-Based Path Key Establishment Method

    PubMed Central

    Zhang, Ying; Liang, Jixing; Zheng, Bingxin; Chen, Wei

    2016-01-01

    With the development of wireless sensor networks (WSNs), in most application scenarios traditional WSNs with static sink nodes will gradually be replaced by Mobile Sinks (MSs), and the corresponding applications require a secure communication environment. Current key management research pays little attention to the security of sensor networks with MSs. This paper proposes a hybrid key management scheme based on Polynomial Pool-based and Basic Random key pre-distribution (PPBR) for use in WSNs with MSs. The scheme takes full advantage of these two kinds of methods to increase the difficulty of cracking the key system; the storage effectiveness and the network resilience can be significantly enhanced as well. A tree-based path key establishment method is introduced to effectively solve the problem of communication link connectivity. Simulation clearly shows that the proposed scheme performs better in terms of network resilience, connectivity and storage effectiveness compared to other widely used schemes. PMID:27070624
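
    The "basic random" half of the hybrid scheme is easy to picture: each node is pre-loaded with a key ring drawn from a common pool, and two nodes can communicate directly only if their rings intersect (otherwise a path key is established via neighbors). A sketch with invented pool and ring sizes:

      # Basic random key pre-distribution, shared-key discovery (illustrative).
      import random

      POOL = {i: f"key-{i}" for i in range(1000)}   # global key pool (ids -> keys)
      RING_SIZE = 50

      def key_ring(seed):
          """Deterministically draw this node's key ring from the pool."""
          return set(random.Random(seed).sample(sorted(POOL), RING_SIZE))

      a, b = key_ring("node-A"), key_ring("node-B")
      shared = a & b
      print("direct link" if shared else "need a path-key via neighbors",
            sorted(shared)[:3])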

  6. Structural stocking guides: a new look at an old friend

    Treesearch

    Jeffrey H. Gove

    2004-01-01

    A parameter recovery-based model is developed that allows the incorporation of diameter distribution information directly into stocking guides. The method is completely general in applicability across different guides and forest types and could be adapted to other systems such as density management diagrams. It relies on a simple measure of diameter distribution shape...

  7. Application of Knowledge Management: Pressing questions and practical answers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FROMM-LEWIS,MICHELLE

    2000-02-11

    Sandia National Laboratories is working on ways to increase production using knowledge management. Knowledge management means finding ways to create, identify, capture, and distribute organizational knowledge to the people who need it; helping information and knowledge flow to the right people at the right time so they can act more efficiently and effectively; and recognizing, documenting and distributing explicit knowledge (quantifiable and definable knowledge that makes up reports, manuals, and instructional materials) and tacit knowledge (knowledge of doing and performing, a combination of experience, hunches, intuition, emotions, and beliefs) in order to improve organizational performance. It is a systematic approach to finding, understanding and using knowledge to create value.

  8. A Software Architecture for Intelligent Synthesis Environments

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Norvig, Peter (Technical Monitor)

    2001-01-01

    NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. This paper argues that the ISA should extend conventional distributed object technology (DOT), such as CORBA and Product Data Managers, with flexible repositories of product and tool annotations and "plug-and-play" mechanisms for inserting "ility" or orthogonal concerns into the system. I describe the Object Infrastructure Framework, an Aspect Oriented Programming (AOP) environment for developing distributed systems that provides utility insertion and enables consistent annotation maintenance. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.

  9. Research on Key Technologies of Cloud Computing

    NASA Astrophysics Data System (ADS)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so application systems can obtain computing power, storage space and software services according to demand. It can concentrate all the computing resources and manage them automatically through software, without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which is advantageous for innovation and cost reduction. The ultimate goal of cloud computing is to provide calculation, services and applications as a public facility, so that people can use computer resources just like water, electricity, gas and telephone service. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing (SaaS, PaaS, IaaS), compares the definitions of cloud computing given by Google, Amazon, IBM and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization and the programming model.

  10. 32 CFR 45.2 - Applicability and scope.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... CERTIFICATE OF RELEASE OR DISCHARGE FROM ACTIVE DUTY (DD FORM 214/5 SERIES) § 45.2 Applicability and scope. (a... on the preparation and distribution of DD Forms 214, 214WS, 215 (Appendices A, B, and C) which record... Management and Personnel) approval is obtained.) DD Forms 214 and 215 (or their substitutes) will provide: (1...

  11. 76 FR 80980 - Notice of Acceptance for Docketing of the Application, Notice of Opportunity for Hearing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-27

    ... Agencywide Documents Access and Management System (ADAMS) Public Electronic Reading Room online in the NRC... of the document. The E-Filing system also distributes an email notice that provides access to the... Docketing of the Application, Notice of Opportunity for Hearing, Regarding Renewal of Facility Operating...

  12. The 1991 Goddard Conference on Space Applications of Artificial Intelligence

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Editor)

    1991-01-01

    The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed. The papers in these proceedings fall into the following areas: planning and scheduling, fault monitoring/diagnosis/recovery, machine vision, robotics, system development, information management, knowledge acquisition and representation, distributed systems, tools, neural networks, and miscellaneous applications.

  13. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution rather than a general normal distribution, avoiding possible deviations in solutions caused by unrealistic parameter assumptions. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
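
    As a minimal illustration of why the distributional choice matters, a chance constraint with a log-normal right-hand side has a simple deterministic equivalent via the inverse CDF; the parameter values below are hypothetical, not taken from the Erhai Lake study.

        # Converting a chance constraint with a log-normal random
        # right-hand side into a deterministic bound. Parameters assumed.
        from math import exp
        from scipy.stats import lognorm

        mu, sigma = 2.0, 0.5   # parameters of ln(runoff) (assumed)
        alpha = 0.95           # required constraint-satisfaction level

        # P(runoff <= capacity) >= alpha  <=>  capacity >= F^{-1}(alpha)
        runoff = lognorm(s=sigma, scale=exp(mu))
        capacity_needed = runoff.ppf(alpha)
        print(f"deterministic equivalent: capacity >= {capacity_needed:.2f}")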

  14. Application of Compressive Sensing to Digital Holography

    DTIC Science & Technology

    2015-05-01


  15. Developments in space power components for power management and distribution

    NASA Technical Reports Server (NTRS)

    Renz, D. D.

    1984-01-01

    Advanced power electronic components development for space applications is discussed. The components described include transformers, inductors, semiconductor devices such as transistors and diodes, remote power controllers, and transmission lines.

  16. Moving beyond Blackboard: Using a Social Network as a Learning Management System

    ERIC Educational Resources Information Center

    Thacker, Christopher

    2012-01-01

    Web 2.0 is a paradigm of a participatory Internet, which has implications for the delivery of online courses. Instructors and students can now develop, distribute, and aggregate content through the use of third-party web applications, particularly social networking platforms, which combine to form a user-created learning management system (LMS).…

  17. DIY visualizations: opportunities for story-telling with esri tools

    Treesearch

    Charles H. Perry; Barry T. Wilson

    2015-01-01

    The Forest Service and Esri recently entered into a partnership: (1) to distribute FIA and other Forest Service data to the public and stakeholders through ArcGIS Online, and (2) to facilitate the application of the ArcGIS platform within the Forest Service to develop forest management and landscape management plans and support its scientific research activities....

  18. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

    ISIS and META are two distributed systems projects at Cornell University. The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. This approach is directly supported by the ISIS Toolkit, a programming system that has been distributed to over 300 academic and industrial sites. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project is about distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor and performing load balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are presented, and the approach to distributed computing, a philosophy believed to significantly distinguish this work from that of others in the field, is explained.

  19. Foundational Report Series: Advanced Distribution Management Systems for Grid Modernization, Business Case Calculations for DMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Xiaonan; Singh, Ravindra; Wang, Jianhui

    Distribution Management System (DMS) applications require a substantial commitment of technical and financial resources. In order to proceed beyond limited-scale demonstration projects, utilities must have a clear understanding of the business case for committing these resources that recognizes the total cost of ownership. Many of the benefits provided by investments in DMSs do not translate easily into monetary terms, making cost-benefit calculations difficult. For example, Fault Location Isolation and Service Restoration (FLISR) can significantly reduce customer outage duration and improve reliability. However, there is no well-established and universally-accepted procedure for converting these benefits into monetary terms that can be compared directly to investment costs. This report presents a methodology to analyze the benefits and costs of DMS applications as fundamental to the business case.

  20. Site-specific management of nematodes pitfalls and practicalities.

    PubMed

    Evans, Ken; Webster, Richard M; Halford, Paul D; Barker, Anthony D; Russell, Michael D

    2002-09-01

    The greatest constraint to potato production in the United Kingdom (UK) is damage by the potato cyst nematodes (PCN) Globodera pallida and G. rostochiensis. Management of PCN depends heavily on nematicides, which are costly. Of all the inputs in UK agriculture, nematicides offer the largest potential cost savings from spatially variable application, and these savings would be accompanied by environmental benefits. We mapped PCN infestations in potato fields and monitored the changes in population density and distribution that occurred when susceptible potato crops were grown. The inverse relationship between population density before planting and multiplication rate of PCN makes it difficult to devise reliable spatial nematicide application procedures, especially when the pre-planting population density is just less than the detection threshold. Also, the spatial dependence found suggests that the coarse sampling grids used commercially are likely to produce misleading distribution maps.

  1. PILOT: An intelligent distributed operations support system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Arthur N.

    1993-01-01

    The Real-Time Data System (RTDS) project is exploring the application of advanced technologies to the real-time flight operations environment of the Mission Control Centers at NASA's Johnson Space Center. The system, based on a network of engineering workstations, provides services such as delivery of real time telemetry data to flight control applications. To automate the operation of this complex distributed environment, a facility called PILOT (Process Integrity Level and Operation Tracker) is being developed. PILOT comprises a set of distributed agents cooperating with a rule-based expert system; together they monitor process operation and data flows throughout the RTDS network. The goal of PILOT is to provide unattended management and automated operation under user control.
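
    A toy sketch of the agents-plus-rules idea described above (the process names, staleness policy, and helper functions are invented for illustration, not PILOT internals):

        # Local agents report heartbeats; a central rule flags any process
        # whose last report is stale and requests a restart. Assumed policy.
        import time

        REPORTS = {}          # process name -> time of last heartbeat
        STALE_AFTER = 5.0     # seconds without a report before action (assumed)

        def agent_report(name):
            """Called by a local agent each time its process checks in."""
            REPORTS[name] = time.monotonic()

        def stale_processes(now=None):
            """The rule: any process silent longer than STALE_AFTER is flagged."""
            if now is None:
                now = time.monotonic()
            return [n for n, t in REPORTS.items() if now - t > STALE_AFTER]

        agent_report("telemetry_decoder")
        for name in stale_processes(time.monotonic() + 10.0):
            print(f"restart requested for {name}")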

  2. Water Management Applications of Advanced Precipitation Products

    NASA Astrophysics Data System (ADS)

    Johnson, L. E.; Braswell, G.; Delaney, C.

    2012-12-01

    Advanced precipitation sensors and numerical models track storms as they occur and forecast the likelihood of heavy rain for time frames ranging from 1 to 8 hours, 1 day, and extended outlooks out to 3 to 7 days. Forecast skill decreases at the extended time frames, but the outlooks have been shown to provide "situational awareness" which aids in preparation for flood mitigation and water supply operations. In California, the California-Nevada River Forecast Center and local Weather Forecast Offices provide precipitation products that are widely used to support water management and flood response activities of various kinds. The Hydrometeorology Testbed (HMT) program is being conducted to help advance the science of precipitation tracking and forecasting in support of the NWS. HMT high-resolution products have found applications for other non-federal water management activities as well. This presentation will describe water management applications of HMT advanced precipitation products and characterize the benefits expected to accrue. Two case examples will be highlighted: 1) reservoir operations for flood control and water supply, and 2) urban stormwater management. Application of advanced precipitation products in support of reservoir operations is a focus of the Sonoma County Water Agency. Examples include: a) interfacing the high-resolution QPE products with a distributed hydrologic model for the Russian-Napa watersheds, and b) providing early warning of incoming storms for flood preparedness and water supply storage operations. For the stormwater case, San Francisco wastewater engineers are developing a plan to deploy high-resolution gap-filling radars looking offshore to obtain longer lead times on approaching storms. A 4 to 8 hour lead time would provide the opportunity to optimize stormwater capture and treatment operations, and minimize combined sewer overflows into the Bay.

  3. A development framework for distributed artificial intelligence

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1989-01-01

    The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.

  4. Clinical image processing engine

    NASA Astrophysics Data System (ADS)

    Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald

    2009-02-01

    Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as to store the results appropriately for later use. To improve efficiency, we designed a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format specifies the image requirements for each application, and a MySQL database stores and manages the incoming DICOM images and application results. The engine achieves two important goals: reducing the amount of time and manpower required to process medical images, and reducing turnaround time. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved efficiency dramatically.
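
    A compressed sketch of the route-then-parallelize pattern the abstract describes; the template fields and application names below are invented stand-ins for the XML template filters, not the CIPE code.

        # Match incoming studies against per-application templates, then
        # run the matching applications in parallel. Names are illustrative.
        from concurrent.futures import ThreadPoolExecutor

        TEMPLATES = {  # stands in for the XML template filters
            "colon_app": {"Modality": "CT", "BodyPart": "COLON"},
            "spine_app": {"Modality": "MR", "BodyPart": "SPINE"},
        }

        def route(study):
            """Return the applications whose template matches the study."""
            return [app for app, t in TEMPLATES.items()
                    if all(study.get(k) == v for k, v in t.items())]

        def run_app(app, study):
            return f"{app} processed {study['UID']}"   # stub execution

        studies = [{"UID": "1.2.3", "Modality": "CT", "BodyPart": "COLON"}]
        with ThreadPoolExecutor(max_workers=4) as pool:
            futures = [pool.submit(run_app, app, s)
                       for s in studies for app in route(s)]
            for f in futures:
                print(f.result())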

  5. Indiva: a middleware for managing distributed media environment

    NASA Astrophysics Data System (ADS)

    Ooi, Wei-Tsang; Pletcher, Peter; Rowe, Lawrence A.

    2003-12-01

    This paper presents a unified set of abstractions and operations for hardware devices, software processes, and media data in a distributed audio and video environment. These abstractions, which are provided through a middleware layer called Indiva, use a file system metaphor to access resources and high-level commands to simplify the development of Internet webcast and distributed collaboration control applications. The design and implementation of Indiva are described and examples are presented to illustrate the usefulness of the abstractions.

  6. Proceedings of the Workshop on Large, Distributed, Parallel Architecture, Real-Time Systems Held in Alexandria, Virginia on 15-19 March 1993

    DTIC Science & Technology

    1993-07-01

    distributed system. Second, to support the development of scalable end-use applications that implement the mission-critical control policies of the...implementation. These and other cogent reasons suggest two important rules for designing large, distributed, real-time systems: i) separate policies required...system design rules. The separation of system coordination and management policies and mechanisms allows for the "objectification" of the underlying

  7. Complex Systems Engineering Applications for Future Battle Management and Command and Control

    DTIC Science & Technology

    2013-06-01

    en" hanced and shared situational~ awareness achieved through the sharing and common processing of data and information from ~ - the distributed...architecture proposed for future tactical BMC2 applications. UA .• ~,")]-. -1 "-l’- e;;; 1 -y:~u~ c_c.,... p I’"" t Tf ? - 80-20 Principle ( According to

  8. Geospatial Applications on Different Parallel and Distributed Systems in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Rodila, D.; Bacu, V.; Gorgan, D.

    2012-04-01

    The execution of Earth Science applications and services on parallel and distributed systems has become a necessity, especially due to the large amounts of Geospatial data these applications require and the large geographical areas they cover. The parallelization of these applications solves important performance issues and can range from task parallelism to data parallelism. Parallel and distributed architectures such as Grid, Cloud and Multicore offer the functionality needed to solve important problems in the Earth Science domain: storage, distribution, management, processing and security of Geospatial data; execution of complex processing through task and data parallelism; etc. A main goal of the FP7-funded project enviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is the development of a Spatial Data Infrastructure targeting this catchment region, along with standardized and specialized tools for storing, analyzing, processing and visualizing the Geospatial data concerning this area. To achieve these objectives, enviroGRIDS executes different Earth Science applications, such as hydrological models and Geospatial Web services standardized by the Open Geospatial Consortium (OGC), on parallel and distributed architectures to maximize performance. This presentation analyzes the integration and execution of Geospatial applications on different parallel and distributed architectures and the possibility of choosing among these architectures, based on application characteristics and user requirements, through a specialized component. Versions of the proposed platform have been used in the enviroGRIDS project on different use cases, such as the execution of Geospatial Web services on both Web and Grid infrastructures [2] and the execution of SWAT hydrological models on both Grid and Multicore architectures [3]. The current focus is to integrate the Cloud infrastructure into the proposed platform; Cloud computing is still a paradigm with critical problems to be solved despite great efforts and investments. Cloud computing comes as a new way of delivering resources while using a large set of old as well as new technologies and tools to provide the necessary functionality. The main challenges in Cloud computing, most of them also identified in the Open Cloud Manifesto 2009, concern resource management and monitoring, data and application interoperability and portability, security, scalability, software licensing, etc. We propose a platform able to execute different Geospatial applications on different parallel and distributed architectures such as Grid, Cloud and Multicore, with the possibility of choosing among these architectures based on application characteristics and complexity, user requirements, required performance, cost, etc. Execution is redirected to a selected architecture through a specialized component, with the purpose of offering a flexible way to achieve the best performance under the existing constraints.

  9. A Semantic Web Management Model for Integrative Biomedical Informatics

    PubMed Central

    Deus, Helena F.; Stanislaus, Romesh; Veiga, Diogo F.; Behrens, Carmen; Wistuba, Ignacio I.; Minna, John D.; Garner, Harold R.; Swisher, Stephen G.; Roth, Jack A.; Correa, Arlene M.; Broom, Bradley; Coombes, Kevin; Chang, Allen; Vogel, Lynn H.; Almeida, Jonas S.

    2008-01-01

    Background Data, data everywhere. The diversity and magnitude of the data generated in the Life Sciences defies automated articulation among complementary efforts. The additional need in this field for managing property and access permissions compounds the difficulty very significantly. This is particularly the case when the integration involves multiple domains and disciplines, even more so when it includes clinical and high-throughput molecular data. Methodology/Principal Findings The emergence of Semantic Web technologies brings the promise of meaningful interoperation between data and analysis resources. In this report we identify a core model for biomedical Knowledge Engineering applications and demonstrate how this new technology can be used to weave a management model where multiple intertwined data structures can be hosted and managed by multiple authorities in a distributed management infrastructure. Specifically, the demonstration is performed by linking data sources associated with the Lung Cancer SPORE awarded to The University of Texas MD Anderson Cancer Center at Houston and the Southwestern Medical Center at Dallas. A software prototype, available as open source at www.s3db.org, was developed and its proposed design has been made publicly available as an open source instrument for shared, distributed data management. Conclusions/Significance Semantic Web technologies have the potential to address the need for distributed and evolvable representations that are critical for systems biology and translational biomedical research. As this technology is incorporated into application development, we can expect that both general-purpose productivity software and domain-specific software installed on our personal computers will become increasingly integrated with the relevant remote resources. In this scenario, the acquisition of a new dataset should automatically trigger the delegation of its analysis. PMID:18698353

  10. OGC and Grid Interoperability in enviroGRIDS Project

    NASA Astrophysics Data System (ADS)

    Gorgan, Dorian; Rodila, Denisa; Bacu, Victor; Giuliani, Gregory; Ray, Nicolas

    2010-05-01

    EnviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is a 4-year FP7 project aiming to address the subjects of ecologically unsustainable development and inadequate resource management. The project develops a Spatial Data Infrastructure of the Black Sea Catchment region. Geospatial technologies offer very specialized functionality for Earth Science oriented applications, as does the Grid-oriented technology that is able to support distributed and parallel processing. One challenge of the enviroGRIDS project is the interoperability between geospatial and Grid infrastructures, providing the basic and the extended features of both technologies. Geospatial interoperability technology has been promoted as a way of dealing with large volumes of geospatial data in distributed environments through the development of interoperable Web service specifications proposed by the Open Geospatial Consortium (OGC), with applications spread across multiple fields but especially in Earth observation research. Due to the huge volumes of data available in the geospatial domain and the additional issues introduced (data management, secure data transfer, data distribution and data computation), an infrastructure capable of managing all those problems becomes an important need. The Grid promotes and facilitates the secure interoperation of heterogeneous distributed geospatial data within a distributed environment, supports the creation and management of large distributed computational jobs, and assures a security level for communication and transfer of messages based on certificates. This presentation analyzes and discusses the most significant use cases for enabling OGC Web service interoperability with the Grid environment and focuses on the description and implementation of the most promising one. In these use cases we give special attention to issues such as: the relations between the computational Grid and the OGC Web service protocols; the advantages offered by the Grid technology, such as providing secure interoperability between distributed geospatial resources; and the issues introduced by the integration of distributed geospatial data in a secure environment: data and service discovery, management, access and computation. The enviroGRIDS project proposes a new architecture which allows a flexible and scalable approach for integrating the geospatial domain, represented by the OGC Web services, with the Grid domain, represented by the gLite middleware. The parallelism offered by the Grid technology is discussed and explored at the data level, management level and computation level. The analysis is carried out for OGC Web service interoperability in general, but specific details are emphasized for the Web Map Service (WMS), Web Feature Service (WFS), Web Coverage Service (WCS), Web Processing Service (WPS) and Catalog Service for the Web (CSW). Issues regarding the mapping and the interoperability between the OGC and the Grid standards and protocols are analyzed, as they are the basis for solving the communication problems between the two environments, Grid and geospatial. The presentation mainly highlights how the Grid environment and Grid application capabilities can be extended and utilized in geospatial interoperability. Interoperability between geospatial and Grid infrastructures provides features such as the specific geospatial complex functionality and the high-power computation and security of the Grid, high spatial model resolution and geographical area coverage, and flexible combination and interoperability of the geographical models. In accordance with Service Oriented Architecture concepts and the requirements of interoperability between geospatial and Grid infrastructures, each main functionality is visible from the enviroGRIDS Portal and, consequently, from end-user applications such as Decision Maker/Citizen oriented Applications. The enviroGRIDS portal is the single entry point for users into the system, and the portal presents a uniform graphical user interface. Main reference for further information: [1] enviroGRIDS Project, http://www.envirogrids.net/

  11. Distributed File System Utilities to Manage Large Datasets, Version 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-05-21

    FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.
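
    The FileUtils tools themselves are C/MPI programs; purely as an illustration of the same divide-the-file-list idea, here is a process-parallel copy sketch in Python (paths and worker count are invented):

        # Split a file list across worker processes so copies proceed
        # concurrently, the core speedup idea behind parallel copy tools.
        import shutil
        from multiprocessing import Pool
        from pathlib import Path

        def copy_one(pair):
            src, dst = pair
            Path(dst).parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)   # copy contents plus metadata
            return dst

        def parallel_copy(pairs, workers=8):
            with Pool(workers) as pool:
                return pool.map(copy_one, pairs)

        # parallel_copy([("/data/in/a.dat", "/scratch/out/a.dat")])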

  12. Integrating Data Distribution and Data Assimilation Between the OOI CI and the NOAA DIF

    NASA Astrophysics Data System (ADS)

    Meisinger, M.; Arrott, M.; Clemesha, A.; Farcas, C.; Farcas, E.; Im, T.; Schofield, O.; Krueger, I.; Klacansky, I.; Orcutt, J.; Peach, C.; Chave, A.; Raymer, D.; Vernon, F.

    2008-12-01

    The Ocean Observatories Initiative (OOI) is an NSF-funded program to establish the ocean observing infrastructure of the 21st century benefiting research and education. It is currently approaching final design and promises to deliver cyber and physical observatory infrastructure components as well as substantial core instrumentation to study environmental processes of the ocean at various scales, from coastal shelf-slope exchange processes to the deep ocean. The OOI's data distribution network lies at the heart of its cyberinfrastructure, which enables a multitude of science and education applications, ranging from data analysis to processing, visualization and ontology-supported query and mediation. In addition, it fundamentally supports a class of applications exploiting the knowledge gained from analyzing observational data for objective-driven ocean observing applications, such as automatically triggered response to episodic environmental events and interactive instrument tasking and control. The U.S. Department of Commerce through NOAA operates the Integrated Ocean Observing System (IOOS) providing continuous data in various formats, rates and scales on open oceans and coastal waters to scientists, managers, businesses, governments, and the public to support research and inform decision-making. The NOAA IOOS program initiated development of the Data Integration Framework (DIF) to improve management and delivery of an initial subset of ocean observations with the expectation of achieving improvements in a select set of NOAA's decision-support tools. Both OOI and NOAA through DIF collaborate on an effort to integrate the data distribution, access and analysis needs of both programs. We present details and early findings from this collaboration; one part of it is the development of a demonstrator combining web-based user access to oceanographic data through ERDDAP, efficient science data distribution, and scalable, self-healing deployment in a cloud computing environment. ERDDAP is a web-based front-end application integrating oceanographic data sources of various formats, for instance CDF data files as aggregated through NcML or presented using a THREDDS server. The OOI-designed data distribution network provides global traffic management and computational load balancing for observatory resources; it makes use of the OpenDAP Data Access Protocol (DAP) for efficient canonical science data distribution over the network. A cloud computing strategy is the basis for scalable, self-healing organization of an observatory's computing and storage resources, independent of the physical location and technical implementation of these resources.

  13. Advanced systems engineering and network planning support

    NASA Technical Reports Server (NTRS)

    Walters, David H.; Barrett, Larry K.; Boyd, Ronald; Bazaj, Suresh; Mitchell, Lionel; Brosi, Fred

    1990-01-01

    The objective of this task was to take a fresh look at the NASA Space Network Control (SNC) element for the Advanced Tracking and Data Relay Satellite System (ATDRSS) such that it can be made more efficient and responsive to the user by introducing new concepts and technologies appropriate for the 1997 timeframe. In particular, it was desired to investigate the technologies and concepts employed in similar systems that may be applicable to the SNC. The recommendations resulting from this study include resource partitioning, on-line access to subsets of the SN schedule, fluid scheduling, increased use of demand access on the MA service, automating Inter-System Control functions using monitor by exception, increased automation for distributed data management and distributed work management, viewing SN operational control in terms of the OSI Management framework, and the introduction of automated interface management.

  14. Additional Security Considerations for Grid Management

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.

    2003-01-01

    The use of Grid computing environments is growing in popularity. A Grid computing environment is primarily a wide area network that encompasses multiple local area networks, where some of the local area networks are managed by different organizations. A Grid computing environment also includes common interfaces for distributed computing software so that the heterogeneous set of machines that make up the Grid can be used more easily. The other key feature of a Grid is that the distributed computing software includes appropriate security technology. The focus of most Grid software is on the security involved with application execution, file transfers, and other remote computing procedures. However, there are other important security issues related to the management of a Grid and the users who use that Grid. This note discusses these additional security issues and makes several suggestions as to how they can be managed.

  15. Anticipating Forest and Range Land Development in Central Oregon (USA) for Landscape Analysis, with an Example Application Involving Mule Deer

    NASA Astrophysics Data System (ADS)

    Kline, Jeffrey D.; Moses, Alissa; Burcsu, Theresa

    2010-05-01

    Forest policymakers, public lands managers, and scientists in the Pacific Northwest (USA) seek ways to evaluate the landscape-level effects of policies and management through the multidisciplinary development and application of spatially explicit methods and models. The Interagency Mapping and Analysis Project (IMAP) is an ongoing effort to generate landscape-wide vegetation data and models to evaluate the integrated effects of disturbances and management activities on natural resource conditions in Oregon and Washington (USA). In this initial analysis, we characterized the spatial distribution of forest and range land development in a four-county pilot study region in central Oregon. The empirical model describes the spatial distribution of buildings and new building construction as a function of population growth, existing development, topography, land-use zoning, and other factors. We used the model to create geographic information system maps of likely future development based on human population projections to inform complementary landscape analyses underway involving vegetation, habitat, and wildfire interactions. In an example application, we use the model and resulting maps to show the potential impacts of future forest and range land development on mule deer ( Odocoileus hemionus) winter range. Results indicate significant development encroachment and habitat loss already in 2000 with development located along key migration routes and increasing through the projection period to 2040. The example application illustrates a simple way for policymakers and public lands managers to combine existing data and preliminary model outputs to begin to consider the potential effects of development on future landscape conditions.

  16. Distributed user interfaces for clinical ubiquitous computing applications.

    PubMed

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices, such as digital pens, an active desk, and walk-up displays, that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite of future mobile user interfaces and essential for developing clinical multi-device environments.
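
    A toy sketch of the component-to-device assignment at the heart of the DUI idea (device names and capabilities are invented; this is not the NOSTOS API):

        # Assign each UI component to a discovered device that advertises
        # the capability the component needs. All names are illustrative.
        DEVICES = {"active_desk": {"pen"}, "walkup_display": {"screen"}}

        COMPONENTS = [("order_form", "pen"), ("chart_view", "screen")]

        def distribute(components, devices):
            placement = {}
            for name, needs in components:
                for dev, caps in devices.items():
                    if needs in caps:
                        placement[name] = dev
                        break
            return placement

        print(distribute(COMPONENTS, DEVICES))
        # {'order_form': 'active_desk', 'chart_view': 'walkup_display'}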

  17. Legacy systems: managing evolution through integration in a distributed and object-oriented computing environment.

    PubMed

    Lemaitre, D; Sauquet, D; Fofol, I; Tanguy, L; Jean, F C; Degoulet, P

    1995-01-01

    Legacy systems are crucial for organizations since they support key functionalities, but they become obsolete with age and the appearance of new techniques. Managing their evolution is a key issue in software engineering. This paper presents a strategy that has been developed at Broussais University Hospital in Paris to make a legacy system devoted to the management of health care units evolve towards new, up-to-date software. A two-phase evolution pathway is described. The first phase consists of separating the interface from the data storage and application control, and of using a communication channel between the individualized components. The second phase proposes to use an object-oriented DBMS in place of the homegrown system. An application example for the management of hypertensive patients is described.

  18. Design of a QoS-controlled ATM-based communications system in chorus

    NASA Astrophysics Data System (ADS)

    Coulson, Geoff; Campbell, Andrew; Robin, Philippe; Blair, Gordon; Papathomas, Michael; Shepherd, Doug

    1995-05-01

    We describe the design of an application platform able to run distributed real-time and multimedia applications alongside conventional UNIX programs. The platform is embedded in a microkernel/PC environment and supported by an ATM-based, QoS-driven communications stack. In particular, we focus on resource-management aspects of the design and deal with CPU scheduling, network resource-management and memory-management issues. An architecture is presented that guarantees QoS levels of both communications and processing with varying degrees of commitment as specified by user-level QoS parameters. The architecture uses admission tests to determine whether or not new activities can be accepted and includes modules to translate user-level QoS parameters into representations usable by the scheduling, network, and memory-management subsystems.
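
    The abstract does not spell out the admission formula; a standard utilization-based test of the kind such schedulers use is the rate-monotonic Liu-Layland bound, sketched here with hypothetical task parameters.

        # Accept a new periodic activity only if total CPU utilization
        # stays under the rate-monotonic bound n * (2**(1/n) - 1).
        def admit(tasks, new_task):
            """tasks: list of (cost, period) pairs; True if accepted."""
            candidate = tasks + [new_task]
            n = len(candidate)
            utilization = sum(c / t for c, t in candidate)
            return utilization <= n * (2 ** (1 / n) - 1)

        current = [(2, 10), (3, 25)]     # running activities (assumed)
        print(admit(current, (5, 40)))   # -> True: 0.445 <= 0.7797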

  19. Distributed data analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Nilsson, Paul; Atlas Collaboration

    2012-12-01

    Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools enlarge the number of grid users from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for the distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quixote 2, a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the Ganga Robot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the burden of support from the developers.

  20. Telerobotic management system: coordinating multiple human operators with multiple robots

    NASA Astrophysics Data System (ADS)

    King, Jamie W.; Pretty, Raymond; Brothers, Brendan; Gosine, Raymond G.

    2003-09-01

    This paper describes an application called the Tele-robotic management system (TMS) for coordinating multiple operators with multiple robots for applications such as underground mining. TMS utilizes several graphical interfaces to allow the user to define a partially ordered plan for multiple robots. This plan is then converted to a Petri net for execution and monitoring. TMS uses a distributed framework to allow robots and operators to easily integrate with the applications. This framework allows robots and operators to join the network and advertise their capabilities through services. TMS then decides whether tasks should be dispatched to a robot or a remote operator based on the services offered by the robots and operators.
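
    A minimal sketch of driving plan execution as a Petri net, as the abstract describes; the places, transitions, and marking below are invented for illustration, not taken from TMS.

        # A transition fires when every input place holds a token; firing
        # consumes input tokens and produces output tokens.
        marking = {"robot_idle": 1, "job_queued": 1, "job_done": 0}

        TRANSITIONS = {
            # name: (input places, output places)
            "dispatch": (["robot_idle", "job_queued"], ["job_done"]),
        }

        def enabled(name):
            ins, _ = TRANSITIONS[name]
            return all(marking[p] > 0 for p in ins)

        def fire(name):
            ins, outs = TRANSITIONS[name]
            if not enabled(name):
                raise RuntimeError(f"{name} not enabled")
            for p in ins:
                marking[p] -= 1
            for p in outs:
                marking[p] += 1

        fire("dispatch")
        print(marking)  # {'robot_idle': 0, 'job_queued': 0, 'job_done': 1}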

  1. RICIS Software Engineering 90 Symposium: Aerospace Applications and Research Directions Proceedings

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Papers presented at RICIS Software Engineering Symposium are compiled. The following subject areas are covered: synthesis - integrating product and process; Serpent - a user interface management system; prototyping distributed simulation networks; and software reuse.

  2. First International Conference on Ada (R) Programming Language Applications for the NASA Space Station, volume 1

    NASA Technical Reports Server (NTRS)

    Bown, Rodney L. (Editor)

    1986-01-01

    Topics discussed include: test and verification; environment issues; distributed Ada issues; life cycle issues; Ada in Europe; management/training issues; common Ada interface set; and run time issues.

  3. Large Scale System Defense

    DTIC Science & Technology

    2008-10-01

    AD); Aeolos, a distributed intrusion detection and event correlation infrastructure; STAND, a training-set sanitization technique applicable to ADs...Summary of findings: (a) Automatic Patch Generation, (b) Better Patch Management, (c) Artificial Diversity, (d) Distributed Anomaly Detection

  4. Counting Dependence Predictors

    DTIC Science & Technology

    2008-05-02

    sophisticated dependence predictors, such as Store Sets, have been tightly coupled to the fetch and execution streams, requiring global knowledge of...applicable to any architecture with distributed fetch and distributed memory banks, in which the comprehensive event completion knowledge needed by previous...adapted for Core Fusion [5] by giving its steering management unit (SMU) the responsibilities of the controller core. While Ipek et al. describe how a

  5. A PDA study management tool (SMT) utilizing wireless broadband and full DICOM viewing capability

    NASA Astrophysics Data System (ADS)

    Documet, Jorge; Liu, Brent; Zhou, Zheng; Huang, H. K.; Documet, Luis

    2007-03-01

    During the last 4 years, the IPI (Image Processing and Informatics) Laboratory has been developing a web-based Study Management Tool (SMT) application that allows radiologists, film librarians and PACS-related (Picture Archiving and Communication System) users to dynamically and remotely perform Query/Retrieve operations in a PACS network. Using a regular PDA (Personal Digital Assistant), users can remotely query a PACS archive to distribute any study to an existing DICOM (Digital Imaging and Communications in Medicine) node. This application, which has proven convenient for managing study workflow [1, 2], has been extended to include DICOM viewing capability on the PDA. With this new feature, users can take a quick look at DICOM images, gaining both mobility and convenience. In addition, we are extending this application to metropolitan-area wireless broadband networks. This feature requires smart phones that can work as a PDA and have access to broadband wireless services. With the extension to wireless broadband technology and the preview of DICOM images, the Study Management Tool becomes an even more powerful tool for clinical workflow management.

  6. Knowledge management: An abstraction of knowledge base and database management systems

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel D.

    1990-01-01

    Artificial intelligence application requirements demand powerful representation capabilities as well as efficiency for real-time domains. Many tools exist, the most prevalent being expert system tools such as ART, KEE, OPS5, and CLIPS. Other tools just emerging from the research environment are truth maintenance systems for representing non-monotonic knowledge, constraint systems, object-oriented programming, and qualitative reasoning. Unfortunately, as many knowledge engineers have experienced, simply applying a tool to an application requires a large amount of effort to bend the application to fit, and much supporting work goes into making the tool integrate effectively. The Knowledge Management Design System (KNOMAD), a collection of tools built in layers, is described. The layered architecture provides two major benefits: the ability to flexibly apply only those tools that are necessary for an application, and the ability to keep overhead, and thus inefficiency, to a minimum. KNOMAD is designed to manage many knowledge bases in a distributed environment, providing maximum flexibility and expressivity to the knowledge engineer while also providing support for efficiency.

  7. Distributed Virtual System (DIVIRS) Project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
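
    A toy sketch of the virtual-configuration idea the proposal describes: the program addresses virtual processor ids, and a manager binds them to available physical nodes. The node pool and round-robin policy are invented for illustration.

        # Bind virtual processor ids to physical nodes so the program
        # never sees the physical mapping. Pool and policy are assumed.
        AVAILABLE_NODES = ["node07", "node12", "node31"]   # assumed pool

        def map_virtual(n_virtual, pool):
            """Round-robin binding of virtual processors to nodes."""
            if not pool:
                raise RuntimeError("no nodes available")
            return {v: pool[v % len(pool)] for v in range(n_virtual)}

        binding = map_virtual(5, AVAILABLE_NODES)
        # a send(virtual_id, msg) routine would resolve through `binding`
        print(binding)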

  8. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1994-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  9. DIstributed VIRtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, Clifford B.

    1995-01-01

    As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  10. Distributed Virtual System (DIVIRS) project

    NASA Technical Reports Server (NTRS)

    Schorr, Herbert; Neuman, B. Clifford

    1993-01-01

    As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.

  11. Distributed intelligent monitoring and reporting facilities

    NASA Astrophysics Data System (ADS)

    Pavlou, George; Mykoniatis, George; Sanchez-P, Jorge-A.

    1996-06-01

    Distributed intelligent monitoring and reporting facilities are of paramount importance in both service and network management, as they provide the capability to monitor quality of service and utilization parameters and to report degradation so that corrective action can be taken. By intelligent, we refer to the capability of performing the monitoring tasks in a way that has the smallest possible impact on the managed network, that facilitates the observation and summarization of information according to a number of criteria and that, in its most advanced form, permits the specification of these criteria dynamically to suit the policy at hand. In addition, intelligent monitoring facilities should minimize the design and implementation effort involved in such activities. The ISO/ITU Metric, Summarization and Performance management functions provide models that only partially satisfy the above requirements. This paper describes our extensions to the proposed models to support further capabilities, with the intention of eventually leading to fully dynamically defined monitoring policies. The concept of distributing intelligence is also discussed, including the consideration of security issues and the applicability of the model in ODP-based distributed processing environments.

  12. Operationalizing Dynamic Ocean Management (DOM): Understanding the Incentive Structure, Policy and Regulatory Context for DOM in Practice

    NASA Astrophysics Data System (ADS)

    Lewison, R. L.; Saumweber, W. J.; Erickson, A.; Martone, R. G.

    2016-12-01

    Dynamic ocean management, or management that uses near real-time data to guide the spatial distribution of commercial activities, is an emerging approach to balance ocean resource use and conservation. Employing a wide range of data types, dynamic ocean management in a fisheries context can be used to meet multiple objectives: managing target quota, reducing bycatch, and reducing interactions with species of conservation concern. There is a growing list of DOM applications currently in practice in fisheries around the world, yet the approach is new enough that both fishers and fisheries managers are unclear how DOM can be applied to their fishery. Here, we use the experience from dynamic ocean management applications that are currently in practice to address the commonly asked question "How can dynamic management approaches be implemented in a traditionally managed fishery?". Combining knowledge from the DOM participants with a review of regulatory frameworks and incentive structures, stakeholder participation, and technological requirements of DOM in practice, we identify ingredients that have supported successful implementation of this new management approach.

  13. Classification of cognitive systems dedicated to data sharing

    NASA Astrophysics Data System (ADS)

    Ogiela, Lidia; Ogiela, Marek R.

    2017-08-01

    This paper presents a classification of new cognitive information systems dedicated to cryptographic data splitting and sharing processes. Cognitive processes of semantic data analysis and interpretation will be used to describe new classes of intelligent information and vision systems. In addition, cryptographic data splitting algorithms and cryptographic threshold schemes will be used to improve processes of secure and efficient information management with the application of such cognitive systems. The utility of the proposed cognitive sharing procedures and distributed data sharing algorithms will also be presented. A few possible applications of cognitive approaches to visual information management and encryption will also be described.
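
    The record cites threshold schemes only in general terms; Shamir's (k, n) scheme is the classic instance, sketched here over a prime field (modulus and parameters are illustrative, not from the paper).

        # Shamir secret sharing: n shares, any k reconstruct the secret.
        import random

        P = 2**31 - 1   # prime modulus (illustrative)

        def split(secret, k, n, p=P):
            """Evaluate a random degree-(k-1) polynomial at x = 1..n."""
            coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
            return [(x, sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p)
                    for x in range(1, n + 1)]

        def reconstruct(shares, p=P):
            """Lagrange interpolation at x = 0 recovers the secret."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % p
                        den = den * (xi - xj) % p
                # modular inverse of den via Fermat's little theorem
                secret = (secret + yi * num * pow(den, p - 2, p)) % p
            return secret

        shares = split(123456789, k=3, n=5)
        assert reconstruct(shares[:3]) == 123456789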

  14. Application of a distributed process-based hydrologic model to estimate the effects of forest road density on stormflows in the Southern Appalachians

    Treesearch

    Salli F. Dymond; W. Michael Aust; Stephen P. Prisley; Mark H. Eisenbies; James M. Vose

    2014-01-01

    Managed forests have historically been linked to watershed protection and flood mitigation. Research indicates that forests can potentially minimize peak flows during storm events, yet the relationship between forests and flooding is complex. Forest roads, usually found in managed systems, can potentially magnify the effects of forest harvesting on water yields. The...

  15. BIO-Plex Information System Concept

    NASA Technical Reports Server (NTRS)

    Jones, Harry; Boulanger, Richard; Arnold, James O. (Technical Monitor)

    1999-01-01

    This paper describes a suggested design for an integrated information system for the proposed BIO-Plex (Bioregenerative Planetary Life Support Systems Test Complex) at Johnson Space Center (JSC), including distributed control systems, central control, networks, database servers, personal computers and workstations, applications software, and external communications. The system will have an open commercial computing and networking architecture. The network will provide automatic real-time transfer of information to database server computers which perform data collection and validation. This information system will support integrated, data-sharing applications for everything from system alarms to management summaries. Most existing complex process control systems have information gaps between the different real-time subsystems, between these subsystems and the central controller, between the central controller and system-level planning and analysis application software, and between the system-level applications and management overview reporting. An integrated information system is vitally necessary as the basis for the integration of planning, scheduling, modeling, monitoring, and control, which will allow improved monitoring and control based on timely, accurate and complete data. Data describing the system configuration and the real-time processes can be collected, checked and reconciled, analyzed and stored in database servers that can be accessed by all applications. The required technology is available. The only opportunity to design a distributed, nonredundant, integrated system is before it is built; retrofit is extremely difficult and costly.

  16. Soil nutrient concentration and distribution at riverbanks undergoing different land management practices: Implications for riverbank management

    NASA Astrophysics Data System (ADS)

    Xue, X. H.; Chang, S.; Yuan, L. Y.

    2017-08-01

    Riverbanks are important boundaries for the nutrient cycling between lands and freshwaters. This research aimed to explore effects of different land management methods on the soil nutrient concentration and distribution at riverbanks. Soils from the reed-covered riverbanks of the middle Yangtze River were studied, including soils undergoing systematic agriculture (gathering young tender shoots, reaping reed straws, and burning residual straws), fire alone, and no disturbance. Results showed that the agricultural activities sharply decreased the contents of soil organic matter (SOM), N, P and K in subsurface soils and decreased the surface SOM, N and K contents to a lesser extent, whereas phosphorus was evidently decreased in both surface and subsurface layers. In contrast, fire alone caused a marked increase of SOM, N, P and K contents in both surface and subsurface soils but had little impact on soil nutrient distributions. Soils under all three conditions showed a relative increase of soil nutrients at the riverbank foot. This comparative study indicated that the different or even contrary effects of riverbank management practices on soil nutrient status should be carefully taken into account when assessing the ecological effects of management practices.

  17. CRC Clinical Trials Management System (CTMS): An Integrated Information Management Solution for Collaborative Clinical Research

    PubMed Central

    Payne, Philip R.O.; Greaves, Andrew W.; Kipps, Thomas J.

    2003-01-01

    The Chronic Lymphocytic Leukemia (CLL) Research Consortium (CRC) consists of 9 geographically distributed sites conducting a program of research including both basic science and clinical components. To enable the CRC’s clinical research efforts, a system providing for real-time collaboration was required. CTMS provides such functionality, and demonstrates that the use of novel data modeling, web-application platforms, and management strategies provides for the deployment of an extensible, cost effective solution in such an environment. PMID:14728471

  18. Flow-rate control for managing communications in tracking and surveillance networks

    NASA Astrophysics Data System (ADS)

    Miller, Scott A.; Chong, Edwin K. P.

    2007-09-01

    This paper describes a primal-dual distributed algorithm for managing communications in a bandwidth-limited sensor network for tracking and surveillance. The algorithm possesses some scale-invariance properties and adaptive gains that make it more practical for applications such as tracking where the conditions change over time. A simulation study comparing this algorithm with a priority-queue-based approach in a network tracking scenario shows significant improvement in the resulting track quality when using flow control to manage communications.
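
    The primal-dual idea can be sketched in a few lines, assuming a toy network utility maximization setup: sources take gradient steps on a log-utility objective minus the congestion price along their route, while links raise their price whenever demand exceeds capacity. The route matrix, capacities, and fixed step sizes below are invented for illustration; the paper's scale-invariance properties and adaptive gains are not reproduced.

        import numpy as np

        # Illustrative primal-dual flow-rate control (a sketch, not the paper's algorithm).
        R = np.array([[1, 1, 0],              # R[l, s] = 1 if source s uses link l
                      [0, 1, 1]], dtype=float)
        c = np.array([1.0, 2.0])              # link capacities (assumed units)
        x = np.full(3, 0.1)                   # initial source rates
        lam = np.zeros(2)                     # initial link prices
        alpha, beta = 0.05, 0.05              # fixed gains (adaptive in the paper)

        for _ in range(2000):
            q = R.T @ lam                     # aggregate price seen by each source
            x = np.clip(x + alpha * (1.0 / x - q), 1e-4, None)   # primal step on log x - q*x
            lam = np.maximum(lam + beta * (R @ x - c), 0.0)      # dual step: price congestion

        print(x, lam)                         # rates settle near a capacity-feasible allocation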

  19. 7 CFR 1775.36 - Purpose.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... to source, storage, treatment, and/or distribution. (b) Identify and evaluate solutions to waste... water and/or waste disposal loan/grant applications. (d) Provide technical assistance/training to association personnel that will improve the management, operation, and maintenance of water and waste...

  20. 7 CFR 1775.36 - Purpose.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... to source, storage, treatment, and/or distribution. (b) Identify and evaluate solutions to waste... water and/or waste disposal loan/grant applications. (d) Provide technical assistance/training to association personnel that will improve the management, operation, and maintenance of water and waste...

  1. Evaluation of power control concepts using the PMAD systems test bed. [Power Management and Distribution

    NASA Technical Reports Server (NTRS)

    Beach, R. F.; Kimnach, G. L.; Jett, T. A.; Trash, L. M.

    1989-01-01

    The Lewis Research Center's Power Management and Distribution (PMAD) System testbed and its use in the evaluation of control concepts applicable to the NASA Space Station Freedom electric power system (EPS) are described. The facility was constructed to allow testing of control hardware and software in an environment functionally similar to the space station electric power system. Control hardware and software have been developed to allow operation of the testbed power system in a manner similar to a supervisory control and data acquisition (SCADA) system employed by utility power systems for control. The system hardware and software are described.

  2. Orchestrating Distributed Resource Ensembles for Petascale Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baldin, Ilya; Mandal, Anirban; Ruth, Paul

    2014-04-24

    Distributed, data-intensive computational science applications of interest to DOE scientific communities move large amounts of data for experiment data management, distributed analysis steps, remote visualization, and accessing scientific instruments. These applications need to orchestrate ensembles of resources from multiple resource pools and interconnect them with high-capacity multi-layered networks across multiple domains. It is highly desirable that mechanisms are designed that provide this type of resource provisioning capability to a broad class of applications. It is also important to have coherent monitoring capabilities for such complex distributed environments. In this project, we addressed these problems by designing an abstract API, enabled by novel semantic resource descriptions, for provisioning complex and heterogeneous resources from multiple providers using their native provisioning mechanisms and control planes: computational, storage, and multi-layered high-speed network domains. We used an extensible resource representation based on semantic web technologies to afford maximum flexibility to applications in specifying their needs. We evaluated the effectiveness of provisioning using representative data-intensive applications. We also developed mechanisms for providing feedback about resource performance to the application, to enable closed-loop feedback control and dynamic adjustments to resource allocations (elasticity). This was enabled through development of a novel persistent query framework that consumes disparate sources of monitoring data, including perfSONAR, and provides scalable distribution of asynchronous notifications.

  3. A Distributed Prognostic Health Management Architecture

    NASA Technical Reports Server (NTRS)

    Bhaskar, Saha; Saha, Sankalita; Goebel, Kai

    2009-01-01

    This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine is higher than the computational power of one computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses a nominal threshold, upon which each CE coordinates with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
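
    The threshold-triggered hand-off from local diagnosis to distributed prognosis can be illustrated with a toy sketch. The linear degradation model, thresholds, and the even split of particles across CEs below are all assumptions made for illustration; they are not the paper's models or its SPOT implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        N_CE = 4                                    # networked computational elements
        particles = rng.normal(0.75, 0.05, 1000)    # particles over a scalar health state
        FAIL, NOMINAL = 0.5, 0.8                    # assumed thresholds

        def propagate(p, steps):
            # toy degradation model: linear drift plus process noise
            for _ in range(steps):
                p = p - 0.01 + rng.normal(0, 0.005, p.size)
            return p

        if particles.mean() < NOMINAL:                    # diagnostic threshold crossed
            shards = np.array_split(particles, N_CE)      # one shard per CE
            shards = [propagate(s, 20) for s in shards]   # would run in parallel on the CEs
            particles = np.concatenate(shards)
            rul = (particles.mean() - FAIL) / 0.01        # steps until the failure threshold
            print(f"estimated remaining useful life: {rul:.0f} steps")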

  4. A multi-echelon supply chain model for municipal solid waste management system.

    PubMed

    Zhang, Yimei; Huang, Guo He; He, Li

    2014-02-01

    In this paper, a multi-echelon, multi-period solid waste management (MSWM) system was developed by incorporating a multi-echelon supply chain. Waste managers, suppliers, industries and distributors could be engaged in joint strategic planning and operational execution. The principle of the MSWM system is interactive planning of transportation and inventory for each organization in waste collection, delivery and disposal. An efficient inventory management plan for MSWM would lead to optimized productivity levels under available capacities (e.g., transportation and operational capacities). The applicability of the proposed system was illustrated by a case with three cities, one distribution center and two waste disposal facilities. Solutions of the decision variable values under different significance levels indicate a consistent trend: with an increased significance level, the total generated waste would be decreased, and the total waste transported through the distribution center to waste-to-energy and landfill facilities would be decreased as well.

  5. Digital Library Storage using iRODS Data Grids

    NASA Astrophysics Data System (ADS)

    Hedges, Mark; Blanke, Tobias; Hasan, Adil

    Digital repository software provides a powerful and flexible infrastructure for managing and delivering complex digital resources and metadata. However, issues can arise in managing the very large, distributed data files that may constitute these resources. This paper describes an implementation approach that combines the Fedora digital repository software with a storage layer implemented as a data grid, using the iRODS middleware developed by DICE (Data Intensive Cyber Environments) as the successor to SRB. This approach allows us to use Fedora's flexible architecture to manage the structure of resources and to provide application-layer services to users. The grid-based storage layer provides efficient support for managing and processing the underlying distributed data objects, which may be very large (e.g. audio-visual material). The Rule Engine built into iRODS is used to integrate complex workflows at the data level that need not be visible to users, e.g. digital preservation functionality.

  6. A multi-echelon supply chain model for municipal solid waste management system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yimei, E-mail: yimei.zhang1@gmail.com; Huang, Guo He; He, Li

    2014-02-15

    In this paper, a multi-echelon, multi-period solid waste management (MSWM) system was developed by incorporating a multi-echelon supply chain. Waste managers, suppliers, industries and distributors could be engaged in joint strategic planning and operational execution. The principle of the MSWM system is interactive planning of transportation and inventory for each organization in waste collection, delivery and disposal. An efficient inventory management plan for MSWM would lead to optimized productivity levels under available capacities (e.g., transportation and operational capacities). The applicability of the proposed system was illustrated by a case with three cities, one distribution center and two waste disposal facilities. Solutions of the decision variable values under different significance levels indicate a consistent trend: with an increased significance level, the total generated waste would be decreased, and the total waste transported through the distribution center to waste-to-energy and landfill facilities would be decreased as well.

  7. Prediction-based Dynamic Energy Management in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Ma, Jun-Jie; Wang, Sheng; Bi, Dao-Wei

    2007-01-01

    Energy consumption is a critical constraint in wireless sensor networks. Focusing on the energy efficiency problem of wireless sensor networks, this paper proposes a method of prediction-based dynamic energy management. A particle filter was introduced to predict a target state, which was adopted to awaken wireless sensor nodes so that their sleep time was prolonged. With the distributed computing capability of nodes, an optimization approach combining a distributed genetic algorithm and simulated annealing was proposed to minimize the energy consumption of measurement. Considering the application of target tracking, we implemented target position prediction, node sleep scheduling and optimal sensing node selection. Moreover, a routing scheme for forwarding nodes was presented to achieve extra energy conservation. Experimental results on target tracking verified that energy efficiency is enhanced by prediction-based dynamic energy management.
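
    The wake-up mechanism can be conveyed with a minimal sketch: predict where the target will be next, then wake only the nodes whose sensing range covers that position. The constant-velocity one-step prediction and the sensing radius below are assumptions standing in for the paper's particle-filter prediction.

        import numpy as np

        rng = np.random.default_rng(1)
        nodes = rng.uniform(0, 100, size=(50, 2))   # node positions in a 100 m field
        SENSE_R = 15.0                              # assumed sensing radius

        pos = np.array([20.0, 30.0])                # current target position estimate
        vel = np.array([3.0, 1.5])                  # estimated velocity
        predicted = pos + vel                       # one-step prediction

        dists = np.linalg.norm(nodes - predicted, axis=1)
        awake = np.flatnonzero(dists < SENSE_R)     # every other node keeps sleeping
        print(f"waking {awake.size} of {len(nodes)} nodes:", awake)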

  8. Spatial and Temporal Distribution of Soil-Applied Neonicotinoids in Citrus Tree Foliage.

    PubMed

    Langdon, Kevin W; Schumann, Rhonda; Stelinski, Lukasz L; Rogers, Michael E

    2018-04-23

    Diaphorina citri Kuwayama (Hemiptera: Liviidae) is the insect vector of Candidatus Liberibacter asiaticus (CLas), the presumed cause of huanglongbing (HLB) in citrus (Rutaceae). Soil-applied neonicotinoids are used to manage vector populations and thus reduce the spread of HLB in Florida citrus. Studies were conducted in the greenhouse and field to quantify the spatial and temporal distribution of three neonicotinoid insecticides within individually sampled leaves and throughout the tree canopy. Following field application of Platinum 75SG or Belay 2.13SC, no difference in parent material titer was observed between leaf middles and leaf margins; however, imidacloprid titer was higher in leaf margins than in leaf middles following application of Admire Pro. The bottom region of trees contained more imidacloprid than other regions, but was not different from the spherical center region. In the greenhouse, imidacloprid and clothianidin titers peaked 5 wk following application of Admire and Belay, respectively, and thiamethoxam titer peaked 3 wk after application of Platinum. There was no effect of leaf age on uptake of any insecticide tested. Titers of soil-applied neonicotinoids quantified in the field failed to reach known levels required to kill D. citri. Exposure of D. citri to sublethal dosages of neonicotinoids is of concern for HLB management because of possible failure to protect treated plants from D. citri and selection pressure for development of neonicotinoid resistance. Our results suggest that current soil-based use patterns of neonicotinoids for D. citri management may be suboptimal and require reevaluation to maintain the utility of this chemical class in citrus.

  9. Theory of Constraints for Services: Past, Present, and Future

    NASA Astrophysics Data System (ADS)

    Ricketts, John A.

    Theory of constraints (TOC) is a thinking process and a set of management applications based on principles that run counter to conventional wisdom. TOC is best known in the manufacturing and distribution sectors where it originated. Awareness is growing in some service sectors, such as Health Care. And it's been adopted in some high-tech industries, such as Computer Software. Until recently, however, TOC was barely known in the Professional, Scientific, and Technical Services (PSTS) sector. Professional services include law, accounting, and consulting. Scientific services include research and development. And Technical services include development, operation, and support of various technologies. The main reason TOC took longer to reach PSTS is it's much harder to apply TOC principles when services are highly customized. Nevertheless, with the management applications described in this chapter, TOC has been successfully adapted for PSTS. Those applications cover management of resources, projects, processes, and finances.

  10. Legacy systems: managing evolution through integration in a distributed and object-oriented computing environment.

    PubMed Central

    Lemaitre, D.; Sauquet, D.; Fofol, I.; Tanguy, L.; Jean, F. C.; Degoulet, P.

    1995-01-01

    Legacy systems are crucial for organizations since they support key functionalities. But they become obsolete with aging and the appearance of new techniques. Managing their evolution is a key issue in software engineering. This paper presents a strategy that has been developed at Broussais University Hospital in Paris to make a legacy system devoted to the management of health care units evolve towards new, up-to-date software. A two-phase evolution pathway is described. The first phase consists of separating the interface from the data storage and application control, and of using a communication channel between the individualized components. The second phase proposes to use an object-oriented DBMS in place of the homegrown system. An application example for the management of hypertensive patients is described. PMID:8563252

  11. Investigation of energy management strategies for photovoltaic systems - An analysis technique

    NASA Technical Reports Server (NTRS)

    Cull, R. C.; Eltimsahy, A. H.

    1982-01-01

    Progress is reported in formulating energy management strategies for stand-alone PV systems, developing an analytical tool that can be used to investigate these strategies, applying this tool to determine the proper control algorithms and control variables (controller inputs and outputs) for a range of applications, and quantifying the relative performance and economics when compared to systems that do not apply energy management. The analysis technique developed may be broadly applied to a variety of systems to determine the most appropriate energy management strategies, control variables and algorithms. The only inputs required are statistical distributions for stochastic energy inputs and outputs of the system and the system's device characteristics (efficiency and ratings). Although the formulation was originally driven by stand-alone PV system needs, the techniques are also applicable to hybrid and grid connected systems.

  12. Investigation of energy management strategies for photovoltaic systems - An analysis technique

    NASA Astrophysics Data System (ADS)

    Cull, R. C.; Eltimsahy, A. H.

    Progress is reported in formulating energy management strategies for stand-alone PV systems, developing an analytical tool that can be used to investigate these strategies, applying this tool to determine the proper control algorithms and control variables (controller inputs and outputs) for a range of applications, and quantifying the relative performance and economics when compared to systems that do not apply energy management. The analysis technique developed may be broadly applied to a variety of systems to determine the most appropriate energy management strategies, control variables and algorithms. The only inputs required are statistical distributions for stochastic energy inputs and outputs of the system and the system's device characteristics (efficiency and ratings). Although the formulation was originally driven by stand-alone PV system needs, the techniques are also applicable to hybrid and grid connected systems.
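
    The flavor of the analysis technique can be conveyed by a small Monte Carlo sketch: draw stochastic energy input and load from assumed distributions, apply device characteristics (efficiency and ratings), and compare performance with and without an energy management strategy. The distributions, battery parameters, and the simple load-deferral rule below are invented; this is not the authors' analysis tool.

        import numpy as np

        rng = np.random.default_rng(42)
        days = 365
        pv = np.clip(rng.normal(5.0, 2.0, days), 0, None)    # kWh/day PV input (assumed)
        load = np.clip(rng.normal(4.0, 1.0, days), 0, None)  # kWh/day demand (assumed)
        eff, cap = 0.85, 10.0                                # battery efficiency and rating

        def unmet_load(manage):
            soc, unmet = cap / 2, 0.0
            for e_in, e_out in zip(pv, load):
                if manage and soc < 0.3 * cap:
                    e_out *= 0.7              # strategy: defer 30% of load when storage is low
                soc = min(cap, soc + eff * e_in - e_out)
                if soc < 0:
                    unmet, soc = unmet - soc, 0.0
            return unmet

        print("unmet load, no management:  ", round(unmet_load(False), 1), "kWh")
        print("unmet load, with management:", round(unmet_load(True), 1), "kWh")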

  13. Extensible Interest Management for Scalable Persistent Distributed Virtual Environments

    DTIC Science & Technology

    1999-12-01

    Calvin, Cebula et al. 1995; Morse, Bic et al. 2000) uses a two-grid scheme, with each grid cell having two multicast addresses. An entity expresses interest... [Figure: entity distribution for experimental runs] ...Multiple Users and Shared Applications with VRML. VRML 97, Monterey, CA. pp. 33-40. Calvin, J. O., D. P. Cebula, et al. (1995). Data Subscription in

  14. The AgESGUI geospatial simulation system for environmental model application and evaluation

    USDA-ARS?s Scientific Manuscript database

    Practical decision making in spatially-distributed environmental assessment and management is increasingly being based on environmental process-based models linked to geographical information systems (GIS). Furthermore, powerful computers and Internet-accessible assessment tools are providing much g...

  15. Accessing and distributing EMBL data using CORBA (common object request broker architecture).

    PubMed

    Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P

    2000-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence(TM), an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems.
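
    The "live object cache" managed by an evictor can be sketched generically: objects are created on demand from the backing store, and the least recently used ones are evicted when the cache is full. The class and loader below are hypothetical illustrations, not the EMBL-EBI server code.

        from collections import OrderedDict

        class EvictorCache:
            def __init__(self, loader, max_objects=1000):
                self.loader = loader           # e.g. maps an accession to a database row
                self.max = max_objects
                self.live = OrderedDict()      # id -> live object, in LRU order

            def get(self, obj_id):
                if obj_id in self.live:
                    self.live.move_to_end(obj_id)      # mark as recently used
                else:
                    if len(self.live) >= self.max:
                        self.live.popitem(last=False)  # evictor: drop least recently used
                    self.live[obj_id] = self.loader(obj_id)  # create on demand
                return self.live[obj_id]

        cache = EvictorCache(loader=lambda acc: {"accession": acc}, max_objects=2)
        cache.get("X56734"); cache.get("U49845"); cache.get("AB000263")
        print(list(cache.live))   # ['U49845', 'AB000263'] -- the first entry was evicted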

  16. Accessing and distributing EMBL data using CORBA (common object request broker architecture)

    PubMed Central

    Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip

    2000-01-01

    Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence(TM), an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259

  17. A development framework for artificial intelligence based distributed operations support systems

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1990-01-01

    Advanced automation is required to reduce costly human operations support requirements for complex space-based and ground control systems. Existing knowledge-based technologies have been used successfully to automate individual operations tasks. Considerably less progress has been made in integrating and coordinating multiple operations applications for unified intelligent support systems. To fill this gap, SOCIAL, a tool set for developing Distributed Artificial Intelligence (DAI) systems, is being constructed. SOCIAL consists of three primary language-based components defining: models of interprocess communication across heterogeneous platforms; models for interprocess coordination, concurrency control, and fault management; and models for accessing heterogeneous information resources. DAI application subsystems, either new or existing, will access these distributed services non-intrusively, via high-level message-based protocols. SOCIAL will reduce the complexity of distributed communications, control, and integration, enabling developers to concentrate on the design and functionality of the target DAI system itself.

  18. Planning and Resource Management in an Intelligent Automated Power Management System

    NASA Technical Reports Server (NTRS)

    Morris, Robert A.

    1991-01-01

    Power system management is a process of guiding a power system towards the objective of continuous supply of electrical power to a set of loads. Spacecraft power system management requires planning and scheduling, since electrical power is a scarce resource in space. The automation of power system management for future spacecraft has been recognized as an important R&D goal. Several automation technologies have emerged, including the use of expert systems for automating human problem-solving capabilities, such as rule-based expert systems for fault diagnosis and load scheduling. It is questionable whether current-generation expert system technology is applicable to power system management in space. The objective of ADEPTS (ADvanced Electrical Power management Techniques for Space systems) is to study new techniques for power management automation. These techniques involve integrating current expert system technology with parallel and distributed computing, as well as a distributed, object-oriented approach to software design. The focus of the current study is the integration of new procedures for automatically planning and scheduling loads with procedures for performing fault diagnosis and control. The objective is the concurrent execution of both sets of tasks on separate transputer processors, thus adding parallelism to the overall management process.

  19. Application of receptor-specific risk distribution in the arsenic contaminated land management.

    PubMed

    Chen, I-chun; Ng, Shane; Wang, Gen-shuh; Ma, Hwong-wen

    2013-11-15

    Concerns over health risks and financial costs have caused difficulties in the management of arsenic contaminated land in Taiwan. Inflexible risk criteria and lack of economic support often result in failure of a brownfields regeneration project. To address the issue of flexible risk criteria, this study aimed to develop maps with receptor-specific risk distribution to facilitate scenario analysis of contaminated land management. A contaminated site risk map model (ArcGIS for risk assessment and management, abbreviated as Arc-RAM) was constructed by combining the four major steps of risk assessment with Geographic Information Systems. Sampling of contaminated media, surveys of exposure attributes, and modeling of multimedia transport were integrated to produce receptor group-specific maps that depict the probabilistic spatial distribution of risks for various receptor groups. Flexible risk management schemes can then be developed and assessed. In this study, a risk management program that took into account the ratios of various land use types at specified risk levels was explored. A case study of arsenic contaminated land of 6.387 km(2) found that, for a risk value between 1.00E-05 and 1.00E-06, the proposed flexible risk management of agricultural land achieves improved utilization of land. Using this method, the investigated case can reduce costs related to compensation for farmland by approximately NTD 5.94 million annually.

  20. Blazing the trailway: Nuclear electric propulsion and its technology program plans

    NASA Technical Reports Server (NTRS)

    Doherty, Michael P.

    1992-01-01

    An overview is given of the plans for a program in nuclear electric propulsion (NEP) technology for space applications being considered by NASA, DOE, and DOD. Possible missions using NEP are examined, and NEP technology plans are addressed regarding concept development, systems engineering, nuclear fuels, power conversion, thermal management, power management and distribution, electric thrusters, facilities, and issues related to safety and environment. The programmatic characteristics are considered.

  1. Methods to estimate distribution and range extent of grizzly bears in the Greater Yellowstone Ecosystem

    USGS Publications Warehouse

    Haroldson, Mark A.; Schwartz, Charles C.; Thompson, Daniel J.; Bjornlie, Daniel D.; Gunther, Kerry A.; Cain, Steven L.; Tyers, Daniel B.; Frey, Kevin L.; Aber, Bryan C.

    2014-01-01

    The distribution of the Greater Yellowstone Ecosystem grizzly bear (Ursus arctos) population has expanded into areas unoccupied since the early 20th century. Up-to-date information on the area and extent of this distribution is crucial for federal, state, and tribal wildlife and land managers to make informed decisions regarding grizzly bear management. The most recent estimate of grizzly bear distribution (2004) utilized fixed-kernel density estimators to describe distribution. This method was complex and computationally time consuming and excluded observations of unmarked bears. Our objective was to develop a technique to estimate grizzly bear distribution that would allow for the use of all verified grizzly bear location data, as well as provide the simplicity to be updated more frequently. We placed all verified grizzly bear locations from all sources from 1990 to 2004 and 1990 to 2010 onto a 3-km × 3-km grid and used zonal analysis and ordinary kriging to develop a predicted surface of grizzly bear distribution. We compared the area and extent of the 2004 kriging surface with the previous 2004 effort and evaluated changes in grizzly bear distribution from 2004 to 2010. The 2004 kriging surface was 2.4% smaller than the previous fixed-kernel estimate, but more closely represented the data. Grizzly bear distribution increased 38.3% from 2004 to 2010, with most expansion in the northern and southern regions of the range. This technique can be used to provide a current estimate of grizzly bear distribution for management and conservation applications.
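
    The kriging step reduces, for each prediction point, to solving one small linear system built from a covariance model. The sketch below uses an assumed exponential covariance and synthetic observations in place of the verified bear locations; it illustrates the method, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(7)
        pts = rng.uniform(0, 90, size=(30, 2))    # grid-cell centers with data (km)
        z = rng.uniform(0, 1, 30)                 # observed occurrence index (synthetic)

        def cov(h, sill=1.0, corr_len=30.0):
            return sill * np.exp(-h / corr_len)   # assumed exponential covariance model

        n = len(z)
        d = np.linalg.norm(pts[:, None] - pts[None], axis=2)
        A = np.ones((n + 1, n + 1))               # ordinary kriging system, bordered by
        A[:n, :n] = cov(d); A[n, n] = 0.0         # a Lagrange-multiplier row and column

        def predict(x0):
            b = np.ones(n + 1)
            b[:n] = cov(np.linalg.norm(pts - x0, axis=1))
            w = np.linalg.solve(A, b)             # kriging weights (plus the multiplier)
            return w[:n] @ z                      # predicted surface value at x0

        print(predict(np.array([45.0, 45.0])))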

  2. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, more fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  3. Virtual System Environments

    NASA Astrophysics Data System (ADS)

    Vallée, Geoffroy; Naughton, Thomas; Ong, Hong; Tikotekar, Anand; Engelmann, Christian; Bland, Wesley; Aderholdt, Ferrol; Scott, Stephen L.

    Distributed and parallel systems are typically managed with “static” settings: the operating system (OS) and the runtime environment (RTE) are specified at a given time and cannot be changed to fit an application’s needs. This means that every time application developers want to use their application on a new execution platform, the application has to be ported to this new environment, which may be expensive in terms of application modifications and developer time. However, the science resides in the applications and not in the OS or the RTE. Therefore, it should be beneficial to adapt the OS and the RTE to the application instead of adapting the applications to the OS and the RTE.

  4. EMAAS: An extensible grid-based Rich Internet Application for microarray data analysis and management

    PubMed Central

    Barton, G; Abbott, J; Chiba, N; Huang, DW; Huang, Y; Krznaric, M; Mack-Smith, J; Saleem, A; Sherman, BT; Tiwari, B; Tomlinson, C; Aitman, T; Darlington, J; Game, L; Sternberg, MJE; Butcher, SA

    2008-01-01

    Background Microarray experimentation requires the application of complex analysis methods as well as the use of non-trivial computer technologies to manage the resultant large data sets. This, together with the proliferation of tools and techniques for microarray data analysis, makes it very challenging for a laboratory scientist to keep up-to-date with the latest developments in this field. Our aim was to develop a distributed e-support system for microarray data analysis and management. Results EMAAS (Extensible MicroArray Analysis System) is a multi-user rich internet application (RIA) providing simple, robust access to up-to-date resources for microarray data storage and analysis, combined with integrated tools to optimise real-time user support and training. The system leverages the power of distributed computing to perform microarray analyses, and provides seamless access to resources located at various remote facilities. The EMAAS framework allows users to import microarray data from several sources to an underlying database, to pre-process, quality assess and analyse the data, to perform functional analyses, and to track data analysis steps, all through a single easy-to-use web portal. This interface offers distance support to users both in the form of video tutorials and via live screen feeds using the web conferencing tool EVO. A number of analysis packages, including R-Bioconductor and Affymetrix Power Tools, have been integrated on the server side and are available programmatically through the Postgres-PLR library or on grid compute clusters. Integrated distributed resources include the functional annotation tool DAVID, GeneCards and the microarray data repositories GEO, CELSIUS and MiMiR. EMAAS currently supports analysis of Affymetrix 3' and Exon expression arrays, and the system is extensible to cater for other microarray and transcriptomic platforms. Conclusion EMAAS enables users to track and perform microarray data management and analysis tasks through a single easy-to-use web application. The system architecture is flexible and scalable to allow new array types, analysis algorithms and tools to be added with relative ease and to cope with large increases in data volume. PMID:19032776

  5. 78 FR 4589 - Proposed Information Collections; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-22

    ... Office of Management and Budget (OMB) approval of the relevant information collection. All comments are... maintain spirits accountability and protect tax revenue and public safety. The record retention requirement... and distribution, and protect tax revenue and public safety. Letterhead application and notice...

  6. 7 CFR 275.9 - Review process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING SYSTEM Management Evaluation (ME... problems and the causes of those problems. As each project area's operational structure will differ, State agencies shall review each program requirement applicable to the project area in a manner which will best...

  7. Scholarly Communication, Academic Libraries, and Technology.

    ERIC Educational Resources Information Center

    Ekman, Richard H.; Quandt, Richard E.

    1995-01-01

    Economic, technical, administrative, and other issues to be resolved in scholarly communication in an age of advancing technology are discussed. Recent initiatives in electronic publishing that address concerns in the areas of scholarly journals, books, data distribution and management, multimedia approaches, nontraditional applications, access…

  8. Watershed Management Tool for Selection and Spacial Allocation of Non-Point Source Pollution Control Practices

    EPA Science Inventory

    Distributed-parameter watershed models are often utilized for evaluating the effectiveness of sediment and nutrient abatement strategies through the traditional {calibrate → validate → predict} approach. The applicability of the method is limited due to modeling approximations. In ...

  9. Applications of Geomatics in Surface Mining

    NASA Astrophysics Data System (ADS)

    Blachowski, Jan; Górniak-Zimroz, Justyna; Milczarek, Wojciech; Pactwa, Katarzyna

    2017-12-01

    In terms of the method used to extract minerals from deposits, mining can be classified into surface, underground, and borehole mining. Surface mining is a form of mining in which the soil and the rock covering the mineral deposits are removed. Types of surface mining include mainly strip and open-cast methods, as well as quarrying. Tasks associated with surface mining of minerals include: resource estimation and deposit documentation, mine planning and deposit access, mine plant development, extraction of minerals from deposits, mineral and waste processing, and reclamation of former mining grounds. At each stage of mining, geodata describing changes occurring in space during the entire life cycle of a surface mining project should be taken into consideration, i.e. collected, analysed, processed, examined, and distributed. These data result from direct (e.g. geodetic) and indirect (i.e. remote or relative) measurements and observations, including airborne and satellite methods, geotechnical, geological and hydrogeological data, and data from other types of sensors, e.g. located on mining equipment and infrastructure, as well as mine plans and maps. Management of such vast sources and sets of geodata, as well as of the information resulting from processing, integrated analysis and examination of such data, can be facilitated with geomatic solutions. Geomatics is a discipline of gathering, processing, interpreting, storing and delivering spatially referenced information. Thus, geomatics integrates methods and technologies used for collecting, managing, processing, visualizing and distributing spatial data; in other words, its meaning covers practically every method and tool from spatial data acquisition to distribution. In this work, applications of geomatic solutions in surface mining are presented through representative case studies covering various stages of mine operation. These applications include: prospecting and documenting mineral deposits, assessment of land accessibility for a potential large-scale surface mining project, modelling mineral deposit (granite) management, a concept of a system for managing the technical condition of a conveyor belt network, a project of a geoinformation system of former mining terrains and objects, and monitoring and control of the impact of surface mining on mine surroundings with satellite radar interferometry.

  10. Advanced electrical power, distribution and control for the Space Transportation System

    NASA Astrophysics Data System (ADS)

    Hansen, Irving G.; Brandhorst, Henry W., Jr.

    1990-08-01

    High-frequency power distribution and management is at a technology-ready state of development. Such a system employs the fewest power conversion steps and uses zero-current switching for those steps. It yields the highest efficiency and the lowest total parts count when equivalent systems are compared. The operating voltage and frequency are application-specific trade-off parameters; however, a 20 kHz system is suitable for a wide range of systems.

  11. Advanced electrical power, distribution and control for the Space Transportation System

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.; Brandhorst, Henry W., Jr.

    1990-01-01

    High-frequency power distribution and management is at a technology-ready state of development. Such a system employs the fewest power conversion steps and uses zero-current switching for those steps. It yields the highest efficiency and the lowest total parts count when equivalent systems are compared. The operating voltage and frequency are application-specific trade-off parameters; however, a 20 kHz system is suitable for a wide range of systems.

  12. An eConsent-based System Architecture Supporting Cooperation in Integrated Healthcare Networks.

    PubMed

    Bergmann, Joachim; Bott, Oliver J; Hoffmann, Ina; Pretschner, Dietrich P

    2005-01-01

    The economical need for efficient healthcare leads to cooperative shared care networks. A virtual electronic health record is required which integrates patient-related information but reflects the distributed infrastructure and restricts access to those health professionals involved in the care process. Our work aims at the specification and development of a system architecture fulfilling these requirements, to be used in concrete regional pilot studies. Methodical analysis and specification have been performed in a healthcare network using the formal method and modelling tool MOSAIK-M. The complexity of the application field was reduced by focusing on the scenario of thyroid disease care, which still includes various interdisciplinary cooperation. The result is an architecture for a secure distributed electronic health record for integrated care networks, specified in terms of a MOSAIK-M-based system model. The architecture proposes business processes, application services, and a sophisticated security concept, providing a platform for distributed, document-based, patient-centred, and secure cooperation. A corresponding system prototype has been developed for pilot studies, using advanced application server technologies. The architecture combines consolidated patient-centred document management with a decentralized system structure without the need for replication management. An eConsent-based approach ensures that access to the distributed health record remains under the control of the patient. The proposed architecture replaces message-based communication approaches, because it implements a virtual health record providing complete and current information. Acceptance of the new communication services depends on compatibility with clinical routine. Unique and cross-institutional identification of a patient is also a challenge, but will lose significance as common patient cards become established.

  13. Research on Collaborative Technology in Distributed Virtual Reality System

    NASA Astrophysics Data System (ADS)

    Lei, ZhenJiang; Huang, JiJie; Li, Zhao; Wang, Lei; Cui, JiSheng; Tang, Zhi

    2018-01-01

    Distributed virtual reality technology applied to joint training simulation requires CSCW (Computer Supported Cooperative Work) terminal multicast technology for display and HLA (High Level Architecture) technology to ensure the temporal and spatial consistency of the simulation, in order to achieve collaborative display and collaborative computing. In this paper, CSCW terminal multicast technology is used to modify and extend the implementation framework of HLA. During simulation initialization, the HLA declaration and object management service interfaces are used to establish and manage the CSCW network topology, and the HLA data filtering mechanism is used to establish a corresponding Mesh tree for each federate. While the simulation is running, a new thread for CSCW real-time multicast interaction is added to the RTI, so that the RTI can also use the window message mechanism to notify the application to update the display. Extensive application to immersive substation simulation training under large power grid operation shows that the collaborative technology achieves satisfactory training results in distributed virtual reality simulation.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report contains papers on the following topics: NREN Security Issues: Policies and Technologies; Layer Wars: Protect the Internet with Network Layer Security; Electronic Commission Management; Workflow 2000 - Electronic Document Authorization in Practice; Security Issues of a UNIX PEM Implementation; Implementing Privacy Enhanced Mail on VMS; Distributed Public Key Certificate Management; Protecting the Integrity of Privacy-enhanced Electronic Mail; Practical Authorization in Large Heterogeneous Distributed Systems; Security Issues in the Truffles File System; Issues surrounding the use of Cryptographic Algorithms and Smart Card Applications; Smart Card Augmentation of Kerberos; and An Overview of the Advanced Smart Card Access Control System. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.

  15. Energy-efficient sensing in wireless sensor networks using compressed sensing.

    PubMed

    Razzaque, Mohammad Abdur; Dobson, Simon

    2014-02-12

    Sensing of the application environment is the main purpose of a wireless sensor network. Most existing energy management strategies and compression techniques assume that the sensing operation consumes significantly less energy than radio transmission and reception. This assumption does not hold in a number of practical applications. Sensing energy consumption in these applications may be comparable to, or even greater than, that of the radio. In this work, we support this claim by a quantitative analysis of the main operational energy costs of popular sensors, radios and sensor motes. In light of the importance of sensing level energy costs, especially for power hungry sensors, we consider compressed sensing and distributed compressed sensing as potential approaches to provide energy efficient sensing in wireless sensor networks. Numerical experiments investigating the effectiveness of compressed sensing and distributed compressed sensing using real datasets show their potential for efficient utilization of sensing and overall energy costs in wireless sensor networks. It is shown that, for some applications, compressed sensing and distributed compressed sensing can provide greater energy efficiency than transform coding and model-based adaptive sensing in wireless sensor networks.
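
    As a concrete illustration of the compressed-sensing idea, the sketch below recovers a sparse signal from far fewer random projections than samples using orthogonal matching pursuit. The sizes, sparsity, and Gaussian sensing matrix are illustrative assumptions, not the paper's datasets.

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, k = 256, 64, 5                  # signal length, measurements, sparsity
        x = np.zeros(n)
        x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # sparse signal
        Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))                # random sensing matrix
        y = Phi @ x                           # the m values a node would transmit

        support, r = [], y.copy()
        for _ in range(k):                    # orthogonal matching pursuit
            support.append(int(np.argmax(np.abs(Phi.T @ r))))      # best-matching column
            sub = Phi[:, support]
            coef, *_ = np.linalg.lstsq(sub, y, rcond=None)         # re-fit on the support
            r = y - sub @ coef                # update the residual
        x_hat = np.zeros(n); x_hat[support] = coef
        print("reconstruction error:", np.linalg.norm(x - x_hat))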

  16. The development of an airborne information management system for flight test

    NASA Technical Reports Server (NTRS)

    Bever, Glenn A.

    1992-01-01

    An airborne information management system is being developed at the NASA Dryden Flight Research Facility. This system will improve the state of the art in data acquisition management on board research aircraft. The design centers around highly distributable, high-speed microprocessors that allow data compression, digital filtering, and real-time analysis. This paper describes the areas of applicability, the approach to developing the system, potential trouble areas, and the reasons for this development activity. System architecture (including the salient points of what makes it unique), design philosophy, and tradeoff issues are also discussed.

  17. Reinforcement Learning Applications to Combat Identification

    DTIC Science & Technology

    2017-03-01

    Crucial to the safe and effective operation of U.S. Navy vessels is the quick and accurate... abilities of a human operator. While this research does focus on a sea-based, naval application, the findings can also be expanded to DOD-wide

  18. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing.

    PubMed

    Zhang, Nan; Yang, Xiaolong; Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network.
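
    The Shapley-value split can be computed directly by averaging each provider's marginal contribution over all orderings of the cooperators. The three providers and the coalition revenue function below are invented for the example; they are not the paper's model.

        from itertools import permutations

        providers = ("A", "B", "C")
        v = {(): 0, ("A",): 2, ("B",): 3, ("C",): 4,          # revenue each coalition
             ("A", "B"): 7, ("A", "C"): 8, ("B", "C"): 9,     # could earn on its own
             ("A", "B", "C"): 12}                             # (assumed numbers)

        def value(coalition):
            return v[tuple(sorted(coalition))]

        shares = dict.fromkeys(providers, 0.0)
        orders = list(permutations(providers))
        for order in orders:                  # average marginal contributions
            seen = []
            for p in order:
                shares[p] += value(seen + [p]) - value(seen)
                seen.append(p)
        shares = {p: s / len(orders) for p, s in shares.items()}
        print(shares)                         # {'A': 3.0, 'B': 4.0, 'C': 5.0}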

  19. Crowd-Funding: A New Resource Cooperation Mode for Mobile Cloud Computing

    PubMed Central

    Zhang, Min; Sun, Yan

    2016-01-01

    Mobile cloud computing, which integrates the cloud computing techniques into the mobile environment, is regarded as one of the enabler technologies for 5G mobile wireless networks. There are many sporadic spare resources distributed within various devices in the networks, which can be used to support mobile cloud applications. However, these devices, with only a few spare resources, cannot support some resource-intensive mobile applications alone. If some of them cooperate with each other and share their resources, then they can support many applications. In this paper, we propose a resource cooperative provision mode referred to as "Crowd-funding", which is designed to aggregate the distributed devices together as the resource provider of mobile applications. Moreover, to facilitate high-efficiency resource management via dynamic resource allocation, different resource providers should be selected to form a stable resource coalition for different requirements. Thus, considering different requirements, we propose two different resource aggregation models for coalition formation. Finally, we may allocate the revenues based on their attributions according to the concept of the "Shapley value" to enable a more impartial revenue share among the cooperators. It is shown that a dynamic and flexible resource-management method can be developed based on the proposed Crowd-funding model, relying on the spare resources in the network. PMID:28030553

  20. The ATLAS PanDA Monitoring System and its Evolution

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.

  1. The Particle Physics Data Grid. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Livny, Miron

    2002-08-16

    The main objective of the Particle Physics Data Grid (PPDG) project has been to implement and evaluate distributed (Grid-enabled) data access and management technology for current and future particle and nuclear physics experiments. The specific goals of PPDG have been to design, implement, and deploy a Grid-based software infrastructure capable of supporting the data generation, processing and analysis needs common to the physics experiments represented by the participants, and to adapt experiment-specific software to operate in the Grid environment and to exploit this infrastructure. To accomplish these goals, the PPDG focused on the implementation and deployment of several critical services: reliable and efficient file replication service, high-speed data transfer services, multisite file caching and staging service, and reliable and recoverable job management services. The focus of the activity was the job management services and the interplay between these services and distributed data access in a Grid environment. Software was developed to study the interaction between HENP applications and the distributed data storage fabric. One key conclusion was the need for a reliable and recoverable tool for managing large collections of interdependent jobs. An attached document provides an overview of the current status of the Directed Acyclic Graph Manager (DAGMan) with its main features and capabilities.
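
    The notion of managing large collections of interdependent jobs can be illustrated with a topological-sort job runner. The jobs, dependency edges, and retry hook below are invented; this sketch shows the concept only and is not DAGMan's actual interface.

        from graphlib import TopologicalSorter

        deps = {                                  # job -> the jobs it depends on
            "stage_in":    set(),
            "reconstruct": {"stage_in"},
            "analyze":     {"reconstruct"},
            "stage_out":   {"analyze"},
        }

        def run(job):
            print("running", job)                 # would submit to the Grid in practice
            return True                           # pretend the job succeeded

        for job in TopologicalSorter(deps).static_order():   # respects all dependencies
            if not run(job):
                print("resubmitting", job)        # recovery hook: retry or rescue the DAG
                break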

  2. Exploiting replication in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, T. A.

    1989-01-01

    Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as the reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs.
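
    As a toy illustration of that reduction (an assumption-laden sketch, not the authors' actual protocols): if a sequencer assigns a single total order to updates and every replica applies them in that order, replica states cannot diverge, however messages are interleaved on receipt.

        class Sequencer:
            """One classic way to obtain a total order for broadcasts."""
            def __init__(self):
                self.seq = 0
            def order(self, update):
                self.seq += 1
                return (self.seq, update)

        class Replica:
            def __init__(self):
                self.state, self.next_seq, self.pending = {}, 1, {}
            def deliver(self, msg):
                seq, (key, value) = msg
                self.pending[seq] = (key, value)
                while self.next_seq in self.pending:   # apply in total order only
                    k, v = self.pending.pop(self.next_seq)
                    self.state[k] = v
                    self.next_seq += 1

        seq = Sequencer()
        replicas = [Replica(), Replica()]
        msgs = [seq.order(("x", 1)), seq.order(("x", 2))]
        for r in replicas:
            for m in reversed(msgs):                   # even out-of-order receipt is safe
                r.deliver(m)
        assert replicas[0].state == replicas[1].state == {"x": 2}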

  3. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    USGS Publications Warehouse

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew; Chignell, Steve

    2017-01-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.
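
    A minimal sketch of the thresholding step described above, with made-up suitability scores and an invented independent presence/absence sample: scan candidate cover thresholds and report the sensitivity each one achieves.

        import numpy as np

        scores = np.array([0.9, 0.8, 0.75, 0.4, 0.6, 0.3, 0.2, 0.1])   # model output
        observed = np.array([1, 1, 1, 1, 0, 0, 0, 0])                   # field data

        for t in np.arange(0.1, 0.9, 0.1):
            pred = scores >= t
            tp = np.sum(pred & (observed == 1))       # presences correctly flagged
            fn = np.sum(~pred & (observed == 1))      # presences missed
            print(f"threshold={t:.1f}  sensitivity={tp / (tp + fn):.2f}")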

  4. Using multi-date satellite imagery to monitor invasive grass species distribution in post-wildfire landscapes: An iterative, adaptable approach that employs open-source data and software

    NASA Astrophysics Data System (ADS)

    West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew W.; Chignell, Stephen M.

    2017-07-01

    Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.

  5. Automated Planning and Scheduling for Space Mission Operations

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Jonsson, Ari; Knight, Russell

    2005-01-01

    Research Trends: a) Finite-capacity scheduling under more complex constraints and increased problem dimensionality (subcontracting, overtime, lot splitting, inventory, etc.) b) Integrated planning and scheduling. c) Mixed-initiative frameworks. d) Management of uncertainty (proactive and reactive). e) Autonomous agent architectures and distributed production management. f) Integration of machine learning capabilities. g) Wider scope of applications: 1) analysis of supplier/buyer protocols & tradeoffs; 2) integration of strategic & tactical decision-making; and 3) enterprise integration.

  6. An overview of the artificial intelligence and expert systems component of RICIS

    NASA Technical Reports Server (NTRS)

    Feagin, Terry

    1987-01-01

    Artificial intelligence and expert systems are an important component of the RICIS (Research Institute for Computing and Information Systems) research program. For space applications, the problem areas that should be able to make good use of these tools include: resource allocation and management, control and monitoring, environmental control and life support, power distribution, communications scheduling, orbit and attitude maintenance, redundancy management, intelligent man-machine interfaces, and fault detection, isolation, and recovery.

  7. Smart distribution systems

    DOE PAGES

    Jiang, Yazhou; Liu, Chen -Ching; Xu, Yin

    2016-04-19

    The increasing importance of system reliability and resilience is changing the way distribution systems are planned and operated. To achieve a distribution system self-healing against power outages, emerging technologies and devices, such as remote-controlled switches (RCSs) and smart meters, are being deployed. The higher level of automation is transforming traditional distribution systems into the smart distribution systems (SDSs) of the future. The availability of data and remote control capability in SDSs provides distribution operators with an opportunity to optimize system operation and control. In this paper, the development of SDSs and resulting benefits of enhanced system capabilities are discussed. A comprehensive survey is conducted on the state-of-the-art applications of RCSs and smart meters in SDSs. Specifically, a new method, called Temporal Causal Diagram (TCD), is used to incorporate outage notifications from smart meters for enhanced outage management. To fully utilize the fast operation of RCSs, the spanning tree search algorithm is used to develop service restoration strategies. Optimal placement of RCSs and the resulting enhancement of system reliability are discussed. Distribution system resilience with respect to extreme events is presented. Furthermore, test cases are used to demonstrate the benefit of SDSs. Active management of distributed generators (DGs) is introduced. Future research in a smart distribution environment is proposed.
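
    A minimal sketch of spanning-tree-based restoration under stated assumptions (the toy feeder topology, bus names, and fault location below are invented): once protection isolates the faulted section, a breadth-first search from each energized source over remotely controlled switches yields a radial set of switch closures that re-energizes the reachable buses.

        from collections import deque

        edges = {                      # switchable lines between buses
            "sub": ["b1"], "b1": ["b2", "b3"], "b2": ["b4"],
            "b3": [], "b4": [], "tie": ["b3", "b4"],
        }
        faulted = {"b1"}               # section isolated by protection

        def restore(source):
            tree, seen, q = [], {source}, deque([source])
            while q:                   # BFS keeps the result radial (a tree)
                u = q.popleft()
                for v in edges.get(u, []):
                    if v not in seen and v not in faulted:
                        seen.add(v)
                        tree.append((u, v))   # close this switch
                        q.append(v)
            return tree

        print(restore("sub"))   # nothing downstream of the fault is reachable here
        print(restore("tie"))   # buses behind the fault are picked up via the tie switch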

  8. Applications of acoustics in insect pest management

    USDA-ARS?s Scientific Manuscript database

    Acoustic technology has been applied for many years in studies of insect communication and in the monitoring of calling-insect population levels, geographic distributions, and diversity, as well as in the detection of cryptic insects in soil, wood, container crops, and stored products. Acoustic devi...

  9. 40 CFR 63.11085 - What are my general duties to minimize emissions?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Emission Limitations and Management Practices... control practices for minimizing emissions. Determination of whether such operation and maintenance... operation and maintenance records, and inspection of the source. (b) You must keep applicable records and...

  10. 40 CFR 63.11085 - What are my general duties to minimize emissions?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Emission Limitations and Management Practices... control practices for minimizing emissions. Determination of whether such operation and maintenance... operation and maintenance records, and inspection of the source. (b) You must keep applicable records and...

  11. 40 CFR 63.11085 - What are my general duties to minimize emissions?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Distribution Bulk Terminals, Bulk Plants, and Pipeline Facilities Emission Limitations and Management Practices... control practices for minimizing emissions. Determination of whether such operation and maintenance... operation and maintenance records, and inspection of the source. (b) You must keep applicable records and...

  12. Phylogenetic diversity of Brazilian Metarhizium associated with sugarcane agriculture

    USDA-ARS?s Scientific Manuscript database

    Biological control of spittlebug with Metarhizium in sugarcane is an example of the successful application of sustainable pest management in Brazil. However little is known about the richness, distribution and ecology of Metarhizium species in the agroecosystems and natural environments of Brazil. W...

  13. 75 FR 11477 - Proposed Establishment of Class E Airspace; Kemmerer, WY

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-11

    ... new Area Navigation (RNAV) Global Positioning System (GPS) Standard Instrument Approach Procedures... be submitted in triplicate to the Docket Management System (see ADDRESSES section for address and... Proposed Rulemaking Distribution System, which describes the application procedure. The Proposal The FAA is...

  14. DEVELOPMENT OF RISK ASSESSMENT METHODOLOGY FOR MUNICIPAL SLUDGE INCINERATION

    EPA Science Inventory

    This is one of a series of reports that present methodologies for assessing the potential risks to humans or other organisms from the disposal or reuse of municipal sludge. he sludge management practices addressed by this series include land application practices, distribution an...

  15. DEVELOPMENT OF RISK ASSESSMENT METHODOLOGY FOR MUNICIPAL SLUDGE LANDFILLING

    EPA Science Inventory

    This is one of a series of reports that present methodologies for assessing the potential risks to humans or other organisms from the disposal or reuse of municipal sludge. he sludge management practices addressed by this series include land application practices, distribution an...

  16. Multimedia on the Network: Has Its Time Come?

    ERIC Educational Resources Information Center

    Galbreath, Jeremy

    1995-01-01

    Examines the match between multimedia data and local area network (LAN) infrastructures. Highlights include applications for networked multimedia, i.e., asymmetric and symmetric; alternate LAN technology, including stream management software, Ethernet, FDDI (Fiber Distributed Data Interface), and ATM (Asynchronous Transfer Mode); WAN (Wide Area…

  17. Scalable parallel distance field construction for large-scale applications

    DOE PAGES

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; ...

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.

  18. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.
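
    For orientation, this is what a single-node distance field computation looks like; the paper's contribution is a distributed parallel-distance-tree version of the same quantity. The toy volume is an assumption, and SciPy's Euclidean distance transform stands in for the scalable algorithm.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        volume = np.zeros((64, 64, 64), dtype=bool)
        volume[30:34, 30:34, 30:34] = True          # toy "surface" voxels

        # distance_transform_edt measures distance to the nearest zero element,
        # so invert the mask: dist[i,j,k] = distance to the nearest surface voxel.
        dist = distance_transform_edt(~volume)
        print(dist.shape, dist.max())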

  19. Distributed sensor management for space situational awareness via a negotiation game

    NASA Astrophysics Data System (ADS)

    Jia, Bin; Shen, Dan; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2015-05-01

    Space situational awareness (SSA) is critical to many space missions serving weather analysis, communications, and navigation. However, the number of sensors available for space situational awareness is limited, which hinders collision avoidance prediction, debris assessment, and efficient routing. Hence, it is critical to use such sensor resources efficiently, and it is desirable to develop the SSA sensor management algorithm in a distributed manner. In this paper, a distributed sensor management approach using a negotiation game (NG-DSM) is proposed for SSA. Specifically, the proposed negotiation game is played by each sensor and its neighboring sensors, and bargaining strategies are developed for each sensor by negotiating for accurate tracking of the desired targets (e.g., satellites, debris). The proposed NG-DSM method is tested in a scenario that includes eight space objects and three different sensor modalities: a space-based optical sensor, a ground radar, and a ground electro-optic sensor. The geometric relation between the sensor, the Sun, and the space object is also considered. The simulation results demonstrate the effectiveness of the proposed NG-DSM sensor management method, which facilitates the application of multiple-sensor, multiple-target tracking for space situational awareness.

  20. Evaluation and prediction of solar radiation for energy management based on neural networks

    NASA Astrophysics Data System (ADS)

    Aldoshina, O. V.; Van Tai, Dinh

    2017-08-01

    Currently, renewable energy sources and distributed power generation based on intelligent networks are spreading rapidly; meteorological forecasts are therefore particularly useful for planning and managing the energy system in order to increase its overall efficiency and productivity. This article presents an application of artificial neural networks (ANNs) in the field of photovoltaic energy. Two recurrent dynamic ANNs implemented in this study, a concentrated time-delay neural network (CTDNN) and a nonlinear autoregressive network with exogenous inputs (NAEI), are used to develop a model for estimating and forecasting daily solar radiation. The ANNs perform well, yielding reliable and accurate models of daily solar radiation, which makes it possible to predict the photovoltaic output power of the installation. The potential of the proposed method for managing the energy of the electrical network is demonstrated by applying the NAEI network to electric load prediction.
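
    A minimal sketch of the autoregressive-with-exogenous-inputs idea (NARX-style, as in the paper's NAEI network), under invented assumptions: synthetic radiation and temperature series, a 24-step lag window, and a small scikit-learn regressor in place of the authors' architecture.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        t = np.arange(500)
        # made-up hourly series: clipped-sine "radiation" plus noise, lagged "temperature"
        radiation = np.clip(np.sin(2 * np.pi * t / 24), 0, None) + 0.05 * rng.standard_normal(500)
        temperature = 20 + 5 * np.sin(2 * np.pi * (t - 3) / 24)

        lags = 24
        n = len(radiation) - lags
        # each row: the last 24 radiation values plus the last 24 exogenous values
        X = np.column_stack([radiation[i:i + n] for i in range(lags)] +
                            [temperature[i:i + n] for i in range(lags)])
        y = radiation[lags:]                     # one-step-ahead target

        model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        model.fit(X[:400], y[:400])
        print("test R^2:", model.score(X[400:], y[400:]))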

  1. A component-based, distributed object services architecture for a clinical workstation.

    PubMed

    Chueh, H C; Raila, W F; Pappas, J J; Ford, M; Zatsman, P; Tu, J; Barnett, G O

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems and newly designed software. We describe one approach to an architecture for a clinical workstation application that is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components that can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces.

  2. A component-based, distributed object services architecture for a clinical workstation.

    PubMed Central

    Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.

    1996-01-01

    Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems and newly designed software. We describe one approach to an architecture for a clinical workstation application that is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components that can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744

  3. REEF: Retainable Evaluator Execution Framework

    PubMed Central

    Weimer, Markus; Chen, Yingda; Chun, Byung-Gon; Condie, Tyson; Curino, Carlo; Douglas, Chris; Lee, Yunseong; Majestro, Tony; Malkhi, Dahlia; Matusevych, Sergiy; Myers, Brandon; Narayanamurthy, Shravan; Ramakrishnan, Raghu; Rao, Sriram; Sears, Russell; Sezgin, Beysim; Wang, Julia

    2015-01-01

    Resource Managers like Apache YARN have emerged as a critical layer in the cloud computing system stack, but the developer abstractions for leasing cluster resources and instantiating application logic are very low-level. This flexibility comes at a high cost in terms of developer effort, as each application must repeatedly tackle the same challenges (e.g., fault-tolerance, task scheduling and coordination) and re-implement common mechanisms (e.g., caching, bulk-data transfers). This paper presents REEF, a development framework that provides a control-plane for scheduling and coordinating task-level (data-plane) work on cluster resources obtained from a Resource Manager. REEF provides mechanisms that facilitate resource re-use for data caching, and state management abstractions that greatly ease the development of elastic data processing work-flows on cloud platforms that support a Resource Manager service. REEF is being used to develop several commercial offerings such as the Azure Stream Analytics service. Furthermore, we demonstrate REEF development of a distributed shell application, a machine learning algorithm, and a port of the CORFU [4] system. REEF is also currently an Apache Incubator project that has attracted contributors from several institutions. PMID:26819493

  4. Semantic World Modelling and Data Management in a 4d Forest Simulation and Information System

    NASA Astrophysics Data System (ADS)

    Roßmann, J.; Hoppen, M.; Bücken, A.

    2013-08-01

    Various types of 3D simulation applications benefit from realistic forest models. They range from flight simulators for entertainment to harvester simulators for training and tree growth simulations for research and planning. Our 4D forest simulation and information system integrates the necessary methods for data extraction, modelling and management. Using modern methods of semantic world modelling, tree data can efficiently be extracted from remote sensing data. The derived forest models contain position, height, crown volume, type and diameter of each tree. This data is modelled using GML-based data models to assure compatibility and exchangeability. A flexible approach for database synchronization is used to manage the data and provide caching, persistence, a central communication hub for change distribution, and a versioning mechanism. Combining various simulation techniques and data versioning, the 4D forest simulation and information system can provide applications with "both directions" of the fourth dimension. Our paper outlines the current state, new developments, and integration of tree extraction, data modelling, and data management. It also shows several applications realized with the system.

  5. Robust Decision Making Approach to Managing Water Resource Risks (Invited)

    NASA Astrophysics Data System (ADS)

    Lempert, R.

    2010-12-01

    The IPCC and US National Academies of Science have recommended iterative risk management as the best approach for water management and many other types of climate-related decisions. Such an approach does not rely on a single set of judgments at any one time but rather actively updates and refines strategies as new information emerges. In addition, the approach emphasizes that a portfolio of different types of responses, rather than any single action, often provides the best means to manage uncertainty. Implementing an iterative risk management approach can however prove difficult in actual decision support applications. This talk will suggest that robust decision making (RDM) provides a particularly useful set of quantitative methods for implementing iterative risk management. This RDM approach is currently being used in a wide variety of water management applications. RDM employs three key concepts that differentiate it from most types of probabilistic risk analysis: 1) characterizing uncertainty with multiple views of the future (which can include sets of probability distributions) rather than a single probabilistic best-estimate, 2) employing a robustness rather than an optimality criterion to assess alternative policies, and 3) organizing the analysis with a vulnerability and response option framework, rather than a predict-then-act framework. This talk will summarize the RDM approach, describe its use in several different types of water management applications, and compare the results to those obtained with other methods.
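
    A minimal sketch of RDM's robustness criterion under invented assumptions (three candidate policies, three plausible futures, made-up payoffs): instead of optimizing against one best-estimate future, compute each policy's regret in every future and pick the policy with the smallest worst-case regret.

        import numpy as np

        # rows = policies, columns = plausible futures (e.g. demand/climate cases)
        payoff = np.array([
            [10.0, 2.0, 4.0],    # build large reservoir
            [ 7.0, 6.0, 5.0],    # portfolio: conservation + small storage
            [ 9.0, 1.0, 8.0],    # rely on water transfers
        ])

        regret = payoff.max(axis=0) - payoff        # shortfall vs best-in-each-future
        worst_regret = regret.max(axis=1)
        print("minimax-regret policy:", int(worst_regret.argmin()))   # -> 1, the portfolio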

  6. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services, and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. As an intermediate middleware system between clients and external information sources (such as the central BDII, GOCDB, and MyOSG), AGIS defines the relations between the experiment-specific resources in use and the physical distributed computing capabilities. In production since LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations, and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving to fit new requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of technologies that have recently become widely used in ATLAS Computing, such as flexible utilization of opportunistic Cloud and HPC resources, integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified declaration of storage protocols required for PanDA Pilot site movers. Improvements to the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  7. An Open Source Extensible Smart Energy Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rankin, Linda

    Aggregated distributed energy resources are the subject of much interest in the energy industry and are expected to play an important role in meeting our future energy needs by changing how we use, distribute, and generate electricity. This energy future includes an increased amount of energy from renewable resources, load management techniques to improve resiliency and reliability, and distributed energy storage and generation capabilities that can be managed to meet the needs of the grid as well as individual customers. These energy assets are commonly referred to as Distributed Energy Resources (DER). DERs rely on a means to communicate information between an energy provider and multitudes of devices. Today, DER control systems are typically vendor-specific, using custom hardware and software solutions. As a result, customers are locked into communication transport protocols, applications, tools, and data formats. Today's systems are often difficult to extend to meet new application requirements, resulting in stranded assets when business requirements or energy management models evolve. By partnering with industry advisors and researchers, a DER research platform called the Smart Energy Framework (SEF) was developed. The hypothesis of this research was that an open source Internet of Things (IoT) framework could play a role in creating a commodity-based ecosystem for DER assets that would reduce costs and provide interoperable products. SEF is based on the AllJoyn IoT open source framework. The demonstration system incorporated DER assets, specifically batteries and smart water heaters. To verify the behavior of the distributed system, models of water heaters and batteries were also developed. An IoT interface for communicating between the assets and a control server was defined. This interface supports a series of "events" and telemetry reporting, similar to those defined by current smart grid communication standards. The results of this effort demonstrated the feasibility and application potential of using IoT frameworks for the creation of commodity-based DER systems. All of the identified commodity-based system requirements were met by the AllJoyn framework. With commodity solutions, small vendors can enter the market and the cost of implementation for all parties is reduced. Utilities and aggregators can choose from multiple interoperable products, reducing the risk of stranded assets. Based on this research, it is recommended that interfaces based on existing smart grid communication protocol standards be created for these emerging IoT frameworks. These interfaces should be standardized as part of the IoT framework, allowing for interoperability testing and certification. Similarly, IoT frameworks are introducing application-level security; this type of security is needed for protecting applications and platforms and will be important moving forward. It is also recommended that, along with DER-based data model interfaces, platform and application security requirements be prescribed when IoT devices support DER applications.

  8. The Arbo‑zoonet Information System.

    PubMed

    Di Lorenzo, Alessio; Di Sabatino, Daria; Blanda, Valeria; Cioci, Daniela; Conte, Annamaria; Bruno, Rossana; Sauro, Francesca; Calistri, Paolo; Savini, Lara

    2016-06-30

    The Arbo‑zoonet Information System has been developed as part of the 'International Network for Capacity Building for the Control of Emerging Viral Vector Borne Zoonotic Diseases (Arbo‑zoonet)' project. The project aims to create common knowledge by sharing data, expertise, experiences, and scientific information on West Nile Disease (WND), Crimean‑Congo haemorrhagic fever (CCHF), and Rift Valley fever (RVF). These arthropod‑borne diseases of domestic and wild animals can affect humans, posing a great threat to public health. Since November 2011, when the Schmallenberg virus (SBV) was discovered for the first time in Northern Europe, the Arbo‑zoonet Information System has been used to collect information on the newly discovered disease and to manage the epidemic emergency. The system has monitored the geographical distribution and epidemiological evolution of CCHF, RVF, and WND since 1946. More recently, it has also been deployed to monitor the SBV data. The Arbo‑zoonet Information System includes a web application for the management of the database in which data are stored and a WebGIS application to explore spatial disease distributions, facilitating epidemiological analysis. The WebGIS application is an effective tool to show and share the information and to facilitate the exchange and dissemination of relevant data among the project's participants.

  9. Applications integration in a hybrid cloud computing environment: modelling and platform

    NASA Astrophysics Data System (ADS)

    Li, Qing; Wang, Ze-yuan; Li, Wei-hua; Li, Jun; Wang, Cheng; Du, Rui-yang

    2013-08-01

    With the development of application service providers and cloud computing, more and more small- and medium-sized business enterprises use software services and even infrastructure services provided by professional information service companies to replace all or part of their information systems (ISs). These information service companies provide applications such as data storage, computing processes, document sharing, and even management information system services as public resources to support the business process management of their customers. However, no cloud computing service vendor can satisfy the full functional IS requirements of an enterprise. As a result, enterprises often have to simultaneously use systems distributed in different clouds together with their intra-enterprise ISs. Thus, this article presents a framework to integrate applications deployed in public clouds with intra-enterprise ISs. A run-time platform is developed, and a cross-computing-environment process modelling technique is also developed to improve the feasibility of ISs under hybrid cloud computing environments.

  10. Integrated Micro-Power System (IMPS) Development at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Wilt, David; Hepp, Aloysius; Moran, Matt; Jenkins, Phillip; Scheiman, David; Raffaelle, Ryne

    2003-01-01

    Glenn Research Center (GRC) has a long history of energy-related technology developments for large space power systems, including photovoltaics, thermo-mechanical energy conversion, electrochemical energy storage, mechanical energy storage, power management and distribution, and power system design. Recently, many of these technologies have begun to be adapted for small, distributed power system applications, or Integrated Micro-Power Systems (IMPS). This paper will describe the IMPS component and system demonstration efforts to date.

  11. Authenticated IGMP for Controlling Access to Multicast Distribution Tree

    NASA Astrophysics Data System (ADS)

    Park, Chang-Seop; Kang, Hyun-Sun

    A receiver access control scheme is proposed to protect the multicast distribution tree from DoS attack induced by unauthorized use of IGMP, by extending the security-related functionality of IGMP. Based on a specific network and business model adopted for commercial deployment of IP multicast applications, a key management scheme is also presented for bootstrapping the proposed access control as well as accounting and billing for CP (Content Provider), NSP (Network Service Provider), and group members.

  12. wHospital: a web-based application with digital signature for drugs dispensing management.

    PubMed

    Rossi, Lorenzo; Margola, Lorenzo; Manzelli, Vacia; Bandera, Alessandra

    2006-01-01

    wHospital is the result of an information technology research project based on the use of a web-based application for managing hospital drug dispensing. Part of the wHospital backbone, and its key distinguishing characteristic, is the adoption of the digital signature system initially deployed by the Government of Lombardia, a Northern Italy region, through the distribution of smart cards to all healthcare and hospital staff. The developed system is a web-based application with a proposed Health Records Digital Signature (HReDS) handshake to comply with national law and with the Joint Commission International standards. The prototype application, for a single hospital Operative Unit (OU), focused on data and process management related to drug therapy. Following a multi-faceted selection process, the Infective Disease OU of the Hospital in Busto Arsizio, Lombardia, was chosen for the development and prototype implementation. The project lead time, from user requirement analysis to training and deployment, was approximately 8 months. This paper highlights the applied project methodology, the system architecture, and the preliminary results achieved.

  13. TMN: Introduction and interpretation

    NASA Astrophysics Data System (ADS)

    Pras, Aiko

    An overview of the status of the Telecommunications Management Network (TMN) is presented. Its relation to Open Systems Interconnection (OSI) systems management is given, and the commonalities and distinctions are identified. The aspects that distinguish TMN from OSI management are introduced; TMN's functional and physical architectures and its logical layered architecture are discussed. An analysis of the concepts used by these architectures (reference point, interface, function block, and building block) is given. The use of these concepts to express geographical distribution and functional layering is investigated; this is important for understanding how OSI management protocols can be used in a TMN environment. Finally, a statement is given on the applicability of TMN as a model that helps the designers of (management) networks.

  14. Soil-Structural Stability as Affected by Clay Mineralogy, Soil Texture and Polyacrylamide Application

    USDA-ARS?s Scientific Manuscript database

    Soil-structural stability (expressed in terms of aggregate stability and pore size distribution) depends on (i) soil inherent properties, (ii) extrinsic condition prevailing in the soil that may vary temporally and spatially, and (iii) addition of soil amendments. Different soil management practices...

  15. 50 CFR 679.26 - Prohibited Species Donation Program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ALASKA Management Measures § 679.26 Prohibited Species Donation Program. (a) Authorized species. The PSD... maintain adequate funding for the distribution of fish under the PSD program. (vii) A copy of the applicant... received under the PSD program, including sufficient liability insurance to cover public interests relating...

  16. 50 CFR 679.26 - Prohibited Species Donation Program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ALASKA Management Measures § 679.26 Prohibited Species Donation Program. (a) Authorized species. The PSD... maintain adequate funding for the distribution of fish under the PSD program. (vii) A copy of the applicant... received under the PSD program, including sufficient liability insurance to cover public interests relating...

  17. 50 CFR 679.26 - Prohibited Species Donation Program.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ALASKA Management Measures § 679.26 Prohibited Species Donation Program. (a) Authorized species. The PSD... maintain adequate funding for the distribution of fish under the PSD program. (vii) A copy of the applicant... received under the PSD program, including sufficient liability insurance to cover public interests relating...

  18. 50 CFR 679.26 - Prohibited Species Donation Program.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... ALASKA Management Measures § 679.26 Prohibited Species Donation Program. (a) Authorized species. The PSD... distribution of fish under the PSD program. (vii) A copy of the applicant's articles of incorporation and... full responsibility for the documentation and disposition of fish received under the PSD program...

  19. An application of digital network technology to medical image management.

    PubMed

    Chu, W K; Smith, C L; Wobig, R K; Hahn, F A

    1997-01-01

    With the advent of network technology, there is considerable interest within the medical community in managing the storage and distribution of medical images by digital means. Higher workflow efficiency leading to better patient care is one of the commonly cited outcomes [1,2]. However, due to the size of medical image files and the unique requirements in detail and resolution, medical image management poses special challenges. Storage requirements are usually large, implying expenses or investment costs that put digital networking projects financially out of reach for many clinical institutions. New advances in network technology and telecommunication, in conjunction with the decreasing cost of computer devices, have made digital image management achievable. In our institution, we have recently completed a pilot project to distribute medical images both within the physical confines of the clinical enterprise and outside the medical center campus. The design concept and the configuration of a comprehensive digital image network are described in this report.

  20. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This avoids irrational assumptions and oversimplifications in the process of parameter design and enriches the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
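
    The abstract names the bi-random construct without writing it down; in generic notation (placeholder symbols, not the paper's), a chance constraint with a bi-random right-hand side can be sketched as

        \Pr\Big\{ \sum_j a_j x_j \le \tilde{b} \Big\} \ge \alpha,
        \qquad \tilde{b} \sim \mathcal{N}(\tilde{\mu}, \sigma^2),
        \qquad \tilde{\mu} \sim \mathcal{N}(\mu_0, \sigma_0^2),

    i.e., the right-hand side is normal with a mean that is itself normally distributed, and the model requires each such constraint to hold with probability at least \alpha.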

  1. Moving Toward Real Time Data Handling: Data Management at the IRIS DMC

    NASA Astrophysics Data System (ADS)

    Ahern, T. K.; Benson, R. B.

    2001-12-01

    The IRIS Data Management Center at the University of Washington has become a major archive and distribution center for a wide variety of seismological data. With a mass storage system of 360-terabyte capacity, the center is well positioned to manage the data flow, both inbound and outbound, from all anticipated seismic sources for the foreseeable future. As data flow in and out of the IRIS DMC at an increasing rate, new methods to deal with data using purely automated techniques are being developed. The on-line, self-service data repositories of SPYDER® and FARM are collections of seismograms for all larger events. The WWW tool WILBER and the client application WEED are examples of tools that provide convenient access to the 1/2 terabyte of SPYDER® and FARM data. The Buffer of Uniform Data (BUD) system provides access to continuous data available in real time from GSN, FDSN, US regional networks, and other globally distributed stations. Continuous data that have received quality control are always available from the archive of continuous data. This presentation will review current and future data access techniques supported at IRIS. One of the most difficult tasks at the DMC is the management of the metadata that describe all the stations, sensors, and data holdings. Demonstrations of tools that provide access to the metadata will be presented. This presentation will focus on the new techniques of data management now being developed at the IRIS DMC. We believe that these techniques are generally applicable to other types of geophysical data management as well.

  2. Literature Survey on Operational Voltage Control and Reactive Power Management on Transmission and Sub-Transmission Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elizondo, Marcelo A.; Samaan, Nader A.; Makarov, Yuri V.

    Voltage and reactive power system control is generally performed following usual patterns of loads, based on off-line studies for daily and seasonal operations. This practice is currently challenged by the inclusion of distributed renewable generation, such as solar. There has been focus on resolving this problem at the distribution level; however, the transmission and sub-transmission levels have received less attention. This paper provides a literature review of proposed methods and solution approaches to coordinate and optimize voltage control and reactive power management, with an emphasis on applications at transmission and sub-transmission level. The conclusion drawn from the survey is that additional research is needed in the areas of optimizing switch shunt actions and coordinating all available resources to deal with uncertain patterns from increasing distributed renewable generation in the operational time frame. These topics are not deeply explored in the literature.

  3. A Content Markup Language for Data Services

    NASA Astrophysics Data System (ADS)

    Noviello, C.; Acampa, P.; Mango Furnari, M.

    Network content delivery and document sharing are possible using a variety of technologies, such as distributed databases and service-oriented applications. The development of such systems is a complex job, because the document life cycle involves strong cooperation between domain experts and software developers. Furthermore, emerging software methodologies, such as service-oriented architectures, and knowledge organization approaches (e.g., the semantic web) have not really solved the problems faced in a real distributed and cooperating setting. In this chapter the authors' efforts to design and deploy a distributed and cooperating content management system are described. The main features of the system are a user-configurable document type definition and a management middleware layer, which allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. The chapter also reports some of the experience gained in deploying the developed framework in a cultural heritage dissemination setting.

  4. Handling Uncertain Gross Margin and Water Demand in Agricultural Water Resources Management using Robust Optimization

    NASA Astrophysics Data System (ADS)

    Chaerani, D.; Lesmana, E.; Tressiana, N.

    2018-03-01

    In this paper, an application of robust optimization to an agricultural water resource management problem under gross margin and water demand uncertainty is presented. Water resource management is a series of activities that includes planning, developing, distributing, and managing the use of water resources optimally. Water resource management for agriculture can be one of the efforts to optimize the benefits of agricultural output. The objective of the agricultural water resource management problem is to maximize total benefits by allocating water to the agricultural areas covered by the irrigation network over the planning horizon. Because gross margin and water demand are uncertain, we assume that the uncertain data lie within an ellipsoidal uncertainty set, and we employ the robust counterpart methodology to obtain the robust optimal solution.
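
    For a linear constraint whose row data range over an ellipsoidal uncertainty set, the robust counterpart has a standard closed form (generic symbols, not the paper's notation):

        a^\top x \le b \quad \forall\, a \in \{\bar{a} + P u : \|u\|_2 \le 1\}
        \;\Longleftrightarrow\;
        \bar{a}^\top x + \|P^\top x\|_2 \le b,

    which turns the semi-infinite constraint into a single second-order cone constraint that off-the-shelf solvers handle.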

  5. A neural network approach to burst detection.

    PubMed

    Mounce, S R; Day, A J; Wood, A S; Khan, A; Widdop, P D; Machell, J

    2002-01-01

    This paper describes how hydraulic and water quality data from a distribution network may be used to provide a more efficient leakage management capability for the water industry. The research presented concerns the application of artificial neural networks to the detection and location of leakage in treated water distribution systems. An architecture for an Artificial Neural Network (ANN) based system is outlined. The neural network uses time series data produced by sensors to directly construct an empirical model for prediction and classification of leaks. Results are presented using data from an experimental site in Yorkshire Water's Keighley distribution system.
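
    A minimal sketch of the classification idea, with synthetic stand-ins for real district-meter time series (the window length, flow levels, and network size are all assumptions): label fixed-length windows of flow readings as burst or normal and train a small neural network on them.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        normal = rng.normal(10.0, 0.5, size=(200, 24))   # 24-sample flow windows
        burst = rng.normal(13.0, 0.8, size=(200, 24))    # elevated flow under a leak
        X = np.vstack([normal, burst])
        y = np.array([0] * 200 + [1] * 200)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
        clf.fit(X[::2], y[::2])                          # even rows: training split
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))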

  6. Application of future remote sensing systems to irrigation

    NASA Technical Reports Server (NTRS)

    Miller, L. D.

    1982-01-01

    Area estimates of irrigated crops and knowledge of crop type are required for modeling water consumption to assist farmers, rangers, and agricultural consultants in scheduling irrigation for distributed management of crop yields. Information on canopy physiology and soil moisture status on a spatial basis is potentially available from remote sensors, so the questions to be addressed relate to: (1) timing (data frequency, instantaneous and integrated measurement) and scheduling (widely distributed spatial demands); (2) spatial resolution; (3) radiometric and geometric accuracy and geoencoding; and (4) information/data distribution. The latter should be overnight, with no central storage, on-site capture, and low cost.

  7. A method of distributed avionics data processing based on SVM classifier

    NASA Astrophysics Data System (ADS)

    Guo, Hangyu; Wang, Jinyan; Kang, Minyang; Xu, Guojing

    2018-03-01

    In a system-of-systems combat environment, to solve the problem of managing and analyzing the massive heterogeneous data of multi-platform avionics systems, this paper proposes a management solution called the avionics "resource cloud", based on big data technology, and designs an aided-decision classifier based on the SVM algorithm. We design an experiment with an STK simulation; the results show that this method has high accuracy and broad application prospects.
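
    A minimal sketch of an SVM aided-decision classifier under invented assumptions (the two made-up telemetry features and labels stand in for whatever the avionics "resource cloud" actually aggregates):

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(2)
        nominal = rng.normal([0.3, 0.2], 0.05, size=(100, 2))   # routine telemetry
        alert = rng.normal([0.7, 0.8], 0.05, size=(100, 2))     # anomalous telemetry
        X = np.vstack([nominal, alert])
        y = np.array([0] * 100 + [1] * 100)

        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict([[0.65, 0.75]]))   # -> [1], flag for operator attention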

  8. Evaluating wilderness recreational opportunities: application of an impact matrix

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Parsons, David J.

    1992-01-01

    An inventory of the severity and spatial distribution of wilderness campsite impacts in Sequoia and Kings Canyon National Parks identified a total of 273 distinct nodes of campsites or “management areas.” A campsite impact matrix was developed to evaluate management areas based on total impacts (correlated to the total area of campsite development) and the density, or concentration, of impacts relative to each area's potentially campable area. The matrix is used to quantify potential recreational opportunities for wilderness visitors in a spectrum from areas offering low impact-dispersed camping to those areas offering high impact-concentrated camping. Wilderness managers can use this type of information to evaluate use distribution patterns, identify areas to increase or decrease use, and to identify areas needing site-specific regulations (e.g., one-night camping limits) to preserve wilderness resources and guarantee outstanding opportunities for solitude.

  9. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to use these local resources together to solve larger geospatial information processing problems related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a construct termed the Equivalent Distributed Program of a global geospatial query, to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing is presented, together with a method for equivalent transformation from a global geospatial query to distributed local queries at the SQL (Structured Query Language) level, to solve the coordination problem among heterogeneous resources. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment consisting of autonomous geospatial information resources, to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
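
    A minimal sketch of the Equivalent Distributed Program idea under stated assumptions (SQLite stands in for heterogeneous peer databases, and the parcels schema is invented): the global aggregate query is transformed into identical local SQL run at each autonomous peer, and the partial results are merged at the coordinator.

        import sqlite3

        def make_peer(rows):
            db = sqlite3.connect(":memory:")
            db.execute("CREATE TABLE parcels (region TEXT, area REAL)")
            db.executemany("INSERT INTO parcels VALUES (?, ?)", rows)
            return db

        peers = [
            make_peer([("north", 12.0), ("south", 3.0)]),
            make_peer([("north", 5.0), ("east", 7.5)]),
        ]

        # Global query: total parcel area per region across all peers.
        local_sql = "SELECT region, SUM(area) FROM parcels GROUP BY region"
        totals = {}
        for peer in peers:
            for region, area in peer.execute(local_sql):
                totals[region] = totals.get(region, 0.0) + area   # merge step
        print(totals)   # {'north': 17.0, 'south': 3.0, 'east': 7.5}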

  10. Web-based monitoring and management system for integrated enterprise-wide imaging networks

    NASA Astrophysics Data System (ADS)

    Ma, Keith; Slik, David; Lam, Alvin; Ng, Won

    2003-05-01

    Mass proliferation of IP networks and the maturity of standards has enabled the creation of sophisticated image distribution networks that operate over Intranets, Extranets, Communities of Interest (CoI) and even the public Internet. Unified monitoring, provisioning and management of such systems at the application and protocol levels represent a challenge. This paper presents a web based monitoring and management tool that employs established telecom standards for the creation of an open system that enables proactive management, provisioning and monitoring of image management systems at the enterprise level and across multi-site geographically distributed deployments. Utilizing established standards including ITU-T M.3100, and web technologies such as XML/XSLT, JSP/JSTL, and J2SE, the system allows for seamless device and protocol adaptation between multiple disparate devices. The goal has been to develop a unified interface that provides network topology views, multi-level customizable alerts, real-time fault detection as well as real-time and historical reporting of all monitored resources, including network connectivity, system load, DICOM transactions and storage capacities.

  11. Recent GRC Aerospace Technologies Applicable to Terrestrial Energy Systems

    NASA Technical Reports Server (NTRS)

    Kankam, David; Lyons, Valerie J.; Hoberecht, Mark A.; Tacina, Robert R.; Hepp, Aloysius F.

    2000-01-01

    This paper is an overview of a wide range of recent aerospace technologies under development at the NASA Glenn Research Center, in collaboration with other NASA centers, government agencies, industry and academia. The focused areas are space solar power, advanced power management and distribution systems, Stirling cycle conversion systems, fuel cells, advanced thin film photovoltaics and batteries, and combustion technologies. The aerospace-related objectives of the technologies are generation of space power, development of cost-effective and reliable, high performance power systems, cryogenic applications, energy storage, and reduction in gas-turbine emissions, with attendant clean jet engines. The terrestrial energy applications of the technologies include augmentation of bulk power in ground power distribution systems, and generation of residential, commercial and remote power, as well as promotion of pollution-free environment via reduction in combustion emissions.

  12. A spatially distributed energy balance snowmelt model for application in mountain basins

    USGS Publications Warehouse

    Marks, D.; Domingo, J.; Susong, D.; Link, T.; Garen, D.

    1999-01-01

    Snowmelt is the principal source for soil moisture, ground-water re-charge, and stream-flow in mountainous regions of the western US, Canada, and other similar regions of the world. Information on the timing, magnitude, and contributing area of melt under variable or changing climate conditions is required for successful water and resource management. A coupled energy and mass-balance model ISNOBAL is used to simulate the development and melting of the seasonal snowcover in several mountain basins in California, Idaho, and Utah. Simulations are done over basins varying from 1 to 2500 km2, with simulation periods varying from a few days for the smallest basin, Emerald Lake watershed in California, to multiple snow seasons for the Park City area in Utah. The model is driven by topographically corrected estimates of radiation, temperature, humidity, wind, and precipitation. Simulation results in all basins closely match independently measured snow water equivalent, snow depth, or runoff during both the development and depletion of the snowcover. Spatially distributed estimates of snow deposition and melt allow us to better understand the interaction between topographic structure, climate, and moisture availability in mountain basins of the western US. Application of topographically distributed models such as this will lead to improved water resource and watershed management.
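
    For reference, the energy balance that such a model closes at each grid cell is commonly written (in generic form, not ISNOBAL's exact notation) as

        \Delta Q = R_n + H + L_v E + G + M,

    where R_n is net radiation, H the sensible heat flux, L_v E the latent heat flux, G the ground heat flux, and M energy advected by precipitation; melt proceeds once the snowcover is isothermal at 0 °C and \Delta Q remains positive.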

  13. Taking the mystery out of mathematical model applications to karst aquifers—A primer

    USGS Publications Warehouse

    Kuniansky, Eve L.

    2014-01-01

    Advances in mathematical model applications toward the understanding of the complex flow, characterization, and water-supply management issues for karst aquifers have occurred in recent years. Different types of mathematical models can be applied successfully if appropriate information is available and the problems are adequately identified. The mathematical approaches discussed in this paper are divided into three major categories: 1) distributed parameter models, 2) lumped parameter models, and 3) fitting models. The modeling approaches are described conceptually with examples (but without equations) to help non-mathematicians understand the applications.
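
    Of the three categories, the lumped parameter approach is the simplest to illustrate. The sketch below is a didactic example of my own, not one from the primer: a karst spring treated as a single linear reservoir, whose discharge recesses exponentially as Q(t) = Q0 * exp(-k t).

        import math

        def spring_discharge(q0, k, t_days):
            """Linear-reservoir recession: discharge (m^3/s) after t_days,
            given initial discharge q0 and recession coefficient k (1/day)."""
            return q0 * math.exp(-k * t_days)

        # Example: a spring starting at 2.0 m^3/s with k = 0.05 per day.
        for t in (0, 10, 30, 60):
            print(t, round(spring_discharge(2.0, 0.05, t), 3))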

  14. Information Systems Should Be Both Useful and Used: The Benetton Experience.

    ERIC Educational Resources Information Center

    Zuccaro, Bruno

    1990-01-01

    Describes the information systems strategy and network development of the Benetton clothing business. Applications in the areas of manufacturing, scheduling, centralized distribution, and centralized cash flow are discussed; the GEIS managed network service is described; and internal and external electronic data interchange (EDI) is explained.…

  15. 40 CFR 165.67 - Registrants who distribute or sell pesticide products to refillers for repackaging.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... residue from a refillable container (portable or stationary pesticide container) before it is refilled. (i... application, the refilling residue removal procedure must describe how to manage any rinsate resulting from... description of acceptable refillable containers (portable or stationary pesticide containers) that can be used...

  16. Performance of a distributed semi-conceptual hydrological model under tropical watershed conditions

    USDA-ARS?s Scientific Manuscript database

    Many hydrologic models have been developed to help manage natural resources all over the world. Nevertheless, most models have presented a high complexity in terms of data base requirements, as well as many calibration parameters. This has resulted in serious difficulties in application in catchmen...

  17. Sustainable Army Training Lands/Carrying Capacity: Training Use Distribution Model (TUDM)

    DTIC Science & Technology

    2002-05-01

    management (Senft, Rittenhouse, and Woodmansee 1983; Van Manen and Pelton 1997). The underlying principle behind this type of modeling application...Texas Rangeland. Technical Manuscript EN-95/02 (USACERL February 1995). Van Manen, F.T. and M.R. Pelton. 1997. “A GIS Model to Predict Black

  18. 77 FR 8119 - Visas: Issuance of Full Validity L Visas to Qualified Applicants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-14

    ... for aliens employed in a specialized knowledge capacity, or seven years for aliens employed in a... between the national government and the States, or the distribution of power and responsibilities among... (executives, managers, and specialized knowledge employees) (a) Requirements for L classification. An alien...

  19. Digital Libraries Are Much More than Digitized Collections.

    ERIC Educational Resources Information Center

    Peters, Peter Evan

    1995-01-01

    The digital library encompasses the application of high-performance computers and networks to the production, distribution, management, and use of knowledge in research and education. A joint project by three federal agencies, which is investing in digital library initiatives at six universities, is discussed. A sidebar provides issues to consider…

  20. 78 FR 79298 - Securities Exempted; Distribution of Shares by Registered Open-End Management Investment Company...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-30

    ... SECURITIES AND EXCHANGE COMMISSION 17 CFR Parts 230 and 270 [Release No. 33-9503; IC-30845...; Applications Regarding Joint Enterprises or Arrangements and Certain Profit-Sharing Plans AGENCY: Securities and Exchange Commission. ACTION: Final rule; technical amendments. SUMMARY: The Securities and...

  1. The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model.

    PubMed

    Liu, Tongzhu; Shen, Aizong; Hu, Xiaojian; Tong, Guixian; Gu, Wei

    2017-06-01

    We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. We searched the Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, Web pages, etc., to understand SPD- and BI-related theories and recent research status. We realized the application of collaborative BI technology in the hospital SPD logistics management model by leveraging data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process. For the application of the BI system, we: (i) proposed a layered structure of a collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. A proper combination of the SPD model and the BI system will improve the management of logistics in hospitals. Successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situations of hospitals; (ii) the collaborative participation of internal hospital departments, including information, logistics, nursing, medical and financial; (iii) timely response from external suppliers.
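
    As an illustration of the kind of data mining step mentioned above, the sketch below trains a support vector machine classifier with scikit-learn. It is a generic example on synthetic data, not the authors' improved SVM or their hospital dataset.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # Synthetic stand-in for logistics records (features -> demand class).
        X, y = make_classification(n_samples=500, n_features=8, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        clf = SVC(kernel="rbf", C=1.0)   # standard RBF-kernel SVM
        clf.fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))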

  2. Data Sharing in DHT Based P2P Systems

    NASA Astrophysics Data System (ADS)

    Roncancio, Claudia; Del Pilar Villamil, María; Labbé, Cyril; Serrano-Alvarado, Patricia

    The evolution of peer-to-peer (P2P) systems triggered the building of large scale distributed applications. The main application domain is data sharing across a very large number of highly autonomous participants. Building such data sharing systems is particularly challenging because of the “extreme” characteristics of P2P infrastructures: massive distribution, high churn rate, no global control, potentially untrusted participants... This article focuses on declarative querying support, query optimization and data privacy on a major class of P2P systems, that based on Distributed Hash Table (P2P DHT). The usual approaches and the algorithms used by classic distributed systems and databases for providing data privacy and querying services are not well suited to P2P DHT systems. A considerable amount of work was required to adapt them for the new challenges such systems present. This paper describes the most important solutions found. It also identifies important future research trends in data management in P2P DHT systems.
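
    The routing primitive underlying such systems can be shown with a toy consistent-hashing ring. This is a didactic sketch, not the lookup algorithm of any particular DHT such as Chord or Pastry: each node owns an arc of the key space, and a key is stored on the first node whose ring position is at or after the key's hash.

        import hashlib
        from bisect import bisect_left

        def ring_pos(value: str) -> int:
            """Map a string onto a 32-bit hash ring position."""
            return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** 32)

        class ToyDHT:
            def __init__(self, nodes):
                self.ring = sorted((ring_pos(n), n) for n in nodes)

            def lookup(self, key: str) -> str:
                """Return the node responsible for key (first node clockwise)."""
                i = bisect_left(self.ring, (ring_pos(key), ""))
                return self.ring[i % len(self.ring)][1]

        dht = ToyDHT(["node-a", "node-b", "node-c", "node-d"])
        print(dht.lookup("some/shared/file"))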

  3. A Study of Energy Management Systems and its Failure Modes in Smart Grid Power Distribution

    NASA Astrophysics Data System (ADS)

    Musani, Aatif

    The subject of this thesis is distribution level load management using a pricing signal in a smart grid infrastructure. The project relates to energy management in a specialized distribution system known as the Future Renewable Electric Energy Delivery and Management (FREEDM) system. Energy management through demand response is one of the key applications of smart grid. Demand response today is envisioned as a method in which the price could be communicated to the consumers and they may shift their loads from high price periods to the low price periods. The development and deployment of the FREEDM system necessitates controls of energy and power at the point of end use. In this thesis, the main objective is to develop the control model of the Energy Management System (EMS). The energy and power management in the FREEDM system is digitally controlled; therefore, all signals containing system states are discrete. The EMS is modeled as a discrete closed loop transfer function in the z-domain. A breakdown of power and energy control devices such as EMS components may result in energy consumption error. This leads to one of the main focuses of the thesis, which is to identify and study component failures of the designed control system. Moreover, the H-infinity robust control method is applied to ensure effectiveness of the control architecture. A focus of the study is cyber security attack, specifically bad data detection in price. Test cases are used to illustrate the performance of the EMS control design, the effect of failure modes and the application of the robust control technique. The EMS was represented by a linear z-domain model. The transfer function between the pricing signal and the demand response was designed and used as a test bed. EMS potential failure modes were identified and studied. Three bad data detection methodologies were implemented and a voting policy was used to declare bad data. The running mean and standard deviation analysis method proved to be the best method to detect bad data. An H-infinity robust control technique was applied for the first time to design a discrete EMS controller for the FREEDM system.
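
    The running mean and standard deviation check singled out above can be sketched in a few lines. This is an illustrative z-score detector with assumed window size and threshold, not the thesis implementation.

        from collections import deque
        from statistics import mean, stdev

        def detect_bad_prices(prices, window=24, threshold=3.0):
            """Flag samples lying more than `threshold` standard deviations
            from the running mean of the previous `window` samples."""
            history = deque(maxlen=window)
            flags = []
            for p in prices:
                if len(history) >= 2:
                    mu, sigma = mean(history), stdev(history)
                    flags.append(sigma > 0 and abs(p - mu) > threshold * sigma)
                else:
                    flags.append(False)   # not enough history yet
                history.append(p)
            return flags

        prices = [30, 31, 29, 30, 32, 250, 31, 30]   # one injected bad sample
        print(detect_bad_prices(prices, window=5))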

  4. Computer Training for Entrepreneurial Meteorologists.

    NASA Astrophysics Data System (ADS)

    Koval, Joseph P.; Young, George S.

    2001-05-01

    Computer applications of increasing diversity form a growing part of the undergraduate education of meteorologists in the early twenty-first century. The advent of the Internet economy, as well as a waning demand for traditional forecasters brought about by better numerical models and statistical forecasting techniques, has greatly increased the need for operational and commercial meteorologists to acquire computer skills beyond the traditional techniques of numerical analysis and applied statistics. Specifically, students with the skills to develop data distribution products are in high demand in the private sector job market. Meeting these demands requires greater breadth, depth, and efficiency in computer instruction. The authors suggest that computer instruction for undergraduate meteorologists should include three key elements: a data distribution focus, emphasis on the techniques required to learn computer programming on an as-needed basis, and a project orientation to promote management skills and support student morale. In an exploration of this approach, the authors have reinvented the Applications of Computers to Meteorology course in the Department of Meteorology at The Pennsylvania State University to teach computer programming within the framework of an Internet product development cycle. Because the computer skills required for data distribution programming change rapidly, specific languages are valuable for only a limited time. A key goal of this course was therefore to help students learn how to retrain efficiently as technologies evolve. The crux of the course was a semester-long project during which students developed an Internet data distribution product. As project management skills are also important in the job market, the course teamed students in groups of four for this product development project. The successes, failures, and lessons learned from this experiment are discussed and conclusions drawn concerning undergraduate instructional methods for computer applications in meteorology.

  5. OASIS: a data and software distribution service for Open Science Grid

    NASA Astrophysics Data System (ADS)

    Bockelman, B.; Caballero Bejar, J.; De Stefano, J.; Hover, J.; Quick, R.; Teige, S.

    2014-06-01

    The Open Science Grid encourages the concept of software portability: a user's scientific application should be able to run at as many sites as possible. It is necessary to provide a mechanism for OSG Virtual Organizations to install software at sites. Since its initial release, the OSG Compute Element has provided an application software installation directory to Virtual Organizations, where they can create their own sub-directory, install software into that sub-directory, and have the directory shared on the worker nodes at that site. The current model has shortcomings with regard to permissions, policies, versioning, and the lack of a unified, collective procedure or toolset for deploying software across all sites. Therefore, a new mechanism for data and software distribution is desirable. The architecture for the OSG Application Software Installation Service (OASIS) is a server-client model: the software and data are installed only once in a single place, and are automatically distributed to all client sites simultaneously. Central file distribution offers other advantages, including server-side authentication and authorization, activity records, quota management, data validation and inspection, and well-defined versioning and deletion policies. The architecture, as well as a complete analysis of the current implementation, will be described in this paper.

  6. New Developments in Uncertainty: Linking Risk Management, Reliability, Statistics and Stochastic Optimization

    DTIC Science & Technology

    2014-11-13

    Cm) in a given set C ⊂ ℝ^m. (5.7) Motivation for generalized regression comes from applications in which Y has the cost/loss orientation that we have...distribution. The corresponding probability measure on ℝ^m is then induced by the multivariate distribution function F_{V1,...,Vm}(v1, ..., vm) = prob{(V1...could be generated by future observations of some variables V1, ..., Vm, as above, in which case Ω would be a subset of ℝ^m with elements ω = (v1

  7. Fiber-optic technology for transport aircraft

    NASA Astrophysics Data System (ADS)

    1993-07-01

    A development status evaluation is presented for fiber-optic devices that are advantageously applicable to commercial aircraft. Current developmental efforts at a major U.S. military and commercial aircraft manufacturer encompass installation techniques and data distribution practices, as well as the definition and refinement of an optical propulsion management interface system, environmental sensing systems, and component-qualification criteria. Data distribution is the most near-term implementable of fiber-optic technologies aboard commercial aircraft in the form of onboard local-area networks for intercomputer connections and passenger entertainment.

  8. Volttron version 5.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    VOLTTRON is an agent execution platform providing services to its agents that allow them to easily communicate with physical devices and other resources. VOLTTRON delivers an innovative distributed control and sensing software platform that supports modern control strategies, including agent-based and transaction-based controls. It enables mobile and stationary software agents to perform information gathering, processing, and control actions. VOLTTRON can independently manage a wide range of applications, such as HVAC systems, electric vehicles, distributed energy or entire building loads, leading to improved operational efficiency.
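
    The agent pattern described can be sketched generically. The device reader and bus publisher below are hypothetical stand-ins for illustration only; this is not the VOLTTRON API.

        import random
        import time

        def read_hvac_setpoint():
            """Hypothetical device read; a real agent would query hardware."""
            return 20.0 + random.uniform(-1.0, 1.0)

        def publish(topic, value):
            """Hypothetical message-bus publish standing in for a platform call."""
            print(f"{topic}: {value:.2f}")

        def run_agent(period_s, cycles=3):
            """Periodically sample a device and publish the reading."""
            for _ in range(cycles):
                publish("devices/building1/hvac/setpoint", read_hvac_setpoint())
                time.sleep(period_s)

        run_agent(period_s=0.1)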

  9. Dashboard Task Monitor for Managing ATLAS User Analysis on the Grid

    NASA Astrophysics Data System (ADS)

    Sargsyan, L.; Andreeva, J.; Jha, M.; Karavakis, E.; Kokoszkiewicz, L.; Saiz, P.; Schovancova, J.; Tuckett, D.; Atlas Collaboration

    2014-06-01

    The organization of the distributed user analysis on the Worldwide LHC Computing Grid (WLCG) infrastructure is one of the most challenging tasks among the computing activities at the Large Hadron Collider. The Experiment Dashboard offers a solution that not only monitors but also manages (kill, resubmit) user tasks and jobs via a web interface. The ATLAS Dashboard Task Monitor provides analysis users with a tool that is independent of the operating system and Grid environment. This contribution describes the functionality of the application and its implementation details, in particular authentication, authorization and audit of the management operations.

  10. Innovations in clinical trials informatics.

    PubMed

    Summers, Ron; Vyas, Hiten; Dudhal, Nilesh; Doherty, Neil F; Coombs, Crispin R; Hepworth, Mark

    2008-01-01

    This paper will investigate innovations in information management for use in clinical trials. The application typifies a complex, adaptive, distributed and information-rich environment for which continuous innovation is necessary. Organisational innovation is highlighted as well as the technical innovations in workflow processes and their representation as an integrated set of web services. Benefits realization uncovers further innovations in the business strand of the work undertaken. Following the description of the development of this information management system, the semantic web is postulated as a possible solution to tame the complexity related to information management issues found within clinical trials support systems.

  11. Data-Centric Situational Awareness and Management in Intelligent Power Systems

    NASA Astrophysics Data System (ADS)

    Dai, Xiaoxiao

    The rapid development of technology and society has made the current power system a much more complicated system than ever. The request for big data based situation awareness and management becomes urgent today. In this dissertation, to respond to the grand challenge, two data-centric power system situation awareness and management approaches are proposed to address the security problems in the transmission/distribution grids and the social benefits augmentation problem at the distribution-customer level, respectively. To address the security problem in the transmission/distribution grids utilizing big data, the first approach provides a fault analysis solution based on characterization and analytics of the synchrophasor measurements. Specifically, the optimal synchrophasor measurement devices selection algorithm (OSMDSA) and matching pursuit decomposition (MPD) based spatial-temporal synchrophasor data characterization method was developed to reduce data volume while preserving comprehensive information for the big data analyses. And the weighted Granger causality (WGC) method was investigated to conduct fault impact causal analysis during system disturbance for fault localization. Numerical results and comparison with other methods demonstrate the effectiveness and robustness of this analytic approach. As more social effects are becoming important considerations in power system management, the goal of situation awareness should be expanded to also include achievements in social benefits. The second approach investigates the concept and application of social energy upon the University of Denver campus grid to provide management improvement solutions for optimizing social cost. Social element--human working productivity cost, and economic element--electricity consumption cost, are both considered in the evaluation of overall social cost. Moreover, power system simulation, numerical experiments for smart building modeling, distribution level real-time pricing and social response to the pricing signals are studied for implementing the interactive artificial-physical management scheme.
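
    The Granger-causality idea behind the fault impact analysis can be sketched for the plain bivariate case. This is a textbook-style illustration with an assumed lag order, not the weighted variant developed in the dissertation: x is said to Granger-cause y if adding lagged x to an autoregression of y shrinks the residual variance.

        import numpy as np

        def residual_var(X, y):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            r = y - X @ beta
            return float(r @ r) / len(y)

        def granger_ratio(x, y, lags=2):
            """Residual-variance ratio of the restricted (y lags only) to the
            full (y and x lags) regression; values well above 1 suggest that
            x Granger-causes y."""
            n = len(y)
            target = y[lags:]
            ylags = np.column_stack([y[lags - k : n - k] for k in range(1, lags + 1)])
            xlags = np.column_stack([x[lags - k : n - k] for k in range(1, lags + 1)])
            ones = np.ones((n - lags, 1))
            restricted = residual_var(np.hstack([ones, ylags]), target)
            full = residual_var(np.hstack([ones, ylags, xlags]), target)
            return restricted / full

        rng = np.random.default_rng(0)
        x = rng.normal(size=500)
        y = np.zeros(500)
        for t in range(1, 500):            # y driven by lagged x
            y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.1)
        print(granger_ratio(x, y))         # markedly greater than 1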

  12. A Flexible Online Metadata Editing and Management System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilar, Raul; Pan, Jerry Yun; Gries, Corinna

    2010-01-01

    A metadata editing and management system is being developed employing state of the art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customizations, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool or schema walker used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema, provides the designer with options to combine input fields into online forms and give the fields user friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: First, data entry forms based on one schema may be customized at design time and second data entry applications may be generated for any valid XML schema without relying on custom information in the schema. However, the customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets which may be further customized and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema, however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button independently of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third party online and desktop management tools. However, access to metadata files in the application introduced here is managed in a custom online module, using a MySQL backend accessed by a simple Java Server Faces front end. A flexible system with three grouping options, organization, group and single editing access is provided. Three levels were chosen to distribute administrative responsibilities and handle the common situation of an information manager entering the bulk of the metadata but leave specifics to the actual data provider.

  13. A spatially explicit representation of conservation agriculture for application in global change studies.

    PubMed

    Prestele, Reinhard; Hirsch, Annette L; Davin, Edouard L; Seneviratne, Sonia I; Verburg, Peter H

    2018-05-10

    Conservation agriculture (CA) is widely promoted as a sustainable agricultural management strategy with the potential to alleviate some of the adverse effects of modern, industrial agriculture such as large-scale soil erosion, nutrient leaching and overexploitation of water resources. Moreover, agricultural land managed under CA is proposed to contribute to climate change mitigation and adaptation through reduced emission of greenhouse gases, increased solar radiation reflection, and the sustainable use of soil and water resources. Due to the lack of official reporting schemes, the amount of agricultural land managed under CA systems is uncertain and spatially explicit information about the distribution of CA required for various modeling studies is missing. Here, we present an approach to downscale present-day national-level estimates of CA to a 5 arcminute regular grid, based on multicriteria analysis. We provide a best estimate of CA distribution and an uncertainty range in the form of a low and high estimate of CA distribution, reflecting the inconsistency in CA definitions. We also design two scenarios of the potential future development of CA combining present-day data and an assessment of the potential for implementation using biophysical and socioeconomic factors. By our estimates, 122-215 Mha or 9%-15% of global arable land is currently managed under CA systems. The lower end of the range represents CA as an integrated system of permanent no-tillage, crop residue management and crop rotations, while the high estimate includes a wider range of areas primarily devoted to temporary no-tillage or reduced tillage operations. Our scenario analysis suggests a future potential of CA in the range of 533-1130 Mha (38%-81% of global arable land). Our estimates can be used in various ecosystem modeling applications and are expected to help identifying more realistic climate mitigation and adaptation potentials of agricultural practices. © 2018 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.

  14. Klusters, NeuroScope, NDManager: a free software suite for neurophysiological data processing and visualization.

    PubMed

    Hazan, Lynn; Zugaro, Michaël; Buzsáki, György

    2006-09-15

    Recent technological advances now allow for simultaneous recording of large populations of anatomically distributed neurons in behaving animals. The free software package described here was designed to help neurophysiologists process and view recorded data in an efficient and user-friendly manner. This package consists of several well-integrated applications, including NeuroScope (http://neuroscope.sourceforge.net), an advanced viewer for electrophysiological and behavioral data with limited editing capabilities, Klusters (http://klusters.sourceforge.net), a graphical cluster cutting application for manual and semi-automatic spike sorting, and NDManager, an experimental parameter and data processing manager. All of these programs are distributed under the GNU General Public License (GPL, see http://www.gnu.org/licenses/gpl.html), which gives its users legal permission to copy, distribute and/or modify the software. Also included are extensive user manuals and sample data, as well as source code and documentation.

  15. ESnet authentication services and trust federations

    NASA Astrophysics Data System (ADS)

    Muruganantham, Dhivakaran; Helm, Mike; Genovese, Tony

    2005-01-01

    ESnet provides authentication services and trust federation support for SciDAC projects, collaboratories, and other distributed computing applications. The ESnet ATF team operates the DOEGrids Certificate Authority, available to all DOE Office of Science programs, plus several custom CAs, including one for the National Fusion Collaboratory and one for NERSC. The secure hardware and software environment developed to support CAs is suitable for supporting additional custom authentication and authorization applications that your program might require. Seamless, secure interoperation across organizational and international boundaries is vital to collaborative science. We are fostering the development of international PKI federations by founding the TAGPMA, the American regional PMA, and the worldwide IGTF Policy Management Authority (PMA), as well as participating in European and Asian regional PMAs. We are investigating and prototyping distributed authentication technology that will allow us to support the "roaming scientist" (distributed wireless via eduroam), as well as more secure authentication methods (one-time password tokens).

  16. Application of a distributed hydrological model to the design of a road inundation warning system for flash flood prone areas

    NASA Astrophysics Data System (ADS)

    Versini, P.-A.; Gaume, E.; Andrieu, H.

    2010-04-01

    This paper presents an initial prototype of a distributed hydrological model used to map possible road inundations in a region frequently exposed to severe flash floods: the Gard region (South of France). The prototype has been tested in a pseudo real-time mode on five recent flash flood events for which actual road inundations have been inventoried. The results are promising: close to 100% probability of detection of actual inundations, with inundations detected before they were reported by the road management field teams and false alarm ratios not exceeding 30%. This specific case study differs from the standard applications of rainfall-runoff models to produce flood forecasts, focussed on a single or a limited number of gauged river cross sections. It illustrates that, despite their lack of accuracy, hydro-meteorological forecasts based on rainfall-runoff models, especially distributed models, contain valuable information for flood event management. The possible consequences of landslides, debris flows and local erosion processes, sometimes associated with flash floods, were not considered at this stage of development of the prototype. They are limited in the Gard region but should be taken into account in future developments of the approach to implement it efficiently in other areas more exposed to these phenomena such as the Alpine area.
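
    The two verification scores quoted, probability of detection and false alarm ratio, follow directly from the contingency table of warnings against observed inundations. The counts below are illustrative, not the paper's data.

        def pod(hits, misses):
            """Probability of detection: share of observed events that were warned."""
            return hits / (hits + misses)

        def far(hits, false_alarms):
            """False alarm ratio: share of warnings with no observed event."""
            return false_alarms / (hits + false_alarms)

        # Illustrative counts: 19 flooded road sections warned, 1 missed,
        # 7 warnings issued for sections that stayed dry.
        print(f"POD = {pod(19, 1):.2f}, FAR = {far(19, 7):.2f}")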

  17. Properties and potential applications of the culinary-medicinal cauliflower mushroom, Sparassis crispa Wulf.:Fr. (Aphyllophoromycetideae): a review.

    PubMed

    Chandrasekaran, Gayathri; Oh, Deuk-Sil; Shin, Hyun-Jae

    2011-01-01

    Sparassis crispa is a culinary-medicinal mushroom that has recently become popular in Korea, China, Japan, Germany, and the USA. S. crispa is a good source of food and nutraceuticals, or dietary supplements, due to its rich flavor compounds and beta-glucan content. This review is a comprehensive summary of its distribution, growth, management, general constituents, functional ingredients, as well as its current and potential medicinal and other applications.

  18. Application and Expansion of the Modular Command and Control Evaluation Structure (MCES) as a Framework for Improving Interoperability Management.

    DTIC Science & Technology

    1987-06-01


  19. Access control based on attribute certificates for medical intranet applications.

    PubMed

    Mavridis, I; Georgiadis, C; Pangalos, G; Khair, M

    2001-01-01

    Clinical information systems frequently use intranet and Internet technologies. However these technologies have emphasized sharing and not security, despite the sensitive and private nature of much health information. Digital certificates (electronic documents which recognize an entity or its attributes) can be used to control access in clinical intranet applications. To outline the need for access control in distributed clinical database systems, to describe the use of digital certificates and security policies, and to propose the architecture for a system using digital certificates, cryptography and security policy to control access to clinical intranet applications. We have previously developed a security policy, DIMEDAC (Distributed Medical Database Access Control), which is compatible with emerging public key and privilege management infrastructure. In our implementation approach we propose the use of digital certificates, to be used in conjunction with DIMEDAC. Our proposed access control system consists of two phases: the ways users gain their security credentials; and how these credentials are used to access medical data. Three types of digital certificates are used: identity certificates for authentication; attribute certificates for authorization; and access-rule certificates for propagation of access control policy. Once a user is identified and authenticated, subsequent access decisions are based on a combination of identity and attribute certificates, with access-rule certificates providing the policy framework. Access control in clinical intranet applications can be successfully and securely managed through the use of digital certificates and the DIMEDAC security policy.
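
    The three-certificate scheme can be made concrete with a small sketch. The types and checks are hypothetical simplifications for illustration; a real deployment would verify X.509 signatures and validity periods, which are elided here.

        from dataclasses import dataclass

        @dataclass
        class IdentityCert:        # authentication: who the user is
            subject: str

        @dataclass
        class AttributeCert:       # authorization: a role the user holds
            subject: str
            role: str

        @dataclass
        class AccessRuleCert:      # policy: which role may do what, where
            role: str
            action: str
            resource: str

        def is_allowed(identity, attribute, rules, action, resource):
            """Grant access only when the attribute certificate matches the
            identity and some access-rule certificate authorizes the role."""
            if attribute.subject != identity.subject:
                return False
            return any(r.role == attribute.role and r.action == action
                       and r.resource == resource for r in rules)

        rules = [AccessRuleCert("nurse", "read", "ward-records")]
        alice = IdentityCert("alice")
        print(is_allowed(alice, AttributeCert("alice", "nurse"),
                         rules, "read", "ward-records"))   # True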

  20. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility.

    PubMed

    Jaschob, Daniel; Riffle, Michael

    2012-07-30

    Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. JobCenter is a client-server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or "in the cloud") and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/.
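
    The client-driven pattern the authors describe, in which workers poll the server and can therefore sit behind firewalls, can be sketched as follows. The endpoint names and job runner are hypothetical; this is not JobCenter's actual protocol.

        import json
        import time
        import urllib.request

        SERVER = "http://jobserver.example.org"   # hypothetical job server

        def run(job):
            """Stand-in for executing a multistep job; returns a result record."""
            return {"id": job["id"], "status": "done"}

        def worker_loop(poll_interval_s=10.0):
            """All communication is worker-initiated: poll, run, report.
            Outbound-only HTTP lets the worker run behind a firewall."""
            while True:
                with urllib.request.urlopen(f"{SERVER}/next-job") as resp:
                    body = resp.read()
                if not body:
                    time.sleep(poll_interval_s)   # nothing to do; back off
                    continue
                result = run(json.loads(body))
                urllib.request.urlopen(f"{SERVER}/report",
                                       data=json.dumps(result).encode())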

  1. Tools to manage the enterprise-wide picture archiving and communications system environment.

    PubMed

    Lannum, L M; Gumpf, S; Piraino, D

    2001-06-01

    The presentation will focus on the implementation and utilization of a central picture archiving and communications system (PACS) network-monitoring tool that allows for enterprise-wide operations management and support of the image distribution network. The MagicWatch (Siemens, Iselin, NJ) PACS/radiology information system (RIS) monitoring station from Siemens has allowed our organization to create a service support structure that has given us proactive control of our environment and has allowed us to meet the service level performance expectations of the users. The Radiology Help Desk has used the MagicWatch PACS monitoring station as an applications support tool that has allowed the group to monitor network activity and individual systems performance at each node. Fast and timely recognition of the effects of single events within the PACS/RIS environment has allowed the group to proactively recognize possible performance issues and resolve problems. The PACS/operations group performs network management control, image storage management, and software distribution management from a single, central point in the enterprise. The MagicWatch station allows for the complete automation of software distribution, installation, and configuration process across all the nodes in the system. The tool has allowed for the standardization of the workstations and provides a central configuration control for the establishment and maintenance of the system standards. This report will describe the PACS management and operation prior to the implementation of the MagicWatch PACS monitoring station and will highlight the operational benefits of a centralized network and system-monitoring tool.

  2. Research on mixed network architecture collaborative application model

    NASA Astrophysics Data System (ADS)

    Jing, Changfeng; Zhao, Xi'an; Liang, Song

    2009-10-01

    When facing the complex requirements of city development, ever-growing spatial data, rapid development of geographical business and increasing business complexity, collaboration between multiple users and departments is urgently needed; however, conventional GIS software (such as Client/Server or Browser/Server models) does not support this well. Collaborative applications are one good resolution. A collaborative application has four main problems to resolve: consistency and co-editing conflicts, real-time responsiveness, unconstrained operation, and spatial data recoverability. In this paper, an application model called AMCM is put forward, based on agents and a multi-level cache. AMCM can be used in a mixed network structure and supports distributed collaboration. An agent is an autonomous, interactive, initiative and reactive computing entity in a distributed environment. Agents have been used in many fields such as computer science and automation. Agents bring new methods for cooperation and for access to spatial data. A multi-level cache is a part of the full data set. It reduces the network load and improves the access and handling of spatial data, especially when editing the spatial data. With agent technology, we make full use of its intelligent characteristics for managing the cache and cooperative editing, which brings a new method for distributed cooperation and improves efficiency.

  3. Tools for the IDL widget set within the X-windows environment

    NASA Technical Reports Server (NTRS)

    Turgeon, B.; Aston, A.

    1992-01-01

    New tools using the IDL widget set are presented. In particular, a utility allowing the easy creation and update of slide presentations, XSlideManager, is explained in detail and examples of its application are shown. In addition to XSlideManager, other mini-utilities are discussed. These various pieces of software follow the philosophy of the X-Windows distribution system and are made available to anyone within the Internet network. Acquisition procedures through anonymous ftp are clearly explained.

  4. End-to-end security for personal telehealth.

    PubMed

    Koster, Paul; Asim, Muhammad; Petkovic, Milan

    2011-01-01

    Personal telehealth is in rapid development with innovative emerging applications like disease management. With personal telehealth people participate in their own care supported by an open distributed system with health services. This poses new end-to-end security and privacy challenges. In this paper we introduce new end-to-end security requirements and present a design for consent management in the context of the Continua Health Alliance architecture. Thus, we empower patients to control how their health information is shared and used in a personal telehealth eco-system.

  5. Research on Factors Influencing Individual's Behavior of Energy Management

    NASA Astrophysics Data System (ADS)

    Fan, Yanfeng

    With the rapid rise of distributed generation, Internet of Things, and mobile Internet, both U.S. and European smart home manufacturers have developed energy management solutions for individual usage. These applications help people manage their energy consumption more efficiently. Domestic manufacturers have also launched similar products. This paper focuses on the factors influencing Energy Management Behaviour (EMB) at the individual level. By reviewing academic literature, conducting surveys in Beijing, Shanghai and Guangzhou, the author builds an integrated behavioural energy management model of the Chinese energy consumers. This paper takes the vague term of EMB and redefines it as a function of two separate behavioural concepts: Energy Management Intention (EMI), and the traditional Energy Saving Intention (ESI). Secondly, the author conducts statistical analyses on these two behavioural concepts. EMI is the main driver behind an individual's EMB. EMI is affected by Behavioural Attitudes, Subjective Norms, and Perceived Behavioural Control (PBC). Among these three key factors, PBC exerts the strongest influence. This implies that the promotion of the energy management concept is mainly driven by good application user experience (UX). The traditional ESI also demonstrates positive influence on EMB, but its impact is weaker than the impacts arising under EMI's three factors. In other words, the government and manufacturers may not be able to change an individual's energy management behaviour if they rely solely on their traditional promotion strategies. In addition, the study finds that the government may achieve better promotional results by launching subsidies to the manufacturers of these kinds of applications and smart appliances.

  6. Informal trail monitoring protocols: Denali National Park and Preserve. Final Report, October 2011

    USGS Publications Warehouse

    Marion, Jeffrey L.; Wimpey, Jeremy F.

    2011-01-01

    Managers at Alaska's Denali National Park and Preserve (DENA) sponsored this research to assess and monitor visitor-created informal trails (ITs). DENA is located in south-central Alaska and managed as a six million acre wilderness park. This program of research was guided by the following objectives: (1) Investigate alternative methods for monitoring the spatial distribution, aggregate lineal extent, and tread conditions of informal (visitor-created) trails within the park. (2) In consultation with park staff, develop, pilot test, and refine cost-effective and scientifically defensible trail monitoring procedures that are fully integrated with the park's Geographic Information System. (3) Prepare a technical report that compiles and presents research results and their management implications. This report presents the protocol development and field testing process, illustrates the types of data produced by their application, and provides guidance for their application and use. The protocols described provide managers with an efficient means to document and monitor IT conditions in settings ranging from pristine to intensively visited.

  7. Autonomous smart sensor network for full-scale structural health monitoring

    NASA Astrophysics Data System (ADS)

    Rice, Jennifer A.; Mechitov, Kirill A.; Spencer, B. F., Jr.; Agha, Gul A.

    2010-04-01

    The demands of aging infrastructure require effective methods for structural monitoring and maintenance. Wireless smart sensor networks offer the ability to enhance structural health monitoring (SHM) practices through the utilization of onboard computation to achieve distributed data management. Such an approach is scalable to the large number of sensor nodes required for high-fidelity modal analysis and damage detection. While smart sensor technology is not new, the number of full-scale SHM applications has been limited. This slow progress is due, in part, to the complex network management issues that arise when moving from a laboratory setting to a full-scale monitoring implementation. This paper presents flexible network management software that enables continuous and autonomous operation of wireless smart sensor networks for full-scale SHM applications. The software components combine sleep/wake cycling for enhanced power management with threshold detection for triggering network wide tasks, such as synchronized sensing or decentralized modal analysis, during periods of critical structural response.
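
    The combination of duty cycling and threshold-triggered network tasks can be modeled as a simple loop. The sampling numbers and threshold below are invented for illustration; this is not the software described in the paper.

        import random
        import time

        THRESHOLD_G = 0.5   # acceleration magnitude that wakes the network

        def low_power_sample():
            """Cheap, occasional reading taken while the node mostly sleeps."""
            return abs(random.gauss(0.0, 0.2))

        def high_fidelity_capture(seconds=2, rate_hz=100):
            """Stand-in for synchronized, full-rate sensing across the network."""
            return [random.gauss(0.0, 1.0) for _ in range(seconds * rate_hz)]

        def node_loop(cycles=50):
            for _ in range(cycles):
                if low_power_sample() > THRESHOLD_G:
                    data = high_fidelity_capture()   # network-wide task fires
                    print(f"event captured: {len(data)} samples")
                time.sleep(0.01)                     # sleep/wake duty cycle

        node_loop()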

  8. Guest Editors' introduction

    NASA Astrophysics Data System (ADS)

    Magee, Jeff; Moffett, Jonathan

    1996-06-01

    Special Issue on Management

    This special issue contains seven papers originally presented at an International Workshop on Services for Managing Distributed Systems (SMDS'95), held in September 1995 in Karlsruhe, Germany. The workshop was organized to present the results of two ESPRIT III funded projects, Sysman and IDSM, and more generally to bring together work in the area of distributed systems management. The workshop focused on the tools and techniques necessary for managing future large-scale, multi-organizational distributed systems. The open call for papers attracted a large number of submissions and the subsequent attendance at the workshop, which was larger than expected, clearly indicated that the topics addressed by the workshop were of considerable interest both to industry and academia.

    The papers selected for this special issue represent an excellent coverage of the issues addressed by the workshop. A particular focus of the workshop was the need to help managers deal with the size and complexity of modern distributed systems by the provision of automated support. This automation must have two prime characteristics: it must provide a flexible management system which responds rapidly to changing organizational needs, and it must provide both human managers and automated management components with the information that they need, in a form which can be used for decision-making. These two characteristics define the two main themes of this special issue.

    To satisfy the requirement for a flexible management system, workers in both industry and universities have turned to architectures which support policy directed management. In these architectures policy is explicitly represented and can be readily modified to meet changing requirements. The paper `Towards implementing policy-based systems management' by Meyer, Anstötz and Popien describes an approach whereby policy is enforced by event-triggered rules. Krause and Zimmermann in their paper `Implementing configuration management policies for distributed applications' present a system in which the configuration of the system in terms of its constituent components and their interconnections can be controlled by reconfiguration rules. Neumair and Wies in the paper `Case study: applying management policies to manage distributed queuing systems' examine how high-level policies can be transformed into practical and efficient implementations for the case of distributed job queuing systems. Koch and Krämer in `Rules and agents for automated management of distributed systems' describe the results of an experiment in using the software development environment Marvel to provide a rule based implementation of management policy. The paper by Jardin, `Supporting scalability and flexibility in a distributed management platform' reports on the experience of using a policy directed approach in the industrial strength TeMIP management platform.

    Both human managers and automated management components rely on a comprehensive monitoring system to provide accurate and timely information on which decisions are made to modify the operation of a system. The monitoring service must deal with condensing and summarizing the vast amount of data available to produce the events of interest to the controlling components of the overall management system. The paper `Distributed intelligent monitoring and reporting facilities' by Pavlou, Mykoniatis and Sanchez describes a flexible monitoring system in which the monitoring agents themselves are policy directed. Their monitoring system has been implemented in the context of the OSIMIS management platform. Debski and Janas in `The SysMan monitoring service and its management environment' describe the overall SysMan management system architecture and then concentrate on how event processing and distribution is supported in that architecture.

    The collection of papers gives a good overview of the current state of the art in distributed system management. It has reached a point at which a first generation of systems, based on policy representation within systems and automated monitoring systems, are coming into practical use. The papers also serve to identify many of the issues which are open research questions. In particular, as management systems increase in complexity, how far can we automate the refinement of high-level policies into implementations? How can we detect and resolve conflicts between policies? And how can monitoring services deal efficiently with ever-growing complexity and volume?

    We wish to acknowledge the many contributors, besides the authors, who have made this issue possible: the anonymous reviewers who have done much to assure the quality of these papers, Morris Sloman and his Programme Committee who convened the Workshop, and Thomas Usländer and his team at the Fraunhofer Institute in Karlsruhe who acted as hosts.

  9. DataHub: Knowledge-based data management for data discovery

    NASA Astrophysics Data System (ADS)

    Handley, Thomas H.; Li, Y. Philip

    1993-08-01

    Currently available database technology is largely designed for business data-processing applications, and seems inadequate for scientific applications. The research described in this paper, the DataHub, will address the issues associated with this shortfall in technology utilization and development. The DataHub development is addressing the key issues in scientific data management of scientific database models and resource sharing in a geographically distributed, multi-disciplinary, science research environment. Thus, the DataHub will be a server between the data suppliers and data consumers to facilitate data exchanges, to assist science data analysis, and to provide a systematic approach for science data management. More specifically, the DataHub's objectives are to provide support for (1) exploratory data analysis (i.e., data driven analysis); (2) data transformations; (3) data semantics capture and usage; (4) analysis-related knowledge capture and usage; and (5) data discovery, ingestion, and extraction. Applying technologies that vary from deductive databases, semantic data models, data discovery, knowledge representation and inferencing, exploratory data analysis techniques and modern man-machine interfaces, DataHub will provide a prototype, integrated environment to support research scientists' needs in multiple disciplines (i.e., oceanography, geology, and atmospheric science) while addressing the more general science data management issues. Additionally, the DataHub will provide data management services to exploratory data analysis applications such as LinkWinds and NCSA's XIMAGE.

  10. Crowdsourcing applications for public health.

    PubMed

    Brabham, Daren C; Ribisl, Kurt M; Kirchner, Thomas R; Bernhardt, Jay M

    2014-02-01

    Crowdsourcing is an online, distributed, problem-solving, and production model that uses the collective intelligence of networked communities for specific purposes. Although its use has benefited many sectors of society, it has yet to be fully realized as a method for improving public health. This paper defines the core components of crowdsourcing and proposes a framework for understanding the potential utility of crowdsourcing in the domain of public health. Four discrete crowdsourcing approaches are described (knowledge discovery and management; distributed human intelligence tasking; broadcast search; and peer-vetted creative production types) and a number of potential applications for crowdsourcing for public health science and practice are enumerated. © 2013 American Journal of Preventive Medicine Published by American Journal of Preventive Medicine All rights reserved.

  11. Beginning to manage drug discovery and development knowledge.

    PubMed

    Sumner-Smith, M

    2001-05-01

    Knowledge management approaches and technologies are beginning to be implemented by the pharmaceutical industry in support of new drug discovery and development processes aimed at greater efficiencies and effectiveness. This trend coincides with moves to reduce paper, coordinate larger teams with more diverse skills that are distributed around the globe, and to comply with regulatory requirements for electronic submissions and the associated maintenance of electronic records. Concurrently, the available technologies have implemented web-based architectures with a greater range of collaborative tools and personalization through portal approaches. However, successful application of knowledge management methods depends on effective cultural change management, as well as proper architectural design to match the organizational and work processes within a company.

  12. The application of virtual reality systems as a support of digital manufacturing and logistics

    NASA Astrophysics Data System (ADS)

    Golda, G.; Kampa, A.; Paprocka, I.

    2016-08-01

    Modern trends in the development of computer aided techniques are heading toward the integration of the design of competitive products and so-called "digital manufacturing and logistics", supported by computer simulation software. All phases of the product lifecycle: starting from the design of a new product, through planning and control of manufacturing, assembly, internal logistics and repairs, quality control, distribution to customers and after-sale service, up to its recycling or utilization, should be aided and managed by advanced packages of product lifecycle management software. This paper describes important problems in providing an efficient flow of materials in supply chain management across the whole product lifecycle using computer simulation. The authors pay attention to the processes of acquiring relevant information and correct data, necessary for virtual modeling and computer simulation of integrated manufacturing and logistics systems. The article describes possibilities of using virtual reality software applications for modeling and simulating production and logistics processes in an enterprise across different aspects of product lifecycle management. The authors demonstrate an effective method of creating computer simulations for digital manufacturing and logistics, show modeled and programmed examples and solutions, pay attention to development trends, and present options of the applications that go beyond the enterprise.

  13. PLOCAN glider portal: a gateway for useful data management and visualization system

    NASA Astrophysics Data System (ADS)

    Morales, Tania; Lorenzo, Alvaro; Viera, Josue; Barrera, Carlos; José Rueda, María

    2014-05-01

    Nowadays, monitoring ocean behavior and its characteristics involves a wide range of sources able to gather and provide a vast amount of data at spatio-temporal scales. Multiplatform infrastructures like PLOCAN hold a variety of autonomous Lagrangian and Eulerian devices that collect information which is then transferred to land in near-real time. Managing all this data collection in an efficient way is a major issue. Advances in ocean observation technologies, where underwater autonomous gliders play a key role, have improved spatio-temporal resolution, which offers a deeper understanding of the ocean but requires a bigger effort in the data management process. There are general requirements for data management in such environments, such as processing raw data at different levels to obtain valuable information, storing data coherently and providing accurate products to final users according to their specific needs. Managing large amounts of data can certainly be tedious and complex without the right tools and operational procedures; hence automating these tasks through software applications saves time and reduces errors. Moreover, data distribution is highly relevant since scientists tend to assimilate different sources for comparison and validation. The use of web applications has boosted the necessary scientific dissemination. Within this context, PLOCAN has implemented a set of independent but compatible applications to process, store and disseminate information gathered from different oceanographic platforms. These applications have been implemented using open standards, such as HTML and CSS, and open source software, with Python as the programming language and Django as the web framework. More specifically, a glider application has been developed within the framework of the FP7-GROOM project. Regarding data management, this project focuses on collecting and making available consistent and quality-controlled datasets as well as fostering open access to glider data.

  14. Adaptive Management of Computing and Network Resources for Spacecraft Systems

    NASA Technical Reports Server (NTRS)

    Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)

    2000-01-01

    It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.

  15. Advanced Power Technology Development Activities for Small Satellite Applications

    NASA Technical Reports Server (NTRS)

    Piszczor, Michael F.; Landis, Geoffrey A.; Miller, Thomas B.; Taylor, Linda M.; Hernandez-Lugo, Dionne; Raffaelle, Ryne; Landi, Brian; Hubbard, Seth; Schauerman, Christopher; Ganter, Mathew; hide

    2017-01-01

    NASA Glenn Research Center (GRC) has a long history related to the development of advanced power technology for space applications. This expertise covers the breadth of energy generation (photovoltaics, thermal energy conversion, etc.), energy storage (batteries, fuel cell technology, etc.), power management and distribution, and power systems architecture and analysis. Such advanced technology is now being developed for small satellite and cubesat applications and could have a significant impact on the longevity and capabilities of these missions. A presentation during the Pre-Conference Workshop will focus on various advanced power technologies being developed and demonstrated by NASA, and their possible application within the small satellite community.

  16. PrismTech Data Distribution Service Java API Evaluation

    NASA Technical Reports Server (NTRS)

    Riggs, Cortney

    2008-01-01

    My internship duties with Launch Control Systems required me to begin performance testing of an implementation, by PrismTech Limited, of the Object Management Group's (OMG) Data Distribution Service (DDS) specification, accessed through its Java application programming interface (API). DDS is a networking middleware for real-time data distribution. The performance testing covers latency, redundant publishers, extended duration, redundant failover, and read performance. Time constraints allowed only a data throughput test, but I designed the testing applications to perform all of the performance tests when time allows. Performance evaluation data such as megabits per second and central processing unit (CPU) time consumption were not easily attainable through the Java programming language; they required new methods and classes created in the test applications. Evaluation of this product showed the rate at which data can be sent across the network. Performance rates are better on Linux platforms than on AIX and Sun platforms. Compared with the previous C++ API, the performance evaluation also shows the language differences for the implementation: the Java API of the DDS has lower throughput performance than the C++ API.
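
    To make the shape of such a throughput test concrete, here is a minimal, middleware-agnostic harness; the `publish` callable is a hypothetical stand-in for a real DDS write operation, and the number it produces is an in-process upper bound rather than a network measurement.

    ```python
    # A minimal throughput harness in the spirit of the test described above;
    # `publish` is a hypothetical stand-in for a real DDS write call.
    import time

    def measure_throughput(publish, payload: bytes, n_samples: int) -> float:
        """Return observed throughput in megabits per second."""
        start = time.perf_counter()
        for _ in range(n_samples):
            publish(payload)
        elapsed = time.perf_counter() - start
        bits_sent = len(payload) * 8 * n_samples
        return bits_sent / elapsed / 1e6

    sink = []
    mbps = measure_throughput(sink.append, b"x" * 1024, 100_000)
    print(f"{mbps:.1f} Mbit/s (in-process upper bound, no network)")
    ```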

  17. Managing distributed software development in the Virtual Astronomical Observatory

    NASA Astrophysics Data System (ADS)

    Evans, Janet D.; Plante, Raymond L.; Boneventura, Nina; Busko, Ivo; Cresitello-Dittmar, Mark; D'Abrusco, Raffaele; Doe, Stephen; Ebert, Rick; Laurino, Omar; Pevunova, Olga; Refsdal, Brian; Thomas, Brian

    2012-09-01

    The U.S. Virtual Astronomical Observatory (VAO) is a product-driven organization that provides new scientific research capabilities to the astronomical community. Software development for the VAO follows a lightweight framework that guides development of science applications and infrastructure. Challenges to be overcome include distributed development teams, part-time efforts, and highly constrained schedules. We describe the process we followed to conquer these challenges while developing Iris, the VAO application for analysis of 1-D astronomical spectral energy distributions (SEDs). Iris was successfully built and released in less than a year with a team distributed across four institutions. The project followed existing International Virtual Observatory Alliance interoperability standards for spectral data and contributed a SED library as a by-product of the project. We emphasize lessons learned that will be folded into future development efforts. In our experience, a well-defined process that provides guidelines to ensure the project is cohesive and stays on track is key to success. Internal product deliveries with a planned test and feedback loop are critical. Release candidates are measured against use cases established early in the process, and provide the opportunity to assess priorities and make course corrections during development. Also key is the participation of a stakeholder such as a lead scientist who manages the technical questions, advises on priorities, and is actively involved as a lead tester. Finally, frequent scheduled communications (for example, a bi-weekly teleconference) ensure issues are resolved quickly and the team is working toward a common vision.

  18. Adaptable data management for systems biology investigations.

    PubMed

    Boyle, John; Rovira, Hector; Cavnor, Chris; Burdick, David; Killcoyne, Sarah; Shmulevich, Ilya

    2009-03-06

    Within research, each experiment is different: the focus changes, and the data are generated by a continually evolving barrage of technologies. New techniques are continually introduced, with usage ranging from in-house protocols to high-throughput instrumentation. To support these requirements, data management systems are needed that can be rapidly built and readily adapted to new usage. The adaptable data management system discussed here is designed to support the seamless mining and analysis of biological experiment data commonly used in systems biology (e.g., ChIP-chip, gene expression, proteomics, imaging, flow cytometry). We use different content graphs to represent different views upon the data. These views are designed for different roles: equipment-specific views are used to gather instrumentation information; data-processing-oriented views enable the rapid development of analysis applications; and research-project-specific views organize information for individual research experiments. This management system allows both the rapid introduction of new types of information and the evolution of the knowledge it represents. Data management is an important aspect of any research enterprise. It is the foundation on which most applications are built, and it must be easily extended to serve new functionality for new scientific areas. We have found that adopting a three-tier architecture for data management, built around distributed standardized content repositories, allows us to rapidly develop new applications to support a diverse user community.
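
    To make the "different views upon the same data" idea concrete, here is a minimal sketch; the record fields and roles are hypothetical, not the system's actual content graphs.

    ```python
    # Illustrative sketch of role-specific views over one underlying record;
    # fields and roles are invented for illustration.
    experiment = {
        "id": "exp-042",
        "instrument": {"type": "flow-cytometer", "calibration": "2009-02-01"},
        "raw_files": ["run1.fcs", "run2.fcs"],
        "project": {"name": "systems-bio-screen", "owner": "lab-7"},
    }

    def equipment_view(record):      # what an instrument operator needs
        return {"id": record["id"], **record["instrument"]}

    def processing_view(record):     # what an analysis pipeline consumes
        return {"id": record["id"], "inputs": record["raw_files"]}

    def project_view(record):        # how a research project organizes it
        return {"id": record["id"], **record["project"]}

    for view in (equipment_view, processing_view, project_view):
        print(view(experiment))
    ```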

  19. On the influence of latency estimation on dynamic group communication using overlays

    NASA Astrophysics Data System (ADS)

    Vik, Knut-Helge; Griwodz, Carsten; Halvorsen, Pål

    2009-01-01

    Distributed interactive applications tend to have stringent latency requirements, and some may have high bandwidth demands. Many also have very dynamic user groups for which all-to-all communication is needed. In online multiplayer games, for example, such groups are determined through region-of-interest management in the application. We have investigated a variety of group management approaches for overlay networks in earlier work and shown that several useful tree heuristics exist. However, these heuristics require full knowledge of all overlay link latencies. Since this is not scalable, we investigate the effects that latency estimation techniques have on the quality of overlay tree constructions. We do this by evaluating one example of our group management approaches in PlanetLab and examining how latency estimation techniques influence its quality. Specifically, we investigate how two well-known latency estimation techniques, Vivaldi and Netvigator, affect the quality of tree building.
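
    For readers unfamiliar with the first of the two techniques, here is a simplified sketch of a Vivaldi-style coordinate update (the constants and the 2-D space are illustrative; the published algorithm also adapts the step size based on confidence).

    ```python
    # Simplified Vivaldi-style update: each node nudges its synthetic coordinate
    # so that coordinate distance approaches measured RTT.
    import math

    def vivaldi_update(xi, xj, rtt_ms, delta=0.25):
        """Move node i's 2-D coordinate xi toward consistency with measured RTT."""
        dist = math.dist(xi, xj) or 1e-9          # avoid a zero-length direction
        error = rtt_ms - dist                     # positive: coordinates too close
        ux, uy = (xi[0] - xj[0]) / dist, (xi[1] - xj[1]) / dist
        return (xi[0] + delta * error * ux, xi[1] + delta * error * uy)

    xi, xj = (0.0, 0.0), (3.0, 4.0)               # coordinate distance = 5
    for _ in range(20):
        xi = vivaldi_update(xi, xj, rtt_ms=20.0)
    print(math.dist(xi, xj))                      # converges toward 20
    ```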

  20. ScyFlow: An Environment for the Visual Specification and Execution of Scientific Workflows

    NASA Technical Reports Server (NTRS)

    McCann, Karen M.; Yarrow, Maurice; DeVivo, Adrian; Mehrotra, Piyush

    2004-01-01

    With the advent of grid technologies, scientists and engineers are building more and more complex applications to utilize distributed grid resources. The core grid services provide a path for accessing and utilizing these resources in a secure and seamless fashion. However, what scientists need is an environment that will allow them to specify their application runs at a high organizational level, and then support efficient execution across any given set or sets of resources. We have been designing and implementing ScyFlow, a dual-interface architecture (both GUI and API) that addresses this problem. The scientist/user specifies the application tasks along with the necessary control and data flow, and monitors and manages the execution of the resulting workflow across the distributed resources. In this paper, we utilize two scenarios to provide the details of the two modules of the project, the visual editor and the runtime workflow engine.
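
    The runtime engine's core job, executing tasks in dependency order, can be sketched in a few lines; the task names below are hypothetical, and a real engine would dispatch tasks to remote grid resources rather than call them in-process.

    ```python
    # Toy sketch of a workflow engine: tasks plus control-flow edges executed
    # in dependency order. Task names are invented, not ScyFlow's.
    from graphlib import TopologicalSorter

    tasks = {
        "fetch-input": lambda: print("staging input files"),
        "run-solver": lambda: print("running solver on grid resource"),
        "post-process": lambda: print("generating plots"),
    }
    deps = {"run-solver": {"fetch-input"}, "post-process": {"run-solver"}}

    for name in TopologicalSorter(deps).static_order():
        tasks[name]()    # a real engine would dispatch to remote resources
    ```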

  1. SemanticOrganizer: A Customizable Semantic Repository for Distributed NASA Project Teams

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Berrios, Daniel C.; Carvalho, Robert E.; Hall, David R.; Rich, Stephen J.; Sturken, Ian B.; Swanson, Keith J.; Wolfe, Shawn R.

    2004-01-01

    SemanticOrganizer is a collaborative knowledge management system designed to support distributed NASA projects, including diverse teams of scientists, engineers, and accident investigators. The system provides a customizable, semantically structured information repository that stores work products relevant to multiple projects of differing types. SemanticOrganizer is one of the earliest and largest semantic web applications deployed at NASA to date, and has been used in diverse contexts ranging from the investigation of Space Shuttle Columbia's accident to the search for life on other planets. Although the underlying repository employs a single unified ontology, access control and ontology customization mechanisms make the repository contents appear different for each project team. This paper describes SemanticOrganizer, its customization facilities, and a sampling of its applications. The paper also summarizes some key lessons learned from building and fielding a successful semantic web application across a wide-ranging set of domains with diverse users.

  2. QUALITY ASSURANCE AND QUALITY CONTROL IN THE DEVELOPMENT AND APPLICATION OF THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT (AGWA) TOOL

    EPA Science Inventory

    Planning and assessment in land and water resource management are evolving from simple, local-scale problems toward complex, spatially explicit regional ones. Such problems have to be addressed with distributed models that can compute runoff and erosion at different spatial and t...

  3. Application of the remote-sensing communication model to a time-sensitive wildfire remote-sensing system

    Treesearch

    Christopher D. Lippitt; Douglas A. Stow; Philip J. Riggan

    2016-01-01

    Remote sensing for hazard response requires a priori identification of sensor, transmission, processing, and distribution methods to permit the extraction of relevant information in timescales sufficient to allow managers to make a given time-sensitive decision. This study applies and demonstrates the utility of the Remote Sensing Communication...

  4. Renewables-Friendly Grid Development Strategies. Experience in the United States, Potential Lessons for China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hurlbut, David; Zhou, Ella; Porter, Kevin

    2015-10-01

    This report aims to help China's reform effort by providing a concise summary of experience in the United States with "renewables-friendly" grid management, focusing on experiences that might be applicable to China. It focuses on utility-scale renewables and sets aside issues related to distributed generation.

  5. Future trends in transport and fate of diffuse contaminants in catchments, with special emphasis on stable isotope applications

    USGS Publications Warehouse

    Turner, J.; Albrechtsen, H.-J.; Bonell, M.; Duguet, J.-P.; Harris, B.; Meckenstock, R.; McGuire, K.; Moussa, R.; Peters, N.; Richnow, H.H.; Sherwood-Lollar, B.; Uhlenbrook, S.; van, Lanen H.

    2006-01-01

    A summary is provided of the first of a series of proposed Integrated Science Initiative workshops supported by the UNESCO International Hydrological Programme. The workshop brought together hydrologists, environmental chemists, microbiologists, stable isotope specialists, and natural resource managers with the purpose of communicating new ideas on ways to assess microbial degradation processes and reactive transport at catchment scales. The focus was on diffuse contamination at catchment scales and the application of compound-specific isotope analysis (CSIA) in the assessment of biological degradation processes of agrochemicals. Major outcomes were identifying the linkage between water residence time distribution and rates of contaminant degradation, identifying the need for better information on compound-specific microbial degradation isotope fractionation factors, and recognizing the potential of CSIA for identifying key degradative processes. In the natural resource management context, a framework was developed in which CSIA techniques were identified as practically unique in their capacity to serve as distributed integrating indicators of process across a range of scales (micro to diffuse) relevant to the problem of diffuse pollution assessment. Copyright © 2006 John Wiley & Sons, Ltd.

  6. Experimental Evaluation of Processing Time for the Synchronization of XML-Based Business Objects

    NASA Astrophysics Data System (ADS)

    Ameling, Michael; Wolf, Bernhard; Springer, Thomas; Schill, Alexander

    Business objects (BOs) are data containers for complex data structures used in business applications such as Supply Chain Management and Customer Relationship Management. Due to the replication of application logic, multiple copies of BOs are created which have to be synchronized and updated. This is a complex and time-consuming task because BOs vary considerably in their structure according to the distribution, number, and size of elements. Since BOs are internally represented as XML documents, the parsing of XML is one major cost factor which has to be considered when minimizing the processing time during synchronization. The prediction of the parsing time for BOs is a significant property for the selection of an efficient synchronization mechanism. In this paper, we present a method to evaluate the influence of the structure of BOs on their parsing time. The results of our experimental evaluation, incorporating four different XML parsers, examine the dependencies between the distribution of elements and the parsing time. Finally, a general cost model is validated and simplified according to the results of the experimental setup.
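
    A minimal experiment in the same spirit, using Python's standard-library parser, shows how such a measurement can be set up; the document generator is an invented stand-in for real BOs.

    ```python
    # Small experiment in the spirit of the paper: how parsing time grows with
    # the number of elements in a business-object-like XML document.
    import time
    import xml.etree.ElementTree as ET

    def make_bo_xml(n_items: int) -> str:
        items = "".join(f"<item id='{i}'><qty>1</qty></item>" for i in range(n_items))
        return f"<businessObject>{items}</businessObject>"

    for n in (100, 1_000, 10_000):
        doc = make_bo_xml(n)
        start = time.perf_counter()
        ET.fromstring(doc)
        print(f"{n:>6} elements: {(time.perf_counter() - start) * 1e3:.2f} ms")
    ```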

  7. Simulation and Control Lab Development for Power and Energy Management for NASA Manned Deep Space Missions

    NASA Technical Reports Server (NTRS)

    McNelis, Anne M.; Beach, Raymond F.; Soeder, James F.; McNelis, Nancy B.; May, Ryan; Dever, Timothy P.; Trase, Larry

    2014-01-01

    The development of distributed hierarchical and agent-based control systems will allow for reliable autonomous energy management and power distribution for on-orbit missions. Power is one of the most critical systems on board a space vehicle, requiring quick response times when a fault or emergency is identified. As NASA's missions with human presence extend beyond low Earth orbit, autonomous control of vehicle power systems will be necessary and will need to function reliably for long periods of time. In the design of autonomous electrical power control systems, there is a need to dynamically simulate and verify EPS controller functionality prior to use on orbit. This paper presents work at NASA Glenn Research Center in Cleveland, Ohio, where development of a controls laboratory is being completed that will be utilized to demonstrate advanced prototype EPS controllers for space, aeronautical, and terrestrial applications. The control laboratory hardware and software, and the application of an autonomous controller for demonstration with the ISS electrical power system, are the subject of this paper.

  8. Towards an Australian ensemble streamflow forecasting system for flood prediction and water management

    NASA Astrophysics Data System (ADS)

    Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.

    2016-12-01

    Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead times, so we characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the need for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.
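
    The limb-dependent mixture-Gaussian idea can be sketched as follows; all parameter values are invented for illustration, and the actual error model works on transformed flows with fitted parameters.

    ```python
    # Hedged sketch of the error-model idea: residuals in a transformed space
    # come from a two-component Gaussian mixture whose parameters differ between
    # rising and falling limbs. Parameter values are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    params = {  # (weight, sigma_narrow, sigma_wide) per hydrograph limb
        "rising":  (0.8, 0.10, 0.40),
        "falling": (0.8, 0.05, 0.20),
    }

    def sample_residuals(limb: str, n: int) -> np.ndarray:
        w, s1, s2 = params[limb]
        narrow = rng.random(n) < w                   # choose mixture component
        return np.where(narrow, rng.normal(0, s1, n), rng.normal(0, s2, n))

    print(sample_residuals("rising", 5))
    ```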

  9. Smartphone technologies and Bayesian networks to assess shorebird habitat selection

    USGS Publications Warehouse

    Zeigler, Sara; Thieler, E. Robert; Gutierrez, Ben; Plant, Nathaniel G.; Hines, Megan K.; Fraser, James D.; Catlin, Daniel H.; Karpanty, Sarah M.

    2017-01-01

    Understanding patterns of habitat selection across a species’ geographic distribution can be critical for adequately managing populations and planning for habitat loss and related threats. However, studies of habitat selection can be time consuming and expensive over broad spatial scales, and a lack of standardized monitoring targets or methods can impede the generalization of site-based studies. Our objective was to collaborate with natural resource managers to define available nesting habitat for piping plovers (Charadrius melodus) throughout their U.S. Atlantic coast distribution from Maine to North Carolina, with a goal of providing science that could inform habitat management in response to sea-level rise. We characterized a data collection and analysis approach as being effective if it provided low-cost collection of standardized habitat-selection data across the species’ breeding range within 1–2 nesting seasons and accurate nesting location predictions. In the method developed, >30 managers and conservation practitioners from government agencies and private organizations used a smartphone application, “iPlover,” to collect data on landcover characteristics at piping plover nest locations and random points on 83 beaches and barrier islands in 2014 and 2015. We analyzed these data with a Bayesian network that predicted the probability a specific combination of landcover variables would be associated with a nesting site. Although we focused on a shorebird, our approach can be modified for other taxa. Results showed that the Bayesian network performed well in predicting habitat availability and confirmed predicted habitat preferences across the Atlantic coast breeding range of the piping plover. We used the Bayesian network to map areas with a high probability of containing nesting habitat on the Rockaway Peninsula in New York, USA, as an example application. Our approach facilitated the collation of evidence-based information on habitat selection from many locations and sources, which can be used in management and decision-making applications.
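
    The prediction step can be illustrated with a toy calculation: estimate the probability that a landcover combination hosts a nest from its relative frequency at nest versus random points. The variables and counts below are invented for illustration and are far simpler than the actual Bayesian network.

    ```python
    # Toy illustration of the prediction idea; variables and counts are invented.
    from collections import Counter

    # (substrate, vegetation) -> times observed at nests vs. at random points
    nests  = Counter({("sand", "sparse"): 40, ("sand", "dense"): 5,
                      ("cobble", "sparse"): 12})
    random = Counter({("sand", "sparse"): 25, ("sand", "dense"): 30,
                      ("cobble", "sparse"): 8})

    def p_nest(combo, prior=0.5):
        """P(nest | landcover combo) from relative frequencies with a flat prior."""
        pn = nests[combo] / sum(nests.values())
        pr = random[combo] / sum(random.values())
        return pn * prior / (pn * prior + pr * (1 - prior))

    print(f"P(nest | sand, sparse) = {p_nest(('sand', 'sparse')):.2f}")
    ```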

  10. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct-attached storage, network-attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path, which solves the metadata bottleneck of traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices, and security of data access. We use object-based storage in the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In the storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. According to the storage model, we present the architecture of a distributed remote sensing image application system based on object-based storage, and give test results comparing the write performance of the traditional network storage model and the object-based storage model.

  11. Mapping Applications Center, National Mapping Division, U.S. Geological Survey

    USGS Publications Warehouse

    ,

    1996-01-01

    The Mapping Applications Center (MAC), National Mapping Division (NMD), is the eastern regional center for coordinating the production, distribution, and sale of maps and digital products of the U.S. Geological Survey (USGS). It is located in the John Wesley Powell Federal Building in Reston, Va. The MAC's major functions are to (1) establish and manage cooperative mapping programs with State and Federal agencies; (2) perform new research in preparing and applying geospatial information; (3) prepare digital cartographic data, special purpose maps, and standard maps from traditional and classified source materials; (4) maintain the domestic names program of the United States; (5) manage the National Aerial Photography Program (NAPP); (6) coordinate the NMD's publications and outreach programs; and (7) direct the USGS map-printing operations.

  12. An enhanced Ada run-time system for real-time embedded processors

    NASA Technical Reports Server (NTRS)

    Sims, J. T.

    1991-01-01

    An enhanced Ada run-time system has been developed to support real-time embedded processor applications. The primary focus of this development effort has been on the tasking system and the memory management facilities of the run-time system. The tasking system has been extended to support efficient and precise periodic task execution as required for control applications. Event-driven task execution providing a means of task-asynchronous control and communication among Ada tasks is supported in this system. Inter-task control is even provided among tasks distributed on separate physical processors. The memory management system has been enhanced to provide object allocation and protected access support for memory shared between disjoint processors, each of which is executing a distinct Ada program.

  13. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    NASA Astrophysics Data System (ADS)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. They investigated two exemplar large-scale science-driver workflow applications: 1) calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons learned for continuing development. Applicability of Cloud Computing: Commercial cloud providers generally charge for all operations, including processing, transfer of input and output data, and storage of data, so the costs of running applications vary widely according to how they use resources. The cloud is well suited to processing CPU-bound (and memory-bound) workflows such as the periodogram code, given the relatively low cost of processing in comparison with I/O operations. I/O-bound applications such as Montage perform best on high-performance clusters with fast networks and parallel file systems. Science-driven Cyberinfrastructure: Montage has been widely used as a driver application to develop workflow management services, such as task scheduling in distributed environments, designing fault-tolerance techniques for job schedulers, and developing workflow orchestration techniques. Running Parallel Applications Across Distributed Cloud Environments: Data processing will eventually take place in parallel, distributed across cyberinfrastructure environments having different architectures. We have used the Pegasus Workflow Management System (WMS) to successfully run applications across three very different environments: TeraGrid, OSG (Open Science Grid), and FutureGrid. Provisioning resources across different grids and clouds (also referred to as Sky Computing) involves establishing a distributed environment where issues of, e.g., remote job submission, data management, and security need to be addressed. This environment also requires building virtual machine images that can run in different environments. Usually, each cloud provides basic images that can be customized with additional software and services. In most of our work, we provisioned compute resources using a custom application called Wrangler. Pegasus WMS abstracts the architectures of the compute environments away from the end user, and can be considered a first-generation tool suitable for scientists to run their applications on disparate environments.

  14. A Study of Theory U and Its Application to a Complex Japanese Maritime Self-Defense Force Problem

    DTIC Science & Technology

    2014-06-01

    Naval Postgraduate School, Monterey, California; Master's thesis, approved for public release, distribution unlimited. This thesis describes the types of problems that require managers to change their way of thinking, and a new approach to this way of thinking called "Theory U."

  15. Radial basis function and its application in tourism management

    NASA Astrophysics Data System (ADS)

    Hu, Shan-Feng; Zhu, Hong-Bin; Zhao, Lei

    2018-05-01

    In this work, several applications and the performance of the radial basis function (RBF) are briefly reviewed. Then, the binomial function combined with three different RBFs, the multiquadric (MQ), inverse quadric (IQ), and inverse multiquadric (IMQ) distributions, is adopted to model tourism data for Huangshan in China. Simulation results show that all the models match the sample data very well. Among the three models, the IMQ-RBF model is found to be more suitable for forecasting tourist flow.
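
    For reference, the three kernels have standard closed forms, sketched below together with a simple interpolation fit; the "monthly tourist flow" series is synthetic, not the Huangshan data, and the shape parameter is illustrative.

    ```python
    # The three RBF kernels named above, plus a basic interpolation fit.
    import numpy as np

    def mq(r, c=1.0):  return np.sqrt(r**2 + c**2)        # multiquadric
    def iq(r, c=1.0):  return 1.0 / (r**2 + c**2)         # inverse quadric
    def imq(r, c=1.0): return 1.0 / np.sqrt(r**2 + c**2)  # inverse multiquadric

    x = np.arange(12, dtype=float)                 # months
    y = 50 + 30 * np.sin(2 * np.pi * x / 12)       # synthetic visitor counts

    for phi in (mq, iq, imq):
        A = phi(np.abs(x[:, None] - x[None, :]))   # interpolation matrix
        w = np.linalg.solve(A, y)                  # RBF weights
        y_hat = A @ w
        print(phi.__name__, "max fit error:", float(np.max(np.abs(y_hat - y))))
    ```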

  16. The Application of Collaborative Business Intelligence Technology in the Hospital SPD Logistics Management Model

    PubMed Central

    LIU, Tongzhu; SHEN, Aizong; HU, Xiaojian; TONG, Guixian; GU, Wei

    2017-01-01

    Background: We aimed to apply a collaborative business intelligence (BI) system to the hospital supply, processing and distribution (SPD) logistics management model. Methods: We searched the Engineering Village database, China National Knowledge Infrastructure (CNKI) and Google for articles (published from 2011 to 2016), books, Web pages, etc., to understand SPD- and BI-related theories and recent research status. We realized the application of collaborative BI technology in the hospital SPD logistics management model by leveraging data mining techniques to discover knowledge from complex data and collaborative techniques to improve the theories of business process. Results: For the application of the BI system, we: (i) proposed a layered structure of a collaborative BI system for intelligent management in hospital logistics; (ii) built a data warehouse for the collaborative BI system; (iii) improved data mining techniques such as support vector machines (SVM) and the swarm intelligence firefly algorithm to solve key problems in the hospital logistics collaborative BI system; (iv) researched collaborative techniques oriented to data and business process optimization to improve the business processes of hospital logistics management. Conclusion: A proper combination of the SPD model and a BI system will improve the management of logistics in hospitals. Successful implementation of the study requires: (i) innovating and improving the traditional SPD model and making appropriate implementation plans and schedules for the application of the BI system according to the actual situations of hospitals; (ii) collaborative participation of internal hospital departments, including information, logistics, nursing, medical and financial departments; (iii) timely response of external suppliers. PMID:28828316

  17. Enrollment Management in Medical School Admissions: A Novel Evidence-Based Approach at One Institution.

    PubMed

    Burkhardt, John C; DesJardins, Stephen L; Teener, Carol A; Gay, Steven E; Santen, Sally A

    2016-11-01

    In higher education, enrollment management has been developed to accurately predict the likelihood of enrollment of admitted students. This allows evidence to dictate the numbers of interviews scheduled, offers of admission, and the distribution of financial aid packages. The applicability of enrollment management techniques to medical education was tested through creation of a predictive enrollment model at the University of Michigan Medical School (U-M). U-M and American Medical College Application Service data (2006-2014) were combined to create a database including applicant demographics, academic application scores, institutional financial aid offers, and choice of school attended. Binomial logistic regression and multinomial logistic regression models were estimated in order to study factors related to enrollment at the local institution versus elsewhere, and to groupings of competing peer institutions. A predictive analytic "dashboard" was created for practical use. Both models were significant at P < .001 and had similar predictive performance. In the binomial model, female sex, underrepresented minority status, grade point average, Medical College Admission Test score, admissions committee desirability score, and most individual financial aid offers were significant (P < .05). The significant covariates were similar in the multinomial model (excluding female sex), which provided separate likelihoods of students enrolling at different institutional types. An enrollment-management-based approach would allow medical schools to better manage the number of students they admit and to target recruitment efforts to improve their likelihood of success. It also performs a key institutional research function for understanding failed recruitment of highly desirable candidates.

  18. Architecture of next-generation information management systems for digital radiology enterprises

    NASA Astrophysics Data System (ADS)

    Wong, Stephen T. C.; Wang, Huili; Shen, Weimin; Schmidt, Joachim; Chen, George; Dolan, Tom

    2000-05-01

    Few information systems today offer a clear and flexible means to define and manage the automated part of radiology processes. None of them provides a coherent and scalable architecture that can easily cope with heterogeneity and the inevitable local adaptation of applications. Most importantly, they often lack a model that can integrate clinical and administrative information to aid better decisions in managing resources, optimizing operations, and improving productivity. Digital radiology enterprises require cost-effective solutions that deliver information to the right person, in the right place, at the right time. We propose a new architecture of image information management systems for digital radiology enterprises. Such a system is based on emerging technologies in workflow management, distributed object computing, and Java and Web techniques, as well as Philips' domain knowledge in radiology operations. Our design adopts the '4+1' architectural view approach. In this new architecture, PACS and RIS become one, while user interaction can be automated by customized workflow processes. Clinical service applications are implemented as active components. They can be substituted by locally adapted applications and replicated for fault tolerance and load balancing. Furthermore, the system provides powerful query and statistical functions for managing resources and improving productivity in real time. This work points to a new direction for image information management in the next millennium. We illustrate the design with implemented examples from a working prototype.

  19. Programmable multi-node quantum network design and simulation

    NASA Astrophysics Data System (ADS)

    Dasari, Venkat R.; Sadlier, Ronald J.; Prout, Ryan; Williams, Brian P.; Humble, Travis S.

    2016-05-01

    Software-defined networking offers a device-agnostic programmable framework to encode new network functions. Externally centralized control plane intelligence allows programmers to write network applications and to build functional network designs. OpenFlow is a key protocol widely adopted to build programmable networks because of its programmability, flexibility and ability to interconnect heterogeneous network devices. We simulate the functional topology of a multi-node quantum network that uses programmable network principles to manage quantum metadata for protocols such as teleportation, superdense coding, and quantum key distribution. We first show how the OpenFlow protocol can manage the quantum metadata needed to control the quantum channel. We then use numerical simulation to demonstrate robust programmability of a quantum switch via the OpenFlow network controller while executing an application of superdense coding. We describe the software framework implemented to carry out these simulations and we discuss near-term efforts to realize these applications.
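
    The quantum payload itself is easy to simulate numerically. The sketch below reproduces the superdense-coding application mentioned above: two classical bits are encoded by a local Pauli operation on one half of a shared Bell pair and recovered by a Bell-basis measurement. This is the textbook protocol, not the paper's switch simulation.

    ```python
    # Superdense coding: two classical bits ride on one qubit of a Bell pair.
    import numpy as np

    I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00> + |11>)/sqrt(2)
    encodings = {"00": I, "01": X, "10": Z, "11": X @ Z}

    for bits, gate in encodings.items():
        state = np.kron(gate, I) @ bell             # Alice encodes on her qubit
        state = np.kron(H, I) @ (CNOT @ state)      # Bob decodes: CNOT then H
        decoded = format(int(np.argmax(np.abs(state) ** 2)), "02b")
        print(bits, "->", decoded)                  # each pair decodes correctly
    ```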

  20. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to run advanced hydrological models and simulations. Because the system is web based, users can start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
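
    Server-side, the queue the abstract describes can be reduced to a small relational table; the sketch below uses SQLite, with invented table and column names purely for illustration (the actual platform's schema is not given).

    ```python
    # Minimal sketch of a relational work queue for volunteer nodes;
    # table and column names are invented for illustration.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE tasks (
        id INTEGER PRIMARY KEY, subbasin TEXT, status TEXT DEFAULT 'pending',
        result REAL)""")
    db.executemany("INSERT INTO tasks (subbasin) VALUES (?)",
                   [("upper-cedar",), ("lower-cedar",), ("iowa-river",)])

    def claim_task():
        """Hand the next pending piece of the simulation to a volunteer browser."""
        row = db.execute("SELECT id, subbasin FROM tasks "
                         "WHERE status = 'pending' LIMIT 1").fetchone()
        if row:
            db.execute("UPDATE tasks SET status = 'running' WHERE id = ?", (row[0],))
        return row

    def submit_result(task_id, value):
        db.execute("UPDATE tasks SET status = 'done', result = ? WHERE id = ?",
                   (value, task_id))

    task = claim_task()
    submit_result(task[0], 42.0)    # volunteer posts its computed value
    ```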

  1. Evaluating non-relational storage technology for HEP metadata and meta-data catalog

    NASA Astrophysics Data System (ADS)

    Grigorieva, M. A.; Golosova, M. V.; Gubin, M. Y.; Klimentov, A. A.; Osipova, V. V.; Ryabinkin, E. A.

    2016-10-01

    Large-scale scientific experiments produce vast volumes of data. These data are stored, processed, and analyzed in a distributed computing environment. The life cycle of an experiment is managed by specialized software such as Distributed Data Management and Workload Management Systems. In order to be interpreted and mined, experimental data must be accompanied by auxiliary metadata, which are recorded at each data processing step. Metadata describe scientific data and represent scientific objects or results of scientific experiments, allowing them to be shared by various applications, recorded in databases, or published via the Web. Processing and analysis of the constantly growing volume of auxiliary metadata is a challenging task, no simpler than the management and processing of the experimental data itself. Furthermore, metadata sources are often loosely coupled and can potentially lead to end-user inconsistency in combined information queries. To aggregate and synthesize a range of primary metadata sources, and to enhance them with flexible, schema-less additions of aggregated data, we are developing the Data Knowledge Base architecture serving as the intelligence behind GUIs and APIs.

  2. Fully distributed monitoring architecture supporting multiple trackees and trackers in indoor mobile asset management application.

    PubMed

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-03-21

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture that supports multiple trackees (such as mobile assets) and trackers in real time on top of the proposed distributed platform. In order to verify the suggested platform, scalability performance with increasing numbers of concurrent lookups was evaluated in a real test bed. Tracking latency and the traffic load ratio in the proposed tracking architecture were also evaluated.

  3. Knowledge Management

    NASA Technical Reports Server (NTRS)

    Shariq, Syed Z.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The emergence of rapidly expanding technologies for the distribution and dissemination of information and knowledge has brought into focus opportunities for the development of knowledge-based networks, knowledge dissemination and knowledge management technologies, and their potential applications for enhancing the productivity of knowledge work. The challenging and complex problems of the future can best be addressed by developing knowledge management as a new discipline based on an integrative synthesis of the hard and soft sciences. A knowledge management professional society can provide a framework for catalyzing the development of the proposed synthesis, as well as serve as a focal point for coordinating professional activities in the strategic areas of education, research, and technology development. Preliminary concepts for the development of the knowledge management discipline and the professional society are explored. Within this context, potential opportunities can be explored for applying information technologies to deliver or transfer information and knowledge more effectively (e.g., results from NASA's Mission to Planet Earth) for the development of policy options in critical areas of national and global importance (e.g., policy decisions in economic and environmental areas), particularly in those policy areas where a global collaborative knowledge network is likely to be critical to the acceptance of the policies.

  4. Effects of fertilizer on inorganic soil N in East Africa maize systems: vertical distributions and temporal dynamics.

    PubMed

    Tully, Katherine L; Hickman, Jonathan; McKenna, Madeline; Neill, Christopher; Palm, Cheryl A

    2016-09-01

    Fertilizer applications are poised to increase across sub-Saharan Africa (SSA), but the fate of added nitrogen (N) is largely unknown. We measured vertical distributions and temporal variations of soil inorganic N following fertilizer application in two maize (Zea mays L.)-growing regions of contrasting soil type. Fertilizer trials were established on a clayey soil in Yala, Kenya, and on a sandy soil in Tumbi, Tanzania, with application rates of 0-200 kg N/ha/yr. Soil profiles were collected (0-400 cm) annually (for three years in Yala and two years in Tumbi) to examine changes in inorganic N pools. Topsoils (0-15 cm) were collected every 3-6 weeks to determine how precipitation and fertilizer management influenced plant-available soil N. Fertilizer management altered soil inorganic N, and there were large differences between sites that were consistent with differences in soil texture. Initial soil N pools were larger in Yala than in Tumbi (240 vs. 79 kg/ha). Inorganic N pools did not change in Yala (277 kg/ha) but increased fourfold after cultivation and fertilization in Tumbi (371 kg/ha). Intra-annual variability in NO3−-N concentrations (3-33 μg/g) in Tumbi topsoils strongly suggested that the sandier soils were prone to high leaching losses. Information on soil inorganic N pools and their movement through soil profiles can help assess the vulnerability of SSA croplands to N losses and determine best fertilizer management practices as N application rates increase. A better understanding of the vertical and temporal patterns of soil N pools improves our ability to predict the potential environmental effects of a dramatic increase in fertilizer application rates that will accompany the intensification of African croplands. © 2016 by the Ecological Society of America.

  5. Onshore and Offshore Outsourcing with Agility: Lessons Learned

    NASA Astrophysics Data System (ADS)

    Kussmaul, Clifton

    This chapter reflects on a case study of an agile distributed project that ran for approximately three years (from spring 2003 to spring 2006). The project involved (a) a customer organization with key personnel distributed across the US, developing an application with rapidly changing requirements; (b) onshore consultants with expertise in project management, development processes, offshoring, and relevant technologies; and (c) an external offsite development team in a CMM-5 organization in southern India. This chapter is based on surveys of and discussions with multiple participants. The several years since the project was completed allow greater perspective on both its strengths and weaknesses, since the participants can reflect on the entire life of the project and compare it to subsequent experiences. Our findings emphasize the potential for agile project management in distributed software development, and the importance of people and interactions, of taking many small steps to find and correct errors, and of matching the structures of the project and product to support implementation of agility.

  6. Pacific Northwest GridWise™ Testbed Demonstration Projects; Part I. Olympic Peninsula Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammerstrom, Donald J.; Ambrosio, Ron; Carlon, Teresa A.

    2008-01-09

    This report describes the implementation and results of a field demonstration wherein residential electric water heaters and thermostats, commercial building space conditioning, municipal water pump loads, and several distributed generators were coordinated to manage constrained feeder electrical distribution through the two-way communication of load status and electric price signals. The field demonstration took place in Washington and Oregon and was paid for by the U.S. Department of Energy and several northwest utilities. Price is found to be an effective control signal for managing transmission or distribution congestion. Real-time signals at 5-minute intervals are shown to shift controlled load in time. The behaviors of customers and their responses under fixed, time-of-use, and real-time price contracts are compared. Peak loads are effectively reduced on the experimental feeder. A novel application of portfolio theory is applied to the selection of an optimal mix of customer contract types.

  7. A global distributed basin morphometric dataset

    NASA Astrophysics Data System (ADS)

    Shen, Xinyi; Anagnostou, Emmanouil N.; Mei, Yiwen; Hong, Yang

    2017-01-01

    Basin morphometry is vital information for relating storms to hydrologic hazards such as landslides and floods. In this paper we present the first comprehensive global dataset of distributed basin morphometry at 30 arc-second resolution. The dataset includes nine prime morphometric variables; in addition, we present formulas for generating twenty-one additional morphometric variables based on combinations of the prime variables. The dataset can aid different applications, including studies of land-atmosphere interaction and the modelling of floods and droughts for sustainable water management. The validity of the dataset has been consolidated by successfully reproducing Hack's law.
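
    For reference, Hack's law is the empirical power-law relation between main-stream length and drainage area; the exponent shown below is the commonly cited value, not necessarily the one fitted from this dataset.

    ```latex
    % Hack's law: main-stream length L grows as a power of drainage area A,
    % with empirical coefficient C and exponent h typically reported near 0.6.
    L = C \, A^{h}, \qquad h \approx 0.6
    ```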

  8. Application of LANDSAT to the management of Delaware's marine and wetland resources

    NASA Technical Reports Server (NTRS)

    Klemas, V.; Rogers, R. H.; Bartlett, D. S.; Davis, G.; Philpot, W. D.

    1977-01-01

    The author has identified the following significant results. LANDSAT data were found to be the best source of synoptic information on the distribution of horizontal water mass discontinuities (fronts) at different portions of the tidal cycle. Distributions observed were used to improve an oil slick movement prediction model for the Delaware Bay. LANDSAT data were used to monitor the movement and dispersion of industrial acid waste material dumped over the continental shelf. A technique for assessing aqueous sediment concentration with limited ground truth was proposed.

  9. Security and privacy issues of personal health.

    PubMed

    Blobel, Bernd; Pharow, Peter

    2007-01-01

    While health systems in developed countries, and increasingly also in developing countries, are moving from organisation-centred to person-centred health service delivery, the supporting communication and information technology is faced with new risks regarding the security and privacy of the stakeholders involved. The comprehensively distributed environment puts a special burden on guaranteeing communication security services, but even more on guaranteeing application security services dealing with privilege management, access control, and audit, given the social implications and the sensitivity of personal information recorded, processed, communicated, and stored in what may be an internationally distributed environment.

  10. Distribution Management System Volt/VAR Evaluation

    Science.gov Websites

    This NREL Grid Modernization project involves building a prototype distribution management system testbed that links a GE Grid Solutions distribution management system to power hardware-in-the-loop testing.

  11. Information Management of Web Application Based Environmental Performance Management in Concentrating Division of PTFI

    NASA Astrophysics Data System (ADS)

    Susanto, Arif; Mulyono, Nur Budi

    2018-02-01

    The change of environmental management system standards to the latest version, ISO 14001:2015, may change the data and information needed for decision making and for achieving objectives within the scope of an organization. Information management is the organization's responsibility to ensure effectiveness and efficiency from the creation, storage, and processing of information through to its distribution, in support of operations and effective decision making in environmental performance management. The objective of this research was to set up an information management program and to adopt supporting technology for it, as done by the PTFI Concentrating Division, so that the program aligns with the organization's objectives in environmental management based on the ISO 14001:2015 environmental management system standard. Materials and methods covered the technical aspects of information management, namely web-based application development using usage-centered design. The results showed that the use of Single Sign On made it easier for users to interact with the environmental management system. The web-based application was developed by creating an entity relationship diagram (ERD) and performing information extraction focused on attributes, keys, and the determination of constraints, with the ERD derived from the relational database schemas of a number of environmental performance databases in the Concentrating Division.

  12. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs which solve scientific and engineering applications in a distributed processing environment. With this solution, autonomous computers cooperate efficiently in solving large scientific problems. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs. The program manager learns from experience.
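
    The "learns from experience" idea can be sketched simply: keep a history of run times per program and host, and route future runs accordingly. This is an illustration of the concept under assumed names, not MOPPS's actual manager.

    ```python
    # Illustrative sketch (not MOPPS itself): a manager that remembers how long
    # each program took on each host and routes future runs to the best performer.
    from collections import defaultdict

    class ProgramManager:
        def __init__(self):
            self.history = defaultdict(list)      # (program, host) -> runtimes

        def record(self, program, host, seconds):
            self.history[(program, host)].append(seconds)

        def best_host(self, program, hosts):
            def avg(h):
                runs = self.history[(program, h)]
                return sum(runs) / len(runs)
            known = [h for h in hosts if self.history[(program, h)]]
            return min(known, key=avg) if known else hosts[0]  # explore first

    mgr = ProgramManager()
    mgr.record("cfd-solver", "node-a", 120.0)
    mgr.record("cfd-solver", "node-b", 85.0)
    print(mgr.best_host("cfd-solver", ["node-a", "node-b"]))   # -> node-b
    ```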

  13. Benchmarking distributed data warehouse solutions for storing genomic variant information

    PubMed Central

    Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.

    2017-01-01

    Genomic-based personalized medicine encompasses storing, analysing, and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not yet been sufficiently explored in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management System (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated content of genomic variants and phenotypic data. Next, we benchmarked the performance of a number of combinations of distributed storages and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed analytical database (MonetDB) was used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto, and Parquet with the Spark 2 query engine, provide on average the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees sub-second performance for simple genome range queries returning a small subset of data, where a low-latency response is expected, while still offering decent performance for analytical queries. In summary, research and clinical applications that require the storage and analysis of variants from thousands of samples can benefit from the scalability and performance of distributed data warehouse solutions. Database URL: https://github.com/ZSI-Bio/variantsdwh PMID:29220442
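
    As an illustration of the kinds of queries benchmarked, here is a hedged Spark-over-Parquet sketch; the file path and column names are assumptions for illustration, not the project's actual schema (see the Database URL for that).

    ```python
    # Sketch of a genome range query and an analytical query over Parquet with
    # Spark; path and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("variants-dwh").getOrCreate()
    variants = spark.read.parquet("hdfs:///warehouse/variants.parquet")

    # Simple genome range query: variants in a 1 Mb window on chromosome 1.
    window = variants.filter(
        (col("chromosome") == "1") & col("position").between(1_000_000, 2_000_000))
    print(window.count())

    # Analytical query: per-sample variant counts, favoured by denormalized layouts.
    window.groupBy("sample_id").count().show()
    ```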

  14. Integrating research tools to support the management of social-ecological systems under climate change

    USGS Publications Warehouse

    Miller, Brian W.; Morisette, Jeffrey T.

    2014-01-01

    Developing resource management strategies in the face of climate change is complicated by the considerable uncertainty associated with projections of climate and its impacts and by the complex interactions between social and ecological variables. The broad, interconnected nature of this challenge has resulted in calls for analytical frameworks that integrate research tools and can support natural resource management decision making in the face of uncertainty and complex interactions. We respond to this call by first reviewing three methods that have proven useful for climate change research, but whose application and development have been largely isolated: species distribution modeling, scenario planning, and simulation modeling. Species distribution models provide data-driven estimates of the future distributions of species of interest, but they face several limitations and their output alone is not sufficient to guide complex decisions for how best to manage resources given social and economic considerations along with dynamic and uncertain future conditions. Researchers and managers are increasingly exploring potential futures of social-ecological systems through scenario planning, but this process often lacks quantitative response modeling and validation procedures. Simulation models are well placed to provide added rigor to scenario planning because of their ability to reproduce complex system dynamics, but the scenarios and management options explored in simulations are often not developed by stakeholders, and there is not a clear consensus on how to include climate model outputs. We see these strengths and weaknesses as complementarities and offer an analytical framework for integrating these three tools. We then describe the ways in which this framework can help shift climate change research from useful to usable.

  15. Context-aware distributed cloud computing using CloudScheduler

    NASA Astrophysics Data System (ADS)

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest-running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest instance of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O applications on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
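
    Context-aware service selection can be as simple as probing candidate endpoints and picking the fastest; the sketch below illustrates the idea with plain TCP connect times (the hostnames and probing policy are illustrative assumptions, not the CloudScheduler implementation).

    ```python
    import socket
    import time

    # Hypothetical replica list; in the real system this information would come
    # from the contextual database fed by the workload and cloud monitors.
    REPLICAS = ["repo.site-a.example", "repo.site-b.example", "repo.site-c.example"]

    def probe(host: str, port: int = 80, timeout: float = 2.0) -> float:
        """Return the TCP connect time in seconds, or infinity on failure."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")

    def nearest(replicas):
        """Pick the replica with the lowest measured latency."""
        return min(replicas, key=probe)

    print(nearest(REPLICAS))
    ```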

  16. Fiber in the Local Loop: The Role of Electric Utilities

    NASA Astrophysics Data System (ADS)

    Meehan, Charles M.

    1990-01-01

    Electric utilities are beginning to make heavy use of fiber for a number of applications beyond the transmission of voice and data among operating centers and plant facilities, which employed fiber on the electric transmission systems. These additional uses include load management and automatic meter reading. Thus, utilities are beginning to place fiber on the electric distribution systems which, in many cases, cover the same customer base as the "local loop". This shift to fiber on the distribution system is due to the advantages offered by fiber and because of congestion in the radio bands used for load management. The shift has been facilitated by a regulatory policy permitting utilities to lease reserve capacity on their fiber systems on an unregulated basis. This, in turn, has interested electric utilities in building fiber to their residential and commercial customers for voice, data and video. This will also provide for sophisticated load management systems and, possibly, generation of revenue.

  17. A review of initial investigations to utilize ERTS-1 data in determining the availability and distribution of living marine resources. [harvest and management of fisheries resources in Mississippi Sound and Gulf waters

    NASA Technical Reports Server (NTRS)

    Stevenson, W. H.; Kemmerer, A. J.; Atwell, B. H.; Maughan, P. M.

    1974-01-01

    The National Marine Fisheries Service has been studying the application of aerospace remote sensing to fisheries management and utilization for many years. The 15-month ERTS study began in July 1972 to: (1) determine the reliability of satellite and high altitude sensors to provide oceanographic parameters in coastal waters; (2) demonstrate the use of remotely-sensed oceanographic information to predict the distribution and abundance of adult menhaden; and (3) demonstrate the potential use of satellites for acquiring information for improving the harvest and management of fisheries resources. The study focused on a coastal area in the north-central portion of the Gulf of Mexico, including parts of Alabama, Mississippi, and Louisiana. The test area used in the final analysis was the Mississippi Sound and the area outside the barrier islands to approximately the 18-meter (10-fathom) curve.

  18. Future needs and recommendations in the development of species sensitivity distributions: Estimating toxicity thresholds for aquatic ecological communities and assessing impacts of chemical exposures.

    PubMed

    Belanger, Scott; Barron, Mace; Craig, Peter; Dyer, Scott; Galay-Burgos, Malyka; Hamer, Mick; Marshall, Stuart; Posthuma, Leo; Raimondo, Sandy; Whitehouse, Paul

    2017-07-01

    A species sensitivity distribution (SSD) is a probability model of the variation of species sensitivities to a stressor, in particular chemical exposure. The SSD approach has been used as a decision support tool in environmental protection and management since the 1980s, and its ecotoxicological, statistical, and regulatory basis and applications continue to evolve. This article summarizes the findings of a 2014 workshop held by the European Centre for Toxicology and Ecotoxicology of Chemicals and the UK Environment Agency in Amsterdam, The Netherlands, on the ecological relevance, statistical basis, and regulatory applications of SSDs. An array of research recommendations categorized under the topical areas of use of SSDs, ecological considerations, guideline considerations, method development and validation, toxicity data, mechanistic understanding, and uncertainty was identified and prioritized. A rationale for the most critical research needs identified in the workshop is provided. The workshop reviewed the technical basis and historical development and application of SSDs, described approaches to estimating generic and scenario-specific SSD-based thresholds, evaluated the utility and application of SSDs as diagnostic tools, and presented new statistical approaches to formulate SSDs. Collectively, these address many of the research needs to expand and improve their application. The highest priority work, from a pragmatic regulatory point of view, is to develop guidance on best practices that could act as a basis for global harmonization and discussions regarding the SSD methodology and tools. Integr Environ Assess Manag 2017;13:664-674. © 2016 SETAC.
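
    As a worked example of the core SSD computation, the sketch below fits a log-normal SSD to placeholder toxicity values and derives the HC5 (the concentration expected to affect 5% of species); the input numbers are fabricated for illustration only.

    ```python
    import math
    import statistics

    # Placeholder species endpoints (e.g. EC50s in mg/L); illustration only.
    ec50s = [0.8, 1.5, 2.2, 3.9, 5.1, 8.4, 12.0, 20.5]

    logs = [math.log10(x) for x in ec50s]
    mu = statistics.mean(logs)
    sigma = statistics.stdev(logs)

    # HC5 under a log-normal SSD: the 5th percentile of the fitted distribution.
    z05 = -1.6449  # 5% quantile of the standard normal distribution
    hc5 = 10 ** (mu + z05 * sigma)
    print(f"HC5 ~ {hc5:.3g} mg/L")
    ```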

  19. Node Resource Manager: A Distributed Computing Software Framework Used for Solving Geophysical Problems

    NASA Astrophysics Data System (ADS)

    Lawry, B. J.; Encarnacao, A.; Hipp, J. R.; Chang, M.; Young, C. J.

    2011-12-01

    With the rapid growth of multi-core computing hardware, it is now possible for scientific researchers to run complex, computationally intensive software on affordable, in-house commodity hardware. Multi-core CPUs (Central Processing Units) and GPUs (Graphics Processing Units) are now commonplace in desktops and servers. Developers today have access to extremely powerful hardware that enables the execution of software that could previously only be run on expensive, massively parallel systems. It is no longer cost-prohibitive for an institution to build a parallel computing cluster consisting of commodity multi-core servers. In recent years, our research team has developed a distributed, multi-core computing system and used it to construct global 3D earth models using seismic tomography. Traditionally, computational limitations forced certain assumptions and shortcuts in the calculation of tomographic models; however, with the recent rapid growth in computational hardware including faster CPUs, increased RAM, and the development of multi-core computers, we are now able to perform seismic tomography, 3D ray tracing and seismic event location using distributed parallel algorithms running on commodity hardware, thereby eliminating the need for many of these shortcuts. We describe Node Resource Manager (NRM), a system we developed that leverages the capabilities of a parallel computing cluster. NRM is a software-based parallel computing management framework that works in tandem with the Java Parallel Processing Framework (JPPF, http://www.jppf.org/), a third-party library that provides a flexible and innovative way to take advantage of modern multi-core hardware. NRM enables multiple applications to use and share a common set of networked computers, regardless of their hardware platform or operating system. Using NRM, algorithms can be parallelized to run on multiple processing cores of a distributed computing cluster of servers and desktops, which results in a dramatic speedup in execution time. NRM is sufficiently generic to support applications in any domain, as long as the application is parallelizable (i.e., can be subdivided into multiple individual processing tasks). At present, NRM has been effective in decreasing the overall runtime of several algorithms: 1) the generation of a global 3D model of the compressional velocity distribution in the Earth using tomographic inversion, 2) the calculation of the model resolution matrix, model covariance matrix, and travel time uncertainty for the aforementioned velocity model, and 3) the correlation of waveforms with archival data on a massive scale for seismic event detection. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
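
    NRM itself is Java-based and works with JPPF, but the precondition it states, that the application be divisible into independent tasks, is language-neutral; the sketch below shows that decomposition pattern with a stand-in task (the function and task contents are hypothetical).

    ```python
    from concurrent.futures import ProcessPoolExecutor

    def trace_ray(task):
        """Stand-in for one independent unit of work, e.g. tracing one ray
        through a 3D velocity model (hypothetical placeholder)."""
        source, receiver = task
        return (source, receiver, abs(source - receiver))  # dummy result

    # Subdivide the problem into many independent tasks...
    tasks = [(s, r) for s in range(4) for r in range(100)]

    if __name__ == "__main__":
        # ...and farm them out; the pool stands in for the networked cluster.
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(trace_ray, tasks))
        print(len(results), "tasks completed")
    ```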

  20. 75 FR 30746 - Proposed Revocation and Establishment of Class E Airspace; Northeast, AK

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-02

    ... Management System Office (see ADDRESSES section for address and phone number) between 9 a.m. and 5 p.m... Distribution System, which describes the application procedure. The Proposal This action proposes to amend.... * * * * * AAL AK E6 Barter Island, AK [Removed] * * * * * AAL AK E6 Mentasta Lake/Mountains Area, AK [Removed...

  1. A Geographic-Information-Systems-Based Approach to Analysis of Characteristics Predicting Student Persistence and Graduation

    ERIC Educational Resources Information Center

    Ousley, Chris

    2010-01-01

    This study sought to provide empirical evidence regarding the use of spatial analysis in enrollment management to predict persistence and graduation. The research utilized data from the 2000 U.S. Census and applicant records from The University of Arizona to study the spatial distributions of enrollments. Based on the initial results, stepwise…

  2. 78 FR 65339 - Agency Information Collection Activities; Submission for Office of Management and Budget Review...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... the applicant for marketing a particular medical device. A class III device that fails to meet PMA..., devices that were in commercial distribution before May 28, 1976, are not required to submit a PMA until... is labor-intensive to compile and complete; the remaining PMAs require minimal information. Based on...

  3. Data Aggregation in Multi-Agent Systems in the Presence of Hybrid Faults

    ERIC Educational Resources Information Center

    Srinivasan, Satish Mahadevan

    2010-01-01

    Data Aggregation (DA) is a set of functions that provide components of a distributed system access to global information for purposes of network management and user services. With the diverse new capabilities that networks can provide, applicability of DA is growing. DA is useful in dealing with multi-value domain information and often requires…

  4. Water leakage management by district metered areas at water distribution networks.

    PubMed

    Özdemir, Özgür

    2018-03-01

    The aim of this study is to design a district metered area (DMA) at a water distribution network (WDN) for the determination and reduction of water losses in the city of Malatya, Turkey. In the application area, a pilot DMA zone was built by analyzing the existing WDN, topographic map, length of pipes, number of customers, service connections, and valves. In the DMA, the International Water Association standard water balance was calculated considering inflow rates and billing records. The ratio of water losses in DMAs was determined as 82%. Moreover, 3124 water meters belonging to 2805 customers were examined, and 50% of the water meters were found to be faulty. This study revealed that DMA application is useful for determining the water loss rate in WDNs and for identifying a cost-effective leakage reduction program.
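
    The headline figure follows from a simple balance of metered inflow against billed consumption; the sketch below reproduces the arithmetic with made-up volumes (the study's 82% is the ratio of losses to system input).

    ```python
    # Minimal IWA-style water balance for one DMA; the volumes are invented
    # for illustration and are not the study's measurements.
    system_input = 1000.0  # m3/day metered into the DMA
    billed = 180.0         # m3/day of billed authorized consumption

    water_losses = system_input - billed
    loss_ratio = water_losses / system_input
    print(f"Water losses: {water_losses:.0f} m3/day ({loss_ratio:.0%})")
    ```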

  5. Flight dynamics software in a distributed network environment

    NASA Technical Reports Server (NTRS)

    Jeletic, J.; Weidow, D.; Boland, D.

    1995-01-01

    As with all NASA facilities, the announcement of reduced budgets, reduced staffing, and the desire to implement smaller/quicker/cheaper missions has required the Agency's organizations to become more efficient in what they do. To accomplish these objectives, the Flight Dynamics Division (FDD) has initiated the development of the Flight Dynamics Distributed System (FDDS). The underlying philosophy of FDDS is to build an integrated system that breaks down the traditional barriers between attitude, mission planning, and navigation support software to provide a uniform approach to flight dynamics applications. Through the application of open systems concepts and state-of-the-art technologies, including object-oriented specification concepts, object-oriented software, and common user interface, communications, data management, and executive services, the FDD will reengineer most of its six million lines of code.

  6. An Experimental Framework for Executing Applications in Dynamic Grid Environments

    NASA Technical Reports Server (NTRS)

    Huedo, Eduardo; Montero, Ruben S.; Llorente, Ignacio M.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The Grid opens up opportunities for resource-starved scientists and engineers to harness highly distributed computing resources. A number of Grid middleware projects are currently available to support the simultaneous exploitation of heterogeneous resources distributed across different administrative domains. However, efficient job submission and management continue to be far from accessible to ordinary scientists and engineers due to the dynamic and complex nature of the Grid. This report describes a new Globus framework that allows an easier and more efficient execution of jobs in a 'submit and forget' fashion. Adaptation to dynamic Grid conditions is achieved by supporting automatic application migration following performance degradation, 'better' resource discovery, requirement change, owner decision or remote resource failure. The report also includes experimental results of the behavior of our framework on the TRGP testbed.
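
    The "submit and forget" behaviour reduces to a monitor-and-migrate loop around each job; the schematic below captures that control flow (all function names, thresholds and states are illustrative assumptions, not the framework's Globus interfaces).

    ```python
    import random
    import time

    def discover_resources():
        return ["host-a", "host-b", "host-c"]  # stand-in for Grid resource discovery

    def submit(job, host):
        print(f"submitted {job} to {host}")

    def performance_degraded(job):
        return random.random() < 0.2  # placeholder for real performance monitoring

    def migrate(job):
        """Rediscover resources and resubmit, as on degradation or failure."""
        host = random.choice(discover_resources())
        print(f"migrating {job} to {host}")
        submit(job, host)

    job = "simulation-42"
    submit(job, discover_resources()[0])
    for _ in range(5):          # monitoring loop; event-driven in a real system
        time.sleep(0.1)
        if performance_degraded(job):
            migrate(job)
    ```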

  7. Development of an Excel-based laboratory information management system for improving workflow efficiencies in early ADME screening.

    PubMed

    Lu, Xinyan

    2016-01-01

    There is a clear requirement for enhancing laboratory information management during early absorption, distribution, metabolism and excretion (ADME) screening. The application of a commercial laboratory information management system (LIMS) is limited by complexity, insufficient flexibility, high costs and extended timelines. An improved custom in-house LIMS for ADME screening was developed using Excel. All Excel templates were generated through macros and formulae, and information flow was streamlined as much as possible. This system has been successfully applied in task generation, process control and data management, with a reduction in both labor time and human error rates. An Excel-based LIMS can provide a simple, flexible and cost/time-saving solution for improving workflow efficiencies in early ADME screening.
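
    The paper's system is built from native Excel macros and formulae; purely as an illustration of the template-generation idea, the sketch below builds a comparable task sheet programmatically (the sheet, column and file names are assumptions, and openpyxl is a third-party package, not the paper's tooling).

    ```python
    from openpyxl import Workbook  # third-party: pip install openpyxl

    # Toy ADME task-tracking template in the spirit of the paper's templates.
    wb = Workbook()
    ws = wb.active
    ws.title = "ADME_tasks"
    ws.append(["Compound", "Assay", "Plate", "Status"])
    ws.append(["CMPD-001", "microsomal stability", "P1", "queued"])

    # The paper's workflow control relies on formulae; one example counter:
    ws["F1"] = "Open tasks"
    ws["F2"] = '=COUNTIF(D:D,"queued")'

    wb.save("adme_lims_template.xlsx")
    ```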

  8. JobCenter: an open source, cross-platform, and distributed job queue management system optimized for scalability and versatility

    PubMed Central

    2012-01-01

    Background Laboratories engaged in computational biology or bioinformatics frequently need to run lengthy, multistep, and user-driven computational jobs. Each job can tie up a computer for a few minutes to several days, and many laboratories lack the expertise or resources to build and maintain a dedicated computer cluster. Results JobCenter is a client–server application and framework for job management and distributed job execution. The client and server components are both written in Java and are cross-platform and relatively easy to install. All communication with the server is client-driven, which allows worker nodes to run anywhere (even behind external firewalls or “in the cloud”) and provides inherent load balancing. Adding a worker node to the worker pool is as simple as dropping the JobCenter client files onto any computer and performing basic configuration, which provides tremendous ease-of-use, flexibility, and limitless horizontal scalability. Each worker installation may be independently configured, including the types of jobs it is able to run. Executed jobs may be written in any language and may include multistep workflows. Conclusions JobCenter is a versatile and scalable distributed job management system that allows laboratories to very efficiently distribute all computational work among available resources. JobCenter is freely available at http://code.google.com/p/jobcenter/. PMID:22846423
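
    Because all communication is client-driven, a worker is essentially a polling loop; the sketch below shows that pattern (the endpoint URL and plain-text protocol are hypothetical placeholders, not JobCenter's actual wire format).

    ```python
    import time
    import urllib.request

    SERVER = "http://jobcenter.example.org/next-job"  # hypothetical endpoint

    def fetch_job():
        """Ask the server for work; workers behind firewalls can still poll out."""
        try:
            with urllib.request.urlopen(SERVER, timeout=10) as resp:
                return resp.read().decode() or None
        except OSError:
            return None

    while True:
        job = fetch_job()
        if job is None:
            time.sleep(30)      # idle poll; load balancing is inherent because
            continue            # busy workers simply do not ask for more work
        print("running", job)   # execute the job's (possibly multistep) workflow
    ```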

  9. Access Control based on Attribute Certificates for Medical Intranet Applications

    PubMed Central

    Georgiadis, Christos; Pangalos, George; Khair, Marie

    2001-01-01

    Background Clinical information systems frequently use intranet and Internet technologies. However these technologies have emphasized sharing and not security, despite the sensitive and private nature of much health information. Digital certificates (electronic documents which recognize an entity or its attributes) can be used to control access in clinical intranet applications. Objectives To outline the need for access control in distributed clinical database systems, to describe the use of digital certificates and security policies, and to propose the architecture for a system using digital certificates, cryptography and security policy to control access to clinical intranet applications. Methods We have previously developed a security policy, DIMEDAC (Distributed Medical Database Access Control), which is compatible with emerging public key and privilege management infrastructure. In our implementation approach we propose the use of digital certificates, to be used in conjunction with DIMEDAC. Results Our proposed access control system consists of two phases: the ways users gain their security credentials; and how these credentials are used to access medical data. Three types of digital certificates are used: identity certificates for authentication; attribute certificates for authorization; and access-rule certificates for propagation of access control policy. Once a user is identified and authenticated, subsequent access decisions are based on a combination of identity and attribute certificates, with access-rule certificates providing the policy framework. Conclusions Access control in clinical intranet applications can be successfully and securely managed through the use of digital certificates and the DIMEDAC security policy. PMID:11720951
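
    The three-certificate split maps naturally onto a small decision function: authenticate with the identity certificate, read the role from the attribute certificate, and consult the policy carried by access-rule certificates. The sketch below is a toy model of that flow; all names, roles and rules are invented, and real X.509 parsing and validation are omitted.

    ```python
    from dataclasses import dataclass

    @dataclass
    class IdentityCert:
        subject: str            # who the user is (authentication)

    @dataclass
    class AttributeCert:
        subject: str
        role: str               # what the user may act as (authorization)

    # Stand-in for policy distributed via access-rule certificates.
    ACCESS_RULES = {
        ("physician", "clinical_record"): {"read", "write"},
        ("nurse", "clinical_record"): {"read"},
    }

    def decide(ident: IdentityCert, attr: AttributeCert, resource: str, op: str) -> bool:
        if ident.subject != attr.subject:   # both certs must bind the same entity
            return False
        return op in ACCESS_RULES.get((attr.role, resource), set())

    print(decide(IdentityCert("alice"), AttributeCert("alice", "nurse"),
                 "clinical_record", "write"))   # -> False
    ```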

  10. NASA Remote Sensing Data in Earth Sciences: Processing, Archiving, Distribution, Applications at the GES DISC

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory G.

    2005-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Distributed Active Archive Centers (DAACs) archiving and distributing remote sensing data from NASA's Earth Observing System. In addition to providing data, the GES DISC/DAAC has developed various value-adding processing services. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the users' algorithms. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools (to avoid unnecessarily downloading all of the data). The existing data management infrastructure at the GES DISC supports a wide spectrum of options: from subsetting data spatially and/or by parameter to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. Shifting the processing and data management burden from users to the GES DISC allows scientists to concentrate on science, while the GES DISC handles the data management and data processing at a lower cost. Several examples of successful partnerships with scientists in the area of data processing and mining are presented.

  11. DREAM: Distributed Resources for the Earth System Grid Federation (ESGF) Advanced Management

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2015-12-01

    The data associated with climate research is often generated, accessed, stored, and analyzed on a mix of unique platforms. The volume, variety, velocity, and veracity of this data create unique challenges as climate research attempts to move beyond stand-alone platforms to a system that truly integrates dispersed resources. Today, sharing data across multiple facilities is often a challenge due to the large variance in supporting infrastructures. This results in data being accessed and downloaded many times, which requires significant amounts of resources, places a heavy analytic development burden on the end users, and leads to mismanaged resources. Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) has begun to solve this problem. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. However, significant challenges remain, including workflow provenance, modular and flexible deployment, scalability across a diverse set of computational resources, and more. Expanding on the existing ESGF, the Distributed Resources for the Earth System Grid Federation Advanced Management (DREAM) project will ensure that the access, storage, movement, and analysis of the large quantities of data that are processed and produced by diverse science projects can be dynamically distributed with proper resource management. This system will enable data from a wide range of diverse sources to be organized and accessed from anywhere on any device (including mobile platforms). The approach offers a powerful roadmap for the creation and integration of a unified knowledge base of an entire ecosystem, including its many geophysical, geographical, social, political, agricultural, energy, transportation, and cyber aspects. The resulting aggregation of data combined with analytics services has the potential to generate an informational universe and knowledge system of unprecedented size and value to the scientific community, downstream applications, decision makers, and the public.

  12. Regional Management Units for Marine Turtles: A Novel Framework for Prioritizing Conservation and Research across Multiple Scales

    PubMed Central

    Wallace, Bryan P.; DiMatteo, Andrew D.; Hurley, Brendan J.; Finkbeiner, Elena M.; Bolten, Alan B.; Chaloupka, Milani Y.; Hutchinson, Brian J.; Abreu-Grobois, F. Alberto; Amorocho, Diego; Bjorndal, Karen A.; Bourjea, Jerome; Bowen, Brian W.; Dueñas, Raquel Briseño; Casale, Paolo; Choudhury, B. C.; Costa, Alice; Dutton, Peter H.; Fallabrino, Alejandro; Girard, Alexandre; Girondot, Marc; Godfrey, Matthew H.; Hamann, Mark; López-Mendilaharsu, Milagros; Marcovaldi, Maria Angela; Mortimer, Jeanne A.; Musick, John A.; Nel, Ronel; Pilcher, Nicolas J.; Seminoff, Jeffrey A.; Troëng, Sebastian; Witherington, Blair; Mast, Roderic B.

    2010-01-01

    Background Resolving threats to widely distributed marine megafauna requires definition of the geographic distributions of both the threats as well as the population unit(s) of interest. In turn, because individual threats can operate on varying spatial scales, their impacts can affect different segments of a population of the same species. Therefore, integration of multiple tools and techniques — including site-based monitoring, genetic analyses, mark-recapture studies and telemetry — can facilitate robust definitions of population segments at multiple biological and spatial scales to address different management and research challenges. Methodology/Principal Findings To address these issues for marine turtles, we collated all available studies on marine turtle biogeography, including nesting sites, population abundances and trends, population genetics, and satellite telemetry. We georeferenced this information to generate separate layers for nesting sites, genetic stocks, and core distributions of population segments of all marine turtle species. We then spatially integrated this information from fine- to coarse-spatial scales to develop nested envelope models, or Regional Management Units (RMUs), for marine turtles globally. Conclusions/Significance The RMU framework is a solution to the challenge of how to organize marine turtles into units of protection above the level of nesting populations, but below the level of species, within regional entities that might be on independent evolutionary trajectories. Among many potential applications, RMUs provide a framework for identifying data gaps, assessing high diversity areas for multiple species and genetic stocks, and evaluating conservation status of marine turtles. Furthermore, RMUs allow for identification of geographic barriers to gene flow, and can provide valuable guidance to marine spatial planning initiatives that integrate spatial distributions of protected species and human activities. In addition, the RMU framework — including maps and supporting metadata — will be an iterative, user-driven tool made publicly available in an online application for comments, improvements, download and analysis. PMID:21253007

  13. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework, and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project result in significantly enhancing the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  14. Transforming for Distribution Based Logistics

    DTIC Science & Technology

    2005-05-26

    distribution process, and extracts elements of distribution and distribution management . Finally characteristics of an effective Army distribution...eventually evolve into a Distribution Management Element. Each organization is examined based on their ability to provide centralized command, with an...distribution and distribution management that together form the distribution system. Clearly all of the physical distribution activities including

  15. Advanced Distribution Management Systems | Grid Modernization | NREL

    Science.gov Websites

    Electric utilities are investing in updated grid technologies such as advanced distribution management systems.

  16. Planned versus actual outcomes as a result of animal feeding operation decisions for managing phosphorus.

    PubMed

    Cabot, Perry E; Nowak, Pete

    2005-01-01

    The paper explores how decisions made on animal feeding operations (AFOs) influence the management of manure and phosphorus. Variability among these decisions from operation to operation and from field to field can influence the validity of nutrient loss risk assessments. These assessments are based on the assumption that decision outcomes regarding manure distribution will occur as planned. The discrepancy between planned versus actual outcomes in phosphorus management was explored on nine AFOs managing a contiguous set of 210 fields in south-central Wisconsin. A total of 2611 soil samples were collected and multiple interviews conducted to assign phosphorus index (PI) ratings to the fields. Spearman's rank correlation coefficients (r(S)) indicated that PI ratings were less sensitive to soil test phosphorus (STP) levels (r(S) = 0.378), universal soil loss equation (USLE) (r(S) = 0.261), ratings for chemical fertilizer application (r(S) = 0.185), and runoff class (r(S) = -0.089), and more sensitive to ratings for manure application (r(S) = 0.854). One-way ANOVA indicated that mean field STP levels were more homogeneous than field PI ratings between AFOs. Kolmogorov-Smirnov (K-S) tests displayed several nonsignificant comparisons for cumulative distribution functions, S(x), of mean STP levels on AFO fields. On the other hand, the K-S tests of S(x) for PI ratings indicated that the majority of these S(x) functions were significantly different between AFOs at or greater than the 0.05 significance level. Interviews suggested multiple reasons for divergence between planned and actual outcomes in managing phosphorus, and that this divergence arises at the strategic, tactical, and operational levels of decision-making.

  17. Federal Emergency Management Information System (FEMIS) system administration guide, version 1.4.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arp, J.A.; Burnett, R.A.; Carter, R.J.

    The Federal Emergency Management Information System (FEMIS) is an emergency management planning and response tool that was developed by the Pacific Northwest National Laboratory (PNNL) under the direction of the US Army Chemical Biological Defense Command. The FEMIS System Administration Guide provides information necessary for the system administrator to maintain the FEMIS system. The FEMIS system is designed for a single Chemical Stockpile Emergency Preparedness Program (CSEPP) site that has multiple Emergency Operations Centers (EOCs). Each EOC has personal computers (PCs) that emergency planners and operations personnel use to do their jobs. These PCs are connected via a local area network (LAN) to servers that provide EOC-wide services. Each EOC is interconnected to other EOCs via a Wide Area Network (WAN). Thus, FEMIS is an integrated software product that resides on a client/server computer architecture. The main body of FEMIS software, referred to as the FEMIS Application Software, resides on the PC client(s) and is directly accessible to emergency management personnel. The remainder of the FEMIS software, referred to as the FEMIS Support Software, resides on the UNIX server. The Support Software provides the communication, data distribution, and notification functionality necessary to operate FEMIS in a networked, client/server environment. The UNIX server provides Oracle relational database management system (RDBMS) services, ARC/INFO GIS (optional) capabilities, and basic file management services. PNNL-developed utilities that reside on the server include the Notification Service, the Command Service that executes the evacuation model, and AutoRecovery. To operate FEMIS, the Application Software must have access to a site-specific FEMIS emergency management database. Data that pertains to an individual EOC's jurisdiction is stored on the EOC's local server. Information that needs to be accessible to all EOCs is automatically distributed by the FEMIS database to the other EOCs at the site.

  18. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project aims to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
    We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
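
    At its core, executing such a workflow means dispatching tasks in dependency order over a DAG; the sketch below shows that skeleton with Python's standard topological sorter (the task names are invented, and a real engine would dispatch ready tasks to distributed resources in parallel rather than serially).

    ```python
    from graphlib import TopologicalSorter  # Python 3.9+

    # Toy workflow DAG: each task maps to the set of tasks it depends on.
    dag = {
        "acquire": set(),
        "filter": {"acquire"},
        "synthesize": {"filter"},
        "visualize": {"synthesize"},
        "archive": {"acquire"},
    }

    for task in TopologicalSorter(dag).static_order():
        print("running", task)  # a real engine runs independent tasks concurrently
    ```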

  19. Survivable algorithms and redundancy management in NASA's distributed computing systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw

    1992-01-01

    The design of survivable algorithms requires a solid foundation for executing them. While hardware techniques for fault-tolerant computing are relatively well understood, fault-tolerant operating systems, as well as fault-tolerant applications (survivable algorithms), are, by contrast, little understood, and much more work in this field is required. We outline some of our work that contributes to the foundation of ultrareliable operating systems and fault-tolerant algorithm design. We introduce our consensus-based framework for fault-tolerant system design. This is followed by a description of a hierarchical partitioning method for efficient consensus. A scheduler for redundancy management is introduced, and application-specific fault tolerance is described. We give an overview of our hybrid algorithm technique, which is an alternative to the formal approach given.
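
    The simplest consensus primitive underlying such designs is majority voting over replicated results; the sketch below shows it in isolation (the replica values are placeholders, and the hierarchical partitioning described above is not reproduced here).

    ```python
    from collections import Counter

    def majority(values):
        """Return the value reported by a strict majority of replicas."""
        value, count = Counter(values).most_common(1)[0]
        if count > len(values) // 2:
            return value
        raise RuntimeError("no majority: too many faulty replicas")

    print(majority([42, 42, 41, 42, 42]))  # tolerates a minority of faulty replies
    ```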

  20. Adaptable data management for systems biology investigations

    PubMed Central

    Boyle, John; Rovira, Hector; Cavnor, Chris; Burdick, David; Killcoyne, Sarah; Shmulevich, Ilya

    2009-01-01

    Background Within research, each experiment is different: the focus changes, and the data are generated from a continually evolving barrage of technologies. There is a continual introduction of new techniques whose usage ranges from in-house protocols through to high-throughput instrumentation. To support these requirements, data management systems are needed that can be rapidly built and readily adapted for new usage. Results The adaptable data management system discussed is designed to support the seamless mining and analysis of biological experiment data that is commonly used in systems biology (e.g. ChIP-chip, gene expression, proteomics, imaging, flow cytometry). We use different content graphs to represent different views upon the data. These views are designed for different roles: equipment-specific views are used to gather instrumentation information; data-processing-oriented views are provided to enable the rapid development of analysis applications; and research-project-specific views are used to organize information for individual research experiments. This management system allows for both the rapid introduction of new types of information and the evolution of the knowledge it represents. Conclusion Data management is an important aspect of any research enterprise. It is the foundation on which most applications are built, and must be easily extended to serve new functionality for new scientific areas. We have found that adopting a three-tier architecture for data management, built around distributed standardized content repositories, allows us to rapidly develop new applications to support a diverse user community. PMID:19265554

  1. PEM public key certificate cache server

    NASA Astrophysics Data System (ADS)

    Cheung, T.

    1993-12-01

    Privacy Enhanced Mail (PEM) provides privacy enhancement services to users of Internet electronic mail. Confidentiality, authentication, message integrity, and non-repudiation of origin are provided by applying cryptographic measures to messages transferred between end systems by the Message Transfer System. PEM supports both symmetric and asymmetric key distribution. However, the prevalent implementation uses a public key certificate-based strategy, modeled after the X.509 directory authentication framework. This scheme provides an infrastructure compatible with X.509. According to RFC 1422, public key certificates can be stored in directory servers, transmitted via non-secure message exchanges, or distributed via other means. Directory services provide a specialized distributed database for OSI applications. The directory contains information about objects and provides structured mechanisms for accessing that information. Since directory services are not widely available now, a good approach is to manage certificates in a centralized certificate server. This document describes the detailed design of a centralized certificate cache server. This server manages a cache of certificates and a cache of Certificate Revocation Lists (CRLs) for PEM applications. PEM applications contact the server to obtain/store certificates and CRLs. The server software is programmed in C and ELROS. To use this server, ISODE has to be configured and installed properly. The ISODE library 'libisode.a' has to be linked together with this library because ELROS uses the transport layer functions provided by 'libisode.a'. The X.500 DAP library that is included with the ELROS distribution has to be linked in also, since the server uses the DAP library functions to communicate with directory servers.
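
    The server's core job, caching certificates and CRLs and answering lookups, can be summarized in a few lines; the toy cache below is only a schematic (the real server is written in C with ELROS and queries X.500 directories via DAP, none of which is modeled here).

    ```python
    import time

    class CertCache:
        """Toy in-memory certificate cache with per-entry expiry."""

        def __init__(self, ttl_seconds: int = 3600):
            self.ttl = ttl_seconds
            self._store = {}  # subject name -> (certificate bytes, expiry time)

        def put(self, subject: str, cert: bytes):
            self._store[subject] = (cert, time.time() + self.ttl)

        def get(self, subject: str):
            entry = self._store.get(subject)
            if entry is None:
                return None           # miss: the server would query the directory
            cert, expiry = entry
            if time.time() > expiry:  # stale: evict and treat as a miss
                del self._store[subject]
                return None
            return cert

    cache = CertCache()
    cache.put("cn=alice", b"-----BEGIN CERTIFICATE-----...")
    print(cache.get("cn=alice") is not None)  # -> True
    ```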

  2. Niches, models, and climate change: Assessing the assumptions and uncertainties

    PubMed Central

    Wiens, John A.; Stralberg, Diana; Jongsomjit, Dennis; Howell, Christine A.; Snyder, Mark A.

    2009-01-01

    As the rate and magnitude of climate change accelerate, understanding the consequences becomes increasingly important. Species distribution models (SDMs) based on current ecological niche constraints are used to project future species distributions. These models contain assumptions that add to the uncertainty in model projections stemming from the structure of the models, the algorithms used to translate niche associations into distributional probabilities, the quality and quantity of data, and mismatches between the scales of modeling and data. We illustrate the application of SDMs using two climate models and two distributional algorithms, together with information on distributional shifts in vegetation types, to project fine-scale future distributions of 60 California landbird species. Most species are projected to decrease in distribution by 2070. Changes in total species richness vary over the state, with large losses of species in some “hotspots” of vulnerability. Differences in distributional shifts among species will change species co-occurrences, creating spatial variation in similarities between current and future assemblages. We use these analyses to consider how assumptions can be addressed and uncertainties reduced. SDMs can provide a useful way to incorporate future conditions into conservation and management practices and decisions, but the uncertainties of model projections must be balanced with the risks of taking the wrong actions or the costs of inaction. Doing this will require that the sources and magnitudes of uncertainty are documented, and that conservationists and resource managers be willing to act despite the uncertainties. The alternative, of ignoring the future, is not an option. PMID:19822750

  3. A Development of Lightweight Grid Interface

    NASA Astrophysics Data System (ADS)

    Iwai, G.; Kawai, Y.; Sasaki, T.; Watase, Y.

    2011-12-01

    In order to support the rapid development of Grid/Cloud-aware applications, we have developed an API to abstract distributed computing infrastructures based on SAGA (A Simple API for Grid Applications). SAGA, which is standardized in the OGF (Open Grid Forum), defines API specifications for access to distributed computing infrastructures, such as Grid, Cloud and local computing resources. The Universal Grid API (UGAPI), which is a set of command line interfaces (CLIs) and APIs, aims to offer a simpler API combining several SAGA interfaces with richer functionalities. The UGAPI CLIs offer the typical functionalities required by end users for job management and file access across the different distributed computing infrastructures as well as local computing resources. We have also built a web interface for particle therapy simulation and demonstrated large-scale calculation using the different infrastructures at the same time. In this paper, we present how the web interface based on UGAPI and SAGA achieves more efficient utilization of computing resources over the different infrastructures, with technical details and practical experiences.
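
    The value of such an abstraction is that the same client code targets different infrastructures by changing only a resource URL; the sketch below mimics that usage style (the class and method names are illustrative stand-ins, not the published SAGA or UGAPI bindings).

    ```python
    class JobService:
        """Illustrative SAGA-style job service keyed by a backend URL."""

        def __init__(self, backend_url: str):
            self.backend = backend_url  # e.g. a local machine, a Grid CE, a cloud

        def submit(self, executable: str, args: list[str]) -> str:
            print(f"[{self.backend}] submit {executable} {' '.join(args)}")
            return "job-0001"           # opaque job identifier

    # Identical client code, three different infrastructures:
    for backend in ("fork://localhost", "grid://ce.example.org", "cloud://ec2"):
        JobService(backend).submit("/usr/bin/therapy_sim", ["--events", "1000000"])
    ```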

  4. Integrating network ecology with applied conservation: a synthesis and guide to implementation.

    PubMed

    Kaiser-Bunbury, Christopher N; Blüthgen, Nico

    2015-07-10

    Ecological networks are a useful tool to study the complexity of biotic interactions at a community level. Advances in the understanding of network patterns encourage the application of a network approach in other disciplines than theoretical ecology, such as biodiversity conservation. So far, however, practical applications have been meagre. Here we present a framework for network analysis to be harnessed to advance conservation management by using plant-pollinator networks and islands as model systems. Conservation practitioners require indicators to monitor and assess management effectiveness and validate overall conservation goals. By distinguishing between two network attributes, the 'diversity' and 'distribution' of interactions, on three hierarchical levels (species, guild/group and network) we identify seven quantitative metrics to describe changes in network patterns that have implications for conservation. Diversity metrics are partner diversity, vulnerability/generality, interaction diversity and interaction evenness, and distribution metrics are the specialization indices d' and [Formula: see text] and modularity. Distribution metrics account for sampling bias and may therefore be suitable indicators to detect human-induced changes to plant-pollinator communities, thus indirectly assessing the structural and functional robustness and integrity of ecosystems. We propose an implementation pathway that outlines the stages that are required to successfully embed a network approach in biodiversity conservation. Most importantly, only if conservation action and study design are aligned by practitioners and ecologists through joint experiments, are the findings of a conservation network approach equally beneficial for advancing adaptive management and ecological network theory. We list potential obstacles to the framework, highlight the shortfall in empirical, mostly experimental, network data and discuss possible solutions. Published by Oxford University Press on behalf of the Annals of Botany Company.
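
    Two of the listed diversity metrics are directly computable from a weighted interaction matrix; the sketch below computes Shannon interaction diversity and one common normalization of interaction evenness for a toy plant-pollinator web (the matrix values are invented, and normalization conventions vary).

    ```python
    import math

    # Toy plant (rows) x pollinator (columns) visitation matrix.
    web = [
        [10, 0, 3],
        [2, 5, 0],
        [0, 1, 4],
    ]

    total = sum(sum(row) for row in web)
    p = [cell / total for row in web for cell in row if cell > 0]

    H = -sum(pi * math.log(pi) for pi in p)            # interaction diversity
    evenness = H / math.log(len(web) * len(web[0]))    # normalized by possible links
    print(f"H = {H:.3f}, interaction evenness = {evenness:.3f}")
    ```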

  5. Making the Grid "Smart" Through "Smart" Microgrids: Real-Time Power Management of Microgrids with Multiple Distributed Generation Sources Using Intelligent Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nehrir, M. Hashem

    In this project we collaborated with two DOE National Laboratories, Pacific Northwest National Lab (PNNL) and Lawrence Berkeley National Lab (LBL). Dr. Hammerstrom of PNNL initially supported our project and was on the graduate committee of one of the Ph.D. students (graduated in 2014) who was supported by this project. He is also a committee member of a current graduate student of the PI who was supported by this project in the last two years (August 2014-July 2016). The graduate student is now supported by the Electrical and Computer Engineering (ECE) Department at Montana State University (MSU). Dr. Chris Marney of LBL provided actual load data, and the software WebOpt, developed at LBL for microgrid (MG) design, for our project. NEC-Labs America, a private industry, also supported our project, providing expert support and modest financial support. We also used the software "HOMER," originally developed at the National Renewable Energy Laboratory (NREL), with the most recent version made available to us by HOMER Energy, Inc., for MG (hybrid energy system) unit sizing. We compared the findings from WebOpt and HOMER and designed appropriately sized hybrid systems for our case studies. The objective of the project was to investigate real-time power management strategies for MGs using intelligent control, considering maximum feasible energy sustainability, reliability and efficiency while minimizing cost and undesired environmental impact (emissions). Through analytic and simulation studies, we evaluated the suitability of several heuristic and artificial-intelligence (AI)-based optimization techniques that had potential for real-time MG power management, including genetic algorithms (GA), ant colony optimization (ACO), particle swarm optimization (PSO), and multi-agent systems (MAS), which is based on the negotiation of smart software-based agents. We found that PSO and MAS, in particular distributed MAS, were more efficient and better suited for our work (a minimal PSO sketch appears after the list below). We investigated the following:
    • Intelligent load control - demand response (DR) - for frequency stabilization in islanded MGs (partially supported by PNNL).
    • The impact of high penetration of solar photovoltaic (PV)-generated power at the distribution level (partially supported by PNNL).
    • The application of AI approaches to renewable (wind, PV) power forecasting (proposed by the reviewers of our proposal).
    • Application of AI approaches and DR for real-time MG power management (partially supported by NEC Labs-America).
    • Application of DR in dealing with the variability of wind power.
    • Real-time MG power management using DR and storage (partially supported by NEC Labs-America).
    • Application of DR in enhancing the performance of the load-frequency controller.
    • MAS-based wholesale and retail power market design for the smart grid.
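
    Of the techniques evaluated, PSO is compact enough to sketch in full; the minimal optimizer below minimizes a placeholder one-dimensional cost standing in for a dispatch objective (the bounds, coefficients and cost function are illustrative, not the project's models).

    ```python
    import random

    def cost(x):
        return (x - 3.2) ** 2 + 1.0  # stand-in for a generation-cost model

    N, ITERS, LO, HI = 20, 50, 0.0, 10.0
    W, C1, C2 = 0.7, 1.5, 1.5        # inertia, cognitive and social weights

    pos = [random.uniform(LO, HI) for _ in range(N)]
    vel = [0.0] * N
    pbest = pos[:]                   # personal bests
    gbest = min(pos, key=cost)       # global best

    for _ in range(ITERS):
        for i in range(N):
            r1, r2 = random.random(), random.random()
            vel[i] = (W * vel[i]
                      + C1 * r1 * (pbest[i] - pos[i])
                      + C2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], LO), HI)  # respect the bounds
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i]
            if cost(pos[i]) < cost(gbest):
                gbest = pos[i]

    print(f"best setpoint ~ {gbest:.3f}, cost ~ {cost(gbest):.3f}")
    ```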

  6. WE-B-BRD-01: Innovation in Radiation Therapy Planning II: Cloud Computing in RT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, K; Kagadis, G; Xing, L

    As defined by the National Institute of Standards and Technology, cloud computing is “a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” Despite the omnipresent role of computers in radiotherapy, cloud computing has yet to achieve widespread adoption in clinical or research applications, though the transition to such “on-demand” access is underway. As this transition proceeds, new opportunities for aggregate studies and efficient use of computational resources are set against new challenges in patient privacy protection, data integrity, and management of clinical informatics systems. In this Session, current and future applications of cloud computing and distributed computational resources will be discussed in the context of medical imaging, radiotherapy research, and clinical radiation oncology applications. Learning Objectives: 1. Understand basic concepts of cloud computing. 2. Understand how cloud computing could be used for medical imaging applications. 3. Understand how cloud computing could be employed for radiotherapy research. 4. Understand how clinical radiotherapy software applications would function in the cloud.

  7. CAD-DRASTIC: chloride application density combined with DRASTIC for assessing groundwater vulnerability to road salt application

    NASA Astrophysics Data System (ADS)

    Salek, Mansour; Levison, Jana; Parker, Beth; Gharabaghi, Bahram

    2018-06-01

    Road salt is pervasively used throughout Canada and in other cold regions during winter. For cities relying exclusively on groundwater, it is important to plan and minimize the application of salt accordingly to mitigate the adverse effects of high chloride concentrations in water supply aquifers. The use of geospatial data (road network, land use, Quaternary and bedrock geology, average annual recharge, water-table depth, soil distribution, topography) in the DRASTIC methodology provides an efficient way of distinguishing salt-vulnerable areas associated with groundwater supply wells, to aid in the implementation of appropriate management practices for road salt application in urban areas. This research presents a GIS-based methodology to accomplish a vulnerability analysis for 12 municipal water supply wells within the City of Guelph, Ontario, Canada. The chloride application density (CAD) value at each supply well is calculated and related to the measured groundwater chloride concentrations and further combined with soil media and aquifer vadose- and saturated-zone properties used in DRASTIC. This combined approach, CAD-DRASTIC, is more accurate than existing groundwater vulnerability mapping methods and can be used by municipalities and other water managers to further improve groundwater protection related to road salt application.
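
    The DRASTIC side of the combination is a weighted sum of seven hydrogeologic ratings; the sketch below computes it for one grid cell using the standard weights (the ratings are made-up example values, and CAD-DRASTIC then combines the result with the chloride application density).

    ```python
    # Standard DRASTIC weights: Depth, Recharge, Aquifer media, Soil media,
    # Topography, Impact of vadose zone, Conductivity.
    WEIGHTS = {"D": 5, "R": 4, "A": 3, "S": 2, "T": 1, "I": 5, "C": 3}
    ratings = {"D": 7, "R": 6, "A": 8, "S": 6, "T": 9, "I": 8, "C": 6}  # example cell

    drastic = sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS)
    print("DRASTIC index:", drastic)  # combined with the CAD value at each well
    ```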

  8. Managing a tier-2 computer centre with a private cloud infrastructure

    NASA Astrophysics Data System (ADS)

    Bagnasco, Stefano; Berzano, Dario; Brunetti, Riccardo; Lusso, Stefano; Vallero, Sara

    2014-06-01

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (such as multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.

  9. The SysMan monitoring service and its management environment

    NASA Astrophysics Data System (ADS)

    Debski, Andrzej; Janas, Ekkehard

    1996-06-01

    Management of modern information systems is becoming more and more complex. There is a growing need for powerful, flexible and affordable management tools to assist system managers in maintaining such systems. It is at the same time evident that effective management should integrate network management, system management and application management in a uniform way. Object oriented OSI management architecture with its four basic modelling concepts (information, organization, communication and functional models) together with widely accepted distribution platforms such as ANSA/CORBA, constitutes a reliable and modern framework for the implementation of a management toolset. This paper focuses on the presentation of concepts and implementation results of an object oriented management toolset developed and implemented within the framework of the ESPRIT project 7026 SysMan. An overview is given of the implemented SysMan management services including the System Management Service, Monitoring Service, Network Management Service, Knowledge Service, Domain and Policy Service, and the User Interface. Special attention is paid to the Monitoring Service which incorporates the architectural key entity responsible for event management. Its architecture and building components, especially filters, are emphasized and presented in detail.

  10. Operational Logistics 2010.

    DTIC Science & Technology

    1997-04-02

    movements control center (MCC) which is co-located with a material management center (MMC) forming a distribution management center (DMC). The MMC...missions by a section in Support Operations called the Distribution Management Center (DMC)29. The DMC executes the distribution management (also...restructured organizations are the formula for making theater distribution a reality and the locus of these changes is the Distribution Management Center

  11. An efficient architecture for the integration of sensor and actuator networks into the future internet

    NASA Astrophysics Data System (ADS)

    Schneider, J.; Klein, A.; Mannweiler, C.; Schotten, H. D.

    2011-08-01

    In the future, sensors will enable a large variety of new services in different domains. Important application areas are service adaptations in fixed and mobile environments, ambient assisted living, home automation, traffic management, as well as management of smart grids. All these applications will share a common property: the usage of networked sensors and actuators. To ensure an efficient deployment of such sensor-actuator networks, concepts and frameworks for managing and distributing sensor data, as well as for triggering actuators, need to be developed. In this paper, we present an architecture for integrating sensors and actuators into the future Internet. In our concept, all sensors and actuators are connected via gateways to the Internet, which will be used as a comprehensive transport medium. Additionally, an entity is needed for registering all sensors and actuators and managing sensor data requests. We decided to use a hierarchical structure, comparable to the Domain Name Service. This approach realizes a cost-efficient architecture providing "plug and play" capabilities and accounting for privacy issues.
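
    The DNS-like registry can be pictured as a tree of naming zones in which a dotted sensor name is resolved level by level. The sketch below is a minimal, hypothetical illustration of that lookup, not the paper's implementation; all names and addresses are invented.

    ```python
    # DNS-style hierarchical registry: each node owns one naming zone and
    # either delegates to a child zone or answers from its own entries.
    class Registry:
        def __init__(self, zone: str):
            self.zone = zone
            self.children = {}   # zone label -> child Registry
            self.entries = {}    # sensor name -> gateway address

        def delegate(self, label: str) -> "Registry":
            return self.children.setdefault(label, Registry(label))

        def resolve(self, name: str) -> str:
            """Resolve 'zone.subzone.sensor' recursively, like a DNS lookup."""
            head, _, rest = name.partition(".")
            if rest:
                return self.children[head].resolve(rest)
            return self.entries[head]

    root = Registry("root")
    city = root.delegate("traffic").delegate("cityA")
    city.entries["loop17"] = "gw-3.cityA.example:5683"   # hypothetical gateway
    print(root.resolve("traffic.cityA.loop17"))
    ```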

  12. Issues and challenges in resource management and its interaction with levels 2/3 fusion with applications to real-world problems: an annotated perspective

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Kadar, Ivan; Hintz, Kenneth; Biermann, Joachim; Chong, Chee-Yee; Salerno, John; Das, Subrata

    2007-04-01

    Resource management (or process refinement) is critical for information fusion operations in that users, sensors, and platforms need to be informed, based on mission needs, on how to collect, process, and exploit data. To address these growing concerns, a panel session was conducted at the International Society of Information Fusion Conference in 2006 to discuss the various issues surrounding the interaction of resource management with Level 2/3 situation and threat assessment. This paper briefly consolidates the discussion of the invited panelists. The common themes include: (1) addressing the user in system management, sensor control, and knowledge-based information collection; (2) determining a standard set of fusion metrics for optimization and evaluation based on the application; (3) allowing dynamic and adaptive updating to deliver timely information needs and information rates; (4) optimizing the joint objective functions at all information fusion levels based on decision-theoretic analysis; (5) providing constraints from distributed resource mission planning and scheduling; and (6) defining L2/3 situation entity definitions for knowledge discovery, modeling, and information projection.

  13. National information infrastructure applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forslund, D.; George, J.; Greenfield, J.

    1996-07-01

    This is the final report of a two-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project sought to develop a telemedical application in which medical records are electronically searched and digital signatures of real CT scan data are indexed, used to characterize a range of diseases, and used to rapidly compare on-line medical data with archived clinical data. This system includes multimedia data management, interactive collaboration, data compression and transmission, remote data storage and retrieval, and automated data analysis integrated in a distributed application between Los Alamos and the National Jewish Hospital.

  14. Study and Application of Remote Data Moving Transmission under the Network Convergence

    NASA Astrophysics Data System (ADS)

    Zhiguo, Meng; Du, Zhou

    Data transmission is an important problem in remote applications, and advances in network convergence help in selecting and using a data transmission model. An embedded system and a data management platform are key to the design. A communication module, interface technology, and a transceiver with independent intellectual property rights connect the broadband network and the mobile network seamlessly. Using the distribution system of mobile base stations to realize wireless transmission and public networks to carry the data, the remote information system breaks through geographic restrictions and transmits moving data; the approach has been fully recognized in long-distance medical care applications.

  15. The distribution and dynamics of chromium and nickel in cultivated and uncultivated semi-arid soils from Nigeria.

    PubMed

    Agbenin, John O

    2002-12-02

    Growing concern about heavy metal contamination of agricultural lands under long-term application of inorganic fertilizers and organic wastes makes periodic risk assessment of heavy metal accumulation in arable lands imperative. As part of a much larger study to systematically document the status of heavy metals in savanna soils, this study investigated the distribution and dynamics of Cr and Ni in a savanna soil after 50 years of continuous cultivation and application of inorganic fertilizers and organic manures. The cultivated fields were fertilized with inorganic fertilizers (NPK), farmyard manure (FYM), or FYM+NPK for 50 years, and a control plot under continuous cultivation for 50 years received neither FYM nor NPK. Two uncultivated or natural sites were sampled as reference conditions for assessing the dynamics of Cr and Ni induced by cultivation and management practices. The distribution of Cr and Ni in the soil profiles exhibited eluvial-illuvial patterns. Sand and clay fractions explained between 62 and 90% of the variance in Cr and Ni concentration and distribution in the soil profiles. Mean Cr concentrations ranged from 17 to 59 mg kg(-1), while Ni varied from <1 mg kg(-1) in the topsoil to 16 mg kg(-1) in the subsoil. Mass balance calculations showed a loss of 10% Cr and 17% Ni in the FYM field, and approximately 4% Cr and 11% Ni in the NPK field, compared to the natural site after 50 years of cultivation. The control and FYM+NPK fields, however, had a positive balance of Cr and Ni. In general, it was concluded that existing soil management practices in this region are unlikely to lead to Cr and Ni build-up, probably because of the low rates of application of inorganic fertilizers, farmyard manure and other organic wastes to the soils.

  16. Adaptive and technology-independent architecture for fault-tolerant distributed AAL solutions.

    PubMed

    Schmidt, Michael; Obermaisser, Roman

    2018-04-01

    Today's architectures for Ambient Assisted Living (AAL) must cope with a variety of challenges like flawless sensor integration and time synchronization (e.g. for sensor data fusion) while abstracting from the underlying technologies at the same time. Furthermore, an architecture for AAL must be capable of managing distributed application scenarios in order to support elderly people in all situations of their everyday life. This encompasses not just life at home but in particular the mobility of elderly people (e.g. when going for a walk or playing sports) as well. Within this paper we introduce a novel architecture for distributed AAL solutions whose design follows a modern microservices approach by providing small core services instead of a monolithic application framework. The architecture comprises core services for sensor integration and service discovery while supporting several communication models (periodic, sporadic, streaming). We extend the state of the art by introducing a fault-tolerance model for our architecture on the basis of a fault hypothesis describing the fault-containment regions (FCRs) with their respective failure modes and failure rates, in order to support safety-critical AAL applications.

  17. Data management in an object-oriented distributed aircraft conceptual design environment

    NASA Astrophysics Data System (ADS)

    Lu, Zhijie

    In the competitive global market place, aerospace companies are forced to deliver the right products to the right market, with the right cost, and at the right time. However, the rapid development of technologies and new business opportunities, such as mergers, acquisitions, supply chain management, etc., have dramatically increased the complexity of designing an aircraft. Therefore, the pressure to reduce design cycle time and cost is enormous. One way to solve such a dilemma is to develop and apply advanced engineering environments (AEEs), which are distributed collaborative virtual design environments linking researchers, technologists, designers, etc., together by incorporating application tools and advanced computational, communications, and networking facilities. Aircraft conceptual design, as the first design stage, provides a major opportunity to compress design cycle time and is the cheapest place for making design changes. However, traditional aircraft conceptual design programs, which are monolithic programs, cannot provide satisfactory functionality to meet new design requirements due to the lack of domain flexibility and analysis scalability. Therefore, we are in need of a next-generation aircraft conceptual design environment (NextADE). To build the NextADE, the framework and the data management problem are the two major problems that need to be addressed at the forefront. Solving these two problems, particularly the data management problem, is the focus of this research. In this dissertation, in light of AEEs, a distributed object-oriented framework is first formulated and tested for the NextADE. In order to improve interoperability and simplify the integration of heterogeneous application tools, data management is one of the major problems that need to be tackled. To solve this problem, taking into account the characteristics of aircraft conceptual design data, a robust, extensible object-oriented data model is then proposed according to the distributed object-oriented framework. By overcoming the shortcomings of the traditional approach to modeling aircraft conceptual design data, this data model makes it possible to capture the specific detailed information of aircraft conceptual design without sacrificing generality, which is one of the most desired features of a data model for aircraft conceptual design. Based upon this data model, a prototype of the data management system, which is one of the fundamental building blocks of the NextADE, is implemented utilizing state-of-the-art information technologies. Using a general-purpose integration software package to demonstrate the efficacy of the proposed framework and the data management system, the NextADE is initially implemented by integrating the prototype of the data management system with other building blocks of the design environment, such as disciplinary analysis programs and mission analysis programs. As experiments, two case studies are conducted in the integrated design environment. One is based upon a simplified conceptual design of a notional conventional aircraft; the other is a simplified conceptual design of an unconventional aircraft. As a result of the experiments, the proposed framework and the data management approach are shown to be feasible solutions to the research problems.

  18. Earth-Base: A Free And Open Source, RESTful Earth Sciences Platform

    NASA Astrophysics Data System (ADS)

    Kishor, P.; Heim, N. A.; Peters, S. E.; McClennen, M.

    2012-12-01

    This presentation describes the motivation, concept, and architecture behind Earth-Base, a web-based, RESTful data-management, analysis and visualization platform for earth sciences data. Traditionally, web applications have been built by directly accessing data from a database using a scripting language. While such applications are great at bringing results to a wide audience, they are limited in scope to the imagination and capabilities of the application developer. Earth-Base decouples the data store from the web application by introducing an intermediate "data application" tier. The data application's job is to query the data store using self-documented, RESTful URIs, and send the results back formatted as JavaScript Object Notation (JSON). Decoupling the data store from the application allows virtually limitless flexibility in developing applications, whether web-based for human consumption or programmatic for machine consumption. It also allows outside developers to use the data in their own applications, potentially creating applications that the original data creator and app developer may not have even thought of. Standardized specifications for URI-based querying and JSON-formatted results make querying and developing applications easy. URI-based querying also allows utilizing distributed datasets easily. Companion mechanisms for querying data snapshots (time-travel), usage tracking and license management, and verification of semantic equivalence of data are also described. The latter promotes the "What You Expect Is What You Get" (WYEIWYG) principle that can aid in data citation and verification.
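
    The decoupled "data application" tier can be illustrated with a few lines of standard-library Python: a RESTful URI maps to a query and the result comes back as JSON. The route, data, and port below are hypothetical stand-ins for Earth-Base's actual services.

    ```python
    # Minimal sketch of a RESTful data application: /taxa?interval=<name>
    # queries an in-memory stand-in data store and returns JSON.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    FOSSILS = [  # stand-in for the real data store
        {"taxon": "Tyrannosaurus", "interval": "Maastrichtian"},
        {"taxon": "Dimetrodon", "interval": "Kungurian"},
    ]

    class DataApp(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/taxa"):
                _, _, query = self.path.partition("?")
                wanted = dict(p.split("=") for p in query.split("&") if "=" in p)
                rows = [r for r in FOSSILS
                        if r["interval"] == wanted.get("interval", r["interval"])]
                body = json.dumps(rows).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # e.g. GET http://localhost:8080/taxa?interval=Kungurian
        HTTPServer(("localhost", 8080), DataApp).serve_forever()
    ```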

  19. Impact of the social networking applications for health information management for patients and physicians.

    PubMed

    Sahama, Tony; Liang, Jian; Iannella, Renato

    2012-01-01

    Most social network users hold more than one social network account and utilize them in different ways depending on the digital context, for example, friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Many web users therefore need to manage disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time-consuming, and inefficient, and leads to lost opportunity. In this paper we propose a framework for multiple profile management of online social networks and showcase a demonstrator utilising an open source platform. The result of the research enables a user to create and manage an integrated profile and share/synchronise their profiles with their social networks. A number of use cases were created to capture the functional requirements and describe the interactions between users and the online services. An innovative application of this project is in public health informatics. We utilize the prototype to examine how the framework can benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer physicians a more comprehensive personal health overview of patients.

  20. MPEG-21 in broadcasting: the novel digital broadcast item model

    NASA Astrophysics Data System (ADS)

    Lugmayr, Artur R.; Touimi, Abdellatif B.; Kaneko, Itaru; Kim, Jong-Nam; Alberti, Claudio; Yona, Sadigurschi; Kim, Jaejoon; Andrade, Maria Teresa; Kalli, Seppo

    2004-05-01

    The MPEG experts are currently developing the MPEG-21 set of standards, which includes a framework and specifications for digital rights management (DRM), delivery of quality of service (QoS) over heterogeneous networks and terminals, packaging of multimedia content, and other things essential for the infrastructural aspects of multimedia content distribution. Considerable research effort is being applied to these new developments, and the capabilities of MPEG-21 technologies to address specific application areas are being investigated. One such application area is broadcasting, in particular the development of digital TV and its services. In more practical terms, digital TV addresses networking, events, channels, services, programs, signaling, encoding, bandwidth, conditional access, subscription, advertisements and interactivity. MPEG-21 provides an excellent framework of standards to be applied in digital TV applications. Within the scope of this research work we describe a new model based on MPEG-21 and its relevance to digital TV: the digital broadcast item model (DBIM). The goal of the DBIM is to elaborate the potential of MPEG-21 for digital TV applications. Within this paper we focus on a general description of the DBIM, quality of service (QoS) management and metadata filtering, and digital rights management, and also present use cases and scenarios where the DBIM's role is explored in detail.

  1. Fungicides affect Japanese beetle Popillia japonica (Coleoptera: Scarabaeidae) egg hatch, larval survival and detoxification enzymes.

    PubMed

    Obear, Glen R; Adesanya, Adekunle W; Liesch, Patrick J; Williamson, R Chris; Held, David W

    2016-05-01

    Larvae of the Japanese beetle, Popillia japonica (Coleoptera: Scarabaeidae), have a patchy distribution in soils, which complicates detection and management of this insect pest. Managed turf systems are frequently under pest pressure from fungal pathogens, necessitating frequent fungicide applications. It is possible that certain turfgrass fungicides may have lethal or sublethal adverse effects on eggs and larvae of P. japonica that inhabit managed turf systems. In this study, eggs and first-, second- and third-instar larvae were treated with the fungicides chlorothalonil and propiconazole, and survival was compared with that of untreated controls as well as positive controls treated with the insecticide trichlorfon. Chlorothalonil reduced survival of first-instar larvae treated directly and hatched from treated eggs. Propiconazole delayed egg hatch, reduced the proportion of eggs that successfully hatched and reduced survival of first-instar larvae treated directly and hatched from treated eggs. Sublethal doses of the fungicides lowered the activities of certain detoxification enzymes in third-instar grubs. Fungicide applications to turfgrass that coincide with oviposition and egg hatch of white grubs may have sublethal effects. This work is applicable both to high-maintenance turfgrass such as golf courses, where applications of pesticides are more frequent, and to home lawn services, where mixtures of multiple pesticides are commonly used.

  2. A multi-domain trust management model for supporting RFID applications of IoT

    PubMed Central

    Li, Feng

    2017-01-01

    The use of RFID technology in complex and distributed environments often leads to a multi-domain RFID system, in which trust establishment among entities from heterogeneous domains without past interaction or prior agreed policy is a challenge. The current trust management mechanisms in the literature do not meet the specific requirements of multi-domain RFID systems. Therefore, this paper analyzes the special challenges of trust management in multi-domain RFID systems and identifies their implications and requirements for trust management solutions. A multi-domain trust management model is proposed, which provides a hierarchical trust management framework that includes a variety of trust evaluation and establishment approaches. The simulation results and analysis show that the proposed method handles trust relationships effectively and achieves better security and a higher accuracy rate. PMID:28708855

  3. A multi-domain trust management model for supporting RFID applications of IoT.

    PubMed

    Wu, Xu; Li, Feng

    2017-01-01

    The use of RFID technology in complex and distributed environments often leads to a multi-domain RFID system, in which trust establishment among entities from heterogeneous domains without past interaction or prior agreed policy is a challenge. The current trust management mechanisms in the literature do not meet the specific requirements of multi-domain RFID systems. Therefore, this paper analyzes the special challenges of trust management in multi-domain RFID systems and identifies their implications and requirements for trust management solutions. A multi-domain trust management model is proposed, which provides a hierarchical trust management framework that includes a variety of trust evaluation and establishment approaches. The simulation results and analysis show that the proposed method handles trust relationships effectively and achieves better security and a higher accuracy rate.

  4. Industrial application of thermal image processing and thermal control

    NASA Astrophysics Data System (ADS)

    Kong, Lingxue

    2001-09-01

    Industrial applications of infrared thermography are virtually boundless, as it can be used in any situation where there are temperature differences. This technology has been particularly widely used in the automotive industry for process evaluation and system design. In this work, a thermal image processing technique is introduced to quantitatively calculate the heat stored in a warm/hot object and, consequently, a thermal control system is proposed to accurately and actively manage the thermal distribution within the object in accordance with the heat calculated from the thermal images.

  5. Study on the application of mobile internet cloud computing platform

    NASA Astrophysics Data System (ADS)

    Gong, Songchun; Fu, Songyin; Chen, Zheng

    2012-04-01

    The innovative development of computer technology has promoted the application of the cloud computing platform, which substitutes and exchanges resource service models to meet users' needs for different resources after changes and adjustments in multiple respects. Cloud computing offers advantages in many respects: it reduces the difficulty of operating the system and makes it easy for users to search, acquire and process resources. Accordingly, the author takes the management of digital libraries as the research focus of this paper and analyzes the key technologies of the mobile internet cloud computing platform in operation. The popularization and promotion of computer technology have driven the creation of digital library models, whose core idea is to strengthen the management of library resource information through computers and to construct a high-performance inquiry and search platform that allows users to access the necessary information resources at any time. Cloud computing, moreover, distributes computations across a large number of distributed computers and thereby implements the connection service of multiple computers. Digital libraries, as a typical representative of cloud computing applications, can be used to analyze the key technologies of cloud computing.

  6. Sequential sampling and biorational chemistries for management of lepidopteran pests of vegetable amaranth in the Caribbean.

    PubMed

    Clarke-Harris, Dionne; Fleischer, Shelby J

    2003-06-01

    Although the production and economic importance of vegetable amaranth, Amaranthus viridis L. and A. dubius Mart. ex Thell., are increasing in diversified peri-urban farms in Jamaica, lepidopteran herbivory is common even during weekly pyrethroid applications. We developed and validated a sampling plan, and investigated insecticides with new modes of action, for a complex of five species (Pyralidae: Spoladea recurvalis (F.), Herpetogramma bipunctalis (F.); Noctuidae: Spodoptera exigua (Hubner), S. frugiperda (J. E. Smith), and S. eridania Stoll). Significant within-plant variation occurred with H. bipunctalis, and a six-leaf sample unit including leaves from the inner and outer whorl was selected to sample all species. Larval counts best fit a negative binomial distribution. We developed a sequential sampling plan using a threshold of one larva per sample unit and the fitted distribution with a k(c) of 0.645. When compared with a fixed plan of 25 plants, sequential sampling recommended the same management decision on 87.5%, required additional samples on 9.4%, and gave inaccurate recommendations on 3.1% of the 32 farms, while reducing sample size by 46%. Insecticide application frequency was reduced by 33-60% when management decisions were based on sampled data rather than grower standards, with no effect on crop damage. Damage remained high or variable (10-46%) with pyrethroid applications. Lepidopteran control was dramatically improved with ecdysone agonists (tebufenozide) or microbial metabolites (spinosyns and emamectin benzoate). This work facilitates resistance management efforts concurrent with the introduction of newer modes of action for lepidopteran control in leafy vegetable production in the Caribbean.
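
    Sequential plans of this kind are typically built on Wald's sequential probability ratio test (SPRT) with negative binomial stop lines. The sketch below derives such stop lines for the paper's common k of 0.645; the hypothesized mean densities (m0, m1) bracketing the one-larva threshold and the error rates are illustrative assumptions, not the published plan.

    ```python
    # Wald SPRT stop lines for negative binomial counts with common k:
    # classify cumulative larval counts unit by unit, or keep sampling.
    import math

    def sprt_lines(m0, m1, k, alpha=0.1, beta=0.1):
        """Return (slope, lower intercept, upper intercept) of the decision
        lines, parameterizing NB by mean m and dispersion k (p = m/k)."""
        p0, p1 = m0 / k, m1 / k
        q0, q1 = 1 + p0, 1 + p1
        denom = math.log(p1 * q0 / (p0 * q1))
        slope = k * math.log(q1 / q0) / denom
        upper = math.log((1 - beta) / alpha) / denom   # treatment boundary
        lower = math.log(beta / (1 - alpha)) / denom   # no-treatment boundary
        return slope, lower, upper

    def decide(counts, m0=0.5, m1=2.0, k=0.645):
        slope, lo, hi = sprt_lines(m0, m1, k)
        total = 0
        for n, count in enumerate(counts, start=1):
            total += count
            if total >= slope * n + hi:
                return f"treat (decided after {n} sample units)"
            if total <= slope * n + lo:
                return f"no treatment (decided after {n} sample units)"
        return "keep sampling"

    print(decide([0, 1, 0, 0, 2, 0, 0, 0, 0, 0, 0, 0]))
    ```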

  7. The contribution of waste management to the reduction of greenhouse gas emissions with applications in the city of Bucharest.

    PubMed

    Sandulescu, Elena

    2004-12-01

    Waste management is a key process for protecting the environment and conserving resources. The contribution of appropriate waste management measures to the reduction of greenhouse gas (GHG) emissions from the city of Bucharest was studied. An analysis of the distribution of waste flows into various treatment options was conducted using material flows and stocks analysis (MFSA). An optimum scenario (i.e. the municipal solid waste stream managed as: recycling of recoverable materials, 8%; incineration of combustibles, 60%; landfilling of non-combustibles, 32%) was modelled to represent the future waste management in Bucharest with regard to its potential for GHG reduction. The results indicate that it can contribute 5.5% to the reduction of the total amount of GHGs emitted from Bucharest.

  8. Heterogeneous collaborative sensor network for electrical management of an automated house with PV energy.

    PubMed

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Alvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefanía; Masa-Bote, Daniel; Jiménez-Leube, Javier

    2011-01-01

    In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the "Smart Grid", which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called "MagicBox", equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. There is therefore a large number of energy variables to be monitored, which allows us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, obtained in a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency.

  9. Study on Big Database Construction and its Application of Sample Data Collected in CHINA'S First National Geographic Conditions Census Based on Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Cheng, T.; Zhou, X.; Jia, Y.; Yang, G.; Bai, J.

    2018-04-01

    In China's First National Geographic Conditions Census, millions of sample records were collected across the country for interpreting land cover from remote sensing images; the number of data files exceeds 12,000,000 and has continued to grow in the follow-on project of National Geographic Conditions Monitoring. At present, storing such big data in a database such as Oracle is the most effective approach, but a more applicable method is needed for managing and applying the sample data. This paper studies a database construction method based on a relational database combined with a distributed file system, in which vector data and file data are saved in different physical locations; the key issues and their solutions are discussed. On this basis, it studies how the sample data can be applied and analyzes several kinds of use cases, which lays the foundation for the application of sample data. In particular, sample data located in Shaanxi province were selected to verify the method. Taking the 10 first-level classes defined in the land cover classification system as an example, the spatial distribution and density characteristics of all kinds of sample data were analyzed. The results verify that the database construction method based on a relational database with a distributed file system is useful and applicable for searching, analyzing, and further applying sample data. Furthermore, the sample data collected in China's First National Geographic Conditions Census could be useful for Earth observation and land cover quality assessment.

  10. iRODS: A Distributed Data Management Cyberinfrastructure for Observatories

    NASA Astrophysics Data System (ADS)

    Rajasekar, A.; Moore, R.; Vernon, F.

    2007-12-01

    Large-scale and long-term preservation of both observational and synthesized data requires a system that virtualizes data management concepts. A methodology is needed that can work across long distances in space (distribution) and long periods in time (preservation). The system needs to manage data stored on multiple types of storage systems, including new systems that become available in the future. This concept is called infrastructure independence, and is typically implemented through virtualization mechanisms. Data grids are built upon concepts of data and trust virtualization. These concepts enable the management of collections of data that are distributed across multiple institutions, stored on multiple types of storage systems, and accessed by multiple types of clients. Data virtualization ensures that the name spaces used to identify files, users, and storage systems are persistent, even when files are migrated onto future technology. This is required to preserve authenticity, the link between the record and descriptive and provenance metadata. Trust virtualization ensures that access controls remain invariant as files are moved within the data grid. This is required to track the chain of custody of records over time. The Storage Resource Broker (http://www.sdsc.edu/srb) is one such data grid used in a wide variety of applications in earth and space sciences such as ROADNet (roadnet.ucsd.edu), SEEK (seek.ecoinformatics.org), GEON (www.geongrid.org) and NOAO (www.noao.edu). Recent extensions to data grids provide one more level of virtualization: policy or management virtualization. Management virtualization ensures that execution of management policies can be automated, and that rules can be created that verify assertions about the shared collections of data. When dealing with distributed large-scale data over long periods of time, the policies used to manage the data and provide assurances about the authenticity of the data become paramount. The integrated Rule-Oriented Data System (iRODS) (http://irods.sdsc.edu) provides the mechanisms needed not only to describe management policies, but also to track how the policies are applied and their execution results. The iRODS data grid maps management policies to rules that control the execution of the remote micro-services. As an example, a rule can be created that automatically creates a replica whenever a file is added to a specific collection, or extracts its metadata automatically and registers it in a searchable catalog. For the replication operation, the persistent state information consists of the replica location, the creation date, the owner, the replica size, etc. The mechanism used by iRODS for providing policy virtualization is based on well-defined functions, called micro-services, which are chained into alternative workflows using rules. A rule engine, based on the event-condition-action paradigm, executes the rule-based workflows after an event. Rules can be deferred to a pre-determined time or executed on a periodic basis. As the data management policies evolve, the iRODS system can implement new rules, new micro-services, and new state information (metadata content) needed to manage the new policies. Each sub-collection can be managed using a different set of policies. The discussion of the concepts in rule-based policy virtualization and its application to long-term and large-scale data management for observatories such as ORION and NEON will be the basis of this paper.
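
    The event-condition-action pattern that drives such rules can be sketched in a few lines: an ingest event is matched against rule conditions and, on a match, a chain of micro-services runs. The sketch below is a schematic analogue in Python with hypothetical names; real iRODS rules are written in the iRODS rule language, not shown here.

    ```python
    # Event-condition-action rule engine in miniature: rules pair a condition
    # over the event with a chain of micro-services to execute.
    RULES = []

    def add_rule(condition, *microservices):
        """Register a rule: when condition(event) holds, run the chain."""
        RULES.append((condition, list(microservices)))

    def fire(event: dict) -> None:
        """Dispatch an event through every matching rule (the rule engine)."""
        for condition, chain in RULES:
            if condition(event):
                for microservice in chain:
                    microservice(event)

    def replicate(e):
        print(f"replicating {e['path']} to a second storage resource")

    def index_metadata(e):
        print(f"extracting and registering metadata for {e['path']}")

    # Policy: every file put into the shared collection is replicated and indexed.
    add_rule(lambda e: e["op"] == "put" and e["path"].startswith("/obs/shared/"),
             replicate, index_metadata)

    fire({"op": "put", "path": "/obs/shared/2007/seismic.dat"})
    ```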

  11. Container Management During Desert Shield/Storm: An Analysis and Critique of Lessons Learned

    DTIC Science & Technology

    1993-04-15

    across the distribution spectrum.14 These issues were grouped into five major categories: Containerization and Packaging, Distribution Management , Automation...of containers is needed, according to TDAP. Distribution - Management issues. The Desert Shield experience identified three general distribution ...recommended the formation of a 19 Theater Distribution Management Center from the assets of the Movement Control Agency (MCA) and Material Management

  12. Gulf of Mexico Integrated Science - Tampa Bay Study - Data Information Management System (DIMS)

    USGS Publications Warehouse

    Johnston, James

    2004-01-01

    The Tampa Bay Integrated Science Study is an effort by the U.S. Geological Survey (USGS) that combines the expertise of federal, state and local partners to address some of the most pressing ecological problems of the Tampa Bay estuary. This project serves as a template for the application of integrated research projects in other estuaries in the Gulf of Mexico. Efficient information and data distribution for the Tampa Bay Study has required the development of a Data Information Management System (DIMS). This information system is being used as an outreach management tool, providing information to scientists, decision makers and the public on the coastal resources of the Gulf of Mexico.

  13. Probabilistic graphs as a conceptual and computational tool in hydrology and water management

    NASA Astrophysics Data System (ADS)

    Schoups, Gerrit

    2014-05-01

    Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc.), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
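
    As a worked micro-example of points 1 and 2, the sketch below writes a three-variable joint distribution as a product of local factors (prior, transition, observation likelihood) and computes a posterior by enumeration. The variables and numbers are hypothetical, not a calibrated hydrological model.

    ```python
    # Joint over recharge R, storage S, and observed well level L written as a
    # product of local factors p(R) * p(S|R) * p(L|S). All numbers hypothetical.
    p_R = {"wet": 0.3, "dry": 0.7}
    p_S_given_R = {("hi", "wet"): 0.8, ("lo", "wet"): 0.2,
                   ("hi", "dry"): 0.25, ("lo", "dry"): 0.75}
    p_L_given_S = {("high", "hi"): 0.9, ("low", "hi"): 0.1,
                   ("high", "lo"): 0.2, ("low", "lo"): 0.8}

    def joint(r, s, l):
        """Multiply the local factors: the graph's joint distribution."""
        return p_R[r] * p_S_given_R[(s, r)] * p_L_given_S[(l, s)]

    # Posterior over storage S given the observation L = "high"
    # (Bayes' rule by enumeration: marginalize R, then normalize over S).
    unnorm = {s: sum(joint(r, s, "high") for r in p_R) for s in ("hi", "lo")}
    z = sum(unnorm.values())
    posterior = {s: v / z for s, v in unnorm.items()}
    print(posterior)   # approximately {'hi': 0.76, 'lo': 0.24}
    ```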

  14. A comparison of ground-based air-blast sprayer and aircraft application of fungicides to manage scab in tall pecan trees

    USDA-ARS?s Scientific Manuscript database

    Scab (caused by Venturia effusa) is the most destructive disease of pecan in the southeastern USA. The most widely used method to apply fungicide is air-blast (AB) sprayers. Aerially (A) applied sprays are also used, but the disease distribution and spray coverage of these two methods has not been c...

  15. Army Civil Affairs Functional Specialists: On the Verge of Extinction

    DTIC Science & Technology

    2012-03-22

    the following six areas: rule of law, economic stability , infrastructure, governance, public health and welfare, and public education and information...as defined in table 1. Rule of Law Economic Stability Infrastructure Rule of law pertains to the fair, competent, and efficient application and... Economic stability pertains to the efficient management (for example, production, distribution, trade, and consumption) of resources, goods

  16. Estimating spread rates of non-native species: the gypsy moth as a case study

    Treesearch

    Patrick Tobin; Andrew M. Liebhold; E. Anderson Roberts; Laura M. Blackburn

    2015-01-01

    Estimating rates of spread and generating projections of future range expansion for invasive alien species is a key process in the development of management guidelines and policy. Critical needs to estimate spread rates include the availability of surveys to characterize the spatial distribution of an invading species and the application of analytical methods to...

  17. 75 FR 3493 - Notice of Acceptance for Docketing of the Application, Notice of Opportunity for Hearing for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-21

    ...), Rockville, Maryland 20852 and is accessible from the NRC's Agencywide Documents Access and Management System... receipt of the document. The E-Filing system also distributes an e-mail notice that provides access to the... intervene is filed so that they can obtain access to the document via the E-Filing system. A person filing...

  18. 75 FR 42462 - Notice of Acceptance for Docketing of the Application and Notice of Opportunity for Hearing...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-21

    ... NRC's Agencywide Documents Access and Management System (ADAMS) Public Electronic Reading Room on the... receipt of the document. The E-Filing system also distributes an e-mail notice that provides access to the... intervene is filed so that they can obtain access to the document via the E-Filing system. A person filing...

  19. Toxicity of newly isolated piperideine alkaloids from the red imported fire ant, Solenopsis invicta Buren, against the green peach aphid, Myzus persicae (Sulzer)

    USDA-ARS?s Scientific Manuscript database

    The green peach aphid, Myzus persicae (Sulzer), is a major insect pest of many agronomic and horticultural crops and is distributed worldwide. Aphid management is often based on the application of insecticides. However, the aphid is now resistant to many of these and much interest has recently develope...

  20. Knowledge-based image data management - An expert front-end for the BROWSE facility

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Star, Jeffrey L.; Estes, John E.

    1988-01-01

    An intelligent user interface being added to the NASA-sponsored BROWSE testbed facility is described. BROWSE is a prototype system designed to explore issues involved in locating image data in distributed archives and displaying low-resolution versions of that imagery at a local terminal. For prototyping, the initial application is the remote sensing of forest and range land.

  1. DOS Design/Application Tools System/Segment Specification. Volume 3

    DTIC Science & Technology

    1990-09-01

    consume the same information to obtain that information without "manual" translation by people. Solving the information management problem effectively...and consumes even more information than centralized development. Distributed systems cannot be developed successfully by experiment without...human intervention because all tools consume input from and produce output to the same repository. New tools are easily absorbed into the environment

  2. New Technologies for Smart Grid Operation

    NASA Astrophysics Data System (ADS)

    Mak, Sioe T.

    2015-02-01

    This book is a handbook on advanced application design and the integration of new and future technologies into Smart Grids, for researchers and engineers in academia and industry looking to pull together disparate technologies and apply them for greater gains. The book covers Smart Grids as the midpoint in the generation, storage, transmission and distribution process, through to database management, communication technologies, intelligent devices and synchronisation.

  3. Distributed Information System Development: Review of Some Management Issues

    NASA Astrophysics Data System (ADS)

    Mishra, Deepti; Mishra, Alok

    Due to the proliferation of the Internet and globalization, distributed information system development is becoming popular. In this paper we review some significant management issues, such as process management, project management, requirements management and knowledge management, which have received much attention from a distributed development perspective. In this literature review we found that areas like quality and risk management have received only scant attention in distributed information system development.

  4. Investigating the management performance of disinfection analysis of water distribution networks using data mining approaches.

    PubMed

    Zounemat-Kermani, Mohammad; Ramezani-Charmahineh, Abdollah; Adamowski, Jan; Kisi, Ozgur

    2018-06-13

    Chlorination, the basic treatment utilized for drinking water sources, is widely used for water disinfection and pathogen elimination in water distribution networks. Therefore, proper prediction of chlorine consumption is of great importance to water distribution network performance. In this respect, data mining techniques, which can discover the relationship between dependent variable(s) and independent variables, can be considered alternative approaches to conventional methods (e.g., numerical methods). This study examines the applicability of three key methods, based on the data mining approach, for predicting chlorine levels in water distribution networks. ANNs (artificial neural networks, including the multi-layer perceptron neural network, MLPNN, and radial basis function neural network, RBFNN), SVM (support vector machine), and CART (classification and regression tree) methods were used to estimate the concentration of residual chlorine in distribution networks for three villages in Kerman Province, Iran. Produced water (flow), chlorine consumption, and residual chlorine were collected daily for 3 years. An assessment of the studied models using several statistical criteria (NSC, RMSE, R2, and SEP) indicated that, in general, MLPNN has the greatest capability for predicting chlorine levels, followed by CART, SVM, and RBF-ANN. The weaker performance of the data-driven methods in some of the water distribution networks could be attributed to improper chlorination management rather than to the methods' capability.
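
    The model comparison can be reproduced in outline with scikit-learn, which provides all three learner families used in the study. The sketch below trains MLP, SVM, and regression-tree models on synthetic stand-in data (not the Kerman measurements) and compares held-out RMSE.

    ```python
    # Compare MLP, SVM, and CART regressors on (flow, dose) -> residual
    # chlorine. The data below are synthetic stand-ins for illustration.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    flow = rng.uniform(50, 500, 1000)     # produced water, m3/day (synthetic)
    dose = rng.uniform(0.5, 3.0, 1000)    # chlorine consumption, mg/L (synthetic)
    residual = 0.6 * dose - 0.0008 * flow + rng.normal(0.5, 0.05, 1000)

    X = np.column_stack([flow, dose])
    X_tr, X_te, y_tr, y_te = train_test_split(X, residual, random_state=0)

    models = {
        "MLPNN": make_pipeline(StandardScaler(),
                               MLPRegressor((20,), max_iter=2000, random_state=0)),
        "SVM":   make_pipeline(StandardScaler(), SVR(C=10.0)),
        "CART":  DecisionTreeRegressor(max_depth=6, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
        print(f"{name}: test RMSE = {rmse:.3f} mg/L")
    ```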

  5. GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.

    2016-12-01

    Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for storing, computing on, and analyzing data. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster and stream data. GISpark is constructed on the latest virtualized computing infrastructures and a distributed computing architecture. OpenStack and Docker are used to build the multi-user cloud computing infrastructure hosting GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools that serve other domains involving spatial properties. We tested the performance of the platform with a taxi trajectory analysis. The results suggest that GISpark achieves excellent run-time performance in spatiotemporal big data applications.
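
    The Spark layer of such a platform reduces a typical trajectory analysis to a short parallel job. The sketch below, assuming a pyspark installation and a hypothetical HDFS path and schema, counts taxi points per roughly 1 km grid cell; it illustrates the pattern, not GISpark's actual APIs.

    ```python
    # Bin taxi trajectory points into a lon/lat grid and count per cell,
    # executed in parallel by Spark. Path and column names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("trajectory-demo").getOrCreate()

    points = spark.read.csv("hdfs:///taxi/points.csv", header=True,
                            inferSchema=True)  # columns: lon, lat, ts, taxi_id

    cells = (points
             .withColumn("cell_x", F.floor(F.col("lon") / 0.01))   # ~1 km bins
             .withColumn("cell_y", F.floor(F.col("lat") / 0.01))
             .groupBy("cell_x", "cell_y")
             .count()
             .orderBy(F.desc("count")))

    cells.show(10)   # the ten densest grid cells
    spark.stop()
    ```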

  6. National Renewable Energy Laboratory (NREL) Topic 2 Final Report: End-to-End Communication and Control System to Support Clean Energy Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hudgins, Andrew P.; Carrillo, Ismael M.; Jin, Xin

    This document is the final report of a two-year development, test, and demonstration project, 'Cohesive Application of Standards-Based Connected Devices to Enable Clean Energy Technologies.' The project was part of the National Renewable Energy Laboratory's (NREL's) Integrated Network Testbed for Energy Grid Research and Technology (INTEGRATE) initiative hosted at the Energy Systems Integration Facility (ESIF). This project demonstrated techniques to control distribution grid events using the coordination of traditional distribution grid devices and high-penetration renewable resources and demand response. Using standard communication protocols and semantic standards, the project examined the use cases of high/low distribution voltage, requests for volt-ampere-reactive (VAR) power support, and transactive energy strategies using Volttron. Open source software, written by EPRI to control distributed energy resources (DER) and demand response (DR), was used by an advanced distribution management system (ADMS) to abstract the resources reporting to a collection of capabilities rather than needing to know specific resource types. This architecture allows for scaling both horizontally and vertically. Several new technologies were developed and tested. Messages from the ADMS based on the common information model (CIM) were developed to control the DER and DR management systems. The OpenADR standard was used to help manage grid events by turning loads off and on. Volttron technology was used to simulate a homeowner choosing the price at which to enter the demand response market. Finally, the ADMS used newly developed algorithms to coordinate these resources with a capacitor bank and voltage regulator to respond to grid events.

  7. A distributed scheme to manage the dynamic coexistence of IEEE 802.15.4-based health-monitoring WBANs.

    PubMed

    Deylami, Mohammad N; Jovanov, Emil

    2014-01-01

    The overlap of transmission ranges between wireless networks as a result of mobility is referred to as dynamic coexistence. The interference caused by coexistence may significantly affect the performance of wireless body area networks (WBANs) where reliability is particularly critical for health monitoring applications. In this paper, we analytically study the effects of dynamic coexistence on the operation of IEEE 802.15.4-based health monitoring WBANs. The current IEEE 802.15.4 standard lacks mechanisms for effectively managing the coexistence of mobile WBANs. Considering the specific characteristics and requirements of health monitoring WBANs, we propose the dynamic coexistence management (DCM) mechanism to make IEEE 802.15.4-based WBANs able to detect and mitigate the harmful effects of coexistence. We assess the effectiveness of this scheme using extensive OPNET simulations. Our results indicate that DCM improves the successful transmission rates of dynamically coexisting WBANs by 20%-25% for typical medical monitoring applications.

  8. Advanced data management system architectures testbed

    NASA Technical Reports Server (NTRS)

    Grant, Terry

    1990-01-01

    The objective of the Architecture and Tools Testbed is to provide a working, experimental focus to the evolving automation applications for the Space Station Freedom data management system. Emphasis is on defining and refining real-world applications including the following: the validation of user needs; understanding system requirements and capabilities; and extending capabilities. The approach is to provide an open, distributed system of high performance workstations representing both the standard data processors and networks and advanced RISC-based processors and multiprocessor systems. The system provides a base from which to develop and evaluate new performance and risk management concepts and for sharing the results. Participants are given a common view of requirements and capability via: remote login to the testbed; standard, natural user interfaces to simulations and emulations; special attention to user manuals for all software tools; and E-mail communication. The testbed elements which instantiate the approach are briefly described including the workstations, the software simulation and monitoring tools, and performance and fault tolerance experiments.

  9. Analysis and Application of Microgrids

    NASA Astrophysics Data System (ADS)

    Yue, Lu

    New trends of generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. This new type of power generation is called Distributed Generation (DG), and the energy sources it utilizes are termed Distributed Energy Resources (DERs). With DGs embedded in them, distribution networks evolve from passive to active networks enabling bidirectional power flows. Further incorporating flexible and intelligent controllers and employing future technologies, active distribution networks will turn into Microgrids. A Microgrid is a small-scale, low-voltage combined heat and power (CHP) supply network designed to supply electrical and heat loads for a small community. To further implement Microgrids, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid has multiple DERs integrated and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, first, problems such as power system modelling, power flow solving and power system optimization are studied. Then, Distributed Generation and Microgrids are studied and reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method that minimizes the total transmission loss and generation cost of a Microgrid is proposed, together with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested on a 6-bus power system and a 9-bus power system.
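
    The economic dispatch step can be illustrated as a small constrained optimization: choose DER outputs minimizing total generation cost subject to the power balance and unit limits. The sketch below uses scipy with hypothetical quadratic cost coefficients; a full AC OPF, as in the thesis, would also model network losses and voltages.

    ```python
    # Toy economic dispatch: minimize sum_i (a_i P_i^2 + b_i P_i)
    # subject to sum_i P_i = load and per-unit output bounds.
    import numpy as np
    from scipy.optimize import minimize

    a = np.array([0.010, 0.015, 0.030])   # $/kW^2 quadratic cost (hypothetical)
    b = np.array([4.0, 3.5, 5.0])         # $/kW linear cost (hypothetical)
    load = 420.0                          # kW demand to be met

    def cost(P):
        return np.sum(a * P**2 + b * P)

    res = minimize(
        cost,
        x0=np.full(3, load / 3),
        bounds=[(20, 250), (20, 200), (10, 150)],
        constraints=[{"type": "eq", "fun": lambda P: np.sum(P) - load}],
    )
    print(res.x, f"total cost = {cost(res.x):.2f} $/h")
    ```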

  10. Managing distribution changes in time series prediction

    NASA Astrophysics Data System (ADS)

    Matias, J. M.; Gonzalez-Manteiga, W.; Taboada, J.; Ordonez, C.

    2006-07-01

    When a problem is modeled statistically, a single distribution model is usually postulated that is assumed to be valid for the entire space. Nonetheless, this practice may be somewhat unrealistic in certain application areas, in which the conditions of the process that generates the data may change; as far as we are aware, however, no techniques have been developed to tackle this problem. This article proposes a technique for modeling and predicting this change in time series with a view to improving estimates and predictions. The technique is applied, among other models, to the recently proposed hypernormal distribution. When tested on real data from a range of stock market indices, the technique produces better results than when a single distribution model is assumed to be valid for the entire period of time studied. Moreover, when a global model is postulated, it is highly recommended to select the hypernormal distribution parameter in the same likelihood maximization process.

  11. The research of distributed interactive simulation based on HLA in coal mine industry inherent safety

    NASA Astrophysics Data System (ADS)

    Dou, Zhi-Wu

    2010-08-01

    To address the inherent safety problem confronting the coal mining industry, this paper analyzes the characteristics and applications of distributed interactive simulation based on the High Level Architecture (DIS/HLA) and proposes a new method for developing coal mining inherent-safety distributed interactive simulations using HLA technology. After studying the function and structure of the system, a simple coal mining inherent-safety scenario is modeled with HLA, the FOM and SOM are developed, and the mathematical models are presented. The results of the case study show that HLA plays an important role in developing distributed interactive simulations of complicated distributed systems and that the method is valid for the problem confronting the coal mining industry. For the coal mining industry, the conclusions show that an HLA-based simulation system plays an important role in identifying sources of hazard, devising measures against accidents, and improving the level of management.

  12. A Distributed Simulation Facility to Support Human Factors Research in Advanced Air Transportation Technology

    NASA Technical Reports Server (NTRS)

    Amonlirdviman, Keith; Farley, Todd C.; Hansman, R. John, Jr.; Ladik, John F.; Sherer, Dana Z.

    1998-01-01

    A distributed real-time simulation of the civil air traffic environment developed to support human factors research in advanced air transportation technology is presented. The distributed environment is based on a custom simulation architecture designed for simplicity and flexibility in human experiments. Standard Internet protocols are used to create the distributed environment, linking an advanced cockpit simulator, an Air Traffic Control simulator, and a pseudo-aircraft control and simulation management station. The pseudo-aircraft control station also functions as a scenario design tool for coordinating human factors experiments. This station incorporates a pseudo-pilot interface designed to reduce workload for human operators piloting multiple aircraft simultaneously in real time. The application of this distributed simulation facility to support a study of the effect of shared information (via air-ground datalink) on pilot/controller shared situation awareness and re-route negotiation is also presented.

  13. Energy Management of Smart Distribution Systems

    NASA Astrophysics Data System (ADS)

    Ansari, Bananeh

    Electric power distribution systems interface the end-users of electricity with the power grid. Traditional distribution systems are operated in a centralized fashion, with the distribution system owner or operator being the only decision maker. The management and control architecture of distribution systems needs to gradually transform to accommodate emerging smart grid technologies, distributed energy resources, and active electricity end-users, or prosumers. This document develops multi-task, multi-objective energy management schemes for: 1) commercial/large residential prosumers, and 2) the distribution system operator of a smart distribution system. The first part of this document describes a method for distributed energy management of multiple commercial/large residential prosumers. These prosumers not only consume electricity, but also generate electricity using their roof-top solar photovoltaic systems. When photovoltaic generation is larger than local consumption, excess electricity will be fed into the distribution system, creating a voltage rise along the feeder. The distribution system operator cannot tolerate a significant voltage rise. Energy storage (ES) can help the prosumers manage their electricity exchanges with the distribution system such that minimal voltage fluctuation occurs. The proposed distributed energy management scheme sizes and schedules each prosumer's ES to reduce the electricity bill and mitigate voltage rise along the feeder. The second part of this document focuses on emergency energy management and resilience assessment of a distribution system. The developed emergency energy management system uses available resources and redundancy to restore the distribution system's functionality fully or partially. The success of the restoration maneuver depends on how resilient the distribution system is; engineering resilience terminology is used to evaluate this resilience. The proposed emergency energy management scheme, together with the resilience assessment, increases the distribution system operator's preparedness for emergency events.
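
    The first scheme's core intuition, using storage to absorb PV export that would otherwise raise feeder voltage, can be sketched with a simple greedy rule. The profiles and ratings below are hypothetical, and the actual method also optimizes ES sizing and the electricity bill.

    ```python
    # Greedy ES schedule: charge when rooftop PV exceeds load (limiting the
    # export that causes voltage rise), discharge when load exceeds PV.
    def schedule_es(pv, load, capacity_kwh=10.0, power_kw=3.0):
        soc, grid = 0.0, []
        for gen, dem in zip(pv, load):            # hourly profiles, kW
            surplus = gen - dem
            if surplus > 0:                       # charge from excess PV
                charge = min(surplus, power_kw, capacity_kwh - soc)
                soc += charge
                grid.append(surplus - charge)     # export only the remainder
            else:                                 # discharge to cover deficit
                discharge = min(-surplus, power_kw, soc)
                soc -= discharge
                grid.append(surplus + discharge)  # negative means import
        return grid

    pv   = [0, 0, 1, 3, 5, 6, 5, 3, 1, 0, 0, 0]   # hypothetical PV, kW
    load = [1, 1, 1, 2, 2, 2, 2, 2, 3, 4, 3, 2]   # hypothetical load, kW
    print(schedule_es(pv, load))   # kW exchanged with the feeder each hour
    ```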

  14. Environmental offsets, resilience and cost-effective conservation

    PubMed Central

    Little, L. R.; Grafton, R. Q.

    2015-01-01

    Conservation management agencies are faced with acute trade-offs when dealing with disturbance from human activities. We show how agencies can respond to permanent ecosystem disruption by managing for Pimm resilience within a conservation budget using a model calibrated to a metapopulation of a coral reef fish species at Ningaloo Reef, Western Australia. The application is of general interest because it provides a method to manage species susceptible to negative environmental disturbances by optimizing between the number and quality of migration connections in a spatially distributed metapopulation. Given ecological equivalency between the number and quality of migration connections in terms of time to recover from disturbance, our approach allows conservation managers to promote ecological function, under budgetary constraints, by offsetting permanent damage to one ecological function with investment in another. PMID:26587260

  15. Awareware: Narrowcasting Attributes for Selective Attention, Privacy, and Multipresence

    NASA Astrophysics Data System (ADS)

    Cohen, Michael; Newton Fernando, Owen Noel

    The domain of CSCW (computer-supported collaborative work) and DSC (distributed synchronous collaboration) spans real-time interactive multiuser systems, shared information spaces, and applications for teleexistence and artificial reality, including collaborative virtual environments (CVEs) (Benford et al., 2001). As presence awareness systems emerge, it is important to develop appropriate interfaces and architectures for managing multimodal multiuser systems. Especially in consideration of the persistent connectivity enabled by affordable networked communication, shared distributed environments require generalized control of media streams: techniques to control source → sink transmissions in synchronous groupware, including teleconferences and chatspaces, online role-playing games, and virtual concerts.

  16. An international organization for remote sensing

    NASA Technical Reports Server (NTRS)

    Helm, Neil R.; Edelson, Burton I.

    1991-01-01

    A recommendation is presented for the formation of a new commercially oriented international organization to acquire or develop, coordinate or manage, the space and ground segments for a global operational satellite system to furnish the basic data for remote sensing and meteorological, land, and sea resource applications. The growing numbers of remote sensing programs are examined and possible ways of reducing redundant efforts and improving the coordination and distribution of these global efforts are discussed. This proposed remote sensing organization could play an important role in international cooperation and the distribution of scientific, commercial, and public good data.

  17. Health Management Applications for International Space Station

    NASA Technical Reports Server (NTRS)

    Alena, Richard; Duncavage, Dan

    2005-01-01

    Traditional mission and vehicle management involves teams of highly trained specialists monitoring vehicle status and crew activities, responding rapidly to any anomalies encountered during operations. These teams work from the Mission Control Center and have access to engineering support teams with specialized expertise in International Space Station (ISS) subsystems. Integrated System Health Management (ISHM) applications can significantly augment these capabilities by providing enhanced monitoring, prognostic and diagnostic tools for critical decision support and mission management. The Intelligent Systems Division of NASA Ames Research Center is developing many prototype applications using model-based reasoning, data mining and simulation, working with Mission Control through the ISHM Testbed and Prototypes Project. This paper will briefly describe information technology that supports current mission management practice, and will extend this to a vision for future mission control workflow incorporating new ISHM applications. It will describe ISHM applications currently under development at NASA and will define technical approaches for implementing our vision of future human exploration mission management incorporating artificial intelligence and distributed web service architectures, using specific examples. Several prototypes are under development, each highlighting a different computational approach. The ISStrider application allows in-depth analysis of Caution and Warning (C&W) events by correlating real-time telemetry with the logical fault trees used to define off-nominal events. The application uses live telemetry data and the Livingstone diagnostic inference engine to display the specific parameters and fault trees that generated the C&W event, allowing a flight controller to identify the root cause of the event from thousands of possibilities by simply navigating animated fault tree models on their workstation. SimStation models the functional power flow for the ISS Electrical Power System and can predict power balance for nominal and off-nominal conditions. SimStation uses real-time telemetry data to keep detailed computational physics models synchronized with the actual ISS power system state. In the event of failure, the application can then rapidly diagnose root cause, predict future resource levels and even correlate technical documents relevant to the specific failure. These advanced computational models will allow better insight and more precise control of ISS subsystems, increasing safety margins by speeding up anomaly resolution and reducing engineering team effort and cost. This technology will make operating ISS more efficient and is directly applicable to next-generation exploration missions and Crew Exploration Vehicles.
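
    The fault-tree correlation that ISStrider performs can be pictured with the following schematic sketch, which evaluates a toy fault tree against current telemetry values to narrow down root causes. The tree, gates and parameter names are invented for illustration; the real system reasons over ISS models with the Livingstone engine.

      # Schematic sketch of correlating telemetry with a fault tree.
      # The tree and parameter names are invented for illustration.
      TELEMETRY = {"pump_current_low": True, "valve_pos_stuck": False,
                   "coolant_flow_low": True}

      FAULT_TREE = {
          "coolant_loop_cw": ("OR", ["pump_fault", "valve_fault"]),
          "pump_fault":      ("AND", ["pump_current_low", "coolant_flow_low"]),
          "valve_fault":     ("AND", ["valve_pos_stuck", "coolant_flow_low"]),
      }

      def evaluate(node):
          """Recursively evaluate a node: a telemetry leaf or a logic gate."""
          if node in TELEMETRY:
              return TELEMETRY[node]
          gate, children = FAULT_TREE[node]
          values = [evaluate(c) for c in children]
          return any(values) if gate == "OR" else all(values)

      def root_causes(event):
          """Return the active branches beneath a Caution & Warning event."""
          _gate, children = FAULT_TREE[event]
          return [c for c in children if evaluate(c)]

      print(root_causes("coolant_loop_cw"))   # -> ['pump_fault']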

  18. Fully Distributed Monitoring Architecture Supporting Multiple Trackees and Trackers in Indoor Mobile Asset Management Application

    PubMed Central

    Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju

    2014-01-01

    A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture were also evaluated. PMID:24662407

  19. Evolution of the ATLAS PanDA workload management system for exascale computational science

    NASA Astrophysics Data System (ADS)

    Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration

    2014-06-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.
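
    The brokering idea can be illustrated with a toy scoring rule (not PanDA's actual algorithm) that weighs each site's queue backlog against data locality:

      # Toy illustration of dynamic workload brokering: score each site by
      # relative queue backlog and data locality, then send the job to the
      # best match. Sites and numbers are made up for the example.
      SITES = [
          {"name": "BNL",  "queued": 120, "cores": 4000, "datasets": {"dsA", "dsB"}},
          {"name": "CERN", "queued": 900, "cores": 9000, "datasets": {"dsB"}},
          {"name": "OLCF", "queued": 10,  "cores": 2000, "datasets": set()},
      ]

      def broker(job):
          """Pick the site with the shortest relative backlog, strongly
          preferring sites that already hold the job's input dataset."""
          def score(site):
              backlog = site["queued"] / site["cores"]
              locality_penalty = 0.0 if job["dataset"] in site["datasets"] else 1.0
              return backlog + locality_penalty      # lower is better
          return min(SITES, key=score)["name"]

      print(broker({"dataset": "dsA"}))   # -> BNL (input data is local)
      print(broker({"dataset": "dsC"}))   # -> OLCF (nothing local; least loaded)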

  20. Distributed Earth observation data integration and on-demand services based on a collaborative framework of geospatial data service gateway

    NASA Astrophysics Data System (ADS)

    Xie, Jibo; Li, Guoqing

    2015-04-01

    Earth observation (EO) data obtained by airborne or spaceborne sensors are heterogeneous and geographically distributed in storage. These data sources belong to different organizations or agencies whose data management and storage methods differ widely, and each provides its own publishing platform or portal. As more remote sensing sensors are used for EO missions, space agencies have accumulated massive, distributed EO data archives. This distribution of EO data archives, together with system heterogeneity, makes it difficult to use geospatial data efficiently for many EO applications, such as hazard mitigation. To solve the interoperability problems of different EO data systems, this paper introduces an advanced architecture for distributed geospatial data infrastructure that addresses the complexity of distributed, heterogeneous EO data integration and on-demand processing. The concept and architecture of the geospatial data service gateway (GDSG) is proposed to connect heterogeneous EO data sources so that EO data can be retrieved and accessed through unified interfaces. The GDSG consists of a set of tools and services that encapsulate heterogeneous geospatial data sources into homogeneous service modules. The GDSG modules include EO metadata harvesters and translators, adaptors for different types of data systems, unified data query and access interfaces, EO data cache management, and a gateway GUI. The GDSG framework is used to implement interoperability and synchronization between distributed EO data sources with heterogeneous architectures. An on-demand distributed EO data platform was developed to validate the GDSG architecture and implementation techniques. Several distributed EO data archives were used for testing, with flood and earthquake response serving as two scenarios for the use cases of distributed EO data integration and interoperability.
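
    The gateway pattern described here can be sketched as per-source adapters that normalize heterogeneous catalogues behind one query interface. The class and field names below are illustrative assumptions, not the GDSG API:

      # Sketch of the gateway idea: per-source adapters normalize
      # heterogeneous EO catalogues behind one unified query interface.
      from abc import ABC, abstractmethod

      class SourceAdapter(ABC):
          """Wraps one heterogeneous EO data source."""
          @abstractmethod
          def query(self, bbox, start, end):
              """Return normalized records: dicts with id/source/time keys."""

      class AgencyAAdapter(SourceAdapter):
          def query(self, bbox, start, end):
              # A real adapter would call agency A's catalogue API and
              # translate its metadata schema; here we return canned records.
              return [{"id": "A-001", "source": "AgencyA", "time": "2014-08-03"}]

      class AgencyBAdapter(SourceAdapter):
          def query(self, bbox, start, end):
              return [{"id": "B-417", "source": "AgencyB", "time": "2014-08-04"}]

      class Gateway:
          """Unified query over all registered adapters."""
          def __init__(self, adapters):
              self.adapters = adapters
          def query(self, bbox, start, end):
              results = []
              for adapter in self.adapters:
                  results.extend(adapter.query(bbox, start, end))
              return sorted(results, key=lambda r: r["time"])

      gw = Gateway([AgencyAAdapter(), AgencyBAdapter()])
      print(gw.query(bbox=(100, 20, 110, 30), start="2014-08-01", end="2014-08-10"))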

  1. Towards G2G: Systems of Technology Database Systems

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Bell, David

    2005-01-01

    We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing and modifying components of technology database systems. G2G will interoperate information and knowledge that is distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include the creation of an integrated approach to sustain effective management of technology investments while supporting the ability of various technology database systems to be independently managed. The integration technology will comply with emerging open standards. Applications can thus be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will use NASA's breakthrough in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database Systems.

  2. DIRAC3 - the new generation of the LHCb grid software

    NASA Astrophysics Data System (ADS)

    Tsaregorodtsev, A.; Brook, N.; Casajus Ramo, A.; Charpentier, Ph; Closier, J.; Cowan, G.; Graciani Diaz, R.; Lanciotti, E.; Mathe, Z.; Nandakumar, R.; Paterson, S.; Romanovsky, V.; Santinelli, R.; Sapunov, M.; Smith, A. C.; Seco Miguelez, M.; Zhelezov, A.

    2010-04-01

    DIRAC, the LHCb community Grid solution, was considerably reengineered in order to meet all the requirements for processing the data coming from the LHCb experiment. It covers all the tasks, starting with raw data transportation from the experiment area to grid storage, through data processing, up to final user analysis. The reengineered DIRAC3 version of the system includes a fully grid-security-compliant framework for building service-oriented distributed systems; a complete Pilot Job framework for creating efficient workload management systems; and several subsystems to manage high-level operations like data production and distribution management. The user interfaces of the DIRAC3 system, providing rich command line and scripting tools, are complemented by a full-featured Web portal providing users with secure access to all the details of the system status and ongoing activities. We will present an overview of the DIRAC3 architecture, new innovative features and the achieved performance. Extending DIRAC3 to manage computing resources beyond the WLCG grid will be discussed. Experience with using DIRAC3 by user communities other than LHCb and in application domains other than High Energy Physics will be shown to demonstrate the general-purpose nature of the system.
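
    The Pilot Job idea can be illustrated with a minimal sketch in which a lightweight pilot lands on a worker node, checks its environment, and pulls payloads from a central task queue until none remain. The queue contents and checks are placeholders, not DIRAC3 code:

      # Minimal sketch of the pilot-job pattern: the pilot validates its
      # environment, then pulls real payloads from a central task queue.
      import queue

      task_queue = queue.Queue()
      for payload in ["reco run 71474", "user analysis 8812", "MC production 3301"]:
          task_queue.put(payload)

      def environment_ok():
          """Placeholder for the sanity checks a pilot performs before
          requesting work (disk space, software area, proxy validity, ...)."""
          return True

      def run_pilot(pilot_id):
          """Pull and execute payloads until the queue is drained."""
          if not environment_ok():
              return
          while True:
              try:
                  payload = task_queue.get_nowait()
              except queue.Empty:
                  break                      # no matching work: pilot exits
              print(f"pilot-{pilot_id} executing: {payload}")
              task_queue.task_done()

      run_pilot(1)   # one pilot drains the whole toy queue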

  3. Rich internet application system for patient-centric healthcare data management using handheld devices.

    PubMed

    Constantinescu, L; Pradana, R; Kim, J; Gong, P; Fulham, Michael; Feng, D

    2009-01-01

    Rich Internet Applications (RIAs) are an emerging software platform that blurs the line between web service and native application, and is a powerful tool for handheld device deployment. By democratizing health data management and widening its availability, this software platform has the potential to revolutionize telemedicine, clinical practice, medical education and information distribution, particularly in rural areas, and to make patient-centric medical computing a reality. In this paper, we propose a telemedicine application that leverages the ability of a mobile RIA platform to transcode, organise and present textual and multimedia data, which are sourced from medical database software. We adopted a web-based approach to communicate, in real-time, with an established hospital information system via a custom RIA. The proposed solution allows communication between handheld devices and a hospital information system for media streaming with support for real-time encryption, on any RIA-enabled platform. We demonstrate our prototype's ability to securely and rapidly access, without installation requirements, medical data ranging from simple textual records to multi-slice PET-CT images and maximum intensity projections (MIPs).

  4. Next generation tools for genomic data generation, distribution, and visualization

    PubMed Central

    2010-01-01

    Background: With the rapidly falling cost and increasing availability of high throughput sequencing and microarray technologies, the bottleneck for effectively using genomic analysis in the laboratory and clinic is shifting to one of effectively managing, analyzing, and sharing genomic data. Results: Here we present three open-source, platform independent, software tools for generating, analyzing, distributing, and visualizing genomic data. These include a next generation sequencing/microarray LIMS and analysis project center (GNomEx); an application for annotating and programmatically distributing genomic data using the community vetted DAS/2 data exchange protocol (GenoPub); and a standalone Java Swing application (GWrap) that makes cutting edge command line analysis tools available to those who prefer graphical user interfaces. Both GNomEx and GenoPub use the rich client Flex/Flash web browser interface to interact with Java classes and a relational database on a remote server. Both employ a public-private user-group security model enabling controlled distribution of patient and unpublished data alongside public resources. As such, they function as genomic data repositories that can be accessed manually or programmatically through DAS/2-enabled client applications such as the Integrated Genome Browser. Conclusions: These tools have gained wide use in our core facilities, research laboratories and clinics and are freely available for non-profit use. See http://sourceforge.net/projects/gnomex/, http://sourceforge.net/projects/genoviz/, and http://sourceforge.net/projects/useq. PMID:20828407

  5. Scalable PGAS Metadata Management on Extreme Scale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Agarwal, Khushbu; Straatsma, TP

    Programming models intended to run on exascale systems have a number of challenges to overcome, especially the sheer size of the system as measured by the number of concurrent software entities created and managed by the underlying runtime. It is clear from the size of these systems that any state maintained by the programming model has to be strictly sub-linear in size, in order not to overwhelm memory usage with pure overhead. A principal feature of Partitioned Global Address Space (PGAS) models is providing easy access to global-view distributed data structures. In order to provide efficient access to these distributed data structures, PGAS models must keep track of metadata such as where array sections are located with respect to processes/threads running on the HPC system. As PGAS models and applications become ubiquitous on very large trans-petascale systems, a key component of their performance and scalability will be efficient and judicious use of memory for model overhead (metadata) compared to application data. We present an evaluation of several strategies to manage PGAS metadata that exhibit different space/time tradeoffs. We use two real-world PGAS applications to capture metadata usage patterns and gain insight into their communication behavior.
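
    One way to keep such metadata strictly sub-linear, shown in the sketch below, is to compute ownership of a regularly block-distributed array arithmetically from a few scalars, so each process stores O(1) metadata rather than a directory entry per array section. The sizes are illustrative; this is one of several possible space/time tradeoffs, not necessarily the strategy evaluated in the paper.

      # Illustrative sketch of sub-linear PGAS metadata: for a regularly
      # block-distributed global array, ownership is computed from a few
      # scalars, so each process keeps O(1) metadata instead of a
      # directory entry per array section.
      GLOBAL_N = 1_000_000             # global array length (assumed)
      NPROCS = 1024                    # number of processes (assumed)
      BLOCK = -(-GLOBAL_N // NPROCS)   # ceiling division: elements per rank

      def owner(global_index):
          """Rank that owns this element of the block-distributed array."""
          return global_index // BLOCK

      def local_offset(global_index):
          """Element's offset inside the owner's local slab."""
          return global_index % BLOCK

      i = 123_456
      print(owner(i), local_offset(i))   # -> (126, 354) with the sizes above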

  6. Decentralized asset management for collaborative sensing

    NASA Astrophysics Data System (ADS)

    Malhotra, Raj P.; Pribilski, Michael J.; Toole, Patrick A.; Agate, Craig

    2017-05-01

    There has been increased impetus to leverage Small Unmanned Aerial Systems (SUAS) for collaborative sensing applications in which many platforms work together to provide critical situation awareness in dynamic environments. Such applications require critical sensor observations to be made at the right place and time to facilitate the detection, tracking, and classification of ground-based objects. This further requires rapid response to real-world events and the balancing of multiple, competing mission objectives. In this context, human operators become overwhelmed with the management of many platforms. Further, current automated planning paradigms tend to be centralized and do not scale up well to many collaborating platforms. We introduce a decentralized approach based upon information theory and distributed fusion, which enables us to scale up to large numbers of collaborating SUAS platforms. This is exercised against a military application involving the autonomous detection, tracking, and classification of critical mobile targets. We further show, based upon Monte Carlo simulation results, that our decentralized approach outperforms more static management strategies employed by human operators and achieves similar results to a centralized approach while being scalable and robust to degradation of communication. Finally, we describe the limitations of our approach and future directions for our research.
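
    A toy version of information-theoretic tasking (not the paper's algorithm) has each platform greedily observe the target whose Gaussian state uncertainty it can reduce the most, measured in bits:

      # Toy greedy information-gain sensor tasking: observe the target
      # whose Gaussian uncertainty one measurement reduces the most.
      import math

      targets = {"t1": 25.0, "t2": 4.0, "t3": 100.0}   # prior variances
      SENSOR_NOISE_VAR = 9.0                           # assumed noise variance

      def info_gain(prior_var):
          """Entropy reduction (bits) from one scalar Gaussian measurement."""
          post_var = 1.0 / (1.0 / prior_var + 1.0 / SENSOR_NOISE_VAR)
          return 0.5 * math.log2(prior_var / post_var)

      def task_sensor():
          """Pick the observation with maximum expected information gain,
          then apply the variance update as if the measurement were taken."""
          best = max(targets, key=lambda t: info_gain(targets[t]))
          targets[best] = 1.0 / (1.0 / targets[best] + 1.0 / SENSOR_NOISE_VAR)
          return best

      for step in range(4):   # most uncertain target (t3) is observed first
          print(step, task_sensor(), {k: round(v, 1) for k, v in targets.items()})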

  7. Effects of distributed and centralized stormwater best management practices and land cover on urban stream hydrology at the catchment scale

    NASA Astrophysics Data System (ADS)

    Loperfido, J. V.; Noe, Gregory B.; Jarnagin, S. Taylor; Hogan, Dianna M.

    2014-11-01

    Urban stormwater runoff remains an important issue that causes local and regional-scale water quantity and quality issues. Stormwater best management practices (BMPs) have been widely used to mitigate runoff issues, traditionally in a centralized manner; however, problems associated with urban hydrology have remained. An emerging trend is implementation of BMPs in a distributed manner (multi-BMP treatment trains located on the landscape and integrated with urban design), but little catchment-scale performance of these systems has been reported to date. Here, stream hydrologic data (March, 2011-September, 2012) are evaluated in four catchments located in the Chesapeake Bay watershed: one utilizing distributed stormwater BMPs, two utilizing centralized stormwater BMPs, and a forested catchment serving as a reference. Among urban catchments with similar land cover, geology and BMP design standards (i.e. 100-year event), but contrasting placement of stormwater BMPs, distributed BMPs resulted in: significantly greater estimated baseflow, a higher minimum precipitation threshold for stream response and maximum discharge increases, better maximum discharge control for small precipitation events, and reduced runoff volume during an extreme (1000-year) precipitation event compared to centralized BMPs. For all catchments, greater forest land cover and less impervious cover appeared to be more important drivers than stormwater BMP spatial pattern, and caused lower total, stormflow, and baseflow runoff volume; lower maximum discharge during typical precipitation events; and lower runoff volume during an extreme precipitation event. Analysis of hydrologic field data in this study suggests that both the spatial distribution of stormwater BMPs and land cover are important for management of urban stormwater runoff. In particular, catchment-wide application of distributed BMPs improved stream hydrology compared to centralized BMPs, but not enough to fully replicate forested catchment stream hydrology. Integrated planning of stormwater management, protected riparian buffers and forest land cover with suburban development in the distributed-BMP catchment enabled multi-purpose use of land that provided esthetic value and green-space, community gathering points, and wildlife habitat in addition to hydrologic stormwater treatment.

  8. Effects of distributed and centralized stormwater best management practices and land cover on urban stream hydrology at the catchment scale

    USGS Publications Warehouse

    Loperfido, John V.; Noe, Gregory B.; Jarnagin, S. Taylor; Hogan, Dianna M.

    2014-01-01

    Urban stormwater runoff remains an important issue that causes local and regional-scale water quantity and quality issues. Stormwater best management practices (BMPs) have been widely used to mitigate runoff issues, traditionally in a centralized manner; however, problems associated with urban hydrology have remained. An emerging trend is implementation of BMPs in a distributed manner (multi-BMP treatment trains located on the landscape and integrated with urban design), but little catchment-scale performance of these systems has been reported to date. Here, stream hydrologic data (March, 2011–September, 2012) are evaluated in four catchments located in the Chesapeake Bay watershed: one utilizing distributed stormwater BMPs, two utilizing centralized stormwater BMPs, and a forested catchment serving as a reference. Among urban catchments with similar land cover, geology and BMP design standards (i.e. 100-year event), but contrasting placement of stormwater BMPs, distributed BMPs resulted in: significantly greater estimated baseflow, a higher minimum precipitation threshold for stream response and maximum discharge increases, better maximum discharge control for small precipitation events, and reduced runoff volume during an extreme (1000-year) precipitation event compared to centralized BMPs. For all catchments, greater forest land cover and less impervious cover appeared to be more important drivers than stormwater BMP spatial pattern, and caused lower total, stormflow, and baseflow runoff volume; lower maximum discharge during typical precipitation events; and lower runoff volume during an extreme precipitation event. Analysis of hydrologic field data in this study suggests that both the spatial distribution of stormwater BMPs and land cover are important for management of urban stormwater runoff. In particular, catchment-wide application of distributed BMPs improved stream hydrology compared to centralized BMPs, but not enough to fully replicate forested catchment stream hydrology. Integrated planning of stormwater management, protected riparian buffers and forest land cover with suburban development in the distributed-BMP catchment enabled multi-purpose use of land that provided esthetic value and green-space, community gathering points, and wildlife habitat in addition to hydrologic stormwater treatment.

  9. Responding Logistically to Future Natural and Man-Made Disasters and Catastrophes

    DTIC Science & Technology

    2008-03-15

    Logistics Operations, Plans and Exercises, Distribution Management and Property Management. Each competency has associated roles, missions and...professional development. LMD’s Distribution Management Division (DMD) Within the LMD, FEMA also created the Distribution Management Division (DMD...to stock in anticipation of future disasters. A Distribution Management Strategy Working Group was formed with Federal, private and nongovernmental

  10. Review of Remote Sensing Needs and Applications in Africa

    NASA Technical Reports Server (NTRS)

    Brown, Molly E.

    2007-01-01

    Remote sensing data has had an important role in identifying and responding to inter-annual variations in the African environment during the past three decades. As a largely agricultural region with diverse but generally limited government capacity to acquire and distribute ground observations of rainfall, temperature and other parameters, remote sensing is sometimes the only reliable measure of crop growing conditions in Africa. Thus, developing and maintaining the technical and scientific capacity to analyze and utilize satellite remote sensing data in Africa is critical to augmenting the continent's local weather/climate observation networks as well as its agricultural and natural resource development and management. The report 'Review of Remote Sensing Needs and Applications in Africa' has as its central goal to recommend to the US Agency for International Development an appropriate approach to support sustainable remote sensing applications at African regional remote sensing centers. The report uses "RS applications" to refer to the acquisition, maintenance and archiving, dissemination, distribution, analysis, and interpretation of remote sensing data, as well as the integration of interpreted data with other spatial data products. The report focuses on three primary remote sensing centers: (1) The AGRHYMET Regional Center in Niamey, Niger, created in 1974, is a specialized institute of the Permanent Interstate Committee for Drought Control in the Sahel (CILSS), with particular specialization in science and techniques applied to agricultural development, rural development, and natural resource management. (2) The Regional Centre for Mapping of Resources for Development (RCMRD) in Nairobi, Kenya, established in 1975 under the auspices of the United Nations Economic Commission for Africa and the Organization of African Unity (now the African Union), is an intergovernmental organization, with 15 member states from eastern and southern Africa. (3) The Regional Remote Sensing Unit (RRSU) in Gaborone, Botswana, began work in June 1988 and operates under the Agriculture Information Management System (AIMS), as part of the Food, Agriculture and Natural Resources (FANR) Directorate, based at the Southern Africa Development Community (SADC) Secretariat.

  11. Project Management Software for Distributed Industrial Companies

    NASA Astrophysics Data System (ADS)

    Dobrojević, M.; Medjo, B.; Rakin, M.; Sedmak, A.

    This paper gives an overview of the development of a new software solution for project management, intended mainly for use in industrial environments. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed, industrial companies. With this in mind, special care has been devoted to the development of appropriate tools for tracking, storing and analyzing information about the project, and for delivering it on time to the right team members or other responsible persons. The proposed solution is Internet-based and uses the LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform, because of its stability, versatility, open source technology and simple maintenance. The modular structure of the software makes it easy to customize according to client-specific needs, with a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, short training, and the need for only basic computer skills on the part of operators.

  12. Distributed utility technology cost, performance, and environmental characteristics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan, Y; Adelman, S

    1995-06-01

    Distributed Utility (DU) is an emerging concept in which modular generation and storage technologies sited near customer loads in distribution systems and specifically targeted demand-side management programs are used to supplement conventional central station generation plants to meet customer energy service needs. Research has shown that implementation of the DU concept could provide substantial benefits to utilities. This report summarizes the cost, performance, and environmental and siting characteristics of existing and emerging modular generation and storage technologies that are applicable under the DU concept. It is intended to be a practical reference guide for utility planners and engineers seeking information on DU technology options. This work was funded by the Office of Utility Technologies of the US Department of Energy.

  13. [The development of a distribution system for medical lasers and its clinical application].

    PubMed

    Okae, S; Ishiguchi, T; Ishigaki, T; Sakuma, S

    1991-02-25

    We developed a new laser beam generator system which can deliver a laser beam to multiple terminals in distant clinical therapy rooms. The system possesses distribution equipment by which Nd-YAG laser power is distributed to 8 output terminals under computer control. The distributed laser beam is delivered to each distant terminal, together with clinical information, through optical fiber. Fundamental studies demonstrated that the laser beam could be transported over a distance of 30 m with only 10% energy loss and without dangerous heating at the connection parts. There appears to be no disadvantage associated with distributing the laser beam. In the clinical study, the system was applied to five patients with symptoms including hemosputum, esophageal stenosis, hemorrhage, lip ulcer and pain. The clinical usefulness of the system was proved. The advantages of the system are as follows: 1. Cost reduction due to multiple use of a single laser source. 2. No need to transport the equipment. 3. No requirement for a wide space to install the equipment in the distant room. 4. Efficient management and maintenance of the system through centralization. Further improvements, e.g., simultaneous use at multiple terminals and extension of the transport distance up to 340 m, would make the system even more useful for clinical application.

  14. Sustainable management of agriculture activity on areas with soil vulnerability to compaction trough a developed decision support system (DSS)

    NASA Astrophysics Data System (ADS)

    Moretto, Johnny; Fantinato, Luciano; Rasera, Roberto

    2017-04-01

    One of the main environmental effects of agriculture is its negative impact on areas with soils vulnerable to compaction and on subsurface water, caused by the distribution of inputs and treatments. "Precision Farming" may represent a solution. Precision Farming refers to a management concept focusing on (near-real-time) observation, measurement and response to inter- and intra-variability in crops, fields and animals. Potential benefits may include increased crop yields and animal performance, cost and labour reduction and optimisation of process inputs, all of which would increase profitability. At the same time, Precision Farming should increase work safety and reduce the environmental impacts of agriculture and farming practices, thus contributing to the sustainability of agricultural production. The concept has been made possible by the rapid development of ICT-based sensor technologies and procedures, along with dedicated software that, in the case of arable farming, provides the link between spatially distributed variables and appropriate farming practices such as tillage, seeding, fertilisation, herbicide and pesticide application, and harvesting. Much progress has been made in terms of technical solutions, but major steps are still required to introduce this approach into common agricultural practice. There are currently a large number of sensors capable of collecting data for various applications (e.g. vegetation vigour indices, soil moisture, digital elevation models, meteorology, etc.). The resulting large volumes of data need to be standardised, processed and integrated using metadata analysis of spatial information, to generate useful input for decision-support systems. In this context, a user-friendly IT application has been developed for organising and processing large volumes of data from different types of remote sensing and meteorological sensors, and for integrating these data into farm management support systems. With this application it will be possible to implement numerical models that advise the farm manager on the best time to work in the field and/or the best trajectory to follow with a GPS navigation system on soils vulnerable to compaction, in addition to providing "as applied" maps that indicate the exact quantity of inputs and treatments needed in each part of the field. This new working model for data management will allow more efficient resource usage, contributing to a more sustainable agriculture through greater economic benefit for farmers and reduced impacts on soil and subsurface water.

  15. Advanced Grid Control Technologies Workshop Series | Energy Systems

    Science.gov Websites

    Workshop series on advanced distribution management systems (ADMS) and microgrid controls. July 7, 2015: Advanced Distribution Management Systems (ADMS), welcome and NREL overview; keynote: Next-Generation Distribution Management Systems and Distributed Resource Energy Management.

  16. Energy management and cooperation in microgrids

    NASA Astrophysics Data System (ADS)

    Rahbar, Katayoun

    Microgrids are key components of future smart power grids, which integrate distributed renewable energy generators to efficiently serve the load demand locally. However, the random and intermittent characteristics of renewable energy generation may hinder the reliable operation of microgrids. This thesis is thus devoted to investigating new strategies for microgrids to optimally manage their energy consumption, energy storage system (ESS) and cooperation in real time to achieve reliable and cost-effective operation. The thesis starts with a single microgrid system. The optimal energy scheduling and ESS management policy is derived to minimize the energy cost of the microgrid resulting from drawing conventional energy from the main grid, under both the off-line and online setups, where the renewable energy generation/load demand are assumed to be non-causally known and causally known at the microgrid, respectively. The proposed online algorithm is designed based on the optimal off-line solution and works under arbitrary (even unknown) realizations of future renewable energy generation/load demand. Therefore, it is more practically applicable than solutions based on conventional techniques such as dynamic programming and stochastic programming, which require prior knowledge of renewable energy generation and load demand realizations/distributions. Next, for a group of microgrids that cooperate in energy management, efficient methods are studied for sharing energy among them in both fully and partially cooperative scenarios, where microgrids are of common interest and self-interested, respectively. For the fully cooperative energy management, the off-line optimization problem is first formulated and optimally solved, and a distributed algorithm is proposed to minimize the total (sum) energy cost of the microgrids. Inspired by the results obtained from the off-line optimization, efficient online algorithms are proposed for real-time energy management, which are of low complexity and work given arbitrary realizations of renewable energy generation/load demand. On the other hand, for self-interested microgrids, the partially cooperative energy management is formulated and a distributed algorithm is proposed to optimize the energy cooperation such that the energy costs of individual microgrids are reduced simultaneously relative to the case without energy cooperation, while limited information is shared among the microgrids and the central controller.
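
    A minimal causal (online) dispatch rule of the kind such online algorithms improve upon can be sketched as follows; at each step it uses only the current surplus or deficit, with no forecast. All parameters are illustrative:

      # Minimal causal (online) dispatch baseline: at each interval use only
      # the current renewable surplus or deficit, with no future knowledge.
      ESS_KWH, RATE_KW = 20.0, 5.0   # assumed storage capacity and rate limit

      def online_step(soc, renewable_kw, load_kw, dt_h=1.0):
          """Return (grid_draw_kw, new_soc) for one real-time interval."""
          net = renewable_kw - load_kw
          if net >= 0:                      # surplus: store it, spill the rest
              charge = min(net, RATE_KW, (ESS_KWH - soc) / dt_h)
              return 0.0, soc + charge * dt_h
          deficit = -net                    # shortfall: discharge, then buy
          discharge = min(deficit, RATE_KW, soc / dt_h)
          return deficit - discharge, soc - discharge * dt_h

      soc, bought_kwh = 0.0, 0.0
      for ren, load in [(8, 3), (9, 3), (2, 6), (0, 7), (0, 5)]:
          grid, soc = online_step(soc, ren, load)
          bought_kwh += grid
      print(round(bought_kwh, 1), round(soc, 1))   # energy bought vs. stored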

  17. Theater Logistics Management: A Case for a Joint Distribution Solution

    DTIC Science & Technology

    2008-03-15

    Multinational (JIIM) operations necessitate creating joint-multinational-based distribution management centers which effectively manage materiel...in the world. However, as the operation continued, the inherent weakness of the intra-theater logistical distribution management link became clear...compounded the distribution management problem. The common thread between each of the noted GAO failures is the lack of a defined joint, theater

  18. A Rendering System Independent High Level Architecture Implementation for Networked Virtual Environments

    DTIC Science & Technology

    2002-09-01

    Table-of-contents excerpt: Time Management; Data Distribution Management; Ownership Management; Additional Objects and Interactions; Figure 6, Data Distribution Management (from ref. 2); Figure 7, RTI and Federate Code Responsibilities (from ref. 2).

  19. Advanced Distribution Network Modelling with Distributed Energy Resources

    NASA Astrophysics Data System (ADS)

    O'Connell, Alison

    The addition of new distributed energy resources, such as electric vehicles, photovoltaics, and storage, to low voltage distribution networks means that these networks will undergo major changes in the future. Traditionally, distribution systems would have been a passive part of the wider power system, delivering electricity to the customer and not needing much control or management. However, the introduction of these new technologies may cause unforeseen issues for distribution networks, due to the fact that they were not considered when the networks were originally designed. This thesis examines different types of technologies that may begin to emerge on distribution systems, as well as the resulting challenges that they may impose. Three-phase models of distribution networks are developed and subsequently utilised as test cases. Various management strategies are devised for the purposes of controlling distributed resources from a distribution network perspective. The aim of the management strategies is to mitigate those issues that distributed resources may cause, while also keeping customers' preferences in mind. A rolling optimisation formulation is proposed as an operational tool which can manage distributed resources, while also accounting for the uncertainties that these resources may present. Network sensitivities for a particular feeder are extracted from a three-phase load flow methodology and incorporated into an optimisation. Electric vehicles are the focus of the work, although the method could be applied to other types of resources. The aim is to minimise the cost of electric vehicle charging over a 24-hour time horizon by controlling the charge rates and timings of the vehicles. The results demonstrate the advantage that controlled EV charging can have over an uncontrolled case, as well as the benefits provided by the rolling formulation and updated inputs in terms of cost and energy delivered to customers. Building upon the rolling optimisation, a three-phase optimal power flow method is developed. The formulation has the capability to provide optimal solutions for distribution system control variables, for a chosen objective function, subject to required constraints. It can, therefore, be utilised for numerous technologies and applications. The three-phase optimal power flow is employed to manage various distributed resources, such as photovoltaics and storage, as well as distribution equipment, including tap changers and switches. The flexibility of the methodology allows it to be applied in both an operational and a planning capacity. The three-phase optimal power flow is employed in an operational planning capacity to determine volt-var curves for distributed photovoltaic inverters. The formulation finds optimal reactive power settings for a number of load and solar scenarios and uses these reactive power points to create volt-var curves. Volt-var curves are determined for 10 PV systems on a test feeder. A universal curve is also determined which is applicable to all inverters. The curves are validated by testing them in a power flow setting over a 24-hour test period. The curves are shown to provide advantages to the feeder in terms of reduction of voltage deviations and unbalance, with the individual curves proving to be more effective. It is also shown that adding a new PV system to the feeder only requires analysis for that system. 
In order to represent the uncertainties that inherently occur on distribution systems, an information gap decision theory method is also proposed and integrated into the three-phase optimal power flow formulation. This allows for robust network decisions to be made using only an initial prediction for what the uncertain parameter will be. The work determines tap and switch settings for a test network with demand being treated as uncertain. The aim is to keep losses below a predefined acceptable value. The results provide the decision maker with the maximum possible variation in demand for a given acceptable variation in the losses. A validation is performed with the resulting tap and switch settings being implemented, and shows that the control decisions provided by the formulation keep losses below the acceptable value while adhering to the limits imposed by the network.
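
    The 24-hour charging-cost minimisation can be written as a small linear program, sketched below with made-up prices and limits; the rolling formulation in the thesis re-solves such a problem as forecasts update:

      # Minimal sketch of 24-hour EV charging as a linear program: choose
      # hourly charge rates to minimise cost subject to a rate limit and a
      # required energy delivery. Prices and limits are illustrative.
      from scipy.optimize import linprog

      prices = [0.10] * 7 + [0.25] * 12 + [0.10] * 5   # EUR/kWh, cheap off-peak
      P_MAX = 7.0        # charger limit (kW)
      E_REQ = 30.0       # energy the vehicle needs by morning (kWh)

      res = linprog(
          c=prices,                                  # minimise total cost
          A_eq=[[1.0] * 24], b_eq=[E_REQ],           # deliver exactly E_REQ
          bounds=[(0.0, P_MAX)] * 24,                # rate limit per hour
          method="highs",
      )
      print([round(p, 1) for p in res.x])   # charging lands in the cheap hours
      print(round(res.fun, 2))              # total cost in EUR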

  20. Web Service Distributed Management Framework for Autonomic Server Virtualization

    NASA Astrophysics Data System (ADS)

    Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea

    Virtualization for the x86 platform has imposed itself recently as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines - real or virtual - to use at a given time, and add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built is also very important, as a robust and open framework is needed. One such management framework is the Web Service Distributed Management (WSDM) system, an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
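
    The self-optimization loop can be pictured as a monitor-decide-act cycle that scales the number of virtual machines from utilization measurements. The thresholds below are illustrative assumptions; in the paper's framework such events would travel as WSDM web service messages:

      # Sketch of a self-optimization control loop: provision or deprovision
      # virtual machines based on measured utilization. Thresholds assumed.
      HIGH, LOW = 0.80, 0.30        # assumed utilization thresholds
      MIN_VMS, MAX_VMS = 1, 10

      def decide(avg_utilization, vm_count):
          """Return the new VM count for one pass of the control loop."""
          if avg_utilization > HIGH and vm_count < MAX_VMS:
              return vm_count + 1            # scale out before saturation
          if avg_utilization < LOW and vm_count > MIN_VMS:
              return vm_count - 1            # scale in to reclaim resources
          return vm_count                    # within band: do nothing

      vms = 2
      for util in [0.85, 0.90, 0.70, 0.25, 0.20]:
          vms = decide(util, vms)
          print(f"utilization={util:.2f} -> {vms} VM(s)")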

  1. Restoration and management of shortleaf pine in pure and mixed stands - science, empirical observation, and the wishful application of generalities

    Treesearch

    James M. Guldin

    2007-01-01

    Shortleaf pine (Pinus echinata Mill.) is the only naturally-occurring pine distributed throughout the Ozark-Ouachita Highlands. Once dominant on south-facing and ridgetop stands and important in mixed stands, it is now restricted to south- and southwestfacing slopes in the Ouachita and southern Ozark Mountains, and to isolated pure and mixed stands...

  2. Future War: An Assessment of Aerospace Campaigns in 2010,

    DTIC Science & Technology

    1996-01-01

    theoretician: "The impending sixth generation of warfare, with its centerpiece of superior data-processing to support precision smart weaponry, will radically...tions concept of " smart push, warrior pull." If JFACC were colocated with the worldwide intelligence manager, unit taskings and the applicable...intelligence information could be distributed concurrently (" smart push"). Intelligence officers sitting alongside the operational tasking officers would

  3. Restoration and management of shortleaf pine in pure and mixed stands--science, empirical observation, and the wishful application of generalities

    Treesearch

    James M. Guldin

    2007-01-01

    Shortleaf pine (Pinus echinata Mill.) is the only naturally-occurring pine distributed throughout the Ozark-Ouachita Highlands. Once dominant on south-facing and ridgetop stands and important in mixed stands, it is now restricted to south- and southwestfacing slopes in the Ouachita and southern Ozark Mountains, and to isolated pure and mixed stands...

  4. Real Estate Site Selection: An Application of Artificial Intelligence for Military Retail Facilities

    DTIC Science & Technology

    2006-09-01

    Information and Spatial Analysis (SCGISA), University of Sheffield. Kotler, P. (1984). Marketing Management: Analysis, Planning, and Control...Spatial Distribution of Retail Sales. Journal of Real Estate Finance and Economics, Vol. 31 Iss. 1, 53. Lilien, G., & Kotler, P. (1983). Marketing ...commissaries). The current business model for military retail facilities may not be optimized based upon current market trend data. Optimizing

  5. Globus | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    Globus software services provide secure cancer research data transfer, synchronization, and sharing in distributed environments at large scale. These services can be integrated into applications and research data gateways, leveraging Globus identity management, single sign-on, search, and authorization capabilities. Globus Genomics integrates Globus with the Galaxy genomics workflow engine and Amazon Web Services to enable cancer genomics analysis that can elastically scale compute resources with demand.

  6. Networking and Information Technology Research and Development. Supplement to the President’s Budget for FY 2002

    DTIC Science & Technology

    2001-07-01

    Web-based applications to improve health data systems and quality of care; innovative strategies for data collection in clinical settings; approaches...research to increase interoperability and integration of software in distributed systems; protocols and tools for data annotation and management; and...Generation National Defense and National Security Systems; Improved Health Care Systems for All Citizens

  7. Heterogeneous Collaborative Sensor Network for Electrical Management of an Automated House with PV Energy

    PubMed Central

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Gutiérrez, Álvaro; Monasterio-Huelin, Félix; Caamaño-Martín, Estefanía; Masa-Bote, Daniel; Jiménez-Leube, Javier

    2011-01-01

    In this paper we present a heterogeneous collaborative sensor network for electrical management in the residential sector. Improving demand-side management is very important in distributed energy generation applications. Sensing and control are the foundations of the “Smart Grid” which is the future of large-scale energy management. The system presented in this paper has been developed on a self-sufficient solar house called “MagicBox” equipped with grid connection, PV generation, lead-acid batteries, controllable appliances and smart metering. Therefore, there is a large number of energy variables to be monitored that allow us to precisely manage the energy performance of the house by means of collaborative sensors. The experimental results, performed on a real house, demonstrate the feasibility of the proposed collaborative system to reduce the consumption of electrical power and to increase energy efficiency. PMID:22247680

  8. Monitoring Fires from Space and Getting Data into the Hands of Users: An Example from NASA's Fire Information for Resource Management System (FIRMS)

    NASA Astrophysics Data System (ADS)

    Davies, D.; Wong, M.; Ilavajhala, S.; Molinario, G.; Justice, C. O.

    2012-12-01

    This paper discusses the broad uptake of MODIS near-real-time (NRT) active fire data for applications. Prior to the launch of MODIS, most real-time satellite-derived fire information was obtained from NOAA AVHRR via direct broadcast (DB) systems. Whilst there were efforts to make direct broadcast stations affordable in developing countries, such as through the Local Applications of Satellite Remote Technologies (LARST), these systems were relatively few and far between and required expertise to manage and operate. One such system was in Etosha National Park (ENP) in Namibia. Prior to the installation of the AVHRR DB system in ENP, fires were reported by rangers and the quality, accuracy and timing of reports was variable. With the introduction of the DB station, early warning of fires improved and fire maps could be produced for park managers within 2-3 hours by staff trained to process data, interpret images and produce maps. Upkeep and maintenance of such systems was relatively costly for parks with limited resources; therefore, when global fire data from MODIS became available, uptake was widespread. NRT data from MODIS became available through a collaboration between the MODIS Fire Team and the US Forest Service (USFS) Remote Sensing Applications Center to provide rapid access to imagery to help fight the Montana wildfires of 2001. This prompted the development of a Rapid Response System for fire data that eventually led to the operational use of MODIS data by the USFS for fire monitoring. Building on this success, the Fire Information for Resource Management System (FIRMS) project was funded by NASA Applications, and developed under the umbrella of the GOFC-GOLD Fire program, to further improve products and services for the global fire information community. FIRMS was developed as a web-based geospatial tool, offering a range of geospatial data services, including a fire email alert service which is widely used around the world. FIRMS was initially developed to meet the needs of protected area managers, who, like the managers in ENP, had limited resources to cover large, remote areas. It was quickly realized that these data could be used for a wide range of applications beyond wildfire management, including ecological studies, informing fire policy and public outreach. Today, FIRMS sends approximately 2000 email alerts daily to users in over 120 countries. In addition to the direct users of the MODIS fire data, there are a growing number of brokers who add value to the data by combining them with targeted geospatial information and re-distributing the information. In addition to the English, French and Spanish fire notifications sent out by FIRMS, some brokers translate the alerts into local languages and distribute them in Thailand, Indonesia, Russia and India.

  9. LHCb Conditions database operation assistance systems

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.

    2012-12-01

    The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases in which the CondDB replication was corrupted. The second is an automated distribution system for the SQLite-based CondDB, which also provides smart backup and checkout mechanisms for the CondDB managers and LHCb users, respectively. The third is a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter has been fully designed and is currently moving to the implementation stage.

  10. EXODUS: Integrating intelligent systems for launch operations support

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1991-01-01

    Kennedy Space Center (KSC) is developing knowledge-based systems to automate critical operations functions for the space shuttle fleet. Intelligent systems will monitor vehicle and ground support subsystems for anomalies, assist in isolating and managing faults, and plan and schedule shuttle operations activities. These applications are being developed independently of one another, using different representation schemes, reasoning and control models, and hardware platforms. KSC has recently initiated the EXODUS project to integrate these stand-alone applications into a unified, coordinated intelligent operations support system. EXODUS will be constructed using SOCIAL, a tool for developing distributed intelligent systems. EXODUS, SOCIAL, and initial prototyping efforts using SOCIAL to integrate and coordinate selected EXODUS applications are described.

  11. Systems Analysis Directorate Activities Summary August 1977

    DTIC Science & Technology

    1977-09-01

    are: a. Cataloging direction b. Requirements computation c. Procurement direction d. Distribution management e. Disposal direction f..."inventory management," as a responsibility of NICP's, includes cataloging, requirements computation, procurement direction, distribution management, maintenance...functions are cataloging, major item management, secondary item management, procurement direction, distribution management, overhaul and rebuild

  12. Power and Energy Management Strategy for Solid State Transformer Interfaced DC Microgrid

    NASA Astrophysics Data System (ADS)

    Yu, Xunwei

    As renewable energy finds more and more applications in everyday life, how to construct a microgrid (MG) from distributed renewable energy resources and energy storage, and then supply reliable and flexible power to the conventional power system, has become a central research topic. Compared to the AC microgrid (AC MG), the DC microgrid (DC MG) has attracted more attention because of its advantages, such as high efficiency and easy integration of DC energy sources and energy storage. Furthermore, the interaction between the DC MG and the distribution system is an important practical issue. In the Future Renewable Electric Energy Delivery and Management Systems Center (FREEDM), the Solid State Transformer (SST) has been built; it can step the distribution-level voltage down directly to low-voltage AC and DC (typically household level). The SST therefore offers a promising alternative to the traditional transformer for interfacing a low-voltage MG with the distribution system, and an SST-interfaced DC MG is proposed here. However, this system also brings new design and control challenges, because it is more complex, comprising distributed energy sources and storage, loads, and the SST. The purpose of this dissertation is to design a reliable and flexible SST-interfaced DC MG based on renewable energy sources and energy storage that can operate in both islanding mode and SST-enabled mode. The Dual Half Bridge (DHB) is selected as the topology for the DC/DC converters in the DC MG; its operating procedure and average model are analyzed, forming the basis for system modeling, control and operation. Furthermore, two novel power and energy management strategies are proposed. The first is a distributed energy management strategy for the DC MG operating in SST-enabled mode. In this method, the system is under distributed control to increase reliability, while both the power sharing between the DC MG and the SST and the battery State of Charge (SOC) are considered in the energy management strategy. The DC MG output power is then controllable, and the battery charges and discharges autonomously based on its SOC and local system information, without communication. The system operation modes are defined and analyzed, and simulation results verify the strategy. The second power and energy management strategy is hierarchical control, with a three-layer structure. The first layer is the primary control for the DC MG in islanding mode, which guarantees power balance within the DC MG without communication to increase reliability. The second layer implements the seamless transition of the DC MG from islanding mode to SST-enabled mode. The third layer is the tertiary control for system energy management, which does involve communication; it not only controls the overall DC MG output power but also manages the charge and discharge status of each battery module based on its SOC. Simulation and experimental results verify these methods. Some practical issues of the SST-interfaced DC MG are also investigated. The power unbalance issue of the SST is analyzed, and a distributed control strategy is presented to solve it, verified by simulation and experimental results. Furthermore, a control strategy for SST-interfaced DC MG blackout is presented and validated by simulation.
    A plug-and-play SST-interfaced DC MG is also constructed and demonstrated. Several battery and PV modules form a typical DC MG, and a DC source is used to emulate the SST. The system is under distributed control and can operate in islanding mode and SST-enabled mode. The experimental results verify that individual modules can plug into and unplug from the DC MG at will without affecting system stability. Furthermore, communication ports are embedded in the system, and a universal communication protocol is proposed to implement the plug-and-play function. A unique ID is defined for each PV and battery module for system recognition, and a database stores the whole system's data for visual display, monitoring and history queries.
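
    One way to make the communication-free, SOC-aware power sharing described above concrete is a droop law in which each battery module measures only the local DC bus voltage and scales its droop gain by its own state of charge. The sketch below is a minimal illustration under assumed voltage levels and gains; it is not the dissertation's actual controller or parameters.

```python
# Minimal sketch of SOC-weighted droop control for battery modules on a DC
# bus -- illustrative assumptions throughout, not the dissertation's design.

V_NOM = 380.0   # nominal DC bus voltage (V), assumed
K_DROOP = 2.0   # base droop gain, assumed

def battery_power(v_bus, soc, k_soc=1.5):
    """Per-unit power command for one battery module.

    Each module sees only the local bus voltage and its own SOC, so no
    communication is needed: a sagging bus commands discharge, a high bus
    commands charge, and a higher SOC biases the module toward discharging.
    """
    droop = K_DROOP / max(soc ** k_soc, 1e-3)  # weaker droop at high SOC
    return (V_NOM - v_bus) / droop             # >0 discharge, <0 charge

# Two modules at different SOC share the same bus sag unequally, as intended:
for soc in (0.9, 0.4):
    print(f"SOC={soc:.1f}: p = {battery_power(375.0, soc):+.2f} pu")
```

    With this shaping, a module at 90% SOC supplies roughly three times the discharge power of one at 40% SOC for the same bus voltage sag, so module SOCs tend to converge without any information exchange.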

  13. Calibration of soil moisture flow simulation models aided by the active heated fiber optic distributed temperature sensing AHFO

    NASA Astrophysics Data System (ADS)

    Rodriguez-Sinobas, Leonor; Zubelzu, Sergio; Sobrino, Fernando; Sánchez, Raúl

    2017-04-01

    Most of the studies dealing with the development of water flow simulation models in soils are calibrated using experimental data measured by soil probes or tensiometers located at specific points in the study area. Since the beginning of the 21st century, however, Distributed Fiber Optic Temperature Sensing, which estimates temperature variation along a fiber optic cable, has been assessed in multiple environmental applications. Recently, its combination with an active heating pulse technique (AHFO) has been reported as a way to estimate soil moisture. This method applies a known amount of heat to the soil and monitors the temperature evolution, which depends mainly on the soil moisture content. It allows soil water content to be estimated every 12.5 cm along fiber optic cables up to 1500 m long, with 2% accuracy, every second. This study presents the calibration of a soil water flow model (developed in Hydrus 2D) with the AHFO technique. The model predicts the distribution of soil water content in a green area irrigated by sprinklers. Several irrigation events were evaluated in a green area at the ETSI Agronómica, Agroalimentaria y Biosistemas in Madrid, where 147 m of fiber optic cable is deployed at 15 cm depth. The Distributed Temperature Sensing (DTS) unit, a SILIXA ULTIMA SR (Silixa Ltd, UK), has a spatial resolution of 0.29 m. Data logged by the DTS unit before, during and after each irrigation event were used to calibrate the Hydrus 2D estimates during infiltration and redistribution of soil water within the irrigation interval. References: Karandish, F., & Šimůnek, J. (2016). A field-modeling study for assessing temporal variations of soil-water-crop interactions under water-saving irrigation strategies. Agricultural Water Management, 178, 291-303. Li, Y., Šimůnek, J., Jing, L., Zhang, Z., & Ni, L. (2014). Evaluation of water movement and water losses in a direct-seeded-rice field experiment using Hydrus-1D. Agricultural Water Management, 142, 38-46. Tan, X., Shao, D., & Liu, H. (2014). Simulating soil water regime in lowland paddy fields under different water managements using HYDRUS-1D. Agricultural Water Management, 132, 69-78.
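
    As a concrete illustration of the AHFO step described above, a common approach in the literature is to integrate the temperature rise along the heated cable over the pulse (a cumulative temperature, T_cum) and fit a calibration curve between T_cum and volumetric water content using co-located probe readings. The sketch below follows that idea with an assumed power-law curve and invented sample numbers; it is not the calibration used in this study.

```python
# Minimal sketch of an AHFO calibration -- the power-law form and all
# numbers are illustrative assumptions, not this study's calibration.
import numpy as np
from scipy.optimize import curve_fit

def tcum(temps, t_step):
    """Cumulative temperature rise above the pre-pulse baseline (K s)."""
    rise = temps - temps[0]
    return float(rise.sum() * t_step)

def calib(theta, a, b):
    # Wetter soil conducts heat away faster, so T_cum falls with theta.
    return a * theta ** (-b)

# Hypothetical calibration pairs: probe theta (m3/m3) vs measured T_cum (K s)
theta_ref = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
tcum_ref = np.array([95.0, 62.0, 38.0, 29.0, 24.0])
(a, b), _ = curve_fit(calib, theta_ref, tcum_ref, p0=(10.0, 0.5))

# Estimate theta at a new cable location by inverting the fitted curve
temps = np.array([20.0, 21.5, 23.0, 24.0, 24.5])  # degC during a pulse, assumed
theta_est = (a / tcum(temps, t_step=5.0)) ** (1.0 / b)
print(f"estimated theta = {theta_est:.3f} m3/m3")
```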

  14. Air System Information Management

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.

    2004-01-01

    I flew to Washington last week, a trip rich in distributed information management. Buying tickets, at the gate, in flight, landing and at the baggage claim, myriad messages about my reservation, the weather, our flight plans, gates, bags and so forth flew among a variety of travel agency, airline and Federal Aviation Administration (FAA) computers and personnel. By and large, each kind of information ran on a particular application, often specialized to its own data formats and communications network. I went to Washington to attend an FAA meeting on System-Wide Information Management (SWIM) for the National Airspace System (NAS) (http://www.nasarchitecture.faa.gov/Tutorials/NAS101.cfm). NAS (and its information infrastructure, SWIM) is an attempt to bring greater regularity, efficiency and uniformity to the collection of stovepipe applications now used to manage air traffic. Current systems hold information about flight plans, flight trajectories, air turbulence, current and forecast weather, radar summaries, hazardous condition warnings, airport and airspace capacity constraints, temporary flight restrictions, and so forth. Information moving among these stovepipe systems is usually mediated by people (for example, air traffic controllers) or single-purpose applications. People, whose intelligence is critical for difficult tasks and unusual circumstances, are not as efficient as computers for tasks that can be automated. Better information sharing can lead to higher system capacity, more efficient utilization and safer operations. Better information sharing through greater automation is possible, though not necessarily easy.
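
    The stovepipe-versus-SWIM contrast above is essentially point-to-point interfaces versus publish/subscribe: a producer publishes a typed message once, and any number of consumer systems subscribe by topic. The sketch below is a toy illustration of that pattern only; the topic names and message fields are invented, and real SWIM middleware is far richer than this.

```python
# Toy publish/subscribe sketch of the SWIM idea -- topics and message fields
# are invented examples, not any actual FAA interface.
from collections import defaultdict
from typing import Callable

class Broker:
    """Routes each published message to every handler subscribed to its topic."""
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, msg: dict) -> None:
        for handler in self._subs[topic]:
            handler(msg)

broker = Broker()
# Two independent consumers of one weather feed -- no pairwise interfaces:
broker.subscribe("weather.radar", lambda m: print("controller display:", m))
broker.subscribe("weather.radar", lambda m: print("capacity planner:", m))
broker.publish("weather.radar", {"cell": "DCA-12", "intensity": "heavy"})
```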

  15. Managing more than the mean: Using quantile regression to identify factors related to large elk groups

    USGS Publications Warehouse

    Brennan, Angela K.; Cross, Paul C.; Creely, Scott

    2015-01-01

    Synthesis and applications. Our analysis of elk group size distributions using quantile regression suggests that private land, irrigation, open habitat, elk density and wolf abundance can affect large elk group sizes. Thus, to manage larger groups by removal or dispersal of individuals, we recommend incentivizing hunting on private land (particularly if irrigated) during the regular and late hunting seasons, promoting tolerance of wolves on private land (if elk aggregate in these areas to avoid wolves) and creating more winter range and varied habitats. Relationships to the variables of interest also differed by quantile, highlighting the importance of using quantile regression to examine response variables more completely to uncover relationships important to conservation and management.
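
    For readers unfamiliar with the method, quantile regression fits separate conditional quantiles (for example, the median and the 95th percentile of group size) rather than the conditional mean, so a covariate can matter greatly for large groups even when it barely moves the average. The sketch below demonstrates this on synthetic data with statsmodels; the covariate name and numbers are invented stand-ins, not the study's elk data.

```python
# Minimal quantile-regression sketch on synthetic stand-in data (not the
# study's elk observations); the covariate name is an invented example.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"open_habitat": rng.uniform(0.0, 1.0, n)})
# By construction, the upper quantiles respond more strongly than the median:
df["group_size"] = 5.0 + 10.0 * df["open_habitat"] * rng.exponential(1.0, n)

for q in (0.50, 0.75, 0.95):
    fit = smf.quantreg("group_size ~ open_habitat", df).fit(q=q)
    print(f"q={q:.2f}: open_habitat slope = {fit.params['open_habitat']:.1f}")
```

    The slope grows with the quantile here, which is exactly the pattern that mean regression would miss and that motivates managing "more than the mean".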

  16. A science data gateway for environmental management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, Deborah A.; Faybishenko, Boris; Freedman, Vicky L.

    Science data gateways are effective in providing complex science data collections to world-wide user communities. In this paper we describe a gateway for the Advanced Simulation Capability for Environmental Management (ASCEM) framework. Built on top of established web service technologies, the ASCEM data gateway is specifically designed for environmental modeling applications. Its key distinguishing features include: (1) handling of complex spatiotemporal data, (2) offering a variety of selective data access mechanisms, (3) providing state-of-the-art plotting and visualization of spatiotemporal data records, and (4) integrating seamlessly with a distributed workflow system using a RESTful interface. ASCEM project scientists have been using this data gateway since 2011.
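
    Feature (2), selective data access over a RESTful interface, typically amounts to filtering records by spatial, variable and time selectors in the query string. The sketch below shows the general shape of such a request; the base URL, parameter names and response layout are hypothetical placeholders, not the actual ASCEM gateway API.

```python
# Hypothetical REST query -- the base URL, parameters and response layout
# are placeholders, not the actual ASCEM gateway API.
import requests

BASE = "https://example.org/ascem-gateway/api"  # placeholder URL

resp = requests.get(
    f"{BASE}/records",
    params={
        "site": "well-299-W15",   # hypothetical spatial selector
        "variable": "tritium",    # hypothetical observed quantity
        "start": "2011-01-01",    # time window
        "end": "2011-12-31",
        "format": "json",
    },
    timeout=30,
)
resp.raise_for_status()
for rec in resp.json()["records"]:  # assumed response shape
    print(rec["timestamp"], rec["value"])
```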

  17. Configuration Management of an Optimization Application in a Research Environment

    NASA Technical Reports Server (NTRS)

    Townsend, James C.; Salas, Andrea O.; Schuler, M. Patricia

    1999-01-01

    Multidisciplinary design optimization (MDO) research aims to increase interdisciplinary communication and reduce design cycle time by combining system analyses (simulations) with design space search and decision making. The High Performance Computing and Communication Program's current High Speed Civil Transport application, HSCT4.0, at NASA Langley Research Center involves a highly complex analysis process with high-fidelity analyses that are more realistic than previous efforts at the Center. The multidisciplinary processes have been integrated to form a distributed application by using the Java language and Common Object Request Broker Architecture (CORBA) software techniques. HSCT4.0 is a research project in which both the application problem and the implementation strategy have evolved as the MDO and integration issues became better understood. Whereas earlier versions of the application and integrated system were developed with a simple, manual software configuration management (SCM) process, it was evident that this larger project required a more formal SCM procedure. This report briefly describes the HSCT4.0 analysis and its CORBA implementation and then discusses some SCM concepts and their application to this project. In anticipation that SCM will prove beneficial for other large research projects, the report concludes with some lessons learned in overcoming SCM implementation problems for HSCT4.0.

  18. Where is the Battle-Line for Supply Contractors?

    DTIC Science & Technology

    1999-04-01

    … military supply distribution system initiates at the Theater Distribution Management Center (TMC). … terms of distribution success on the battlefield. There are three components which comprise the idea of distribution and distribution management. They … throughout the distribution pipeline. Visibility is the most essential component of distribution management. History is full of examples that prove …

  19. Army Battlefield Distribution Through the Lens of OIF: Logical Failures and the Way Ahead

    DTIC Science & Technology

    2005-02-02

    … Historical Context of Logistics and Distribution Management Transformation … Theater Distribution Units … Figure 1: Distribution Management Center … consumer and a potential provider of logistics. … The critical role of …

  20. Knowledge management for chronic patient control and monitoring

    NASA Astrophysics Data System (ADS)

    Pedreira, Nieves; Aguiar-Pulido, Vanessa; Dorado, Julián; Pazos, Alejandro; Pereira, Javier

    2014-10-01

    Knowledge Management (KM) can be seen as the process of capturing, developing, sharing, and effectively using organizational knowledge. In this context, the work presented here proposes a KM system to be used in the scope of chronic patient control and monitoring for distributed research projects. It was designed to enable communication between patients and doctors, as well as to be used by the researchers involved in the project for its management. The proposed model integrates all the information concerning each patient, together with project management tasks, in the Institutional Memory of a KM system, and uses an ontology to maintain the information and its categorization independently. Furthermore, following the philosophy of intelligent agents, the system interacts with users to show them information according to their preferences and access rights. Finally, three different scenarios of application are described.
