1994-05-01
PARALLEL DISTRIBUTED MEMORY ARCHITECTURE. T. M. Eidson, High Technology Corporation, Hampton, VA 23665; G. Erlebacher, Institute for Computer Applications in Science and...Contract NAS1-19480, May 1994. ...developed and evaluated. Simple model calculations as well as timing results are presented to evaluate the various strategies. The particular
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2014-03-21
A tracking service like asset management is essential in a dynamic hospital environment consisting of numerous mobile assets (e.g., wheelchairs or infusion pumps) that are continuously relocated throughout a hospital. The tracking service is accomplished based on the key technologies of an indoor location-based service (LBS), such as locating and monitoring multiple mobile targets inside a building in real time. An indoor LBS such as a tracking service entails numerous resource lookups being requested concurrently and frequently from several locations, as well as a network infrastructure requiring support for high scalability in indoor environments. A traditional centralized architecture needs to maintain a geographic map of the entire building or complex in its central server, which can cause low scalability and traffic congestion. This paper presents a self-organizing and fully distributed indoor mobile asset management (MAM) platform, and proposes an architecture for multiple trackees (such as mobile assets) and trackers based on the proposed distributed platform in real time. In order to verify the suggested platform, scalability performance according to increases in the number of concurrent lookups was evaluated in a real test bed. Tracking latency and traffic load ratio in the proposed tracking architecture was also evaluated.
Transition in Gas Turbine Control System Architecture: Modular, Distributed, and Embedded
NASA Technical Reports Server (NTRS)
Culley, Dennis
2010-01-01
Controls systems are an increasingly important component of turbine-engine system technology. However, as engines become more capable, the control system itself becomes ever more constrained by the inherent environmental conditions of the engine; a relationship forced by the continued reliance on commercial electronics technology. A revolutionary change in the architecture of turbine-engine control systems will change this paradigm and result in fully distributed engine control systems. Initially, the revolution will begin with the physical decoupling of the control law processor from the hostile engine environment using a digital communications network and engine-mounted high temperature electronics requiring little or no thermal control. The vision for the evolution of distributed control capability from this initial implementation to fully distributed and embedded control is described in a roadmap and implementation plan. The development of this plan is the result of discussions with government and industry stakeholders
Communication Needs Assessment for Distributed Turbine Engine Control
NASA Technical Reports Server (NTRS)
Culley, Dennis E.; Behbahani, Alireza R.
2008-01-01
Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness about system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture which, itself, is dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and shows how system design decisions relate to overall goals as we progress from the baseline centralized architecture, through partially distributed and fully distributed control systems.
A synchronized computational architecture for generalized bilateral control of robot arms
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Szakaly, Zoltan
1987-01-01
This paper describes a computational architecture for an interconnected high speed distributed computing system for generalized bilateral control of robot arms. The key method of the architecture is the use of fully synchronized, interrupt driven software. Since an objective of the development is to utilize the processing resources efficiently, the synchronization is done at the hardware level to reduce system software overhead. The architecture also achieves a balanced load on the communication channel. The paper also describes some architectural relations to trading or sharing manual and automatic control.
A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System
ERIC Educational Resources Information Center
Chim, Hung; Deng, Xiaotie
2008-01-01
We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…
2008-10-01
Agents in the DEEP architecture extend and use the Java Agent DEvelopment Framework (JADE). DEEP requires a distributed multi-agent system and a...framework to help simplify the implementation of this system. JADE was chosen because it is fully implemented in Java, and supports these requirements
Fully programmable and scalable optical switching fabric for petabyte data center.
Zhu, Zhonghua; Zhong, Shan; Chen, Li; Chen, Kai
2015-02-09
We present a converged EPS and OCS switching fabric for data center networks (DCNs) based on a distributed optical switching architecture leveraging both WDM and SDM technologies. The architecture is topology adaptive and well suited to dynamic and diverse *-cast traffic patterns. Compared to a typical folded-Clos network, the new architecture is more readily scalable to future multi-petabyte data centers with 1000+ racks while providing higher link bandwidth, reducing transceiver count by 50%, and improving cabling efficiency by more than 90%.
Data flow language and interpreter for a reconfigurable distributed data processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurt, A.D.; Heath, J.R.
1982-01-01
An analytic language and an interpreter whereby an applications data flow graph may serve as an input to a reconfigurable distributed data processor is proposed. The architecture considered consists of a number of loosely coupled computing elements (CEs) which may be linked to data and file memories through fully nonblocking interconnect networks. The real-time performance of such an architecture depends upon its ability to alter its topology in response to changes in application, asynchronous data rates and faults. Such a data flow language enhances the versatility of a reconfigurable architecture by allowing the user to specify the machine's topology at a very high level. 11 references.
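As an aside, the firing rule such a language rests on (a graph node executes once all of its input tokens have arrived) is easy to sketch. The toy interpreter below is illustrative only; the node names and single-machine execution model are assumptions, whereas the paper targets a reconfigurable multi-CE machine.

```python
# Minimal dataflow-graph interpreter sketch: a node fires once all of its
# input tokens have arrived (single-threaded; the paper's architecture
# would map ready nodes onto separate computing elements).
from collections import deque

def run_dataflow(nodes, edges, sources):
    """nodes: {name: function}, edges: {name: [downstream names]},
    sources: {name: initial input value}."""
    indegree = {n: 0 for n in nodes}
    for src, dsts in edges.items():
        for d in dsts:
            indegree[d] += 1
    inputs = {n: [] for n in nodes}
    ready = deque()
    for n, v in sources.items():
        inputs[n].append(v)
        ready.append(n)
    results = {}
    while ready:
        n = ready.popleft()
        results[n] = nodes[n](*inputs[n])
        for d in edges.get(n, []):
            inputs[d].append(results[n])
            if len(inputs[d]) == indegree[d]:  # all tokens arrived: node fires
                ready.append(d)
    return results

# Example graph computing (a + b) * 2
out = run_dataflow(
    nodes={"a": lambda x: x, "b": lambda x: x,
           "add": lambda x, y: x + y, "scale": lambda s: 2 * s},
    edges={"a": ["add"], "b": ["add"], "add": ["scale"]},
    sources={"a": 3, "b": 4},
)
print(out["scale"])  # 14
```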
An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures
NASA Astrophysics Data System (ADS)
Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.
2009-07-01
A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.
Distributed Planning in a Mixed-Initiative Environment
2008-06-01
[Figure 3 - Distributed blackboard: control, remote blackboard, remote knowledge sources, remote data, Java distributed blackboard]...an interface agent or planning agent and the second type is a critic agent. Agents in the DEEP architecture extend and use the Java Agent...chosen because it is fully implemented in Java, and supports these requirements. 2.3.3 Interface Agents: Interface agents are the interfaces through
NASA Technical Reports Server (NTRS)
Schwaller, Mathew R.; Schweiss, Robert J.
2007-01-01
The NPOESS Preparatory Project (NPP) Science Data Segment (SDS) provides a framework for the future of NASA's distributed Earth science data systems. The NPP SDS performs research and data product assessment while using a fully distributed architecture. The components of this architecture are organized around key environmental data disciplines: land, ocean, ozone, atmospheric sounding, and atmospheric composition. The SDS thus establishes a set of concepts and working prototypes. This paper describes the framework used by the NPP Project as it enabled Measurement-Based Earth Science Data Systems for the assessment of NPP products.
a Framework for Distributed Mixed Language Scientific Applications
NASA Astrophysics Data System (ADS)
Quarrie, D. R.
The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.
Towards scalable Byzantine fault-tolerant replication
NASA Astrophysics Data System (ADS)
Zbierski, Maciej
2017-08-01
Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.
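For context, a minimal sketch of the quorum arithmetic that classical BFT replication (e.g. PBFT-style protocols) rests on; it shows why such systems need 3f+1 replicas, and is not the article's specific load-balancing design.

```python
# Classical BFT quorum arithmetic: n >= 3f + 1 replicas tolerate f
# arbitrary (Byzantine) faults, and a client trusts a result only after
# 2f + 1 matching replies.
def min_replicas(f: int) -> int:
    return 3 * f + 1

def quorum(f: int) -> int:
    return 2 * f + 1

def decide(replies, f):
    """Accept a value once 2f+1 replicas agree on it; else undecided."""
    counts = {}
    for value in replies:
        counts[value] = counts.get(value, 0) + 1
        if counts[value] >= quorum(f):
            return value
    return None

assert min_replicas(1) == 4
assert decide(["ok", "ok", "bad", "ok"], f=1) == "ok"
```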
Centralized vs decentralized lunar power system study
NASA Astrophysics Data System (ADS)
Metcalf, Kenneth; Harty, Richard B.; Perronne, Gerald E.
1991-09-01
Three power-system options are considered with respect to utilization on a lunar base: the fully centralized option, the fully decentralized option, and a hybrid comprising features of the first two options. Power source, power conditioning, and power transmission are considered separately, and each architecture option is examined with ac and dc distribution, high and low voltage transmission, and buried and suspended cables. Assessments are made on the basis of mass, technological complexity, cost, reliability, and installation complexity; however, a preferred power-system architecture is not proposed. Preferred options include ac transmission at voltages of 2000-7000 V, with buried high-voltage lines and suspended low-voltage lines. Assessments of the total cost associated with the installations are required to determine the most suitable power system.
Flexible distributed architecture for semiconductor process control and experimentation
NASA Astrophysics Data System (ADS)
Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.
1997-01-01
Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
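A minimal sketch of the predefined TCP/IP socket message pattern described above, with a cell controller querying an equipment controller; the message set, replies, and port number are hypothetical, not the MIT system's actual protocol.

```python
# Toy cell-controller <-> equipment-controller exchange over a TCP socket
# using a predefined text message set (hypothetical messages).
import socket, threading

HOST, PORT = "127.0.0.1", 5050
REPLIES = {"STATUS?": "IDLE", "ETCH_RATE?": "412 nm/min"}

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)  # listen before the client connects to avoid a race

def equipment_controller():
    conn, _ = srv.accept()
    with conn:
        msg = conn.recv(1024).decode().strip()
        conn.sendall(REPLIES.get(msg, "ERROR").encode())

threading.Thread(target=equipment_controller, daemon=True).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cc:  # cell controller
    cc.connect((HOST, PORT))
    cc.sendall(b"ETCH_RATE?\n")
    print(cc.recv(1024).decode())  # -> 412 nm/min
srv.close()
```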
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new method of processing technologies based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster resource computing nodes and improves the efficiency of data parallel application. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it called many computing nodes to process image storage blocks and pyramids in the background to improve the efficiency of image reading and application, and solved the need for concurrent multi-user high-speed access to remotely sensed data. It verified the rationality, reliability and superiority of the system design by building an actual Hadoop service system, testing the storage efficiency of different image data under multi-user access, and analyzing how the distributed storage architecture improves the application efficiency of remote sensing images.
NASA Technical Reports Server (NTRS)
Hill, Gerald M.; Evans, Richard K.
2009-01-01
A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design, which utilizes fully-remotely managed components, enabling the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.
Data management system performance modeling
NASA Technical Reports Server (NTRS)
Kiser, Larry M.
1993-01-01
This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages of static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc. are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system is discussed.
Focused Logistics and Support for Force Projection in Force XXI and Beyond
1999-12-09
business system linking trading partners with point of sale demand and real time manufacturing for clothing items. Quick Response achieved $1.7...be able to determine the real-time status and supply requirements of units. With "distributed logistics system software model hosts" and active...location, quantity, condition, and movement of assets. The system is designed to be fully automated, operate in near-real time with an open-architecture
Efficient Sorting on the Tilera Manycore Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Tumeo, Antonino; Villa, Oreste
We present an efficient implementation of the radix sort algorithm for the Tilera TILEPro64 processor. The TILEPro64 is one of the first successful commercial manycore processors. It is composed of 64 tiles interconnected through multiple fast Networks-on-chip and features a fully coherent, shared distributed cache. The architecture has a large degree of flexibility, and allows various optimization strategies. We describe how we mapped the algorithm to this architecture. We present an in-depth analysis of the optimizations for each phase of the algorithm with respect to the processor's sustained performance. We discuss the overall throughput reached by our radix sort implementation (up to 132 MK/s) and show that it provides comparable or better performance-per-watt with respect to state-of-the-art implementations on x86 processors and graphic processing units.
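For reference, the serial skeleton of the algorithm being parallelized is a bucketing pass per digit. A plain single-threaded LSD radix sort in Python follows; the Tilera implementation distributes these passes across tiles and the on-chip networks, which this sketch omits entirely.

```python
# LSD radix sort for non-negative integer keys, one byte per pass.
# Each pass is a stable bucket/counting step on the next radix_bits bits.
def radix_sort(keys, key_bits=32, radix_bits=8):
    radix = 1 << radix_bits
    for shift in range(0, key_bits, radix_bits):
        buckets = [[] for _ in range(radix)]
        for k in keys:
            buckets[(k >> shift) & (radix - 1)].append(k)
        keys = [k for b in buckets for k in b]  # stable concatenation
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```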
FALCON: A distributed scheduler for MIMD architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimshaw, A.S.; Vivas, V.E. Jr.
1991-01-01
This paper describes FALCON (Fully Automatic Load COordinator for Networks), the scheduler for the Mentat parallel processing system. FALCON has a modular structure and is designed for systems that use a task scheduling mechanism. FALCON is distributed, stable, supports system heterogeneities, and employs a sender-initiated adaptive load sharing policy with static task assignment. FALCON is parameterizable and is implemented in Mentat, a working distributed system. We present the design and implementation of FALCON as well as a brief introduction to those features of the Mentat run-time system that influence FALCON. Performance measures under different scheduler configurations are also presented and analyzed with respect to the system parameters. 36 refs., 8 figs.
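A toy sketch of a sender-initiated adaptive load sharing policy of the general kind FALCON employs: an overloaded node probes a few random peers and statically assigns the task to an underloaded one. The threshold and probe limit here are illustrative, not FALCON's actual parameters.

```python
# Sender-initiated load sharing with static assignment: once placed,
# a task is never migrated again.
import random

class Node:
    def __init__(self, name):
        self.name, self.queue = name, []

def place_task(task, local, peers, threshold=3, probe_limit=2):
    if len(local.queue) < threshold:          # not overloaded: keep it
        local.queue.append(task)
        return local
    for peer in random.sample(peers, min(probe_limit, len(peers))):
        if len(peer.queue) < threshold:       # probe succeeded: transfer
            peer.queue.append(task)
            return peer
    local.queue.append(task)                  # all probes failed: keep it
    return local

nodes = [Node(f"n{i}") for i in range(4)]
for t in range(20):
    place_task(f"task{t}", random.choice(nodes), nodes)
print({n.name: len(n.queue) for n in nodes})
```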
Yaryhin, Oleksandr; Werneburg, Ingmar
2018-06-08
The sand lizard, Lacerta agilis, is a classical model species in herpetology. Its adult skull anatomy and its embryonic development are well known. The description of its fully formed primordial skull by Ernst Gaupp, in 1900, was a key publication in vertebrate morphology and influenced many comparative embryologists. Based on recent methodological considerations, we restudied the early cranial development of this species starting as early as the formation of mesenchymal condensations up to the fully formed chondrocranium. We traced the formation of the complex chondrocranial architecture in detail, clarified specific homologies for the first time, and uncovered major differences to old textbook descriptions. Comparison with other lacertid lizards revealed a very similar genesis of the primordial skull. However, we detected shifts in the developmental timing of particular cartilaginous elements, mainly in the nasal region, which may correlate to specific ecological adaptation in the adults. Late timing of nasal elements might be an important innovation for the successful wide range distribution of the well-known sand lizard. © 2018 Wiley Periodicals, Inc.
Fully Packaged Carbon Nanotube Supercapacitors by Direct Ink Writing on Flexible Substrates.
Chen, Bolin; Jiang, Yizhou; Tang, Xiaohui; Pan, Yayue; Hu, Shan
2017-08-30
The ability to print fully packaged integrated energy storage components (e.g., supercapacitors) is of critical importance for practical applications of printed electronics. Due to the limited variety of printable materials, most studies on printed supercapacitors focus on printing the electrode materials but rarely the full-packaged cell. This work presents for the first time the printing of a fully packaged single-wall carbon nanotube-based supercapacitor with direct ink writing (DIW) technology. Enabled by the developed ink formula, DIW setup, and cell architecture, the whole printing process is mask free, transfer free, and alignment free with precise and repeatable control on the spatial distribution of all constituent materials. Studies on cell design show that a wider electrode pattern and narrower gap distance between electrodes lead to higher specific capacitance. The as-printed fully packaged supercapacitors have energy and power performances that are among the best in recently reported planar carbon-based supercapacitors that are only partially printed or nonprinted.
Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2011-01-01
This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
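The many-to-many messaging pattern at the application layer can be illustrated with a minimal publish/subscribe broker. In the DTRM stack this role is split across AMS, RAMS, and DTN layers; the single in-process broker below is purely illustrative of the pattern, not of the protocols themselves.

```python
# Minimal publish/subscribe fan-out: publishers need not know who the
# subscribers are, mirroring AMS's many-to-many messaging model.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)   # subject -> callbacks

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, subject, message):
        for deliver in self.subscribers[subject]:
            deliver(message)                   # fan out to every subscriber

bus = Broker()
bus.subscribe("telemetry", lambda m: print("ops console:", m))
bus.subscribe("telemetry", lambda m: print("archive:", m))
bus.publish("telemetry", {"batt_v": 28.4})
```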
Bioinspired architecture approach for a one-billion transistor smart CMOS camera chip
NASA Astrophysics Data System (ADS)
Fey, Dietmar; Komann, Marcus
2007-05-01
In the paper we present a massively parallel VLSI architecture for future smart CMOS camera chips with up to one billion transistors. Traditional parallel architectures oriented on central structures, based on MIMD or SIMD approaches, will fail to exploit efficiently the potential offered by future micro- or nanoelectronic devices: they require too long and too many global interconnects for the distribution of code or the access to common memory. On the other hand, nature has developed self-organising and emergent principles to manage successfully complex structures based on lots of interacting simple elements. Therefore we developed a new emergent computing paradigm, denoted Marching Pixels, based on a mixture of bio-inspired computing models like cellular automata and artificial ants. In the paper we present different Marching Pixels algorithms and the corresponding VLSI array architecture. A detailed synthesis result for a 0.18 μm CMOS process shows that a 256×256 pixel image is processed in less than 10 ms assuming a moderate 100 MHz clock rate for the processor array. Future higher integration densities and 3D chip stacking technology will allow the integration and processing of megapixel images within the same time, since our architecture is fully scalable.
Wright, Adam; Sittig, Dean F
2008-12-01
In this paper, we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. The SANDS architecture for decision support has several significant advantages over other architectures for clinical decision support. The most salient of these are:
NASA Astrophysics Data System (ADS)
Garg, Amit Kumar; Madavi, Amresh Ashok; Janyani, Vijay
2017-02-01
A flexible hybrid wavelength division multiplexing-time division multiplexing passive optical network architecture that allows dual rate signals to be sent at 1 and 10 Gbps to each optical networking unit depending upon the traffic load is proposed. The proposed design allows dynamic wavelength allocation with pay-as-you-grow deployment capability. This architecture is capable of providing up to 40 Gbps of equal data rates to all optical distribution networks (ODNs) and up to 70 Gbps of asymmetrical data rate to a specific ODN. The proposed design handles broadcasting capability with simultaneous point-to-point transmission, which further reduces energy consumption. In this architecture, each module sends a wavelength to each ODN, thus making the architecture fully flexible; this flexibility allows network providers to use only required OLT components and switch off others. The design is also resilient to any module or TRx failure and provides services without any service disruption. Dynamic wavelength allocation and pay-as-you-grow deployment support network extensibility and bandwidth scalability to handle future generation access networks.
Modular architectures for quantum networks
NASA Astrophysics Data System (ADS)
Pirker, A.; Wallnöfer, J.; Dür, W.
2018-05-01
We consider the problem of generating multipartite entangled states in a quantum network upon request. We follow a top-down approach, where the required entanglement is initially present in the network in form of network states shared between network devices, and then manipulated in such a way that the desired target state is generated. This minimizes generation times, and allows for network structures that are in principle independent of physical links. We present a modular and flexible architecture, where a multi-layer network consists of devices of varying complexity, including quantum network routers, switches and clients, that share certain resource states. We concentrate on the generation of graph states among clients, which are resources for numerous distributed quantum tasks. We assume minimal functionality for clients, i.e. they do not participate in the complex and distributed generation process of the target state. We present architectures based on shared multipartite entangled Greenberger–Horne–Zeilinger states of different size, and fully connected decorated graph states, respectively. We compare the features of these architectures to an approach that is based on bipartite entanglement, and identify advantages of the multipartite approach in terms of memory requirements and complexity of state manipulation. The architectures can handle parallel requests, and are designed in such a way that the network state can be dynamically extended if new clients or devices join the network. For generation or dynamical extension of the network states, we propose a quantum network configuration protocol, where entanglement purification is used to establish high fidelity states. The latter also allows one to show that the entanglement generated among clients is private, i.e. the network is secure.
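For reference, the resource states named above have standard definitions; the paper's contribution is how they are shared and manipulated across the network, but the states themselves are these:

```latex
% The n-qubit GHZ states used as network resources:
\[
  |\mathrm{GHZ}_n\rangle \;=\; \frac{1}{\sqrt{2}}
  \bigl( |0\rangle^{\otimes n} + |1\rangle^{\otimes n} \bigr),
\]
% and a graph state |G> on a graph G = (V, E) is obtained by applying a
% controlled-Z gate across every edge of |+>^{(x) |V|}:
\[
  |G\rangle \;=\; \prod_{(a,b)\in E} \mathrm{CZ}_{ab}\, |+\rangle^{\otimes |V|}.
\]
```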
Design method of freeform light distribution lens for LED automotive headlamp based on DMD
NASA Astrophysics Data System (ADS)
Ma, Jianshe; Huang, Jianwei; Su, Ping; Cui, Yao
2018-01-01
We propose a new method to design a freeform light distribution lens for light-emitting diode (LED) automotive headlamps based on a digital micromirror device (DMD). With the parallel optical path architecture, the exit pupil of the illuminating system is set at infinity, so the principal incident rays on the micro lenses of the DMD are parallel. The DMD is a high speed digital optical reflection array; the function of the distribution lens is to distribute the emergent parallel rays from the DMD and produce a lighting pattern that fully complies with the national regulation GB 25991-2010. We use a DLP4500 to design the light distribution lens, mesh the target plane regulated by GB 25991-2010, and correlate the mesh grids with the active mirror array of the DLP4500. With the mapping relations and the refraction law, we can build the mathematical model and obtain the parameters of the freeform light distribution lens. We then import its parameters into the three-dimensional (3D) software CATIA to construct its 3D model. Ray tracing results using TracePro demonstrate that the illumination value on the target plane is easily adjustable, and fully complies with the requirements of GB 25991-2010, by adjusting the exit brightness value of the DMD. The theoretical optical efficiency of the light distribution lens designed using this method can be up to 92% without any auxiliary lens.
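The "refraction law" step in such designs typically uses the vector form of Snell's law to recover the freeform surface normal at each grid point, once the mapping fixes which target cell each DMD ray must reach. A standard relation used in freeform lens design follows; the paper's exact formulation may differ.

```latex
% For unit incident ray I, unit refracted ray O (fixed by the
% DMD-cell-to-target-grid mapping), and refractive indices n1, n2,
% the unit surface normal N at each grid point satisfies
\[
  \mathbf{N} \;=\; \frac{n_2\,\mathbf{O} - n_1\,\mathbf{I}}
                        {\lVert n_2\,\mathbf{O} - n_1\,\mathbf{I} \rVert}.
\]
```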
Automated monitoring of medical protocols: a secure and distributed architecture.
Alsinet, T; Ansótegui, C; Béjar, R; Fernández, C; Manyà, F
2003-03-01
The control of the right application of medical protocols is a key issue in hospital environments. For the automated monitoring of medical protocols, we need a domain-independent language for their representation and a fully, or semi, autonomous system that understands the protocols and supervises their application. In this paper we describe a specification language and a multi-agent system architecture for monitoring medical protocols. We model medical services in hospital environments as specialized domain agents and interpret a medical protocol as a negotiation process between agents. A medical service can be involved in multiple medical protocols, and so specialized domain agents are independent of negotiation processes while autonomous system agents perform monitoring tasks. We present the detailed architecture of the system agents and of an important domain agent, the database broker agent, which is responsible for obtaining relevant information about the clinical history of patients. We also describe how we tackle the problems of privacy, integrity and authentication during the process of exchanging information between agents.
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.
On developing the local research environment of the 1990s - The Space Station era
NASA Technical Reports Server (NTRS)
Chase, Robert; Ziel, Fred
1989-01-01
A requirements analysis for the Space Station's polar platform data system has been performed. Based upon this analysis, cluster, layered cluster, and layered-modular implementations of one specific module within the Eos Data and Information System (EosDIS), an active data base for satellite remote sensing research, have been developed. It is found that a distributed system based on a layered-modular architecture and employing current generation workstation technologies has the requisite attributes ascribed by the remote sensing research community. However, based on benchmark testing, probabilistic analysis, failure analysis and user-survey technique analysis, it is found that this architecture presents some operational shortcomings that will not be alleviated by new hardware or software developments. Consequently, the potential of a fully-modular layered architectural design for meeting the needs of Eos researchers has also been evaluated, concluding that it would be well suited to the evolving requirements of this multidisciplinary research community.
NASA Technical Reports Server (NTRS)
Edwards, Bernard; Horne, William; Israel, David; Kwadrat, Carl; Bauer, Frank H. (Technical Monitor)
2001-01-01
This paper will identify the important characteristics and requirements necessary for inter-satellite communications in distributed spacecraft systems and present analysis results focusing on architectural and protocol comparisons. Emerging spacecraft systems plan to deploy multiple satellites in various "distributed" configurations ranging from close proximity formation flying to widely separated constellations. Distributed spacecraft configurations provide advantages for science exploration and operations since many activities useful for missions may be better served by distributing them between spacecraft. For example, many scientific observations can be enhanced through spatially separated platforms, such as for deep space interferometry. Operating multiple distributed spacecraft as a mission requires coordination that may be best provided through inter-satellite communications. For example, several future distributed spacecraft systems envision autonomous operations requiring relative navigational calculations and coordinated attitude and position corrections. To conduct these operations, data must be exchanged between spacecraft. Direct cross-links between satellites provide an efficient and practical method for transferring data and commands. Unlike existing "bent-pipe" relay networks supporting space missions, no standard or widely-used method exists for cross-link communications. Consequently, to support these future missions, the characteristics necessary for inter-satellite communications need to be examined. At first glance, all of the missions look extremely different. Some missions call for tens to hundreds of nano-satellites in constant communications in close proximity to each other. Other missions call for a handful of satellites communicating very slowly over thousands to hundreds of thousands of kilometers. The paper will first classify distributed spacecraft missions to help guide the evaluation and definition of cross-link architectures and approaches. Based on this general classification, the paper will examine general physical layer parameters, such as frequency bands and data rates, necessary to support the missions. The paper will also identify classes of communication architectures that may be employed, ranging from fully distributed to centralized topologies. Numerous factors, such as number of spacecraft, must be evaluated when attempting to pick a communications architecture. Also important is the stability of the formation from a communications standpoint. For example, do all of the spacecraft require equal bandwidth and are spacecraft allowed to enter and leave a formation? The type of science mission being attempted may also heavily influence the communications architecture. In addition, the paper will assess various parameters and characteristics typically associated with the data link layer. The paper will analyze the performance of various multiple access techniques given the operational scenario, requirements, and communication topologies envisioned for missions. This assessment will also include a survey of existing standards and their applicability for distributed spacecraft systems. An important consideration includes the interoperability of the lower layers (physical and data link) examined in this paper with the higher layer protocols (network) envisioned for future space internetworking.
Finally, the paper will define a suggested path, including preliminary recommendations, for defining and developing a standard for intersatellite communications based on the classes of distributed spacecraft missions and analysis results.
Ensuring Data Storage Security in Tree cast Routing Architecture for Sensor Networks
NASA Astrophysics Data System (ADS)
Kumar, K. E. Naresh; Sagar, U. Vidya; Waheed, Mohd. Abdul
2010-10-01
This paper presents recent advances in technology that have made possible low-cost, low-power wireless sensors with efficient energy consumption. A network of such nodes can coordinate among themselves for distributed sensing and processing of certain data. We propose an architecture, known as Tree Cast, that provides a stateless solution for efficient routing in wireless sensor networks. We propose a unique method of address allocation, building up multiple disjoint trees which are geographically intertwined and rooted at the data sink. Using these trees, routing messages to and from the sink node without maintaining any routing state in the sensor nodes is possible. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, this routing architecture moves the application software and databases to large data centers, where the management of the data and services may not be fully trustworthy. This unique attribute, however, poses many new security challenges which have not been well understood. In this paper, we focus on data storage security, which has always been an important aspect of quality of service. To ensure the correctness of users' data in this architecture, we propose an effective and flexible distributed scheme with two salient features, in contrast to its predecessors. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). Unlike most prior works, the new scheme further supports secure and efficient dynamic operations on data blocks, including data update, delete and append. Extensive security and performance analysis shows that the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.
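A greatly simplified flavor of challenge-based storage verification: the user precomputes a keyed token per block and can later challenge a server to prove a block is still intact. The paper's scheme uses homomorphic tokens over erasure-coded vectors, which this HMAC toy deliberately does not reproduce; it shows only the challenge/verify shape.

```python
# Per-block keyed tokens for storage integrity checks (simplified;
# not the paper's homomorphic-token construction).
import hmac, hashlib, os

key = os.urandom(32)
blocks = [b"block-0 data", b"block-1 data", b"block-2 data"]
tokens = [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

def verify(server_block, i):
    """Challenge: server returns block i; user recomputes its token."""
    return hmac.compare_digest(
        hmac.new(key, server_block, hashlib.sha256).digest(), tokens[i])

print(verify(blocks[1], 1))    # True: data intact
print(verify(b"tampered", 1))  # False: misbehaving server localized
```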
A fully convolutional network for weed mapping of unmanned aerial vehicle (UAV) imagery.
Huang, Huasheng; Deng, Jizhong; Lan, Yubin; Yang, Aqing; Deng, Xiaoling; Zhang, Lei
2018-01-01
Appropriate Site Specific Weed Management (SSWM) is crucial to ensure crop yields. Within SSWM of a large-scale area, remote sensing is a key technology to provide accurate weed distribution information. Compared with satellite and piloted aircraft remote sensing, an unmanned aerial vehicle (UAV) is capable of capturing high spatial resolution imagery, which provides more detailed information for weed mapping. The objective of this paper is to generate an accurate weed cover map based on UAV imagery. The UAV RGB imagery was collected in October 2017 over a rice field located in South China. The Fully Convolutional Network (FCN) method was proposed for weed mapping of the collected imagery. Transfer learning was used to improve generalization capability, and skip architecture was applied to increase the prediction accuracy. After that, the performance of the FCN architecture was compared with a Patch_based CNN algorithm and a Pixel_based CNN method. Experimental results showed that our FCN method outperformed the others, both in terms of accuracy and efficiency. The overall accuracy of the FCN approach was up to 0.935 and the accuracy for weed recognition was 0.883, which means that this algorithm is capable of generating accurate weed cover maps for the evaluated UAV imagery.
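A toy fully convolutional network with one skip connection, echoing the FCN-with-skip idea used for the weed maps; the layer sizes, the two-class head, and the use of PyTorch are illustrative assumptions, not the paper's configuration.

```python
# Tiny FCN: downsample twice, score at both resolutions, fuse the
# higher-resolution score via a skip connection, upsample to full size.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))          # 1/2 resolution
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))          # 1/4 resolution
        self.score2 = nn.Conv2d(32, n_classes, 1)
        self.score1 = nn.Conv2d(16, n_classes, 1)           # skip branch
        self.up2 = nn.ConvTranspose2d(n_classes, n_classes, 2, stride=2)
        self.up1 = nn.ConvTranspose2d(n_classes, n_classes, 2, stride=2)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        out = self.up2(self.score2(f2)) + self.score1(f1)   # fuse skip
        return self.up1(out)                                # full resolution

logits = TinyFCN()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 2, 64, 64])
```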
ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
1995-02-01
This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
NASA Technical Reports Server (NTRS)
Marius, Julio L.; Busch, Jim
2008-01-01
The Tropical Rainfall Measuring Mission (TRMM) spacecraft was launched in November of 1996 in order to obtain unique three dimensional radar cross sectional observations of cloud structures, with particular interest in hurricanes. The TRMM mission life was recently extended, with current estimates that operations will continue through the 2012-2013 timeframe. Faced with this extended mission profile, the project has embarked on a technology refresh and re-engineering effort. TRMM has recently implemented a re-engineering effort to expand a middleware based messaging architecture to enable fully redundant, lights-out flight operations. The middleware approach is based on the Goddard Mission Services Evolution Center (GMSEC) architecture, tools and associated open-source Applications Programming Interface (API). Middleware based messaging systems are useful in spacecraft operations and automation systems because private node based knowledge (such as that within a telemetry and command system) can be broadcast on the middleware messaging bus and hence enable collaborative decisions to be made by multiple subsystems. In this fashion, private data is made public and distributed within the local area network, and multiple nodes can remain synchronized with other nodes. This concept is useful in a fully redundant architecture whereby one node monitors the processing of the prime node so that, in the event of a failure, the backup node can assume operations of the prime without loss of state knowledge. This paper will review and present the experiences, architecture, approach and lessons learned of the TRMM re-engineering effort centered on the GMSEC middleware architecture and tool suite. Relevant information will be presented that relates to the dual redundant parallel nature of the Telemetry and Command (T and C) and Front-End systems and how these systems can interact over a middleware bus to achieve autonomous operations, including autonomous commanding to recover missing science data during the same spacecraft contact.
Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques
2013-03-01
Polytechnic Institute of New York University, technical report, dates covered Oct 2010 - Oct 2012. ...schemes for a memristor-based reconfigurable architecture design have not been fully explored yet. Therefore, in this project, we investigated
A Fully Implemented 12 × 12 Data Vortex Optical Packet Switching Interconnection Network
NASA Astrophysics Data System (ADS)
Shacham, Assaf; Small, Benjamin A.; Liboiron-Ladouceur, Odile; Bergman, Keren
2005-10-01
A fully functional optical packet switching (OPS) interconnection network based on the data vortex architecture is presented. The photonic switching fabric uniquely capitalizes on the enormous bandwidth advantage of wavelength division multiplexing (WDM) wavelength parallelism while delivering minimal packet transit latency. Utilizing semiconductor optical amplifier (SOA)-based switching nodes and conventional fiber-optic technology, the 12-port system exhibits a capacity of nearly 1 Tb/s. Optical packets containing an eight-wavelength WDM payload with 10 Gb/s per wavelength are routed successfully to all 12 ports while maintaining a bit error rate (BER) of 10^-12 or better. Median port-to-port latencies of 110 ns are achieved with a distributed deflection routing network that resolves packet contention on-the-fly without the use of optical buffers and maintains the entire payload path in the optical domain.
NASA Technical Reports Server (NTRS)
Collins, Oliver (Inventor); Dolinar, Jr., Samuel J. (Inventor); Hsu, In-Shek (Inventor); Bozzola, Fabrizio P. (Inventor); Olson, Erlend M. (Inventor); Statman, Joseph I. (Inventor); Zimmerman, George A. (Inventor)
1991-01-01
A method of formulating and packaging decision-making elements into a long constraint length Viterbi decoder which involves formulating the decision-making processors as individual Viterbi butterfly processors that are interconnected in a deBruijn graph configuration. A fully distributed architecture, which achieves high decoding speeds, is made feasible by novel wiring and partitioning of the state diagram. This partitioning defines universal modules, which can be used to build any size decoder, such that a large number of wires is contained inside each module, and a small number of wires is needed to connect modules. The total system is modular and hierarchical, and it implements a large proportion of the required wiring internally within modules and may include some external wiring to fully complete the deBruijn graph.
Affordable multisensor digital video architecture for 360° situational awareness displays
NASA Astrophysics Data System (ADS)
Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana
2011-06-01
One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e. closed hatch). Thus, the ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting will require that a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew in near real time (i.e. low latency). Advances in display and sensor technologies are providing never before seen opportunities to supply large amounts of high fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing it simultaneously within the vehicle to multiple vehicle operators and crew. This paper will examine the systems and software engineering efforts required to overcome these challenges and will address development of an affordable, integrated digital video architecture. The approaches evaluated will enable both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.
A decentralized training algorithm for Echo State Networks in distributed big data applications.
Scardapane, Simone; Wang, Dianhui; Panella, Massimo
2016-06-01
The current big data deluge requires innovative solutions for performing efficient inference on large, heterogeneous amounts of information. Apart from the known challenges deriving from high volume and velocity, real-world big data applications may impose additional technological constraints, including the need for a fully decentralized training architecture. While several alternatives exist for training feed-forward neural networks in such a distributed setting, less attention has been devoted to the case of decentralized training of recurrent neural networks (RNNs). In this paper, we propose such an algorithm for a class of RNNs known as Echo State Networks. The algorithm is based on the well-known Alternating Direction Method of Multipliers optimization procedure. It is formulated only in terms of local exchanges between neighboring agents, without reliance on a coordinating node. Additionally, it does not require the communication of training patterns, which is a crucial component in realistic big data implementations. Experimental results on large scale artificial datasets show that it compares favorably with a fully centralized implementation, in terms of speed, efficiency and generalization accuracy. Copyright © 2015 Elsevier Ltd. All rights reserved.
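The core Echo State Network computation that the decentralized algorithm distributes: a fixed random reservoir driven by the input, with only the linear readout trained. The sketch below trains the readout by plain centralized ridge regression; the paper's contribution is replacing exactly this step with ADMM across neighboring agents, which this sketch does not attempt.

```python
# Minimal Echo State Network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, T = 1, 100, 500
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius < 1 (echo state)

u = rng.uniform(-1, 1, (T, n_in))
y = np.roll(u[:, 0], 1)                     # toy target: previous input
X = np.zeros((T, n_res))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in @ u[t])        # reservoir state update
    X[t] = x

lam = 1e-6                                  # ridge regularization
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```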
Towards dropout training for convolutional neural networks.
Wu, Haibing; Gu, Xiaodong
2015-11-01
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of the commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. By carefully designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage.
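A minimal sketch of test-time probabilistic weighted pooling, under the common reading that the i-th largest activation in a pooling region is weighted by the probability that it would be the surviving maximum under dropout; the retain probability and region contents below are arbitrary, and the remaining probability mass (all units dropped) maps to an output of zero.

```python
import numpy as np

def prob_weighted_pool(acts, retain_p=0.5):
    """Test-time probabilistic weighted pooling over one pooling region.
    Weight of the i-th largest activation = retain_p * (1 - retain_p)**i,
    i.e. P(it survives dropout and every larger activation is dropped)."""
    a = np.sort(np.asarray(acts, dtype=float))[::-1]   # descending
    q = 1.0 - retain_p
    weights = retain_p * q ** np.arange(a.size)
    return float(np.dot(weights, a))

print(prob_weighted_pool([0.2, 1.5, 0.9, 0.1]))   # model-averaged pool output
```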
A holistic approach to SIM platform and its application to early-warning satellite system
NASA Astrophysics Data System (ADS)
Sun, Fuyu; Zhou, Jianping; Xu, Zheyao
2018-01-01
This study proposes a new simulation platform named Simulation Integrated Management (SIM) for the analysis of parallel and distributed systems. The platform eases the process of designing and testing both applications and architectures. The main characteristics of SIM are flexibility, scalability, and expandability. To improve the efficiency of project development, new models of an early-warning satellite system were designed based on the SIM platform. Finally, through a series of experiments, the correctness of the SIM platform and the aforementioned early-warning satellite models was validated, and systematic analyses of the orbit determination precision for a ballistic missile during its entire flight, as well as of the deviation of the launch/landing point, are presented. Furthermore, the causes of the deviation and methods for its prevention are fully explained. The simulation platform and the models will lay the foundations for further validations of autonomy technology in space attack-defense architecture research.
NASA Technical Reports Server (NTRS)
Zak, Michail
1990-01-01
A new neural network architecture is proposed based upon effects of non-Lipschitzian dynamics. The network is fully connected, but these connections are active only during vanishingly short time periods. The advantages of this architecture are discussed.
ERIC Educational Resources Information Center
Farid, Ayman A.; Zaghloul, Weaam M.; Dewidar, Khaled M.
2014-01-01
The great shift toward sustainability and computer-aided design in the field of architecture has caused a remarkable change in architectural philosophy. New aspects of beauty and aesthetic values are being introduced, and traditional definitions of beauty cannot fully cover these aspects, which causes a gap between new architectural works, criticism and…
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system-level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system-level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method, as an implementation of distributed intelligent control, has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, this approach balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system.
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing the survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and of a notional ship chilled water system has been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
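The fully factorized BK approximation mentioned above can be sketched for a toy two-variable binary DBN: propagate the factored belief through the exact transition model, then project the resulting joint back onto independent marginals. The conditional probability tables below are invented placeholders, not from the dissertation.

```python
import numpy as np

# P(x'_0 = 1 | x_0, x_1) and P(x'_1 = 1 | x_0, x_1), indexed [x0, x1]
cpt0 = np.array([[0.1, 0.6], [0.4, 0.9]])
cpt1 = np.array([[0.2, 0.3], [0.7, 0.8]])

def bk_step(m0, m1):
    """m0, m1: P(x_i = 1) under the factored belief; one BK time step."""
    new0 = new1 = 0.0
    for x0 in (0, 1):
        for x1 in (0, 1):
            # weight of this joint state under the product-of-marginals belief
            w = (m0 if x0 else 1 - m0) * (m1 if x1 else 1 - m1)
            new0 += w * cpt0[x0, x1]
            new1 += w * cpt1[x0, x1]
    # projection: keep only the marginals (the BK approximation)
    return new0, new1

m = (0.5, 0.5)
for _ in range(3):
    m = bk_step(*m)
print(m)
```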
Arabnejad, Sajad; Johnston, Burnett; Tanzer, Michael; Pasini, Damiano
2017-08-01
Current hip replacement femoral implants are made of fully solid materials which all have stiffness considerably higher than that of bone. This mechanical mismatch can cause significant bone resorption secondary to stress shielding, which can lead to serious complications such as peri-prosthetic fracture during or after revision surgery. In this work, a high-strength fully porous material with tunable mechanical properties is introduced for use in hip replacement design. The implant macro geometry is based on a short-stem taper-wedge implant compatible with minimally invasive hip replacement surgery. The implant micro-architecture is fine-tuned to locally mimic bone tissue properties, which results in minimum bone resorption secondary to stress shielding. We present a systematic approach for the design of a 3D printed fully porous hip implant that encompasses the whole activity spectrum of implant development, from concept generation, multiscale mechanics of porous materials, and material architecture tailoring, to additive manufacturing and performance assessment via in vitro experiments in composite femurs. We show that the fully porous implant with an optimized material micro-structure can reduce the amount of bone loss secondary to stress shielding by 75% compared to a fully solid implant. This result also agrees with those of the in vitro quasi-physiological experimental model and the corresponding finite element model for both the optimized fully porous and fully solid implants. These studies demonstrate the merit and the potential of tuning material architecture to achieve a substantial reduction of bone resorption secondary to stress shielding. J Orthop Res 35:1774-1783, 2017. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
Knowledge-base browsing: an application of hybrid distributed/local connectionist networks
NASA Astrophysics Data System (ADS)
Samad, Tariq; Israel, Peggy
1990-08-01
We describe a knowledge base browser based on a connectionist (or neural network) architecture that employs both distributed and local representations. The distributed representations are used for input and output, thereby enabling associative, noise-tolerant interaction with the environment. Internally, all representations are fully local. This simplifies weight assignment and facilitates network configuration for specific applications. In our browser, concepts and relations in a knowledge base are represented using "microfeatures." The microfeatures can encode semantic attributes, structural features, contextual information, etc. Desired portions of the knowledge base can then be associatively retrieved based on a structured cue. An ordered list of partial matches is presented to the user for selection. Microfeatures can also be used as "bookmarks": they can be placed dynamically at appropriate points in the knowledge base and subsequently used as retrieval cues. A proof-of-concept system has been implemented for an internally developed Honeywell-proprietary knowledge acquisition tool.
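A hedged sketch of the microfeature-based associative retrieval described above: concepts are encoded as binary microfeature vectors, a structured cue is matched against all of them, and an ordered list of partial matches is returned. The concept names and features are invented for illustration.

```python
import numpy as np

# Each concept is a binary microfeature vector (semantic, structural,
# contextual attributes); names and encodings are hypothetical.
concepts = {
    "valve":      np.array([1, 0, 1, 1, 0, 0], float),
    "controller": np.array([1, 1, 0, 1, 0, 1], float),
    "sensor":     np.array([0, 1, 1, 0, 1, 0], float),
}

def retrieve(cue, top_k=3):
    """Rank concepts by cosine similarity to a (possibly noisy) cue vector."""
    cue = np.asarray(cue, float)
    def score(v):
        return float(cue @ v / (np.linalg.norm(cue) * np.linalg.norm(v)))
    ranked = sorted(((score(v), name) for name, v in concepts.items()),
                    reverse=True)
    return ranked[:top_k]   # ordered list of partial matches

print(retrieve([1, 0, 1, 1, 1, 0]))   # a noisy cue still ranks "valve" first
```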
Layer 1 VPN services in distributed next-generation SONET/SDH networks with inverse multiplexing
NASA Astrophysics Data System (ADS)
Ghani, N.; Muthalaly, M. V.; Benhaddou, D.; Alanqar, W.
2006-05-01
Advances in next-generation SONET/SDH along with GMPLS control architectures have enabled many new service provisioning capabilities. In particular, a key services paradigm is the emergent Layer 1 virtual private network (L1 VPN) framework, which allows multiple clients to utilize a common physical infrastructure and provision their own 'virtualized' circuit-switched networks. This precludes expensive infrastructure builds and increases resource utilization for carriers. Along these lines, a novel L1 VPN services resource management scheme for next-generation SONET/SDH networks is proposed that fully leverages advanced virtual concatenation and inverse multiplexing features. Additionally, both centralized and distributed GMPLS-based implementations are presented to support the proposed L1 VPN services model. Detailed performance analysis results are presented along with avenues for future research.
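The inverse-multiplexing idea can be sketched as splitting one L1 VPN bandwidth request into virtually concatenated members spread over diverse paths. The VC-4-like 150 Mbps member granularity and round-robin spreading are assumptions for illustration, not the paper's scheme.

```python
import math

def assign_members(demand_mbps, paths, member_mbps=150.0):
    """Split a client circuit into concatenated members over diverse routes."""
    n_members = math.ceil(demand_mbps / member_mbps)
    assignment = {p: 0 for p in paths}
    for i in range(n_members):          # round-robin over the diverse paths
        assignment[paths[i % len(paths)]] += 1
    return assignment                    # members carried per path

print(assign_members(622, ["path-A", "path-B", "path-C"]))
# -> {'path-A': 2, 'path-B': 2, 'path-C': 1}
```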
Systems Architecture for Fully Autonomous Space Missions
NASA Technical Reports Server (NTRS)
Esper, Jamie; Schnurr, R.; VanSteenberg, M.; Brumfield, Mark (Technical Monitor)
2002-01-01
The NASA Goddard Space Flight Center is working to develop a revolutionary new system architecture concept in support of fully autonomous missions. As part of GSFC's contribution to the New Millennium Program (NMP) Space Technology 7 Autonomy and on-Board Processing (ST7-A) Concept Definition Study, the system incorporates the latest commercial Internet and software development ideas and extends them into NASA ground and space segment architectures. The unique challenges facing the exploration of remote and inaccessible locales, and the need to incorporate the corresponding autonomy technologies at reasonable cost, necessitate the re-thinking of traditional mission architectures. A measure of the resiliency of this architecture in its application to a broad range of future autonomy missions will depend on its effectiveness in leveraging commercial tools developed for the personal computer and Internet markets. Specialized test stations and supporting software become a thing of the past as spacecraft take advantage of the extensive tools and research investments of billion-dollar commercial ventures. The projected improvements of the Internet and supporting infrastructure go hand-in-hand with market pressures that provide continuity in research. By taking advantage of consumer-oriented methods and processes, space-flight missions will continue to leverage investments tailored to provide better services at reduced cost. The application of ground and space segment architectures each based on Local Area Networks (LANs), the use of personal computer-based operating systems, and the execution of activities and operations through a Wide Area Network (the Internet) enable a revolution in spacecraft mission formulation, implementation, and flight operations. Hardware and software design, development, integration, test, and flight operations are all tied closely to a common thread that enables smooth transitioning between program phases. The application of commercial software development techniques lays the foundation for delivery of product-oriented flight software modules and models. Software can then be readily applied to support the on-board autonomy required for mission self-management. An on-board intelligent system, based on advanced scripting languages, facilitates the mission autonomy required to offload ground system resources, and enables the spacecraft to manage itself safely through an efficient and effective process of reactive planning, science data acquisition, synthesis, and transmission to the ground. Autonomous ground systems in turn coordinate and support scheduled contact times with the spacecraft. Specific autonomy software modules on board include mission and science planners, instrument and subsystem control, and fault tolerance response software, all residing within a distributed computing environment supported through the flight LAN. Autonomy also requires the minimization of human intervention between users on the ground and the spacecraft, and hence calls for the elimination of the traditional operations control center as a funnel for data manipulation. Basic goal-oriented commands are sent directly from the user to the spacecraft through a distributed Internet-based payload operations "center". The ensuing architecture calls for the use of spacecraft as point extensions of the Internet. This paper details the system architecture implementation chosen to enable cost-effective autonomous missions with applicability to a broad range of conditions.
It will define the structure needed for implementation of such missions, including software and hardware infrastructures. The overall architecture is then laid out as a common thread in the mission life cycle from formulation through implementation and flight operations.
Telemedicine system interoperability architecture: concept description and architecture overview.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, Richard Layne, II
2004-05-01
In order for telemedicine to realize the vision of anywhere, anytime access to care, it must address the question of how to create a fully interoperable infrastructure. This paper describes the reasons for pursuing interoperability, outlines operational requirements that any interoperability approach needs to consider, proposes an abstract architecture for meeting these needs, identifies candidate technologies that might be used for rendering this architecture, and suggests a path forward that the telemedicine community might follow.
Common Readout Unit (CRU) - A new readout architecture for the ALICE experiment
NASA Astrophysics Data System (ADS)
Mitra, J.; Khan, S. A.; Mukherjee, S.; Paul, R.
2016-03-01
The ALICE experiment at the CERN Large Hadron Collider (LHC) is presently undergoing a major upgrade in order to fully exploit the scientific potential of the upcoming high-luminosity run, scheduled to start in the year 2021. The high interaction rate and the large event size will result in an experimental data flow of about 1 TB/s from the detectors, which needs to be processed before being sent to the online computing system and data storage. This processing is done in a dedicated Common Readout Unit (CRU), proposed for data aggregation, trigger and timing distribution, and control moderation. It acts as a common interface between sub-detector electronic systems, the computing system, and trigger processors. The interface links include GBT, TTC-PON and PCIe. GBT (Gigabit Transceiver) is used for detector data payload transmission and as a fixed-latency path for trigger distribution between the CRU and detector readout electronics. TTC-PON (Timing, Trigger and Control via Passive Optical Network) is employed for time-multiplexed trigger distribution between the CRU and the Central Trigger Processor (CTP). PCIe (Peripheral Component Interconnect Express) is the high-speed serial computer expansion bus standard used for bulk data transport between CRU boards and processors. In this article, we give an overview of the CRU architecture in ALICE and discuss the different interfaces, along with the firmware design and implementation of the CRU on the LHCb PCIe40 board.
Putting Teeth into Open Architectures: Infrastructure for Reducing the Need for Retesting
2007-04-30
This paper outlines new approaches to quality assurance and testing that are better suited for open architectures and reconfiguration. Testing of reusable subsystems is also subject to the same considerations and, similarly, requires new methods. Thus, fully realizing the open architecture vision requires a new paradigm for test and evaluation. We propose such a paradigm.
Simultaneous Transmit and Receive Performance of an 8-channel Digital Phased Array
2017-01-16
The Aperture-Level Simultaneous Transmit and Receive (ALSTAR) architecture, proposed in [1], enables simultaneous transmit and receive (STAR) functionality in a fully digital phased array without the use of specialized…
Parallel performance investigations of an unstructured mesh Navier-Stokes solver
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.
2000-01-01
A Reynolds-averaged Navier-Stokes solver based on unstructured mesh techniques for analysis of high-lift configurations is described. The method makes use of an agglomeration multigrid solver for convergence acceleration. Implicit line-smoothing is employed to relieve the stiffness associated with highly stretched meshes. A GMRES technique is also implemented to speed convergence at the expense of additional memory usage. The solver is cache efficient and fully vectorizable, and is parallelized using a two-level hybrid MPI-OpenMP implementation suitable for shared and/or distributed memory architectures, as well as clusters of shared memory machines. Convergence and scalability results are illustrated for various high-lift cases.
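A rough Python analogue of the two-level hybrid parallelization described above, with mpi4py standing in for MPI across nodes and a thread pool standing in for OpenMP within a node. The smoothing kernel is a toy stand-in for the solver's sweeps, not the Navier-Stokes code itself.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each MPI rank owns one mesh partition (here a random stand-in).
local = np.random.default_rng(rank).random(10000)

def smooth(chunk):
    # Stand-in for an implicit line-smoothing sweep on a sub-block.
    return 0.5 * (chunk + chunk.mean())

# Intra-node parallelism: a thread pool playing the role of OpenMP.
with ThreadPoolExecutor(max_workers=4) as pool:
    parts = np.array_split(local, 4)
    local = np.concatenate(list(pool.map(smooth, parts)))

# Inter-node reduction: e.g. a global residual norm across all ranks.
res = comm.allreduce(float(np.sum(local ** 2)), op=MPI.SUM)
if rank == 0:
    print("global residual:", np.sqrt(res))
```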
Architecture-Centric Development in Globally Distributed Projects
NASA Astrophysics Data System (ADS)
Sauer, Joachim
In this chapter architecture-centric development is proposed as a means to strengthen the cohesion of distributed teams and to tackle challenges due to geographical and temporal distances and the clash of different cultures. A shared software architecture serves as blueprint for all activities in the development process and ties them together. Architecture-centric development thus provides a plan for task allocation, facilitates the cooperation of globally distributed developers, and enables continuous integration reaching across distributed teams. Advice is also provided for software architects who work with distributed teams in an agile manner.
Associative architecture for image processing
NASA Astrophysics Data System (ADS)
Adar, Rutie; Akerib, Avidan
1997-09-01
This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the Xium™-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, and in a 0.6 micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the Xium™-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.
New Generation Power System for Space Applications
NASA Technical Reports Server (NTRS)
Jones, Loren; Carr, Greg; Deligiannis, Frank; Lam, Barbara; Nelson, Ron; Pantaleon, Jose; Ruiz, Ian; Treicler, John; Wester, Gene; Sauers, Jim;
2004-01-01
The Deep Space Avionics (DSA) Project is developing a new generation of power system building blocks. Using application-specific integrated circuits (ASICs) and power switching modules, a scalable power system can be constructed for use on multiple deep space missions, including future missions to Mars, comets, Jupiter and its moons. The key developments of the DSA power system effort are five power ASICs and a module for power switching. These components enable a modular and scalable design approach, which can result in a wide variety of power system architectures to meet diverse mission requirements and environments. Each component is radiation hardened to one megarad total dose. The power switching module can be used for power distribution to regular spacecraft loads, to propulsion valves, and for actuation of pyrotechnic devices. The number of switching elements per load, pyrotechnic firings, and valve drivers can be scaled depending on mission needs. Telemetry data is available from the switch module via an I2C data bus. The DSA power system components enable power management and distribution for a variety of power buses and power system architectures employing different types of energy storage and power sources. This paper will describe each power ASIC's key performance characteristics as well as recent prototype test results. The power switching module test results will be discussed and will demonstrate its versatility as a multipurpose switch. Finally, the combination of these components will illustrate some of the possible power system architectures achievable, from small single-string systems to large fully redundant systems.
CHARACTERIZATION OF THE COMPLETE FIBER NETWORK TOPOLOGY OF PLANAR FIBROUS TISSUES AND SCAFFOLDS
D'Amore, Antonio; Stella, John A.; Wagner, William R.; Sacks, Michael S.
2010-01-01
Understanding how engineered tissue scaffold architecture affects cell morphology, metabolism, and phenotypic expression, as well as predicting material mechanical behavior, has recently received increased attention. In the present study, an image-based analysis approach that provides an automated tool to characterize engineered tissue fiber network topology is presented. Micro-architectural features that fully defined fiber network topology were detected and quantified, including fiber orientation, connectivity, intersection spatial density, and diameter. Algorithm performance was tested using scanning electron microscopy (SEM) images of electrospun poly(ester urethane)urea (ES-PEUU) scaffolds. SEM images of rabbit mesenchymal stem cell (MSC) seeded collagen gel scaffolds and decellularized rat carotid arteries were also analyzed to further evaluate the ability of the algorithm to capture fiber network morphology regardless of scaffold type and the evaluated size scale. The image analysis procedure was validated qualitatively and quantitatively, comparing fiber network topology manually detected by human operators (n=5) with that automatically detected by the algorithm. Correlation values between manually detected and algorithm-detected results for the fiber angle distribution and for the fiber connectivity distribution were 0.86 and 0.93, respectively. Algorithm-detected fiber intersections and fiber diameter values were comparable (within the mean ± standard deviation) with those detected by human operators. This automated approach identifies and quantifies fiber network morphology as demonstrated for three relevant scaffold types and provides a means to: (1) guarantee objectivity, (2) significantly reduce analysis time, and (3) potentiate broader analysis of scaffold architecture effects on cell behavior and tissue development both in vitro and in vivo. PMID:20398930
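One of the micro-architectural measurements listed above, the fiber orientation distribution, can be sketched from image gradients as follows. The real pipeline also extracts connectivity, intersection density, and diameter; this fragment is an illustration only, not the published algorithm.

```python
import numpy as np

def orientation_histogram(img, n_bins=36):
    """Gradient-weighted histogram of fiber orientations (0-180 degrees)."""
    gy, gx = np.gradient(img.astype(float))
    # the fiber axis is perpendicular to the intensity gradient, hence +90
    angles = (np.degrees(np.arctan2(gy, gx)) + 90.0) % 180.0
    weights = np.hypot(gx, gy)              # strong edges dominate the vote
    hist, _ = np.histogram(angles, bins=n_bins, range=(0, 180),
                           weights=weights)
    return hist / max(hist.sum(), 1e-12)    # normalized orientation density
```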
Developing a Distributed Computing Architecture at Arizona State University.
ERIC Educational Resources Information Center
Armann, Neil; And Others
1994-01-01
Development of Arizona State University's computing architecture, designed to ensure that all new distributed computing pieces will work together, is described. Aspects discussed include the business rationale, the general architectural approach, characteristics and objectives of the architecture, specific services, and impact on the university…
A Flexible and Configurable Architecture for Automatic Control Remote Laboratories
ERIC Educational Resources Information Center
Kalúz, Martin; García-Zubía, Javier; Fikar, Miroslav; Cirka, Luboš
2015-01-01
In this paper, we propose a novel approach in hardware and software architecture design for implementation of remote laboratories for automatic control. In our contribution, we show the solution with flexible connectivity at back-end, providing features of multipurpose usage with different types of experimental devices, and fully configurable…
Reilly, Jamie; Garcia, Amanda; Binney, Richard J.
2016-01-01
Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210
Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki
2014-12-01
As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
Distributed Cooperation Solution Method of Complex System Based on MAS
NASA Astrophysics Data System (ADS)
Weijin, Jiang; Yuhui, Xu
To adapt reconfigurable fault diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of complex systems, this paper introduces multi-agent technology into complicated fault diagnosis and studies an integrated intelligent control system. Based on a hierarchical structuring of diagnostic decisions in modeling and a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent, and decision agent are analyzed; the organization and evolution of agents in the system are proposed; the corresponding conflict resolution algorithm is given; and a layered structure of abstract agents with public attributes is built. The system architecture is realized based on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in the distributed domain.
Fully decentralized estimation and control for a modular wheeled mobile robot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mutambara, A.G.O.; Durrant-Whyte, H.F.
2000-06-01
In this paper, the problem of fully decentralized data fusion and control for a modular wheeled mobile robot (WMR) is addressed. This is a vehicle system with nonlinear kinematics, distributed multiple sensors, and nonlinear sensor models. The problem is solved by applying fully decentralized estimation and control algorithms based on the extended information filter. This is achieved by deriving a modular, decentralized kinematic model, using plane motion kinematics to obtain the forward and inverse kinematics for a generalized simple wheeled vehicle. This model is then used in the decentralized estimation and control algorithms. WMR estimation and control are thus obtained locally using reduced-order models with reduced communication. When communication of information between nodes is carried out after every measurement (full-rate communication), the estimates and control signals obtained at each node are equivalent to those obtained by a corresponding centralized system. Transputer architecture is used as the basis for hardware and software design, as it supports the extensive communication and concurrency requirements that characterize modular and decentralized systems. The advantages of a modular WMR vehicle include scalability, application flexibility, low prototyping costs, and high reliability.
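The decentralization hinges on the information form of the filter, in which local measurement contributions simply add, so nodes only exchange information-matrix and information-vector increments. A minimal sketch of that fusion step, with invented linear sensor models standing in for the nonlinear WMR case:

```python
import numpy as np

def info_contribution(H, R, z):
    """Convert one node's measurement into information form."""
    Rinv = np.linalg.inv(R)
    return H.T @ Rinv @ H, H.T @ Rinv @ z   # (info matrix, info vector)

# prior in information form: Y = P^-1, y = Y @ x
Y = np.eye(2) * 0.1
y = np.zeros(2)

# two nodes observing the same 2D state through different sensors (toy models)
nodes = [
    (np.array([[1.0, 0.0]]), np.array([[0.5]]), np.array([1.2])),
    (np.array([[0.0, 1.0]]), np.array([[0.2]]), np.array([-0.4])),
]
for H, R, z in nodes:
    dY, dy = info_contribution(H, R, z)
    Y, y = Y + dY, y + dy       # communication = exchanging (dY, dy) only

x_hat = np.linalg.solve(Y, y)   # recover the fused state estimate
print(x_hat)
```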
Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems
NASA Astrophysics Data System (ADS)
Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof
The evolution of software architectures has led to the rising importance of the Service Oriented Architecture (SOA) concept. This architectural paradigm supports building flexible distributed service systems. In this paper, the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built around the idea of fuzzy control. The functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made with the use of a fuzzy-neural network.
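A hedged sketch of the fuzzy part of such a broker: triangular memberships over server load and link latency are aggregated into a suitability score per server, and the request goes to the best-scoring server. The rule weights and membership breakpoints are assumptions, and the neural component of the paper's broker is omitted.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suitability(load, latency_ms):
    low_load = tri(load, -0.01, 0.0, 0.6)        # rule 1: low load is good
    low_lat = tri(latency_ms, -1.0, 0.0, 80.0)   # rule 2: low latency is good
    return 0.6 * low_load + 0.4 * low_lat        # weighted rule aggregation

# current (load, latency) per server; values are illustrative
servers = {"s1": (0.3, 25.0), "s2": (0.7, 10.0)}
best = max(servers, key=lambda s: suitability(*servers[s]))
print(best)   # -> "s1": moderate load beats an overloaded, faster link
```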
Innovative HPC architectures for the study of planetary plasma environments
NASA Astrophysics Data System (ADS)
Amaya, Jorge; Wolf, Anna; Lembège, Bertrand; Zitz, Anke; Alvarez, Damian; Lapenta, Giovanni
2016-04-01
DEEP-ER is a European Commission funded project that develops a new type of High Performance Computing architecture. The revolutionary system is currently used by KU Leuven to study the effects of the solar wind on the global environments of the Earth and Mercury. The new architecture combines the versatility of Intel Xeon computing nodes with the power of the upcoming Intel Xeon Phi accelerators. Unlike classical heterogeneous HPC architectures, where it is customary to find CPUs and accelerators in the same computing nodes, in the DEEP-ER system CPU nodes are grouped together (the Cluster) and independently from the accelerator nodes (the Booster). The system is equipped with a state-of-the-art interconnection network, highly scalable and fast I/O, and a fail-recovery resiliency system. The final objective of the project is to introduce a scalable system that can be used to create the next generation of exascale supercomputers. The code iPic3D from KU Leuven is being adapted to this new architecture. This particle-in-cell code can now perform the computation of the electromagnetic fields in the Cluster while the particles are moved in the Booster. Using fast and scalable Xeon Phi accelerators in the Booster, we can introduce many more particles per cell in the simulation than is possible in the current generation of HPC systems, allowing fully kinetic plasmas to be calculated with very low interpolation noise. The system will be used to perform fully kinetic, low-noise, 3D simulations of the interaction of the solar wind with the magnetospheres of the Earth and Mercury. Preliminary simulations have been performed in other HPC centers in order to compare the results across different systems. In this presentation we show the complexity of the plasma flow around the planets, including the development of hydrodynamic instabilities at the flanks, the presence of the collisionless shock, the magnetosheath, the magnetopause, reconnection zones, the formation of the plasma sheet and the magnetotail, and the variation of ion/electron plasma flows when crossing these frontiers. The simulations also give access to detailed information about the particle dynamics and their velocity distributions at locations that can be used for comparison with satellite data.
Experimental measurement-device-independent quantum digital signatures.
Roberts, G L; Lucamarini, M; Yuan, Z L; Dynes, J F; Comandar, L C; Sharpe, A W; Shields, A J; Curty, M; Puthoor, I V; Andersson, E
2017-10-23
The development of quantum networks will be paramount for practical and secure telecommunications. These networks will need to sign and distribute information between many parties with information-theoretic security, requiring both quantum digital signatures (QDS) and quantum key distribution (QKD). Here, we introduce and experimentally realise a quantum network architecture where the nodes are fully connected using a minimum number of physical links. The central node of the network can act either as a totally untrusted relay, connecting the end users via the recently introduced measurement-device-independent (MDI)-QKD, or as a trusted recipient directly communicating with the end users via QKD. Using this network, we perform a proof-of-principle demonstration of QDS mediated by MDI-QKD. For that, we devised an efficient protocol to distil multiple signatures from the same block of data, thus reducing the statistical fluctuations in the sample and greatly enhancing the final QDS rate in the finite-size scenario.
New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976
Zhao, Jiangsan; Rewald, Boris; Leitner, Daniel; Nagel, Kerstin A.; Nakhforoosh, Alireza
2017-01-01
Root phenotyping provides trait information for plant breeding. A shortcoming of high-throughput root phenotyping is the limitation to seedling plants and the failure to make inferences about mature root systems. We suggest root system architecture (RSA) models to predict mature root traits and overcome the inference problem. Sixteen pea genotypes were phenotyped in (i) seedling (Petri dishes) and (ii) mature (sand-filled columns) root phenotyping platforms. The RSA model RootBox was parameterized with seedling traits to simulate the fully developed root systems. Measured and modelled root length, first-order lateral number, and root distribution were compared to determine key traits for model-based prediction. No direct relationship in root traits (tap and lateral length, interbranch distance) was evident between phenotyping systems. RootBox significantly improved the inference across phenotyping platforms. Seedling plant tap and lateral root elongation rates and interbranch distance were sufficient model parameters to predict genotype ranking in total root length with a Spearman correlation of 0.83. Parameterization including uneven lateral spacing via a scaling function substantially improved the prediction of architectures underlying the differently sized root systems. We conclude that RSA models can solve the inference problem of seedling root phenotyping. RSA models should be included in the phenotyping pipeline to provide reliable information on mature root systems to breeding research. PMID:28168270
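An illustrative toy model (not RootBox itself) of how the three seedling parameters named above, tap elongation rate, lateral elongation rate, and interbranch distance, can be used to project mature total root length. All rates and the lateral emergence delay are invented for the example.

```python
def total_root_length(days, tap_rate_cm_d, lat_rate_cm_d, interbranch_cm,
                      lat_delay_d=2.0):
    """Project total root length after `days`, under a toy growth model."""
    tap = tap_rate_cm_d * days
    total = tap
    # a lateral emerges (after a delay) once the tap tip passes its
    # branching position, spaced every `interbranch_cm` along the tap
    n_lats = int(tap // interbranch_cm)
    for i in range(1, n_lats + 1):
        t_emerge = (i * interbranch_cm) / tap_rate_cm_d + lat_delay_d
        if days > t_emerge:
            total += lat_rate_cm_d * (days - t_emerge)
    return total

print(total_root_length(days=21, tap_rate_cm_d=2.0, lat_rate_cm_d=0.5,
                        interbranch_cm=0.8))
```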
Differentiated protection method in passive optical networks based on OPEX
NASA Astrophysics Data System (ADS)
Zhang, Zhicheng; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2011-12-01
Reliable service delivery is becoming more significant due to society's increasing dependency on electronic services. As the capability of PONs increases, both residential and business customers may be included in a single PON. Meanwhile, operational expenditure (OPEX) has been proven to be a very important factor in the total cost for a telecommunication operator. Thus, in this paper, we present a partial-protection PON architecture and compare the OPEX of fully duplicated protection and partly duplicated protection for ONUs with different distributed fiber lengths, reliability requirements, and penalty costs per hour. Finally, we propose a differentiated protection method to minimize OPEX.
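The OPEX comparison underlying differentiated protection can be sketched as weighing the expected yearly penalty of leaving an ONU's distribution fiber unprotected against the amortized cost of duplicating it; all failure rates, repair times, and costs below are assumptions, chosen only to show why business and residential customers can warrant different decisions.

```python
def expected_penalty(fiber_km, penalty_per_hour,
                     cable_cuts_per_km_year=0.05, repair_hours=12.0):
    """Expected yearly penalty cost of an unprotected distribution fiber."""
    downtime_h = fiber_km * cable_cuts_per_km_year * repair_hours
    return downtime_h * penalty_per_hour

def protect_is_worth_it(fiber_km, penalty_per_hour,
                        duplication_cost_per_km=900.0,
                        amortization_years=10.0):
    """Duplicate the fiber only if the avoided penalty exceeds its cost."""
    yearly_protection = fiber_km * duplication_cost_per_km / amortization_years
    return expected_penalty(fiber_km, penalty_per_hour) > yearly_protection

print(protect_is_worth_it(fiber_km=5.0, penalty_per_hour=200.0))  # business
print(protect_is_worth_it(fiber_km=5.0, penalty_per_hour=2.0))    # residential
```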
Li, Siqi; Jiang, Huiyan; Pang, Wenbo
2017-05-01
Accurate cell grading of cancerous tissue pathological images is of great importance in medical diagnosis and treatment. This paper proposes a joint multiple fully connected convolutional neural network with extreme learning machine (MFC-CNN-ELM) architecture for hepatocellular carcinoma (HCC) nuclei grading. First, in the preprocessing stage, each grayscale image patch of fixed size is obtained using the center-proliferation segmentation (CPS) method, and the corresponding labels are marked under the guidance of three pathologists. Next, a multiple fully connected convolutional neural network (MFC-CNN) is designed to extract the multi-form feature vectors of each input image automatically, taking sufficient account of the multi-scale contextual information in deep layer maps. After that, a convolutional neural network extreme learning machine (CNN-ELM) model is proposed to grade HCC nuclei. Finally, a back propagation (BP) algorithm, which contains a new up-sampling method, is utilized to train the MFC-CNN-ELM architecture. The experimental comparison results demonstrate that our proposed MFC-CNN-ELM has superior performance compared with related works for HCC nuclei grading. Meanwhile, external validation using the ICPR 2014 HEp-2 cell dataset shows the good generalization of our MFC-CNN-ELM architecture.
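The ELM head that sits on top of the CNN features can be sketched as a fixed random hidden layer followed by a regularized least-squares solve for the output weights, which is the only "training" an ELM needs. Feature shapes and labels below are toy stand-ins for the CNN feature vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 64))              # stand-in for CNN feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy binary grading labels
T = np.eye(2)[y]                            # one-hot targets

W = rng.normal(size=(64, 256))              # random input weights, never trained
b = rng.normal(size=256)
H = np.tanh(X @ W + b)                      # hidden layer activations

# output weights via a regularized pseudo-inverse (closed-form solve)
beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(256), H.T @ T)
pred = (H @ beta).argmax(axis=1)
print("train accuracy:", (pred == y).mean())
```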
Sensing and Measurement Architecture for Grid Modernization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taft, Jeffrey D.; De Martini, Paul
2016-02-01
This paper addresses architecture for grid sensor networks, with primary emphasis on distribution grids. It describes a forward-looking view of sensor network architecture for advanced distribution grids, and discusses key regulatory, financial, and planning issues.
Characterization of architectural distortion on mammograms using a linear energy detector
NASA Astrophysics Data System (ADS)
Alvarez, Jorge; Narváez, Fabián.; Poveda, César; Romero, Eduardo
2013-11-01
Architectural distortion is a breast cancer sign, characterized by spiculated patterns that define the disease's malignancy level. In this paper, the radial spiculae of a typical architectural distortion were characterized by a new strategy. First, previously selected regions of interest are divided into a set of parallel and disjoint bands (4 pixels wide by the ROI length), from which intensity integrals (coefficients) are calculated. This partition is rotated every two degrees, searching the phase plane for the characteristic radial spiculation. Then, these coefficients are used to construct a fully connected graph whose edges correspond to the integral values, or coefficients, and whose nodes correspond to x and y image positions. A centrality measure, the first eigenvector, is used to extract a set of discriminant coefficients that represent the locations with higher linear energy. Finally, the approach is trained using a set of 24 regions of interest obtained from the MIAS database, namely 12 architectural distortions and 12 controls. The first eigenvector is then used as input to a conventional Support Vector Machine classifier whose optimal parameters were obtained by leave-one-out cross validation. The whole method was assessed on a set of 12 RoIs with different distributions of breast tissue (normal and abnormal), and the classification results were compared against the ground truth already provided by the database, showing a precision of 0.583, a sensitivity of 0.833, and a specificity of 0.333.
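A sketch of the rotating band-integral feature extraction described above, assuming scipy for the rotation; the band width and angular step follow the abstract (4 pixels, 2 degrees), while everything else is an illustrative simplification of the published method.

```python
import numpy as np
from scipy.ndimage import rotate

def band_integrals(roi, band_px=4, step_deg=2):
    """Rotate the ROI in 2-degree steps and integrate intensity over
    disjoint parallel bands of `band_px` rows each."""
    feats = []
    for angle in range(0, 180, step_deg):
        r = rotate(roi, angle, reshape=False, order=1)
        n_bands = r.shape[0] // band_px
        bands = r[:n_bands * band_px].reshape(n_bands, band_px, -1)
        feats.append(bands.sum(axis=(1, 2)))   # one intensity integral per band
    return np.array(feats)                      # shape: (n_angles, n_bands)

roi = np.random.default_rng(0).random((64, 64))   # stand-in for a mammogram ROI
print(band_integrals(roi).shape)                  # (90, 16)
```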
NASA Technical Reports Server (NTRS)
Fijany, Amir (Inventor); Bejczy, Antal K. (Inventor)
1993-01-01
This is a real-time robotic controller and simulator which is a MIMD-SIMD parallel architecture for interfacing with an external host computer and providing a high degree of parallelism in computations for robotic control and simulation. It includes a host processor for receiving instructions from the external host computer and for transmitting answers to the external host computer. There are a plurality of SIMD microprocessors, each SIMD processor being a SIMD parallel processor capable of exploiting fine grain parallelism and further being able to operate asynchronously to form a MIMD architecture. Each SIMD processor comprises a SIMD architecture capable of performing two matrix-vector operations in parallel while fully exploiting parallelism in each operation. There is a system bus connecting the host processor to the plurality of SIMD microprocessors and a common clock providing a continuous sequence of clock pulses. There is also a ring structure interconnecting the plurality of SIMD microprocessors and connected to the clock for providing the clock pulses to the SIMD microprocessors and for providing a path for the flow of data and instructions between the SIMD microprocessors. The host processor includes logic for controlling the RRCS by interpreting instructions sent by the external host computer, decomposing the instructions into a series of computations to be performed by the SIMD microprocessors, using the system bus to distribute associated data among the SIMD microprocessors, and initiating activity of the SIMD microprocessors to perform the computations on the data by procedure call.
ESPC Common Model Architecture
2014-09-30
The National Unified Operational Prediction Capability (NUOPC) was established between NOAA and the Navy to develop a common software architecture for easy and efficient development under a common model architecture and other software-related standards in this project. NUOPC proposes to accelerate…
Integrated Nationwide Electronic Health Records system: Semi-distributed architecture approach.
Fragidis, Leonidas L; Chatzoglou, Prodromos D; Aggelidis, Vassilios P
2016-11-14
The integration of heterogeneous electronic health record systems by building an interoperable nationwide electronic health record system provides indisputable benefits in health care, such as superior health information quality, prevention of medical errors, and cost savings. This paper proposes a semi-distributed system architecture approach for an integrated national electronic health record system, incorporating the advantages of the two dominant approaches: the centralized architecture and the distributed architecture. The high-level design of the main elements of the proposed architecture is provided, along with diagrams of execution and operation and of the data synchronization architecture for the proposed solution. The proposed approach effectively handles issues related to redundancy, consistency, security, privacy, availability, load balancing, maintainability, complexity, and interoperability of citizens' health data. The proposed semi-distributed architecture offers a robust interoperability framework without requiring healthcare providers to change their local EHR systems. It is a pragmatic approach that takes into account the characteristics of the Greek national healthcare system, along with the national public administration data communication network infrastructure, to achieve EHR integration at an acceptable implementation cost.
2017-12-01
System Architecture to Investigate the Impact of Integrated Air and Missile Defense in a Distributed Lethality Environment, by Justin K. Davis.
2016-09-15
This research quantitatively addresses the impact of the proposed benefits of a 3D printed satellite architecture on the subsystems of a CubeSat. The objective is to bring a quantitative analysis to the discussion of whether a fully 3D printed satellite is feasible, allowing manufacturers to quantitatively assess what impact the architecture would have on the subsystems of a CubeSat.
An AI Approach to Ground Station Autonomy for Deep Space Communications
NASA Technical Reports Server (NTRS)
Fisher, Forest; Estlin, Tara; Mutz, Darren; Paal, Leslie; Law, Emily; Stockett, Mike; Golshan, Nasser; Chien, Steve
1998-01-01
This paper describes an architecture for an autonomous deep space tracking station (DS-T). The architecture targets fully automated routine operations, encompassing scheduling and resource allocation, antenna and receiver predict generation, track procedure generation from service requests, and closed-loop control and error recovery for the station subsystems. This architecture has been validated by the construction of a prototype DS-T station, which has performed a series of demonstrations of autonomous ground station control for downlink services with NASA's Mars Global Surveyor (MGS).
2007-11-01
The available architecture for time and synchronization information distribution was at that time implemented with a single Master Clock, following a hierarchical approach. Analyzing this architecture, it is clear that there is signal performance degradation due to the distribution chain. A second time distribution architecture is implemented via GNSS; the main difference with respect to the previous one is that all the…
Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés
2015-02-25
This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and optimizing control using quality of control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support for measuring QoS and QoC parameters. The novelty consists of simultaneously using the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of jointly using QoS and QoC parameters in distributed control systems.
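A generic illustration of jointly using a QoS measure (observed message latency) and a QoC measure (accumulated control error) to trigger event-based control actions. This is not FSACtrl's Event Based Quality Integral Cycle itself; the budgets, weighting, and trigger rule are all assumptions.

```python
class EventTrigger:
    """Fire a control event when the QoC error integral, inflated by a
    QoS latency penalty, exhausts the allowed error budget."""

    def __init__(self, err_budget=1.0, lat_budget_s=0.05):
        self.err_budget, self.lat_budget_s = err_budget, lat_budget_s
        self.err_integral = 0.0

    def step(self, error, dt, latency_s):
        self.err_integral += abs(error) * dt          # QoC: error integral
        qos_penalty = latency_s / self.lat_budget_s   # QoS: relative latency
        # act earlier when the network is slow, to compensate transport delay
        if self.err_integral * (1.0 + qos_penalty) >= self.err_budget:
            self.err_integral = 0.0
            return True                               # send a control event
        return False

trig = EventTrigger()
for k in range(20):
    if trig.step(error=0.5, dt=0.1, latency_s=0.03):
        print(f"actuate at step {k}")
```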
NASA Astrophysics Data System (ADS)
Du, Jian; Sheng, Wanxing; Lin, Tao; Lv, Guangxian
2018-05-01
Nowadays, the smart distribution network has made tremendous progress, and business visualization has become ever more significant and indispensable. Based on a summary of traditional visualization technologies and the demands of the smart distribution network, a panoramic visualization application is proposed in this paper. The overall architecture, integrated architecture, and service architecture of the panoramic visualization application are first presented. Then, the architecture design and main functions of the panoramic visualization system are elaborated in depth. In addition, the key technologies related to the application are discussed briefly. Finally, two typical visualization scenarios in the smart distribution network, risk warning and fault self-healing, demonstrate that the panoramic visualization application is valuable for the operation and maintenance of the distribution network.
Nitzlnader, Michael; Falgenhauer, Markus; Gossy, Christian; Schreier, Günter
2015-01-01
Today, progress in biomedical research often depends on large, interdisciplinary research projects and tailored information and communication technology (ICT) support. In the context of the European Network for Cancer Research in Children and Adolescents (ENCCA) project, the exchange of data between data source (Source Domain) and data consumer (Consumer Domain) systems in a distributed computing environment needs to be facilitated. This work presents the requirements and the corresponding solution architecture of the Advanced Biomedical Collaboration Domain for Europe (ABCD-4-E). The proposed concept utilises public as well as private cloud systems, the Integrating the Healthcare Enterprise (IHE) framework, and web-based applications to provide the core capabilities in accordance with privacy and security needs. The utility of crucial parts of the concept was evaluated by a prototypical implementation. A discussion of the design indicates that the requirements of ENCCA are fully met. A whole-system demonstration is currently being prepared to verify that ABCD-4-E has the potential to evolve into a domain-bridging collaboration platform in the future.
Sengur, Abdulkadir; Akbulut, Yaman; Guo, Yanhui; Bajaj, Varun
2017-12-01
Electromyogram (EMG) signals contain useful information about neuromuscular diseases such as amyotrophic lateral sclerosis (ALS). ALS is a well-known neurodegenerative disease that progressively destroys motor neurons. In this paper, we propose a deep learning based method for efficient classification of ALS and normal EMG signals. Spectrogram, continuous wavelet transform (CWT), and smoothed pseudo Wigner-Ville distribution (SPWVD) have been employed for time-frequency (T-F) representation of EMG signals. A convolutional neural network (CNN) is employed to classify these features; the CNN architecture comprises two convolution layers, two pooling layers, a fully connected layer, and a loss function layer. The CNN is trained with the reinforcement sample learning strategy. The efficiency of the proposed implementation is tested on a publicly available EMG dataset containing 89 ALS and 133 normal EMG signals sampled at 24 kHz. Experimental results show 96.80% accuracy. The obtained results are also compared with other methods, demonstrating the superiority of the proposed method.
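A minimal PyTorch sketch of the described topology (two convolution layers, two pooling layers, one fully connected layer, and a loss layer) follows; the layer widths, kernel sizes, and 64x64 time-frequency input resolution are assumptions, not the paper's values.

```python
# Hedged sketch of the described CNN for T-F images of EMG signals.
# Channel counts, kernel sizes, and input resolution are illustrative.
import torch
import torch.nn as nn

class EMGClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):  # ALS vs. normal
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 13 * 13, num_classes)  # for 64x64 input

    def forward(self, x):                 # x: (batch, 1, 64, 64) T-F image
        f = self.features(x)
        return self.classifier(f.flatten(1))

model = EMGClassifier()
loss_fn = nn.CrossEntropyLoss()           # the "loss function layer"
logits = model(torch.randn(8, 1, 64, 64))
loss = loss_fn(logits, torch.randint(0, 2, (8,)))
```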
Mixing console design for telematic applications in live performance and remote recording
NASA Astrophysics Data System (ADS)
Samson, David J.
The development of a telematic mixing console addresses audio engineers' need for a fully integrated system architecture that improves efficiency and control for applications such as distributed performance and remote recording. Current state-of-the-art telematic performance systems rely on software-based interconnections with complex routing schemes that offer minimal flexibility or control over the key parameters needed to achieve a professional workflow. The lack of hardware-based control in the current model limits the full potential of both the engineer and the system. The new architecture provides a full-featured platform that, alongside customary features, integrates (1) surround panning capability for motorized, binaural manikin heads, as well as for all sources in the included auralization module, (2) self-labelling channel strips, responsive to change at all remote sites, (3) onboard round-trip latency monitoring, (4) synchronized remote audio recording and monitoring, and (5) flexible routing. These features, combined with robust parameter automation and precise analog control, will raise the standard for telematic systems and advance the development of networked audio systems for both research and professional audio markets.
NASA Technical Reports Server (NTRS)
Ticker, Ronald L.; Azzolini, John D.
2000-01-01
The study investigates NASA's Earth Science Enterprise needs for Distributed Spacecraft Technologies in the 2010-2025 timeframe. In particular, the study focused on the Earth Science Vision Initiative and extrapolation of the measurement architecture from the 2002-2010 time period. Earth Science Enterprise documents were reviewed, interviews were conducted with a number of Earth scientists and technologists, and fundamental principles of formation flying were explored. The results led to the development of four notional distributed spacecraft architectures (global constellations, virtual platforms, precision formation flying, and sensorwebs), which broadly and generically cover the distributed spacecraft architectures needed by Earth Science in the post-2010 era. These notional architectures are used to identify technology needs and drivers. Technology needs are subsequently grouped into five categories: systems and architecture development tools; miniaturization, production, manufacture, test, and calibration; data networks and information management; orbit control, planning, and operations; and launch and deployment. The current state of the art and expected developments are explored, and high-value technology areas are identified for possible future funding emphasis.
Project Integration Architecture: Distributed Lock Management, Deadlock Detection, and Set Iteration
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The migration of the Project Integration Architecture (PIA) to the distributed object environment of the Common Object Request Broker Architecture (CORBA) brings with it the nearly unavoidable requirements of multiaccessor, asynchronous operations. In order to maintain the integrity of data structures in such an environment, it is necessary to provide a locking mechanism capable of protecting the complex operations typical of the PIA architecture. This paper reports on the implementation of a locking mechanism to address that need. Additionally, the ancillary features necessary to make the distributed lock mechanism work are discussed.
Distributed information system architecture for Primary Health Care.
Grammatikou, M; Stamatelopoulos, F; Maglaris, B
2000-01-01
We present a distributed architectural framework for Primary Health Care (PHC) Centres. Distribution is handled through the introduction of the Roaming Electronic Health Care Record (R-EHCR) and the use of local caching and incremental update of a global index. The proposed architecture is designed to accommodate a specific PHC workflow model. Finally, we discuss a pilot implementation in progress, which is based on CORBA and web-based user interfaces. However, the conceptual architecture is generic and open to other middleware approaches like the DHE or HL7.
Pape-Haugaard, Louise; Frank, Lars
2011-01-01
A major obstacle to ensuring ubiquitous information in eHealth is the use of heterogeneous systems. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without sacrificing the characteristic features of traditional sustainable databases. The approach is first to review the traditional architecture of centralized and homogeneous distributed database computing, then to describe an architectural framework for achieving sustainability across disparate, i.e. heterogeneous, databases, concluding with a discussion. Through a method of using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for sustainable interoperability.
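A minimal sketch of the relaxed-ACID idea under stated assumptions: a cross-database update is decomposed into local, retryable steps plus a compensating action, so disparate systems converge to consistency without a global transaction. The service objects and method names here are hypothetical, not from the paper.

```python
# Sketch of relaxed ACID across heterogeneous eHealth databases: instead of a
# global two-phase commit, each step is local and failures are compensated.
class InMemoryEHR:
    """Stand-in for a heterogeneous EHR service (hypothetical interface)."""
    def __init__(self): self.store = {}
    def read(self, rid): return self.store[rid]
    def write(self, rid, data): self.store[rid] = data
    def delete(self, rid): self.store.pop(rid, None)
    def mark_transferred(self, rid):
        self.store[rid] = {**self.store[rid], "transferred": True}

def transfer_record(ehr_a, ehr_b, record_id):
    snapshot = ehr_a.read(record_id)
    ehr_b.write(record_id, snapshot)        # step 1: copy to the target system
    try:
        ehr_a.mark_transferred(record_id)   # step 2: flag at the source
    except Exception:
        ehr_b.delete(record_id)             # compensate instead of rolling back
        raise

a, b = InMemoryEHR(), InMemoryEHR()
a.write("p1", {"diagnosis": "pending"})
transfer_record(a, b, "p1")                 # eventually consistent across a and b
```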
Exploring Concepts of Operations for On-Demand Passenger Air Transportation
NASA Technical Reports Server (NTRS)
Nneji, Victoria Chibuogu; Stimpson, Alexander; Cummings, Mary; Goodrich, Kenneth H.
2017-01-01
In recent years, a surge of interest in "flying cars" for city commutes has led to rapid development of new technologies to help make them and similar on-demand mobility platforms a reality. To this end, this paper provides analyses of the stakeholders involved, their proposed operational concepts, and the hazards and regulations that must be addressed. Three system architectures emerged from the analyses, ranging from conventional air taxi to revolutionary fully autonomous aircraft operations, each with vehicle safety functions allocated differently between humans and machines. Advancements for enabling technologies such as distributed electric propulsion and artificial intelligence have had major investments and initial experimental success, but may be some years away from being deployed for on-demand passenger air transportation at scale.
Digital Avionics Information System (DAIS): Development and Demonstration.
1981-09-01
advances in technology. The DAIS architecture results in improved reliability and availability of avionics systems while at the same time reducing life ... (DAIS) represents a significant advance in the technology of avionics system architecture. DAIS is a total systems concept, exploiting standardization ... configurations and fully capable of accommodating new advances in technology. These fundamental system characteristics are described in this report; the
NASA Astrophysics Data System (ADS)
Wang, T.; Barbero, M.; Berdalovic, I.; Bespin, C.; Bhat, S.; Breugnon, P.; Caicedo, I.; Cardella, R.; Chen, Z.; Degerli, Y.; Egidos, N.; Godiot, S.; Guilloux, F.; Hemperek, T.; Hirono, T.; Krüger, H.; Kugathasan, T.; Hügging, F.; Marin Tobon, C. A.; Moustakas, K.; Pangaud, P.; Schwemling, P.; Pernegger, H.; Pohl, D.-L.; Rozanov, A.; Rymaszewski, P.; Snoeys, W.; Wermes, N.
2018-03-01
Depleted monolithic active pixel sensors (DMAPS), which exploit high-voltage and/or high-resistivity add-ons of modern CMOS technologies to achieve substantial depletion in the sensing volume, have proven to have high radiation tolerance towards the requirements of ATLAS in the high-luminosity LHC era. DMAPS integrating fast readout architectures are currently being developed as promising candidates for the outer pixel layers of the future ATLAS Inner Tracker, which will be installed during the phase II upgrade of ATLAS around the year 2025. In this work, two DMAPS prototype designs, named LF-Monopix and TJ-Monopix, are presented. LF-Monopix was fabricated in the LFoundry 150 nm CMOS technology, and TJ-Monopix has been designed in the TowerJazz 180 nm CMOS technology. Both chips employ the same readout architecture, i.e. the column drain architecture, whereas different sensor implementation concepts are pursued. The paper describes the two prototypes jointly, so that their technical differences and challenges can be addressed in direct comparison. First measurement results for LF-Monopix are also shown, demonstrating for the first time a fully functional fast readout DMAPS prototype implemented in the LFoundry technology.
The TENOR Architecture for Advanced Distributed Learning and Intelligent Training
2002-01-01
called TENOR, for Training Education Network on Request. There have been a number of recent learning systems developed that leverage off the Internet ... C. Tibaudo, J. Kristl and J. Schroeder, The TENOR Architecture for Advanced Distributed Learning and Intelligent Training, AIAA 2002-1054.
Integrating security in a group oriented distributed system
NASA Technical Reports Server (NTRS)
Reiter, Michael; Birman, Kenneth; Gong, LI
1992-01-01
A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.
Jeong, Seol Young; Jo, Hyeong Gon; Kang, Soon Ju
2015-01-01
Indoor location-based services (iLBS) are extremely dynamic and changeable, and include numerous resources and mobile devices. In particular, the network infrastructure requires support for high scalability in the indoor environment, and various resource lookups are requested concurrently and frequently from several locations based on the dynamic network environment. A traditional map-based centralized approach for iLBSs has several disadvantages: it requires global knowledge to maintain a complete geographic indoor map; the central server is a single point of failure; it can also cause low scalability and traffic congestion; and it is hard to adapt to a change of service area in real time. This paper proposes a self-organizing and fully distributed platform for iLBSs. The proposed self-organizing distributed platform provides a dynamic reconfiguration of locality accuracy and service coverage by expanding and contracting dynamically. In order to verify the suggested platform, scalability performance according to the number of inserted or deleted nodes composing the dynamic infrastructure was evaluated through a simulation similar to the real environment. PMID:26016908
Integrating Computing Resources: A Shared Distributed Architecture for Academics and Administrators.
ERIC Educational Resources Information Center
Beltrametti, Monica; English, Will
1994-01-01
Development and implementation of a shared distributed computing architecture at the University of Alberta (Canada) are described. Aspects discussed include design of the architecture, users' views of the electronic environment, technical and managerial challenges, and the campuswide human infrastructures needed to manage such an integrated…
Zhao, Jiangsan; Bodner, Gernot; Rewald, Boris; Leitner, Daniel; Nagel, Kerstin A; Nakhforoosh, Alireza
2017-02-01
Root phenotyping provides trait information for plant breeding. A shortcoming of high-throughput root phenotyping is the limitation to seedling plants and the failure to make inferences on mature root systems. We suggest root system architecture (RSA) models to predict mature root traits and overcome the inference problem. Sixteen pea genotypes were phenotyped in (i) seedling (Petri dishes) and (ii) mature (sand-filled columns) root phenotyping platforms. The RSA model RootBox was parameterized with seedling traits to simulate the fully developed root systems. Measured and modelled root length, first-order lateral number, and root distribution were compared to determine key traits for model-based prediction. No direct relationship in root traits (tap, lateral length, interbranch distance) was evident between phenotyping systems. RootBox significantly improved the inference over phenotyping platforms. Seedling-plant tap and lateral root elongation rates and interbranch distance were sufficient model parameters to predict genotype ranking in total root length, with an R_Spearman of 0.83. Parameterization including uneven lateral spacing via a scaling function substantially improved the prediction of architectures underlying the differently sized root systems. We conclude that RSA models can solve the inference problem of seedling root phenotyping. RSA models should be included in the phenotyping pipeline to provide reliable information on mature root systems to breeding research. © The Author 2017. Published by Oxford University Press on behalf of the Society for Experimental Biology.
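A back-of-the-envelope illustration of the inference step: predicting total root length, and hence genotype ranking, from the three seedling parameters the study found sufficient. This linear-growth toy ignores RootBox's stochastic geometry and the lateral-spacing scaling function; all numbers are invented.

```python
# Estimate total root length at time t from seedling-derived parameters:
# tap elongation rate, lateral elongation rate, and interbranch distance.
# A deliberately simplified stand-in for the RSA model, not RootBox itself.

def total_root_length(tap_rate, lat_rate, interbranch, t, lat_delay=2.0):
    tap_len = tap_rate * t                        # cm of tap root after t days
    n_laterals = max(tap_len / interbranch, 0.0)  # one lateral per branch site
    # Laterals emerge progressively; approximate their mean growing time.
    mean_lat_age = max(t - lat_delay, 0.0) / 2.0
    return tap_len + n_laterals * lat_rate * mean_lat_age

# Rank two hypothetical genotypes (tap_rate, lat_rate, interbranch) at day 20:
geno = {"A": (1.2, 0.4, 0.5), "B": (0.9, 0.6, 0.8)}
ranking = sorted(geno, key=lambda g: total_root_length(*geno[g], t=20), reverse=True)
print(ranking)
```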
Prototyping a Distributed Information Retrieval System That Uses Statistical Ranking.
ERIC Educational Resources Information Center
Harman, Donna; And Others
1991-01-01
Built using a distributed architecture, this prototype distributed information retrieval system uses statistical ranking techniques to provide better service to the end user. Distributed architecture was shown to be a feasible alternative to centralized or CD-ROM information retrieval, and user testing of the ranking methodology showed both…
Modeling meander morphodynamics over self-formed heterogeneous floodplains
NASA Astrophysics Data System (ADS)
Bogoni, Manuel; Putti, Mario; Lanzoni, Stefano
2017-06-01
This work addresses the signatures embedded in the planform geometry of meandering rivers consequent to the formation of floodplain heterogeneities as the river bends migrate. Two geomorphic features are specifically considered: scroll bars produced by lateral accretion of point bars at convex banks and oxbow lake fills consequent to neck cutoffs. The sedimentary architecture of these geomorphic units depends on the type and amount of sediment, and controls bank erodibility as the river impinges on them, favoring or contrasting the river migration. The geometry of numerically generated planforms obtained for different scenarios of floodplain heterogeneity is compared to that of natural meandering paths. Half meander metrics and spatial distribution of channel curvatures are used to disclose the complexity embedded in meandering geometry. Fourier Analysis, Principal Component Analysis, Singular Spectrum Analysis and Multivariate Singular Spectrum Analysis are used to emphasize the subtle but crucial differences which may emerge between apparently similar configurations. A closer similarity between observed and simulated planforms is attained when fully coupling flow and sediment dynamics (fully-coupled models) and when considering self-formed heterogeneities that are less erodible than the surrounding floodplain.
Scaling Support Vector Machines On Modern HPC Platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Yang; Fu, Haohuan; Song, Shuaiwen
2015-02-01
We designed and implemented MIC-SVM, a highly efficient parallel SVM for x86 based multicore and many-core architectures, such as the Intel Ivy Bridge CPUs and Intel Xeon Phi co-processor (MIC). We propose various novel analysis methods and optimization techniques to fully utilize the multilevel parallelism provided by these architectures and serve as general optimization methods for other machine learning tools.
An Autonomous Autopilot Control System Design for Small-Scale UAVs
NASA Technical Reports Server (NTRS)
Ippolito, Corey; Pai, Ganeshmadhav J.; Denney, Ewen W.
2012-01-01
This paper describes the design and implementation of a fully autonomous and programmable autopilot system for small-scale unmanned aerial vehicle (UAV) aircraft. This system was implemented in Reflection and has flown on the Exploration Aerial Vehicle (EAV) platform at NASA Ames Research Center, currently only as a safety backup for an experimental autopilot. The EAV and ground station are built on a component-based architecture called the Reflection Architecture, a prototype for a real-time embedded plug-and-play avionics system architecture that provides a transport layer for real-time communications between hardware and software components, allowing each component to focus solely on its implementation. The autopilot module described here, although developed in Reflection, contains no design elements dependent on this architecture.
Modular, Cost-Effective, Extensible Avionics Architecture for Secure, Mobile Communications
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2006-01-01
Current onboard communication architectures are based upon an all-in-one communications management unit. This unit and associated radio systems has regularly been designed as a one-off, proprietary system. As such, it lacks flexibility and cannot adapt easily to new technology, new communication protocols, and new communication links. This paper describes the current avionics communication architecture and provides a historical perspective of the evolution of this system. A new onboard architecture is proposed that allows full use of commercial-off-the-shelf technologies to be integrated in a modular approach thereby enabling a flexible, cost-effective and fully deployable design that can take advantage of ongoing advances in the computer, cryptography, and telecommunications industries.
A distributed parallel storage architecture and its potential application within EOSDIS
NASA Technical Reports Server (NTRS)
Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony
1994-01-01
We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operate in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.
Performance Analysis of Distributed Object-Oriented Applications
NASA Technical Reports Server (NTRS)
Schoeffler, James D.
1998-01-01
The purpose of this research was to evaluate the efficiency of a distributed simulation architecture that creates individual modules made self-scheduling through the use of a message-based communication system for requesting input data from the module that is the source of those data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments were run in which different systems were distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission could be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model, created during the first year of this grant, which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. That report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.
NASA Astrophysics Data System (ADS)
Xie, Mengying; Zhang, Yan; Kraśny, Marcin J.; Rhead, Andrew; Bowen, Chris; Arafa, Mustafa
2018-07-01
The energy harvesting capability of resonant harvesting structures, such as piezoelectric cantilever beams, can be improved by utilizing coupled oscillations that generate favourable strain mode distributions. In this work, we present the first demonstration of the use of a laminated carbon fibre reinforced polymer to create cantilever beams that undergo coupled bending-twisting oscillations for energy harvesting applications. Piezoelectric layers that operate in bending and shear mode are attached to the bend-twist coupled beam surface at locations of maximum bending and torsional strains in the first mode of vibration to fully exploit the strain distribution along the beam. Modelling of this new bend-twist harvesting system is presented, which compares favourably with experimental results. It is demonstrated that the variety of bend and torsional modes of the harvesters can be utilized to create a harvester that operates over a wider range of frequencies and such multi-modal device architectures provides a unique approach to tune the frequency response of resonant harvesting systems.
Architecture for distributed design and fabrication
NASA Astrophysics Data System (ADS)
McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.
1997-01-01
We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading-edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.
Distributed computing environments for future space control systems
NASA Technical Reports Server (NTRS)
Viallefont, Pierre
1993-01-01
The aim of this paper is to present the results of a CNES research project on distributed computing systems. The purpose of this research was to study the impact of the use of new computer technologies in the design and development of future space applications. The first part of this study was a state-of-the-art review of distributed computing systems. One of the interesting ideas arising from this review is the concept of a 'virtual computer' allowing the distributed hardware architecture to be hidden from a software application. The 'virtual computer' can improve system performance by adapting the best architecture (addition of computers) to the software application without having to modify its source code. This concept can also decrease the cost and obsolescence of the hardware architecture. In order to verify the feasibility of the 'virtual computer' concept, a prototype representative of a distributed space application is being developed independently of the hardware architecture.
NASA Astrophysics Data System (ADS)
Hegde, Ganapathi; Vaya, Pukhraj
2013-10-01
This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies filter bank (9, 7). This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency, and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture was synthesised using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
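For reference, the lifting factorization of the (9, 7) filter bank that such architectures pipeline can be sketched in a few lines; this software version shows only the arithmetic (with periodic boundary handling and the JPEG 2000 scaling convention), not the pipelined hardware.

```python
# One level of the 1-D lifting-scheme DWT for the Daubechies (9, 7) filter
# bank: two predict and two update steps, then scaling. Assumes an
# even-length input and periodic extension via np.roll (a simplification).
import numpy as np

A, B = -1.586134342, -0.052980119   # predict 1, update 1
G, D = 0.882911076, 0.443506852     # predict 2, update 2
K = 1.230174105                     # scaling (JPEG 2000 convention)

def dwt97_1d(x: np.ndarray):
    s, d = x[0::2].astype(float).copy(), x[1::2].astype(float).copy()
    d += A * (s + np.roll(s, -1))   # predict 1: d[n] += A*(s[n] + s[n+1])
    s += B * (d + np.roll(d, 1))    # update 1:  s[n] += B*(d[n-1] + d[n])
    d += G * (s + np.roll(s, -1))   # predict 2
    s += D * (d + np.roll(d, 1))    # update 2
    return s * K, d / K             # approximation and detail coefficients

approx, detail = dwt97_1d(np.arange(16, dtype=float))
```

Because each lifting step reads only neighbouring samples and writes in place, the storage requirement stays minimal, which is the property the hardware pipeline exploits.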
Dynamic Task Assignment of Autonomous Distributed AGV in an Intelligent FMS Environment
NASA Astrophysics Data System (ADS)
Fauadi, Muhammad Hafidz Fazli Bin Md; Lin, Hao Wen; Murata, Tomohiro
The need to implement distributed systems is growing significantly, as they have proven effective in keeping organizations flexible against a highly demanding market. Nevertheless, large technical gaps still need to be addressed to achieve significant gains. We propose a distributed architecture to control Automated Guided Vehicle (AGV) operation based on a multi-agent architecture. System architectures and agents' functions have been designed to support distributed control of AGVs. Furthermore, an enhanced agent communication protocol has been configured to accommodate the dynamic attributes of the AGV task assignment procedure. Results prove that the technique successfully provides a better solution.
Exemplar-Based Image and Video Stylization Using Fully Convolutional Semantic Features.
Zhu, Feida; Yan, Zhicheng; Bu, Jiajun; Yu, Yizhou
2017-05-10
Color and tone stylization in images and videos strives to enhance unique themes with artistic color and tone adjustments. It has a broad range of applications, from professional image postprocessing to photo sharing over social networks. Mainstream photo enhancement software, such as Adobe Lightroom and Instagram, provides users with predefined styles, which are often hand-crafted through a trial-and-error process. Such photo adjustment tools lack a semantic understanding of image contents, and the resulting global color transform limits the range of artistic styles they can represent. Stylistic enhancement, on the other hand, needs to apply distinct adjustments to various semantic regions; this ability enables a broader range of visual styles. In this paper, we first propose a novel deep learning architecture for exemplar-based image stylization, which learns local enhancement styles from image pairs. Our deep learning architecture consists of fully convolutional networks (FCNs) for automatic semantics-aware feature extraction and fully connected neural layers for adjustment prediction. Image stylization can be efficiently accomplished with a single forward pass through our deep network. To extend our deep network from image stylization to video stylization, we exploit temporal superpixels (TSPs) to facilitate the transfer of artistic styles from image exemplars to videos. Experiments on a number of datasets for image stylization as well as a diverse set of video clips demonstrate the effectiveness of our deep learning architecture.
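A hedged sketch of the two-stage design: a tiny FCN stands in for the semantics-aware feature extractor, and fully connected layers map each pixel's (feature, color) pair to an adjusted color in a single forward pass. Network sizes are illustrative assumptions, not the paper's architecture.

```python
# Sketch: FCN features + per-pixel fully connected adjustment predictor.
import torch
import torch.nn as nn

class StyleNet(nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.fcn = nn.Sequential(                  # semantics-aware features
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.adjust = nn.Sequential(               # per-pixel adjustment MLP
            nn.Linear(feat_dim + 3, 64), nn.ReLU(), nn.Linear(64, 3),
        )

    def forward(self, img):                        # img: (B, 3, H, W) in [0, 1]
        feats = self.fcn(img)
        x = torch.cat([feats, img], dim=1)         # concat features with color
        x = x.permute(0, 2, 3, 1)                  # pixels become MLP rows
        return self.adjust(x).permute(0, 3, 1, 2)  # stylized image

out = StyleNet()(torch.rand(1, 3, 64, 64))         # one forward pass per image
```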
Architecture earth-sheltered buildings: Design Manual 1.4
NASA Astrophysics Data System (ADS)
1984-03-01
Design guidance is presented for use by experienced engineers and architects. The types of buildings within the scope of this manual include slab-on-grade, partially-buried (bermed) or fully-buried, and large (single-story or multistory) structures. New criteria unique to earth-sheltered design are included for the following disciplines: Planning, Landscape Design, Life-Cycle Analysis, Architectural, Structural, Mechanical (criteria include below-grade heat flux calculation procedures), and Electrical.
Fully parallel write/read in resistive synaptic array for accelerating on-chip learning
NASA Astrophysics Data System (ADS)
Gao, Ligang; Wang, I.-Ting; Chen, Pai-Yu; Vrudhula, Sarma; Seo, Jae-sun; Cao, Yu; Hou, Tuo-Hung; Yu, Shimeng
2015-11-01
A neuro-inspired computing paradigm beyond the von Neumann architecture is emerging and it generally takes advantage of massive parallelism and is aimed at complex tasks that involve intelligence and learning. The cross-point array architecture with synaptic devices has been proposed for on-chip implementation of the weighted sum and weight update in the learning algorithms. In this work, forming-free, silicon-process-compatible Ta/TaOx/TiO2/Ti synaptic devices are fabricated, in which >200 levels of conductance states could be continuously tuned by identical programming pulses. In order to demonstrate the advantages of parallelism of the cross-point array architecture, a novel fully parallel write scheme is designed and experimentally demonstrated in a small-scale crossbar array to accelerate the weight update in the training process, at a speed that is independent of the array size. Compared to the conventional row-by-row write scheme, it achieves >30× speed-up and >30× improvement in energy efficiency as projected in a large-scale array. If realistic synaptic device characteristics such as device variations are taken into an array-level simulation, the proposed array architecture is able to achieve ∼95% recognition accuracy of MNIST handwritten digits, which is close to the accuracy achieved by software using the ideal sparse coding algorithm.
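The claimed speed-up follows from replacing a row-by-row write with a single outer-product write; the NumPy sketch below contrasts the two schemes (the crossbar performs the parallel variant in one write cycle regardless of array size; the serial loop here only models its mathematical equivalent).

```python
# Row-by-row vs. fully parallel (outer-product) weight update in a crossbar.
import numpy as np

def update_row_by_row(W, x, delta):
    for i in range(W.shape[0]):          # one write cycle per row
        W[i, :] += delta[i] * x
    return W

def update_fully_parallel(W, x, delta):
    return W + np.outer(delta, x)        # one write cycle for the whole array

rng = np.random.default_rng(0)
W = rng.random((128, 128)); x = rng.random(128); delta = rng.random(128)
assert np.allclose(update_row_by_row(W.copy(), x, delta),
                   update_fully_parallel(W, x, delta))
```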
32 bit digital optical computer - A hardware update
NASA Technical Reports Server (NTRS)
Guilfoyle, Peter S.; Carter, James A., III; Stone, Richard V.; Pape, Dennis R.
1990-01-01
Such state-of-the-art devices as multielement linear laser diode arrays, multichannel acoustooptic modulators, optical relays, and avalanche photodiode arrays, are presently applied to the implementation of a 32-bit supercomputer's general-purpose optical central processing architecture. Shannon's theorem, Morozov's control operator method (in conjunction with combinatorial arithmetic), and DeMorgan's law have been used to design an architecture whose 100 MHz clock renders it fully competitive with emerging planar-semiconductor technology. Attention is given to the architecture's multichannel Bragg cells, thermal design and RF crosstalk considerations, and the first and second anamorphic relay legs.
Authentication and Authorization of End User in Microservice Architecture
NASA Astrophysics Data System (ADS)
He, Xiuyu; Yang, Xudong
2017-10-01
As markets and business continue to expand, the traditional monolithic architecture faces more and more challenges, and the development of cloud computing and container technology has made the microservice architecture increasingly popular. While the low coupling, fine granularity, scalability, flexibility, and independence of the microservice architecture bring convenience, the inherent complexity of distributed systems makes the security of a microservice architecture both important and difficult. This paper studies the authentication and authorization of the end user under the microservice architecture. By comparison with traditional measures and a survey of existing technology, this paper puts forward a set of authentication and authorization strategies suitable for the microservice architecture, such as distributed sessions, SSO solutions, client-side JSON Web Tokens (JWT), and JWT + API Gateway, and summarizes the advantages and disadvantages of each method.
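As an illustration of the JWT + API Gateway strategy mentioned above, the sketch below verifies a token once at the gateway and forwards only validated claims downstream. It uses the PyJWT library; the shared secret and claim names are illustrative, and a production gateway would typically use asymmetric keys rather than a shared secret.

```python
# JWT + API Gateway sketch: the gateway authenticates and authorizes the end
# user, so downstream microservices can trust the forwarded claims.
import jwt  # PyJWT (pip install PyJWT), 2.x API

SECRET = "gateway-shared-secret"  # illustrative; prefer RS256 key pairs

def issue_token(user_id: str, roles: list[str]) -> str:
    return jwt.encode({"sub": user_id, "roles": roles}, SECRET, algorithm="HS256")

def gateway_authorize(token: str, required_role: str) -> dict:
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])  # raises if invalid
    if required_role not in claims.get("roles", []):
        raise PermissionError("insufficient role")
    return claims  # forwarded downstream; services need not re-authenticate

claims = gateway_authorize(issue_token("alice", ["billing"]), "billing")
print(claims["sub"])
```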
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction is given to plasma simulation using computers and the difficulties encountered on currently available computers. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.
Hendrikson, Wim J; Deegan, Anthony J; Yang, Ying; van Blitterswijk, Clemens A; Verdonschot, Nico; Moroni, Lorenzo; Rouwkema, Jeroen
2017-01-01
Scaffolds for regenerative medicine applications should instruct cells with the appropriate signals, including biophysical stimuli such as stress and strain, to form the desired tissue. Apart from that, scaffolds, especially for load-bearing applications, should be capable of providing mechanical stability. Since both scaffold strength and stress-strain distributions throughout the scaffold depend on the scaffold's internal architecture, it is important to understand how changes in architecture influence these parameters. In this study, four scaffold designs with different architectures were produced using additive manufacturing. The designs varied in fiber orientation, while fiber diameter, spacing, and layer height remained constant. Based on micro-CT (μCT) scans, finite element models (FEMs) were derived for finite element analysis (FEA) and computational fluid dynamics (CFD). FEA of scaffold compression was validated using μCT scan data of compressed scaffolds. Results of the FEA and CFD showed a significant impact of scaffold architecture on fluid shear stress and mechanical strain distribution. The average fluid shear stress ranged from 3.6 mPa for a 0/90 architecture to 6.8 mPa for a 0/90 offset architecture, and the surface shear strain from 0.0096 for a 0/90 offset architecture to 0.0214 for a 0/90 architecture. This subsequently resulted in variations of the predicted cell differentiation stimulus values on the scaffold surface. Fluid shear stress was mainly influenced by pore shape and size, while mechanical strain distribution depended mainly on the presence or absence of supportive columns in the scaffold architecture. Together, these results corroborate that scaffold architecture can be exploited to design scaffolds with regions that guide specific tissue development under compression and perfusion. In conjunction with optimization of stimulation regimes during bioreactor cultures, scaffold architecture optimization can be used to improve scaffold design for tissue engineering purposes.
NASA Technical Reports Server (NTRS)
Corker, Kevin M.; Smith, Barry R.
1993-01-01
The process of designing crew stations for large-scale, complex automated systems is made difficult because of the flexibility of roles that the crew can assume, and by the rapid rate at which system designs become fixed. Modern cockpit automation frequently involves multiple layers of control and display technology in which human operators must exercise equipment in augmented, supervisory, and fully automated control modes. In this context, we maintain that effective human-centered design is dependent on adequate models of human/system performance in which representations of the equipment, the human operator(s), and the mission tasks are available to designers for manipulation and modification. The joint Army-NASA Aircrew/Aircraft Integration (A3I) Program, with its attendant Man-machine Integration Design and Analysis System (MIDAS), was initiated to meet this challenge. MIDAS provides designers with a test bed for analyzing human-system integration in an environment in which both cognitive human function and 'intelligent' machine function are described in similar terms. This distributed object-oriented simulation system, its architecture and assumptions, and our experiences from its application in advanced aviation crew stations are described.
NASA Astrophysics Data System (ADS)
Alameh, N.; Bambacus, M.; Cole, M.
2006-12-01
NASA's Earth Science as well as interdisciplinary research and applications activities require access to earth observations, analytical models, and specialized tools and services from diverse distributed sources. Interoperability and open standards for geospatial data access and processing greatly facilitate such access among the information and processing components related to spacecraft, airborne, and in situ sensors; predictive models; and decision support tools. To support this mission, NASA's Geosciences Interoperability Office (GIO) has been developing the Earth Science Gateway (ESG; online at http://esg.gsfc.nasa.gov) by adapting and deploying a standards-based commercial product. Thanks to extensive use of open standards, ESG can tap into a wide array of online data services, serve a variety of audiences and purposes, and adapt to technology and business changes. Most importantly, the use of open standards allows ESG to function as a platform within a larger context of distributed geoscience processing, such as the Global Earth Observing System of Systems (GEOSS). ESG shares the goals of GEOSS to ensure that observations and products shared by users will be accessible, comparable, and understandable by relying on common standards and adaptation to user needs. By maximizing interoperability, modularity, extensibility, and scalability, ESG's architecture fully supports the stated goals of GEOSS. As such, ESG's role extends beyond that of a gateway to NASA science data to become a shared platform that can be leveraged by GEOSS via: a modular and extensible architecture; consensus and community-based standards (e.g., ISO and OGC standards); a variety of clients and visualization techniques, including WorldWind and Google Earth; a variety of services (including catalogs) with standard interfaces; data integration and interoperability; and mechanisms for user involvement and collaboration, including support for interdisciplinary and domain-specific applications. ESG has played a key role in recent GEOSS Service Network (GSN) demos and workshops, acting not only as a service and data catalog and discovery client, but also as a portrayal and visualization client for distributed data.
DeepX: Deep Learning Accelerator for Restricted Boltzmann Machine Artificial Neural Networks.
Kim, Lok-Won
2018-05-01
Although there have been many decades of research and commercial presence on high-performance general-purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully applied in a wide variety of domains, but its heavy computational demand has considerably limited its practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of the restricted Boltzmann machine (RBM), an artificial neural network (ANN). The implemented RBM ANN accelerator (integrating network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) on a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301 billion connection-updates-per-second, about 193 times higher than a software solution running on general-purpose processors. Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).
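The computational kernel such an accelerator pipelines is the RBM connection update; a plain NumPy version of one contrastive-divergence (CD-1) batch update is sketched below (biases omitted, sizes illustrative) to show where the matrix products, and hence the FPGA's throughput advantage, come from.

```python
# One CD-1 batch update of an RBM weight matrix (biases omitted for brevity).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(W, v0, rng, lr=0.1):
    h0 = sigmoid(v0 @ W)                                  # hidden probabilities
    h_samp = (rng.random(h0.shape) < h0).astype(float)    # sampled hidden states
    v1 = sigmoid(h_samp @ W.T)                            # reconstruction
    h1 = sigmoid(v1 @ W)
    # positive minus negative phase statistics = the "connection updates"
    return W + lr * (v0.T @ h0 - v1.T @ h1) / len(v0)

rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((256, 256))
batch = rng.random((128, 256))                            # 128 cases per batch
W = cd1_update(W, batch, rng)
```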
Learning to classify in large committee machines
NASA Astrophysics Data System (ADS)
O'kane, Dominic; Winther, Ole
1994-10-01
The ability of a two-layer neural network to learn a specific non-linearly-separable classification task, the proximity problem, is investigated using a statistical mechanics approach. Both the tree and fully connected architectures are investigated in the limit where the number K of hidden units is large, but still much smaller than the number N of inputs. Both have continuous weights. Within the replica symmetric ansatz, we find that for zero temperature training, the tree architecture exhibits a strong overtraining effect. For nonzero temperature the asymptotic error is lowered, but it is still higher than the corresponding value for the simple perceptron. The fully connected architecture is considered for two regimes. First, for a finite number of examples we find a symmetry among the hidden units as each performs equally well. The asymptotic generalization error is finite, and minimal for T → ∞ where it goes to the same value as for the simple perceptron. For a large number of examples we find a continuous transition to a phase with broken hidden-unit symmetry, which has an asymptotic generalization error equal to zero.
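To fix notation, the two architectures being compared can be sketched as a forward pass: the tree committee machine gives each of the K hidden units a disjoint block of the N inputs, while the fully connected one shares all inputs; both output the sign of the committee vote. This is only the network definition, not the statistical mechanics analysis.

```python
# Forward pass of tree vs. fully connected committee machines (NumPy sketch).
import numpy as np

def tree_committee(x, W):              # W: (K, N//K), disjoint receptive fields
    K = W.shape[0]
    fields = x.reshape(K, -1)          # split the N inputs among K hidden units
    return np.sign(np.sign((W * fields).sum(axis=1)).sum())

def fully_connected_committee(x, W):   # W: (K, N), every unit sees all inputs
    return np.sign(np.sign(W @ x).sum())

rng = np.random.default_rng(1)
N, K = 120, 5                          # K large but K << N in the analyzed limit
x = rng.standard_normal(N)
print(tree_committee(x, rng.standard_normal((K, N // K))),
      fully_connected_committee(x, rng.standard_normal((K, N))))
```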
Using an Integrated Distributed Test Architecture to Develop an Architecture for Mars
NASA Technical Reports Server (NTRS)
Othon, William L.
2016-01-01
The creation of a crew-rated spacecraft architecture capable of sending humans to Mars requires the development and integration of multiple vehicle systems and subsystems. Important new technologies will be identified and matured within each technical discipline to support the mission. Architecture maturity also requires coordination with mission operations elements and ground infrastructure. During early architecture formulation, many of these assets will not be co-located and will required integrated, distributed test to show that the technologies and systems are being developed in a coordinated way. When complete, technologies must be shown to function together to achieve mission goals. In this presentation, an architecture will be described that promotes and advances integration of disparate systems within JSC and across NASA centers.
Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture
NASA Technical Reports Server (NTRS)
Behbahani, Alireza; Culley, Dennis; Garg, Sanjay; Millar, Richard; Smith, Bert; Wood, Jim; Mahoney, Tim; Quinn, Ronald; Carpenter, Sheldon; Mailander, Bill;
2007-01-01
A Distributed Engine Control Working Group (DECWG) consisting of the Department of Defense (DoD), the National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC), and industry has been formed to examine the current and future requirements of propulsion engine systems. The scope of this study will include an assessment of the paradigm shift from centralized engine control architecture to an architecture based on distributed control utilizing open system standards. Included will be a description of the work begun in the 1990s, which continues today, followed by the identification of the remaining technical challenges which present barriers to on-engine distributed control.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalsi, Karan; Fuller, Jason C.; Somani, Abhishek
Disclosed herein are representative embodiments of methods, apparatus, and systems for facilitating operation and control of a resource distribution system (such as a power grid). Among the disclosed embodiments is a distributed hierarchical control architecture (DHCA) that enables smart grid assets to effectively contribute to grid operations in a controllable manner, while helping to ensure system stability and equitably rewarding their contribution. Embodiments of the disclosed architecture can help unify the dispatch of these resources to provide both market-based and balancing services.
Programming model for distributed intelligent systems
NASA Technical Reports Server (NTRS)
Sztipanovits, J.; Biegl, C.; Karsai, G.; Bogunovic, N.; Purves, B.; Williams, R.; Christiansen, T.
1988-01-01
A programming model and architecture which was developed for the design and implementation of complex, heterogeneous measurement and control systems is described. The Multigraph Architecture integrates artificial intelligence techniques with conventional software technologies, offers a unified framework for distributed and shared memory based parallel computational models and supports multiple programming paradigms. The system can be implemented on different hardware architectures and can be adapted to strongly different applications.
Energy Management of Smart Distribution Systems
NASA Astrophysics Data System (ADS)
Ansari, Bananeh
Electric power distribution systems interface the end-users of electricity with the power grid. Traditional distribution systems are operated in a centralized fashion, with the distribution system owner or operator being the only decision maker. The management and control architecture of distribution systems needs to gradually transform to accommodate emerging smart grid technologies, distributed energy resources, and active electricity end-users, or prosumers. This document develops multi-task, multi-objective energy management schemes for: 1) commercial/large residential prosumers, and 2) the distribution system operator of a smart distribution system. The first part describes a method for distributed energy management of multiple commercial/large residential prosumers. These prosumers not only consume electricity but also generate it using their rooftop solar photovoltaic systems. When photovoltaic generation exceeds local consumption, excess electricity is fed into the distribution system, creating a voltage rise along the feeder that the distribution system operator cannot tolerate. Energy storage (ES) can help the prosumers manage their electricity exchanges with the distribution system such that minimal voltage fluctuation occurs. The proposed distributed energy management scheme sizes and schedules each prosumer's ES to reduce the electricity bill and mitigate voltage rise along the feeder. The second part focuses on emergency energy management and resilience assessment of a distribution system. The developed emergency energy management system uses available resources and redundancy to restore the distribution system's functionality fully or partially. The success of the restoration maneuver depends on how resilient the distribution system is; engineering resilience terminology is used to evaluate this. The proposed emergency energy management scheme, together with resilience assessment, increases the distribution system operator's preparedness for emergency events.
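A toy version of the first scheme's scheduling logic, under stated assumptions: the battery charges whenever PV surplus would push feed-in above an export cap (used here as a proxy for feeder voltage rise) and discharges to serve local load. The hourly profiles, cap, and battery capacity are illustrative, not from the dissertation.

```python
# Greedy ES schedule that clips PV feed-in to an export cap (voltage-rise proxy).
def schedule_storage(pv, load, export_cap, capacity):
    soc, feed_in = 0.0, []
    for p, l in zip(pv, load):
        surplus = p - l
        if surplus > export_cap:                 # charge to clip the export peak
            charge = min(surplus - export_cap, capacity - soc)
            soc += charge
            surplus -= charge
        elif surplus < 0 and soc > 0:            # discharge to serve local load
            discharge = min(-surplus, soc)
            soc -= discharge
            surplus += discharge
        feed_in.append(max(surplus, 0.0))
    return feed_in

pv   = [0, 0, 2, 5, 7, 6, 3, 0]                  # kW over eight hours
load = [1, 1, 1, 2, 2, 2, 2, 3]
print(schedule_storage(pv, load, export_cap=2.0, capacity=6.0))
# -> feed-in never exceeds 2 kW; midday surplus is stored for the evening
```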
Höfle, Stefan; Schienle, Alexander; Bernhard, Christoph; Bruns, Michael; Lemmer, Uli; Colsmann, Alexander
2014-08-13
Fully solution-processed monochromatic and white-light-emitting tandem or multi-photon polymer OLEDs with an inverted device architecture have been realized by employing WO3/PEDOT:PSS/ZnO/PEI charge carrier generation layers. The luminance of the sub-OLEDs adds up in the stacked device, indicating multi-photon emission. The white OLEDs exhibit a CRI of 75. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ground support system methodology and architecture
NASA Technical Reports Server (NTRS)
Schoen, P. D.
1991-01-01
A synergistic approach to systems test and support is explored. A building-block architecture provides transportability of data, procedures, and knowledge; the synergistic approach also lowers cost and risk over the life cycle of a program. Determining design errors at the earliest phase reduces the cost of vehicle ownership. The distributed, scalable architecture is based on industry standards, maximizing transparency and maintainability. An autonomous control structure provides for distributed and segmented systems. Control of interfaces maximizes compatibility and reuse, reducing long-term program cost. The intelligent data management architecture also reduces analysis time and cost through automation.
Advanced Mating System Development for Space Applications
NASA Technical Reports Server (NTRS)
Lewis, James L.
2004-01-01
This slide presentation reviews the development of space flight sealing and the work required for the further development of a dynamic interface seal for use on space mating systems to support a fully androgynous mating interface. This effort has resulted in advocacy for developing a standard multipurpose interface for use with all modern modular space architecture. The fully androgynous design implies a seal-on-seal (SOS) system.
A Distributed Intelligent E-Learning System
ERIC Educational Resources Information Center
Kristensen, Terje
2016-01-01
An E-learning system based on a multi-agent (MAS) architecture combined with the Dynamic Content Manager (DCM) model of E-learning, is presented. We discuss the benefits of using such a multi-agent architecture. Finally, the MAS architecture is compared with a pure service-oriented architecture (SOA). This MAS architecture may also be used within…
A self-scaling, distributed information architecture for public health, research, and clinical care.
McMurry, Andrew J; Gilbert, Clint A; Reis, Ben Y; Chueh, Henry C; Kohane, Isaac S; Mandl, Kenneth D
2007-01-01
This study sought to define a scalable architecture to support the National Health Information Network (NHIN). This architecture must concurrently support a wide range of public health, research, and clinical care activities. The architecture fulfils five desiderata: (1) adopt a distributed approach to data storage to protect privacy, (2) enable strong institutional autonomy to engender participation, (3) provide oversight and transparency to ensure patient trust, (4) allow variable levels of access according to investigator needs and institutional policies, (5) define a self-scaling architecture that encourages voluntary regional collaborations that coalesce to form a nationwide network. Our model has been validated by a large-scale, multi-institution study involving seven medical centers for cancer research. It is the basis of one of four open architectures developed under funding from the Office of the National Coordinator of Health Information Technology, fulfilling the biosurveillance use case defined by the American Health Information Community. The model supports broad applicability for regional and national clinical information exchanges. This model shows the feasibility of an architecture wherein the requirements of care providers, investigators, and public health authorities are served by a distributed model that grants autonomy, protects privacy, and promotes participation.
Mitigation of Hot-Spots in Photovoltaic Systems Using Distributed Power Electronics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olalla, Carlos; Hasan, Md. Nazmul; Deline, Chris
2018-03-23
In the presence of partial shading and other mismatch factors, bypass diodes may not offer complete elimination of excessive power dissipation due to cell reverse biasing, commonly referred to as hot-spotting in photovoltaic (PV) systems. As a result, PV systems may experience higher failure rates and accelerated ageing. In this paper, a cell-level simulation model is used to assess occurrence of hot-spotting events in a representative residential rooftop system scenario featuring a moderate shading environment. The approach is further used to examine how well distributed power electronics converters mitigate the effects of partial shading and other sources of mismatch by preventing activation of bypass diodes and thereby reducing the chances of heavy power dissipation and hot-spotting in mismatched cells. The simulation results confirm that the occurrence of heavy power dissipation is reduced in all distributed power electronics architectures, and that submodule-level converters offer nearly 100% mitigation of hot-spotting. In addition, the paper further elaborates on the possibility of hot-spot-induced permanent damage, predicting a lifetime energy loss above 15%. In conclusion, this energy loss is fully recoverable with submodule-level power converters that mitigate hot-spotting and prevent the damage.
Yoshitomi, Munetake; Ohta, Keisuke; Kanazawa, Tomonoshin; Togo, Akinobu; Hirashima, Shingo; Uemura, Kei-Ichiro; Okayama, Satoko; Morioka, Motohiro; Nakamura, Kei-Ichiro
2016-10-31
Endocrine and endothelial cells of the anterior pituitary gland frequently make close appositions or contacts, and the secretory granules of each endocrine cell tend to accumulate at the perivascular regions, which is generally considered to facilitate the secretory functions of these cells. However, the three-dimensional relationships between the localization pattern of secretory granules and blood vessels are not fully understood. To define and characterize these spatial relationships, we used a three-dimensional reconstruction method based on focused ion-beam slicing and scanning electron microscopy (FIB/SEM). Full three-dimensional cellular architectures of the anterior pituitary tissue at ultrastructural resolution revealed that about 70% of endocrine cells were in apposition to the endothelial cells, while almost 30% of endocrine cells were entirely isolated from the perivascular space in the tissue. Our three-dimensional analyses also visualized the distribution pattern of secretory granules in individual endocrine cells, showing an accumulation of secretory granules in regions in close apposition to the blood vessels in many cases. However, secretory granules in cells isolated from the perivascular region tended to distribute uniformly in the cytoplasm of these cells. These data suggest that the cellular interactions between the endocrine and endothelial cells promote an uneven cytoplasmic distribution of the secretory granules.
Description and Simulation of a Fast Packet Switch Architecture for Communication Satellites
NASA Technical Reports Server (NTRS)
Quintana, Jorge A.; Lizanich, Paul J.
1995-01-01
The NASA Lewis Research Center has been developing the architecture for a multichannel communications signal processing satellite (MCSPS) as part of a flexible, low-cost meshed-VSAT (very small aperture terminal) network. The MCSPS architecture is based on a multifrequency, time-division-multiple-access (MF-TDMA) uplink and a time-division multiplex (TDM) downlink. There are eight uplink MF-TDMA beams and eight downlink TDM beams, with eight downlink dwells per beam. The information-switching processor, which decodes, stores, and transmits each packet of user data to the appropriate downlink dwell onboard the satellite, has been fully described using VHSIC (Very High Speed Integrated-Circuit) Hardware Description Language (VHDL). This VHDL code, which was developed in-house to simulate the information-switching processor, showed that the architecture is both feasible and viable. This paper describes a shared-memory-per-beam architecture, its VHDL implementation, and the simulation efforts.
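As an illustration of the shared-memory-per-beam idea, the sketch below routes packets into one memory per downlink beam, organized as a queue per dwell. The packet fields and frame-draining policy are hypothetical stand-ins, since the actual processor was specified in VHDL rather than software.

```python
import random
from collections import deque

N_BEAMS = 8   # uplink and downlink beams (per the MCSPS architecture)
N_DWELLS = 8  # downlink dwells per beam

# One shared packet memory per downlink beam, organized as one queue per dwell.
# The dwell queues here are hypothetical stand-ins for the on-board memory map.
beam_memories = [[deque() for _ in range(N_DWELLS)] for _ in range(N_BEAMS)]

def switch_packet(packet):
    """Store an uplink packet into the memory of its destination beam/dwell."""
    beam_memories[packet["dst_beam"]][packet["dst_dwell"]].append(packet)

def transmit_downlink(beam):
    """Drain one TDM frame for a beam: visit each dwell queue in turn."""
    frame = []
    for dwell in range(N_DWELLS):
        queue = beam_memories[beam][dwell]
        if queue:
            frame.append(queue.popleft())
    return frame

# Feed a burst of random uplink packets through the switch.
for i in range(64):
    switch_packet({"id": i,
                   "dst_beam": random.randrange(N_BEAMS),
                   "dst_dwell": random.randrange(N_DWELLS)})

print([len(transmit_downlink(b)) for b in range(N_BEAMS)])
```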
NASA Astrophysics Data System (ADS)
Payne, Joshua; Taitano, William; Knoll, Dana; Liebs, Chris; Murthy, Karthik; Feltman, Nicolas; Wang, Yijie; McCarthy, Colleen; Cieren, Emanuel
2012-10-01
In order to solve problems such as ion coalescence and slow MHD shocks fully kinetically, we developed a fully implicit 2D energy- and charge-conserving electromagnetic PIC code, PlasmaApp2D. PlasmaApp2D differs from previous implicit PIC implementations in that it will utilize advanced architectures such as GPUs and shared-memory CPU systems, with problems too large to fit into cache. PlasmaApp2D will be a hybrid CPU-GPU code developed primarily to run on the DARWIN cluster at LANL, utilizing four 12-core AMD Opteron CPUs and two NVIDIA Tesla GPUs per node. MPI will be used for cross-node communication, OpenMP will be used for on-node parallelism, and CUDA will be used for the GPUs. Development progress and initial results will be presented.
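The full scheme is a 2D energy- and charge-conserving electromagnetic PIC; the sketch below illustrates only its core numerical ingredient, an implicit midpoint (Crank-Nicolson) particle push solved by fixed-point (Picard) iteration, in a toy 1D electrostatic setting where a fixed external field stands in for the self-consistent field solve.

```python
import numpy as np

def E(x):
    # Hypothetical stand-in field; the real code solves for E self-consistently.
    return -np.sin(x)

def implicit_push(x, v, dt, qm=1.0, tol=1e-12, max_iter=50):
    """Crank-Nicolson (implicit midpoint) push solved by Picard iteration."""
    x_new, v_new = x, v
    for _ in range(max_iter):
        x_mid = 0.5 * (x + x_new)           # midpoint position for the field
        v_half = 0.5 * (v + v_new)          # time-centered velocity
        x_next = x + dt * v_half
        v_next = v + dt * qm * E(x_mid)
        converged = abs(x_next - x_new) < tol and abs(v_next - v_new) < tol
        x_new, v_new = x_next, v_next
        if converged:
            break
    return x_new, v_new

x, v, dt = 1.0, 0.0, 0.05
for _ in range(1000):
    x, v = implicit_push(x, v, dt)
# For E = -sin(x) the energy v^2/2 - cos(x) should stay nearly constant,
# which is the hallmark of the energy-conserving implicit formulation.
print(v**2 / 2 - np.cos(x))
```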
Demonstration of fully enabled data center subsystem with embedded optical interconnect
NASA Astrophysics Data System (ADS)
Pitwon, Richard; Worrall, Alex; Stevens, Paul; Miller, Allen; Wang, Kai; Schmidtke, Katharine
2014-03-01
The evolution of data storage communication protocols and corresponding in-system bandwidth densities is set to impose prohibitive cost and performance constraints on future data storage system designs, fuelling proposals for hybrid electronic and optical architectures in data centers. The migration of optical interconnect into the system enclosure itself can substantially mitigate the communications bottlenecks resulting from both the increase in data rate and internal interconnect link lengths. In order to assess the viability of embedding optical links within prevailing data storage architectures, we present the design and assembly of a fully operational data storage array platform, in which all internal high speed links have been implemented optically. This required the deployment of mid-board optical transceivers, an electro-optical midplane and proprietary pluggable optical connectors for storage devices. We present the design of a high density optical layout to accommodate the midplane interconnect requirements of a data storage enclosure with support for 24 Small Form Factor (SFF) solid state or rotating disk drives and the design of a proprietary optical connector and interface cards, enabling standard drives to be plugged into an electro-optical midplane. Crucially, we have also modified the platform to accommodate longer optical interconnect lengths up to 50 meters in order to investigate future datacenter architectures based on disaggregation of modular subsystems. The optically enabled data storage system has been fully validated for both 6 Gb/s and 12 Gb/s SAS data traffic conveyed along internal optical links.
OhioView: Distribution of Remote Sensing Data Across Geographically Distributed Environments
NASA Technical Reports Server (NTRS)
Ramos, Calvin T.
1998-01-01
Various issues associated with the distribution of remote sensing data across geographically distributed environments are presented in viewgraph form. Specific topics include: 1) NASA education program background; 2) High level architectures, technologies and applications; 3) LeRC internal architecture and role; 4) Potential GIBN interconnect; 5) Potential areas of network investigation and research; 6) Draft of OhioView data model; and 7) the LeRC strategy and roadmap.
Design and "restoration": the Roots of Architecture Porject for the Built
NASA Astrophysics Data System (ADS)
Campanella, C.
2017-05-01
It is now absolutely essential to precede any intervention with a pre-critical understanding of the building that will be the object of the work, free from preconceived notions of value and committed to gathering the variety of information that is useful and indispensable for determining the operating margin and the freedom that every architecture possesses, since a building is not tied uniquely to its precise original function. It is this margin that can be leveraged to carry out an architectural project for the built heritage, one that borrows from knowledge, conservation, and innovation at the same time. The binomials in the lead paragraph are sides of the same coin, merging into a single integrated design process that takes charge of the design, including the different styles of drafting the survey (in all its aspects) of an existing architecture.
Integrated Distributed Directory Service for KSC
NASA Technical Reports Server (NTRS)
Ghansah, Isaac
1997-01-01
This paper describes an integrated distributed directory services (DDS) architecture as a fundamental component of KSC distributed computing systems. Specifically, an architecture for an integrated directory service based on DNS and X.500/LDAP is suggested. The architecture supports using DNS in its traditional role as a name service and X.500 for other services. Specific designs were made for the integration of the X.500 DDS with Public Key Certificates, Kerberos Security Services, Network-wide Login, Electronic Mail, WWW URLs, Servers, and other diverse network objects. Issues involved in incorporating the emerging Microsoft Active Directory Service (MADS) into KSC's X.500 are also discussed.
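A minimal sketch of the suggested division of labor, assuming the third-party Python packages dnspython and ldap3; the server names, base DN, and attribute choices are hypothetical placeholders, not KSC's actual directory layout.

```python
# DNS keeps its traditional name-service role; X.500/LDAP serves richer
# directory objects (certificates, mail, URLs). Requires dnspython and ldap3.
import dns.resolver
from ldap3 import Server, Connection, ALL

def resolve_host(name):
    """Traditional name-service role: DNS A-record lookup."""
    return [r.address for r in dns.resolver.resolve(name, "A")]

def lookup_user_cert(ldap_host, uid):
    """Directory-service role: fetch a user's public-key certificate."""
    conn = Connection(Server(ldap_host, get_info=ALL), auto_bind=True)
    conn.search("ou=people,o=ksc,c=us",          # hypothetical base DN
                f"(uid={uid})",
                attributes=["userCertificate", "mail"])
    return conn.entries

print(resolve_host("www.nasa.gov"))
print(lookup_user_cert("ldap.example.gov", "jdoe"))  # placeholder server/user
```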
NASA Technical Reports Server (NTRS)
Twombly, I. Alexander; Smith, Jeffrey; Bruyns, Cynthia; Montgomery, Kevin; Boyle, Richard
2003-01-01
The International Space Station will soon provide an unparalleled research facility for studying the near- and longer-term effects of microgravity on living systems. Using the Space Station Glovebox Facility - a compact, fully contained reach-in environment - astronauts will conduct technically challenging life sciences experiments. Virtual environment technologies are being developed at NASA Ames Research Center to help realize the scientific potential of this unique resource by facilitating the experimental hardware and protocol designs and by assisting the astronauts in training. The Virtual GloveboX (VGX) integrates high-fidelity graphics, force-feedback devices and real-time computer simulation engines to achieve an immersive training environment. Here, we describe the prototype VGX system, the distributed processing architecture used in the simulation environment, and modifications to the visualization pipeline required to accommodate the display configuration.
Evidence of common and separate eye and hand accumulators underlying flexible eye-hand coordination
Jana, Sumitash; Gopal, Atul
2016-01-01
Eye and hand movements are initiated by anatomically separate regions in the brain, and yet these movements can be flexibly coupled and decoupled, depending on the need. The computational architecture that enables this flexible coupling of independent effectors is not understood. Here, we studied the computational architecture that enables flexible eye-hand coordination using a drift diffusion framework, which predicts that the variability of the reaction time (RT) distribution scales with its mean. We show that a common stochastic accumulator to threshold, followed by a noisy effector-dependent delay, explains eye-hand RT distributions and their correlation in a visual search task that required decision-making, while an interactive eye and hand accumulator model did not. In contrast, in an eye-hand dual task, an interactive model better predicted the observed correlations and RT distributions than a common accumulator model. Notably, these two models could only be distinguished on the basis of the variability and not the means of the predicted RT distributions. Additionally, signatures of separate initiation signals were also observed in a small fraction of trials in the visual search task, implying that these distinct computational architectures were not a manifestation of the task design per se. Taken together, our results suggest two unique computational architectures for eye-hand coordination, with task context biasing the brain toward instantiating one of the two architectures. NEW & NOTEWORTHY Previous studies on eye-hand coordination have considered mainly the means of eye and hand reaction time (RT) distributions. Here, we leverage the approximately linear relationship between the mean and standard deviation of RT distributions, as predicted by the drift-diffusion model, to propose the existence of two distinct computational architectures underlying coordinated eye-hand movements. These architectures, for the first time, provide a computational basis for the flexible coupling between eye and hand movements. PMID:27784809
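A hedged sketch of the common-accumulator architecture described above: one shared drift-diffusion stage followed by independent, noisy effector-specific delays. All parameters are illustrative, not fitted values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def common_accumulator_rts(n_trials=5000, drift=0.8, sigma=1.0,
                           threshold=30.0, dt=1.0):
    """One shared drift-diffusion decision stage, then independent noisy
    effector-specific delays for eye and hand (all parameters illustrative)."""
    eye_rt, hand_rt = np.empty(n_trials), np.empty(n_trials)
    for i in range(n_trials):
        evidence, t = 0.0, 0.0
        while evidence < threshold:        # shared accumulation to bound
            evidence += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        eye_rt[i] = t + rng.normal(50, 5)      # eye efferent delay (ms)
        hand_rt[i] = t + rng.normal(120, 15)   # hand efferent delay (ms)
    return eye_rt, hand_rt

eye, hand = common_accumulator_rts()
# A common decision stage yields strongly correlated eye and hand RTs,
# with RT standard deviation that grows with the mean, as the DDM predicts.
print("corr(eye, hand) =", np.corrcoef(eye, hand)[0, 1])
print("eye  mean/sd:", eye.mean(), eye.std())
print("hand mean/sd:", hand.mean(), hand.std())
```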
Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs
NASA Technical Reports Server (NTRS)
Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan
2006-01-01
Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications utilizing it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). These specialized computing architectures must perform numerous two-dimensional Fourier transforms, which necessitate all-to-all communication when implemented on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
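The sketch below emulates, in single-process NumPy, why a distributed 2D FFT forces an all-to-all exchange: each "node" transforms its block of rows, a global transpose redistributes the data, and a second pass transforms the columns.

```python
import numpy as np

def distributed_fft2(image, n_nodes=4):
    """Emulate the row-column 2D FFT used on distributed architectures:
    each 'node' transforms its block of rows, then a global transpose
    (the all-to-all step) redistributes the data for the column pass."""
    rows = np.array_split(image, n_nodes, axis=0)
    row_pass = np.vstack([np.fft.fft(block, axis=1) for block in rows])

    transposed = row_pass.T           # stands in for the all-to-all exchange

    cols = np.array_split(transposed, n_nodes, axis=0)
    col_pass = np.vstack([np.fft.fft(block, axis=1) for block in cols])
    return col_pass.T

img = np.random.rand(64, 64)
print(np.allclose(distributed_fft2(img), np.fft.fft2(img)))  # True
```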
Cooperative crossing of traffic intersections in a distributed robot system
NASA Astrophysics Data System (ADS)
Rausch, Alexander; Oswald, Norbert; Levi, Paul
1995-09-01
In traffic scenarios a distributed robot system has to cope with problems like resource sharing, distributed planning, distributed job scheduling, etc. While travelling along a street segment can be done autonomously by each robot, crossing of an intersection as a shared resource forces the robot to coordinate its actions with those of other robots e.g. by means of negotiations. We discuss the issue of cooperation on the design of a robot control architecture. Task and sensor specific cooperation between robots requires the robots' architectures to be interlinked at different hierarchical levels. Inside each level control cycles are running in parallel and provide fast reaction on events. Internal cooperation may occur between cycles of the same level. Altogether the architecture is matrix-shaped and contains abstract control cycles with a certain degree of autonomy. Based upon the internal structure of a cycle we consider the horizontal and vertical interconnection of cycles to form an individual architecture. Thereafter we examine the linkage of several agents and its influence on an interacting architecture. A prototypical implementation of a scenario, which combines aspects of active vision and cooperation, illustrates our approach. Two vision-guided vehicles are faced with line following, intersection recognition and negotiation.
Architectural and Functional Design of an Environmental Information Network.
1984-04-30
[Scanned front matter: the study was performed under contract F08635-83-C-013- (final digit illegible), Task 83-2, for Headquarters Air Force Engineering and Services Center; the list of figures includes a general architecture of the distributed data management system, a schema architecture, and the MULTIBASE component architecture.]
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams, derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to timely address the system's perceived need(s). This method improves on initial developments in this area by including the often overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.
Grid data access on widely distributed worker nodes using scalla and SRM
NASA Astrophysics Data System (ADS)
Jakl, P.; Lauret, J.; Hanushevsky, A.; Shoshani, A.; Sim, A.; Gu, J.
2008-07-01
Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial over expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storage (lifetime of files, pinning of files), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and the experience of making such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and approach for making access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans, and development status will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.
Imran, Noreen; Seet, Boon-Chong; Fong, A C M
2015-01-01
Distributed video coding (DVC) is a relatively new video coding architecture originated from two fundamental theorems namely, Slepian-Wolf and Wyner-Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
A novel strategy for load balancing of distributed medical applications.
Logeswaran, Rajasvaran; Chen, Li-Choo
2012-04-01
Current trends in medicine, specifically in the electronic handling of medical applications ranging from digital imaging, paperless hospital administration and electronic medical records, telemedicine, to computer-aided diagnosis, create a burden on the network. Distributed Service Architectures, such as the Intelligent Network (IN), Telecommunication Information Networking Architecture (TINA) and Open Service Access (OSA), are able to meet this new challenge. Distribution enables computational tasks to be spread among multiple processors; hence, performance is an important issue. This paper proposes a novel approach to load balancing, the Random Sender Initiated Algorithm, for distribution of tasks among several nodes sharing the same computational object (CO) instances in Distributed Service Architectures. Simulations illustrate that the proposed algorithm produces better network performance than the benchmark load balancing algorithms, the Random Node Selection Algorithm and the Shortest Queue Algorithm, especially under medium and heavily loaded conditions.
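The Random Sender Initiated Algorithm itself is specified in the paper; the sketch below simulates only the two benchmark policies named in the abstract, random node selection and shortest queue, under Poisson arrivals with illustrative parameters.

```python
import random

random.seed(1)

def simulate(policy, n_nodes=8, n_tasks=20000, service=1.0, load=0.9):
    """Discrete-event sketch: tasks arrive at rate load*n_nodes/service and
    are dispatched to a node according to the given policy."""
    free_at = [0.0] * n_nodes          # time each node finishes its backlog
    t, total_wait = 0.0, 0.0
    for _ in range(n_tasks):
        t += random.expovariate(load * n_nodes / service)
        if policy == "random":          # Random Node Selection benchmark
            node = random.randrange(n_nodes)
        else:                           # Shortest Queue benchmark
            node = min(range(n_nodes), key=lambda k: free_at[k])
        start = max(t, free_at[node])
        total_wait += start - t
        free_at[node] = start + random.expovariate(1.0 / service)
    return total_wait / n_tasks

for policy in ("random", "shortest"):
    print(policy, "mean wait:", round(simulate(policy), 3))
```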
System performance predictions for Space Station Freedom's electric power system
NASA Technical Reports Server (NTRS)
Kerslake, Thomas W.; Hojnicki, Jeffrey S.; Green, Robert D.; Follo, Jeffrey C.
1993-01-01
Space Station Freedom Electric Power System (EPS) capability to effectively deliver power to housekeeping and user loads continues to strongly influence Freedom's design and planned approaches for assembly and operations. The EPS design consists of silicon photovoltaic (PV) arrays, nickel-hydrogen batteries, and direct current power management and distribution hardware and cabling. To properly characterize the inherent EPS design capability, detailed system performance analyses must be performed for early stages as well as for the fully assembled station up to 15 years after beginning of life. Such analyses were repeatedly performed using the FORTRAN code SPACE (Station Power Analysis for Capability Evaluation) developed at the NASA Lewis Research Center over a 10-year period. SPACE combines orbital mechanics routines, station orientation/pointing routines, PV array and battery performance models, and a distribution system load-flow analysis to predict EPS performance. Time-dependent, performance degradation, low earth orbit environmental interactions, and EPS architecture build-up are incorporated in SPACE. Results from two typical SPACE analytical cases are presented: (1) an electric load driven case and (2) a maximum EPS capability case.
iSpy: a powerful and lightweight event display
NASA Astrophysics Data System (ADS)
Alverson, G.; Eulisse, G.; McCauley, T.; Taylor, L.
2012-12-01
iSpy is a general-purpose event data and detector visualization program that was developed as an event display for the CMS experiment at the LHC and has seen use by the general public and teachers and students in the context of education and outreach. Central to the iSpy design philosophy is ease of installation, use, and extensibility. The application itself uses the open-access packages Qt4 and Open Inventor and is distributed either as a fully-bound executable or a standard installer package: one can simply download and double-click to begin. Mac OSX, Linux, and Windows are supported. iSpy renders the standard 2D, 3D, and tabular views, and the architecture allows for a generic approach to production of new views and projections. iSpy reads and displays data in the ig format: event information is written in compressed JSON format files designed for distribution over a network. This format is easily extensible and makes the iSpy client indifferent to the original input data source. The ig format is the one used for release of approved CMS data to the public.
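The abstract describes the ig format as compressed JSON files designed for network distribution; the sketch below approximates it as a zip archive of JSON event records. The file layout and field names are hypothetical, not the official specification.

```python
import json
import zipfile

# Hypothetical approximation of an .ig archive: one JSON document per event.
with zipfile.ZipFile("demo.ig", "w", zipfile.ZIP_DEFLATED) as ig:
    event = {"Collections": {"Tracks_V1": [[0.7, 1.2, -0.3]]},
             "Types": {"Tracks_V1": [["pt", "double"]]}}
    ig.writestr("Events/Run_1/Event_1", json.dumps(event))

# A client indifferent to the data source just walks the archive entries.
with zipfile.ZipFile("demo.ig") as ig:
    for name in ig.namelist():
        data = json.loads(ig.read(name))
        print(name, "->", list(data["Collections"]))
```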
Reconstruction of initial pressure from limited view photoacoustic images using deep learning
NASA Astrophysics Data System (ADS)
Waibel, Dominik; Gröhl, Janek; Isensee, Fabian; Kirchner, Thomas; Maier-Hein, Klaus; Maier-Hein, Lena
2018-02-01
Quantification of tissue properties with photoacoustic (PA) imaging typically requires a highly accurate representation of the initial pressure distribution in tissue. Almost all PA scanners reconstruct the PA image only from a partial scan of the emitted sound waves. Especially handheld devices, which have become increasingly popular due to their versatility and ease of use, only provide limited view data because of their geometry. Owing to such limitations in hardware as well as to the acoustic attenuation in tissue, state-of-the-art reconstruction methods deliver only approximations of the initial pressure distribution. To overcome the limited view problem, we present a machine learning-based approach to the reconstruction of initial pressure from limited view PA data. Our method involves a fully convolutional deep neural network based on a U-Net-like architecture with pixel-wise regression loss on the acquired PA images. It is trained and validated on in silico data generated with Monte Carlo simulations. In an initial study we found an increase in accuracy over the state-of-the-art when reconstructing simulated linear-array scans of blood vessels.
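A minimal PyTorch sketch of a U-Net-like network with a pixel-wise regression loss, in the spirit of the approach described; the depth, channel counts, and training details are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)        # 32 = 16 skip + 16 upsampled
        self.head = nn.Conv2d(16, 1, 1)       # pixel-wise pressure estimate

    def forward(self, x):
        s1 = self.enc1(x)
        bottom = self.enc2(self.pool(s1))
        up = self.up(bottom)
        return self.head(self.dec1(torch.cat([up, s1], dim=1)))

net = TinyUNet()
pa_image = torch.randn(4, 1, 64, 64)      # stand-in limited-view PA images
target_p0 = torch.randn(4, 1, 64, 64)     # stand-in initial pressure maps
loss = nn.MSELoss()(net(pa_image), target_p0)  # pixel-wise regression loss
loss.backward()
print(loss.item())
```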
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732
Man-Robot Symbiosis: A Framework For Cooperative Intelligence And Control
NASA Astrophysics Data System (ADS)
Parker, Lynne E.; Pin, Francois G.
1988-10-01
The man-robot symbiosis concept has the fundamental objective of bridging the gap between fully human-controlled and fully autonomous systems to achieve true man-robot cooperative control and intelligence. Such a system would allow improved speed, accuracy, and efficiency of task execution, while retaining the man in the loop for innovative reasoning and decision-making. The symbiont would have capabilities for supervised and unsupervised learning, allowing an increase of expertise in a wide task domain. This paper describes a robotic system architecture facilitating the symbiotic integration of teleoperative and automated modes of task execution. The architecture reflects a unique blend of many disciplines of artificial intelligence into a working system, including job or mission planning, dynamic task allocation, man-robot communication, automated monitoring, and machine learning. These disciplines are embodied in five major components of the symbiotic framework: the Job Planner, the Dynamic Task Allocator, the Presenter/Interpreter, the Automated Monitor, and the Learning System.
Towards dense volumetric pancreas segmentation in CT using 3D fully convolutional networks
NASA Astrophysics Data System (ADS)
Roth, Holger; Oda, Masahiro; Shimizu, Natsuki; Oda, Hirohisa; Hayashi, Yuichiro; Kitasaka, Takayuki; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku
2018-03-01
Pancreas segmentation in computed tomography imaging has been historically difficult for automated methods because of the large shape and size variations between patients. In this work, we describe a custom-built 3D fully convolutional network (FCN) that can process a 3D image including the whole pancreas and produce an automatic segmentation. We investigate two variations of the 3D FCN architecture: one with concatenation and one with summation skip connections to the decoder part of the network. We evaluate our methods on a dataset from a clinical trial with gastric cancer patients, including 147 contrast-enhanced abdominal CT scans acquired in the portal venous phase. Using the summation architecture, we achieve an average Dice score of 89.7 +/- 3.8 (range [79.8, 94.8])% in testing, achieving the new state-of-the-art performance in pancreas segmentation on this dataset.
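The two skip-connection variants compared in the paper can be illustrated on a single 3D decoder stage, as in the sketch below; the channel sizes and volume shape are illustrative.

```python
import torch
import torch.nn as nn

skip = torch.randn(1, 16, 32, 32, 32)     # encoder feature map
up = torch.randn(1, 16, 32, 32, 32)       # upsampled decoder feature map

# Concatenation skip: channels stack, so the next conv sees 32 channels.
cat_conv = nn.Conv3d(32, 16, 3, padding=1)
out_cat = cat_conv(torch.cat([skip, up], dim=1))

# Summation skip: element-wise add keeps 16 channels (fewer parameters).
sum_conv = nn.Conv3d(16, 16, 3, padding=1)
out_sum = sum_conv(skip + up)

print(out_cat.shape, out_sum.shape)  # both: torch.Size([1, 16, 32, 32, 32])
```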
NASA Astrophysics Data System (ADS)
Durst, Phillip J.; Gray, Wendell; Trentini, Michael
2013-05-01
A simple, quantitative measure for encapsulating the autonomous capabilities of unmanned systems (UMS) has yet to be established. Current models for measuring a UMS's autonomy level require extensive, operational level testing, and provide a means for assessing the autonomy level for a specific mission/task and operational environment. A more elegant technique for quantifying autonomy using component level testing of the robot platform alone, outside of mission and environment contexts, is desirable. Using a high level framework for UMS architectures, such a model for determining a level of autonomy has been developed. The model uses a combination of developmental and component level testing for each aspect of the UMS architecture to define a non-contextual autonomous potential (NCAP). The NCAP provides an autonomy level, ranging from fully non-autonomous to fully autonomous, in the form of a single numeric parameter describing the UMS's performance capabilities when operating at that level of autonomy.
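The abstract does not give the NCAP's scoring rules, so the sketch below shows only one plausible, entirely hypothetical aggregation of component-level test scores into a single parameter between 0 (fully non-autonomous) and 1 (fully autonomous).

```python
# Illustrative sketch only: the actual NCAP formula is defined in the paper.
def ncap_score(component_scores, weights=None):
    """Weighted mean of per-component autonomy test scores in [0, 1]."""
    names = sorted(component_scores)
    if weights is None:
        weights = {n: 1.0 for n in names}      # equal weighting by default
    total = sum(weights[n] for n in names)
    return sum(weights[n] * component_scores[n] for n in names) / total

# Hypothetical component-level test results for one platform.
scores = {"perception": 0.8, "planning": 0.6, "actuation": 0.9}
print(round(ncap_score(scores), 3))
```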
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by system components during their execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and may be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems that detects and classifies interesting local and global events and disseminates the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture therefore employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component of the monitoring architecture, reducing the volume of event traffic flow in the system and thereby the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications); this filtering architecture is used to monitor a collaborative distance-learning application to obtain debugging and feedback information, and it supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work contributes by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
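A minimal sketch of the subscription-based filtering idea: management tools register predicate filters, and only matching events are forwarded, which is how the architecture reduces monitoring traffic. The event fields used here are hypothetical.

```python
class EventFilterHub:
    """Forward each published event only to subscribers whose filter matches."""

    def __init__(self):
        self.subscriptions = []        # (predicate, callback) pairs

    def subscribe(self, predicate, callback):
        self.subscriptions.append((predicate, callback))

    def publish(self, event):
        for predicate, callback in self.subscriptions:
            if predicate(event):
                callback(event)

hub = EventFilterHub()
hub.subscribe(lambda e: e["severity"] >= 3,       # debugging tool wants faults
              lambda e: print("debugger got:", e))
hub.subscribe(lambda e: e["type"] == "latency",   # tuning tool wants timings
              lambda e: print("tuner got:", e))

hub.publish({"type": "latency", "severity": 1, "ms": 42})
hub.publish({"type": "fault", "severity": 5, "node": "n7"})
```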
NASA Astrophysics Data System (ADS)
Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.
2013-09-01
A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported including results which show the potential for multiplexing and interrogating up to 4096 sensors using a single telemetry fibre pair with good system performance. The number can be increased to 8192 by using dual pump sources.
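The sensor counts follow from simple capacity arithmetic, sensors = wavelengths x TDM slots per wavelength, as in the sketch below; the 16 x 256 split is hypothetical, since only the 4096 and 8192 totals appear in the abstract.

```python
# Hypothetical split: only the totals 4096 and 8192 are given in the text.
n_wavelengths = 16          # DWDM channels (assumed)
tdm_per_wavelength = 256    # time-division slots per wavelength (assumed)

sensors = n_wavelengths * tdm_per_wavelength
print(sensors)              # 4096 on a single telemetry fibre pair
print(2 * sensors)          # 8192 when dual pump sources are used
```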
Power law versus exponential state transition dynamics: application to sleep-wake architecture.
Chu-Shore, Jesse; Westover, M Brandon; Bianchi, Matt T
2010-12-02
Despite the common experience that interrupted sleep has a negative impact on waking function, the features of human sleep-wake architecture that best distinguish sleep continuity versus fragmentation remain elusive. In this regard, there is growing interest in characterizing sleep architecture using models of the temporal dynamics of sleep-wake stage transitions. In humans and other mammals, the state transitions defining sleep and wake bout durations have been described with exponential and power law models, respectively. However, sleep-wake stage distributions are often complex, and distinguishing between exponential and power law processes is not always straightforward. Although mono-exponential distributions are distinct from power law distributions, multi-exponential distributions may in fact resemble power laws by appearing linear on a log-log plot. To characterize the parameters that may allow these distributions to mimic one another, we systematically fitted multi-exponential-generated distributions with a power law model, and power law-generated distributions with multi-exponential models. We used the Kolmogorov-Smirnov method to investigate goodness of fit for the "incorrect" model over a range of parameters. The "zone of mimicry" of parameters that increased the risk of mistakenly accepting power law fitting resembled empiric time constants obtained in human sleep and wake bout distributions. Recognizing this uncertainty in model distinction impacts interpretation of transition dynamics (self-organizing versus probabilistic), and the generation of predictive models for clinical classification of normal and pathological sleep architecture.
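A hedged sketch of the mimicry test: draw bout durations from a two-component exponential mixture, fit a power law by maximum likelihood above a cutoff, and measure the Kolmogorov-Smirnov distance of the "incorrect" model. All parameters are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Multi-exponential sample: mixture of short and long time constants.
n = 5000
data = np.where(rng.random(n) < 0.7,
                rng.exponential(1.0, n), rng.exponential(10.0, n))
data = data[data >= 0.5]             # observe above a minimum bout length

# MLE (Hill estimator) for a power-law exponent with known lower cutoff.
x_min = 0.5
alpha = 1.0 + data.size / np.log(data / x_min).sum()

# KS distance of the mixture sample against the fitted power law
# (scipy's pareto with shape alpha-1, loc 0, scale x_min).
d, p = stats.kstest(data, "pareto", args=(alpha - 1.0, 0.0, x_min))
print(f"fitted alpha = {alpha:.2f}, KS D = {d:.3f}, p = {p:.3g}")
```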
Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture
NASA Astrophysics Data System (ADS)
Weber, Raphael; Rettberg, Achim
A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization, we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool, which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.
2010-09-01
[Scanned front matter: table of contents, list of figures, and acronym list for a report on the SCIL (Social-Cultural Content in Language) architecture; acronyms include LAN (Local Area Network) and ODBC (Open Database Connectivity).]
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.
1991-01-01
Viewgraphs on DataHub knowledge-based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.
Nebot, Patricio; Torres-Sospedra, Joaquín; Martínez, Rafael J
2011-01-01
The control architecture is one of the most important parts of agricultural robotics and other robotic systems. Furthermore, its importance increases when the system involves a group of heterogeneous robots that should cooperate to achieve a global goal. A new control architecture is introduced in this paper for groups of robots in charge of performing maintenance tasks in agricultural environments. Important features such as scalability, code reuse, hardware abstraction and data distribution have been considered in the design of the new architecture. Furthermore, coordination and cooperation among the different elements in the system is supported by the proposed control system. By integrating the network-oriented device server Player, the Java Agent Development Framework (JADE) and the High Level Architecture (HLA), the previous concepts have been incorporated into the new architecture presented in this paper. HLA can be considered the most important part because it not only allows data distribution and implicit communication among the parts of the system but also allows simultaneous operation with simulated and real entities, thus allowing the use of hybrid systems in the development of applications.
Adaptive and technology-independent architecture for fault-tolerant distributed AAL solutions.
Schmidt, Michael; Obermaisser, Roman
2018-04-01
Today's architectures for Ambient Assisted Living (AAL) must cope with a variety of challenges, such as flawless sensor integration and time synchronization (e.g., for sensor data fusion), while abstracting from the underlying technologies at the same time. Furthermore, an architecture for AAL must be capable of managing distributed application scenarios in order to support elderly people in all situations of their everyday life. This encompasses not just life at home but in particular the mobility of elderly people (e.g., when going for a walk or doing sports). Within this paper we introduce a novel architecture for distributed AAL solutions whose design follows a modern microservices approach by providing small core services instead of a monolithic application framework. The architecture comprises core services for sensor integration and service discovery, while supporting several communication models (periodic, sporadic, streaming). We extend the state of the art by introducing a fault-tolerance model for our architecture on the basis of a fault hypothesis describing the fault-containment regions (FCRs) with their respective failure modes and failure rates, in order to support safety-critical AAL applications.
Intelligent Middle-Ware Architecture for Mobile Networks
NASA Astrophysics Data System (ADS)
Rayana, Rayene Ben; Bonnin, Jean-Marie
Recent advances in the electronics and automotive industries, as well as in wireless telecommunication technologies, have drawn a new picture in which each vehicle becomes "fully networked". Multiple stakeholders (network operators, drivers, car manufacturers, service providers, etc.) will participate in this emerging market, which could grow following various models. To free the market from technical constraints, it is important to return to the basics of the Internet, i.e., providing onboard devices with fully operational Internet connectivity (IPv6).
NASA Technical Reports Server (NTRS)
Shyy, Dong-Jye; Redman, Wayne
1993-01-01
For the next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-point services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternate multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection are based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and other switch architectures which have become commercially available as large-scale integration (LSI) devices.
A Distributed Architecture for Tsunami Early Warning and Collaborative Decision-support in Crises
NASA Astrophysics Data System (ADS)
Moßgraber, J.; Middleton, S.; Hammitzsch, M.; Poslad, S.
2012-04-01
The presentation will describe work on the system architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". The challenges for a Tsunami Early Warning System (TEWS) are manifold, and the success of a system depends crucially on the system's architecture. A modern warning system following a system-of-systems approach has to integrate various components and sub-systems such as different information sources, services and simulation systems. Furthermore, it has to take into account the distributed and collaborative nature of warning systems. In order to create an architecture that supports the whole spectrum of a modern, distributed and collaborative warning system, one must deal with multiple challenges. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. At the bottom layer it has to reliably integrate a large set of conventional sensors, such as seismic sensors and sensor networks, buoys and tide gauges, and also innovative and unconventional sensors, such as streams of messages from social media services. At the top layer it has to support collaboration on high-level decision processes and facilitate information sharing between organizations. In between, the system has to process all data and integrate information on a semantic level in a timely manner. This complex communication follows an event-driven mechanism allowing events to be published, detected and consumed by various applications within the architecture. Therefore, at the upper layer the event-driven architecture (EDA) aspects are combined with principles of service-oriented architectures (SOA) using standards for communication and data exchange. The most prominent challenges on this layer include providing a framework for information integration on a syntactic and semantic level, leveraging distributed processing resources for a scalable data processing platform, and automating data processing and decision support workflows.
Proton beam therapy control system
Baumann, Michael A [Riverside, CA; Beloussov, Alexandre V [Bernardino, CA; Bakir, Julide [Alta Loma, CA; Armon, Deganit [Redlands, CA; Olsen, Howard B [Colton, CA; Salem, Dana [Riverside, CA
2008-07-08
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
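A back-of-the-envelope sketch of why the agent/monitor tier reduces open channels; the client, agent, and device counts are illustrative, not taken from the patent.

```python
# Illustrative topology: counts are hypothetical.
n_clients, n_agents, devices_per_agent = 6, 4, 10
n_devices = n_agents * devices_per_agent

# Flat topology: every client holds a socket to every hardware device.
flat_sockets = n_clients * n_devices

# Tiered topology: clients talk to one monitor, the monitor to each agent,
# and each agent only to its own devices.
tiered_sockets = n_clients + n_agents + n_devices

print(f"flat: {flat_sockets} sockets, tiered: {tiered_sockets} sockets")
# flat: 240 sockets, tiered: 50 sockets
```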
Doulamis, A; Doulamis, N; Ntalianis, K; Kollias, S
2003-01-01
In this paper, an unsupervised video object (VO) segmentation and tracking algorithm is proposed based on an adaptable neural-network architecture. The proposed scheme comprises: 1) a VO tracking module and 2) an initial VO estimation module. Object tracking is handled as a classification problem and implemented through an adaptive network classifier, which provides better results compared to conventional motion-based tracking algorithms. Network adaptation is accomplished through an efficient and cost-effective weight updating algorithm, providing a minimum degradation of the previous network knowledge and taking into account the current content conditions. A retraining set is constructed and used for this purpose based on initial VO estimation results. Two different scenarios are investigated. The first concerns extraction of human entities in video conferencing applications, while the second exploits depth information to identify generic VOs in stereoscopic video sequences. Human face/body detection based on Gaussian distributions is accomplished in the first scenario, while segmentation fusion is obtained using color and depth information in the second scenario. A decision mechanism is also incorporated to detect time instances for weight updating. Experimental results and comparisons indicate the good performance of the proposed scheme even in sequences with complicated content (object bending, occlusion).
A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Brian S; Wu, Yifu; Wei, Jin
Distributed Energy Resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. As most of these generators are geographically dispersed, dedicated communications investments for every generator are capital cost prohibitive. Real-time distributed communications middleware, which supervises, organizes and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs, allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates the congestion attacks by exploiting the Quality of Experience (QoE) measures to complement the conventional Quality of Service (QoS) information to detect and mitigate the congestion attacks effectively. The simulation results illustrate the efficiency of our proposed communications middleware architecture.
A Distributed Middleware Architecture for Attack-Resilient Communications in Smart Grids: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Yifu; Wei, Jin; Hodge, Bri-Mathias
Distributed energy resources (DERs) are being increasingly accepted as an excellent complement to traditional energy sources in smart grids. Because most of these generators are geographically dispersed, dedicated communications investments for every generator are capital-cost prohibitive. Real-time distributed communications middleware - which supervises, organizes, and schedules tremendous amounts of data traffic in smart grids with high penetrations of DERs - allows for the use of existing network infrastructure. In this paper, we propose a distributed attack-resilient middleware architecture that detects and mitigates the congestion attacks by exploiting the quality of experience measures to complement the conventional quality of service information to effectively detect and mitigate congestion attacks. The simulation results illustrate the efficiency of our proposed communications middleware architecture.
COREBA (cognition-oriented emergent behavior architecture)
NASA Astrophysics Data System (ADS)
Kwak, S. David
2000-06-01
Currently, many behavior implementation technologies are available for modeling human behaviors in Department of Defense (DOD) computerized systems. However, it is commonly known that no single currently adopted behavior implementation technology is capable of fully representing complex and dynamic human decision-making and cognition behaviors. The author's view is that the current situation can be greatly improved if multiple technologies are integrated within a well-designed overarching architecture that amplifies the merits of each of the participating technologies while suppressing the limitations that are inherent in each of them. COREBA uses an overarching behavior integration architecture that makes multiple implementation technologies cooperate in a homogeneous environment while collectively transcending the limitations associated with the individual implementation technologies. Specifically, COREBA synergistically integrates Artificial Intelligence and Complex Adaptive Systems under the Rational Behavior Model multi-level, multi-paradigm behavior architecture. This paper describes the applicability of COREBA in the DOD domain, the behavioral capabilities and characteristics of COREBA, and how the COREBA architecture integrates various behavior implementation technologies.
Flow distribution in parallel microfluidic networks and its effect on concentration gradient
Guermonprez, Cyprien; Michelin, Sébastien; Baroud, Charles N.
2015-01-01
The architecture of microfluidic networks can significantly impact the flow distribution within its different branches and thereby influence tracer transport within the network. In this paper, we study the flow rate distribution within a network of parallel microfluidic channels with a single input and single output, using a combination of theoretical modeling and microfluidic experiments. Within the ladder network, the flow rate distribution follows a U-shaped profile, with the highest flow rate occurring in the initial and final branches. The contrast with the central branches is controlled by a single dimensionless parameter, namely, the ratio of hydrodynamic resistance between the distribution channel and the side branches. This contrast in flow rates decreases when the resistance of the side branches increases relative to the resistance of the distribution channel. When the inlet flow is composed of two parallel streams, one of which transporting a diffusing species, a concentration variation is produced within the side branches of the network. The shape of this concentration gradient is fully determined by two dimensionless parameters: the ratio of resistances, which determines the flow rate distribution, and the Péclet number, which characterizes the relative speed of diffusion and advection. Depending on the values of these two control parameters, different distribution profiles can be obtained ranging from a flat profile to a step distribution of solute, with well-distributed gradients between these two limits. Our experimental results are in agreement with our numerical model predictions, based on a simplified 2D advection-diffusion problem. Finally, two possible applications of this work are presented: the first one combines the present design with self-digitization principle to encapsulate the controlled concentration in nanoliter chambers, while the second one extends the present design to create a continuous concentration gradient within an open flow chamber. PMID:26487905
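A minimal sketch of the resistance-network view described above: build the conductance (Laplacian) matrix of a ladder with the inlet at the top-left and the outlet at the bottom-right, solve Kirchhoff's equations for the node pressures, and read off the branch flow rates. The geometry and resistance ratio are illustrative, not the paper's experimental values.

```python
import numpy as np

def ladder_flows(n_branches=10, r_dist=1.0, r_branch=10.0, q_in=1.0):
    n = 2 * n_branches                 # top (distribution) + bottom nodes
    L = np.zeros((n, n))

    def add_edge(a, b, resistance):
        g = 1.0 / resistance
        L[a, a] += g; L[b, b] += g
        L[a, b] -= g; L[b, a] -= g

    for i in range(n_branches - 1):
        add_edge(i, i + 1, r_dist)                          # distribution channel
        add_edge(n_branches + i, n_branches + i + 1, r_dist)  # collection channel
    for i in range(n_branches):
        add_edge(i, n_branches + i, r_branch)               # side branches

    q = np.zeros(n)
    q[0] = q_in                        # inlet at top-left
    L[n - 1, :] = 0.0                  # pin the outlet node's pressure to zero
    L[n - 1, n - 1] = 1.0
    p = np.linalg.solve(L, q)

    return (p[:n_branches] - p[n_branches:]) / r_branch  # flow in each branch

flows = ladder_flows()
print(np.round(flows, 4))  # U-shaped: first and last branches carry the most
```

Raising r_branch relative to r_dist flattens the profile, mirroring the single dimensionless resistance ratio identified in the abstract.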
Supporting shared data structures on distributed memory architectures
NASA Technical Reports Server (NTRS)
Koelbel, Charles; Mehrotra, Piyush; Vanrosendale, John
1990-01-01
Programming nonshared memory systems is more difficult than programming shared memory systems, since there is no support for shared data structures. Current programming languages for distributed memory architectures force the user to decompose all data structures into separate pieces, with each piece owned by one of the processors in the machine, and with all communication explicitly specified by low-level message-passing primitives. A new programming environment is presented for distributed memory architectures, providing a global name space and allowing direct access to remote parts of data values. The analysis and program transformations required to implement this environment are described, and the efficiency of the resulting code on the NCUBE/7 and IPSC/2 hypercubes is described.
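A toy sketch of what such an environment automates: a global 1D array partitioned into per-processor blocks, with ghost (halo) cells standing in for the remote accesses that the programmer would otherwise hand-code as message passing. The four-block split is illustrative.

```python
import numpy as np

def exchange_halos(blocks):
    """Copy neighboring boundary values into each block's ghost cells."""
    for k, b in enumerate(blocks):
        b[0] = blocks[k - 1][-2] if k > 0 else 0.0                # left ghost
        b[-1] = blocks[k + 1][1] if k < len(blocks) - 1 else 0.0  # right ghost

# Global array of 16 values split across 4 "processors", +2 ghost cells each.
global_x = np.arange(16, dtype=float)
blocks = [np.concatenate(([0.0], part, [0.0]))
          for part in np.split(global_x, 4)]

exchange_halos(blocks)
# Each owner can now apply a neighbor-averaging stencil purely locally.
smoothed = [0.5 * (b[:-2] + b[2:]) for b in blocks]
print(np.concatenate(smoothed))
```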
A Geo-Distributed System Architecture for Different Domains
NASA Astrophysics Data System (ADS)
Moßgraber, Jürgen; Middleton, Stuart; Tao, Ran
2013-04-01
The presentation will describe work on the system-of-systems (SoS) architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". In this project we deal with two use-cases: Natural Crisis Management (e.g. Tsunami Early Warning) and Industrial Subsurface Development (e.g. drilling for oil). These use-cases seem quite different at first sight but share many similarities, such as managing and looking up available sensors, extracting data from them and annotating it semantically, intelligently managing the data (a big-data problem), running mathematical analysis algorithms on the data, and finally providing decision support on this basis. The main challenge was to create a generic architecture which fits both use-cases. The requirements for the architecture are manifold, and the whole spectrum of a modern, geo-distributed and collaborative system comes into play. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. The most important architectural challenges we needed to address are:
1. Building a scalable communication layer for a system-of-systems
2. Building a resilient communication layer for a system-of-systems
3. Efficiently publishing large volumes of semantically rich sensor data
4. Scalable, high-performance storage of large distributed datasets
5. Handling federated multi-domain heterogeneous data
6. Discovery of resources in a geo-distributed SoS
7. Coordination of work between geo-distributed systems
The design decisions made for each of them will be presented. The concepts developed are also applicable to the requirements of the Future Internet (FI) and Internet of Things (IoT), which will provide services like smart grids, smart metering, logistics and environmental monitoring.
Control and Communication for a Secure and Reconfigurable Power Distribution System
NASA Astrophysics Data System (ADS)
Giacomoni, Anthony Michael
A major transformation is taking place throughout the electric power industry to overlay existing electric infrastructure with advanced sensing, communications, and control system technologies. This transformation to a smart grid promises to enhance system efficiency, increase system reliability, support the electrification of transportation, and provide customers with greater control over their electricity consumption. Upgrading control and communication systems for the end-to-end electric power grid, however, will present many new security challenges that must be dealt with before extensive deployment and implementation of these technologies can begin. In this dissertation, a comprehensive systems approach is taken to minimize and prevent cyber-physical disturbances to electric power distribution systems using sensing, communications, and control system technologies. To accomplish this task, an intelligent distributed secure control (IDSC) architecture is presented and validated in silico for distribution systems to provide greater adaptive protection, with the ability to proactively reconfigure and rapidly respond to disturbances. Detailed descriptions of functionalities at each layer of the architecture as well as the whole system are provided. To compare the performance of the IDSC architecture with that of other control architectures, an original simulation methodology is developed. The simulation model integrates aspects of cyber-physical security, dynamic price and demand response, sensing, communications, intermittent distributed energy resources (DERs), and dynamic optimization and reconfiguration. Applying this comprehensive systems approach, performance results for the IEEE 123 node test feeder are simulated and analyzed. The results show the trade-offs between system reliability, operational constraints, and costs for several control architectures and optimization algorithms. Additional simulation results are also provided. In particular, the advantages of an IDSC architecture are highlighted when an intermittent DER is present on the system.
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
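The overall-value computation at the heart of such an analysis fits in a few lines. The sketch below (Python; the candidate architectures, numbers, and weights are invented placeholders, not the report's data) scores alternatives on power, weight, and cost and selects the one with the best overall value:

```python
# Hypothetical alternatives: (power [W], weight [kg], cost [$k]); all are
# assumed to already satisfy the availability/integrity constraints.
alternatives = {
    "dual-channel bus": (120.0, 18.0, 250.0),
    "triplex star":     (150.0, 22.0, 210.0),
    "braided ring":     (110.0, 20.0, 280.0),
}
weights = {"power": 0.4, "weight": 0.3, "cost": 0.3}   # relative importance

def overall_value(attrs):
    """Score each criterion on [0, 1] (lower resource use is better) and
    combine with a weighted sum, as in a simple additive value model."""
    scores = {}
    for k, idx in (("power", 0), ("weight", 1), ("cost", 2)):
        vals = [v[idx] for v in alternatives.values()]
        lo, hi = min(vals), max(vals)
        scores[k] = (hi - attrs[idx]) / (hi - lo)      # 1 = best, 0 = worst
    return sum(weights[k] * scores[k] for k in weights)

best = max(alternatives, key=lambda name: overall_value(alternatives[name]))
print(best, {n: round(overall_value(a), 3) for n, a in alternatives.items()})
```

Varying the weights and re-running is the "iterative application" step: an alternative that stays on top across weightings dominates regardless of the relative importance of cost.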
Relighting Character Motion for Photoreal Simulations
2006-11-01
Southern California, Cinema-Television Interactive Media Division, LA, CA 90089. ABSTRACT: We present a fully image-based approach for...
Reliable Adaptive Video Streaming Driven by Perceptual Semantics for Situational Awareness
Pimentel-Niño, M. A.; Saxena, Paresh; Vazquez-Castro, M. A.
2015-01-01
A novel cross-layer optimized video adaptation driven by perceptual semantics is presented. The design target is streamed live video to enhance situational awareness in challenging communications conditions. Conventional solutions for recreational applications are inadequate, so a novel quality of experience (QoE) framework is proposed which allows fully controlled adaptation and enables perceptual semantic feedback. The framework relies on temporal/spatial abstraction for video applications serving beyond recreational purposes. An underlying cross-layer optimization technique takes into account feedback on network congestion (time) and erasures (space) to best distribute the available (scarce) bandwidth. Systematic random linear network coding (SRNC) adds reliability while preserving perceptual semantics. Objective metrics of the perceptual features in QoE show homogeneous high performance when using the proposed scheme. Finally, the proposed scheme is in line with content-aware trends, complying with the information-centric networking philosophy and architecture. PMID:26247057
Real-Time Hardware-in-the-Loop Simulation of Ares I Launch Vehicle
NASA Technical Reports Server (NTRS)
Tobbe, Patrick; Matras, Alex; Walker, David; Wilson, Heath; Fulton, Chris; Alday, Nathan; Betts, Kevin; Hughes, Ryan; Turbe, Michael
2009-01-01
The Ares Real-Time Environment for Modeling, Integration, and Simulation (ARTEMIS) has been developed for use by the Ares I launch vehicle System Integration Laboratory at the Marshall Space Flight Center. The primary purpose of the Ares System Integration Laboratory is to test the vehicle avionics hardware and software in a hardware-in-the-loop environment to certify that the integrated system is prepared for flight. ARTEMIS has been designed to be the real-time simulation backbone to stimulate all required Ares components for verification testing. ARTEMIS provides high-fidelity dynamics, actuator, and sensor models to simulate an accurate flight trajectory in order to ensure realistic test conditions. ARTEMIS has been designed to take advantage of the advances in underlying computational power now available to support hardware-in-the-loop testing to achieve real-time simulation with unprecedented model fidelity. A modular real-time design relying on a fully distributed computing architecture has been implemented.
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Gao, Yi
2017-03-01
This paper presents a fully automatic approach to grading intermediate prostate malignancy with hematoxylin and eosin-stained whole slide images. Deep learning architectures such as convolutional neural networks have been utilized in the domain of histopathology for automated carcinoma detection and classification. However, few works have shown their power in discriminating intermediate Gleason patterns, owing to the sporadic distribution of prostate glands on stained surgical section samples. We propose optimized hematoxylin decomposition on localized images, followed by a convolutional neural network, to classify Gleason patterns 3+4 and 4+3 without handcrafted features or gland segmentation. Crucial gland morphology and the structural relationships of nuclei are extracted twice in different color spaces by a multi-scale strategy to mimic pathologists' visual examination. Our novel classification scheme, evaluated on 169 whole slide images, yielded a 70.41% accuracy and a corresponding area under the receiver operating characteristic curve of 0.7247.
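For context, stain decomposition of this kind is usually done in optical-density space via Beer-Lambert absorption. The sketch below (Python) uses the standard Ruifrok-Johnston H&E stain vectors as generic defaults; the paper optimizes its own localized decomposition, so this is only the textbook baseline:

```python
import numpy as np

# Standard H&E stain vectors in optical-density (OD) space (Ruifrok & Johnston);
# treat these as generic defaults, not the paper's optimized decomposition.
STAINS = np.array([
    [0.650, 0.704, 0.286],   # hematoxylin
    [0.072, 0.990, 0.105],   # eosin
    [0.268, 0.570, 0.776],   # residual channel
])

def hematoxylin_channel(rgb_uint8):
    """Separate stains via Beer-Lambert: OD = -log10(I / I0), then express
    each OD pixel in the stain basis and keep the hematoxylin amount."""
    od = -np.log10((rgb_uint8.astype(np.float64) + 1.0) / 256.0)
    conc = od.reshape(-1, 3) @ np.linalg.inv(STAINS)   # per-pixel stain amounts
    return conc[:, 0].reshape(rgb_uint8.shape[:2])     # hematoxylin map

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in tile
print(hematoxylin_channel(img).shape)
```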
IDEA: Planning at the Core of Autonomous Reactive Agents
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Clancy, Daniel (Technical Monitor)
2002-01-01
Several successful autonomous systems are separated into technologically diverse functional layers operating at different levels of abstraction. This diversity makes them difficult to implement and validate. In this paper, we present IDEA (Intelligent Distributed Execution Architecture), a unified planning and execution framework. In IDEA, a layered system can be implemented as separate agents, one per layer, each representing its interactions with the world in a model. At all levels, the model representation primitives and their semantics are the same. Moreover, each agent relies on a single model, plan database, and plan runner, and on a variety of planners, both reactive and deliberative. The framework allows the specification of agents that operate within a guaranteed reaction time and supports flexible specification of reactive vs. deliberative agent behavior. Within the IDEA framework, we are working to fully duplicate the functionalities of the DS1 Remote Agent and extend it to domains of higher complexity than autonomous spacecraft control.
Renovating a 65-year-old performing arts center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gifford, R.S.
This article describes the HVAC, electrical and lighting systems that were upgraded in the renovations to the Wang Center for the Performing Arts. The renovations and restorations involved a complete restoration to elaborate interior finishes and a comprehensive upgrade of antiquated core mechanical and electrical systems in a 65-year-old performing arts theater. A new thermal storage cooling system, a new electrical power distribution system, new lighting systems and a new fire protection system were accomplished simultaneously as the theater interior was completely refinished with meticulous detail. The project offered a rare opportunity to integrate current technology with what may at first appear to be obsolete systems to enable the original architectural grandeur to be maintained, yet be fully functional to meet the demanding requirements of a modern performing arts center. It is an example of a successful project that was completed within a very aggressive construction schedule and within a controlled budget.
NCSTRL: Design and Deployment of a Globally Distributed Digital Library.
ERIC Educational Resources Information Center
Davies, James R.; Lagoze, Carl
2000-01-01
Discusses the development of a digital library architecture that allows the creation of digital libraries within the World Wide Web. Describes a digital library, NCSTRL (Networked Computer Science Technical Research Library), within which the work has taken place and explains Dienst, a protocol and architecture for distributed digital libraries.…
A Survey of Some Approaches to Distributed Data Base & Distributed File System Architecture.
1980-01-01
Figure 7-1: MUFFIN logical architecture (A = A Cell, D = D Cell). ... Kimbleton, Stephen; Wang, Pearl; and Fong, Elizabeth. XNDM: An Experimental Network...
A mission operations architecture for the 21st century
NASA Technical Reports Server (NTRS)
Tai, W.; Sweetnam, D.
1996-01-01
An operations architecture is proposed for low-cost missions beyond the year 2000. The architecture consists of three elements: a service-based architecture, a demand access automata, and distributed science hubs. The service-based architecture is built on a set of standard multimission services that are defined, packaged, and formalized by the Deep Space Network and the Advanced Multi-Mission Operations System. The demand access automata is a suite of technologies which reduces the need to be in contact with the spacecraft, and thus reduces operating costs. The beacon signaling, the virtual emergency room, and the high-efficiency tracking automata technologies are described. The distributed science hubs provide information system capabilities to the small, science-oriented flight teams: individual access to all traditional mission functions and services; multimedia intra-team communications; and automated, direct, transparent communications between the scientists and the instrument.
A practical approach for active camera coordination based on a fusion-driven multi-agent system
NASA Astrophysics Data System (ADS)
Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.
2014-04-01
In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.
An Association Mapping Framework To Account for Potential Sex Difference in Genetic Architectures.
Kang, Eun Yong; Lee, Cue Hyunkyu; Furlotte, Nicholas A; Joo, Jong Wha J; Kostem, Emrah; Zaitlen, Noah; Eskin, Eleazar; Han, Buhm
2018-05-11
Over the past few years, genome-wide association studies have identified many trait-associated loci that have different effects on females and males, which increased attention to the genetic architecture differences between the sexes. The between-sex differences in genetic architectures can cause a variety of phenomena such as differences in the effect sizes at trait-associated loci, differences in the magnitudes of polygenic background effects, and differences in the phenotypic variances. However, current association testing approaches for dealing with sex, such as including sex as a covariate, cannot fully account for these phenomena and can be suboptimal in statistical power. We present a novel association mapping framework, MetaSex, that can comprehensively account for the genetic architecture differences between the sexes. Through simulations and applications to real data, we show that our framework outperforms previous approaches in association mapping. Copyright © 2018, Genetics.
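As a statistical point of reference, the sketch below (Python with NumPy/SciPy) implements the standard sex-stratified baseline that such frameworks generalize: per-sex effect estimates are combined by inverse-variance (fixed-effects) weighting, and a separate z-test probes for a female-male difference in effect size. This illustrates the phenomena being modeled, not the MetaSex method itself:

```python
import numpy as np
from scipy import stats

def sex_stratified_tests(beta_f, se_f, beta_m, se_m):
    """Given per-sex effect estimates at a locus, return (i) a fixed-effects
    combined association p-value and (ii) a p-value for a female-male
    difference in effect size."""
    w_f, w_m = 1.0 / se_f**2, 1.0 / se_m**2
    z_comb = (w_f * beta_f + w_m * beta_m) / np.sqrt(w_f + w_m)
    z_diff = (beta_f - beta_m) / np.sqrt(se_f**2 + se_m**2)
    p = lambda z: 2.0 * stats.norm.sf(abs(z))
    return p(z_comb), p(z_diff)

# Toy locus: same direction of effect but a larger magnitude in females
print(sex_stratified_tests(beta_f=0.12, se_f=0.03, beta_m=0.04, se_m=0.03))
```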
The NASA Auralization Framework and Plugin Architecture
NASA Technical Reports Server (NTRS)
Aumann, Aric R.; Tuttle, Brian C.; Chapin, William L.; Rizzi, Stephen A.
2015-01-01
NASA has a long history of investigating human response to aircraft flyover noise and in recent years has developed a capability to fully auralize the noise of aircraft during their design. This capability is particularly useful for unconventional designs with noise signatures significantly different from the current fleet. To that end, a flexible software architecture has been developed to facilitate rapid integration of new simulation techniques for noise source synthesis and propagation, and to foster collaboration amongst researchers through a common releasable code base. The NASA Auralization Framework (NAF) is a skeletal framework written in C++ with basic functionalities and a plugin architecture that allows users to mix and match NAF capabilities with their own methods through the development and use of dynamically linked libraries. This paper presents the NAF software architecture and discusses several advanced auralization techniques that have been implemented as plugins to the framework.
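The plugin pattern NAF uses can be illustrated compactly. The sketch below is Python rather than the framework's C++, and every name in it is invented; it shows only the general shape of a host that discovers processing stages at run time and chains them:

```python
import importlib

class AuralizationPipeline:
    """Toy plugin host: stages are discovered by module name and must expose
    a `process(samples)` callable, standing in for NAF's dynamically linked
    plugin libraries (names here are hypothetical)."""

    def __init__(self, plugin_names):
        self.stages = [importlib.import_module(name) for name in plugin_names]

    def run(self, samples):
        for stage in self.stages:        # e.g. source synthesis, then propagation
            samples = stage.process(samples)
        return samples

# Hypothetical usage, assuming such plugin modules exist on the path:
# pipeline = AuralizationPipeline(["my_synth_plugin", "my_propagation_plugin"])
# audio = pipeline.run(audio)
```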
Intelligent distributed medical image management
NASA Astrophysics Data System (ADS)
Garcia, Hong-Mei C.; Yun, David Y.
1995-05-01
The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing while the flexible retrieval mechanisms are accomplished by combining attribute-based and content-based retrievals.
FPGA-based real-time phase measuring profilometry algorithm design and implementation
NASA Astrophysics Data System (ADS)
Zhan, Guomin; Tang, Hongwei; Zhong, Kai; Li, Zhongwei; Shi, Yusheng
2016-11-01
Phase measuring profilometry (PMP) has been widely used in many fields, such as computer-aided verification (CAV) and flexible manufacturing systems (FMS). High-frame-rate (HFR), real-time, vision-based feedback control will be a common demand in the near future. However, the instruction time delay in a computer caused by numerous repetitive operations greatly limits the efficiency of data processing. FPGAs offer a pipelined architecture and parallel execution, which fit the PMP algorithm well. In this paper, we design a fully pipelined hardware architecture for PMP. The functions of the hardware architecture include rectification, phase calculation, phase shifting, and stereo matching. Experiments verified the performance of this method, and the factors that may influence computation accuracy were analyzed.
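The pixel-parallel core of PMP is small, which is what makes it amenable to a fully pipelined FPGA design. Assuming a standard four-step phase-shifting scheme (the abstract does not state the step count used), the wrapped phase at each pixel is atan2(I4 - I2, I1 - I3); the sketch below verifies this on synthetic fringes:

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase-shifting PMP: with fringe images captured at phase
    shifts of 0, 90, 180 and 270 degrees, the wrapped phase is
    atan2(I4 - I2, I1 - I3). Each pixel is independent, which is exactly
    why the computation pipelines so well in hardware."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: rebuild a known phase map from generated fringe images
phi = np.linspace(0.0, np.pi / 2.0, 16).reshape(4, 4)
imgs = [np.cos(phi + k * np.pi / 2.0) for k in range(4)]
print(np.allclose(wrapped_phase(*imgs), phi))  # True
```

A full pipeline would follow this with phase unwrapping and stereo matching, as the abstract's list of hardware functions indicates.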
Architectural Implementation of NASA Space Telecommunications Radio System Specification
NASA Technical Reports Server (NTRS)
Peters, Kenneth J.; Lux, James P.; Lang, Minh; Duncan, Courtney B.
2012-01-01
This software demonstrates a working implementation of the NASA STRS (Space Telecommunications Radio System) architecture specification. This is a developing specification of software architecture and required interfaces to provide commonality among future NASA and commercial software-defined radios for space, and allow for easier mixing of software and hardware from different vendors. It provides required functions, and supports interaction with STRS-compliant simple test plug-ins ("waveforms"). All of it is programmed in "plain C," except where necessary to interact with C++ plug-ins. It offers a small footprint, suitable for use in JPL radio hardware. Future NASA work is expected to develop into fully capable software-defined radios for use on the space station, other space vehicles, and interplanetary probes.
NASA Astrophysics Data System (ADS)
Xie, Jibo; Li, Guoqing
2015-04-01
Earth observation (EO) data obtained by airborne or spaceborne sensors are heterogeneous and geographically distributed in storage. These data sources belong to different organizations or agencies whose data management and storage methods differ widely, and each source provides its own publishing platform or portal. As more remote sensing instruments are flown on EO missions, space agencies have accumulated massive, distributed EO data archives. This distribution of archives, together with system heterogeneity, makes it difficult to use geospatial data efficiently in many EO applications, such as hazard mitigation. To solve the interoperability problems of different EO data systems, this paper introduces an advanced architecture for distributed geospatial data infrastructure that addresses the complexity of integrating and processing distributed, heterogeneous EO data on demand. The concept and architecture of a geospatial data service gateway (GDSG) is proposed to connect heterogeneous EO data sources so that EO data can be retrieved and accessed through unified interfaces. The GDSG consists of a set of tools and services that encapsulate heterogeneous geospatial data sources into homogeneous service modules, including EO metadata harvesters and translators, adaptors for different types of data systems, unified data query and access interfaces, EO data cache management, and a gateway GUI. The GDSG framework is used to implement interoperability and synchronization between distributed EO data sources with heterogeneous architectures. An on-demand distributed EO data platform was developed to validate the GDSG architecture and implementation techniques, with several distributed EO data archives used for testing; flood and earthquake response serve as the two scenarios for the use cases of distributed EO data integration and interoperability.
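The gateway concept can be sketched as a uniform adapter interface with one concrete implementation per archive type. All names below are invented for illustration; the paper's actual interfaces are not given in the abstract:

```python
from abc import ABC, abstractmethod

class DataSourceAdapter(ABC):
    """Uniform interface the gateway exposes for each heterogeneous EO
    archive; method names are illustrative, not the paper's actual API."""

    @abstractmethod
    def harvest_metadata(self): ...          # pull and translate catalogue records

    @abstractmethod
    def query(self, bbox, t0, t1): ...       # unified spatio-temporal search

    @abstractmethod
    def fetch(self, granule_id): ...         # unified data access

class CswArchiveAdapter(DataSourceAdapter):
    """One concrete adapter per archive type hides its native protocol."""
    def harvest_metadata(self): return []
    def query(self, bbox, t0, t1): return []
    def fetch(self, granule_id): return b""

def run_flood_scenario(adapters, bbox, t0, t1):
    # The gateway fans a single query out over every registered source
    return [g for a in adapters for g in a.query(bbox, t0, t1)]
```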
NASA Constellation Distributed Simulation Middleware Trade Study
NASA Technical Reports Server (NTRS)
Hasan, David; Bowman, James D.; Fisher, Nancy; Cutts, Dannie; Cures, Edwin Z.
2008-01-01
This paper presents the results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
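A minimal flavor of the multicast mechanics at issue, sketched in Python (the group address, port, and message format are arbitrary choices): in the decentralized style, an idle workstation announces itself to a multicast group, and any interested scheduler listens:

```python
import socket, struct

GROUP, PORT = "224.0.0.250", 50000      # illustrative multicast group and port

def announce_idle(load):
    """An idle workstation multicasts its availability (decentralized style)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    s.sendto(f"idle load={load:.2f}".encode(), (GROUP, PORT))

def listen_for_idle(timeout=2.0):
    """A scheduler (or any peer) listens for announcements on the group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    s.settimeout(timeout)
    try:
        data, addr = s.recvfrom(1024)
        return addr[0], data.decode()
    except socket.timeout:
        return None
```

In the centralized alternative the paper contrasts, the same announcements would flow to one registry host, which is easier to scale and to mine for global statistics.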
Programmable architecture for pixel level processing tasks in lightweight strapdown IR seekers
NASA Astrophysics Data System (ADS)
Coates, James L.
1993-06-01
Typical processing tasks associated with missile IR seeker applications are described, and a straw man suite of algorithms is presented. A fully programmable multiprocessor architecture is realized on a multimedia video processor (MVP) developed by Texas Instruments. The MVP combines the elements of RISC, floating point, advanced DSPs, graphics processors, display and acquisition control, RAM, and external memory. Front end pixel level tasks typical of missile interceptor applications, operating on 256 x 256 sensor imagery, can be processed at frame rates exceeding 100 Hz in a single MVP chip.
Designing a low cost bedside workstation for intensive care units.
Michel, A.; Zörb, L.; Dudeck, J.
1996-01-01
The paper describes the design and implementation of a software architecture for a low-cost bedside workstation for intensive care units. The development is fully integrated into the information infrastructure of the existing hospital information system (HIS) at the University Hospital of Giessen. It provides cost-efficient and reliable access for data entry and review from the HIS database from within patient rooms, even in very space-limited environments. The architecture further supports automatic data input from medical devices. First results from three different intensive care units are reported. PMID:8947771
Modeling and Verification of Dependable Electronic Power System Architecture
NASA Astrophysics Data System (ADS)
Yuan, Ling; Fan, Ping; Zhang, Xiao-fang
The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems that generate, transmit, and distribute electric power. The complex interaction among subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electric power, fault-tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements. As a result, this incorporation makes the design of such systems more complicated still. We propose a dependable electronic power system architecture, which provides a generic framework to guide the development of electronic power systems and ease development complexity. In order to provide common idioms and patterns to system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault-tolerant properties of the architecture using the PVS theorem prover, which can guarantee that the system architecture satisfies high reliability requirements.
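As a toy illustration of the kind of fault-tolerance property such a verification discharges, the sketch below exhaustively checks that a triple-modular-redundancy voter masks any single replica fault; in PVS this would be stated and proved as a theorem rather than enumerated (the voter is a generic example, not a component of the proposed architecture):

```python
from itertools import product

def tmr_vote(a, b, c):
    """Triple-modular-redundancy majority voter, a classic fault-tolerant
    mechanism of the sort one would verify formally."""
    return (a and b) or (a and c) or (b and c)

# Single-fault masking property: starting from any all-agree state, flipping
# any one replica never changes the voted output.
ok = all(
    tmr_vote(*(not v if i == j else v for j, v in enumerate(bits))) == bits[0]
    for bits in product([False, True], repeat=3) if bits[0] == bits[1] == bits[2]
    for i in range(3)
)
print(ok)  # True
```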
Middleware Trade Study for NASA Domain
NASA Technical Reports Server (NTRS)
Bowman, Dan
2007-01-01
This presentation gives preliminary results of a trade study designed to assess three distributed simulation middleware technologies for support of the NASA Constellation Distributed Space Exploration Simulation (DSES) project and the Test and Verification Distributed System Integration Laboratory (DSIL). The technologies are: the High Level Architecture (HLA), the Test and Training Enabling Architecture (TENA), and an XML-based variant of Distributed Interactive Simulation (DIS-XML) coupled with the Extensible Messaging and Presence Protocol (XMPP). According to the criteria and weights determined in this study, HLA scores better than the other two for DSES as well as the DSIL.
Reductive evolution of architectural repertoires in proteomes and the birth of the tripartite world
Wang, Minglei; Yafremava, Liudmila S.; Caetano-Anollés, Derek; Mittenthal, Jay E.; Caetano-Anollés, Gustavo
2007-01-01
The repertoire of protein architectures in proteomes is evolutionarily conserved and capable of preserving an accurate record of genomic history. Here we use a census of protein architecture in 185 genomes that have been fully sequenced to generate genome-based phylogenies that describe the evolution of the protein world at fold (F) and fold superfamily (FSF) levels. The patterns of representation of F and FSF architectures over evolutionary history suggest three epochs in the evolution of the protein world: (1) architectural diversification, where members of an architecturally rich ancestral community diversified their protein repertoire; (2) superkingdom specification, where superkingdoms Archaea, Bacteria, and Eukarya were specified; and (3) organismal diversification, where F and FSF specific to relatively small sets of organisms appeared as the result of diversification of organismal lineages. Functional annotation of FSF along these architectural chronologies revealed patterns of discovery of biological function. Most importantly, the analysis identified an early and extensive differential loss of architectures occurring primarily in Archaea that segregates the archaeal lineage from the ancient community of organisms and establishes the first organismal divide. Reconstruction of phylogenomic trees of proteomes reflects the timeline of architectural diversification in the emerging lineages. Thus, Archaea undertook a minimalist strategy using only a small subset of the full architectural repertoire and then crystallized into a diversified superkingdom late in evolution. Our analysis also suggests a communal ancestor to all life that was molecularly complex and adopted genomic strategies currently present in Eukarya. PMID:17908824
A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data
NASA Astrophysics Data System (ADS)
Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.
2014-12-01
Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.
ERIC Educational Resources Information Center
Amenyo, John-Thones
2012-01-01
Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…
ERIC Educational Resources Information Center
Ahmed, Iftikhar; Sadeq, Muhammad Jafar
2006-01-01
Current distance learning systems are increasingly packing highly data-intensive contents on servers, resulting in the congestion of network and server resources at peak service times. A distributed learning system based on faded information field (FIF) architecture that employs mobile agents (MAs) has been proposed and simulated in this work. The…
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.; Das, Raja; Saltz, Joel; Vermeland, R. E.
1992-01-01
An efficient three dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared memory computer and on an Intel Touchstone Delta distributed memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between two differing architectures are made.
The architecture of a distributed medical dictionary.
Fowler, J; Buffone, G; Moreau, D
1995-01-01
Exploiting high-speed computer networks to provide a national medical information infrastructure is a goal for medical informatics. The Distributed Medical Dictionary under development at Baylor College of Medicine is a model for an architecture that supports collaborative development of a distributed online medical terminology knowledge-base. A prototype is described that illustrates the concept. Issues that must be addressed by such a system include high availability, acceptable response time, support for local idiom, and control of vocabulary.
Gontijo, Lessando M; Nechols, James R; Margolies, David C; Cloyd, Raymond A
2012-01-01
The arrangement, number, and size of plant parts may influence predator foraging behavior, either directly, by altering the rate or pattern of predator movement, or, indirectly, by affecting the distribution and abundance of prey. We report on the effects of both plant architecture and prey distribution on foraging by the predatory mite, Phytoseiulus persimilis Athias-Henriot (Acari: Phytoseiidae), on cucumber (Cucumis sativus L.). Plants differed in leaf number (2- or 6-leafed), and there were associated differences in leaf size, plant height, and relative proportions of plant parts; but all had the same total surface area. The prey, the twospotted spider mite Tetranychus urticae Koch (Acari: Tetranychidae), were distributed either on the basal leaf or on all leaves. The effect of plant architecture on predator foraging behavior varied depending on prey distribution. The dimensions of individual plant parts affected time allocated to moving and feeding, but they did not appear to influence the frequency with which predators moved among different plant parts. Overall, P. persimilis moved less, and fed upon prey longer, on 6-leafed plants with prey on all leaves than on plants representing other treatment combinations. Our findings suggest that both plant architecture and pattern of prey distribution should be considered, along with other factors such as herbivore-induced plant volatiles, in augmentative biological control programs.
An eConsent-based System Architecture Supporting Cooperation in Integrated Healthcare Networks.
Bergmann, Joachim; Bott, Oliver J; Hoffmann, Ina; Pretschner, Dietrich P
2005-01-01
The economic need for efficient healthcare leads to cooperative shared-care networks. A virtual electronic health record is required that integrates patient-related information but reflects the distributed infrastructure and restricts access to those health professionals involved in the care process. Our work aims at the specification and development of a system architecture fulfilling these requirements, to be used in concrete regional pilot studies. Methodical analysis and specification have been performed in a healthcare network using the formal method and modelling tool MOSAIK-M. The complexity of the application field was reduced by focusing on the scenario of thyroid disease care, which still involves varied interdisciplinary cooperation. The result is an architecture for a secure distributed electronic health record for integrated care networks, specified in terms of a MOSAIK-M-based system model. The architecture defines business processes, application services, and a sophisticated security concept, providing a platform for distributed, document-based, patient-centred, and secure cooperation. A corresponding system prototype has been developed for pilot studies, using advanced application server technologies. The architecture combines consolidated patient-centred document management with a decentralized system structure without the need for replication management. An eConsent-based approach ensures that access to the distributed health record remains under the control of the patient. The proposed architecture replaces message-based communication approaches, because it implements a virtual health record providing complete and current information. Acceptance of the new communication services depends on compatibility with clinical routine. Unique, cross-institutional identification of a patient is also a challenge, but will lose significance as common patient cards become established.
Grid Data Access on Widely Distributed Worker Nodes Using Scalla and SRM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakl, Pavel; /Prague, Inst. Phys.; Lauret, Jerome
2011-11-10
Facing the reality of storage economics, NP experiments such as RHIC/STAR have been engaged in a shift of the analysis model, and now heavily rely on using cheap disks attached to processing nodes, as such a model is extremely beneficial compared with expensive centralized storage. Additionally, exploiting storage aggregates with enhanced distributed computing capabilities such as dynamic space allocation (lifetime of spaces), file management on shared storages (lifetime of files, pinning of files), storage policies, or uniform access to heterogeneous storage solutions is not an easy task. The Xrootd/Scalla system allows for storage aggregation. We will present an overview of the largest deployment of Scalla (Structured Cluster Architecture for Low Latency Access) in the world, spanning over 1000 CPUs co-sharing 350 TB of Storage Elements, and the experience of how to make such a model work in the RHIC/STAR standard analysis framework. We will explain the key features and our approach to making access to mass storage (HPSS) possible in such a large deployment context. Furthermore, we will give an overview of a fully 'gridified' solution using the plug-and-play features of the Scalla architecture, replacing standard storage access with grid middleware SRM (Storage Resource Manager) components designed for space management, and will compare this solution with the standard Scalla approach in use in STAR for the past 2 years. Integration details, future plans, and the status of development will be explained in the areas of best transfer strategy between multiple-choice data pools and best placement with respect to load balancing and interoperability with other SRM-aware tools or implementations.
Kim, Sang-Yoon; Lim, Woochang
2016-07-01
We investigate the effect of network architecture on burst and spike synchronization in a directed scale-free network (SFN) of bursting neurons, evolved via two independent α- and β-processes. The α-process corresponds to a directed version of the Barabási-Albert SFN model with growth and preferential attachment, while for the β-process only preferential attachments between pre-existing nodes are made without addition of new nodes. We first consider the "pure" α-process of symmetric preferential attachment (with the same in- and out-degrees), and study emergence of burst and spike synchronization by varying the coupling strength J and the noise intensity D for a fixed attachment degree. Characterizations of burst and spike synchronization are also made by employing realistic order parameters and statistical-mechanical measures. Next, we choose appropriate values of J and D where only burst synchronization occurs, and investigate the effect of the scale-free connectivity on the burst synchronization by varying (1) the symmetric attachment degree and (2) the asymmetry parameter (representing deviation from the symmetric case) in the α-process, and (3) the occurrence probability of the β-process. In all three cases, changes in the type and the degree of population synchronization are studied in connection with the network topology, such as the degree distribution, the average path length Lp, and the betweenness centralization Bc. It is thus found that taking into consideration only Lp and Bc (affecting global communication between nodes) is not sufficient to understand the emergence of population synchronization in SFNs; in addition, the in-degree distribution (affecting individual dynamics) must also be considered to fully understand effective population synchronization. Copyright © 2016 Elsevier Ltd. All rights reserved.
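The topology measures the study correlates with synchronization are straightforward to reproduce on a generated directed SFN. The sketch below uses NetworkX's directed preferential-attachment generator, whose default alpha/beta/gamma mixture is only loosely analogous to the paper's α- and β-processes:

```python
import networkx as nx

# Directed scale-free graph grown by a mix of node-adding and edge-adding
# preferential-attachment steps (NetworkX's Bollobas et al. generator with
# its default parameters; not identical to the paper's growth model).
g = nx.DiGraph(nx.scale_free_graph(1000, seed=1))   # collapse parallel edges

in_deg = [d for _, d in g.in_degree()]
print("max in-degree (hub size):", max(in_deg))

u = g.to_undirected()
core = u.subgraph(max(nx.connected_components(u), key=len))
print("average path length Lp:", round(nx.average_shortest_path_length(core), 3))

b = nx.betweenness_centrality(core)                 # normalized betweenness
b_max, n = max(b.values()), core.number_of_nodes()
bc = sum(b_max - v for v in b.values()) / (n - 1)   # Freeman-style centralization
print("betweenness centralization Bc:", round(bc, 4))
```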
Proceedings of the Second Software Architecture Technology User Network (SATURN) Workshop
2006-08-01
Proceedings of the Second Software Architecture Technology User Network (SATURN) Workshop. Robert L. Nord, August 2006. Technical report CMU/SEI-2006-TR-010, ESC-TR-2006-010, Software Architecture Technology Initiative. Unlimited distribution subject to the copyright. Contents include the workshop participants, the SATURN opening presentation on future directions of the Software Architecture Technology Initiative, and a keynote...
2006-12-01
Guidance and Navigation Software Architecture Design for the Autonomous Multi-Agent Physically Interacting Spacecraft (AMPHIS) Test Bed, by Blake D. Eikenberry (Engineer degree thesis). Approved for public release; distribution is unlimited.
Distributed phased array architecture study
NASA Technical Reports Server (NTRS)
Bourgeois, Brian
1987-01-01
Variations in amplifiers and phase shifters can cause degraded antenna performance, depending also on the environmental conditions and antenna array architecture. The implementation of distributed phased array hardware was studied with the aid of the DISTAR computer program as a simulation tool. This simulation provides guidance in hardware simulation. Both hard and soft failures of the amplifiers in the T/R modules are modeled. Hard failures are catastrophic: no power is transmitted to the antenna elements. Noncatastrophic or soft failures are modeled as a modified Gaussian distribution. The resulting amplitude characteristics then determine the array excitation coefficients. The phase characteristics take on a uniform distribution. Pattern characteristics such as antenna gain, half power beamwidth, mainbeam phase errors, sidelobe levels, and beam pointing errors were studied as functions of amplifier and phase shifter variations. General specifications for amplifier and phase shifter tolerances in various architecture configurations for C band and S band were determined.
A component-based, distributed object services architecture for a clinical workstation.
Chueh, H. C.; Raila, W. F.; Pappas, J. J.; Ford, M.; Zatsman, P.; Tu, J.; Barnett, G. O.
1996-01-01
Attention to an architectural framework in the development of clinical applications can promote reusability of both legacy systems as well as newly designed software. We describe one approach to an architecture for a clinical workstation application which is based on a critical middle tier of distributed object-oriented services. This tier of network-based services provides flexibility in the creation of both the user interface and the database tiers. We developed a clinical workstation for ambulatory care using this architecture, defining a number of core services including those for vocabulary, patient index, documents, charting, security, and encounter management. These services can be implemented through proprietary or more standard distributed object interfaces such as CORBA and OLE. Services are accessed over the network by a collection of user interface components which can be mixed and matched to form a variety of interface styles. These services have also been reused with several applications based on World Wide Web browser interfaces. PMID:8947744
The AI Bus architecture for distributed knowledge-based systems
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain
1991-01-01
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes or demons provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.
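The shared-blackboard mechanism at the core of the agent design can be illustrated with a minimal sketch (Python; the class and the example knowledge source are invented, and the real AI Bus is a C++ library):

```python
class Blackboard:
    """Minimal blackboard: knowledge sources watch shared state and fire
    when their precondition holds, loosely mirroring the AI Bus agents'
    shared-blackboard communication."""

    def __init__(self):
        self.state = {}
        self.sources = []        # (precondition, action) pairs

    def add_source(self, precondition, action):
        self.sources.append((precondition, action))

    def run(self, max_cycles=10):
        for _ in range(max_cycles):
            fired = False
            for pre, act in self.sources:
                if pre(self.state):
                    act(self.state)   # actions post results back to the board
                    fired = True
            if not fired:             # quiesce once no source can contribute
                break
        return self.state

bb = Blackboard()
bb.state["raw"] = [3, 1, 2]
bb.add_source(lambda s: "raw" in s and "sorted" not in s,
              lambda s: s.__setitem__("sorted", sorted(s.pop("raw"))))
print(bb.run())   # {'sorted': [1, 2, 3]}
```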
Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX
2007-05-17
including the QNX real-time operating system. The video overlay board is useful to display the onboard camera's image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication
Brain tumor segmentation with Deep Neural Networks.
Havaei, Mohammad; Davy, Axel; Warde-Farley, David; Biard, Antoine; Courville, Aaron; Bengio, Yoshua; Pal, Chris; Jodoin, Pierre-Marc; Larochelle, Hugo
2017-01-01
In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any kind of shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high capacity DNN while being extremely efficient. Here, we give a description of different model choices that we've found to be necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNN), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features as well as more global contextual features simultaneously. Also, different from most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer, which allows a 40-fold speed-up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test dataset reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster. Copyright © 2016 Elsevier B.V. All rights reserved.
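The two-pathway, fully convolutional ideas are concrete enough to sketch. The PyTorch model below is a simplified stand-in (layer widths and the absence of maxout/pooling are deliberate simplifications, not the paper's exact configuration): the local and global paths meet at the same spatial size, and the final "fully connected" layer is a convolution, so a larger input yields a dense label map rather than a single patch label:

```python
import torch
import torch.nn as nn

class TwoPathwayCNN(nn.Module):
    """Simplified two-pathway CNN: small-kernel local path, large-kernel
    global path, and a convolutional stand-in for the fully connected
    classifier (which is what enables dense inference)."""

    def __init__(self, in_ch=4, n_classes=5):
        super().__init__()
        self.local_path = nn.Sequential(            # 33 -> 27 -> 21
            nn.Conv2d(in_ch, 64, kernel_size=7), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=7), nn.ReLU(),
        )
        self.global_path = nn.Sequential(           # 33 -> 21
            nn.Conv2d(in_ch, 160, kernel_size=13), nn.ReLU(),
        )
        # "Fully connected" layer over the 21x21 receptive field, as a conv
        self.classify = nn.Conv2d(64 + 160, n_classes, kernel_size=21)

    def forward(self, x):
        feats = torch.cat([self.local_path(x), self.global_path(x)], dim=1)
        return self.classify(feats)

net = TwoPathwayCNN()
print(net(torch.randn(1, 4, 33, 33)).shape)   # 1 x 5 x 1 x 1  (one patch label)
print(net(torch.randn(1, 4, 64, 64)).shape)   # 1 x 5 x 32 x 32 (dense label map)
```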
Strategies for P2P connectivity in reconfigurable converged wired/wireless access networks.
Puerto, Gustavo; Mora, José; Ortega, Beatriz; Capmany, José
2010-12-06
This paper presents different strategies for defining the architecture of Radio-over-Fiber (RoF) access networks enabling peer-to-peer (P2P) functionalities. The architectures fully exploit the flexibility of a wavelength router based on the feedback configuration of an Arrayed Waveguide Grating (AWG) and an optical switch to broadcast P2P services among diverse infrastructures, featuring dynamic channel allocation and enabling an optical platform for 3G-and-beyond wireless backhaul requirements. The first architecture incorporates a tunable laser to generate a dedicated wavelength for P2P purposes, and the second architecture takes advantage of reused wavelengths to enable P2P connectivity among Optical Network Units (ONUs) or Base Stations (BSs). While these two approaches allow P2P connectivity on a one-at-a-time basis (1:1), the third architecture enables the broadcasting of P2P sessions among different ONUs or BSs at the same time (1:M). Experimental assessment of the proposed architectures shows approximately 0.6% Error Vector Magnitude (EVM) degradation for wireless services and a 1 dB penalty on average for a 1 x 10^-12 Bit Error Rate (BER) for wired baseband services.
Web-Based Course Management and Web Services
ERIC Educational Resources Information Center
Mandal, Chittaranjan; Sinha, Vijay Luxmi; Reade, Christopher M. P.
2004-01-01
The architecture of a web-based course management tool that has been developed at IIT [Indian Institute of Technology], Kharagpur and which manages the submission of assignments is discussed. Both the distributed architecture used for data storage and the client-server architecture supporting the web interface are described. Further developments…
NASA Technical Reports Server (NTRS)
Rogers, David
1988-01-01
The advent of the Connection Machine profoundly changes the world of supercomputers. The highly nontraditional architecture makes possible the exploration of algorithms that were impractical for standard Von Neumann architectures. Sparse distributed memory (SDM) is an example of such an algorithm. Sparse distributed memory is a particularly simple and elegant formulation for an associative memory. The foundations for sparse distributed memory are described, and some simple examples of using the memory are presented. The relationship of sparse distributed memory to three important computational systems is shown: random-access memory, neural networks, and the cerebellum of the brain. Finally, the implementation of the algorithm for sparse distributed memory on the Connection Machine is discussed.
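Kanerva's memory is compact enough to sketch in full. The toy implementation below (Python; the location count, dimensionality, and activation radius are typical illustrative choices, not values from the paper) writes a pattern into every hard location within a Hamming radius of the write address and reads it back from a noisy cue:

```python
import numpy as np

class SparseDistributedMemory:
    """Kanerva-style SDM: data are written into the counters of every hard
    location whose address lies within a Hamming radius of the write
    address, and reads sum those counters back into a bit vector."""

    def __init__(self, n_locations=2000, dim=256, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        self.counters = np.zeros((n_locations, dim), dtype=np.int32)
        self.radius = radius

    def _active(self, addr):
        return (self.addresses != addr).sum(axis=1) <= self.radius

    def write(self, addr, data):
        self.counters[self._active(addr)] += 2 * data - 1   # bits -> +/-1

    def read(self, addr):
        return (self.counters[self._active(addr)].sum(axis=0) > 0).astype(int)

rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, 256)
sdm.write(pattern, pattern)                 # autoassociative store
noisy = pattern.copy(); noisy[:20] ^= 1     # corrupt the cue by 20 bits
print((sdm.read(noisy) == pattern).mean())  # fraction recalled, typically ~1.0
```

The per-location Hamming comparisons are independent, which is what made the algorithm such a natural fit for the Connection Machine's massive parallelism.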
Woven TPS Enabling Missions Beyond Heritage Carbon Phenolic
NASA Technical Reports Server (NTRS)
Stackpoole, Mairead; Venkatapathy, Ethiraj; Feldman, Jay
2013-01-01
Woven Thermal Protection Systems (WTPS) is a new TPS concept funded by NASA's Office of the Chief Technologist (OCT) Game Changing Division. The WTPS project demonstrates the potential for manufacturing many TPS architectures capable of the performance demanded by many potential solar system exploration missions. Currently, missions that encounter heat fluxes in the range of 1500-4000 W/sq cm and pressures greater than 1.5 atm have very limited TPS options - only one proven material, fully dense carbon phenolic, is currently available for these missions. However, fully dense carbon phenolic is only mass efficient at heat fluxes greater than 4000 W/sq cm, and current mission designs suffer this mass inefficiency for lack of an alternative mid-density TPS. WTPS not only bridges this TPS gap but also offers a replacement for carbon phenolic, which itself requires a significant and costly redevelopment effort to re-establish its capability for use in the high heat flux missions recently prioritized in the NRC Decadal Survey, including probe missions to Venus, Saturn, and Neptune. This presentation will introduce some woven TPS architectures considered in this project and summarize recent arc jet testing to evaluate the performance of fully dense and mid-density WTPS. Performance comparisons to heritage carbon phenolic will be drawn where applicable.
Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices
Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried
2017-01-01
In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures. PMID:29066942
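The convolution-to-fully-connected mapping the abstract refers to is, in essence, the classic im2col construction: unrolling input patches into columns turns each convolutional layer into a single weight matrix applied to a patch matrix, which is exactly the operation a resistive crossbar performs in place. A minimal sketch (Python; shapes are illustrative):

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll all kh x kw patches of a (C, H, W) image into columns so a
    convolution becomes one matrix multiply on an array-style accelerator."""
    c, h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[:, i:i + kh, j:j + kw].ravel()
    return cols

# A conv layer with F filters is then a single (F x C*kh*kw) weight matrix --
# the shape one would map onto a fully connected resistive array.
c, h, w, f, k = 3, 8, 8, 16, 3
x = np.random.randn(c, h, w)
weights = np.random.randn(f, c, k, k)
y = (weights.reshape(f, -1) @ im2col(x, k, k)).reshape(f, h - k + 1, w - k + 1)
print(y.shape)  # (16, 6, 6)
```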
Power System Information Delivering System Based on Distributed Object
NASA Astrophysics Data System (ADS)
Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji
In recent years, computer performance, computer network technology, and distributed information processing technology have advanced remarkably. Moreover, deregulation is beginning and will spread in the electric power industry in Japan. Consequently, power suppliers are required to supply low-cost power with high-quality services to customers. Corresponding to these movements, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability to adapt to those movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture. This paper describes the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using the Internet and distributed object technology. This system is a new type of SCADA system that monitors failures of the power transmission and distribution systems in a way integrated with geographic information.
Motion/imagery secure cloud enterprise architecture analysis
NASA Astrophysics Data System (ADS)
DeLay, John L.
2012-06-01
Cloud computing with storage virtualization and new service-oriented architectures brings a new perspective to distributed motion imagery and persistent surveillance enterprises. Our existing research is focused mainly on content management, distributed analytics, and WAN distributed cloud networking performance issues of cloud-based technologies. The potential of leveraging cloud-based technologies for hosting motion imagery, imagery, and analytics workflows for DOD and security applications is relatively unexplored. This paper will examine technologies for managing, storing, processing and disseminating motion imagery and imagery within a distributed network environment. Finally, we propose areas for future research in the area of distributed cloud content management enterprises.
Space Station Freedom power management and distribution design status
NASA Technical Reports Server (NTRS)
Javidi, S.; Gholdston, E.; Stroh, P.
1989-01-01
The design status of the power management and distribution electric power system for Space Station Freedom is presented. The current design is a star architecture, which has been found to be the best approach for meeting the requirement to deliver 120 V dc to the user interface. The architecture minimizes mass and power losses while improving element-to-element isolation and system flexibility. The design is partitioned into three elements: energy collection, storage, and conversion; system protection and distribution; and management and control.
A digital protection system incorporating knowledge based learning
NASA Astrophysics Data System (ADS)
Watson, Karan; Russell, B. Don; McCall, Kurt
A digital system architecture used to diagnose the operating state and health of electric distribution lines and to generate actions for line protection is presented. The architecture is described functionally and, to a limited extent, at the hardware level. This architecture incorporates multiple analysis and fault-detection techniques utilizing a variety of parameters. In addition, a knowledge-based decision maker, a long-term memory retention and recall scheme, and a learning environment are described. Preliminary laboratory implementations of the system elements have been completed. Enhanced protection for electric distribution feeders is provided by this system. Advantages of the system are enumerated.
A resilient and secure software platform and architecture for distributed spacecraft
NASA Astrophysics Data System (ADS)
Otte, William R.; Dubey, Abhishek; Karsai, Gabor
2014-06-01
A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objectives of this layer.
Jiang, Guangli; Liu, Leibo; Zhu, Wenping; Yin, Shouyi; Wei, Shaojun
2015-09-04
This paper proposes a real-time feature extraction VLSI architecture for high-resolution images based on the accelerated KAZE algorithm. Firstly, a new system architecture is proposed. It increases the system throughput, provides flexibility in image resolution, and offers trade-offs between speed and scaling robustness. The architecture consists of a two-dimensional pipeline array that fully utilizes computational similarities in octaves. Secondly, a substructure (block-serial discrete-time cellular neural network) that can realize a nonlinear filter is proposed. This structure decreases the memory demand through the removal of data dependency. Thirdly, a hardware-friendly descriptor is introduced in order to overcome the hardware design bottleneck through the polar sample pattern; a simplified method to realize rotation invariance is also presented. Finally, the proposed architecture is designed in TSMC 65 nm CMOS technology. The experimental results show a performance of 127 fps at full HD resolution at a 200 MHz clock frequency. The peak performance reaches 181 GOPS, and the throughput is double that of other state-of-the-art architectures.
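As a software reference for the nonlinear filtering stage, the sketch below iterates a discrete-time cellular neural network (DT-CNN) update on an image, the kind of operation the block-serial hardware substructure computes. The 3x3 templates and step count are illustrative assumptions, and the hardware's block-serial memory scheduling is not modeled.

import numpy as np
from scipy.signal import convolve2d

def dtcnn_step(x, u, A, B, z):
    # One synchronous DT-CNN update: feedback template A acts on the output,
    # feedthrough template B acts on the input image, plus a bias term z.
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # piecewise-linear output function
    return (convolve2d(y, A, mode="same", boundary="symm")
            + convolve2d(u, B, mode="same", boundary="symm") + z)

def nonlinear_filter(u, steps=10):
    # Iterate the network to obtain a nonlinear smoothing of the input image u.
    A = np.array([[0.0, 0.1, 0.0], [0.1, 1.2, 0.1], [0.0, 0.1, 0.0]])  # assumed template
    B = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])  # pass-through input
    x = u.copy()
    for _ in range(steps):
        x = dtcnn_step(x, u, A, B, z=0.0)
    return x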
Geospace simulations using modern accelerator processor technology
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D. J.
2009-12-01
OpenGGCM (Open Geospace General Circulation Model) is a well-established numerical code simulating the Earth's space environment. The most computing intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is currently limited by computational constraints on grid resolution. OpenGGCM has been ported to make use of the added computational power of modern accelerator-based processor architectures, in particular the Cell processor. The Cell architecture is a novel inhomogeneous multicore architecture capable of achieving up to 230 GFlops on a single chip. The University of New Hampshire recently acquired a PowerXCell 8i based computing cluster, and here we will report initial performance results of OpenGGCM. Realizing the high theoretical performance of the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallelization approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We use a modern technique, automatic code generation, which shields the application programmer from having to deal with all of the implementation details just described, keeping the code much more easily maintainable. Our preliminary results indicate excellent performance, a speed-up of a factor of 30 compared to the unoptimized version.
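A rough Python analogue of this multi-level schedule is sketched below: the rank-local 3D grid is split into columns sized for a small local store, and each column is processed slice by slice, standing in for the DMA-fed SPE loop. The per-slice update is a placeholder, and the MPI-level domain decomposition and SIMD levels are omitted.

import numpy as np

def process_slice(s):
    # Stand-in for the per-slice MHD stencil update done inside one SPE.
    return s * 0.5  # placeholder arithmetic only

def solve_subdomain(grid, col=8):
    # Mimic the Cell-style schedule: split the rank-local 3D grid into columns
    # small enough for an SPE local store, then stream each column slice by slice.
    nx, ny, nz = grid.shape
    out = np.empty_like(grid)
    for i0 in range(0, nx, col):
        for j0 in range(0, ny, col):
            column = grid[i0:i0+col, j0:j0+col, :]     # one 3D column
            for k in range(nz):                        # DMA'd in slice by slice
                out[i0:i0+col, j0:j0+col, k] = process_slice(column[:, :, k])
    return out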
Zhang, Xing; Wang, Ke Lin; Fu, Zhi Yong; Chen, Hong Song; Zhang, Wei; Shi, Zhi Hua
2017-07-18
The traditional hydrology method, stable hydrogen and oxygen isotope technology, and the rainfall simulation method were combined to investigate the hydrological function of small experimental plots (2 m×1.2 m) of contrasting architecture in the Northwest Guangxi dolomite area. There were four typical catenary soil types along the dolomite peak-cluster slope: whole-sand, up-loam and down-sand, whole-loam, and up-clay and down-sand. All the experimental plots generated small amounts of overland runoff and had a high surface infiltration rate, ranging from 41 to 48 mm·h⁻¹, and interflow and deep percolation were the dominant hydrological processes. The interflow was classified into interflow in soil layers A and C according to soil genetic layers. For interflow in soil layer A, matrix flow was generated in the whole-sand, up-loam and down-sand, and up-clay and down-sand soil types, but preferential flow dominated in the whole-loam soil type. For interflow in soil layer C, preferential flow dominated in the whole-loam, up-clay and down-sand, and up-loam and down-sand soil types. The soils were shallow yet continuously distributed along the dolomite slope. The difference in hydrological characteristics among soil types with different architectures mainly existed in the runoff generation process at each underground interface. This proved that a 3-D perspective is needed to study soil hydrological functions on the dolomite slopes of Northwest Guangxi, and that a new approach paying more attention to underground hydrological processes should be explored to fully reveal the near-surface hydrological processes on karst slopes.
A robot control architecture supported on contraction theory
NASA Astrophysics Data System (ADS)
Silva, Jorge; Sequeira, João; Santos, Cristina
2017-01-01
This paper proposes fundamentals for the stability and success of a global system composed of a mobile robot, a real environment, and a navigation architecture with time constraints. Contraction theory is a framework that provides tools and properties to prove the stability and convergence of the global system to a unique fixed point that identifies mission success. A stability indicator based on the combination property of contraction is developed to identify mission success as a stability measure. The architecture is fully designed through C1 nonlinear dynamical systems and feedthrough maps, which makes it amenable to contraction analysis. Experiments in a realistic and uncontrolled environment are realised to verify whether inherent perturbations of the sensory information and of the environment affect the stability and success of the global system.
Architectural Design for European SST System
NASA Astrophysics Data System (ADS)
Utzmann, Jens; Wagner, Axel; Blanchet, Guillaume; Assemat, Francois; Vial, Sophie; Dehecq, Bernard; Fernandez Sanchez, Jaime; Garcia Espinosa, Jose Ramon; Agueda Mate, Alberto; Bartsch, Guido; Schildknecht, Thomas; Lindman, Niklas; Fletcher, Emmet; Martin, Luis; Moulin, Serge
2013-08-01
The paper presents the results of a detailed design, evaluation and trade-off of a potential European Space Surveillance and Tracking (SST) system architecture. The results have been produced in study phase 1 of the on-going "CO-II SSA Architectural Design" project performed by the Astrium consortium as part of ESA's Space Situational Awareness Programme and are the baseline for further detailing and consolidation in study phase 2. The sensor network is comprised of both ground- and space-based assets and aims at being fully compliant with the ESA SST System Requirements. The proposed ground sensors include a surveillance radar, an optical surveillance system and a tracking network (radar and optical). A space-based telescope system provides significant performance and robustness for the surveillance and tracking of beyond-LEO target objects.
Geospatial Applications on Different Parallel and Distributed Systems in enviroGRIDS Project
NASA Astrophysics Data System (ADS)
Rodila, D.; Bacu, V.; Gorgan, D.
2012-04-01
The execution of Earth Science applications and services on parallel and distributed systems has become a necessity especially due to the large amounts of Geospatial data these applications require and the large geographical areas they cover. The parallelization of these applications comes to solve important performance issues and can spread from task parallelism to data parallelism as well. Parallel and distributed architectures such as Grid, Cloud, Multicore, etc. seem to offer the necessary functionalities to solve important problems in the Earth Science domain: storing, distribution, management, processing and security of Geospatial data, execution of complex processing through task and data parallelism, etc. A main goal of the FP7-funded project enviroGRIDS (Black Sea Catchment Observation and Assessment System supporting Sustainable Development) [1] is the development of a Spatial Data Infrastructure targeting this catchment region, as well as the development of standardized and specialized tools for storing, analyzing, processing and visualizing the Geospatial data concerning this area. For achieving these objectives, enviroGRIDS deals with the execution of different Earth Science applications, such as hydrological models, Geospatial Web services standardized by the Open Geospatial Consortium (OGC) and others, on parallel and distributed architectures to maximize the obtained performance. This presentation analyzes the integration and execution of Geospatial applications on different parallel and distributed architectures and the possibility of choosing among these architectures based on application characteristics and user requirements through a specialized component. Versions of the proposed platform have been used in the enviroGRIDS project on different use cases such as: the execution of Geospatial Web services both on Web and Grid infrastructures [2] and the execution of SWAT hydrological models both on Grid and Multicore architectures [3]. The current focus is to integrate in the proposed platform the Cloud infrastructure, which is still a paradigm with critical problems to be solved despite the great efforts and investments. Cloud computing comes as a new way of delivering resources while using a large set of old as well as new technologies and tools for providing the necessary functionalities. The main challenges in Cloud computing, most of them identified also in the Open Cloud Manifesto 2009, address resource management and monitoring, data and application interoperability and portability, security, scalability, software licensing, etc. We propose a platform able to execute different Geospatial applications on different parallel and distributed architectures such as Grid, Cloud, Multicore, etc. with the possibility of choosing among these architectures based on application characteristics and complexity, user requirements, necessary performance, cost support, etc. The execution redirection on a selected architecture is realized through a specialized component and has the purpose of offering a flexible way of achieving the best performance considering the existing restrictions.
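The redirection component described above can be pictured as a small dispatcher that maps coarse application characteristics to an execution architecture. The sketch below is a toy Python version with invented profile fields and thresholds, not the project's actual selection logic.

from dataclasses import dataclass

@dataclass
class AppProfile:
    data_gb: float        # volume of Geospatial data the run touches
    task_parallel: bool   # can the work be split into independent batch tasks?
    interactive: bool     # does a user wait on the result?

def select_backend(app: AppProfile) -> str:
    # Toy redirection rule; all thresholds are illustrative assumptions.
    if app.interactive and app.data_gb < 1:
        return "multicore"   # low-latency local execution
    if app.task_parallel and app.data_gb > 100:
        return "grid"        # batch-oriented, large distributed datasets
    return "cloud"           # elastic middle ground

print(select_backend(AppProfile(data_gb=500, task_parallel=True, interactive=False)))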
SME2EM: Smart mobile end-to-end monitoring architecture for life-long diseases.
Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani
2016-01-01
Monitoring life-long diseases requires continuous measurements and recording of physical vital signs. Most of these diseases are manifested through unexpected and non-uniform occurrences and behaviors. It is impractical to keep patients in hospitals, health-care institutions, or even at home for long periods of time. Monitoring solutions based on smartphones combined with mobile sensors and wireless communication technologies are a potential candidate to support complete mobility-freedom, not only for patients, but also for physicians. However, existing monitoring architectures based on smartphones and modern communication technologies are not suitable to address some challenging issues, such as intensive and big data, resource constraints, data integration, and context awareness in an integrated framework. This manuscript provides a novel mobile-based end-to-end architecture for live monitoring and visualization of life-long diseases. The proposed architecture provides smartness features to cope with continuous monitoring, data explosion, dynamic adaptation, unlimited mobility, and constrained device resources. The integration of the architecture's components provides information about diseases' recurrences as soon as they occur to expedite taking necessary actions, and thus prevent severe consequences. Our architecture system is formally model-checked to automatically verify its correctness against designers' desirable properties at design time. Its components are fully implemented as Web services with respect to the SOA architecture to be easy to deploy and integrate, and are supported by Cloud infrastructure and services to allow high scalability and availability of processes and of data being stored and exchanged. The architecture's applicability is evaluated through concrete experimental scenarios on monitoring and visualizing states of epileptic diseases. The obtained theoretical and experimental results are very promising and efficiently satisfy the proposed architecture's objectives, including resource awareness, smart data integration and visualization, cost reduction, and performance guarantee. Copyright © 2015 Elsevier Ltd. All rights reserved.
Space and Ground Trades for Human Exploration and Wearable Computing
NASA Technical Reports Server (NTRS)
Lupisella, Mark; Donohue, John; Mandl, Dan; Ly, Vuong; Graves, Corey; Heimerdinger, Dan; Studor, George; Saiz, John; DeLaune, Paul; Clancey, William
2006-01-01
Human exploration of the Moon and Mars will present unique trade study challenges as ground system elements shift to planetary bodies and perhaps eventually to the bodies of human explorers in the form of wearable computing technologies. This presentation will highlight some of the key space and ground trade issues that will face the Exploration Initiative as NASA begins designing systems for the sustained human exploration of the Moon and Mars, with an emphasis on wearable computing. We will present some preliminary test results and scenarios that demonstrate how wearable computing might affect the trade space noted below. We will first present some background on wearable computing and its utility to NASA's Exploration Initiative. Next, we will discuss three broad architectural themes, some key ground and space trade issues within those themes, and how they relate to wearable computing. Lastly, we will present some preliminary test results and suggest guidance for proceeding in the assessment and creation of a value-added role for wearable computing in the Exploration Initiative. The three broad ground-space architectural trade themes we will discuss are: 1. Functional Shift and Distribution: To what extent, if any, should traditional ground system functionality be shifted to, and distributed among, the Earth, Moon/Mars, and the human explorer? 2. Situational Awareness and Autonomy: How much situational awareness (e.g., environmental conditions, biometrics, etc.) and autonomy is required and desired, and where should these capabilities reside? 3. Functional Redundancy: What functions (e.g., command, control, analysis) should exist simultaneously on Earth, the Moon/Mars, and the human explorer? These three themes can serve as the axes of a three-dimensional trade space, within which architectural solutions reside. We will show how wearable computers can fit into this trade space and what the possible implications could be for the rest of the ground and space architecture(s). We intend this to be an example of explorer-centric thinking in a fully integrated explorer paradigm, where "integrated explorer" refers to a human explorer having instant access to all relevant data, knowledge of the environment, science models, health and safety-related events, and other tools and information via wearable computing technologies. The trade study approach will include involvement from the relevant stakeholders (Constellation Systems, CCCI, EVA Project Office, Astronaut Office, Mission Operations, Space Life Sciences, etc.) to develop operations concepts (and/or operations scenarios) from which a basic high-level set of requirements could be extracted. This set of requirements could serve as a foundation (along with stakeholder buy-in) that would help define the trade space and assist in identifying candidate technologies for further study and evolution to higher technology readiness levels.
Improving Grid Resilience through Informed Decision-making (IGRID)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burnham, Laurie; Stamber, Kevin L.; Jeffers, Robert Fredric
The transformation of the distribution grid from a centralized to decentralized architecture, with bi-directional power and data flows, is made possible by a surge in network intelligence and grid automation. While changes are largely beneficial, the interface between grid operator and automated technologies is not well understood, nor are the benefits and risks of automation. Quantifying and understanding the latter is an important facet of grid resilience that needs to be fully investigated. The work described in this document represents the first empirical study aimed at identifying and mitigating the vulnerabilities posed by automation for a grid that for the foreseeable future will remain a human-in-the-loop critical infrastructure. Our scenario-based methodology enabled us to conduct a series of experimental studies to identify causal relationships between grid-operator performance and automated technologies and to collect measurements of human performance as a function of automation. Our findings, though preliminary, suggest there are predictive patterns in the interplay between human operators and automation, patterns that can inform the rollout of distribution automation and the hiring and training of operators, and contribute in multiple and significant ways to the field of grid resilience.
Enamel microarchitecture of a tribosphenic molar.
Spoutil, Frantisek; Vlcek, Vojtĕch; Horácek, Ivan
2010-10-01
The tribosphenic molar is a dental apomorphy of mammals and the molar type from which all derived types originated. Its enamel coat is expected to be ancestral: a thin, evenly distributed layer of radial prismatic enamel. In the bat Myotis myotis, we reinvestigated the 3D architecture of the dental enamel using serial sectioning combined with scanning electron microscopy analyses, biometrics of enamel prisms and crystallites, and X-ray diffraction. We found distinct heterotopies in enamel thickness (thick enamel on the convex sides of the crests, thin on the concave ones), in the angularity of enamel prisms, and in the distribution of particular enamel types (prismatic, interprismatic, aprismatic), and demonstrated structural relations of these heterotopies to the cusp and crest organization of the tribosphenic molar. X-ray diffraction demonstrated that the crystallites composing the enamel are actually aggregates of much smaller primary crystallites. The differences among particular enamel types in the degree of crystallite aggregation, and the variation in structural microstrain of the primary crystallites (depending upon the duration and the mechanical context of mineralization), represent factors, not yet fully understood, that may contribute to the complexity of enamel microarchitecture in a significant way. © 2010 Wiley-Liss, Inc.
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
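The VM-launching scripts might resemble the following openstacksdk sketch, which boots worker instances that would join the dynamic Torque cluster on startup. The cloud name, IDs, and naming scheme are hypothetical, and the site's real custom scripts and Puppet configuration are not reproduced here.

import openstack

conn = openstack.connect(cloud="nectar")  # hypothetical clouds.yaml entry

def launch_workers(n, image_id, flavor_id, network_id):
    # Boot n identical worker VMs from the base Scientific Linux image; on first
    # boot each would be configured by Puppet and registered with the dynamic
    # Torque cluster (that integration step is not shown here).
    servers = []
    for i in range(n):
        server = conn.compute.create_server(
            name=f"t3-worker-{i:03d}",       # hypothetical naming scheme
            image_id=image_id,
            flavor_id=flavor_id,
            networks=[{"uuid": network_id}],
        )
        servers.append(conn.compute.wait_for_server(server))
    return servers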
DIRAC3 - the new generation of the LHCb grid software
NASA Astrophysics Data System (ADS)
Tsaregorodtsev, A.; Brook, N.; Casajus Ramo, A.; Charpentier, Ph; Closier, J.; Cowan, G.; Graciani Diaz, R.; Lanciotti, E.; Mathe, Z.; Nandakumar, R.; Paterson, S.; Romanovsky, V.; Santinelli, R.; Sapunov, M.; Smith, A. C.; Seco Miguelez, M.; Zhelezov, A.
2010-04-01
DIRAC, the LHCb community Grid solution, was considerably reengineered in order to meet all the requirements for processing the data coming from the LHCb experiment. It covers all tasks, from raw data transportation from the experiment area to grid storage, through data processing, up to final user analysis. The reengineered DIRAC3 version of the system includes a fully grid-security-compliant framework for building service oriented distributed systems; a complete Pilot Job framework for creating efficient workload management systems; and several subsystems to manage high level operations like data production and distribution management. The user interfaces of the DIRAC3 system, providing rich command line and scripting tools, are complemented by a full-featured Web portal providing users with secure access to all the details of the system status and ongoing activities. We will present an overview of the DIRAC3 architecture, new innovative features and the achieved performance. Extending DIRAC3 to manage computing resources beyond the WLCG grid will be discussed. Experience with the use of DIRAC3 by user communities other than LHCb and in application domains other than High Energy Physics will be shown to demonstrate the general-purpose nature of the system.
Parallel peak pruning for scalable SMP contour tree computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, Hamish A.; Weber, Gunther H.; Sewell, Christopher M.
As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial, and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed-up in OpenMP and up to 50x speed-up in NVIDIA Thrust.
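For contrast with the parallel approach, the serial baseline is easy to state: sweep the vertices in decreasing value order and merge components with union-find, emitting a join-tree arc at every merge. The Python sketch below implements that baseline for a scalar field on an arbitrary graph, producing a condensed tree that records merge events only; the split tree is symmetric, and combining the two into the contour tree is the step the paper parallelizes.

def join_tree(values, edges):
    # Serial sweep-and-merge baseline: visit vertices from highest to lowest
    # value; union-find tracks connected components of the superlevel set, and
    # every merge event contributes one arc of the (condensed) join tree.
    n = len(values)
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    lowest = list(range(n))   # per component root: lowest vertex absorbed so far
    seen = [False] * n
    arcs = []
    for v in sorted(range(n), key=lambda i: values[i], reverse=True):
        seen[v] = True
        for u in adj[v]:
            if seen[u] and find(u) != find(v):
                arcs.append((lowest[find(u)], v))   # that component joins in at v
                parent[find(u)] = find(v)
                lowest[find(v)] = v
    return arcs

# Example: a path graph with two maxima merging at the saddle vertex 1.
print(join_tree([3, 1, 2, 0], [(0, 1), (1, 2), (2, 3)]))  # [(0, 1), (2, 1), (1, 3)]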
Resource Prospector: Evaluating the ISRU Potential of the Lunar Poles
NASA Astrophysics Data System (ADS)
Colaprete, A.; Elphic, R. C.; Andrews, D.; Bluethmann, W.; Quinn, J.; Chavers, D. G.
2017-12-01
Resource Prospector (RP) is a lunar volatiles prospecting mission being developed for potential flight in CY2021-2022. The mission includes a rover-borne payload that (1) can locate surface and near-subsurface volatiles, (2) excavate and analyze samples of the volatile-bearing regolith, and (3) demonstrate the form, extractability and usefulness of the materials. The primary mission goal for RP is to evaluate the In-Situ Resource Utilization (ISRU) potential of the lunar poles. While it is now understood that lunar water and other volatiles are much more widely distributed, and occur in more possible forms and concentrations, than previously believed, to fully understand how viable these volatiles are as a resource to support human exploration of the solar system, their distribution and form need to be understood at a "human" scale. That is, the "ore body" must be better understood at the scales at which it would be worked before it can be evaluated as a potential architectural element within any evolvable lunar or Mars campaign. This talk will provide an overview of the RP mission with an emphasis on mission goals and measurements, and will provide an update on its current status.
Advanced computer architecture specification for automated weld systems
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.
Marsh, James; Glencross, Mashhuda; Pettifer, Steve; Hubbold, Roger
2006-01-01
Network architectures for collaborative virtual reality have traditionally been dominated by client-server and peer-to-peer approaches, with peer-to-peer strategies typically being favored where minimizing latency is a priority, and client-server where consistency is key. With increasingly sophisticated behavior models and the demand for better support for haptics, we argue that neither approach provides sufficient support for these scenarios and, thus, a hybrid architecture is required. We discuss the relative performance of different distribution strategies in the face of real network conditions and illustrate the problems they face. Finally, we present an architecture that successfully meets many of these challenges and demonstrate its use in a distributed virtual prototyping application which supports simultaneous collaboration for assembly, maintenance, and training applications utilizing haptics.
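The hybrid idea can be reduced to a routing rule: latency-critical haptic updates travel peer-to-peer, while consistency-critical state changes are serialized through a server. The Python fragment below is a schematic illustration of that split, with invented message kinds and send interfaces rather than the paper's actual protocol.

def route(message, peers, server):
    # Hybrid distribution strategy: pick the path by message criticality.
    if message["kind"] == "haptic":      # low-latency, loss-tolerant updates
        for p in peers:
            p.send(message)              # direct peer-to-peer fan-out
    else:                                # e.g. assembly state changes
        server.send(message)             # authoritative ordering via the server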
Biasetti, Jacopo; Pustavoitau, Aliaksei; Spazzini, Pier Giorgio
2017-01-01
Mechanical circulatory support devices, such as total artificial hearts and left ventricular assist devices, rely on external energy sources for their continuous operation. Clinically approved power supplies rely on percutaneous cables connecting an external energy source to the implanted device with the associated risk of infections. One alternative, investigated in the 70s and 80s, employs a fully implanted nuclear power source. The heat generated by the nuclear decay can be converted into electricity to power circulatory support devices. Due to the low conversion efficiencies, substantial levels of waste heat are generated and must be dissipated to avoid tissue damage, heat stroke, and death. The present work computationally evaluates the ability of the blood flow in the descending aorta to remove the locally generated waste heat for subsequent full-body distribution and dissipation, with the specific aim of investigating methods for containment of local peak temperatures within physiologically acceptable limits. To this aim, coupled fluid–solid heat transfer computational models of the blood flow in the human aorta and different heat exchanger architectures are developed. Particle tracking is used to evaluate temperature histories of cells passing through the heat exchanger region. The use of the blood flow in the descending aorta as a heat sink proves to be a viable approach for the removal of waste heat loads. With the basic heat exchanger design, blood thermal boundary layer temperatures exceed 50°C, possibly damaging blood cells and proteins. Improved designs of the heat exchanger, with the addition of fins and heat guides, allow for drastically lower blood temperatures, possibly leading to a more biocompatible implant. The ability to maintain blood temperatures at biologically compatible levels will ultimately allow for the body-wise distribution, and subsequent dissipation, of heat loads with minimum effects on the human physiology. PMID:29094038
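A back-of-the-envelope energy balance shows why the descending aorta is an attractive heat sink: the bulk temperature rise is small even for tens of watts, so the design problem is the local boundary-layer peaks the abstract describes. The numbers below are illustrative assumptions, not values from the paper.

# Bulk temperature rise of aortic blood for a given waste-heat load.
Q = 35.0           # waste heat rejected into the blood [W] (assumed)
flow_lpm = 3.5     # descending-aorta blood flow [L/min] (assumed)
rho = 1060.0       # blood density [kg/m^3]
cp = 3617.0        # blood specific heat [J/(kg*K)]

mdot = rho * (flow_lpm / 1000.0) / 60.0   # mass flow rate [kg/s]
delta_T = Q / (mdot * cp)                 # bulk rise, from Q = mdot * cp * dT
print(f"bulk temperature rise: {delta_T:.2f} K")   # about 0.16 K with these inputs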
ModSAF Software Architecture Design and Overview Document
1993-12-20
Advanced Distributed Simulation Technology. ModSAF Software Architecture Design and Overview Document, Version 1.0, 20 December 1993 (AD-A282 740). Contract N61339-91-D-O00, Delivery Order 0021, ModSAF (CDRL A004).
Analysis of Employment Flow of Landscape Architecture Graduates in Agricultural Universities
ERIC Educational Resources Information Center
Yao, Xia; He, Linchun
2012-01-01
A statistical analysis of the employment flow of landscape architecture graduates was conducted on the employment data of graduates majoring in landscape architecture from 2008 to 2011. The employment flow was analyzed in terms of admission to graduate study, industry direction, and regional distribution, etc. Then, the features of talent flow and factors…
Geospace simulations on the Cell BE processor
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Raeder, J.; Larson, D.
2008-12-01
OpenGGCM (Open Geospace General Circulation Model) is an established numerical code that simulates the Earth's space environment. The most computing intensive part is the MHD (magnetohydrodynamics) solver that models the plasma surrounding Earth and its interaction with Earth's magnetic field and the solar wind flowing in from the sun. Like other global magnetosphere codes, OpenGGCM's realism is limited by computational constraints on grid resolution. We investigate porting of the MHD solver to the Cell BE architecture, a novel inhomogeneous multicore architecture capable of up to 230 GFlops per processor. Realizing this high performance on the Cell processor is a programming challenge, though. We implemented the MHD solver using a multi-level parallel approach: On the coarsest level, the problem is distributed to processors based upon the usual domain decomposition approach. Then, on each processor, the problem is divided into 3D columns, each of which is handled by the memory limited SPEs (synergistic processing elements) slice by slice. Finally, SIMD instructions are used to fully exploit the vector/SIMD FPUs in each SPE. Memory management needs to be handled explicitly by the code, using DMA to move data from main memory to the per-SPE local store and vice versa. We obtained excellent performance numbers, a speed-up of a factor of 25 compared to just using the main processor, while still keeping the numerical implementation details of the code maintainable.
Kou, W; Pandolfino, J E; Kahrilas, P J; Patankar, N A
2017-06-01
Based on a fully coupled computational model of esophageal transport, we analyzed how varied esophageal muscle fiber architecture and/or dual contraction waves (CWs) affect bolus transport. Specifically, we studied the luminal pressure profile in those cases to better understand possible origins of the peristaltic transition zone. Two groups of studies were conducted using a computational model. The first studied esophageal transport with circumferential-longitudinal fiber architecture, helical fiber architecture and various combinations of the two. In the second group, cases with dual CWs and varied muscle fiber architecture were simulated. Overall transport characteristics were examined and the space-time profiles of luminal pressure were plotted and compared. Helical muscle fiber architecture featured reduced circumferential wall stress, greater esophageal distensibility, and greater axial shortening. Non-uniform fiber architecture featured a peristaltic pressure trough between two high-pressure segments. The distal pressure segment showed greater amplitude than the proximal segment, consistent with experimental data. Dual CWs also featured a pressure trough between two high-pressure segments. However, the minimum pressure in the region of overlap was much lower, and the amplitudes of the two high-pressure segments were similar. The efficacy of esophageal transport is greatly affected by muscle fiber architecture. The peristaltic transition zone may be attributable to non-uniform architecture of muscle fibers along the length of the esophagus and/or dual CWs. The difference in amplitude between the proximal and distal pressure segments may be attributable to non-uniform muscle fiber architecture. © 2017 John Wiley & Sons Ltd.
Wireless sensor network for wide-area high-mobility applications
NASA Astrophysics Data System (ADS)
del Castillo, Ignacio; Esper-Chaín, Roberto; Tobajas, Félix; de Armas, Valentín.
2013-05-01
In recent years, IEEE 802.15.4-based Wireless Sensor Networks (WSN) have experienced significant growth, mainly motivated by the standard's features, such as small device size, low-power-consumption nodes, wireless communication links, and sensing and data-processing capabilities. In this paper, the development, implementation and deployment of a novel, fully compatible IEEE 802.15.4-based WSN architecture for applications operating over extended geographic regions with high node mobility support is described. In addition, a practical system implementation of the proposed WSN architecture is presented and described for experimental validation and characterization purposes.
Modified signed-digit arithmetic based on redundant bit representation.
Huang, H; Itoh, M; Yatagai, T
1994-09-10
Fully parallel modified signed-digit arithmetic operations are realized based on the proposed redundant bit representation of the digits. A new truth-table minimizing technique is presented based on redundant-bit-representation coding. It is shown that only 34 minterms are enough for implementing one-step modified signed-digit addition and subtraction with this new representation. Two optical implementation schemes, correlation and matrix multiplication, are described. Experimental demonstrations of the correlation architecture are presented. Both architectures use fixed minterm masks for arbitrary-length operands, taking full advantage of the parallelism of the modified signed-digit number system and optics.
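The appeal of the modified signed-digit (MSD) system is that addition is carry-free: with digits in {-1, 0, 1}, each output digit depends only on a bounded neighborhood of input digits, so all positions can be computed in parallel, which is exactly what the fixed minterm masks exploit optically. The Python sketch below is a software analogue using a classic two-step transfer/interim-sum rule rather than the paper's 34-minterm one-step encoding; digit lists are least-significant-first, and the randomized check verifies the adder against ordinary integer addition.

import random

def msd_to_int(d):
    # Digits in {-1, 0, 1}, least significant first.
    return sum(x * (1 << i) for i, x in enumerate(d))

def msd_add(x, y):
    # Carry-free MSD addition: split each digitwise sum s into a transfer t and
    # interim sum w with s = 2*t + w, choosing the split from the next lower
    # digit pair so the final digitwise add w + t never overflows {-1, 0, 1}.
    n = max(len(x), len(y)) + 1
    x = x + [0] * (n - len(x))
    y = y + [0] * (n - len(y))
    s = [x[i] + y[i] for i in range(n)]   # digitwise sums, each in -2..2
    t = [0] * (n + 1)                     # transfers, shifted up one place
    w = [0] * n                           # interim sums
    for i in range(n):
        lower = s[i - 1] if i > 0 else 0
        if s[i] == 2:    t[i + 1], w[i] = 1, 0
        elif s[i] == -2: t[i + 1], w[i] = -1, 0
        elif s[i] == 1:  t[i + 1], w[i] = (1, -1) if lower >= 1 else (0, 1)
        elif s[i] == -1: t[i + 1], w[i] = (-1, 1) if lower <= -1 else (0, -1)
    return [w[i] + t[i] for i in range(n)] + [t[n]]   # final add, no carries

# Randomized check against ordinary integer addition.
for _ in range(1000):
    a = [random.choice([-1, 0, 1]) for _ in range(8)]
    b = [random.choice([-1, 0, 1]) for _ in range(8)]
    assert msd_to_int(msd_add(a, b)) == msd_to_int(a) + msd_to_int(b)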
Observatory software for the Maunakea Spectroscopic Explorer
NASA Astrophysics Data System (ADS)
Vermeulen, Tom; Isani, Sidik; Withington, Kanoa; Ho, Kevin; Szeto, Kei; Murowinski, Rick
2016-07-01
The Canada-France-Hawaii Telescope is currently in the conceptual design phase to redevelop its facility into the new Maunakea Spectroscopic Explorer (MSE). MSE is designed to be the largest non-ELT optical/NIR astronomical telescope, and will be a fully dedicated facility for multi-object spectroscopy over a broad range of spectral resolutions. This paper outlines the software and control architecture envisioned for the new facility. The architecture will be designed around much of the existing software infrastructure currently used at CFHT as well as the latest proven open-source software. CFHT plans to minimize risk and development time by leveraging existing technology.
A Distributed Prognostic Health Management Architecture
NASA Technical Reports Server (NTRS)
Bhaskar, Saha; Saha, Sankalita; Goebel, Kai
2009-01-01
This paper introduces a generic distributed prognostic health management (PHM) architecture with specific application to the electrical power systems domain. Current state-of-the-art PHM systems are mostly centralized in nature, where all the processing is reliant on a single processor. This can lead to loss of functionality in case of a crash of the central processor or monitor. Furthermore, with increases in the volume of sensor data as well as the complexity of algorithms, traditional centralized systems become unsuitable for successful deployment, and efficient distributed architectures are required. A distributed architecture, though, is not effective unless there is an algorithmic framework to take advantage of its unique abilities. The health management paradigm envisaged here incorporates a heterogeneous set of system components monitored by a varied suite of sensors and a particle filtering (PF) framework that has the power and the flexibility to adapt to the different diagnostic and prognostic needs. Both the diagnostic and prognostic tasks are formulated as a particle filtering problem in order to explicitly represent and manage uncertainties; however, typically the complexity of the prognostic routine is higher than the computational power of one computational element (CE). Individual CEs run diagnostic routines until the system variable being monitored crosses beyond a nominal threshold, upon which they coordinate with other networked CEs to run the prognostic routine in a distributed fashion. Implementation results from a network of distributed embedded devices monitoring a prototypical aircraft electrical power system are presented, where the CEs are Sun Microsystems Small Programmable Object Technology (SPOT) devices.
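The diagnostic-to-prognostic hand-off can be sketched with a toy particle filter: particles track a degrading health parameter during diagnosis, and once the estimate crosses the nominal threshold they are propagated forward to give a remaining-useful-life distribution. All model parameters below are invented for illustration; the paper's electrical power system models and the networked coordination between CEs are not reproduced.

import numpy as np

rng = np.random.default_rng(0)

def prognostic_pf(z_obs, n_particles=500, threshold=2.0, fail_level=3.0):
    # Diagnostic stage: track the health state from noisy observations z_obs.
    x = rng.normal(1.0, 0.05, n_particles)        # initial health states
    rate = rng.normal(0.02, 0.005, n_particles)   # per-step degradation rates
    for z in z_obs:
        x = x + rate + rng.normal(0, 0.01, n_particles)   # propagate particles
        w = np.exp(-0.5 * ((z - x) / 0.05) ** 2)          # Gaussian likelihood
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)   # resample
        x, rate = x[idx], rate[idx]
    if x.mean() < threshold:
        return None   # still nominal: keep running diagnostics
    # Prognostic stage: roll particles forward until each crosses the fail level.
    steps = np.maximum(np.ceil((fail_level - x) / np.maximum(rate, 1e-6)), 0.0)
    return np.percentile(steps, [10, 50, 90])   # remaining-useful-life spread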
JPL Facilities and Software for Collaborative Design: 1994 - Present
NASA Technical Reports Server (NTRS)
DeFlorio, Paul A.
2004-01-01
The viewgraph presentation provides an overview of the history of the JPL Project Design Center (PDC) and, since 2000, the Center for Space Mission Architecture and Design (CSMAD). The discussion includes PDC objectives and scope; mission design metrics; distributed design; a software architecture timeline; facility design principles; optimized design for group work; CSMAD plan view, facility design, and infrastructure; and distributed collaboration tools.
Emulation of Industrial Control Field Device Protocols
2013-03-01
platforms such as the Arduino (based on the Atmel AVR architecture) or popular PIC-architecture-based devices, which are programmed for specific functions... Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio. Distribution Statement A: approved for public release; distribution unlimited. ...confidence intervals for the mean. Based on these results, extensive knowledge of the specific implementations of the protocols or timing profiles of the
Application of Advanced Multi-Core Processor Technologies to Oceanographic Research
2013-09-30
(Table residue: candidate processor families include the STM32 and NXP LPC series, Microchip PIC32/dsPIC, ARM Cortex parts such as TI OMAP, TI Sitara and Broadcom BCM2835, and FPGAs; power classes range from under 500 mW to 5 W.) Distribution Statement A: approved for public release; distribution is unlimited. Application of Advanced Multi-Core Processor Technologies... state-of-the-art information processing architectures. OBJECTIVES: Next-generation processor architectures (multi-core, multi-threaded) hold the
Laboratory for Computer Science Progress Report 19, 1 July 1981-30 June 1982.
1984-05-01
Contents include: Multiprocessor Architectures; TRIX Operating System; VLSI Tools; Systematic Program Development (Introduction; Specification)... exploring distributed operating systems and the architecture of powerful single-user computers that are interconnected by communication networks. ...In particular, we expect to experiment with languages, operating systems, and applications that establish the feasibility of distributed
Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture (Postprint)
2007-09-18
Keywords: turbine engine control, engine health management, FADEC, Universal FADEC, distributed controls, UF, UF Platform, common FADEC, generic FADEC, modular FADEC, adaptive control. ...Eventually the Full Authority Digital Electronic Control (FADEC) became the norm. Presently, this control system architecture accounts for 15 to 20% of
Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture
NASA Astrophysics Data System (ADS)
Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel
2003-11-01
Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.
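The scheduler/worker pattern can be miniaturized to a priority task queue: acquisition tasks feed analysis tasks, which feed indexing tasks, and several workers drain the same queue concurrently (data parallelism across documents, pipeline parallelism across stages). The Python sketch below is an in-process stand-in for the web-service middleware, with invented stage names and priorities.

import queue
import threading

PRIORITY = {"index": 0, "analyze": 1, "acquire": 2}   # assumed stage ordering

tasks = queue.PriorityQueue()

def worker(name):
    # Application-specific module; each instance could run on a different node.
    while True:
        prio, stage, doc = tasks.get()
        print(f"{name}: {stage} -> {doc}")
        if stage == "acquire":                                 # each stage feeds
            tasks.put((PRIORITY["analyze"], "analyze", doc))   # the next one
        elif stage == "analyze":
            tasks.put((PRIORITY["index"], "index", doc))
        tasks.task_done()

for i in range(4):   # several workers drain the shared queue
    threading.Thread(target=worker, args=(f"w{i}",), daemon=True).start()

for doc in ["clip-001.mp3", "clip-002.mp3"]:
    tasks.put((PRIORITY["acquire"], "acquire", doc))
tasks.join()   # wait until every stage of every document has completed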
Software architecture of INO340 telescope control system
NASA Astrophysics Data System (ADS)
Ravanmehr, Reza; Khosroshahi, Habib
2016-08-01
The software architecture plays an important role in the distributed control systems of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model"; for this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.
NASA Astrophysics Data System (ADS)
Belapurkar, Rohit K.
Future aircraft engine control systems will be based on a distributed architecture in which the sensors and actuators are connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. A distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improvement in performance, and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on the ARINC 825 communication protocol. Stability conditions and a control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance under the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault-tolerant control design methodology is proposed to benefit from the availability of additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault-tolerant control design can help to reduce the performance degradation in the presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single-input and multiple-input multiple-output control design techniques.
Functional and Database Architecture Design.
1983-09-26
Functional and Database Architecture Design. Alpha/Omega Group, Inc., Harvard, MA, 26 September 1983. Report A001, Contract N00014-83-C-0525, submitted to the Office of Naval Research, Department of the Navy, 800 N. Quincy Street.
Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander
2008-04-01
We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually any computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit and the open source JOELib package. As an application, for this set of compounds, the agreement of log P and TPSA values between the packages was assessed. Outliers were found to be mostly non-druglike compounds, and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, which is also easily adaptable to user demands due to its "plug-in architecture". The complete source code as well as calculated properties, along with links to PubChem resources, are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.
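A single ChemStar-style work unit might look like the following sketch, which computes log P and TPSA for a slice of a compound list. RDKit is used here as an open source stand-in for the Marvin and JOELib toolkits, and the RMI transport (the server handing chunks to clients and collecting results) is omitted.

from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors

def compute_chunk(smiles_chunk):
    # Work unit a client would pull from the server: compute properties for
    # its slice of the compound set and return the results for aggregation.
    results = []
    for smi in smiles_chunk:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:
            continue   # skip unparsable entries
        results.append((smi, Crippen.MolLogP(mol), Descriptors.TPSA(mol)))
    return results

print(compute_chunk(["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O"]))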
An Architectural Concept for ISS Contingency Resupply
NASA Astrophysics Data System (ADS)
Gurevich, G.; Chinnery, A. E.
2002-01-01
The International Space Station (ISS) is a unique Earth orbiting laboratory drawing upon the expertise of 16 nations: the US, Canada, Japan, Russia, 11 member nations of the European Space Agency, and Brazil to promote advances in science and technology. This capability is extremely valuable, but comes at very high cost. Under contract to the Marshall Space Flight Center, Microcosm identified an architectural concept to reduce the overhead cost burden associated with ISS operations. The concept focuses on the development of a responsive contingency resupply capability. This concept makes use of non-traditional station resources, minimizes the impact to the station infrastructure, supports an evolution of operations from supervised to full autonomy, and is scalable from a small cargo delivery to a larger capability. The concept addresses the three mission phases -- launch, phasing and transfer to the Station orbit, and proximity operations. The elements of the architecture include the following: Both the launch vehicle design and launch operations are simple, robust, and fully support a launch-on-demand environment. The launch vehicle third stage is equipped to maneuver the cargo canister to station altitude, where it awaits rendezvous by the small, yet fully capable multi-mission Orbit Transfer Vehicle (OTV). The OTV performs the close-in orbit transfer and proximity operations. Due to the criticality of these operations and the safety required, the OTV is two-failure tolerant. The concept allows for the cargo canisters to be unloaded and reloaded with waste at the convenience of the ISS crew. Safe return of cargo is also addressed. This paper describes the concept in more detail. The various elements of the architecture are defined, the phases of re-supply operations are explained, and concepts for improving the viability of the service are suggested. Perceived obstacles to implementing the service are discussed. System costs are discussed as well as alternative uses of the architecture to enhance commercial viability.
Performance study of a data flow architecture
NASA Technical Reports Server (NTRS)
Adams, George
1985-01-01
Teams of scientists studied data flow concepts, static data flow machine architecture, and the VAL language. Each team mapped its application onto the machine and coded it in VAL. The principal findings of the study were: (1) Five of the seven applications used the full power of the target machine. The galactic simulation and multigrid fluid flow teams found that a significantly smaller version of the machine (16 processing elements) would suffice. (2) A number of machine design parameters including processing element (PE) function unit numbers, array memory size and bandwidth, and routing network capability were found to be crucial for optimal machine performance. (3) The study participants readily acquired VAL programming skills. (4) Participants learned that application-based performance evaluation is a sound method of evaluating new computer architectures, even those that are not fully specified. During the course of the study, participants developed models for using computers to solve numerical problems and for evaluating new architectures. These models form the bases for future evaluation studies.
Design of Distributed Engine Control Systems with Uncertain Delay.
Liu, Xiaofeng; Li, Yanxi; Sun, Xu
2016-01-01
Future gas turbine engine control systems will be based on distributed architecture, in which the sensors and actuators will be connected to the controllers via a communication network. The performance of the distributed engine control (DEC) is dependent on the network performance. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine distributed controllers are designed based on the NCCS framework with H∞ output feedback under network-induced time delays and uncertain disturbances. The sufficient conditions for robust stability are derived via the Lyapunov stability theory and the linear matrix inequality approach. Both numerical and hardware-in-the-loop simulations illustrate the effectiveness of the presented method. PMID:27669005
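The robust-stability analysis cited above (Lyapunov theory plus linear matrix inequalities) can be illustrated on a much smaller problem. The sketch below checks stability of a discrete-time system with a one-step network-induced delay by testing a Lyapunov LMI on the augmented state; the matrices and the cvxpy formulation are illustrative assumptions, not the paper's turboshaft model.

```python
# LMI-based stability check for x[k+1] = A x[k] + Ad x[k-1] (one-step
# network delay), via a Lyapunov LMI on the augmented state.
import numpy as np
import cvxpy as cp

A = np.array([[0.8, 0.1],
              [0.0, 0.7]])      # illustrative nominal dynamics
Ad = np.array([[0.05, 0.00],
               [0.02, 0.05]])   # illustrative delayed-state coupling

# Augmented state z[k] = [x[k]; x[k-1]]  =>  z[k+1] = Aaug z[k]
n = A.shape[0]
Aaug = np.block([[A, Ad],
                 [np.eye(n), np.zeros((n, n))]])

# Feasibility of P > 0 with Aaug' P Aaug - P < 0 certifies stability.
P = cp.Variable((2 * n, 2 * n), symmetric=True)
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(2 * n),
                   Aaug.T @ P @ Aaug - P << -eps * np.eye(2 * n)])
prob.solve()
print("LMI feasible (delayed system stable):", prob.status == cp.OPTIMAL)
```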
Two-dimensional optical architectures for the receive mode of phased-array antennas.
Pastur, L; Tonda-Goldstein, S; Dolfi, D; Huignard, J P; Merlet, T; Maas, O; Chazelas, J
1999-05-10
We propose and experimentally demonstrate two optical architectures that process the receive mode of a p x p element phased-array antenna. The architectures are based on free-space propagation and switching of the channelized optical carriers of microwave signals. With the first architecture a direct transposition of the received signals in the optical domain is assumed. The second architecture is based on the optical generation and distribution of a microwave local oscillator matched in frequency and direction. Preliminary experimental results at microwave frequencies of approximately 3 GHz are presented.
Applications of an architecture design and assessment system (ADAS)
NASA Technical Reports Server (NTRS)
Gray, F. Gail; Debrunner, Linda S.; White, Tennis S.
1988-01-01
A new Architecture Design and Assessment System (ADAS) tool package is introduced, and a range of possible applications is illustrated. ADAS was used to evaluate the performance of an advanced fault-tolerant computer architecture in a modern flight control application. Bottlenecks were identified and possible solutions suggested. The tool was also used to inject faults into the architecture and evaluate the synchronization algorithm, and improvements are suggested. Finally, ADAS was used as a front end research tool to aid in the design of reconfiguration algorithms in a distributed array architecture.
USDA-ARS?s Scientific Manuscript database
Given the rapid advances in genomic technologies, phenotyping has become the bottleneck for revealing gene-trait relationships. Therefore, developing a means to rapidly and accurately phenotype thousands of genotypes can allow us to more fully utilize the genomic data that is currently available. A ...
Architecture and Assembly of the Bacillus subtilis Spore Coat
2014-09-26
…with chromosomal DNA was as described [32]. Table 1: B. subtilis strains used in this study (strain, genotype, phenotype; e.g., PS832, wild type). Comparisons of the morphology of fully hydrated and air-dried spores demonstrate that surface ridges on dehydrated spores mostly disappear or decrease in size.
Multi-Agent Architecture with Support to Quality of Service and Quality of Control
NASA Astrophysics Data System (ADS)
Poza-Luján, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, Jose-Enrique
Multi Agent Systems (MAS) are one of the most suitable frameworks for the implementation of intelligent distributed control systems. Agents provide the flexibility needed to support the heterogeneity inherent in cyber-physical systems. Quality of Service (QoS) and Quality of Control (QoC) parameters are commonly utilized to evaluate the efficiency of the communications and the control loop. Agents can use the quality measures to make a wide range of decisions, such as suitable placement on a control node or changing the workload to save energy. This article describes the architecture of a multi agent system that supports QoS and QoC parameters to optimize the system. The architecture uses a Publish-Subscribe model, based on the Data Distribution Service (DDS), to send the control messages. Due to the nature of the Publish-Subscribe model, the architecture is suitable for implementing event-based control (EBC) systems. The architecture has been called FSACtrl.
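The event-driven dispatch that the Publish-Subscribe model provides can be illustrated with a minimal topic-based bus. This sketch shows the pattern only; FSACtrl's actual DDS-based implementation is not given in the abstract.

```python
# Minimal topic-based publish-subscribe bus (pattern illustration only).
from collections import defaultdict
from typing import Any, Callable

class Bus:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, sample: Any) -> None:
        # Event-based control: subscribers react only when data arrives.
        for callback in self._subs[topic]:
            callback(sample)

bus = Bus()
bus.subscribe("sensor/temperature", lambda s: print("control reacts to", s))
bus.publish("sensor/temperature", {"value": 21.5, "deadline_ms": 100})
```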
Lunar Outpost Life Support Architecture Study Based on a High-Mobility Exploration Scenario
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2010-01-01
This paper presents results of a life support architecture study based on a 2009 NASA lunar surface exploration scenario known as Scenario 12. The study focuses on the assembly complete outpost configuration and includes pressurized rovers as part of a distributed outpost architecture in both stand-alone and integrated configurations. A range of life support architectures are examined reflecting different levels of closure and distributed functionality. Monte Carlo simulations are used to assess the sensitivity of results to volatile high-impact mission variables, including the quantity of residual Lander oxygen and hydrogen propellants available for scavenging, the fraction of crew time away from the outpost on excursions, total extravehicular activity hours, and habitat leakage. Surpluses or deficits of water and oxygen are reported for each architecture, along with fixed and 10-year total equivalent system mass estimates relative to a reference case. System robustness is discussed in terms of the probability of no water or oxygen resupply as determined from the Monte Carlo simulations.
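The Monte Carlo sensitivity analysis described above can be sketched as repeated sampling of the mission variables followed by a mass balance. In the fragment below every range, rate, and constant is an invented placeholder, not a Scenario 12 value; only the structure of the computation is illustrative.

```python
# Monte Carlo sketch of an oxygen balance (ALL parameter ranges invented;
# they are NOT the Scenario 12 values used in the study).
import random

def oxygen_surplus_kg() -> float:
    scavenged_o2 = random.uniform(0.0, 500.0)   # residual lander O2, kg
    eva_hours = random.uniform(500.0, 2000.0)   # total EVA hours
    leak_rate = random.uniform(0.05, 0.2)       # habitat leakage, kg/day
    produced = 800.0                            # assumed ECLSS production, kg
    consumed = 0.5 * eva_hours + 365.0 * leak_rate
    return scavenged_o2 + produced - consumed

runs = [oxygen_surplus_kg() for _ in range(100_000)]
deficit_prob = sum(r < 0 for r in runs) / len(runs)
print(f"mean surplus: {sum(runs)/len(runs):.1f} kg, "
      f"P(deficit): {deficit_prob:.3f}")
```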
Providing the full DDF link protection for bus-connected SIEPON based system architecture
NASA Astrophysics Data System (ADS)
Hwang, I.-Shyan; Pakpahan, Andrew Fernando; Liem, Andrew Tanny; Nikoukar, AliAkbar
2016-09-01
Currently a massive amount of traffic per second is delivered through EPON systems, one of the prominent access network technologies for delivering the next generation network. It is therefore vital to keep the EPON optical distribution network (ODN) working by providing the necessary protection mechanisms in the deployed devices; otherwise, failures will cause great losses for both network operators and business customers. In this paper, we propose a bus-connected architecture to protect and recover distribution drop fiber (DDF) link faults or transceiver failures at ONU(s) in a SIEPON system. The proposed architecture is cost-effective, delivers high fault tolerance in handling multiple DDF faults, and provides flexibility in choosing the backup ONU assignments. Simulation results show that the proposed architecture provides reliability and maintains quality of service (QoS) performance in terms of mean packet delay, system throughput, packet loss and EF jitter when DDF link failures occur.
An object-oriented software approach for a distributed human tracking motion system
NASA Astrophysics Data System (ADS)
Micucci, Daniela L.
2003-06-01
Tracking is a composite job involving the co-operation of autonomous activities which exploit a complex information model and rely on a distributed architecture. Both information and activities must be classified and related in several dimensions: abstraction levels (what is modelled and how information is processed); topology (where the modelled entities are); time (when entities exist); strategy (why something happens); responsibilities (who is in charge of processing the information). A proper Object-Oriented analysis and design approach leads to a modular architecture where information about conceptual entities is modelled at each abstraction level via classes and intra-level associations, whereas inter-level associations between classes model the abstraction process. Both information and computation are partitioned according to level-specific topological models. They are also placed in a temporal framework modelled by suitable abstractions. Domain-specific strategies control the execution of the computations. Computational components perform both intra-level processing and intra-level information conversion. The paper overviews the phases of the analysis and design process, presents major concepts at each abstraction level, and shows how the resulting design turns into a modular, flexible and adaptive architecture. Finally, the paper sketches how the conceptual architecture can be deployed into a concrete distributed architecture by relying on an experimental framework.
Wealth inequality: The physics basis
NASA Astrophysics Data System (ADS)
Bejan, A.; Errera, M. R.
2017-03-01
"Inequality" is a common observation about us, as members of society. In this article, we unify physics with economics by showing that the distribution of wealth is related proportionally to the movement of all the streams of a live society. The hierarchical distribution of wealth on the earth happens naturally. Hierarchy is unavoidable, with staying power, and difficult to efface. We illustrate this with two architectures, river basins and the movement of freight. The physical flow architecture that emerges is hierarchical on the surface of the earth and in everything that flows inside the live human bodies, the movement of humans and their belongings, and the engines that drive the movement. The nonuniform distribution of wealth becomes more accentuated as the economy becomes more developed, i.e., as its flow architecture becomes more complex for the purpose of covering smaller and smaller interstices of the overall (fixed) territory. It takes a relatively modest complexity for the nonuniformity in the distribution of wealth to be evident. This theory also predicts the Lorenz-type distribution of income inequality, which was adopted empirically for a century.
Integration of Cloud resources in the LHCb Distributed Computing
NASA Astrophysics Data System (ADS)
Úbeda García, Mario; Méndez Muñoz, Víctor; Stagni, Federico; Cabarrou, Baptiste; Rauschmayr, Nathalie; Charpentier, Philippe; Closier, Joel
2014-06-01
This contribution describes how Cloud resources have been integrated in the LHCb Distributed Computing. LHCb is using its specific Dirac extension (LHCbDirac) as an interware for its Distributed Computing. So far, it was seamlessly integrating Grid resources and computer clusters. The cloud extension of DIRAC (VMDIRAC) allows the integration of Cloud computing infrastructures. It is able to interact with multiple types of infrastructures in commercial and institutional clouds, supported by multiple interfaces (Amazon EC2, OpenNebula, OpenStack and CloudStack), and it instantiates, monitors and manages Virtual Machines running on this aggregation of Cloud resources. Moreover, specifications for institutional Cloud resources proposed by the Worldwide LHC Computing Grid (WLCG), mainly by the High Energy Physics Unix Information Exchange (HEPiX) group, have been taken into account. Several initiatives and computing resource providers in the eScience environment already deployed IaaS in production during 2013. With this in mind, the pros and cons of a cloud-based infrastructure have been studied in contrast with the current setup. As a result, this work addresses four different use cases which represent a major improvement on several levels of our infrastructure. We describe the solution implemented by LHCb for the contextualisation of the VMs based on the idea of the Cloud Site. We report on operational experience of using in production several institutional Cloud resources that are thus becoming an integral part of the LHCb Distributed Computing resources. Furthermore, we also describe the gradual migration of our Service Infrastructure towards a fully distributed architecture following the Service as a Service (SaaS) model.
Centralized and distributed control architectures under Foundation Fieldbus network.
Persechini, Maria Auxiliadora Muanis; Jota, Fábio Gonçalves
2013-01-01
This paper aims at discussing possible automation and control system architectures based on fieldbus networks in which the controllers can be implemented either in a centralized or in a distributed form. An experimental setup is used to demonstrate some of the addressed issues. The control and automation architecture is composed of a supervisory system, a programmable logic controller and various other devices connected to a Foundation Fieldbus H1 network. The procedures used in the network configuration, in the process modelling and in the design and implementation of controllers are described. The specificities of each one of the considered logical organizations are also discussed. Finally, experimental results are analysed using an algorithm for the assessment of control loops to compare the performances between the centralized and the distributed implementations.
Three-Dimensional Nanobiocomputing Architectures With Neuronal Hypercells
2007-06-01
…Neumann architectures, and CMOS fabrication. Novel solutions of massively parallel distributed computing and processing (pipelined due to systolic…) and processing platforms utilizing molecular hardware within an enabling organization and architecture. The design technology is based on utilizing a… Microsystems and Nanotechnologies investigated a novel 3D3 (Hardware Software Nanotechnology) technology to design super-high performance computing
A Flexible Hardware Test and Demonstration Platform for the Fractionated System Architecture YETE
NASA Astrophysics Data System (ADS)
Kempf, Florian; Haber, Roland; Tzschichholz, Tristan; Mikschl, Tobias; Hilgarth, Alexander; Montenegro, Sergio; Schilling, Klaus
2016-08-01
This paper introduces a hardware-in-the-loop test and demonstration platform for the YETE system architecture for fractionated spacecraft. It is designed for rapid prototyping and testing of distributed control approaches for the YETE architecture subject to varying network topologies and transmission channel properties between the individual YETE hardware nodes.
HYDRA : High-speed simulation architecture for precision spacecraft formation simulation
NASA Technical Reports Server (NTRS)
Martin, Bryan J.; Sohl, Garett.
2003-01-01
HYDRA - the Hierarchical Distributed Reconfigurable Architecture - is a scalable simulation architecture that provides flexibility and ease of use while taking advantage of modern computation and communication hardware. It also provides the ability to implement distributed - or workstation-based - simulations and high-fidelity real-time simulation from a common core. Originally designed to serve as a research platform for examining fundamental challenges in formation flying simulation for future space missions, it is also finding use in other missions and applications, all of which can take advantage of the underlying Object-Oriented structure to easily produce distributed simulations. Hydra automates the process of connecting disparate simulation components (Hydra Clients) through a client-server architecture that uses high-level descriptions of the data associated with each client to find and forge desirable connections (Hydra Services) at run time. Services communicate through the use of Connectors, which abstract messaging to provide single-interface access to any desired communication protocol, from shared-memory message passing to TCP/IP to ACE and CORBA. Hydra shares many features with the HLA, while providing more flexibility in connectivity services and behavior overriding.
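The Connector idea, one messaging interface over interchangeable transports, can be sketched as follows. The class names are hypothetical; only a shared-memory transport is implemented, with TCP/IP or CORBA connectors assumed to expose the same interface.

```python
# Sketch of the "Connector" idea: one messaging interface over
# interchangeable transports (class names here are hypothetical).
from abc import ABC, abstractmethod
import queue

class Connector(ABC):
    @abstractmethod
    def send(self, message: bytes) -> None: ...
    @abstractmethod
    def recv(self) -> bytes: ...

class SharedMemoryConnector(Connector):
    """In-process transport; a TCP/IP or CORBA connector would expose the
    same two methods, so services never see the underlying protocol."""
    def __init__(self) -> None:
        self._q: queue.Queue = queue.Queue()
    def send(self, message: bytes) -> None:
        self._q.put(message)
    def recv(self) -> bytes:
        return self._q.get()

link: Connector = SharedMemoryConnector()
link.send(b"state-vector")
print(link.recv())
```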
All-digital radar architecture
NASA Astrophysics Data System (ADS)
Molchanov, Pavlo A.
2014-10-01
An all-digital radar architecture requires eliminating the mechanical scan system. A phased antenna array is necessarily large because the array elements must be co-located with very precise dimensions, and it needs a high-accuracy phase processing system to aggregate and distribute T/R module data to/from the antenna elements. Even a phased array cannot provide a wide field of view. A new nature-inspired all-digital radar architecture is proposed. The fly's eye consists of multiple angularly spaced sensors, giving the fly simultaneously the wide-area visual coverage it needs to detect and avoid the threats around it. The fly-eye radar antenna array consists of multiple directional antennas loosely distributed along the perimeter of a ground vehicle or aircraft and coupled with receiving/transmitting front-end modules connected by a digital interface to a central processor. A non-steering antenna array allows creating an all-digital radar with an extremely flexible architecture. The fly-eye radar architecture provides wide possibilities for digital modulation and different waveform generation. Simultaneous correlation and integration of thousands of signals per second from each point of the surveillance area allows not only detecting low-level signals (low-profile targets), but also helps to recognize and classify signals (targets) by using diversity signals, polarization modulation and intelligent processing. The proposed all-digital radar architecture with a distributed directional antenna array can provide a 3D space vector to the jammer by verifying the direction of arrival of signal sources, and as a result jam/spoof protection not only for radar systems, but also for communication systems and any navigation constellation system, for both encrypted and unencrypted signals, and for an unlimited number of closely positioned jammers.
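Verifying direction of arrival across a loosely distributed antenna array reduces, in its simplest form, to estimating inter-sensor delays. The toy model below, with an invented waveform and a known five-sample true delay, shows the cross-correlation step; mapping a delay to an arrival angle would additionally require the array geometry.

```python
# Estimating the relative delay of one emitter at two distributed antennas
# by cross-correlation (toy model of direction-of-arrival verification;
# the waveform and 5-sample true delay are invented for illustration).
import numpy as np

rng = np.random.default_rng(1)
s = rng.standard_normal(1024)            # emitter waveform
true_delay = 5
x1 = s + 0.1 * rng.standard_normal(1024)             # antenna 1
x2 = np.roll(s, true_delay) + 0.1 * rng.standard_normal(1024)  # antenna 2

corr = np.correlate(x2, x1, mode="full")
lag = corr.argmax() - (len(x1) - 1)      # lag of the correlation peak
print("estimated delay (samples):", lag)  # ~5
```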
Distributed Information Search and Retrieval for Astronomical Resource Discovery and Data Mining
NASA Astrophysics Data System (ADS)
Murtagh, Fionn; Guillaume, Damien
Information search and retrieval has become by nature a distributed task. We look at tools and techniques which are of importance in this area. Current technological evolution can be summarized as the growing stability and cohesiveness of distributed architectures of searchable objects. The objects themselves are more often than not multimedia, including published articles or grey literature reports, yellow page services, image data, catalogs, presentation and online display materials, and ``operations'' information such as scheduling and publicly accessible proposal information. The evolution towards distributed architectures, protocols and formats, and the direction of our own work, are the focus of this paper.
NASA Astrophysics Data System (ADS)
Prasad, Guru; Jayaram, Sanjay; Ward, Jami; Gupta, Pankaj
2004-08-01
In this paper, Aximetric proposes a decentralized Command and Control (C2) architecture for a distributed control of a cluster of on-board health monitoring and software enabled control systems called SimBOX that will use some of the real-time infrastructure (RTI) functionality from the current military real-time simulation architecture. The uniqueness of the approach is to provide a "plug and play environment" for various system components that run at various data rates (Hz) and the ability to replicate or transfer C2 operations to various subsystems in a scalable manner. This is possible by providing a communication bus called "Distributed Shared Data Bus" and a distributed computing environment used to scale the control needs by providing a self-contained computing, data logging and control function module that can be rapidly reconfigured to perform different functions. This kind of software-enabled control is very much needed to meet the needs of future aerospace command and control functions.
NASA Astrophysics Data System (ADS)
Riecken, Mark; Lessmann, Kurt; Schillero, David
2016-05-01
The Data Distribution Service (DDS) was started by the Object Management Group (OMG) in 2004. Currently, DDS is one of the contenders to support the Internet of Things (IoT) and the Industrial IoT (IIoT). DDS has also been used as a distributed simulation architecture. Given the anticipated proliferation of IoT and IIoT devices, along with the explosive growth of sensor technology, can we expect this to have an impact on the broader community of distributed simulation? If it does, what is the impact and which distributed simulation domains will be most affected? DDS shares many of the same goals and characteristics of distributed simulation, such as the need to support scale and an emphasis on Quality of Service (QoS) that can be tailored to meet the end user's needs. In addition, DDS has some built-in features, such as security, that are not present in traditional distributed simulation protocols. If the IoT and IIoT realize their potential application, we predict a large base of technology to be built around this distributed data paradigm, much of which could be directly beneficial to the distributed M&S community. In this paper we compare some of the perceived gaps and shortfalls of current distributed M&S technology to the emerging capabilities of DDS built around the IoT. Although some trial work has been conducted in this area, we propose a more focused examination of the potential of these new technologies and their applicability to current and future problems in distributed M&S. The Internet of Things (IoT) and its data communications mechanisms such as the Data Distribution Service (DDS) share properties in common with distributed modeling and simulation (M&S) and its protocols such as the High Level Architecture (HLA) and the Test and Training Enabling Architecture (TENA). This paper proposes a framework based on the sensor use case for how the two communities of practice (CoP) can benefit from one another and achieve greater capability in practical distributed computing.
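One DDS feature relevant to the comparison above is its requested-vs-offered (RxO) QoS matching: a reader and writer are connected only if the offered QoS is at least as strong as the requested one. The fragment below is a simplified illustration of that rule, not a real DDS API.

```python
# Simplified requested-vs-offered (RxO) QoS matching: connect a reader and
# a writer only if the offered QoS is at least as strong as the request
# (illustration only; not a real DDS API).
from dataclasses import dataclass

RELIABILITY_ORDER = {"BEST_EFFORT": 0, "RELIABLE": 1}

@dataclass
class Qos:
    reliability: str
    deadline_ms: float   # smaller deadline = stronger guarantee

def compatible(offered: Qos, requested: Qos) -> bool:
    return (RELIABILITY_ORDER[offered.reliability]
            >= RELIABILITY_ORDER[requested.reliability]
            and offered.deadline_ms <= requested.deadline_ms)

print(compatible(Qos("RELIABLE", 50.0), Qos("BEST_EFFORT", 100.0)))  # True
print(compatible(Qos("BEST_EFFORT", 50.0), Qos("RELIABLE", 100.0)))  # False
```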
PIC codes for plasma accelerators on emerging computer architectures (GPUS, Multicore/Manycore CPUS)
NASA Astrophysics Data System (ADS)
Vincenti, Henri
2016-03-01
The advent of exascale computers will enable 3D simulations of new laser-plasma interaction regimes that were previously out of reach of current petascale computers. However, the paradigm used to write current PIC codes will have to change in order to fully exploit the potential of these new computing architectures. Indeed, achieving exascale computing facilities in the next decade will be a great challenge in terms of energy consumption and will imply hardware developments directly impacting our way of implementing PIC codes. As data movement (from die to network) is by far the most energy-consuming part of an algorithm, future computers will tend to increase memory locality at the hardware level and reduce energy consumption related to data movement by using more and more cores on each compute node (''fat nodes'') with a reduced clock speed to allow for efficient cooling. To compensate for the frequency decrease, CPU vendors are making use of long SIMD instruction registers that are able to process multiple data with one arithmetic operator in one clock cycle. SIMD register length is expected to double every four years. GPUs also have a reduced clock speed per core and can process Multiple Instructions on Multiple Data (MIMD). At the software level, Particle-In-Cell (PIC) codes will thus have to achieve both good memory locality and vectorization (for multicore/manycore CPUs) to fully take advantage of these upcoming architectures. In this talk, we present the portable solutions we implemented in our high-performance skeleton PIC code PICSAR to achieve good memory locality and cache reuse as well as good vectorization on SIMD architectures. We also present the portable solutions used to parallelize the pseudo-spectral quasi-cylindrical code FBPIC on GPUs using the Numba Python compiler.
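The memory-locality and vectorization concerns discussed above can be made concrete with a minimal structure-of-arrays particle push compiled with Numba. This is an illustrative sketch, not PICSAR or FBPIC source code.

```python
# Minimal structure-of-arrays particle push compiled with Numba
# (illustration of the locality/vectorization point; not PICSAR/FBPIC).
import numpy as np
from numba import njit, prange

@njit(parallel=True, fastmath=True)
def push(x, v, E, qm, dt):
    # Contiguous 1D arrays let the compiler emit SIMD loads/stores.
    for i in prange(x.size):
        v[i] += qm * E[i] * dt
        x[i] += v[i] * dt

n = 1_000_000
x = np.zeros(n); v = np.zeros(n); E = np.ones(n)
push(x, v, E, qm=-1.76e11, dt=1e-12)  # electron charge-to-mass ratio
print(x[:3])
```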
AlJarullah, Asma; El-Masri, Samir
2013-08-01
The goal of a national electronic health records integration system is to aggregate electronic health records concerning a particular patient at different healthcare providers' systems to provide a complete medical history of the patient. It holds the promise to address the two most crucial challenges to the healthcare systems: improving healthcare quality and controlling costs. Typical approaches for the national integration of electronic health records are a centralized architecture and a distributed architecture. This paper proposes a new approach for the national integration of electronic health records, the semi-centralized approach, an intermediate solution between the centralized architecture and the distributed architecture that has the benefits of both approaches. The semi-centralized approach is provided with a clearly defined architecture. The main data elements needed by the system are defined and the main system modules that are necessary to achieve an effective and efficient functionality of the system are designed. Best practices and essential requirements are central to the evolution of the proposed architecture. The proposed architecture will provide the basis for designing the simplest and the most effective systems to integrate electronic health records on a nation-wide basis that maintain integrity and consistency across locations, time and systems, and that meet the challenges of interoperability, security, privacy, maintainability, mobility, availability, scalability, and load balancing.
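The semi-centralized idea can be sketched as a central index that stores only where a patient's records live, while the records themselves remain at the providers and are fetched on demand. All names and data below are hypothetical.

```python
# Sketch of a semi-centralized design: a central index stores only WHERE a
# patient's records live; records stay at the providers and are fetched on
# demand (all names and data are hypothetical).
PATIENT_INDEX = {                    # central, lightweight
    "patient-42": ["hospital-a", "clinic-b"],
}
PROVIDER_STORES = {                  # distributed, authoritative
    "hospital-a": {"patient-42": ["2019 cardiology report"]},
    "clinic-b":   {"patient-42": ["2021 lab results"]},
}

def full_history(patient_id: str) -> list:
    records = []
    for provider in PATIENT_INDEX.get(patient_id, []):
        records.extend(PROVIDER_STORES[provider].get(patient_id, []))
    return records

print(full_history("patient-42"))
```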
Analysis of Disaster Preparedness Planning Measures in DoD Computer Facilities
1993-09-01
Report contents include: Computer Disaster Recovery; PC and LAN Lessons Learned; Distributed Architectures; Backups. "…amount of expense, but no client problems." (Leeke, 1993, p. 8) The majority of operations that were disrupted by the…
A distributed data acquisition software scheme for the Laboratory Telerobotic Manipulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Butler, P.L.; Glassell, R.L.; Rowe, J.C.
1990-01-01
A custom software architecture was developed for use in the Laboratory Telerobotic Manipulator (LTM) to provide support for the distributed data acquisition electronics. This architecture was designed to provide a comprehensive development environment that proved to be useful for both hardware and software debugging. This paper describes the development environment and the operational characteristics of the real-time data acquisition software. 8 refs., 5 figs.
The new landscape of parallel computer architecture
NASA Astrophysics Data System (ADS)
Shalf, John
2007-07-01
The past few years has seen a sea change in computer architecture that will impact every facet of our society as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing. In this paper we examine the reasons behind the movement to exponentially increasing parallelism, and its ramifications for system design, applications and programming models.
NASA Astrophysics Data System (ADS)
Guilfoyle, Peter S.; Stone, Richard V.
1991-12-01
OptiComp is currently completing a 32-bit, fully programmable digital optical computer (DOC II) that is designed to operate in a UNIX environment running RISC microcode. OptiComp's DOC II architecture is focused toward parallel microcode implementation where data is input in a dual rail format. By exploiting the physical principles inherent to optics (speed and low power consumption), an architectural balance of optical interconnects and software code efficiency can be achieved, including high fan-in and fan-out. OptiComp's DOC II program is jointly sponsored by the Office of Naval Research (ONR), the Strategic Defense Initiative Office (SDIO), NASA space station group and Rome Laboratory (USAF). This paper not only describes the motivational basis behind DOC II but also provides an optical overview and architectural summary of the device that allows the emulation of any digital instruction set.
Taniguchi, Masahiko; Du, Hai; Lindsey, Jonathan S
2013-09-23
A wide variety of cyclic molecular architectures are built of modular subunits and can be formed combinatorially. The mathematics for enumeration of such objects is well-developed yet lacks key features of importance in chemistry, such as specifying (i) the structures of individual members among a set of isomers, (ii) the distribution (i.e., relative amounts) of products, and (iii) the effect of nonequal ratios of reacting monomers on the product distribution. Here, a software program (Cyclaplex) has been developed to determine the number, identity (including isomers), and relative amounts of linear and cyclic architectures from a given number and ratio of reacting monomers. The program includes both mathematical formulas and generative algorithms for enumeration; the latter go beyond the former to provide desired molecular-relevant information and data-mining features. The program is equipped to enumerate four types of architectures: (i) linear architectures with directionality (macroscopic equivalent = electrical extension cords), (ii) linear architectures without directionality (batons), (iii) cyclic architectures with directionality (necklaces), and (iv) cyclic architectures without directionality (bracelets). The program can be applied to cyclic peptides, cycloveratrylenes, cyclens, calixarenes, cyclodextrins, crown ethers, cucurbiturils, annulenes, expanded meso-substituted porphyrin(ogen)s, and diverse supramolecular (e.g., protein) assemblies. The size of accessible architectures encompasses up to 12 modular subunits derived from 12 reacting monomers or larger architectures (e.g. 13-17 subunits) from fewer types of monomers (e.g. 2-4). A particular application concerns understanding the possible heterogeneity of (natural or biohybrid) photosynthetic light-harvesting oligomers (cyclic, linear) formed from distinct peptide subunits.
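The four architecture types enumerate cleanly by brute force for small sizes: take all monomer sequences, then identify sequences equivalent under reversal, rotation, or both. The sketch below mirrors Cyclaplex's counting problem but is not its source code.

```python
# Brute-force enumeration of the four architecture types for n subunits
# drawn from k monomers (mirrors the counting problem, not Cyclaplex itself).
from itertools import product

def count(n: int, k: int) -> dict:
    seqs = list(product(range(k), repeat=n))
    rot = lambda s: {s[i:] + s[:i] for i in range(n)}          # rotations
    necklaces = {min(rot(s)) for s in seqs}
    bracelets = {min(rot(s) | rot(s[::-1])) for s in seqs}     # + reflections
    return {
        "linear, directional (cords)": k ** n,
        "linear, nondirectional (batons)": len({min(s, s[::-1]) for s in seqs}),
        "cyclic, directional (necklaces)": len(necklaces),
        "cyclic, nondirectional (bracelets)": len(bracelets),
    }

print(count(n=4, k=2))  # 16 cords, 10 batons, 6 necklaces, 6 bracelets
```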
Communication Architecture in Mixed-Reality Simulations of Unmanned Systems.
Selecký, Martin; Faigl, Jan; Rollo, Milan
2018-03-14
Verification of the correct functionality of multi-vehicle systems in high-fidelity scenarios is required before any deployment of such a complex system, e.g., in missions of remote sensing or in mobile sensor networks. Mixed-reality simulations, where both virtual and physical entities can coexist and interact, have been shown to be beneficial for development, testing, and verification of such systems. This paper deals with the problem of designing the communication subsystem for such highly realistic simulations. Requirements of this communication subsystem, including proper addressing, transparent routing, visibility modeling, and message management, are specified prior to designing an appropriate solution. Then, a suitable architecture of this communication subsystem is proposed, together with solutions to the challenges that arise when simultaneous virtual and physical message transmissions occur. The proposed architecture can be utilized as a high-fidelity network simulator for vehicular systems with implicit mobility models that are given by real trajectories of the vehicles. The architecture has been utilized within multiple projects dealing with the development and practical deployment of multi-UAV systems, which supports the architecture's viability and advantages. The provided experimental results show the achieved similarity of the communication characteristics of the fully deployed hardware setup to the setup utilizing the proposed mixed-reality architecture.
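The delivery rule at the heart of such a mixed-reality network simulator can be reduced to a toy form: virtual and physical nodes share one address space, and a message is delivered only if the visibility model says the link exists. The range-based visibility model below is an assumption for illustration; it is not the paper's subsystem.

```python
# Toy mixed-reality delivery rule: virtual and physical nodes share one
# address space; a message is delivered only if the (assumed, range-based)
# visibility model says the link exists.
import math

NODES = {  # id -> (x, y, is_physical)
    "uav-1": (0.0, 0.0, True),    # real vehicle, position from telemetry
    "uav-2": (80.0, 60.0, False), # simulated vehicle
}
RADIO_RANGE_M = 150.0

def visible(a: str, b: str) -> bool:
    ax, ay, _ = NODES[a]; bx, by, _ = NODES[b]
    return math.hypot(ax - bx, ay - by) <= RADIO_RANGE_M

def send(src: str, dst: str, msg: str) -> None:
    if visible(src, dst):
        print(f"{src} -> {dst}: {msg}")   # deliver over real or simulated link
    else:
        print(f"{src} -> {dst}: dropped (out of modeled range)")

send("uav-1", "uav-2", "waypoint-update")
```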
Partially Decentralized Control Architectures for Satellite Formations
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Bauer, Frank H.
2002-01-01
In a partially decentralized control architecture, more than one but less than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such an architecture, based on a reliability vs. cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal only occur through common biases and process noise. Covariance and Monte Carlo analysis of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.
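The shape of the reliability vs. cost trade can be reproduced with a deliberately simple model: the system survives if at least one supervisor survives, and supervisors cost more than subordinates. All numbers below are invented; the paper itself only describes its cost model as assumed.

```python
# Shape of the supervisors-vs-reliability trade under an ASSUMED model:
# system survives if at least one supervisor survives; supervisors cost
# more than subordinates (all numbers invented for illustration).
N = 10          # formation size
p_s = 0.95      # single-supervisor reliability over the mission
c_sup, c_sub = 5.0, 1.0   # assumed relative node costs

for k in range(1, N + 1):
    reliability = 1.0 - (1.0 - p_s) ** k
    cost = k * c_sup + (N - k) * c_sub
    print(f"{k} supervisors: reliability={reliability:.4f}, cost={cost:.0f}")
# A few supervisors already push reliability near 1 at far less cost
# than a fully decentralized (k = N) design.
```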
Research of Ancient Architectures in Jin-Fen Area Based on GIS&BIM Technology
NASA Astrophysics Data System (ADS)
Jia, Jing; Zheng, Qiuhong; Gao, Huiying; Sun, Hai
2017-05-01
The number of well-preserved ancient buildings located in Shanxi Province, which holds the largest share of ancient architectures in China, is about 18418, of which 9053 buildings have a wood-frame structure. The value of applying BIM (Building Information Modeling) and GIS (Geographic Information System) has gradually been explored and demonstrated in the fields of ancient architecture's spatial distribution information management, routine maintenance, special conservation and restoration, and the evaluation and simulation of related disasters such as earthquakes. The research objects are ancient architectures in the Jin-Fen area, first investigated by Sicheng LIANG and recorded in his "Chinese ancient architectures survey report". The research objects include those in Sicheng LIANG's investigation, with further adjustments made through the authors' on-site investigation and literature searching and collection. During this research, the spatial distribution Geodatabase of the research objects was established using GIS. The BIM components library for ancient buildings was formed by combining on-site investigation data with precedent classic works such as "Yingzao Fashi", a treatise on architectural methods of the Song Dynasty, the "Yongle Encyclopedia", and "Gongcheng Zuofa Zeli", case collections of engineering practice by the Ministry of Construction of the Qing Dynasty. A building of Guangsheng temple in Hongtong county is selected as an example to elaborate the BIM model construction process based on the BIM components library for ancient buildings. Based on the foregoing work results of spatial distribution data, attribute data of features, 3D graphic information and parametric building information models, an information management system for ancient architectures in the Jin-Fen area, utilizing GIS&BIM technology, could be constructed to support further research on seismic disaster analysis and seismic performance simulation.
Global fully kinetic models of planetary magnetospheres with iPic3D
NASA Astrophysics Data System (ADS)
Gonzalez, D.; Sanna, L.; Amaya, J.; Zitz, A.; Lembege, B.; Markidis, S.; Schriver, D.; Walker, R. J.; Berchem, J.; Peng, I. B.; Travnicek, P. M.; Lapenta, G.
2016-12-01
We report on the latest developments of our approach to model planetary magnetospheres, mini magnetospheres and the Earth's magnetosphere with the fully kinetic, electromagnetic particle-in-cell code iPic3D. The code treats electrons and multiple species of ions as fully kinetic particles. We review: 1) why a fully kinetic model, and in particular why kinetic electrons, are needed to capture some of the most important aspects of the physics of planetary magnetospheres; 2) why the energy-conserving implicit method (ECIM) in its newest implementation [1] is the right approach to reach this goal: we consider the different electron scales and study how the new ECIM can be tuned to resolve only the electron scales of interest while averaging over the unresolved scales, preserving their contribution to the evolution; 3) how, with modern computing, planetary magnetospheres, mini magnetospheres and eventually the Earth's magnetosphere can be modeled with fully kinetic electrons. The path from petascale to exascale for iPic3D is outlined based on the DEEP-ER project [2], using dynamic allocation of different processor architectures (Xeon and Xeon Phi) and innovative I/O technologies. Specifically, results from models of Mercury are presented and compared with MESSENGER observations and with previous hybrid (fluid electrons and kinetic ions) simulations. The plasma convection around the planet includes the development of hydrodynamic instabilities at the flanks, the presence of collisionless shocks, the magnetosheath, the magnetopause, reconnection zones, the formation of the plasma sheet and the magnetotail, and the variation of ion/electron plasma flows when crossing these frontiers. Given the fully kinetic nature of our approach, we focus on detailed particle dynamics and distributions at locations that can be used for comparison with satellite data. [1] Lapenta, G. (2016). Exactly Energy Conserving Implicit Moment Particle in Cell Formulation. arXiv preprint arXiv:1602.06326. [2] www.deep-er.eu
Architecture and evolution of Goddard Space Flight Center Distributed Active Archive Center
NASA Technical Reports Server (NTRS)
Bedet, Jean-Jacques; Bodden, Lee; Rosen, Wayne; Sherman, Mark; Pease, Phil
1994-01-01
The Goddard Space Flight Center (GSFC) Distributed Active Archive Center (DAAC) has been developed to enhance Earth science research by improving access to remote-sensor earth science data. Building and operating an archive, even one of moderate size (a few terabytes), is a challenging task. One of the critical components of this system is Unitree, the hierarchical file storage management system. Unitree, selected two years ago as the best available solution, requires constant system administrative support. It is not always suitable as an archive and distribution data center, and has moderate performance. The Data Archive and Distribution System (DADS) software developed to monitor, manage, and automate the ingestion, archive, and distribution functions turned out to be more challenging than anticipated. Having the software and tools is not sufficient to succeed. Human interaction within the system must be fully understood to improve efficiency and ensure that the right tools are developed. One of the lessons learned is that operability, reliability, and performance should be thoroughly addressed in the initial design. However, the GSFC DAAC has demonstrated that it is capable of distributing over 40 GB per day. A backup system to archive a second copy of all ingested data is under development. This backup system will be used not only for disaster recovery but will also replace the main archive when it is unavailable during maintenance or hardware replacement. The GSFC DAAC has put a strong emphasis on quality at all levels of its organization. A quality team has also been formed to identify quality issues and to propose improvements. The DAAC has conducted numerous tests to benchmark the performance of the system. These tests proved to be extremely useful in identifying bottlenecks and deficiencies in operational procedures.
Three-dimensional wax patterning of paper fluidic devices.
Renault, Christophe; Koehne, Jessica; Ricco, Antonio J; Crooks, Richard M
2014-06-17
In this paper we describe a method for three-dimensional wax patterning of microfluidic paper-based analytical devices (μPADs). The method is rooted in the fundamental details of wax transport in paper and provides a simple way to fabricate complex channel architectures such as hemichannels and fully enclosed channels. We show that three-dimensional μPADs can be fabricated with half as much paper by using hemichannels rather than ordinary open channels. We also provide evidence that fully enclosed channels are efficiently isolated from the exterior environment, decreasing contamination risks, simplifying the handling of the device, and slowing evaporation of solvents.
Design of a MIMD neural network processor
NASA Astrophysics Data System (ADS)
Saeks, Richard E.; Priddy, Kevin L.; Pap, Robert M.; Stowell, S.
1994-03-01
The Accurate Automation Corporation (AAC) neural network processor (NNP) module is a fully programmable multiple instruction multiple data (MIMD) parallel processor optimized for the implementation of neural networks. The AAC NNP design fully exploits the intrinsic sparseness of neural network topologies. Moreover, by using a MIMD parallel processing architecture one can update multiple neurons in parallel with efficiency approaching 100 percent as the size of the network increases. Each AAC NNP module has 8 K neurons and 32 K interconnections and is capable of 140,000,000 connections per second with an eight processor array capable of over one billion connections per second.
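The payoff from exploiting sparseness can be seen in a toy update loop that iterates only over existing interconnections, so the work scales with the number of connections rather than with the square of the neuron count. This is a conceptual model, not the NNP microcode.

```python
# Why sparseness pays off: update neurons by iterating only over the
# nonzero weights (a toy model of the idea, NOT the NNP microcode).
import math

# weights[(pre, post)] = w, storing ONLY existing interconnections
weights = {(0, 2): 0.5, (1, 2): -0.3, (0, 3): 0.8}
activation = {0: 1.0, 1: 0.5, 2: 0.0, 3: 0.0}

net_input = {n: 0.0 for n in activation}
for (pre, post), w in weights.items():       # work scales with connections,
    net_input[post] += w * activation[pre]   # not with neurons squared

for n in (2, 3):
    activation[n] = 1.0 / (1.0 + math.exp(-net_input[n]))  # sigmoid
print(activation)
```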
Ultra-Dense Quantum Communication Using Integrated Photonic Architecture: First Annual Report
2011-08-24
The goal of this program is to establish a fundamental information-theoretic understanding of quantum secure communication and to devise a practical, scalable implementation of quantum key distribution protocols in an integrated photonic architecture. We report our progress on experimental and…
Specifying structural constraints of architectural patterns in the ARCHERY language
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Alejandro; HASLab INESC TEC and Universidade do Minho, Campus de Gualtar, 4710-057 Braga; Barbosa, Luis S.
ARCHERY is an architectural description language for modelling and reasoning about distributed, heterogeneous and dynamically reconfigurable systems in terms of architectural patterns. The language supports the specification of architectures and their reconfiguration. This paper introduces a language extension for precisely describing the structural design decisions that pattern instances must respect in their (re)configurations. The extension is a propositional modal logic with recursion and nominals referencing components, i.e., a hybrid µ-calculus. Its expressiveness allows specifying safety and liveness constraints, as well as paths and cycles over structures. Refinements of classic architectural patterns are specified.
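Typical safety and liveness constraints of the kind the extension can express are standard (hybrid) µ-calculus patterns, for example (illustrative textbook encodings, not formulas from the ARCHERY paper):

```latex
% Safety: \varphi holds along every a-path (greatest fixed point)
\nu X.\,(\varphi \wedge [a]X)
% Liveness: \varphi is eventually reachable (least fixed point)
\mu X.\,(\varphi \vee \langle a\rangle X)
% A cycle through the component named by nominal n:
% from n, one a-step reaches a state from which n is reachable again
@_{n}\,\langle a\rangle\,\mu X.\,(n \vee \langle a\rangle X)
```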
NASA Astrophysics Data System (ADS)
Yager, Kevin; Albert, Thomas; Brower, Bernard V.; Pellechia, Matthew F.
2015-06-01
The domain of Geospatial Intelligence Analysis is rapidly shifting toward a new paradigm of Activity Based Intelligence (ABI) and information-based Tipping and Cueing. General requirements for an advanced ABIAA system present significant challenges in architectural design, computing resources, data volumes, workflow efficiency, data mining and analysis algorithms, and database structures. These sophisticated ABI software systems must include advanced algorithms that automatically flag activities of interest in less time and within larger data volumes than can be processed by human analysts. In doing this, they must also maintain the geospatial accuracy necessary for cross-correlation of multi-intelligence data sources. Historically, serial architectural workflows have been employed in ABIAA system design for tasking, collection, processing, exploitation, and dissemination. These simpler architectures may produce implementations that solve short term requirements; however, they have serious limitations that preclude them from being used effectively in an automated ABIAA system with multiple data sources. This paper discusses modern ABIAA architectural considerations providing an overview of an advanced ABIAA system and comparisons to legacy systems. It concludes with a recommended strategy and incremental approach to the research, development, and construction of a fully automated ABIAA system.
Local deformation behavior of surface porous polyether-ether-ketone.
Evans, Nathan T; Torstrick, F Brennan; Safranski, David L; Guldberg, Robert E; Gall, Ken
2017-01-01
Surface porous polyether-ether-ketone has the ability to maintain the tensile monotonic and cyclic strength necessary for many load-bearing orthopedic applications while providing a surface that facilitates bone ingrowth; however, the relevant deformation behavior of the pore architecture in response to various loading conditions is not yet fully characterized or understood. The focus of this study was to examine the compressive and wear behavior of the surface porous architecture using micro-computed tomography (micro-CT). Pore architectures of various depths (~0.5-2.5 mm) and pore sizes (212-508 µm) were manufactured using a melt extrusion and porogen leaching process. Compression testing revealed that the pore architecture deforms in the typical three-stage linear elastic, plastic, and densification sequence characteristic of porous materials. The experimental moduli and yield strengths decreased as the porosity increased, but there was no difference in properties between pore sizes. The porous architecture maintained a high degree of porosity available for bone ingrowth at all strains. Surface porous samples showed no increase in wear rate compared to injection molded samples, with slight pore densification accompanying wear.
Fast adaptive composite grid methods on distributed parallel architectures
NASA Technical Reports Server (NTRS)
Lemke, Max; Quinlan, Daniel
1992-01-01
The fast adaptive composite (FAC) grid method is compared with the asynchronous fast adaptive composite (AFAC) method under a variety of conditions, including vectorization and parallelization. Results are given for distributed-memory multiprocessor architectures (SUPRENUM, Intel iPSC/2 and iPSC/860). It is shown that the good performance of AFAC and its superiority over FAC in a parallel environment is a property of the algorithm and not dependent on peculiarities of any machine.
Integrating the Web and continuous media through distributed objects
NASA Astrophysics Data System (ADS)
Labajo, Saul P.; Garcia, Narciso N.
1998-09-01
The Web has rapidly grown to become the standard for document interchange on the Internet. At the same time, interest in transmitting continuous media flows over the Internet, and in its associated applications such as multimedia on demand, is also growing. Integrating both kinds of systems should allow building real hypermedia systems where every media object can be linked from any other, taking into account temporal and spatial synchronization. A way to achieve this integration is using the CORBA architecture, a standard for open distributed systems. There are also recent efforts to integrate Web and CORBA systems. We use this architecture to build a service for the distribution of data flows endowed with timing restrictions. To integrate it with the Web, we use, on one side, Java applets that can use the CORBA architecture and are embedded in HTML pages; on the other side, we also benefit from the efforts to integrate CORBA and the Web.
Software Architecture of Sensor Data Distribution In Planetary Exploration
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard; Stone, Thom; Ossenfort, John; Walker, Ed; Notario, Hugo
2006-01-01
Data from mobile and stationary sensors will be vital in planetary surface exploration. The distribution and collection of sensor data in an ad-hoc wireless network presents a challenge. Irregular terrain, mobile nodes, new associations with access points and repeaters with stronger signals as the network reconfigures to adapt to new conditions, signal fade and hardware failures can cause: a) Data errors; b) Out of sequence packets; c) Duplicate packets; and d) Drop out periods (when node is not connected). To mitigate the effects of these impairments, a robust and reliable software architecture must be implemented. This architecture must also be tolerant of communications outages. This paper describes such a robust and reliable software infrastructure that meets the challenges of a distributed ad hoc network in a difficult environment and presents the results of actual field experiments testing the principles and actual code developed.
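The impairments listed in a) through d) are classically handled with per-source sequence numbers: duplicates dropped, out-of-order packets buffered until the gap closes, and missing sequence numbers reported after a dropout. The sketch below illustrates that logic; it is not the deployed field-test software.

```python
# Per-source sequence handling for the impairments listed above: duplicates
# dropped, out-of-order packets buffered, gaps reported (illustration only;
# not the deployed field-test software).
class SensorStream:
    def __init__(self) -> None:
        self.next_seq = 0
        self.buffer: dict = {}            # out-of-order packets, seq -> data
        self.delivered: list = []

    def receive(self, seq: int, payload: bytes) -> None:
        if seq < self.next_seq or seq in self.buffer:
            return                        # duplicate packet: drop
        self.buffer[seq] = payload
        while self.next_seq in self.buffer:   # deliver any in-order run
            self.delivered.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1

    def missing(self) -> list:            # gaps left by a dropout period
        if not self.buffer:
            return []
        return [s for s in range(self.next_seq, max(self.buffer))
                if s not in self.buffer]

s = SensorStream()
for seq, data in [(0, b"a"), (2, b"c"), (2, b"c"), (1, b"b")]:
    s.receive(seq, data)
print(s.delivered)  # [b'a', b'b', b'c'] despite duplication and reordering
```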
The emergence of overlapping scale-free genetic architecture in digital organisms.
Gerlee, P; Lundh, T
2008-01-01
We have studied the evolution of genetic architecture in digital organisms and found that the gene overlap follows a scale-free distribution, which is commonly found in metabolic networks of many organisms. Our results show that the slope of the scale-free distribution depends on the mutation rate and that the gene development is driven by expansion of already existing genes, which is in direct correspondence to the preferential growth algorithm that gives rise to scale-free networks. To further validate our results we have constructed a simple model of gene development, which recapitulates the results from the evolutionary process and shows that the mutation rate affects the tendency of genes to cluster. In addition we could relate the slope of the scale-free distribution to the genetic complexity of the organisms and show that a high mutation rate gives rise to a more complex genetic architecture.
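The preferential-growth mechanism referred to above is easy to simulate: new links attach to existing nodes in proportion to their current degree, which yields a heavy-tailed degree distribution. The fragment below is a generic illustration, not the authors' digital-organism model.

```python
# Minimal preferential-growth simulation: new links attach to nodes in
# proportion to current degree, yielding a heavy-tailed (scale-free-like)
# degree distribution analogous to the gene-overlap growth described above.
import random
from collections import Counter

random.seed(0)
targets = [0, 1]                 # degree-weighted slots; one initial edge
degree = Counter({0: 1, 1: 1})

for new_node in range(2, 10_000):
    old = random.choice(targets)        # picks nodes proportionally to degree
    degree[new_node] += 1
    degree[old] += 1
    targets += [new_node, old]

hist = Counter(degree.values())
for d in sorted(hist)[:8]:
    print(f"degree {d}: {hist[d]} nodes")   # counts fall off as a power law
```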
NASA Technical Reports Server (NTRS)
Goldstein, David
1991-01-01
Extensions to an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS) are discussed. PRAIS strives for transparently parallelizing production (rule-based) systems, even under real-time constraints. PRAIS accomplished these goals (presented at the first annual C Language Integrated Production System (CLIPS) conference) by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors. Results using the original PRAIS architecture over a network of Sun 3's, Sun 4's and VAX's are presented. Mechanisms using the producer-consumer model to extend the architecture for fault-tolerance and distributed truth maintenance initiation are also discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-05
... additional time to review and more fully assess the proposed rule. In addition, just prior to the closing of...: Notice of proposed rulemaking; reopening of comment period. SUMMARY: The Architectural and Transportation... notice entitled ``Accessibility Guidelines for Pedestrian Facilities in the Public Right-of-Way,'' that...
Preliminary Results on the Influence of Engineered Artificial Mucus Layer on Phonation
ERIC Educational Resources Information Center
Döllinger, Michael; Gröhn, Franziska; Berry, David A.; Eysholdt, Ulrich; Luegmair, Georg
2014-01-01
Purpose: Previous studies have confirmed the influence of dehydration and an altered mucus (e.g., due to pathologies) on phonation. However, the underlying reasons for these influences are not fully understood. This study was a preliminary inquiry into the influences of mucus architecture and concentration on vocal fold oscillation. Method: Two…
An investigation of the needs and the design of an orbiting space station with growth capabilities
NASA Technical Reports Server (NTRS)
Dossey, J. R.; Trotti, G.
1977-01-01
An architectural approach to the evolutionary growth of an orbiting space station from a small manned satellite to a fully independent, self-sustainable space colony facility is presented. Social and environmental factors, ease of transportation via the space shuttle, and structural design are considered.
Automatic Assessment of 3D Modeling Exams
ERIC Educational Resources Information Center
Sanna, A.; Lamberti, F.; Paravati, G.; Demartini, C.
2012-01-01
Computer-based assessment of exams provides teachers and students with two main benefits: fairness and effectiveness in the evaluation process. This paper proposes a fully automatic evaluation tool for the Graphic and Virtual Design (GVD) curriculum at the First School of Architecture of the Politecnico di Torino, Italy. In particular, the tool is…
ERIC Educational Resources Information Center
Aldukheil, Maher A.
2013-01-01
The Healthcare industry is characterized by its complexity in delivering care to the patients. Accordingly, healthcare organizations adopt and implement Information Technology (IT) solutions to manage complexity, improve quality of care, and transform to a fully integrated and digitized environment. Electronic Medical Records (EMR), which is…
Microfluidics for Positron Emission Tomography (PET) Imaging Probe Development
Wang, Ming-Wei; Lin, Wei-Yu; Liu, Kan; Masterman-Smith, Michael; Shen, Clifton Kwang-Fu
2012-01-01
Due to increased needs for Positron Emission Tomography (PET) scanning, high demands for a wide variety of radiolabeled compounds will have to be met by exploiting novel radiochemistry and engineering technologies to improve the production and development of PET probes. The application of microfluidic reactors to perform radiosyntheses is currently attracting a great deal of interest because of their potential to deliver many advantages over conventional labeling systems. Microfluidic-based radiochemistry can lead to the use of smaller quantities of precursors, accelerated reaction rates and easier purification processes with greater yield and higher specific activity of desired probes. Several ‘proof-of-principle’ examples, along with basics of device architecture and operation, and potential limitations of each design are discussed here. Along with the concept of radioisotope distribution from centralized cyclotron facilities to individual imaging centers and laboratories (“decentralized model”), an easy-to-use, standalone, flexible, fully-automated radiochemical microfluidic platform can open up to simpler and more cost-effective procedures for molecular imaging using PET. PMID:20643021
MAX - An advanced parallel computer for space applications
NASA Technical Reports Server (NTRS)
Lewis, Blair F.; Bunker, Robert L.
1991-01-01
MAX is a fault-tolerant multicomputer hardware and software architecture designed to meet the needs of NASA spacecraft systems. It consists of conventional computing modules (computers) connected via a dual network topology. One network is used to transfer data among the computers and between computers and I/O devices. This network's topology is arbitrary. The second network operates as a broadcast medium for operating system synchronization messages and supports the operating system's Byzantine resilience. A fully distributed operating system supports multitasking in an asynchronous, event- and data-driven environment. A large-grain dataflow paradigm is used to coordinate the multitasking and provide easy control of concurrency. It is the basis of the system's fault tolerance and allows both static and dynamic allocation of tasks. Redundant execution of tasks with software voting of results may be specified for critical tasks. The dataflow paradigm also supports simplified software design, test and maintenance. A unique feature is a method for reliably patching code in an executing dataflow application.
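The redundant-execution-with-voting idea for critical tasks can be pictured in a few lines. The sketch below is illustrative only, not MAX flight software; the replica count, the fuse_sensors task, and all names are invented, and the replicas run locally rather than on separate computers.

    # Illustrative sketch of redundant task execution with software voting.
    from collections import Counter

    def run_redundant(task, args, replicas=3):
        """Run `task` several times (stand-in for separate computers) and vote."""
        results = [task(*args) for _ in range(replicas)]
        winner, votes = Counter(results).most_common(1)[0]
        if votes <= replicas // 2:          # no majority: flag the task as faulty
            raise RuntimeError("voting failed: no majority result")
        return winner

    def fuse_sensors(a, b):                 # example critical task
        return round(0.5 * (a + b), 6)

    print(run_redundant(fuse_sensors, (1.0, 2.0)))   # -> 1.5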
NASA Astrophysics Data System (ADS)
Pérez-López, F.; Vallejo, J. C.; Martínez, S.; Ortiz, I.; Macfarlane, A.; Osuna, P.; Gill, R.; Casale, M.
2015-09-01
BepiColombo is an interdisciplinary ESA mission to explore the planet Mercury in cooperation with JAXA. The mission consists of two separate orbiters: ESA's Mercury Planetary Orbiter (MPO) and JAXA's Mercury Magnetospheric Orbiter (MMO), which are dedicated to the detailed study of the planet and its magnetosphere. The MPO scientific payload comprises eleven instrument packages covering different disciplines, developed by several European teams. This paper describes the design and development approach of the framework required to support the operation of the distributed BepiColombo MPO instrument pipelines, which are developed and operated from different locations but designed as a single entity. An architecture based on a primary-redundant configuration, fully integrated into the BepiColombo Science Operations Control System (BSCS), has been selected: some instrument pipelines will be operated from the instrument teams' data processing centres, with a replica that can be run from the Science Ground Segment (SGS), while others will be executed as primary pipelines from the SGS, with the SGS adopting the pipeline orchestration role.
Functional Biomimetic Architectures
NASA Astrophysics Data System (ADS)
Levine, Paul M.
N-substituted glycine oligomers, or 'peptoids,' are a class of sequence-specific foldamers composed of tertiary amide linkages, engendering proteolytic stability and enhanced cellular permeability. Peptoids are notable for their facile synthesis, sequence diversity, and ability to fold into distinct secondary structures. In an effort to establish new functional peptoid architectures, we utilize the copper-catalyzed azide-alkyne [3+2] cycloaddition (CuAAC) reaction to generate peptidomimetic assemblies bearing bioactive ligands that specifically target and modulate Androgen Receptor (AR) activity, a major therapeutic target for prostate cancer. Additionally, we explore chemical ligation protocols to generate semi-synthetic hybrid biomacromolecules capable of exhibiting novel structures and functions not accessible to fully biosynthesized proteins.
NASA Astrophysics Data System (ADS)
Roh, Won B.
Computational systems based on photonic technologies are projected to offer order-of-magnitude improvements in processing speed, due to their intrinsic architectural parallelism and ultrahigh switching speeds; these architectures also minimize connectors, thereby enhancing reliability, and preclude EMP vulnerability. The use of optoelectronic ICs would also extend weapons capabilities in such areas as automated target recognition, systems-state monitoring, and detection avoidance. Fiber-optic technologies have an information-carrying capacity fully five orders of magnitude greater than copper-wire-based systems; energy loss in transmission is two orders of magnitude lower, and error rates are one order of magnitude lower. Attention is being given to ZrF glasses for optical fibers with unprecedentedly low scattering levels.
Alternative Architectures for Distributed Work in the National Airspace System
NASA Technical Reports Server (NTRS)
Smith, Philip J.; Billings, Charles E.; Chapman, Roger; Obradovich, Heintz; McCoy, C. Elaine; Orasanu, Judith
2000-01-01
The architecture for the National Airspace System (NAS) in the United States has evolved over time to rely heavily on the distribution of tasks and control authority in order to keep cognitive complexity manageable for any one individual. This paper characterizes a number of different subsystems that have been recently incorporated in the NAS. The goal of this discussion is to begin to identify the critical parameters defining the differences among alternative architectures in terms of the locus of control and in terms of access to relevant data and knowledge. At an abstract level, this analysis can be described as an effort to describe alternative "rules of the game" for the NAS.
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weighting, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource-reduction and speed-up advantages of our architectures.
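A software analogue of the SIRF steps helps make the piecewise-linear weighting concrete. The sketch below is not the paper's VHDL design; the triangle half-width, noise level, and all names are assumptions, and resampling is done serially as in the first architecture.

    # Software sketch of one SIRF iteration with a piecewise-linear weight.
    import random

    def pw_linear_weight(err, half_width=3.0):
        # Triangle approximation replacing the exponential likelihood.
        return max(0.0, 1.0 - abs(err) / half_width)

    def sirf_step(particles, z, motion_noise=0.5):
        n = len(particles)
        # 1) sample: propagate particles through a random-walk motion model
        particles = [p + random.gauss(0.0, motion_noise) for p in particles]
        # 2) weight: piecewise-linear function of the innovation z - p
        w = [pw_linear_weight(z - p) for p in particles]
        total = sum(w) or 1.0
        w = [wi / total for wi in w]
        # 3) resample: systematic resampling, performed serially
        out, u0, cum, j = [], random.random() / n, w[0], 0
        for i in range(n):
            while u0 + i / n > cum and j < n - 1:
                j += 1
                cum += w[j]
            out.append(particles[j])
        return out

    parts = [random.uniform(-5.0, 5.0) for _ in range(500)]
    for z in (1.0, 1.1, 0.9, 1.0):          # noisy measurements of a state near 1
        parts = sirf_step(parts, z)
    print(sum(parts) / len(parts))          # posterior mean, close to 1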
The deployment of routing protocols in distributed control plane of SDN.
Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu
2014-01-01
Software defined network (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we analyze the distributed characteristics of the Kandoo architecture and improve and optimize Kandoo's two levels of controllers, drawing on the ideas of RCP (routing control platform). Finally, we analyze the deployment strategies of the BGP and OSPF protocols in a distributed control plane of SDN. The simulation results show that our deployment strategies are superior to the traditional routing strategies.
Performance issues for domain-oriented time-driven distributed simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1987-01-01
It has long been recognized that simulations form an interesting and important class of computations that may benefit from distributed or parallel processing. Since the point of parallel processing is improved performance, the recent proliferation of multiprocessors requires that we consider the performance issues that naturally arise when attempting to implement a distributed simulation. Three such issues are: (1) the problem of mapping the simulation onto the architecture, (2) the possibilities for performing redundant computation in order to reduce communication, and (3) the avoidance of deadlock due to distributed contention for message-buffer space. These issues are discussed in the context of a battlefield simulation implemented on a medium-scale multiprocessor message-passing architecture.
Design Principles for E-Government Architectures
NASA Astrophysics Data System (ADS)
Sandoz, Alain
The paper introduces a holistic approach for architecting systems which must sustain the entire e-government activity of a public authority. Four principles directly impact the architecture: Legality, Responsibility, Transparency, and Symmetry, leading to coherent representations of the architecture for the client, the designer, and the builder. The approach makes it possible to deploy multipartite, distributed public services, including legal delegation of roles and outsourcing of non-mandatory tasks through PPP.
Design and Field Experimentation of a Cooperative ITS Architecture Based on Distributed RSUs
Moreno, Asier; Osaba, Eneko; Onieva, Enrique; Perallos, Asier; Iovino, Giovanni; Fernández, Pablo
2016-01-01
This paper describes a new cooperative Intelligent Transportation System architecture that aims to enable collaborative sensing services. The main goal of this architecture is to improve transportation efficiency and performance. The system, which was validated through participation in the ICSI (Intelligent Cooperative Sensing for Improved traffic efficiency) European project, encompasses the entire process of capturing and managing available road data. For this purpose, it applies a combination of cooperative services and methods for data sensing, acquisition, processing and communication amongst road users, vehicles, infrastructures and related stakeholders. The advantages of using the proposed system are also presented; the most important of these is the use of a distributed architecture, moving the system intelligence from the control centre to the peripheral devices. The global architecture of the system is presented, as well as the software design and the interaction between its main components. Finally, functional and operational results observed through the experimentation are described. This experimentation was carried out in two real scenarios, in Lisbon (Portugal) and Pisa (Italy). PMID:27455277
A novel software architecture for the provision of context-aware semantic transport information.
Moreno, Asier; Perallos, Asier; López-de-Ipiña, Diego; Onieva, Enrique; Salaberria, Itziar; Masegosa, Antonio D
2015-05-26
The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system through its use of Linked Open Data and a distributed architecture are stated, and the system is compared with other existing solutions. The adequacy of the generated information to the specific user's context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system.
Space Internet Architectures and Technologies for NASA Enterprises
NASA Technical Reports Server (NTRS)
Bhasin, Kul; Hayden, Jeffrey L.
2001-01-01
NASA's future communications services will be supplied through a space communications network that mirrors the terrestrial Internet in its capabilities and flexibility. The notional requirements for future data gathering and distribution by this Space Internet have been gathered from NASA's Earth Science Enterprise (ESE), the Human Exploration and Development in Space (HEDS), and the Space Science Enterprise (SSE). This paper describes a communications infrastructure for the Space Internet, the architectures within the infrastructure, and the elements that make up the architectures. The architectures meet the requirements of the enterprises beyond 2010 with Internet-compatible technologies and functionality. The elements of an architecture include the backbone, access, inter-spacecraft and proximity communication parts. From the architectures, technologies have been identified which have the most impact and are critical for the implementation of the architectures.
Integrating software architectures for distributed simulations and simulation analysis communities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldsby, Michael E.; Fellig, Daniel; Linebarger, John Michael
2005-10-01
The one-year Software Architecture LDRD (No. 79819) was a cross-site effort between Sandia California and Sandia New Mexico. The purpose of this research was to further develop and demonstrate integrating software architecture frameworks for distributed simulation and distributed collaboration in the homeland security domain. The integrated frameworks were initially developed through the Weapons of Mass Destruction Decision Analysis Center (WMD-DAC), sited at SNL/CA, and the National Infrastructure Simulation & Analysis Center (NISAC), sited at SNL/NM. The primary deliverable was a demonstration of both a federation of distributed simulations and a federation of distributed collaborative simulation analysis communities in the context of the same integrated scenario, which was the release of smallpox in San Diego, California. To our knowledge this was the first time such a combination of federations under a single scenario has ever been demonstrated. A secondary deliverable was the creation of the standalone GroupMeld{trademark} collaboration client, which uses the GroupMeld{trademark} synchronous collaboration framework. In addition, a small pilot experiment that used both integrating frameworks allowed a greater range of crisis management options to be performed and evaluated than would have been possible without the use of the frameworks.
Deep Learning for ECG Classification
NASA Astrophysics Data System (ADS)
Pyakillya, B.; Kazachenko, N.; Mikhailovsky, N.
2017-10-01
ECG classification is now highly important due to the many current medical applications in which the problem arises. Many machine learning (ML) solutions exist for analyzing and classifying ECG data. However, the main disadvantage of these ML solutions is their use of heuristic, hand-crafted or engineered features with shallow feature-learning architectures. The risk is that the chosen features are not the most appropriate ones for achieving high classification accuracy on ECG data. One proposed solution is to use deep learning architectures, in which the first layers of convolutional neurons act as feature extractors and some fully-connected (FCN) layers at the end make the final decision about ECG classes. In this work, a deep learning architecture with 1D convolutional layers and FCN layers for ECG classification is presented, along with some classification results.
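A minimal sketch of such a network is easy to state. The layer sizes, kernel widths, and the assumed 3600-sample single-lead input below are invented for illustration and are not the authors' exact configuration (PyTorch is used here for concreteness).

    # Minimal 1D-CNN + fully-connected classifier for ECG records.
    import torch
    import torch.nn as nn

    class ECGNet(nn.Module):
        def __init__(self, n_classes=4):
            super().__init__()
            self.features = nn.Sequential(          # convolutions learn the features
                nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(16),           # fixed-size summary of the signal
            )
            self.classifier = nn.Sequential(        # FCN layers make the decision
                nn.Flatten(),
                nn.Linear(32 * 16, 64), nn.ReLU(),
                nn.Linear(64, n_classes),
            )

        def forward(self, x):                       # x: (batch, 1, samples)
            return self.classifier(self.features(x))

    logits = ECGNet()(torch.randn(8, 1, 3600))      # 8 mock single-lead ECGs
    print(logits.shape)                             # torch.Size([8, 4])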
NASA Astrophysics Data System (ADS)
Liu, Jianping; Xian, Benzhong; Wang, Junhui; Ji, Youliang; Lu, Zhiyong; Liu, Saijun
2017-12-01
The sedimentary architectures of submarine/sublacustrine fans are controlled by sedimentary processes, geomorphology and sediment composition in sediment gravity flows. To advance understanding of sedimentary architecture of debris fans formed predominantly by debris flows in deep-water environments, a sub-lacustrine fan (Y11 fan) within a lacustrine succession has been identified and studied through the integration of core data, well logging data and 3D seismic data in the Eocene Dongying Depression, Bohai Bay Basin, east China. Six types of resedimented lithofacies can be recognized, which are further grouped into five broad lithofacies associations. Quantification of gravity flow processes on the Y11 fan is suggested by quantitative lithofacies analysis, which demonstrates that the fan is dominated by debris flows, while turbidity currents and sandy slumps are less important. The distribution, geometry and sedimentary architecture are documented using well data and 3D seismic data. A well-developed depositional lobe with a high aspect ratio is identified based on a sandstone isopach map. Canyons and/or channels are absent, which is probably due to the unsteady sediment supply from delta-front collapse. Distributary tongue-shaped debris flow deposits can be observed at different stages of fan growth, suggesting a lobe constructed by debrite tongue complexes. Within each stage of the tongue complexes, architectural elements are interpreted by wireline log motifs showing amalgamated debrite tongues, which constitute the primary fan elements. Based on lateral lithofacies distribution and vertical sequence analysis, it is proposed that lakefloor erosion, entrainment and dilution in the flow direction lead to an organized distribution of sandy debrites, muddy debrites and turbidites on individual debrite tongues. Plastic rheology of debris flows combined with fault-related topography are considered the major factors that control sediment distribution and fan architecture. An important implication of this study is that a deep-water depositional model for debrite-dominated systems was proposed, which may be applicable to other similar deep-water environments.
van Ruymbeke, E; Lee, H; Chang, T; Nikopoulou, A; Hadjichristidis, N; Snijkers, F; Vlassopoulos, D
2014-07-21
An emerging challenge in polymer physics is the quantitative understanding of the influence of a macromolecular architecture (i.e., branching) on the rheological response of entangled complex polymers. Recent investigations of the rheology of well-defined architecturally complex polymers have determined the composition in the molecular structure and identified the role of side-products in the measured samples. The combination of different characterization techniques, experimental and/or theoretical, represents the current state-of-the-art. Here we review this interdisciplinary approach to molecular rheology of complex polymers, and show the importance of confronting these different tools for ensuring an accurate characterization of a given polymeric sample. We use statistical tools in order to relate the information available from the synthesis protocols of a sample and its experimental molar mass distribution (typically obtained from size exclusion chromatography), and hence obtain precise information about its structural composition, i.e. enhance the existing sensitivity limit. We critically discuss the use of linear rheology as a reliable quantitative characterization tool, along with the recently developed temperature gradient interaction chromatography. The latter, which has emerged as an indispensable characterization tool for branched architectures, offers unprecedented sensitivity in detecting the presence of different molecular structures in a sample. Combining these techniques is imperative in order to quantify the molecular composition of a polymer and its consequences on the macroscopic properties. We validate this approach by means of a new model asymmetric comb polymer which was synthesized anionically. It was thoroughly characterized and its rheology was carefully analyzed. The main result is that the rheological signal reveals fine molecular details, which must be taken into account to fully elucidate the viscoelastic response of entangled branched polymers. It is important to appreciate that, even optimal model systems, i.e., those synthesized with high-vacuum anionic methods, need thorough characterization via a combination of techniques. Besides helping to improve synthetic techniques, this methodology will be significant in fine-tuning mesoscopic tube-based models and addressing outstanding issues such as the quantitative description of the constraint release mechanism.
Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel
2016-01-01
The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consist of physically-interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections between each other. In addition to the theoretical findings including rigorous system stability and the boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894
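The event-triggering idea itself fits in a few lines. The sketch below is not the paper's adaptive law: the dynamics, gain, and threshold are invented, and it shows only the core mechanism of transmitting state over the network when it drifts past a user-defined threshold.

    # Toy event-triggered feedback loop: transmit only on threshold crossings.
    def simulate(threshold=0.05, steps=200, dt=0.01):
        x, x_sent, u, sent = 1.0, 1.0, 0.0, 0
        for _ in range(steps):
            if abs(x - x_sent) > threshold:   # event condition violated?
                x_sent = x                    # transmit state over the network
                u = -2.0 * x_sent             # controller updates on new data
                sent += 1
            x += dt * (0.5 * x + u)           # uncertain, open-loop-unstable module
        return x, sent

    x_final, n_tx = simulate()
    print(f"final state {x_final:.3f} using only {n_tx} transmissions")

The point of the threshold is visible directly: the loop stays stable while the number of network transmissions remains far below the number of control steps.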
Fault architecture and deformation processes within poorly lithified rift sediments, Central Greece
NASA Astrophysics Data System (ADS)
Loveless, Sian; Bense, Victor; Turner, Jenni
2011-11-01
Deformation mechanisms and resultant fault architecture are primary controls on the permeability of faults in poorly lithified sediments. We characterise fault architecture using outcrop studies, hand samples, thin sections and grain-size data from a minor (1-10 m displacement) normal-fault array exposed within Gulf of Corinth rift sediments, Central Greece. These faults are dominated by mixed zones with poorly developed fault cores and damage zones. In poorly lithified sediment deformation is distributed across the mixed zone as beds are entrained and smeared. We find particulate flow aided by limited distributed cataclasis to be the primary deformation mechanism. Deformation may be localised in more competent sediments. Stratigraphic variations in sediment competency, and the subsequent alternating distributed and localised strain causes complexities within the mixed zone such as undeformed blocks or lenses of cohesive sediment, or asperities at the mixed zone/protolith boundary. Fault tip bifurcation and asperity removal are important processes in the evolution of these fault zones. Our results indicate that fault zone architecture and thus permeability is controlled by a range of factors including lithology, stratigraphy, cementation history and fault evolution, and that minor faults in poorly lithified sediment may significantly impact subsurface fluid flow.
VLBA Archive & Distribution Architecture
NASA Astrophysics Data System (ADS)
Wells, D. C.
1994-01-01
Signals from the 10 antennas of NRAO's VLBA [Very Long Baseline Array] are processed by a Correlator. The complex fringe visibilities produced by the Correlator are archived on magnetic cartridges using a low-cost architecture which is capable of scaling and evolving. Archive files are copied to magnetic media to be distributed to users in FITS format, using the BINTABLE extension. Archive files are labelled using SQL INSERT statements, in order to bind the DBMS-based archive catalog to the archive media.
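The catalog-binding step can be pictured with standard SQL. The table and column names below are invented (the abstract does not give the NRAO schema); the sketch just shows labelling archive files with INSERT statements, using Python's standard sqlite3 module.

    # Illustrative archive-catalog labelling via SQL INSERT statements.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE archive_files (
                     file_id   INTEGER PRIMARY KEY,
                     cartridge TEXT,   -- magnetic cartridge volume label
                     project   TEXT,   -- observing project code
                     start_mjd REAL)""")

    # One INSERT per archived correlator file binds the catalog entry
    # to its physical medium.
    con.execute("INSERT INTO archive_files (cartridge, project, start_mjd) "
                "VALUES (?, ?, ?)", ("VOL0042", "BW023", 49473.5))

    for row in con.execute("SELECT * FROM archive_files"):
        print(row)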
The Genetic Basis of Plant Architecture in 10 Maize Recombinant Inbred Line Populations
Pan, Qingchun; Xu, Yuancheng; Peng, Yong; Zhan, Wei; Li, Wenqiang; Li, Lin
2017-01-01
Plant architecture is a key factor affecting planting density and grain yield in maize (Zea mays). However, the genetic mechanisms underlying plant architecture in diverse genetic backgrounds have not been fully addressed. Here, we performed a large-scale phenotyping of 10 plant architecture-related traits and dissected the genetic loci controlling these traits in 10 recombinant inbred line populations derived from 14 diverse genetic backgrounds. Nearly 800 quantitative trait loci (QTLs) with major and minor effects were identified as contributing to the phenotypic variation of plant architecture-related traits. Ninety-two percent of these QTLs were detected in only one population, confirming the diverse genetic backgrounds of the mapping populations and the prevalence of rare alleles in maize. The numbers and effects of QTLs are positively associated with the phenotypic variation in the population, which, in turn, correlates positively with parental phenotypic and genetic variations. A large proportion (38.5%) of QTLs was associated with at least two traits, suggestive of the frequent occurrence of pleiotropic loci or closely linked loci. Key developmental genes, which previously were shown to affect plant architecture in mutant studies, were found to colocalize with many QTLs. Five QTLs were further validated using the segregating populations developed from residual heterozygous lines present in the recombinant inbred line populations. Additionally, one new plant height QTL, qPH3, has been fine-mapped to a 600-kb genomic region where three candidate genes are located. These results provide insights into the genetic mechanisms controlling plant architecture and will benefit the selection of ideal plant architecture in maize breeding. PMID:28838954
An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Lytle, John K. (Technical Monitor)
2002-01-01
Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine, three-dimensional computational fluid dynamics simulations of propulsion systems were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS Architecture, including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure, and its computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms, which has been the focus of NASA Ames. Additional features of the object-oriented architecture that support MultiDisciplinary (MD) coupling, computer-aided design (CAD) access and MD coupling objects will be discussed. Included will be a discussion of the successes, challenges and benefits of implementing this architecture.
System design in an evolving system-of-systems architecture and concept of operations
NASA Astrophysics Data System (ADS)
Rovekamp, Roger N., Jr.
Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., Multi-objective optimization, genetic algorithms, etc) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in literature, as well as on the author's own approaches for power distribution strategies of future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 1 2010-10-01 2010-10-01 false How are funds distributed when a Self-Governance..., DEPARTMENT OF HEALTH AND HUMAN SERVICES TRIBAL SELF-GOVERNANCE Retrocession § 137.250 How are funds distributed when a Self-Governance Tribe fully or partially retrocedes from its compact or funding agreement...
Topological structure and mechanics of glassy polymer networks.
Elder, Robert M; Sirk, Timothy W
2017-11-22
The influence of chain-level network architecture (i.e., topology) on mechanics was explored for unentangled polymer networks using a blend of coarse-grained molecular simulations and graph-theoretic concepts. A simple extension of the Watts-Strogatz model is proposed to control the graph properties of the network such that the corresponding physical properties can be studied with simulations. The architectures of polymer networks assembled with a dynamic curing approach were compared with the extended Watts-Strogatz model and found to agree surprisingly well. The final cured structures of the dynamically assembled networks were nearly an intermediate between lattice and random connections due to restrictions imposed by the finite length of the chains. Further, the uniaxial stress response, character of the bond breaking, and non-affine displacements of fully cured glassy networks were analyzed as a function of the degree of disorder in the network architecture. It is shown that the architecture strongly affects the network stability, flow stress, onset of bond breaking, and ultimate stress while leaving the modulus and yield point nearly unchanged. The results show that internal restrictions imposed by the network architecture alter the chain-level response through changes to the crosslink dynamics in the flow regime and through the degree of coordinated chain failure at the ultimate stress. The properties considered here are shown to be sensitive to even incremental changes to the architecture and, therefore, the overall network architecture, beyond simple defects, is predicted to be a meaningful physical parameter in the mechanics of glassy polymer networks.
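The graph-theoretic side of this is easy to reproduce. The sketch below (network size and degree invented) sweeps the Watts-Strogatz rewiring probability p, the paper's degree of disorder, from lattice-like to random connectivity using networkx; it does not include the molecular-simulation side.

    # Sweep of Watts-Strogatz disorder for a crosslink-connectivity graph.
    import networkx as nx

    for p in (0.0, 0.1, 1.0):     # ordered -> intermediate -> random
        g = nx.connected_watts_strogatz_graph(n=500, k=4, p=p, seed=1)
        print(f"p={p:<4} clustering={nx.average_clustering(g):.3f} "
              f"mean path={nx.average_shortest_path_length(g):.2f}")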
Comparative Study of 3-Dimensional Woven Joint Architectures for Composite Spacecraft Structures
NASA Technical Reports Server (NTRS)
Jones, Justin S.; Polis, Daniel L.; Segal, Kenneth N.
2011-01-01
The National Aeronautics and Space Administration (NASA) Exploration Systems Mission Directorate initiated an Advanced Composite Technology (ACT) Project through the Exploration Technology Development Program in order to support the polymer composite needs for future heavy lift launch architectures. As an example, the large composite structural applications on Ares V inspired the evaluation of advanced joining technologies, specifically 3D woven composite joints, which could be applied to traditionally manufactured barrel segments. Implementation of these 3D woven joint technologies may offer enhancements in damage tolerance without sacrificing weight. However, baseline mechanical performance data is needed to properly analyze the joint stresses and subsequently design/down-select a preform architecture. Six different configurations were designed and prepared for this study; each consisting of a different combination of warp/fill fiber volume ratio and preform interlocking method (z-fiber, fully interlocked, or hybrid). Tensile testing was performed for this study with the enhancement of a dual camera Digital Image Correlation (DIC) system which provides the capability to measure full-field strains and three dimensional displacements of objects under load. As expected, the ratio of warp/fill fiber has a direct influence on strength and modulus, with higher values measured in the direction of higher fiber volume bias. When comparing the z-fiber weave to a fully interlocked weave with comparable fiber bias, the z-fiber weave demonstrated the best performance in two different comparisons. We report the measured tensile strengths and moduli for test coupons from the 6 different weave configurations under study.
Migrating EO/IR sensors to cloud-based infrastructure as service architectures
NASA Astrophysics Data System (ADS)
Berglie, Stephen T.; Webster, Steven; May, Christopher M.
2014-06-01
The Night Vision Image Generator (NVIG), a product of US Army RDECOM CERDEC NVESD, is a visualization tool used widely throughout Army simulation environments to provide fully attributed, synthesized, full-motion video using physics-based sensor and environmental effects. The NVIG relies heavily on contemporary hardware-based acceleration and GPU processing techniques, which push the envelope of both enterprise and commodity-level hypervisor support for providing virtual machines with direct access to hardware resources. The NVIG has successfully been integrated into fully virtual environments where system architectures leverage cloud-based technologies to various extents in order to streamline infrastructure and service management. This paper details the challenges presented to engineers seeking to migrate GPU-bound processes, such as the NVIG, to virtual machines and, ultimately, cloud-based IAS architectures. In addition, it presents the path that led to success for the NVIG. A brief overview of cloud-based infrastructure management tool sets is provided, and several virtual desktop solutions are outlined. A distinction is made between general-purpose virtual desktop technologies and technologies that expose GPU-specific capabilities, including direct rendering and hardware-based video encoding. Candidate hypervisor/virtual machine configurations that nominally satisfy the virtualized hardware-level GPU requirements of the NVIG are presented, and each is subsequently reviewed in light of its implications on higher-level cloud management techniques. Implementation details are included from the hardware level, through the operating system, to the 3D graphics APIs required by the NVIG and similar GPU-bound tools.
NASA Astrophysics Data System (ADS)
Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.
2004-11-01
Recently, surveillance and Automatic Target Recognition (ATR) applications are increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g., telescope, precise optics, cameras, image/computer-vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low-cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers a performance advantage of an order of magnitude over RAM-based (Random Access Memory) search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here. Other SOPC solutions/design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low-cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
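The order-of-magnitude claim has a simple software analogue. The sketch below models a CAM as a content-to-address map answering in one lookup and a RAM search as an address-by-address scan; sizes and key names are invented, and real device behavior is of course hardware-specific.

    # CAM-style associative lookup vs. RAM-style linear scan.
    import time

    patterns = [f"feature-{i:06d}" for i in range(200_000)]
    ram = list(patterns)                                 # address -> content
    cam = {p: addr for addr, p in enumerate(patterns)}   # content -> address

    query = "feature-199999"
    t0 = time.perf_counter(); addr_ram = ram.index(query); t1 = time.perf_counter()
    t2 = time.perf_counter(); addr_cam = cam[query];      t3 = time.perf_counter()

    assert addr_ram == addr_cam
    print(f"RAM scan: {t1 - t0:.6f} s   CAM lookup: {t3 - t2:.6f} s")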
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-11
GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.
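The prosumer notion can be sketched as a purely local decision rule. All prices and quantities below are invented; the point is only that each actor chooses to produce, consume, or store from a shared signal rather than from a central dispatch order.

    # Local prosumer decision rule driven by a price signal.
    def prosumer_net_injection_kw(price, gen_cost=0.15, use_value=0.08, soc=0.5):
        """Positive = inject power into the grid, negative = draw from it."""
        if price > gen_cost:                  # selling pays: generate/discharge
            return 5.0 if soc > 0.2 else 3.0
        if price < use_value and soc < 0.9:   # energy is cheap: consume/charge
            return -4.0
        return 0.0                            # otherwise stay idle

    for price in (0.05, 0.12, 0.30):          # $/kWh signal
        print(f"price {price:.2f}: {prosumer_net_injection_kw(price):+.1f} kW")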
NASA Technical Reports Server (NTRS)
Evans, Richard K.; Hill, Gerald M.
2014-01-01
Very large space environment test facilities present unique engineering challenges in the design of facility data systems. Data systems of this scale must be versatile enough to meet the wide range of data acquisition and measurement requirements from a diverse set of customers and test programs, but also must minimize design changes to maintain reliability and serviceability. This paper presents an overview of the common architecture and capabilities of the facility data acquisition systems available at two of the world's largest space environment test facilities located at the NASA Glenn Research Center's Plum Brook Station in Sandusky, Ohio; namely, the Space Propulsion Research Facility (commonly known as the B-2 facility) and the Space Power Facility (SPF). The common architecture of the data systems is presented along with details on system scalability and efficient measurement systems analysis and verification. The architecture highlights a modular design, which utilizes fully-remotely managed components, enabling the data systems to be highly configurable and support multiple test locations with a wide-range of measurement types and very large system channel counts.
NASA Astrophysics Data System (ADS)
Huber, Katrin; Koebernick, Nicolai; Kerkhofs, Elien; Vanderborght, Jan; Javaux, Mathieu; Vetterlein, Doris; Vereecken, Harry
2014-05-01
A faba bean was grown in a column filled with a sandy soil, which was initially close to saturation and then subjected to a single drying cycle of 30 days. The column was divided into four hydraulically separated compartments using horizontal paraffin layers. Paraffin is impermeable to water but penetrable by roots. Thus, by growing deeper, the roots can reach compartments that still contain water. The root architecture was measured every second day by X-ray CT. Transpiration rate, soil matric potential at four different depths, and leaf area were measured continuously during the experiment. To investigate the influence of the partitioning of available soil water in the soil column on water uptake, we used R-SWMS, a fully coupled root and soil water model [1]. We compared a scenario with and without the split layers and investigated the influence on root xylem pressure. The detailed three-dimensional root architecture was obtained by reconstructing binarized root images manually with a virtual reality system, located at the Juelich Supercomputing Centre [2]. To verify the properties of the root system, we compared total root lengths, root length density distributions and root surface with estimations derived from Minkowski functionals [3]. In the next step, knowing the change of root architecture in time, we could allocate an age to each root segment and use this information to define age-dependent root hydraulic properties that are required to simulate water uptake for the growing root system. The scenario with the split layers showed locally much lower pressures than the scenario without splits. Redistribution of water within the unrestricted soil column led to a more uniform distribution of water uptake and lowered the water stress in the plant. However, comparison of simulated and measured pressure heads with tensiometers suggested that the paraffin layers were not perfectly hydraulically isolating the different soil layers. We could show compensatory water uptake by the roots in the lower and wetter compartments. By comparing transpiration rates of experiments with and without additional paraffin layers, we were able to quantify the restriction of plant growth by available soil water. [1] Javaux, M., T. Schröder, J. Vanderborght, and H. Vereecken (2008), Use of a Three-Dimensional Detailed Modeling Approach for Predicting Root Water Uptake, Vadose Zone Journal, 7(3), 1079-1079. [2] Stingaciu, L., H. Schulz, A. Pohlmeier, S. Behnke, H. Zilken, M. Javaux, H. Vereecken (2013), In Situ Root System Architecture Extraction from Magnetic Resonance Imaging for Water Uptake Modeling, Vadose Zone Journal, 12(1). [3] Koebernick, N., U. Weller, K. Huber, S. Schlüter, H.-J. Vogel, R. Jahn, H. Vereecken, D. Vetterlein, In situ visualisation and quantification of root-system architecture and growth with X-ray CT, Manuscript submitted for publication.
Fully Burdened Cost of Fuel Using Input-Output Analysis
2011-12-01
...wide extension of the Bulk Fuels Distribution Model could be used to replace the current seven-step Fully Burdened Cost of Fuel process with a single step, allowing for less complex and...
An Airborne Onboard Parallel Processing Testbed
NASA Technical Reports Server (NTRS)
Mandl, Daniel J.
2014-01-01
This presentation provides information on the progress of the Intelligent Payload Module (IPM) development effort. In addition, a vision is presented for integrating the IPM architecture with the GeoSocial Application Program Interface (API) architecture to enable efficient distribution of satellite data products.
2011-02-01
[Table-of-contents fragment; recoverable headings: Process Architecture Technology Analysis: Executive; UIMA as Executive; A.4: Flow Code in UIMA.]
An efficient architecture to support digital pathology in standard medical imaging repositories.
Marques Godinho, Tiago; Lebre, Rui; Silva, Luís Bastião; Costa, Carlos
2017-07-01
In the past decade, digital pathology and whole-slide imaging (WSI) have been gaining momentum with the proliferation of digital scanners from different manufacturers. The literature reports significant advantages associated with the adoption of digital images in pathology, namely, improvements in diagnostic accuracy and better support for telepathology. Moreover, it also offers new clinical and research applications. However, numerous barriers have been slowing the adoption of WSI, among which the most important are performance issues associated with storage and distribution of huge volumes of data, and lack of interoperability with other hospital information systems, most notably Picture Archive and Communications Systems (PACS) based on the DICOM standard. This article proposes an architecture of a Web Pathology PACS fully compliant with DICOM standard communications and data formats. The solution includes a PACS Archive responsible for storing whole-slide imaging data in DICOM WSI format and offers a communication interface based on the most recent DICOM Web services. The second component is a zero-footprint viewer that runs in any web-browser. It consumes data using the PACS archive standard web services. Moreover, it features a tiling engine especially suited to deal with the WSI image pyramids. These components were designed with special focus on efficiency and usability. The performance of our system was assessed through a comparative analysis of the state-of-the-art solutions. The results demonstrate that it is possible to have a very competitive solution based on standard workflows. Copyright © 2017 Elsevier Inc. All rights reserved.
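Tile-by-tile access is what makes a zero-footprint WSI viewer workable. The sketch below retrieves one frame of a pyramid level over the DICOMweb WADO-RS frame service; the server URL and UIDs are placeholders, while the URL pattern itself follows the DICOM standard.

    # Fetch a single WSI tile (frame) over DICOMweb WADO-RS.
    import requests

    BASE = "https://pacs.example.org/dicomweb"          # hypothetical archive
    study, series, instance = "1.2.840.1", "1.2.840.2", "1.2.840.3"

    def fetch_tile(frame_number):
        url = (f"{BASE}/studies/{study}/series/{series}"
               f"/instances/{instance}/frames/{frame_number}")
        r = requests.get(url,
                         headers={"Accept": 'multipart/related; type="image/jpeg"'})
        r.raise_for_status()
        return r.content        # multipart body carrying the JPEG tile

    tile = fetch_tile(1)        # a viewer requests only the visible tiles
    print(len(tile), "bytes")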
NASA Astrophysics Data System (ADS)
Gallant, Frederick M.
A novel method of fabricating functionally graded extruded composite materials is proposed for propellant applications using the technology of continuous processing with a Twin-Screw Extruder. The method is applied to the manufacturing of grains for solid rocket motors in an end-burning configuration with an axial gradient in ammonium perchlorate volume fraction and relative coarse/fine particle size distributions. The fabrication of functionally graded extruded polymer composites with either inert or energetic ingredients has yet to be investigated. The lack of knowledge concerning the processing of these novel materials has necessitated that a number of research issues be addressed. Of primary concern is characterizing and modeling the relationship between the extruder screw geometry, transient processing conditions, and the gradient architecture that evolves in the extruder. Recent interpretations of the Residence Time Distributions (RTDs) and Residence Volume Distributions (RVDs) for polymer composites in the TSE are used to develop new process models for predicting gradient architectures in the direction of extrusion. An approach is developed for characterizing the sections of the extrudate using optical, mechanical, and compositional analysis to determine the gradient architectures. The effects of processing on the burning rate properties of extruded energetic polymer composites are characterized for homogeneous formulations over a range of compositions to determine realistic gradient architectures for solid rocket motor applications. The new process models and burning rate properties that have been characterized in this research effort will be the basis for an inverse design procedure that is capable of determining gradient architectures for grains in solid rocket motors that possess tailored burning rate distributions that conform to user-defined performance specifications.
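The link between the RTD and the axial gradient can be stated compactly: the composition leaving the die is the feed-composition history convolved with the RTD. The sketch below assumes a single-ideal-mixer exponential RTD and invented numbers (the work itself uses measured RTDs/RVDs).

    # Axial composition gradient as feed history convolved with the RTD.
    import numpy as np

    dt = 1.0                                  # s
    t = np.arange(0, 300, dt)
    tau = 40.0                                # assumed mean residence time, s
    rtd = np.exp(-t / tau) / tau              # E(t) of a single ideal mixer

    # Step change in ammonium perchlorate feed fraction at t = 60 s
    feed = np.where(t < 60, 0.60, 0.75)

    extrudate = np.convolve(feed, rtd)[:len(t)] * dt   # composition at the die
    print(f"AP fraction at the die at t = 100 s: {extrudate[100]:.3f}")

The smeared step in the computed profile is the gradient architecture; a sharper or broader RTD would respectively steepen or flatten it.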
An Architecture for Controlling Multiple Robots
NASA Technical Reports Server (NTRS)
Aghazarian, Hrand; Pirjanian, Paolo; Schenker, Paul; Huntsberger, Terrance
2004-01-01
The Control Architecture for Multirobot Outpost (CAMPOUT) is a distributed-control architecture for coordinating the activities of multiple robots. In the CAMPOUT, multiple-agent activities and sensor-based controls are derived as group compositions and involve coordination of more basic controllers denoted, for present purposes, as behaviors. The CAMPOUT provides basic mechanistic concepts for representation and execution of distributed group activities. One considers a network of nodes that comprise behaviors (self-contained controllers) augmented with hyper-links, which are used to exchange information between the nodes to achieve coordinated activities. Group behavior is guided by a scripted plan, which encodes a conditional sequence of single-agent activities. Thus, higher-level functionality is composed through coordination of more basic behaviors under the downward task decomposition of a multi-agent planner.
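A toy rendering of a scripted plan makes the composition idea concrete. The behaviors, robots, and plan below are invented, and real CAMPOUT behaviors are sensor-based controllers rather than print statements.

    # Scripted plan as a conditional sequence of single-agent behaviors.
    def drive_to(robot, goal):
        print(f"{robot}: driving to {goal}")
        return True                     # success gates the next plan step

    def grasp(robot, obj):
        print(f"{robot}: grasping {obj}")
        return True

    plan = [("rover1", drive_to, "lander"),
            ("rover2", drive_to, "lander"),
            ("rover1", grasp, "beam")]

    for robot, behavior, arg in plan:
        if not behavior(robot, arg):    # a failed step aborts the group activity
            print("plan aborted")
            break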
UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.
Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L
2012-03-01
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists, as concerns their management and visualization, and for bioinformaticians, as concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory has to tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The UBioLab framework has been designed and developed as a prototype pursuing precisely this objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
Leaf hydraulic architecture correlates with regeneration irradiance in tropical rainforest trees
Lawren Sack; Melvin T. Tyree; N. Michele Holbrook; N. Michele Holbrook
2005-01-01
The leaf hydraulic conductance (Kleaf) is a major determinant of plant water transport capacity. Here, we measured Kleaf, and its basis in the resistances of leaf components, for fully illuminated leaves of five tree species that regenerate in deep shade, and five that regenerate in gaps or clearings, in Panamanian lowland tropical rainforest. We also determined...
ERIC Educational Resources Information Center
Wang, Tsungjuang
2009-01-01
This paper attempts to answer three questions: (1) What are the benefits of fully implementing ICTs for the education of professionals, such as architects? (2) What are the difficulties involved with carrying out these technological changes? and (3) How do these benefits and difficulties interact in a rapidly developing Asian nation such as…
Biologist Postbaccalaureate Fellow | Center for Cancer Research
A fully funded post bac position is available to study tumor microenvironment at the National Cancer Institute on the NIH main campus in Bethesda, MD. Specifically, this opening is for an ongoing project examining the role of tissue architecture and mechanotransduction in the establishment of metastatic lesions, using zebrafish as a model system.
Biologist Postdoctoral Fellow | Center for Cancer Research
A fully funded postdoctoral position is available at the National Cancer Institute on the NIH main campus in Bethesda, MD. Specifically, this opening is for an ongoing project examining the role of tissue architecture and mechanotransduction in the establishment of metastatic lesions, using zebrafish as a model system. The NIH will provide funding and benefits, though
Wireless spread-spectrum telesensor chip with synchronous digital architecture
Smith, Stephen F.; Turner, Gary W.; Wintenberg, Alan L.; Emery, Michael Steven
2005-03-08
A fully integrated wireless spread-spectrum sensor incorporating all elements of an "intelligent" sensor on a single circuit chip is capable of telemetering data to a receiver. Synchronous control of all elements of the chip provides low-cost, low-noise, and highly robust data transmission, in turn enabling the use of low-cost monolithic receivers.
Elements of a modern turbomachinery design system
NASA Astrophysics Data System (ADS)
Jennions, Ian K.
1994-05-01
The aerodynamic design system at GE Aircraft Engines (GEAE) consists of many parts: throughflow, secondary flow, geometry generators, blade-to-blade and fully three-dimensional (3D) analysis. This paper describes each of these elements and discusses optimization and computer architecture issues. Emphasis is placed on those areas in which the company is thought to have special capability.
Data Manipulation in an XML-Based Digital Image Library
ERIC Educational Resources Information Center
Chang, Naicheng
2005-01-01
Purpose: To help to clarify the role of XML tools and standards in supporting transition and migration towards a fully XML-based environment for managing access to information. Design/methodology/approach: The Ching Digital Image Library, built on a three-tier architecture, is used as a source of examples to illustrate a number of methods of data…
ERIC Educational Resources Information Center
Güven, Bülent; Kosa, Temel
2008-01-01
Geometry is the study of shape and space. Without spatial ability, students cannot fully appreciate the natural world. Spatial ability is also very important for work in various fields such as computer graphics, engineering, architecture, and cartography. A number of studies have demonstrated that technology has an important potential to develop…
Utilising eduroam[TM] Architecture in Building Wireless Community Networks
ERIC Educational Resources Information Center
Huhtanen, Karri; Vatiainen, Heikki; Keski-Kasari, Sami; Harju, Jarmo
2008-01-01
Purpose: eduroam[TM] has already been proved to be a scalable, secure and feasible way for universities and research institutions to connect their wireless networks into a WLAN roaming community, but the advantages of eduroam[TM] have not yet been fully discovered in the wireless community networks aimed at regular consumers. The aim of this…
Deterrence and the Future of U.S.-GCC Defense Cooperation: A Strategic Dialogue Event
2015-07-01
…be taken to help incorporate ballistic missile defense systems into the security architecture of Gulf states. Finally, the United States should… The American's copanelist, the Kuwaiti political scientist, added that the GCC has not fully embraced the idea. "It's an Egyptian-led project," he said. "Cairo…
Data Strategies to Support Automated Multi-Sensor Data Fusion in a Service Oriented Architecture
2008-06-01
…and employ vast quantities of content. This dissertation provides two software architectural patterns and an auto-fusion process that guide the development of a distributed… Keywords: Universal Description, Discovery and Integration (UDDI); Simple Object Access Protocol (SOAP); Java; Maritime Domain Awareness (MDA); Business Process Execution Language for Web Services (BPEL4WS).
Acoustic simulation in architecture with parallel algorithm
NASA Astrophysics Data System (ADS)
Li, Xiaohong; Zhang, Xinrong; Li, Dan
2004-03-01
To address the complexity of architectural environments and the need for real-time simulation of architectural acoustics, a parallel radiosity algorithm was developed. The distribution of sound energy in the scene is solved with this method. The impulse responses between sources and receivers in each frequency segment, calculated with multiple processes, are then combined into the whole frequency response. Numerical experiments show that the parallel algorithm improves the acoustic simulation efficiency for complex scenes.
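As a rough illustration of the parallel decomposition described above (simplifying assumptions: one worker per frequency band, and `band_response` standing in for the actual per-band radiosity computation):

```python
# Sketch: compute per-band impulse responses in parallel workers,
# then combine the bands into the whole frequency response.
from multiprocessing import Pool

def band_response(band):
    # Placeholder for the per-band radiosity/impulse-response computation.
    lo, hi = band
    return [1.0 / lo] * 4  # pretend response samples for this band

if __name__ == "__main__":
    bands = [(125, 250), (250, 500), (500, 1000), (1000, 2000)]  # Hz
    with Pool(processes=4) as pool:
        per_band = pool.map(band_response, bands)
    # Combine per-band results into the whole frequency response.
    whole = [sum(samples) for samples in zip(*per_band)]
    print(whole)
```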
2017-06-01
…students in a war-gaming class, and working in tandem with an NPS distance… surface mode ability provides a threat suppression method against small craft attacks and boarding attempts. b. Vulnerability: As a sea-going surface… Design Architecture: With a proposed CONOPS established, the physical architecture can proceed to a more detailed design. For the purpose of…
Fully convolutional neural networks for polyp segmentation in colonoscopy
NASA Astrophysics Data System (ADS)
Brandao, Patrick; Mazomenos, Evangelos; Ciuti, Gastone; Caliò, Renato; Bianchi, Federico; Menciassi, Arianna; Dario, Paolo; Koulaouzidis, Anastasios; Arezzo, Alberto; Stoyanov, Danail
2017-03-01
Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all cancer cases in the world. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on the operator's skills and level of hand-eye coordination. In this work, we propose to adapt fully convolutional neural networks (FCNs) to identify and segment polyps in colonoscopy images. We converted three established networks into a fully convolutional architecture and fine-tuned their learned representations to the polyp segmentation task. We validate our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state-of-the-art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.
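For readers unfamiliar with the conversion step, the sketch below shows the general idea in PyTorch: the dense classifier head is replaced by a 1x1 convolution so the network emits a per-pixel score map that can be upsampled to the input size. The tiny backbone is invented for illustration and is not one of the three networks used in the paper.

```python
# Minimal fully convolutional network sketch (assuming PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 1x1 convolution in place of a fully connected classifier.
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        h = self.classifier(self.features(x))
        # Upsample the coarse score map back to the input resolution.
        return F.interpolate(h, size=x.shape[2:], mode="bilinear",
                             align_corners=False)

scores = TinyFCN()(torch.randn(1, 3, 64, 64))
print(scores.shape)  # torch.Size([1, 2, 64, 64]): one score map per class
```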
Distributed Control Architecture for Gas Turbine Engine. Chapter 4
NASA Technical Reports Server (NTRS)
Culley, Dennis; Garg, Sanjay
2009-01-01
The transformation of engine control systems from centralized to distributed architecture is both necessary and enabling for future aeropropulsion applications. The continued growth of adaptive control applications and the trend toward smaller, lightweight cores is a counter-influence on the weight and volume of control system hardware. A distributed engine control system using high-temperature electronics and open systems communications will reverse the growing ratio of control-system weight to total engine weight and also be a major factor in decreasing the overall cost of ownership for aeropropulsion systems. The implementation of distributed engine control is not without significant challenges: the need for high-temperature electronics, the development of simple, robust communications, and power supplies for the on-board electronics.
A Distributed Ambient Intelligence Based Multi-Agent System for Alzheimer Health Care
NASA Astrophysics Data System (ADS)
Tapia, Dante I.; RodríGuez, Sara; Corchado, Juan M.
This chapter presents ALZ-MAS (Alzheimer multi-agent system), an ambient intelligence (AmI)-based multi-agent system aimed at enhancing assistance and health care for Alzheimer patients. The system makes use of several context-aware technologies that allow it to automatically obtain information from users and the environment in an evenly distributed way, focusing on the characteristics of ubiquity, awareness, intelligence, and mobility, all of which are concepts defined by AmI. ALZ-MAS makes use of a service-oriented multi-agent architecture, called the flexible user and services oriented multi-agent architecture, to distribute resources and enhance its performance. It is demonstrated that a SOA approach is adequate for building distributed and highly dynamic AmI-based multi-agent systems.
Parallel PWMs Based Fully Digital Transmitter with Wide Carrier Frequency Range
Zhou, Bo; Zhang, Kun; Zhou, Wenbiao; Zhang, Yanjun; Liu, Dake
2013-01-01
Carrier-frequency (CF) and intermediate-frequency (IF) pulse-width modulators (PWMs) based on delay lines are proposed, in which baseband signals are conveyed by both the positions and the widths or densities of the carrier-clock pulses. By combining IF-PWM and precorrected CF-PWM, a fully digital transmitter with unit-delay autocalibration is implemented in 180 nm CMOS for high reconfigurability. The proposed architecture achieves a wide CF range of 2 MHz–1 GHz, a high power efficiency of 70%, and a low error vector magnitude (EVM) of 3%, with spectral purity improved by 20 dB compared with existing designs. PMID:24223503
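A toy illustration of the underlying pulse-width modulation principle may help (greatly simplified; the delay-line CF/IF-PWM hardware described above is far more involved, and `pwm` is an invented helper): a baseband sample in [0, 1] sets the duty cycle of each carrier period, so the information rides on pulse widths.

```python
# Toy PWM encoder: one carrier period per baseband sample; the output
# pulse is high while a ramp is below the sample value, so the pulse
# width is proportional to the baseband amplitude.
import numpy as np

def pwm(baseband, samples_per_period=100):
    ramp = np.linspace(0.0, 1.0, samples_per_period, endpoint=False)
    return np.concatenate([(ramp < b).astype(float) for b in baseband])

signal = pwm([0.25, 0.5, 0.9])
print(signal.mean())  # 0.55: average duty cycle equals the baseband mean
```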
NASA Technical Reports Server (NTRS)
Miller, Timothy M.; Costen, Nick; Allen, Christine
2007-01-01
This conference poster reviews the indium hybridization of large-format TES bolometer arrays. We are developing a key technology to enable the next generation of detectors: the hybridization of large-format arrays using indium-bonded detector arrays containing 32x40 elements, which conforms to the NIST multiplexer readout architecture at a 1135 micron pitch. We have fabricated and hybridized mechanical models with the detector chips bonded after being fully back-etched. The mechanical support consists of 30 micron walls between elements, and electrical continuity has been demonstrated for each element. The goal is to hybridize a fully functional array of TES detectors to the NIST readout.
The Deployment of Routing Protocols in Distributed Control Plane of SDN
Jingjing, Zhou; Di, Cheng; Weiming, Wang; Rong, Jin; Xiaochun, Wu
2014-01-01
Software-defined networking (SDN) provides a programmable network by decoupling the data plane, control plane, and application plane from the original closed system, thus revolutionizing the existing network architecture to improve performance and scalability. In this paper, we examine the distributed characteristics of the Kandoo architecture and improve and optimize Kandoo's two levels of controllers, drawing on ideas from the routing control platform (RCP). Finally, we analyze deployment strategies for the BGP and OSPF protocols in a distributed SDN control plane. The simulation results show that our deployment strategies are superior to traditional routing strategies. PMID:25250395
Schäfer, Christian G; Lederle, Christina; Zentel, Kristina; Stühn, Bernd; Gallei, Markus
2014-11-01
In this work, the preparation of highly thermoresponsive and fully reversible stretch-tunable elastomeric opal films featuring switchable structural colors is reported. Novel particle architectures based on poly(diethylene glycol methylether methacrylate-co-ethyl acrylate) (PDEGMEMA-co-PEA) as shell polymer are synthesized via seeded and stepwise emulsion polymerization protocols. The use of DEGMEMA as comonomer and herein established synthetic strategies leads to monodisperse soft shell particles, which can be directly processed to opal films by using the feasible melt-shear organization technique. Subsequent UV crosslinking strategies open access to mechanically stable and homogeneous elastomeric opal films. The structural colors of the opal films feature mechano- and thermoresponsiveness, which is found to be fully reversible. Optical characterization shows that the combination of both stimuli provokes a photonic bandgap shift of more than 50 nm from 560 nm in the stretched state to 611 nm in the fully swollen state. In addition, versatile colorful patterns onto the colloidal crystal structure are produced by spatial UV-induced crosslinking by using a photomask. This facile approach enables the generation of spatially cross-linked switchable opal films with fascinating optical properties. Herein described strategies for the preparation of PDEGMEMA-containing colloidal architectures, application of the melt-shear ordering technique, and patterned crosslinking of the final opal films open access to novel stimuli-responsive colloidal crystal films, which are expected to be promising materials in the field of security and sensing applications. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study.
Dolz, Jose; Desrosiers, Christian; Ben Ayed, Ismail
2018-04-15
This study investigates a 3D and fully convolutional neural network (CNN) for subcortical brain structure segmentation in MRI. 3D CNN architectures have been generally avoided due to their computational and memory requirements during inference. We address the problem via small kernels, allowing deeper architectures. We further model both local and global context by embedding intermediate-layer outputs in the final prediction, which encourages consistency between features extracted at different scales and embeds fine-grained information directly in the segmentation process. Our model is efficiently trained end-to-end on a graphics processing unit (GPU), in a single stage, exploiting the dense inference capabilities of fully convolutional networks. We performed comprehensive experiments over two publicly available datasets. First, we demonstrate state-of-the-art performance on the IBSR dataset. Then, we report a large-scale multi-site evaluation over 1112 unregistered subject datasets acquired from 17 different sites (ABIDE dataset), with ages ranging from 7 to 64 years, showing that our method is robust to various acquisition protocols, demographics and clinical factors. Our method yielded segmentations that are highly consistent with a standard atlas-based approach, while running in a fraction of the time needed by atlas-based methods and avoiding registration/normalization steps. This makes it convenient for massive multi-site neuroanatomical imaging studies. To the best of our knowledge, our work is the first to study subcortical structure segmentation on such large-scale and heterogeneous data. Copyright © 2017 Elsevier Inc. All rights reserved.
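A minimal sketch of the stated design point, assuming PyTorch: stacking small 3x3x3 kernels keeps parameter and memory cost low while allowing a deeper 3D architecture. The channel widths below are illustrative, not the paper's.

```python
# Small-kernel 3D convolution stack: each 3x3x3 layer with padding=1
# preserves the volume size, so layers can be stacked deeply.
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.PReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.PReLU(),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.PReLU(),
)
out = block(torch.randn(1, 1, 32, 32, 32))   # a toy 32^3 MRI patch
print(out.shape)  # torch.Size([1, 32, 32, 32, 32])
```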
A Review of Microgrid Architectures and Control Strategy
NASA Astrophysics Data System (ADS)
Jadav, Krishnarajsinh A.; Karkar, Hitesh M.; Trivedi, I. N.
2017-12-01
In this paper, microgrid architectures and various converter control strategies are reviewed. A microgrid is an interconnected network of distributed energy resources, loads, and energy storage systems; the concept realizes the potential of distributed generators. An AC microgrid interconnects AC distributed generators, such as wind turbines, directly, and DC distributed generators, such as PV arrays and fuel cells, through inverters. In a DC microgrid, conversely, the output of an AC distributed generator must be rectified to DC, while DC distributed generators can be connected directly. A hybrid microgrid avoids the multiple reverse conversions (AC-DC-AC and DC-AC-DC) that occur in individual AC or DC microgrids: all AC distributed generators are connected to the AC microgrid and all DC distributed generators to the DC microgrid. An interlinking converter maintains the power balance between the two, transferring power from one microgrid to the other whenever either is overloaded. Finally, a review of interlinking converter control strategies is presented.
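A toy version of the interlinking converter's power-balance rule may clarify the idea (invented proportional rule and rating; real controllers use droop characteristics and respect converter and source limits):

```python
# Toy interlinking-converter dispatch: move power toward whichever
# side of the hybrid microgrid is overloaded, within the rating.
def interlink_transfer(ac_load, ac_gen, dc_load, dc_gen, limit=50.0):
    """Positive result: power flows AC -> DC; negative: DC -> AC (kW)."""
    ac_surplus = ac_gen - ac_load
    dc_surplus = dc_gen - dc_load
    # Move half of the imbalance, clamped to the converter rating.
    transfer = 0.5 * (ac_surplus - dc_surplus)
    return max(-limit, min(limit, transfer))

print(interlink_transfer(ac_load=80, ac_gen=120, dc_load=60, dc_gen=30))
# 35.0 kW flows from the AC side to the overloaded DC side
```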
Oppenheim, Sara J; Gould, Fred; Hopper, Keith R
2018-03-01
Intraspecific variation in ecologically important traits is a cornerstone of Darwin's theory of evolution by natural selection. The evolution and maintenance of this variation depends on genetic architecture, which in turn determines responses to natural selection. Some models suggest that traits with complex architectures are less likely to respond to selection than those with simple architectures, yet rapid divergence has been observed in such traits. The simultaneous evolutionary lability and genetic complexity of host plant use in the Lepidopteran subfamily Heliothinae suggest that architecture may not constrain ecological adaptation in this group. Here we investigate the response of Chloridea virescens, a generalist that feeds on diverse plant species, to selection for performance on a novel host, Physalis angulata (Solanaceae). P. angulata is the preferred host of Chloridea subflexa, a narrow specialist on the genus Physalis. In previous experiments, we found that the performance of C. subflexa on P. angulata depends on many loci of small effect distributed throughout the genome, but whether the same architecture would be involved in the generalist's adoption of P. angulata was unknown. Here we report a rapid response to selection in C. virescens for performance on P. angulata, and establish that the genetic architecture of intraspecific variation is quite similar to that of the interspecific differences in terms of the number, distribution, and effect sizes of the QTL involved. We discuss the impact of genetic architecture on the ability of Heliothine moths to respond to varying ecological selection pressures.
Parallel integer sorting with medium and fine-scale parallelism
NASA Technical Reports Server (NTRS)
Dagum, Leonardo
1993-01-01
Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128-processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
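For reference, the serial baseline the parallel algorithms are compared against, a bucket sort for integers in a known range, can be sketched as follows (the Cray version in the paper is vectorized; this listing is only illustrative):

```python
# Serial bucket sort for integer keys in [0, key_range): scatter each
# key to its bucket, then concatenate buckets in order.
def bucket_sort(keys, key_range):
    buckets = [[] for _ in range(key_range)]
    for k in keys:
        buckets[k].append(k)                     # scatter phase
    return [k for b in buckets for k in b]       # gather phase

print(bucket_sort([3, 1, 4, 1, 5, 9, 2, 6], key_range=10))
# [1, 1, 2, 3, 4, 5, 6, 9]
```

Queue-sort and barrel-sort can be read as parallelizations of the scatter phase, differing in how messages to the same bucket are batched for the target machine's communication costs.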
NASA Technical Reports Server (NTRS)
Tawel, Raoul (Inventor)
1994-01-01
A method for the rapid learning of nonlinear mappings and topological transformations using a dynamically reconfigurable artificial neural network is presented. This fully-recurrent Adaptive Neuron Model (ANM) network was applied to the highly degenerate inverse kinematics problem in robotics, and its performance evaluation is benchmarked. Once trained, the resulting neuromorphic architecture was implemented in custom analog neural network hardware and the parameters capturing the functional transformation downloaded onto the system. This neuroprocessor, capable of 10^9 ops/sec, was interfaced directly to a three-degree-of-freedom Heathkit robotic manipulator. Calculation of the hardware feed-forward pass for this mapping was benchmarked at approximately 10 microseconds.
Multi-scale evaporator architectures for geothermal binary power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sabau, Adrian S; Nejad, Ali; Klett, James William
2016-01-01
In this paper, novel geometries of heat exchanger architectures are proposed for evaporators that are used in Organic Rankine Cycles. A multi-scale heat exchanger concept was developed by employing successive plenums at several length-scale levels. Flow passages contain features at both macro-scale and micro-scale, which are designed from Constructal Theory principles. Aside from pumping power and overall thermal resistance, several factors were considered in order to fully assess the performance of the new heat exchangers, such as weight of metal structures, surface area per unit volume, and total footprint. Component simulations based on laminar flow correlations for supercritical R134a were used to obtain performance indicators.
Reference Architecture Model Enabling Standards Interoperability.
Blobel, Bernd
2017-01-01
Advanced health and social services paradigms are supported by a comprehensive set of domains managed by different scientific disciplines. Interoperability has to evolve beyond information and communication technology (ICT) concerns to include the real-world business domains and their processes, as well as the individual context of all actors involved. The system must therefore properly reflect the environment in front of and around the computer as an essential, even defining, part of the health system. This paper introduces an ICT-independent, system-theoretical, ontology-driven reference architecture model allowing the representation and harmonization of all domains involved, including the transformation into an appropriate ICT design and implementation. The entire process is completely formalized and can therefore be fully automated.
Traffic and Driving Simulator Based on Architecture of Interactive Motion.
Paz, Alexander; Veeramisti, Naveen; Khaddar, Romesh; de la Fuente-Mella, Hanns; Modorcea, Luiza
2015-01-01
This study proposes an architecture for an interactive motion-based traffic simulation environment. In order to enhance modeling realism involving actual human beings, the proposed architecture integrates multiple types of simulation, including: (i) motion-based driving simulation, (ii) pedestrian simulation, (iii) motorcycling and bicycling simulation, and (iv) traffic flow simulation. The architecture has been designed to enable the simulation of the entire network; as a result, the actual driver, pedestrian, and bike rider can navigate anywhere in the system. In addition, the background traffic interacts with the actual human beings. This is accomplished by using a hybrid meso-microscopic traffic flow simulation modeling approach. The mesoscopic traffic flow simulation model loads the results of a user equilibrium traffic assignment solution and propagates the corresponding traffic through the entire system. The microscopic traffic flow simulation model provides background traffic around the vicinities where actual human beings are navigating the system. The two traffic flow simulation models interact continuously to update system conditions based on the interactions between actual humans and the fully simulated entities. Implementation efforts are currently in progress and some preliminary tests of individual components have been conducted. The implementation of the proposed architecture faces significant challenges ranging from multiplatform and multilanguage integration to multievent communication and coordination. PMID:26491711
Communication Architecture in Mixed-Reality Simulations of Unmanned Systems
2018-01-01
Verification of the correct functionality of multi-vehicle systems in high-fidelity scenarios is required before any deployment of such a complex system, e.g., in missions of remote sensing or in mobile sensor networks. Mixed-reality simulations, where both virtual and physical entities can coexist and interact, have been shown to be beneficial for the development, testing, and verification of such systems. This paper deals with the problems of designing the communication subsystem for such realistic simulations. Requirements of this communication subsystem, including proper addressing, transparent routing, visibility modeling, and message management, are specified prior to designing an appropriate solution. Then, a suitable architecture for this communication subsystem is proposed, together with solutions to the challenges that arise when simultaneous virtual and physical message transmissions occur. The proposed architecture can be utilized as a high-fidelity network simulator for vehicular systems with implicit mobility models that are given by real trajectories of the vehicles. The architecture has been utilized within multiple projects dealing with the development and practical deployment of multi-UAV systems, which supports the architecture's viability and advantages. The provided experimental results show the achieved similarity of the communication characteristics of the fully deployed hardware setup to the setup utilizing the proposed mixed-reality architecture. PMID:29538290
OWLS as platform technology in OPTOS satellite
NASA Astrophysics Data System (ADS)
Rivas Abalo, J.; Martínez Oter, J.; Arruego Rodríguez, I.; Martín-Ortega Rico, A.; de Mingo Martín, J. R.; Jiménez Martín, J. J.; Martín Vodopivec, B.; Rodríguez Bustabad, S.; Guerrero Padrón, H.
2017-12-01
The aim of this work is to present the Optical Wireless Links for intra-Spacecraft communications (OWLS) technology as a platform technology for space missions, and more specifically its use within the on-board communication system of the OPTOS satellite. OWLS technology was proposed by the Instituto Nacional de Técnica Aeroespacial (INTA) at the end of the 1990s and developed over 10 years through a number of ground demonstrations, technological developments and in-orbit experiments. Its main benefits are mass reduction, flexibility, and simplification of the Assembly, Integration and Test phases. The final step was to go from an experimental technology to a platform one. This step was carried out in the OPTOS satellite, which makes use of optical wireless links in a distributed network based on an OWLS implementation of the CAN bus. OPTOS is the first fully wireless satellite. It is based on the triple configuration (3U) of the popular CubeSat standard, and was completely built at INTA. It was conceived to provide the Spanish scientific community with a fast-development, low-cost, yet reliable platform, acting as a test bed for space-borne science and technology. OPTOS presents a distributed OBDH architecture in which all of the satellite's subsystems and payloads incorporate a small Distributed On-Board Computer (OBC) Terminal (DOT). All DOTs (7 in total) communicate with one another by means of the OWLS-CAN, which enables full data-sharing capabilities. This collaboration allows them to perform all tasks that would normally be carried out by a centralized on-board computer.
Trends and New Directions in Software Architecture
2014-10-10
[Presentation slide fragments: frameworks; open source; cloud strategies; NoSQL; machine learning; MDD; incremental approaches; dashboards; distributed development; as complexity grows, NoSQL models are not created equal. Current research (2014): Lightweight Evaluation and Architecture Prototyping for Big Data.]
NASA Astrophysics Data System (ADS)
Allouache, Hadj; Zegaoui, Abdallah; Boutoubat, Mohamed; Bokhtache, Aicha Aissa; Kessaissia, Fatma Zohra; Charles, Jean-Pierre; Aillerie, Michel
2018-05-01
This paper focuses on a photovoltaic generator feeding a load via a boost converter in a distributed PV architecture. The principal target is the evaluation of the efficiency of a distributed photovoltaic architecture powering a direct current (DC) PV bus. This task is achieved by outlining an original way of tracking the Maximum Power Point (MPP) that takes into account the effects of load variations and duty cycle on the electrical quantities of the boost converter and on the apparent output impedance of the PV generator. Thereafter, in a given sized PV system, we analyze the influence of load variations on the behavior of the boost converter and deduce the limits imposed by the load on the DC PV bus. The simultaneous influences of (1) the variation of the boost converter duty cycle and (2) the load power on the parameters of the various components of the photovoltaic chain and on the boost converter performance are clearly presented, as deduced by simulation.
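As a generic illustration of duty-cycle-based MPP tracking (the paper's specific impedance-based method is not reproduced here; `measure_pv` is an invented stand-in for sensing the PV operating point at a given boost-converter duty cycle), a perturb-and-observe loop looks like this:

```python
# Perturb-and-observe MPPT sketch: nudge the duty cycle, keep the
# direction while power rises, reverse it when power falls.
def measure_pv(duty):
    # Hypothetical PV power curve peaking near duty = 0.6 (watts).
    return max(0.0, 100.0 - 400.0 * (duty - 0.6) ** 2)

def perturb_and_observe(duty=0.3, step=0.01, iters=100):
    power = measure_pv(duty)
    for _ in range(iters):
        new_duty = min(0.95, max(0.05, duty + step))
        new_power = measure_pv(new_duty)
        if new_power < power:
            step = -step          # went past the peak; reverse direction
        duty, power = new_duty, new_power
    return duty, power

print(perturb_and_observe())  # oscillates near the maximum power point
```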
A Network Scheduling Model for Distributed Control Simulation
NASA Technical Reports Server (NTRS)
Culley, Dennis; Thomas, George; Aretskin-Hariton, Eliot
2016-01-01
Distributed engine control is a hardware technology that radically alters the architecture of aircraft engine control systems. Of its own accord, it does not change the function of control; rather, it seeks to address the implementation issues for weight-constrained vehicles that can limit overall system performance and increase life-cycle cost. However, an inherent feature of this technology, digital communication networks, alters the flow of information between critical elements of the closed-loop control. Whereas control information has been available continuously in conventional centralized control architectures by virtue of analog signaling, moving forward it will be transmitted digitally, in serial fashion, over the network(s) in distributed control architectures. An underlying effect is that all of the control information arrives asynchronously and may not be available every loop interval of the controller; therefore it must be scheduled. This paper proposes a methodology for modeling the nominal data flow over these networks and examines the resulting impact for an aero turbine engine system simulation.
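A minimal simulation of the scheduling issue described above (invented periods and loop rate; the controller simply holds the last value received whenever no fresh sample has arrived within its loop interval):

```python
# Control loop at 100 Hz consuming sensor messages that arrive over a
# network at 40 Hz: samples arrive asynchronously, so the loop acts on
# the most recently delivered value rather than a fresh one each tick.
def simulate(loop_dt=0.010, msg_period=0.025, t_end=0.1):
    last_value, next_msg, t = None, 0.0, 0.0
    while t < t_end + 1e-9:
        while next_msg <= t:                  # deliver pending messages
            last_value = f"sample@{next_msg:.3f}s"
            next_msg += msg_period
        print(f"loop t={t:.3f}s uses {last_value}")  # hold last value
        t += loop_dt

simulate()
```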
A New On-Line Diagnosis Protocol for the SPIDER Family of Byzantine Fault Tolerant Architectures
NASA Technical Reports Server (NTRS)
Geser, Alfons; Miner, Paul S.
2004-01-01
This paper presents the formal verification of a new protocol for online distributed diagnosis for the SPIDER family of architectures. An instance of the Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) architecture consists of a collection of processing elements communicating over a Reliable Optical Bus (ROBUS). The ROBUS is a specialized fault-tolerant device that guarantees Interactive Consistency, Distributed Diagnosis (Group Membership), and Synchronization in the presence of a bounded number of physical faults. Formal verification of the original SPIDER diagnosis protocol provided a detailed understanding that led to the discovery of a significantly more efficient protocol. The original protocol was adapted from the formally verified protocol used in the MAFT architecture. It required O(N) message exchanges per defendant to correctly diagnose failures in a system with N nodes. The new protocol achieves the same diagnostic fidelity, but only requires O(1) exchanges per defendant. This paper presents this new diagnosis protocol and a formal proof of its correctness using PVS.
Numerical Propulsion System Simulation Architecture
NASA Technical Reports Server (NTRS)
Naiman, Cynthia G.
2004-01-01
The Numerical Propulsion System Simulation (NPSS) is a framework for performing analysis of complex systems. Because the NPSS was developed using the object-oriented paradigm, the resulting architecture is an extensible and flexible framework that is currently being used by a diverse set of participants in government, academia, and the aerospace industry. NPSS is being used by over 15 different institutions to support rockets, hypersonics, power and propulsion, fuel cells, ground based power, and aerospace. Full system-level simulations as well as subsystems may be modeled using NPSS. The NPSS architecture enables the coupling of analyses at various levels of detail, which is called numerical zooming. The middleware used to enable zooming and distributed simulations is the Common Object Request Broker Architecture (CORBA). The NPSS Developer's Kit offers tools for the developer to generate CORBA-based components and wrap codes. The Developer's Kit enables distributed multi-fidelity and multi-discipline simulations, preserves proprietary and legacy codes, and facilitates addition of customized codes. The platforms supported are PC, Linux, HP, Sun, and SGI.
NASA Technical Reports Server (NTRS)
Ashworth, Barry R.
1989-01-01
A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues that affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and the interfaces throughout the system are presented.
Implementation of an Integrated On-Board Aircraft Engine Diagnostic Architecture
NASA Technical Reports Server (NTRS)
Armstrong, Jeffrey B.; Simon, Donald L.
2012-01-01
An on-board diagnostic architecture for aircraft turbofan engine performance trending, parameter estimation, and gas-path fault detection and isolation has been developed and evaluated in a simulation environment. The architecture incorporates two independent models: a real-time self-tuning performance model providing parameter estimates, and a performance baseline model for diagnostic purposes reflecting long-term engine degradation trends. The architecture was evaluated using flight profiles generated from a nonlinear model with realistic fleet engine health degradation distributions and sensor noise. It was found to produce acceptable estimates of engine health and unmeasured parameters, and the integrated diagnostic algorithms were able to perform correct fault isolation in approximately 70 percent of the tested cases.
Advanced information processing system for advanced launch system: Avionics architecture synthesis
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.
1991-01-01
The Advanced Information Processing System (AIPS) is a fault-tolerant distributed computer system architecture that was developed to meet the real-time computational needs of advanced aerospace vehicles. One such vehicle is the Advanced Launch System (ALS), being developed jointly by NASA and the Department of Defense to launch heavy payloads into low Earth orbit at one tenth the cost (per pound of payload) of current launch vehicles. An avionics architecture that utilizes the AIPS hardware and software building blocks was synthesized for the ALS. The AIPS-for-ALS architecture synthesis process, starting with the ALS mission requirements and ending with an analysis of the candidate ALS avionics architecture, is described.
Avionics architecture studies for the entry research vehicle
NASA Technical Reports Server (NTRS)
Dzwonczyk, M. J.; Mckinney, M. F.; Adams, S. J.; Gauthier, R. J.
1989-01-01
This report is the culmination of a year-long investigation of the avionics architecture for NASA's Entry Research Vehicle (ERV). The Entry Research Vehicle is conceived as an unmanned, autonomous spacecraft to be deployed from the Shuttle. It will perform various aerodynamic and propulsive maneuvers in orbit and land at Edwards AFB after a 5 to 10 hour mission. The design and analysis of the vehicle's avionics architecture are detailed here. The architecture consists of a central, triply redundant, ultra-reliable fault-tolerant processor attached to three replicated and distributed MIL-STD-1553 buses for input and output. The reliability analysis is also presented. The architecture was found to be sufficiently reliable for the ERV mission plan.
Alternative electrical distribution system architectures for automobiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afridi, K.K.; Tabors, R.D.; Kassakian, J.G.
At present most automobiles use a 12 V electrical system with point-to-point wiring. The capability of this architecture to meet the needs of future electrical loads is questionable. Furthermore, with the development of electric vehicles (EVs) there is a greater need for a better architecture. In this paper the authors outline the limitations of the conventional architecture and identify alternatives. They also present a multi-attribute trade-off methodology which compares these alternatives and identifies a set of Pareto optimal architectures. The system attributes traded off are cost, weight, losses and probability of failure. These are calculated by a computer program that has built-in component attribute models. System attributes of a few dozen architectures are also reported and the results analyzed. 17 refs.
Distributed Learning Metadata Standards
ERIC Educational Resources Information Center
McClelland, Marilyn
2004-01-01
Significant economies can be achieved in distributed learning systems architected with a focus on interoperability and reuse. The key building blocks of an efficient distributed learning architecture are the use of standards and XML technologies. The goal of plug and play capability among various components of a distributed learning system…
NASA Astrophysics Data System (ADS)
Leuchter, S.; Reinert, F.; Müller, W.
2014-06-01
Procurement and design of system architectures capable of network-centric operations demand an assessment scheme for comparing alternative realizations. In this contribution, an assessment method for system architectures targeted at the C4ISR domain is presented. The method addresses the integration capability of software systems from a complex and distributed software-system perspective, focusing on communication, interfaces and software. The aim is to evaluate the capability to integrate a system or its functions within a system-of-systems network. The method uses approaches from software architecture quality assessment and applies them at the system architecture level. It features a specific goal tree of several dimensions that are relevant for enterprise integration. These dimensions have to be weighted against each other and aggregated using methods from normative decision theory in order to reflect the intention of the particular enterprise integration effort. The indicators and measurements for many of the considered quality features rely on a model-based view of systems, networks, and the enterprise. This means the method is applicable to system-of-systems specifications based on enterprise architectural frameworks that rely on defined meta-models or domain ontologies for defining views and viewpoints. In the defense context we use the NATO Architecture Framework (NAF) to ground the respective system models. The proposed assessment method allows evaluating and comparing competing system designs with regard to their future integration potential. It is a contribution to the system-of-systems engineering methodology.
Robust quantum network architectures and topologies for entanglement distribution
NASA Astrophysics Data System (ADS)
Das, Siddhartha; Khatri, Sumeet; Dowling, Jonathan P.
2018-01-01
Entanglement distribution is a prerequisite for several important quantum information processing and computing tasks, such as quantum teleportation, quantum key distribution, and distributed quantum computing. In this work, we focus on two-dimensional quantum networks based on optical quantum technologies using dual-rail photonic qubits for the building of a fail-safe quantum internet. We lay out a quantum network architecture for entanglement distribution between distant parties using a Bravais lattice topology, with the technological constraint that quantum repeaters equipped with quantum memories are not easily accessible. We provide a robust protocol for simultaneous entanglement distribution between two distant groups of parties on this network. We also discuss a memory-based quantum network architecture that can be implemented on networks with an arbitrary topology. We examine networks with bow-tie lattice and Archimedean lattice topologies and use percolation theory to quantify the robustness of the networks. In particular, we provide figures of merit on the loss parameter of the optical medium that depend only on the topology of the network and quantify the robustness of the network against intermittent photon loss and intermittent failure of nodes. These figures of merit can be used to compare the robustness of different network topologies in order to determine the best topology in a given real-world scenario, which is critical in the realization of the quantum internet.
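The percolation analysis can be illustrated with a small Monte Carlo sketch: each edge of an L-by-L square lattice survives with probability p (e.g., a photon not being lost on that link), and the fraction of nodes in the largest connected cluster indicates robustness. The square lattice and parameters below are stand-ins for the Bravais and Archimedean topologies analyzed in the paper.

```python
# Bond percolation on an L x L square lattice via union-find:
# keep each edge with probability p and measure the largest
# connected fraction of the network.
import random

def largest_cluster_fraction(L=20, p=0.6, seed=1):
    random.seed(seed)
    parent = list(range(L * L))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)
    for x in range(L):
        for y in range(L):
            if x + 1 < L and random.random() < p:
                union(x * L + y, (x + 1) * L + y)
            if y + 1 < L and random.random() < p:
                union(x * L + y, x * L + y + 1)
    sizes = {}
    for i in range(L * L):
        r = find(i)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / (L * L)

for p in (0.3, 0.5, 0.7):   # bond percolation threshold is 0.5 here
    print(p, largest_cluster_fraction(p=p))
```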
NASA Astrophysics Data System (ADS)
Kang, Soon Ju; Moon, Jae Chul; Choi, Doo-Hyun; Choi, Sung Su; Woo, Hee Gon
1998-06-01
The inspection of steam-generator (SG) tubes in a nuclear power plant (NPP) is a time-consuming, laborious, and hazardous task because of several hard constraints, such as a highly radiated working environment, a tight task schedule, and the need for many experienced human inspectors. This paper presents a new distributed intelligent system architecture for automating traditional inspection methods. The proposed architecture adopts three basic technical strategies in order to reduce the complexity of system implementation. The first is the distribution of the task into four stages: inspection planning (IP), signal acquisition (SA), signal evaluation (SE), and inspection data management (IDM). Consequently, dedicated subsystems for the automation of each stage can be designed and implemented separately. The second strategy is the inclusion of several useful artificial intelligence techniques for implementing the subsystems of each stage, such as an expert system for IP and SE, and machine vision and remote robot control techniques for SA. The third strategy is the integration of the subsystems using a client/server-based distributed computing architecture and a centralized database management concept. Through the use of the proposed architecture, human errors that can occur during inspection are minimized, because the element of human intervention is almost eliminated, while the productivity of human inspectors is correspondingly increased. A prototype of the proposed system has been developed and successfully tested over the last six years in domestic NPPs.
NASA Technical Reports Server (NTRS)
Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary
1996-01-01
We have developed and deployed a distributed-parallel storage system (DPSS) in several high-speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high-speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, a large-scale, high-speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high-speed distributed applications. Finally, the DPSS is part of an overall architecture for using high-speed WANs to enable the routine, location-independent use of large data objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.
A Novel Software Architecture for the Provision of Context-Aware Semantic Transport Information
Moreno, Asier; Perallos, Asier; López-de-Ipiña, Diego; Onieva, Enrique; Salaberria, Itziar; Masegosa, Antonio D.
2015-01-01
The effectiveness of Intelligent Transportation Systems depends largely on the ability to integrate information from diverse sources and the suitability of this information for the specific user. This paper describes a new approach for the management and exchange of this information, related to multimodal transportation. A novel software architecture is presented, with particular emphasis on the design of the data model and the enablement of services for information retrieval, thereby obtaining a semantic model for the representation of transport information. The publication of transport data as semantic information is established through the development of a Multimodal Transport Ontology (MTO) and the design of a distributed architecture allowing dynamic integration of transport data. The advantages afforded by the proposed system due to the use of Linked Open Data and a distributed architecture are stated, comparing it with other existing solutions. The adequacy of the information generated in regard to the specific user’s context is also addressed. Finally, a working solution of a semantic trip planner using actual transport data and running on the proposed architecture is presented, as a demonstration and validation of the system. PMID:26016915
Agent Collaborative Target Localization and Classification in Wireless Sensor Networks
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of the heterogeneous agent architecture for WSNs proposed in this paper. The proposed agent architecture views a WSN as a multi-agent system, and mobile agents are employed to reduce in-network communication. Within this architecture, an energy-based acoustic localization algorithm is proposed, in which an estimate of the target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machines (SVMs). Mobile agents are employed for feature extraction and distributed SVM learning to reduce the communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real-world experiments with MICAz sensor nodes were conducted for vehicle localization and classification. Experimental results show that the proposed agent architecture remarkably facilitates WSN design and algorithm implementation; the localization and classification algorithms also prove to be accurate and energy efficient.
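A simplified version of energy-based localization by steepest descent is sketched below, assuming an idealized 1/d^2 acoustic energy decay with known source strength, numerical gradients, and a fixed iteration count in place of the paper's adaptive termination condition.

```python
# Energy-based source localization: fit (x, y) so that the modeled
# received energies S/d^2 match the measured ones, by gradient descent
# on the squared residuals (finite-difference gradients for brevity).
def loss(x, y, nodes, energies, S=100.0):
    total = 0.0
    for (nx, ny), e in zip(nodes, energies):
        d2 = (x - nx) ** 2 + (y - ny) ** 2 + 1e-9
        total += (e - S / d2) ** 2
    return total

def localize(nodes, energies, x=0.0, y=0.0, lr=0.01, iters=2000):
    h = 1e-4
    for _ in range(iters):
        gx = (loss(x + h, y, nodes, energies) - loss(x - h, y, nodes, energies)) / (2 * h)
        gy = (loss(x, y + h, nodes, energies) - loss(x, y - h, nodes, energies)) / (2 * h)
        x, y = x - lr * gx, y - lr * gy        # steepest descent step
    return x, y

nodes = [(0, 0), (10, 0), (0, 10), (10, 10)]
true = (6.0, 4.0)
energies = [100.0 / ((true[0] - nx) ** 2 + (true[1] - ny) ** 2) for nx, ny in nodes]
print(localize(nodes, energies, x=5.0, y=5.0))  # approaches (6, 4)
```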
Assured Mission Support Space Architecture (AMSSA) study
NASA Technical Reports Server (NTRS)
Hamon, Rob
1993-01-01
The assured mission support space architecture (AMSSA) study was conducted with the overall goal of developing a long-term requirements-driven integrated space architecture to provide responsive and sustained space support to the combatant commands. Although derivation of an architecture was the focus of the study, there are three significant products from the effort. The first is a philosophy that defines the necessary attributes for the development and operation of space systems to ensure an integrated, interoperable architecture that, by design, provides a high degree of combat utility. The second is the architecture itself; based on an interoperable system-of-systems strategy, it reflects a long-range goal for space that will evolve as user requirements adapt to a changing world environment. The third product is the framework of a process that, when fully developed, will provide essential information to key decision makers for space systems acquisition in order to achieve the AMSSA goal. It is a categorical imperative that military space planners develop space systems that will act as true force multipliers. AMSSA provides the philosophy, process, and architecture that, when integrated with the DOD requirements and acquisition procedures, can yield an assured mission support capability from space to the combatant commanders. An important feature of the AMSSA initiative is the participation by every organization that has a role or interest in space systems development and operation. With continued community involvement, the concept of the AMSSA will become a reality. In summary, AMSSA offers a better way to think about space (philosophy) that can lead to the effective utilization of limited resources (process) with an infrastructure designed to meet the future space needs (architecture) of our combat forces.
The Genetic Basis of Plant Architecture in 10 Maize Recombinant Inbred Line Populations.
Pan, Qingchun; Xu, Yuancheng; Li, Kun; Peng, Yong; Zhan, Wei; Li, Wenqiang; Li, Lin; Yan, Jianbing
2017-10-01
Plant architecture is a key factor affecting planting density and grain yield in maize ( Zea mays ). However, the genetic mechanisms underlying plant architecture in diverse genetic backgrounds have not been fully addressed. Here, we performed a large-scale phenotyping of 10 plant architecture-related traits and dissected the genetic loci controlling these traits in 10 recombinant inbred line populations derived from 14 diverse genetic backgrounds. Nearly 800 quantitative trait loci (QTLs) with major and minor effects were identified as contributing to the phenotypic variation of plant architecture-related traits. Ninety-two percent of these QTLs were detected in only one population, confirming the diverse genetic backgrounds of the mapping populations and the prevalence of rare alleles in maize. The numbers and effects of QTLs are positively associated with the phenotypic variation in the population, which, in turn, correlates positively with parental phenotypic and genetic variations. A large proportion (38.5%) of QTLs was associated with at least two traits, suggestive of the frequent occurrence of pleiotropic loci or closely linked loci. Key developmental genes, which previously were shown to affect plant architecture in mutant studies, were found to colocalize with many QTLs. Five QTLs were further validated using the segregating populations developed from residual heterozygous lines present in the recombinant inbred line populations. Additionally, one new plant height QTL, qPH3 , has been fine-mapped to a 600-kb genomic region where three candidate genes are located. These results provide insights into the genetic mechanisms controlling plant architecture and will benefit the selection of ideal plant architecture in maize breeding. © 2017 American Society of Plant Biologists. All Rights Reserved.
Model-Unified Planning and Execution for Distributed Autonomous System Control
NASA Technical Reports Server (NTRS)
Aschwanden, Pascal; Baskaran, Vijay; Bernardini, Sara; Fry, Chuck; Moreno, Maria; Muscettola, Nicola; Plaunt, Chris; Rijsman, David; Tompkins, Paul
2006-01-01
The Intelligent Distributed Execution Architecture (IDEA) is a real-time architecture that exploits artificial intelligence planning as the core reasoning engine for interacting autonomous agents. Rather than enforcing separate deliberation and execution layers, IDEA unifies them under a single planning technology. Deliberative and reactive planners reason about and act according to a single representation of the past, present and future domain state. The domain state evolves according to the rules dictated by a declarative model of the subsystem to be controlled, the internal processes of the IDEA controller, and interactions with other agents. We present IDEA concepts - modeling, the IDEA core architecture, the unification of deliberation and reaction under planning - and illustrate its use in a simple example. Finally, we present several real-world applications of IDEA, and compare IDEA to other high-level control approaches.
Distributed and parallel approach for handle and perform huge datasets
NASA Astrophysics Data System (ADS)
Konopko, Joanna
2015-12-01
Big Data refers to the dynamic, large and disparate volumes of data created by many different sources (tools, machines, sensors, mobile devices) that are uncorrelated with each other. It requires new, innovative and scalable technology to collect, host and analytically process such vast amounts of data, and hence a proper architecture for the systems that handle huge datasets. In this paper, distributed and parallel system architectures are compared using the example of the Hadoop MapReduce (MR) platform and a parallel database platform (DBMS). The paper also analyzes the problem of extracting and handling valuable information from petabytes of data. Both paradigms, MapReduce and parallel DBMS, are described and compared. A hybrid architecture approach is also proposed, which could be used to solve the analyzed problem of storing and processing Big Data.
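A minimal word-count example captures the MapReduce paradigm compared above (a pure-Python stand-in; a real Hadoop job distributes the map and reduce tasks across a cluster and shuffles intermediate pairs over the network):

```python
# MapReduce in miniature: map emits (key, value) pairs, shuffle groups
# values by key, reduce aggregates each group.
from collections import defaultdict

def map_phase(records):
    for record in records:
        for word in record.split():
            yield word, 1                     # emit (key, value) pairs

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)             # group values by key
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

data = ["big data big clusters", "parallel data engines"]
print(reduce_phase(shuffle(map_phase(data))))
# {'big': 2, 'data': 2, 'clusters': 1, 'parallel': 1, 'engines': 1}
```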
Quantifying loopy network architectures.
Katifori, Eleni; Magnasco, Marcelo O
2012-01-01
Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.
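One of the tree metrics mentioned above, the Strahler order, is easy to state in code. The sketch below computes it for a binary tree encoded as nested tuples; the encoding is an invented stand-in for the paper's data structures.

```python
# Strahler order of a binary tree: a leaf has order 1; an internal
# node has order max(a, b) if its children's orders differ, and
# a + 1 if they are equal. Leaves are encoded as None, internal
# nodes as (left, right) tuples.
def strahler(node):
    if node is None:                 # leaf reached
        return 1
    left, right = node
    a, b = strahler(left), strahler(right)
    return a + 1 if a == b else max(a, b)

# A small tree: two leaves merging, then merging with another leaf.
tree = ((None, None), None)
print(strahler(tree))  # 2
```

The bifurcation ratio is then obtained by counting branches of each Strahler order and taking ratios of successive counts.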
Nano-Architecture of nitrogen-doped graphene films synthesized from a solid CN source.
Maddi, Chiranjeevi; Bourquard, Florent; Barnier, Vincent; Avila, José; Asensio, Maria-Carmen; Tite, Teddy; Donnet, Christophe; Garrelie, Florence
2018-02-19
New synthesis routes to tailor graphene properties by controlling the concentration and chemical configuration of dopants show great promise. Herein we report the direct, reproducible synthesis of 2-3% nitrogen-doped 'few-layer' graphene from a solid-state carbon nitride (a-C:N) source synthesized by femtosecond pulsed laser ablation. Analytical investigations, including synchrotron facilities, made it possible to identify the configuration and chemistry of the nitrogen-doped graphene films. Auger mapping successfully quantified the 2D distribution of the number of graphene layers over the surface, and hence offers a new, original way to probe the architecture of graphene sheets. The films mainly consist of a Bernal ABA-stacked three-layer architecture, with a layer number distribution ranging from 2 to 6. Nitrogen doping affects the charge carrier distribution but has no significant effect on the number of lattice defects or disorders, compared to undoped graphene synthesized under similar conditions. Pyridinic, quaternary and pyrrolic nitrogen are the dominant chemical configurations, pyridinic N being preponderant at the scale of the film architecture. This work opens highly promising perspectives for the development of self-organized nitrogen-doped graphene materials, as synthesized from solid carbon nitride, with various functionalities, and for the characterization of 2D materials using a significant new methodology.
Architectures for mission control at the Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Davidson, Reger A.; Murphy, Susan C.
1992-01-01
JPL is currently converting to an innovative control center data system with a distributed, open architecture for telemetry delivery, enabling advances in automation, operability, and new technology in mission operations at JPL. The scope of mission control within mission operations is examined. The concepts of a mission control center, and how operability can affect the design of a control center data system, are discussed. Examples of JPL's mission control architecture, data system development, and prototype efforts at the JPL Operations Engineering Laboratory are provided. Strategies for the future of mission control architectures are outlined.
Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.
NASA Technical Reports Server (NTRS)
Boesen, Michael Reibel; Madsen, Jan; Keymeulen, Didier
2011-01-01
This paper presents the current state of the autonomous dynamically self-organizing and self-healing electronic DNA (eDNA) hardware architecture (patent pending). In its current prototype state, the eDNA architecture is capable of responding to multiple injected faults by autonomously reconfiguring itself to accommodate the fault and keep the application running. This paper will also disclose advanced features currently available in the simulation model only. These features are future work and will soon be implemented in hardware. Finally we will describe step-by-step how an application is implemented on the eDNA architecture.
Linking and Combining Distributed Operations Facilities using NASA's "GMSEC" Systems Architectures
NASA Technical Reports Server (NTRS)
Smith, Danford; Grubb, Thomas; Esper, Jaime
2008-01-01
NASA's Goddard Mission Services Evolution Center (GMSEC) ground system architecture has been in development since late 2001, has successfully supported eight orbiting satellites and is being applied to many of NASA's future missions. GMSEC can be considered an event-driven service-oriented architecture built around a publish/subscribe message bus middleware. This paper briefly discusses the GMSEC technical approaches which have led to significant cost savings and risk reduction for NASA missions operated at the Goddard Space Flight Center (GSFC). The paper then focuses on the development and operational impacts of extending the architecture across multiple mission operations facilities.
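The publish/subscribe pattern at the core of such a message-bus architecture can be illustrated in a few lines; this is a minimal in-process sketch with invented topic names, not the GMSEC middleware API.

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process publish/subscribe bus."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to every component subscribed to this topic; publishers
        # and subscribers never reference each other directly, which is
        # what lets facilities be linked and recombined loosely.
        for callback in self._subscribers[topic]:
            callback(message)

bus = MessageBus()
bus.subscribe("telemetry.health", lambda m: print("archiver got:", m))
bus.subscribe("telemetry.health", lambda m: print("display got:", m))
bus.publish("telemetry.health", {"voltage": 28.1})
```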
Cathode architectures for alkali metal / oxygen batteries
Visco, Steven J; Nimon, Vitaliy; De Jonghe, Lutgard C; Volfkovich, Yury; Bograchev, Daniil
2015-01-13
Electrochemical energy storage devices, such as alkali metal-oxygen battery cells (e.g., non-aqueous lithium-air cells), have a cathode architecture with a porous structure and pore composition that is tailored to improve cell performance, especially as it pertains to one or more of the discharge/charge rate, cycle life, and delivered ampere-hour capacity. A porous cathode architecture having a pore volume that is derived from pores of varying radii wherein the pore size distribution is tailored as a function of the architecture thickness is one way to achieve one or more of the aforementioned cell performance improvements.
10Gbps monolithic silicon FTTH transceiver without laser diode for a new PON configuration.
Zhang, Jing; Liow, Tsung-Yang; Lo, Guo-Qiang; Kwong, Dim-Lee
2010-03-01
A new passive optical network (PON) configuration and a novel silicon photonic transceiver architecture for optical network unit (ONU) are proposed, eliminating the need for an internal laser source in ONU. The Si transceiver is fully monolithic, includes integrated wavelength division multiplexing (WDM) filters, modulators (MOD) and photo-detectors (PD), and demonstrates low-cost high volume manufacturability.
Deshpande, Gopikrishna; Wang, Peng; Rangaprakash, D; Wilamowski, Bogdan
2015-12-01
Automated recognition and classification of brain diseases are of tremendous value to society. Attention deficit hyperactivity disorder (ADHD) is a diverse spectrum disorder whose diagnosis is based on behavior and hence will benefit from classification utilizing objective neuroimaging measures. Toward this end, an international competition was conducted for classifying ADHD using functional magnetic resonance imaging data acquired from multiple sites worldwide. Here, we consider the data from this competition as an example to illustrate the utility of fully connected cascade (FCC) artificial neural network (ANN) architecture for performing classification. We employed various directional and nondirectional brain connectivity-based methods to extract discriminative features which gave better classification accuracy compared to raw data. Our accuracy for distinguishing ADHD from healthy subjects was close to 90% and between the ADHD subtypes was close to 95%. Further, we show that, if properly used, FCC ANN performs very well compared to other classifiers such as support vector machines in terms of accuracy, irrespective of the feature used. Finally, the most discriminative connectivity features provided insights about the pathophysiology of ADHD and showed reduced and altered connectivity involving the left orbitofrontal cortex and various cerebellar regions in ADHD.
High-speed fiber-optic links for distribution of satellite traffic
NASA Technical Reports Server (NTRS)
Daryoush, Afshin S.; Saedi, Reza; Ackerman, Edward; Kunath, Richard; Shalkhauser, Kurt
1990-01-01
Low-loss fiberoptic links are designed for distribution of data and the frequency reference in large-aperture phased-array antennas based on the transmit/receive-level data mixing architecture. In particular, design aspects of a fiberoptic link satisfying the distribution requirements of satellite data traffic are presented. The design is addressed in terms of reactively matched optical transmitter and receiver modules. Analog and digital characterization of a 50-m fiberoptic link realized using these modules indicates the applicability of this architecture as the only viable alternative for distribution of data signals inside a satellite at present. It is demonstrated that the design of reactively matched modules enhances the link performance. A dynamic range of 88 dB/MHz was measured for analog data over a 500-1000-MHz bandwidth.
Integration of the instrument control electronics for the ESPRESSO spectrograph at ESO-VLT
NASA Astrophysics Data System (ADS)
Baldini, V.; Calderone, G.; Cirami, R.; Coretti, I.; Cristiani, S.; Di Marcantonio, P.; Mégevand, D.; Riva, M.; Santin, P.
2016-07-01
ESPRESSO, the Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations at the ESO Very Large Telescope site, is now in its integration phase. The large number of functions of this complex instrument is fully controlled by a Beckhoff PLC-based control electronics architecture. Four small cabinets and one large cabinet host the main electronic parts that control all the sensors, motorized stages and other analogue and digital functions of ESPRESSO. The Instrument Control Electronics (ICE) is built following the latest ESO standards and requirements. Two main PLC CPUs are used and are programmed through the TwinCAT Beckhoff dedicated software. The assembly, integration and verification phase of ESPRESSO, due to its distributed nature and the different geographical locations of the consortium partners, is quite challenging. After the preliminary assembly and testing of the electronic components at the Astronomical Observatory of Trieste and the testing of some electronics and software parts at ESO (Garching), the complete system for the control of the four Front End Unit (FEU) arms of ESPRESSO was fully assembled and tested in Merate (Italy) at the beginning of 2016. After these first tests, the system will be located at the Geneva Observatory (Switzerland) until the Preliminary Acceptance Europe (PAE) and finally shipped to Chile for commissioning. This paper describes the integration strategy of the ICE work package of ESPRESSO and the hardware and software tests that have been performed, with an overall view of the experience gained during these phases of the project.
High Performance Data Distribution for Scientific Community
NASA Astrophysics Data System (ADS)
Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus
2010-05-01
Institutions such as NASA, ESA or JAXA must find solutions for distributing data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy which helps the end user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on data servers and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform a single file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the aforementioned features. HIDDRA has been highlighted by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in the journal Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009
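A central idea in HIDDRA, fetching one file from several sources in parallel, can be sketched as follows; the byte-range splitting, mirror list and helper names are illustrative assumptions, not the HIDDRA engine itself.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def fetch_range(url, start, end):
    # Request one byte range of the file (requires server support for Range).
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def parallel_download(mirrors, size, chunk=1 << 20):
    # Round-robin the chunks over the available mirrors/protocols; a real
    # engine would also verify checksums and fail over between sources.
    tasks = [(mirrors[i % len(mirrors)], off, min(off + chunk, size) - 1)
             for i, off in enumerate(range(0, size, chunk))]
    with ThreadPoolExecutor(max_workers=len(mirrors)) as pool:
        parts = pool.map(lambda t: fetch_range(*t), tasks)
    return b"".join(data for _, data in sorted(parts))
```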
A Reference Architecture for Space Information Management
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.
2006-01-01
We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.
Framework for the Parametric System Modeling of Space Exploration Architectures
NASA Technical Reports Server (NTRS)
Komar, David R.; Hoffman, Jim; Olds, Aaron D.; Seal, Mike D., II
2008-01-01
This paper presents a methodology for performing architecture definition and assessment prior to, or during, program formulation that utilizes a centralized, integrated architecture modeling framework operated by a small, core team of general space architects. This framework, known as the Exploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), enables: 1) a significantly larger fraction of an architecture trade space to be assessed in a given study timeframe; and 2) the complex element-to-element and element-to-system relationships to be quantitatively explored earlier in the design process. Discussion of the methodology advantages and disadvantages with respect to the distributed study team approach typically used within NASA to perform architecture studies is presented along with an overview of EXAMINE's functional components and tools. An example Mars transportation system architecture model is used to demonstrate EXAMINE's capabilities in this paper. However, the framework is generally applicable for exploration architecture modeling with destinations to any celestial body in the solar system.
Information architecture for a planetary 'exploration web'
NASA Technical Reports Server (NTRS)
Lamarra, N.; McVittie, T.
2002-01-01
'Web services' is a common way of deploying distributed applications whose software components and data sources may be in different locations, formats, languages, etc. Although such collaboration is not utilized significantly in planetary exploration, we believe there is significant benefit in developing an architecture in which missions could leverage each other's capabilities. We believe that an incremental deployment of such an architecture could significantly contribute to the evolution of increasingly capable, efficient, and even autonomous remote exploration.
NASA Technical Reports Server (NTRS)
Lindley, Craig A.
1995-01-01
This paper presents an architecture for satellites regarded as intercommunicating agents. The architecture is based upon a postmodern paradigm of artificial intelligence in which represented knowledge is regarded as text, inference procedures are regarded as social discourse and decision making conventions and the semantics of representations are grounded in the situated behaviour and activity of agents. A particular protocol is described for agent participation in distributed search and retrieval operations conducted as joint activities.
Thermal Hotspots in CPU Die and Its Future Architecture
NASA Astrophysics Data System (ADS)
Wang, Jian; Hu, Fu-Yuan
Owing to increasing core frequency and chip integration and the limited die dimensions, power densities in CPU chips have been rising rapidly. The high on-chip temperatures caused by these power densities threaten the processor's performance and the chip's reliability. This paper analyzes the thermal hotspots in the die and their properties. A new architecture for the function units in the die, a distributed hot-unit architecture, is suggested to cope with the problem of high power densities in future processor chips.
Unified web-based network management based on distributed object orientated software agents
NASA Astrophysics Data System (ADS)
Djalalian, Amir; Mukhtar, Rami; Zukerman, Moshe
2002-09-01
This paper presents an architecture that provides a unified web interface to managed network devices that support CORBA, OSI or Internet-based network management protocols. A client gains access to managed devices through a web browser, which is used to issue management operations and receive event notifications. The proposed architecture is compatible with both the OSI Management Reference Model and CORBA. The steps required for designing the building blocks of such an architecture are identified.
A Role for Semantic Web Technologies in Patient Record Data Collection
NASA Astrophysics Data System (ADS)
Ogbuji, Chimezie
Business Process Management Systems (BPMS) are a component of the stack of Web standards that comprise Service Oriented Architecture (SOA). Such systems are representative of the architectural framework of modern information systems built in an enterprise intranet and are in contrast to systems built for deployment on the larger World Wide Web. The REST architectural style is an emerging style for building loosely coupled systems based purely on the native HTTP protocol. It is a coordinated set of architectural constraints with a goal to minimize latency, maximize the independence and scalability of distributed components, and facilitate the use of intermediary processors. Within the development community for distributed, Web-based systems, there has been a debate regarding the merits of both approaches. In some cases, there are legitimate concerns about the differences in both architectural styles. In other cases, the contention seems to be based on concerns that are marginal at best. In this chapter, we will attempt to contribute to this debate by focusing on a specific, deployed use case that emphasizes the role of the Semantic Web, a simple Web application architecture that leverages the use of declarative XML processing, and the needs of a workflow system. The use case involves orchestrating a work process associated with the data entry of structured patient record content into a research registry at the Cleveland Clinic's Clinical Investigation department in the Heart and Vascular Institute.
McGinnis, John W.
1980-01-01
The very same technological advances that support distributed systems have also dramatically increased the efficiency and capabilities of centralized systems, making it more complex for health care managers to select the "right" system architecture to meet their particular needs. How this selection can be made with a reasonable degree of managerial comfort is the focus of this paper. The approach advocated is based on experience in developing the Tri-Service Medical Information System (TRIMIS) program. Along with this, technical standards and configuration management procedures were developed that provided the necessary guidance to implement the selected architecture and to allow it to change in a controlled way over its life cycle.
Quantum key distribution network for multiple applications
NASA Astrophysics Data System (ADS)
Tajima, A.; Kondoh, T.; Ochi, T.; Fujiwara, M.; Yoshino, K.; Iizuka, H.; Sakamoto, T.; Tomita, A.; Shimamura, E.; Asami, S.; Sasaki, M.
2017-09-01
The fundamental architecture and functions of secure key management in a quantum key distribution (QKD) network with enhanced universal interfaces for smooth key sharing between arbitrary two nodes and enabling multiple secure communication applications are proposed. The proposed architecture consists of three layers: a quantum layer, key management layer and key supply layer. We explain the functions of each layer, the key formats in each layer and the key lifecycle for enabling a practical QKD network. A quantum key distribution-advanced encryption standard (QKD-AES) hybrid system and an encrypted smartphone system were developed as secure communication applications on our QKD network. The validity and usefulness of these systems were demonstrated on the Tokyo QKD Network testbed.
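The three-layer split described above implies a simple key lifecycle: the quantum layer produces key material, the key management layer buffers and formats it, and the key supply layer hands consume-once keys to applications such as the QKD-AES system. The class below is a minimal sketch of the middle layer under those assumptions; all names are invented.

```python
from collections import deque

class KeyManagementLayer:
    """Buffers key material from the quantum layer, serves the supply layer."""
    def __init__(self):
        self._pool = deque()

    def push_quantum_key(self, bits: bytes):
        # Called by the quantum layer after sifting and error correction.
        self._pool.extend(bits)

    def supply_key(self, length: int) -> bytes:
        # Called by the key supply layer on behalf of an application
        # (e.g. a QKD-AES encryptor); key bytes are consumed exactly once.
        if len(self._pool) < length:
            raise RuntimeError("key pool exhausted")
        return bytes(self._pool.popleft() for _ in range(length))

kml = KeyManagementLayer()
kml.push_quantum_key(b"\x8f\x02\xa1\x77\x3c\x19")
session_key = kml.supply_key(4)
```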
Comparative Study of 3-Dimensional Woven Joint Architectures for Composite Spacecraft Structures
NASA Technical Reports Server (NTRS)
Jones, Justin S.; Polis, Daniel L.; Rowles, Russell R.; Segal, Kenneth N.
2011-01-01
The National Aeronautics and Space Administration (NASA) Exploration Systems Mission Directorate initiated an Advanced Composite Technology (ACT) Project through the Exploration Technology Development Program in order to support the polymer composite needs for future heavy lift launch architectures. As an example, the large composite structural applications on Ares V inspired the evaluation of advanced joining technologies, specifically 3D woven composite joints, which could be applied to segmented barrel structures needed for autoclave cured barrel segments due to autoclave size constraints. Implementation of these 3D woven joint technologies may offer enhancements in damage tolerance without sacrificing weight. However, baseline mechanical performance data is needed to properly analyze the joint stresses and subsequently design/down-select a preform architecture. Six different configurations were designed and prepared for this study; each consisting of a different combination of warp/fill fiber volume ratio and preform interlocking method (Z-fiber, fully interlocked, or hybrid). Tensile testing was performed for this study with the enhancement of a dual camera Digital Image Correlation (DIC) system which provides the capability to measure full-field strains and three dimensional displacements of objects under load. As expected, the ratio of warp/fill fiber has a direct influence on strength and modulus, with higher values measured in the direction of higher fiber volume bias. When comparing the Z-fiber weave to a fully interlocked weave with comparable fiber bias, the Z-fiber weave demonstrated the best performance in two different comparisons. We report the measured tensile strengths and moduli for test coupons from the 6 different weave configurations under study.
Remodeling of nuclear architecture by the thiodioxopiperazine metabolite chaetocin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Illner, Doris; Zinner, Roman; Handtke, Violet
2010-06-10
Extensive changes of higher order chromatin arrangements can be observed during prometaphase, terminal cell differentiation and cellular senescence. Experimental systems where major reorganization of nuclear architecture can be induced under defined conditions may help to better understand the functional implications of such changes. Here, we report on profound chromatin reorganization in fibroblast nuclei by chaetocin, a thiodioxopiperazine metabolite. Chaetocin induces strong condensation of chromosome territories separated by a wide interchromatin space largely void of DNA. Cell viability is maintained irrespective of this peculiar chromatin phenotype. Cell cycle markers, histone signatures, and tests for cellular senescence and for oxidative stress indicate that chaetocin-induced chromatin condensation/clustering (CICC) represents a distinct entity among nuclear phenotypes associated with condensed chromatin. The territorial organization of entire chromosomes is maintained in CICC nuclei; however, the conventional nuclear architecture harboring gene-dense chromatin in the nuclear interior and gene-poor chromatin at the nuclear periphery is lost. Instead, gene-dense and transcriptionally active chromatin is shifted to the periphery of individual condensed chromosome territories, where nascent RNA becomes highly enriched around their outer surface. This chromatin reorganization makes CICC nuclei an attractive model system to study this border zone as a distinct compartment for transcription. Induction of CICC is fully inhibited by thiol-dependent antioxidants, but is not related to the production of reactive oxygen species. Our results suggest that chaetocin functionally impairs the thioredoxin (Trx) system, which is essential for deoxynucleotide synthesis, but is in addition involved in a wide range of cellular functions. The mechanisms involved in CICC formation remain to be fully explored.
Designing Online Learning Communities of Practice: A Democratic Perspective
ERIC Educational Resources Information Center
Sorensen, Elsebeth Korsgaard; Murchu, Daithi O.
2004-01-01
This study addresses the problem of designing an appropriate learning space or architecture for distributed online courses using net-based communication technologies. We apply Wenger's criteria to explore, identify and discuss the design architectures of two online courses from two comparable online Master's programmes, developed and delivered in…
Integrating hospital information systems in healthcare institutions: a mediation architecture.
El Azami, Ikram; Cherkaoui Malki, Mohammed Ouçamah; Tahon, Christian
2012-10-01
Many studies have examined the integration of information systems into healthcare institutions, leading to several standards in the healthcare domain (CORBAmed: Common Object Request Broker Architecture in Medicine; HL7: Health Level Seven International; DICOM: Digital Imaging and Communications in Medicine; and IHE: Integrating the Healthcare Enterprise). Due to the existence of a wide diversity of heterogeneous systems, three essential factors are necessary to fully integrate a system: data, functions and workflow. However, most of the previous studies have dealt with only one or two of these factors and this makes the system integration unsatisfactory. In this paper, we propose a flexible, scalable architecture for Hospital Information Systems (HIS). Our main purpose is to provide a practical solution to insure HIS interoperability so that healthcare institutions can communicate without being obliged to change their local information systems and without altering the tasks of the healthcare professionals. Our architecture is a mediation architecture with 3 levels: 1) a database level, 2) a middleware level and 3) a user interface level. The mediation is based on two central components: the Mediator and the Adapter. Using the XML format allows us to establish a structured, secured exchange of healthcare data. The notion of medical ontology is introduced to solve semantic conflicts and to unify the language used for the exchange. Our mediation architecture provides an effective, promising model that promotes the integration of hospital information systems that are autonomous, heterogeneous, semantically interoperable and platform-independent.
NASA Astrophysics Data System (ADS)
Dutta, Sandeep; Gros, Eric
2018-03-01
Deep Learning (DL) has been successfully applied in numerous fields fueled by increasing computational power and access to data. However, for medical imaging tasks, limited training set size is a common challenge when applying DL. This paper explores the applicability of DL to the task of classifying a single axial slice from a CT exam into one of six anatomy regions. A total of 29000 images selected from 223 CT exams were manually labeled for ground truth. An additional 54 exams were labeled and used as an independent test set. The network architecture developed for this application is composed of 6 convolutional layers and 2 fully connected layers with ReLU non-linear activations between each layer. Max-pooling was used after every second convolutional layer, and a softmax layer was used at the end. Given this base architecture, the effect of inclusion of network architecture components such as Dropout and Batch Normalization on network performance and training is explored. The network performance as a function of training and validation set size is characterized by training each network architecture variation using 5, 10, 20, 40, 50 and 100% of the available training data. The performance comparison of the various network architectures was done for anatomy classification as well as two computer vision datasets. The anatomy classifier accuracy varied from 74.1% to 92.3% in this study depending on the training size and network layout used. Dropout layers improved the model accuracy for all training sizes.
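A PyTorch rendering of the described layout (6 convolutional layers with ReLU activations, max-pooling after every second convolution, 2 fully connected layers, softmax output over six anatomy regions) might look as follows; the channel counts, kernel sizes and 64x64 single-channel input size are assumptions not stated in the abstract.

```python
import torch.nn as nn

def conv_pair(cin, cout):
    # Two 3x3 convolutions with ReLU, then the pool that follows every
    # second conv layer in the described architecture.
    return [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2)]

model = nn.Sequential(
    *conv_pair(1, 16),            # conv 1-2 + pool
    *conv_pair(16, 32),           # conv 3-4 + pool
    *conv_pair(32, 64),           # conv 5-6 + pool
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 128),   # assumes 1x64x64 input slices -> 8x8 maps
    nn.ReLU(),
    nn.Linear(128, 6),            # six anatomy regions
    nn.Softmax(dim=1),
)
# nn.Dropout or nn.BatchNorm2d layers can be inserted above to reproduce
# the Dropout / Batch Normalization variants the paper compares.
```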
Convolutional neural network architectures for predicting DNA–protein binding
Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.
2016-01-01
Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
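CNNs for DNA-protein binding conventionally consume one-hot-encoded sequences, with each first-layer convolutional kernel acting as a learnable motif scanner; the encoder below is a generic numpy sketch, not the authors' pipeline.

```python
import numpy as np

def one_hot_dna(seq):
    """Encode a DNA string as a 4 x L one-hot matrix (rows: A, C, G, T)."""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((4, len(seq)), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in mapping:        # 'N' and other ambiguity codes stay all-zero
            out[mapping[base], i] = 1.0
    return out

x = one_hot_dna("ACGTTGCA")  # shape (4, 8), ready for a 1D convolution
```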
Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency
Abu Bakr, Muhammad; Lee, Sukhan
2017-01-01
The paradigm of multisensor data fusion has been evolved from a centralized architecture to a decentralized or distributed architecture along with the advancement in sensor and communication technologies. These days, distributed state estimation and data fusion has been widely explored in diverse fields of engineering and control due to its superior performance over the centralized one in terms of flexibility, robustness to failure and cost effectiveness in infrastructure and communication. However, distributed multisensor data fusion is not without technical challenges to overcome: namely, dealing with cross-correlation and inconsistency among state estimates and sensor data. In this paper, we review the key theories and methodologies of distributed multisensor data fusion available to date with a specific focus on handling unknown correlation and data inconsistency. We aim at providing readers with a unifying view out of individual theories and methodologies by presenting a formal analysis of their implications. Finally, several directions of future research are highlighted. PMID:29077035
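A standard remedy for the unknown-correlation problem reviewed above is covariance intersection, which fuses two estimates consistently without knowing their cross-covariance; the numpy sketch below picks the weight omega by a naive trace-minimizing grid search, a common simplification.

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, grid=101):
    """Fuse two estimates whose cross-correlation is unknown."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    best = None
    for w in np.linspace(0.0, 1.0, grid):
        P = np.linalg.inv(w * I1 + (1 - w) * I2)       # fused covariance
        if best is None or np.trace(P) < np.trace(best[1]):
            x = P @ (w * I1 @ x1 + (1 - w) * I2 @ x2)  # fused mean
            best = (x, P)
    return best

x, P = covariance_intersection(np.array([1.0, 0.0]), np.eye(2),
                               np.array([0.8, 0.4]), 2 * np.eye(2))
```

Unlike a Kalman update, the fused covariance here never claims more confidence than is justified, which is exactly what makes the method safe under unknown correlation.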
Structural optimization of 3D-printed synthetic spider webs for high strength
NASA Astrophysics Data System (ADS)
Qin, Zhao; Compton, Brett G.; Lewis, Jennifer A.; Buehler, Markus J.
2015-05-01
Spiders spin intricate webs that serve as sophisticated prey-trapping architectures that simultaneously exhibit high strength, elasticity and graceful failure. To determine how web mechanics are controlled by their topological design and material distribution, here we create spider-web mimics composed of elastomeric filaments. Specifically, computational modelling and microscale 3D printing are combined to investigate the mechanical response of elastomeric webs under multiple loading conditions. We find the existence of an asymptotic prey size that leads to a saturated web strength. We identify pathways to design elastomeric material structures with maximum strength, low density and adaptability. We show that the loading type dictates the optimal material distribution, that is, a homogeneous distribution is better for localized loading, while stronger radial threads with weaker spiral threads is better for distributed loading. Our observations reveal that the material distribution within spider webs is dictated by the loading condition, shedding light on their observed architectural variations.
The Basal Ganglia and Adaptive Motor Control
NASA Astrophysics Data System (ADS)
Graybiel, Ann M.; Aosaki, Toshihiko; Flaherty, Alice W.; Kimura, Minoru
1994-09-01
The basal ganglia are neural structures within the motor and cognitive control circuits in the mammalian forebrain and are interconnected with the neocortex by multiple loops. Dysfunction in these parallel loops caused by damage to the striatum results in major defects in voluntary movement, exemplified in Parkinson's disease and Huntington's disease. These parallel loops have a distributed modular architecture resembling local expert architectures of computational learning models. During sensorimotor learning, such distributed networks may be coordinated by widely spaced striatal interneurons that acquire response properties on the basis of experienced reward.
Programming with process groups: Group and multicast semantics
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry
1991-01-01
Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
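As a generic illustration of causal ordering in group multicast (not the Isis implementation itself), vector clocks are the textbook mechanism: a message is deliverable only when it is the sender's next event and the receiver has already delivered everything the sender had seen.

```python
class VectorClock:
    """Per-process vector clock for causal message ordering in a group."""
    def __init__(self, n, pid):
        self.clock = [0] * n
        self.pid = pid

    def send(self):
        # Tick our own component and attach the clock to the multicast.
        self.clock[self.pid] += 1
        return list(self.clock)

    def deliverable(self, msg_clock, sender):
        # Causal delivery: the message is the sender's next event, and we
        # have delivered everything the sender had seen from other processes.
        return (msg_clock[sender] == self.clock[sender] + 1 and
                all(msg_clock[i] <= self.clock[i]
                    for i in range(len(self.clock)) if i != sender))

    def deliver(self, msg_clock):
        # Merge the sender's knowledge into our own clock.
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]
```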
Computer Sciences and Data Systems, volume 1
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.
Fault tolerant and lifetime control architecture for autonomous vehicles
NASA Astrophysics Data System (ADS)
Bogdanov, Alexander; Chen, Yi-Liang; Sundareswaran, Venkataraman; Altshuler, Thomas
2008-04-01
Increased vehicle autonomy, survivability and utility can provide an unprecedented impact on mission success and are among the most desirable improvements for modern autonomous vehicles. We propose a general architecture of intelligent resource allocation, reconfigurable control and system restructuring for autonomous vehicles. The architecture is based on fault-tolerant control and lifetime prediction principles, and it provides improved vehicle survivability, extended service intervals, and greater operational autonomy through a lower rate of time-critical mission failures and less dependence on supplies and maintenance. The architecture enables mission distribution, adaptation and execution constrained on vehicle and payload faults and desirable lifetime. The proposed architecture will allow managing missions more efficiently by weighing vehicle capabilities against mission objectives and replacing the vehicle only when it is necessary.
NASA Astrophysics Data System (ADS)
Park, Soomyung; Joo, Seong-Soon; Yae, Byung-Ho; Lee, Jong-Hyun
2002-07-01
In this paper, we present the Optical Cross-Connect (OXC) management control system architecture, which offers scalability and robust maintenance and provides a distributed managing environment in the optical transport network. The OXC system we are developing, comprising the hardware and the internal and external software of the OXC system, is made up of the OXC subsystem with the Optical Transport Network (OTN) sub-layer hardware and the optical switch control system, the signaling control protocol subsystem performing the User-to-Network Interface (UNI) and Network-to-Network Interface (NNI) signaling control, the Operation Administration Maintenance & Provisioning (OAM&P) subsystem, and the network management subsystem. The OXC management control system has features that support the flexible expansion of the optical transport network, provide connectivity to heterogeneous external network elements, allow components to be added or deleted without interrupting OAM&P services, permit remote operation, provide a global view and detailed information for network planners and operators, and offer a Common Object Request Broker Architecture (CORBA) based open system architecture to which intelligent service networking functions can easily be added or removed in the future. To meet these considerations, we adopt an object-oriented development method throughout the system analysis, design, and implementation steps to build an OXC management control system with scalability, maintainability, and a distributed managing environment. Consequently, the componentification of the OXC operation management functions of each subsystem makes maintenance robust and increases code reusability. The component-based OXC management control system architecture will also have flexibility and scalability by nature.
ELF communications system ecological monitoring program: Pollinating insect studies
NASA Astrophysics Data System (ADS)
Strickler, Karen; Schriber, J. Mark
1994-11-01
High voltage transmission lines and the earth's and other magnetic fields have been shown to affect honeybee reproduction, survival, orientation, and nest structure. ELF EM fields could have similar effects on native megachilid bees. Two species in the genus Megachile were abundant in artificial nests at experimental and control areas in Dickinson and Iron Counties in Michigan. Data on their nest architecture, nest activity, and emergence/mortality were collected between 1983 and 1993. Eight hypotheses concerning the possible effects of ELF EM fields were considered using these data. The ELF antenna has been fully operational since the summer of 1989. Tests of the hypotheses compare control vs. experimental areas before and after the ELF antenna became fully operational.
Crystal structure of an EfPDF complex with Met-Ala-Ser based on crystallographic packing.
Nam, Ki Hyun; Kim, Kook-Han; Kim, Eunice Eun Kyeong; Hwang, Kwang Yeon
2009-04-17
PDF (peptide deformylase) plays a critical role in the production of mature proteins by removing the N-formyl polypeptide of nascent proteins in the prokaryote cell system. This protein is essential for bacterial growth, making it an attractive target for the design of new antibiotics. Accordingly, PDF has been evaluated as a drug target; however, architectural mechanism studies of PDF have not yet fully elucidated its molecular function. We recently reported the crystal structure of PDF produced by Enterococcus faecium [K.H. Nam, J.I. Ham, A. Priyadarshi, E.E. Kim, N. Chung, K.Y. Hwang, "Insight into the antibacterial drug design and architectural mechanism of peptide recognition from the E. faecium peptide deformylase structure", Proteins 74 (2009) 261-265]. Here, we present the crystal structure of the EfPDF complex with MAS (Met-Ala-Ser), thereby not only delineating the architectural mechanism for the recognition of mimic peptides by the N-terminally cleaved expressed peptide, but also suggesting possible targets for the rational design of antibacterial drugs. In addition to their implications for drug design, these structural studies will facilitate elucidation of the architectural mechanism responsible for the peptide recognition of PDF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Liu; Manthiram, Arumugam
2017-08-31
A high-loading electrode is essential for establishing high-energy-density lithium-sulfur (Li-S) batteries, but it is confronted with critical challenges. Here in this paper, we present a freestanding poached-egg-shaped architecture through a facile template-supported vacuum-filtration strategy and employ it as an efficient sulfur host for Li-S batteries. This unique architecture guarantees an effective encapsulation of the "sulfur yolk" inside the fully vacuum-sealed framework, effectively limiting the active material loss and polysulfide diffusion. Also, the conductive and porous framework serves as an interlinked electron pathway and electrolyte channel, greatly facilitating fast electric/ionic transport along with active material reactivation and reutilization during cycling. A high peak discharge capacity (1200 mA h g-1), a low capacity-fade rate (0.09% cycle-1) for 500 cycles, and excellent rate capability (C/5-1C rates) are accomplished. Moreover, with such an advantageous architecture, the sulfur loading is successfully increased to 32 mg cm-2 to achieve an areal capacity of up to 16 mA h cm-2. This work provides guidelines for realizing optimized high-loading Li-S batteries.
Blueprint for a microwave trapped ion quantum computer.
Lekitsch, Bjoern; Weidt, Sebastian; Fowler, Austin G; Mølmer, Klaus; Devitt, Simon J; Wunderlich, Christof; Hensinger, Winfried K
2017-02-01
The availability of a universal quantum computer may have a fundamental impact on a vast number of research fields and on society as a whole. An increasingly large scientific and industrial community is working toward the realization of such a device. An arbitrarily large quantum computer may best be constructed using a modular approach. We present a blueprint for a trapped ion-based scalable quantum computer module, making it possible to create a scalable quantum computer architecture based on long-wavelength radiation quantum gates. The modules control all operations as stand-alone units, are constructed using silicon microfabrication techniques, and are within reach of current technology. To perform the required quantum computations, the modules make use of long-wavelength radiation-based quantum gate technology. To scale this microwave quantum computer architecture to a large size, we present a fully scalable design that makes use of ion transport between different modules, thereby allowing arbitrarily many modules to be connected to construct a large-scale device. A high error-threshold surface error correction code can be implemented in the proposed architecture to execute fault-tolerant operations. With appropriate adjustments, the proposed modules are also suitable for alternative trapped ion quantum computer architectures, such as schemes using photonic interconnects.
Quantification of complex modular architecture in plants.
Reeb, Catherine; Kaandorp, Jaap; Jansson, Fredrik; Puillandre, Nicolas; Dubuisson, Jean-Yves; Cornette, Raphaël; Jabbour, Florian; Coudert, Yoan; Patiño, Jairo; Flot, Jean-François; Vanderpoorten, Alain
2018-04-01
Morphometrics, the assignment of quantities to biological shapes, is a powerful tool to address taxonomic, evolutionary, functional and developmental questions. We propose a novel method for shape quantification of complex modular architecture in thalloid plants, whose extremely reduced morphologies, combined with the lack of a formal framework for thallus description, have long rendered taxonomic and evolutionary studies extremely challenging. Using graph theory, thalli are described as hierarchical series of nodes and edges, allowing for accurate, homologous and repeatable measurements of widths, lengths and angles. The computer program MorphoSnake was developed to extract the skeleton and contours of a thallus and automatically acquire, at each level of organization, width, length, angle and sinuosity measurements. Through the quantification of leaf architecture in Hymenophyllum ferns (Polypodiopsida) and a fully worked example of integrative taxonomy in the taxonomically challenging thalloid liverwort genus Riccardia, we show that MorphoSnake is applicable to all ramified plants. This new possibility of acquiring large numbers of quantitative traits in plants with complex modular architectures opens new perspectives of applications, from the development of rapid species identification tools to evolutionary analyses of adaptive plasticity. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
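The skeleton-and-contour step described for MorphoSnake can be approximated with standard image tools: skeletonize a binary thallus mask, then find the nodes of the hierarchical graph as pixels with three or more skeleton neighbors. scikit-image and scipy are used here as generic stand-ins for the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def branch_points(binary_mask):
    """Skeletonize a binary thallus image and return branch-point coordinates."""
    skel = skeletonize(binary_mask.astype(bool))
    # Count the 8-connected skeleton neighbors of every skeleton pixel.
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbors = convolve(skel.astype(int), kernel, mode="constant")
    # Pixels with 3+ neighbors are candidate nodes of the hierarchical graph;
    # widths, lengths and angles would then be measured along the edges.
    return np.argwhere(skel & (neighbors >= 3))
```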
AltiVec performance increases for autonomous robotics for the MARSSCAPE architecture program
NASA Astrophysics Data System (ADS)
Gothard, Benny M.
2002-02-01
One of the main tall poles that must be overcome to develop a fully autonomous vehicle is the inability of the computer to understand its surrounding environment to the level required for the intended task. The military mission scenario requires a robot to interact in a complex, unstructured, dynamic environment (see "A High Fidelity Multi-Sensor Scene Understanding System for Autonomous Navigation"). The Mobile Autonomous Robot Software Self Composing Adaptive Programming Environment (MarsScape) perception research addresses three aspects of the problem: sensor system design, processing architectures, and algorithm enhancements. A prototype perception system has been demonstrated on robotic High Mobility Multi-purpose Wheeled Vehicle and All Terrain Vehicle testbeds. This paper addresses the tall pole of processing requirements and the performance improvements based on the selected MarsScape processing architecture. The processor chosen is the Motorola AltiVec-G4 PowerPC (PPC) (1998 Motorola, Inc.), a highly parallelized commercial Single Instruction Multiple Data processor. Both derived perception benchmarks and actual perception subsystem code are benchmarked and compared against previous Demo II/Semi-autonomous Surrogate Vehicle processing architectures as well as desktop personal computers (PCs). Performance gains are highlighted with progress to date, and lessons learned and future directions are described.
VASSAR: Value assessment of system architectures using rules
NASA Astrophysics Data System (ADS)
Selva, D.; Crawley, E. F.
A key step of the mission development process is the selection of a system architecture, i.e., the layout of the major high-level system design decisions. This step typically involves the identification of a set of candidate architectures and a cost-benefit analysis to compare them. Computational tools have been used in the past to bring rigor and consistency into this process. These tools can automatically generate architectures by enumerating different combinations of decisions and options. They can also evaluate these architectures by applying cost models and simplified performance models. Current performance models are purely quantitative tools that are best fit for evaluating the technical performance of a mission design. However, assessing the relative merit of a system architecture is a much more holistic task than evaluating the performance of a mission design. Indeed, the merit of a system architecture comes from satisfying a variety of stakeholder needs, some of which are easy to quantify, and some of which are harder to quantify (e.g., elegance, scientific value, political robustness, flexibility). Moreover, assessing the merit of a system architecture at these very early stages of design often requires dealing with a mix of quantitative and semi-qualitative data, and of objective and subjective information. Current computational tools are poorly suited for these purposes. In this paper, we propose a general methodology that can be used to assess the relative merit of several candidate system architectures in the presence of objective, subjective, quantitative, and qualitative stakeholder needs. The methodology is called VASSAR (Value ASsessment for System Architectures using Rules). The major underlying assumption of the VASSAR methodology is that the merit of a system architecture can be assessed by comparing the capabilities of the architecture with the stakeholder requirements. Hence, for example, a candidate architecture that fully satisfies all critical stakeholder requirements is a good architecture. The assessment process is thus fundamentally seen as a pattern-matching process where capabilities match requirements, which motivates the use of rule-based expert systems (RBES). This paper describes the VASSAR methodology and shows how it can be applied to a large complex space system, namely an Earth observation satellite system. Companion papers show its applicability to the NASA space communications and navigation program and the joint NOAA-DoD NPOESS program.
NASA Astrophysics Data System (ADS)
Berkouk, Djihed; Bouzir, Tallal Abdel Karim; Mazouz, Said
2018-05-01
Bioclimatic architecture takes local climatic conditions into account in order to maximize the comfort of the occupants. Through several simulations performed with the TRNSYS software, this paper shows that the new architecture produced in the south of Algeria, which follows the tendency of the northern cities, is not fully adapted to the hot, dry climate of the southern regions, such as the city of Biskra. In these regions, passive design techniques strongly influence the thermal performance of architectural spaces. In this regard, vertical shading devices of various sizes were proposed to evaluate the impact of this passive technique on the thermal performance of promotional apartments situated in the city of Biskra. The comparative analysis of the simulation results shows that the vertical shading devices improve the thermal performance of the spaces by reducing the indoor air temperature during the summer period. In addition, this analysis shows that the promotional apartments are unsuitable for the desert climate.
Wavelet-enhanced convolutional neural network: a new idea in a deep learning paradigm.
Savareh, Behrouz Alizadeh; Emami, Hassan; Hajiabadi, Mohamadreza; Azimi, Seyed Majid; Ghafoori, Mahyar
2018-05-29
Manual brain tumor segmentation is a challenging task, motivating automated approaches based on machine learning techniques. One of the machine learning techniques that has been given much attention is the convolutional neural network (CNN). The performance of the CNN can be enhanced by combining it with other data analysis tools such as the wavelet transform. In this study, one of the well-known implementations of the CNN, a fully convolutional network (FCN), was used in brain tumor segmentation and its architecture was enhanced by the wavelet transform. In this combination, the wavelet transform was used as a complementary and enhancing tool for the CNN in brain tumor segmentation. Comparing the performance of the basic FCN architecture against the wavelet-enhanced form revealed a remarkable superiority of the enhanced architecture in brain tumor segmentation tasks. Using mathematical functions and enhancing tools such as the wavelet transform can improve the performance of the CNN in any image processing task such as segmentation and classification.
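One simple way to realize such a wavelet enhancement is to feed subband decompositions of each slice to the network alongside the raw intensities; the sketch below computes a single-level 2D DWT with PyWavelets, and the choice of the Haar wavelet is an assumption.

```python
import numpy as np
import pywt

def wavelet_channels(image):
    """Single-level 2D DWT: approximation and detail subbands stacked as
    extra input channels for a CNN/FCN."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float32), "haar")
    return np.stack([cA, cH, cV, cD])  # shape (4, H/2, W/2)
```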
A quantum annealing architecture with all-to-all connectivity from local interactions.
Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter
2015-10-01
Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV-centers, quantum dots, and atomic systems.
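Since the optimization problem is encoded in Ising interactions, the cost function an annealer minimizes can be written explicitly as E(s) = -Σ_{i<j} J_ij s_i s_j - Σ_i h_i s_i; the numpy sketch below evaluates it for a spin configuration (a generic illustration, not the proposed gauge-constraint encoding).

```python
import numpy as np

def ising_energy(s, J, h):
    """Energy of spin configuration s (entries +/-1) for couplings J, fields h."""
    # Using the strictly upper-triangular part of J counts each pair once.
    return -s @ np.triu(J, k=1) @ s - h @ s

s = np.array([1, -1, 1])
J = np.array([[0.0, 1.0, -0.5],
              [0.0, 0.0,  0.3],
              [0.0, 0.0,  0.0]])
h = np.array([0.2, 0.0, -0.1])
print(ising_energy(s, J, h))
```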
Terahertz Array Receivers with Integrated Antennas
NASA Technical Reports Server (NTRS)
Chattopadhyay, Goutam; Llombart, Nuria; Lee, Choonsup; Jung, Cecile; Lin, Robert; Cooper, Ken B.; Reck, Theodore; Siles, Jose; Schlecht, Erich; Peralta, Alessandro;
2011-01-01
Highly sensitive terahertz heterodyne receivers have mostly been single-pixel. However, there is now a real need for multi-pixel array receivers at these frequencies, driven by science and instrument requirements. In this paper we explore various receiver front-end and antenna architectures for use in multi-pixel integrated arrays at terahertz frequencies. Development of wafer-level integrated terahertz receiver front ends using advanced semiconductor fabrication technologies has progressed very well over the past few years. Novel stacking of micro-machined silicon wafers, which allows for the 3-dimensional integration of various terahertz receiver components in extremely small packages, has made it possible to design multi-pixel heterodyne arrays. One of the critical technologies needed to achieve a fully integrated system is antenna arrays compatible with the receiver array architecture. In this paper we explore different receiver and antenna architectures for multi-pixel heterodyne and direct detector arrays for various applications, such as multi-pixel high-resolution spectrometers and imaging radar at terahertz frequencies.
Hardware implementation of hierarchical volume subdivision-based elastic registration.
Dandekar, Omkar; Walimbe, Vivek; Shekhar, Raj
2006-01-01
Real-time, elastic and fully automated 3D image registration is critical to the efficiency and effectiveness of many image-guided diagnostic and treatment procedures relying on multimodality image fusion or serial image comparison. True real-time performance will make many 3D image registration-based techniques clinically viable. Hierarchical volume subdivision-based image registration techniques are inherently faster than most elastic registration techniques, e.g. free-form deformation (FFD)-based techniques, and are more amenable to achieving real-time performance through hardware acceleration. Our group has previously reported an FPGA-based architecture for accelerating FFD-based image registration. In this article we show how our existing architecture can be adapted to support hierarchical volume subdivision-based image registration. A proof-of-concept implementation of the architecture achieved a speedup of 100 for elastic registration over an optimized software implementation on a 3.2 GHz Pentium III Xeon workstation. Due to the inherently parallel nature of hierarchical volume subdivision-based image registration techniques, further speedup can be achieved by using several computing modules in parallel.
Gichoya, Judy; Pearce, Chris; Wickramasinghe, Nilmini
2013-01-01
Kenya ranks among the twenty-two countries that collectively contribute about 80% of the world's Tuberculosis cases, with a 50-200 fold increased risk of tuberculosis in HIV-infected persons versus non-HIV hosts. Contemporaneously, there is an increase in mobile penetration and its use to support healthcare throughout Africa. Many are skeptical of such m-health solutions, doubting that they are sustainable and scalable. We seek to design a scalable, pervasive m-health solution for Tuberculosis care to become a use case for sustainable and scalable health IT in limited-resource settings. We combine agile design principles and user-centered design to develop the architecture needed for this initiative. Furthermore, the architecture runs on multiple devices integrated to deliver functionality critical for successful health IT implementation in limited-resource settings. It is anticipated that once fully implemented, the proposed m-health solution will facilitate superior monitoring and management of Tuberculosis and thereby reduce the alarming statistics regarding this disease in this region.
A top-down manner-based DCNN architecture for semantic image segmentation.
Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin
2017-01-01
Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not sufficient, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are well improved. We also quantitatively obtain about 2%-3% intersection over union (IOU) accuracy improvement on the PASCAL VOC 2011 and 2012 test sets.
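A minimal sketch of the top-down refinement pattern, assuming per-pixel class scores from some FCN (random stand-ins below) and SLIC superpixels from scikit-image: scores are pooled within each superpixel before the final label decision. This illustrates the general superpixel-refinement idea, not the authors' exact architecture.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_refine(scores, image, n_segments=200):
    """Average per-pixel class scores within each superpixel, then argmax.

    scores: (H, W, C) class scores, e.g. from an FCN (random stand-in here).
    """
    labels = slic(image, n_segments=n_segments, compactness=10.0)
    refined = scores.copy()
    for sp in np.unique(labels):
        mask = labels == sp
        refined[mask] = scores[mask].mean(axis=0)  # pool scores per superpixel
    return refined.argmax(axis=-1)

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))   # stand-in for a real RGB image
scores = rng.random((64, 64, 5))  # stand-in for FCN class scores, 5 classes
print(superpixel_refine(scores, image).shape)
```

Because superpixels adhere to image edges, pooling scores inside them tends to sharpen the coarse object boundaries that bottom-up DCNN predictions typically blur.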
Region-Oriented Placement Algorithm for Coarse-Grained Power-Gating FPGA Architecture
NASA Astrophysics Data System (ADS)
Li, Ce; Dong, Yiping; Watanabe, Takahiro
An FPGA plays an essential role in industrial products due to its fast, stable and flexible features, but the power consumption of FPGAs used in portable devices is one of the critical issues. A top-down hierarchical design method is commonly used in both ASIC and FPGA design. However, in the case where plural modules are integrated in an FPGA and some of them might be in sleep mode, current FPGA architecture cannot be fully effective. In this paper, a coarse-grained power-gating FPGA architecture is proposed, where the whole area of an FPGA is partitioned into several regions and the power supply is controlled for each region, so that modules in sleep mode can be effectively powered off. We also propose a region-oriented FPGA placement algorithm fitted to this hierarchical design, based on VPR [1]. Simulation results show that the proposed method could reduce FPGA power consumption by 38% on average by setting unused modules or regions to sleep mode.
A quantum annealing architecture with all-to-all connectivity from local interactions
Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter
2015-01-01
Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV-centers, quantum dots, and atomic systems. PMID:26601316
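The encoding can be illustrated classically. The toy Python sketch below maps an all-to-all Ising problem on N logical spins to K = N(N-1)/2 physical parity spins: the logical couplings J_ij become local fields on the physical spins, and closed-cycle (here, triangle) parity constraints stand in for the long-range interactions. The published architecture uses four-body plaquette constraints on a 2D layout; the triangle cycle basis chosen here is a simplification for brevity.

```python
import itertools
import numpy as np

def physical_ground_state(J, N):
    """Brute-force the parity-encoded problem for tiny N.

    J[(i, j)]: logical Ising couplings (the problem 'input', applied as
    local fields on the physical parity spins). Each physical spin
    s[(i, j)] represents the parity sigma_i * sigma_j.
    """
    pairs = list(itertools.combinations(range(N), 2))
    best, best_e = None, np.inf
    for bits in itertools.product([-1, 1], repeat=len(pairs)):
        s = dict(zip(pairs, bits))
        # Gauge constraints: s_0i * s_0j * s_ij == +1 on every triangle (0,i,j),
        # which restricts the physical states to valid logical configurations.
        if any(s[(0, i)] * s[(0, j)] * s[(i, j)] != 1
               for i, j in itertools.combinations(range(1, N), 2)):
            continue
        e = -sum(J[p] * s[p] for p in pairs)  # couplings act as local fields
        if e < best_e:
            best, best_e = s, e
    return best, best_e

rng = np.random.default_rng(0)
N = 4
J = {p: rng.normal() for p in itertools.combinations(range(N), 2)}
print(physical_ground_state(J, N))
```

For N = 4 this leaves 2^(N-1) = 8 constraint-satisfying physical states, one per logical configuration up to a global spin flip, so minimizing the local-field energy over them recovers the logical ground state.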
The Visualization Toolkit (VTK): Rewriting the rendering code for modern graphics cards
NASA Astrophysics Data System (ADS)
Hanwell, Marcus D.; Martin, Kenneth M.; Chaudhary, Aashish; Avila, Lisa S.
2015-09-01
The Visualization Toolkit (VTK) is an open source, permissively licensed, cross-platform toolkit for scientific data processing, visualization, and data analysis. It is over two decades old, originally developed for a very different graphics card architecture. Modern graphics cards feature fully programmable, highly parallelized architectures with large core counts. VTK's rendering code was rewritten to take advantage of modern graphics cards, maintaining most of the toolkit's programming interfaces. This offers the opportunity to compare the performance of old and new rendering code on the same systems/cards. Significant improvements in rendering speeds and memory footprints mean that scientific data can be visualized in greater detail than ever before. The widespread use of VTK means that these improvements will reap significant benefits.
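For context, a minimal VTK pipeline in Python looks as follows; the rendering rewrite replaced the OpenGL code beneath classes such as vtkPolyDataMapper and vtkRenderWindow while leaving this user-facing API largely intact. The sketch assumes the vtk Python package is installed.

```python
# Minimal VTK render pipeline: source -> mapper -> actor -> renderer -> window.
import vtk

source = vtk.vtkSphereSource()
source.SetThetaResolution(64)
source.SetPhiResolution(64)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```

Code like this written against the old rendering backend runs unchanged on the rewritten one, which is what makes before/after performance comparisons on the same hardware straightforward.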
Techniques for the rapid display and manipulation of 3-D biomedical data.
Goldwasser, S M; Reynolds, R A; Talton, D A; Walsh, E S
1988-01-01
The use of fully interactive 3-D workstations with true real-time performance will become increasingly common as technology matures and economical commercial systems become available. This paper provides a comprehensive introduction to high speed approaches to the display and manipulation of 3-D medical objects obtained from tomographic data acquisition systems such as CT, MR, and PET. A variety of techniques are outlined including the use of software on conventional minicomputers, hardware assist devices such as array processors and programmable frame buffers, and special purpose computer architecture for dedicated high performance systems. While both algorithms and architectures are addressed, the major theme centers around the utilization of hardware-based approaches including parallel processors for the implementation of true real-time systems.
Shared Memory Parallelization of an Implicit ADI-type CFD Code
NASA Technical Reports Server (NTRS)
Hauser, Th.; Huang, P. G.
1999-01-01
A parallelization study designed for ADI-type algorithms is presented using the OpenMP specification for shared-memory multiprocessor programming. Details of optimizations specifically addressed to cache-based computer architectures are described, and performance measurements for the single- and multiprocessor implementations are summarized. The paper demonstrates that optimization of memory access on a cache-based computer architecture controls the performance of the computational algorithm. A hybrid MPI/OpenMP approach is proposed for clusters of shared-memory machines to further enhance the parallel performance. The method is applied to develop a new LES/DNS code, named LESTool. A preliminary DNS calculation of a fully developed channel flow at Re_tau = 180 has shown good agreement with existing data.
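The core property OpenMP exploits in an ADI sweep is that the tridiagonal line solves along one direction are mutually independent. As a language-consistent stand-in for the paper's compiled-code-plus-OpenMP setting, the Python sketch below distributes independent Thomas-algorithm solves across a process pool; names and sizes are illustrative, not LESTool's.

```python
import numpy as np
from multiprocessing import Pool

def thomas(args):
    """Solve one tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    a, b, c, d = map(np.array, args)  # copy, since b and d are modified
    n = len(b)
    for i in range(1, n):             # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):    # back substitution
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

if __name__ == "__main__":
    n, nlines = 64, 1000              # one system per grid line in the sweep
    systems = [(np.full(n, -1.0), np.full(n, 2.0), np.full(n, -1.0),
                np.random.rand(n)) for _ in range(nlines)]
    with Pool() as pool:              # line-parallel, like an OpenMP loop
        solutions = pool.map(thomas, systems)
    print(len(solutions), solutions[0][:3])
```

In the cache-optimized C or Fortran version, the additional trick is to lay each line out contiguously in memory before solving it, which is the memory-access optimization the abstract identifies as controlling performance.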
Sun, Mengshu; Xue, Yuankun; Bogdan, Paul; Tang, Jian; Wang, Yanzhi; Lin, Xue
2018-01-01
Recently, a new approach has been introduced that leverages and over-provisions energy storage devices (ESDs) in data centers for performing power capping and facilitating capex/opex reductions, without performance overhead. To fully realize the potential benefits of the hierarchical ESD structure, we propose a comprehensive design, control, and provisioning framework including (i) designing a power delivery architecture supporting the hierarchical ESD structure and hybrid ESDs for some levels, as well as (ii) control and provisioning of the hierarchical ESD structure, including run-time ESD charging/discharging control and design-time determination of ESD types, homogeneous/hybrid options, and ESD provisioning at each level. Experiments have been conducted using real Google data center workloads based on realistic data center specifications. PMID:29351553
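A minimal sketch of the run-time charging/discharging idea for a single ESD level, assuming a per-interval demand trace and a fixed grid power cap (all names and parameters are hypothetical, not the paper's notation or its hierarchical framework): discharge to shave peaks above the cap, recharge within the remaining headroom.

```python
import numpy as np

def esd_power_cap(demand, cap, capacity, eta=0.9):
    """Greedy run-time charge/discharge control for one ESD level.

    demand: per-interval facility power draw; cap: contracted grid cap;
    capacity: usable ESD energy; eta: charging efficiency.
    Returns the grid power actually drawn in each interval.
    """
    stored = capacity  # start fully charged
    grid = np.empty_like(demand, dtype=float)
    for t, p in enumerate(demand):
        if p > cap:                           # peak: discharge to shave it
            use = min(p - cap, stored)
            stored -= use
            grid[t] = p - use
        else:                                 # valley: recharge within headroom
            charge = min(cap - p, (capacity - stored) / eta)
            stored += eta * charge
            grid[t] = p + charge
    return grid

demand = np.array([80, 120, 150, 90, 60, 140, 100], dtype=float)
print(esd_power_cap(demand, cap=110.0, capacity=60.0))
```

Keeping the drawn grid power at or below the cap is what lets the facility be provisioned against the cap rather than the peak demand, which is the source of the capex/opex savings.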
Crowley, James D; Bandeen, Pauline H
2010-01-14
A one-pot, multicomponent CuAAC reaction has been exploited for the safe generation of alkyl, benzyl or aryl linked polydentate pyridyl-1,2,3-triazole ligands from their corresponding halides, sodium azide and alkynes in excellent yields. The ligands have been fully characterised by elemental analysis, HR-ESMS, IR, ¹H and ¹³C NMR, and in two cases the structures were confirmed by X-ray crystallography. Additionally, we have examined the Ag(I) coordination chemistry of these ligands and found, using HR-ESMS, ¹H NMR, and X-ray crystallography, that both discrete and polymeric metallosupramolecular architectures can be formed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chien, C; Elgorriaga, I; McConaghy, C
2001-07-03
Emerging CMOS and MEMS technologies enable the implementation of a large number of wireless distributed microsensors that can be easily and rapidly deployed to form highly redundant, self-configuring, and ad hoc sensor networks. To facilitate ease of deployment, these sensors should operate on battery for extended periods of time. A particular challenge in maintaining extended battery lifetime lies in achieving communications with low power. This paper presents a direct-sequence spread-spectrum modem architecture that provides robust communications for wireless sensor networks while dissipating very low power. The modem architecture has been verified in an FPGA implementation that dissipates only 33 mW for both transmission and reception. The implementation can be easily mapped to an ASIC technology, with an estimated power performance of less than 1 mW.
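The robustness of direct-sequence spread spectrum comes from processing gain: each data bit is multiplied by a pseudo-noise (PN) chip sequence, and the receiver correlates against the same code, averaging noise down across the chips. A minimal baseband numpy sketch with illustrative parameters, not the modem's actual design:

```python
import numpy as np

rng = np.random.default_rng(7)
chips_per_bit = 31
pn = rng.choice([-1, 1], size=chips_per_bit)      # shared PN spreading code

bits = rng.choice([-1, 1], size=16)               # BPSK data symbols
tx = np.repeat(bits, chips_per_bit) * np.tile(pn, len(bits))  # spread

noise = rng.normal(scale=2.0, size=tx.size)       # harsh channel
rx = tx + noise

# Despread: correlate each bit-length chunk against the PN code.
chunks = rx.reshape(len(bits), chips_per_bit)
decoded = np.sign(chunks @ pn)
print("bit errors:", int(np.sum(decoded != bits)))
```

Even with per-chip noise twice the signal amplitude, the 31-chip correlation typically recovers every bit, which is why spreading allows reliable links at low transmit power.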
A task control architecture for autonomous robots
NASA Technical Reports Server (NTRS)
Simmons, Reid; Mitchell, Tom
1990-01-01
An architecture is presented for controlling robots that have multiple tasks, operate in dynamic domains, and require a fair degree of autonomy. The architecture is built on several layers of functionality, including a distributed communication layer, a behavior layer for querying sensors, expanding goals, and executing commands, and a task level for managing the temporal aspects of planning and achieving goals, coordinating tasks, allocating resources, monitoring, and recovering from errors. Application to a legged planetary rover and an indoor mobile manipulator is described.
2005-12-01
weapon system evaluation as a high-level architecture and distributed interactive simulation compliant, human-in-the-loop, virtual environment... Directorate to participate in the Limited Early User Evaluation (LEUE) of the Common Avionics Architecture System (CAAS) cockpit. ARL conducted a human... CAAS, the UH-60M PO conducted a limited early user evaluation (LEUE) to evaluate the integration of the CAAS in the UH-60M crew station. The
Hypercluster Parallel Processor
NASA Technical Reports Server (NTRS)
Blech, Richard A.; Cole, Gary L.; Milner, Edward J.; Quealy, Angela
1992-01-01
Hypercluster computer system includes multiple digital processors, operation of which is coordinated through specialized software. Configurable according to various parallel-computing architectures of shared-memory or distributed-memory class, including scalar computer, vector computer, reduced-instruction-set computer, and complex-instruction-set computer. Designed as flexible, relatively inexpensive system that provides single programming and operating environment within which one can investigate effects of various parallel-computing architectures and combinations on performance in solution of complicated problems like those of three-dimensional flows in turbomachines. Hypercluster software and architectural concepts are in public domain.
Distributed numerical controllers
NASA Astrophysics Data System (ADS)
Orban, Peter E.
2001-12-01
While the basic principles of Numerical Controllers (NC) have not changed much over the years, the implementation of NCs has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers. Distributed implementation promises to offer robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along the hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First, the functional requirements of individual NC modules are analyzed. Module functionality, cycle times, and data requirements are examined. Next, the infrastructure for distributed node implementation is reviewed. Various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.
Systematic Development of Intelligent Systems for Public Road Transport.
García, Carmelo R; Quesada-Arencibia, Alexis; Cristóbal, Teresa; Padrón, Gabino; Alayón, Francisco
2016-07-16
This paper presents an architecture model for the development of intelligent systems for public passenger transport by road. The main objective of our proposal is to provide a framework for the systematic development and deployment of telematics systems to improve various aspects of this type of transport, such as efficiency, accessibility and safety. The architecture model presented herein is based on international standards on intelligent transport system architectures, ubiquitous computing and service-oriented architecture for distributed systems. To illustrate the utility of the model, we also present a use case of a monitoring system for stops on a public passenger road transport network. PMID:27438836
Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture
NASA Technical Reports Server (NTRS)
Fiene, Bruce F.
1994-01-01
The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user-configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems, provided the appropriate firewall authentication system is installed on the client.
Pi-Sat: A Low Cost Small Satellite and Distributed Spacecraft Mission System Test Platform
NASA Technical Reports Server (NTRS)
Cudmore, Alan
2015-01-01
Current technology and budget trends indicate a shift in satellite architectures from large, expensive single-satellite missions to small, low-cost distributed spacecraft missions. At the center of this shift is the SmallSat/Cubesat architecture. The primary goal of the Pi-Sat project is to create a low-cost and easy-to-use Distributed Spacecraft Mission (DSM) test bed to facilitate the research and development of next-generation DSM technologies and concepts. This test bed also serves as a realistic software development platform for SmallSat and Cubesat architectures. The Pi-Sat is based on the popular $35 Raspberry Pi single-board computer featuring a 700 MHz ARM processor, 512MB of RAM, a flash memory card, and a wealth of IO options. The Raspberry Pi runs the Linux operating system and can easily run Code 582's Core Flight System flight software architecture. The low cost and high availability of the Raspberry Pi make it an ideal platform for Distributed Spacecraft Mission and Cubesat software development. The Pi-Sat models currently include a Pi-Sat 1U Cube, a Pi-Sat Wireless Node, and a Pi-Sat Cubesat processor card. The Pi-Sat project takes advantage of many popular trends in the Maker community, including low-cost electronics, 3D printing, and rapid prototyping, in order to provide a realistic platform for flight software testing, training, and technology development. The Pi-Sat has also provided fantastic hands-on training opportunities for NASA summer interns and Pathways students.
76 FR 17158 - Assumption Buster Workshop: Distributed Data Schemes Provide Security
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-28
... Schemes Provide Security''. Distributed data architectures, such as cloud computing, offer very attractive... locating your data in the cloud, and by breaking it up and replicating different segments throughout the...
NASA Astrophysics Data System (ADS)
Abdel-Fattah, Mohamed I.; Slatt, Roger M.
2013-12-01
Understanding sequence stratigraphic architecture in incised valleys is a crucial step toward understanding the effect of relative sea level changes on reservoir characterization and architecture. This paper presents a sequence stratigraphic framework of the incised-valley strata within the late Messinian Abu Madi Formation based on seismic and borehole data. Analysis of sand-body distribution reveals that fluvial channel sandstones in the Abu Madi Formation in the Baltim Fields, offshore Nile Delta, Egypt, are not randomly distributed but are predictable in their spatial and stratigraphic position. Elucidation of the distribution of sandstones in the Abu Madi incised-valley fill within a sequence stratigraphic framework allows a better understanding of their characterization and architecture during burial. Strata of the Abu Madi Formation are interpreted to comprise two sequences, which are the most complex stratigraphically; their deposits comprise a complex incised-valley fill. The lower sequence (SQ1) consists of a thick incised-valley fill of a Lowstand Systems Tract (LST1) overlain by a Transgressive Systems Tract (TST1) and Highstand Systems Tract (HST1). The upper sequence (SQ2) contains a channel fill interpreted as LST2, which has thin sandstone channel deposits. Above this, channel-fill sandstone and related strata with tidal influence delineate the base of TST2, which is overlain by HST2. Gas reservoirs of the Abu Madi Formation (present-day depth ~3552 m) in the Baltim Fields, Egypt, consist of fluvial lowstand systems tract (LST) sandstones deposited in an incised valley. LST sandstones have a wide range of porosity (15 to 28%) and permeability (1 to 5080 mD), which reflect both depositional facies and diagenetic controls. This work demonstrates the value of constraining and evaluating the impact of sequence stratigraphic distribution on reservoir characterization and architecture in incised-valley deposits, and thus has an important impact on reservoir quality evolution in hydrocarbon exploration in such settings.
Plagianakos, V P; Magoulas, G D; Vrahatis, M N
2006-03-01
Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and, thus, training neural networks utilizing various learning methods. The proposed methodology has large granularity and low synchronization, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
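A minimal sketch of the training-set-partitioning idea, substituting a logistic model for the paper's neural networks and Python's multiprocessing for the parallel virtual machine: each worker returns the error and gradient of its own partition, and the master sums them for a synchronous descent step. All names and parameters are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def partial_grad(args):
    """Cross-entropy error and gradient of a logistic model on one partition."""
    w, X, y = args
    p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-12, 1 - 1e-12)
    err = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad = X.T @ (p - y)
    return err, grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4000, 10))
    y = (X @ rng.normal(size=10) > 0).astype(float)  # synthetic labels

    w = np.zeros(10)
    parts = np.array_split(np.arange(4000), 4)       # four "workers"
    with Pool(4) as pool:
        for _ in range(200):                         # synchronous full-batch descent
            results = pool.map(partial_grad,
                               [(w, X[idx], y[idx]) for idx in parts])
            grad = sum(g for _, g in results) / len(y)
            w -= 0.5 * grad
    print("trained weights:", np.round(w, 2))
```

Because the per-partition error and gradient terms simply add, the scheme has the large granularity and low synchronization the abstract describes: workers only exchange one gradient vector per epoch.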
Biologist postdoctoral fellow | Center for Cancer Research
A fully funded postdoctoral position is available at the National Cancer Institute on the NIH main campus in Bethesda, MD. Specifically, this opening is for an ongoing project examining the role of tissue architecture and mechanotransduction in the establishment of metastatic lesions, using zebrafish as a model system. The NIH will provide funding and benefits, though extramural fellowship applications will be strongly encouraged and supported.
Flexible manufacturing of aircraft engine parts
NASA Astrophysics Data System (ADS)
Hassan, Ossama M.; Jenkins, Douglas M.
1992-06-01
GE Aircraft Engines, a major supplier of jet engines for commercial and military aircraft, has developed a fully integrated manufacturing facility to produce aircraft engine components in flexible manufacturing cells. This paper discusses many aspects of the implementation including process technologies, material handling, software control system architecture, socio-technical systems and lessons learned. Emphasis is placed on the appropriate use of automation in a flexible manufacturing system.
TriG: Next Generation Scalable Spaceborne GNSS Receiver
NASA Technical Reports Server (NTRS)
Tien, Jeffrey Y.; Okihiro, Brian Bachman; Esterhuizen, Stephan X.; Franklin, Garth W.; Meehan, Thomas K.; Munson, Timothy N.; Robison, David E.; Turbiner, Dmitry; Young, Lawrence E.
2012-01-01
TriG is the next-generation NASA scalable space GNSS science receiver. It will track all GNSS and additional signals (i.e., GPS, GLONASS, Galileo, Compass and Doris). It features a scalable 3U architecture and is fully software- and firmware-reconfigurable, enabling optimization to meet specific mission requirements. The TriG GNSS EM is currently undergoing testing and is expected to complete full performance testing later this year.
Abstract Machines for Polymorphous Computing
2007-12-01
Models and LLCs have been developed for Raw, MONARCH [18][19], TRIPS [20][21], and Smart Memories [22][23]. These research projects were conducted...used here. In our approach on Raw, two key concepts are used to fully leverage the Raw architecture [34]. First, the tile grid is viewed as a
UBioLab: a web-laboratory for ubiquitous in-silico experiments.
Bartocci, Ezio; Cacciagrano, Diletta; Di Berardini, Maria Rita; Merelli, Emanuela; Vito, Leonardo
2012-07-09
The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available nowadays on the Internet represents a big challenge for biologists, as concerns their management and visualization, and for bioinformaticians, as concerns the possibility of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework aiming at integrating such resources as in a physical laboratory has to tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The framework UBioLab has been designed and developed as a prototype following the above objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web and workflow techniques, give evidence of an effort in such a direction. The integration of a semantic knowledge management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows and an intelligent agent-based technology for their distributed execution allows UBioLab to be a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing and inferring any (semantic and computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, as well as (iii) a transparent, automatic and distributed environment for correct experiment executions.
Optimizing Data Management in Grid Environments
NASA Astrophysics Data System (ADS)
Zissimos, Antonis; Doka, Katerina; Chazapis, Antony; Tsoumakos, Dimitrios; Koziris, Nectarios
Grids currently serve as platforms for numerous scientific as well as business applications that generate and access vast amounts of data. In this paper, we address the need for efficient, scalable and robust data management in Grid environments. We propose a fully decentralized and adaptive mechanism comprising two components: a Distributed Replica Location Service (DRLS) and a data transfer mechanism called GridTorrent. Both adopt Peer-to-Peer techniques in order to overcome performance bottlenecks and single points of failure. On one hand, DRLS ensures resilience by relying on a Byzantine-tolerant protocol and is able to handle massive concurrent requests even during node churn. On the other hand, GridTorrent allows for maximum bandwidth utilization through collaborative sharing among the various data providers and consumers. The proposed integrated architecture is completely backwards-compatible with already deployed Grids. To demonstrate these points, experiments have been conducted in LAN as well as WAN environments under various workloads. The evaluation shows that our scheme vastly outperforms the conventional mechanisms in both efficiency (up to 10 times faster) and robustness in the case of failures and flash crowd instances.
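As a flavor of how a fully decentralized replica catalogue avoids a central server, the toy Python sketch below places each file's replica records on the k successor nodes of a consistent-hash ring, so any node can locate them and the mapping survives node loss. This is a generic peer-to-peer pattern, not the DRLS protocol itself, which additionally provides Byzantine fault tolerance.

```python
import bisect
import hashlib

def h(s):
    """Hash a string onto the ring's integer key space."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class ReplicaRing:
    """Toy decentralized replica catalogue on a consistent-hash ring."""

    def __init__(self, nodes, replicas=3):
        self.replicas = replicas
        self.ring = sorted((h(n), n) for n in nodes)

    def locate(self, filename):
        """The `replicas` nodes responsible for this file's records."""
        keys = [k for k, _ in self.ring]
        i = bisect.bisect(keys, h(filename)) % len(self.ring)
        return [self.ring[(i + j) % len(self.ring)][1]
                for j in range(self.replicas)]

ring = ReplicaRing([f"node{i}" for i in range(8)])
print(ring.locate("results/run42.h5"))
```

Because responsibility is derived from hashing alone, adding or removing a node remaps only the keys adjacent to it on the ring, which is what keeps lookups scalable under churn.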
White matter pathways and social cognition.
Wang, Yin; Metoki, Athanasia; Alm, Kylie H; Olson, Ingrid R
2018-04-20
There is a growing consensus that social cognition and behavior emerge from interactions across distributed regions of the "social brain". Researchers have traditionally focused their attention on functional response properties of these gray matter networks and neglected the vital role of white matter connections in establishing such networks and their functions. In this article, we conduct a comprehensive review of prior research on structural connectivity in social neuroscience and highlight the importance of this literature in clarifying brain mechanisms of social cognition. We pay particular attention to three key social processes: face processing, embodied cognition, and theory of mind, and their respective underlying neural networks. To fully identify and characterize the anatomical architecture of these networks, we further implement probabilistic tractography on a large sample of diffusion-weighted imaging data. The combination of an in-depth literature review and the empirical investigation gives us an unprecedented, well-defined landscape of white matter pathways underlying major social brain networks. Finally, we discuss current problems in the field, outline suggestions for best practice in diffusion-imaging data collection and analysis, and offer new directions for future research. Copyright © 2018 Elsevier Ltd. All rights reserved.
A novel pulmonary polyomavirus in alpacas (Vicugna pacos).
Dela Cruz, Florante N; Li, Linlin; Delwart, Eric; Pesavento, P A
2017-03-01
Viral metagenomic analysis detected a novel polyomavirus in a 6-month-old female alpaca (Vicugna pacos) euthanized after a diagnosis of disseminated lymphosarcoma. The viral genome was fully sequenced, found to be similar to other polyomaviruses in gene architecture, and provisionally named Alpaca polyomavirus or AlPyV. Viral nucleic acid was detected by PCR in venous blood, spleen, thymus, and lung. AlPyV phylogenetically clustered in the "Wuki" group of PyVs, which includes the WU and KI polyomaviruses commonly found in human respiratory samples. In an ISH analysis of 17 alpaca necropsies, 7 had detectable virus within the lung. In animals without pneumonia, probe hybridization was restricted to the nuclei of scattered individual bronchiolar epithelial cells. Three of the ISH-positive alpacas had interstitial pneumonia of unknown origin, and in these animals viral nucleic acid was detected in bronchiolar epithelium, type II pneumocytes, and alveolar macrophages. The pattern of AlPyV distribution is consistent with a persistent respiratory virus that has a possible role in respiratory disease. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Isaia, Roberto; Carapezza, Maria Luisa; Conti, Eric; Giulia Di Giuseppe, Maria; Lucchetti, Carlo; Prinzi, Ernesto; Ranaldi, Massimo; Tarchini, Luca; Tramparulo, Francesco; Troiano, Antonio; Vitale, Stefano; Cascella, Enrico; Castello, Nicola; Cicatiello, Alessandro; Maiolino, Marco; Puzio, Domenico; Tazza, Lucia; Villani, Roberto
2017-04-01
Recent volcanism at Campi Flegrei caldera produced more than 70 eruptions in the last 15 ka, which formed different volcanic edifices. The vent distribution was related to the main volcano-tectonic structures active in the caldera, along which part of the present hydrothermal and fumarolic activity is also concentrated, such as in the Solfatara area. In order to define the role of major faults in the Campi Flegrei caldera, we analyzed some volcanic craters (Fondi di Baia and Astroni) and the Agnano caldera by means of different geochemical and geophysical techniques, including CO2 flux, electrical resistivity tomography (ERT), self-potential and permeability surveys. We produced several ERT profiles and different maps of geochemical and geophysical features. Major fault planes were identified by comparing ERT imaging with alignments of anomalies in the maps. The results can improve knowledge of the present state of these volcanoes, which are not yet fully monitored even though they lie within the area with high probability of future vent opening in the Campi Flegrei caldera.
A two-locus model of spatially varying stabilizing or directional selection on a quantitative trait
Geroldinger, Ludwig; Bürger, Reinhard
2014-01-01
The consequences of spatially varying, stabilizing or directional selection on a quantitative trait in a subdivided population are studied. A deterministic two-locus two-deme model is employed to explore the effects of migration, the degree of divergent selection, and the genetic architecture, i.e., the recombination rate and ratio of locus effects, on the maintenance of genetic variation. The possible equilibrium configurations are determined as functions of the migration rate. They depend crucially on the strength of divergent selection and the genetic architecture. The maximum migration rates are investigated below which a stable fully polymorphic equilibrium or a stable single-locus polymorphism can exist. Under stabilizing selection, but with different optima in the demes, strong recombination may facilitate the maintenance of polymorphism. Usually, however, and in particular with directional selection in opposite directions, the critical migration rates are maximized by a concentrated genetic architecture, i.e., by a major locus and a tightly linked minor one. Thus, complementing previous work on the evolution of genetic architectures in subdivided populations subject to diversifying selection, it is shown that concentrated architectures may aid the maintenance of polymorphism. Conditions are obtained when this is the case. Finally, the dependence of the phenotypic variance, linkage disequilibrium, and various measures of local adaptation and differentiation on the parameters is elaborated. PMID:24726489
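A minimal deterministic sketch of this model class, with illustrative parameters rather than the authors' parameterization or analysis: four haplotype frequencies per deme evolve under Gaussian stabilizing selection toward deme-specific optima, recombination at rate r, and symmetric migration at rate m.

```python
import numpy as np

# Two-locus haplotypes; locus A contributes effect a, locus B contributes b.
HAPS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def next_gen(x, x_other, a, b, opt, s, r, m):
    """One generation in one deme: selection, recombination, then migration."""
    effect = [a * hA + b * hB for hA, hB in HAPS]
    gam = np.zeros(4)
    for i, h1 in enumerate(HAPS):
        for j, h2 in enumerate(HAPS):
            z = effect[i] + effect[j]            # additive diploid trait value
            w = np.exp(-s * (z - opt) ** 2)      # Gaussian stabilizing selection
            p = x[i] * x[j] * w                  # selected genotype weight
            rec1 = HAPS.index((h1[0], h2[1]))    # recombinant gametes
            rec2 = HAPS.index((h2[0], h1[1]))
            gam[i] += p * (1 - r) / 2
            gam[j] += p * (1 - r) / 2
            gam[rec1] += p * r / 2
            gam[rec2] += p * r / 2
    gam /= gam.sum()
    return (1 - m) * gam + m * x_other           # symmetric migration

x1 = np.array([0.25] * 4)
x2 = np.array([0.25] * 4)
for _ in range(2000):
    x1, x2 = (next_gen(x1, x2, a=1.0, b=0.3, opt=+1.0, s=0.5, r=0.1, m=0.01),
              next_gen(x2, x1, a=1.0, b=0.3, opt=-1.0, s=0.5, r=0.1, m=0.01))
print("deme 1 haplotype freqs:", np.round(x1, 3))
print("deme 2 haplotype freqs:", np.round(x2, 3))
```

Sweeping m upward in such a recursion locates the critical migration rate at which the fully polymorphic equilibrium is lost, and varying the effect ratio b/a and r probes the dependence on the genetic architecture that the paper analyzes.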