Open System Architecture design for planet surface systems
NASA Technical Reports Server (NTRS)
Petri, D. A.; Pieniazek, L. A.; Toups, L. D.
1992-01-01
The Open System Architecture is an approach to meeting the needs for flexibility and evolution of the U.S. Space Exploration Initiative, the program for manned exploration of the solar system and its permanent settlement. This paper investigates the issues that future activities of the planetary exploration program must confront, defines the basic concepts on which an Open System Architecture is established, identifies the appropriate features of such an architecture, and discusses examples of Open System Architectures.
NASA Technical Reports Server (NTRS)
Mandl, Dan; Sohlberg, Rob; Frye, Stu; Cappelaere, P.; Derezinski, L.; Ungar, Steve; Ames, Troy; Chien, Steve; Tran, Danny
2007-01-01
A viewgraph presentation on experiments with sensor webs and service oriented architectures is shown. The topics include: 1) Problem; 2) Basic Service Oriented Architecture Approach; 3) Series of Experiments; and 4) Next Experiments.
NASA Technical Reports Server (NTRS)
1983-01-01
An overview of the basic space station infrastructure is presented. A strong case is made for evolving the station using the basic Space Transportation System (STS) to achieve a smooth transition and cost-effective implementation. The integrated logistics support (ILS) element of the overall station infrastructure is investigated. The need for an orbital transport system capability, which is the key to servicing and spacecraft-positioning scenarios and associated mission needs, is examined. Communication is also an extremely important element, and the basic issue of station autonomy versus ground support affects the system and subsystem architecture.
An Open Specification for Space Project Mission Operations Control Architectures
NASA Technical Reports Server (NTRS)
Hooke, A.; Heuser, W. R.
1995-01-01
An 'open specification' for Space Project Mission Operations Control Architectures is under development in the Spacecraft Control Working Group of the American Institute of Aeronautics and Astronautics. This architecture identifies five basic elements incorporated in the design of similar operations systems: Data, System Management, Control Interface, Decision Support Engine, and Space Messaging Service.
Swanson, Larry W.; Bota, Mihail
2010-01-01
The nervous system is a biological computer integrating the body's reflex and voluntary environmental interactions (behavior) with a relatively constant internal state (homeostasis)—promoting survival of the individual and species. The wiring diagram of the nervous system's structural connectivity provides an obligatory foundational model for understanding functional localization at molecular, cellular, systems, and behavioral organization levels. This paper provides a high-level, downwardly extendible, conceptual framework—like a compass and map—for describing and exploring in neuroinformatics systems (such as our Brain Architecture Knowledge Management System) the structural architecture of the nervous system's basic wiring diagram. For this, the Foundational Model of Connectivity's universe of discourse is the structural architecture of nervous system connectivity in all animals at all resolutions, and the model includes two key elements—a set of basic principles and an internally consistent set of concepts (defined vocabulary of standard terms)—arranged in an explicitly defined schema (set of relationships between concepts) allowing automatic inferences. In addition, rules and procedures for creating and modifying the foundational model are considered. Controlled vocabularies with broad community support typically are managed by standing committees of experts that create and refine boundary conditions, and a set of rules that are available on the Web. PMID:21078980
Rapid phenotyping of alfalfa root system architecture
USDA-ARS?s Scientific Manuscript database
Root system architecture (RSA) influences the capacity of an alfalfa plant for symbiotic nitrogen fixation, nutrient uptake and water use efficiency, resistance to frost heaving, winterhardiness, and some pest and pathogen resistance. However, we currently lack a basic understanding of root system d...
Software Architecture Evolution
ERIC Educational Resources Information Center
Barnes, Jeffrey M.
2013-01-01
Many software systems eventually undergo changes to their basic architectural structure. Such changes may be prompted by new feature requests, new quality attribute requirements, changing technology, or other reasons. Whatever the causes, architecture evolution is commonplace in real-world software projects. Today's software architects, however,…
Particulate Matter Filtration Design Considerations for Crewed Spacecraft Life Support Systems
NASA Technical Reports Server (NTRS)
Agui, Juan H.; Vijayakumar, R.; Perry, Jay L.
2016-01-01
Particulate matter filtration is a key component of crewed spacecraft cabin ventilation and life support system (LSS) architectures. The basic particulate matter filtration functional requirements as they relate to an exploration vehicle LSS architecture are presented. Particulate matter filtration concepts are reviewed and design considerations are discussed. A concept for a particulate matter filtration architecture suitable for exploration missions is presented. The conceptual architecture considers the results from developmental work and incorporates best practice design considerations.
A Simple Case Study of a Grid Performance System
NASA Technical Reports Server (NTRS)
Aydt, Ruth; Gunter, Dan; Quesnel, Darcy; Smith, Warren; Taylor, Valerie; Biegel, Bryan (Technical Monitor)
2001-01-01
This document presents a simple case study of a Grid performance system based on the Grid Monitoring Architecture (GMA) being developed by the Grid Forum Performance Working Group. It describes how the various system components would interact for a very basic monitoring scenario, and is intended to introduce people to the terminology and concepts presented in greater detail in other Working Group documents. We believe that by focusing on the simple case first, working group members can familiarize themselves with terminology and concepts, and productively join in the ongoing discussions of the group. In addition, prototype implementations of this basic scenario can be built to explore the feasibility of the proposed architecture and to expose possible shortcomings. Once the simple case is understood and agreed upon, complexities can be added incrementally as warranted by cases not addressed in the most basic implementation described here. Following the basic performance monitoring scenario discussion, unresolved issues are introduced for future discussion.
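The producer/consumer/directory structure that the GMA describes can be sketched in a few lines. All class and method names below are illustrative stand-ins, not the Working Group's actual API:

```python
# Toy sketch of the GMA pattern: producers register with a directory
# service, consumers look them up and subscribe, then receive events.

class Directory:
    """Directory service: maps event types to registered producers."""
    def __init__(self):
        self._registry = {}

    def register(self, event_type, producer):
        self._registry.setdefault(event_type, []).append(producer)

    def lookup(self, event_type):
        return self._registry.get(event_type, [])

class Producer:
    """Publishes monitoring events to subscribed consumers."""
    def __init__(self, name):
        self.name = name
        self._subscribers = []

    def subscribe(self, consumer):
        self._subscribers.append(consumer)

    def publish(self, event):
        for consumer in self._subscribers:
            consumer.receive(self.name, event)

class Consumer:
    """Collects events from producers found via the directory."""
    def __init__(self):
        self.events = []

    def receive(self, source, event):
        self.events.append((source, event))

directory = Directory()
cpu_producer = Producer("host1-cpu")
directory.register("cpu.load", cpu_producer)

consumer = Consumer()
for producer in directory.lookup("cpu.load"):
    producer.subscribe(consumer)

cpu_producer.publish({"load": 0.42})
print(consumer.events)  # [('host1-cpu', {'load': 0.42})]
```

Real GMA deployments add the details this sketch omits: wire protocols, producer metadata in the directory, and query (pull) as well as subscription (push) interactions.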
Future internet architecture and cloud ecosystem: A survey
NASA Astrophysics Data System (ADS)
Wan, Man; Yin, Shiqun
2018-04-01
The Internet has gradually become a social infrastructure, and the existing TCP/IP architecture faces many challenges, so future Internet architecture has become a hot research topic. This paper introduces two lines of thinking about future research on the Internet's architectural system, and probes into future Internet architecture and the cloud ecosystem environment. Finally, we focus on the related research and discuss the basic principles and problems of OpenStack.
Advanced flight control system study
NASA Technical Reports Server (NTRS)
Mcgough, J.; Moses, K.; Klafin, J. F.
1982-01-01
The architecture, requirements, and system elements of an ultrareliable, advanced flight control system are described. The basic criteria are a functional reliability of 10 to the minus 10th power per hour of flight and scheduled maintenance only every 6 months. A distributed system architecture is described, including a multiplexed communication system, a reliable bus controller, the use of skewed sensor arrays, and actuator interfaces. A test bed and flight evaluation program are proposed.
Architectures and protocols for an integrated satellite-terrestrial mobile system
NASA Technical Reports Server (NTRS)
Delre, E.; Dellipriscoli, F.; Iannucci, P.; Menolascino, R.; Settimo, F.
1993-01-01
This paper aims to depict some basic concepts related to the definition of an integrated system for mobile communications, consisting of a satellite network and a terrestrial cellular network. In particular three aspects are discussed: (1) architecture definition for the satellite network; (2) assignment strategy of the satellite channels; and (3) definition of 'internetworking procedures' between cellular and satellite network, according to the selected architecture and the satellite channel assignment strategy.
NASA Technical Reports Server (NTRS)
1983-01-01
The remote manipulating system, the pointing control system, and the external radiator for the core module of the space station are discussed. The principal interfaces for four basic classes of user and transportation vehicles or facilities associated with the space station were examined.
Internet-enabled collaborative agent-based supply chains
NASA Astrophysics Data System (ADS)
Shen, Weiming; Kremer, Rob; Norrie, Douglas H.
2000-12-01
This paper presents some results of our recent research on the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially proposed as a general architecture for Internet-based collaborative agent systems (particularly complex industrial collaborative agent systems), the architecture is well suited to managing the Internet-enabled complex supply chain of a large manufacturing enterprise. The general collaborative agent system architecture, with its basic communication and cooperation services, domain-independent components, prototypes, and mechanisms, is described. Benefits of implementing Internet-enabled supply chains with the proposed infrastructure are discussed. A case study on Internet-enabled supply chain management is presented.
Space Telecommunications Radio System (STRS) Architecture. Part 1; Tutorial - Overview
NASA Technical Reports Server (NTRS)
Handler, Louis M.; Briones, Janette C.; Mortensen, Dale J.; Reinhart, Richard C.
2012-01-01
The Space Telecommunications Radio System (STRS) Architecture Standard provides a NASA standard for software-defined radios. STRS is being demonstrated in the Space Communications and Navigation (SCaN) Testbed, formerly known as the Communications, Navigation and Networking Configurable Testbed (CoNNeCT). Ground station radios communicating with the SCaN Testbed are also being written to comply with the STRS architecture. The STRS Architecture Tutorial Overview presents a general introduction to the STRS architecture standard developed at the NASA Glenn Research Center (GRC), addresses frequently asked questions, and clarifies methods of implementing the standard. The STRS architecture should be used as a base for many of NASA's future telecommunications technologies. The presentation provides a basic understanding of STRS.
Current Thrusts in Ground Robotics: Programs, Systems, Technologies, Issues
2000-03-01
Viewgraph presentation covering ground robotics programs and systems: MPRS, Demo III, MDARS-E and MDARS-I, the Basic UXO Gathering System (BUGS), and JAUGS (Joint Architecture for Unmanned Ground Systems). SPAWAR Systems Center, San Diego, CA 92152-7383.
Mars Science Laboratory thermal control architecture
NASA Technical Reports Server (NTRS)
Bhandari, Pradeep; Birur, Gajanana; Pauken, Michael; Paris, Anthony; Novak, Keith; Prina, Mauro; Ramirez, Brenda; Bame, David
2005-01-01
The Mars Science Laboratory (MSL) mission to land a large rover on Mars is being planned for launch in 2009. This paper describes the basic architecture of the thermal control system, the challenges involved, and the methods used to overcome them through an innovative architecture that maximizes the use of heritage from past projects while meeting the design requirements.
NASA Technical Reports Server (NTRS)
Azarbar, Bahman
1990-01-01
Existing and actively planned mobile satellite systems are competing for a viable share of the spectrum allocated by the International Telecommunications Union (ITU) to the satellite based mobile services in the 1.5/1.6 GHz range. The limited amount of spectrum available worldwide and the sheer number of existing and planned mobile satellite systems dictate the adoption of an architecture which will maximize sharing possibilities. A viable sharing architecture must recognize the operational needs and limitations of the existing systems. Furthermore, recognizing the right of access of the future systems as they will emerge in time, the adopted architecture must allow for additional growth and be amenable to orderly introduction of future systems. An attempt to devise such a sharing architecture is described. A specific example of the application of the basic concept to the existing and planned mobile satellite systems is also discussed.
41 CFR 102-77.10 - What basic Art-in-Architecture policy governs Federal agencies?
Code of Federal Regulations, 2012 CFR
2012-01-01
... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false What basic Art-in-Architecture policy governs Federal agencies? 102-77.10 Section 102-77.10 Public Contracts and Property... PROPERTY 77-ART-IN-ARCHITECTURE General Provisions § 102-77.10 What basic Art-in-Architecture policy...
41 CFR 102-77.10 - What basic Art-in-Architecture policy governs Federal agencies?
Code of Federal Regulations, 2014 CFR
2014-01-01
... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false What basic Art-in-Architecture policy governs Federal agencies? 102-77.10 Section 102-77.10 Public Contracts and Property... PROPERTY 77-ART-IN-ARCHITECTURE General Provisions § 102-77.10 What basic Art-in-Architecture policy...
41 CFR 102-77.10 - What basic Art-in-Architecture policy governs Federal agencies?
Code of Federal Regulations, 2013 CFR
2013-07-01
... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false What basic Art-in-Architecture policy governs Federal agencies? 102-77.10 Section 102-77.10 Public Contracts and Property... PROPERTY 77-ART-IN-ARCHITECTURE General Provisions § 102-77.10 What basic Art-in-Architecture policy...
Optical Disk Technology and Information.
ERIC Educational Resources Information Center
Goldstein, Charles M.
1982-01-01
Provides basic information on videodisks and potential applications, including inexpensive online storage, random access graphics to complement online information systems, hybrid network architectures, office automation systems, and archival storage. (JN)
A development framework for semantically interoperable health information systems.
Lopez, Diego M; Blobel, Bernd G M E
2009-02-01
Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state of the art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported in formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.
Bit-parallel arithmetic in a massively-parallel associative processor
NASA Technical Reports Server (NTRS)
Scherson, Isaac D.; Kramer, David A.; Alleyne, Brian D.
1992-01-01
A simple but powerful new architecture based on a classical associative processor model is presented. Algorithms for performing the four basic arithmetic operations both for integer and floating point operands are described. For m-bit operands, the proposed architecture makes it possible to execute complex operations in O(m) cycles as opposed to O(m exp 2) for bit-serial machines. A word-parallel, bit-parallel, massively-parallel computing system can be constructed using this architecture with VLSI technology. The operation of this system is demonstrated for the fast Fourier transform and matrix multiplication.
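The O(m) claim can be illustrated with a toy model: one ripple-carry step per bit position, applied to every memory word at once. This is a sketch of the idea, not the paper's circuit; the inner loop over words is sequential in Python but conceptually simultaneous in associative hardware.

```python
# Add two m-bit fields in every word of an "associative memory" at once,
# one carry-propagation step per bit: O(m) cycles for all words together.

def parallel_add(a_words, b_words, m):
    """a_words, b_words: lists of m-bit integers, one pair per memory word."""
    n = len(a_words)
    carry = [0] * n
    result = [0] * n
    for bit in range(m):                      # O(m) cycles
        for w in range(n):                    # conceptually simultaneous
            a = (a_words[w] >> bit) & 1
            b = (b_words[w] >> bit) & 1
            s = a ^ b ^ carry[w]              # full-adder sum bit
            carry[w] = (a & b) | (carry[w] & (a ^ b))
            result[w] |= s << bit
    return result

print(parallel_add([3, 7, 100], [5, 9, 27], 8))  # [8, 16, 127]
```

A bit-serial machine would instead need O(m) cycles per bit of each operand pair, giving the O(m^2) total the abstract contrasts against.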
System on Mobile Devices Middleware: Thinking beyond Basic Phones and PDAs
NASA Astrophysics Data System (ADS)
Prasad, Sushil K.
Several classes of emerging applications, spanning domains such as medical informatics, homeland security, mobile commerce, and scientific applications, are collaborative, and a significant portion of these will harness the capabilities of both the stable and mobile infrastructures (the “mobile grid”). Currently, it is possible to develop a collaborative application running on a collection of heterogeneous, possibly mobile, devices, each potentially hosting data stores, using existing middleware technologies such as JXTA, BREW, Compact .NET and J2ME. However, they require too many ad-hoc techniques as well as cumbersome and time-consuming programming. Our System on Mobile Devices (SyD) middleware, on the other hand, has a modular architecture that makes such application development very systematic and streamlined. The architecture supports transactions over mobile data stores, with a range of remote group invocation options and embedded interdependencies among such data store objects. The architecture further provides a persistent uniform object view, group transaction with Quality of Service (QoS) specifications, and XML vocabulary for inter-device communication. I will present the basic SyD concepts, introduce the architecture and the design of the SyD middleware and its components. We will discuss the basic performance figures of SyD components and a few SyD applications on PDAs. SyD platform has led to developments in distributed web service coordination and workflow technologies, which we will briefly discuss. There is a vital need to develop methodologies and systems to empower common users, such as computational scientists, for rapid development of such applications. Our BondFlow system enables rapid configuration and execution of workflows over web services. The small footprint of the system enables them to reside on Java-enabled handheld devices.
Improving a HMM-based off-line handwriting recognition system using MME-PSO optimization
NASA Astrophysics Data System (ADS)
Hamdani, Mahdi; El Abed, Haikal; Hamdani, Tarek M.; Märgner, Volker; Alimi, Adel M.
2011-01-01
One of the nontrivial steps in the development of a classifier is the design of its architecture. This paper presents a new algorithm, Multi Models Evolvement (MME), using Particle Swarm Optimization (PSO). The algorithm is a modified version of basic PSO and is used for the unsupervised design of Hidden Markov Model (HMM) based architectures. As an instance, the proposed algorithm is applied to an Arabic handwriting recognizer based on discrete-probability HMMs. After the optimization of their architectures, the HMMs are trained with the Baum-Welch algorithm. Validation of the system is based on the IfN/ENIT database. The performance of the developed approach is compared to the systems that participated in the 2005 competition on Arabic handwriting recognition organized at the International Conference on Document Analysis and Recognition (ICDAR). The final system is a combination of an optimized HMM with 6 other HMMs obtained by simple variation of the number of states. An absolute improvement of 6% in word recognition rate, to about 81%, is reported relative to the basic system (ARAB-IfN). The proposed recognizer also outperforms most of the known state-of-the-art systems.
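The underlying idea of searching HMM architectures with PSO can be sketched as a toy: particles explore candidate state counts and a fitness function stands in for recognition error after Baum-Welch training. Everything here is illustrative; the paper's MME variant and its fitness evaluation are far more elaborate.

```python
import random

# Toy PSO over a single architecture parameter (HMM state count).
# The quadratic fitness is a hypothetical stand-in for recognition error.

def fitness(n_states):
    return (n_states - 12) ** 2   # pretend 12 states is optimal

def pso_state_count(n_particles=10, iters=30, lo=2, hi=40):
    random.seed(0)
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                               # personal bests
    gbest = min(pos, key=lambda p: fitness(round(p)))
    for _ in range(iters):
        for i in range(n_particles):
            # Standard PSO update: inertia + cognitive + social terms.
            vel[i] = (0.7 * vel[i]
                      + 1.5 * random.random() * (pbest[i] - pos[i])
                      + 1.5 * random.random() * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            if fitness(round(pos[i])) < fitness(round(pbest[i])):
                pbest[i] = pos[i]
                if fitness(round(pos[i])) < fitness(round(gbest)):
                    gbest = pos[i]
    return round(gbest)

print(pso_state_count())  # lands near the toy optimum of 12
```

In the real system, evaluating one particle means training an HMM and measuring recognition rate, which is why the search strategy matters.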
41 CFR 102-76.10 - What basic design and construction policy governs Federal agencies?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What basic design and... Management Federal Property Management Regulations System (Continued) FEDERAL MANAGEMENT REGULATION REAL... must be timely, efficient, and cost effective. (b) Use a distinguished architectural style and form in...
41 CFR 102-77.10 - What basic Art-in-Architecture policy governs Federal agencies?
Code of Federal Regulations, 2011 CFR
2011-01-01
... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What basic Art-in... PROPERTY 77-ART-IN-ARCHITECTURE General Provisions § 102-77.10 What basic Art-in-Architecture policy governs Federal agencies? Federal agencies must incorporate fine arts as an integral part of the total...
41 CFR 102-77.10 - What basic Art-in-Architecture policy governs Federal agencies?
Code of Federal Regulations, 2010 CFR
2010-07-01
... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What basic Art-in... PROPERTY 77-ART-IN-ARCHITECTURE General Provisions § 102-77.10 What basic Art-in-Architecture policy governs Federal agencies? Federal agencies must incorporate fine arts as an integral part of the total...
Asymmetry and basic pathways in sleep-stage transitions
NASA Astrophysics Data System (ADS)
Lo, Chung-Chuan; Bartsch, Ronny P.; Ivanov, Plamen Ch.
2013-04-01
We study dynamical aspects of sleep micro-architecture. We find that sleep dynamics exhibits a high degree of asymmetry, and that the entire class of sleep-stage transition pathways underlying the complexity of sleep dynamics throughout the night can be characterized by two independent asymmetric transition paths. These basic pathways remain stable under sleep disorders, even though the degree of asymmetry is significantly reduced. Our findings demonstrate an intriguing temporal organization in sleep micro-architecture at short time scales that is typical for physical systems exhibiting self-organized criticality (SOC), and indicates nonequilibrium critical dynamics in brain activity during sleep.
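A minimal sketch of the kind of asymmetry measure the abstract describes: for each pair of stages, compare how often the transition runs in one direction versus the reverse. The stage sequence below is invented for illustration and is not the paper's data.

```python
from collections import Counter

def transition_asymmetry(stages):
    """For each stage pair, score 0 (symmetric) to 1 (one-way only)."""
    counts = Counter(zip(stages, stages[1:]))    # count i -> j transitions
    pairs = {tuple(sorted(pair)) for pair in counts}
    asym = {}
    for i, j in sorted(pairs):
        n_ij = counts.get((i, j), 0)
        n_ji = counts.get((j, i), 0)
        asym[(i, j)] = abs(n_ij - n_ji) / (n_ij + n_ji)
    return asym

# Hypothetical hypnogram-like sequence.
stages = ["wake", "light", "deep", "light", "rem", "wake", "light", "rem", "wake"]
print(transition_asymmetry(stages))
```

On this toy sequence the light/deep pair is symmetric (score 0) while light-to-rem and rem-to-wake run one way only (score 1), the sort of directional structure the paper quantifies.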
Crew systems and architectural considerations for first lunar surface return missions
NASA Astrophysics Data System (ADS)
Winisdoerffer, F.; Ximenes, S.
1992-08-01
The design requirements for the habitability of the pressurized volumes of a typical first manned lander are presented. Attention is given to providing dual habitation/exploration services (EVA/IVA), supporting the separation of surface and flight functions, allowing growth potential based on site characteristics, and utilizing in situ resources. Lunar lander conceptual diagrams are provided for the basic system architecture, automatic cargo delivery, the piloted crew module, and the pressurized volumes.
Functional language and data flow architectures
NASA Technical Reports Server (NTRS)
Ercegovac, M. D.; Patel, D. R.; Lang, T.
1983-01-01
This is a tutorial article about language and architecture approaches for highly concurrent computer systems based on the functional style of programming. The discussion concentrates on the basic aspects of functional languages, and sequencing models such as data-flow, demand-driven and reduction which are essential at the machine organization level. Several examples of highly concurrent machines are described.
NASA Astrophysics Data System (ADS)
Kambe, Hidetoshi; Mitsui, Hiroyasu; Endo, Satoshi; Koizumi, Hisao
The applications of embedded system technologies have spread widely across products such as home appliances, cellular phones, automobiles, and industrial machines. Owing to intensified competition, embedded software has expanded its role in realizing sophisticated functions, and new development methods such as hardware/software (HW/SW) co-design, which unites HW and SW development, have been researched. The shortfall of embedded SW engineers in Japan was estimated at approximately 99,000 in 2006. Embedded SW engineers should understand HW technologies and system architecture design as well as SW technologies; however, few universities offer this kind of education systematically. We propose a student experiment method for learning the basics of embedded system development, comprising a set of experiments for developing embedded SW, developing embedded HW, and experiencing HW/SW co-design. The co-design experiment helps students learn the basics of embedded system architecture design and the flow of designing actual HW and SW modules. We developed these experiments and evaluated them.
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This exploratory study initiated our effort to understand performance modeling on parallel systems. The basic goal of performance modeling is to understand and predict the performance of a computer program or set of programs on a computer system. Performance modeling has numerous applications, including evaluation of algorithms, optimization of code implementations, parallel library development, comparison of system architectures, parallel system design, and procurement of new systems. Our work lays the basis for the construction of parallel libraries that allow for the reconstruction of application codes on several distinct architectures so as to assure performance portability. Following our strategy, once the requirements of applications are well understood, one can then construct a library in a layered fashion. The top level of this library will consist of architecture-independent geometric, numerical, and symbolic algorithms that are needed by the sample of applications. These routines should be written in a language that is portable across the targeted architectures.
GPS Block 2R Time Standard Assembly (TSA) architecture
NASA Technical Reports Server (NTRS)
Baker, Anthony P.
1990-01-01
The underlying philosophy of the Global Positioning System (GPS) Block 2R Time Standard Assembly (TSA) architecture is to use two frequency sources, one fixed reference frequency source and one system frequency source, and to couple the system frequency source to the reference frequency source via a sampled-data loop. The system source provides the basic clock frequency and timing for the space vehicle (SV) and uses a voltage-controlled crystal oscillator (VCXO) with high short-term stability. The reference source is an atomic frequency standard (AFS) with high long-term stability. The architecture can support any type of frequency standard; in the system design, rubidium, cesium, and hydrogen-maser standards outputting a canonical frequency were accommodated. The architecture is software intensive: all VCXO adjustments are digital, are calculated by a processor, and are applied to the VCXO via a digital-to-analog converter.
Peeling the Onion: Okapi System Architecture and Software Design Issues.
ERIC Educational Resources Information Center
Jones, S.; And Others
1997-01-01
Discusses software design issues for Okapi, an information retrieval system that incorporates both search engine and user interface and supports weighted searching, relevance feedback, and query expansion. The basic search system, adjacency searching, and moving toward a distributed system are discussed. (Author/LRW)
Radiant exchange in partially specular architectural environments
NASA Astrophysics Data System (ADS)
Beamer, C. Walter; Muehleisen, Ralph T.
2003-10-01
The radiant exchange method, also known as radiosity, was originally developed for thermal radiative heat transfer applications. Later it was used to model architectural lighting systems, and more recently it has been extended to model acoustic systems. While there are subtle differences in these applications, the basic method is based on solving a system of energy balance equations, and it is best applied to spaces with mainly diffuse reflecting surfaces. The obvious drawback to this method is that it is based around the assumption that all surfaces in the system are diffuse reflectors. Because almost all architectural systems have at least some partially specular reflecting surfaces in the system it is important to extend the radiant exchange method to deal with this type of surface reflection. [Work supported by NSF.]
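The energy-balance system at the heart of the method, B_i = E_i + rho_i * sum_j F_ij * B_j, can be solved by simple fixed-point iteration. The three surfaces, form factors, and reflectances below are invented for illustration:

```python
# Minimal diffuse-radiosity sketch: iterate B_i = E_i + rho_i * sum_j F_ij * B_j
# (Jacobi fixed point) until the radiosities B settle.

def solve_radiosity(E, rho, F, iters=200):
    n = len(E)
    B = E[:]                     # start from emitted-only radiosity
    for _ in range(iters):
        B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

E = [10.0, 0.0, 0.0]            # only surface 0 emits
rho = [0.3, 0.5, 0.8]           # diffuse reflectances
F = [[0.0, 0.4, 0.4],           # form factors; each row sums to <= 1
     [0.4, 0.0, 0.4],
     [0.4, 0.4, 0.0]]

B = solve_radiosity(E, rho, F)
print([round(b, 3) for b in B])
```

Iteration converges here because each rho_i and each form-factor row sum are below 1, making the update a contraction. Handling partially specular surfaces, the extension the abstract argues for, requires replacing the scalar form factors with direction-dependent exchange terms.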
National Launch System comparative economic analysis
NASA Technical Reports Server (NTRS)
Prince, A.
1992-01-01
Results are presented from an analysis of the economic benefits (or losses), in the form of life-cycle cost savings, resulting from development of the National Launch System (NLS) family of launch vehicles. The analysis compared various NLS-based architectures with the current Shuttle/Titan IV fleet. The basic methodology was to develop a set of annual payload requirements for Space Station Freedom and LEO, design launch vehicle architectures around these requirements, and perform life-cycle cost analyses on all of the architectures. An SEI requirement was included. Launch failure costs were estimated and combined with the relative reliability assumptions to measure the effects of losses. Based on the analysis, a Shuttle/NLS architecture evolving into a pressurized-logistics-carrier/NLS architecture appears to offer the best long-term cost benefit.
A real-time standard parts inspection based on deep learning
NASA Astrophysics Data System (ADS)
Xu, Kuan; Li, XuDong; Jiang, Hongzhi; Zhao, Huijie
2017-10-01
Standard parts are necessary components of mechanical structures such as bogies and connectors; these structures can shatter or loosen if standard parts are lost, so real-time standard parts inspection systems are essential to guarantee safety. Researchers favor inspection systems based on deep learning because they work well on images with complex backgrounds, which are common in standard parts inspection. A typical inspection system contains two basic components: a feature extractor and an object classifier. For the object classifier, the Region Proposal Network (RPN) is one of the most essential architectures in most state-of-the-art object detection systems. In the basic RPN architecture, however, the Region of Interest (ROI) proposals have fixed sizes (9 anchors per pixel); they are effective but waste considerable computing resources and time. In standard parts detection, the parts have known sizes, so we can choose anchor sizes from the ground truths through machine learning. Experiments show that we can use 2 anchors and achieve almost the same accuracy and recall rate. Our standard parts detection system reaches 15 fps on an NVIDIA GTX 1080 GPU while achieving a detection accuracy of 90.01% mAP.
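The anchor-reduction idea can be sketched as follows: instead of the usual 9 RPN anchors per pixel (3 scales by 3 aspect ratios), fit 2 anchor shapes to the known part sizes, for example by clustering ground-truth boxes. The k-means variant, the box data, and the two-cluster seeding below are illustrative assumptions, not the paper's exact procedure:

```python
# Fit 2 anchor shapes to ground-truth (width, height) boxes with a tiny
# k-means (k = 2), seeded with the smallest and largest boxes.

def kmeans_anchor_shapes(boxes, iters=20):
    """boxes: list of (w, h) ground-truth sizes; returns 2 anchor shapes."""
    centers = [min(boxes), max(boxes)]          # extreme-size seeds, k = 2
    for _ in range(iters):
        clusters = [[], []]
        for w, h in boxes:
            # Assign each box to the nearest center by squared distance.
            d = [(w - cw) ** 2 + (h - ch) ** 2 for cw, ch in centers]
            clusters[d.index(min(d))].append((w, h))
        centers = [
            (sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl))
            if cl else centers[idx]
            for idx, cl in enumerate(clusters)
        ]
    return centers

# Hypothetical bolt/nut bounding-box sizes in pixels.
boxes = [(20, 22), (22, 20), (21, 21), (48, 50), (50, 52), (52, 48)]
print(kmeans_anchor_shapes(boxes))  # [(21.0, 21.0), (50.0, 50.0)]
```

Each of the two fitted shapes then replaces a whole family of fixed anchors, which is where the saving in proposals, and hence computation, comes from.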
Highly Adjustable Systems: An Architecture for Future Space Observatories
NASA Astrophysics Data System (ADS)
Arenberg, Jonathan; Conti, Alberto; Redding, David; Lawrence, Charles R.; Hachkowski, Roman; Laskin, Robert; Steeves, John
2017-06-01
Mission costs for groundbreaking space astronomical observatories are increasing to the point of unsustainability. We are investigating the use of adjustable or correctable systems as a means to reduce development, and therefore mission, costs. The poster introduces the promise and possibility of realizing a “net zero CTE” system for the general problem of observatory design and presents the basic systems architecture we are considering. The poster concludes with an overview of our planned study and demonstrations to prove the value of highly adjustable telescopes and systems ahead of the upcoming decadal survey.
Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Schulz, Stefan
2015-01-01
Chronic diseases such as Type 2 Diabetes Mellitus (T2DM) constitute a big burden to the global health economy. T2DM care management requires a multi-disciplinary and multi-organizational approach. Because of different languages and terminologies, education, experiences, skills, etc., such an approach establishes a special interoperability challenge. The solution is a flexible, scalable, business-controlled, adaptive, knowledge-based, intelligent system following a systems-oriented, architecture-centric, ontology-based and policy-driven approach. The architecture of real systems is described using the basics and principles of the Generic Component Model (GCM). For representing the functional aspects of a system, the Business Process Modeling Notation (BPMN) is used. The system architecture obtained is presented using a GCM graphical notation, class diagrams and BPMN diagrams. The architecture-centric approach considers the compositional nature of the real-world system and its functionalities, guarantees coherence, and provides correct inferences. The level of generality provided in this paper facilitates use-case-specific adaptations of the system. In this way, intelligent, adaptive and interoperable T2DM care systems can be derived from the presented model, as demonstrated in another publication.
A support architecture for reliable distributed computing systems
NASA Technical Reports Server (NTRS)
Dasgupta, Partha; Leblanc, Richard J., Jr.
1988-01-01
The Clouds project is well underway toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept for structuring software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.
A multitasking finite state architecture for computer control of an electric powertrain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burba, J.C.
1984-01-01
Finite state techniques provide a common design language between the control engineer and the computer engineer for event-driven computer control systems. They simplify communication and provide a highly maintainable control system understandable by both. This paper describes the development of a control system for an electric vehicle powertrain utilizing finite state concepts. The basics of finite state automata are provided as a framework to discuss a unique multitasking software architecture developed for this application. The architecture employs conventional time-sliced techniques with task scheduling controlled by a finite state machine representation of the control strategy of the powertrain. The complexities of excitation variable sampling in this environment are also considered.
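A minimal sketch of the idea: a finite state machine decides which tasks the time-sliced scheduler runs in each slice. The state, event, and task names below are hypothetical illustrations, not the paper's actual powertrain strategy.

```python
class FiniteStateController:
    """Event-driven FSM that gates which tasks a time-sliced scheduler runs."""

    def __init__(self, initial, transitions, tasks):
        self.state = initial
        self.transitions = transitions  # (state, event) -> next state
        self.tasks = tasks              # state -> list of task callables

    def handle(self, event):
        # Unknown events leave the state unchanged.
        self.state = self.transitions.get((self.state, event), self.state)

    def tick(self):
        # One scheduler time slice: run every task enabled in this state.
        return [task() for task in self.tasks.get(self.state, [])]

ctrl = FiniteStateController(
    initial="IDLE",
    transitions={("IDLE", "key_on"): "DRIVE", ("DRIVE", "fault"): "SHUTDOWN"},
    tasks={"DRIVE": [lambda: "sample_throttle", lambda: "update_motor_current"]},
)
ctrl.handle("key_on")  # controller enters the DRIVE state
```

Separating the transition table from the task table is what keeps such a controller maintainable: the control strategy can change without touching the scheduler.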
General-Purpose Electronic System Tests Aircraft
NASA Technical Reports Server (NTRS)
Glover, Richard D.
1989-01-01
Versatile digital equipment supports research, development, and maintenance. The extended aircraft interrogation and display system is a general-purpose assembly of digital electronic equipment used on the ground for testing digital electronic systems on advanced aircraft. It offers many advanced features, including multiple 16-bit microprocessors, a pipeline data-flow architecture, an advanced operating system, and resident software-development tools. The basic collection of software includes programs for handling many types of data and for generating displays in various formats, and the user easily extends this basic software library. Hardware and software interfaces to user-provided subsystems are designed for flexibility in configuration to meet the user's requirements.
Uciteli, Alexandr; Groß, Silvia; Kireyev, Sergej; Herre, Heinrich
2011-08-09
This paper presents an ontologically founded basic architecture for information systems which are intended to capture, represent, and maintain metadata for various domains of clinical and epidemiological research. Clinical trials constitute an important basis for clinical research, and the accurate specification of metadata, their documentation, and their application in clinical and epidemiological study projects represent a significant expense in project preparation and have a relevant impact on the value and quality of these studies. An ontological foundation of an information system provides a semantic framework for the precise specification of those entities which are presented in this system. This semantic framework should be grounded, according to our approach, on a suitable top-level ontology. Such an ontological foundation leads to a deeper understanding of the entities of the domain under consideration and provides a common unifying semantic basis which supports the integration of data and the interoperability between different information systems. The intended information systems will be applied to the field of clinical and epidemiological research and will provide, depending on the application context, a variety of functionalities. In the present paper, we focus on a basic architecture which might be common to all such information systems. The research set forth in this paper is included in a broader framework of clinical research and continues the work of the IMISE on these topics.
The Center for Advanced Systems and Engineering (CASE)
2012-01-01
targets from multiple sensors. Qinru Qiu, State University of New York at Binghamton: A Neuromorphic Approach for Intelligent Text Recognition ... Rogers, SUNYIT: Basic Research, Development and Emulation of Derived Models of Neuromorphic Brain Processes to Investigate the Computational Architecture Issues They Present. Work pertaining to the basic research, development and emulation of derived models of neuromorphic brain processes to
A Concept Transformation Learning Model for Architectural Design Learning Process
ERIC Educational Resources Information Center
Wu, Yun-Wu; Weng, Kuo-Hua; Young, Li-Ming
2016-01-01
Generally, in the foundation course of architectural design, much emphasis is placed on teaching of the basic design skills without focusing on teaching students to apply the basic design concepts in their architectural designs or promoting students' own creativity. Therefore, this study aims to propose a concept transformation learning model to…
Neuromorphic Computing – From Materials Research to Systems Architecture Roundtable
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuller, Ivan K.; Stevens, Rick; Pino, Robinson
2015-10-29
Computation in its many forms is the engine that fuels our modern civilization. Modern computation, based on the von Neumann architecture, has allowed, until now, the development of continuous improvements, as predicted by Moore's law. However, computation using current architectures and materials will inevitably, within the next 10 years, reach a limit because of fundamental scientific reasons. DOE convened a roundtable of experts in neuromorphic computing systems, materials science, and computer science in Washington on October 29-30, 2015 to address the following basic questions: Can brain-like (“neuromorphic”) computing devices based on new material concepts and systems be developed to dramatically outperform conventional CMOS-based technology? If so, what are the basic research challenges for materials science and computing? The overarching answer that emerged was: the development of novel functional materials and devices incorporated into unique architectures will allow a revolutionary technological leap toward the implementation of a fully “neuromorphic” computer. To address this challenge, the following issues were considered: the main differences between neuromorphic and conventional computing as related to signaling models, timing/clock, non-volatile memory, architecture, fault tolerance, integrated memory and compute, noise tolerance, analog vs. digital, and in situ learning; new neuromorphic architectures needed to produce lower energy consumption, potential novel nanostructured materials, and enhanced computation; device and materials properties needed to implement functions such as hysteresis, stability, and fault tolerance; and comparisons of different implementations (spin torque, memristors, resistive switching, phase change, and optical schemes) for enhanced breakthroughs in performance, cost, fault tolerance, and/or manufacturability.
Storage system architectures and their characteristics
NASA Technical Reports Server (NTRS)
Sarandrea, Bryan M.
1993-01-01
Not all users' storage requirements call for 20 MB/s data transfer rates, multi-tier file or data migration schemes, or even automated retrieval of data. The number of available storage solutions reflects the broad range of user requirements. It is foolish to think that any one solution can address the complete range of requirements. For users with simple off-line storage requirements, the cost and complexity of high-end solutions would provide no advantage over a simpler solution. The correct answer is to match the requirements of a particular storage need to the various attributes of the available solutions. The goal of this paper is to introduce basic concepts of archiving and storage management in combination with the most common architectures and to provide some insight into how these concepts and architectures address various storage problems. The intent is to provide potential consumers of storage technology with a framework within which to begin the hunt for a solution which meets their particular needs. This paper is not intended to be an exhaustive study or to address all possible solutions or new technologies, but is intended to be a more practical treatment of today's storage system alternatives. Since most commercial storage systems today are built on Open Systems concepts, the majority of these solutions are hosted on the UNIX operating system. For this reason, some of the architectural issues discussed focus on specific UNIX architectural concepts. However, most of the architectures are operating system independent and the conclusions are applicable to such architectures on any operating system.
ERIC Educational Resources Information Center
Fulkerson, Dan
This publication contains student and teacher instructional materials for a course in residential solar systems. The text is designed either as a basic solar course or as a supplement to extend student skills in areas such as architectural drafting, air conditioning and refrigeration, and plumbing. The materials are presented in four units…
Agents Control in Intelligent Learning Systems: The Case of Reactive Characteristics
ERIC Educational Resources Information Center
Laureano-Cruces, Ana Lilia; Ramirez-Rodriguez, Javier; de Arriaga, Fernando; Escarela-Perez, Rafael
2006-01-01
Intelligent learning systems (ILSs) have evolved in the last few years basically because of influences received from multi-agent architectures (MAs). Conflict resolution among agents has been a very important problem for multi-agent systems, with specific features in the case of ILSs. The literature shows that ILSs with cognitive or pedagogical…
Building Systems: Passing Fad or Basic Tool?
ERIC Educational Resources Information Center
Rezab, Donald
Building systems can be traced back to a 1516 A.D. project by Leonardo da Vinci and to a variety of prefabrication projects in every succeeding century. When integrated into large and repetitive spatial units through careful design, building systems can produce an architecture of the first order, as evidenced in the award winning design of…
A specification of 3D manipulation in virtual environments
NASA Technical Reports Server (NTRS)
Su, S. Augustine; Furuta, Richard
1994-01-01
In this paper we discuss the modeling of three basic kinds of 3-D manipulations in the context of a logical hand device and our virtual panel architecture. The logical hand device is a useful software abstraction representing hands in virtual environments. The virtual panel architecture is the 3-D component of the 2-D window systems. Both of the abstractions are intended to form the foundation for adaptable 3-D manipulation.
Space station needs, attributes and architectural options: Study summary
NASA Technical Reports Server (NTRS)
1983-01-01
Space station needs, attributes, and architectural options that affect the future implementation and design of a space station system are examined. Requirements for candidate missions are used to define functional attributes of a space station. Station elements that perform these functions form the basic station architecture. Alternative ways to accomplish these functions are defined and configuration concepts are developed and evaluated. Configuration analyses are carried to the point that budgetary cost estimates of alternate approaches could be made. Emphasis is placed on differential costs for station support elements and benefits that accrue through use of the station.
Avoid Disaster: Use Firewalls for Inter-Intranet Security.
ERIC Educational Resources Information Center
Charnetski, J. R.
1998-01-01
Discusses the use of firewalls for library intranets, highlighting the move from mainframes to PCs, security issues and firewall architecture, and operating systems. Provides a glossary of basic networking terms and a bibliography of suggested reading. (PEN)
Reliability Modeling of Double Beam Bridge Crane
NASA Astrophysics Data System (ADS)
Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li
2018-05-01
This paper briefly describes the structure of the double beam bridge crane and defines its basic parameters. According to the structure and system division of the double beam bridge crane, the reliability architecture of the crane system is proposed and the reliability mathematical model is constructed.
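A reliability model built from a system division of this kind typically combines subsystem reliabilities as series and parallel blocks. The sketch below is a hedged illustration with invented component values and an invented crane decomposition, not the paper's actual model or data.

```python
from math import prod

def series(rels):
    """All subsystems must work: R = R1 * R2 * ... * Rn."""
    return prod(rels)

def parallel(rels):
    """System works if any redundant unit works: R = 1 - (1-R1)...(1-Rn)."""
    return 1 - prod(1 - r for r in rels)

# Illustrative decomposition: hoisting mechanism (motor in series with a
# duplicated, redundant brake) in series with trolley and bridge drives.
r_hoist = series([0.99, parallel([0.95, 0.95])])
r_crane = series([r_hoist, 0.98, 0.97])
```

Note how the redundant brake pair raises its block's reliability (0.9975) above either single brake, while every series element drags overall system reliability down.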
NASA Technical Reports Server (NTRS)
Barnes, Jeffrey M.
2011-01-01
All software systems of significant size and longevity eventually undergo changes to their basic architectural structure. Such changes may be prompted by evolving requirements, changing technology, or other reasons. Whatever the cause, software architecture evolution is commonplace in real world software projects. Recently, software architecture researchers have begun to study this phenomenon in depth. However, this work has suffered from problems of validation; research in this area has tended to make heavy use of toy examples and hypothetical scenarios and has not been well supported by real world examples. To help address this problem, I describe an ongoing effort at the Jet Propulsion Laboratory to re-architect the Advanced Multimission Operations System (AMMOS), which is used to operate NASA's deep-space and astrophysics missions. Based on examination of project documents and interviews with project personnel, I describe the goals and approach of this evolution effort and then present models that capture some of the key architectural changes. Finally, I demonstrate how approaches and formal methods from my previous research in architecture evolution may be applied to this evolution, while using languages and tools already in place at the Jet Propulsion Laboratory.
Multi-Kepler GPU vs. multi-Intel MIC for spin systems simulations
NASA Astrophysics Data System (ADS)
Bernaschi, M.; Bisson, M.; Salvadore, F.
2014-10-01
We present and compare the performance of two many-core architectures, the Nvidia Kepler and the Intel MIC, both in a single system and in cluster configuration, for the simulation of spin systems. As a benchmark we consider the time required to update a single spin of the 3D Heisenberg spin glass model by using the over-relaxation algorithm. We also present data for a traditional high-end multi-core architecture, the Intel Sandy Bridge. The results show that although on the two Intel architectures it is possible to use basically the same code, the performance of an Intel MIC changes dramatically depending on (apparently) minor details. Another issue is that to obtain reasonable scalability with the Intel Phi coprocessor (Phi is the coprocessor that implements the MIC architecture) in a cluster configuration it is necessary to use the so-called offload mode, which reduces the performance of the single system. As to the GPU, the Kepler architecture offers a clear advantage with respect to the previous Fermi architecture while maintaining exactly the same source code. Scalability of the multi-GPU implementation remains very good when using the CPU as a communication co-processor of the GPU. All source codes are provided for inspection and for double-checking the results.
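The over-relaxation update used as the benchmark reflects each spin about its local field, conserving energy exactly, which is why it needs no accept/reject step and vectorizes well on many-core hardware. A plain-Python sketch of a single-spin update (the paper's codes are vectorized; this shows only the arithmetic):

```python
def overrelax(spin, field):
    """Microcanonical over-relaxation move for a classical Heisenberg spin:
    reflect s about its local field h, s' = 2 (s.h) h / |h|^2 - s.
    The energy E = -s.h is exactly conserved by this reflection."""
    dot_sh = sum(si * hi for si, hi in zip(spin, field))
    h2 = sum(hi * hi for hi in field)
    scale = 2.0 * dot_sh / h2
    return [scale * hi - si for si, hi in zip(spin, field)]
```

Because the reflection preserves both the spin's length and its energy, sweeps of this move explore the constant-energy surface and are usually mixed with heat-bath or Metropolis updates for full thermalization.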
A Hybrid Power Management (HPM) Based Vehicle Architecture
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2011-01-01
Society desires vehicles with reduced fuel consumption and reduced emissions. This presents a challenge and an opportunity for industry and the government. The NASA John H. Glenn Research Center (GRC) has developed a Hybrid Power Management (HPM) based vehicle architecture for space and terrestrial vehicles. GRC's Electrical and Electromagnetics Branch of the Avionics and Electrical Systems Division initiated the HPM Program for the GRC Technology Transfer and Partnership Office. HPM is the innovative integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications. The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, providing all power to a common energy storage system, which is used to power the drive motors and vehicle accessory systems, as well as provide power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. This flexible vehicle architecture can be applied to all vehicles to considerably improve system efficiency, reliability, safety, security, and performance. This unique vehicle architecture has the potential to alleviate global energy concerns, improve the environment, stimulate the economy, and enable new missions.
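As a toy illustration of the HPM bus described above, where a primary source feeds a common energy store that alone supplies the drive and accessory loads, here is a hedged energy-bookkeeping sketch; all numbers and names are invented for illustration:

```python
def step_bus(store_kwh, source_kw, loads_kw, dt_h, capacity_kwh):
    """Advance the common energy store by one time step: the source adds
    energy, the loads drain it, and the store saturates at its capacity."""
    net_kw = source_kw - sum(loads_kw)
    return min(capacity_kwh, max(0.0, store_kwh + net_kw * dt_h))

store = 5.0
for _ in range(4):  # four 15-minute steps: 20 kW in, 12 kW out
    store = step_bus(store, source_kw=20.0, loads_kw=[10.0, 2.0],
                     dt_h=0.25, capacity_kwh=10.0)
```

The point of the architecture is visible even in this sketch: the source and the loads never interact directly, so each can be sized and optimized independently against the common store.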
Ultra-Stable Segmented Telescope Sensing and Control Architecture
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Bolcar, Matthew; Knight, Scott; Redding, David
2017-01-01
The LUVOIR team is conducting two full architecture studies. Architecture A, a 15-meter telescope that folds up in an 8.4 m SLS Block 2 shroud, is nearly complete; Architecture B, a 9.2-meter telescope that uses an existing fairing size, will begin study this fall. This talk summarizes the ultra-stable architecture of the 15 m segmented telescope, including the basic requirements, the rationale for the architecture, the technologies employed, and the expected performance. This work builds on several dynamics and thermal studies performed for ATLAST segmented telescope configurations. The most important new element, an approach to actively controlling segments against segment-to-segment motions, is discussed later in the talk.
1981-05-01
factors that cause damage are discussed below. a. Architectural elements. Damage to architectural elements can result in both significant dollar losses ... hazard priority-ranking procedure are: 1. To produce meaningful results which are as simple as possible, considering the existing databases. 2. To ... minimize the amount of data required for meaningful results, i.e., the database should contain only the most fundamental building characteristics. 3. To
Architectural Design Propaedeutics in Russia: History and Prospects
NASA Astrophysics Data System (ADS)
Lee, I. S.
2017-11-01
Architectural design propaedeutics is the introductory course on the basics of composition which largely determines the process of professional training of an architect and a designer and the result of their work in the form of an artistically meaningful artificial human environment. The article gives a brief overview of the history of the development of propaedeutics in Russia, the experience of its application, and the prospects of development of the methods used to teach the basics of composition to future professionals. The article considers the main directions of the development of VKHUTEMAS and the Moscow Architectural Institute, and identifies the connection of propaedeutics with the architectural and design practice of the corresponding period. The article draws on the author's personal experience of learning the basics of composition at the Moscow Architectural Institute in the 1970s. Besides, it presents examples of works made by students of South Ural State University at the Chair of Design and Fine Arts.
A low-cost approach to the exploration of Mars through a robotic technology demonstrator mission
NASA Astrophysics Data System (ADS)
Ellery, Alex; Richter, Lutz; Parnell, John; Baker, Adam
2003-11-01
We present a proposed robotic mission to Mars - Vanguard - for the Aurora Arrow programme which combines an extensive technology demonstrator with a high scientific return. The novel aspect of this technology demonstrator is the demonstration of "water mining" capabilities for in-situ resource utilisation in conjunction with high-value astrobiological investigation within a low mass lander package of 70 kg. The basic architecture comprises a small lander, a micro-rover and a number of ground-penetrating moles. This basic architecture offers the possibility of testing a wide variety of generic technologies associated with space systems and planetary exploration. The architecture provides for the demonstration of specific technologies associated with planetary surface exploration, and with the Aurora programme specifically. Technology demonstration of in-situ resource utilisation will be a necessary precursor to any future human mission to Mars. Furthermore, its modest mass overhead allows the reuse of the already built Mars Express bus, making it a very low cost option.
A low-cost approach to the exploration of Mars through a robotic technology demonstrator mission
NASA Astrophysics Data System (ADS)
Ellery, Alex; Richter, Lutz; Parnell, John; Baker, Adam
2006-10-01
We present a proposed robotic mission to Mars—Vanguard—for the Aurora Arrow programme which combines an extensive technology demonstrator with a high scientific return. The novel aspect of this technology demonstrator is the demonstration of “water mining” capabilities for in situ resource utilisation (ISRU) in conjunction with high-value astrobiological investigation within a low-mass lander package of 70 kg. The basic architecture comprises a small lander, a micro-rover and a number of ground-penetrating moles. This basic architecture offers the possibility of testing a wide variety of generic technologies associated with space systems and planetary exploration. The architecture provides for the demonstration of specific technologies associated with planetary surface exploration, and with the Aurora programme specifically. Technology demonstration of ISRU will be a necessary precursor to any future human mission to Mars. Furthermore, its modest mass overhead allows the re-use of the already built Mars Express bus, making it a very low-cost option.
Molecular basis of angiosperm tree architecture
USDA-ARS?s Scientific Manuscript database
The shoot architecture of trees greatly impacts orchard and forest management methods. Amassing greater knowledge of the molecular genetics behind tree form can benefit these industries as well as contribute to basic knowledge of plant developmental biology. This review covers basic components of ...
Molecular basis of angiosperm tree architecture.
Hollender, Courtney A; Dardick, Chris
2015-04-01
The architecture of trees greatly impacts the productivity of orchards and forestry plantations. Amassing greater knowledge on the molecular genetics that underlie tree form can benefit these industries, as well as contribute to basic knowledge of plant developmental biology. This review describes the fundamental components of branch architecture, a prominent aspect of tree structure, as well as genetic and hormonal influences inferred from studies in model plant systems and from trees with non-standard architectures. The bulk of the molecular and genetic data described here is from studies of fruit trees and poplar, as these species have been the primary subjects of investigation in this field of science.
Integrated information in discrete dynamical systems: motivation and theoretical framework.
Balduzzi, David; Tononi, Giulio
2008-06-13
This paper introduces a time- and state-dependent measure of integrated information, phi, which captures the repertoire of causal states available to a system as a whole. Specifically, phi quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. Such mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) phi varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) phi varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) phi varies as a function of network architecture. High phi values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high phi because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high phi but are inefficient. (iv) In Hopfield networks, phi is low for attractor states and neutral states, but increases if the networks are optimized to achieve tension between local and global interactions. These basic examples appear to match well against neurobiological evidence concerning the neural substrates of consciousness. More generally, phi appears to be a useful metric to characterize the capacity of any physical system to integrate information.
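For a flavor of "information generated by entering a state" on the simplest possible case, the sketch below assumes a deterministic discrete map and a uniform (maximum-entropy) prior over previous states. It is a toy version of the effective-information idea, not the full partition-based phi of the paper.

```python
from math import log2

def information_generated(transition, state):
    """Information generated when a deterministic N-state system enters
    `state`: the reduction of uncertainty about the previous state, from
    log2(N) (uniform prior) down to log2(#states that map to `state`)."""
    n = len(transition)
    causes = sum(1 for prev in transition if transition[prev] == state)
    if causes == 0:
        return 0.0  # unreachable state: no causal account, no information
    return log2(n) - log2(causes)

# A 4-state map where state 0 has a unique cause but state 1 has two.
step = {0: 1, 1: 0, 2: 1, 3: 3}
```

A state with a unique cause rules out all other prior states and so generates the full log2(N) bits; a state many prior states lead to generates less, which mirrors the paper's point that copying or replaying dynamics carries little integrated information.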
System Architectural Concepts: Army Battlefield Command and Control Information Utility (CCIU).
1982-07-25
produce (device-type), the computers they may interface with (required-host), and the identification number of the devices (device-number). Line-printers ... interface in a network PE ... GLOSSARY: Kernel: a layer of the PEOS; implements the basic system primitives. LUS: Local Name Space. Locking: A
A Mobile Sensor Network System for Monitoring of Unfriendly Environments.
Song, Guangming; Zhou, Yaoxin; Ding, Fei; Song, Aiguo
2008-11-14
Observing microclimate changes is one of the most popular applications of wireless sensor networks. However, some target environments are often too dangerous or inaccessible to humans or large robots and there are many challenges for deploying and maintaining wireless sensor networks in those unfriendly environments. This paper presents a mobile sensor network system for solving this problem. The system architecture, the mobile node design, the basic behaviors and advanced network capabilities have been investigated respectively. A wheel-based robotic node architecture is proposed here that can add controlled mobility to wireless sensor networks. A testbed including some prototype nodes has also been created for validating the basic functions of the proposed mobile sensor network system. Motion performance tests have been done to get the positioning errors and power consumption model of the mobile nodes. Results of the autonomous deployment experiment show that the mobile nodes can be distributed evenly into the previously unknown environments. It provides powerful support for network deployment and maintenance and can ensure that the sensor network will work properly in unfriendly environments.
Vehicle infrastructure integration proof of concept : technical description--vehicle : final report
DOT National Transportation Integrated Search
2009-05-19
This report provides the technical description of the VII system developed for the Cooperative Agreement VII Program between the USDOT and the VII Consortium. The basic architectural elements are summarized and detailed descriptions of the hardware a...
Hernando, M Elena; Pascual, Mario; Salvador, Carlos H; García-Sáez, Gema; Rodríguez-Herrero, Agustín; Martínez-Sarriegui, Iñaki; Gómez, Enrique J
2008-09-01
The growing availability of continuous data from medical devices in diabetes management makes it crucial to define novel information technology architectures for efficient data storage, data transmission, and data visualization. The new paradigm of care demands the sharing of information in interoperable systems as the only way to support patient care in a continuum of care scenario. The technological platforms should support all the services required by the actors involved in the care process, located in different scenarios and managing diverse information for different purposes. This article presents basic criteria for defining flexible and adaptive architectures that are capable of interoperating with external systems, and integrating medical devices and decision support tools to extract all the relevant knowledge to support diabetes care.
A broadband multimedia TeleLearning system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Ruiping; Karmouch, A.
1996-12-31
In this paper we discuss a broadband multimedia TeleLearning system under development in the Multimedia Information Research Laboratory at the University of Ottawa. The system aims at providing a seamless environment for TeleLearning using the latest telecommunication and multimedia information processing technology. It basically consists of a media production center, a courseware author site, a courseware database, a courseware user site, and an on-line facilitator site. All these components are distributed over an ATM network and work together to offer a multimedia interactive courseware service. An MHEG-based model is exploited in designing the system architecture to achieve real-time, interactive, and reusable information interchange through heterogeneous platforms. The system architecture, courseware processing strategies, and courseware document models are presented.
Efficient parallel architecture for highly coupled real-time linear system applications
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Homaifar, Abdollah; Barua, Soumavo
1988-01-01
A systematic procedure is developed for exploiting the parallel constructs of computation in a highly coupled, linear system application. An overall top-down design approach is adopted. Differential equations governing the application under consideration are partitioned into subtasks on the basis of a data flow analysis. The interconnected task units constitute a task graph which has to be computed in every update interval. Multiprocessing concepts utilizing parallel integration algorithms are then applied for efficient task graph execution. A simple scheduling routine is developed to handle task allocation while in the multiprocessor mode. Results of simulation and scheduling are compared on the basis of standard performance indices. Processor timing diagrams are developed on the basis of program output accruing to an optimal set of processors. Basic architectural attributes for implementing the system are discussed together with suggestions for processing element design. Emphasis is placed on flexible architectures capable of accommodating widely varying application specifics.
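The task-allocation step described above can be illustrated by a greedy list scheduler that repeatedly assigns a ready task (all predecessors finished) to the earliest-free processor. This is a generic sketch with invented task names and durations, not the authors' scheduling routine, which also accounts for the integration algorithms and update intervals.

```python
def list_schedule(durations, deps, n_procs):
    """Greedy list scheduling of an acyclic task graph onto n_procs
    processors; returns (schedule, makespan), where schedule holds
    (task, processor, start_time) tuples."""
    done_time = {}                  # task -> finish time
    proc_free = [0.0] * n_procs     # processor -> time it becomes free
    remaining = set(durations)
    schedule = []
    while remaining:
        ready = [t for t in remaining
                 if all(p in done_time for p in deps.get(t, []))]
        task = min(ready)           # deterministic tie-break by name
        proc = min(range(n_procs), key=lambda i: proc_free[i])
        start = max([proc_free[proc]] +
                    [done_time[p] for p in deps.get(task, [])])
        done_time[task] = start + durations[task]
        proc_free[proc] = done_time[task]
        schedule.append((task, proc, start))
        remaining.remove(task)
    return schedule, max(done_time.values())

# Toy graph: C needs A and B; D needs C; two processors available.
durations = {"A": 2, "B": 3, "C": 1, "D": 2}
deps = {"C": ["A", "B"], "D": ["C"]}
schedule, makespan = list_schedule(durations, deps, 2)
```

The processor timing diagram mentioned in the abstract is exactly what `schedule` encodes: which task runs where and when, with the makespan as the per-interval performance index.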
Information systems in healthcare - state and steps towards sustainability.
Lenz, R
2009-01-01
To identify core challenges and first steps on the way to sustainable information systems in healthcare. Recent articles on healthcare information technology and related articles from Medical Informatics and Computer Science were reviewed and analyzed. Core challenges that could not be solved over the years are identified. The two core problem areas are process integration, meaning to effectively embed IT systems into routine workflows, and systems integration, meaning to reduce the effort for interconnecting independently developed IT components. Standards for systems integration have improved considerably, but their usefulness is limited where system evolution is needed. Sustainable healthcare information systems should be based on system architectures that support system evolution and avoid costly system replacements every five to ten years. Some basic principles for the design of such systems are separation of concerns, loose coupling, deferred systems design, and service-oriented architectures.
NASA Technical Reports Server (NTRS)
Muratore, John F.
1991-01-01
Lessons learned from operational real-time expert systems are examined, and the basic system architecture is discussed. An expert system is any software that performs tasks to a standard that would normally require a human expert; it implies knowledge contained in data rather than code, and the use of heuristics as well as algorithms. The 15 top lessons learned from the operation of a real-time data system are presented.
A brick-architecture-based mobile under-vehicle inspection system
NASA Astrophysics Data System (ADS)
Qian, Cheng; Page, David; Koschan, Andreas; Abidi, Mongi
2005-05-01
In this paper, a mobile scanning system for real-time under-vehicle inspection is presented, which is founded on a "Brick" architecture. In this architecture, the inspection system is decomposed into bricks of three kinds: sensing, mobility, and computing. These bricks are physically and logically independent and communicate with each other wirelessly. Each brick comprises five modules: data acquisition, data processing, data transmission, power, and self-management. These five modules can be further decomposed into submodules whose functions and interfaces are well defined. Based on this architecture, the system is built from four bricks: two sensing bricks consisting of a range scanner and a line CCD, one mobility brick, and one computing brick. The sensing bricks capture geometric and texture data of the under-vehicle scene, while the mobility brick provides positioning data along the motion path. Data of these three modalities are transmitted to the computing brick, where they are fused to reconstruct a 3D under-vehicle model for visualization and danger inspection. This system has been successfully used in several military applications and has proved to be an effective and safer method for national security.
Large Scale GW Calculations on the Cori System
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven
The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.
Securing the Global Airspace System Via Identity-Based Security
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2015-01-01
Current telecommunications systems have very good security architectures that include authentication and authorization as well as accounting. These three features enable an edge system to obtain access into a radio communication network, request specific Quality-of-Service (QoS) requirements, and ensure proper billing for service. Furthermore, the links are secure. Widely used telecommunication technologies are Long Term Evolution (LTE) and Worldwide Interoperability for Microwave Access (WiMAX). This paper provides a system-level view of network-centric operations for the global airspace system and the problems and issues with deploying new technologies into the system. The paper then focuses on applying the basic security architectures of commercial telecommunication systems and deployment of federated Authentication, Authorization and Accounting systems to provide a scalable, evolvable, reliable, and maintainable solution to enable a globally deployable identity-based secure airspace system.
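The authentication, authorization, and accounting sequence described above can be sketched as a minimal gatekeeper. The identity names, credential store, QoS classes, and accounting ledger below are all invented for illustration and do not reflect any real LTE or WiMAX deployment.

```python
import hashlib
import time

# Invented credential store and authorization table for this sketch.
CREDENTIALS = {"aircraft-042": hashlib.sha256(b"secret").hexdigest()}
AUTHORIZED_QOS = {"aircraft-042": {"voice", "telemetry"}}
ACCOUNTING_LOG = []   # (identity, qos, timestamp) records for billing

def request_access(identity, password, qos):
    # Authentication: does the claimed identity hold valid credentials?
    digest = hashlib.sha256(password.encode()).hexdigest()
    if CREDENTIALS.get(identity) != digest:
        return False
    # Authorization: is this identity allowed the requested QoS class?
    if qos not in AUTHORIZED_QOS.get(identity, set()):
        return False
    # Accounting: record the granted session so service can be billed.
    ACCOUNTING_LOG.append((identity, qos, time.time()))
    return True

print(request_access("aircraft-042", "secret", "telemetry"))  # access granted
```

The three checks are deliberately sequential: a request is only billed (accounted) after both the identity and the requested service class have been verified.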
Health-enabling technologies for pervasive health care: on services and ICT architecture paradigms.
Haux, Reinhold; Howe, Jurgen; Marschollek, Michael; Plischke, Maik; Wolf, Klaus-Hendrik
2008-06-01
Progress in information and communication technologies (ICT) is providing new opportunities for pervasive health care services in aging societies. To identify starting points of health-enabling technologies for pervasive health care. To describe typical services of and contemporary ICT architecture paradigms for pervasive health care. Summarizing outcomes of literature analyses and results from own research projects in this field. Basic functions for pervasive health care with respect to home care comprise emergency detection and alarm, disease management, as well as health status feedback and advice. These functions are complemented by optional (non-health care) functions. Four major paradigms for contemporary ICT architectures are person-centered ICT architectures, home-centered ICT architectures, telehealth service-centered ICT architectures and health care institution-centered ICT architectures. Health-enabling technologies may lead to both new ways of living and new ways of health care. Both ways are interwoven. This has to be considered for appropriate ICT architectures of sensor-enhanced health information systems. IMIA, the International Medical Informatics Association, may be an appropriate forum for interdisciplinary research exchange on health-enabling technologies for pervasive health care.
VLSI architecture for a Reed-Solomon decoder
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor)
1992-01-01
A basic single-chip building block for a Reed-Solomon (RS) decoder system is partitioned into a plurality of sections, the first of which consists of a plurality of syndrome subcells, each of which contains identical standard-basis finite-field multipliers that are programmable between 10- and 8-bit operation. A desired number of basic building blocks may be assembled to provide an RS decoder of any syndrome subcell size that is programmable between 10- and 8-bit operation.
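The standard-basis finite-field multiply that such syndrome cells perform can be sketched in software for the 8-bit case. The field polynomial 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is assumed here because it is a common choice in RS coding; the patented chip's actual polynomial and 10-bit field are not specified in this abstract.

```python
def gf256_mul(a, b, poly=0x11D):
    """Carry-less (XOR) multiply of a and b in GF(2^8), reduced by `poly`."""
    result = 0
    while b:
        if b & 1:            # add the current shifted copy of a
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:        # reduce when the degree reaches 8
            a ^= poly
    return result

# In GF(2^8) every nonzero element satisfies a^255 = 1; with poly 0x11D
# the element 2 is a generator, so squaring/multiplying cycles through
# all 255 nonzero field elements.
x = 2
for _ in range(254):
    x = gf256_mul(x, 2)      # x = 2^255 after the loop
print(x)
```

A hardware standard-basis multiplier implements the same shift-and-reduce structure combinationally, which is why identical cells can be replicated across syndrome subcells.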
Characterization of Model-Based Reasoning Strategies for Use in IVHM Architectures
NASA Technical Reports Server (NTRS)
Poll, Scott; Iverson, David; Patterson-Hine, Ann
2003-01-01
Open architectures are gaining popularity for Integrated Vehicle Health Management (IVHM) applications due to the diversity of subsystem health monitoring strategies in use and the need to integrate a variety of techniques at the system health management level. The basic concept of an open architecture suggests that whatever monitoring or reasoning strategy a subsystem wishes to deploy, the system architecture will support the needs of that subsystem and will be capable of transmitting subsystem health status across subsystem boundaries and up to the system level for system-wide fault identification and diagnosis. There is a need to understand the capabilities of various reasoning engines and how they, coupled with intelligent monitoring techniques, can support fault detection and system level fault management. Researchers in IVHM at NASA Ames Research Center are supporting the development of an IVHM system for liquefying-fuel hybrid rockets. In the initial stage of this project, a few readily available reasoning engines were studied to assess candidate technologies for application in next generation launch systems. Three tools representing the spectrum of model-based reasoning approaches, from a quantitative simulation based approach to a graph-based fault propagation technique, were applied to model the behavior of the Hybrid Combustion Facility testbed at Ames. This paper summarizes the characterization of the modeling process for each of the techniques.
Wang, Yanchao; Sunderraman, Rajshekhar
2006-01-01
In this paper, we propose two architectures for curating PDB data to improve its quality. The first, the PDB Data Curation System, is developed by adding two parts, a Checking Filter and a Curation Engine, between the User Interface and the Database. This architecture supports basic PDB data curation. The other, the PDB Data Curation System with XCML, is designed for further curation and adds four more parts, PDB-XML, PDB, OODB, and Protein-OODB, to the previous one. This architecture uses the XCML language to automatically check errors in PDB data, making PDB data more consistent and accurate. These two tools can be used for cleaning existing PDB files and creating new PDB files. We also show some ideas on how to add constraints and assertions with XCML to get better data. In addition, we discuss data provenance issues that may affect data accuracy and consistency.
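The first architecture's data path (User Interface, then Checking Filter, then Curation Engine, then Database) can be sketched as a small pipeline. The record fields and checking rules below are hypothetical illustrations, not rules taken from the paper.

```python
def checking_filter(record):
    """Return a list of issues found in a PDB-like record (invented rules)."""
    issues = []
    if not record.get("id"):
        issues.append("missing id")
    if record.get("resolution", 0) <= 0:
        issues.append("non-positive resolution")
    return issues

def curation_engine(record, issues):
    """Repair what can be repaired; annotate the record with the rest."""
    fixed = dict(record)
    if "non-positive resolution" in issues:
        fixed["resolution"] = None        # mark unknown rather than store an invalid value
    fixed["curation_notes"] = issues
    return fixed

def curate(records):
    """Pipeline: each record passes the filter; flagged records are curated."""
    db = []
    for rec in records:
        issues = checking_filter(rec)
        db.append(curation_engine(rec, issues) if issues else rec)
    return db

print(curate([{"id": "1ABC", "resolution": 2.1},
              {"id": "", "resolution": -1}]))
```

Clean records pass through untouched, so the filter adds no cost to already-consistent data; only flagged records incur the curation step.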
Heterarchies: Reconciling Networks and Hierarchies.
Cumming, Graeme S
2016-08-01
Social-ecological systems research suffers from a disconnect between hierarchical (top-down or bottom-up) and network (peer-to-peer) analyses. The concept of the heterarchy unifies these perspectives in a single framework. Here, I review the history and application of 'heterarchy' in neuroscience, ecology, archaeology, multiagent control systems, business and organisational studies, and politics. Recognising complex system architecture as a continuum along vertical and lateral axes ('flat versus hierarchical' and 'individual versus networked') suggests four basic types of heterarchy: reticulated, polycentric, pyramidal, and individualistic. Each has different implications for system functioning and resilience. Systems can also shift predictably and abruptly between architectures. Heterarchies suggest new ways of contextualising and generalising from case studies and new methods for analysing complex structure-function relations.
This Old House: Revitalizing Higher Education's Architecture.
ERIC Educational Resources Information Center
Flynn, William J.
2000-01-01
Asserts that in order to transform colleges into learning organizations, the infrastructure of higher education must be analyzed. States that basic relationships must be redesigned--the pedagogical interaction between teacher and student, the tension between faculty and administration, the caste system relationship that has existed between faculty…
Distributed Network and Multiprocessing Minicomputer State-of-the-Art Capabilities.
ERIC Educational Resources Information Center
Theis, Douglas J.
An examination of the capabilities of minicomputers and midicomputers now on the market reveals two basic items which users should evaluate when selecting computers for their own applications: distributed networking systems and multiprocessing architectures. Variables which should be considered in evaluating a distributed networking system…
ERIC Educational Resources Information Center
Snoonian, Deborah
2002-01-01
Describes the main lecture hall at the University of Michigan's Taubman College of Architecture and Urban Planning, which contains state-of-the-art computing, sound, lighting, and projection systems. This working audiovisual (AV) laboratory allows its designer, professor Mojtaba Navvab, to teach the basics of environmental technology and AV design…
Fault tolerant computer control for a Maglev transportation system
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George
1994-01-01
Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.
Imaging through Fog Using Polarization Imaging in the Visible/NIR/SWIR Spectrum
2017-01-11
Figure 6: Basic architecture of the…
Figure 7: Basic architecture of post-processing techniques to recover a dehazed image from a raw image.
Framework for Flexible Security in Group Communications
NASA Technical Reports Server (NTRS)
McDaniel, Patrick; Prakash, Atul
2006-01-01
The Antigone software system defines a framework for the flexible definition and implementation of security policies in group communication systems. Antigone does not dictate the available security policies, but provides high-level mechanisms for implementing them. A central element of the Antigone architecture is a suite of such mechanisms comprising micro-protocols that provide the basic services needed by secure groups.
Architectures Toward Reusable Science Data Systems
NASA Astrophysics Data System (ADS)
Moses, J. F.
2014-12-01
Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.
Architectures Toward Reusable Science Data Systems
NASA Technical Reports Server (NTRS)
Moses, John
2015-01-01
Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.
Expert Systems on Multiprocessor Architectures. Volume 2. Technical Reports
1991-06-01
Report RC 12936 (#58037). IBM T. J. Watson Research Center. July 1987. Alan Jay Smith. Cache memories. Computing Surveys, 14(3): 473-530… basic-shared is an instrument for a shared memory design. The component panels are processor-qload-scrolling-bar-panel, memory-qload-scrolling-bar-panel
Approach for Mitigating Pressure Garment Design Risks in a Mobile Lunar Surface Systems Architecture
NASA Technical Reports Server (NTRS)
Aitchison, Lindsay
2009-01-01
The stated goals of the 2004 Vision for Space Exploration focus on establishing a human presence throughout the solar system, beginning with the establishment of a permanent human presence on the Moon. However, the precise objectives to be accomplished on the lunar surface and the optimal system architecture to achieve those objectives have been a topic of much debate since the inception of the Constellation Program. There are two basic styles of system architectures being traded at the programmatic level: a traditional large outpost that would focus on techniques for survival off our home planet and a greater depth of exploration within one area, or a mobile approach, akin to a series of nomadic camps, that would allow a greater breadth of exploration opportunities. The traditional outpost philosophy is well within the understood pressure garment design space with respect to developing interfaces and operational life cycle models. The mobile outpost, however, combines many unknowns with respect to pressure garment performance and reliability that could dramatically affect the cost and schedule risks associated with the Constellation space suit system. This paper provides an overview of the concepts being traded for a mobile architecture from the operations and hardware implementation perspective, describes the primary risks to the Constellation pressure garment associated with each of the concepts, and summarizes the approach necessary to quantify the pressure garment design risks to enable the Constellation Program to make informed decisions when deciding on an overall lunar surface systems architecture.
Automatic acquisition of domain and procedural knowledge
NASA Technical Reports Server (NTRS)
Ferber, H. J.; Ali, M.
1988-01-01
The design concept and performance of AKAS, an automated knowledge-acquisition system for the development of expert systems, are discussed. AKAS was developed using the FLES knowledge base for the electrical system of the B-737 aircraft and employs a 'learn by being told' strategy. The system comprises four basic modules, a system administration module, a natural-language concept-comprehension module, a knowledge-classification/extraction module, and a knowledge-incorporation module; details of the module architectures are explored.
Using a software-defined computer in teaching the basics of computer architecture and operation
NASA Astrophysics Data System (ADS)
Kosowska, Julia; Mazur, Grzegorz
2017-08-01
The paper describes the concept and implementation of the SDC_One software-defined computer, designed for experimental and didactic purposes. Equipped with extensive hardware monitoring mechanisms, the device enables students to monitor the computer's operation on a bus-transfer-cycle or instruction-cycle basis, providing a practical illustration of basic aspects of a computer's operation. In the paper, we describe the hardware monitoring capabilities of SDC_One and some scenarios for using it in teaching the basics of computer architecture and microprocessor operation.
An Architecture for Controlling Multiple Robots
NASA Technical Reports Server (NTRS)
Aghazarian, Hrand; Pirjanian, Paolo; Schenker, Paul; Huntsberger, Terrance
2004-01-01
The Control Architecture for Multirobot Outpost (CAMPOUT) is a distributed-control architecture for coordinating the activities of multiple robots. In the CAMPOUT, multiple-agent activities and sensor-based controls are derived as group compositions and involve coordination of more basic controllers denoted, for present purposes, as behaviors. The CAMPOUT provides basic mechanistic concepts for representation and execution of distributed group activities. One considers a network of nodes that comprise behaviors (self-contained controllers) augmented with hyper-links, which are used to exchange information between the nodes to achieve coordinated activities. Group behavior is guided by a scripted plan, which encodes a conditional sequence of single-agent activities. Thus, higher-level functionality is composed by coordination of more basic behaviors under the downward task decomposition of a multi-agent planner.
A Multi-Component Automated Laser-Origami System for Cyber-Manufacturing
NASA Astrophysics Data System (ADS)
Ko, Woo-Hyun; Srinivasa, Arun; Kumar, P. R.
2017-12-01
Cyber-manufacturing systems can be enhanced by an integrated network architecture that is easily configurable, reliable, and scalable. We consider a cyber-physical system for use in an origami-type laser-based custom manufacturing machine employing folding and cutting of sheet material to manufacture 3D objects. We have developed such a system for use in a laser-based autonomous custom manufacturing machine equipped with real-time sensing and control. The basic elements in the architecture are built around the laser processing machine. They include a sensing system to estimate the state of the workpiece, a control system determining control inputs for a laser system based on the estimated data and user’s job requests, a robotic arm manipulating the workpiece in the work space, and middleware, named Etherware, supporting the communication among the systems. We demonstrate automated 3D laser cutting and bending to fabricate a 3D product as an experimental result.
The SysMan monitoring service and its management environment
NASA Astrophysics Data System (ADS)
Debski, Andrzej; Janas, Ekkehard
1996-06-01
Management of modern information systems is becoming more and more complex. There is a growing need for powerful, flexible and affordable management tools to assist system managers in maintaining such systems. It is at the same time evident that effective management should integrate network management, system management and application management in a uniform way. Object oriented OSI management architecture with its four basic modelling concepts (information, organization, communication and functional models) together with widely accepted distribution platforms such as ANSA/CORBA, constitutes a reliable and modern framework for the implementation of a management toolset. This paper focuses on the presentation of concepts and implementation results of an object oriented management toolset developed and implemented within the framework of the ESPRIT project 7026 SysMan. An overview is given of the implemented SysMan management services including the System Management Service, Monitoring Service, Network Management Service, Knowledge Service, Domain and Policy Service, and the User Interface. Special attention is paid to the Monitoring Service which incorporates the architectural key entity responsible for event management. Its architecture and building components, especially filters, are emphasized and presented in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yusufoglu, Yusuf
Nature offers many exciting ideas and inspiration for the development of new materials and processes. The toughness of spider silk, the strength and light weight of bone, and the adhesion abilities of the gecko's feet are some of the many examples of high-performance natural materials, which have attracted the interest of scientists seeking to duplicate their properties in man-made materials. Materials found in nature combine many inspiring properties such as miniaturization, sophistication, hierarchical organization, hybridization, and adaptability. In all biological systems, whether very basic or highly complex, nature provides a multiplicity of materials, architectures, systems and functions. Generally, the architectural configurations and material characteristics are the important features that have been duplicated from nature for building synthetic structural composites.
A portable platform for accelerated PIC codes and its application to GPUs using OpenACC
NASA Astrophysics Data System (ADS)
Hariri, F.; Tran, T. M.; Jocksch, A.; Lanti, E.; Progsch, J.; Messmer, P.; Brunner, S.; Gheller, C.; Villard, L.
2016-10-01
We present a portable platform, called PIC_ENGINE, for accelerating Particle-In-Cell (PIC) codes on heterogeneous many-core architectures such as Graphics Processing Units (GPUs). The aim of this development is efficient simulations on future exascale systems by allowing different parallelization strategies depending on the application problem and the specific architecture. To this end, this platform contains the basic steps of the PIC algorithm and has been designed as a test bed for different algorithmic options and data structures. Among the architectures that this engine can explore, particular attention is given here to systems equipped with GPUs. The study demonstrates that our portable PIC implementation based on the OpenACC programming model can achieve performance closely matching theoretical predictions. Using the Cray XC30 system, Piz Daint, at the Swiss National Supercomputing Centre (CSCS), we show that PIC_ENGINE running on an NVIDIA Kepler K20X GPU can outperform the one on an Intel Sandy Bridge 8-core CPU by a factor of 3.4.
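The basic PIC steps the platform encapsulates (charge deposition, field solve, gather, particle push) can be sketched in a 1D electrostatic form. The grid size, particle count, normalized units, nearest-grid-point deposit, and FFT field solve below are simplifications chosen for this illustration, not PIC_ENGINE's implementation.

```python
import numpy as np

ng, npart, L, dt = 64, 10_000, 1.0, 0.05   # invented grid/particle parameters
dx = L / ng
rng = np.random.default_rng(0)
pos = rng.uniform(0, L, npart)             # particle positions on [0, L)
vel = rng.normal(0, 0.1, npart)            # particle velocities

def pic_step(pos, vel):
    # 1) Scatter: deposit particle charge on the grid (nearest grid point),
    #    with a uniform neutralizing background so total charge is zero.
    idx = (pos / dx).astype(int) % ng
    rho = np.bincount(idx, minlength=ng) * (ng / npart) - 1.0
    # 2) Field solve: E from Poisson's equation via FFT (periodic domain),
    #    E_k = -i rho_k / k for each nonzero mode.
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    k[0] = 1.0                             # dummy; the k=0 mode of rho is zero
    E = np.real(np.fft.ifft(-1j * np.fft.fft(rho) / k))
    # 3) Gather: interpolate the grid field back to the particles.
    E_part = E[idx]
    # 4) Push: advance velocities, then positions (periodic wrap).
    vel = vel + E_part * dt
    pos = (pos + vel * dt) % L
    return pos, vel

pos, vel = pic_step(pos, vel)
```

Each of the four stages exposes a different kind of parallelism (per-particle for scatter, gather, and push; per-mode for the field solve), which is why the abstract's point about choosing parallelization strategies per architecture matters for PIC.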
The NASA Integrated Information Technology Architecture
NASA Technical Reports Server (NTRS)
Baldridge, Tim
1997-01-01
This document defines an Information Technology Architecture for the National Aeronautics and Space Administration (NASA), where Information Technology (IT) refers to the hardware, software, standards, protocols and processes that enable the creation, manipulation, storage, organization and sharing of information. An architecture provides an itemization and definition of these IT structures, a view of the relationship of the structures to each other and, most importantly, an accessible view of the whole. It is a fundamental assumption of this document that a useful, interoperable and affordable IT environment is key to the execution of the core NASA scientific and project competencies and business practices. This Architecture represents the highest level system design and guideline for NASA IT related activities and has been created on the authority of the NASA Chief Information Officer (CIO) and will be maintained under the auspices of that office. It addresses all aspects of general purpose, research, administrative and scientific computing and networking throughout the NASA Agency and is applicable to all NASA administrative offices, projects, field centers and remote sites. Through the establishment of five Objectives and six Principles this Architecture provides a blueprint for all NASA IT service providers: civil service, contractor and outsourcer. The most significant of the Objectives and Principles are the commitment to customer-driven IT implementations and the commitment to a simpler, cost-efficient, standards-based, modular IT infrastructure. In order to ensure that the Architecture is presented and defined in the context of the mission, project and business goals of NASA, this Architecture consists of four layers in which each subsequent layer builds on the previous layer. 
They are: 1) the Business Architecture: the operational functions of the business, or Enterprise, 2) the Systems Architecture: the specific Enterprise activities within the context of IT systems, 3) the Technical Architecture: a common, vendor-independent framework for design, integration and implementation of IT systems and 4) the Product Architecture: vendor-specific IT solutions. The Systems Architecture is effectively a description of the end-user "requirements". Generalized end-user requirements are discussed and subsequently organized into specific mission and project functions. The Technical Architecture depicts the framework, and relationship, of the specific IT components that enable the end-user functionality as described in the Systems Architecture. The primary components as described in the Technical Architecture are: 1) Applications: Basic Client Component, Object Creation Applications, Collaborative Applications, Object Analysis Applications, 2) Services: Messaging, Information Broker, Collaboration, Distributed Processing, and 3) Infrastructure: Network, Security, Directory, Certificate Management, Enterprise Management and File System. This Architecture also provides specific Implementation Recommendations, the most significant of which is the recognition of IT as core to NASA activities, and defines a plan, aligned with the NASA strategic planning processes, for keeping the Architecture alive and useful.
Network architecture test-beds as platforms for ubiquitous computing.
Roscoe, Timothy
2008-10-28
Distributed systems research, and in particular ubiquitous computing, has traditionally assumed the Internet as a basic underlying communications substrate. Recently, however, the networking research community has come to question the fundamental design or 'architecture' of the Internet. This has been led by two observations: first, that the Internet as it stands is now almost impossible to evolve to support new functionality; and second, that modern applications of all kinds now use the Internet rather differently, and frequently implement their own 'overlay' networks above it to work around its perceived deficiencies. In this paper, I discuss recent academic projects to allow disruptive change to the Internet architecture, and also outline a radically different view of networking for ubiquitous computing that such proposals might facilitate.
Drafting. Advanced Print Reading--Electrical.
ERIC Educational Resources Information Center
Oregon State Dept. of Education, Salem.
This document is a workbook for drafting students learning advanced print reading for electricity applications. The workbook contains seven units covering the following material: architectural working drawings; architectural symbols and dimensions; basic architectural electrical symbols; wiring symbols; riser diagrams; schematic diagrams; and…
Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.
2011-01-01
This Component-Level Electronic-Assembly Repair (CLEAR) Operational Concept document was developed as a first step in developing the Component-Level Electronic-Assembly Repair (CLEAR) System Architecture (NASA/TM-2011-216956). The CLEAR operational concept defines how the system will be used by the Constellation Program and what needs it meets. The document creates scenarios for major elements of the CLEAR architecture. These scenarios are generic enough to apply to near-Earth, Moon, and Mars missions. The CLEAR operational concept involves basic assumptions about the overall program architecture and interactions with the CLEAR system architecture. The assumptions include spacecraft and operational constraints for near-Earth orbit, Moon, and Mars missions. This document addresses an incremental development strategy where capabilities evolve over time, but it is structured to prevent obsolescence. The approach minimizes flight hardware by exploiting Internet-like telecommunications that enables CLEAR capabilities to remain on Earth and to be uplinked as needed. To minimize crew time and operational cost, CLEAR exploits offline development and validation to support online teleoperations. Operational concept scenarios are developed for diagnostics, repair, and functional test operations. Many of the supporting functions defined in these operational scenarios are further defined as technologies in NASA/TM-2011-216956.
On the national characteristics of Chinese ancient architecture
NASA Astrophysics Data System (ADS)
Yan, Jun; Shan, Xiaoxian
2018-03-01
Architecture is a complex composed of technology and art. It is a concrete reflection of the whole of the local society of its time, and it is basically consistent with that society's content and historical development. This paper analyzes the formation, characteristics, and style of ancient Chinese architecture and expounds its national spirit and characteristics.
A reference web architecture and patterns for real-time visual analytics on large streaming data
NASA Astrophysics Data System (ADS)
Kandogan, Eser; Soroker, Danny; Rohall, Steven; Bak, Peter; van Ham, Frank; Lu, Jie; Ship, Harold-Jeffrey; Wang, Chun-Fu; Lai, Jennifer
2013-12-01
Monitoring and analysis of streaming data, such as social media, sensors, and news feeds, has become increasingly important for business and government. The volume and velocity of incoming data are key challenges. To effectively support monitoring and analysis, statistical and visual analytics techniques need to be seamlessly integrated; analytic techniques for a variety of data types (e.g., text, numerical) and scope (e.g., incremental, rolling-window, global) must be properly accommodated; interaction, collaboration, and coordination among several visualizations must be supported in an efficient manner; and the system should support the use of different analytics techniques in a pluggable manner. Especially in web-based environments, these requirements pose restrictions on the basic visual analytics architecture for streaming data. In this paper we report on our experience of building a reference web architecture for real-time visual analytics of streaming data, identify and discuss architectural patterns that address these challenges, and report on applying the reference architecture for real-time Twitter monitoring and analysis.
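The three analytic scopes named in this abstract (incremental, rolling-window, global) can be sketched with a minimal stream aggregator. This is a generic, stdlib-only illustration of the distinction, not code from the paper; the class and method names are invented:

```python
from collections import deque

class StreamStats:
    """Track a numeric stream under three scopes: incremental (a running
    total updated in O(1)), rolling-window (last N items, bounded memory),
    and global (full history, unbounded)."""
    def __init__(self, window=3):
        self.total = 0.0                     # incremental scope
        self.window = deque(maxlen=window)   # rolling-window scope
        self.history = []                    # global scope
    def push(self, x):
        self.total += x
        self.window.append(x)
        self.history.append(x)
    def incremental_sum(self):
        return self.total
    def window_mean(self):
        return sum(self.window) / len(self.window)
    def global_mean(self):
        return sum(self.history) / len(self.history)

s = StreamStats(window=3)
for x in [1, 2, 3, 4, 5]:
    s.push(x)
print(s.incremental_sum())  # 15.0
print(s.window_mean())      # mean of [3, 4, 5] = 4.0
print(s.global_mean())      # 3.0
```

The design point the abstract raises is visible here: the incremental statistic needs constant memory, the rolling window bounded memory, and the global statistic grows without bound, so a pluggable analytics architecture must accommodate all three.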
[The organization of system of information support of regional health care].
Konovalov, A A
2014-01-01
A comparative analysis was carried out of alternative architectures for a segment of the unified public health-care information system within the framework of the regional modernization program of the Nizhniy Novgorod health care system. Based on an analysis of the total cost of ownership of the information system, the author proposes means of increasing the effectiveness of public investments. Progress toward the target program indicators and the dynamics of the basic informatization indicators of the oblast health care institutions are evaluated.
A remote instruction system empowered by tightly shared haptic sensation
NASA Astrophysics Data System (ADS)
Nishino, Hiroaki; Yamaguchi, Akira; Kagawa, Tsuneo; Utsumiya, Kouichi
2007-09-01
We present a system to realize an on-line instruction environment among physically separated participants based on a multi-modal communication strategy. In addition to visual and acoustic information, the communication modalities commonly used in network environments, our system provides a haptic channel to intuitively convey partners' sense of touch. The human touch sensation, however, is very sensitive to delays and jitter in networked virtual reality (NVR) systems, so a method to compensate for these negative factors must be provided. We present an NVR architecture that implements a basic framework shared by various applications and deals effectively with these problems. We take a hybrid approach, achieving data consistency through a client-server model and scalability through a peer-to-peer model. As an application built on the proposed architecture, a remote instruction system for teaching handwritten characters and line patterns over a Korea-Japan high-speed research network is also described.
First Report on Non-Thermal Plasma Reactor Scaling Criteria and Optimization Models
1998-01-13
decomposition chemistry of nitric oxide and two representative VOCs, trichloroethylene and carbon tetrachloride, and the connection between the basic plasma chemistry, the target species properties, and the reactor operating parameters. System architecture, that is, how NTP reactors can be combined or ganged to achieve higher capacity, will also be briefly discussed.
A Study on Technology Architecture and Serving Approaches of Electronic Government System
NASA Astrophysics Data System (ADS)
Liu, Chunnian; Huang, Yiyun; Pan, Qin
As E-government becomes a very active research area, many solutions to meet citizens' needs are being deployed. This paper presents a technology architecture for E-government systems and service approaches for public administrations. The proposed electronic system addresses the basic E-government requirements of user friendliness, security, interoperability, transparency, and effectiveness in the communication between small and medium sized public organizations and their citizens, businesses, and other public organizations. The paper describes several serving approaches for E-government, including SOA, web services, mobile E-government, and public libraries, each with its own characteristics and application scenarios. A number of E-government issues remain for further research on organizational structure change, including research methodology, data collection and analysis, etc.
Architecture for a 1-GHz Digital RADAR
NASA Technical Reports Server (NTRS)
Mallik, Udayan
2011-01-01
An architecture for a Direct RF-digitization Type Digital Mode RADAR was developed at GSFC in 2008. Two variations of a basic architecture were developed for use on RADAR imaging missions using aircraft and spacecraft. Both systems can operate with a pulse repetition rate up to 10 MHz with 8 received RF samples per pulse repetition interval, or at up to 19 kHz with 4K received RF samples per pulse repetition interval. The first design describes a computer architecture for a Continuous Mode RADAR transceiver with a real-time signal processing and display architecture. The architecture can operate at a high pulse repetition rate without interruption for an infinite amount of time. The second design describes a smaller and less costly burst mode RADAR that can transceive high pulse repetition rate RF signals without interruption for up to 37 seconds. The burst-mode RADAR was designed to operate on an off-line signal processing paradigm. The temporal distribution of RF samples acquired and reported to the RADAR processor remains uniform and free of distortion in both proposed architectures. The majority of the RADAR's electronics is implemented in digital CMOS (complementary metal oxide semiconductor), and analog circuits are restricted to signal amplification operations and analog to digital conversion. An implementation of the proposed systems will create a 1-GHz, Direct RF-digitization Type, L-Band Digital RADAR--the highest band achievable for Nyquist Rate, Direct RF-digitization Systems that do not implement an electronic IF downsample stage (after the receiver signal amplification stage), using commercially available off-the-shelf integrated circuits.
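As a rough cross-check of the two operating modes quoted above, the implied raw sample throughput is nearly identical in both. The arithmetic below uses the figures from the abstract; the byte width per sample is an assumption, not a figure from the abstract:

```python
# Sample throughput for the two modes described in the abstract:
# continuous mode: 10 MHz PRF, 8 RF samples per pulse repetition interval;
# burst mode: 19 kHz PRF, 4K (4096) RF samples per interval, up to 37 s.
def samples_per_second(prf_hz, samples_per_pri):
    return prf_hz * samples_per_pri

continuous = samples_per_second(10e6, 8)     # 80e6 samples/s
burst = samples_per_second(19e3, 4096)       # ~77.8e6 samples/s

# Assumed ADC word size (illustrative, not from the abstract):
BYTES_PER_SAMPLE = 2
burst_buffer_bytes = burst * BYTES_PER_SAMPLE * 37  # ~5.8 GB for a 37 s burst

print(continuous, burst)
```

The near-equality of the two rates suggests both modes were sized against the same front-end data-path bandwidth, with the burst mode trading continuous operation for a much longer sample record per pulse.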
The high speed interconnect system architecture and operation
NASA Astrophysics Data System (ADS)
Anderson, Steven C.
The design and operation of a fiber-optic high-speed interconnect system (HSIS) being developed to meet the requirements of future avionics and flight-control hardware with distributed-system architectures are discussed. The HSIS is intended for 100-Mb/s operation of a local-area network with up to 256 stations. It comprises a bus transmission system (passive star couplers and linear media linked by active elements) and network interface units (NIUs). Each NIU is designed to perform the physical, data link, network, and transport functions defined by the ISO OSI Basic Reference Model (1982 and 1983) and incorporates a fiber-optic transceiver, a high-speed protocol based on the SAE AE-9B linear token-passing data bus (1986), and a specialized application interface unit. The operating modes and capabilities of HSIS are described in detail and illustrated with diagrams.
NASA Technical Reports Server (NTRS)
1989-01-01
The results of the refined conceptual design phase (task 5) of the Simulation Computer System (SCS) study are reported. The SCS is the computational portion of the Payload Training Complex (PTC) providing simulation based training on payload operations of the Space Station Freedom (SSF). In task 4 of the SCS study, the range of architectures suitable for the SCS was explored. Identified system architectures, along with their relative advantages and disadvantages for SCS, were presented in the Conceptual Design Report. Six integrated designs, combining the most promising features from the architectural formulations, were additionally identified in the report. The six integrated designs were evaluated further to distinguish the more viable designs to be refined as conceptual designs. The three designs that were selected represent distinct approaches to achieving a capable and cost effective SCS configuration for the PTC. Here, the results of task 4 (input to this task) are briefly reviewed. Then, prior to describing individual conceptual designs, the PTC facility configuration and the SSF systems architecture that must be supported by the SCS are reviewed. Next, basic features of SCS implementation that have been incorporated into all selected SCS designs are considered. The details of the individual SCS designs are then presented before making a final comparison of the three designs.
NASA Astrophysics Data System (ADS)
Gaševic, Dragan; Djuric, Dragan; Devedžic, Vladan
A relevant initiative from the software engineering community called Model Driven Engineering (MDE) is being developed in parallel with the Semantic Web (Mellor et al. 2003a). The MDE approach to software development suggests that one should first develop a model of the system under study, which is then transformed into the real thing (i.e., an executable software entity). The most important research initiative in this area is the Model Driven Architecture (MDA), which is being developed under the umbrella of the Object Management Group (OMG). This chapter describes the basic concepts of this software engineering effort.
Environment and Architecture - a Paradigm Shift
NASA Astrophysics Data System (ADS)
di Battista, Valerio
The interaction of human cultures and the built environment allows a wide range of interpretations and has been studied inside the domain of many disciplines. This paper discusses three interpretations descending from a systemic approach to the question: - architecture as an "emergence" of the settlement system; - place (and space) as an "accumulator" of time and a "flux" of systems; - landscape as one representation/description of the human settlement. Architecture emerges as a new physical conformation or layout, or as a change in a specific site, arising from actions and representations of political, religious, economical or social powers, being shaped at all times by the material culture belonging to a specific time and place in the course of human evolution. Any inhabited space becomes over time a place as well as a landscape, i.e. a representation of the settlement and a relationship between setting and people. Therefore, any place owns a landscape which, in turn, is a system of physical systems; it could be defined as a system of sites that builds up its own structure stemming from the orographical features and the geometry of land surfaces that set out the basic characters of its space.
NASA Astrophysics Data System (ADS)
Rhodes, Russel E.; Byrd, Raymond J.
1998-01-01
This paper presents a "back of the envelope" technique for fast, timely, on-the-spot assessment of the affordability (profitability) of commercial space transportation architectural concepts. The tool presented here is not intended to replace conventional, detailed costing methodology. The process described enables "quick look" estimations and assumptions to determine effectively whether an initial concept (with its attendant cost estimating line items) provides focus for major leapfrog improvement. The Cost Charts Users Guide provides a generic sample tutorial, building an approximate understanding of the basic launch system cost factors and their representative magnitudes. This process enables the user to develop a net "cost (and price) per payload-mass unit to orbit" incorporating a variety of significant cost drivers, supplemental to basic vehicle cost estimates. If acquisition cost and recurring cost factors (as a function of cost per payload-mass unit to orbit) do not meet the predetermined system-profitability goal, the concept in question will be clearly seen as non-competitive. Because the tool has inherent flexibility, multiple analytical approaches and a variety of interrelated assumptions can be examined in a quick, on-the-spot cost approximation analysis. The technique allows determination of a concept's conformance to system objectives.
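In the spirit of this "back of the envelope" approach, a cost-per-payload-mass estimate can be sketched in a few lines. All names and numbers below are illustrative assumptions, not values from the Cost Charts Users Guide:

```python
# Hypothetical quick-look estimate: amortize acquisition cost over the
# flight campaign, add recurring cost per flight, divide by payload mass.
def cost_per_kg(acquisition_cost, flights, recurring_cost_per_flight, payload_kg):
    """Net cost per payload-mass unit to orbit, in $/kg."""
    per_flight = acquisition_cost / flights + recurring_cost_per_flight
    return per_flight / payload_kg

estimate = cost_per_kg(
    acquisition_cost=2.0e9,           # vehicle development/acquisition, $ (assumed)
    flights=100,                      # flights over which it is amortized (assumed)
    recurring_cost_per_flight=30e6,   # ops + expended hardware per flight, $ (assumed)
    payload_kg=10_000)                # payload mass per flight, kg (assumed)
print(round(estimate))  # 5000 $/kg
```

A concept whose estimate exceeds the predetermined profitability goal can be flagged as non-competitive on the spot, before any detailed costing is attempted, which is exactly the screening role the paper describes.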
NASA Astrophysics Data System (ADS)
Demin, V. A.; Emelyanov, A. V.; Lapkin, D. A.; Erokhin, V. V.; Kashkarov, P. K.; Kovalchuk, M. V.
2016-11-01
The instrumental realization of neuromorphic systems may form the basis of a radically new social and economic setup, redistributing roles between humans and complex technical aggregates. The basic elements of any neuromorphic system are neurons and synapses. New memristive elements based on both organic (polymer) and inorganic materials have been formed, and the possibilities of instrumental implementation of very simple neuromorphic systems with different architectures on the basis of these elements have been demonstrated.
All-memristive neuromorphic computing with level-tuned neurons
NASA Astrophysics Data System (ADS)
Pantazi, Angeliki; Woźniak, Stanisław; Tuma, Tomas; Eleftheriou, Evangelos
2016-09-01
In the new era of cognitive computing, systems will be able to learn and interact with the environment in ways that will drastically enhance the capabilities of current processors, especially in extracting knowledge from vast amount of data obtained from many sources. Brain-inspired neuromorphic computing systems increasingly attract research interest as an alternative to the classical von Neumann processor architecture, mainly because of the coexistence of memory and processing units. In these systems, the basic components are neurons interconnected by synapses. The neurons, based on their nonlinear dynamics, generate spikes that provide the main communication mechanism. The computational tasks are distributed across the neural network, where synapses implement both the memory and the computational units, by means of learning mechanisms such as spike-timing-dependent plasticity. In this work, we present an all-memristive neuromorphic architecture comprising neurons and synapses realized by using the physical properties and state dynamics of phase-change memristors. The architecture employs a novel concept of interconnecting the neurons in the same layer, resulting in level-tuned neuronal characteristics that preferentially process input information. We demonstrate the proposed architecture in the tasks of unsupervised learning and detection of multiple temporal correlations in parallel input streams. The efficiency of the neuromorphic architecture along with the homogenous neuro-synaptic dynamics implemented with nanoscale phase-change memristors represent a significant step towards the development of ultrahigh-density neuromorphic co-processors.
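The spike-timing-dependent plasticity mentioned above can be illustrated with the standard pair-based textbook rule. This sketch is generic and does not model the paper's phase-change memristor dynamics; the amplitudes and time constant are invented:

```python
import math

# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic spike, depress otherwise, with exponential decay in the
# spike-time difference. Parameters are illustrative.
A_PLUS, A_MINUS = 0.05, 0.06   # potentiation / depression amplitudes
TAU = 20.0                     # plasticity time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)  # causal pair: weight increases
w += stdp_dw(t_pre=30.0, t_post=25.0)  # anti-causal pair: weight decreases
print(w)
```

In the architecture described in the abstract, an update of this general shape is realized physically by the conductance change of a phase-change synaptic device rather than computed in software.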
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1995-01-01
This paper presents a step-by-step tutorial on the methods and tools used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs (SURE, ASSIST, STEM, and PAWS) that automate the generation and the solution of these models.
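As a minimal example of the state-space method the tutorial teaches, the Kolmogorov forward equations of a small fault/reconfiguration model can be integrated directly. The model and rates below are illustrative; the paper's tools (SURE, ASSIST, STEM, PAWS) solve such models far more accurately:

```python
# Triplex processor that must reconfigure a faulty unit before a second
# fault arrives. States: 0 = 3 good, 1 = fault active (unreconfigured),
# 2 = 2 good (reconfigured), 3 = system failed. Rates are assumed values.
LAM = 1e-4     # per-hour failure rate of one processor
DELTA = 3.6e3  # per-hour reconfiguration rate (~1 s mean reconfiguration)

def unreliability(t_hours, steps=50_000):
    """Euler-integrate the Kolmogorov forward equations; returns the
    probability of system failure by time t_hours."""
    p = [1.0, 0.0, 0.0, 0.0]
    dt = t_hours / steps
    for _ in range(steps):
        p0, p1, p2, pf = p
        p = [p0 - 3 * LAM * p0 * dt,
             p1 + (3 * LAM * p0 - (DELTA + 2 * LAM) * p1) * dt,
             p2 + (DELTA * p1 - 2 * LAM * p2) * dt,
             pf + (2 * LAM * p1 + 2 * LAM * p2) * dt]
    return p[3]

print(unreliability(10.0))  # failure probability for a 10-hour mission
```

Even this toy model shows the structural point the tutorial makes: system failure requires either a second fault during the brief reconfiguration window or exhaustion of redundancy, and both paths fall out of the state transitions rather than a closed-form formula.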
The computational structural mechanics testbed architecture. Volume 1: The language
NASA Technical Reports Server (NTRS)
Felippa, Carlos A.
1988-01-01
This is the first of a set of five volumes that describe the software architecture for the Computational Structural Mechanics Testbed. Derived from NICE, an integrated software system developed at Lockheed Palo Alto Research Laboratory, the architecture is composed of the command language CLAMP, the command language interpreter CLIP, and the data manager GAL. Volumes 1, 2, and 3 (NASA CR's 178384, 178385, and 178386, respectively) describe CLAMP, CLIP, and the CLIP-processor interface. Volumes 4 and 5 (NASA CR's 178387 and 178388, respectively) describe GAL and its low-level I/O. CLAMP, an acronym for Command Language for Applied Mechanics Processors, is designed to control the flow of execution of processors written for NICE. Volume 1 presents the basic elements of the CLAMP language and is intended for all users.
Launch Vehicle Control Center Architectures
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Williams, Randall; McLaughlin, Tom
2014-01-01
This analysis is a survey of control center architectures of the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures has similarities in basic structure and differences in the functional distribution of responsibilities across the phases of operations: (a) launch vehicles in the international community vary greatly in configuration and process; (b) each launch site has a unique processing flow based on its specific configuration; (c) launch and flight operations are managed through a set of control centers associated with each launch site, although flight operations may be conducted from a different control center than launch operations; and (d) the engineering support centers are primarily located at the design center, with a small engineering support team at the launch site.
ERIC Educational Resources Information Center
Uwakonye, Obioha; Alagbe, Oluwole; Oluwatayo, Adedapo; Alagbe, Taiye; Alalade, Gbenga
2015-01-01
As a result of globalization of digital technology, intellectual discourse on what constitutes the basic body of architectural knowledge to be imparted to future professionals has been on the increase. This digital revolution has brought to the fore the need to review the already overloaded architectural education curriculum of Nigerian schools of…
Using Ada to implement the operations management system in a community of experts
NASA Technical Reports Server (NTRS)
Frank, M. S.
1986-01-01
An architecture is described for the Space Station Operations Management System (OMS), consisting of a distributed expert system framework implemented in Ada. The motivation for such a scheme is based on the desire to integrate the very diverse elements of the OMS while taking maximum advantage of knowledge based systems technology. Part of the foundation of an Ada based distributed expert system was accomplished in the form of a proof of concept prototype for the KNOMES project (Knowledge-based Maintenance Expert System). This prototype successfully used concurrently active experts to accomplish monitoring and diagnosis for the Remote Manipulator System. The basic concept of this software architecture is named ACTORS for Ada Cognitive Task ORganization Scheme. It is when one considers the overall problem of integrating all of the OMS elements into a cooperative system that the AI solution stands out. By utilizing a distributed knowledge based system as the framework for OMS, it is possible to integrate those components which need to share information in an intelligent manner.
[Automated anesthesia record system].
Zhu, Tao; Liu, Jin
2005-12-01
An automated anesthesia record system based on a client/server architecture, running under the Windows operating system on a network, was developed with Microsoft Visual C++ 6.0, Visual Basic 6.0, and SQL Server. The system manages patient information throughout anesthesia. It automatically collects and integrates real-time data from several kinds of medical equipment, such as monitors, infusion pumps, and anesthesia machines, and then generates the anesthesia record sheets automatically. The record system makes the anesthesia record more accurate and complete and raises the anesthesiologist's working efficiency.
Move-tecture: A Conceptual Framework for Designing Movement in Architecture
NASA Astrophysics Data System (ADS)
Yilmaz, Irem
2017-10-01
Along with the technological improvements of our age, it is now possible for movement to become one of the basic components of architectural space. Accordingly, the architectural construction of movement changes both our architectural production practices and our understanding of architectural space. However, existing design concepts and approaches are insufficient to discuss and understand this change. This study therefore aims to form a conceptual framework for the relationship between architecture and movement. The conceptualization of move-tecture is developed to investigate the architectural construction of movement and the potential for spatial creation through architecturally constructed movement. Move-tecture is a conceptualization that treats movement as a basic component of spatial creation. It presents the framework of a qualitative categorization of the design of moving architectural structures; this categorization is flexible and can evolve with the expanding possibilities of architectural design and changing living conditions. Six categories are defined within the context of the article: Topological Organization, Choreographic Formation, Kinetic Structuring, Corporeal Constitution, Technological Configuration, and Interactional Patterning. Together these categories promote a multifaceted perspective on moving architectural structures. Such an understanding is intended to constitute a new initiative in design practice in this area and to provide a conceptual basis for further discussion.
Taking advantage of ground data systems attributes to achieve quality results in testing software
NASA Technical Reports Server (NTRS)
Sigman, Clayton B.; Koslosky, John T.; Hageman, Barbara H.
1994-01-01
During the software development life cycle process, basic testing starts with the development team. At the end of the development process, an acceptance test is performed for the user to ensure that the deliverable is acceptable. Ideally, the delivery is an operational product with zero defects. However, the goal of zero defects is normally not achieved but is approached to various degrees. With the emphasis on building low cost ground support systems while maintaining a quality product, a key element in the test process is simulator capability. This paper reviews the Transportable Payload Operations Control Center (TPOCC) Advanced Spacecraft Simulator (TASS) test tool that is used in the acceptance test process for unmanned satellite operations control centers. The TASS is designed to support the development, test, and operational environments of the Goddard Space Flight Center (GSFC) operations control centers. The TASS uses the same basic architecture as the operations control center. This architecture is characterized by its use of distributed processing, industry standards, commercial off-the-shelf (COTS) hardware and software components, and reusable software. The TASS uses much of the same TPOCC architecture and reusable software that the operations control center developer uses. The TASS also makes use of reusable simulator software in the mission specific versions of the TASS. Very little new software needs to be developed, mainly mission specific telemetry communication and command processing software. By taking advantage of the ground data system attributes, successful software reuse for operational systems provides the opportunity to extend the reuse concept into the test area. Consistency in test approach is a major step in achieving quality results.
Patterns-Based IS Change Management in SMEs
NASA Astrophysics Data System (ADS)
Makna, Janis; Kirikova, Marite
The majority of information systems change management guidelines and standards are either too abstract or too bureaucratic to be easily applicable in small enterprises. This chapter proposes an approach, a method, and a prototype designed especially for information systems change management in small and medium enterprises. The approach is based on proven patterns of changes in the set of information systems elements. The set of elements was obtained by theoretical analysis of information systems and business process definitions and enterprise architectures. The patterns were derived from a number of information systems theories and tested in 48 information systems change management projects. The prototype presents and helps to handle three basic change patterns, which help to anticipate the overall scope of changes related to particular elementary changes in an enterprise information system. Use of the prototype requires only basic knowledge of organizational business processes and information management.
Modeling and performance analysis of QoS data
NASA Astrophysics Data System (ADS)
Strzeciwilk, Dariusz; Zuberek, Włodzimierz M.
2016-09-01
The article presents the results of modeling and analysis of data transmission performance on systems that support quality of service. Models are designed and tested that take into account a multiservice network architecture, i.e., one supporting the transmission of data belonging to different traffic classes. The study examines traffic-shaping mechanisms based on Priority Queuing, with both an integrated data source and various generated data sources. The basic problems of QoS-supporting architectures and queuing systems are discussed. Models based on Petri nets, supported by temporal logic, were designed and built, and simulation tools were used to verify the traffic-shaping mechanisms under the applied queuing algorithms. It is shown that temporal Petri net models can be used effectively in modeling and analyzing the performance of computer networks.
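The Priority Queuing discipline studied here can be illustrated with a toy event-driven sketch. The paper models it with timed Petri nets; this stdlib-only version with unit service times is purely illustrative:

```python
import heapq

# Non-preemptive strict Priority Queuing: among queued packets, the one
# with the lowest priority number is always served first.
def serve(packets):
    """packets: list of (arrival_time, priority, name) tuples.
    Returns the order in which packets are served."""
    queue, order, t, i = [], [], 0.0, 0
    pending = sorted(packets)              # by arrival time
    while i < len(pending) or queue:
        if not queue:                      # server idle: jump to next arrival
            t = max(t, pending[i][0])
        while i < len(pending) and pending[i][0] <= t:
            arr, prio, name = pending[i]   # enqueue everything that has arrived
            heapq.heappush(queue, (prio, arr, name))
            i += 1
        prio, arr, name = heapq.heappop(queue)
        order.append(name)
        t += 1.0                           # unit service time
    return order

# Low-priority 'b' arrives before high-priority 'c', but 'c' overtakes it
# while the server is busy with 'a':
print(serve([(0.0, 1, 'a'), (0.1, 1, 'b'), (0.2, 0, 'c')]))  # ['a', 'c', 'b']
```

This overtaking behavior is precisely the property whose timing consequences (delay and starvation of low-priority classes) the Petri net models in the paper are built to quantify.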
Generic worklist handler for workflow-enabled products
NASA Astrophysics Data System (ADS)
Schmidt, Joachim; Meetz, Kirsten; Wendler, Thomas
1999-07-01
Workflow management (WfM) is an emerging field of medical information technology. It appears to be a promising key technology for modeling, optimizing, and automating processes, for the sake of improved efficiency, reduced costs, and improved patient care. The application of WfM concepts requires the standardization of architectures and interfaces. A component of central interest proposed in this report is a generic worklist handler: a standardized interface between a workflow enactment service and an application system. Application systems with embedded worklist handlers will be called 'workflow-enabled application systems'. In this paper we discuss the functional requirements of worklist handlers, as well as their integration into workflow architectures and interfaces. To lay the foundation for this specification, basic workflow terminology, the fundamentals of workflow management, and, later in the paper, the available standards as defined by the Workflow Management Coalition are briefly reviewed.
Matrix light and pixel light: optical system architecture and requirements to the light source
NASA Astrophysics Data System (ADS)
Spinger, Benno; Timinger, Andreas L.
2015-09-01
Modern automotive headlamps enable improved functionality for more driving comfort and safety. Matrix or pixel-light headlamps are not restricted to either pure low-beam or pure high-beam functionality: light in the direction of oncoming traffic is selectively switched off, potential hazards can be marked with an isolated beam, and the illumination on the road can even follow a bend. The optical architectures that enable these advanced functionalities are diverse. Electromechanical shutters and lens units moved by electric motors were the first ways to realize such systems; switching multiple LED light sources is a more elegant and mechanically robust solution. While many basic functionalities can already be realized with a limited number of LEDs, an increasing number of pixels will lead to more driving comfort and better visibility. The required optical system must not only generate a desired beam distribution with high angular dynamics, but must also guarantee minimal stray light and crosstalk between the different pixels. Direct projection of the LED array through a lens is a simple but not very efficient optical system. We discuss different optical elements for pre-collimating the light with minimal crosstalk and improved contrast between neighboring pixels. Depending on the selected optical system, we derive the basic light-source requirements: luminance, surface area, contrast, flux and color homogeneity.
Architectural Drafting, Drafting 2: 9255.04.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL.
The course covers the basic fundamentals of architectural drafting and is not intended to delve into the more advanced phases of architecture. The student is presented with standards and procedures, and will become proficient in layout of floor plans, electrical plans, roof construction, foundation plans, typical wall construction, plot plans, and…
ERIC Educational Resources Information Center
Hubbert, Beth
2011-01-01
Architecture is a versatile, multifaceted area to study in the artroom with multiple age levels. It can easily stimulate a study of basic line, shape, and various other art elements and principles. It can then be extended into a more extensive study of architectural elements, styles, specific architects, architecture of different cultures, and…
Architecture is Elementary: Visual Thinking through Architectural Concepts.
ERIC Educational Resources Information Center
Winters, Nathan B.
This book presents very basic but important concepts about architecture and outlines some of the most important concepts used by great architects. These concepts are taught at levels of perceptual maturity applicable to adults and children alike and progress from levels one through seven as the concepts become progressively intertwined. The…
Video sensor architecture for surveillance applications.
Sánchez, Jordi; Benet, Ginés; Simó, José E
2012-01-01
This paper introduces a flexible hardware and software architecture for a smart video sensor. This sensor has been applied in a video surveillance application where some of these video sensors are deployed, constituting the sensory nodes of a distributed surveillance system. In this system, a video sensor node processes images locally in order to extract objects of interest, and classify them. The sensor node reports the processing results to other nodes in the cloud (a user or higher level software) in the form of an XML description. The hardware architecture of each sensor node has been developed using two DSP processors and an FPGA that controls, in a flexible way, the interconnection among processors and the image data flow. The developed node software is based on pluggable components and runs on a provided execution run-time. Some basic and application-specific software components have been developed, in particular: acquisition, segmentation, labeling, tracking, classification and feature extraction. Preliminary results demonstrate that the system can achieve up to 7.5 frames per second in the worst case, and the true positive rates in the classification of objects are better than 80%.
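The XML reporting step can be sketched as follows (a hypothetical report format; the element and attribute names are invented, not those of the described system):

```python
import xml.etree.ElementTree as ET

def object_report(node_id, objects):
    """Build a hypothetical XML description of objects detected by a sensor
    node. `objects` is a list of dicts with a class label and bounding box."""
    root = ET.Element("sensor_report", node=node_id)
    for obj in objects:
        e = ET.SubElement(root, "object", cls=obj["cls"])
        ET.SubElement(e, "bbox",
                      x=str(obj["x"]), y=str(obj["y"]),
                      w=str(obj["w"]), h=str(obj["h"]))
    return ET.tostring(root, encoding="unicode")

xml_out = object_report(
    "node-7",
    [{"cls": "person", "x": 10, "y": 20, "w": 40, "h": 80}],
)
```

Reporting compact XML descriptions rather than raw frames is what keeps the bandwidth between sensor nodes and higher-level software low in such distributed surveillance systems.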
Another HISA--the new standard: health informatics--service architecture.
Klein, Gunnar O; Sottile, Pier Angelo; Endsleff, Frederik
2007-01-01
In addition to its meaning as the Health Informatics Society of Australia, HISA is the acronym used for the new European Standard: Health Informatics - Service Architecture. This EN 12967 standard has been developed by CEN, the federation of 29 national standards bodies in Europe. The standard defines the essential elements of a service-oriented architecture and a methodology for localization that is particularly useful for large healthcare organizations. It is based on the Open Distributed Processing (ODP) framework from ISO 10746 and contains the following parts: Part 1: Enterprise viewpoint. Part 2: Information viewpoint. Part 3: Computational viewpoint. The standard is now also the starting point for consideration as an International Standard in ISO/TC 215. The basic principles, with a set of health-specific middleware services as a common platform for various applications in regional health information systems or large integrated hospital information systems, are well established, following a previous prestandard. Examples of large-scale deployments in Sweden, Denmark and Italy are described.
Stereoscopic applications for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2007-02-01
Advances in display technology and 3D design-visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system and shutter-glass systems for smaller groups. PB uses commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo, creating more immersive and spatially realistic presentations of the proposed designs. This paper presents the basic display tools and applications, and the 3D modeling techniques PB uses to produce interactive stereoscopic content, and discusses several architectural and engineering design visualizations the firm has produced.
Onboard data-processing architecture of the soft X-ray imager (SXI) on NeXT satellite
NASA Astrophysics Data System (ADS)
Ozaki, Masanobu; Dotani, Tadayasu; Tsunemi, Hiroshi; Hayashida, Kiyoshi; Tsuru, Takeshi G.
2004-09-01
NeXT is the X-ray satellite proposed for the next Japanese space-science mission. While the total satellite mass and the launch vehicle are similar to those of the preceding satellite Astro-E2, the sensitivity is much improved; this requires all components to be lighter and faster than in the previous architecture. This paper presents the data-processing architecture of the X-ray CCD camera system SXI (Soft X-ray Imager), which covers the soft half of the 0.2-80 keV sensitivity band of the WXI (Wide-band X-ray Imager). The system is basically a variant of the Astro-E2 XIS, but its event-extraction speed is much higher in order to fulfill the requirements imposed by the large effective area and the short exposure period. At the same time, the data-transfer lines between components are redesigned to reduce the number and mass of the wire harnesses that limit the flexibility of component placement.
NASA Astrophysics Data System (ADS)
Masoumi, Massoud; Raissi, Farshid; Ahmadian, Mahmoud; Keshavarzi, Parviz
2006-01-01
We propose that the recently introduced semiconductor-nanowire-molecular architecture (CMOL) is an optimal platform for realizing encryption algorithms. The basic modules of the Advanced Encryption Standard algorithm (Rijndael) have been designed using the CMOL architecture, and the performance of this design has been evaluated with respect to chip area and speed. It is observed that CMOL provides considerable improvement over an implementation in regular CMOS architecture, even with a 20% defect rate. Pseudo-optimum gate placement and routing are provided for the Rijndael building blocks, and the possibility of designing high-speed, attack-tolerant, long-key encryption is discussed.
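One of the basic Rijndael modules, finite-field multiplication in GF(2^8) as used by MixColumns, can be illustrated in software (this is the standard AES arithmetic from FIPS-197, not the CMOL hardware design itself):

```python
def xtime(a):
    """Multiply by x (i.e., by 0x02) in GF(2^8) with the AES reduction
    polynomial x^8 + x^4 + x^3 + x + 1 (0x11B)."""
    a <<= 1
    if a & 0x100:       # overflow past bit 7: reduce modulo the polynomial
        a ^= 0x11B
    return a & 0xFF

def gmul(a, b):
    """GF(2^8) multiplication by shift-and-add: accumulate a * 2^i for each
    set bit i of b, with xor as field addition."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a = xtime(a)
        b >>= 1
    return result
```

For example, gmul(0x57, 0x83) yields 0xC1, the worked example given in the FIPS-197 specification. In hardware (CMOS or CMOL), this same module reduces to shift and xor networks, which is why it is a natural building block for area/speed comparisons.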
An intelligent training system for payload-assist module deploys
NASA Technical Reports Server (NTRS)
Loftin, R. Bowen; Wang, Lui; Baffes, Paul; Rua, Monica
1987-01-01
An autonomous intelligent training system which integrates expert system technology with training/teaching methodologies is described. The Payload-Assist Module Deploys/Intelligent Computer-Aided Training (PD/ICAT) system has, so far, proven to be a potentially valuable addition to the training tools available for training Flight Dynamics Officers in shuttle ground control. The authors are convinced that the basic structure of PD/ICAT can be extended to form a general architecture for intelligent training systems for training flight controllers and crew members in the performance of complex, mission-critical tasks.
NASA Technical Reports Server (NTRS)
1985-01-01
The initial task in the Space Station Data System (SSDS) Analysis/Architecture Study is the definition of the functional and key performance requirements for the SSDS. The SSDS is the set of hardware and software, both on the ground and in space, that provides the basic data management services for Space Station customers and systems. The primary purpose of the requirements development activity was to provide a coordinated, documented requirements set as a basis for the system definition of the SSDS and for other subsequent study activities. These requirements should also prove useful to other Space Station activities in that they provide an indication of the scope of the information services and systems that will be needed in the Space Station program. The major results of the requirements development task are as follows: (1) identification of a conceptual topology and architecture for the end-to-end Space Station Information Systems (SSIS); (2) development of a complete set of functional requirements and design drivers for the SSIS; (3) development of functional requirements and key performance requirements for the Space Station Data System (SSDS); and (4) definition of an operating concept for the SSIS. The operating concept was developed both from a Space Station payload customer and operator perspective in order to allow a requirements practicality assessment.
Composable Framework Support for Software-FMEA Through Model Execution
NASA Astrophysics Data System (ADS)
Kocsis, Imre; Patricia, Andras; Brancati, Francesco; Rossi, Francesco
2016-08-01
Performing Failure Modes and Effects Analysis (FMEA) during software architecture design is becoming a basic requirement in an increasing number of domains; however, due to the lack of standardized model execution in the early design phase, classic SW-FMEA approaches carry significant risk and are human-effort-intensive even in processes that use Model-Driven Engineering. Recently, modelling languages with standardized executable semantics have emerged. Building on earlier results, this paper describes framework support for generating executable error-propagation models from such models during software architecture design. The approach carries the promise of increased precision, decreased risk and more automated execution for SW-FMEA during dependability-critical system development.
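The idea of executing an error-propagation model can be sketched as a traversal of a component graph (a simplified toy model of the technique, not the framework described here; all component and mode names are invented):

```python
def propagate(graph, transfer, source, mode):
    """Propagate a failure mode through a component graph.
    graph:    component -> list of downstream components
    transfer: (component, incoming_mode) -> outgoing mode, or None if the
              component masks the fault
    Returns the set of (component, mode) effects reached."""
    effects = set()
    frontier = [(source, mode)]
    while frontier:
        comp, m = frontier.pop()
        if (comp, m) in effects:
            continue
        effects.add((comp, m))
        out = transfer.get((comp, m))
        if out is None:
            continue  # fault masked inside this component
        for nxt in graph.get(comp, []):
            frontier.append((nxt, out))
    return effects

# Hypothetical three-component architecture: a stuck sensor value passes
# through an (ineffective) filter and corrupts the controller's input.
graph = {"sensor": ["filter"], "filter": ["controller"], "controller": []}
transfer = {
    ("sensor", "stuck"): "bad_value",
    ("filter", "bad_value"): "bad_value",       # filter passes the error on
    ("controller", "bad_value"): "wrong_cmd",
}
fx = propagate(graph, transfer, "sensor", "stuck")
```

Executing such a model for every (component, failure mode) pair is what automates the effect column of an FMEA table; the paper's contribution is generating these executable models from standard design models rather than writing them by hand.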
Modeling driver behavior in a cognitive architecture.
Salvucci, Dario D
2006-01-01
This paper explores the development of a rigorous computational model of driver behavior in a cognitive architecture--a computational framework with underlying psychological theories that incorporate basic properties and limitations of the human system. Computational modeling has emerged as a powerful tool for studying the complex task of driving, allowing researchers to simulate driver behavior and explore the parameters and constraints of this behavior. An integrated driver model developed in the ACT-R (Adaptive Control of Thought-Rational) cognitive architecture is described that focuses on the component processes of control, monitoring, and decision making in a multilane highway environment. This model accounts for the steering profiles, lateral position profiles, and gaze distributions of human drivers during lane keeping, curve negotiation, and lane changing. The model demonstrates how cognitive architectures facilitate understanding of driver behavior in the context of general human abilities and constraints and how the driving domain benefits cognitive architectures by pushing model development toward more complex, realistic tasks. The model can also serve as a core computational engine for practical applications that predict and recognize driver behavior and distraction.
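The control component of such driver models is commonly formulated as a two-point visual control law, relating steering changes to the visual angles of a far point and a near point on the road; a sketch with purely illustrative gains (assumed values, not the fitted ACT-R parameters):

```python
def steering_update(d_theta_far, d_theta_near, theta_near, dt,
                    k_far=16.0, k_near=4.0, k_i=3.0):
    """Two-point visual control law: the change in steering angle is a
    weighted sum of the change in the far-point angle (stability), the
    change in the near-point angle (lateral correction), and an integral
    term on the near-point angle (centering). Gains are illustrative."""
    return k_far * d_theta_far + k_near * d_theta_near + k_i * theta_near * dt

# No visual change and a centered near point -> no steering correction.
no_correction = steering_update(0.0, 0.0, 0.0, 0.05)
# A drifting far point produces a proportional corrective steer.
correction = steering_update(0.01, 0.0, 0.0, 0.05)
```

Embedding a law like this inside a cognitive architecture, rather than running it continuously, is what lets the model also capture the intermittent, attention-limited character of human steering.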
Redondo, Jonatan Pajares; González, Lisardo Prieto; Guzman, Javier García; Boada, Beatriz L; Díaz, Vicente
2018-02-06
Modern vehicles incorporate control systems to improve their stability and handling. These control systems need to know the vehicle dynamics through variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are measured or estimated from sensors. To this end, vehicles must carry not only low-cost sensors but also low-cost embedded systems that can acquire the sensor data and execute the estimation and control algorithms with sufficient computing speed. All these devices have to be integrated into an adequate architecture with sufficient performance in terms of accuracy, reliability and processing time. In this article, an architecture for estimating and controlling vehicle dynamics has been developed. It was designed following the basic principles of the Internet of Things (IoT) and integrates low-cost sensors and embedded hardware for orchestrating the experiments. Two different low-cost systems are compared in terms of accuracy, acquisition time and reliability, using the VBOX device from Racelogic as the ground truth. The comparison is based on tests carried out in a real vehicle; the lateral acceleration and roll rate are analyzed in order to quantify the error of these devices.
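The error quantification against the VBOX ground truth can be expressed with a standard root-mean-square error metric (a generic sketch with made-up sample values, not the article's data):

```python
import math

def rmse(reference, measured):
    """Root-mean-square error of a low-cost sensor signal against a
    ground-truth reference sampled at the same instants."""
    if len(reference) != len(measured):
        raise ValueError("signals must be the same length")
    n = len(reference)
    return math.sqrt(sum((r - m) ** 2 for r, m in zip(reference, measured)) / n)

# Illustrative roll-rate samples (deg/s): reference device vs. low-cost IMU.
vbox = [0.0, 1.2, 2.5, 1.8]
imu = [0.1, 1.0, 2.7, 1.7]
err = rmse(vbox, imu)
```

Computing the same metric for each candidate embedded system over identical test runs gives the kind of accuracy comparison the article reports for lateral acceleration and roll rate.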
The Cbf5-Nop10 Complex is a Molecular Bracket that Organizes Box H/ACA RNPs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamma, Tomoko; Reichow, Steve L.; Varani, Gabriele
2005-12-01
Box H/ACA ribonucleoprotein particles (RNPs) catalyze RNA pseudouridylation, direct the processing of ribosomal RNA, and are essential architectural components of vertebrate telomerases. H/ACA RNPs comprise four proteins and a multihelical RNA. Two proteins, Cbf5 and Nop10, suffice for basal enzymatic activity in an archaeal in vitro system. We now report their cocrystal structure at 1.95-Å resolution. We find that archaeal Cbf5 can assemble with yeast Nop10 and with human telomerase RNA, consistent with the high sequence identity of the RNP components between archaea and eukarya. Thus, the Cbf5-Nop10 architecture is phylogenetically conserved. The structure shows how Nop10 buttresses the active site of Cbf5, and it reveals two basic troughs that bidirectionally extend the active-site cleft. Mutagenesis results implicate an adjacent basic patch in RNA binding. This tripartite RNA-binding surface may function as a molecular bracket that organizes the multihelical H/ACA and telomerase RNAs.
GW Calculations of Materials on the Intel Xeon-Phi Architecture
NASA Astrophysics Data System (ADS)
Deslippe, Jack; da Jornada, Felipe H.; Vigil-Fowler, Derek; Biller, Ariel; Chelikowsky, James R.; Louie, Steven G.
Intel Xeon-Phi processors are expected to power a large number of High-Performance Computing (HPC) systems around the United States and the world in the near future. We evaluate the ability of GW and prerequisite Density Functional Theory (DFT) calculations for materials to utilize the Xeon-Phi architecture. We describe the optimization process and the performance improvements achieved. We find that the GW method, like other higher-level many-body methods beyond standard local/semilocal approximations to Kohn-Sham DFT, is particularly well suited to many-core architectures because a large amount of parallelism can be exploited over plane-waves, band pairs and frequencies. Support provided by the SCIDAC program, Department of Energy, Office of Science, Advanced Scientific Computing Research and Basic Energy Sciences, Grant Numbers DE-SC0008877 (Austin) and DE-AC02-05CH11231 (LBNL).
Energy and Architecture: The Solar and Conservation Potential. Worldwatch Paper 40.
ERIC Educational Resources Information Center
Flavin, Christopher
This monograph explores how architecture is influenced by and is responding to the global energy dilemma. Emphasis is placed on conservation techniques (using heavy insulation) and on passive solar construction (supplying most of a building's heating, cooling, and lighting requirements by sunlight). The basic problem is that architecture, like…
Stochastic architecture for Hopfield neural nets
NASA Technical Reports Server (NTRS)
Pavel, Sandy
1992-01-01
An expandable stochastic digital architecture for recurrent (Hopfield-like) neural networks is proposed. The main features and basic principles of stochastic processing are presented. The stochastic digital architecture is based on a chip with n fully interconnected neurons and a pipelined bit-processing structure. For large applications, a flexible way to interconnect many such chips is provided.
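In stochastic processing, a value in [0, 1] is encoded as the probability of ones in a bit stream, so multiplication reduces to a bitwise AND of independent streams; a minimal sketch of this principle (the principle only, not the chip's pipeline):

```python
import random

def to_stream(p, n, rng):
    """Encode probability p as a random bit stream of length n."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_value(bits):
    """Decode a bit stream back to a value: the fraction of ones."""
    return sum(bits) / len(bits)

def stochastic_multiply(a_bits, b_bits):
    """Multiplying two independent streams is a bitwise AND, since
    P(a AND b) = P(a) * P(b) for independent bits."""
    return [x & y for x, y in zip(a_bits, b_bits)]

rng = random.Random(42)
n = 100_000
a = to_stream(0.8, n, rng)
b = to_stream(0.5, n, rng)
prod = stream_value(stochastic_multiply(a, b))  # close to 0.8 * 0.5 = 0.4
```

This trade of precision for hardware simplicity (one AND gate per multiplier) is what makes fully interconnected neurons with bit-level pipelining feasible on a single chip.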
Architectures Toward Reusable Science Data Systems
NASA Technical Reports Server (NTRS)
Moses, John Firor
2014-01-01
Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAA's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today.
Study of a unified hardware and software fault-tolerant architecture
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart
1989-01-01
A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
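A majority voter over N-Version outputs, shown here with an agreement tolerance, gives the flavor of such voting (a generic sketch of the technique; the confidence voter implemented on the testbed resolves coincident errors with additional logic):

```python
def majority_vote(outputs, tol=1e-6):
    """Vote over N redundant version outputs: group values that agree
    within tol and return the mean of the largest agreeing group.
    Raises RuntimeError if no group forms a majority."""
    best_group = []
    for candidate in outputs:
        group = [v for v in outputs if abs(v - candidate) <= tol]
        if len(group) > len(best_group):
            best_group = group
    if len(best_group) <= len(outputs) // 2:
        raise RuntimeError("no majority among versions")
    return sum(best_group) / len(best_group)

# Three versions of a yaw-damper control law; version 3 produces a faulty value.
cmd = majority_vote([0.42, 0.42, 9.99], tol=0.01)
```

The tolerance matters because independently developed versions of a numerical control law rarely agree bit-for-bit; a voter comparing for exact equality would reject correct but slightly divergent outputs.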
NASA Astrophysics Data System (ADS)
Kang, Soon Ju; Moon, Jae Chul; Choi, Doo-Hyun; Choi, Sung Su; Woo, Hee Gon
1998-06-01
The inspection of steam-generator (SG) tubes in a nuclear power plant (NPP) is a time-consuming, laborious and hazardous task because of several hard constraints: a highly radiated working environment, a tight task schedule, and the need for many experienced human inspectors. This paper presents a new distributed intelligent system architecture for automating traditional inspection methods. The proposed architecture adopts three basic technical strategies in order to reduce the complexity of system implementation. The first is distributed task allocation into four stages: inspection planning (IP), signal acquisition (SA), signal evaluation (SE), and inspection data management (IDM); dedicated subsystems for automating each stage can then be designed and implemented separately. The second strategy is the inclusion of several useful artificial-intelligence techniques for implementing the subsystems of each stage, such as expert systems for IP and SE, and machine vision and remote robot control techniques for SA. The third strategy is the integration of the subsystems using a client/server-based distributed computing architecture and a centralized database management concept. With the proposed architecture, human errors that can occur during inspection are minimized because human intervention is almost eliminated, while the productivity of the human inspectors is correspondingly increased. A prototype of the proposed system has been developed and successfully tested over the last six years in domestic NPPs.
Advanced and secure architectural EHR approaches.
Blobel, Bernd
2006-01-01
Electronic Health Records (EHRs), provided as lifelong patient records, are advancing towards core applications of distributed and co-operating health information systems and health networks. To meet the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and must be model-driven, separating platform-independent and platform-specific models. To keep the models manageable, real systems must be decomposed and simplified. The resulting modelling approach follows the ISO Reference Model - Open Distributed Processing (RM-ODP), which describes any system component from different perspectives. Platform-independent perspectives comprise the enterprise view (business processes, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. These views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical and other domains. Thus, security-related component models reflecting all the views mentioned have to be established to enable both application and communication security services as an integral part of the system's architecture. Besides the decomposition and simplification of systems with respect to the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on the properties of basic components to form a more complex structure. The resulting models describe both the structure and the behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles.
In that context, the Australian GEHR project, the openEHR initiative and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, as well as the HL7 Version 3 activities, are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's Clinical Document Architecture (CDA), and the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining the models to their underlying concepts. A future-proof EHR architecture, as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks, has to follow advanced architectural paradigms.
Grounding Robot Autonomy in Emotion and Self-awareness
NASA Astrophysics Data System (ADS)
Sanz, Ricardo; Hernández, Carlos; Hernando, Adolfo; Gómez, Jaime; Bermejo, Julita
Much is being done in an attempt to transfer emotional mechanisms from reverse-engineered biology into social robots. There are two basic approaches: the imitative display of emotion (e.g. to make robots appear more human-like) and the provision of architectures with intrinsic emotion (in the hope of enhancing behavioral aspects). This paper focuses on the second approach, describing a core vision regarding the integration of cognitive, emotional and autonomic aspects in social robot systems. This vision has evolved from the effort to consolidate the models extracted from rat emotion research and their implementation in technical use cases, based on a general systemic analysis in the framework of the ICEA and C3 projects. The generality of the approach is intended to yield universal theories of integrated autonomic, emotional and cognitive behavior. The proposed conceptualizations and architectural principles are then captured in a theoretical framework: ASys, the Autonomous Systems Framework.
Defining and using open architecture levels
NASA Astrophysics Data System (ADS)
Cramer, M. A.; Morrison, A. W.; Cordes, B.; Stack, J. R.
2012-05-01
Open architecture (OA) within military systems enables the delivery of increased warfighter capabilities in a shorter time at a reduced cost. In fact, in today's standards-aware environment, solutions proposed to the government often include OA as one of their basic design tenets. Yet the ability to measure and assess OA in an objective manner, particularly at the subsystem/component level within a system, remains an elusive proposition. Furthermore, it is increasingly apparent that establishing an innovation ecosystem around an open business model that leverages third-party development requires more than just technical modifications that promote openness. This paper proposes a framework to migrate not only towards technical openness, but also towards enabling and facilitating an open business model, driven by third-party development, for military systems. This framework was developed originally for the U.S. Navy Littoral and Mine Warfare community; however, the principles and approach may be applied elsewhere within the Navy and the Department of Defense.
Basic concepts and architectural details of the Delphi trigger system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bocci, V.; Booth, P.S.L.; Bozzo, M.
1995-08-01
Delphi (DEtector with Lepton, Photon and Hadron Identification) is one of the four experiments at the LEP (Large Electron Positron) collider at CERN. The detector is laid out to provide nearly 4 {pi} coverage for charged-particle tracking, electromagnetic and hadronic calorimetry, and extended particle identification. The trigger system consists of four levels: the first two are synchronous with the BCO (Beam Cross Over) and rely on hardwired control units, while the last two run asynchronously with respect to the BCO and are driven by the Delphi host computers. The aim of this paper is to give a comprehensive global view of the trigger system architecture, presenting in detail the first two levels, their various hardware components, and the latest modifications introduced to improve their performance and to make the whole software user interface more user-friendly.
Janson, Natalia B; Marsden, Christopher J
2017-12-05
It is well known that, architecturally, the brain is a neural network, i.e. a collection of many relatively simple units coupled flexibly. However, it has been unclear how the possession of this architecture enables higher-level cognitive functions, which are unique to the brain. Here, we consider the brain from the viewpoint of dynamical systems theory and hypothesize that the unique feature of the brain, the self-organized plasticity of its architecture, could represent the means of enabling the self-organized plasticity of its velocity vector field. We propose that, conceptually, the principle of cognition could amount to the existence of appropriate rules governing the self-organization of the velocity field of a dynamical system, with an appropriate account of stimuli. To support this hypothesis, we propose a simple non-neuromorphic mathematical model with a plastic self-organized velocity field, which has no prototype in the physical world. This system is shown to be capable of basic cognition, which is illustrated numerically and with musical data. Our conceptual model could provide additional insight into the working principles of the brain. Moreover, hardware implementations of plastic velocity fields that self-organize according to various rules could pave the way to creating artificial intelligence of a novel type.
Quantum Computing Architectural Design
NASA Astrophysics Data System (ADS)
West, Jacob; Simms, Geoffrey; Gyure, Mark
2006-03-01
Large scale quantum computers will invariably require scalable architectures in addition to high fidelity gate operations. Quantum computing architectural design (QCAD) addresses the problems of actually implementing fault-tolerant algorithms given physical and architectural constraints beyond those of basic gate-level fidelity. Here we introduce a unified framework for QCAD that enables the scientist to study the impact of varying error correction schemes, architectural parameters including layout and scheduling, and physical operations native to a given architecture. Our software package, aptly named QCAD, provides compilation, manipulation/transformation, multi-paradigm simulation, and visualization tools. We demonstrate various features of the QCAD software package through several examples.
An overview of expert systems. [artificial intelligence
NASA Technical Reports Server (NTRS)
Gevarter, W. B.
1982-01-01
An expert system is defined and its basic structure is discussed. The knowledge base, the inference engine, and uses of expert systems are discussed. Architecture is considered, including choice of solution direction, reasoning in the presence of uncertainty, searching small and large search spaces, handling large search spaces by transforming them and by developing alternative or additional spaces, and dealing with time. Existing expert systems are reviewed. Tools for building such systems, construction, and knowledge acquisition and learning are discussed. Centers of research and funding sources are listed. The state-of-the-art, current problems, required research, and future trends are summarized.
NASA Astrophysics Data System (ADS)
Thubaasini, P.; Rusnida, R.; Rohani, S. M.
This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It offers some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux as used to build the virtual architectural walkthrough and to develop a proof of concept based on the results obtained through this project. In addition, this study reveals the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation of the Windows- and Linux-based operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux, which is measured against three main criteria: frame rate, image quality, and mouse motion.
Essential issues in multiprocessor systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gajski, D.D.; Peir, J.K.
1985-06-01
During the past several years, a great number of proposals have been made with the objective of increasing supercomputer performance by an order of magnitude through the use of new computer architectures. The present paper is concerned with a suitable classification scheme for comparing these architectures. It is pointed out that there are basically four schools of thought as to the most important factor for an enhancement of computer performance. According to one school, the development of faster circuits will make it possible to retain present architectures, except, possibly, for a mechanism providing synchronization of parallel processes. A second school assigns priority to the optimization and vectorization of compilers, which will detect parallelism and help users to write better parallel programs. A third school believes in the predominant importance of new parallel algorithms, while the fourth school supports new models of computation. The merits of the four approaches are critically evaluated. 50 references.
Universal computer control system (UCCS) for space telerobots
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Szakaly, Zoltan
1987-01-01
A universal computer control system (UCCS) is under development for all motor elements of a space telerobot. The basic hardware architecture and software design of UCCS are described, together with the rich motor sensing, control, and self-test capabilities of this all-computerized motor control system. UCCS is integrated into a multibus computer environment with a direct interface to higher level control processors and uses pulsewidth multiplier power amplifiers; one unit can control up to sixteen different motors simultaneously at a high I/O rate. UCCS performance capabilities are illustrated with sample data.
Content addressable memory project
NASA Technical Reports Server (NTRS)
Hall, Josh; Levy, Saul; Smith, D.; Wei, S.; Miyake, K.; Murdocca, M.
1991-01-01
The progress on the Rutgers CAM (Content Addressable Memory) Project is described. The overall design of the system has been completed at the architectural level and is described. The machine is composed of two kinds of cells: (1) the CAM cells, which include both memory and processor and support local processing within each cell; and (2) the tree cells, which have a smaller instruction set and provide global processing over the CAM cells. A parameterized design of the basic CAM cell is completed. Progress was made on the final specification of the CPS. The machine architecture was driven by the design of algorithms whose requirements are reflected in the resulting instruction set(s). A few of these algorithms are described.
Enterprise systems security management: a framework for breakthrough protection
NASA Astrophysics Data System (ADS)
Farroha, Bassam S.; Farroha, Deborah L.
2010-04-01
Securing the DoD information network is a tremendous task due to its size, its access locations, and the number of network intrusion attempts on a daily basis. This analysis investigates methods and architecture options for delivering a secure information sharing environment. Crypto-binding and intelligent access controls are basic requirements for secure information sharing in a net-centric environment. We introduce many of the new technology components to secure the enterprise. The cooperative mission requirements lead to developing automatic data discovery and data stewards granting access to Cross Domain (CD) data repositories or live streaming data. Multiple architecture models are investigated to determine best-of-breed approaches, including SOA and Private/Public Clouds.
Tiled architecture of a CNN-mostly IP system
NASA Astrophysics Data System (ADS)
Spaanenburg, Lambert; Malki, Suleyman
2009-05-01
Multi-core architectures have been popularized with the advent of the IBM CELL. On a finer grain, the problems of scheduling multi-cores have already existed in tiled architectures, such as the EPIC and Da Vinci. It is not easy to evaluate the performance of a schedule on such an architecture, as historical data are not available. One solution is to compile algorithms for which an optimal schedule is known by analysis. A typical example is an algorithm that is already defined in terms of many collaborating simple nodes, such as a Cellular Neural Network (CNN). A simple node with a local register stack, together with a 'rotating wheel' internal communication mechanism, has been proposed. Though the basic CNN allows for a tiled implementation of a tiled algorithm on a tiled structure, a practical CNN system will have to disturb this regularity with the additional need for arithmetical and logical operations. Arithmetic operations are needed, for instance, to accommodate low-level image processing, while logical operations are needed to fork and merge different data streams without use of the external memory. It is found that the 'rotating wheel' internal communication mechanism still handles such operations without the need for global control. Overall, the CNN system provides for a practical network size as implemented on an FPGA, can easily be used as embedded IP, and provides a clear benchmark for a multi-core compiler.
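As a point of reference for the node computation being tiled, the following is a minimal sketch of the standard Chua-Yang CNN cell dynamics (x' = -x + A*y + B*u + z with a piecewise-linear output), integrated with an Euler step over a small grid. The template values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of the Chua-Yang CNN state equation on a 2-D grid."""
    # Piecewise-linear output nonlinearity, saturating at +/-1
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))
    pad_y = np.pad(y, 1)          # zero boundary cells
    pad_u = np.pad(u, 1)
    n, m = x.shape
    fb = np.zeros_like(x)         # feedback term A*y over 3x3 neighborhoods
    ff = np.zeros_like(x)         # feedforward term B*u
    for i in range(3):
        for j in range(3):
            fb += A[i, j] * pad_y[i:i + n, j:j + m]
            ff += B[i, j] * pad_u[i:i + n, j:j + m]
    return x + dt * (-x + fb + ff + z)

# Edge-detection-style templates (illustrative values only)
A = np.zeros((3, 3)); A[1, 1] = 2.0
B = -np.ones((3, 3)); B[1, 1] = 8.0
z = -0.5

u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0   # bright square on dark background
x = np.zeros_like(u)
for _ in range(200):
    x = cnn_step(x, u, A, B, z)
y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # settled output image
```

Each output cell depends only on a 3x3 neighborhood, which is what makes the algorithm map naturally onto a tiled hardware structure with purely local communication.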
NASA Technical Reports Server (NTRS)
Traversi, M.; Piccolo, R.
1980-01-01
Tradeoff study activities and the analysis process used are described, with emphasis on (1) review of the alternatives; (2) vehicle architecture; and (3) evaluation of the propulsion system alternatives. Interim results are presented for the basic hybrid vehicle characterization; vehicle scheme development; propulsion system power and transmission ratios; vehicle weight; energy consumption and emissions; performance; production costs; reliability, availability, and maintainability; life cycle costs; and operational quality. The final vehicle conceptual design is examined.
New framework of NGN web-based management system
NASA Astrophysics Data System (ADS)
Nian, Zhou; Jie, Yin; Qian, Mao
2007-11-01
This paper introduces the basic concepts and key technologies of Ajax and of some popular frameworks in the J2EE architecture, and attempts to integrate these frameworks into a new one. Developers can build web applications much more conveniently using this framework, and the resulting applications provide a friendlier, more interactive platform to end users. Finally, an example is given to explain how to use the new framework to build a web-based management system for the softswitch network.
Evaluation of SuperLU on multicore architectures
NASA Astrophysics Data System (ADS)
Li, X. S.
2008-07-01
The Chip Multiprocessor (CMP) will be the basic building block for computer systems ranging from laptops to supercomputers. New software developments at all levels are needed to fully utilize these systems. In this work, we evaluate performance of different high-performance sparse LU factorization and triangular solution algorithms on several representative multicore machines. We included both Pthreads and MPI implementations in this study and found that the Pthreads implementation consistently delivers good performance and that a left-looking algorithm is usually superior.
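For readers who want to try the library being benchmarked, SciPy's `scipy.sparse.linalg.splu` wraps the sequential SuperLU factorization; a minimal factor-and-solve round trip (on a small tridiagonal system of my own choosing, not one of the paper's test matrices) looks like this:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small tridiagonal test system; SuperLU handles general unsymmetric matrices.
n = 100
diag = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = csc_matrix(np.diag(diag) + np.diag(off, 1) + np.diag(off, -1))

lu = splu(A)            # sparse LU factorization (SuperLU under the hood)
b = np.ones(n)
x = lu.solve(b)         # triangular solves reuse the factorization

residual = np.linalg.norm(A @ x - b)
```

The split between the (expensive) factorization and the (cheap, reusable) triangular solution mirrors the two phases whose parallel performance the paper evaluates separately.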
Model Based Document and Report Generation for Systems Engineering
NASA Technical Reports Server (NTRS)
Delp, Christopher; Lam, Doris; Fosse, Elyse; Lee, Cin-Young
2013-01-01
As Model Based Systems Engineering (MBSE) practices gain adoption, various approaches have been developed in order to simplify and automate the process of generating documents from models. Essentially, all of these techniques can be unified around the concept of producing different views of the model according to the needs of the intended audience. In this paper, we will describe a technique developed at JPL of applying SysML Viewpoints and Views to generate documents and reports. An architecture of model-based view and document generation will be presented, and the necessary extensions to SysML with associated rationale will be explained. A survey of examples will highlight a variety of views that can be generated, and will provide some insight into how collaboration and integration is enabled. We will also describe the basic architecture for the enterprise applications that support this approach.
Towards shared patient records: an architecture for using routine data for nationwide research.
Knaup, Petra; Garde, Sebastian; Merzweiler, Angela; Graf, Norbert; Schilling, Freimut; Weber, Ralf; Haux, Reinhold
2006-01-01
Ubiquitous information is currently one of the most challenging slogans in medical informatics research. An adequate architecture for shared electronic patient records is needed which can use data for multiple purposes and which is extensible to new research questions. We introduce eardap as an architecture for using routine data for nationwide clinical research in a multihospital environment. eardap can be characterized as terminology-based. The main advantage of our approach is its extensibility to new items and new research questions. Once the definition of items for a research question is finished, a consistent, corresponding database can be created without any informatics skills. Our experiences in pediatric oncology in Germany have shown the applicability of eardap. The functions of our core system were in routine clinical use in several hospitals. We validated the terminology management system (TMS) and the module generation tool with the basic data set of pediatric oncology. Multiple usability depends mainly on the quality of item planning in the TMS. High quality harmonization will lead to a greater amount of multiply used data. When using eardap, special emphasis is to be placed on interfaces to local hospital information systems and on data security issues.
Six-Port Based Interferometry for Precise Radar and Sensing Applications.
Koelpin, Alexander; Lurz, Fabian; Linz, Sarah; Mann, Sebastian; Will, Christoph; Lindner, Stefan
2016-09-22
Microwave technology plays an increasingly important role in modern industrial sensing applications. Pushed by the significant progress in monolithic microwave integrated circuit technology over the past decades, complex sensing systems operating in the microwave and even millimeter-wave range are available at reasonable cost combined with exquisite performance. In the context of industrial sensing, this stimulates new approaches to metrology based on microwave technology. An old measurement principle, nearly forgotten over the years, has recently gained more and more attention in both academia and industry: the six-port interferometer. This paper reviews the basic concept, investigates promising applications in remote as well as contact-based sensing, and compares the system with state-of-the-art metrology. The significant advantages will be discussed, as will the limitations of the six-port architecture. Particular attention will be paid to impairment effects and non-ideal behavior, as well as compensation and linearization concepts. It will be shown that in application fields like remote distance sensing, precise alignment measurements, and interferometrically evaluated mechanical strain analysis, the six-port architecture delivers extraordinary measurement results combined with high measurement data update rates for reasonable system costs. This makes the six-port architecture a promising candidate for industrial metrology.
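To illustrate the measurement principle, here is a sketch of the textbook idealization of a six-port receiver: four detector powers P_k = |1 + Γ·e^{jφ_k}|² at relative phases 0, π/2, π, 3π/2, from which the complex ratio Γ is recovered by simple power differences. Real six-ports require calibration for coupler imbalance and detector nonlinearity, which this toy model omits.

```python
import numpy as np

PHASES = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)

def detector_powers(gamma):
    """Ideal six-port: power at each detector for complex ratio gamma."""
    return np.array([abs(1 + gamma * np.exp(1j * p)) ** 2 for p in PHASES])

def reconstruct(P):
    """Recover gamma from the four powers in the ideal case."""
    p0, p90, p180, p270 = P
    re = (p0 - p180) / 4.0    # |1+G|^2 - |1-G|^2  = 4 Re(G)
    im = (p270 - p90) / 4.0   # |1-jG|^2 - |1+jG|^2 = 4 Im(G)
    return re + 1j * im

gamma = 0.30 + 0.20j          # an arbitrary test value
est = reconstruct(detector_powers(gamma))
```

Because both real and imaginary parts come from differences of simultaneously sampled powers, the architecture supports the high update rates the paper emphasizes.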
NASA Astrophysics Data System (ADS)
Lewe, Jung-Ho
The National Transportation System (NTS) is undoubtedly a complex system-of-systems---a collection of diverse 'things' that evolve over time, organized at multiple levels, to achieve a range of possibly conflicting objectives, and never quite behaving as planned. The purpose of this research is to develop a virtual transportation architecture for the ultimate goal of formulating an integrated decision-making framework. The foundational endeavor begins with creating an abstraction of the NTS with the belief that a holistic frame of reference is required to properly study such a multi-disciplinary, trans-domain system. The culmination of the effort produces the Transportation Architecture Field (TAF) as a mental model of the NTS, in which the relationships between four basic entity groups are identified and articulated. This entity-centric abstraction framework underpins the construction of a virtual NTS couched in the form of an agent-based model. The transportation consumers and the service providers are identified as adaptive agents that apply a set of preprogrammed behavioral rules to achieve their respective goals. The transportation infrastructure and multitude of exogenous entities (disruptors and drivers) in the whole system can also be represented without resorting to an extremely complicated structure. The outcome is a flexible, scalable, computational model that allows for examination of numerous scenarios which involve the cascade of interrelated effects of aviation technology, infrastructure, and socioeconomic changes throughout the entire system.
Three-dimensional micro electromechanical system piezoelectric ultrasound transducer
NASA Astrophysics Data System (ADS)
Hajati, Arman; Latev, Dimitre; Gardner, Deane; Hajati, Azadeh; Imai, Darren; Torrey, Marc; Schoeppler, Martin
2012-12-01
Here we present the design and experimental acoustic test data for an ultrasound transducer technology based on a combination of micromachined dome-shaped piezoelectric resonators arranged in a flexible architecture. Our high performance niobium-doped lead zirconate titanate film is implemented in three-dimensional dome-shaped structures, which form the basic resonating cells. Adjustable frequency response is realized by mixing these basic cells and modifying their dimensions by lithography. Improved characteristics such as high sensitivity, adjustable wide-bandwidth frequency response, low transmit voltage compatible with ordinary integrated circuitry, low electrical impedance well matched to coaxial cabling, and intrinsic acoustic impedance match to water are demonstrated.
MIDEX Advanced Modular and Distributed Spacecraft Avionics Architecture
NASA Technical Reports Server (NTRS)
Ruffa, John A.; Castell, Karen; Flatley, Thomas; Lin, Michael
1998-01-01
MIDEX (Medium Class Explorer) is the newest line in NASA's Explorer spacecraft development program. As part of the MIDEX charter, the MIDEX spacecraft development team has developed a new modular, distributed, and scalable spacecraft architecture that pioneers new spaceflight technologies and implementation approaches, all designed to reduce overall spacecraft cost while increasing overall functional capability. This resultant "plug and play" system dramatically decreases the complexity and duration of spacecraft integration and test, providing a basic framework that supports spacecraft modularity and scalability for missions of varying size and complexity. Together, these subsystems form a modular, flexible avionics suite that can be modified and expanded to support low-end and very high-end mission requirements with a minimum of redesign, as well as allowing a smooth, continuous infusion of new technologies as they are developed without redesigning the system. This overall approach has the net benefit of allowing a greater portion of the overall mission budget to be allocated to mission science instead of the spacecraft bus. The MIDEX scalable architecture is currently being manufactured and tested for use on the Microwave Anisotropy Probe (MAP), an inhouse program at GSFC.
Stereoscopic display of 3D models for design visualization
NASA Astrophysics Data System (ADS)
Gilson, Kevin J.
2006-02-01
Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.
Hödl, Iris; Mari, Lorenzo; Bertuzzo, Enrico; Suweis, Samir; Besemer, Katharina; Rinaldo, Andrea; Battin, Tom J
2014-01-01
Ecology, with a traditional focus on plants and animals, seeks to understand the mechanisms underlying structure and dynamics of communities. In microbial ecology, the focus is changing from planktonic communities to attached biofilms that dominate microbial life in numerous systems. Therefore, interest in the structure and function of biofilms is on the rise. Biofilms can form reproducible physical structures (i.e. architecture) at the millimetre-scale, which are central to their functioning. However, the spatial dynamics of the clusters conferring physical structure to biofilms often remains elusive. By experimenting with complex microbial communities forming biofilms in contrasting hydrodynamic microenvironments in stream mesocosms, we show that morphogenesis results in 'ripple-like' and 'star-like' architectures, as have also been reported from monospecies bacterial biofilms, for instance. To explore the potential contribution of demographic processes to these architectures, we propose a size-structured population model to simulate the dynamics of biofilm growth and cluster size distribution. Our findings establish that basic physical and demographic processes are key forces that shape apparently universal biofilm architectures as they occur in diverse microbial but also in single-species bacterial biofilms.
Constellation Architecture Team-Lunar Scenario 12.0 Habitation Overview
NASA Technical Reports Server (NTRS)
Kennedy, Kriss J.; Toups, Larry D.; Rudisill, Marianne
2010-01-01
This paper describes an overview of the Constellation Architecture Team Lunar Scenario 12.0 (LS-12) surface habitation approach and concept developed during the study definition. The Lunar Scenario 12 architecture study focused on two primary habitation approaches: a horizontally-oriented habitation module (LS-12.0) and a vertically-oriented habitation module (LS-12.1). This paper provides an overview of the 12.0 lunar surface campaign, the associated outpost architecture, habitation functionality, concept description, system integration strategy, and mass and power resource estimates. The Scenario 12 architecture resulted from combining attributes of three previous scenarios, Scenario 4 "Optimized Exploration", Scenario 5 "Fission Surface Power System", and Scenario 8 "Initial Extensive Mobility", into Scenario 12, along with an added emphasis on defining the excursion ConOps while the crew is away from the outpost location. This paper describes the CxAT-Lunar Scenario 12.0 habitation concepts and their functionality. The Crew Operations area includes basic crew accommodations such as sleeping, eating, hygiene and stowage. The EVA Operations area includes additional EVA capability beyond the suitlock function, such as suit maintenance, spares stowage, and suit stowage. The Logistics Operations area includes the enhanced accommodations for 180 days, such as enhanced life support systems hardware, consumable stowage, spares stowage, interconnection to the other habitation elements, a common interface mechanism for future growth, and mating to a pressurized rover or Pressurized Logistics Module (PLM). The Mission & Science Operations area includes enhanced outpost autonomy such as an IVA glove box, life support, medical operations, and exercise equipment.
Nonlinear Dynamic Inversion Baseline Control Law: Architecture and Performance Predictions
NASA Technical Reports Server (NTRS)
Miller, Christopher J.
2011-01-01
A model reference dynamic inversion control law has been developed to provide a baseline control law for research into adaptive elements and other advanced flight control law components. This controller has been implemented and tested in a hardware-in-the-loop simulation; the simulation results show excellent handling qualities throughout the limited flight envelope. A simple angular momentum formulation was chosen because it can be included in the stability proofs for many basic adaptive theories, such as model reference adaptive control. Many design choices and implementation details reflect the requirements placed on the system by the nonlinear flight environment and the desire to keep the system as basic as possible to simplify the addition of the adaptive elements. Those design choices are explained, along with their predicted impact on the handling qualities.
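The core idea, inverting a plant model so that a commanded acceleration is achieved directly, can be shown on a toy system. The sketch below applies dynamic inversion to a pendulum rather than the paper's angular-momentum formulation; the plant, gains, and reference are all illustrative assumptions.

```python
import numpy as np

# Toy dynamic inversion on a pendulum: theta'' = -a*sin(theta) + b*u.
# Inversion: u = (v + a*sin(theta)) / b, where v is the desired acceleration
# from error feedback. Closed-loop error dynamics: e'' + kd*e' + kp*e = 0.
a, b = 9.81, 1.0
kp, kd = 9.0, 6.0            # critically damped (poles at -3, -3)

def step(state, theta_ref, dt=0.001):
    th, w = state
    v = kp * (theta_ref - th) + kd * (0.0 - w)   # commanded acceleration
    u = (v + a * np.sin(th)) / b                 # invert the plant model
    w += dt * (-a * np.sin(th) + b * u)          # integrate the true plant
    th += dt * w
    return th, w

state = (0.0, 0.0)
for _ in range(5000):                            # 5 s of simulated time
    state = step(state, theta_ref=0.5)
theta_final = state[0]
```

When the inversion model matches the plant exactly, as here, the nonlinear gravity term cancels and the tracking error obeys the chosen linear dynamics; model mismatch is precisely what the adaptive elements discussed in the paper are meant to absorb.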
NASA Astrophysics Data System (ADS)
Leilei, Sun; Liang, Zhang; Bing, Chen; Hong, Xi
2017-11-01
This thesis analyzes the basic pattern hierarchy of communication space using theories of environmental psychology and behavior combined with relevant architectural principles, evaluates the design and improvement of communication space in its specific context, and brings new ways of observation and innovative design methods to the system of space, environment, and behavior.
Launch Vehicle Control Center Architectures
NASA Technical Reports Server (NTRS)
Watson, Michael D.; Epps, Amy; Woodruff, Van; Vachon, Michael Jacob; Monreal, Julio; Levesque, Marl; Williams, Randall; Mclaughlin, Tom
2014-01-01
Launch vehicles within the international community vary greatly in their configuration and processing. Each launch site has a unique processing flow based on the specific launch vehicle configuration. Launch and flight operations are managed through a set of control centers associated with each launch site. Each launch site has a control center for launch operations; however, flight operations support varies from being co-located with the launch site to being shared with the space vehicle control center. There is also the nuance that some have an engineering support center, which may be co-located with either the launch or flight control center, or in a separate geographical location altogether. A survey of control center architectures is presented for various launch vehicles, including the NASA Space Launch System (SLS), United Launch Alliance (ULA) Atlas V and Delta IV, and the European Space Agency (ESA) Ariane 5. Each of these control center architectures shares some similarities in basic structure, while differences in functional distribution also exist. The driving functions which lead to these factors are considered, and a model of control center architectures is proposed which supports these commonalities and variations.
The Raptor Real-Time Processing Architecture
NASA Astrophysics Data System (ADS)
Galassi, M.; Starr, D.; Wozniak, P.; Brozdin, K.
The primary goal of Raptor is ambitious: to identify interesting optical transients from very wide field of view telescopes in real time, and then to quickly point the higher resolution Raptor "fovea" cameras and spectrometer to the location of the optical transient. The most interesting of Raptor's many applications is the real-time search for orphan optical counterparts of Gamma Ray Bursts. The sequence of steps (data acquisition, basic calibration, source extraction, astrometry, relative photometry, the smarts of transient identification and elimination of false positives, telescope pointing feedback, etc.) is implemented with a "component" approach. All basic elements of the pipeline functionality have been written from scratch or adapted (as in the case of SExtractor for source extraction) to form a consistent modern API operating on memory resident images and source lists. The result is a pipeline which meets our real-time requirements and which can easily operate as a monolithic or distributed processing system. Finally, the Raptor architecture is entirely based on free software (sometimes referred to as "open source" software). In this paper we also discuss the interplay between various free software technologies in this type of astronomical problem.
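The "component" style of pipeline described above can be sketched as a chain of plain functions operating on an in-memory record, so the same stages can run monolithically or be distributed. All stage names and the record layout below are hypothetical illustrations, not the actual Raptor API.

```python
from functools import reduce

# Each stage takes and returns a record (dict of in-memory data products).
def calibrate(rec):
    # Subtract a bias level from the raw pixel values (toy calibration).
    rec["image"] = [p - rec["bias"] for p in rec["image"]]
    return rec

def extract_sources(rec):
    # Keep pixels above a detection threshold as "sources".
    rec["sources"] = [p for p in rec["image"] if p > rec["threshold"]]
    return rec

def flag_transients(rec):
    # A source absent from the reference catalog is a transient candidate.
    rec["transients"] = [s for s in rec["sources"] if s not in rec["catalog"]]
    return rec

PIPELINE = [calibrate, extract_sources, flag_transients]

record = {"image": [10, 12, 55, 11, 80], "bias": 10,
          "threshold": 20, "catalog": [45]}
result = reduce(lambda r, stage: stage(r), PIPELINE, record)
```

Because every stage shares one calling convention, stages can be reordered, replaced (e.g. swapping in SExtractor for source extraction), or moved to another process without changing the rest of the chain.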
Liu, Nan; Zhang, Hongzhe; Zhang, Shanshan
2014-12-01
Emerging infectious disease is one of the most menacing threats to modern society. A sound medical building network system needs to be established to prevent and control emerging infectious diseases. Although a preliminary medical building network has already been set up in China, comprising disease control centers, infectious disease hospitals, infectious disease departments in general hospitals, and basic medical institutions, there are still many defects in this system, such as a simple structural model, weak interoperability among subsystems, and the poor capability of medical buildings to adapt to outbreaks of infectious disease. Based on the characteristics of infectious diseases, the whole process of their prevention and control, and the comprehensive influencing factors, a three-dimensional medical architecture network system is proposed as an inevitable trend. Within this conception of the medical architecture network structure, several evolutions are discussed: from a simple network system to a multilayer space network system, from a static network to a dynamic network, and from a mechanical network to a sustainable network. Ultimately, a more adaptable and responsive medical building network system is established and argued for in this paper.
WATERLOOP V2/64: A highly parallel machine for numerical computation
NASA Astrophysics Data System (ADS)
Ostlund, Neil S.
1985-07-01
Current technological trends suggest that the high performance scientific machines of the future are very likely to consist of a large number (greater than 1024) of processors connected and communicating with each other in some as yet undetermined manner. Such an assembly of processors should behave as a single machine in obtaining numerical solutions to scientific problems. However, the appropriate way of organizing both the hardware and software of such an assembly of processors is an unsolved and active area of research. It is particularly important to minimize the organizational overhead of interprocessor communication, global synchronization, and contention for shared resources if the performance of a large number (n) of processors is to be anything like the desirable n times the performance of a single processor. In many situations, adding a processor actually decreases the performance of the overall system, since the extra organizational overhead is larger than the extra processing power added. The systolic loop architecture is a new multiple processor architecture which attempts a solution to the problem of how to organize a large number of asynchronous processors into an effective computational system while minimizing the organizational overhead. This paper gives a brief overview of the basic systolic loop architecture, systolic loop algorithms for numerical computation, and a 64-processor implementation of the architecture, WATERLOOP V2/64, that is being used as a testbed for exploring the hardware, software, and algorithmic aspects of the architecture.
Smart Building: Decision Making Architecture for Thermal Energy Management.
Uribe, Oscar Hernández; Martin, Juan Pablo San; Garcia-Alegre, María C; Santos, Matilde; Guinea, Domingo
2015-10-30
Smart applications of the Internet of Things are improving the performance of buildings, reducing energy demand. Local and smart networks, soft computing methodologies, machine intelligence algorithms and pervasive sensors are some of the basics of energy optimization strategies developed for the benefit of environmental sustainability and user comfort. This work presents a distributed sensor-processor-communication decision-making architecture to improve the acquisition, storage and transfer of thermal energy in buildings. The developed system is implemented in a near Zero-Energy Building (nZEB) prototype equipped with a built-in thermal solar collector, where optical properties are analysed; a low enthalpy geothermal accumulation system, segmented in different temperature zones; and an envelope that includes a dynamic thermal barrier. An intelligent control of this dynamic thermal barrier is applied to reduce the thermal energy demand (heating and cooling) caused by daily and seasonal weather variations. Simulations and experimental results are presented to highlight the nZEB thermal energy reduction.
ERIC Educational Resources Information Center
Davis, Ronald; Yancey, Bruce
Designed to be used as a supplement to a two-book course in basic drafting, these instructional materials, consisting of 14 units, cover the process of producing all working drawings necessary for residential buildings. The following topics are covered in the individual units: introduction to architectural drafting, lettering and tools, site…
Information Processing in Cognition Process and New Artificial Intelligent Systems
NASA Astrophysics Data System (ADS)
Zheng, Nanning; Xue, Jianru
In this chapter, we discuss, in depth, visual information processing and a new artificial intelligent (AI) system that is based upon cognitive mechanisms. The relationship between a general model of intelligent systems and cognitive mechanisms is described, and in particular we explore visual information processing with selective attention. We also discuss a methodology for studying the new AI system and propose some important basic research issues that have emerged in the intersecting fields of cognitive science and information science. To this end, a new scheme for associative memory and a new architecture for an AI system with attractors of chaos are addressed.
The architecture of personality.
Cervone, David
2004-01-01
This article presents a theoretical framework for analyzing psychological systems that contribute to the variability, consistency, and cross-situational coherence of personality functioning. In the proposed knowledge-and-appraisal personality architecture (KAPA), personality structures and processes are delineated by combining 2 principles: distinctions (a) between knowledge structures and appraisal processes and (b) among intentional cognitions with varying directions of fit, with the latter distinction differentiating among beliefs, evaluative standards, and aims. Basic principles of knowledge activation and use illuminate relations between knowledge and appraisal, yielding a synthetic account of personality structures and processes. Novel empirical data illustrate the heuristic value of the knowledge/appraisal distinction by showing how self-referent and situational knowledge combine to foster cross-situational coherence in appraisals of self-efficacy.
NASA Astrophysics Data System (ADS)
Nurliani Lukito, Yulia; Previta Handoko, Bella
2018-03-01
During the 1950s, Minimalism presented itself as one of the responses to the search for a universal language in art and architecture. This particular style, which started as an art movement, has received much criticism in relation to the loss of art, but Minimalism has nevertheless spread all over the world and influenced many disciplines, including architecture. In minimalist architecture, elements of design convey simplicity: basic geometrical forms with no decoration, the use of white color, modern materials and clean spaces. The “less is more” movement in architecture, which can be seen in the works of Mies van der Rohe and also in the International Style that celebrates materiality and rationality, is also understood as Minimalism. Moreover, an important historical connection to minimalist architecture is the relationship to popular representations of how the upscale modern family lived. Recently, the idea of minimalist architecture has appeared in Indonesia as a preferable housing style. Adapting minimalist architecture to a tropical climate can be done partly by modifying the forms and the microclimate, for example with a passive-system approach or additional equipment that creates comfort in the building. This paper investigates the idea of minimalist architecture in Jakarta, Indonesia, and how the idea is widely used for housing. Questions related to this study include whether minimalist architecture in Jakarta shares the same principles with minimalist architecture of its earlier time or is only a trend in housing design. This study analyzes not only the moment when the idea of Minimalism developed in the history of modern architecture but also some important characteristics of minimalist architecture in different eras and places. In addition, it discusses how minimalist architecture in Jakarta becomes a way of dealing with both modern and local conditions, including a break from traditions.
Future Generation Network Architecture (New Arch)
2004-06-01
Laboratory/IFKF, Rome NY. Other, unfunded, participants in the project included the UC Berkeley ICSI Center for Internet Research (Mark Handley), and … developed in the late 1970s under DARPA’s Internet research program. The global technical principles, or architecture, of the Internet design represented a … wide range of key aspects of the basic architecture, in search of unifying principles. The success of the original DARPA Internet research program
The Role of Sketch in Architecture Design
NASA Astrophysics Data System (ADS)
Li, Yanjin; Ning, Wen
2017-06-01
With the continuous development of computer technology, we rely more and more on the computer and pay more and more attention to the final design results, so much so that we ignore the importance of the sketch. However, the sketch is the most basic and effective tool in architecture design. Based on a study of the sketches of the Tjibao Cultural Center, the paper explores the role of the sketch in architecture design.
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Frederick, Kenneth R.; Scott, Joseph P.; Reinermann, Dana N.
2011-01-01
Photocatalytic oxidation (PCO) is a maturing process technology that shows potential for spacecraft life support system applications. Incorporating PCO into a spacecraft cabin atmosphere revitalization system requires an understanding of basic performance, particularly with regard to the production of partial oxidation products. Four PCO reactor design concepts have been evaluated for their effectiveness at mineralizing key trace volatile organic compounds (VOCs) typically observed in crewed spacecraft cabin atmospheres. Mineralization efficiency and selectivity for partial oxidation products are compared for the reactor design concepts. The role of PCO in a spacecraft's life support system architecture is discussed.
A unified teleoperated-autonomous dual-arm robotic system
NASA Technical Reports Server (NTRS)
Hayati, Samad; Lee, Thomas S.; Tso, Kam Sing; Backes, Paul G.; Lloyd, John
1991-01-01
A description is given of a complete robot control facility built as part of a NASA telerobotics program to develop a state-of-the-art robot control environment for performing experiments in the repair and assembly of space-like hardware, to gain practical knowledge of such work, and to improve the associated technology. The basic architecture of the manipulator control subsystem is presented. The multiarm Robot Control C Library (RCCL), a key software component of the system, is described, along with its implementation on a Sun-4 computer. The system's simulation capability is also described, and the teleoperation and shared control features are explained.
Connecting a cognitive architecture to robotic perception
NASA Astrophysics Data System (ADS)
Kurup, Unmesh; Lebiere, Christian; Stentz, Anthony; Hebert, Martial
2012-06-01
We present an integrated architecture in which perception and cognition interact and provide information to each other, leading to improved performance in real-world situations. Our system integrates the Felzenszwalb et al. object-detection algorithm with the ACT-R cognitive architecture. The targeted task is to predict and classify pedestrian behavior in a checkpoint scenario, specifically to discriminate between normal and checkpoint-avoiding behavior. The Felzenszwalb algorithm is a learning-based algorithm for detecting and localizing objects in images. ACT-R is a cognitive architecture that has been successfully used to model human cognition with a high degree of fidelity on tasks ranging from basic decision-making to the control of complex systems such as driving or air traffic control. The Felzenszwalb algorithm detects pedestrians in the image and provides ACT-R with a set of features based primarily on their locations. ACT-R uses its pattern-matching capabilities, specifically its partial-matching and blending mechanisms, to track objects across multiple images and classify their behavior based on the sequence of observed features. ACT-R also provides feedback to the Felzenszwalb algorithm in the form of expected object locations that allow the algorithm to eliminate false positives and improve its overall performance. This capability is an instance of the benefits pursued in developing a richer interaction between bottom-up perceptual processes and top-down goal-directed cognition. We trained the system on individual behaviors (only one person in the scene) and evaluated its performance across single- and multiple-behavior sets.
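The tracking step, matching each new detection to the closest previously known object location, can be sketched as a simple nearest-neighbor association. This is a simplified analogue of ACT-R's similarity-based partial matching, not actual ACT-R code; the names and data shapes are illustrative assumptions.

```python
import math

# Simplified analogue of similarity-based retrieval: each new detection
# is associated with the tracked object whose last known (x, y) location
# is closest, then that track is updated with the new location.
def associate(tracks, detections):
    assignments = {}
    for i, det in enumerate(detections):
        best = min(tracks, key=lambda t: math.dist(tracks[t], det))
        assignments[i] = best
        tracks[best] = det          # update track with new location
    return assignments
```

In the actual system, the match is graded by ACT-R's partial-matching similarity rather than raw Euclidean distance, and blending smooths the predicted locations across observations.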
Distributed numerical controllers
NASA Astrophysics Data System (ADS)
Orban, Peter E.
2001-12-01
While the basic principles of Numerical Controllers (NCs) have not changed much over the years, the implementation of NCs has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers. Distributed implementation promises to offer robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along the hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First, the functional requirements of individual NC modules are analyzed: module functionality, cycle times, and data requirements are examined. Next, the infrastructure for distributed node implementation is reviewed; various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.
Wang, Hui; Zhang, Xiao-Bo; Huang, Lu-Qi; Guo, Lan-Ping; Wang, Ling; Zhao, Yu-Ping; Yang, Guang
2017-11-01
The supply of Chinese patent medicine is influenced by the price of raw materials (Chinese herbal medicines) and the stock of resources. On the one hand, raw material prices show cyclical volatility or even irreversible soaring, making the price of Chinese patent medicine unstable; in some cases the cost of raw materials even exceeds the selling price. On the other hand, production of some Chinese patent medicines has been halted because the required resources are scarce or no longer usable. Based on a micro-service architecture and a Redis cluster deployment, the supply security monitoring and analysis system for Chinese patent medicines in the national essential medicines list realizes dynamic monitoring and intelligent early warning for herbs and Chinese patent medicines by connecting and integrating the database of Chinese medicine resources, the dynamic monitoring system of traditional Chinese medicine resources, and the basic medicine database of Chinese patent medicine. Copyright© by the Chinese Pharmaceutical Association.
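The core of such dynamic monitoring with early warning can be sketched as a threshold rule over recent price history. The function name, data shapes, and the 30% deviation threshold are illustrative assumptions; the actual system runs as micro-services backed by a Redis cluster rather than in-memory dicts.

```python
# Illustrative sketch (not the authors' system): flag a raw material when
# its current price deviates from its recent average by more than a
# threshold fraction, producing an early-warning list.
def price_alerts(history, current, threshold=0.30):
    alerts = []
    for herb, prices in history.items():
        avg = sum(prices) / len(prices)
        if abs(current[herb] - avg) / avg > threshold:
            alerts.append(herb)
    return alerts
```

In a deployed system the same rule would be evaluated periodically against price feeds cached in Redis, with alerts routed to the monitoring dashboard.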
Beppu, Teruo; Tomiguchi, Kosuke; Masuhara, Akito; Pu, Yong-Jin; Katagiri, Hiroshi
2015-06-15
Benzene is the simplest aromatic hydrocarbon with a six-membered ring. It is one of the most basic structural units for the construction of π conjugated systems, which are widely used as fluorescent dyes and other luminescent materials for imaging applications and displays because of their enhanced spectroscopic signal. Presented herein is 2,5-bis(methylsulfonyl)-1,4-diaminobenzene as a novel architecture for green fluorophores, established based on an effective push-pull system supported by intramolecular hydrogen bonding. This compound demonstrates high fluorescence emission and photostability and is solid-state emissive, water-soluble, and solvent- and pH-independent with quantum yields of Φ=0.67 and Stokes shift of 140 nm (in water). This architecture is a significant departure from conventional extended π-conjugated systems based on a flat and rigid molecular design and provides a minimum requirement for green fluorophores comprising a single benzene ring. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hybrid battery/supercapacitor energy storage system for the electric vehicles
NASA Astrophysics Data System (ADS)
Kouchachvili, Lia; Yaïci, Wahiba; Entchev, Evgueniy
2018-01-01
Electric vehicles (EVs) have recently attracted considerable attention, and so has the development of battery technologies. Although battery technology has advanced significantly, the available batteries do not entirely meet the energy demands of EV power consumption. One of the key issues is the non-monotonic consumption of energy, with frequent changes during the battery discharging process. This is very harmful to the electrochemical process of the battery. A practical solution is to couple the battery with a supercapacitor, which is basically an electrochemical cell with a similar architecture but with a higher rate capability and better cyclability. In this design, the supercapacitor can provide the excess energy required when the battery fails to do so. In addition to the battery and supercapacitor as individual units, designing the architecture of the corresponding hybrid system from an electrical engineering point of view is of utmost importance. The present manuscript reviews recent works devoted to the application of various battery/supercapacitor hybrid systems in EVs.
Developing Low-Power Transceiver Technologies for In Situ Communication Applications
NASA Astrophysics Data System (ADS)
Lay, N.; Cheetham, C.; Mojaradi, H.; Neal, J.
2001-07-01
For future deep-space missions, significant reductions in the mass and power requirements for short-range telecommunication systems will be critical in enabling a wide variety of new mission concepts. These possibilities include penetrators, gliders, miniature rovers, balloons, and sensor networks. The recent development activity reported in this article has focused on the design of ultra-low-mass and -power transceiver systems and subsystems suitable for operation in a flight environment. Under these efforts, the basic functionality of the transceiver has been targeted towards a Mars microprobe communications scenario. However, the overall transceiver architecture is well suited to any short- or medium-range application where a remote probe will aperiodically communicate with a base station, possibly an orbiter, for the eventual purpose of relaying science information back to Earth. Additionally, elements of the radio architecture can be applied in situations involving surface-to-surface communications, thereby enabling different mission communications topologies. Through a system analysis of these channels, both the applicability and benefit of very low power communications will be quantitatively addressed.
A systematic approach for analysis and design of secure health information systems.
Blobel, B; Roger-France, F
2001-06-01
A toolset using object-oriented techniques, including the now-popular Unified Modelling Language (UML) approach, has been developed to facilitate the different users' views for security analysis and design of health care information systems. The paradigm and concepts used are based on the component architecture of information systems and on a general layered security model. The toolset was developed in 1996/1997 within the ISHTAR project funded by the European Commission, as well as through international standardisation activities. Analysing and systematising real health care scenarios, only six and nine use case types could be found in the health and the security-related view, respectively. By combining these use case types, the analysis and design of any conceivable system architecture can be simplified significantly. Based on generic schemes, the environment needed for both communication and application security can be established by appropriate sets of security services and mechanisms. Because of the importance and the basic character of electronic health care record (EHCR) systems, the understanding of the approach is facilitated by (incomplete) examples for this application.
Advanced software integration: The case for ITV facilities
NASA Technical Reports Server (NTRS)
Garman, John R.
1990-01-01
The array of technologies and methodologies involved in the development and integration of avionics software has moved almost as rapidly as computer technology itself. Future avionics systems involve major advances and risks in the following areas: (1) Complexity; (2) Connectivity; (3) Security; (4) Duration; and (5) Software engineering. From an architectural standpoint, the systems will be much more distributed, involve session-based user interfaces, and have the layered architectures typified in the layers-of-abstraction concepts popular in networking. The NASA Space Station Freedom will typify the highly distributed nature of software development itself: systems composed of independent components developed in parallel must be bound by rigid standards and interfaces, and by clean requirements and specifications. Avionics software provides a challenge in that it cannot be flight tested until the first time it literally flies. It is the binding of requirements for such an integration environment into the advances and risks of future avionics systems that forms the basis of the presented concept and of the basic Integration, Test, and Verification concept within the development and integration life cycle of Space Station Mission and Avionics systems.
Research on Basic Design Education: An International Survey
ERIC Educational Resources Information Center
Boucharenc, C. G.
2006-01-01
This paper reports on the results of a survey and qualitative analysis on the teaching of "Basic Design" in schools of design and architecture located in 22 countries. In the context of this research work, Basic Design means the teaching and learning of design fundamentals that may also be commonly referred to as the Principles of Two- and…
Modeling of serial data acquisition structure for GEM detector system in Matlab
NASA Astrophysics Data System (ADS)
Kolasinski, Piotr; Pozniak, Krzysztof T.; Czarski, Tomasz; Chernyshova, Maryna; Kasprowicz, Grzegorz; Krawczyk, Rafal D.; Wojenski, Andrzej; Zabolotny, Wojciech; Byszuk, Adrian
2016-09-01
This article presents a method of modeling, in Matlab, hardware architectures dedicated for FPGAs and created in languages like VHDL or Verilog. The purpose of creating this type of model, with its advantages and disadvantages, is described. The rules presented in this article were exploited to create a model of the Serial Data Acquisition algorithm used in an X-ray GEM detector system. Results were compared to a real working model implemented in VHDL. After testing the basic structure, two other structures were modeled to examine the influence of structure parameters on its behavior.
NASA Technical Reports Server (NTRS)
1981-01-01
The use of an International Standards Organization (ISO) Open Systems Interconnection (OSI) Reference Model and its relevance to interconnecting an Applications Data Service (ADS) pilot program for data sharing is discussed. A top level mapping between the conjectured ADS requirements and identified layers within the OSI Reference Model was performed. It was concluded that the OSI model represents an orderly architecture for the ADS networking planning and that the protocols being developed by the National Bureau of Standards offer the best available implementation approach.
Architectural Drafting: Commercial Applications. Teacher Guide.
ERIC Educational Resources Information Center
Whitney, Terry A.
This curriculum guide contains the technical information and tasks necessary for a student (who has already completed basic drafting) to be employed as an architectural drafter trainee. The curriculum is written in terms of student performance using measurable objectives, technical information, tasks developed to accomplish those objectives, and…
Six-Port Based Interferometry for Precise Radar and Sensing Applications
Koelpin, Alexander; Lurz, Fabian; Linz, Sarah; Mann, Sebastian; Will, Christoph; Lindner, Stefan
2016-01-01
Microwave technology plays an increasingly important role in modern industrial sensing applications. Pushed by the significant progress in monolithic microwave integrated circuit technology over the past decades, complex sensing systems operating in the microwave and even millimeter-wave range are available at reasonable cost combined with exquisite performance. In the context of industrial sensing, this stimulates new approaches for metrology based on microwave technology. An old measurement principle, nearly forgotten over the years, has recently gained more and more attention in both academia and industry: the six-port interferometer. This paper reviews the basic concept, investigates promising applications in remote as well as contact-based sensing, and compares the system with state-of-the-art metrology. The significant advantages are discussed, as well as the limitations of the six-port architecture. Particular attention is paid to impairment effects and non-ideal behavior, as well as compensation and linearization concepts. It is shown that in application fields like remote distance sensing, precise alignment measurements, and interferometrically evaluated mechanical strain analysis, the six-port architecture delivers extraordinary measurement results combined with high measurement-data update rates for reasonable system costs. This makes the six-port architecture a promising candidate for industrial metrology. PMID:27669246
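The basic six-port principle, recovering the relative phase between a reference and a received signal from power readings alone, can be sketched under idealized assumptions. The sketch assumes ideal couplers whose four output ports combine the two signals with reference phase offsets of 0, 90, 180 and 270 degrees; real six-port receivers need the compensation and linearization steps the paper discusses.

```python
import math

# Idealized six-port phase demodulation: with port phase offsets of
# 0/90/180/270 degrees, the in-phase and quadrature components follow
# from differences of opposite-port power readings.
def six_port_phase(p1, p2, p3, p4):
    i = p1 - p3     # in-phase component (0-degree minus 180-degree port)
    q = p2 - p4     # quadrature component (90-degree minus 270-degree port)
    return math.atan2(q, i)
```

For equal-amplitude signals, each ideal port reads P_k = 2 + 2*cos(phi - theta_k), so the differences reduce exactly to 4*cos(phi) and 4*sin(phi) and atan2 returns the phase.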
ERIC Educational Resources Information Center
i Serrano, Magda Mària; Musquera Felip, Sílvia; Beriain Sanzol, Luis
2018-01-01
"Form is 'what', Design is 'how'" (Kahn, 1960). Learning about the formal universe and the wide range of possibilities it offers should be one of the purposes of the early subjects in architectural studies. This article aims to explain the contents of a first course of architectural design and demonstrate how, using a methodology based…
The Mosque Project: Collective Drawings
ERIC Educational Resources Information Center
Erwin, Douglas B.
2013-01-01
Teaching the author's fifth-graders about Islam through art was a challenge. Remembering a colleague's "Collective Architecture" project, he reworked the concept using mosque architecture as the basis for a new project. The goal was to introduce Islam and its basic tenets using the visual arts, with the hope of enhancing cultural tolerance and…
Information Architecture without Internal Theory: An Inductive Design Process.
ERIC Educational Resources Information Center
Haverty, Marsha
2002-01-01
Suggests that information architecture design is primarily an inductive process, partly because it lacks internal theory and partly because it is an activity that supports emergent phenomena (user experiences) from basic design components. Suggests a resemblance to Constructive Induction, a design process that locates the best representational…
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and development of a distributed infrastructure that enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
NASA Astrophysics Data System (ADS)
Samadzadegan, F.; Saber, M.; Zahmatkesh, H.; Joze Ghazi Khanlou, H.
2013-09-01
Rapidly discovering, sharing, integrating and applying geospatial information are key issues in the domain of emergency response and disaster management. Due to the distributed nature of data and processing resources in disaster management, utilizing a Service Oriented Architecture (SOA) to take advantage of workflows of services provides an efficient, flexible and reliable implementation for encountering different hazardous situations. The implementation specification of the Web Processing Service (WPS) has guided geospatial data processing in a Service Oriented Architecture (SOA) platform to become a widely accepted solution for processing remotely sensed data on the web. This paper presents an architecture design based on OGC web services for an automated workflow for acquiring and processing remotely sensed data, detecting fire and sending notifications to the authorities. A basic architecture and its building blocks for an automated fire detection early warning system are represented using web-based processing of remote sensing imagery utilizing MODIS data. A composition of WPS processes is proposed as a WPS service to extract fire events from MODIS data. Subsequently, the paper highlights the role of WPS as a middleware interface in the domain of geospatial web service technology that can be used to invoke a large variety of geoprocessing operations and to chain other web services as an engine of composition. The applicability of the proposed architecture is evaluated with a real-world fire event detection and notification use case. A GeoPortal client was developed with open-source software to manage data, metadata, processes, and authorities. Investigating the feasibility and benefits of the proposed framework shows that it can be used for a wide range of geospatial applications, especially disaster management and environmental monitoring.
A quantum annealing architecture with all-to-all connectivity from local interactions.
Lechner, Wolfgang; Hauke, Philipp; Zoller, Peter
2015-10-01
Quantum annealers are physical devices that aim at solving NP-complete optimization problems by exploiting quantum mechanics. The basic principle of quantum annealing is to encode the optimization problem in Ising interactions between quantum bits (qubits). A fundamental challenge in building a fully programmable quantum annealer is the competing requirements of fully controllable all-to-all connectivity and the quasi-locality of the interactions between physical qubits. We present a scalable architecture with full connectivity, which can be implemented with local interactions only. The input of the optimization problem is encoded in local fields acting on an extended set of physical qubits. The output is, in the spirit of topological quantum memories, redundantly encoded in the physical qubits, resulting in an intrinsic fault tolerance. Our model can be understood as a lattice gauge theory, where long-range interactions are mediated by gauge constraints. The architecture can be realized on various platforms with local controllability, including superconducting qubits, NV centers, quantum dots, and atomic systems.
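The Ising encoding that quantum annealing relies on can be illustrated classically on a toy instance. The sketch below is an assumption-laden illustration, not the paper's architecture: it enumerates all spin configurations of a tiny problem exactly, whereas a real annealer reaches the ground state through quantum dynamics on hardware.

```python
from itertools import product

# Toy Ising model: couplings J (dict keyed by qubit pairs) and local
# fields h encode the optimization problem; the ground state of the
# energy function is the problem's solution.
def ising_energy(spins, J, h):
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(J[(i, j)] * spins[i] * spins[j] for (i, j) in J)
    return e

def ground_state(n, J, h):
    # Exact enumeration over all 2^n spin configurations (tiny n only).
    return min(product((-1, 1), repeat=n),
               key=lambda s: ising_energy(s, J, h))
```

For an antiferromagnetic chain (positive J on neighboring pairs), the ground state alternates spins, which is exactly the kind of constraint structure an annealer's couplings would impose.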
Portable implementation model for CFD simulations. Application to hybrid CPU/GPU supercomputers
NASA Astrophysics Data System (ADS)
Oyarzun, Guillermo; Borrell, Ricard; Gorobets, Andrey; Oliva, Assensi
2017-10-01
Nowadays, high performance computing (HPC) systems are experiencing a disruptive moment, with a variety of novel architectures and frameworks and no clarity about which one is going to prevail. In this context, the portability of codes across different architectures is of major importance. This paper presents a portable implementation model based on an algebraic operational approach for direct numerical simulation (DNS) and large eddy simulation (LES) of incompressible turbulent flows using unstructured hybrid meshes. The strategy proposed consists of representing the whole time-integration algorithm using only three basic algebraic operations: the sparse matrix-vector product, a linear combination of vectors, and the dot product. The main idea is based on decomposing the nonlinear operators into a concatenation of two SpMV operations. This provides high modularity and portability. An exhaustive analysis of the proposed implementation for hybrid CPU/GPU supercomputers has been conducted, with tests using up to 128 GPUs. The main objective is to understand the challenges of implementing CFD codes on new architectures.
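The three-operation decomposition described above can be sketched in a few lines. This is an illustrative reference version, not the authors' code: the sparse matrix is stored as a dict of rows, and an explicit Euler step is built entirely from the three kernels, which is what makes each kernel independently portable to CPU or GPU backends.

```python
# Sketch of the algebraic operational approach: the whole time integrator
# is expressed with only three kernels (SpMV, axpy, dot).
def spmv(A, x):                       # sparse matrix-vector product
    return [sum(v * x[j] for j, v in A.get(i, {}).items())
            for i in range(len(x))]

def axpy(a, x, y):                    # linear combination a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def dot(x, y):                        # dot product
    return sum(xi * yi for xi, yi in zip(x, y))

def euler_step(A, u, dt):             # u_{n+1} = u_n + dt * A u_n
    return axpy(dt, spmv(A, u), u)
```

Porting the solver to a new architecture then reduces to providing optimized implementations of these three kernels, with the nonlinear convective operator handled as a concatenation of two SpMV calls as the paper describes.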
Study on the Index System of Green Ecological Building and Its Evaluation
NASA Astrophysics Data System (ADS)
Wu, Ying
2017-12-01
Based on the concept of sustainable development, green and ecological design has become a hot topic in the development and research of many industries. It is not only a new culture; art, technology, materials and so on will also change under the guidance of this kind of thought. Architecture is the main body of the city and a necessary component of human survival and social development, and the basic function of a building is to provide people with living space. With the development of society, architectural functions are constantly enriched and structures tend to become complicated, but the influence of buildings' own problems is also expanding. The development of the construction industry requires a lot of resources, and buildings need additional energy to support their functions in use; in the past, only building function was considered, ignoring energy and information consumption. Considering current social development, energy and resource issues must be taken into account. On this basis, the green eco-building concept and technical standards have emerged and changed people's views on social development. Green eco-buildings also need indicators as a reference, while providing guidance for architectural design and construction. This paper gives a brief exposition of the research system of green ecological architecture and its evaluation.
Electro-optic architecture (EOA) for sensors and actuators in aircraft propulsion systems
NASA Technical Reports Server (NTRS)
Glomb, W. L., Jr.
1989-01-01
Results of a study to design an optimal architecture for electro-optical sensing and control in advanced aircraft and space systems are described. The propulsion full-authority digital Electronic Engine Control (EEC) was the focus of the study. The recommended architecture is an on-engine EEC which contains electro-optic interface circuits for fiber-optic sensors on the engine. Size and weight are reduced by multiplexing arrays of functionally similar sensors on a pair of optical fibers to common electro-optic interfaces. The architecture contains common, multiplexed interfaces to seven sensor groups: (1) self-luminous sensors; (2) high temperatures; (3) low temperatures; (4) speeds and flows; (5) vibration; (6) pressures; and (7) mechanical positions. Nine distinct fiber-optic sensor types were found to provide these sensing functions: (1) continuous wave (CW) intensity modulators; (2) time division multiplexing (TDM) digital optic code plates; (3) time division multiplexing (TDM) analog self-referenced sensors; (4) wavelength division multiplexing (WDM) digital optic code plates; (5) wavelength division multiplexing (WDM) analog self-referenced intensity modulators; (6) analog optical spectral shifters; (7) self-luminous bodies; (8) coherent optical interferometers; and (9) remote electrical sensors. The report includes the results of a trade study covering engine sensor requirements, environment, the basic sensor types, and relevant evaluation criteria. Figures of merit for the candidate interface types were calculated from data supplied by leading manufacturers of fiber-optic sensors.
A Facility and Architecture for Autonomy Research
NASA Technical Reports Server (NTRS)
Pisanich, Greg; Clancy, Daniel (Technical Monitor)
2002-01-01
Autonomy is a key enabling factor in the advancement of remote robotic exploration. There is currently a large gap between autonomy software at the research level and software that is ready for insertion into near-term space missions. The Mission Simulation Facility (MSF) will bridge this gap by providing a simulation framework and suite of simulation tools to support research in autonomy for remote exploration. This system will allow developers of autonomy software to test their models in a high-fidelity simulation and evaluate their system's performance against a set of integrated, standardized simulations. The Mission Simulation ToolKit (MST) uses a distributed architecture with a communication layer built on top of the standardized High Level Architecture (HLA). This architecture enables the use of existing high-fidelity models, allows mixing of simulation components from various computing platforms, and enforces the use of a standardized high-level interface among components. The components needed to achieve a realistic simulation can be grouped into four categories: environment generation (terrain, environmental features), robotic platform behavior (robot dynamics), instrument models (camera/spectrometer/etc.), and data analysis. The MST will provide basic components in these areas but allows users to easily plug in any refined model by means of a communication protocol. Finally, a description file defines the robot and environment parameters for easy configuration and ensures that all the simulation models share the same information.
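The plug-in idea can be sketched with a toy component loop: each model implements one standardized interface and exchanges data only through a shared bus, loosely standing in for the HLA communication layer. All class and field names below are hypothetical illustrations, not the actual MST API.

```python
class SimComponent:
    """Standardized high-level interface every plug-in model implements."""
    def step(self, t, bus):
        raise NotImplementedError

class Simulator:
    """Toy federation loop: components interact only via a shared bus dict."""
    def __init__(self):
        self.components = []

    def register(self, comp):
        self.components.append(comp)

    def run(self, steps, dt=1.0):
        bus = {}
        for i in range(steps):
            for comp in self.components:
                comp.step(i * dt, bus)
        return bus

class Terrain(SimComponent):
    def step(self, t, bus):
        bus["slope_deg"] = 5.0  # hypothetical terrain query result

class Rover(SimComponent):
    def step(self, t, bus):
        # Crude stand-in for robot dynamics: slow down on steep slopes.
        slope = bus.get("slope_deg", 0.0)
        bus["speed_mps"] = 0.5 if slope < 10 else 0.1
```

Because every component sees only the bus and the standard `step` interface, a higher-fidelity terrain or dynamics model can be swapped in without touching the others, which is the essence of the plug-in architecture.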
Affordable multisensor digital video architecture for 360° situational awareness displays
NASA Astrophysics Data System (ADS)
Scheiner, Steven P.; Khan, Dina A.; Marecki, Alexander L.; Berman, David A.; Carberry, Dana
2011-06-01
One of the major challenges facing today's military ground combat vehicle operations is the ability to achieve and maintain full-spectrum situational awareness while under armor (i.e., closed hatch). The ability to perform basic tasks such as driving, maintaining local situational awareness, surveillance, and targeting therefore requires that a high-density array of real-time information be processed, distributed, and presented to the vehicle operators and crew with low latency. Advances in display and sensor technologies are providing never-before-seen opportunities to supply large amounts of high-fidelity imagery and video to the vehicle operators and crew in real time. To fully realize the advantages of these emerging display and sensor technologies, an underlying digital architecture must be developed that is capable of processing these large amounts of video and data from separate sensor systems and distributing them simultaneously within the vehicle to multiple vehicle operators and crew. This paper examines the systems and software engineering efforts required to overcome these challenges and addresses development of an affordable, integrated digital video architecture. The approaches evaluated will give both current and future ground combat vehicle systems the flexibility to readily adopt emerging display and sensor technologies, while optimizing the Warfighter Machine Interface (WMI), minimizing lifecycle costs, and improving the survivability of the vehicle crew working in closed-hatch systems during complex ground combat operations.
A reliability analysis tool for SpaceWire network
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of a SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. According to the function division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task leads to the system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indexes in the matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, in which the computation schemes for the reliability matrix and the multi-path task reliability are also implemented. Using this tool, we analyzed several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. This reliability analysis tool will thus have a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
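The advantage of redundancy reported above follows from elementary series/parallel reliability algebra, sketched below with hypothetical per-unit reliabilities (this is an illustration of the principle, not the tool's actual reliability-matrix computation).

```python
from math import prod

def series(reliabilities):
    """A task path succeeds only if every unit along it works."""
    return prod(reliabilities)

def parallel(reliabilities):
    """A redundant group fails only if every branch fails."""
    return 1 - prod(1 - r for r in reliabilities)

# Hypothetical task route: source node -> router link -> destination node,
# each element with reliability 0.95.
basic = series([0.95, 0.95, 0.95])
# Same route with a dual-redundant router link in the middle.
dual = series([0.95, parallel([0.95, 0.95]), 0.95])
print(basic, dual)
```

Duplicating only the weakest shared element lifts the route reliability from about 0.857 to about 0.900, which mirrors the paper's finding that dual redundancy on key units improves the task reliability index.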
Extravehicular Activity System Sizing Analysis Tool (EVAS_SAT)
NASA Technical Reports Server (NTRS)
Brown, Cheryl B.; Conger, Bruce C.; Miranda, Bruno M.; Bue, Grant C.; Rouen, Michael N.
2007-01-01
An effort was initiated by NASA/JSC in 2001 to develop an Extravehicular Activity System Sizing Analysis Tool (EVAS_SAT) for the sizing of Extravehicular Activity System (EVAS) architectures and studies. Its intent was to support space suit development efforts and to aid in conceptual designs for future human exploration missions. Its basis was the Life Support Options Performance Program (LSOPP), a spacesuit and portable life support system (PLSS) sizing program developed for NASA/JSC circa 1990. EVAS_SAT estimates the mass, power, and volume characteristics for user-defined EVAS architectures, including Suit Systems, Airlock Systems, Tools and Translation Aids, and Vehicle Support equipment. The tool has undergone annual changes and has been updated as new data have become available. Certain sizing algorithms have been developed based on industry standards, while others are based on the LSOPP sizing routines. The sizing algorithms used by EVAS_SAT are preliminary. Because EVAS_SAT was designed for use by members of the EVA community, subsystem familiarity on the part of the intended user group is assumed, both in operating the tool and in analyzing its results. The current EVAS_SAT is operated within Microsoft Excel 2003 using a Visual Basic interface system.
FPGA-accelerated algorithm for the regular expression matching system
NASA Astrophysics Data System (ADS)
Russek, P.; Wiatr, K.
2015-01-01
This article describes an algorithm to support a regular expression matching system. The goal was to achieve a system with attractive performance and low energy consumption. The basic idea of the algorithm comes from the concept of the Bloom filter. It starts from the extraction of static sub-strings from the strings of the regular expressions. The algorithm is devised to gain from its decomposition into parts intended to be executed by custom hardware and by the central processing unit (CPU). A pipelined custom processor architecture is proposed and the software algorithm is explained accordingly. The software part of the algorithm was coded in C and runs on a processor from the ARM family. The hardware architecture was described in VHDL and implemented in a field programmable gate array (FPGA). The performance results and required resources of the above experiments are given. An example target application for the presented solution is computer and network security systems. The idea was tested on nearly 100,000 body-based viruses from the ClamAV virus database. The solution is intended for the emerging technology of clusters of low-energy computing nodes.
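The Bloom-filter prefilter idea can be sketched in software: static substrings extracted from the regular expressions are inserted into the filter, and only inputs the filter flags are escalated to exact matching. The minimal pure-Python filter below illustrates the concept only; the paper's version is a pipelined FPGA design.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        # k independent positions derived from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

Scanned windows are tested with `might_contain`; only hits are forwarded to the exact regex matcher, so false positives cost extra work but never cause missed matches, which is what makes the filter a safe hardware prefilter.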
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cutler, Dylan; Frank, Stephen; Slovensky, Michelle
Rich, well-organized building performance and energy consumption data enable a host of analytic capabilities for building owners and operators, from basic energy benchmarking to detailed fault detection and system optimization. Unfortunately, data integration for building control systems is challenging and costly in any setting. Large portfolios of buildings--campuses, cities, and corporate portfolios--experience these integration challenges most acutely. These large portfolios often have a wide array of control systems, including multiple vendors and nonstandard communication protocols. They typically have complex information technology (IT) networks and cybersecurity requirements and may integrate distributed energy resources into their infrastructure. Although the challenges are significant, the integration of control system data has the potential to provide proportionally greater value for these organizations through portfolio-scale analytics, comprehensive demand management, and asset performance visibility. As a large research campus, the National Renewable Energy Laboratory (NREL) experiences significant data integration challenges. To meet them, NREL has developed an architecture for effective data collection, integration, and analysis, providing a comprehensive view of data integration based on functional layers. The architecture is being evaluated on the NREL campus through deployment of three pilot implementations.
A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kim, Hwa-Soo
1991-01-01
A large-grain mapping method for numerically oriented applications onto multiprocessor systems is presented. The method is based on the large-grain data flow representation of the input application, and it assumes a general interconnection topology of the multiprocessor system. The large-grain data flow model was used because such a representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be represented very effectively in large-grain data flow format. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best match' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.
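Step (3), the priority-based physical mapping, resembles classical list scheduling: blocks are visited in priority order, and each is placed on the processor that lets it start earliest, with a communication penalty charged when a predecessor ran elsewhere. The sketch below is a generic illustration of that idea under simplifying assumptions (uniform communication cost, pre-sorted priorities), not the thesis's actual allocation scheme.

```python
def schedule(blocks, deps, cost, comm, n_procs):
    """Greedy priority list scheduling of data-flow blocks onto processors."""
    proc_free = [0.0] * n_procs       # earliest free time per processor
    placed = {}                        # block -> (processor, finish_time)
    for b in blocks:                   # blocks assumed pre-sorted by priority
        best = None
        for p in range(n_procs):
            # Block is ready once all predecessors have finished, plus a
            # communication delay for predecessors on other processors.
            ready = max([placed[d][1] + (0.0 if placed[d][0] == p else comm)
                         for d in deps.get(b, [])] or [0.0])
            start = max(ready, proc_free[p])
            if best is None or start + cost[b] < best[2]:
                best = (p, start, start + cost[b])
        p, start, finish = best
        proc_free[p] = finish
        placed[b] = (p, finish)
    return placed
```

For a diamond-shaped graph (a feeding b and c, both feeding d) on two processors, the scheduler overlaps b and c and accepts one communication delay, shortening the makespan relative to a single-processor mapping.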
Optimization of image processing algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Poudel, Pramod; Shirvaikar, Mukul
2011-03-01
This work presents a technique to optimize popular image processing algorithms on mobile platforms such as cell phones, netbooks, and personal digital assistants (PDAs). The increasing demand for video applications like context-aware computing on mobile embedded systems requires the use of computationally intensive image processing algorithms. The system engineer has a mandate to optimize them so as to meet real-time deadlines. A methodology to take advantage of the asymmetric dual-core processor, which includes an ARM and a DSP core supported by shared memory, is presented with implementation details. The target platform chosen is the popular OMAP 3530 processor for embedded media systems. It has an asymmetric dual-core architecture with an ARM Cortex-A8 and a TMS320C64x Digital Signal Processor (DSP). The development platform was the BeagleBoard with 256 MB of NAND flash and 256 MB of SDRAM memory. The basic image correlation algorithm is chosen for benchmarking as it finds widespread application in various template matching tasks such as face recognition. The basic algorithm prototypes conform to OpenCV, a popular computer vision library. OpenCV algorithms can be easily ported to the ARM core, which runs a popular operating system such as Linux or Windows CE. However, the DSP is architecturally more efficient at handling DFT algorithms. The algorithms are tested on a variety of images and performance results are presented measuring the speedup obtained due to the dual-core implementation. A major advantage of this approach is that it allows the ARM processor to perform important real-time tasks, while the DSP addresses performance-hungry algorithms.
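The benchmark kernel, correlation via the DFT (the operation the C64x DSP is architecturally suited to), can be sketched with NumPy: correlation in the spatial domain becomes conjugate multiplication in the frequency domain. The image sizes and offsets below are hypothetical.

```python
import numpy as np

def correlate_fft(image, template):
    """Circular cross-correlation of a template against an image via the DFT."""
    H, W = image.shape
    F_img = np.fft.rfft2(image, s=(H, W))
    F_tpl = np.fft.rfft2(template, s=(H, W))   # template zero-padded to image size
    return np.fft.irfft2(F_img * np.conj(F_tpl), s=(H, W))

# Locate a hypothetical 8x8 patch cut from a 64x64 image.
rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))
template = image[20:28, 30:38]
corr = correlate_fft(image, template)
peak = np.unravel_index(np.argmax(corr), image.shape)
```

The correlation peak recovers the patch offset, here (20, 30). Replacing the O(N^2 M^2) spatial loop with two FFTs and a pointwise product is exactly the kind of regular, transform-heavy workload worth offloading to the DSP core.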
High performance semantic factoring of giga-scale semantic graph databases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
al-Saffar, Sinan; Adolf, Bob; Haglin, David
2010-10-01
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to bring high performance computational resources to bear on their analysis, interpretation, and visualization, especially with respect to their innate semantic structure. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multithreaded architecture of the Cray XMT platform, conventional clusters, and large data stores. In this paper we describe that architecture and present the results of deploying it for the analysis of the Billion Triple dataset with respect to its semantic factors, including basic properties, connected components, namespace interaction, and typed paths.
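One of the semantic factors computed, connected components, can be illustrated at toy scale with a union-find pass over (subject, predicate, object) triples; the multithreaded XMT implementation is of course very different, but the grouping it produces is the same.

```python
class DisjointSet:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def components(triples):
    """Group the subjects and objects of (s, p, o) triples into components."""
    ds = DisjointSet()
    for s, _, o in triples:
        ds.union(s, o)
    groups = {}
    for node in list(ds.parent):
        groups.setdefault(ds.find(node), set()).add(node)
    return list(groups.values())
```

At billion-triple scale the same union-find idea applies, but the win of the XMT-class architecture is tolerating the irregular, cache-hostile memory accesses this traversal generates.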
Artificial Intelligence for Controlling Robotic Aircraft
NASA Technical Reports Server (NTRS)
Krishnakumar, Kalmanje
2005-01-01
A document consisting mostly of lecture slides presents overviews of artificial-intelligence-based control methods now under development for application to robotic aircraft [called Unmanned Aerial Vehicles (UAVs) in the paper] and spacecraft and to the next generation of flight controllers for piloted aircraft. Following brief introductory remarks, the paper presents background information on intelligent control, including basic characteristics defining intelligent systems and intelligent control and the concept of levels of intelligent control. Next, the paper addresses several concepts in intelligent flight control. The document ends with some concluding remarks, including statements to the effect that (1) intelligent control architectures can guarantee stability of inner control loops and (2) for UAVs, intelligent control provides a robust way to accommodate an outer-loop control architecture for planning and/or related purposes.
Study of a Secondary Power System Based on an Intermediate Bus Converter and POLs
NASA Astrophysics Data System (ADS)
Santoja, Almudena; Fernandez, Arturo; Tonicello, Ferdinando
2014-08-01
Secondary power systems in satellites are anything but standard nowadays. All sorts of options can be found and, in the end, a new custom design is used in most cases. Even though this might be interesting in some specific cases, for most of them it would be more convenient to have a straightforward system based on standard components. One option to achieve this is to design the secondary power system with an Intermediate Bus Converter (IBC) and Point of Load converters (POLs). This paper presents a study of this architecture and some experimental verifications to establish basic rules devoted to achieving an optimum design of this system.
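A basic design rule for such a cascaded architecture is bookkeeping of conversion losses: the primary bus must supply the POL output power divided by the product of the POL and IBC efficiencies. A sketch with hypothetical voltages and efficiencies (not values from the study):

```python
def bus_current(loads, v_bus, eta_pol, eta_ibc):
    """Current drawn from the primary bus to feed a set of POL rails.
    loads: list of (volts, amps) at each POL output; efficiencies are assumed
    uniform across converters for simplicity."""
    p_intermediate = sum(v * a / eta_pol for v, a in loads)  # total POL input power
    return p_intermediate / (eta_ibc * v_bus)

# Hypothetical case: 28 V primary bus, IBC at 94%, POLs at 90%,
# feeding 3.3 V / 4 A and 1.8 V / 6 A rails.
i = bus_current([(3.3, 4.0), (1.8, 6.0)], v_bus=28.0, eta_pol=0.90, eta_ibc=0.94)
print(round(i, 3))
```

Chaining two conversion stages multiplies their losses, so the overall efficiency (here about 85%) is the figure an optimum IBC/POL design must trade against the convenience of standard components.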
Basic research planning in mathematical pattern recognition and image analysis
NASA Technical Reports Server (NTRS)
Bryant, J.; Guseman, L. F., Jr.
1981-01-01
Fundamental problems encountered while attempting to develop automated techniques for applications of remote sensing are discussed under the following categories: (1) geometric and radiometric preprocessing; (2) spatial, spectral, temporal, syntactic, and ancillary digital image representation; (3) image partitioning, proportion estimation, and error models in object scene inference; (4) parallel processing and image data structures; and (5) continuing studies in polarization; computer architectures and parallel processing; and the applicability of "expert systems" to interactive analysis.
Terascale spectral element algorithms and implementations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fischer, P. F.; Tufo, H. M.
1999-08-17
We describe the development and implementation of an efficient spectral element code for multimillion gridpoint simulations of incompressible flows in general two- and three-dimensional domains. We review basic and recently developed algorithmic underpinnings that have resulted in good parallel and vector performance on a broad range of architectures, including the terascale computing systems now coming online at the DOE labs. Sustained performance of 219 GFLOPS has been recently achieved on 2048 nodes of the Intel ASCI-Red machine at Sandia.
Distributed intelligence for ground/space systems
NASA Technical Reports Server (NTRS)
Aarup, Mads; Munch, Klaus Heje; Fuchs, Joachim; Hartmann, Ralf; Baud, Tim
1994-01-01
DI is short for Distributed Intelligence for Ground/Space Systems, and the DI Study is one in a series of ESA projects concerned with the development of new concepts and architectures for future autonomous spacecraft systems. The kick-off of DI was in January 1994 and the planned duration is three years. The background of DI is the desire to design future ground/space systems with a higher degree of autonomy than seen in today's missions. The aim of introducing autonomy in spacecraft systems is to: (1) lift the role of the spacecraft operators from routine work and basic troubleshooting to supervision; (2) ease access to and increase availability of spacecraft resources; (3) carry out basic mission planning for users; (4) enable missions which have not yet been feasible due to, e.g., propagation delays, insufficient ground station coverage, etc.; and (5) possibly reduce mission cost. The study serves to identify the feasibility of using state-of-the-art technologies in the areas of planning, scheduling, fault detection using model-based diagnosis, and knowledge processing to obtain a higher level of autonomy in ground/space systems.
Blood and interstitial flow in the hierarchical pore space architecture of bone tissue.
Cowin, Stephen C; Cardoso, Luis
2015-03-18
There are two main types of fluid in bone tissue, blood and interstitial fluid. The chemical composition of these fluids varies with time and location in bone. Blood arrives through the arterial system containing oxygen and other nutrients and the blood components depart via the venous system containing less oxygen and reduced nutrition. Within the bone, as within other tissues, substances pass from the blood through the arterial walls into the interstitial fluid. The movement of the interstitial fluid carries these substances to the cells within the bone and, at the same time, carries off the waste materials from the cells. Bone tissue would not live without these fluid movements. The development of a model for poroelastic materials with hierarchical pore space architecture for the description of blood flow and interstitial fluid flow in living bone tissue is reviewed. The model is applied to the problem of determining the exchange of pore fluid between the vascular porosity and the lacunar-canalicular porosity in bone tissue due to cyclic mechanical loading and blood pressure. These results are basic to the understanding of interstitial flow in bone tissue that, in turn, is basic to understanding of nutrient transport from the vasculature to the bone cells buried in the bone tissue and to the process of mechanotransduction by these cells. Copyright © 2014 Elsevier Ltd. All rights reserved.
Blood and Interstitial flow in the hierarchical pore space architecture of bone tissue
Cowin, Stephen C.; Cardoso, Luis
2015-01-01
There are two main types of fluid in bone tissue, blood and interstitial fluid. The chemical composition of these fluids varies with time and location in bone. Blood arrives through the arterial system containing oxygen and other nutrients and the blood components depart via the venous system containing less oxygen and reduced nutrition. Within the bone, as within other tissues, substances pass from the blood through the arterial walls into the interstitial fluid. The movement of the interstitial fluid carries these substances to the cells within the bone and, at the same time, carries off the waste materials from the cells. Bone tissue would not live without these fluid movements. The development of a model for poroelastic materials with hierarchical pore space architecture for the description of blood flow and interstitial fluid flow in living bone tissue is reviewed. The model is applied to the problem of determining the exchange of pore fluid between the vascular porosity and the lacunar-canalicular porosity in bone tissue due to cyclic mechanical loading and blood pressure. These results are basic to the understanding of interstitial flow in bone tissue that, in turn, is basic to understanding of nutrient transport from the vasculature to the bone cells buried in the bone tissue and to the process of mechanotransduction by these cells. PMID:25666410
Construction Morphology and the Parallel Architecture of Grammar
ERIC Educational Resources Information Center
Booij, Geert; Audring, Jenny
2017-01-01
This article presents a systematic exposition of how the basic ideas of Construction Grammar (CxG) (Goldberg, 2006) and the Parallel Architecture (PA) of grammar (Jackendoff, 2002) provide the framework for a proper account of morphological phenomena, in particular word formation. This framework is referred to as Construction Morphology (CxM). As…
Polynomial Calculus: Rethinking the Role of Calculus in High Schools
ERIC Educational Resources Information Center
Grant, Melva R.; Crombie, William; Enderson, Mary; Cobb, Nell
2016-01-01
Access to advanced study in mathematics, in general, and to calculus, in particular, depends in part on the conceptual architecture of these knowledge domains. In this paper, we outline an alternative conceptual architecture for elementary calculus. Our general strategy is to separate basic concepts from the particular advanced techniques used in…
Process Management inside ATLAS DAQ
NASA Astrophysics Data System (ADS)
Alexandrov, I.; Amorim, A.; Badescu, E.; Burckhart-Chromek, D.; Caprini, M.; Dobson, M.; Duval, P. Y.; Hart, R.; Jones, R.; Kazarov, A.; Kolos, S.; Kotov, V.; Liko, D.; Lucio, L.; Mapelli, L.; Mineev, M.; Moneta, L.; Nassiakou, M.; Pedro, L.; Ribeiro, A.; Roumiantsev, V.; Ryabov, Y.; Schweiger, D.; Soloviev, I.; Wolters, H.
2002-10-01
The Process Management component of the online software of the future ATLAS experiment data acquisition system is presented. The purpose of the Process Manager is to perform basic job control of the software components of the data acquisition system. It is capable of starting, stopping, and monitoring the status of those components on the data acquisition processors independently of the underlying operating system. Its architecture is designed on the basis of a server-client model using CORBA-based communication. The server part relies on C++ software agent objects acting as an interface between the local operating system and client applications. Some of the major design challenges of the software agents were to achieve the maximum degree of autonomy possible and to create processes aware of dynamic conditions in their environment, with the ability to determine corresponding actions. Issues such as the performance of the agents in terms of time needed for process creation and destruction, the scalability of the system taking into consideration the final ATLAS configuration, and minimizing the use of hardware resources were also of critical importance. Besides the details given on the architecture and the implementation, we also present scalability and performance test results of the Process Manager system.
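The basic job-control contract of the Process Manager (start, stop, and status of components, independent of the underlying OS) can be sketched with plain subprocesses; the real system exposes these operations through CORBA software agents, which this illustration does not attempt to model.

```python
import subprocess

class ProcessAgent:
    """Minimal stand-in for a per-node software agent offering the three
    basic job-control operations: start, status, stop."""
    def __init__(self):
        self.jobs = {}

    def start(self, name, argv):
        # Launch the component as a child process and track it by name.
        self.jobs[name] = subprocess.Popen(argv)

    def status(self, name):
        proc = self.jobs.get(name)
        if proc is None:
            return "unknown"
        return "running" if proc.poll() is None else f"exited({proc.returncode})"

    def stop(self, name):
        proc = self.jobs.get(name)
        if proc and proc.poll() is None:
            proc.terminate()
            proc.wait(timeout=5)
```

A client application would hold only the component name and issue these three calls; everything OS-specific (signals, process tables) stays behind the agent interface, which is the portability property the abstract emphasizes.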
Basic Requirements for Systems Software Research and Development
NASA Technical Reports Server (NTRS)
Kuszmaul, Chris; Nitzberg, Bill
1996-01-01
Our success over the past ten years evaluating and developing advanced computing technologies has been due to a simple research and development (R/D) model. Our model has three phases: (a) evaluating the state-of-the-art, (b) identifying problems and creating innovations, and (c) developing solutions, improving the state-of-the-art. This cycle has four basic requirements: a large production testbed with real users, a diverse collection of state-of-the-art hardware, facilities for evaluation of emerging technologies and development of innovations, and control over system management on these testbeds. Future research will be irrelevant and future products will not work if any of these requirements is eliminated. In order to retain our effectiveness, the numerical aerospace simulator (NAS) must replace out-of-date production testbeds in as timely a fashion as possible, and cannot afford to ignore innovative designs such as new distributed shared memory machines, clustered commodity-based computers, and multi-threaded architectures.
How Architecture-Driven Modernization Is Changing the Game in Information System Modernization
2010-04-01
Example modernizations: Veterans Health Administration, MUMPS to Java, 300K lines, 4 months; State of OR Employee Retirement System, COBOL to C#/.NET, 250K lines, 4 months; State of WA Office of Superintendent (entry truncated). Supported source languages include Jovial, MUMPS, MagnaX, Natural, PVL, PowerBuilder, SQL, VAX Basic, VB6, and others; target "To-Be" systems include C and C#. A new JANUS MUMPS parser was created; implementation was successfully completed in 4 months, with final "To-Be" documentation and the JANUS rules engine among the deliverables.
Collections Care: A Basic Reference Shelflist.
ERIC Educational Resources Information Center
de Torres, Amparo R., Ed.
This is an extensive bibliography of reference sources--i.e., books and articles--that relate to the care and conservation of library, archival, and museum collections. Bibliographies are presented under the following headings: (1) General Information; (2) Basic Collections Care; (3) Architectural Conservation; (4) Collections Management: Law,…
Cell wall peptidoglycan architecture in Bacillus subtilis
Hayhurst, Emma J.; Kailas, Lekshmi; Hobbs, Jamie K.; Foster, Simon J.
2008-01-01
The bacterial cell wall is essential for viability and shape determination. Cell wall structural dynamics allowing growth and division, while maintaining integrity is a basic problem governing the life of bacteria. The polymer peptidoglycan is the main structural component for most bacteria and is made up of glycan strands that are cross-linked by peptide side chains. Despite study and speculation over many years, peptidoglycan architecture has remained largely elusive. Here, we show that the model rod-shaped bacterium Bacillus subtilis has glycan strands up to 5 μm, longer than the cell itself and 50 times longer than previously proposed. Atomic force microscopy revealed the glycan strands to be part of a peptidoglycan architecture allowing cell growth and division. The inner surface of the cell wall has a regular macrostructure with ≈50 nm-wide peptidoglycan cables [average 53 ± 12 nm (n = 91)] running basically across the short axis of the cell. Cross striations with an average periodicity of 25 ± 9 nm (n = 96) along each cable are also present. The fundamental cabling architecture is also maintained during septal development as part of cell division. We propose a coiled-coil model for peptidoglycan architecture encompassing our data and recent evidence concerning the biosynthetic machinery for this essential polymer. PMID:18784364
Research on blackboard architectures at the Heuristic Programming Project (HPP)
NASA Technical Reports Server (NTRS)
Nii, H. Penny
1985-01-01
Researchers are entering the second decade of research in the Blackboard problem solving framework, with focus on the following areas: (1) extensions to the basic concepts implemented in AGE-1 to address, for example, reasoning with uncertain data; (2) a new architecture and development environment, BB1, that implements methods for explicitly controlling the reasoning; and (3) the design of and experimentation with multiprocessor architectures using the Blackboard as an organizing framework. A summary of these efforts is presented.
Evaluation of an Atmosphere Revitalization Subsystem for Deep Space Exploration Missions
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Abney, Morgan B.; Conrad, Ruth E.; Frederick, Kenneth R.; Greenwood, Zachary W.; Kayatin, Matthew J.; Knox, James C.; Newton, Robert L.; Parrish, Keith J.; Takada, Kevin C.;
2015-01-01
An Atmosphere Revitalization Subsystem (ARS) suitable for deployment aboard deep space exploration mission vehicles has been developed and functionally demonstrated. This modified ARS process design architecture was derived from the International Space Station's (ISS) basic ARS. Primary functions considered in the architecture include trace contaminant control, carbon dioxide removal, carbon dioxide reduction, and oxygen generation. Candidate environmental monitoring instruments were also evaluated. The process architecture rearranges unit operations and employs equipment operational changes to reduce mass, simplify, and improve the functional performance for trace contaminant control, carbon dioxide removal, and oxygen generation. Results from integrated functional demonstration are summarized and compared to the performance observed during previous testing conducted on an ISS-like subsystem architecture and a similarly evolved process architecture. Considerations for further subsystem architecture and process technology development are discussed.
Hybrid Power Management-Based Vehicle Architecture
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2011-01-01
Hybrid Power Management (HPM) is the integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications (see figure). The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, that provide all power to a common energy storage system used to power the drive motors and vehicle accessory systems. This architecture also serves as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. The key element of HPM is the energy storage system. All generated power is sent to the energy storage system, and all loads derive their power from that system. This can significantly reduce the power requirement of the primary power source while increasing vehicle reliability. Ultracapacitors are ideal for an HPM-based energy storage system due to their exceptionally long cycle life, high reliability, high efficiency, high power density, and excellent low-temperature performance. Multiple power sources and multiple loads are easily incorporated into an HPM-based vehicle. A gas turbine is a good primary power source because of its high efficiency, high power density, long life, high reliability, and ability to operate on a wide range of fuels. An HPM controller maintains optimal control over each vehicle component. This flexible operating system can be applied to all vehicles to considerably improve vehicle efficiency, reliability, safety, security, and performance. The HPM-based vehicle architecture has many advantages over conventional vehicle architectures.
Ultracapacitors have a much longer cycle life than batteries, which greatly improves system reliability, reduces life-of-system costs, and reduces environmental impact, as ultracapacitors will probably never need to be replaced and disposed of. The environmentally safe ultracapacitor components reduce disposal concerns, and their recyclable nature reduces the environmental impact. High ultracapacitor power density provides high power during surges and the ability to absorb high power during recharging. Ultracapacitors are extremely efficient in capturing recharging energy; they are rugged, reliable, and maintenance-free, have excellent low-temperature characteristics, provide consistent performance over time, and promote safety, as they can be left indefinitely in a safe, discharged state whereas batteries cannot.
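The central HPM idea above can be sketched in a few lines: every source charges a common store and every load draws from it, so the primary source only has to cover the average demand while the ultracapacitor bank absorbs surges. The numbers and function name below are illustrative, not from the NASA design.

```python
# Toy energy-flow sketch of an HPM vehicle (hypothetical values):
# a fixed primary source feeds a common storage bank that all loads
# draw from, tracked over 1-second steps.

def simulate_hpm(steps, primary_w, load_w, cap_j, soc_j):
    """Return the stored energy (J) after each step."""
    history = []
    for t in range(steps):
        soc_j += primary_w                    # primary source always charges storage
        soc_j -= load_w[t]                    # motors and accessories draw from storage
        soc_j = max(0.0, min(soc_j, cap_j))   # clamp to physical capacity
        history.append(soc_j)
    return history

# A 5 kW surge, far above the 1 kW primary source, is absorbed by the
# storage bank instead of requiring an oversized source.
loads = [500.0] * 4 + [5000.0] * 2 + [500.0] * 4
trace = simulate_hpm(len(loads), primary_w=1000.0, load_w=loads,
                     cap_j=50000.0, soc_j=25000.0)
print(trace[-1])  # 21000.0
```

The storage never empties during the surge, which is the reliability argument the abstract makes for sizing the primary source to average rather than peak load.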
Systems Biology Approaches for Understanding Genome Architecture.
Sewitz, Sven; Lipkow, Karen
2016-01-01
The linear and three-dimensional arrangement and composition of chromatin in eukaryotic genomes underlies the mechanisms directing gene regulation. Understanding this organization requires the integration of many data types and experimental results. Here we describe the approach of integrating genome-wide protein-DNA binding data to determine chromatin states. To investigate spatial aspects of genome organization, we present a detailed description of how to run stochastic simulations of protein movements within a simulated nucleus in 3D. This systems level approach enables the development of novel questions aimed at understanding the basic mechanisms that regulate genome dynamics.
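The stochastic 3-D simulation the chapter describes can be illustrated with a minimal random walk confined to a spherical "nucleus". This is my own simplification with invented parameters; the actual protocol uses dedicated simulation tools and far richer chemistry.

```python
# Minimal sketch: diffuse a protein-like particle inside a spherical
# nucleus, rejecting any move that would leave the sphere.
import math
import random

def walk_in_nucleus(steps, radius=1.0, step_size=0.05, seed=42):
    """Return the particle's final (x, y, z) after a confined random walk."""
    rng = random.Random(seed)
    x = y = z = 0.0
    for _ in range(steps):
        dx, dy, dz = (rng.gauss(0.0, step_size) for _ in range(3))
        nx, ny, nz = x + dx, y + dy, z + dz
        if math.sqrt(nx * nx + ny * ny + nz * nz) <= radius:  # stay inside
            x, y, z = nx, ny, nz
    return (x, y, z)

pos = walk_in_nucleus(10000)
print(pos)
```

Repeating such walks for many particles, and biasing moves by binding-state-dependent affinities, is the kind of systems-level experiment the abstract points to.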
Learning and Reasoning in Unknown Domains
NASA Astrophysics Data System (ADS)
Strannegård, Claes; Nizamani, Abdul Rahim; Juel, Jonas; Persson, Ulf
2016-12-01
In the story Alice in Wonderland, Alice fell down a rabbit hole and suddenly found herself in a strange world called Wonderland. Alice gradually developed knowledge about Wonderland by observing, learning, and reasoning. In this paper we present the system Alice In Wonderland that operates analogously. As a theoretical basis of the system, we define several basic concepts of logic in a generalized setting, including the notions of domain, proof, consistency, soundness, completeness, decidability, and compositionality. We also prove some basic theorems about those generalized notions. Then we model Wonderland as an arbitrary symbolic domain and Alice as a cognitive architecture that learns autonomously by observing random streams of facts from Wonderland. Alice is able to reason by means of computations that use bounded cognitive resources. Moreover, Alice develops her belief set by continuously forming, testing, and revising hypotheses. The system can learn a wide class of symbolic domains and challenge average human problem solvers in such domains as propositional logic and elementary arithmetic.
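The observe/hypothesize/revise loop described above can be caricatured in a few lines: watch facts streaming from an unknown domain and keep only the candidate rules no observation has falsified. This is my own toy, not the Alice in Wonderland system's actual belief-revision machinery.

```python
# Toy hypothesis revision over an unknown symbolic domain: facts are
# (a, b, result) triples; beliefs are candidate rules that survive
# every observation.
candidates = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "max": lambda a, b: max(a, b),
}

stream = [(2, 2, 4), (3, 1, 4), (5, 0, 5)]  # observed facts from the domain

beliefs = dict(candidates)
for a, b, result in stream:
    # revise: drop any hypothesis the new fact contradicts
    beliefs = {name: f for name, f in beliefs.items() if f(a, b) == result}

print(sorted(beliefs))  # ['add'] — the only rule consistent with all facts
```

The fact (2, 2, 4) cannot distinguish addition from multiplication; later observations do, which mirrors the paper's point that belief sets are refined continuously as the stream of facts grows.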
Speed scanning system based on solid-state microchip laser for architectural planning
NASA Astrophysics Data System (ADS)
Redka, Dmitriy; Grishkanich, Alexsandr S.; Kolmakov, Egor; Tsvetkov, Konstantin
2017-10-01
According to the current great interest in Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown substantial improvement. Ease of use, logistic and economic issues, as well as metrological performance, are assuming an increasingly important role among system requirements. The project is planned to conduct experimental studies aimed at identifying the impact of applying the basic laws of microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout whose modularity allows it to fit working volumes of different sizes and shapes by adequately increasing the number of sensing units. Unlike existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities in order to share the overall computational load.
Laser metrology and optic active control system for GAIA
NASA Astrophysics Data System (ADS)
D'Angelo, F.; Bonino, L.; Cesare, S.; Castorina, G.; Mottini, S.; Bertinetto, F.; Bisi, M.; Canuto, E.; Musso, F.
2017-11-01
The Laser Metrology and Optic Active Control (LM&OAC) program has been carried out under ESA contract with the purpose of designing and validating a laser metrology system and an actuation mechanism to monitor and control at the microarcsecond level the stability of the Basic Angle (the angle between the lines of sight of the two telescopes) of the GAIA satellite. As part of the program, a breadboard (including some EQM elements) of the laser metrology and control system has been built and submitted to functional, performance, and environmental tests. In the following we describe the mission requirements, the system architecture, the breadboard design, and finally the performed validation tests. Conclusions and appraisals from this experience are also reported.
Position reporting system using small satellites
NASA Technical Reports Server (NTRS)
Pavesi, B.; Rondinelli, G.; Graziani, F.
1990-01-01
A system able to provide position reporting and monitoring services for mobile applications represents a natural complement to the Global Positioning System (GPS) navigation system. The system architecture is defined on the basis of the communications requirements derived from user needs, allowing maximum flexibility in the use of channel capacity and a very simple, low-cost terminal. The payload is sketched, outlining the block modularity and the use of qualified hardware. The global system capacity is also derived. The spacecraft characteristics are defined on the basis of the payload requirements. A small bus optimized for the Ariane IV and Delta II vehicles and based on the modularity concept is presented. The design takes full advantage of each launcher with a common basic bus or bus elements for mass production.
Analysis of Introducing Active Learning Methodologies in a Basic Computer Architecture Course
ERIC Educational Resources Information Center
Arbelaitz, Olatz; Martín, José I.; Muguerza, Javier
2015-01-01
This paper presents an analysis of introducing active methodologies in the Computer Architecture course taught in the second year of the Computer Engineering Bachelor's degree program at the University of the Basque Country (UPV/EHU), Spain. The paper reports the experience from three academic years, 2011-2012, 2012-2013, and 2013-2014, in which…
Architecture as a Primary Source for Social Studies. How To Do It Series, Series 2, Number 5.
ERIC Educational Resources Information Center
Leclerc, Daniel C.
Designed for elementary and secondary use in the social studies, this guide provides activities for learning the basic elements and the history of architecture. Through this study, students develop critical observation skills and investigate buildings as manifestations of religious, social, and personal values. The historical overview traces the…
SPATIAL APPROACH TO PLANNING THE PHYSICAL ENVIRONMENT.
ERIC Educational Resources Information Center
BELLOMY, CLEON C.; CAUDILL, WILLIAM W.
The purpose of this report is to define the spatial approach to planning the physical environment and to suggest a more natural approach to a less restricted architecture. One of the two basic architectural elements in the spatial concept is the horizontal screen, which keeps the sun and rain off, lets in light, keeps out sun heat, retains room heat, and…
Tutorial on architectural acoustics
NASA Astrophysics Data System (ADS)
Shaw, Neil; Talaske, Rick; Bistafa, Sylvio
2002-11-01
This tutorial is intended to provide an overview of current knowledge and practice in architectural acoustics. Topics covered will include basic concepts and history, acoustics of small rooms (small rooms for speech such as classrooms and meeting rooms, music studios, small critical listening spaces such as home theatres) and the acoustics of large rooms (larger assembly halls, auditoria, and performance halls).
Anticipatory Cognitive Systems: a Theoretical Model
NASA Astrophysics Data System (ADS)
Terenzi, Graziano
This paper deals with the problem of understanding anticipation in biological and cognitive systems. It is argued that a physical theory can be considered as biologically plausible only if it incorporates the ability to describe systems which exhibit anticipatory behaviors. The paper introduces a cognitive level description of anticipation and provides a simple theoretical characterization of anticipatory systems on this level. Specifically, a simple model of a formal anticipatory neuron and a model (i.e. the τ-mirror architecture) of an anticipatory neural network which is based on the former are introduced and discussed. The basic feature of this architecture is that a part of the network learns to represent the behavior of the other part over time, thus constructing an implicit model of its own functioning. As a consequence, the network is capable of self-representation; anticipation, on a macroscopic level, is nothing but a consequence of anticipation on a microscopic level. Some learning algorithms are also discussed together with related experimental tasks and possible integrations. The outcome of the paper is a formal characterization of anticipation in cognitive systems which aims at being incorporated in a comprehensive and more general physical theory.
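The self-modeling idea behind the τ-mirror architecture can be illustrated with a drastically reduced example: one part of the system learns to predict the other part's next output, so that at time t it already "anticipates" t+1. The one-weight linear predictor below is my own simplification, far simpler than the paper's neural model.

```python
# One part of the system emits a signal; the "mirror" fits a predictor
# y[t+1] ≈ w * y[t] by stochastic gradient descent on squared error.

def train_mirror(signal, lr=0.01, epochs=200):
    """Return the learned prediction weight w."""
    w = 0.0
    for _ in range(epochs):
        for t in range(len(signal) - 1):
            pred = w * signal[t]                       # anticipated next value
            w += lr * (signal[t + 1] - pred) * signal[t]  # gradient step
    return w

# A geometric sequence y[t+1] = 0.5 * y[t]: the mirror should recover 0.5.
signal = [8.0, 4.0, 2.0, 1.0, 0.5]
w = train_mirror(signal)
print(round(w, 3))  # 0.5
```

Once trained, the mirror's prediction at time t is an implicit model of the other part's behavior at t+1, which is the paper's microscopic notion of anticipation.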
1985-12-01
development of an improved Universal Network Interface Device (UNID II). The UNID II's architecture was based on a preliminary design project at... interface device, performing all functions required by the multi-ring LAN. The device depicted by RADC's studies would connect a highly variable group of host... used the ISO Open Systems Interconnection (OSI) seven-layer model as the basic structure for data flow and program development. In 1982 Cuomo
Li, Yi; Zhong, Yingpeng; Zhang, Jinjian; Xu, Lei; Wang, Qing; Sun, Huajun; Tong, Hao; Cheng, Xiaoming; Miao, Xiangshui
2014-05-09
Nanoscale inorganic electronic synapses or synaptic devices, which are capable of emulating the functions of biological synapses of brain neuronal systems, are regarded as the basic building blocks for beyond-Von Neumann computing architecture, combining information storage and processing. Here, we demonstrate a Ag/AgInSbTe/Ag structure for chalcogenide memristor-based electronic synapses. The memristive characteristics with reproducible gradual resistance tuning are utilised to mimic the activity-dependent synaptic plasticity that serves as the basis of memory and learning. Bidirectional long-term Hebbian plasticity modulation is implemented by the coactivity of pre- and postsynaptic spikes, and the sign and degree are affected by assorted factors including the temporal difference, spike rate and voltage. Moreover, synaptic saturation is observed to be an adjustment of Hebbian rules to stabilise the growth of synaptic weights. Our results may contribute to the development of highly functional plastic electronic synapses and the further construction of next-generation parallel neuromorphic computing architecture.
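The spike-timing-dependent Hebbian modulation described above is commonly modeled with a pair-based rule in which the sign and size of the weight change depend on the post-minus-pre spike time difference. The constants below are generic illustrations, not values fitted to the AgInSbTe device.

```python
# Pair-based STDP-like rule: pre-before-post potentiates, post-before-pre
# depresses, with exponentially decaying magnitude in the timing gap.
import math

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau_ms=20.0):
    """Weight change for spike time difference dt = t_post - t_pre (ms)."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)    # potentiation
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_ms)   # depression
    return 0.0

print(round(stdp_dw(10.0), 4), round(stdp_dw(-10.0), 4))
```

In a device implementation the returned Δw would additionally be clamped as the weight approaches its bounds, modeling the synaptic saturation the abstract reports.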
Smart Building: Decision Making Architecture for Thermal Energy Management
Hernández Uribe, Oscar; San Martin, Juan Pablo; Garcia-Alegre, María C.; Santos, Matilde; Guinea, Domingo
2015-01-01
Smart applications of the Internet of Things are improving the performance of buildings, reducing energy demand. Local and smart networks, soft computing methodologies, machine intelligence algorithms and pervasive sensors are some of the basics of energy optimization strategies developed for the benefit of environmental sustainability and user comfort. This work presents a distributed sensor-processor-communication decision-making architecture to improve the acquisition, storage and transfer of thermal energy in buildings. The developed system is implemented in a near Zero-Energy Building (nZEB) prototype equipped with a built-in thermal solar collector, where optical properties are analysed; a low enthalpy geothermal accumulation system, segmented in different temperature zones; and an envelope that includes a dynamic thermal barrier. An intelligent control of this dynamic thermal barrier is applied to reduce the thermal energy demand (heating and cooling) caused by daily and seasonal weather variations. Simulations and experimental results are presented to highlight the nZEB thermal energy reduction. PMID:26528978
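A dynamic thermal barrier of the kind described can be reduced to a simple decision rule: couple the envelope to the outside only when the outside air would move the interior toward the comfort setpoint. The thresholds and function name below are illustrative, not taken from the nZEB prototype's controller.

```python
# Minimal decision sketch for a dynamic thermal barrier.

def barrier_mode(t_in, t_out, setpoint=22.0):
    """Decide whether the thermal barrier should couple to the outside."""
    if t_in < setpoint and t_out > t_in:
        return "open"     # exterior heat helps: free heating
    if t_in > setpoint and t_out < t_in:
        return "open"     # exterior cold helps: free cooling
    return "closed"       # isolate: outside conditions would worsen comfort

print(barrier_mode(20.0, 30.0))  # cold interior, warm exterior -> open
print(barrier_mode(25.0, 35.0))  # warm interior, hot exterior -> closed
```

The intelligent control in the paper replaces these fixed thresholds with predictions over daily and seasonal weather variations, but the energy-saving logic is the same.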
GaAs Supercomputing: Architecture, Language, And Algorithms For Image Processing
NASA Astrophysics Data System (ADS)
Johl, John T.; Baker, Nick C.
1988-10-01
The application of high-speed GaAs processors in a parallel system matches the demanding computational requirements of image processing. The architecture of the McDonnell Douglas Astronautics Company (MDAC) vector processor is described along with the algorithms and language translator. Most image and signal processing algorithms can utilize parallel processing and show a significant performance improvement over sequential versions. The parallelization performed by this system is within each vector instruction. Since each vector has many elements, each requiring some computation, useful concurrent arithmetic operations can easily be performed. Balancing the memory bandwidth with the computation rate of the processors is an important design consideration for high efficiency and utilization. The architecture features a bus-based execution unit consisting of four to eight 32-bit GaAs RISC microprocessors running at a 200 MHz clock rate for a peak performance of 1.6 BOPS. The execution unit is connected to a vector memory with three buses capable of transferring two input words and one output word every 10 nsec. The address generators inside the vector memory perform different vector addressing modes and feed the data to the execution unit. The functions discussed in this paper include basic MATRIX OPERATIONS, 2-D SPATIAL CONVOLUTION, HISTOGRAM, and FFT. For each of these algorithms, assembly language programs were run on a behavioral model of the system to obtain performance figures.
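The 2-D spatial convolution named among the benchmarked functions is worth spelling out, because it shows why the workload vectorizes well: every output pixel is an independent dot product. The plain-Python version below (correlation form, without the kernel flip, which is equivalent for symmetric kernels) only illustrates the arithmetic the MDAC system parallelizes.

```python
# "Valid" 2-D spatial convolution of a list-of-lists image with a kernel.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            # each output pixel is an independent dot product, which is
            # why the loop maps naturally onto vector hardware
            out[i][j] = sum(image[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out

img = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
box = [[1, 1], [1, 1]]
print(convolve2d(img, box))  # [[12, 16], [24, 28]]
```

In the vector-processor setting, each inner dot product becomes one vector instruction over many elements, balancing memory bandwidth against the 200 MHz execution units.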
Design and Development of a Run-Time Monitor for Multi-Core Architectures in Cloud Computing
Kang, Mikyung; Kang, Dong-In; Crago, Stephen P.; Park, Gyung-Leen; Lee, Junghoon
2011-01-01
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet as well as infrastructure. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. The large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design and develop a Run-Time Monitor (RTM) which is a system software to monitor the application behavior at run-time, analyze the collected information, and optimize cloud computing resources for multi-core architectures. RTM monitors application software through library instrumentation as well as underlying hardware through a performance counter optimizing its computing configuration based on the analyzed data. PMID:22163811
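The monitor-analyze-adapt cycle of the Run-Time Monitor can be sketched as a single decision step over sampled utilization data. Names and thresholds here are invented for illustration; the paper's RTM works from library instrumentation and hardware performance counters.

```python
# One analysis step of a run-time monitor: sample per-core utilization,
# analyze, and decide how to adapt the computing configuration.

def rtm_step(samples, busy_threshold=0.85, idle_threshold=0.25):
    """Map per-core utilization samples to a reconfiguration decision."""
    avg = sum(samples) / len(samples)
    if avg > busy_threshold:
        return "scale_up"      # provision more virtualized cores
    if avg < idle_threshold:
        return "scale_down"    # release idle resources
    return "steady"

print(rtm_step([0.9, 0.95, 0.88, 0.91]))   # heavily loaded -> scale_up
print(rtm_step([0.1, 0.2, 0.15, 0.05]))    # mostly idle -> scale_down
```

A real RTM would weigh several QoS features at once rather than a single average, but the loop structure (monitor, analyze, optimize) is the one the abstract describes.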
Application of Tessellation in Architectural Geometry Design
NASA Astrophysics Data System (ADS)
Chang, Wei
2018-06-01
Tessellation plays a significant role in architectural geometry design; it has been used widely throughout the history of architecture and in modern architectural design with the help of computer technology. Tessellation has existed since the birth of civilization. In terms of dimensions, there are two-dimensional and three-dimensional tessellations; in terms of symmetry, there are periodic and aperiodic tessellations. Special types of tessellations such as the Voronoi tessellation and Delaunay triangulation are also included. Both geometry and crystallography, the latter being the basic theory of three-dimensional tessellations, need to be studied. Historically, tessellation was applied to skins or decorations in architecture. The development of computer technology enables tessellation to be more powerful, as seen in surface control, surface display, and structure design, etc. Therefore, research on the application of tessellation in architectural geometry design is of great necessity in architecture studies.
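The Voronoi tessellation mentioned above has a very compact computational core: each point of the plane belongs to the region of its nearest "site", which is how Voronoi panels on a facade are typically generated. The sites below are arbitrary examples.

```python
# Nearest-site assignment, the core of a Voronoi tessellation.
import math

def voronoi_cell(p, sites):
    """Index of the site whose Voronoi region contains point p."""
    return min(range(len(sites)), key=lambda i: math.dist(p, sites[i]))

sites = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
print(voronoi_cell((1.0, 0.5), sites))   # 0: nearest to the first site
print(voronoi_cell((3.5, 2.5), sites))   # 2: nearest to the third site
```

Evaluating this assignment over a grid of points yields the cell pattern; production tools compute the cell boundaries directly, but the defining rule is exactly this nearest-site test.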
Session 6: Dynamic Modeling and Systems Analysis
NASA Technical Reports Server (NTRS)
Csank, Jeffrey; Chapman, Jeffryes; May, Ryan
2013-01-01
These presentations cover some of the ongoing work in dynamic modeling and dynamic systems analysis. The first presentation discusses dynamic systems analysis and how to integrate dynamic performance information into the systems analysis. The ability to evaluate the dynamic performance of an engine design may allow tradeoffs between the dynamic performance and operability of a design, resulting in a more efficient engine design. The second presentation discusses the Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS). T-MATS is a simulation system with a library containing the basic building blocks that can be used to create dynamic thermodynamic systems. Some of the key features include turbomachinery components, such as turbines, compressors, etc., and basic control system blocks. T-MATS is written in the MATLAB/Simulink environment and is open source software. The third presentation focuses on getting additional performance from the engine by allowing the limit regulators to be active only when a limit is in danger of being violated. Typical aircraft engine control architecture is based on a min-max scheme, which is designed to keep the engine operating within prescribed mechanical/operational safety limits. Using a conditionally active min-max limit regulator scheme, additional performance can be gained by disabling non-relevant limit regulators.
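The conditionally active limit-regulator idea can be sketched as a command-selection step: the fuel-flow command is the most conservative of the setpoint controller and any participating limit regulators, and in the conditional scheme a regulator participates only when its limit is near violation. All values, margins, and names below are invented for illustration.

```python
# Min-protection command selection with optionally conditional limiters.

def select_command(setpoint_cmd, limiters, conditional=False):
    """limiters: list of (measurement, limit, cmd_if_active) tuples."""
    active = [setpoint_cmd]
    for meas, limit, cmd in limiters:
        near = meas > 0.95 * limit          # within 5% of the limit
        if not conditional or near:
            active.append(cmd)
    return min(active)                      # take the most conservative command

limiters = [(900.0, 1000.0, 0.6),   # e.g. turbine temperature, far from limit
            (980.0, 1000.0, 0.7)]   # e.g. shaft speed, near its limit
print(select_command(1.0, limiters, conditional=False))  # 0.6
print(select_command(1.0, limiters, conditional=True))   # 0.7
```

With the always-active scheme, the distant temperature limiter needlessly caps the command at 0.6; conditioning on proximity to the limit raises it to 0.7, which is the extra performance the third presentation describes.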
Rooney, Kevin K.; Condia, Robert J.; Loschky, Lester C.
2017-01-01
Neuroscience has well established that human vision divides into the central and peripheral fields of view. Central vision extends from the point of gaze (where we are looking) out to about 5° of visual angle (the width of one's fist at arm's length), while peripheral vision is the vast remainder of the visual field. These visual fields project to the parvo and magno ganglion cells, which process distinctly different types of information from the world around us and project that information to the ventral and dorsal visual streams, respectively. Building on the dorsal/ventral stream dichotomy, we can further distinguish between focal processing of central vision, and ambient processing of peripheral vision. Thus, our visual processing of and attention to objects and scenes depends on how and where these stimuli fall on the retina. The built environment is no exception to these dependencies, specifically in terms of how focal object perception and ambient spatial perception create different types of experiences we have with built environments. We argue that these foundational mechanisms of the eye and the visual stream are limiting parameters of architectural experience. We hypothesize that people experience architecture in two basic ways based on these visual limitations: by intellectually assessing architecture consciously through focal object processing, and by assessing architecture in terms of atmosphere through pre-conscious ambient spatial processing. Furthermore, these separate ways of processing architectural stimuli operate in parallel throughout the visual perceptual system. Thus, a more comprehensive understanding of architecture must take into account that built environments are stimuli that are treated differently by focal and ambient vision, which enable intellectual analysis of architectural experience versus the experience of architectural atmosphere, respectively.
We offer this theoretical model to help advance a more precise understanding of the experience of architecture, which can be tested through future experimentation. PMID:28360867
Modern architectures for intelligent systems: reusable ontologies and problem-solving methods.
Musen, M. A.
1998-01-01
When interest in intelligent systems for clinical medicine soared in the 1970s, workers in medical informatics became particularly attracted to rule-based systems. Although many successful rule-based applications were constructed, development and maintenance of large rule bases remained quite problematic. In the 1980s, an entire industry dedicated to the marketing of tools for creating rule-based systems rose and fell, as workers in medical informatics began to appreciate deeply why knowledge acquisition and maintenance for such systems are difficult problems. During this time period, investigators began to explore alternative programming abstractions that could be used to develop intelligent systems. The notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) domain-independent problem-solving methods-standard algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper will highlight how intelligent systems for diverse tasks can be efficiently automated using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community. PMID:9929181
Framework for teleoperated microassembly systems
NASA Astrophysics Data System (ADS)
Reinhart, Gunther; Anton, Oliver; Ehrenstrasser, Michael; Patron, Christian; Petzold, Bernd
2002-02-01
Manual assembly of minute parts is currently done using simple devices such as tweezers or magnifying glasses. The operator therefore requires a great deal of concentration for successful assembly. Teleoperated micro-assembly systems are a promising method for overcoming the scaling barrier. However, most of today's telepresence systems are based on proprietary and one-of-a-kind solutions. Frameworks which supply the basic functions of a telepresence system, e.g. to establish flexible communication links that depend on bandwidth requirements or to synchronize distributed components, are not currently available. Large amounts of time and money have to be invested in order to create task-specific teleoperated micro-assembly systems from scratch. For this reason, an object-oriented framework for telepresence systems that is based on CORBA as a common middleware was developed at the Institute for Machine Tools and Industrial Management (iwb). The framework is based on a distributed architectural concept and is realized in C++. External hardware components such as haptic, video or sensor devices are coupled to the system by means of defined software interfaces. In this case, the special requirements of teleoperation systems have to be considered, e.g. dynamic parameter settings for sensors during operation. Consequently, an architectural concept based on logical sensors has been developed to achieve maximum flexibility and to enable a task-oriented integration of hardware components.
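The logical-sensor concept in the framework above amounts to putting every hardware device behind a uniform software interface, so components can be swapped and reconfigured at run time. The class and method names below are invented for illustration; the iwb framework defines its interfaces in C++ over CORBA.

```python
# Logical-sensor sketch: hardware specifics hide behind one interface,
# including dynamic parameter settings during operation.
from abc import ABC, abstractmethod

class LogicalSensor(ABC):
    """Uniform interface the teleoperation framework sees."""
    @abstractmethod
    def read(self): ...
    @abstractmethod
    def configure(self, **params): ...

class ForceSensor(LogicalSensor):
    def __init__(self):
        self.gain = 1.0
    def configure(self, **params):
        # parameters may be changed while the system is running
        self.gain = params.get("gain", self.gain)
    def read(self):
        raw = 0.42                # stand-in for an actual hardware read
        return raw * self.gain

s = ForceSensor()
s.configure(gain=10.0)
print(s.read())
```

Because the framework only depends on `LogicalSensor`, a video or haptic device with the same interface can replace the force sensor without touching the rest of the system, which is the flexibility the abstract claims.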
Signori, Marcos R; Garcia, Renato
2010-01-01
This paper presents a model that helps Clinical Engineering deal with risk management in the healthcare technology process. The healthcare technology setting is complex and supported by three basic entities: infrastructure (IS), healthcare technology (HT), and human resources (HR). An enterprise architecture framework, MODAF (Ministry of Defence Architecture Framework), was used to model this process for risk management. A new model was thus created to contribute to risk management in the HT process from the Clinical Engineering viewpoint. This architecture model can support and improve Clinical Engineering's decision-making process for risk management in the healthcare technology process.
Image and Morphology in Modern Theory of Architecture
NASA Astrophysics Data System (ADS)
Yankovskaya, Y. S.; Merenkov, A. V.
2017-11-01
This paper is devoted to some important and fundamental problems of modern Russian architectural theory. These problems are: methodological and technological retardation; the substitution of humanitarian concepts for modern professional architectural-theoretical knowledge; and a preference for traditional historical or historical-theoretical research. One of the most promising ways forward is the formation of useful, modern subject-oriented (and multi-subject-oriented) concepts in architecture. Overcoming the criticism of and distrust in architectural theory is possible through recognizing the important role of the subject (architect, consumer, contractor, ruler, etc.) and orienting theory toward the practical tasks of forming the human environment in today's rapidly changing world and post-industrial society. In this article we consider the evolution of two basic concepts of the theory of architecture: image and morphology.
A novel architecture of recovered data comparison for high speed clock and data recovery
NASA Astrophysics Data System (ADS)
Gao, Susan; Li, Fei; Wang, Zhigong; Cui, Hongliang
2005-05-01
A clock and data recovery (CDR) circuit is one of the crucial blocks in high-speed serial link communication systems. The data received in these systems are asynchronous and noisy, requiring that a clock be extracted to allow synchronous operations. Furthermore, the data must be "retimed" so that the jitter accumulated during transmission is removed. This paper presents a novel CDR architecture that is very tolerant of long sequences of serial ones or zeros and also robust to occasional long absences of transitions. The design is based on the observation that a basic clock recovery scheme with separate clock recovery circuit (CRC) and data decision circuit generates a high-jitter clock when the received non-return-to-zero (NRZ) data contain long sequences of ones or zeros. To eliminate this drawback, the proposed architecture incorporates the data decision circuit within the phase-locked loop (PLL) CRC. In addition, a new phase detector (PD) is proposed that is easy to implement and robust at high speed. This PD operates on random input data and is automatically disabled both in the locked state and during long absences of transitions. The voltage-controlled oscillator (VCO) is also carefully designed to suppress jitter. Owing to the loop's high stability, jitter is greatly reduced when the loop is locked. Simulation results for this CDR operating at 1.25 Gb/s, targeted at 1000BASE-X Gigabit Ethernet and implemented in TSMC 0.25 μm technology, are presented to prove the feasibility of the architecture. A second CDR based on an edge-detection architecture is also built into the circuit for performance comparison.
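As a toy, software-level illustration of the behavior described (not the authors' transistor-level circuit), a transition-based phase detector that outputs no correction during runs of identical bits can be sketched as:

```python
def phase_detector(prev_bit, curr_bit, edge_sample):
    """Toy transition-based phase detector: with no data transition
    there is no phase information, so the detector outputs no
    correction (it disables itself), which keeps the VCO from being
    dragged off frequency by long runs of ones or zeros."""
    if prev_bit == curr_bit:          # no transition -> disabled
        return 0
    # Edge sample taken by the recovered clock: if it equals the new
    # bit the clock is late, otherwise it is early (bang-bang rule).
    return -1 if edge_sample == curr_bit else +1

# A long run of ones produces no corrections at all:
bits = [1, 1, 1, 1, 1, 1]
corrections = [phase_detector(a, b, b) for a, b in zip(bits, bits[1:])]
print(corrections)   # [0, 0, 0, 0, 0]
```

The bang-bang (early/late) decision rule here is a generic textbook scheme, chosen only to make the "disabled during transition absence" property concrete.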
NASA Astrophysics Data System (ADS)
Majerska-Pałubicka, Beata
2017-10-01
Currently, there is a tendency in architecture to search for solutions implementing the assumptions of the sustainable development paradigm. A number of them are components of architecture, which in the future will certainly affect urban planning and architecture to a much greater extent. On the one hand, an issue of great significance is the need to integrate sustainable system elements with the spatial structure of environmentally friendly architectural facilities and complexes and to determine their influence on design solutions as well as the implementation, operation and recycling, while on the other hand, it is very important to solve the problem of how to design buildings, housing estates and towns so that their impact on the environment will be acceptable, i.e. will not exceed the possibilities of natural environment regeneration and, how to cooperate in interdisciplinary design teams to reach an agreement and acceptance so as to achieve harmony between the built and natural environment, which is a basis of sustainable development. In this broad interdisciplinary context an increasing importance is being attached to design strategies, systems of evaluating designs and buildings as well as tools to support integrated activities in the field of architectural design. The above topics are the subject of research presented in this paper. The basic research aim of the paper is: to look for a current method of solving design tasks within the framework of Integrated Design Process (IDP) using modern design tools and technical possibilities, in the context of sustainable development imperative, including, the optimisation of IDP design strategies regarding the assumptions of conscious creation of sustainable built environment, adjusted to Polish conditions. As a case study used examples of Scandinavian housing settlements, sustainable in a broad context.
Thermal Management Architecture for Future Responsive Spacecraft
NASA Astrophysics Data System (ADS)
Bugby, D.; Zimbeck, W.; Kroliczek, E.
2009-03-01
This paper describes a novel thermal design architecture that enables satellites to be conceived, configured, launched, and operationally deployed very quickly. The architecture has been given the acronym SMARTS for Satellite Modular and Reconfigurable Thermal System and it involves four basic design rules: modest radiator oversizing, maximum external insulation, internal isothermalization and radiator heat flow modulation. The SMARTS philosophy is being developed in support of the DoD Operationally Responsive Space (ORS) initiative which seeks to drastically improve small satellite adaptability, deployability, and design flexibility. To illustrate the benefits of the philosophy for a prototypical multi-paneled small satellite, the paper describes a SMARTS thermal control system implementation that uses: panel-to-panel heat conduction, intra-panel heat pipe isothermalization, radiator heat flow modulation via a thermoelectric cooler (TEC) cold-biased loop heat pipe (LHP) and maximum external multi-layer insulation (MLI). Analyses are presented that compare the traditional "cold-biasing plus heater power" passive thermal design approach to the SMARTS approach. Plans for a 3-panel SMARTS thermal test bed are described. Ultimately, the goal is to incorporate SMARTS into the design of future ORS satellites, but it is also possible that some aspects of SMARTS technology could be used to improve the responsiveness of future NASA spacecraft.
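The trade-off between the traditional cold-biased design and radiator heat-flow modulation can be illustrated with a simple Stefan-Boltzmann energy balance (all numbers and the 0 K space sink are illustrative assumptions, not values from the paper):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_reject(area_m2, temp_k, emissivity=0.85):
    """Heat rejected by a radiator to deep space (sink ~ 0 K assumed)."""
    return emissivity * SIGMA * area_m2 * temp_k**4

# Radiator sized for the hot case: 200 W dissipated at 290 K.
hot_load = 200.0
area = hot_load / (0.85 * SIGMA * 290**4)

# Traditional cold-biased design: when the cold-case load drops to
# 50 W, the radiator still rejects its full capacity at 290 K, so
# heaters must make up the difference to hold temperature.
cold_load = 50.0
heater_power = radiator_reject(area, 290) - cold_load
print(round(heater_power))   # 150 W of make-up heater power

# A modulated radiator (the SMARTS idea) instead throttles the
# rejected heat down toward the cold-case load, saving that power.
```

The calculation makes the motivation for modulation visible: the bigger the gap between hot-case and cold-case dissipation, the more heater power a purely passive cold-biased design wastes.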
Oh, Sungyoung; Cha, Jieun; Ji, Myungkyu; Kang, Hyekyung; Kim, Seok; Heo, Eunyoung; Han, Jong Soo; Kang, Hyunggoo; Chae, Hoseok; Hwang, Hee; Yoo, Sooyoung
2015-04-01
To design a cloud computing-based Healthcare Software-as-a-Service (SaaS) Platform (HSP) for delivering healthcare information services with low cost, high clinical value, and high usability. We analyzed the architecture requirements of an HSP, including the interface, business services, cloud SaaS, quality attributes, privacy and security, and multi-lingual capacity. For cloud-based SaaS services, we focused on Clinical Decision Service (CDS) content services, basic functional services, and mobile services. Microsoft's Azure cloud computing for Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) was used. The functional and software views of an HSP were designed in a layered architecture. External systems can be interfaced with the HSP using SOAP and REST/JSON. The multi-tenancy model of the HSP was designed as a shared database, with a separate schema for each tenant through a single application, although healthcare data can be physically located on a cloud or in a hospital, depending on regulations. The CDS services were categorized into rule-based services for medications, alert registration services, and knowledge services. We expect that cloud-based HSPs will allow small and mid-sized hospitals, in addition to large-sized hospitals, to adopt information infrastructures and health information technology with low system operation and maintenance costs.
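The shared-database, separate-schema multi-tenancy model mentioned above can be sketched as a simple query router (the schema naming convention and table name are hypothetical examples, not the HSP's actual scheme):

```python
def schema_for(tenant_id):
    """Shared database, separate schema per tenant: one application
    instance maps each tenant (e.g. each hospital) to its own schema."""
    return f"tenant_{tenant_id}"

def tenant_query(tenant_id, table):
    # A single application serves all tenants; only the schema prefix
    # in the generated SQL differs per tenant.
    return f"SELECT * FROM {schema_for(tenant_id)}.{table}"

print(tenant_query("hospA", "cds_alerts"))
# SELECT * FROM tenant_hospA.cds_alerts
```

This keeps one application and one database to operate while still isolating each tenant's data at the schema level, which is what allows the data to be physically relocated (cloud or hospital) as regulations require.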
A Reusable Design for Precision Lunar Landing Systems
NASA Technical Reports Server (NTRS)
Fuhrman, Linda; Brand, Timothy; Fill, Tom; Norris, Lee; Paschall, Steve
2005-01-01
The top-level architecture to accomplish NASA's Vision for Space Exploration is to use Lunar missions and systems not just as an end in themselves, but also as testbeds for the more ambitious goals of Human Mars Exploration (HME). This approach means that Lunar missions and systems are most likely going to be targeted for (Lunar) polar missions, and also for long-duration (months) surface stays. This overarching theme creates basic top-level requirements for any next-generation lander system: 1) Long duration stays: a) Multiple landers in close proximity; b) Pinpoint landings for "surface rendezvous"; c) Autonomous landing of pre-positioned assets; and d) Autonomous Hazard Detection and Avoidance. 2) Polar and deep-crater landings (dark); 3) Common/extensible systems for Moon and Mars, crew and cargo. These requirements pose challenging technology and capability needs. Compare and contrast: 4) Apollo: a) 1 km landing accuracy; b) Lunar near-side (well imaged and direct-to-Earth com. possible); c) Lunar equatorial (landing trajectories offer best navigation support from Earth); d) Limited lighting conditions; e) Significant ground-in-the-loop operations; 5) Lunar Access: a) 10-100 m landing precision; b) "Anywhere" access includes polar (potentially poor nav. support from Earth) and far side (poor gravity and imaging; no direct-to-Earth com.); c) "Anytime" access includes any lighting condition (including dark); d) Full autonomous landing capability; e) Extensible design for tele-operation or operator-in-the-loop; and f) Minimal ground support to reduce operations costs.
The Lunar Access program objectives, therefore, are to develop a baseline Lunar Precision Landing System (PLS) design enabling pinpoint "anywhere, anytime" landings: a) landing precision of 10-100 m; b) any latitude and longitude; and c) any lighting condition. This paper characterizes the basic features of the next-generation Lunar landing system, including trajectory types, sensor suite options and a reference system architecture.
Realistic absorption coefficient of each individual film in a multilayer architecture
NASA Astrophysics Data System (ADS)
Cesaria, M.; Caricato, A. P.; Martino, M.
2015-02-01
A spectrophotometric strategy, termed the multilayer-method (ML-method), is presented and discussed to realistically calculate the absorption coefficient of each individual layer embedded in multilayer architectures, without reverse engineering, numerical refinement, or assumptions about layer homogeneity and thickness. The strategy extends, in a non-straightforward way, a consolidated route already published by the authors and here termed the basic-method, which is able to accurately characterize an absorbing film covering a transparent substrate. The ML-method inherently accounts for non-measurable contributions of the interfaces (including multiple reflections), describes the specific film structure as determined by the multilayer architecture and the deposition approach and parameters used, exploits simple mathematics, and has a wide range of applicability (high-to-weak absorption regions, thick-to-ultrathin films). Reliability tests are performed on films and multilayers based on a well-known material (indium tin oxide) by deliberately changing the film structural quality through doping, thickness tuning and the underlying supporting film. The results are consistent with information obtained by standard (optical and structural) analysis, with the basic-method, and with band gap values reported in the literature. The discussed example applications demonstrate the ability of the ML-method to overcome the drawbacks that commonly limit an accurate description of multilayer architectures.
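The ML-method itself is not given in the abstract, but the kind of single-film relation the basic-method refines can be sketched from the textbook single-pass approximation (the formula and the numbers below are illustrative assumptions, not the authors' procedure):

```python
import math

def absorption_coefficient(T, R, d_cm):
    """Textbook single-pass estimate for a film of thickness d:
    T ~ (1 - R)^2 * exp(-alpha * d), solved for alpha in cm^-1.
    This neglects the interface and multiple-reflection terms that
    the ML-method is designed to account for in a full stack."""
    return -math.log(T / (1.0 - R) ** 2) / d_cm

# Illustrative numbers: 80% transmittance, 10% reflectance, 200 nm film.
alpha = absorption_coefficient(T=0.80, R=0.10, d_cm=200e-7)
print(f"{alpha:.3e} cm^-1")
```

For a stack, applying this relation layer by layer is exactly what fails without interface corrections, which is the gap the ML-method addresses.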
Optical beam forming techniques for phased array antennas
NASA Technical Reports Server (NTRS)
Wu, Te-Kao; Chandler, C.
1993-01-01
Conventional phased array antennas using waveguide or coax for signal distribution are impractical for large-scale implementation on satellites or spacecraft because they exhibit prohibitively large system size, heavy weight, high attenuation loss, limited bandwidth, and sensitivity to electromagnetic interference (EMI), temperature drifts and phase instability. Optical beam forming systems, by contrast, are smaller, lighter, and more flexible. Three optical beam forming techniques are identified as applicable to large spaceborne phased array antennas: (1) optical fiber replacement of conventional RF phased array distribution and control components, (2) spatial beam forming, and (3) optical beam splitting with integrated quasi-optical components. The optical fiber replacement and spatial beam forming approaches have been pursued by many organizations. Two new optical beam forming architectures are presented. Both involve monolithic integration of the antenna radiating elements with quasi-optical grid detector arrays; the advantages of the grid detector array in the optical process are its higher power-handling capability and dynamic range. The first architecture is a modified version of the original spatial beam forming approach. The basic difference is the spatial light modulator (SLM) device used to control the aperture field distribution: the original liquid crystal light valve SLM is replaced by an optical shuffling SLM, which has been demonstrated for "smart pixel" technology. The advantages are the capability to generate the agile beams of a phased array antenna and to provide simultaneous transmit and receive functions. The second architecture is the optical beam splitting approach, which provides alternative amplitude control for each antenna element with an optical beam power divider comprised of mirrors and beam splitters. It also implements a quasi-optical grid phase shifter for phase control and a grid amplifier for RF power. The advantages are that no SLM is required and the complete antenna system is capable of full monolithic integration.
Scalable software architectures for decision support.
Musen, M A
1999-12-01
Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.
An integrated healthcare system for personalized chronic disease care in home-hospital environments.
Jeong, Sangjin; Youn, Chan-Hyun; Shim, Eun Bo; Kim, Moonjung; Cho, Young Min; Peng, Limei
2012-07-01
Facing the increasing demands and challenges in the area of chronic disease care, various studies have been conducted on healthcare systems that can extract and process patient data whenever and wherever needed. Chronic diseases are long-term diseases and require real-time monitoring, multidimensional quantitative analysis, and classification of patients' diagnostic information. A healthcare system for chronic diseases is characterized as an at-hospital or at-home service according to the targeted environment. Both services basically aim to provide patients with accurate diagnoses by monitoring a variety of physical states with a number of monitoring methods, but there are differences between the home and hospital environments, and these different characteristics should be considered in order to provide more accurate diagnoses, especially for patients with chronic diseases. In this paper, we propose a patient status classification method for effectively identifying and classifying chronic diseases and show the validity of the proposed method. Furthermore, we present a new healthcare system architecture that integrates the at-home and at-hospital environments and discuss the applicability of the architecture using practical target services.
Image analysis of pulmonary nodules using micro CT
NASA Astrophysics Data System (ADS)
Niki, Noboru; Kawata, Yoshiki; Fujii, Masashi; Kakinuma, Ryutaro; Moriyama, Noriyuki; Tateno, Yukio; Matsui, Eisuke
2001-07-01
We are developing a micro-computed tomography (micro CT) system for imaging pulmonary nodules. The purpose is to enhance physician performance in assessing the micro-architecture of a nodule for classification between malignant and benign nodules. The basic components of the micro CT system are a microfocus X-ray source, a specimen manipulator, and an image intensifier detector coupled to a charge-coupled device (CCD) camera. 3D image reconstruction was performed slice by slice. A standard fan-beam convolution and backprojection algorithm was used to reconstruct the center plane intersecting the X-ray source. The preprocessing for the 3D image reconstruction included correction of the geometrical distortions and the shading artifact introduced by the image intensifier. The main advantage of the system is its high spatial resolution, which ranges between b micrometers and 25 micrometers. In this work we report on preliminary studies performed with the micro CT for imaging resected tissues of normal and abnormal lung. Experimental results reveal the micro-architecture of lung tissues, such as the alveolar walls, the septal walls of the pulmonary lobules, and the bronchioles. From these results, the micro CT system is expected to have interesting potential for high-confidence differential diagnosis.
NASA Technical Reports Server (NTRS)
Stehle, Roy H.; Ogier, Richard G.
1993-01-01
Alternatives for realizing a packet-based network switch for use on a frequency division multiple access/time division multiplexed (FDMA/TDM) geostationary communication satellite were investigated. Each of the eight downlink beams supports eight directed dwells. The design needed to accommodate multicast packets with very low probability of loss due to contention. Three switch architectures were designed and analyzed. An output-queued, shared bus system yielded a functionally simple system, utilizing a first-in, first-out (FIFO) memory per downlink dwell, but at the expense of a large total memory requirement. A shared memory architecture offered the most efficiency in memory requirements, requiring about half the memory of the shared bus design. The processing requirement for the shared-memory system adds system complexity that may offset the benefits of the smaller memory. An alternative design using a shared memory buffer per downlink beam decreases circuit complexity through a distributed design, and requires at most 1000 packets of memory more than the completely shared memory design. Modifications to the basic packet switch designs were proposed to accommodate circuit-switched traffic, which must be served on a periodic basis with minimal delay. Methods for dynamically controlling the downlink dwell lengths were developed and analyzed. These methods adapt quickly to changing traffic demands, and do not add significant complexity or cost to the satellite and ground station designs. Methods for reducing the memory requirement by not requiring the satellite to store full packets were also proposed and analyzed. In addition, optimal packet and dwell lengths were computed as functions of memory size for the three switch architectures.
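The output-queued design, one FIFO per downlink dwell, can be sketched in a few lines (the dwell count is reduced here for brevity; the paper's design has eight dwells on each of eight beams):

```python
from collections import deque

# Toy output-queued switch: one FIFO memory per downlink dwell.
NUM_DWELLS = 4
queues = [deque() for _ in range(NUM_DWELLS)]

def enqueue(packet, dwell):
    """Route an uplink packet to its destination dwell's FIFO."""
    queues[dwell].append(packet)

def serve(dwell):
    # Packets leave each dwell's queue in arrival order (FIFO).
    return queues[dwell].popleft() if queues[dwell] else None

for pkt, dwell in [("pkt1", 0), ("pkt2", 0), ("pkt3", 2)]:
    enqueue(pkt, dwell)

print(serve(0), serve(0), serve(2), serve(1))
# pkt1 pkt2 pkt3 None
```

The memory trade-off in the abstract follows directly from this structure: dedicated per-dwell FIFOs must each be sized for their own worst case, whereas a shared memory pools the same buffering across all dwells, roughly halving the total at the cost of more complex memory management.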
I want what you've got: Cross platform portability and human-robot interaction assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Julie L. Marble; Douglas A. Few; David J. Bruemmer
2005-08-01
Human-robot interaction is a subtle, yet critical aspect of design that must be assessed during the development of both the human-robot interface and robot behaviors if the human-robot team is to effectively meet the complexities of the task environment. Testing not only ensures that the system can successfully achieve the tasks for which it was designed; more importantly, usability testing allows the designers to understand how humans and robots can, will, and should work together to optimize workload distribution. A lack of human-centered robot interface design, the rigidity of sensor configurations, and the platform-specific nature of research robot development environments are a few of the factors preventing robotic solutions from reaching functional utility in real-world environments. Often the difficult engineering challenge of implementing adroit reactive behavior, reliable communication, and trustworthy autonomy combined with system transparency and usable interfaces is overlooked in favor of other research aims. The result is that many robotic systems never reach the level of functional utility necessary even to evaluate the efficacy of the basic system, much less result in a system that can be used in a critical, real-world environment. Further, because control architectures and interfaces are often platform specific, it is difficult or even impossible to make usability comparisons between them. This paper discusses the challenges inherent in conducting human factors testing of variable-autonomy control architectures across platforms within a complex, real-world environment. It discusses the need to compare behaviors, architectures, and interfaces within a structured environment containing challenging real-world tasks, and the implications for acceptance of and trust in autonomous robotic systems and for how humans and robots interact in true interactive teams.
[Basic theory and research method of urban forest ecology].
He, Xingyuan; Jin, Yingshan; Zhu, Wenquan; Xu, Wenduo; Chen, Wei
2002-12-01
With the development of the world economy and the growth of urban populations, urban environmental problems hinder sustainable urban development. More and more people now realize the importance of urban forests in improving the quality of urban ecology. Therefore, a new subject, urban forest ecology, and a corresponding new conceptual framework in the field have formed. The theoretical foundation of urban forest ecology derives from the combination of theories from forest ecology, landscape ecology, landscape architecture ecology and anthropo-ecology. The development of a city is surveyed from the viewpoint of an ecosystem, with the environment and the community of humans, animals and plants regarded as the main factors of the system. The paper systematically introduces urban forest ecology as follows: 1) the basic concepts of urban forest ecology; 2) the meaning of urban forest ecology; 3) the basic principles and theoretical basis of urban forest ecology; 4) the research methods of urban forest ecology; 5) the development prospects of urban forest ecology.
The Core Avionics System for the DLR Compact-Satellite Series
NASA Astrophysics Data System (ADS)
Montenegro, S.; Dittrich, L.
2008-08-01
The Standard Satellite Bus's core avionics system is a further step in the development line of the software and hardware architecture first used in the bispectral infrared detector (BIRD) mission. This next step improves the dependability, flexibility and simplicity of the whole core avionics system. Important aspects of this concept have already been implemented, simulated and tested in other ESA and industrial projects, so the basic concept can be considered proven. This paper deals with different aspects of core avionics development and proposes an extension to the existing BIRD core avionics system to meet current and future requirements regarding the flexibility, availability and reliability of small satellites and the continuously increasing demand for mass memory and computational power.
A curriculum for real-time computer and control systems engineering
NASA Technical Reports Server (NTRS)
Halang, Wolfgang A.
1990-01-01
An outline of a syllabus for the education of real-time-systems engineers is given. This comprises the treatment of basic concepts, real-time software engineering, and programming in high-level real-time languages, real-time operating systems with special emphasis on such topics as task scheduling, hardware architectures, and especially distributed automation structures, process interfacing, system reliability and fault-tolerance, and integrated project development support systems. Accompanying course material and laboratory work are outlined, and suggestions for establishing a laboratory with advanced, but low-cost, hardware and software are provided. How the curriculum can be extended into a second semester is discussed, and areas for possible graduate research are listed. The suitable selection of a high-level real-time language and supporting operating system for teaching purposes is considered.
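One of the task-scheduling topics such a curriculum covers can be made concrete with the classic Liu and Layland rate-monotonic schedulability test (an illustrative example, not part of the syllabus text):

```python
def rm_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling:
    a set of n periodic tasks is schedulable if total CPU utilization
    U = sum(C_i / T_i) is at most n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# (worst-case execution time C, period T), e.g. in milliseconds
tasks = [(1, 4), (1, 5), (2, 20)]
print(rm_schedulable(tasks))   # True: U = 0.55 <= ~0.780
```

Note the test is only sufficient: task sets that fail the bound may still be schedulable, which is typically shown in class via exact response-time analysis.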
NASA Astrophysics Data System (ADS)
Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis
2013-10-01
Approaches for the analysis and specification of business vocabularies and rules are highly relevant topics in both the Business Process Management and Information Systems Development disciplines. However, in the common practice of Information Systems Development, business modeling activities are still mostly empirical in nature. In this paper, basic aspects of an approach for the semi-automated extraction of business vocabularies from business process models are presented. The approach is based on the novel business-modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics of Business Vocabulary and Business Rules" (SBVR), thus contributing to OMG's vision of Model-Driven Architecture (MDA) and to model-driven development in general.
A Systolic Array-Based FPGA Parallel Architecture for the BLAST Algorithm
Guo, Xinyu; Wang, Hong; Devabhaktuni, Vijay
2012-01-01
A design of a systolic array-based Field Programmable Gate Array (FPGA) parallel architecture for the Basic Local Alignment Search Tool (BLAST) algorithm is proposed. BLAST is a heuristic biological sequence alignment algorithm widely used by bioinformatics experts. In contrast to other designs that detect at most one hit per clock cycle, our design applies a Multiple Hits Detection Module, a pipelined systolic array that searches for multiple hits in a single clock cycle. Further, we designed a Hits Combination Block that combines overlapping hits from the systolic array into one hit. These implementations complete the first and second steps of the BLAST architecture and achieve significant speedup compared with previously published architectures. PMID:25969747
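In software, the hit detection that the systolic array performs in hardware amounts to BLAST's classic word-matching step, which can be sketched as:

```python
def find_hits(query, subject, w=3):
    """Software sketch of BLAST's first step: slide a window of width
    w over the subject and report every position whose w-mer also
    occurs in the query (a 'hit'). The FPGA design performs this
    search in parallel, finding multiple hits per clock cycle."""
    words = {query[i:i + w] for i in range(len(query) - w + 1)}
    return [j for j in range(len(subject) - w + 1)
            if subject[j:j + w] in words]

hits = find_hits("ACGTAC", "TTACGTT")
print(hits)   # [1, 2, 3]: 'TAC', 'ACG', 'CGT' all occur in the query
```

Consecutive overlapping hits like these (positions 1, 2, 3) are exactly what a hits-combination stage would merge into a single extended hit before the alignment-extension step.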
Big data processing in the cloud - Challenges and platforms
NASA Astrophysics Data System (ADS)
Zhelev, Svetoslav; Rozeva, Anna
2017-12-01
Choosing the appropriate architecture and technologies for a big data project is a difficult task, which requires extensive knowledge of both the problem domain and the big data landscape. The paper analyzes the main big data architectures and the most widely implemented technologies used for processing and persisting big data. Clouds provide dynamic resource scaling, which makes them a natural fit for big data applications. Basic cloud computing service models are presented. Two architectures for processing big data are discussed: the Lambda and Kappa architectures. Technologies for big data persistence are presented and analyzed. Stream processing, as the most important aspect and the most difficult to manage, is outlined. The paper highlights the main advantages of the cloud and potential problems.
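The difference between the two architectures can be sketched with a toy word-count example: the Lambda architecture maintains a batch view plus an incremental speed view and merges them at query time, while the Kappa architecture would instead process everything as a single stream (the names and data here are illustrative):

```python
# Toy Lambda architecture: a batch layer periodically recomputes an
# exact view over all data; a speed layer keeps an incremental view
# of events since the last batch run; queries merge the two.
batch_view = {}      # rebuilt from the full master dataset
speed_view = {}      # updated per event since the last batch run

def batch_recompute(master_events):
    counts = {}
    for user in master_events:
        counts[user] = counts.get(user, 0) + 1
    return counts

def speed_update(user):
    speed_view[user] = speed_view.get(user, 0) + 1

def query(user):
    # Serving layer: merge batch and real-time results.
    return batch_view.get(user, 0) + speed_view.get(user, 0)

batch_view = batch_recompute(["a", "b", "a"])
speed_update("a")                     # a new event after the batch run
print(query("a"), query("b"))         # 3 1
```

The cost of Lambda is visible even in this sketch: the counting logic exists twice (batch and speed paths), which is the duplication Kappa eliminates by replaying one stream.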
Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations
Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali
2015-01-01
Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414
Space-time dynamics of Stem Cell Niches: a unified approach for Plants.
Pérez, Maria Del Carmen; López, Alejandro; Padilla, Pablo
2013-06-01
Many complex systems cannot be analyzed using traditional mathematical tools, due to their irreducible nature. This makes it necessary to develop models that can be implemented computationally to simulate their evolution. Examples of these models are cellular automata, evolutionary algorithms, complex networks, agent-based models, symbolic dynamics and dynamical systems techniques. We review some representative approaches to modeling the stem cell niche in Arabidopsis thaliana and the basic biological mechanisms that underlie its formation and maintenance. We propose a mathematical model based on cellular automata for describing the space-time dynamics of the stem cell niche in the root. By making minimal assumptions about the cell communication process documented in experiments, we classify the basic developmental features of the stem cell niche, including its basic structural architecture, and suggest that they can be understood as the result of generic mechanisms given by short- and long-range signals. This could be a first step in understanding why different stem cell niches share similar topologies, not only in plants. It also suggests that this organization is a robust consequence of the way information is processed by the cells and is, to some extent, independent of the detailed features of the signaling mechanism.
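A minimal sketch of such a cellular automaton, assuming an illustrative rule (not the authors' model) in which stem identity requires both a long-range signal from an organizer cell and a short-range signal from a stem-cell neighbor:

```python
# Toy 1-D cellular automaton in the spirit of the model described:
# a cell keeps stem-cell identity (1) only if it is both within reach
# of the organizer (long-range signal) and adjacent to a stem cell
# (short-range signal). Rule and ranges are illustrative assumptions.
ORGANIZER = 0          # position of the organizer cell
LONG_RANGE = 3         # reach of the organizer's signal

def step(cells):
    nxt = []
    for i, state in enumerate(cells):
        long_ok = abs(i - ORGANIZER) <= LONG_RANGE
        neighbors = cells[max(i - 1, 0):i] + cells[i + 1:i + 2]
        short_ok = 1 in neighbors or i == ORGANIZER
        nxt.append(1 if long_ok and short_ok else 0)
    return nxt

cells = [1, 1, 0, 0, 1, 0, 1, 1]      # arbitrary initial identities
for _ in range(5):
    cells = step(cells)
print(cells)   # [1, 1, 1, 1, 0, 0, 0, 0]: a stable niche at the organizer
```

Even this crude rule converges to a compact niche anchored at the organizer regardless of the scattered initial identities, illustrating the claim that the niche topology is a robust consequence of generic short- and long-range signaling.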
ERIC Educational Resources Information Center
Millan, Eva; Belmonte, Maria-Victoria; Ruiz-Montiel, Manuela; Gavilanes, Juan; Perez-de-la-Cruz, Jose-Luis
2016-01-01
In this paper, we present BH-ShaDe, a new software tool to assist architecture students learning the ill-structured domain/task of housing design. The software tool provides students with automatic or interactively generated floor plan schemas for basic houses. The students can then use the generated schemas as initial seeds to develop complete…
Use of a New "Moodle" Module for Improving the Teaching of a Basic Course on Computer Architecture
ERIC Educational Resources Information Center
Trenas, M. A.; Ramos, J.; Gutierrez, E. D.; Romero, S.; Corbera, F.
2011-01-01
This paper describes how a new "Moodle" module, called "CTPracticals", is applied to the teaching of the practical content of a basic computer organization course. In the core of the module, an automatic verification engine enables it to process the VHDL designs automatically as they are submitted. Moreover, a straightforward…
HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation
NASA Technical Reports Server (NTRS)
Sterling, Thomas; Bergman, Larry
2000-01-01
Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops-scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that, in combination with sufficient resolution and advanced adaptive techniques, may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops-scale computing in the 2004/5 timeframe. The Hybrid-Technology MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption.
The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) at one percent of the power required by conventional semiconductor logic. Wavelength Division Multiplexing optical communications can approach a peak per-fiber bandwidth of 1 Tbps, and the new Data Vortex network topology employing this technology can connect tens of thousands of ports, providing a bi-section bandwidth on the order of a Petabyte per second with latencies well below 100 nanoseconds, even under heavy loads. Processor-in-Memory (PIM) technology combines logic and memory on the same chip, exposing the internal bandwidth of the memory row buffers at low latency. And holographic photorefractive storage technologies provide high-density memory with access a thousand times faster than conventional disk technologies. Together these technologies enable a new class of shared-memory system architecture with a peak performance in the range of a Petaflops but size and power requirements comparable to today's largest Teraflops-scale systems. To achieve high sustained performance, HTMT combines an advanced multithreading processor architecture with a memory-driven coarse-grained latency management strategy called "percolation", yielding high efficiency while reducing much of the parallel programming burden. This paper will present the basic system architecture characteristics made possible through this series of advanced technologies and then give a detailed description of the new percolation approach to runtime latency management.
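The "percolation" idea (staging a task's data near the processor ahead of time so the compute unit never stalls on slow memory) can be caricatured with a queue and two threads. All names and structure here are illustrative, not HTMT's actual runtime.

```python
import queue
import threading

# "Slow memory": eight named data blocks far from the processor.
slow_memory = {f"block{i}": list(range(i, i + 4)) for i in range(8)}
staged = queue.Queue(maxsize=4)      # the small "fast" buffer near the processor

def percolate(task_ids):
    """Staging thread: prefetch each task's data into the fast buffer."""
    for tid in task_ids:
        staged.put((tid, slow_memory[tid]))
    staged.put(None)                 # sentinel: no more work

results = {}

def compute():
    """Compute thread: operates only on data already staged."""
    while True:
        item = staged.get()
        if item is None:
            break
        tid, data = item
        results[tid] = sum(data)

stager = threading.Thread(target=percolate, args=(list(slow_memory),))
worker = threading.Thread(target=compute)
stager.start(); worker.start()
stager.join(); worker.join()
```

The design point is the decoupling: the latency of reaching `slow_memory` is absorbed by the staging thread, while the worker sees only fast-buffer accesses.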
Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic
NASA Astrophysics Data System (ADS)
Narendran, S.; Selvakumar, J.
2018-04-01
High-performance computing is in high demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one technology that promises high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and has three sets of basic gates. Series of reciprocal transmission lines are placed between the gates to avoid loss of power and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. A major drawback of Reciprocal Quantum Logic is area: achieving a proper power supply requires splitters, which occupy a large area. Distributed arithmetic performs vector-vector multiplication in which one vector is constant and the other is a signed variable; each word acts as a binary number, and the words are rearranged and mixed to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
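The distributed-arithmetic idea mentioned above (replacing multipliers with a lookup table addressed bit-serially by the inputs) can be shown concretely. Coefficients and word width below are arbitrary, and the unsigned form is used for simplicity.

```python
# Distributed arithmetic computes a constant-coefficient inner product
# bit-serially: a LUT indexed by one bit from each input word is
# accumulated with shifts, so no multiplier is needed.

COEFFS = [3, 5, 2, 7]   # the constant coefficient vector (illustrative)
WIDTH = 8               # bit width of the unsigned input words

# LUT[m] = sum of COEFFS[k] for every input k whose bit is set in m.
LUT = [sum(c for k, c in enumerate(COEFFS) if (m >> k) & 1)
       for m in range(1 << len(COEFFS))]

def da_inner_product(xs):
    """Bit-serial inner product of xs with COEFFS using only LUT + shifts."""
    acc = 0
    for b in range(WIDTH):
        index = 0
        for k, x in enumerate(xs):
            index |= ((x >> b) & 1) << k   # gather bit b of every input word
        acc += LUT[index] << b             # shift-accumulate
    return acc

# Matches the direct multiply-accumulate result.
xs = [10, 20, 30, 40]
assert da_inner_product(xs) == sum(c * x for c, x in zip(COEFFS, xs))
```

In hardware the LUT and shift-accumulator replace the multiplier array, which is why distributed arithmetic suits convolution-style kernels with fixed coefficients.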
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.
Convolutional neural networks with balanced batches for facial expressions recognition
NASA Astrophysics Data System (ADS)
Battini Sönmez, Elena; Cangelosi, Angelo
2017-03-01
This paper considers the issue of fully automatic emotion classification on 2D faces. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to produce a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNNs), which are built as collections of interconnected processing elements loosely modeled on the human brain. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm which uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a very simple yet powerful CNN, which can yield state-of-the-art accuracy on the very competitive benchmark of the Extended Cohn-Kanade database.
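The balanced-batch construction with noise augmentation described above can be sketched as follows. Labels, vectors and the noise model are invented placeholders, not the paper's pipeline.

```python
import random

def balanced_batches(samples, batch_size, noise=0.05, rng=random):
    """Build one batch with an equal number of samples per class.

    `samples` maps class label -> list of feature vectors (plain lists).
    Classes are resampled as needed, and each drawn vector gets small
    additive uniform noise as a toy augmentation.
    """
    labels = sorted(samples)
    per_class = batch_size // len(labels)
    batch = []
    for label in labels:
        for _ in range(per_class):
            vec = rng.choice(samples[label])
            jittered = [v + rng.uniform(-noise, noise) for v in vec]
            batch.append((jittered, label))
    rng.shuffle(batch)
    return batch

data = {"happy": [[0.1, 0.9]], "sad": [[0.8, 0.2]],
        "neutral": [[0.5, 0.5]], "angry": [[0.9, 0.1]]}
batch = balanced_batches(data, 8)   # two samples from each of the four classes
```

Balancing at the batch level keeps rare expression classes from being drowned out during gradient updates, while the jitter cheaply enlarges the effective training set.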
NASA Astrophysics Data System (ADS)
Uchida, Satoshi; Yamamoto, Hitoshi; Okada, Isamu; Sasaki, Tatsuya
2018-02-01
Indirect reciprocity is one of the basic mechanisms to sustain mutual cooperation, by which beneficial acts are returned, not by the recipient, but by third parties. This mechanism relies on the ability of individuals to know the past actions of others, and to assess those actions. There are many different systems of assessing others, which can be interpreted as rudimentary social norms (i.e., views on what is “good” or “bad”). In this paper, the impacts of different adaptive architectures, i.e., ways for individuals to adapt to environments, on indirect reciprocity are investigated. We examine two representative architectures: one based on replicator dynamics and the other on a genetic algorithm. Unlike the replicator dynamics, the genetic algorithm requires describing the mixture of all possible norms in the norm space under consideration. Therefore, we also propose an analytic method to study norm ecosystems in which all possible second-order social norms potentially exist and compete. The analysis reveals that the different adaptive architectures show different paths to the evolution of cooperation. In particular, we find that the so-called Stern-Judging, one of the best studied norms in the literature, exhibits distinct behaviors in the two architectures. On one hand, in the replicator dynamics, Stern-Judging remains alive and steadily holds a majority when the population reaches a cooperative state. On the other hand, in the genetic algorithm, it gets a majority only temporarily and becomes extinct in the end.
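The replicator-dynamics architecture can be illustrated with a two-norm toy model: norms with above-average payoff grow in frequency. The payoff matrix here is invented and far simpler than the paper's second-order norm space; it is only a stand-in for a dominant norm (such as Stern-Judging) taking over the ecosystem.

```python
def replicator_step(freqs, payoff, dt=0.1):
    """One Euler step of replicator dynamics.

    freqs[i] is the frequency of norm i; payoff[i][j] is the payoff of
    norm i interacting with norm j. Frequencies are renormalized so they
    always sum to one.
    """
    fitness = [sum(p * f for p, f in zip(row, freqs)) for row in payoff]
    avg = sum(f * w for f, w in zip(freqs, fitness))
    new = [f + dt * f * (w - avg) for f, w in zip(freqs, fitness)]
    total = sum(new)
    return [f / total for f in new]

# Invented payoff matrix in which norm 0 strictly dominates norm 1.
payoff = [[3.0, 1.0], [2.0, 1.0]]
freqs = [0.5, 0.5]
for _ in range(200):
    freqs = replicator_step(freqs, payoff)
# norm 0's frequency approaches 1
```

A genetic-algorithm architecture would instead maintain a population of norm genotypes with mutation and selection, which is why the two architectures can follow different evolutionary paths from the same payoffs.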
Overview of the preliminary design of the ITER plasma control system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snipes, J. A.; Albanese, R.; Ambrosino, G.
An overview of the Preliminary Design of the ITER Plasma Control System (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.
Overview of the preliminary design of the ITER plasma control system
NASA Astrophysics Data System (ADS)
Snipes, J. A.; Albanese, R.; Ambrosino, G.; Ambrosino, R.; Amoskov, V.; Blanken, T. C.; Bremond, S.; Cinque, M.; de Tommasi, G.; de Vries, P. C.; Eidietis, N.; Felici, F.; Felton, R.; Ferron, J.; Formisano, A.; Gribov, Y.; Hosokawa, M.; Hyatt, A.; Humphreys, D.; Jackson, G.; Kavin, A.; Khayrutdinov, R.; Kim, D.; Kim, S. H.; Konovalov, S.; Lamzin, E.; Lehnen, M.; Lukash, V.; Lomas, P.; Mattei, M.; Mineev, A.; Moreau, P.; Neu, G.; Nouailletas, R.; Pautasso, G.; Pironti, A.; Rapson, C.; Raupp, G.; Ravensbergen, T.; Rimini, F.; Schneider, M.; Travere, J.-M.; Treutterer, W.; Villone, F.; Walker, M.; Welander, A.; Winter, A.; Zabeo, L.
2017-12-01
An overview of the preliminary design of the ITER plasma control system (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.
Novel pervasive scenarios for home management: the Butlers architecture.
Denti, Enrico
2014-01-01
Many efforts today aim at energy saving, promoting users' awareness and virtuous behavior from a sustainability perspective. Our houses, appliances, energy meters and devices are becoming smarter and connected, domotics is increasing the possibilities for house automation and control, and ambient intelligence and assisted living are bringing attention to people's needs from different viewpoints. Our assumption is that considering these aspects together allows for novel, intriguing possibilities. To this end, in this paper we combine home energy management with domotics, coordination technologies, intelligent agents, ambient intelligence, ubiquitous technologies and gamification to devise novel scenarios, where energy monitoring and management is just the basic brick of a much wider and more comprehensive home management system. The aim is to control home appliances well beyond energy consumption, combining home comfort, appliance scheduling, safety constraints, etc. with dynamically changeable user preferences, goals and priorities. At the same time, usability and attractiveness are seen as key success factors: the intriguing technologies available in most houses and smart devices are exploited to make system configuration and use simpler, entertaining and attractive for users. These aspects are also integrated with ubiquitous and pervasive technologies, geo-localization, social networks and communities to provide enhanced functionality and support smarter application scenarios, thereby further strengthening technology acceptance and diffusion. Accordingly, we first analyse the system requirements and define a reference multi-layer architectural model - the Butlers architecture - that specifies seven layers of functionality, correlating the requirements, the corresponding technologies and the consequent added value for users in each layer.
Then, we outline a set of notable scenarios of increasing functionalities and complexity, discuss the structure of the corresponding system patterns in terms of the proposed architecture, and make this concrete by presenting some comprehensive interaction examples as comic strip stories. Next, we discuss the implementation requirements and how they can be met with the available technologies, discuss a possible architecture, refine it in the concrete case of the TuCSoN coordination technology, present a subsystem prototype and discuss its properties in the Butlers perspective.
Push-pull with recovery stage high-voltage DC converter for PV solar generator
NASA Astrophysics Data System (ADS)
Nguyen, The Vinh; Aillerie, Michel; Petit, Pierre; Pham, Hong Thang; Vo, Thành Vinh
2017-02-01
Many systems are built on DC-DC or DC-AC converters based on electronic switches such as MOS or bipolar transistors. The limits of efficiency are quickly reached when high output voltages and high input currents are needed. This work presents a new high-efficiency, high-step-up push-pull DC-DC converter integrating recovery stages, dedicated to smart HVDC distributed architectures in PV solar energy production systems. An appropriate duty-cycle ratio ensures that the recovery stages work with parallel charge and discharge to achieve a high step-up voltage gain. Besides, the voltage stress on the main switch is reduced with a passive clamp circuit; thus, a low on-state resistance Rdson can be adopted for the main switch to reduce conduction losses. The efficiency of a basic DC-HVDC converter dedicated to renewable energy production can therefore be further improved with such a topology. A prototype converter is developed and experimentally tested for validation.
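The step-up behaviour the abstract appeals to can be quantified with the generic ideal boost relation M = n / (1 - D). This is a textbook formula used only to illustrate the trend (gain rising sharply as the duty cycle approaches 1); the paper's recovery-stage topology has its own, higher gain expression.

```python
def step_up_gain(duty, turns_ratio=1.0):
    """Ideal voltage gain of a boost-derived stage, M = n / (1 - D).

    n is the transformer turns ratio, D the duty cycle. Losses, clamp
    circuit and recovery stages are ignored in this sketch.
    """
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must be in [0, 1)")
    return turns_ratio / (1.0 - duty)

vin = 24.0                                         # typical PV panel voltage
vout = vin * step_up_gain(0.8, turns_ratio=5.0)    # roughly a 600 V HVDC bus
```

The formula makes the design pressure clear: pushing D close to 1 for more gain raises switch stress, which is why clamp circuits and recovery stages are used instead of duty cycle alone.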
NASA Technical Reports Server (NTRS)
Rozenfeld, Pawel
1993-01-01
This paper describes the selection and training process of satellite controllers and data network operators performed at INPE's Satellite Tracking and Control Center in order to prepare them for the mission operations of the INPE's first (SCD1) satellite. An overview of the ground control system and SCD1 architecture and mission is given. Different training phases are described, taking into account that the applicants had no previous knowledge of space operations requiring, therefore, a training which started from the basics.
A Case Study of Two NRL Pump Prototypes
1996-01-01
[Extraction residue from figures: Figure 1, "The simplified Pump architecture" (Low and High sides exchanging messages and ACKs through the Pump buffer); Figure 4, "STOP security ring structure" (the security kernel provides basic system services); and a message-flow diagram legend (IPC msg, FIFO msg, Data msg, Process, Object).]
NASA Astrophysics Data System (ADS)
Haener, Rainer; Waechter, Joachim; Fleischer, Jens; Herrnkind, Stefan; Schwarting, Herrmann
2010-05-01
The German Indonesian Tsunami Early Warning System (GITEWS) is a multifaceted system consisting of various sensor types, such as seismometers, sea level sensors or GPS stations, and processing components, all with their own system behavior and proprietary data structures. To operate a warning chain, from measurements up to warning products, all components have to interact correctly, both syntactically and semantically. In designing the system, great emphasis was laid on conformity to the Sensor Web Enablement (SWE) specification of the Open Geospatial Consortium (OGC). The technical infrastructure, the so-called Tsunami Service Bus (TSB), follows the blueprint of Service Oriented Architectures (SOA). The TSB is an integration concept (SWE) where functionality (observe, task, notify, alert, and process) is grouped around business processes (Monitoring, Decision Support, Sensor Management) and packaged as interoperable services (SAS, SOS, SPS, WNS). The benefits of using a flexible architecture together with SWE lead to an open integration platform that:
• accesses and controls heterogeneous sensors in a uniform way (Functional Integration)
• assigns functionality to distinct services (Separation of Concerns)
• allows resilient relationships between systems (Loose Coupling)
• integrates services so that they can be accessed from everywhere (Location Transparency)
• enables infrastructures that integrate heterogeneous applications (Encapsulation)
• allows combination of services (Orchestration) and data exchange within business processes
Warning systems will evolve over time: new sensor types might be added, old sensors will be replaced and processing components will be improved. From a collection of a few basic services it shall be possible to compose the more complex functionality essential for specific warning systems.
Given these requirements, a flexible infrastructure is a prerequisite for sustainable systems, and their architecture must be tailored for evolution. The use of well-known techniques and widely used open source software implementing industrial standards reduces the impact of service modifications, allowing the evolution of the system as a whole. GITEWS implemented a solution to feed raw sensor data from any (remote) system into the infrastructure. Specific dispatchers enable plugging in sensor-type-specific processing without changing the architecture. Client components do not need to be adjusted when new sensor types or individual sensors are added to the system, because they access them via standardized services. One of the outstanding features of service-oriented architectures is the possibility to compose new services from existing ones. This so-called orchestration allows the definition of new warning processes which can be adapted easily to new requirements. This approach has the following advantages:
• By implementing SWE it is possible to establish the "detection" and integration of sensors via the internet. Thus a system of systems combining early warning functionality at different levels of detail is feasible.
• Any institution could add both its own components and components from third parties, provided they are developed in conformance with SOA principles. In a federation an institution keeps the ownership of its data and decides which data are provided by a service and when.
• A system can be deployed at minor cost as a core for in-house development at any institution, thus enabling autonomous early warning or monitoring systems.
The presentation covers both the design and various instantiations (live demonstration) of the GITEWS architecture. Experiences concerning the design and complexity of SWE will be addressed in detail.
Substantial attention is devoted to the techniques and methods of extending the architecture, adapting proprietary components to SWE services and encodings, and orchestrating them in high-level workflows and processes. Furthermore, the potential of the architecture concerning adaptive behavior, collaboration across boundaries and semantic interoperability will be addressed.
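The dispatcher pattern described above (plugging in sensor-type-specific processing without touching the bus or its clients) can be sketched with a simple registry. Sensor types, parsers and field names are invented examples, not GITEWS code.

```python
# Registry mapping sensor type -> parsing dispatcher. Adding a new sensor
# type means registering one more function; the ingest path is unchanged.
dispatchers = {}

def register(sensor_type):
    """Decorator that plugs a parser into the registry for one sensor type."""
    def wrap(fn):
        dispatchers[sensor_type] = fn
        return fn
    return wrap

@register("seismometer")
def parse_seismic(raw):
    return {"type": "seismometer", "magnitude": float(raw)}

@register("sea_level")
def parse_sea_level(raw):
    return {"type": "sea_level", "height_m": float(raw) / 100.0}  # cm -> m

def ingest(sensor_type, raw):
    """Route a raw reading to the sensor-type-specific dispatcher."""
    return dispatchers[sensor_type](raw)

reading = ingest("sea_level", "250")   # {'type': 'sea_level', 'height_m': 2.5}
```

Clients call only `ingest`, so registering a new sensor type never forces changes on the consuming side, which is the loose-coupling property the abstract emphasizes.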
Space Adaptation of Active Mirror Segment Concepts
NASA Technical Reports Server (NTRS)
Ames, Gregory H.
1999-01-01
This report summarizes the results of a three year effort by Blue Line Engineering Co. to advance the state of segmented mirror systems in several separate but related areas. The initial set of tasks were designed to address the issues of system level architecture, digital processing system, cluster level support structures, and advanced mirror fabrication concepts. Later in the project new tasks were added to provide support to the existing segmented mirror testbed at Marshall Space Flight Center (MSFC) in the form of upgrades to the 36 subaperture wavefront sensor. Still later, tasks were added to build and install a new system processor based on the results of the new system architecture. The project was successful in achieving a number of important results. These include the following most notable accomplishments: 1) The creation of a new modular digital processing system that is extremely capable and may be applied to a wide range of segmented mirror systems as well as many classes of Multiple Input Multiple Output (MIMO) control systems such as active structures or industrial automation. 2) A new graphical user interface was created for operation of segmented mirror systems. 3) The development of a high bit rate serial data loop that permits bi-directional flow of data to and from as many as 39 segments daisy-chained to form a single cluster of segments. 4) Upgrade of the 36 subaperture Hartmann type Wave Front Sensor (WFS) of the Phased Array Mirror, Extendible Large Aperture (PAMELA) testbed at MSFC, resulting in a 40 to 50X improvement in SNR, which in turn enabled NASA personnel to achieve many significant strides in improved closed-loop system operation in 1998. 5) A new system level processor was built and delivered to MSFC for use with the PAMELA testbed. This new system featured a new graphical user interface to replace the obsolete and non-supported menu system originally delivered with the PAMELA system.
The hardware featured Blue Line's new stackable processing system which included fiber optic data links, a WFS digital interface, and a very compact and reliable electronics package. The project also resulted in substantial advances in the evolution of concepts for integrated structures to be used to support clusters of segments while also serving as the means to distribute power, timing, and data communications resources. A prototype cluster base was built and delivered that would support a small array of 7 cm mirror segments. Another conceptual design effort led to substantial progress in the area of laminated silicon mirror segments. While finished mirrors were never successfully produced in this exploratory effort, the basic feasibility of the concept was established through a significant amount of experimental development in microelectronics processing laboratories at the University of Colorado in Colorado Springs. Ultimately lightweighted aluminum mirrors with replicated front surfaces were produced and delivered as part of a separate contract to develop integrated segmented mirror assemblies. Overall the project was very successful in advancing segmented mirror system architectures on several fronts. In fact, the results of this work have already served as the basic foundation for the system architectures of several projects proposed by Blue Line for different missions and customers. These include the NMSD and AMSD procurements for NASA's Next Generation Space Telescope, the HET figure maintenance system, and the 1 meter FAST telescope project.
Tani, Jun; Nishimoto, Ryunosuke; Paine, Rainer W
2008-05-01
The current paper examines how compositional structures can self-organize in given neuro-dynamical systems when robot agents are forced to learn multiple goal-directed behaviors simultaneously. Firstly, we propose a basic model accounting for the roles of parietal-premotor interactions for representing skills for goal-directed behaviors. The basic model had been implemented in a set of robotics experiments employing different neural network architectures. The comparative reviews among those experimental results address the issues of local vs distributed representations in representing behavior and the effectiveness of level structures associated with different sensory-motor articulation mechanisms. It is concluded that the compositional structures can be acquired "organically" by achieving generalization in learning and by capturing the contextual nature of skilled behaviors under specific conditions. Furthermore, the paper discusses possible feedback for empirical neuroscience studies in the future.
Systems Architecture for Fully Autonomous Space Missions
NASA Technical Reports Server (NTRS)
Esper, Jamie; Schnurr, R.; VanSteenberg, M.; Brumfield, Mark (Technical Monitor)
2002-01-01
The NASA Goddard Space Flight Center is working to develop a revolutionary new system architecture concept in support of fully autonomous missions. As part of GSFC's contribution to the New Millennium Program (NMP) Space Technology 7 Autonomy and on-Board Processing (ST7-A) Concept Definition Study, the system incorporates the latest commercial Internet and software development ideas and extends them into NASA ground and space segment architectures. The unique challenges facing the exploration of remote and inaccessible locales and the need to incorporate corresponding autonomy technologies within reasonable cost necessitate the re-thinking of traditional mission architectures. A measure of the resiliency of this architecture in its application to a broad range of future autonomy missions will depend on its effectiveness in leveraging from commercial tools developed for the personal computer and Internet markets. Specialized test stations and supporting software become things of the past as spacecraft take advantage of the extensive tools and research investments of billion-dollar commercial ventures. The projected improvements of the Internet and supporting infrastructure go hand-in-hand with market pressures that provide continuity in research. By taking advantage of consumer-oriented methods and processes, space-flight missions will continue to leverage investments tailored to provide better services at reduced cost. The application of ground and space segment architectures each based on Local Area Networks (LAN), the use of personal computer-based operating systems, and the execution of activities and operations through a Wide Area Network (Internet) enable a revolution in spacecraft mission formulation, implementation, and flight operations. Hardware and software design, development, integration, test, and flight operations are all tied-in closely to a common thread that enables the smooth transitioning between program phases.
The application of commercial software development techniques lays the foundation for delivery of product-oriented flight software modules and models. Software can then be readily applied to support the on-board autonomy required for mission self-management. An on-board intelligent system, based on advanced scripting languages, facilitates the mission autonomy required to offload ground system resources, and enables the spacecraft to manage itself safely through an efficient and effective process of reactive planning, science data acquisition, synthesis, and transmission to the ground. Autonomous ground systems in turn coordinate and support schedule contact times with the spacecraft. Specific autonomy software modules on-board include mission and science planners, instrument and subsystem control, and fault tolerance response software, all residing within a distributed computing environment supported through the flight LAN. Autonomy also requires the minimization of human intervention between users on the ground and the spacecraft, and hence calls for the elimination of the traditional operations control center as a funnel for data manipulation. Basic goal-oriented commands are sent directly from the user to the spacecraft through a distributed internet-based payload operations "center". The ensuing architecture calls for the use of spacecraft as point extensions on the Internet. This paper will detail the system architecture implementation chosen to enable cost-effective autonomous missions with applicability to a broad range of conditions. It will define the structure needed for implementation of such missions, including software and hardware infrastructures. The overall architecture is then laid out as a common thread in the mission life cycle from formulation through implementation and flight operations.
Computer Architecture for Energy Efficient SFQ
2014-08-27
IBM Corporation (T.J. Watson Research Laboratory), 1101 Kitchawan Road, Yorktown Heights, NY 10598. This ARO-sponsored project at IBM Research identified and modeled an energy-efficient SFQ-based computer architecture, the IBM Windsor Blue (WB). The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit...
Next Generation Remote Agent Planner
NASA Technical Reports Server (NTRS)
Jonsson, Ari K.; Muscettola, Nicola; Morris, Paul H.; Rajan, Kanna
1999-01-01
In May 1999, as part of a unique technology validation experiment onboard the Deep Space One spacecraft, the Remote Agent became the first complete autonomous spacecraft control architecture to run as flight software onboard an active spacecraft. As one of the three components of the architecture, the Remote Agent Planner had the task of laying out the course of action to be taken, which included activities such as turning, thrusting, data gathering, and communicating. Building on the successful approach developed for the Remote Agent Planner, the Next Generation Remote Agent Planner is a completely redesigned and reimplemented version of the planner. The new system provides all the key capabilities of the original planner, while adding functionality, improving performance and providing a modular and extendible implementation. The goal of this ongoing project is to develop a system that provides both a basis for future applications and a framework for further research in the area of autonomous planning for spacecraft. In this article, we present an introductory overview of the Next Generation Remote Agent Planner. We present a new and simplified definition of the planning problem, describe the basics of the planning process, lay out the new system design and examine the functionality of the core reasoning module.
NASA Astrophysics Data System (ADS)
Bostwick, Todd W.
The Hohokam culture, one of the major pre-Columbian cultural groups in the American Southwest, is well known for its extensive irrigation systems, the largest in the New World. Choreographing the movement of people and scheduling the cleaning and repair of their canals during low water periods, as well as harvesting their bountiful crops during two growing seasons, would have required a calendar system that reflected the natural cycles of the Sonoran Desert. In addition, orienting their ritual architecture and public spaces such as ball courts, platform mounds, and plazas according to the cardinal directions would have required knowledge of the sun's daily and annual movement through the sky. This chapter describes archaeological evidence at Hohokam sites for marking the sun's cycles, especially during the solstices and equinoxes, with rock art and adobe architecture. Several locations are identified in the Phoenix region of Arizona, including mountains and prominent rock formations, where the solstices and equinoxes could be tracked through horizon alignments during sunrise and sunset and by light-and-shadow patterns during midday on those solar events. Several Hohokam villages also are described where ritual space was oriented according to basic cardinal directions.
Energy storage requirements of dc microgrids with high penetration renewables under droop control
Weaver, Wayne W.; Robinett, Rush D.; Parker, Gordon G.; ...
2015-01-09
Energy storage is an important design component in microgrids with high-penetration renewable sources, needed to maintain the system against the highly variable and sometimes stochastic nature of the sources. Storage devices can be distributed close to the sources and/or placed at the microgrid bus. Storage requirements can be minimized with a centralized control architecture, but this creates a single point of failure. Distributed droop control enables a completely decentralized architecture, but the energy storage optimization becomes more difficult. Our paper presents an approach to droop control that enables the local and bus storage requirements to be determined. Given a priori knowledge of the design structure of a microgrid and the basic cycles of the renewable sources, we found droop settings of the sources that minimize both the bus voltage variations and the overall energy storage capacity required in the system. This approach can be used in the design phase of a microgrid with a decentralized control structure to determine appropriate droop settings as well as the sizing of energy storage devices.
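To make the droop-control idea above concrete, here is a minimal sketch under a simplified voltage-power droop law, V_i = V0_i - m_i * P_i (the paper's formulation is more elaborate; the numbers and the `droop_share` function are assumptions for illustration). With all sources tied to one bus, the load power divides in inverse proportion to the droop slopes, with no central coordinator:

```python
# Minimal droop-sharing sketch: each source i sets V_i = V0_i - m_i * P_i.
# On a common bus V_i = V_bus for all i, so summing (V0_i - V_bus)/m_i = P_load
# gives the bus voltage and each source's power contribution in closed form.
def droop_share(v0, slopes, p_load):
    """Solve the common bus voltage and per-source powers."""
    inv = [1.0 / m for m in slopes]
    v_bus = (sum(v / m for v, m in zip(v0, slopes)) - p_load) / sum(inv)
    powers = [(v - v_bus) / m for v, m in zip(v0, slopes)]
    return v_bus, powers

v_bus, powers = droop_share(v0=[48.0, 48.0], slopes=[0.5, 1.0], p_load=9.0)
print(round(v_bus, 2), [round(p, 2) for p in powers])  # → 45.0 [6.0, 3.0]
```

Note how the source with the shallower slope (0.5) picks up twice the power of the steeper one, which is the lever the paper tunes to trade bus voltage variation against storage capacity.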
Hao, Shuxin; Lü, Yiran; Liu, Jie; Liu, Yue; Xu, Dongqun
2018-01-01
To study the application of classified protection of information security in the information system for air pollution and health impact monitoring, and thereby address the system's potential security risks. In accordance with the relevant national standards and requirements for information system security classified protection, and with the professional characteristics of the information system, the security architecture of the system was designed and implemented, and the protection level of the information system was determined. Basic security measures for the information system were developed in both the technical safety and management safety aspects according to the protection levels, which effectively mitigated the security risks of the information system. The information system thus established relatively complete information security protection measures, enhancing the security of professional information and system services and ensuring that the air pollution and health impact monitoring project is carried out smoothly.
Strategy Revealing Phenotypic Differences among Synthetic Oscillator Designs
2015-01-01
Considerable progress has been made in identifying and characterizing the component parts of genetic oscillators, which play central roles in all organisms. Nonlinear interaction among components is sufficiently complex that mathematical models are required to elucidate their elusive integrated behavior. Although natural and synthetic oscillators exhibit common architectures, there are numerous differences that are poorly understood. Utilizing synthetic biology to uncover basic principles of simpler circuits is a way to advance understanding of natural circadian clocks and rhythms. Following this strategy, we address the following questions: What are the implications of different architectures and molecular modes of transcriptional control for the phenotypic repertoire of genetic oscillators? Are there designs that are more realizable or robust? We compare synthetic oscillators involving one of three architectures and various combinations of the two modes of transcriptional control using a methodology that provides three innovations: a rigorous definition of phenotype, a procedure for deconstructing complex systems into qualitatively distinct phenotypes, and a graphical representation for illuminating the relationship between genotype, environment, and the qualitatively distinct phenotypes of a system. These methods provide a global perspective on the behavioral repertoire, facilitate comparisons of alternatives, and assist the rational design of synthetic gene circuitry. In particular, the results of their application here reveal distinctive phenotypes for several designs that have been studied experimentally as well as a best design among the alternatives that has yet to be constructed and tested. PMID:25019938
Oh, Sungyoung; Cha, Jieun; Ji, Myungkyu; Kang, Hyekyung; Kim, Seok; Heo, Eunyoung; Han, Jong Soo; Kang, Hyunggoo; Chae, Hoseok; Hwang, Hee
2015-01-01
Objectives To design a cloud computing-based Healthcare Software-as-a-Service (SaaS) Platform (HSP) for delivering healthcare information services with low cost, high clinical value, and high usability. Methods We analyzed the architecture requirements of an HSP, including the interface, business services, cloud SaaS, quality attributes, privacy and security, and multi-lingual capacity. For cloud-based SaaS services, we focused on Clinical Decision Service (CDS) content services, basic functional services, and mobile services. Microsoft's Azure cloud computing for Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) was used. Results The functional and software views of an HSP were designed in a layered architecture. External systems can be interfaced with the HSP using SOAP and REST/JSON. The multi-tenancy model of the HSP was designed as a shared database, with a separate schema for each tenant through a single application, although healthcare data can be physically located on a cloud or in a hospital, depending on regulations. The CDS services were categorized into rule-based services for medications, alert registration services, and knowledge services. Conclusions We expect that cloud-based HSPs will allow small and mid-sized hospitals, in addition to large-sized hospitals, to adopt information infrastructures and health information technology with low system operation and maintenance costs. PMID:25995962
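The multi-tenancy model described above — a shared database with a separate schema per tenant, behind a single application — can be sketched as a simple schema-routing layer. The tenant and table names below are illustrative assumptions, not the HSP's actual identifiers:

```python
# Sketch of shared-database / separate-schema multi-tenancy: one
# application serves all tenants, and each query is routed to the
# tenant's own schema so data stays logically isolated.
def qualified_table(tenant_id: str, table: str) -> str:
    schema = f"tenant_{tenant_id}"          # e.g. schema "tenant_hospital_a"
    return f"{schema}.{table}"

def build_query(tenant_id: str, table: str) -> str:
    return f"SELECT * FROM {qualified_table(tenant_id, table)}"

print(build_query("hospital_a", "cds_alerts"))
# → SELECT * FROM tenant_hospital_a.cds_alerts
```

This layout is what lets the platform keep one codebase while regulations decide whether a given tenant's schema physically lives in the cloud or in the hospital.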
Sengupta, Abhronil; Shim, Yong; Roy, Kaushik
2016-12-01
Non-Boolean computing based on emerging post-CMOS technologies can potentially pave the way for low-power neural computing platforms. However, existing work on such emerging neuromorphic architectures has focused on mimicking either the neuron or the synapse functionality alone. While memristive devices have been proposed to emulate biological synapses, spintronic devices have proved to be efficient at performing the thresholding operation of the neuron at ultra-low currents. In this work, we propose an All-Spin Artificial Neural Network where a single spintronic device acts as the basic building block of the system. The device offers a direct mapping to synapse and neuron functionalities in the brain, while inter-layer network communication is accomplished via CMOS transistors. To the best of our knowledge, this is the first demonstration of a neural architecture where a single nanoelectronic device is able to mimic both neurons and synapses. The ultra-low voltage operation of low-resistance magneto-metallic neurons enables the low-voltage operation of the array of spintronic synapses, thereby leading to ultra-low power neural architectures. Device-level simulations, calibrated to experimental results, were used to drive the circuit- and system-level simulations of the neural network for a standard pattern recognition problem. Simulation studies indicate energy savings of ∼ 100× in comparison to a corresponding digital/analog CMOS neuron implementation.
Venus Express Chemical Propulsion System - The Mars Express Legacy
NASA Astrophysics Data System (ADS)
Hunter, C. J.
2004-10-01
ESA's ambition of inter-planetary exploration using a fast-track, low-cost industrial programme was well achieved with Mars Express. Reusing the platform architecture for the service module, and specifically the propulsion system, enabled Venus Express to benefit from several lessons learnt from the Mars Express experience. Using only existing components qualified for previous programmes, many of them commercial telecommunication spacecraft programmes with components available from stock, an industrial organisation familiar from Mars Express was able to compress the schedule to make the November 2005 launch window a realistic target. While initial inspection of the CPS schematic indicates a modified Eurostar-type architecture, a fairer description would be a similar system using some Eurostar components. The use of many parts of the system on arrival at the destination (Mars or Venus in this case) is a departure from the usual mode of operation, where many components are used only during the initial few weeks of GTO or GEO. The system modifications over the basic Eurostar system have catered for this in terms of reliability contingencies by replacing components, or by providing different levels of test capability or isolation in flight. This paper aims to provide an introduction to the system, address the evolution from Eurostar, and provide an initial assessment of the success of these modifications using the Mars Express experience, and of how measures have been adopted specifically for Venus Express.
NASA Technical Reports Server (NTRS)
Myers, Thomas T.; Mcruer, Duane T.
1988-01-01
The development of a comprehensive and eclectic methodology for conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. The FCX expert system as presently developed is only a limited prototype capable of supporting basic lateral-directional FCS design activities related to the design example used. FCX presently supports design of only one FCS architecture (yaw damper plus roll damper), and the rules are largely focused on Class IV (highly maneuverable) aircraft. Despite this limited scope, the major elements which appear necessary for application of knowledge-based software concepts to flight control design were assembled, and thus FCX represents a prototype which can be tested, critiqued, and evolved in an ongoing process of development.
Candidate Mission from Planet Earth control and data delivery system architecture
NASA Technical Reports Server (NTRS)
Shapiro, Phillip; Weinstein, Frank C.; Hei, Donald J., Jr.; Todd, Jacqueline
1992-01-01
Using a structured, experience-based approach, Goddard Space Flight Center (GSFC) has assessed the generic functional requirements for a lunar mission control and data delivery (CDD) system. This analysis was based on lunar mission requirements outlined in GSFC-developed user traffic models. The CDD system will facilitate data transportation among user elements, element operations, and user teams by providing functions such as data management, fault isolation, fault correction, and link acquisition. The CDD system for the lunar missions must not only satisfy lunar requirements but also facilitate and provide early development of data system technologies for Mars. Reuse and evolution of existing data systems can help to maximize system reliability and minimize cost. This paper presents a set of existing and currently planned NASA data systems that provide the basic functionality. Reuse of such systems can have an impact on mission design and significantly reduce CDD and other system development costs.
The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform
NASA Astrophysics Data System (ADS)
Xie, Qingyun
2016-06-01
This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.
NASA Astrophysics Data System (ADS)
Qianyi, Zhang; Xiaoshun, Li; Ping, Hu; Lu, Ning
2018-03-01
With the promotion of the "3+1" undergraduate training mode at Beijing University of Agriculture, the mode and direction of training applied and compound talents should be further clarified. At the same time, to make up for the shortage of double-qualified teachers in the school and the lack of teaching cases that cover advanced industry technology, the school actively encourages cooperation between its teaching units and enterprises, closely connecting enterprise resources with the school's teaching system and using the "1" in "3+1" to carry out innovative training work for students. This method helps college students integrate theory into practice and realize the purpose of applying knowledge in higher education. However, in actual student training management this kind of cooperation involves three parties and their personnel, so it is difficult to form unified management; poor communication can also lead to unsatisfactory training results. Moreover, without a good training supervision mechanism, the student training work remains ill-defined. To solve these problems, this paper designs a training management system for student innovation and entrepreneurship based on school-enterprise cooperation; the system can effectively manage the relevant student training work and address the problems above. The work is based on the innovation and entrepreneurship training carried out in the School of Computer and Information Engineering of Beijing University of Agriculture.
The system software architecture is designed using B/S (browser/server) architecture technology, and the system is divided into three layers. The application logic layer includes the business of student training management and realizes the user's basic operations for student training: through the system, users can manage the basic information of enterprises, colleges, and students, as well as perform the information operations of student training management [1]. The data layer creates database applications through MySQL and provides data storage for the whole system.
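A data layer for three cooperating parties (students, colleges, enterprises) might look like the following sketch. The paper's system uses MySQL; sqlite3 stands in here so the example is self-contained, and all table and column names are assumptions rather than the paper's actual schema:

```python
import sqlite3

# Illustrative three-party schema for the training-management data layer:
# a training record links one student (who belongs to a college) to the
# enterprise hosting the training, so all three parties share one view.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE enterprise (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE college    (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE student    (id INTEGER PRIMARY KEY, name TEXT,
                         college_id INTEGER REFERENCES college(id));
CREATE TABLE training   (id INTEGER PRIMARY KEY,
                         student_id INTEGER REFERENCES student(id),
                         enterprise_id INTEGER REFERENCES enterprise(id),
                         status TEXT);  -- e.g. 'planned', 'ongoing', 'done'
""")
conn.execute("INSERT INTO college VALUES (1, 'Computer and Information Engineering')")
conn.execute("INSERT INTO enterprise VALUES (1, 'Partner Co.')")
conn.execute("INSERT INTO student VALUES (1, 'Li', 1)")
conn.execute("INSERT INTO training VALUES (1, 1, 1, 'ongoing')")
row = conn.execute("""
    SELECT s.name, e.name, t.status
    FROM training t JOIN student s ON s.id = t.student_id
                    JOIN enterprise e ON e.id = t.enterprise_id
""").fetchone()
print(row)  # → ('Li', 'Partner Co.', 'ongoing')
```

Keeping the training record as the join point is one way to give each party a unified view and support the supervision mechanism the paper calls for.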
Pagès, Loïc; Picon-Cochard, Catherine
2014-10-01
Our objective was to calibrate a model of the root system architecture on several Poaceae species and to assess its value to simulate several 'integrated' traits measured at the root system level: specific root length (SRL), maximum root depth and root mass. We used the model ArchiSimple, made up of sub-models that represent and combine the basic developmental processes, and an experiment on 13 perennial grassland Poaceae species grown in 1.5-m-deep containers and sampled at two different dates after planting (80 and 120 d). Model parameters were estimated almost independently using small samples of the root systems taken at both dates. The relationships obtained for calibration validated the sub-models, and showed species effects on the parameter values. The simulations of integrated traits were relatively correct for SRL and were good for root depth and root mass at the two dates. We obtained some systematic discrepancies that were related to the slight decline of root growth in the last period of the experiment. Because the model allowed correct predictions on a large set of Poaceae species without global fitting, we consider that it is a suitable tool for linking root traits at different organisation levels. © 2014 INRA. New Phytologist © 2014 New Phytologist Trust.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peglow, S
2004-02-24
The purpose of this project was twofold: first, provide an understanding of the technical foundation and planning required for deployment of Intelligent Transportation System (ITS)-based system architectures for the protection of New York City from a terrorist attack using a vehicle-deployed nuclear device; second, work with stakeholders to develop mutual understanding of the technologies and tactics required for threat detection/identification and establish guidelines for designing operational systems and procedures. During the course of this project we interviewed and coordinated analysis with people from the New Jersey State Attorney General's office, the New Jersey State Police, the Port Authority of New York/New Jersey, the Counterterrorism Division of the New York City Police Department, the New Jersey Transit Authority, the State of New Jersey Department of Transportation, TRANSCOM, and a number of contractors involved with state and federal intelligent transportation development and implementation. The basic system architecture is shown in the figure below. In an actual system deployment, radiation sensors would be co-located with existing ITS elements and the data would be sent to the Traffic Operations Center. A key element of successful system operation is the integration of vehicle data, such as license plate, EZ-Pass ID, vehicle type/color, and radiation signature. A threat database can also be implemented and utilized in cases where there is a suspect vehicle identified from other intelligence sources or a mobile detector system. Another key aspect of an operational architecture is the procedures used to verify the threat and plan interdiction. This was a major focus of our work and is discussed later in detail. In support of the operational analysis, we developed a detailed traffic simulation model that is described extensively in the body of the report.
JCMT observatory control system
NASA Astrophysics Data System (ADS)
Rees, Nicholas P.; Economou, Frossie; Jenness, Tim; Kackley, Russell D.; Walther, Craig A.; Dent, William R. F.; Folger, Martin; Gao, Xiaofeng; Kelly, Dennis; Lightfoot, John F.; Pain, Ian; Hovey, Gary J.; Redman, Russell O.
2002-12-01
The JCMT, the world's largest sub-mm telescope, has had essentially the same VAX/VMS based control system since it was commissioned. For the next generation of instrumentation we are implementing a new Unix/VxWorks based system, based on the successful ORAC system that was recently released on UKIRT. The system is now entering the integration and testing phase. This paper gives a broad overview of the system architecture and includes some discussion on the choices made. (Other papers in this conference cover some areas in more detail). The basic philosophy is to control the sub-systems with a small and simple set of commands, but passing detailed XML configuration descriptions along with the commands to give the flexibility required. The XML files can be passed between various layers in the system without interpretation, and so simplify the design enormously. This has all been made possible by the adoption of an Observation Preparation Tool, which essentially serves as an intelligent XML editor.
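The "small command set plus opaque XML configuration" pattern described above can be sketched in a few lines: intermediate layers forward the XML untouched, and only the final subsystem parses it. The command name, XML elements, and instrument name below are illustrative assumptions, not the actual JCMT interfaces:

```python
import xml.etree.ElementTree as ET

# An XML configuration travelling alongside a simple command. The middle
# layer does not interpret the XML at all; it just passes it through,
# which is what keeps the layered design simple.
CONFIG = "<obs><instrument>SCUBA-2</instrument><integration>30</integration></obs>"

def middle_layer(command: str, xml_config: str):
    # forwards the configuration without interpreting it
    return subsystem(command, xml_config)

def subsystem(command: str, xml_config: str):
    cfg = ET.fromstring(xml_config)  # only here is the XML parsed
    return command, cfg.findtext("instrument"), int(cfg.findtext("integration"))

print(middle_layer("CONFIGURE", CONFIG))  # → ('CONFIGURE', 'SCUBA-2', 30)
```

Because the middle layer never parses the payload, new configuration fields can be added without touching any layer except the producer (the preparation tool) and the consumer (the subsystem).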
Government Open Systems Interconnection Profile (GOSIP) transition strategy
NASA Astrophysics Data System (ADS)
Laxen, Mark R.
1993-09-01
This thesis analyzes the Government Open Systems Interconnection Profile (GOSIP) and the requirements of the Federal Information Processing Standard (FIPS) Publication 146-1. It begins by examining the International Organization for Standardization (ISO) Open Systems Interconnection (OSI) architecture and protocol suites and the distinctions between GOSIP version one and two. Additionally, it explores some of the GOSIP protocol details and discusses the process by which standards organizations have developed their recommendations. Implementation considerations from both government and vendor perspectives illustrate the barriers and requirements faced by information systems managers, as well as basic transition strategies. The result of this thesis is to show a transition strategy through an extended and coordinated period of coexistence due to extensive legacy systems and GOSIP product unavailability. Recommendations for GOSIP protocol standards to include capabilities outside the OSI model are also presented.
Recent Trends in Spintronics-Based Nanomagnetic Logic
NASA Astrophysics Data System (ADS)
Das, Jayita; Alam, Syed M.; Bhanja, Sanjukta
2014-09-01
With growing concerns about standby power in sub-100-nm CMOS technologies, alternative computing techniques and memory technologies are being explored. Spin-transfer-torque magnetoresistive RAM (STT-MRAM) is one such nonvolatile memory, relying on magnetic tunnel junctions (MTJs) to store information: it uses spin transfer torque to write information and magnetoresistance to read it. In 2012, Everspin Technologies, Inc. commercialized the first 64-Mbit spin-torque MRAM. On the computing end, nanomagnetic logic (NML) is a promising technique with zero leakage and high data retention. In 2000, Cowburn and Welland first demonstrated its potential in logic and information propagation through magnetostatic interaction in a chain of single-domain circular nanomagnetic dots of Supermalloy (Ni80Fe14Mo5X1, where X is other metals). In 2006, Imre et al. demonstrated wires and majority gates, followed by a demonstration of coplanar cross-wire systems in 2010 by Pulecio et al. Since 2004, researchers have also investigated the potential of MTJs in logic. More recently, with dipolar coupling between MTJs demonstrated in 2012, logic-in-memory architectures with STT-MRAM have been investigated. The architecture borrows its computing concept from NML and its read and write style from MRAM, and can switch its operation between logic and memory modes with the clock as classifier. Further, through logic partitioning between the MTJ and CMOS planes, a significant performance boost has been observed in basic computing blocks within the architecture. In this work, we explore the developments in NML and in MTJs, and the more recent developments in the hybrid MTJ/CMOS logic-in-memory architecture and its unique logic partitioning capability.
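The majority gate mentioned above is the workhorse of NML: the output magnetization follows the majority of its three inputs, and fixing one input reduces the gate to AND or OR. The logical behavior (though of course not the magnetostatics) can be sketched directly:

```python
# Three-input majority gate, the basic NML computing element: the output
# follows whichever value at least two of the three inputs share.
def majority(a: int, b: int, c: int) -> int:
    return 1 if a + b + c >= 2 else 0

# Pinning one input to a constant specializes the gate:
def AND(a, b): return majority(a, b, 0)   # majority with a fixed 0
def OR(a, b):  return majority(a, b, 1)   # majority with a fixed 1

print(majority(1, 0, 1), AND(1, 0), OR(1, 0))  # → 1 0 1
```

This programmability of one input is why a single physical gate layout can serve as AND, OR, or full majority depending on how it is driven.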
Modularity in developmental biology and artificial organs: a missing concept in tissue engineering.
Lenas, Petros; Luyten, Frank P; Doblare, Manuel; Nicodemou-Lena, Eleni; Lanzara, Andreina Elena
2011-06-01
Tissue engineering is reviving itself, adopting the concept of biomimetics of in vivo tissue development. A basic concept of developmental biology is the modularity of the tissue architecture according to which intermediates in tissue development constitute semiautonomous entities. Both engineering and nature have chosen the modular architecture to optimize the product or organism development and evolution. Bioartificial tissues do not have a modular architecture. On the contrary, artificial organs of modular architecture have been already developed in the field of artificial organs. Therefore the conceptual support of tissue engineering by the field of artificial organs becomes critical in its new endeavor of recapitulating in vitro the in vivo tissue development. © 2011, Copyright the Authors. Artificial Organs © 2011, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Conceptual Launch Vehicle and Spacecraft Design for Risk Assessment
NASA Technical Reports Server (NTRS)
Motiwala, Samira A.; Mathias, Donovan L.; Mattenberger, Christopher J.
2014-01-01
One of the most challenging aspects of developing human space launch and exploration systems is minimizing and mitigating the many potential risk factors to ensure the safest possible design while also meeting the required cost, weight, and performance criteria. In order to accomplish this, effective risk analyses and trade studies are needed to identify key risk drivers, dependencies, and sensitivities as the design evolves. The Engineering Risk Assessment (ERA) team at NASA Ames Research Center (ARC) develops advanced risk analysis approaches, models, and tools to provide such meaningful risk and reliability data throughout vehicle development. The goal of the project presented in this memorandum is to design a generic launch vehicle and spacecraft architecture that can be used to develop and demonstrate these new risk analysis techniques without relying on other proprietary or sensitive vehicle designs. To accomplish this, initial spacecraft and launch vehicle (LV) designs were established using historical sizing relationships for a mission delivering four crewmembers and equipment to the International Space Station (ISS). Mass-estimating relationships (MERs) were used to size the crew capsule and launch vehicle, and a combination of optimization techniques and iterative design processes were employed to determine a possible two-stage-to-orbit (TSTO) launch trajectory into a 350-kilometer orbit. Primary subsystems were also designed for the crewed capsule architecture, based on a 24-hour on-orbit mission with a 7-day contingency. Safety analysis was also performed to identify major risks to crew survivability and to assess the system's overall reliability. These procedures and analyses validate that the architecture's basic design and performance are reasonable to be used for risk trade studies.
While the vehicle designs presented are not intended to represent a viable architecture, they will provide a valuable initial platform for developing and demonstrating innovative risk assessment capabilities.
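The iterative MER-based sizing mentioned above can be illustrated with a heavily simplified sketch: scale stage propellant loads, with structure tied to propellant by a fixed fraction, until the two-stage rocket equation meets a target delta-v. All numbers and the uniform scaling rule are assumptions for illustration, not the memorandum's actual relationships:

```python
import math

# Toy TSTO sizing loop: structure mass is derived from propellant load
# via a fixed structural fraction (a crude stand-in for MERs), and the
# rocket equation is summed over both stages, upper stage first.
G0 = 9.80665          # m/s^2
DV_TARGET = 9300.0    # m/s, representative LEO delta-v incl. losses (assumed)
ISP = (300.0, 350.0)  # s, lower/upper stage specific impulses (assumed)
STRUCT = 0.08         # structural mass fraction per stage (assumed)
PAYLOAD = 8000.0      # kg, crew capsule mass (assumed)

def delta_v(prop):
    """Total delta-v for stage propellant masses prop = (lower, upper)."""
    dv, above = 0.0, PAYLOAD
    for isp, p in zip(reversed(ISP), reversed(prop)):
        dry = STRUCT * p / (1 - STRUCT)        # stage structure from prop load
        m0, mf = above + p + dry, above + dry  # ignition / burnout masses
        dv += isp * G0 * math.log(m0 / mf)
        above = m0                             # full stage becomes next payload
    return dv

# iterative sizing: grow propellant 1% per pass until the target dv is met
prop = [50000.0, 15000.0]
while delta_v(prop) < DV_TARGET:
    prop = [p * 1.01 for p in prop]
print(delta_v(prop) >= DV_TARGET)  # → True
```

A real sizing loop would also update subsystem masses from the MERs each pass and optimize the stage split rather than scaling both stages uniformly, but the convergence structure is the same.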
Cellular automata simulation of topological effects on the dynamics of feed-forward motifs
Apte, Advait A; Cain, John W; Bonchev, Danail G; Fong, Stephen S
2008-01-01
Background Feed-forward motifs are important functional modules in biological and other complex networks. The functionality of feed-forward motifs and other network motifs is largely dictated by the connectivity of the individual network components. While studies on the dynamics of motifs and networks are usually devoted to the temporal or spatial description of processes, this study focuses on the relationship between the specific architecture and the overall rate of the processes of the feed-forward family of motifs, including double and triple feed-forward loops. The search for the most efficient network architecture could be of particular interest for regulatory or signaling pathways in biology, as well as in computational and communication systems. Results Feed-forward motif dynamics were studied using cellular automata and compared with differential equation modeling. The number of cellular automata iterations needed for a 100% conversion of a substrate into a target product was used as an inverse measure of the transformation rate. Several basic topological patterns were identified that order the specific feed-forward constructions according to the rate of dynamics they enable. At the same number of network nodes and constant other parameters, the bi-parallel and tri-parallel motifs provide higher network efficacy than single feed-forward motifs. Additionally, a topological property of isodynamicity was identified for feed-forward motifs where different network architectures resulted in the same overall rate of the target production. Conclusion It was shown for classes of structural motifs with feed-forward architecture that network topology affects the overall rate of a process in a quantitatively predictable manner. 
These fundamental results can be used as a basis for simulating larger networks as combinations of smaller network modules with implications on studying synthetic gene circuits, small regulatory systems, and eventually dynamic whole-cell models. PMID:18304325
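The effect of motif wiring on conversion time can be illustrated with a toy deterministic stand-in for the cellular-automata experiment above (the paper's CA is stochastic; this sketch, with its assumed fill rate and node names, only shows how adding the feed-forward edge shortens the time to full conversion):

```python
# Each node fills by `rate` per step once any upstream node has started
# producing; we count steps until the target node is fully converted.
def iterations_to_full(edges, nodes, source, target, rate=10, full=100):
    level = {n: 0 for n in nodes}
    level[source] = full
    steps = 0
    while level[target] < full:
        nxt = dict(level)
        for n in nodes:
            feeders = [u for u, v in edges if v == n]
            if feeders and any(level[u] > 0 for u in feeders):
                nxt[n] = min(full, level[n] + rate)
        level = nxt
        steps += 1
    return steps

chain = [("A", "B"), ("B", "C")]             # simple cascade A -> B -> C
ffl = [("A", "B"), ("B", "C"), ("A", "C")]   # feed-forward loop adds A -> C
print(iterations_to_full(chain, "ABC", "A", "C"),
      iterations_to_full(ffl, "ABC", "A", "C"))  # → 11 10
```

The direct A→C edge lets C start filling one step earlier, a miniature of the paper's finding that topology orders motifs by the overall rate they enable.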
Martínez-de la Cruz, Enrique; García-Ramírez, Elpidio; Vázquez-Ramos, Jorge M; Reyes de la Cruz, Homero; López-Bucio, José
2015-03-15
Maize (Zea mays) root system architecture has a complex organization, with adventitious and lateral roots determining its overall absorptive capacity. To generate basic information about the earlier stages of root development, we compared the post-embryonic growth of maize seedlings germinated in water-embedded cotton beds with that of plants obtained from embryonic axes cultivated in liquid medium. In addition, the effect of four different auxins, namely indole-3-acetic acid (IAA), 1-naphthaleneacetic acid (NAA), indole-3-butyric acid (IBA) and 2,4-dichlorophenoxyacetic acid (2,4-D) on root architecture and levels of the heat shock protein HSP101 and the cell cycle proteins CKS1, CYCA1 and CDKA1 were analyzed. Our data show that during the first days after germination, maize seedlings develop several root types with a simultaneous and/or continuous growth. The post-embryonic root development started with the formation of the primary root (PR) and seminal scutellar roots (SSR) and then continued with the formation of adventitious crown roots (CR), brace roots (BR) and lateral roots (LR). Auxins affected root architecture in a dose-response fashion; whereas NAA and IBA mostly stimulated crown root formation, 2,4-D showed a strong repressing effect on growth. The levels of HSP101, CKS1, CYCA1 and CDKA in root and leaf tissues were differentially affected by auxins and interestingly, HSP101 registered an auxin-inducible and root specific expression pattern. Taken together, our results show the timing of early branching patterns of maize and indicate that auxins regulate root development likely through modulation of the HSP101 and cell cycle proteins. Copyright © 2014 Elsevier GmbH. All rights reserved.
The Best of Both Worlds: Developing a Hybrid Data System for the ASF DAAC
NASA Astrophysics Data System (ADS)
Arko, S. A.; Buechler, B.; Wolf, V. G.
2017-12-01
The Alaska Satellite Facility (ASF) at the University of Alaska Fairbanks hosts the NASA Distributed Active Archive Center (DAAC) specializing in synthetic aperture radar (SAR). Historically, the ASF DAAC has hosted hardware on-premises and developed DAAC-specific software to operate, manage, and maintain the DAAC data system. In the past year, ASF DAAC has been moving many of the standard DAAC operations into the Amazon Web Services (AWS) cloud. This includes data ingest, basic pre-processing, archiving, and distribution within the AWS environment. While the cloud offers nearly unbounded capacity for expansion and a great host of services, there can also be unexpected and unplanned costs. Additionally, these costs can be difficult to forecast even with historic data usage patterns and models for future usage. In an effort to maximize the effectiveness of the DAAC data system, while still managing and accurately forecasting costs, ASF DAAC has developed a hybrid, cloud and on-premises, data system. The goal of this project is to make extensive use of the AWS cloud and, when appropriate, utilize on-premises resources to help constrain costs. This hybrid system attempts to mimic a cloud environment on-premises using Kubernetes container orchestration so that software can be run in either location with little change. Combined with a hybrid data storage architecture, the new data system makes use of the great capacity of the cloud while maintaining an on-premises option. This presentation will describe the development of the hybrid data system, including the micro-services architecture and design, the container orchestration, and hybrid storage. Additionally, we will highlight the lessons learned through the development process, cost forecasting for current and future SAR-mission operations, and provide a discussion of the pros and cons of hybrid architectures versus all-cloud deployments.
This development effort has led to a system that is capable and flexible for the future while allowing ASF DAAC to continue supporting the SAR community with the highest level of services.
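The cost-driven placement decision behind such a hybrid design can be sketched as a simple break-even model. Every rate below is a hypothetical placeholder, not ASF's or AWS's actual pricing; the point is only that cloud cost scales with usage (especially egress) while on-premises cost is dominated by a fixed amortized term.

```python
def monthly_cost_cloud(storage_tb, egress_tb, compute_hours,
                       storage_rate=23.0, egress_rate=90.0, compute_rate=0.10):
    """Hypothetical cloud monthly cost in USD; all rates are placeholders."""
    return storage_tb * storage_rate + egress_tb * egress_rate + compute_hours * compute_rate

def monthly_cost_onprem(storage_tb, compute_hours,
                        amortized_hw=5000.0, storage_rate=5.0, compute_rate=0.02):
    """Hypothetical on-premises cost: fixed amortized hardware plus small marginal rates."""
    return amortized_hw + storage_tb * storage_rate + compute_hours * compute_rate

def cheaper_placement(storage_tb, egress_tb, compute_hours):
    """Pick the cheaper location for a workload under the toy model."""
    cloud = monthly_cost_cloud(storage_tb, egress_tb, compute_hours)
    onprem = monthly_cost_onprem(storage_tb, compute_hours)
    return "cloud" if cloud < onprem else "on-premises"
```

Under these placeholder rates, a small bursty workload lands in the cloud while a large steady, egress-heavy workload lands on-premises, which is the trade-off the presentation describes.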
INTEGRATED MONITORING HARDWARE DEVELOPMENTS AT LOS ALAMOS
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. PARKER; J. HALBIG; ET AL
1999-09-01
The hardware of the integrated monitoring system supports a family of instruments having a common internal architecture and firmware. Instruments can be easily configured from application-specific personality boards combined with common master-processor and high- and low-voltage power supply boards, and basic operating firmware. The instruments are designed to function autonomously to survive power and communication outages and to adapt to changing conditions. The personality boards allow measurement of gross gammas and neutrons, neutron coincidence and multiplicity, and gamma spectra. In addition, the Intelligent Local Node (ILON) provides a moderate-bandwidth network to tie together instruments, sensors, and computers.
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
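CARE's actual equation repository is not reproduced in this abstract, but the flavor of the redundancy models it interrelates can be sketched. For example, the textbook reliability of an N-modular redundant (NMR) system with majority voting, assuming independent identical modules and a perfect voter:

```python
from math import comb

def r_nmr(r, n):
    """Reliability of an n-modular redundant system with majority voting:
    at least (n // 2 + 1) of n modules, each with reliability r, must
    work. The voter is assumed perfect -- a standard simplification."""
    k = n // 2 + 1
    return sum(comb(n, m) * r**m * (1 - r)**(n - m) for m in range(k, n + 1))
```

For n = 3 this reduces to the familiar TMR formula 3r² − 2r³; note that redundancy only helps when the module reliability exceeds 0.5.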
Strogatz, S H
2001-03-08
The study of networks pervades all of science, from neurobiology to statistical physics. The most basic issues are structural: how does one characterize the wiring diagram of a food web or the Internet or the metabolic network of the bacterium Escherichia coli? Are there any unifying principles underlying their topology? From the perspective of nonlinear dynamics, we would also like to understand how an enormous network of interacting dynamical systems-be they neurons, power stations or lasers-will behave collectively, given their individual dynamics and coupling architecture. Researchers are only now beginning to unravel the structure and dynamics of complex networks.
Hydrogen isotope exchange in a metal hydride tube
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, David B.
2014-09-01
This report describes a model of the displacement of one hydrogen isotope within a metal hydride tube by a different isotope in the gas phase that is blown through the tube. The model incorporates only the most basic parameters to make a clear connection to the theory of open-tube gas chromatography, and to provide a simple description of how the behavior of the system scales with controllable parameters such as gas velocity and tube radius. A single tube can be seen as a building block for more complex architectures that provide higher molar flow rates or other advanced design goals.
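The connection to open-tube gas chromatography can be made concrete with the Golay plate-height expression for an unretained solute. This is a simplification of the report's model, shown only to illustrate the scaling with gas velocity and tube radius; SI units are assumed.

```python
def plate_height(u, diffusivity, radius):
    """Golay-style plate height for an open tube, unretained solute:
    H(u) = 2*D/u + r^2*u/(24*D)."""
    return 2.0 * diffusivity / u + radius**2 * u / (24.0 * diffusivity)

def optimal_velocity(diffusivity, radius):
    """Velocity minimizing H: dH/du = 0 gives u_opt = sqrt(48)*D/r,
    so narrower tubes support proportionally higher optimal velocities."""
    return (48.0 ** 0.5) * diffusivity / radius
```

The inverse scaling of u_opt with radius is the kind of controllable-parameter behavior the report's model is built to expose.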
Method for Reading Sensors and Controlling Actuators Using Audio Interfaces of Mobile Devices
Aroca, Rafael V.; Burlamaqui, Aquiles F.; Gonçalves, Luiz M. G.
2012-01-01
This article presents a novel closed loop control architecture based on audio channels of several types of computing devices, such as mobile phones and tablet computers, but not restricted to them. The communication is based on an audio interface that relies on the exchange of audio tones, allowing sensors to be read and actuators to be controlled. As an application example, the presented technique is used to build a low cost mobile robot, but the system can also be used in a variety of mechatronics applications and sensor networks, where smartphones are the basic building blocks. PMID:22438726
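The abstract does not specify the exact tone scheme, so the following is a hypothetical binary frequency-shift-keying decoder using the Goertzel algorithm, a standard single-frequency detector often used for audio signalling of this kind:

```python
import math

def tone(freq, n=400, rate=8000.0):
    """Synthesize n samples of a pure tone (a stand-in for the audio channel)."""
    return [math.sin(2 * math.pi * freq * i / rate) for i in range(n)]

def goertzel_power(samples, freq, rate=8000.0):
    """Goertzel algorithm: signal power at a single target frequency."""
    w = 2 * math.pi * freq / rate
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

def decode_bit(samples, f0=1000.0, f1=2000.0, rate=8000.0):
    """Binary FSK decision: which of the two signalling tones dominates."""
    return 1 if goertzel_power(samples, f1, rate) > goertzel_power(samples, f0, rate) else 0
```

A sensor value can then be serialized as a sequence of such tone bursts over the device's headphone/microphone jack, which is the channel the article exploits.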
NASA Astrophysics Data System (ADS)
Martyniv, Oleksandra; Kinasz, Roman
2017-10-01
This material covers a set of basic factors that influence the architectural and spatial design of buildings for higher educational establishments (hereinafter, universities). For this purpose, the factors that influence university architecture were systematized and presented. The article concludes by proposing a concept that considers universities as a hierarchical system whose elements act as factors of influence, which, through their alternating influence, lead to the main goal: the formation of a new university building.
NASA Astrophysics Data System (ADS)
Ghosh, Amal K.; Basuray, Amitabha
2008-11-01
Memory devices in multi-valued logic are of major significance in modern research. This paper deals with the implementation of basic memory devices in multi-valued logic using Savart plate and spatial light modulator (SLM) based optoelectronic circuits. Photons are used here as the carrier to speed up the operations. Optical tree architecture (OTA) has also been utilized in the optical interconnection network. We have exploited the advantages of Savart plates, SLMs and OTA and propose SLM-based high-speed JK, D-type and T-type flip-flops in a trinary system.
Program Helps Simulate Neural Networks
NASA Technical Reports Server (NTRS)
Villarreal, James; Mcintire, Gary
1993-01-01
Neural Network Environment on Transputer System (NNETS) computer program provides users high degree of flexibility in creating and manipulating wide variety of neural-network topologies at processing speeds not found in conventional computing environments. Supports back-propagation and back-propagation-related algorithms. Back-propagation algorithm used is implementation of Rumelhart's generalized delta rule. NNETS developed on INMOS Transputer(R). Predefines back-propagation network, Jordan network, and reinforcement network to assist users in learning and defining own networks. Also enables users to configure other neural-network paradigms from NNETS basic architecture. Small portion of software written in OCCAM(R) language.
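NNETS itself ran on Transputers and was written partly in OCCAM; purely as an illustration of the generalized delta rule it implements, here is a minimal 2-2-1 sigmoid network trained on XOR. The fixed, asymmetric initial weights are arbitrary values chosen for determinism, not NNETS defaults.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_xor(epochs=2000, lr=0.5):
    """Back-propagation (generalized delta rule) on a 2-2-1 network;
    returns the summed squared error after training."""
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    w1 = [[0.5, -0.4], [0.3, 0.8]]   # input -> hidden weights, w1[j][i]
    b1 = [0.1, -0.1]
    w2 = [0.6, -0.7]                 # hidden -> output weights
    b2 = 0.05

    def loss():
        e = 0.0
        for (x1, x2), t in data:
            h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(2)]
            y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
            e += (t - y) ** 2
        return e

    for _ in range(epochs):
        for (x1, x2), t in data:
            h = [sigmoid(w1[j][0] * x1 + w1[j][1] * x2 + b1[j]) for j in range(2)]
            y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
            dy = (y - t) * y * (1 - y)                      # output delta
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]  # hidden deltas
            for j in range(2):
                w2[j] -= lr * dy * h[j]
                w1[j][0] -= lr * dh[j] * x1
                w1[j][1] -= lr * dh[j] * x2
                b1[j] -= lr * dh[j]
            b2 -= lr * dy
    return loss()
```

Training drives the error well below its initial value; the same delta-rule update is what NNETS applies, just distributed across Transputer nodes.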
Fast packet switch architectures for broadband integrated services digital networks
NASA Technical Reports Server (NTRS)
Tobagi, Fouad A.
1990-01-01
Background information on networking and switching is provided, and the various architectures that have been considered for fast packet switches are described. The focus is solely on switches designed to be implemented electronically. A set of definitions and a brief description of the functionality required of fast packet switches are given. Three basic types of packet switches are identified: the shared-memory, shared-medium, and space-division types. Each of these is described, and examples are given.
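Of the three switch types named above, the shared-memory design is the easiest to sketch: all arriving packets are written into one common buffer pool and linked onto per-output queues. The following toy slot-based model (port counts and buffer sizes are arbitrary illustrations, not from the paper):

```python
from collections import deque

def shared_memory_switch(arrivals, n_ports, mem_limit):
    """Toy shared-memory packet switch. `arrivals` is a list of time
    slots, each a list of destination-port indices. Each slot, every
    output port transmits at most one packet; arrivals that find the
    shared pool full are dropped. Returns (transmitted, dropped)."""
    queues = [deque() for _ in range(n_ports)]
    in_memory = sent = dropped = 0
    for slot in arrivals:
        for dst in slot:
            if in_memory < mem_limit:
                queues[dst].append(object())  # payload irrelevant here
                in_memory += 1
            else:
                dropped += 1
        for q in queues:                      # one departure per port per slot
            if q:
                q.popleft()
                in_memory -= 1
                sent += 1
    while in_memory:                          # drain what remains, slot by slot
        for q in queues:
            if q:
                q.popleft()
                in_memory -= 1
                sent += 1
    return sent, dropped
```

Because the pool is shared, a burst to one output can consume buffer space that other outputs would otherwise use, which is the classic trade-off of this switch type.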
The architecture of a modern military health information system.
Mukherji, Raj J; Egyhazy, Csaba J
2004-06-01
This article describes a melding of a government-sponsored architecture for complex systems with open systems engineering architecture developed by the Institute for Electrical and Electronics Engineers (IEEE). Our experience in using these two architectures in building a complex healthcare system is described in this paper. The work described shows that it is possible to combine these two architectural frameworks in describing the systems, operational, and technical views of a complex automation system. The advantage in combining the two architectural frameworks lies in the simplicity of implementation and ease of understanding of automation system architectural elements by medical professionals.
FLEX: A Modular Software Architecture for Flight License Exam
NASA Astrophysics Data System (ADS)
Arsan, Taner; Saka, Hamit Emre; Sahin, Ceyhun
This paper describes the design and implementation of a Web-based examination system called FLEX, the Flight License Exam Software. We designed and implemented flexible and modular software architecture. The implemented system has basic features such as appending questions to the system, building exams from these questions, and allowing students to take these exams. There are three types of users with different authorizations: the system administrator, operators, and students. The system administrator operates and maintains the system and audits system integrity; the administrator cannot change exam results and cannot take an exam. The operator role includes instructors; operators have privileges such as preparing exams, entering questions, and changing existing questions. Students can log on to the system and access exams via a certain URL. Another characteristic of the system is that, for security reasons, neither operators nor the system administrator can delete questions. Exam questions are stored in the database under their topics and lectures, so operators and the system administrator can easily choose questions. Taken together, FLEX allows many students to take exams at the same time under safe, reliable, and user-friendly conditions, and it is a reliable examination system for authorized aviation administration companies. The system was developed on the LAMP platform (Linux, the Apache web server, MySQL, and the object-oriented scripting language PHP), and page structures are built with a content management system (CMS).
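The role separation described in this abstract amounts to a small permission table. The action names below are hypothetical identifiers, not FLEX's actual API; the sketch only shows how "administrators audit but cannot take exams" and "nobody deletes questions" fall out of an allow-list:

```python
# Hypothetical role -> allowed-action mapping mirroring the three user types.
# Deletion of questions is deliberately absent from every role.
PERMISSIONS = {
    "administrator": {"maintain_system", "audit_integrity", "manage_users"},
    "operator": {"prepare_exam", "enter_question", "edit_question"},
    "student": {"take_exam"},
}

def is_allowed(role, action):
    """Allow-list check: anything not explicitly granted is denied."""
    return action in PERMISSIONS.get(role, set())
```

An allow-list (rather than a deny-list) is the natural fit here, since the paper's security constraints are phrased as absences of privilege.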
ERIC Educational Resources Information Center
Kell, John H.
2001-01-01
Presents photos and basic information about a Texas middle school whose architecture reflects the hybrid culture of the borderlands and "regionalism" in which it is located. A line drawing of the site plan is included. (GR)
Bespoke physics for living technology.
Ackley, David H
2013-01-01
In the physics of the natural world, basic tasks of life, such as homeostasis and reproduction, are extremely complex operations, requiring the coordination of billions of atoms even in simple cases. By contrast, artificial living organisms can be implemented in computers using relatively few bits, and copying a data structure is trivial. Of course, the physical overheads of the computers themselves are huge, but since their programmability allows digital "laws of physics" to be tailored like a custom suit, deploying living technology atop an engineered computational substrate might be as or more effective than building directly on the natural laws of physics, for a substantial range of desirable purposes. This article suggests basic criteria and metrics for bespoke physics computing architectures, describes one such architecture, and offers data and illustrations of custom living technology competing to reproduce while collaborating on an externally useful computation.
Information Quality Evaluation of C2 Systems at Architecture Level
2014-06-01
Capability evaluation of C2 systems at the architecture level becomes necessary and important for improving system capability at the stage of architecture design. This paper proposes a method for information quality evaluation of C2 systems at the architecture level, based on architecture models of C2 systems, which can help to identify key factors impacting information quality and improve system capability at the architecture design stage. First, the information quality model is …
NASA Astrophysics Data System (ADS)
Finaeva, O.
2017-11-01
The article represents a brief analysis of factors that influence the development of an urban green space system: territorial and climatic conditions, cultural and historical background as well as the modern strategy of historic cities development. The introduction defines the concept of urban greening, green spaces and green space distribution. The environmental parameters influenced by green spaces are determined. By the example of Italian cities the principles of the urban greening system development are considered: the historical aspects of formation of the urban greening system in Italian cities are analyzed, the role of green spaces in the formation of the urban environment structure and the creation of a favorable microclimate is determined, and a set of measures aimed at its improvement is highlighted. The modern principles of urban greening systems development and their characteristic features are considered. Special attention is paid to the interrelation of architectural and green structures in the formation of a favorable microclimate and psychological comfort in the urban environment; various methods of greening are considered by the example of existing architectural complexes depending on the climate of the area and the landscape features. The examples for the choice of plants and the application of compositional techniques are given. The results represent the basic principles of developing an urban green spaces system. The conclusion summarizes the techniques aimed at the microclimate improvement in the urban environment.
NASA Technical Reports Server (NTRS)
Bergamini, E. W.; Depaula, A. R., Jr.; Martins, R. C. D. O.
1984-01-01
Data relative to the on board supervision subsystem are presented which were considered in a conference between INPE and NASA personnel, with the purpose of initiating a joint effort leading to the implementation of the Brazilian remote sensing experiment - (BRESEX). The BRESEX should consist, basically, of a multispectral camera for Earth observation, to be tested in a future space shuttle flight.
A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems
NASA Astrophysics Data System (ADS)
Li, Yu; Oberweis, Andreas
Aiming at increasing flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets as basic methodology for business processes modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.
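XML nets are a variant of high-level Petri nets, and the token game underlying every Petri-net variant is small enough to sketch. Place and transition structures below are illustrative (a toy order-processing step), not from the chapter:

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire one transition: consume `pre` tokens, produce `post` tokens.
    Returns the new marking, or None if the transition is not enabled."""
    if not enabled(marking, pre):
        return None
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m
```

In an XML net the black tokens would carry XML documents and the arcs would filter them, but the enable/fire semantics shown here are the common core.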
ControlShell: A real-time software framework
NASA Technical Reports Server (NTRS)
Schneider, Stanley A.; Chen, Vincent W.; Pardo-Castellote, Gerardo
1994-01-01
The ControlShell system is a programming environment that enables the development and implementation of complex real-time software. It includes many building tools for complex systems, such as a graphical finite state machine (FSM) tool to provide strategic control. ControlShell has a component-based design, providing interface definitions and mechanisms for building real-time code modules along with providing basic data management. Some of the system-building tools incorporated in ControlShell are a graphical data flow editor, a component data requirement editor, and a state-machine editor. It also includes a distributed data flow package, an execution configuration manager, a matrix package, and an object database and dynamic binding facility. This paper presents an overview of ControlShell's architecture and examines the functions of several of its tools.
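The strategic-control FSM mentioned above is, at its core, a table-driven state machine. The sketch below shows that idea only; it is not ControlShell's actual API, and the states and events are invented:

```python
class StateMachine:
    """Minimal table-driven finite state machine in the spirit of a
    strategic-control FSM. Unknown (state, event) pairs are ignored."""

    def __init__(self, initial, transitions):
        # transitions: {(state, event): next_state}
        self.state = initial
        self.transitions = transitions

    def dispatch(self, event):
        key = (self.state, event)
        if key in self.transitions:
            self.state = self.transitions[key]
        return self.state
```

In a real-time framework each transition would additionally trigger a reconfiguration of the data-flow components, which is the role ControlShell's FSM tool plays.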
Flying qualities and control system characteristics for superaugmented aircraft
NASA Technical Reports Server (NTRS)
Myers, T. T.; Mcruer, D. T.; Johnston, D. E.
1984-01-01
Aircraft-alone dynamics and superaugmented control system fundamental regulatory properties, including stability and regulatory responses of the basic closed-loop systems; fundamental high and low frequency margins and governing factors; and sensitivity to aircraft and controller parameters are addressed. Alternative FCS mechanizations, and mechanizational side effects are also discussed. An overview of flying qualities considerations encompasses general pilot operations as a controller in unattended, intermittent and trim, and full-attention regulatory or command control; effective vehicle primary and secondary response properties to pilot inputs and disturbances; pilot control architectural possibilities; and comparison of superaugmented and conventional aircraft path responses for different forms of pilot control. Results of a simple experimental investigation into pilot dynamic behavior in attitude control of superaugmented aircraft configurations with high frequency time lags and time delays are presented.
High speed bus technology development
NASA Astrophysics Data System (ADS)
Modrow, Marlan B.; Hatfield, Donald W.
1989-09-01
The development and demonstration of the High Speed Data Bus system, a 50 million bits per second (Mbps) local data network intended for avionics applications in advanced military aircraft, is described. The Advanced System Avionics (ASA)/PAVE PILLAR program provided the avionics architecture concept and basic requirements. Designs for wire and fiber optic media were produced and hardware demonstrations were performed. An efficient, robust token-passing protocol was developed and partially demonstrated. The requirements specifications, the trade-offs made, and the resulting designs for both a coaxial wire media system and a fiber optics design are examined. Also, the development of a message-oriented media access protocol is described, from requirements definition through analysis, simulation and experimentation. Finally, the testing and demonstrations conducted on the breadboard and brassboard hardware are presented.
DRS: Derivational Reasoning System
NASA Technical Reports Server (NTRS)
Bose, Bhaskar
1995-01-01
The high reliability requirements for airborne systems demand fault-tolerant architectures to address failures in the presence of physical faults, and the elimination of design flaws during the specification and validation phase of the design cycle. Although much progress has been made in developing methods to address physical faults, design flaws remain a serious problem. Formal methods provide a mathematical basis for removing design flaws from digital systems. DRS (Derivational Reasoning System) is a formal design tool based on advanced research in mathematical modeling and formal synthesis. The system implements a basic design algebra for synthesizing digital circuit descriptions from high level functional specifications. DRS incorporates an executable specification language, a set of correctness preserving transformations, a verification interface, and a logic synthesis interface, making it a powerful tool for realizing hardware from abstract specifications. DRS integrates recent advances in transformational reasoning, automated theorem proving and high-level CAD synthesis systems in order to provide enhanced reliability in designs with reduced time and cost.
libdrdc: software standards library
NASA Astrophysics Data System (ADS)
Erickson, David; Peng, Tie
2008-04-01
This paper presents the libdrdc software standards library, including internal nomenclature, definitions, units of measure, coordinate reference frames, and representations for use in autonomous systems research. This library is a configurable, portable C-function wrapped C++ / Object Oriented C library developed to be independent of software middleware, system architecture, processor, or operating system. It is designed to use the Automatically Tuned Linear Algebra Software (ATLAS) and Basic Linear Algebra Subprograms (BLAS) and to port to firmware and software. The library goal is to unify data collection and representation for various microcontrollers and Central Processing Unit (CPU) cores and to provide a common Application Binary Interface (ABI) for research projects at all scales. The library supports multi-platform development and currently works on Windows, Unix, GNU/Linux, and the Real-Time Executive for Multiprocessor Systems (RTEMS). This library is made available under the LGPL version 2.1 license.
Multigrid methods with space–time concurrency
Falgout, R. D.; Friedhoff, S.; Kolev, Tz. V.; ...
2017-10-06
Here, we consider the comparison of multigrid methods for parabolic partial differential equations that allow space–time concurrency. With current trends in computer architectures leading towards systems with more, but not faster, processors, space–time concurrency is crucial for speeding up time-integration simulations. In contrast, traditional time-integration techniques impose serious limitations on parallel performance due to the sequential nature of the time-stepping approach, allowing spatial concurrency only. This paper considers the three basic options of multigrid algorithms on space–time grids that allow parallelism in space and time: coarsening in space and time, semicoarsening in the spatial dimensions, and semicoarsening in the temporal dimension. We develop parallel software and performance models to study the three methods at scales of up to 16K cores and introduce an extension of one of them for handling multistep time integration. We then discuss advantages and disadvantages of the different approaches and their benefit compared to traditional space-parallel algorithms with sequential time stepping on modern architectures.
High Performance Descriptive Semantic Analysis of Semantic Graph Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joslyn, Cliff A.; Adolf, Robert D.; al-Saffar, Sinan
As semantic graph database technology grows to address components ranging from extant large triple stores to SPARQL endpoints over SQL-structured relational databases, it will become increasingly important to be able to understand their inherent semantic structure, whether codified in explicit ontologies or not. Our group is researching novel methods for what we call descriptive semantic analysis of RDF triplestores, to serve purposes of analysis, interpretation, visualization, and optimization. But data size and computational complexity make it increasingly necessary to bring high performance computational resources to bear on this task. Our research group built a novel high performance hybrid system comprising computational capability for semantic graph database processing utilizing the large multi-threaded architecture of the Cray XMT platform, conventional servers, and large data stores. In this paper we describe that architecture and our methods, and present the results of our analyses of basic properties, connected components, namespace interaction, and typed paths for the Billion Triple Challenge 2010 dataset.
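One of the analyses named above, connected components, can be sketched sequentially (the paper's value is running this at billion-triple scale on the Cray XMT; the algorithm itself is plain union-find). Treat each triple (s, p, o) as an undirected edge between subject and object:

```python
def connected_components(triples):
    """Count weakly connected components of an RDF edge set using
    union-find with path halving. `triples` is an iterable of
    (subject, predicate, object) tuples; predicates are ignored."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for s, _p, o in triples:
        union(s, o)
    return len({find(n) for n in parent})
```

On a multi-threaded machine the union operations can proceed concurrently with compare-and-swap on the parent array, which is roughly what a graph-analytic platform like the XMT enables.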
ASAC Executive Assistant Architecture Description Summary
NASA Technical Reports Server (NTRS)
Roberts, Eileen; Villani, James A.
1997-01-01
In this technical document, we describe the system architecture developed for the Aviation System Analysis Capability (ASAC) Executive Assistant (EA). We describe the genesis and role of the ASAC system, discuss the objectives of the ASAC system and provide an overview of components and models within the ASAC system, discuss our choice for an architecture methodology, the Domain Specific Software Architecture (DSSA), and the DSSA approach to developing a system architecture, and describe the development process and the results of the ASAC EA system architecture. The document has six appendices.
Business Intelligence in Process Control
NASA Astrophysics Data System (ADS)
Kopčeková, Alena; Kopček, Michal; Tanuška, Pavol
2013-12-01
The Business Intelligence technology, which represents a strong tool not only for decision making support but also has big potential in other fields of application, is discussed in this paper. The necessary fundamental definitions are offered and explained to better understand the basic principles and the role of this technology in company management. The article is logically divided into five main parts. In the first part, the definition of the technology and a list of its main advantages are given. In the second part, an overview of the system architecture with a brief description of the separate building blocks is presented, and the hierarchical nature of the system architecture is shown. The technology life cycle, consisting of four steps that are mutually interconnected into a ring, is described in the third part. In the fourth part, the analytical methods incorporated in online analytical processing and data mining used within business intelligence, as well as the related data mining methodologies, are summarised, and some typical applications of the particular methods are introduced. In the final part, a proposal of a knowledge discovery system for hierarchical process control is outlined. The focus of this paper is to provide a comprehensive view and to familiarize the reader with the Business Intelligence technology and its utilisation.
Smart sensors development based on a distributed bus for microsystems applications
NASA Astrophysics Data System (ADS)
Ferrer, Carles; Lorente, Bibiana
2003-04-01
Our main objective in this work has been to develop a communication system between the sensors and actuators and the data processing circuitry inside the microsystem, in order to achieve a flexible and modular architecture. This communication system is based on a dedicated sensor bus composed of only two wires (a bidirectional data line and a clock line for synchronization). The basic philosophy of this development has been to create an IP model in VHDL for the bus driver that can be added to a sensor or actuator to create a smart device that can be easily plugged into the other components of the microsystem architecture. This methodology can be applied to a highly integrated microsystem based on extensive use of microelectronics technologies (ASICs, SoCs and MCMs). The reduced number of wires is an extraordinary advantage because it produces minimal interconnection between all the components, and as a consequence the size of the microinstrument becomes smaller. The second aspect we have considered in this development has been to define a communication protocol that permits building a very simple but robust bus driver interface that minimizes circuit overhead. This interconnection system has been applied to biomedical and aerospace microsystem applications.
To Boldly Go Where No Man has Gone Before: Seeking Gaia's Astrometric Solution with AGIS
NASA Astrophysics Data System (ADS)
Lammers, U.; Lindegren, L.; O'Mullane, W.; Hobbs, D.
2009-09-01
Gaia is ESA's ambitious space astrometry mission with a foreseen launch date in late 2011. Its main objective is to perform a stellar census of the 1,000 million brightest objects in our galaxy (completeness to V=20 mag) from which an astrometric catalog of micro-arcsec (μas) level accuracy will be constructed. A key element in this endeavor is the Astrometric Global Iterative Solution (AGIS) - the mathematical and numerical framework for combining the ≈80 available observations per star obtained during Gaia's 5 yr lifetime into a single global astrometric solution. AGIS consists of four main algorithmic cores which improve the source astrometric parameters, satellite attitude, calibration, and global parameters in a block-iterative manner. We present and discuss this basic scheme, the algorithms themselves and the overarching system architecture. The latter is a data-driven distributed processing framework designed to achieve an overall system performance that is not I/O limited. AGIS is being developed as a pure Java system by a small number of geographically distributed European groups. We present some of the software engineering aspects of the project and the methodologies and tools used. Finally we will briefly discuss how AGIS is embedded into the overall Gaia data processing architecture.
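The block-iterative idea behind AGIS can be sketched with a toy scalar model (illustrative only; the real system solves millions of astrometric, attitude, calibration, and global parameters): each observation is modeled as a source term plus an attitude term, and the two blocks are improved alternately until they stabilize.

```python
# Toy block-iterative scheme in the spirit of AGIS (not the actual Gaia
# algorithms). Each observation of star j during attitude interval i is
# modeled as obs = source[j] + attitude[i]; the source and attitude
# blocks are improved alternately.

def block_iterate(obs, n_src, n_att, sweeps=100):
    src = [0.0] * n_src
    att = [0.0] * n_att
    for _ in range(sweeps):
        # Source block: fit each star holding the attitude fixed.
        for j in range(n_src):
            res = [o - att[i] for (i, jj, o) in obs if jj == j]
            src[j] = sum(res) / len(res)
        # Attitude block: fit each interval holding the sources fixed.
        for i in range(n_att):
            res = [o - src[j] for (ii, j, o) in obs if ii == i]
            att[i] = sum(res) / len(res)
        # Fix the gauge: attitudes are defined only up to a constant.
        shift = sum(att) / len(att)
        att = [a - shift for a in att]
        src = [s + shift for s in src]
    return src, att

# Synthetic data with known answer: src = [1, 2], att = [0.5, -0.5].
obs = [(i, j, [1.0, 2.0][j] + [0.5, -0.5][i]) for i in (0, 1) for j in (0, 1)]
src, att = block_iterate(obs, 2, 2)
```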
Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?
NASA Technical Reports Server (NTRS)
Moore, Greg; Chainyk, Mike; Schiermeier, John
2004-01-01
The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.
Particle In Cell Codes on Highly Parallel Architectures
NASA Astrophysics Data System (ADS)
Tableman, Adam
2014-10-01
We describe strategies and examples of Particle-In-Cell codes running on Nvidia GPU and Intel Phi architectures. This includes basic implementations in skeleton codes and full-scale development versions (encompassing 1D, 2D, and 3D codes) in Osiris. Both the similarities and differences between Intel's and Nvidia's hardware will be examined. Work supported by grants NSF ACI 1339893, DOE DE SC 000849, DOE DE SC 0008316, DOE DE NA 0001833, and DOE DE FC02 04ER 54780.
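A minimal sketch of the kind of particle-push kernel such skeleton codes port to accelerators (not OSIRIS code; the field gather and integrator are deliberately simplified): each particle gathers the field from its cell, then position and velocity advance in a periodic box.

```python
# Minimal 1D particle-push kernel of the kind a PIC skeleton code ports
# to GPUs/accelerators (illustrative sketch, not OSIRIS). Each particle
# gathers the nearest-cell field, then advances with a simple leapfrog
# step in a periodic box. On a GPU, the loop body maps to one thread
# per particle.

def push(x, v, E, dx, dt, L, qm=1.0):
    """Advance particle arrays one step; E is the per-cell field."""
    for p in range(len(x)):
        cell = int(x[p] / dx) % len(E)     # gather: nearest-cell field
        v[p] += qm * E[cell] * dt          # accelerate
        x[p] = (x[p] + v[p] * dt) % L      # advance and wrap periodically
    return x, v

# One particle in a uniform field E = 1: velocity grows by qm*E*dt each step.
x, v = [0.5], [0.0]
for _ in range(10):
    x, v = push(x, v, [1.0] * 8, dx=1.0, dt=0.1, L=8.0)
```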
Modeling of a 3DTV service in the software-defined networking architecture
NASA Astrophysics Data System (ADS)
Wilczewski, Grzegorz
2014-11-01
This article presents a newly developed concept for modeling a multimedia service offering stereoscopic motion imagery. The proposed model is based on the Software-Defined Networking (SDN) architecture. A definition of a 3DTV service spanning the SDN concept is given, exposing the basic characteristics of a 3DTV service in a modern networking layout. Furthermore, exemplary functionalities of the proposed 3DTV model are depicted. It is indicated that modeling a 3DTV service in the Software-Defined Networking architecture leads to numerous improvements, especially in the flexibility of a service supporting heterogeneous end-user devices.
ADS's Dexter Data Extraction Applet
NASA Astrophysics Data System (ADS)
Demleitner, M.; Accomazzi, A.; Eichhorn, G.; Grant, C. S.; Kurtz, M. J.; Murray, S. S.
The NASA Astrophysics Data System (ADS) now holds 1.3 million scanned pages, containing numerous plots and figures for which the original data sets are lost or inaccessible. The availability of scans of the figures can significantly ease the regeneration of the data sets. For this purpose, the ADS has developed Dexter, a Java applet that supports the user in this process. Dexter's basic functionality is to let the user manually digitize a plot by marking points and defining the coordinate transformation from the logical to the physical coordinate system. Advanced features include automatic identification of axes, tracing lines and finding points matching a template. This contribution both describes the operation of Dexter from a user's point of view and discusses some of the architectural issues we faced during implementation.
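Dexter's core coordinate transformation can be sketched as follows (a generic linear pixel-to-data map, not the applet's Java implementation): two reference points with known data values on each axis define the map that is then applied to every marked point.

```python
# The coordinate transformation at the heart of plot digitization
# (generic sketch, not Dexter's Java code). Two axis reference points
# with known data values define a linear pixel -> data map, applied to
# every point the user marks on the scanned figure.

def make_axis_map(pix0, val0, pix1, val1):
    """Linear pixel->data map for one axis from two reference points."""
    scale = (val1 - val0) / (pix1 - pix0)
    return lambda pix: val0 + (pix - pix0) * scale

x_map = make_axis_map(50, 0.0, 450, 10.0)   # x axis: pixels 50..450 -> 0..10
y_map = make_axis_map(400, 0.0, 40, 1.0)    # y axis runs upward on screen
point = (x_map(250), y_map(220))            # digitize one marked point
```

For logarithmic axes the same construction applies to the logarithms of the axis values; automatic axis identification then reduces to finding the reference pixels itself.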
On the development of a reactive sensor-based robotic system
NASA Technical Reports Server (NTRS)
Hexmoor, Henry H.; Underwood, William E., Jr.
1989-01-01
Flexible robotic systems for space applications need to use local information to guide their action in uncertain environments where the state of the environment and even the goals may change. They have to be tolerant of unexpected events and robust enough to carry their task to completion. Tactical goals should be modified while maintaining strategic goals. Furthermore, reactive robotic systems need to have a broader view of their environments than sensory-based systems. An architecture and a theory of representation extending the basic cycles of action and perception are described. This scheme allows for dynamic description of the environment and determining purposive and timely action. Applications of this scheme for assembly and repair tasks using a Universal Machine Intelligence RTX robot are being explored, but the ideas are extendable to other domains. The nature of reactivity for sensor-based robotic systems and implementation issues encountered in developing a prototype are discussed.
Reengineering a database for clinical trials management: lessons for system architects.
Brandt, C A; Nadkarni, P; Marenco, L; Karras, B T; Lu, C; Schacter, L; Fisk, J M; Miller, P L
2000-10-01
This paper describes the process of enhancing Trial/DB, a database system for clinical studies management. The system's enhancements have been driven by the need to maximize the effectiveness of developer personnel in supporting numerous and diverse users, of study designers in setting up new studies, and of administrators in managing ongoing studies. Trial/DB was originally designed to work over a local area network within a single institution, and basic architectural changes were necessary to make it work over the Internet efficiently as well as securely. Further, as its use spread to diverse communities of users, changes were made to let the processes of study design and project management adapt to the working styles of the principal investigators and administrators for each study. The lessons learned in the process should prove instructive for system architects as well as managers of electronic patient record systems.
Nuclear Hybrid Energy System Model Stability Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenwood, Michael Scott; Cetiner, Sacit M.; Fugate, David W.
2017-04-01
A Nuclear Hybrid Energy System (NHES) uses a nuclear reactor as the basic power generation unit, and the power generated is used by multiple customers as combinations of thermal power or electrical power. The definition and architecture of a particular NHES can be adapted based on the needs and opportunities of different localities and markets. For example, locations in need of potable water may be best served by coupling a desalination plant to the NHES. Similarly, a location near oil refineries may have a need for emission-free hydrogen production. Using the flexible, multi-domain capabilities of Modelica, Argonne National Laboratory, Idaho National Laboratory, and Oak Ridge National Laboratory are investigating the dynamics (e.g., thermal hydraulics and electrical generation/consumption) and cost of a hybrid system. This paper examines the NHES work underway, emphasizing the control system developed for individual subsystems and the overall supervisory control system.
Modeling the evolution of protein domain architectures using maximum parsimony.
Fong, Jessica H; Geer, Lewis Y; Panchenko, Anna R; Bryant, Stephen H
2007-02-09
Domains are basic evolutionary units of proteins and most proteins have more than one domain. Advances in domain modeling and collection are making it possible to annotate a large fraction of known protein sequences by a linear ordering of their domains, yielding their architecture. Protein domain architectures link evolutionarily related proteins and underscore their shared functions. Here, we attempt to better understand this association by identifying the evolutionary pathways by which extant architectures may have evolved. We propose a model of evolution in which architectures arise through rearrangements of inferred precursor architectures and acquisition of new domains. These pathways are ranked using a parsimony principle, whereby scenarios requiring the fewest number of independent recombination events, namely fission and fusion operations, are assumed to be more likely. Using a data set of domain architectures present in 159 proteomes that represent all three major branches of the tree of life allows us to estimate the history of over 85% of all architectures in the sequence database. We find that the distribution of rearrangement classes is robust with respect to alternative parsimony rules for inferring the presence of precursor architectures in ancestral species. Analyzing the most parsimonious pathways, we find 87% of architectures to gain complexity over time through simple changes, among which fusion events account for 5.6 times as many architectures as fission. Our results may be used to compute domain architecture similarities, for example, based on the number of historical recombination events separating them. Domain architecture "neighbors" identified in this way may lead to new insights about the evolution of protein function.
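The fusion and fission operations can be illustrated with a toy helper (hypothetical code, not the paper's scoring pipeline): an architecture is an ordered tuple of domain names, and single-event pathways to a target architecture are enumerated from a set of precursors.

```python
# Toy illustration of the fusion/fission events used to rank
# evolutionary pathways (hypothetical helper, not the paper's actual
# code). An architecture is an ordered tuple of domain names; a fusion
# joins two precursor architectures end to end, a fission splits one
# precursor in two.

def one_step_events(target, precursors):
    """List single fusion or fission events that yield `target`."""
    events = []
    # Fusion: target = left + right for two known precursors.
    for cut in range(1, len(target)):
        left, right = target[:cut], target[cut:]
        if left in precursors and right in precursors:
            events.append(("fusion", left, right))
    # Fission: target is one of the two pieces of a known precursor.
    for p in precursors:
        for cut in range(1, len(p)):
            if target in (p[:cut], p[cut:]):
                events.append(("fission", p, cut))
    return events

precursors = {("A", "B"), ("C",), ("A", "B", "C", "D")}
events = one_step_events(("A", "B", "C"), precursors)
```

A parsimony search over such events, scored by the number of independent recombinations, is the kind of ranking the paper describes.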
The Solar Umbrella: A Low-cost Demonstration of Scalable Space Based Solar Power
NASA Technical Reports Server (NTRS)
Contreras, Michael T.; Trease, Brian P.; Sherwood, Brent
2013-01-01
Within the past decade, the Space Solar Power (SSP) community has seen an influx of stakeholders willing to entertain the SSP prospect of potentially boundless, base-load solar energy. Interested parties affiliated with the Department of Defense (DoD), the private sector, and various international entities have all agreed that while the benefits of SSP are tremendous and potentially profitable, the risk associated with developing an efficient end-to-end SSP harvesting system is still very high. In an effort to reduce the implementation risk for future SSP architectures, this study proposes a system-level design that is low-cost and seeks to demonstrate the furthest transmission of wireless power to date. The overall concept is presented and each subsystem is explained in detail with best estimates of current implementable technologies. Basic cost models were constructed based on input from JPL subject matter experts and assume that the technology demonstration would be carried out by a federally funded entity. The main thrust of the architecture is to demonstrate that a usable amount of solar power can be safely and reliably transmitted from space to the Earth's surface; however, maximum power scalability limits and their cost implications are discussed.
Construction of a multimedia application on public network
NASA Astrophysics Data System (ADS)
Liu, Jang; Wang, Chwan-Huei; Tseng, Ming-Yu; Hsiao, Sun-Lang; Luo, Wen-Hen; Tseng, Yung-Mean; Hung, Feng-Yue
1994-04-01
This paper describes our perception of current developments in networking, telecommunications and multimedia technology. We have taken a constructive view: from this standpoint, we devised a client-server architecture that veils servers from their customers. It adheres to our conviction that network and location independence for server access is a future trend. We have constructed an on-line KARAOKE service on an existing CVS (Chinese Videotex System) to test the workability of this architecture, and it works well. We are working on a prototype multimedia service network which is a miniature client-server structure of our proposal. A specially designed protocol is described. Through this protocol, a one-to-many connection can be set up, and to support multimedia applications, new connections can be established within a basic connection, so that continuous media may have their own connections without being interrupted by other media, at least from the view of an application. We have advanced a constructive view which is not a framework itself but is tantamount to one, in building systems as an assembly of methods, techniques, designs, and ideas; this is what a framework does, with more flexibility and availability.
ERIC Educational Resources Information Center
North Carolina State Dept. of Community Colleges, Raleigh.
A two-part articulation instructional objective guide for drafting (graphic communications) is provided. Part I contains summary information on seven blocks (courses) of instruction. They are as follows: introduction; basic technical drafting; problem solving in graphics; reproduction processes; freehand drawing and sketching; graphics composition;…
Novel All Digital Ring Cavity Locking Servo
NASA Astrophysics Data System (ADS)
Baker, J.; Gallant, D.; Lucero, A.; Miller, H.; Stohs, J.
We plan to use this servo in the new 50 W 589-nm sodium guidestar laser to be installed in the AMOS facility in July 2010. Though the basic design is unchanged from the successful Hillman/Denman design, numerous improvements are being implemented in order to bring the device further out of the lab and into the field. The basic building blocks of the Hillman/Denman design are two low-noise master oscillators that are injected into higher-power slave oscillators locked to the frequencies of the master oscillator cavities. In the previous system a traditional analog Pound-Drever-Hall (PDH) loop provided the frequency locking. Analog servos work well in general, but robust locking for a complex set of multiply-interconnected PDH servos in the guidestar source challenges existing analog approaches. One of the significant changes demonstrated thus far is the implementation of an all-digital servo using only COTS components and a fast CISC processing architecture for orchestrating the basic PDH loops active within the system. Compared to the traditional analog servo loops, an all-digital servo is not only orders of magnitude simpler to implement, but its control loop can be modified by merely changing the computer code. Field conditions are often different from laboratory conditions, requiring subtle algorithm changes, and physical accessibility in the field is generally limited and difficult. Remotely implemented, trimmer-less and solderless servo upgrades are a much-welcomed improvement in the field-installed guidestar system. Also, OEM replacement of the usual benchtop components saves considerable space and weight in the locking system. We will report on the details of the servo system and recent experimental results locking a master-slave laser oscillator system using the all-digital Pound-Drever-Hall loop.
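The core of such an all-digital servo is a software control loop. A generic digital PI loop of the kind that replaces an analog servo is sketched below (the gains, sample rate, toy plant, and the generation of the PDH error signal are all assumptions; the abstract gives no such details):

```python
# Generic digital PI loop of the kind used in an all-digital cavity
# lock (illustrative sketch; not the actual guidestar servo). The
# demodulated PDH error signal drives a cavity-length actuator; the
# "algorithm change by code change" advantage is that kp, ki, clamp
# limits, etc. are just variables.

class PILoop:
    def __init__(self, kp, ki, dt, out_min=-1.0, out_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def step(self, error):
        self.integral += error * self.dt
        out = self.kp * error + self.ki * self.integral
        # Clamp with anti-windup: freeze the integrator at the rails.
        if not self.out_min <= out <= self.out_max:
            self.integral -= error * self.dt
            out = max(self.out_min, min(self.out_max, out))
        return out

# Lock a toy plant: the cavity detuning relaxes toward zero under the
# actuator command (made-up plant gain of 0.01 per sample).
loop, detune = PILoop(kp=0.5, ki=5.0, dt=1e-3), 0.8
for _ in range(5000):
    detune -= 0.01 * loop.step(detune)
```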
A Service Oriented Infrastructure for Earth Science exchange
NASA Astrophysics Data System (ADS)
Burnett, M.; Mitchell, A.
2008-12-01
NASA's Earth Science Distributed Information System (ESDIS) program has developed an infrastructure for the exchange of Earth Observation related resources. Fundamentally a platform for Service Oriented Architectures, ECHO provides standards-based interfaces built on the basic interactions of an SOA pattern: Publish, Find and Bind. This infrastructure enables the realization of the benefits of Service Oriented Architectures, namely the reduction of stove-piped systems, the opportunity for reuse and the flexibility to meet dynamic business needs, on a global scale. ECHO is the result of the infusion of IT technologies, including Web Services standards and Service Oriented Architecture technologies. The infrastructure is based on standards and leverages registries for data, services, clients and applications. As an operational system, ECHO currently represents over 110 million Earth Observation resources from a wide number of provider organizations. These partner organizations each have a primary mission - serving a particular facet of the Earth Observation community. Through ECHO, those partners can serve the needs of not only their target portion of the community, but also enable a wider range of users to discover and leverage their data resources, thereby increasing the value of their offerings. The Earth Observation community benefits from this infrastructure because it provides a set of common mechanisms for the discovery and access of resources from a much wider range of data and service providers. ECHO enables innovative clients to be built for targeted user types and missions, and several examples of such clients are already in progress. Applications built on this infrastructure can include user-driven GUI clients (web-based or thick clients), analysis programs (as intermediate components of larger systems), models or decision support systems.
This paper will provide insight into the development of ECHO, as technologies were evaluated for infusion, and a summary of how technologies were leveraged into a significant operational system for the Earth Observation community.
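The Publish/Find/Bind pattern underlying such an infrastructure can be sketched with a minimal registry (a generic illustration with hypothetical names and URLs, not ECHO's actual API): providers publish resource descriptions, clients find them by keyword and bind to an endpoint.

```python
# Minimal Publish/Find/Bind registry illustrating the SOA pattern a
# system like ECHO builds on (generic sketch; names, keywords, and the
# example URL are invented, and this is not ECHO's actual interface).

class Registry:
    def __init__(self):
        self.entries = []

    def publish(self, name, keywords, endpoint):
        """Provider registers a resource description."""
        self.entries.append({"name": name, "keywords": set(keywords),
                             "endpoint": endpoint})

    def find(self, keyword):
        """Client discovers resources matching a keyword."""
        return [e for e in self.entries if keyword in e["keywords"]]

    def bind(self, name):
        """Client resolves a discovered resource to a service endpoint."""
        for e in self.entries:
            if e["name"] == name:
                return e["endpoint"]
        raise KeyError(name)

reg = Registry()
reg.publish("MODIS-L1B", ["earth", "radiance"], "https://example.org/modis")
hits = reg.find("radiance")
endpoint = reg.bind("MODIS-L1B")
```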
Algorithm architecture co-design for ultra low-power image sensor
NASA Astrophysics Data System (ADS)
Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.
2012-03-01
In the context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence but also with very low power consumption. With a steady camera, motion-detection algorithms based on background estimation to find regions in movement are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects that can be detected, we propose an original mixed-mode architecture developed through an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized to implement motion-detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive has allowed the algorithms to be implemented in a very compact way. Thereby, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed, with an estimated power consumption of 1.8 mW.
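The background-estimation scheme can be sketched as follows (a generic running-average detector of the algorithm class described, with made-up parameters; this is not the sensor's firmware): the background is a slow running average per macropixel, and macropixels far from it are flagged as moving.

```python
# Background-estimation motion detection on a down-sampled macropixel
# image (generic sketch of the algorithm class described in the
# abstract; alpha and threshold are invented). The background creeps
# toward each new frame, in the spirit of a delta-modulation update.

def detect_motion(frame, background, alpha=0.05, threshold=20):
    """Return (moving-region mask, updated background) for one frame."""
    mask, new_bg = [], []
    for pixel, bg in zip(frame, background):
        mask.append(abs(pixel - bg) > threshold)
        # Incremental update: the background tracks slow scene changes
        # but lags behind fast-moving objects.
        new_bg.append(bg + alpha * (pixel - bg))
    return mask, new_bg

background = [100.0] * 4
frame = [100, 103, 180, 98]        # third macropixel has a moving object
mask, background = detect_motion(frame, background)
```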
A Summary of NASA Architecture Studies Utilizing Fission Surface Power Technology
NASA Technical Reports Server (NTRS)
Mason, Lee S.; Poston, David I.
2011-01-01
Beginning with the Exploration Systems Architecture Study in 2005, NASA has conducted various mission architecture studies to evaluate implementation options for the U.S. Space Policy. Several of the studies examined the use of Fission Surface Power (FSP) systems for human missions to the lunar and Martian surface. This paper summarizes the FSP concepts developed under four different NASA-sponsored architecture studies: Lunar Architecture Team, Mars Architecture Team, Lunar Surface Systems/Constellation Architecture Team, and International Architecture Working Group-Power Function Team.
Dynamics and design principles of a basic regulatory architecture controlling metabolic pathways.
Chin, Chen-Shan; Chubukov, Victor; Jolly, Emmitt R; DeRisi, Joe; Li, Hao
2008-06-17
The dynamic features of a genetic network's response to environmental fluctuations represent essential functional specifications and thus may constrain the possible choices of network architecture and kinetic parameters. To explore the connection between dynamics and network design, we have analyzed a general regulatory architecture that is commonly found in many metabolic pathways. This architecture is characterized by a dual control mechanism, with end product feedback inhibition and transcriptional regulation mediated by an intermediate metabolite. As a case study, we measured with high temporal resolution the induction profiles of the enzymes in the leucine biosynthetic pathway in response to leucine depletion, using an automated system for monitoring protein expression levels in single cells. All the genes in the pathway are known to be coregulated by the same transcription factors, but we observed drastically different dynamic responses for enzymes upstream and immediately downstream of the key control point: the intermediate metabolite alpha-isopropylmalate (alphaIPM), which couples metabolic activity to transcriptional regulation. Analysis based on genetic perturbations suggests that the observed dynamics are due to differential regulation by the leucine branch-specific transcription factor Leu3, and that the downstream enzymes are strictly controlled and highly expressed only when alphaIPM is available. These observations allow us to build a simplified mathematical model that accounts for the observed dynamics and can correctly predict the pathway's response to new perturbations. Our model also suggests that transient dynamics and steady state can be separately tuned and that the high induction levels of the downstream enzymes are necessary for fast leucine recovery. It is likely that principles emerging from this work can reveal how gene regulation has evolved to optimize performance in other metabolic pathways with similar architecture.
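The dual-control motif can be caricatured with a small ODE model (hypothetical rate laws and parameters, not the paper's fitted leucine model): influx into the intermediate is feedback-inhibited by the end product, while expression of the downstream enzyme is induced by the intermediate itself.

```python
# Minimal ODE caricature of the dual-control motif (all rate laws and
# parameters are invented for illustration; this is not the paper's
# model). Influx to the intermediate ("ipm", standing in for alphaIPM)
# is inhibited by the end product; the downstream enzyme is induced by
# the intermediate and converts it into the product.

def simulate(steps=20000, dt=0.01):
    ipm, enzyme, product = 0.0, 0.1, 0.0
    for _ in range(steps):
        influx = 1.0 / (1.0 + product ** 2)        # end-product inhibition
        d_ipm = influx - enzyme * ipm
        d_enz = ipm / (0.5 + ipm) - 0.1 * enzyme   # intermediate-induced txn
        d_prod = enzyme * ipm - 0.2 * product      # consumption / dilution
        ipm += d_ipm * dt
        enzyme += d_enz * dt
        product += d_prod * dt
    return ipm, enzyme, product

ipm, enzyme, product = simulate()
```

Even this caricature reproduces the qualitative feature the paper highlights: the downstream enzyme is expressed only while the intermediate is present, and the two loops can be tuned separately.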
Architecture of orogenic belts and convergent zones in Western Ishtar Terra, Venus
NASA Technical Reports Server (NTRS)
Head, James W.; Vorderbruegge, R. W.; Crumpler, L. S.
1989-01-01
Linear mountain belts in Ishtar Terra were recognized from Pioneer-Venus topography, and later Arecibo images showed banded terrain interpreted to represent folds. Subsequent analyses showed that the mountains represented orogenic belts, and that each had somewhat different features and characteristics. Orogenic belts are regions of focused shortening and compressional deformation and thus provide evidence for the nature of such deformation, processes of crustal thickening (brittle, ductile), and processes of crustal loss. Such information is important in understanding the nature of convergent zones on Venus (underthrusting, imbrication, subduction), the implications for rates of crustal recycling, and the nature of environments of melting and petrogenesis. The basic elements of four convergent zones and orogenic belts in western Ishtar Terra are identified and examined, and the architecture of these zones (the manner in which the elements are arrayed) and their relationships are then assessed. The basic nomenclature of the convergent zones is shown.
NASA Astrophysics Data System (ADS)
Stytz, Martin R.; May, Michael; Banks, Sheila B.
2009-04-01
Department of Defense (DoD) Information Technology (IT) systems operate in an environment different from the commercial world; the differences arise from the types of attacks, the interdependencies between DoD software systems, and the reliance upon commercial software to provide basic capabilities. The challenge we face is determining how to specify the information assurance (IA) requirements for a system without requiring changes to the commercial software, and in light of the interdependencies between systems. As a result of the interdependencies and interconnections introduced by the global information grid (GIG), an assessment of the IA requirements for a system must consider three facets of its IA capabilities: 1) the IA vulnerabilities of the system, 2) the ability of the system to repel IA attacks, and 3) the ability of the system to ensure that any IA attack that penetrates it is contained within the system and does not spread. Each facet should be assessed independently, and the requirements should be derived independently from the assessments. In addition to the desired IA technology capabilities, a complete assessment of the system's overall IA security technology readiness level cannot be accomplished without assessing its capability to recover from and remediate IA vulnerabilities and compromises. To accomplish these three formidable tasks, we propose a general system architecture designed to separate the system's IA capabilities from its other capability requirements, thereby allowing the IA capabilities to be developed and assessed separately from the other system capabilities. The architecture also enables independent requirements specification, implementation, assessment, measurement, and improvement of a system's IA capabilities without requiring modification of the underlying application software.
NASA Astrophysics Data System (ADS)
Tamai, Isao; Hasegawa, Hideki
2007-04-01
As a combination of novel hardware architecture and novel system architecture for future ultrahigh-density III-V nanodevice LSIs, the authors' group has recently proposed a hexagonal binary decision diagram (BDD) quantum circuit approach where gate-controlled path switching BDD node devices for a single or few electrons are laid out on a hexagonal nanowire network to realize a logic function. In this paper, attempts are made to establish a method to grow highly dense hexagonal nanowire networks for future BDD circuits by selective molecular beam epitaxy (MBE) on (1 1 1)B substrates. The (1 1 1)B orientation is suitable for BDD architecture because of the basic three-fold symmetry of the BDD node device. The growth experiments showed complex evolution of the cross-sectional structures, and it was explained in terms of kinetics determining facet boundaries. Straight arrays of triangular nanowires with 60 nm base width as well as hexagonal arrays of trapezoidal nanowires with a node density of 7.5×10^6 cm^-2 were successfully grown with the aid of computer simulation. The result shows feasibility of growing high-density hexagonal networks of GaAs nanowires with precise control of the shape and size.
Developing an Intelligent Computer-Aided Trainer
NASA Technical Reports Server (NTRS)
Hua, Grace
1990-01-01
The Payload-assist module Deploys/Intelligent Computer-Aided Training (PD/ICAT) system was developed as a prototype for intelligent tutoring systems with the intention of seeing PD/ICAT evolve and produce a general ICAT architecture and development environment that can be adapted to a wide variety of training tasks. The proposed architecture is composed of a user interface, a domain expert, a training session manager, a trainee model and a training scenario generator. The PD/ICAT prototype was developed in the LISP environment. Although it has been well received by its peers and users, it could not be delivered to its end users for practical use because of specific hardware and software constraints. To facilitate delivery of PD/ICAT to its users and to prepare a more widely accepted development and delivery environment for future ICAT applications, we have ported this training system to a UNIX workstation and adopted a conventional language, C, and a C-based rule-based language, CLIPS. A rapid conversion of the PD/ICAT expert system to CLIPS was possible because the knowledge was basically represented as a forward-chaining rule base. The resulting CLIPS rule base has been tested successfully in other ICATs as well. Therefore, the porting effort has proven to be a positive step toward our ultimate goal of building a general-purpose ICAT development environment.
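The forward-chaining representation at the heart of the CLIPS port can be sketched in a few lines (a generic inference loop with invented facts and rules, not the PD/ICAT rule base): rules fire when all their premises are in working memory, asserting new facts until nothing changes.

```python
# Minimal forward-chaining loop of the kind CLIPS provides (generic
# sketch; the facts and rules here are invented and are not from the
# PD/ICAT knowledge base). Rules fire when their premises are all in
# working memory, adding conclusions until a fixed point is reached.

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion). Returns the fact closure."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(["deploy-armed", "timer-expired"], "spin-up"),
         (["spin-up"], "release")]
facts = forward_chain(["deploy-armed", "timer-expired"], rules)
```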
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
This paper presents an overview of the application of the Space Generic Open Avionics Architecture (SGOAA) to the Space Shuttle Data Processing System (DPS) architecture design. This application has been performed to validate the SGOAA, and its potential use in flight critical systems. The paper summarizes key elements of the Space Shuttle avionics architecture, data processing system requirements and software architecture as currently implemented. It then summarizes the SGOAA architecture and describes a tailoring of the SGOAA to the Space Shuttle. The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing external and internal hardware architecture, a six class model of interfaces and functional subsystem architectures for data services and operations control capabilities. It has been proposed as an avionics architecture standard with the National Aeronautics and Space Administration (NASA), through its Strategic Avionics Technology Working Group, and is being considered by the Society of Automotive Engineers (SAE) as an SAE Avionics Standard. This architecture was developed for the Flight Data Systems Division of JSC by the Lockheed Engineering and Sciences Company, Houston, Texas.
Automated Synthesis of Architecture of Avionic Systems
NASA Technical Reports Server (NTRS)
Chau, Savio; Xu, Joseph; Dang, Van; Lu, James F.
2006-01-01
The Architecture Synthesis Tool (AST) is software that automatically synthesizes software and hardware architectures of avionic systems. The AST is expected to be most helpful during initial formulation of an avionic-system design, when system requirements change frequently and manual modification of the architecture is time-consuming and susceptible to error. The AST comprises two parts: (1) an architecture generator, which utilizes a genetic algorithm to create a multitude of architectures; and (2) a functionality evaluator, which analyzes the architectures for viability, rejecting most of the non-viable ones. The functionality evaluator generates and uses a viability tree: a hierarchy of functions and the components that perform them, such that the system as a whole performs the system-level functions representing the requirements specified by the user. Architectures that survive the functionality evaluator are further evaluated by the selection process of the genetic algorithm. Architectures found most promising to satisfy the user's requirements and to perform optimally are selected as parents of the next generation of architectures. The foregoing process is iterated as many times as the user desires. The final output is one or a few viable architectures that satisfy the user's requirements.
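The generate-and-filter loop can be sketched with a tiny genetic algorithm (the encoding, viability rules, and fitness function here are all hypothetical; the AST's actual representation is not described in the abstract): architectures are bit vectors of selected components, and a viability check plays the role of the functionality evaluator.

```python
# Tiny genetic-algorithm loop in the spirit of the AST (hypothetical
# encoding and fitness; not the tool's actual representation). An
# architecture is a bit vector of selected components; a viability
# check rejects candidates missing required functions, and survivors
# are bred toward lower component count.
import random

REQUIRED = [{0, 1}, {2, 3}, {4}]   # each function needs one of these parts

def viable(arch):
    """Functionality evaluator: every required function must be covered."""
    return all(any(arch[i] for i in options) for options in REQUIRED)

def fitness(arch):
    return -sum(arch)              # among viable designs, prefer fewer parts

def evolve(pop_size=30, n_parts=6, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_parts)] for _ in range(pop_size)]
    pop[0] = [1, 0, 1, 0, 1, 0]    # seed one known-viable design
    best = None
    for _ in range(generations):
        survivors = [a for a in pop if viable(a)]
        for a in survivors:
            if best is None or fitness(a) > fitness(best):
                best = list(a)
        pool = survivors if len(survivors) >= 2 else pop
        parents = sorted(pool, key=fitness, reverse=True)
        parents = parents[: max(2, len(parents) // 2)]
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_parts)
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.1:             # point mutation
                i = rng.randrange(n_parts)
                child[i] ^= 1
            pop.append(child)
    return best

best = evolve()
```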
Expert system validation in prolog
NASA Technical Reports Server (NTRS)
Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline
1988-01-01
An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, dead ends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary to the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for the various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage was developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations that match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
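The connection-graph idea behind the checkers can be illustrated with a small sketch. The rule format and the naive depth-first cycle search below are illustrative assumptions, not EVA's actual Prolog code; in particular, EVA uses a modified A* algorithm that reports each cycle only once, while this sketch does not deduplicate:

```python
# Rules as (premises, conclusion) pairs; an arc connects a rule whose
# conclusion matches (here: simply equals) a premise of another rule.
RULES = {
    "r1": ({"fever", "cough"}, "flu"),
    "r2": ({"flu"}, "rest"),
    "r3": ({"rest"}, "flu"),            # forms a cycle with r2
    "r4": ({"fever", "cough"}, "flu"),  # duplicate of r1
}

def connection_graph(rules):
    arcs = set()
    for a, (_, concl) in rules.items():
        for b, (prems, _) in rules.items():
            if a != b and concl in prems:
                arcs.add((a, b))
    return arcs

def duplicates(rules):
    """Duplication checker: two rules with identical premises and conclusion."""
    seen, dups = {}, []
    for name, (prems, concl) in rules.items():
        key = (frozenset(prems), concl)
        if key in seen:
            dups.append((seen[key], name))
        else:
            seen[key] = name
    return dups

def cycles(arcs):
    """Cycle checker: traverse the connection graph depth-first."""
    graph = {}
    for a, b in arcs:
        graph.setdefault(a, []).append(b)
    found = []
    def dfs(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:
                found.append(path[path.index(nxt):] + [nxt])
            else:
                dfs(nxt, path + [nxt])
    for start in graph:
        dfs(start, [start])
    return found

arcs = connection_graph(RULES)
print(duplicates(RULES))
print(cycles(arcs))
```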
Simplified programming and control of automated radiosynthesizers through unit operations.
Claggett, Shane B; Quinn, Kevin M; Lazari, Mark; Moore, Melissa D; van Dam, R Michael
2013-07-15
Many automated radiosynthesizers for producing positron emission tomography (PET) probes provide a means for the operator to create custom synthesis programs. The programming interfaces are typically designed with the engineer rather than the radiochemist in mind, requiring lengthy programs to be created from sequences of low-level, non-intuitive hardware operations. In some cases, the user is even responsible for adding steps to update the graphical representation of the system. In light of these unnecessarily complex approaches, we have created software to perform radiochemistry on the ELIXYS radiosynthesizer with the goal of being intuitive and easy to use. Radiochemists were consulted, and a wide range of radiosyntheses were analyzed to determine a comprehensive set of basic chemistry unit operations. Based around these operations, we created a software control system with a client-server architecture. In an attempt to maximize flexibility, the client software was designed to run on a variety of portable multi-touch devices. The software was used to create programs for the synthesis of several 18F-labeled probes on the ELIXYS radiosynthesizer, with [18F]FDG detailed here. To gauge the user-friendliness of the software, program lengths were compared to those from other systems. A small sample group with no prior radiosynthesizer experience was tasked with creating and running a simple protocol. The software was successfully used to synthesize several 18F-labeled PET probes, including [18F]FDG, with synthesis times and yields comparable to literature reports. The resulting programs were significantly shorter and easier to debug than programs from other systems. The sample group of naive users created and ran a simple protocol within a couple of hours, revealing a very short learning curve. The client-server architecture provided reliability, enabling continuity of the synthesis run even if the computer running the client software failed. 
The architecture enabled a single user to control the hardware while others observed the run in progress or created programs for other probes. We developed a novel unit operation-based software interface to control automated radiosynthesizers that reduced the program length and complexity and also exhibited a short learning curve. The client-server architecture provided robustness and flexibility.
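The contrast between chemistry-level unit operations and low-level hardware sequences can be sketched as follows. The operation names, their parameters, and the reagents below are hypothetical, not the actual ELIXYS vocabulary or a real [18F]FDG program:

```python
from dataclasses import dataclass

# Hypothetical unit operations; the real ELIXYS operation set differs.
@dataclass
class Add:
    reactor: int
    reagent: str
    volume_ml: float

@dataclass
class React:
    reactor: int
    temp_c: float
    minutes: float

@dataclass
class Evaporate:
    reactor: int
    temp_c: float
    minutes: float

# A synthesis program is an ordered list of chemistry-level unit
# operations, far shorter than the equivalent valve/actuator sequence.
program = [
    Add(reactor=1, reagent="K222/K2CO3", volume_ml=0.5),
    Evaporate(reactor=1, temp_c=110, minutes=5),
    Add(reactor=1, reagent="precursor", volume_ml=1.0),
    React(reactor=1, temp_c=90, minutes=10),
]

def run(program, log):
    """Server-side interpreter: expands each unit operation into the
    low-level hardware actions (stubbed here as log entries)."""
    for op in program:
        log.append(f"{type(op).__name__}({op.reactor})")

log = []
run(program, log)
print(log[0])   # Add(1)
```

In the client-server split the abstract describes, the program (the list above) lives on the server, so a client failure does not interrupt the interpreter mid-synthesis.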
Diagnostic-management system and test pulse acquisition for WEST plasma measurement system
NASA Astrophysics Data System (ADS)
Wojenski, A.; Kasprowicz, G.; Pozniak, K. T.; Byszuk, A.; Juszczyk, B.; Zabolotny, W.; Zienkiewicz, P.; Chernyshova, M.; Czarski, T.; Mazon, D.; Malard, P.
2014-11-01
This paper describes the current status of electronics, firmware, and software development for the new plasma measurement system for the WEST facility. The system performs two-dimensional plasma visualization (in time) with spectrum measurement. The analog front end is connected to a Gas Electron Multiplier (GEM) detector. The system architecture has high data throughput owing to the use of the PCI Express interface, gigabit transceivers, and the sampling frequency of the ADC integrated circuits. The hardware is based on several years of experience in building the X-ray spectrometer system for the Joint European Torus (JET) facility. Data streaming is done using Artix-7 FPGA devices. In its basic configuration the system works with up to 256 channels, while the maximum number of measurement channels is 2048. Advanced FPGA firmware is required in order to perform high-speed data streaming and analog signal sampling. Diagnostic system management has been developed in order to configure the measurement system, perform the necessary calibration, and prepare the hardware for data acquisition.
The NASA Auralization Framework and Plugin Architecture
NASA Technical Reports Server (NTRS)
Aumann, Aric R.; Tuttle, Brian C.; Chapin, William L.; Rizzi, Stephen A.
2015-01-01
NASA has a long history of investigating human response to aircraft flyover noise and in recent years has developed a capability to fully auralize the noise of aircraft during their design. This capability is particularly useful for unconventional designs with noise signatures significantly different from the current fleet. To that end, a flexible software architecture has been developed to facilitate rapid integration of new simulation techniques for noise source synthesis and propagation, and to foster collaboration amongst researchers through a common releasable code base. The NASA Auralization Framework (NAF) is a skeletal framework written in C++ with basic functionalities and a plugin architecture that allows users to mix and match NAF capabilities with their own methods through the development and use of dynamically linked libraries. This paper presents the NAF software architecture and discusses several advanced auralization techniques that have been implemented as plugins to the framework.
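A plugin registry of this kind can be sketched in a few lines. The NAF itself is C++ and loads dynamically linked libraries; the Python registry, the `SourcePlugin` interface, and the plugin name below are illustrative stand-ins for that mechanism:

```python
# Minimal plugin-architecture sketch: the framework defines an
# interface and a registry; user libraries register implementations
# that are looked up by name at run time.
PLUGINS = {}

def register(kind, name):
    def wrap(cls):
        PLUGINS[(kind, name)] = cls
        return cls
    return wrap

class SourcePlugin:
    """Framework-defined interface for noise-source synthesis plugins."""
    def synthesize(self, seconds):
        raise NotImplementedError

@register("source", "tone")
class TonePlugin(SourcePlugin):
    """A user-supplied implementation, normally shipped as a separate library."""
    def synthesize(self, seconds):
        return f"{seconds}s of synthetic tone"

def build(kind, name):
    # The framework instantiates whichever implementation was registered,
    # so capabilities can be mixed and matched without recompiling the core.
    return PLUGINS[(kind, name)]()

print(build("source", "tone").synthesize(2))
```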
Integrated flight/propulsion control - Adaptive engine control system mode
NASA Technical Reports Server (NTRS)
Yonke, W. A.; Terrell, L. A.; Meyers, L. P.
1985-01-01
The adaptive engine control system mode (ADECS), which was developed and tested on an F-15 aircraft with PW1128 engines under the NASA-sponsored Highly Integrated Digital Electronic Control program, is examined. The operation of the ADECS mode is described, as well as the basic control logic, the avionics architecture, and the airframe/engine interface. By increasing the engine pressure ratio (EPR), additional thrust is obtained at intermediate power and above. Information from the flight control system is used to modulate the amount of EPR uptrim and to prevent engine stall. The performance benefits anticipated from control integration are shown for a range of flight conditions and power settings. It is found that at higher altitudes the ADECS mode can increase thrust by as much as 12 percent, which can be used for improved acceleration, improved turn rate, or sustained turn angle.
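The idea of modulating EPR uptrim against available stall margin can be caricatured in a few lines. The proportional law, the ceiling, and every number below are invented for illustration; this is not the actual ADECS control law or real F-15/PW1128 values:

```python
def epr_uptrim(base_epr, stall_margin, max_uptrim=0.08):
    """Illustrative only: raise EPR in proportion to the stall margin
    reported by the flight control system, clamped to a fixed ceiling.
    All constants are hypothetical, not engine data."""
    margin = max(0.0, min(stall_margin, 1.0))   # clamp to [0, 1]
    return base_epr * (1.0 + max_uptrim * margin)

# With half the margin available, half the maximum uptrim is commanded.
print(round(epr_uptrim(3.0, 0.5), 3))   # 3.12
```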
Studying Spatial Resolution of CZT Detectors Using Sub-Pixel Positioning for SPECT
NASA Astrophysics Data System (ADS)
Montémont, Guillaume; Lux, Silvère; Monnet, Olivier; Stanchina, Sylvain; Verger, Loïck
2014-10-01
CZT detectors are the basic building block of a variety of new SPECT systems. Their modularity allows the system architecture to be adapted to specific applications such as cardiac, breast, brain, or small-animal imaging. In semiconductors, a high number of electron-hole pairs is produced by a single interaction. This direct conversion process allows better energy and spatial resolution than the usual scintillation detectors based on NaI(Tl). However, it often remains unclear whether SPECT imaging can really benefit from that performance gain. We investigate, by simulation and experimentation, the system performance of a detection module based on 5-mm-thick CZT with a segmented anode at a 2.5 mm pitch. This pitch allows easy assembly of the crystal on the readout board and limits the space occupied by electronics without significantly degrading energy and spatial resolution.
PVFS 2000: An operational parallel file system for Beowulf
NASA Technical Reports Server (NTRS)
Ligon, Walt
2004-01-01
The approach has been to develop the Parallel Virtual File System version 2 (PVFS2), retaining the basic philosophy of the original file system but completely rewriting the code. The architecture comprises server and client components. BMI is the network abstraction layer, designed with a common driver and modules for each supported protocol. The interface is non-blocking and provides mechanisms for optimizations, including pinning user buffers. Currently, TCP/IP and GM (Myrinet) modules have been implemented. Trove is the storage abstraction layer. It provides for storing both data spaces and name/value pairs, and can be implemented using different underlying storage mechanisms, including native files, raw disk partitions, SQL, and other databases. The current implementation uses native files for data spaces and Berkeley DB for name/value pairs.
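The two abstraction layers can be pictured with a small sketch. The class names, the instantly-completing send, and the byte-array dataspace are simplifications invented for illustration; PVFS2's real BMI and Trove interfaces are C APIs with many more operations:

```python
import queue

class BMI:
    """Sketch of a non-blocking message interface: post a send, then
    test for completion later (TCP/IP or GM modules would subclass)."""
    def __init__(self):
        self._completed = queue.Queue()

    def post_send(self, addr, payload):
        op_id = id(payload)
        # A real module would hand the buffer to the network; here the
        # operation "completes" immediately for illustration.
        self._completed.put((op_id, addr, payload))
        return op_id

    def test(self):
        try:
            return self._completed.get_nowait()
        except queue.Empty:
            return None          # nothing finished yet: caller retries

class Trove:
    """Sketch of the storage abstraction: byte dataspaces plus
    name/value pairs (native files and Berkeley DB in PVFS2)."""
    def __init__(self):
        self.dataspaces, self.keyval = {}, {}

    def write(self, handle, offset, data):
        buf = bytearray(self.dataspaces.get(handle, b""))
        buf[offset:offset + len(data)] = data
        self.dataspaces[handle] = bytes(buf)

bmi = BMI()
op = bmi.post_send("server0", b"hello")
print(bmi.test()[0] == op)   # True

store = Trove()
store.write(7, 0, b"metadata")
print(store.dataspaces[7])
```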
NASA Astrophysics Data System (ADS)
Martin, Adrian
As the applications of mobile robotics evolve, it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Second, the real-time performance of the distributed algorithms was tested and proved effective for the moderate-sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource-starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Third, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments.
Even with unrealistically high rates of failure, the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.
User-Friendly Interface Developed for a Web-Based Service for SpaceCAL Emulations
NASA Technical Reports Server (NTRS)
Liszka, Kathy J.; Holtz, Allen P.
2004-01-01
A team at the NASA Glenn Research Center is developing a Space Communications Architecture Laboratory (SpaceCAL) for protocol development activities for coordinated satellite missions. SpaceCAL will provide a multiuser, distributed system to emulate space-based Internet architectures, backbone networks, formation clusters, and constellations. As part of a new effort in 2003, building blocks are being defined for an open distributed system to make the satellite emulation test bed accessible through an Internet connection. The first step in creating a Web-based service to control the emulation remotely is providing a user-friendly interface for encoding the data into a well-formed and complete Extensible Markup Language (XML) document. XML provides coding that allows data to be transferred between dissimilar systems. Scenario specifications include control parameters, network routes, interface bandwidths, delay, and bit error rate. Specifications for all satellites, instruments, and ground stations in a given scenario are also included in the XML document. For the SpaceCAL emulation, the XML document can be created using XForms, a Web-based forms language for data collection. Unlike older forms technology, the interactive user interface makes the science prevalent, not the data representation. Required versus optional input fields, default values, automatic calculations, data validation, and reuse will help researchers quickly and accurately define missions. XForms can apply any XML schema defined for the test mission to validate data before forwarding it to the emulation facility. New instrument definitions, facilities, and mission types can be added to the existing schema. The first prototype user interface incorporates components for interactive input and form processing.
Internet address, data rate, and the location of the facility are implemented with basic form controls with default values provided for convenience and efficiency using basic XForms operations. Because different emulation scenarios will vary widely in their component structure, more complex operations are used to add and delete facilities.
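A scenario document of the kind described above might be assembled and checked as follows. The element names, attributes, and the toy validation routine are hypothetical, since the actual SpaceCAL schema is not reproduced in this report:

```python
import xml.etree.ElementTree as ET

# Hypothetical scenario structure: one satellite with one link whose
# bandwidth, delay, and bit error rate are recorded as child elements.
scenario = ET.Element("scenario", name="demo")
sat = ET.SubElement(scenario, "satellite", id="sat1")
link = ET.SubElement(sat, "link")
ET.SubElement(link, "bandwidth_kbps").text = "512"
ET.SubElement(link, "delay_ms").text = "250"
ET.SubElement(link, "bit_error_rate").text = "1e-7"

def validate(elem):
    """Stand-in for schema validation: required fields present and numeric,
    analogous to XForms checking data before it reaches the facility."""
    for tag in ("bandwidth_kbps", "delay_ms", "bit_error_rate"):
        node = elem.find(f".//{tag}")
        assert node is not None and float(node.text) >= 0, tag

validate(scenario)
document = ET.tostring(scenario, encoding="unicode")
print(document)
```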
Design of a Knowledge Driven HIS
Pryor, T. Allan; Clayton, Paul D.; Haug, Peter J.; Wigertz, Ove
1987-01-01
Design of the software architecture for a knowledge-driven HIS is presented. In our design the frame is used as the basic unit of knowledge representation. The structure of the frame is being designed to be sufficiently universal to contain the knowledge required to implement not only expert systems but almost all traditional HIS functions, including ADT, order entry, and results review. The design incorporates a two-level format for the knowledge. The first level, stored as ASCII records, is used to maintain the knowledge base, while the second level, converted by special knowledge compilers into standard computer languages, is used for efficient implementation of the knowledge applications.
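The two-level idea, a human-maintainable ASCII record compiled into an executable form, can be sketched as follows. The frame fields, the record syntax, and the use of Python's `compile`/`eval` as the "knowledge compiler" are illustrative assumptions, not the system described in the paper:

```python
# Level 1: the frame as a maintainable ASCII record.
frame_src = """
name: tachycardia_alert
if: heart_rate > 120
then: alert
"""

def compile_frame(src):
    """Level 2: the 'knowledge compiler' turns the ASCII record into an
    executable rule for efficient use by the running HIS."""
    fields = dict(line.split(": ", 1) for line in src.strip().splitlines())
    condition = compile(fields["if"], fields["name"], "eval")

    def rule(patient):
        # Evaluate the compiled condition against a patient record.
        return fields["then"] if eval(condition, {}, patient) else None

    return fields["name"], rule

name, rule = compile_frame(frame_src)
print(name, rule({"heart_rate": 140}))   # tachycardia_alert alert
```

The same ASCII level could hold frames for order entry or results review; only the compiler's target changes.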
A protect solution for data security in mobile cloud storage
NASA Astrophysics Data System (ADS)
Yu, Xiaojun; Wen, Qiaoyan
2013-03-01
Accessing cloud storage from mobile devices is popular. However, this application carries data security risks, especially data leakage and privacy violations. The risk exists not only in the cloud storage system but also in the mobile client platform. To reduce these risks, this paper proposes a new security solution that makes full use of searchable encryption and trusted computing technology. Given the performance limits of mobile devices, it proposes a trusted-proxy-based protection architecture. The basic design ideas, deployment model, and key flows are detailed. Security and performance analyses show the solution's advantages.
Walt, David R
2010-01-01
This tutorial review describes how fibre optic microarrays can be used to create a variety of sensing and measurement systems. This review covers the basics of optical fibres and arrays, the different microarray architectures, and describes a multitude of applications. Such arrays enable multiplexed sensing for a variety of analytes including nucleic acids, vapours, and biomolecules. Polymer-coated fibre arrays can be used for measuring microscopic chemical phenomena, such as corrosion and localized release of biochemicals from cells. In addition, these microarrays can serve as a substrate for fundamental studies of single molecules and single cells. The review covers topics of interest to chemists, biologists, materials scientists, and engineers.
Optoelectronic Reservoir Computing
Paquot, Y.; Duport, F.; Smerieri, A.; Dambre, J.; Schrauwen, B.; Haelterman, M.; Massar, S.
2012-01-01
Reservoir computing is a recently introduced, highly efficient bio-inspired approach for processing time-dependent data. The basic scheme of reservoir computing consists of a nonlinear recurrent dynamical system coupled to a single input layer and a single output layer. Within these constraints many implementations are possible. Here we report an optoelectronic implementation of reservoir computing based on a recently proposed architecture consisting of a single nonlinear node and a delay line. Our implementation is sufficiently fast for real-time information processing. We illustrate its performance on tasks of practical importance such as nonlinear channel equalization and speech recognition, and obtain results comparable to state-of-the-art digital implementations. PMID:22371825
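A minimal sketch of the single-node-plus-delay-line reservoir looks like this. The mask values, the tanh nonlinearity, the feedback gain, the toy task, and the online LMS readout (the paper trains its linear readout offline) are all illustrative assumptions:

```python
import math
import random

random.seed(1)

# Single nonlinear node plus a delay line, with N "virtual nodes"
# read off the delay line (time-multiplexed reservoir).
N = 20
mask = [random.uniform(-1, 1) for _ in range(N)]   # fixed random input mask
delay = [0.0] * N

def step(u):
    """Feed one input sample through the reservoir; each virtual node
    mixes the masked input with the previous node's delayed state."""
    for i in range(N):
        feedback = delay[i - 1] if i else delay[-1]
        delay[i] = math.tanh(2.0 * mask[i] * u + 0.8 * feedback)
    return list(delay)

# Toy task: reproduce a one-step-delayed copy of a sine input with a
# linear readout, trained online here by the LMS rule for simplicity.
w = [0.0] * N
inputs = [math.sin(0.3 * t) for t in range(500)]
target = [0.0] + inputs[:-1]
errs = []
for u, y in zip(inputs, target):
    x = step(u)
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    errs.append(abs(y - y_hat))
    w = [wi + 0.01 * (y - y_hat) * xi for wi, xi in zip(w, x)]

# The readout error shrinks as the linear weights are learned.
print(sum(errs[-100:]) < sum(errs[:100]))
```

Only the readout weights are trained; the reservoir itself (mask, gains, nonlinearity) stays fixed, which is what makes hardware implementations such as the optoelectronic one practical.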
A Real-Time Rover Executive based On Model-Based Reactive Planning
NASA Technical Reports Server (NTRS)
Bias, M. Bernardine; Lemai, Solange; Muscettola, Nicola; Korsmeyer, David (Technical Monitor)
2003-01-01
This paper reports on the experimental verification of the ability of IDEA (Intelligent Distributed Execution Architecture) to operate effectively at multiple levels of abstraction in an autonomous control system. The basic hypothesis of IDEA is that a large control system can be structured as a collection of interacting control agents, each organized around the same fundamental structure. Two IDEA agents, a system-level agent and a mission-level agent, were designed and implemented to autonomously control the K9 rover in real time. The system is evaluated in a scenario where the rover must acquire images from a specified set of locations. The IDEA agents are responsible for enabling the rover to achieve its goals while monitoring the execution and safety of the rover and recovering from dangerous states when necessary. Experiments carried out both in simulation and on the physical rover produced highly promising results.
Comparative Study of Neural Network Frameworks for the Next Generation of Adaptive Optics Systems.
González-Gutiérrez, Carlos; Santos, Jesús Daniel; Martínez-Zarzuela, Mario; Basden, Alistair G; Osborn, James; Díaz-Pernas, Francisco Javier; De Cos Juez, Francisco Javier
2017-06-02
Many of the next generation of adaptive optics systems on large and extremely large telescopes require tomographic techniques in order to correct for atmospheric turbulence over a large field of view. Multi-object adaptive optics is one such technique. In this paper, different implementations of a tomographic reconstructor based on a machine learning architecture named "CARMEN" are presented. Basic concepts of adaptive optics are introduced first, with a short explanation of three different control systems used on real telescopes and the sensors utilised. The operation of the reconstructor, along with the three neural network frameworks used, and the developed CUDA code are detailed. Changes to the size of the reconstructor influence the training and execution time of the neural network. The native CUDA code turns out to be the best choice for all the systems, although some of the other frameworks offer good performance under certain circumstances.
System Requirement Analyses for Ubiquitous Environment Management System
NASA Astrophysics Data System (ADS)
Lim, Sang Boem; Gil, Kyung Jun; Choe, Ho Rim; Eo, Yang Dam
We are living in a new stage of society. The U-City introduces to the future city a new paradigm that cannot be achieved in the traditional city. Korea is one of the most active countries in constructing U-Cities, building on advances in IT technologies - especially the high-speed network deployed throughout the country [1]. People are realizing that ubiquitous services are a key factor in the success of a U-City. Among these U-services, the U-security service is one of the most important. Nowadays we must be concerned not only with traditional threats but also with personal information. Since the apartment complex is the most common residence type in Korea, we are developing security rules and a system based on analyses of apartment complexes and their assets. Based on these analyses, we are developing apartment-complex security using various technologies, including the home network system. We will also discuss a basic home network security architecture.
A Module for Adaptive Course Configuration and Assessment in Moodle
NASA Astrophysics Data System (ADS)
Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia
Personalization and adaptation are among the main challenges in the field of e-learning, where currently only a few Learning Management Systems, mostly experimental ones, support such features. In this work we present an architecture that allows Moodle to interact with the Lecomps system, an adaptive learning system developed earlier by our research group that has so far worked in a stand-alone modality. In particular, the Lecomps responsibilities are limited to the production of personalized learning-object sequences and the management of the student model, leaving all other course-delivery activities to Moodle. The Lecomps system supports the dynamic adaptation of learning-object sequences based on the student model, i.e., the learner's Cognitive State and Learning Style. Basically, this work integrates two main Lecomps tasks into Moodle, to be directly managed by it: Authentication and Quizzes.
A reference architecture for integrated EHR in Colombia.
de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd
2011-01-01
The implementation of national EHR infrastructures has to start with a detailed definition of the overall structure and behavior of the EHR system (the system architecture). Architectures have to be open, scalable, flexible, user-accepted and user-friendly, trustworthy, and based on standards, including terminologies and ontologies. The GCM provides an architectural framework created for analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of systems architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces to support the exchange of clinical documents, offering an open, scalable, flexible, and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged.
A Comparative Study : Microprogrammed Vs Risc Architectures For Symbolic Processing
NASA Astrophysics Data System (ADS)
Heudin, J. C.; Metivier, C.; Demigny, D.; Maurin, T.; Zavidovique, B.; Devos, F.
1987-05-01
It is often claimed that conventional computers are not well suited for human-like tasks: vision (image processing), intelligence (symbolic processing), and so on. In the particular case of Artificial Intelligence, dynamic type-checking is one example of a basic task that must be improved. The solution implemented in most Lisp workstations consists of a microprogrammed architecture with a tagged memory. Another way to gain efficiency is to design an instruction set well suited to symbolic processing, which reduces the semantic gap between the high-level language and the machine code. In this framework, the RISC concept provides a convenient approach to studying new architectures for symbolic processing. This paper compares both approaches and describes our project of designing a compact symbolic processor for Artificial Intelligence applications.
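Dynamic type-checking with tagged words, the mechanism that microprogrammed Lisp machines implement in hardware, can be sketched as follows. The two-bit tag layout and the tag assignments are invented for illustration; real machines use various tag widths and placements:

```python
# Sketch of tagged words: the low two bits of each machine word carry
# a type tag, checked before every primitive operation.
TAG_BITS = 2
TAG_FIXNUM, TAG_CONS, TAG_SYMBOL = 0, 1, 2

def box(value, tag):
    return (value << TAG_BITS) | tag

def tag_of(word):
    return word & ((1 << TAG_BITS) - 1)

def untag(word):
    return word >> TAG_BITS

def checked_add(a, b):
    """The microcode (or a short RISC instruction sequence) verifies
    both tags before the integer add; a mismatch traps to the runtime."""
    if tag_of(a) != TAG_FIXNUM or tag_of(b) != TAG_FIXNUM:
        raise TypeError("fixnum expected")
    return box(untag(a) + untag(b), TAG_FIXNUM)

x, y = box(20, TAG_FIXNUM), box(22, TAG_FIXNUM)
print(untag(checked_add(x, y)))   # 42
```

The microprogrammed approach hides the tag check inside one instruction; the RISC approach exposes it as a few cheap instructions the compiler can often optimize away, which is exactly the trade-off the paper compares.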
Dynamical principles in neuroscience
NASA Astrophysics Data System (ADS)
Rabinovich, Mikhail I.; Varona, Pablo; Selverston, Allen I.; Abarbanel, Henry D. I.
2006-10-01
Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience?
From self-observation to imitation: visuomotor association on a robotic hand.
Chaminade, Thierry; Oztop, Erhan; Cheng, Gordon; Kawato, Mitsuo
2008-04-15
Being at the crux of human cognition and behaviour, imitation has become the target of investigations ranging from experimental psychology and neurophysiology to computational sciences and robotics. It is often assumed that imitation is innate, but it has more recently been argued, both theoretically and experimentally, that basic forms of imitation could emerge as a result of self-observation. Here, we tested this proposal on a realistic experimental platform, comprising an associative network linking a 16-degree-of-freedom robotic hand and a simple visual system. We report that this minimal visuomotor association is sufficient to bootstrap basic imitation. Our results indicate that crucial features of human imitation, such as generalization to new actions, may emerge from a connectionist associative network. Therefore, we suggest that a behaviour as complex as imitation could be founded, at the neuronal level, on basic mechanisms of associative learning, a notion supported by a recent proposal on the developmental origin of mirror neurons. Our approach can be applied to the development of realistic cognitive architectures for humanoid robots, as well as shedding new light on the cognitive processes at play in early human cognitive development.
Spacelab output processing system architectural study
NASA Technical Reports Server (NTRS)
1977-01-01
Two different system architectures are presented. The two architectures are derived from two different data flows within the Spacelab Output Processing System. The major differences between these system architectures are in the position of the decommutation function (the first architecture performs decommutation in the latter half of the system and the second architecture performs that function in the front end of the system). For examination, the system was divided into five stand-alone subsystems: Work Assembler, Mass Storage System, Output Processor, Peripheral Pool, and Resource Monitor. The work load of each subsystem was estimated independent of the specific devices to be used. The candidate devices were surveyed from a wide sampling of off-the-shelf devices. Analytical expressions were developed to quantify the projected workload in conjunction with typical devices which would adequately handle the subsystem tasks. All of the study efforts were then directed toward preparing performance and cost curves for each architecture subsystem.
The System of Systems Architecture Feasibility Assessment Model
2016-06-01
Dissertation by Stephen E. Gillespie, June 2016; dissertation supervisor Eugene Paulo. Introduces the SoS architecture feasibility assessment model (SoS-AFAM), extending current model-based systems engineering (MBSE) and SoS engineering.
Nation-wide primary healthcare research network: a privacy protection assessment.
De Clercq, Etienne; Van Casteren, Viviane; Bossuyt, Nathalie; Moreels, Sarah; Goderis, Geert; Bartholomeeusen, Stefaan; Bonte, Pierre; Bangels, Marc
2012-01-01
Efficiency and privacy protection are essential when setting up nationwide research networks. This paper investigates the extent to which basic services developed to support the provision of care can be re-used, whilst preserving an acceptable privacy protection level, within a large Belgian primary care research network. The generic sustainable confidentiality management model used to assess the privacy protection level of the selected network architecture is described. A short analysis of the current architecture is provided. Our generic model could also be used in other countries.
Evaluation of floating-point sum or difference of products in carry-save domain
NASA Technical Reports Server (NTRS)
Wahab, A.; Erdogan, S.; Premkumar, A. B.
1992-01-01
An architecture to evaluate a 24-bit floating-point sum or difference of products using modified sequential carry-save multipliers with extensive pipelining is described. The basic building block of the architecture is a carry-save multiplier with built-in mantissa alignment for the summation during the multiplication cycles. A carry-save adder, capable of mantissa alignment, correctly positions products with the current carry-save sum. Carry propagation in individual multipliers is avoided and is only required once to produce the final result.
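The carry-save principle behind the architecture above can be sketched in software. The hardware described also performs mantissa alignment for floating-point operands, which is omitted here; this integer sketch shows only the core idea that carries never ripple until one final add.

```python
# Carry-save arithmetic keeps a running result as two bit-vectors
# (sum, carry), so adding each new operand is a constant-depth
# 3:2 compression; a carry-propagating add happens only once, at the end.
def carry_save_add(x, y, z):
    """3:2 compressor: reduce three addends to an equivalent (sum, carry) pair."""
    s = x ^ y ^ z                              # bitwise sum, ignoring carries
    c = ((x & y) | (x & z) | (y & z)) << 1     # carry bits, shifted into position
    return s, c

def csa_sum(values):
    s, c = 0, 0
    for v in values:
        s, c = carry_save_add(s, c, v)   # no carry propagation inside the loop
    return s + c                         # single final carry-propagating add

total = csa_sum([13, 200, 7, 99])  # 319
```

The invariant x + y + z == s + c is what lets the hardware avoid carry propagation in the individual multipliers, requiring it only once to produce the final result.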
A Reference Architecture for Space Information Management
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.
2006-01-01
We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed, and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.
Robust Software Architecture for Robots
NASA Technical Reports Server (NTRS)
Aghazarian, Hrand; Baumgartner, Eric; Garrett, Michael
2009-01-01
Robust Real-Time Reconfigurable Robotics Software Architecture (R4SA) is the name of both a software architecture and software that embodies the architecture. The architecture was conceived in the spirit of current practice in designing modular, hard real-time aerospace systems. The architecture facilitates the integration of new sensory, motor, and control software modules into the software of a given robotic system. R4SA was developed for initial application aboard exploratory mobile robots on Mars, but is adaptable to terrestrial robotic systems, real-time embedded computing systems in general, and robotic toys.
Molecular communication among biological nanomachines: a layered architecture and research issues.
Nakano, Tadashi; Suda, Tatsuya; Okaie, Yutaka; Moore, Michael J; Vasilakos, Athanasios V
2014-09-01
Molecular communication is an emerging communication paradigm for biological nanomachines. It allows biological nanomachines to communicate through exchanging molecules in an aqueous environment and to perform collaborative tasks through integrating functionalities of individual biological nanomachines. This paper develops the layered architecture of molecular communication and describes research issues that molecular communication faces at each layer of the architecture. Specifically, this paper applies a layered architecture approach, traditionally used in communication networks, to molecular communication, decomposes complex molecular communication functionality into a set of manageable layers, identifies basic functionalities of each layer, and develops a descriptive model consisting of key components of the layer for each layer. This paper also discusses open research issues that need to be addressed at each layer. In addition, this paper provides an example design of targeted drug delivery, a nanomedical application, to illustrate how the layered architecture helps design an application of molecular communication. The primary contribution of this paper is to provide an in-depth architectural view of molecular communication. Establishing a layered architecture of molecular communication helps organize various research issues and design concerns into layers that are relatively independent of each other, and thus accelerates research in each layer and facilitates the design and development of applications of molecular communication.
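The layered decomposition described above can be sketched as a protocol stack in which each layer encapsulates on the way down and decapsulates on the way up, in the spirit of communication networks. The layer names below are illustrative placeholders, not the paper's exact decomposition.

```python
# Toy layered stack: a message passes down through each layer at the
# transmitter (gaining a header) and back up at the receiver (shedding it),
# so each layer's functionality stays independent of the others.
class Layer:
    def __init__(self, name):
        self.name = name

    def send(self, payload):
        return f"{self.name}|{payload}"        # encapsulate with this layer's header

    def receive(self, payload):
        header, _, rest = payload.partition("|")
        assert header == self.name             # strip only this layer's own header
        return rest

# Hypothetical three-layer stack, top to bottom.
stack = [Layer("application"), Layer("link"), Layer("physical")]

msg = "drug-release-signal"
for layer in stack:                 # transmitter: top of stack downwards
    msg = layer.send(msg)
for layer in reversed(stack):       # receiver: bottom of stack upwards
    msg = layer.receive(msg)
```

The point of the exercise is the one the paper makes: because each layer touches only its own header, research on one layer (say, molecule propagation at the bottom) can proceed independently of the layers above it.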
Fault tolerant architectures for integrated aircraft electronics systems
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.
1983-01-01
Work on possible architectures for future flight control computer systems is described. Ada for Fault-Tolerant Systems, the NETS Network Error-Tolerant System architecture, and voting in asynchronous systems are covered.
Qualitative similarities in the visual short-term memory of pigeons and people.
Gibson, Brett; Wasserman, Edward; Luck, Steven J
2011-10-01
Visual short-term memory plays a key role in guiding behavior, and individual differences in visual short-term memory capacity are strongly predictive of higher cognitive abilities. To provide a broader evolutionary context for understanding this memory system, we directly compared the behavior of pigeons and humans on a change detection task. Although pigeons had a lower storage capacity and a higher lapse rate than humans, both species stored multiple items in short-term memory and conformed to the same basic performance model. Thus, despite their very different evolutionary histories and neural architectures, pigeons and humans have functionally similar visual short-term memory systems, suggesting that the functional properties of visual short-term memory are subject to similar selective pressures across these distant species.
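Capacity estimates of the kind compared above are commonly computed with Cowan's K, a standard estimator for single-probe change detection. This is the general family of model behind such comparisons; the paper's full model also includes a lapse-rate parameter, omitted in this sketch.

```python
# Cowan's K for single-probe change detection:
# K = set_size * (hit_rate - false_alarm_rate),
# i.e., the estimated number of items held in visual short-term memory.
def cowan_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate)

# Hypothetical session: 4 items on screen, 85% hits, 15% false alarms.
k = cowan_k(set_size=4, hit_rate=0.85, false_alarm_rate=0.15)
```

A lower K for pigeons than humans, as the abstract reports, would show up here as a smaller hit/false-alarm separation at the same set size.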
Analysis of the packet formation process in packet-switched networks
NASA Astrophysics Data System (ADS)
Meditch, J. S.
Two new queueing system models for the packet formation process in packet-switched telecommunication networks are developed, and their applications in process stability, performance analysis, and optimization studies are illustrated. The first, an M/M/1 queueing system characterization of the process, is a highly aggregated model which is useful for preliminary studies. The second, a marked extension of an earlier M/G/1 model, permits one to investigate the stability, performance characteristics, and design of the packet formation process in terms of the details of processor architecture and of hardware and software implementations, with processor structure and as many parameters as desired treated as variables. The two new models, together with the earlier M/G/1 characterization, span the spectrum of modeling complexity for the packet formation process from basic to advanced.
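The aggregated M/M/1 characterization mentioned above has closed-form results that make it useful for exactly the kind of preliminary stability and performance studies described. The numeric rates below are illustrative.

```python
# Standard M/M/1 results: with Poisson arrivals (rate lambda) and
# exponential service (rate mu), the queue is stable only when
# utilization rho = lambda / mu < 1.
def mm1(arrival_rate, service_rate):
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("unstable: queue grows without bound")
    mean_in_system = rho / (1 - rho)                      # L, mean packets in system
    mean_time_in_system = 1 / (service_rate - arrival_rate)  # W, mean delay
    return rho, mean_in_system, mean_time_in_system

# Hypothetical packet-formation processor: 80 packets/s arriving,
# 100 packets/s service capacity.
rho, L, W = mm1(arrival_rate=80.0, service_rate=100.0)
# rho = 0.8, L = 4 packets, W = 0.05 s; Little's law holds: L = arrival_rate * W
```

The stability check is the point of such preliminary studies: as rho approaches 1, both L and W blow up, flagging a processor that cannot keep up with the packet formation load.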
Black Sea GIS developed in MHI
NASA Astrophysics Data System (ADS)
Zhuk, E.; Khaliulin, A.; Zodiatis, G.; Nikolaidis, A.; Isaeva, E.
2016-08-01
The work aims at creating the Black Sea geoinformation system (GIS) and complementing it with a model bank. The software for data access and visualization was developed using a client-server architecture. A map service based on MapServer and the MySQL data management system were chosen for the Black Sea GIS. PHP modules and Python scripts are used to provide data access, processing, and exchange between the client application and the server. According to the basic data types, the module structure of the GIS was developed. Each type of data is matched to a module which allows selection and visualization of the data. At present, the GIS is being complemented with a model bank (models built into the GIS) and users' models (programs launched on users' PCs but receiving and displaying data via the GIS).
A Robust Scalable Transportation System Concept
NASA Technical Reports Server (NTRS)
Hahn, Andrew; DeLaurentis, Daniel
2006-01-01
This report documents the 2005 Revolutionary System Concept for Aeronautics (RSCA) study entitled "A Robust, Scalable Transportation System Concept". The objective of the study was to generate, at a high level of abstraction, characteristics of a new concept for the National Airspace System, or the new NAS, under which transportation goals such as increased throughput, delay reduction, and improved robustness could be realized. Since such an objective can be overwhelmingly complex if pursued at the lowest levels of detail, a System-of-Systems (SoS) approach was instead adopted to model alternative air transportation architectures at a high level. The SoS approach allows the consideration of not only the technical aspects of the NAS, but also incorporates policy, socio-economic, and alternative transportation system considerations into one architecture. While the representations of the individual systems are basic, the higher-level approach allows for ways to optimize the SoS at the network level, determining the best topology (i.e., configuration of nodes and links). The final product (concept) is a set of rules of behavior and network structure that not only satisfies national transportation goals, but represents the high-impact rules that accomplish those goals by getting the agents to "do the right thing" naturally. The novel combination of agent-based modeling and network theory provides the core analysis methodology in the System-of-Systems approach. Our method of approach is non-deterministic, which means, fundamentally, that it asks and answers different questions than deterministic models. The non-deterministic method is necessary primarily due to our marriage of human systems with technological ones in a partially unknown set of future worlds. Our goal is to understand and simulate how the SoS, human and technological components combined, evolves.
Hu, T H; Wan, L; Liu, T A; Wang, M W; Chen, T; Wang, Y H
2017-12-01
Deep learning and neural network models have been new research directions and hot issues in the fields of machine learning and artificial intelligence in recent years. Deep learning has made breakthroughs in image and speech recognition, and has also been used extensively in face recognition and information retrieval because of its particular strengths. Bone X-ray images express different variations in black-white-gray gradations, which have image features of black and white contrasts and level differences. Based on these advantages of deep learning in image recognition, we combine it with research on bone age assessment to provide basic data for constructing a forensic automatic system of bone age assessment. This paper reviews the basic concepts and network architectures of deep learning, describes its recent research progress on image recognition in different research fields at home and abroad, and explores its advantages and application prospects in bone age assessment. Copyright© by the Editorial Department of Journal of Forensic Medicine.
NASA Astrophysics Data System (ADS)
Yen, Y. N.; Weng, K. H.; Huang, H. Y.
2013-07-01
After over 30 years of practise and development, Taiwan's architectural conservation field is moving rapidly into digitalization and its applications. Compared to modern buildings, traditional Chinese architecture has considerably more complex elements and forms. To document and digitize these unique heritages in their conservation lifecycle is a new and important issue. This article takes the caisson ceiling of the Taipei Confucius Temple, octagonal with 333 elements in 8 types, as a case study for digitization practise. The application of metadata representation and 3D modelling are the two key issues to discuss. Both Revit and SketchUp were applied in this research to compare their effectiveness for metadata representation. Due to limitations of the Revit database, the final 3D models were built with SketchUp. The research found that, firstly, cultural heritage databases must convey that while many elements are similar in appearance, they are unique in value; although 3D simulations help the general understanding of architectural heritage, software such as Revit and SketchUp, at this stage, could only be used to model basic visual representations, and is ineffective in documenting additional critical data of individually unique elements. Secondly, when establishing conservation lifecycle information for application in management systems, a full and detailed presentation of the metadata must also be implemented; the existing applications of BIM in managing conservation lifecycles are still insufficient. The research recommends SketchUp as a tool for present modelling needs, and BIM for sharing data between users, but the implementation of metadata representation is of the utmost importance.
Modeling functional neuroanatomy for an anatomy information system.
Niggemann, Jörg M; Gebert, Andreas; Schulz, Stefan
2008-01-01
Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the "internal wiring" of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Internal wiring as well as functional pathways can correctly be represented and tracked. This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems.
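The core representational move above, using unnamed neuron groups rather than whole brain structures as nodes, can be sketched as a small graph. The pupillary light reflex pathway (retina, pretectum, Edinger-Westphal nucleus, ciliary ganglion) is the example the abstract cites; the group labels below are illustrative placeholders, not the article's formal model.

```python
# Each node is a (structure, neuron-group) pair, so two independent
# connections leaving the same structure remain distinguishable,
# the "internal wiring" that structure-level ontologies cannot represent.
connections = {
    ("retina", "group_a"):           [("pretectum", "group_b")],
    ("pretectum", "group_b"):        [("edinger_westphal", "group_c")],
    ("edinger_westphal", "group_c"): [("ciliary_ganglion", "group_d")],
    # A second, independent connection leaving the same retina structure:
    ("retina", "group_e"):           [("lgn", "group_f")],
}

def trace_pathway(start):
    """Follow a chain of neuron-group connections from a starting group."""
    path, node = [start], start
    while node in connections:
        node = connections[node][0]
        path.append(node)
    return path

reflex = trace_pathway(("retina", "group_a"))   # pupillary light reflex chain
visual = trace_pathway(("retina", "group_e"))   # the independent visual pathway
```

A structure-level graph would conflate the two retinal outputs into one node; the group-level graph keeps the reflex pathway and the visual pathway distinct, which is exactly the inference capability the model claims.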
Uncoupling File System Components for Bridging Legacy and Modern Storage Architectures
NASA Astrophysics Data System (ADS)
Golpayegani, N.; Halem, M.; Tilmes, C.; Prathapan, S.; Earp, D. N.; Ashkar, J. S.
2016-12-01
Long-running Earth Science projects can span decades of architectural changes in both processing and storage environments. As storage architecture designs change over decades, such projects need to adjust their tools, systems, and expertise to properly integrate new technologies with their legacy systems. Traditional file systems lack the necessary support to accommodate such hybrid storage infrastructures, resulting in more complex tool development to encompass all possible storage architectures used for the project. The MODIS Adaptive Processing System (MODAPS) and the Level 1 and Atmospheres Archive and Distribution System (LAADS) are an example of a project spanning several decades which has evolved into a hybrid storage architecture. MODAPS/LAADS has developed the Lightweight Virtual File System (LVFS), which ensures a seamless integration of all the different storage architectures, from standard block-based POSIX-compliant storage disks, to object-based architectures such as the S3-compliant HGST Active Archive System, to the Seagate Kinetic disks utilizing the Kinetic Protocol. With LVFS, all analysis and processing tools used for the project continue to function unmodified regardless of the underlying storage architecture, enabling MODAPS/LAADS to easily integrate any new storage architecture without the costly need to modify existing tools. Most file systems are designed as a single application responsible for using metadata to organize the data into a tree, determine the location for data storage, and provide a method of data retrieval. We will show how LVFS' unique approach of treating these components in a loosely coupled fashion enables it to merge different storage architectures into a single uniform storage system which bridges the underlying hybrid architecture.
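The loose-coupling idea described above can be sketched as a uniform storage interface with interchangeable backends: tools call one API whether the bytes land on a POSIX file system or in an S3-style object store. The class and function names below are illustrative, not LVFS's actual API.

```python
# Processing tools depend only on the abstract interface; block-based and
# object-based backends differ internally (directory tree vs. flat key space)
# but are indistinguishable from the caller's side.
import abc
import pathlib
import tempfile

class StorageBackend(abc.ABC):
    @abc.abstractmethod
    def write(self, name, data): ...
    @abc.abstractmethod
    def read(self, name): ...

class PosixBackend(StorageBackend):
    """Traditional block storage: files under a directory root."""
    def __init__(self, root):
        self.root = pathlib.Path(root)
    def write(self, name, data):
        (self.root / name).write_bytes(data)
    def read(self, name):
        return (self.root / name).read_bytes()

class ObjectBackend(StorageBackend):
    """Stand-in for an S3-style object store: a flat key space, no directories."""
    def __init__(self):
        self.objects = {}
    def write(self, name, data):
        self.objects[name] = data
    def read(self, name):
        return self.objects[name]

def archive(backend, granule, data):
    backend.write(granule, data)   # the tool never branches on backend type
    return backend.read(granule)

with tempfile.TemporaryDirectory() as d:
    posix_result = archive(PosixBackend(d), "granule.hdf", b"\x00\x01")
object_result = archive(ObjectBackend(), "granule.hdf", b"\x00\x01")
```

Swapping in a new backend then means writing one new class, not modifying every analysis tool, which is the cost saving the abstract describes.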
A Systems Engineering Approach to Architecture Development
NASA Technical Reports Server (NTRS)
Di Pietro, David A.
2014-01-01
Architecture development is conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this presentation characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.
A Systems Engineering Approach to Architecture Development
NASA Technical Reports Server (NTRS)
Di Pietro, David A.
2015-01-01
Architecture development is often conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this paper characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.
Advanced computer architecture specification for automated weld systems
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.
NASA Technical Reports Server (NTRS)
1976-01-01
Only a few efforts are currently underway to develop an adequate technology base for the various themes. Particular attention must be given to software commonality and evolutionary capability; to increased system integrity and autonomy; and to improved communications among the program users, the program developers, and the programs themselves. There is a need for quantum improvement in software development methods and for increased awareness of software by all concerned. Major thrusts identified include: (1) data and systems management; (2) software technology for autonomous systems; (3) technology and methods for improving the software development process; (4) advances related to systems of software elements, including their architecture, their attributes as systems, and their interfaces with users and other systems; and (5) applications of software, including both the basic algorithms used in a number of applications and the software specific to a particular theme or discipline area. The impact of each theme on software is assessed.
A 50Mbit/Sec. CMOS Video Linestore System
NASA Astrophysics Data System (ADS)
Jeung, Yeun C.
1988-10-01
This paper reports the architecture, design and test results of a CMOS single chip programmable video linestore system which has 16-bit data words with 1024 bit depth. The delay is fully programmable from 9 to 1033 samples by a 10-bit binary control word. The large 16-bit data word width makes the chip useful for a wide variety of digital video signal processing applications such as DPCM coding, High-Definition TV, and video scramblers/descramblers. For those applications, the conventional large fixed-length shift register or static RAM scheme is not very popular because of its lack of versatility, high power consumption, and required support circuitry. The very high throughput of 50 Mbit/sec is made possible by a highly parallel, pipelined dynamic memory architecture implemented in a 2-um N-well CMOS technology. The basic cell of the programmable video linestore chip is a four-transistor dynamic RAM element. This cell comprises the majority of the chip's real estate, consumes no static power, and gives good noise immunity to the simply designed sense amplifier. The chip design was done using Bellcore's version of the MULGA virtual grid symbolic layout system. The chip contains approximately 90,000 transistors in an area of 6.5 x 7.5 square mm and the I/Os are TTL compatible. The chip is packaged in a 68-pin leadless ceramic chip carrier package.
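Functionally, the programmable linestore above behaves like a ring buffer: a sample written now re-emerges exactly `delay` clock cycles later. This sketch models that behavior only, not the chip's pipelined dynamic-memory implementation; the 9 to 1033 sample range comes from the abstract.

```python
# Ring-buffer model of a programmable video delay line. On each clock,
# the oldest stored sample is read out and the new one takes its slot,
# giving a fixed delay equal to the buffer length.
class Linestore:
    def __init__(self, delay):
        assert 9 <= delay <= 1033        # programmable range from the paper
        self.buf = [0] * delay           # 16-bit samples, zero at power-up
        self.i = 0

    def clock(self, sample):
        out = self.buf[self.i]               # oldest sample comes out...
        self.buf[self.i] = sample & 0xFFFF   # ...newest goes into its slot
        self.i = (self.i + 1) % len(self.buf)
        return out

ls = Linestore(delay=9)
outputs = [ls.clock(n) for n in range(20)]
# The first 9 outputs are the zero-initialized contents; from clock 9
# onward the input sequence replays, delayed by exactly 9 samples.
```

Programmability here is just the buffer length; in the chip it is the 10-bit control word selecting the effective depth of the DRAM array.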
EnerCage: A Smart Experimental Arena With Scalable Architecture for Behavioral Experiments
Jow, Uei-Ming; McMenamin, Peter; Kiani, Mehdi; Manns, Joseph R.; Ghovanloo, Maysam
2014-01-01
Wireless power, when coupled with miniaturized implantable electronics, has the potential to provide a solution to several challenges facing neuroscientists during basic and preclinical studies with freely behaving animals. The EnerCage system is one such solution as it allows for uninterrupted electrophysiology experiments over extended periods of time and vast experimental arenas, while eliminating the need for bulky battery payloads or tethering. It has a scalable array of overlapping planar spiral coils (PSCs) and three-axis magnetic sensors for focused wireless power transmission to devices on freely moving subjects. In this paper, we present the first fully functional EnerCage system, in which the number of PSC drivers and magnetic sensors was reduced to one-third of the number used in our previous design via multicoil coupling. The power transfer efficiency (PTE) has been improved to 5.6% at a 120 mm coupling distance and a 48.5 mm lateral misalignment (worst case) between the transmitter (Tx) array and receiver (Rx) coils. The new EnerCage system is equipped with an Ethernet backbone, further supporting its modular/scalable architecture, which, in turn, allows experimental arenas with arbitrary shapes and dimensions. A set of experiments on a freely behaving rat were conducted by continuously delivering 20 mW to the electronics in the animal headstage for more than one hour in a powered 3538 cm2 experimental area. PMID:23955695
Archetype-based semantic integration and standardization of clinical data.
Moner, David; Maldonado, Jose A; Bosca, Diego; Fernandez, Jesualdo T; Angulo, Carlos; Crespo, Pere; Vivancos, Pedro J; Robles, Montserrat
2006-01-01
One of the basic needs of any healthcare professional is to be able to access clinical information about patients in an understandable and normalized way. The lifelong clinical information of any person supported by electronic means configures his/her Electronic Health Record (EHR). This information is usually distributed among several independent and heterogeneous systems that may be syntactically or semantically incompatible. The Dual Model architecture has appeared as a new proposal for maintaining a homogeneous representation of the EHR with a clear separation between information and knowledge. Information is represented by a Reference Model which describes common data structures with minimal semantics. Knowledge is specified by archetypes, which are formal representations of clinical concepts built upon a particular Reference Model. This kind of architecture was originally conceived for the implementation of new clinical information systems, but archetypes can also be used for integrating data from existing, non-normalized systems, at the same time adding semantic meaning to the integrated data. In this paper we explain the possible use of a Dual Model approach for semantic integration and standardization of heterogeneous clinical data sources and present LinkEHR-Ed, a tool for developing archetypes as elements for integration purposes. LinkEHR-Ed has been designed to be easily used by the two main participants in the creation of archetypes for clinical data integration: the health domain expert and the information technologies domain expert.
Software Architecture for Big Data Systems
2014-03-27
Software Architecture: Trends and New Directions (#SEIswArch), © 2014 Carnegie Mellon University.
Data Compression for Maskless Lithography Systems: Architecture, Algorithms and Implementation
2008-05-19
Vito Dai, Electrical Engineering and Computer Sciences. Copyright 2008 by Vito Dai.
Hybridization of Architectural Styles for Integrated Enterprise Information Systems
NASA Astrophysics Data System (ADS)
Bagusyte, Lina; Lupeikiene, Audrone
Current enterprise systems engineering theory does not provide adequate support for the development of information systems on demand; more precisely, such support is still taking shape. This chapter proposes the main architectural decisions that underlie the design of integrated enterprise information systems. It argues for extending service-oriented architecture by merging it with the component-based paradigm at the design stage and by using connectors of different architectural styles. The suitability of the general-purpose language SysML for modeling integrated enterprise information system architectures is described and supporting arguments are presented.
Incorporation of EGPWS in the NASA Ames Research Center 747-400 Flight Simulator
NASA Technical Reports Server (NTRS)
Sallant, Ghislain; DeGennaro, Robert A.
2001-01-01
The NASA Ames Research Center CAE Boeing 747-400 flight simulator is used primarily for the study of human factors in aviation safety. The simulator is constantly upgraded to maintain a configuration match to a specific United Airlines aircraft and maintains the highest level of FAA certification to ensure the credibility of research results. United's 747-400 fleet, and hence the simulator, are transitioning from the older Ground Proximity Warning System (GPWS) to the state-of-the-art Enhanced Ground Proximity Warning System (EGPWS). GPWS was an early attempt to reduce or eliminate Controlled Flight Into Terrain (CFIT). Basic GPWS alerting modes include: excessive descent rate, excessive terrain closure rate, altitude loss after takeoff, unsafe terrain clearance, excessive deviation below glideslope, advisory callouts, and windshear alerting. However, since GPWS uses the radar altimeter, which looks straight down, ample warning is not always provided. EGPWS retains all of the basic functions of GPWS but adds the ability to look ahead by comparing the aircraft position to an internal database, providing additional alerting and display capabilities. This paper evaluates three methods of incorporating EGPWS in the simulator and describes the implementation and architecture of the preferred option.
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors.
It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local-memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to use logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, among other functions, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
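The lock-step comparison idea can be sketched in a few lines. This is an illustrative software model only; in the proposed design the comparison is done in FPGA logic, and the names `run_lockstep` and `fault` are hypothetical.

```python
# Software model of lock-step self-checking (names hypothetical): two
# identical "CPUs" execute the same program on the same inputs; a
# comparator raises an error signal on any output mismatch, which in the
# real design would trigger rollback to the recovery-cache checkpoint.

def run_lockstep(program, inputs, fault=None):
    """Run two copies of `program` on identical inputs and compare outputs.
    `fault`, if given, corrupts one copy's output to model an SEU.
    Returns (output, error_flag); output is None when an error is flagged."""
    out_a = program(inputs)           # CPU A
    out_b = program(inputs)           # CPU B (lock-step copy)
    if fault is not None:
        out_b = fault(out_b)          # inject a single-event upset
    error = out_a != out_b            # comparator: any mismatch -> error
    return (None, True) if error else (out_a, False)
```

The key property modeled here is detection rather than masking: unlike triple modular redundancy, a dual lock-step pair cannot vote out the bad value, so the error signal must be paired with checkpoint rollback, as in the recovery cache described above.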
Modeling and Verification of Dependable Electronic Power System Architecture
NASA Astrophysics Data System (ADS)
Yuan, Ling; Fan, Ping; Zhang, Xiao-fang
The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems that generate, transmit, and distribute electric power. The complex interaction among subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electric power, fault-tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements. As a result, this incorporation makes the design of such systems even more complicated. We propose a dependable electronic power system architecture, which provides a generic framework to guide the development of electronic power systems and ease development complexity. In order to provide common idioms and patterns to system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault-tolerance properties of the architecture using the PVS theorem prover, which can guarantee that the architecture satisfies high reliability requirements.
Development of the New Educational Content "small Uas in Civil Engineering Application Scenarios"
NASA Astrophysics Data System (ADS)
Levin, E.; Vach, K.; Shults, R.
2017-12-01
The key point of this paper is the presentation of the main idea and some results of the project "Small UAS in Civil Engineering Application Scenarios" (SUAS-CAS). This project was proposed by ISPRS WG V/7, "Innovative Technologies in Training Civil Engineers and Architects", newly established in 2016. Here we present our experience in using low-cost UAS in training architects at the Kyiv National University of Construction and Architecture, which was chosen as the base institution for this project. In the first part of the paper, the project outline is presented. Then the first outcomes and possible follow-on outcomes of the project are described. The training module "Small UAS in architecture", which was developed and included as part of the subject "Architectural photogrammetry", is described in some detail.
NASA Astrophysics Data System (ADS)
Barlas, Thanasis; Pettas, Vasilis; Gertz, Drew; Madsen, Helge A.
2016-09-01
The application of active trailing edge flaps in an industry-oriented implementation is evaluated in terms of its capability to alleviate design extreme loads. A flap system with basic control functionality is implemented and tested in a realistic full Design Load Basis (DLB) for the DTU 10 MW Reference Wind Turbine (RWT) model and for an upscaled rotor version in DTU's aeroelastic code HAWC2. The flap system implementation shows considerable potential for reducing extreme loads in components of interest, including the blades, main bearing, and tower top, with no influence on fatigue loads or power performance. In addition, an individual flap controller for fatigue load reduction in above-rated power conditions is also implemented and integrated in the general controller architecture. The system is shown to be a technology enabler for rotor upscaling, by combining extreme and fatigue load reduction.
The Perception of Human Resources Enterprise Architecture within the Department of Defense
ERIC Educational Resources Information Center
Delaquis, Richard Serge
2012-01-01
The Clinger Cohen Act of 1996 requires that all major Federal Government Information Technology (IT) systems prepare an Enterprise Architecture prior to IT acquisitions. Enterprise Architecture, like house blueprints, represents the system build, capabilities, processes, and data across the enterprise of IT systems. Enterprise Architecture is used…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-13
... Architecture Proposal Review Meetings and Webinars; Notice of Public Meeting AGENCY: Research and Innovative... webinars to discuss the Vehicle to Infrastructure (V2I) Core System Requirements and Architecture Proposal... review of System Requirements Specification and Architecture Proposal. The second meeting will be a...
Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution
NASA Astrophysics Data System (ADS)
Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin
2018-04-01
The skill of computer programming is a core competency that must be mastered by students majoring in computer science. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially when the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and thoroughly check source code in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. AGT is implemented with an MVC architecture and open source software such as the Laravel framework (version 5.4), PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. AGT has also been tested on real problems by submitting source code in C/C++ and then compiling it. The test results show that the AGT application runs well.
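A minimal sketch of the grading pipeline the abstract describes (compile a C submission, run it, compare its output with a reference) might look as follows. The function names are hypothetical, and the actual AGT implementation is a Laravel web application rather than this Python script.

```python
# Sketch of an automatic grading pipeline (hypothetical names): compile a
# C submission with gcc, run it, and grade by comparing normalized output.
import os
import shutil
import subprocess
import tempfile

def normalize(text):
    # Ignore leading/trailing blank lines and trailing spaces per line,
    # so cosmetic whitespace differences do not fail a submission.
    return [line.rstrip() for line in text.strip().splitlines()]

def grade(expected, actual):
    """Return True when the program's output matches the reference output."""
    return normalize(expected) == normalize(actual)

def compile_and_run(source, stdin_data="", timeout=5):
    """Compile a C submission and run it, returning (ok, output)."""
    if shutil.which("gcc") is None:
        raise RuntimeError("gcc not available on this host")
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "main.c")
        exe = os.path.join(tmp, "main")
        with open(src, "w") as f:
            f.write(source)
        build = subprocess.run(["gcc", src, "-o", exe],
                               capture_output=True, text=True)
        if build.returncode != 0:
            return False, build.stderr        # compile error fails the run
        run = subprocess.run([exe], input=stdin_data, capture_output=True,
                             text=True, timeout=timeout)
        return run.returncode == 0, run.stdout
```

In a real deployment the compile-and-run step would also be sandboxed (resource limits, no network), since the grader executes untrusted student code.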
StegoWall: blind statistical detection of hidden data
NASA Astrophysics Data System (ADS)
Voloshynovskiy, Sviatoslav V.; Herrigel, Alexander; Rytsar, Yuri B.; Pun, Thierry
2002-04-01
Novel functional possibilities provided by recent data hiding technologies carry with them the danger of uncontrolled (unauthorized) and unlimited information exchange that might be used by people with hostile interests. The multimedia industry as well as the research community recognize the urgent necessity for network security and copyright protection, or rather the lack of adequate laws for digital multimedia protection. This paper advocates the need for detecting hidden data in digital and analog media as well as in electronic transmissions, and for attempting to identify the underlying hidden data. Solving this problem calls for the development of an architecture for blind stochastic hidden data detection in order to prevent unauthorized data exchange. The proposed architecture is called StegoWall; its key aspects are the solid investigation, the deep understanding, and the prediction of possible tendencies in the development of advanced data hiding technologies. The basic idea of our approach is to exploit all available information about hidden data statistics to perform detection within a stochastic framework. The StegoWall system will be used for four main applications: robust watermarking, secret communications, integrity control and tamper proofing, and internet/network security.
Tiede, Dirk; Baraldi, Andrea; Sudmanns, Martin; Belgiu, Mariana; Lang, Stefan
2017-01-01
Spatiotemporal analytics of multi-source Earth observation (EO) big data is a pre-condition for semantic content-based image retrieval (SCBIR). As a proof of concept, an innovative EO semantic querying (EO-SQ) subsystem was designed and prototypically implemented in series with an EO image understanding (EO-IU) subsystem. The EO-IU subsystem automatically generates ESA Level 2 products (a scene classification map, up to basic land cover units) from optical satellite data. The EO-SQ subsystem comprises a graphical user interface (GUI) and an array database embedded in a client-server model. In the array database, all EO images are stored as a space-time data cube together with their Level 2 products generated by the EO-IU subsystem. The GUI allows users to (a) develop a conceptual world model based on a graphically supported query pipeline as a combination of spatial and temporal operators and/or standard algorithms and (b) create, save, and share within the client-server architecture complex semantic queries/decision rules, suitable for SCBIR and/or spatiotemporal EO image analytics, consistent with the conceptual world model. PMID:29098143
High-Purity Aluminum Magnet Technology for Advanced Space Transportation Systems
NASA Technical Reports Server (NTRS)
Goodrich, R. G.; Pullam, B.; Rickle, D.; Litchford, R. J.; Robertson, G. A.; Schmidt, D. D.; Cole, John (Technical Monitor)
2001-01-01
Basic research on advanced plasma-based propulsion systems is routinely focused on plasmadynamics, performance, and efficiency aspects while relegating the development of critical enabling technologies, such as flight-weight magnets, to follow-on development work. Unfortunately, the low technology readiness levels (TRLs) associated with critical enabling technologies tend to be perceived as an indicator of high technical risk, and this, in turn, hampers the acceptance of advanced system architectures for flight development. Consequently, there is growing recognition that applied research on the critical enabling technologies needs to be conducted hand in hand with basic research activities. The development of flight-weight magnet technology, for example, is one area of applied research having broad crosscutting applications to a number of advanced propulsion system architectures. Therefore, NASA Marshall Space Flight Center, Louisiana State University (LSU), and the National High Magnetic Field Laboratory (NHMFL) have initiated an applied research project aimed at advancing the TRL of flight-weight magnets. This Technical Publication reports on the group's initial effort to demonstrate the feasibility of cryogenic high-purity aluminum magnet technology and describes the design, construction, and testing of a 6-in-diameter by 12-in-long aluminum solenoid magnet. The coil was constructed in the machine shop of the Department of Physics and Astronomy at LSU and testing was conducted in NHMFL facilities at Florida State University and at Los Alamos National Laboratory. The solenoid magnet was first wound, reinforced, potted in high thermal conductivity epoxy, and bench tested in the LSU laboratories. A cryogenic container for operation at 77 K was also constructed and mated to the solenoid. The coil was then taken to NHMFL facilities in Tallahassee, FL, where its magnetoresistance was measured in a 77 K environment under steady magnetic fields as high as 10 T.
In addition, the temperature dependence of the coil's resistance was measured from 77 to 300 K. Following this series of tests, the coil was transported to NHMFL facilities in Los Alamos, NM, and pulsed to 2 T using an existing capacitor bank pulse generator. The coil was completely successful in producing the desired field without damage to the windings.
From supramolecular polymers to multi-component biomaterials.
Goor, Olga J G M; Hendrikse, Simone I S; Dankers, Patricia Y W; Meijer, E W
2017-10-30
The most striking and general property of the biological fibrous architectures in the extracellular matrix (ECM) is the strong and directional interaction between biologically active protein subunits. These fibers display rich dynamic behavior without losing their architectural integrity. The complexity of the ECM, which takes care of many essential properties, has inspired synthetic chemists to mimic these properties in artificial one-dimensional fibrous structures with the aim of arriving at multi-component biomaterials. Due to the dynamic character required for interaction with natural tissue, supramolecular biomaterials are promising candidates for regenerative medicine. Depending on the application area, and thereby the design criteria of these multi-component fibrous biomaterials, they are used as elastomeric materials or hydrogel systems. Elastomeric materials are designed to have load-bearing properties, whereas hydrogels are proposed to support in vitro cell culture. Although the chemical structures and systems designed and studied today are rather simple compared to the complexity of the ECM, the first examples of these functional supramolecular biomaterials reaching the clinic have been reported. The basic concept of many of these supramolecular biomaterials is based on their ability to adapt to cell behavior as a result of dynamic non-covalent interactions. In this review, we show the translation of one-dimensional supramolecular polymers into multi-component functional biomaterials for regenerative medicine applications.
An application of business process method to the clinical efficiency of hospital.
Leu, Jun-Der; Huang, Yu-Tsung
2011-06-01
The concept of Total Quality Management (TQM) has come to be applied in healthcare over the last few years. The process management category in the Baldrige Health Care Criteria for Performance Excellence model is designed to evaluate the quality of medical services. However, a systematic approach for implementation support is necessary to achieve excellence in the healthcare business process. The Architecture of Integrated Information Systems (ARIS) is a business process architecture developed by IDS Scheer AG and has been applied in a variety of industrial applications. It starts with a business strategy to identify the core and support processes, and encompasses the whole life-cycle range, from business process design to information system deployment, which is compatible with the concept of the healthcare performance excellence criteria. In this research, we apply the basic ARIS framework to optimize the clinical processes of an emergency department in a mid-size hospital with 300 clinical beds, while considering the characteristics of the healthcare organization. Implementation of the case is described, and 16 months of clinical data are then collected and used to study the performance and feasibility of the method. The experience gleaned in this case study can be used as a reference for mid-size hospitals with similar business models.
Planning and Execution: The Spirit of Opportunity for Robust Autonomous Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola
2004-01-01
One of the most exciting endeavors pursued by humankind is the search for life in the Solar System and the Universe at large. NASA is leading this effort by designing, deploying, and operating robotic systems that will reach planets, planetary moons, asteroids, and comets searching for water, organic building blocks, and signs of past or present microbial life. None of these missions will be achievable without substantial advances in the design, implementation, and validation of autonomous control agents. These agents must be capable of robustly controlling a robotic explorer in a hostile environment with very limited or no communication with Earth. The talk focuses on work pursued at the NASA Ames Research Center, ranging from basic research on algorithms to deployed mission support systems. We will start by discussing how planning and scheduling technology derived from the Remote Agent experiment is being used daily in the operations of the Spirit and Opportunity rovers. Planning and scheduling is also used as the fundamental paradigm at the core of our research in real-time autonomous agents. In particular, we will describe our efforts in the Intelligent Distributed Execution Architecture (IDEA), a multi-agent real-time architecture that exploits artificial intelligence planning as the core reasoning engine of an autonomous agent. We will also describe how the issue of plan robustness at execution can be addressed by novel constraint propagation algorithms capable of giving the tightest exact bounds on resource consumption over all possible executions of a flexible plan.
VASSAR: Value assessment of system architectures using rules
NASA Astrophysics Data System (ADS)
Selva, D.; Crawley, E. F.
A key step of the mission development process is the selection of a system architecture, i.e., the layout of the major high-level system design decisions. This step typically involves the identification of a set of candidate architectures and a cost-benefit analysis to compare them. Computational tools have been used in the past to bring rigor and consistency into this process. These tools can automatically generate architectures by enumerating different combinations of decisions and options. They can also evaluate these architectures by applying cost models and simplified performance models. Current performance models are purely quantitative tools that are best suited to evaluating the technical performance of a mission design. However, assessing the relative merit of a system architecture is a much more holistic task than evaluating the performance of a mission design. Indeed, the merit of a system architecture comes from satisfying a variety of stakeholder needs, some of which are easy to quantify and some of which are harder to quantify (e.g., elegance, scientific value, political robustness, flexibility). Moreover, assessing the merit of a system architecture at these very early stages of design often requires dealing with a mix of (a) quantitative and semi-qualitative data and (b) objective and subjective information. Current computational tools are poorly suited for these purposes. In this paper, we propose a general methodology that can be used to assess the relative merit of several candidate system architectures in the presence of objective, subjective, quantitative, and qualitative stakeholder needs. The methodology is called VASSAR (Value ASsessment for System Architectures using Rules). The major underlying assumption of the VASSAR methodology is that the merit of a system architecture can be assessed by comparing the capabilities of the architecture with the stakeholder requirements. Hence, for example, a candidate architecture that fully satisfies all critical stakeholder requirements is a good architecture. The assessment process is thus fundamentally seen as a pattern matching process where capabilities match requirements, which motivates the use of rule-based expert systems (RBES). This paper describes the VASSAR methodology and shows how it can be applied to a large complex space system, namely an Earth observation satellite system. Companion papers show its applicability to the NASA space communications and navigation program and the joint NOAA-DoD NPOESS program.
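As a toy illustration of the capability-requirement matching idea (not the actual VASSAR rules, which run in a rule-based expert system with richer aggregation), one could score an architecture by the weighted fraction of stakeholder requirements its capabilities satisfy. The requirement names and weights below are invented for the example.

```python
# Toy capability-to-requirement matching (names and weights invented):
# an architecture's merit is the weighted fraction of stakeholder
# requirements that its declared capabilities satisfy.

REQUIREMENTS = {                      # requirement -> stakeholder weight
    "global-soil-moisture": 0.4,
    "sea-surface-temp":     0.35,
    "aerosol-profile":      0.25,
}

def architecture_merit(capabilities, requirements=REQUIREMENTS):
    """Score an architecture in [0, 1]; 1.0 means every weighted
    requirement is matched by some capability."""
    satisfied = sum(w for req, w in requirements.items()
                    if req in capabilities)
    return satisfied / sum(requirements.values())
```

A rule-based formulation generalizes this: instead of exact name matching, each rule fires when an architecture's instrument and orbit attributes jointly imply a measurement capability, which is why an RBES is a natural fit for the pattern matching described above.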
Connecting Architecture and Implementation
NASA Astrophysics Data System (ADS)
Buchgeher, Georg; Weinreich, Rainer
Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.
A surface code quantum computer in silicon
Hill, Charles D.; Peretz, Eldad; Hile, Samuel J.; House, Matthew G.; Fuechsle, Martin; Rogge, Sven; Simmons, Michelle Y.; Hollenberg, Lloyd C. L.
2015-01-01
The exceptionally long quantum coherence times of phosphorus donor nuclear spin qubits in silicon, coupled with the proven scalability of silicon-based nano-electronics, make them attractive candidates for large-scale quantum computing. However, the high threshold of topological quantum error correction can only be captured in a two-dimensional array of qubits operating synchronously and in parallel—posing formidable fabrication and control challenges. We present an architecture that addresses these problems through a novel shared-control paradigm that is particularly suited to the natural uniformity of the phosphorus donor nuclear spin qubit states and electronic confinement. The architecture comprises a two-dimensional lattice of donor qubits sandwiched between two vertically separated control layers forming a mutually perpendicular crisscross gate array. Shared-control lines facilitate loading/unloading of single electrons to specific donors, thereby activating multiple qubits in parallel across the array on which the required operations for surface code quantum error correction are carried out by global spin control. The complexities of independent qubit control, wave function engineering, and ad hoc quantum interconnects are explicitly avoided. With many of the basic elements of fabrication and control based on demonstrated techniques and with simulated quantum operation below the surface code error threshold, the architecture represents a new pathway for large-scale quantum information processing in silicon and potentially in other qubit systems where uniformity can be exploited. PMID:26601310
Contextual analysis of immunological response through whole-organ fluorescent imaging.
Woodruff, Matthew C; Herndon, Caroline N; Heesters, B A; Carroll, Michael C
2013-09-01
As fluorescent microscopy has developed, significant insights have been gained into the establishment of immune response within secondary lymphoid organs, particularly in draining lymph nodes. While established techniques such as confocal imaging and intravital multi-photon microscopy have proven invaluable, they provide limited insight into the architectural and structural context in which these responses occur. To interrogate the role of the lymph node environment in immune response effectively, a new set of imaging tools that takes broader architectural context into account must be applied to emerging immunological questions. Using two different methods of whole-organ imaging, optical clearing and three-dimensional reconstruction of serially sectioned lymph nodes, fluorescent representations of whole lymph nodes can be acquired at cellular resolution. Using freely available post-processing tools, images of unlimited size and depth can be assembled into cohesive, contextual snapshots of immunological response. Through the implementation of robust iterative analysis techniques, these highly complex three-dimensional images can be objectified into sortable object data sets. These data can then be used to interrogate complex questions at the cellular level within the broader context of lymph node biology. By combining existing imaging technology with complex methods of sample preparation and capture, we have developed efficient systems for contextualizing immunological phenomena within lymphatic architecture. In combination with robust approaches to image analysis, these advances provide a path to integrating scientific understanding of basic lymphatic biology into the complex nature of immunological response.
Design of Power System Architectures for Small Spacecraft Systems
NASA Technical Reports Server (NTRS)
Momoh, James A.; Subramonian, Rama; Dias, Lakshman G.
1996-01-01
The objective of this research is to perform a trade study on several candidate power system architectures for small spacecraft to be used in NASA's New Millennium program. Three initial candidate architectures have been proposed by NASA and two other candidate architectures have been proposed by Howard University. Howard University is currently conducting the analysis, synthesis, and simulation needed to perform the trade studies and arrive at the optimal power system architecture. Statistical, sensitivity, and tolerance studies have been performed on the systems. It is concluded from the present studies that certain components, such as the series regulators, buck-boost converters, and power converters, can be minimized while retaining the desired functionality of the overall architecture. This, in conjunction with battery scalability studies and system efficiency studies, has enabled us to develop more economical architectures. Future studies will include artificial neural networks and fuzzy logic to analyze the performance of the systems. Fault simulation and fault diagnosis studies using EMTP and artificial neural networks will also be conducted.
Integrated Nationwide Electronic Health Records system: Semi-distributed architecture approach.
Fragidis, Leonidas L; Chatzoglou, Prodromos D; Aggelidis, Vassilios P
2016-11-14
The integration of heterogeneous electronic health record systems into an interoperable nationwide electronic health record system provides indisputable benefits in health care, such as superior health information quality, prevention of medical errors, and cost savings. This paper proposes a semi-distributed system architecture for an integrated national electronic health record system that incorporates the advantages of the two dominant approaches, the centralized architecture and the distributed architecture. The high-level design of the main elements of the proposed architecture is provided, along with diagrams of execution and operation and of the data synchronization architecture for the proposed solution. The proposed approach effectively handles issues related to redundancy, consistency, security, privacy, availability, load balancing, maintainability, complexity, and interoperability of citizens' health data. The proposed semi-distributed architecture offers a robust interoperability framework without requiring healthcare providers to change their local EHR systems. It is a pragmatic approach that takes into account the characteristics of the Greek national healthcare system, along with the national public administration data communication network infrastructure, to achieve EHR integration at an acceptable implementation cost.
NASA Technical Reports Server (NTRS)
Katti, Romney R.
1995-01-01
Random-access memory (RAM) devices of the proposed type exploit the magneto-optical properties of magnetic garnets exhibiting perpendicular anisotropy. Magnetic writing and optical readout are used. The devices provide nonvolatile storage and resist damage by ionizing radiation. Because of basic architecture and pinout requirements, they are most likely useful as small-capacity memory devices.
An integrated systems engineering approach to aircraft design
NASA Astrophysics Data System (ADS)
Price, M.; Raghunathan, S.; Curran, R.
2006-06-01
The challenge in aerospace engineering in the next two decades, as set by Vision 2020, is to meet the targets of an 80% reduction in nitric oxide emissions, 50% reductions in both carbon monoxide and carbon dioxide, and a 50% reduction in noise, all with reduced cost and improved safety. This must be achieved against an expected increase in capacity and demand, and against a background where the understanding of the physics of flight has changed very little over the years and where industrial growth is driven primarily by cost rather than by new technology. The way forward is to introduce innovative technologies and develop an integrated, effective, and efficient process for the life-cycle design of aircraft, known as systems engineering (SE). SE is a holistic approach to a product that comprises several components: customer specifications, conceptual design, risk analysis, functional analysis and architecture, physical architecture, design analysis and synthesis, trade studies and optimisation, manufacturing, testing, validation and verification, delivery, life-cycle cost, and management. Further, it involves interaction between traditional disciplines such as aerodynamics, structures, and flight mechanics and people- and process-oriented disciplines such as management, manufacturing, and technology transfer. SE has become the state-of-the-art methodology for organising and managing aerospace production. However, like many well-founded methodologies, it is difficult to embody its core principles in formalised models and tools. The key contribution of the paper is to review this formalisation and to present the latest knowledge and technology that facilitates SE theory. Typically, research into SE provides a deeper understanding of the core principles and interactions, and helps one to appreciate the technical architecture required to exploit SE fully as a process, rather than as a series of events.
There are major issues with the systems approach to aircraft design, including the lack of basic scientific and practical models and tools for interfacing and integrating the components of SE and, within a given component such as life-cycle cost, the lack of basic models linking the key drivers. The paper reviews the current state of the art in the SE approach to aircraft design, identifies some of the major challenges, and offers visions for the future. The review moves from an initial basis in traditional engineering design processes to consideration of costs and manufacturing in this integrated environment. Issues related to the implementation of integration in design at the detailed physics level are discussed in the case studies.
The neuron classification problem
Bota, Mihail; Swanson, Larry W.
2007-01-01
A systematic account of neuron cell types is a basic prerequisite for determining the global wiring diagram of the vertebrate nervous system. With comprehensive lineage and phylogenetic information unavailable, a general ontology based on a structure-function taxonomy is proposed and implemented in a knowledge management system, and a prototype analysis of select regions (including retina, cerebellum, and hypothalamus) is presented. The supporting Brain Architecture Knowledge Management System (BAMS) Neuron ontology is online, and its user interface allows queries about terms and their definitions, classification criteria based on the original literature and "Petilla Convention" guidelines, hierarchies, and relations, with annotations documenting each ontology entry. Combined with three BAMS modules for neural regions, connections between regions and neuron types, and molecules, the Neuron ontology provides a general framework for physical descriptions and computational modeling of neural systems. The knowledge management system interacts with other web resources, is accessible in both XML and RDF/OWL, is extendible to the whole body, and awaits large-scale data population, which requires community participation for timely implementation. PMID:17582506
Intelligent community management system based on the devicenet fieldbus
NASA Astrophysics Data System (ADS)
Wang, Yulan; Wang, Jianxiong; Liu, Jiwen
2013-03-01
With the rapid development of the national economy and the improvement of people's living standards, people are making higher demands on their living environment, and higher standards are required of estate management content, management efficiency, and service quality. This paper analyzes in depth the structure and composition of the intelligent community. According to users' requirements and the related specifications, it implements a district management system comprising: basic information management (housing, household information, administrator levels, passwords, etc.); service management (standard property costs, collection of property charges, the history of arrears, and other property expenses); security management (household gas, water, electricity, and other household security, as well as security in the district and other public places); and systems management (database backup, database restoration, and log management). The article also analyzes the intelligent community system and proposes an architecture based on B/S (browser/server) technology, achieving global network device management with a friendly, easy-to-use, unified human-machine interface.
Analysis and design of energy monitoring platform for smart city
NASA Astrophysics Data System (ADS)
Wang, Hong-xia
2016-09-01
The development and utilization of energy has greatly promoted the development and progress of human society; energy is the basic material foundation for human survival. A running city inevitably consumes energy, and it also produces a large amount of waste discharge. In order to speed up the development of the smart city, improve the efficiency of energy-saving and emission-reduction work, and maintain a green and livable environment, a comprehensive energy-monitoring management platform for government departments is constructed in this paper, based on cloud computing technology and a 3-tier architecture. It is expected that the system will provide scientific guidance for environmental management and decision making in the smart city.
Parallel optoelectronic trinary signed-digit division
NASA Astrophysics Data System (ADS)
Alam, Mohammad S.
1999-03-01
The trinary signed-digit (TSD) number system has been found to be very useful for parallel addition and subtraction of any arbitrary length operands in constant time. Using the TSD addition and multiplication modules as the basic building blocks, we develop an efficient algorithm for performing parallel TSD division in constant time. The proposed division technique uses one TSD subtraction and two TSD multiplication steps. An optoelectronic correlator based architecture is suggested for implementation of the proposed TSD division algorithm, which fully exploits the parallelism and high processing speed of optics. An efficient spatial encoding scheme is used to ensure better utilization of space bandwidth product of the spatial light modulators used in the optoelectronic implementation.
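The digit set underlying the TSD representation can be illustrated with a small, hypothetical balanced-ternary sketch (radix 3, digits -1, 0, 1). The paper's carry-free parallel add/multiply rules and the optoelectronic spatial encoding are not reproduced here; the function names are our own.

```python
def to_tsd(n: int) -> list[int]:
    """Encode an integer as balanced-ternary digits, least significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:          # a digit of 2 becomes -1 with a carry into the next place
            digits.append(-1)
            n = n // 3 + 1
        else:
            digits.append(r)
            n //= 3
    return digits

def from_tsd(digits: list[int]) -> int:
    """Decode least-significant-first balanced-ternary digits back to an integer."""
    return sum(d * 3**i for i, d in enumerate(digits))
```

Division via one TSD subtraction and two TSD multiplications, as outlined in the abstract, would operate on such digit vectors in parallel; the sketch only shows that every integer has a representation over {-1, 0, 1}.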
[Neuroscientific basics of addiction].
Johann-Ridinger, Monika
2014-10-01
Growing evidence from neuroscience is leading to a better understanding of the cerebral processes involved in acute or chronic intake of psychotropic substances (PS). Structures of the "reward system" contribute predominantly to the development of addiction. Chronic consumption of PS changes the brain's equilibrium and leads to adaptations in the brain architecture. In this article, the complex responses of neurons and neuronal networks to chronic intake of PS are presented. The alterations affect cognitive, emotional, and behavioral processing and influence learning and stress regulation. In summary, these cerebral adaptations are integrated into a complex model of biological, psychological, and social factors; addiction therefore arises as a consequence of a combination of individual protective and risk factors.
psiTurk: An open-source framework for conducting replicable behavioral experiments online.
Gureckis, Todd M; Martin, Jay; McDonnell, John; Rich, Alexander S; Markant, Doug; Coenen, Anna; Halpern, David; Hamrick, Jessica B; Chan, Patricia
2016-09-01
Online data collection has begun to revolutionize the behavioral sciences. However, conducting carefully controlled behavioral experiments online introduces a number of new technical and scientific challenges. The project described in this paper, psiTurk, is an open-source platform that helps researchers develop experiment designs that can be conducted over the Internet. The tool primarily interfaces with Amazon's Mechanical Turk, a popular crowd-sourcing labor market. This paper describes the basic architecture of the system and introduces new users to its overall goals. psiTurk aims to reduce the technical hurdles for researchers developing online experiments while improving the transparency and collaborative nature of the behavioral sciences.
A Summary of NASA Architecture Studies Utilizing Fission Surface Power Technology
NASA Technical Reports Server (NTRS)
Mason, Lee; Poston, Dave
2010-01-01
Beginning with the Exploration Systems Architecture Study in 2005, NASA has conducted various mission architecture studies to evaluate implementation options for the U.S. Space Policy (formerly the Vision for Space Exploration). Several of the studies examined the use of Fission Surface Power (FSP) systems for human missions to the lunar and Martian surface. This paper summarizes the FSP concepts developed under four different NASA-sponsored architecture studies: Lunar Architecture Team, Mars Architecture Team, Lunar Surface Systems/Constellation Architecture team, and International Architecture Working Group-Power Function team. The results include a summary of FSP design characteristics, a compilation of mission-compatible FSP configuration options, and an FSP concept-of-operations that is consistent with the overall mission objectives.
Design and Analysis of Architectures for Structural Health Monitoring Systems
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi; Sixto, S. L. (Technical Monitor)
2002-01-01
During the two-year project period, we have worked on several aspects of Health Usage and Monitoring Systems (HUMS) for structural health monitoring. In particular, we have made contributions in the following areas. 1. Reference HUMS architecture: We developed a high-level architecture for health usage and monitoring systems. The proposed reference architecture is compatible with the Generic Open Architecture (GOA) proposed as a standard for avionics systems. 2. HUMS kernel: One of the critical layers of the HUMS reference architecture is the HUMS kernel. We developed a detailed design of a kernel to implement the high-level architecture. 3. Prototype implementation of the HUMS kernel: We have implemented a preliminary version of the HUMS kernel on a Unix platform, in both a centralized version and a distributed version. 4. SCRAMNet and HUMS: SCRAMNet (Shared Common Random Access Memory Network) is a system found to be suitable for implementing HUMS; for this reason, we conducted a simulation study to determine its stability in handling the input data rates in HUMS. 5. Architectural specification.
A fast, programmable hardware architecture for the processing of spaceborne SAR data
NASA Technical Reports Server (NTRS)
Bennett, J. R.; Cumming, I. G.; Lim, J.; Wedding, R. M.
1984-01-01
The development of high-throughput SAR processors (HTSPs) for the spaceborne SARs being planned by NASA, ESA, DFVLR, NASDA, and the Canadian Radarsat Project is discussed. The basic parameters and data-processing requirements of the SARs are listed in tables, and the principal problems are identified as real-operation rates in excess of 2×10^9 per second, I/O rates in excess of 8×10^6 samples per second, and control computation loads (as for range cell migration correction) as high as 1.4×10^6 instructions per second. A number of possible HTSP architectures are reviewed; host/array-processor (H/AP) and distributed-control/data-path (DCDP) architectures are examined in detail and illustrated with block diagrams; and a cost/speed comparison of these two architectures is presented. The H/AP approach is found to be adequate and economical for speeds below 1/200 of real time, while DCDP is more cost-effective above 1/50 of real time.
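The reported cost/speed crossover can be summarized in a small decision sketch. The 1/200 and 1/50 thresholds are the figures quoted in the abstract; the function name and the handling of the intermediate region are our own assumptions.

```python
def suggest_architecture(speed_fraction: float) -> str:
    """Suggest an HTSP architecture for a target processing speed, expressed
    as a fraction of real time (1.0 = real time), using the crossover points
    reported in the study: H/AP is adequate and economical below 1/200 of
    real time, while DCDP is more cost-effective above 1/50 of real time."""
    if speed_fraction < 1 / 200:
        return "H/AP"
    if speed_fraction > 1 / 50:
        return "DCDP"
    # Between the two thresholds the study gives no single winner.
    return "either (crossover region; compare costs in detail)"
```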
On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences
Thiyagalingam, Jeyarajan; Goodman, Daniel; Schnabel, Julia A.; Trefethen, Anne; Grau, Vicente
2011-01-01
Images are ubiquitous in biomedical applications, from basic research to clinical practice. With the rapid increase in the resolution and dimensionality of images, and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, and the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide a near-real-time experience. We also show how the architectural peculiarities of these devices can best be exploited to the benefit of algorithms, specifically in addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results. PMID:21869880
Executable Architecture Research at Old Dominion University
NASA Technical Reports Server (NTRS)
Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.
2011-01-01
Executable architectures allow the evaluation of system architectures not only regarding their static but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap and identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents the research and results of both doctoral efforts and puts them into the common context of state-of-the-art systems engineering methods supporting greater agility.
A Distributed Intelligent E-Learning System
ERIC Educational Resources Information Center
Kristensen, Terje
2016-01-01
An E-learning system based on a multi-agent system (MAS) architecture combined with the Dynamic Content Manager (DCM) model of E-learning is presented. We discuss the benefits of using such a multi-agent architecture. Finally, the MAS architecture is compared with a pure service-oriented architecture (SOA). This MAS architecture may also be used within…
Structuring clinical workflows for diabetes care: an overview of the OntoHealth approach.
Schweitzer, M; Lasierra, N; Oberbichler, S; Toma, I; Fensel, A; Hoerbst, A
2014-01-01
Electronic health records (EHRs) play an important role in the treatment of chronic diseases such as diabetes mellitus. Although the interoperability and selected functionality of EHRs are already addressed by a number of standards and best practices, such as IHE or HL7, the majority of these systems are still monolithic from a user-functionality perspective. The purpose of the OntoHealth project is to foster a functionally flexible, standards-based use of EHRs to support clinical routine task execution by means of workflow patterns and to shift the present EHR usage to a more comprehensive integration concerning complete clinical workflows. The goal of this paper is, first, to introduce the basic architecture of the proposed OntoHealth project and, second, to present selected functional needs and a functional categorization regarding workflow-based interactions with EHRs in the domain of diabetes. A systematic literature review regarding attributes of workflows in the domain of diabetes was conducted. Eligible references were gathered and analyzed using a qualitative content analysis. Subsequently, a functional workflow categorization was derived from diabetes-specific raw data together with existing general workflow patterns. This paper presents the design of the architecture as well as a categorization model which makes it possible to describe the components or building blocks within clinical workflows. The results of our study lead us to identify basic building blocks, namely actions, decisions, and data elements, which allow the composition of clinical workflows within five identified contexts. The categorization model allows for a description of the components or building blocks of clinical workflows from a functional view.
A heterogeneous hierarchical architecture for real-time computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skroch, D.A.; Fornaro, R.J.
The need for high-speed data acquisition and control algorithms has prompted continued research in the area of multiprocessor systems and related programming techniques. The result presented here is a unique hardware and software architecture for high-speed real-time computer systems. The implementation of a prototype of this architecture has required the integration of architecture, operating systems, and programming languages into a cohesive unit. This report describes a Heterogeneous Hierarchical Architecture for Real-Time (H²ART) and system software for program loading and interprocessor communication.
CVISN system design description
DOT National Transportation Integrated Search
1999-05-01
This document focuses on the Commercial Vehicle Information Systems and Networks (CVISN) System Design and Architecture. It begins with a discussion on the relationships between the National ITS Architecture the CVISN Architecture, and the Internatio...
Role of System Architecture in Developing New Drafting Tools
NASA Astrophysics Data System (ADS)
Sorguç, Arzu Gönenç
In this study, the impact of information technologies on the architectural design process is discussed. First, the differences and nuances between the concepts of software engineering and system architecture are clarified. Then, the design process in engineering and the design process in architecture are compared, considering 3-D models as the center of the design process over which the other disciplines engage with the design. It is pointed out that in many high-end engineering applications, 3-D solid models, and consequently the digital mock-up concept, have become common practice; yet architecture, although one of the important customers of the CAD systems employing these tools, has not started to use such 3-D models. It is shown that the reason for this time lag between architecture and engineering lies in the tradition of design attitude. Therefore, a new design scheme, a meta-model, is proposed to develop an integrated design model centered on the 3-D model. A system architecture is also proposed to achieve the transformation of the architectural design process by replacing 2-D thinking with 3-D thinking. In the proposed system architecture, CAD systems are included and adapted for 3-D architectural design in order to provide interfaces for the integration of all possible disciplines into the design process. It is also argued that such a change will allow the intelligent or smart building concept to be elaborated in the future.
23 CFR 940.9 - Regional ITS architecture.
Code of Federal Regulations, 2014 CFR
2014-04-01
... FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION INTELLIGENT TRANSPORTATION SYSTEMS INTELLIGENT TRANSPORTATION SYSTEM ARCHITECTURE AND STANDARDS § 940.9 Regional ITS architecture. (a) A regional... ITS project for that region advancing to final design. (d) The regional ITS architecture shall include...
23 CFR 940.9 - Regional ITS architecture.
Code of Federal Regulations, 2013 CFR
2013-04-01
... FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION INTELLIGENT TRANSPORTATION SYSTEMS INTELLIGENT TRANSPORTATION SYSTEM ARCHITECTURE AND STANDARDS § 940.9 Regional ITS architecture. (a) A regional... ITS project for that region advancing to final design. (d) The regional ITS architecture shall include...
23 CFR 940.9 - Regional ITS architecture.
Code of Federal Regulations, 2012 CFR
2012-04-01
... FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION INTELLIGENT TRANSPORTATION SYSTEMS INTELLIGENT TRANSPORTATION SYSTEM ARCHITECTURE AND STANDARDS § 940.9 Regional ITS architecture. (a) A regional... ITS project for that region advancing to final design. (d) The regional ITS architecture shall include...
Constellation Architecture Team-Lunar: Lunar Habitat Concepts
NASA Technical Reports Server (NTRS)
Toups, Larry; Kennedy, Kriss J.
2008-01-01
This paper will describe lunar habitat concepts that were defined as part of the Constellation Architecture Team-Lunar (CxAT-Lunar) in support of the Vision for Space Exploration. There are many challenges to designing lunar habitats, such as mission objectives, launch packaging, lander capability, and risks. Surface habitats are required to sustain human life and meet the mission objectives of lunar exploration, operations, and sustainability. Lunar surface operations consist of crew operations, mission operations, EVA operations, science operations, and logistics operations. Habitats are crewed pressurized vessels that include surface mission operations, science laboratories, living support capabilities, EVA support, logistics, and maintenance facilities. The challenge is to deliver, unload, and deploy self-contained habitats and laboratories to the lunar surface. The CxAT-Lunar surface campaign analysis focused on three primary trade sets of analysis. Trade set one (TS1) investigated sustaining a crew of four for six months with full outpost capability and the ability to perform long surface mission excursions using large mobility systems. Two basic habitat concepts, a hard metallic horizontal cylinder and a larger inflatable torus, were investigated as options in response to the surface exploration architecture campaign analysis. Figures 1 and 2 depict the notional outpost configurations for this trade set. Trade set two (TS2) investigated a mobile architecture approach with the campaign focused on early exploration using two small pressurized rovers and a mobile logistics support capability. This exploration concept will not be described in this paper. Trade set three (TS3) investigated delivery of a "core" habitation capability in support of an early outpost that would mature into the TS1 full outpost capability. Three core habitat concepts were defined for this campaign analysis.
One featured a four-port core habitat, another a two-port core habitat, and the third investigated leveraging commonality between the lander ascent module and the airlock pressure-vessel hard shell. The paper will provide an overview of the various habitat concepts and their functionality. The Crew Operations area includes basic crew accommodations such as sleeping, eating, hygiene, and stowage. The EVA Operations area includes additional EVA capability beyond the suit-port airlock function, such as redundant airlock(s), suit maintenance, spares stowage, and suit stowage. The Logistics Operations area includes the enhanced accommodations for 180 days, such as closed-loop life support systems hardware, consumables stowage, spares stowage, interconnection to the other Hab units, and a common interface mechanism for future growth and mating to a pressurized rover. The Mission & Science Operations area includes enhanced outpost autonomy, such as an IVA glove box, life support, and medical operations.
New optical architecture for holographic data storage system compatible with Blu-ray Disc™ system
NASA Astrophysics Data System (ADS)
Shimada, Ken-ichi; Ide, Tatsuro; Shimano, Takeshi; Anderson, Ken; Curtis, Kevin
2014-02-01
A new optical architecture for a holographic data storage system compatible with the Blu-ray Disc™ (BD) system is proposed. In the architecture, both the signal and reference beams pass through a single objective lens with a numerical aperture (NA) of 0.85, realizing angularly multiplexed recording. The geometry of the architecture has a high affinity with the optical architecture of the BD system because the objective lens can be placed parallel to the holographic medium. Through comparison of experimental results with theory, the validity of the optical architecture was verified, and it was demonstrated that the conventional objective-lens motion technique of the BD system can be used for angularly multiplexed recording. A test-bed composed of a blue laser system and an objective lens with NA 0.85 was designed, and the feasibility of compatibility with BD is examined using this test-bed.
Space Generic Open Avionics Architecture (SGOAA) reference model technical guide
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1993-01-01
This report presents a full description of the Space Generic Open Avionics Architecture (SGOAA). The SGOAA consists of a generic system architecture for the entities in spacecraft avionics, a generic processing architecture, and a six class model of interfaces in a hardware/software system. The purpose of the SGOAA is to provide an umbrella set of requirements for applying the generic architecture interface model to the design of specific avionics hardware/software systems. The SGOAA defines a generic set of system interface points to facilitate identification of critical interfaces and establishes the requirements for applying appropriate low level detailed implementation standards to those interface points. The generic core avionics system and processing architecture models provided herein are robustly tailorable to specific system applications and provide a platform upon which the interface model is to be applied.
The Technology of Teaching Young Handicapped Children.
ERIC Educational Resources Information Center
Bijou, Sidney W.
To fabricate a technology for teaching young school children with serious behavior problems, classroom materials, curriculum format, and teaching procedures were developed, and problems that evolve from the technology investigated. Two classrooms were architecturally designed to provide the basic needs of a special classroom and to facilitate…
Running TCP/IP over ATM Networks.
ERIC Educational Resources Information Center
Witt, Michael
1995-01-01
Discusses Internet protocol (IP) and subnets and describes how IP may operate over asynchronous transfer mode (ATM). Topics include TCP (transmission control protocol), ATM cells and adaptation layers, a basic architectural model for IP over ATM, address resolution, mapping IP to a subnet technology, and connection management strategy. (LRW)
The Molecular Basis of Development.
ERIC Educational Resources Information Center
Gehring, Walter J.
1985-01-01
Basic architecture of embryo development appears to be under homeobox control (a short stretch of DNA). Outlines research on this genetic segment in fruit flies which led to identification of this control on the embryo's spatial organization. Indicates that molecular mechanisms underlying development may be much more universal than previously…
Rule-based graph theory to enable exploration of the space system architecture design space
NASA Astrophysics Data System (ADS)
Arney, Dale Curtis
The primary goal of this research is to improve upon system architecture modeling in order to enable the exploration of design space options. A system architecture is the description of the functional and physical allocation of elements and the relationships, interactions, and interfaces between those elements necessary to satisfy a set of constraints and requirements. The functional allocation defines the functions that each system (element) performs, and the physical allocation defines the systems required to meet those functions. Trading the functionality between systems leads to the architecture-level design space that is available to the system architect. The research presents a methodology that enables the modeling of complex space system architectures using a mathematical framework. To accomplish the goal of improved architecture modeling, the framework meets five goals: technical credibility, adaptability, flexibility, intuitiveness, and exhaustiveness. The framework is technically credible, in that it produces an accurate and complete representation of the system architecture under consideration. The framework is adaptable, in that it provides the ability to create user-specified locations, steady states, and functions. The framework is flexible, in that it allows the user to model system architectures to multiple destinations without changing the underlying framework. The framework is intuitive for user input while still creating a comprehensive mathematical representation that maintains the necessary information to completely model complex system architectures. Finally, the framework is exhaustive, in that it provides the ability to explore the entire system architecture design space. After an extensive search of the literature, graph theory presents a valuable mechanism for representing the flow of information or vehicles within a simple mathematical framework. 
Graph theory has been used in developing mathematical models of many transportation and network flow problems in the past, where nodes represent physical locations and edges represent the means by which information or vehicles travel between those locations. In space system architecting, expressing the physical locations (low-Earth orbit, low-lunar orbit, etc.) and steady states (interplanetary trajectory) as nodes and the different means of moving between the nodes (propulsive maneuvers, etc.) as edges formulates a mathematical representation of this design space. The selection of a given system architecture using graph theory entails defining the paths that the systems take through the space system architecture graph. A path through the graph is defined as a list of edges that are traversed, which in turn defines functions performed by the system. A structure to compactly represent this information is a matrix, called the system map, in which the column indices are associated with the systems that exist and row indices are associated with the edges, or functions, to which each system has access. Several contributions have been added to the state of the art in space system architecture analysis. The framework adds the capability to rapidly explore the design space without the need to limit trade options or the need for user interaction during the exploration process. The unique mathematical representation of a system architecture, through the use of the adjacency, incidence, and system map matrices, enables automated design space exploration using stochastic optimization processes. The innovative rule-based graph traversal algorithm ensures functional feasibility of each system architecture that is analyzed, and the automatic generation of the system hierarchy eliminates the need for the user to manually determine the relationships between systems during or before the design space exploration process. 
Finally, the rapid evaluation of system architectures for various mission types enables analysis of the system architecture design space for multiple destinations within an evolutionary exploration program. (Abstract shortened by UMI.).
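The graph representation described above can be sketched compactly: nodes are locations or steady states, edges are maneuvers (functions), and a "system map" matrix records which system covers which edge. The node, edge, and system names below are illustrative assumptions, not taken from the thesis:

```python
# Nodes: physical locations / steady states; edges: maneuvers between them.
nodes = ["Earth", "LEO", "LLO", "LunarSurface"]
edges = [("Earth", "LEO"), ("LEO", "LLO"), ("LLO", "LunarSurface")]

# Adjacency matrix: adj[i][j] = 1 if a maneuver connects node i to node j.
n = len(nodes)
adj = [[0] * n for _ in range(n)]
for u, v in edges:
    adj[nodes.index(u)][nodes.index(v)] = 1

# System map: rows = edges (functions), columns = systems; a 1 means that
# system performs that function in this candidate architecture.
systems = ["LaunchVehicle", "TransferStage", "Lander"]
system_map = [[1, 0, 0],   # Earth -> LEO    : launch vehicle
              [0, 1, 0],   # LEO -> LLO      : transfer stage
              [0, 0, 1]]   # LLO -> surface  : lander

# A path through the graph (a list of edge indices) is functionally
# feasible only if every traversed edge is covered by some system.
path = [0, 1, 2]
feasible = all(any(system_map[e]) for e in path)
print(feasible)  # True
```

In the thesis the rule-based traversal and stochastic optimization operate over exactly these kinds of matrices; this sketch shows only the data structures, not the search itself.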
Open architecture design and approach for the Integrated Sensor Architecture (ISA)
NASA Astrophysics Data System (ADS)
Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael
2015-05-01
Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration, and is not designed for specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, and supported with common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design to support interoperability.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-09
...: Digital systems architecture composed of several connected networks. The proposed network architecture..., communication, and navigation systems (Aircraft Control Domain), 2. Airline business and administrative support... system architectures. Furthermore, 14 CFR regulations and current system safety assessment policy and...
Born to run: creating the muscle fiber.
Schejter, Eyal D; Baylies, Mary K
2010-10-01
From the muscles that control the blink of your eye to those that allow you to walk, the basic architecture of muscle is the same: muscles consist of bundles of the unit muscle cell, the muscle fiber. The unique morphology of the individual muscle fiber is dictated by the functional demands necessary to generate and withstand the forces of contraction, which in turn lead to movement. Contractile muscle fibers are elongated, syncytial cells, which interact with both the nervous and skeletal systems to govern body motion. In this review, we focus on three key cell-cell and cell-matrix contact processes that are necessary to create this exquisitely specialized cell: cell fusion, cell elongation, and establishment of a myotendinous junction. We address these processes by highlighting recent findings from the Drosophila model system. Copyright © 2010 Elsevier Ltd. All rights reserved.
Survivability design for a hybrid underwater vehicle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Biao; Wu, Chao; Li, Xiang
A novel hybrid underwater robotic vehicle (HROV) capable of working to the full ocean depth has been developed. The battery powered vehicle operates in two modes: as an untethered autonomous vehicle in autonomous underwater vehicle (AUV) mode, and under remote control connected to the surface vessel by a lightweight, fiber optic tether in remotely operated vehicle (ROV) mode. Considering the hazardous underwater environment at the limiting depth and the hybrid operating modes, survivability has been placed on an equal level with the other design attributes of the HROV since the beginning of the project. This paper reports the survivability design elements for the HROV, including basic vehicle design of integrated navigation and integrated communication, emergency recovery strategy, distributed architecture, redundant bus, dual battery package, emergency jettison system, and self-repairing control system.
2015-05-01
Achieving Better Buying Power through Acquisition of Open Architecture Software Systems for Web-Based and Mobile Devices
Walt Scacchi and Thomas...
Examines emerging challenges in achieving Better Buying Power (BBP) through the acquisition of open architecture (OA) software systems for Web-based and mobile devices.
NASA Technical Reports Server (NTRS)
Nauda, A.
1982-01-01
Performance and reliability models of alternate microcomputer architectures as a methodology for optimizing system design were examined. A methodology for selecting an optimum microcomputer architecture for autonomous operation of planetary spacecraft power systems was developed. Various microcomputer system architectures are analyzed to determine their application to spacecraft power systems. It is suggested that no standardization formula or common set of guidelines exists which provides an optimum configuration for a given set of specifications.
NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations
NASA Astrophysics Data System (ADS)
Frisbie, T. E.; Hall, C. M.
2006-12-01
Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.
System design in an evolving system-of-systems architecture and concept of operations
NASA Astrophysics Data System (ADS)
Rovekamp, Roger N., Jr.
Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc.) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., multi-objective optimization, genetic algorithms, etc.) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in literature and on the author's own approaches for power distribution strategies of future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.
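The power-sharing idea in the abstract, a control agent mediating between infrastructure agents (suppliers) and exploration agents (loads), can be sketched as a simple priority-based allocation. The agent names, priorities, and numbers below are assumptions for illustration only, not values from the thesis:

```python
def allocate_power(supply_kw, loads):
    """Control-agent sketch: grant power to loads in priority order
    (lower priority number = more critical), curtailing whatever
    demand exceeds the remaining supply.

    loads: list of (name, demand_kw, priority) tuples.
    Returns a dict mapping load name -> granted kW.
    """
    allocation = {}
    remaining = supply_kw
    for name, demand, _prio in sorted(loads, key=lambda load: load[2]):
        granted = min(demand, remaining)
        allocation[name] = granted
        remaining -= granted
    return allocation

loads = [("habitat", 10.0, 0), ("isru_plant", 8.0, 1), ("rover_charge", 5.0, 2)]
print(allocate_power(20.0, loads))
# habitat gets 10 kW, the ISRU plant 8 kW; rover charging is curtailed to 2 kW
```

A genetic algorithm or multi-objective optimizer of the kind the thesis describes would then search over grid topologies and agent parameters, using an allocation rule like this inside each evaluation.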
A Ground Systems Architecture Transition for a Distributed Operations System
NASA Technical Reports Server (NTRS)
Sellers, Donna; Pitts, Lee; Bryant, Barry
2003-01-01
The Marshall Space Flight Center (MSFC) Ground Systems Department (GSD) recently undertook an architecture change in the product line that serves the ISS program. As a result, the architecture tradeoffs between data system product lines that serve remote users versus those that serve control center flight control teams were explored extensively. This paper describes the resulting architecture that will be used in the International Space Station (ISS) payloads program, and the resulting functional breakdown of the products that support this architecture. It also describes the lessons learned from the path that was followed, as a migration of products caused the need to reevaluate the allocation of functions across the architecture. The result is a set of innovative ground system solutions that is scalable, so it can support facilities of wide-ranging sizes, from a small site up to large control centers. Effective use of system automation, custom components, design optimization for data management, data storage, data transmission, and advanced local and wide area networking architectures, plus the effective use of Commercial-Off-The-Shelf (COTS) products, provides flexible remote ground system options that can be tailored to the needs of each user. This paper offers a description of the efficiency and effectiveness of the ground systems architectural options that have been implemented, and includes successful implementation examples and lessons learned.
NASA Technical Reports Server (NTRS)
Watson, Steve; Orr, Jim; O'Neil, Graham
2004-01-01
A mission-systems architecture based on a highly modular "systems of systems" infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is absolutely essential for an affordable and sustainable space exploration program. This architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimum sustaining engineering. This paper proposes such an architecture. Lessons learned from the space shuttle program are applied to help define and refine the model.
Paranoia.Ada: Sample output reports
NASA Technical Reports Server (NTRS)
1986-01-01
Paranoia.Ada is a program to diagnose floating point arithmetic in the context of the Ada programming language. The program evaluates the quality of a floating point arithmetic implementation with respect to the proposed IEEE Standards P754 and P854. Paranoia.Ada is derived from the original BASIC programming language version of Paranoia. It replicates in Ada the test algorithms originally implemented in BASIC and adheres to the evaluation criteria established by W. M. Kahan. Paranoia.Ada incorporates a major structural redesign and employs applicable Ada architectural and stylistic features.
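The flavor of Kahan's Paranoia tests can be shown with a small probe, rendered here in Python rather than Ada or BASIC, and much simpler than the real program: it deduces the radix and precision of the host floating-point arithmetic purely from the behavior of rounding. This is an illustrative sketch, not code from Paranoia.Ada itself.

```python
def find_radix_and_precision():
    """Paranoia-style probe: infer the floating-point radix and
    significand precision from rounding behavior alone."""
    # Grow w by doubling until w + 1 is no longer exactly representable,
    # i.e. (w + 1) - w != 1: w has reached the precision limit.
    w = 1.0
    while (w + 1.0) - w == 1.0:
        w *= 2.0
    # The smallest integer y with w + y != w reveals the radix.
    y = 1.0
    while (w + y) == w:
        y += 1.0
    radix = int((w + y) - w)
    # Count how many radix digits the significand carries.
    precision = 0
    b = 1.0
    while (b + 1.0) - b == 1.0:
        precision += 1
        b *= radix
    return radix, precision

print(find_radix_and_precision())  # → (2, 53) on IEEE-754 double hardware
```

The real Paranoia goes much further, probing guard digits, rounding modes, underflow behavior, and exception handling against the P754/P854 criteria.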
DOT National Transportation Integrated Search
1991-07-01
A system architecture is the master building plan. It can be thought of as the framework that conceptually describes how components interact and work together to achieve total system goals and objectives. Ideally, a system architecture provides for a...
Functional Interface Considerations within an Exploration Life Support System Architecture
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Sargusingh, Miriam J.; Toomarian, Nikzad
2016-01-01
As notional life support system (LSS) architectures are developed and evaluated, myriad options must be considered pertaining to process technologies, components, and equipment assemblies. Each option must be evaluated relative to its impact on key functional interfaces within the LSS architecture. A leading notional architecture has been developed to guide the path toward realizing future crewed space exploration goals. This architecture includes atmosphere revitalization, water recovery and management, and environmental monitoring subsystems. Guiding requirements for developing this architecture are summarized and important interfaces within the architecture are discussed. The role of environmental monitoring within the architecture is described.