Shalom, Erez; Shahar, Yuval; Lunenfeld, Eitan
2016-02-01
Design, implement, and evaluate a new architecture for realistic continuous guideline (GL)-based decision support, based on a series of requirements that we have identified, such as support for continuous care, for multiple task types, and for data-driven and user-driven modes. We designed and implemented a new continuous GL-based support architecture, PICARD, which accesses a temporal reasoning engine, and provides several different types of application interfaces. We present the new architecture in detail in the current paper. To evaluate the architecture, we first performed a technical evaluation of the PICARD architecture, using 19 simulated scenarios in the preeclampsia/toxemia domain. We then performed a functional evaluation with the help of two domain experts, by generating patient records that simulate 60 decision points from six clinical guideline-based scenarios, lasting from two days to four weeks. Finally, 36 clinicians made manual decisions in half of the scenarios, and had access to the automated GL-based support in the other half. The measures used in all three experiments were correctness and completeness of the decisions relative to the GL. Mean correctness and completeness in the technical evaluation were 1±0.0 and 0.96±0.03 respectively. The functional evaluation produced only several minor comments from the two experts, mostly regarding the output's style; otherwise the system's recommendations were validated. In the clinically oriented evaluation, the 36 clinicians applied manually approximately 41% of the GL's recommended actions. Completeness increased to approximately 93% when using PICARD. Manual correctness was approximately 94.5%, and remained similar when using PICARD; but while 68% of the manual decisions included correct but redundant actions, only 3% of the actions included in decisions made when using PICARD were redundant. The PICARD architecture is technically feasible and is functionally valid, and addresses the realistic continuous GL-based application requirements that we have defined; in particular, the requirement for care over significant time frames. The use of the PICARD architecture in the domain we examined resulted in enhanced completeness and in reduction of redundancies, and is potentially beneficial for general GL-based management of chronic patients. Copyright © 2015 Elsevier Inc. All rights reserved.
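As a companion to the reported measures, the following is a minimal sketch (in Python, with hypothetical action labels; it is not PICARD's implementation) of one plausible way to operationalize correctness and completeness of a decision relative to the GL-recommended actions, along with the redundancy rate discussed above.

```python
# Hypothetical sketch (not the PICARD implementation): one way to score a
# clinician's decision set against guideline (GL)-recommended actions, using
# correctness (share of performed actions endorsed by the GL) and
# completeness (share of GL-recommended actions actually performed).

def score_decision(performed: set, recommended: set) -> dict:
    correct = performed & recommended
    redundant = performed - recommended          # e.g. repeated or unneeded tests
    correctness = len(correct) / len(performed) if performed else 1.0
    completeness = len(correct) / len(recommended) if recommended else 1.0
    return {
        "correctness": correctness,
        "completeness": completeness,
        "redundancy_rate": len(redundant) / len(performed) if performed else 0.0,
    }

# Example: one decision point in a preeclampsia scenario (illustrative labels only).
recommended = {"measure_bp", "order_proteinuria_test", "assess_fetal_wellbeing"}
performed = {"measure_bp", "order_proteinuria_test", "repeat_cbc"}
print(score_decision(performed, recommended))
# -> correctness 0.67, completeness 0.67, redundancy_rate 0.33
```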
López, Diego M; Blobel, Bernd; Gonzalez, Carolina
2010-01-01
Requirement analysis, design, implementation, evaluation, use, and maintenance of semantically interoperable Health Information Systems (HIS) have to be based on eHealth standards. HIS-DF is a comprehensive approach for HIS architectural development based on standard information models and vocabulary. The empirical validity of HIS-DF has not been demonstrated so far. Through an empirical experiment, the paper demonstrates that, by using HIS-DF and HL7 information models, the semantic quality of a HIS architecture can be improved compared to architectures developed using the traditional RUP process. Semantic quality of the architecture has been measured in terms of the model's completeness and validity metrics. The experimental results demonstrated an increased completeness of 14.38% and an increased validity of 16.63% when using the HIS-DF and HL7 information models in a sample HIS development project. Quality assurance of the system architecture in earlier stages of HIS development presumes an increased quality of final HIS systems, which suggests an indirect impact on patient care.
24 CFR 905.314 - Cost and other limitations.
Code of Federal Regulations, 2014 CFR
2014-04-01
... excluding any costs related to lead-based paint or asbestos testing, in-house architectural and engineering... lead-based paint or asbestos testing, in-house Architectural and Engineering work, or other special... completion of the project, the actual project cost is determined based upon the amount of public housing...
GPU-completeness: theory and implications
NASA Astrophysics Data System (ADS)
Lin, I.-Jong
2011-01-01
This paper formalizes a major insight into a class of algorithms that relate parallelism and performance. The purpose of this paper is to define a class of algorithms that trades off parallelism for quality of result (e.g. visual quality, compression rate), and we propose a similar method for algorithmic classification based on NP-Completeness techniques, applied toward parallel acceleration. We define this class of algorithms as "GPU-Complete" and postulate the necessary properties of an algorithm for admission into this class. We also formally relate this algorithmic space to the space of imaging algorithms. This concept is based upon our experience in the print production area, where GPUs (Graphic Processing Units) have shown a substantial cost/performance advantage within the context of HP-delivered enterprise services and commercial printing infrastructure. While CPUs and GPUs are converging in their underlying hardware and functional blocks, their system behaviors are clearly distinct in many ways: memory system design, programming paradigms, and massively parallel SIMD architecture. There are applications that are clearly suited to each architecture: for the CPU, language compilation, word processing, operating systems, and other applications that are highly sequential in nature; for the GPU, video rendering, particle simulation, pixel color conversion, and other problems clearly amenable to massive parallelization. While GPUs are establishing themselves as a second computing architecture distinct from CPUs, their end-to-end system cost/performance advantage in certain parts of computation informs the structure of algorithms and their efficient parallel implementations. While GPUs are merely one type of architecture for parallelization, we show that their introduction into the design space of printing systems demonstrates the trade-offs against competing multi-core, FPGA, and ASIC architectures. While each architecture has its own optimal application, we believe that the selection of architecture can be defined in terms of properties of GPU-Completeness. For a well-defined subset of algorithms, GPU-Completeness is intended to connect parallelism, algorithms, and efficient architectures into a unified framework to show that multiple layers of parallel implementation are guided by the same underlying trade-off.
2017-10-01
to patient safety by addressing key methodological and conceptual gaps in healthcare simulation-based team training. The investigators are developing...primary outcome of Aim 1a is a conceptually and methodologically sound training design architecture that supports the development and integration of team...should be delivered. This subtask was delayed by approximately 1 month and is now completed. Completed Evaluation of existing experimental dataset to
A U.S. perspective on the human exploration and expansion on the planet Mars
NASA Technical Reports Server (NTRS)
Roberts, Barney B.; Connolly, John F.
1992-01-01
A NASA perspective on the human exploration of Mars is presented which is based on the fundamental background available from the many previous studies. A hypothetical architecture of the Mars surface system is described which represents the complete spectrum of envisioned activities. Using the Strategic Implementation Architecture it is possible to construct a thoughtful roadmap which would enable a logical and flexible evolution of missions. Based on that architecture a suite of Martian surface elements is proposed to provide increasing levels of capability to the maturing infrastructure.
A Systolic Array-Based FPGA Parallel Architecture for the BLAST Algorithm
Guo, Xinyu; Wang, Hong; Devabhaktuni, Vijay
2012-01-01
A design of a systolic array-based Field Programmable Gate Array (FPGA) parallel architecture for the Basic Local Alignment Search Tool (BLAST) algorithm is proposed. BLAST is a heuristic biological sequence alignment algorithm which has been used by bioinformatics experts. In contrast to other designs that detect at most one hit per clock cycle, our design applies a Multiple Hits Detection Module, a pipelined systolic array that searches for multiple hits in a single clock cycle. Further, we designed a Hits Combination Block which combines overlapping hits from the systolic array into one hit. These implementations complete the first and second steps of the BLAST architecture and achieve significant speedup compared with previously published architectures. PMID:25969747
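For readers unfamiliar with the first two BLAST steps that the systolic array accelerates, the following Python sketch is a plain software reference model of word-hit detection and the combination of overlapping hits on a diagonal; it is illustrative only and does not reflect the paper's hardware pipeline or parameters.

```python
# Minimal software sketch of BLAST's first two steps that the FPGA systolic
# array parallelizes: (1) detect w-mer "hits" between a query and a subject
# sequence, and (2) combine overlapping hits that lie on the same diagonal.

from collections import defaultdict

def find_hits(query: str, subject: str, w: int = 3):
    """Return (q_pos, s_pos) pairs where a length-w word matches exactly."""
    words = defaultdict(list)
    for i in range(len(query) - w + 1):
        words[query[i:i + w]].append(i)
    hits = []
    for j in range(len(subject) - w + 1):
        for i in words.get(subject[j:j + w], ()):
            hits.append((i, j))
    return hits

def combine_hits(hits, w: int = 3):
    """Merge hits that overlap or touch on the same diagonal (q_pos - s_pos)."""
    by_diag = defaultdict(list)
    for i, j in hits:
        by_diag[i - j].append(i)
    merged = []
    for diag, starts in by_diag.items():
        starts.sort()
        run_start = prev = starts[0]
        for s in starts[1:]:
            if s <= prev + w:                 # overlapping or adjacent word hits
                prev = s
            else:
                merged.append((diag, run_start, prev + w))
                run_start = prev = s
        merged.append((diag, run_start, prev + w))
    return merged                             # (diagonal, query_start, query_end) spans

hits = find_hits("GATTACAGATTACA", "TTACAGATT")
print(combine_hits(hits))
```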
The architecture of enterprise hospital information system.
Lu, Xudong; Duan, Huilong; Li, Haomin; Zhao, Chenhui; An, Jiye
2005-01-01
Because of the complexity of the hospital environment, there exist many medical information systems from different vendors with incompatible structures. In order to establish an enterprise hospital information system, the integration among these heterogeneous systems must be considered. Complete integration should cover three aspects: data integration, function integration and workflow integration. However, most previous architecture designs did not accomplish such complete integration. This article offers an architecture design for the enterprise hospital information system based on the concept of a digital neural network system in the hospital. It covers all three aspects of integration, and eventually achieves the target of one virtual data center with an Enterprise Viewer for users of different roles. The initial implementation of the architecture in the 5-year Digital Hospital Project in Huzhou Central Hospital of Zhejiang Province is also described.
A digital protection system incorporating knowledge based learning
NASA Astrophysics Data System (ADS)
Watson, Karan; Russell, B. Don; McCall, Kurt
A digital system architecture used to diagnose the operating state and health of electric distribution lines and to generate actions for line protection is presented. The architecture is described functionally and, to a limited extent, at the hardware level. This architecture incorporates multiple analysis and fault-detection techniques utilizing a variety of parameters. In addition, a knowledge-based decision maker, a long-term memory retention and recall scheme, and a learning environment are described. Preliminary laboratory implementations of the system elements have been completed. Enhanced protection for electric distribution feeders is provided by this system. Advantages of the system are enumerated.
Software synthesis using generic architectures
NASA Technical Reports Server (NTRS)
Bhansali, Sanjay
1993-01-01
A framework for synthesizing software systems based on abstracting software system designs and the design process is described. The result of such an abstraction process is a generic architecture and the process knowledge for customizing the architecture. The customization process knowledge is used to assist a designer in customizing the architecture, as opposed to completely automating the design of systems. Our approach is illustrated using an implemented example of a generic tracking architecture that was customized in two different domains. How the designs produced using KASE compare to the original designs of the two systems is discussed, and current work and plans for extending KASE to other application areas are described.
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
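The blackboard/plugin interaction described above can be approximated in ordinary code with queues standing in for CSP channels. The sketch below is a hypothetical illustration of that pattern only, not the authors' CSP model or Cougaar code.

```python
# Illustrative sketch of the blackboard/plugin communication pattern: each
# plugin publishes objects to the blackboard over a shared input channel, and
# the blackboard broadcasts received objects back to every subscribed plugin.

import queue
import threading

class Blackboard:
    def __init__(self):
        self.in_chan = queue.Queue()          # channel: plugins -> blackboard
        self.subscribers = []                 # channels: blackboard -> plugins

    def subscribe(self):
        chan = queue.Queue()
        self.subscribers.append(chan)
        return chan

    def run(self, n_events):
        for _ in range(n_events):
            obj = self.in_chan.get()          # blocking, CSP-style receive
            for chan in self.subscribers:
                chan.put(obj)                 # broadcast to every plugin

def plugin(name, bb, out_chan, n_events):
    bb.in_chan.put(f"{name}: task-request")
    for _ in range(n_events):
        print(name, "observed", out_chan.get())

bb = Blackboard()
chans = [bb.subscribe() for _ in range(2)]
threads = [threading.Thread(target=plugin, args=(f"plugin{i}", bb, chans[i], 2))
           for i in range(2)]
for t in threads: t.start()
bb.run(2)
for t in threads: t.join()
```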
ERIC Educational Resources Information Center
Travis, James L., III
2014-01-01
This study investigated how and to what extent the development and use of the OV-5a operational architecture decomposition tree (OADT) from the Department of Defense (DoD) Architecture Framework (DoDAF) affects requirements analysis with respect to complete performance metrics for performance-based services acquisition of ICT under rigid…
Key Technologies of Phone Storage Forensics Based on ARM Architecture
NASA Astrophysics Data System (ADS)
Zhang, Jianghan; Che, Shengbing
2018-03-01
Smart phones mainly run one of three mobile platform operating systems: Android, iOS, and Windows Phone. Android smart phones have the largest market share, and their processor chips are almost all based on the ARM architecture. The memory address mapping mechanism of the ARM architecture differs from that of the x86 architecture. To perform forensics on an Android smart phone, we need to understand three key technologies: memory data acquisition, the conversion mechanism from virtual addresses to physical addresses, and how to find the system's key data. This article presents a viable solution to these three issues that does not rely on the operating system API.
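As an illustration of the second key technology, virtual-to-physical address conversion, the sketch below walks a simplified ARM short-descriptor (ARMv7-style) two-level page table in Python. The table contents and the read32 helper are hypothetical, and a real forensic tool must also handle supersections, large pages, and domain/permission bits.

```python
# Simplified sketch of ARM short-descriptor address translation; page-table
# contents below are made up for the example.

def translate(va: int, ttbr: int, read32) -> int:
    """Map a 32-bit virtual address to a physical address.
    read32(pa) must return the 32-bit word stored at physical address pa."""
    l1_index = (va >> 20) & 0xFFF                       # VA[31:20]
    l1_desc = read32((ttbr & ~0x3FFF) + l1_index * 4)
    if l1_desc & 0b11 == 0b10:                          # 1 MB section
        return (l1_desc & 0xFFF00000) | (va & 0x000FFFFF)
    if l1_desc & 0b11 == 0b01:                          # pointer to L2 table
        l2_base = l1_desc & 0xFFFFFC00
        l2_index = (va >> 12) & 0xFF                    # VA[19:12]
        l2_desc = read32(l2_base + l2_index * 4)
        if l2_desc & 0b10:                              # 4 KB small page
            return (l2_desc & 0xFFFFF000) | (va & 0xFFF)
    raise ValueError("translation fault at VA 0x%08x" % va)

# Toy example: one section entry mapping VA 0x40000000-0x400FFFFF to PA 0x80000000.
memory = {0x00004000 + 0x400 * 4: 0x80000000 | 0b10}   # L1 entry for VA[31:20] = 0x400
phys = translate(0x40001234, 0x00004000, lambda pa: memory.get(pa, 0))
print(hex(phys))    # 0x80001234
```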
Pax permanent Martian base: Space architecture for the first human habitation on Mars, volume 5
NASA Technical Reports Server (NTRS)
Huebner-Moths, Janis; Fieber, Joseph P.; Rebholz, Patrick J.; Paruleski, Kerry L.; Moore, Gary T. (Editor)
1992-01-01
America at the Threshold: Report of the Synthesis Group on America's Space Exploration Initiative (the 'Synthesis Report,' sometimes called the Stafford Report after its astronaut chair, published in 1991) recommended that NASA explore what it called four 'architectures,' i.e., four different scenarios for habitation on Mars. The Advanced Design Program in Space Architecture at the University of Wisconsin-Milwaukee supported this report and two of its scenarios--'Architecture 1' and 'Architecture 4'--during the spring of 1992. This report investigates the implications of different mission scenarios, the Martian environment, supporting technologies, and especially human factors and environment-behavior considerations for the design of the first permanent Martian base. The report comprises sections on mission analysis, implications of the Martian atmosphere and geologic environment, development of habitability design requirements based on environment-behavior and human factors research, and a full design proposal (concept design and design development) for the first permanent Martian base and habitat. The design is presented in terms of a base site plan, master plan based on a Mars direct scenario phased through IOC, and design development details of a complete Martian habitat for 18 crew members including all laboratory, mission control, and crew support spaces.
The Aeronautical Data Link: Taxonomy, Architectural Analysis, and Optimization
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Goode, Plesent W.
2002-01-01
The future Communication, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) System will rely on global satellite navigation, and ground-based and satellite based communications via Multi-Protocol Networks (e.g. combined Aeronautical Telecommunications Network (ATN)/Internet Protocol (IP)) to bring about needed improvements in efficiency and safety of operations to meet increasing levels of air traffic. This paper will discuss the development of an approach that completely describes optimal data link architecture configuration and behavior to meet the multiple conflicting objectives of concurrent and different operations functions. The practical application of the approach enables the design and assessment of configurations relative to airspace operations phases. The approach includes a formal taxonomic classification, an architectural analysis methodology, and optimization techniques. The formal taxonomic classification provides a multidimensional correlation of data link performance with data link service, information protocol, spectrum, and technology mode; and to flight operations phase and environment. The architectural analysis methodology assesses the impact of a specific architecture configuration and behavior on the local ATM system performance. Deterministic and stochastic optimization techniques maximize architectural design effectiveness while addressing operational, technology, and policy constraints.
Lunar Outpost Life Support Architecture Study Based on a High-Mobility Exploration Scenario
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2010-01-01
This paper presents results of a life support architecture study based on a 2009 NASA lunar surface exploration scenario known as Scenario 12. The study focuses on the assembly complete outpost configuration and includes pressurized rovers as part of a distributed outpost architecture in both stand-alone and integrated configurations. A range of life support architectures are examined reflecting different levels of closure and distributed functionality. Monte Carlo simulations are used to assess the sensitivity of results to volatile high-impact mission variables, including the quantity of residual Lander oxygen and hydrogen propellants available for scavenging, the fraction of crew time away from the outpost on excursions, total extravehicular activity hours, and habitat leakage. Surpluses or deficits of water and oxygen are reported for each architecture, along with fixed and 10-year total equivalent system mass estimates relative to a reference case. System robustness is discussed in terms of the probability of no water or oxygen resupply as determined from the Monte Carlo simulations.
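The following Python sketch illustrates the style of Monte Carlo sensitivity analysis described above: uncertain mission variables are sampled and the probability of a water deficit is estimated. All distributions and coefficients are invented placeholders, not Scenario 12 values.

```python
# Illustrative Monte Carlo sketch: sample uncertain mission variables and
# estimate the probability that the outpost ends a year with a water deficit.

import random

def water_balance(sample):
    """Yearly water balance (kg); positive means surplus. Coefficients are placeholders."""
    recovered = sample["recovery_rate"] * sample["water_used"]
    scavenged = 0.10 * sample["residual_propellant_kg"]        # assumed usable fraction
    eva_loss = 0.24 * sample["eva_hours"]                      # assumed kg lost per EVA hour
    leak_loss = 365 * sample["habitat_leak_kg_per_day"]
    initial_stock = 2_000                                      # kg delivered with the outpost
    return initial_stock + recovered + scavenged - sample["water_used"] - eva_loss - leak_loss

def probability_of_deficit(n=10_000, seed=1):
    rng = random.Random(seed)
    deficits = 0
    for _ in range(n):
        sample = {
            "water_used": rng.gauss(10_000, 500),              # kg/year
            "recovery_rate": rng.uniform(0.80, 0.95),
            "residual_propellant_kg": rng.uniform(0, 2_000),
            "eva_hours": rng.uniform(500, 1_500),
            "habitat_leak_kg_per_day": rng.uniform(0.1, 1.0),
        }
        deficits += water_balance(sample) < 0
    return deficits / n

print("P(water deficit) ≈", probability_of_deficit())
```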
SME2EM: Smart mobile end-to-end monitoring architecture for life-long diseases.
Serhani, Mohamed Adel; Menshawy, Mohamed El; Benharref, Abdelghani
2016-01-01
Monitoring life-long diseases requires continuous measurements and recording of physical vital signs. Most of these diseases are manifested through unexpected and non-uniform occurrences and behaviors. It is impractical to keep patients in hospitals, health-care institutions, or even at home for long periods of time. Monitoring solutions based on smartphones combined with mobile sensors and wireless communication technologies are a potential candidate to support complete mobility-freedom, not only for patients, but also for physicians. However, existing monitoring architectures based on smartphones and modern communication technologies are not suitable to address some challenging issues, such as intensive and big data, resource constraints, data integration, and context awareness in an integrated framework. This manuscript provides a novel mobile-based end-to-end architecture for live monitoring and visualization of life-long diseases. The proposed architecture provides smartness features to cope with continuous monitoring, data explosion, dynamic adaptation, unlimited mobility, and constrained devices resources. The integration of the architecture's components provides information about diseases' recurrences as soon as they occur to expedite taking necessary actions, and thus prevent severe consequences. Our architecture system is formally model-checked to automatically verify its correctness against designers' desirable properties at design time. Its components are fully implemented as Web services with respect to the SOA architecture to be easy to deploy and integrate, and supported by Cloud infrastructure and services to allow high scalability, availability of processes and data being stored and exchanged. The architecture's applicability is evaluated through concrete experimental scenarios on monitoring and visualizing states of epileptic diseases. The obtained theoretical and experimental results are very promising and efficiently satisfy the proposed architecture's objectives, including resource awareness, smart data integration and visualization, cost reduction, and performance guarantee. Copyright © 2015 Elsevier Ltd. All rights reserved.
Evolution of System Architectures: Where Do We Need to Fail Next?
NASA Astrophysics Data System (ADS)
Bermudez, Luis; Alameh, Nadine; Percivall, George
2013-04-01
Innovation requires testing and failing. Thomas Edison was right when he said "I have not failed. I've just found 10,000 ways that won't work". For innovation and improvement of standards to happen, service Architectures have to be tested and tested. Within the Open Geospatial Consortium (OGC), testing of service architectures has occurred for the last 15 years. This talk will present an evolution of these service architectures and a possible future path. OGC is a global forum for the collaboration of developers and users of spatial data products and services, and for the advancement and development of international standards for geospatial interoperability. The OGC Interoperability Program is a series of hands-on, fast paced, engineering initiatives to accelerate the development and acceptance of OGC standards. Each initiative is organized in threads that provide focus under a particular theme. The first testbed, OGC Web Services phase 1, completed in 2003 had four threads: Common Architecture, Web Mapping, Sensor Web and Web Imagery Enablement. The Common Architecture was a cross-thread theme, to ensure that the Web Mapping and Sensor Web experiments built on a base common architecture. The architecture was based on the three main SOA components: Broker, Requestor and Provider. It proposed a general service model defining service interactions and dependencies; categorization of service types; registries to allow discovery and access of services; data models and encodings; and common services (WMS, WFS, WCS). For the latter, there was a clear distinction on the different services: Data Services (e.g. WMS), Application services (e.g. Coordinate transformation) and server-side client applications (e.g. image exploitation). The latest testbed, OGC Web Service phase 9, completed in 2012 had 5 threads: Aviation, Cross-Community Interoperability (CCI), Security and Services Interoperability (SSI), OWS Innovations and Compliance & Interoperability Testing & Evaluation (CITE). Compared to the first testbed, OWS-9 did not have a separate common architecture thread. Instead the emphasis was on brokering information models, securing them and making data available efficiently on mobile devices. The outcome is an architecture based on usability and non-intrusiveness while leveraging mediation of information models from different communities. This talk will use lessons learned from the evolution from OGC Testbed phase 1 to phase 9 to better understand how global and complex infrastructures evolve to support many communities including the Earth System Science Community.
A GaAs vector processor based on parallel RISC microprocessors
NASA Astrophysics Data System (ADS)
Misko, Tim A.; Rasset, Terry L.
A vector processor architecture based on the development of a 32-bit microprocessor using gallium arsenide (GaAs) technology has been developed. The McDonnell Douglas vector processor (MVP) will be fabricated completely from GaAs digital integrated circuits. The MVP architecture includes a vector memory of 1 megabyte, a parallel bus architecture with eight processing elements connected in parallel, and a control processor. The processing elements consist of a reduced instruction set CPU (RISC) with four floating-point coprocessor units and necessary memory interface functions. This architecture has been simulated for several benchmark programs including complex fast Fourier transform (FFT), complex inner product, trigonometric functions, and sort-merge routine. The results of this study indicate that the MVP can process a 1024-point complex FFT at a speed of 112 microsec (389 megaflops) while consuming approximately 618 W of power in a volume of approximately 0.1 ft-cubed.
Uribe, Gustavo A; Blobel, Bernd; López, Diego M; Ruiz, Alonso A
2015-01-01
The development of software supporting inter-disciplinary systems such as type 2 diabetes mellitus care requires the deployment of methodologies designed for this type of interoperability. The GCM framework allows the architectural description of such systems and the development of software solutions based on it. A first step of the GCM methodology is the definition of a generic architecture, followed by its specialization for specific use cases. This paper describes the specialization of the generic architecture of a system, supporting Type 2 diabetes mellitus glycemic control, for a pharmacotherapy use case. It focuses on the behavioral aspect of the system, i.e. the policy domain and the definition of the rules governing the system. The design of this architecture reflects the inter-disciplinary feature of the methodology. Finally, the resulting architecture allows building adaptive, intelligent and complete systems.
Component-Level Electronic-Assembly Repair (CLEAR) System Architecture
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.; Bradish, Martin A.; Juergens, Jeffrey R.; Lewis, Michael J.; Vrnak, Daniel R.
2011-01-01
This document captures the system architecture for a Component-Level Electronic-Assembly Repair (CLEAR) capability needed for electronics maintenance and repair of the Constellation Program (CxP). CLEAR is intended to improve flight system supportability and reduce the mass of spares required to maintain the electronics of human rated spacecraft on long duration missions. By necessity it allows the crew to make repairs that would otherwise be performed by Earth based repair depots. Because of practical knowledge and skill limitations of small spaceflight crews they must be augmented by Earth based support crews and automated repair equipment. This system architecture covers the complete system from ground-user to flight hardware and flight crew and defines an Earth segment and a Space segment. The Earth Segment involves database management, operational planning, and remote equipment programming and validation processes. The Space Segment involves the automated diagnostic, test and repair equipment required for a complete repair process. This document defines three major subsystems including, tele-operations that links the flight hardware to ground support, highly reconfigurable diagnostics and test instruments, and a CLEAR Repair Apparatus that automates the physical repair process.
Consistent model driven architecture
NASA Astrophysics Data System (ADS)
Niepostyn, Stanisław J.
2015-09-01
The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Subsequently the verification of consistency of these diagrams is needed in order to identify errors in requirements at the early stage of the development process. The verification of consistency is difficult due to a semi-formal nature of UML diagrams. We propose automatic verification of consistency of the series of UML diagrams originating from abstract models implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to generate automatically complete workflow applications from consistent and complete models developed from abstract models (e.g. Business Context Diagram). Therefore, our method can be used to check practicability (feasibility) of software architecture models.
Fall 2014 Data-Intensive Systems
2014-10-29
Oct 2014, Carnegie Mellon University. Big Data Systems: NoSQL and horizontal scaling are changing architecture principles by creating... Status: LEAP4BD is ready to pilot; the QuABase prototype is complete and covers 8 NoSQL/NewSQL implementations, with validation testing being completed... machine learning to automate population of the knowledge base, with an initial focus on the NoSQL/NewSQL technology domain, to be extended to create knowledge bases in other...
NASA Astrophysics Data System (ADS)
Arestova, M. L.; Bykovskii, A. Yu
1995-10-01
An architecture is proposed for a specialised optoelectronic multivalued logic processor based on the Allen-Givone algebra. The processor is intended for multiparametric processing of data arriving from a large number of sensors or for tackling spectral analysis tasks. The processor architecture makes it possible to obtain an approximate general estimate of the state of an object being diagnosed on a p-level scale. Optoelectronic systems are proposed for MAXIMUM, MINIMUM, and LITERAL logic gates, based on optical-frequency encoding of logic levels. Corresponding logic gates form a complete set of logic functions in the Allen-Givone algebra.
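For reference, the sketch below gives a plain software model of the gate set mentioned above for a p-level logic, using the commonly cited Allen-Givone definitions of MAXIMUM, MINIMUM, and the window LITERAL (those definitions are an assumption on our part; the paper realizes the gates optically with frequency-encoded levels).

```python
# Software reference sketch of a p-valued MAX/MIN/LITERAL gate set (here p = 4).
# The LITERAL is modeled as a window function returning the top logic level
# when its input falls inside [a, b], which is the usual textbook convention.

P = 4                      # number of logic levels: 0 .. P-1

def maximum(x: int, y: int) -> int:
    return max(x, y)

def minimum(x: int, y: int) -> int:
    return min(x, y)

def literal(x: int, a: int, b: int) -> int:
    """Window literal x(a,b): P-1 if a <= x <= b, else 0."""
    return P - 1 if a <= x <= b else 0

# Any p-valued function can be written as a MAX of MIN terms of literals,
# e.g. f(x1, x2) that outputs 2 when x1 is in {1,2} and x2 equals 3:
def f(x1: int, x2: int) -> int:
    return minimum(2, minimum(literal(x1, 1, 2), literal(x2, 3, 3)))

print([f(x1, 3) for x1 in range(P)])   # [0, 2, 2, 0]
```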
Baladrón, Carlos; Aguiar, Javier M; Calavia, Lorena; Carro, Belén; Sánchez-Esguevillas, Antonio; Hernández, Luis
2012-01-01
This paper presents a proposal for an Artificial Neural Network (ANN)-based architecture for completion and prediction of data retrieved by underwater sensors. Due to the specific conditions under which these sensors operate, it is not uncommon for them to fail, and maintenance operations are difficult and costly. Therefore, completion and prediction of the missing data can greatly improve the quality of the underwater datasets. A performance study using real data is presented to validate the approach, concluding that the proposed architecture is able to provide very low errors. The numbers show as well that the solution is especially suitable for cases where large portions of data are missing, while in situations where the missing values are isolated the improvement over other simple interpolation methods is limited.
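A minimal sketch of the idea, assuming scikit-learn is available, is shown below: a small neural network is trained on sliding windows of a synthetic "sensor" series and then used to fill a simulated gap. It illustrates the completion/prediction task only and is not the paper's ANN architecture.

```python
# Gap-filling sketch: train an MLP on windows of past values, then predict a
# missing span recursively. The series below is synthetic, not underwater data.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(1000)
series = np.sin(2 * np.pi * t / 50) + 0.1 * rng.standard_normal(t.size)

window = 20
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=0)
model.fit(X[:800], y[:800])

# Pretend samples 900-949 were lost; predict them recursively from history.
history = list(series[900 - window:900])
predicted = []
for _ in range(50):
    nxt = model.predict(np.array(history[-window:]).reshape(1, -1))[0]
    predicted.append(nxt)
    history.append(nxt)

true = series[900:950]
print("RMSE over the simulated gap:", np.sqrt(np.mean((np.array(predicted) - true) ** 2)))
```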
A Comparative Study on the Architecture Internet of Things and its’ Implementation method
NASA Astrophysics Data System (ADS)
Xiao, Zhiliang
2017-08-01
With the rapid development of science and technology, the Internet-based Internet of Things was born and has achieved good results. In order to further build a complete Internet of Things system and to realize its design, a comparative study of the indicators that make up the structure of the network of things is needed; on this basis, the ways in which the Internet of Things is connected must be examined in more depth, so as to unify the Internet of Things architecture and its implementation methods. This paper mainly analyzes two types of Internet of Things systems, makes a brief comparative study of their important indicators, and then introduces the connection and realization methods of the Internet of Things based on the concept and architecture of the Internet of Things.
Pax: A permanent base for human habitation of Mars
NASA Technical Reports Server (NTRS)
Moore, Gary T.; Rebholz, Patrick J.; Fieber, Joseph P.; Huebner-Moths, Janis; Paruleski, Kerry L.
1992-01-01
The Advanced Design Program in Space Architecture at the University of Wisconsin-Milwaukee supported the synthesis report and two of its scenarios - 'Architecture 1' and 'Architecture 4' - and the Weaver ExPO report on near-term extraterrestrial explorations during the spring of 1992. The project investigated the implications of different mission scenarios, the Martian environment, supporting technologies, and especially human factors and environment-behavior considerations for the design of the first permanent Martian base. This paper presents the results of that investigation. The paper summarizes site selection, development of habitability design requirements based on environment-behavior research, construction sequencing, and a full concept design and design development for a first permanent Martian base and habitat. The proposed design is presented in terms of an integrative mission scenario and master plan phased through initial operational configuration, base site plan, and design development details of a complete Martian habitat for 18 crew members including all laboratory, mission control, and crew support spaces.
NASA Astrophysics Data System (ADS)
Shakeri, Nadim; Jalili, Saeed; Ahmadi, Vahid; Rasoulzadeh Zali, Aref; Goliaei, Sama
2015-01-01
The problem of finding a Hamiltonian path in a graph, or deciding whether a graph has a Hamiltonian path or not, is an NP-complete problem. No exact algorithm is known that solves this problem using a polynomial amount of time and space. In this paper, we propose a two-dimensional (2-D) optical architecture based on optoelectronic devices such as micro-ring resonators, optical circulators and MEMS-based mirrors (MEMS-M) to solve the Hamiltonian path problem for undirected graphs in linear time. It uses a heuristic algorithm and employs n+1 different wavelengths of a light ray to check whether a Hamiltonian path exists on a graph with n vertices. If a Hamiltonian path exists, it reports the path. The device complexity of the proposed architecture is O(n²).
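For context, a classical (exponential-time) backtracking check for the same problem is sketched below in Python; it is useful for validating small instances and is not a model of the proposed optical architecture.

```python
# Reference backtracking search for a Hamiltonian path in an undirected graph.

def hamiltonian_path(n, edges):
    """Return a Hamiltonian path as a vertex list, or None if none exists."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def extend(path, visited):
        if len(path) == n:
            return path
        for w in adj[path[-1]]:
            if w not in visited:
                result = extend(path + [w], visited | {w})
                if result:
                    return result
        return None

    for start in range(n):
        result = extend([start], {start})
        if result:
            return result
    return None

# 5-vertex example: a Hamiltonian path exists even though the graph is sparse.
print(hamiltonian_path(5, [(0, 1), (1, 3), (3, 4), (0, 2), (2, 4)]))
```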
MonALISA, an agent-based monitoring and control system for the LHC experiments
NASA Astrophysics Data System (ADS)
Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.
2017-10-01
MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing program of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on the Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs or services, system control, and global optimization services for complex systems. A short overview and status of MonALISA is given in this paper.
Adaptive Architectures for Effects Based Operations
2006-08-12
[Figure 3: One-Point Crossover] 6.4 ECAD-EA Methodology. The previous two... The methodology that accomplishes this task is termed ECAD-EA (Effective Courses of Action Determination Using Evolutionary Algorithms). Besides a completely... items are given below followed by their explanations, while Figure 4 shows the inputs and outputs of the ECAD-EA methodology in the form of a block...
NASA Astrophysics Data System (ADS)
Pleros, Nikos; Maniotis, Pavlos; Alexoudi, Theonitsa; Fitsios, Dimitris; Vagionas, Christos; Papaioannou, Sotiris; Vyrsokinos, K.; Kanellos, George T.
2014-03-01
The processor-memory performance gap, commonly referred to as "Memory Wall" problem, owes to the speed mismatch between processor and electronic RAM clock frequencies, forcing current Chip Multiprocessor (CMP) configurations to consume more than 50% of the chip real-estate for caching purposes. In this article, we present our recent work spanning from Si-based integrated optical RAM cell architectures up to complete optical cache memory architectures for Chip Multiprocessor configurations. Moreover, we discuss on e/o router subsystems with up to Tb/s routing capacity for cache interconnection purposes within CMP configurations, currently pursued within the FP7 PhoxTrot project.
A Proposed Data Fusion Architecture for Micro-Zone Analysis and Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kevin McCarthy; Milos Manic
Data Fusion requires the ability to combine or “fuse” data from multiple data sources. Time Series Analysis is a data mining technique used to predict future values from a data set based upon past values. Unlike other data mining techniques, however, Time Series places special emphasis on periodicity and how seasonal and other time-based factors tend to affect trends over time. One of the difficulties encountered in developing generic time series techniques is the wide variability of the data sets available for analysis. This presents challenges all the way from the data gathering stage to results presentation. This paper presents an architecture designed and used to facilitate the collection of disparate data sets well suited to Time Series analysis as well as other predictive data mining techniques. Results show this architecture provides a flexible, dynamic framework for the capture and storage of a myriad of dissimilar data sets and can serve as a foundation from which to build a complete data fusion architecture.
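As a small illustration of the seasonality-aware prediction such an architecture is meant to feed, the sketch below implements a seasonal-naive forecast with a drift correction on synthetic data; the data and season length are placeholders, not micro-zone measurements.

```python
# Seasonal-naive forecast with drift: predict each future point as the value
# one season ago plus the average season-over-season change.

import math

def seasonal_naive_forecast(history, season_len, horizon):
    if len(history) < 2 * season_len or horizon > season_len:
        raise ValueError("need two full seasons of history and horizon <= season_len")
    diffs = [history[i] - history[i - season_len] for i in range(season_len, len(history))]
    seasonal_drift = sum(diffs) / len(diffs)
    tail = history[-season_len:]
    return [tail[h] + seasonal_drift for h in range(horizon)]

# Hourly energy use with a 24-hour cycle and a slow upward drift (synthetic).
history = [50 + 10 * math.sin(2 * math.pi * h / 24) + 0.01 * h for h in range(24 * 14)]
print(seasonal_naive_forecast(history, season_len=24, horizon=6))
```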
An eConsent-based System Architecture Supporting Cooperation in Integrated Healthcare Networks.
Bergmann, Joachim; Bott, Oliver J; Hoffmann, Ina; Pretschner, Dietrich P
2005-01-01
The economic need for efficient healthcare leads to cooperative shared care networks. A virtual electronic health record is required, which integrates patient-related information but reflects the distributed infrastructure and restricts access only to those health professionals involved in the care process. Our work aims at the specification and development of a system architecture fulfilling these requirements to be used in concrete regional pilot studies. Methodical analysis and specification have been performed in a healthcare network using the formal method and modelling tool MOSAIK-M. The complexity of the application field was reduced by focusing on the scenario of thyroid disease care, which still involves various forms of interdisciplinary cooperation. The result is an architecture for a secure distributed electronic health record for integrated care networks, specified in terms of a MOSAIK-M-based system model. The architecture proposes business processes, application services, and a sophisticated security concept, providing a platform for distributed document-based, patient-centred, and secure cooperation. A corresponding system prototype has been developed for pilot studies, using advanced application server technologies. The architecture combines a consolidated patient-centred document management with a decentralized system structure without needs for replication management. An eConsent-based approach assures that access to the distributed health record remains under control of the patient. The proposed architecture replaces message-based communication approaches, because it implements a virtual health record providing complete and current information. Acceptance of the new communication services depends on compatibility with the clinical routine. Unique and cross-institutional identification of a patient is also a challenge, but will lose significance with the establishment of common patient cards.
NASA Astrophysics Data System (ADS)
Oliynyk, Olena
2018-03-01
Khreschatyk is a page apart in the history of world architecture. While it has a number of distinct characteristics of totalitarian architecture, Khreschatyk is the only architectural ensemble of the period to combine national tradition with the exalted sentiment of Soviet architecture of the Stalin era. Also, it uniquely matched architecture and landscape. The façades have elements of Ukrainian baroque, which sets Khreschatyk apart from similar ensembles of the 1940s-1950s in other countries that mainly drew upon Neoclassicism or Modernism. While period architecture in other countries is typically marked by its grand scale and heavily accentuated civic spirit - complete with a denigration of the individual at the expense of the manifest greatness of Authority - Khreschatyk stands out for its pronounced harmony as an environment based on the careful preservation of old heritage, the skillful use of the landscape, and the introduction of traditional motifs, alongside an almost total lack of Soviet symbols. Unlike the grim grandness of totalitarian architecture in other countries, the facades of the residential buildings that line Khreschatyk emanate joie de vivre and admiration for the fertility of Ukrainian soil.
Howard, Réka; Carriquiry, Alicia L.; Beavis, William D.
2014-01-01
Parametric and nonparametric methods have been developed for purposes of predicting phenotypes. These methods are based on retrospective analyses of empirical data consisting of genotypic and phenotypic scores. Recent reports have indicated that parametric methods are unable to predict phenotypes of traits with known epistatic genetic architectures. Herein, we review parametric methods including least squares regression, ridge regression, Bayesian ridge regression, least absolute shrinkage and selection operator (LASSO), Bayesian LASSO, best linear unbiased prediction (BLUP), Bayes A, Bayes B, Bayes C, and Bayes Cπ. We also review nonparametric methods including Nadaraya-Watson estimator, reproducing kernel Hilbert space, support vector machine regression, and neural networks. We assess the relative merits of these 14 methods in terms of accuracy and mean squared error (MSE) using simulated genetic architectures consisting of completely additive or two-way epistatic interactions in an F2 population derived from crosses of inbred lines. Each simulated genetic architecture explained either 30% or 70% of the phenotypic variability. The greatest impact on estimates of accuracy and MSE was due to genetic architecture. Parametric methods were unable to predict phenotypic values when the underlying genetic architecture was based entirely on epistasis. Parametric methods were slightly better than nonparametric methods for additive genetic architectures. Distinctions among parametric methods for additive genetic architectures were incremental. Heritability, i.e., proportion of phenotypic variability, had the second greatest impact on estimates of accuracy and MSE. PMID:24727289
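The core finding can be reproduced in miniature with the hedged sketch below (a simplification, not the authors' simulation code): ridge regression predicts reasonably well when the simulated genetic architecture is additive but has essentially no predictive accuracy when the signal is purely two-way epistatic.

```python
# Compare ridge-regression prediction under additive vs. purely epistatic
# simulated architectures; genotype coding and effect sizes are illustrative.

import numpy as np
from numpy.linalg import solve

def ridge_fit_predict(X_train, y_train, X_test, lam=1.0):
    p = X_train.shape[1]
    beta = solve(X_train.T @ X_train + lam * np.eye(p), X_train.T @ y_train)
    return X_test @ beta

rng = np.random.default_rng(0)
n, p = 400, 50
X = rng.choice([-1, 0, 1], size=(n, p)).astype(float)     # F2-like genotype coding

def add_noise(y_signal, h2=0.7):
    noise = rng.standard_normal(n) * np.sqrt(np.var(y_signal) * (1 - h2) / h2)
    return y_signal + noise

y_additive = add_noise(X[:, :10] @ rng.standard_normal(10))
y_epistatic = add_noise(np.sum(X[:, :5] * X[:, 5:10], axis=1))   # pairwise interactions only

for label, y in [("additive", y_additive), ("epistatic", y_epistatic)]:
    tr, te = slice(0, 300), slice(300, 400)
    pred = ridge_fit_predict(X[tr], y[tr], X[te])
    acc = np.corrcoef(pred, y[te])[0, 1]          # prediction accuracy (correlation)
    mse = np.mean((pred - y[te]) ** 2)
    print(f"{label}: accuracy r = {acc:.2f}, MSE = {mse:.2f}")
```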
2013-01-01
Background Guanine-cytosine (GC) composition is an important feature of genomes. Likewise, amino acid composition is a distinct, but less valued, feature of proteomes. A major concern is that it is not clear what valuable information can be acquired from amino acid composition data. To address this concern, in-depth analyses of the amino acid composition of the complete proteomes from 63 archaea, 270 bacteria, and 128 eukaryotes were performed. Results Principal component analysis of the amino acid matrices showed that the main contributors to proteomic architecture were genomic GC variation, phylogeny, and environmental influences. GC pressure drove positive selection on Ala, Arg, Gly, Pro, Trp, and Val, and adverse selection on Asn, Lys, Ile, Phe, and Tyr. The physico-chemical framework of the complete proteomes withstood GC pressure by frequency complementation of GC-dependent amino acid pairs with similar physico-chemical properties. Gln, His, Ser, and Val were responsible for phylogeny and their constituted components could differentiate archaea, bacteria, and eukaryotes. Environmental niche was also a significant factor in determining proteomic architecture, especially for archaea for which the main amino acids were Cys, Leu, and Thr. In archaea, hyperthermophiles, acidophiles, mesophiles, psychrophiles, and halophiles gathered successively along the environment-based principal component. Concordance between proteomic architecture and the genetic code was also related closely to genomic GC content, phylogeny, and lifestyles. Conclusions Large-scale analyses of the complete proteomes of a wide range of organisms suggested that amino acid composition retained the trace of GC variation, phylogeny, and environmental influences during evolution. The findings from this study will help in the development of a global understanding of proteome evolution, and even biological evolution. PMID:24088322
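The central analysis step, PCA on a proteomes-by-amino-acids frequency matrix, can be sketched as follows. The toy data are fabricated (30 random "proteomes" with a GC-like nudge on Ala/Gly/Pro) and are not the study's archaeal, bacterial, and eukaryotic proteomes.

```python
# PCA of an amino acid composition matrix: rows are proteomes, the 20 columns
# are amino acid frequencies; scores place proteomes, loadings show which
# amino acids drive each component.

import numpy as np

AA = list("ACDEFGHIKLMNPQRSTVWY")

def pca(freq_matrix, n_components=2):
    X = freq_matrix - freq_matrix.mean(axis=0)        # center each amino acid column
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]   # proteome coordinates
    loadings = Vt[:n_components]                      # amino acid contributions
    explained = S**2 / np.sum(S**2)
    return scores, loadings, explained[:n_components]

rng = np.random.default_rng(1)
gc = rng.uniform(0.3, 0.7, size=30)                   # fake genomic GC content
base = rng.dirichlet(np.ones(20), size=30)
for i, aa in enumerate(AA):
    if aa in "AGP":                                   # GC-rich codons favor Ala/Gly/Pro
        base[:, i] += 0.05 * gc
base /= base.sum(axis=1, keepdims=True)

scores, loadings, explained = pca(base)
print("variance explained by PC1, PC2:", np.round(explained, 2))
print("PC1 loading for Ala:", round(loadings[0, AA.index("A")], 3))
```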
Collaboration pathway(s) using new tools for optimizing `operational' climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2015-09-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a long term solution requires transforming scientific missions into an optimized robust `operational' constellation that addresses the collective needs of policy makers, scientific communities and global academic users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent rule-based expert system (RBES) optimization modeling of the intended NPOESS architecture becomes a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures, by comparison/evaluation of alternatives and the sheer range of trade space explored. Optimization of climate monitoring architecture(s) for a partial list of ECV (essential climate variables) is explored and described in detail with dialogue on appropriate rule-based valuations. These optimization tool(s) suggest global collaboration advantages and elicit responses from the audience and climate science community. This paper will focus on recent research exploring joint requirement implications of the high profile NPOESS architecture and extends the research and tools to optimization for a climate centric case study. This reflects work from SPIE RS Conferences 2013 and 2014, abridged for simplification [30, 32]. First, the heavily securitized NPOESS architecture inspired the recent research question: was complexity (as a cost/risk factor) overlooked when considering the benefits of aggregating different missions onto a single platform? Now, years later, in a complete reversal, should agencies consider disaggregation as the answer? We'll discuss what some academic research suggests. Second, we use the GCOS requirements for earth climate observations via ECV (essential climate variables), many of which are collected from space-based sensors, and accept their definitions of global coverage, intended to ensure that the needs of major global and international organizations (UNFCCC and IPCC) are met, as a core objective. Consider how new optimization tools like rule-based engines (RBES) offer alternative methods of evaluating collaborative architectures and constellations. What would the trade space of optimized operational climate monitoring architectures of ECV look like? Third, using the RBES tool kit (2014), we demonstrate its application to a climate-centric rule-based decision engine that optimizes architectural trades of earth observation satellite systems, allowing comparisons to existing architectures and giving insights for global collaborative architectures. How difficult is it to pull together an optimized climate case study, utilizing for example 12 climate-based instruments on multiple existing platforms and a nominal handful of orbits, for the best cost and performance benefits against the collection requirements of a representative set of ECV? How much effort and resources would an organization expect to invest to realize these analysis and utility benefits?
An open loop guidance architecture for navigationally robust on-orbit docking
NASA Technical Reports Server (NTRS)
Chern, Hung-Sheng
1995-01-01
The development of an open-loop guidance architecture is outlined for autonomous rendezvous and docking (AR&D) missions to determine whether the Global Positioning System (GPS) can be used in place of optical sensors for relative initial position determination of the chase vehicle. Feasible command trajectories for one-, two-, and three-impulse AR&D maneuvers are determined using constrained trajectory optimization. Early AR&D command trajectory results suggest that docking accuracies are most sensitive to vertical position errors at the initial condition of the chase vehicle. Thus, a feasible command trajectory is based on maximizing the size of the locus of initial vertical positions for which a fixed sequence of impulses will translate the chase vehicle into the target while satisfying docking accuracy requirements. Documented accuracies are used to determine whether relative GPS can achieve the vertical position error requirements of the impulsive command trajectories. Preliminary development of a thruster management system for the Cargo Transfer Vehicle (CTV) based on optimal throttle settings is presented to complete the guidance architecture. Results show that a guidance architecture based on two-impulse maneuvers generated the best performance in terms of initial position error and total velocity change for the chase vehicle.
GASP-PL/I Simulation of Integrated Avionic System Processor Architectures. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brent, G. A.
1978-01-01
A development study sponsored by NASA was completed in July 1977 which proposed a complete integration of all aircraft instrumentation into a single modular system. Instead of using the current single-function aircraft instruments, computers compiled and displayed inflight information for the pilot. A processor architecture called the Team Architecture was proposed. This is a hardware/software approach to high-reliability computer systems. A follow-up study of the proposed Team Architecture is reported. GASP-PL/1 simulation models are used to evaluate the operating characteristics of the Team Architecture. The problem, model development, simulation programs, and results are presented at length. Also included are program input formats, outputs and listings.
A new intrusion prevention model using planning knowledge graph
NASA Astrophysics Data System (ADS)
Cai, Zengyu; Feng, Yuan; Liu, Shuru; Gan, Yong
2013-03-01
Intelligent planning is an important research area in artificial intelligence that has been applied to network security. This paper proposes a new intrusion prevention model based on a planning knowledge graph and discusses the system architecture and characteristics of this model. In this model, intrusion prevention is carried out by plan recognition based on the planning knowledge graph, and the intrusion response strategies and actions are produced by a hierarchical task network (HTN) planner. The resulting intrusion prevention system gains the advantages of intelligent planning: knowledge sharing, focused response, learning autonomy, and protective ability.
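The two stages described above can be illustrated with the hypothetical sketch below: observed actions are matched against a small library of attack plans (standing in for the planning knowledge graph), and a response is chosen for the predicted next step. The plans and responses are invented examples, not the paper's model.

```python
# Toy plan recognition + response selection for intrusion prevention.

ATTACK_PLANS = {
    "web_defacement": ["port_scan", "vuln_scan", "sql_injection", "upload_shell"],
    "data_exfiltration": ["port_scan", "phishing", "credential_theft", "db_dump"],
}

RESPONSES = {
    "sql_injection": "enable WAF rule set for SQLi",
    "upload_shell": "block write access to web root",
    "credential_theft": "force password reset and MFA",
    "db_dump": "rate-limit and audit database exports",
}

def recognize(observed):
    """Score each plan by how long a prefix of it matches the observed actions."""
    best_plan, best_len = None, 0
    for name, steps in ATTACK_PLANS.items():
        matched = 0
        it = iter(observed)
        for step in steps:
            if step in it:            # consumes the iterator up to the match
                matched += 1
            else:
                break
        if matched > best_len:
            best_plan, best_len = name, matched
    return best_plan, best_len

def respond(observed):
    plan, matched = recognize(observed)
    if plan is None or matched == len(ATTACK_PLANS[plan]):
        return plan, "monitor"
    next_step = ATTACK_PLANS[plan][matched]   # predicted next attacker action
    return plan, RESPONSES.get(next_step, "isolate affected host")

print(respond(["port_scan", "vuln_scan"]))
# -> ('web_defacement', 'enable WAF rule set for SQLi')
```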
A performance analysis of advanced I/O architectures for PC-based network file servers
NASA Astrophysics Data System (ADS)
Huynh, K. D.; Khoshgoftaar, T. M.
1994-12-01
In the personal computing and workstation environments, more and more I/O adapters are becoming complete functional subsystems that are intelligent enough to handle I/O operations on their own without much intervention from the host processor. The IBM Subsystem Control Block (SCB) architecture has been defined to enhance the potential of these intelligent adapters by defining services and conventions that deliver command information and data to and from the adapters. In recent years, a new storage architecture, the Redundant Array of Independent Disks (RAID), has been quickly gaining acceptance in the world of computing. In this paper, we would like to discuss critical system design issues that are important to the performance of a network file server. We then present a performance analysis of the SCB architecture and disk array technology in typical network file server environments based on personal computers (PCs). One of the key issues investigated in this paper is whether a disk array can outperform a group of disks (of same type, same data capacity, and same cost) operating independently, not in parallel as in a disk array.
Computer-aided tissue engineering: benefiting from the control over scaffold micro-architecture.
Tarawneh, Ahmad M; Wettergreen, Matthew; Liebschner, Michael A K
2012-01-01
Minimization schema in nature affects the material arrangements of most objects, independent of scale. The field of cellular solids has focused on the generalization of these natural architectures (bone, wood, coral, cork, honeycombs) for material improvement and elucidation into natural growth mechanisms. We applied this approach for the comparison of a set of complex three-dimensional (3D) architectures containing the same material volume but dissimilar architectural arrangements. Ball and stick representations of these architectures at varied material volumes were characterized according to geometric properties, such as beam length, beam diameter, surface area, space filling efficiency, and pore volume. Modulus, deformation properties, and stress distributions as contributed solely by architectural arrangements was revealed through finite element simulations. We demonstrated that while density is the greatest factor in controlling modulus, optimal material arrangement could result in equal modulus values even with volumetric discrepancies of up to 10%. We showed that at low porosities, loss of architectural complexity allows these architectures to be modeled as closed celled solids. At these lower porosities, the smaller pores do not greatly contribute to the overall modulus of the architectures and that a stress backbone is responsible for the modulus. Our results further indicated that when considering a deposition-based growth pattern, such as occurs in nature, surface area plays a large role in the resulting strength of these architectures, specifically for systems like bone. This completed study represents the first step towards the development of mathematical algorithms to describe the mechanical properties of regular and symmetric architectures used for tissue regenerative applications. The eventual goal is to create logical set of rules that can explain the structural properties of an architecture based solely upon its geometry. The information could then be used in an automatic fashion to generate patient-specific scaffolds for the treatment of tissue defects.
Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel
2010-01-01
This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. This consists of dynamically thresholding the differences between the input image and the one obtained by expressing the input image using the PCA linear subspace previously obtained as a background model. The proposal achieves a high ratio of processed images (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
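A plain software reference of the detection stage described above is sketched below (it mirrors the idea, not the FPGA implementation): frames are projected onto a PCA background subspace and the reconstruction error is thresholded dynamically per frame.

```python
# PCA background subtraction with a per-frame (dynamic) threshold.

import numpy as np

def train_background(frames, n_components=8):
    """frames: (N, H*W) array of flattened background-only frames."""
    mean = frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
    return mean, Vt[:n_components]                 # mean frame + PCA basis

def segment(frame, mean, basis, k=3.0):
    """Return a boolean foreground mask using a dynamic threshold."""
    centered = frame - mean
    reconstruction = basis.T @ (basis @ centered) + mean
    error = np.abs(frame - reconstruction)
    threshold = error.mean() + k * error.std()     # adapts to each frame
    return error > threshold

# Synthetic demo: 32x32 noisy background frames, then a frame with a bright blob.
rng = np.random.default_rng(0)
H = W = 32
background = rng.normal(100, 2, size=(50, H * W))
mean, basis = train_background(background)

test = rng.normal(100, 2, size=(H, W))
test[10:16, 10:16] += 60                           # "moving object"
mask = segment(test.ravel(), mean, basis).reshape(H, W)
print("foreground pixels detected:", int(mask.sum()))   # ≈ 36
```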
Dai, Zhaohe; Liu, Luqi; Qi, Xiaoying; Kuang, Jun; Wei, Yueguang; Zhu, Hongwei; Zhang, Zhong
2016-01-01
Efficient assembly of carbon nanotube (CNT) based cellular solids with appropriate structure is the key to fully realizing the potential of individual nanotubes in macroscopic architectures. In this work, a macroscopic CNT sponge consisting of randomly interconnected individual carbon nanotubes was grown by CVD, exhibiting a combination of super-elasticity, high strength-to-weight ratio, fatigue resistance, thermo-mechanical stability and electro-mechanical stability. To better understand this extraordinary mechanical performance compared to that of conventional cellular materials and other nanostructured cellular architectures, a thorough study of the response of this CNT-based spongy structure to compression was conducted based on classic elastic theory. The strong inter-tube bonding between neighboring nanotubes is examined and is believed to play a critical role in reversible deformation, such as bending and buckling without structural collapse under compression. Based on in-situ scanning electron microscopy observation and nanotube deformation analysis, a structural evolution (completely elastic bending-buckling transition) of the carbon nanotube sponges under deformation is proposed to clarify their mechanical properties and nonlinear electromechanical coupling behavior. PMID:26732143
Evolutionary dynamics of protein domain architecture in plants
2012-01-01
Background Protein domains are the structural, functional and evolutionary units of the protein. Protein domain architectures are the linear arrangements of domain(s) in individual proteins. Although the evolutionary history of protein domain architecture has been extensively studied in microorganisms, the evolutionary dynamics of domain architecture in the plant kingdom remains largely undefined. To address this question, we analyzed the lineage-based protein domain architecture content in 14 completed green plant genomes. Results Our analyses show that all 14 plant genomes maintain similar distributions of species-specific, single-domain, and multi-domain architectures. Approximately 65% of plant domain architectures are universally present in all plant lineages, while the remaining architectures are lineage-specific. Clear examples are seen of both the loss and gain of specific protein architectures in higher plants. There has been a dynamic, lineage-wise expansion of domain architectures during plant evolution. The data suggest that this expansion can be largely explained by changes in nuclear ploidy resulting from rounds of whole genome duplications. Indeed, there has been a decrease in the number of unique domain architectures when the genomes were normalized into a presumed ancestral genome that has not undergone whole genome duplications. Conclusions Our data show the conservation of universal domain architectures in all available plant genomes, indicating the presence of an evolutionarily conserved, core set of protein components. However, the occurrence of lineage-specific domain architectures indicates that domain architecture diversity has been maintained beyond these core components in plant genomes. Although several features of genome-wide domain architecture content are conserved in plants, the data clearly demonstrate lineage-wise, progressive changes and expansions of individual protein domain architectures, reinforcing the notion that plant genomes have undergone dynamic evolution. PMID:22252370
Complete all-optical processing polarization-based binary logic gates and optical processors.
Zaghloul, Y A; Zaghloul, A R M
2006-10-16
We present a complete all-optical-processing polarization-based binary-logic system, by which any logic gate or processor can be implemented. Following the new polarization-based logic presented in [Opt. Express 14, 7253 (2006)], we develop a new parallel processing technique that allows for the creation of all-optical-processing gates that produce a unique output (either logic 1 or 0) only once in a truth table, and those that do not. This representation allows for the implementation of simple unforced OR, AND, XOR, XNOR, inverter, and, more importantly, NAND and NOR gates that can be used independently to represent any Boolean expression or function. In addition, the concept of a generalized gate is presented, which opens the door for reconfigurable optical processors and programmable optical logic gates. Furthermore, the new design is completely compatible with the old one presented in [Opt. Express 14, 7253 (2006)] and with current semiconductor-based devices. The gates can be cascaded, where the information is always on the laser beam. The polarization of the beam, and not its intensity, carries the information. The new methodology allows for the creation of multiple-input-multiple-output processors that implement, by themselves, any Boolean function, such as specialized or non-specialized microprocessors. Three all-optical architectures are presented: the orthoparallel optical logic architecture for all known and unknown binary gates, the single-branch architecture for only XOR and XNOR gates, and the railroad (RR) architecture for polarization optical processors (POP). All the control inputs are applied simultaneously, leading to a single time lag and hence a very fast and glitch-immune POP. A simple and easy-to-follow step-by-step algorithm is provided for the POP, and design reduction methodologies are briefly discussed. The algorithm lends itself systematically to software programming and computer-assisted design. As examples, designs of all binary gates, multiple-input gates, and sequential and non-sequential Boolean expressions are presented and discussed. The operation of each design is simply understood by a bullet train traveling at the speed of light on a railroad system preconditioned by the crossover states predetermined by the control inputs. The presented designs allow for optical processing of the information, eliminating the need to convert it back and forth to an electronic signal for processing purposes. All gates with a truth table, including for example Fredkin, Toffoli, testable reversible logic, and threshold logic gates, can be designed and implemented using the railroad architecture. That includes any future gates not known today. Those designs and the quantum gates are not discussed in this paper.
Architectural approaches for HL7-based health information systems implementation.
López, D M; Blobel, B
2010-01-01
Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology, approaching different aspects of systems architecture such as business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. The point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue of any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model - offered by HIS-DF and supported in HL7 v3 artifacts - is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.
A real-time architecture for time-aware agents.
Prouskas, Konstantinos-Vassileios; Pitt, Jeremy V
2004-06-01
This paper describes the specification and implementation of a new three-layer time-aware agent architecture. This architecture is designed for applications and environments where societies of humans and agents play equally active roles, but interact and operate in completely different time frames. The architecture consists of three layers: the April real-time run-time (ART) layer, the time-aware layer (TAL), and the application agents layer (AAL). The ART layer forms the underlying real-time agent platform. An original online, real-time, dynamic priority-based scheduling algorithm is described for scheduling the computation time of agent processes, and it is shown that the algorithm's O(n) complexity and scalable performance are sufficient for application in real-time domains. The TAL layer forms an abstraction layer through which human and agent interactions are temporally unified, that is, handled in a common way irrespective of their temporal representation and scale. A novel O(n²) interaction scheduling algorithm is described for predicting and guaranteeing interactions' initiation and completion times. The time-aware predicting component of a workflow management system is also presented as an instance of the AAL layer. The described time-aware architecture addresses two key challenges in enabling agents to be effectively configured and applied in environments where humans and agents play equally active roles. It provides flexibility and adaptability in its real-time mechanisms while placing them under direct agent control, and it temporally unifies human and agent interactions.
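The abstract does not spell out the ART layer's scheduling algorithm, so the toy sketch below is an assumption rather than the paper's method; it only shows what dynamic priority-based selection with a linear O(n) scan per scheduling decision can look like.

```python
from dataclasses import dataclass

@dataclass
class AgentProcess:
    name: str
    deadline: float       # absolute deadline in seconds
    remaining: float      # remaining compute time in seconds

def schedule(processes, tick=0.01):
    """Toy dynamic-priority scheduler: each tick, run the ready process with the
    earliest deadline (an O(n) linear scan), until all processes finish."""
    t, trace = 0.0, []
    while any(p.remaining > 0 for p in processes):
        ready = [p for p in processes if p.remaining > 0]
        current = min(ready, key=lambda p: p.deadline)   # O(n) priority selection
        current.remaining = max(0.0, current.remaining - tick)
        trace.append((round(t, 4), current.name))
        t += tick
    return trace

plan = schedule([AgentProcess("negotiate", 0.05, 0.02),
                 AgentProcess("monitor", 0.20, 0.03)])
print(plan[:3])   # the tighter-deadline process is served first
```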
Model based design introduction: modeling game controllers to microprocessor architectures
NASA Astrophysics Data System (ADS)
Jungwirth, Patrick; Badawy, Abdel-Hameed
2017-04-01
We present an introduction to model based design. Model based design is a visual representation, generally a block diagram, used to model and incrementally develop a complex system. Model based design is a commonly used design methodology for digital signal processing, control systems, and embedded systems. Model based design's philosophy is to solve a problem one step at a time; the approach can be compared to a series of steps that converge to a solution. A block diagram simulation tool allows a design to be simulated with real world measurement data. For example, if an analog control system is being upgraded to a digital control system, the analog sensor input signals can be recorded. The digital control algorithm can be simulated with the real world sensor data. The output from the simulated digital control system can then be compared to the old analog based control system. Model based design can be compared to Agile software development. The Agile software development goal is to develop working software in incremental steps, with progress measured in completed and tested code units; in model based design, progress is measured in completed and tested blocks. We present a concept for a video game controller and then use model based design to iterate the design towards a working system. We will also describe a model based design effort to develop an OS Friendly Microprocessor Architecture based on RISC-V.
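A minimal sketch of the workflow described above: a simple discrete PI law stands in for the digital control algorithm, and synthetic arrays stand in for the recorded analog sensor and controller logs (both are assumptions made for illustration, not the paper's controller or data).

```python
import numpy as np

def digital_pi(setpoint, sensor, kp=0.8, ki=0.2, dt=0.01):
    """Discrete PI control law evaluated over recorded sensor samples."""
    integral, out = 0.0, []
    for y in sensor:
        e = setpoint - y
        integral += e * dt
        out.append(kp * e + ki * integral)
    return np.array(out)

# Stand-ins for real-world recordings: a noisy step response and the output of a
# hypothetical legacy proportional-only analog controller on the same data.
t = np.arange(0.0, 5.0, 0.01)
sensor_log = 1.0 - np.exp(-t) + 0.01 * np.random.randn(t.size)
analog_out = 0.8 * (1.0 - sensor_log)

digital_out = digital_pi(setpoint=1.0, sensor=sensor_log)
rms_gap = np.sqrt(np.mean((digital_out - analog_out) ** 2))
print(f"RMS difference, digital vs. analog controller output: {rms_gap:.4f}")
```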
An Autonomous Flight Safety System
NASA Technical Reports Server (NTRS)
Bull, James B.; Lanzi, Raymond J.
2007-01-01
The Autonomous Flight Safety System (AFSS) being developed by NASA's Goddard Space Flight Center's Wallops Flight Facility and Kennedy Space Center has completed two successful developmental flights and is preparing for a third. AFSS has been demonstrated to be a viable architecture for the implementation of a completely vehicle-based system capable of protecting life and property in the event of an errant vehicle by terminating the flight or initiating other actions. It is capable of replacing current human-in-the-loop systems or acting in parallel with them. AFSS is configured prior to flight in accordance with a specific rule set agreed upon by the range safety authority and the user to protect the public and assure mission success. This paper discusses the motivation for the project, describes the method of development, and presents an overview of the evolving architecture and the current status.
FPS-RAM: Fast Prefix Search RAM-Based Hardware for Forwarding Engine
NASA Astrophysics Data System (ADS)
Zaitsu, Kazuya; Yamamoto, Koji; Kuroda, Yasuto; Inoue, Kazunari; Ata, Shingo; Oka, Ikuo
Ternary content addressable memory (TCAM) is becoming very popular for designing high-throughput forwarding engines on routers. However, TCAM has potential problems in terms of hardware and power costs, which limit the ability to deploy large amounts of capacity in IP routers. In this paper, we propose a new hardware architecture for fast forwarding engines, called fast prefix search RAM-based hardware (FPS-RAM). We designed the FPS-RAM hardware with the intent of maintaining the same search performance and physical user interface as TCAM, because our objective is to replace the TCAM in the market. Our RAM-based hardware architecture is completely different from that of TCAM and dramatically reduces cost and power consumption to 62% and 52%, respectively. We implemented FPS-RAM on an FPGA to examine its lookup operation.
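The FPS-RAM datapath itself is not described here in enough detail to reproduce, but the forwarding function it accelerates, longest-prefix matching, can be sketched in software as a binary trie. This is an illustrative reference model only, not the proposed hardware.

```python
class PrefixNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = {}
        self.next_hop = None

class PrefixTable:
    """Binary trie for IPv4 longest-prefix matching (software reference model)."""
    def __init__(self):
        self.root = PrefixNode()

    def insert(self, prefix, length, next_hop):
        node = self.root
        for i in range(length):
            bit = (prefix >> (31 - i)) & 1
            node = node.children.setdefault(bit, PrefixNode())
        node.next_hop = next_hop

    def lookup(self, addr):
        node, best = self.root, None
        for i in range(32):
            if node.next_hop is not None:     # remember the deepest match so far
                best = node.next_hop
            node = node.children.get((addr >> (31 - i)) & 1)
            if node is None:
                break
        else:
            if node.next_hop is not None:
                best = node.next_hop
        return best

table = PrefixTable()
table.insert(0xC0A80000, 16, "port1")   # 192.168.0.0/16
table.insert(0xC0A80100, 24, "port2")   # 192.168.1.0/24
print(table.lookup(0xC0A80105))         # -> "port2": the longest prefix wins
```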
MEDIC: medical embedded device for individualized care.
Wu, Winston H; Bui, Alex A T; Batalin, Maxim A; Au, Lawrence K; Binney, Jonathan D; Kaiser, William J
2008-02-01
The presented work highlights the development and initial validation of a medical embedded device for individualized care (MEDIC), which is based on a novel software architecture, enabling sensor management and disease prediction capabilities, and commercially available microelectronic components, sensors and a conventional personal digital assistant (PDA) (or a cell phone). In this paper, we present a general architecture for a wearable sensor system that can be customized to an individual patient's needs. This architecture is based on embedded artificial intelligence that permits autonomous operation, sensor management and inference, and may be applied to general-purpose wearable medical diagnostics. A prototype of the system has been developed based on a standard PDA and wireless sensor nodes equipped with commercially available Bluetooth radio components, permitting real-time streaming of high-bandwidth data from various physiological and contextual sensors. We also present results from our evaluation of abnormal gait diagnosis using the complete system, and illustrate how the wearable system and its operation can be remotely configured and managed by either enterprise systems or medical personnel at centralized locations. By using the commercially available hardware components and software architecture presented in this paper, the MEDIC system can be rapidly configured, providing medical researchers with broadband sensor data from remote patients and platform access to best adapt operation for diagnostic objectives.
2007-11-01
Since the completion of the program in 2003, OSA-CBM has been merged into the MIMOSA consortium. The following areas are covered by this standard: ... data architecture design based on the CRIS data model from MIMOSA, and implementation guidance among available middleware technologies.
Predictors of Visualization: A Structural Equation Model.
ERIC Educational Resources Information Center
Robichaux, Rebecca R.; Guarino, A. J.
This study tested a causal model of the development of spatial visualization based on a synthesis of past and present research. During the summer and fall of 1999, 117 third- and fourth-year undergraduates majoring in architecture, mathematics, mathematics education, and mechanical engineering completed a spatial visualization test and a…
Achille, Cristiana; Adami, Andrea; Chiarini, Silvia; Cremonesi, Stefano; Fassi, Francesco; Fregonese, Luigi; Taffurelli, Laura
2015-01-01
This paper examines the survey of tall buildings in an emergency context, such as post-seismic events. The after-earthquake survey has to guarantee time savings, high precision and security during the operational stages. The main goal is to optimize the application of methodologies based on the acquisition and automatic elaboration of photogrammetric data, including the use of Unmanned Aerial Vehicle (UAV) systems, in order to provide fast and low cost operations. The suggested methods integrate new technologies with commonly used technologies like TLS and topographic acquisition. The value of the photogrammetric application is demonstrated by a test case, based on the comparison of acquisition, calibration and 3D modeling results when using a laser scanner, a metric camera and an amateur reflex camera. The test helps us demonstrate the efficiency of image-based methods in the acquisition of complex architecture. The case study is the Santa Barbara Bell tower in Mantua. The applied survey solution allows a complete 3D database of the complex architectural structure to be obtained for the extraction of all the information needed for significant intervention. This demonstrates the applicability of photogrammetry using UAVs for the survey of vertical structures, complex buildings and difficult-to-access architectural parts, providing high precision results. PMID:26134108
Extracting microtubule networks from superresolution single-molecule localization microscopy data
Zhang, Zhen; Nishimura, Yukako; Kanchanawong, Pakorn
2017-01-01
Microtubule filaments form ubiquitous networks that specify spatial organization in cells. However, quantitative analysis of microtubule networks is hampered by their complex architecture, limiting insights into the interplay between their organization and cellular functions. Although superresolution microscopy has greatly facilitated high-resolution imaging of microtubule filaments, extraction of complete filament networks from such data sets is challenging. Here we describe a computational tool for automated retrieval of microtubule filaments from single-molecule-localization–based superresolution microscopy images. We present a user-friendly, graphically interfaced implementation and a quantitative analysis of microtubule network architecture phenotypes in fibroblasts. PMID:27852898
Implementation of a frame-based representation in CLIPS
NASA Technical Reports Server (NTRS)
Assal, Hisham; Myers, Leonard
1990-01-01
Knowledge representation is one of the major concerns in expert systems. The representation of domain-specific knowledge should agree with the nature of the domain entities and their use in the real world. For example, architectural applications deal with objects and entities such as spaces, walls, and windows. A natural way of representing these architectural entities is provided by frames. This research explores the potential of using the expert system shell CLIPS, developed by NASA, to implement a frame-based representation that can accommodate architectural knowledge. These frames are similar to, but quite different from, the 'template' construct in version 4.3 of CLIPS. Templates support only the grouping of related information and the assignment of default values to template fields. In addition to these features, frames provide other capabilities, including the definition of classes, inheritance between classes and subclasses, relation of objects of different classes with 'has-a', association of methods (demons) of different types (standard and user-defined) with fields (slots), and creation of new fields at run-time. This frame-based representation is implemented completely in CLIPS. No change to the source code is necessary.
Air Traffic Control: Complete and Enforced Architecture Needed for FAA Systems Modernization
DOT National Transportation Integrated Search
1997-02-01
Because of the size, complexity, and importance of FAA's air traffic control : (ATC) modernization, the General Accounting Office (GAO) reviewed it to : determine (1) whether FAA has a target architecture(s), and associated : subarchitectures, to gui...
A subsumptive, hierarchical, and distributed vision-based architecture for smart robotics.
DeSouza, Guilherme N; Kak, Avinash C
2004-10-01
We present a distributed vision-based architecture for smart robotics that is composed of multiple control loops, each with a specialized level of competence. Our architecture is subsumptive and hierarchical, in the sense that each control loop can add to the competence level of the loops below, and in the sense that the loops can present a coarse-to-fine gradation with respect to vision sensing. At the coarsest level, the processing of sensory information enables a robot to become aware of the approximate location of an object in its field of view. On the other hand, at the finest end, the processing of stereo information enables a robot to determine more precisely the position and orientation of an object in the coordinate frame of the robot. The processing in each module of the control loops is completely independent and can be performed at its own rate. A control Arbitrator ranks the results of each loop according to certain confidence indices, which are derived solely from the sensory information. This architecture has clear advantages regarding the overall performance of the system, which is not affected by the "slowest link," and regarding fault tolerance, since faults in one module do not affect the other modules. At this time we are able to demonstrate the utility of the architecture for stereoscopic visual servoing. The architecture has also been applied to mobile robot navigation and can easily be extended to tasks such as "assembly-on-the-fly."
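A minimal sketch of the arbitration idea follows, under the assumption that each loop reports a command plus a confidence index derived from its own sensing; the dictionary fields and loop names are illustrative, not the authors' API.

```python
def arbitrate(loop_outputs):
    """Pick the command from the control loop with the highest confidence index.

    loop_outputs: list of dicts like
        {"loop": "coarse_blob_tracker", "command": (dx, dy, dz), "confidence": 0.4}
    """
    valid = [o for o in loop_outputs if o["confidence"] > 0.0]
    if not valid:
        return None                      # no loop is confident; hold position
    return max(valid, key=lambda o: o["confidence"])

best = arbitrate([
    {"loop": "coarse_blob_tracker", "command": (0.10, 0.00, 0.0), "confidence": 0.35},
    {"loop": "fine_stereo_servoing", "command": (0.02, -0.01, 0.0), "confidence": 0.90},
])
print(best["loop"], best["command"])     # the most confident (finest) loop wins
```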
P2P-Based Data System for the EAST Experiment
NASA Astrophysics Data System (ADS)
Shu, Yantai; Zhang, Liang; Zhao, Weifeng; Chen, Haiming; Luo, Jiarong
2006-06-01
A peer-to-peer (P2P)-based EAST Data System is being designed to provide data acquisition and analysis support for the EAST superconducting tokamak. Instead of transferring data to the servers, all collected data are stored in the data acquisition subsystems locally and the PC clients can access the raw data directly using the P2P architecture. Both online and offline systems are based on Napster-like P2P architecture. This allows the peer (PC) to act both as a client and as a server. A simulation-based method and a steady-state operational analysis technique are used for performance evaluation. These analyses show that the P2P technique can significantly reduce the completion time of raw data display and real-time processing on the online system, and raise the workload capacity and reduce the delay on the offline system.
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, F.; Morel, M.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis. The algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm is successfully implemented on a tightly coupled MIMD parallel processor. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts, and the dimension of the subspace on the performance of the algorithm is investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18, and 3.61 are achieved on two, four, six, and eight processors, respectively.
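The reported speed-ups translate directly into parallel efficiency (speed-up divided by processor count); the short computation below makes that arithmetic explicit. Attributing the falling efficiency to communication along the global fronts is one plausible reading, not a result stated in the abstract.

```python
# Parallel efficiency implied by the reported speed-ups for the 64-element plate.
speedups = {2: 1.86, 4: 3.13, 6: 3.18, 8: 3.61}
for p, s in speedups.items():
    print(f"{p} processors: speed-up {s:.2f}, efficiency {s / p:.2f}")
# 2 processors: efficiency 0.93; 8 processors: efficiency ~0.45, suggesting a
# growing overhead (e.g., along the global fronts) as domains multiply.
```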
NASA Astrophysics Data System (ADS)
Shabiev, S. G.
2017-11-01
The article deals with the vital problem of implementing the Program to enhance the competitiveness of the South Ural State University (SUSU) among other scientific and educational centers, whose main objective is to form a world-class university. In line with this objective, the most important task is to build a landscaped campus, which can be efficiently solved by architectural means. The solution of this task is based on scientific methods of territorial and architectural improvement of the main university building complex in the northern academic area and of architectural and aesthetic improvement of the spatial and structural arrangement of the buildings. The author analyzes the global practice of modern campuses in Russia and abroad based on Internet resources, and carried out additional on-site surveys of foreign campuses in Australia, Canada and China. The essence of the architectural concept for the first stage of campus development lies in the science-based achievement of a harmonious architectural and spatial unity of the solid and planar elements of the site: landscaping of the main building’s courtyard and the adjacent territories with an efficient use of the relief, water areas and planting; allotment of additional landscaped areas through a split-level arrangement, including a landscaped platform; and an increased share of underground space utilization, with an underground car park and an underground walkway designed with environmental requirements in mind. Further, it is planned to apply the author’s methodological approach to the southern academic and the northern residential university areas, which will allow the creation of a fully completed, landscaped SUSU campus with a developed infrastructure that meets international standards.
Spatial Cognition Support for Exploring the Design Mechanics of Building Structures
ERIC Educational Resources Information Center
Rudy, Margit; Hauck, Richard
2008-01-01
A web-based tool for visualizing the simulated structural behavior of building models was developed to support the teaching of structural design to architecture and engineering students by activating their spatial cognition capabilities. The main didactic issues involved establishing a consistent and complete three-dimensional (3D) vocabulary…
Test Program of the "Combined Data and Power Management Infrastructure"
NASA Astrophysics Data System (ADS)
Eickhoff, Jens; Fritz, Michael; Witt, Rouven; Bucher, Nico; Roser, Hans-Peter
2013-08-01
As already published in previous DASIA papers, the University of Stuttgart, Germany, is developing an advanced 3-axis stabilized small satellite applying industry standards for command/control techniques and Onboard Software design. This satellite furthermore features an innovative hybrid architecture of Onboard Computer and Power Control and Distribution Unit. One of the main challenges was the development of an ultra-compact and performant Onboard Computer (OBC), which was intended to support an RTEMS operating system, a PUS standard based Onboard Software (OBSW) and CCSDS standard based ground/space communication. The developed architecture (see [1, 2, 3]) is called a “Combined Onboard Data and Power Management Infrastructure” (CDPI). It features: the OBC processor boards based on a LEON3FT architecture, from Aeroflex Inc., USA; the I/O boards for all OBC digital interfaces to S/C equipment (digital RIU), from 4Links Ltd., UK; CCSDS TC/TM decoder/encoder boards with the same HW design as the I/O boards, just with a limited number of interfaces (HW from 4Links Ltd., UK; driver SW and IP core from Aeroflex Gaisler, SE); analog RIU functions via an enhanced PCDU from Vectronic Aerospace, D; and OBC reconfiguration unit functions via the Common Controller, here in the PCDU [4]. The CDPI overall assembly is meanwhile complete and an exhaustive description can be found in [5]. The EM test campaign, including the HW/SW compatibility testing, is finalized; this comprises all OBC EM units, the OBC EM assembly and the EM PCDU. The unit test program for the FM Processor Boards and Power Boards of the OBC is completed, and the unit tests of the FM I/O Boards and CCSDS Boards have been completed by 4Links at the assembly house. The subsystem tests of the assembled OBC are also completed, and the overall system tests of the CDPI, with system reconfiguration in the diverse possible FDIR cases, are reaching their last steps. Still ongoing is the subsequent integration of the CDPI with the satellite's avionics components, encompassing TTC, AOCS, Power and Payload Control. This paper provides a full picture of the test campaign. Further details can be taken from
Reusable Rocket Engine Turbopump Health Management System
NASA Technical Reports Server (NTRS)
Surko, Pamela
1994-01-01
A health monitoring expert system software architecture has been developed to support condition-based health monitoring of rocket engines. Its first application is in diagnosis decisions relating to the health of the high pressure oxidizer turbopump (HPOTP) of the Space Shuttle Main Engine (SSME). The post-test diagnostic system runs off-line, using as input the data recorded from hundreds of sensors, each typically running at rates of 25, 50, or 0.1 Hz. The system is invoked after a test has been completed, and produces an analysis and an organized graphical presentation of the data with important effects highlighted. The overall expert system architecture has been developed and documented so that expert modules analyzing other line replaceable units may easily be added. The architecture emphasizes modularity, reusability, and open system interfaces so that it may be used to analyze other engines as well.
Eastman, Peter; Friedrichs, Mark S; Chodera, John D; Radmer, Randall J; Bruns, Christopher M; Ku, Joy P; Beauchamp, Kyle A; Lane, Thomas J; Wang, Lee-Ping; Shukla, Diwakar; Tye, Tony; Houston, Mike; Stich, Timo; Klein, Christoph; Shirts, Michael R; Pande, Vijay S
2013-01-08
OpenMM is a software toolkit for performing molecular simulations on a range of high performance computing architectures. It is based on a layered architecture: the lower layers function as a reusable library that can be invoked by any application, while the upper layers form a complete environment for running molecular simulations. The library API hides all hardware-specific dependencies and optimizations from the users and developers of simulation programs: they can be run without modification on any hardware on which the API has been implemented. The current implementations of OpenMM include support for graphics processing units using the OpenCL and CUDA frameworks. In addition, OpenMM was designed to be extensible, so new hardware architectures can be accommodated and new functionality (e.g., energy terms and integrators) can be easily added.
Efficient Graph Based Assembly of Short-Read Sequences on Hybrid Core Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sczyrba, Alex; Pratap, Abhishek; Canon, Shane
2011-03-22
Advanced architectures can deliver dramatically increased throughput for genomics and proteomics applications, reducing time-to-completion in some cases from days to minutes. One such architecture, hybrid-core computing, marries a traditional x86 environment with a reconfigurable coprocessor based on field programmable gate array (FPGA) technology. In addition to higher throughput, increased performance can fundamentally improve research quality by allowing more accurate, previously impractical approaches. We will discuss the approach used by Convey's de Bruijn graph constructor for short-read, de-novo assembly. Bioinformatics applications that have random access patterns to large memory spaces, such as graph-based algorithms, experience memory performance limitations on cache-based x86 servers. Convey's highly parallel memory subsystem allows application-specific logic to simultaneously access 8192 individual words in memory, significantly increasing effective memory bandwidth over cache-based memory systems. Many algorithms, such as Velvet and other de Bruijn graph based, short-read, de-novo assemblers, can greatly benefit from this type of memory architecture. Furthermore, small data type operations (four nucleotides can be represented in two bits) make more efficient use of logic gates than the data types dictated by conventional programming models. JGI is comparing the performance of Convey's graph constructor and Velvet on both synthetic and real data. We will present preliminary results on memory usage and run time metrics for various data sets of different sizes, from small microbial and fungal genomes to a very large cow rumen metagenome. For genomes with references we will also present assembly quality comparisons between the two assemblers.
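As a software-only illustration of the two ideas mentioned above, 2-bit packing of nucleotides and de Bruijn graph construction from short reads, the following sketch builds a small graph in Python; it does not reflect Convey's hybrid-core implementation.

```python
from collections import defaultdict

# Two-bit packing of nucleotides, as mentioned in the abstract.
ENCODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(kmer):
    """Pack a k-mer into an integer using two bits per base."""
    value = 0
    for base in kmer:
        value = (value << 2) | ENCODE[base]
    return value

def build_de_bruijn(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are observed k-mers."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[pack(kmer[:-1])].add(pack(kmer[1:]))
    return graph

reads = ["ACGTACGT", "CGTACGTT"]
graph = build_de_bruijn(reads, k=4)
print(f"{len(graph)} nodes with outgoing edges")
```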
NASA Astrophysics Data System (ADS)
Saha, Rony Kumer; Aswakul, Chaodit
2017-01-01
In this paper, a multi-band enabled femtocell base station (FCBS) and user equipment (UE) architecture is proposed in a multi-tier network that consists of small cells, including femtocells and picocells deployed over the coverage of a macrocell, for splitting uplink and downlink (UL/DL) as well as control-plane and user-plane (C-/U-plane) traffic in 5G mobile networks. Since splitting is performed at the same FCBS, we define this architecture as the same base station based split architecture (SBSA). For multiple bands, we consider co-channel (CC) microwave and different frequency (DF) 60 GHz millimeter wave (mmWave) bands for FCBSs and UEs with respect to the microwave band used by their over-laid macrocell base station. All femtocells are assumed to be deployed in a three-dimensional multi-story building. For the CC microwave band, cross-tier CC interference of femtocells with the macrocell is avoided using almost blank subframe based enhanced inter-cell interference coordination techniques. The co-existence of CC microwave and DF mmWave bands for SBSA on the same FCBS and UE is first studied to show their performance disparities in terms of system capacity and spectral efficiency, in order to provide incentives for employing multiple bands at the same FCBS and UE and to identify a suitable band for routing decoupled UL/DL or C-/U-plane traffic. We then present a number of disruptive architectural design alternatives of multi-band enabled SBSA for UL/DL and C-/U-plane splitting in 5G mobile networks, including a disruptive and complete splitting of UL/DL and C-/U-plane as well as a combined UL/DL and C-/U-plane splitting, by exploiting dual connectivity on CC microwave and DF mmWave bands. SBSA is shown to outperform the different base stations based split architecture (DBSA) in terms of system-level capacity, average spectral efficiency, energy efficiency, and control-plane overhead traffic capacity. Finally, a number of technical and business perspectives as well as key research issues of SBSA are discussed.
Sparsity based target detection for compressive spectral imagery
NASA Astrophysics Data System (ADS)
Boada, David Alberto; Arguello Fuentes, Henry
2016-09-01
Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, in contrast to traditional architectures that need a complete set of measurements of the data cube, thereby addressing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image has to be reconstructed by an inverse algorithm before it can be processed, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in the compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.
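A toy version of detection in the compressive domain is sketched below; it replaces the paper's sparse solver and trained dictionaries with a plain least-squares residual comparison over hypothetical background and target dictionaries, so it only shows where the sensing matrix enters the problem, not the authors' detector.

```python
import numpy as np

def detect_target(y, Phi, D_bg, D_tg):
    """Toy residual-comparison detector working directly on compressed data.

    y    : compressed measurement of one pixel, shape (m,)
    Phi  : sensing matrix, shape (m, n)
    D_bg : background training spectra as columns, shape (n, p)
    D_tg : target training spectra as columns, shape (n, q)
    The dictionaries are projected through Phi so detection happens without
    reconstructing the full data cube; least squares stands in for sparse coding.
    """
    def residual(D):
        A = Phi @ D
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.linalg.norm(y - A @ coef)
    return residual(D_bg) > residual(D_tg)   # True -> pixel resembles the target
```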
NASA Astrophysics Data System (ADS)
Ammendola, R.; Barbanera, M.; Bizzarri, M.; Bonaiuto, V.; Ceccucci, A.; Checcucci, B.; De Simone, N.; Fantechi, R.; Federici, L.; Fucci, A.; Lupi, M.; Paoluzzi, G.; Papi, A.; Piccini, M.; Ryjov, V.; Salamon, A.; Salina, G.; Sargeni, F.; Venditti, S.
2017-03-01
The NA62 experiment at the CERN SPS has started its data-taking. Its aim is to measure the branching ratio of the ultra-rare decay K+ → π+νν̅. In this context, rejecting the background is a crucial topic. One of the main backgrounds to the measurement is the K+ → π+π0 decay. In the 1-8.5 mrad decay region this background is rejected by the calorimetric trigger processor (Cal-L0). In this work we present the performance of a soft-core based parallel architecture built on FPGAs for the energy peak reconstruction, as an alternative to an implementation written entirely in VHDL.
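A serial software reference for the kind of energy-peak search each soft core might perform on its sub-region of the calorimeter could look as follows; this is a sketch under assumed inputs, not the Cal-L0 firmware.

```python
import numpy as np

def find_energy_peaks(cells, threshold):
    """Flag calorimeter cells above threshold that dominate their 3x3 neighborhood."""
    padded = np.pad(cells, 1, constant_values=-np.inf)
    peaks = []
    for i in range(cells.shape[0]):
        for j in range(cells.shape[1]):
            window = padded[i:i + 3, j:j + 3]          # cell plus its 8 neighbors
            if cells[i, j] >= threshold and cells[i, j] == window.max():
                peaks.append((i, j, float(cells[i, j])))
    return peaks

grid = np.array([[0.1, 0.2, 0.1],
                 [0.3, 2.5, 0.4],
                 [0.2, 0.3, 1.8]])
print(find_energy_peaks(grid, threshold=1.0))   # only the central cell is a local maximum
```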
Scattering and/or diffusing elements in a variety of recently completed music auditoria
NASA Astrophysics Data System (ADS)
McKay, Ronald L.
2002-11-01
Architectural elements which provide effective acoustic scattering and/or diffusion in a variety of recently completed auditoria for music performance will be presented. Color slides depicting the various elements will be shown. Each will be discussed with respect to its acoustic performance and architectural logic. Measured time-energy reflection patterns will be presented in many cases.
GECKO: a complete large-scale gene expression analysis platform.
Theilhaber, Joachim; Ulyanov, Anatoly; Malanthara, Anish; Cole, Jack; Xu, Dapeng; Nahf, Robert; Heuer, Michael; Brockel, Christoph; Bushnell, Steven
2004-12-10
Gecko (Gene Expression: Computation and Knowledge Organization) is a complete, high-capacity centralized gene expression analysis system, developed in response to the needs of a distributed user community. Based on a client-server architecture, with a centralized repository of typically many tens of thousands of Affymetrix scans, Gecko includes automatic processing pipelines for uploading data from remote sites, a database, a computational engine implementing approximately 50 different analysis tools, and a client application. Among the available analysis tools are clustering methods, principal component analysis, supervised classification including feature selection and cross-validation, multi-factorial ANOVA, statistical contrast calculations, and various post-processing tools for extracting data at given error rates or significance levels. On account of its open architecture, Gecko also allows for the integration of new algorithms. The Gecko framework is very general: non-Affymetrix and non-gene-expression data can be analyzed as well. A unique feature of the Gecko architecture is the concept of the Analysis Tree (actually, a directed acyclic graph), in which all successive results in ongoing analyses are saved. This approach has proven invaluable in allowing a large (approximately 100 users) and distributed community to share results, and to repeatedly return over a span of years to older and potentially very complex analyses of gene expression data. The Gecko system is being made publicly available as free software at http://sourceforge.net/projects/geckoe. In totality or in parts, the Gecko framework should prove useful to users and system developers with a broad range of analysis needs.
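The Analysis Tree concept lends itself to a very small data-structure sketch: each derived result records its parent results, forming a directed acyclic graph whose lineage can be replayed later. The node fields and tool names below are illustrative assumptions, not Gecko's actual schema.

```python
import uuid

class AnalysisNode:
    """A node in an analysis DAG: every derived result keeps links to its parents,
    so any analysis can be revisited or extended later by other users."""
    def __init__(self, tool, params, parents=(), result=None):
        self.id = uuid.uuid4().hex
        self.tool = tool
        self.params = params
        self.parents = list(parents)
        self.result = result

    def lineage(self):
        """Walk back through all ancestors that produced this result."""
        seen, stack, order = set(), list(self.parents), []
        while stack:
            node = stack.pop()
            if node.id not in seen:
                seen.add(node.id)
                order.append(node)
                stack.extend(node.parents)
        return order

raw = AnalysisNode("upload", {"scans": 40_000})
norm = AnalysisNode("normalize", {"method": "quantile"}, parents=[raw])
anova = AnalysisNode("anova", {"factors": ["treatment", "time"]}, parents=[norm])
print([n.tool for n in anova.lineage()])   # ['normalize', 'upload']
```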
The symbolic computation and automatic analysis of trajectories
NASA Technical Reports Server (NTRS)
Grossman, Robert
1991-01-01
Research was generally done on computation of trajectories of dynamical systems, especially control systems. Algorithms were further developed for rewriting expressions involving differential operators. The differential operators involved arise in the local analysis of nonlinear control systems. An initial design was completed of the system architecture for software to analyze nonlinear control systems using data base computing.
3-D Survey Applied to Industrial Archaeology by Tls Methodology
NASA Astrophysics Data System (ADS)
Monego, M.; Fabris, M.; Menin, A.; Achilli, V.
2017-05-01
This work describes the three-dimensional survey of the "Ex Stazione Frigorifera Specializzata": initially used for agricultural storage, over the years it was put to different uses until it fell into complete neglect. The historical relevance and the architectural heritage that this building represents have led to the start of a recent renovation project and functional restoration. In this regard, a global 3-D survey was necessary, based on the application and integration of different geomatic methodologies (mainly terrestrial laser scanner, classical topography, and GNSS). The acquisition of point clouds was performed using different laser scanners, with time of flight (TOF) and phase shift technologies for the distance measurements. The topographic reference network, needed to align the scans in the same system, was measured with a total station. For the complete survey of the building, 122 scans were acquired and 346 targets were measured from 79 vertices of the reference network. Moreover, 3 vertices were measured with GNSS methodology in order to georeference the network. For the detailed survey of the machine room, 14 scans with 23 targets were acquired. The global 3-D model of the building has less than one centimeter of alignment error (for the machine room the alignment error is not greater than 6 mm) and was used to extract products such as longitudinal and transversal sections, plans, architectural perspectives, and virtual scans. A complete spatial knowledge of the building is obtained from the processed data, providing basic information for the restoration project, structural analysis, and industrial and architectural heritage valorization.
NASA Astrophysics Data System (ADS)
Kozicki, Janek
This talk focuses on recent advances in the construction of a prototype 1000 m² Martian outpost for 8 inhabitants. The architectural design for such a Martian base was presented previously at COSPAR 2008, in a presentation entitled "Architectural design proposal for a Martian base to continue NASA Mars Design Reference Mission". The presentation was welcomed with warm interest by various institutions, some of which offered help in building a prototype, such as providing the building site or funding. This year's oral presentation will focus on a progress report and will briefly describe the architectural design. The architectural design is inspired by terrestrial pneumatic architecture. It has small volume, can be easily transported and provides a large habitable space. An architectural solution analogous to a terrestrial house with a studio and a workshop was assumed. The spatial placement of the following zones was carefully considered: residential, agricultural and science, as well as garage and workshop. Further attention was paid to transportation routes and a control and communications center. The issues of a life support system, energy, food, water and waste recycling were also discussed. This Martian base was designed to be crewed by a team of eight people staying on Mars for at least one and a half years. An open-plan architectural solution was assumed, with a high level of modularity. Walls of standardized sizes with zip-fasteners allow free rearrangement of the interior to adapt to a new situation. The prototype of such a Polish-origin Martian outpost will be used in a manner similar to MDRS or FMARS but to a larger extent. The prototype's design itself will be tested and corrected to achieve a design which can be used on Mars. The procedure of unfolding the pneumatic modules and floor leveling will be tested. The 1000 m² interior will be used for various simulation exercises: socio-psychological testing, interior arrangement experiments, agricultural simulations, growing plants in Martian conditions and other kinds of tests. The presented prototype focuses on the ergonomic and psychological aspects of a longer stay in a Martian environment. It provides the Martian crew with a comfortable habitable space larger than the DRM modules. The practical proposal is to send this base to Mars in a DRM transportation module after prototype testing is completed. The author hopes that this or other similar Martian base designs will help in establishing a permanent presence of humans on Mars.
Programming a hillslope water movement model on the MPP
NASA Technical Reports Server (NTRS)
Devaney, J. E.; Irving, A. R.; Camillo, P. J.; Gurney, R. J.
1987-01-01
A physically based numerical model was developed of heat and moisture flow within a hillslope on a parallel architecture computer, as a precursor to a model of a complete catchment. Moisture flow within a catchment includes evaporation, overland flow, flow in unsaturated soil, and flow in saturated soil. Because of the empirical evidence that moisture flow in unsaturated soil is mainly in the vertical direction, flow in the unsaturated zone can be modeled as a series of one dimensional columns. This initial version of the hillslope model includes evaporation and a single column of one dimensional unsaturated zone flow. This case has already been solved on an IBM 3081 computer and is now being applied to the massively parallel processor architecture so as to make the extension to the one dimensional case easier and to check the problems and benefits of using a parallel architecture machine.
End-to-end network models encompassing terrestrial, wireless, and satellite components
NASA Astrophysics Data System (ADS)
Boyarko, Chandler L.; Britton, John S.; Flores, Phil E.; Lambert, Charles B.; Pendzick, John M.; Ryan, Christopher M.; Shankman, Gordon L.; Williams, Ramon P.
2004-08-01
Development of network models that reflect true end-to-end architectures such as the Transformational Communications Architecture needs to encompass terrestrial, wireless and satellite components to truly represent all of the complexities of a worldwide communications network. The use of best-in-class tools, including OPNET, Satellite Tool Kit (STK), and Popkin System Architect, together with their well known XML-friendly definitions, such as OPNET Modeler's Data Type Description (DTD), or socket-based data transfer modules, such as STK/Connect, enables the sharing of data between applications for more rapid development of end-to-end system architectures and a more complete system design. By sharing the results of and integrating best-in-class tools we are able to (1) promote sharing of data, (2) enhance the fidelity of our results and (3) allow network and application performance to be viewed in the context of the entire enterprise and its processes.
NASA Technical Reports Server (NTRS)
Thangavelu, Madhu
1994-01-01
Traditional concepts of lunar bases describe scenarios where components of the bases are landed on the lunar surface, one at a time, and then put together to form a complete stationary lunar habitat. Recently, some concepts have described the advantages of operating a mobile or 'roving' lunar base. Such a base vastly improves the exploration range from a primary lunar base. Roving bases would also allow the crew to first deploy, test, operationally certify, and then regularly maintain, service, and evolve long life-cycle facilities like observatories or other science payload platforms that are operated far apart from each other across the extraterrestrial surface. The Nomad Explorer is such a mobile lunar base. This paper describes the architectural program of the Nomad Explorer, its advantages over a stationary lunar base, and some of the embedded system concepts which help the roving base to speedily establish a global extraterrestrial infrastructure. A number of modular autonomous logistics landers will carry deployable or erectable payloads, service, and logistically resupply the Nomad Explorer at regular intercepts along the traverse. Starting with the deployment of science experiments and telecommunication networks, and the manned emplacement of a variety of remote outposts using a unique EVA Bell system that enhances manned EVA, the Nomad Explorer architecture suggests the capability for a rapid global development of the extraterrestrial body. The Moon and Mars are candidates for this 'mission oriented' strategy. The lunar case is emphasized in this paper.
A substitution method to improve completeness of events documentation in anesthesia records.
Lamer, Antoine; De Jonckheere, Julien; Marcilly, Romaric; Tavernier, Benoît; Vallet, Benoît; Jeanne, Mathieu; Logier, Régis
2015-12-01
Anesthesia information management systems (AIMS) are optimized to find and display data and curves for one specific intervention, but not for retrospective analysis of a large volume of interventions. Such systems present two main limitations: (1) the transactional database architecture, and (2) the completeness of documentation. In order to solve the architectural problem, data warehouses were developed to propose an architecture suitable for analysis; however, completeness of documentation remains unsolved. In this paper, we describe a method for determining substitution rules in order to detect missing anesthesia events in an anesthesia record. Our method is based on the principle that a missing event can be detected using a substitute, defined as the nearest documented event. As an example, we focused on the automatic detection of the start and the end of the anesthesia procedure when these events were not documented by the clinicians. We applied our method to a set of records in order to evaluate (1) the event detection accuracy, and (2) the improvement in valid records. For the years 2010-2012, we obtained event detection with a precision of 0.00 (-2.22; 2.00) min for the start of anesthesia and 0.10 (0.00; 0.35) min for the end of anesthesia. On the other hand, we increased data completeness by 21.1% (from 80.3 to 97.2% of the total database) for the start and the end of anesthesia events. This method seems to be efficient for replacing missing "start and end of anesthesia" events. This method could also be used to replace other missing time events in this particular data warehouse as well as in other kinds of data warehouses.
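A minimal sketch of the substitution principle follows, with hypothetical rule sets: the candidate event names are placeholders for illustration, not the rules derived in the paper.

```python
from datetime import datetime

def substitute_event(record, missing, substitution_rules):
    """Return a surrogate timestamp for a missing anesthesia event.

    record: dict mapping documented event names to datetimes.
    substitution_rules: ordered candidates to use as the nearest documented
    substitute (hypothetical rules, for illustration only).
    """
    for candidate in substitution_rules[missing]:
        if candidate in record:
            return record[candidate]
    return None                      # no substitute available; leave the gap

rules = {
    "start_of_anesthesia": ["induction_drug_given", "first_vital_sign_recorded"],
    "end_of_anesthesia":   ["extubation", "last_vital_sign_recorded"],
}
record = {
    "first_vital_sign_recorded": datetime(2012, 5, 3, 8, 2),
    "extubation":                datetime(2012, 5, 3, 10, 41),
}
print(substitute_event(record, "start_of_anesthesia", rules))
print(substitute_event(record, "end_of_anesthesia", rules))
```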
Tough and deformable glasses with bioinspired cross-ply architectures.
Yin, Zhen; Dastjerdi, Ahmad; Barthelat, Francois
2018-05-15
Glasses are optically transparent, hard materials that have been in sustained demand and usage in architectural windows, optical devices, electronics and solar panels. Despite their outstanding optical qualities and durability, their brittleness and low resistance to impact still limit wider applications. Here we present new laminated glass designs that contain toughening cross-ply architectures inspired by fish scales and arthropod cuticles. This seemingly minor enrichment completely transforms the way laminated glass deforms and fractures, and it turns a traditionally brittle material into a stretchy and tough material with little impact on surface hardness and optical quality. Large ply rotation propagates over large volumes, and localization is delayed in tension, even if a strain-softening interlayer is used, in a remarkable mechanism generated by the kinematics of the plies and geometrical hardening. Compared to traditional laminated glass, which degrades significantly in performance when damaged, our cross-ply architecture glass is damage-tolerant and 50 times tougher in energy terms. Despite the outstanding optical qualities and durability of glass, its brittleness and low resistance to impact still limit its wider application. Here we present new laminated glass designs that contain toughening cross-ply architectures inspired by fish scales and arthropod cuticles. Enriching laminated designs with cross-plies completely transforms the way the material deforms and fractures, and turns a traditionally brittle material into a stretchy and tough material, with little impact on surface hardness and optical quality. Large ply rotation propagates over large volumes and localization is delayed in tension because of a remarkable and unexpected geometrical hardening effect. Compared to traditional laminated glass, which degrades significantly in performance when damaged, our cross-ply architecture glass is damage-tolerant and 50 times tougher in energy terms. Our glass-based, transparent material is highly innovative and the first of its kind. We believe it will have impact in a broad range of applications in construction, coatings, chemical engineering, electronics, and photovoltaics. Copyright © 2018. Published by Elsevier Ltd.
An Architecture to Enable Autonomous Control of Spacecraft
NASA Technical Reports Server (NTRS)
May, Ryan D.; Dever, Timothy P.; Soeder, James F.; George, Patrick J.; Morris, Paul H.; Colombano, Silvano P.; Frank, Jeremy D.; Schwabacher, Mark A.; Wang, Liu; LawLer, Dennis
2014-01-01
Autonomy is required for manned spacecraft missions distant enough that light-time communication delays make ground-based mission control infeasible. Presently, ground controllers develop a complete schedule of power modes for all spacecraft components based on a large number of factors. The proposed architecture is an early attempt to formalize and automate this process using on-vehicle computation resources. In order to demonstrate this architecture, an autonomous electrical power system controller and vehicle Mission Manager are constructed. These two components are designed to work together in order to plan upcoming load use as well as respond to unanticipated deviations from the plan. The communication protocol was developed using "paper" simulations prior to formally encoding the messages and developing software to implement the required functionality. These software routines exchange data via TCP/IP sockets with the Mission Manager operating at NASA Ames Research Center and the autonomous power controller running at NASA Glenn Research Center. The interconnected systems are tested and shown to be effective at planning the operation of a simulated quasi-steady state spacecraft power system and responding to unexpected disturbances.
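The message contents exchanged between the Mission Manager and the autonomous power controller are not specified in the abstract; the sketch below only illustrates the TCP/IP socket pattern mentioned there, with an invented JSON payload and a trivial acceptance rule standing in for the real protocol.

```python
import json, socket, threading, time

def power_controller(port=5555):
    """Toy stand-in for the autonomous power controller: accept one load plan
    and acknowledge the loads it can support under an assumed 500 W per-load limit."""
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            plan = json.loads(conn.recv(4096).decode())
            ack = {"accepted": [l["name"] for l in plan["loads"] if l["watts"] <= 500]}
            conn.sendall(json.dumps(ack).encode())

threading.Thread(target=power_controller, daemon=True).start()
time.sleep(0.2)                                   # give the listener time to start

# Toy Mission Manager: propose the upcoming load schedule over the socket link.
with socket.create_connection(("127.0.0.1", 5555)) as s:
    s.sendall(json.dumps({"loads": [{"name": "pump", "watts": 300},
                                    {"name": "heater", "watts": 800}]}).encode())
    print(json.loads(s.recv(4096).decode()))      # {'accepted': ['pump']}
```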
Comparing a Japanese and a German hospital information system.
Jahn, F; Issler, L; Winter, A; Takabayashi, K
2009-01-01
To examine the architectural differences and similarities of a Japanese and German hospital information system (HIS) in a case study. This cross-cultural comparison, which focuses on structural quality characteristics, offers the chance to get new insights into different HIS architectures, which possibly cannot be obtained by inner-country comparisons. A reference model for the domain layer of hospital information systems containing the typical enterprise functions of a hospital provides the basis of comparison for the two different hospital information systems. 3LGM(2) models, which describe the two HISs and which are based on that reference model, are used to assess several structural quality criteria. Four of these criteria are introduced in detail. The two examined HISs are different in terms of the four structural quality criteria examined. Whereas the centralized architecture of the hospital information system at Chiba University Hospital causes only a few functional redundancies and leads to a low implementation of communication standards, the hospital information system at the University Hospital of Leipzig, having a decentralized architecture, exhibits more functional redundancies and a higher use of communication standards. Using a model-based comparison, it was possible to detect remarkable differences between the observed hospital information systems of completely different cultural areas. However, the usability of 3LGM(2) models for comparisons has to be improved in order to apply key figures and to assess or benchmark the structural quality of health information systems architectures more thoroughly.
Retinal Connectomics: Towards Complete, Accurate Networks
Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott
2013-01-01
Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10^12–10^15 byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532
Rule-based graph theory to enable exploration of the space system architecture design space
NASA Astrophysics Data System (ADS)
Arney, Dale Curtis
The primary goal of this research is to improve upon system architecture modeling in order to enable the exploration of design space options. A system architecture is the description of the functional and physical allocation of elements and the relationships, interactions, and interfaces between those elements necessary to satisfy a set of constraints and requirements. The functional allocation defines the functions that each system (element) performs, and the physical allocation defines the systems required to meet those functions. Trading the functionality between systems leads to the architecture-level design space that is available to the system architect. The research presents a methodology that enables the modeling of complex space system architectures using a mathematical framework. To accomplish the goal of improved architecture modeling, the framework meets five goals: technical credibility, adaptability, flexibility, intuitiveness, and exhaustiveness. The framework is technically credible, in that it produces an accurate and complete representation of the system architecture under consideration. The framework is adaptable, in that it provides the ability to create user-specified locations, steady states, and functions. The framework is flexible, in that it allows the user to model system architectures to multiple destinations without changing the underlying framework. The framework is intuitive for user input while still creating a comprehensive mathematical representation that maintains the necessary information to completely model complex system architectures. Finally, the framework is exhaustive, in that it provides the ability to explore the entire system architecture design space. After an extensive search of the literature, graph theory presents a valuable mechanism for representing the flow of information or vehicles within a simple mathematical framework. Graph theory has been used in developing mathematical models of many transportation and network flow problems in the past, where nodes represent physical locations and edges represent the means by which information or vehicles travel between those locations. In space system architecting, expressing the physical locations (low-Earth orbit, low-lunar orbit, etc.) and steady states (interplanetary trajectory) as nodes and the different means of moving between the nodes (propulsive maneuvers, etc.) as edges formulates a mathematical representation of this design space. The selection of a given system architecture using graph theory entails defining the paths that the systems take through the space system architecture graph. A path through the graph is defined as a list of edges that are traversed, which in turn defines functions performed by the system. A structure to compactly represent this information is a matrix, called the system map, in which the column indices are associated with the systems that exist and row indices are associated with the edges, or functions, to which each system has access. Several contributions have been added to the state of the art in space system architecture analysis. The framework adds the capability to rapidly explore the design space without the need to limit trade options or the need for user interaction during the exploration process. The unique mathematical representation of a system architecture, through the use of the adjacency, incidence, and system map matrices, enables automated design space exploration using stochastic optimization processes. 
The innovative rule-based graph traversal algorithm ensures functional feasibility of each system architecture that is analyzed, and the automatic generation of the system hierarchy eliminates the need for the user to manually determine the relationships between systems during or before the design space exploration process. Finally, the rapid evaluation of system architectures for various mission types enables analysis of the system architecture design space for multiple destinations within an evolutionary exploration program. (Abstract shortened by UMI.).
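As a concrete illustration of the graph formulation summarized above, the following sketch builds a toy mission graph whose nodes are locations or steady states and whose edges are maneuvers, derives its adjacency and incidence matrices, and assembles a small system map whose rows are edges (functions) and whose columns are systems. The node names, systems and feasibility check are invented for illustration and are not drawn from the dissertation.

```python
# Toy illustration of the graph-based architecture representation:
# nodes are locations/steady states, edges are maneuvers (functions),
# and a "system map" records which edges each system can perform.
import numpy as np

nodes = ["LEO", "TLI", "LLO", "Lunar surface"]                       # hypothetical states
edges = [("LEO", "TLI"), ("TLI", "LLO"), ("LLO", "Lunar surface")]   # hypothetical maneuvers

n, m = len(nodes), len(edges)
idx = {name: i for i, name in enumerate(nodes)}

# Adjacency matrix: A[i, j] = 1 if an edge connects node i to node j.
A = np.zeros((n, n), dtype=int)
# Incidence matrix: B[i, k] = -1 at the tail and +1 at the head of edge k.
B = np.zeros((n, m), dtype=int)
for k, (tail, head) in enumerate(edges):
    A[idx[tail], idx[head]] = 1
    B[idx[tail], k], B[idx[head], k] = -1, 1

# Hypothetical system map: rows = edges (functions), columns = systems.
systems = ["Departure stage", "Lander"]
S = np.array([[1, 0],    # LEO -> TLI performed by the departure stage
              [1, 1],    # TLI -> LLO could be performed by either system
              [0, 1]])   # LLO -> surface performed by the lander

# A path (list of edge indices) is functionally feasible only if every
# edge on it is covered by at least one selected system.
path = [0, 1, 2]
selected = np.array([1, 1])          # both systems included in the architecture
feasible = all(S[e] @ selected > 0 for e in path)
print("Adjacency:\n", A, "\nIncidence:\n", B, "\nFeasible:", feasible)
```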
Architecture of Eph receptor clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Himanen, Juha P.; Yermekbayeva, Laila; Janes, Peter W.
2010-10-04
Eph receptor tyrosine kinases and their ephrin ligands regulate cell navigation during normal and oncogenic development. Signaling of Ephs is initiated in a multistep process leading to the assembly of higher-order signaling clusters that set off bidirectional signaling in interacting cells. However, the structural and mechanistic details of this assembly remained undefined. Here we present high-resolution structures of the complete EphA2 ectodomain and complexes with ephrin-A1 and A5 as the base unit of an Eph cluster. The structures reveal an elongated architecture with novel Eph/Eph interactions, both within and outside of the Eph ligand-binding domain, that suggest the molecular mechanism underlying Eph/ephrin clustering. Structure-function analysis, by using site-directed mutagenesis and cell-based signaling assays, confirms the importance of the identified oligomerization interfaces for Eph clustering.
NASA Technical Reports Server (NTRS)
Korzennik, Sylvain
1997-01-01
Under the direction of Dr. Rhodes, and the technical supervision of Dr. Korzennik, the data assimilation of high spatial resolution solar dopplergrams has been carried out throughout the program on the Intel Delta Touchstone supercomputer. With the help of a research assistant, partially supported by this grant, and under the supervision of Dr. Korzennik, code development was carried out at SAO, using various available resources. To ensure cross-platform portability, PVM was selected as the message passing library. A parallel implementation of power spectra computation for helioseismology data reduction, using PVM, was successfully completed. It was successfully ported to SMP architectures (i.e. SUN), and to some MPP architectures (i.e. the CM5). Due to limitations of the implementation of PVM on the Cray T3D, the port to that architecture was not completed at the time.
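The PVM code itself is not included in the report. As a rough modern analogue of the master/worker decomposition it describes, the sketch below scatters a synthetic Dopplergram time-series cube across worker processes, has each worker compute per-pixel power spectra, and gathers the results; the array sizes and the use of Python multiprocessing in place of PVM are assumptions made purely for illustration.

```python
# Rough analogue of the parallel power-spectrum reduction: each worker
# computes |FFT|^2 along the time axis for a chunk of pixels. PVM message
# passing is replaced here by Python multiprocessing for illustration.
import numpy as np
from multiprocessing import Pool

N_T, N_PIX = 512, 1024                       # hypothetical time samples and pixels
rng = np.random.default_rng(0)
cube = rng.standard_normal((N_PIX, N_T))     # synthetic Dopplergram time series

def power_spectrum(chunk):
    """Return the one-sided power spectrum for each pixel row in the chunk."""
    return np.abs(np.fft.rfft(chunk, axis=1)) ** 2

if __name__ == "__main__":
    n_workers = 4
    chunks = np.array_split(cube, n_workers)        # master scatters the data
    with Pool(n_workers) as pool:
        parts = pool.map(power_spectrum, chunks)    # workers compute in parallel
    spectra = np.vstack(parts)                      # master gathers the results
    print(spectra.shape)                            # (1024, 257)
```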
Application developer's tutorial for the CSM testbed architecture
NASA Technical Reports Server (NTRS)
Underwood, Phillip; Felippa, Carlos A.
1988-01-01
This tutorial serves as an illustration of the use of the programmer interface on the CSM Testbed Architecture (NICE). It presents a complete, but simple, introduction to using both the GAL-DBM (Global Access Library-Database Manager) and CLIP (Command Language Interface Program) to write a NICE processor. Familiarity with the CSM Testbed architecture is required.
ERIC Educational Resources Information Center
National Association for Retarded Children, New York, NY.
Conference papers consider designing facilities to meet the needs of the mentally retarded and other handicapped persons. Complete texts and summaries are provided of R.B. Price on environmental design, E.D. Helsel on architectural barriers, H. Gordon on preschool programs and facilities, and H. Palmer on training centers for young adults. Also…
The ATLAS EventIndex: architecture, design choices, deployment and first operation experience
NASA Astrophysics Data System (ADS)
Barberis, D.; Cárdenas Zárate, S. E.; Cranshaw, J.; Favareto, A.; Fernández Casaní, Á.; Gallas, E. J.; Glasman, C.; González de la Hoz, S.; Hřivnáč, J.; Malon, D.; Prokoshin, F.; Salt Cairols, J.; Sánchez, J.; Többicke, R.; Yuan, R.
2015-12-01
The EventIndex is the complete catalogue of all ATLAS events, keeping the references to all files that contain a given event in any processing stage. It replaces the TAG database, which had been in use during LHC Run 1. For each event it contains its identifiers, the trigger pattern and the GUIDs of the files containing it. Major use cases are event picking, feeding the Event Service used on some production sites, and technical checks of the completion and consistency of processing campaigns. The system design is highly modular so that its components (data collection system, storage system based on Hadoop, query web service and interfaces to other ATLAS systems) could be developed separately and in parallel during LS1. The EventIndex is in operation for the start of LHC Run 2. This paper describes the high-level system architecture, the technical design choices and the deployment process and issues. The performance of the data collection and storage systems, as well as the query services, is also reported.
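A minimal sketch of the event-picking use case described above might look as follows: an index keyed by event identifiers that returns the GUIDs of the files containing the event at a requested processing stage. The field names, identifier values and GUID strings are hypothetical, and the real EventIndex stores this catalogue in Hadoop rather than in a Python dictionary.

```python
# Minimal sketch of event picking: look up an event by its identifiers and
# return the GUIDs of the files holding it at a given processing stage.
# All values below are invented placeholders.
from collections import defaultdict

event_index = defaultdict(list)

def add_record(run, event, trigger_pattern, guid, stage):
    """Register one processing stage of an event."""
    event_index[(run, event)].append(
        {"trigger": trigger_pattern, "guid": guid, "stage": stage})

add_record(266904, 1234567, "0x1a2b", "GUID-RAW-0001", "RAW")
add_record(266904, 1234567, "0x1a2b", "GUID-AOD-0001", "AOD")

def pick(run, event, stage):
    """Return the file GUIDs holding the event at the requested stage."""
    return [r["guid"] for r in event_index[(run, event)] if r["stage"] == stage]

print(pick(266904, 1234567, "AOD"))
```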
Learning Agents for Autonomous Space Asset Management (LAASAM)
NASA Astrophysics Data System (ADS)
Scally, L.; Bonato, M.; Crowder, J.
2011-09-01
Current and future space systems will continue to grow in complexity and capabilities, creating a formidable challenge to monitor, maintain, and utilize these systems and manage their growing network of space and related ground-based assets. Integrated System Health Management (ISHM), and in particular, Condition-Based System Health Management (CBHM), is the ability to manage and maintain a system using dynamic real-time data to prioritize, optimize, maintain, and allocate resources. CBHM entails the maintenance of systems and equipment based on an assessment of current and projected conditions (situational and health related conditions). A complete, modern CBHM system comprises a number of functional capabilities: sensing and data acquisition; signal processing; conditioning and health assessment; diagnostics and prognostics; and decision reasoning. In addition, an intelligent Human System Interface (HSI) is required to provide the user/analyst with relevant context-sensitive information, the system condition, and its effect on overall situational awareness of space (and related) assets. Colorado Engineering, Inc. (CEI) and Raytheon are investigating and designing an Intelligent Information Agent Architecture that will provide a complete range of CBHM and HSI functionality from data collection through recommendations for specific actions. The research leverages CEI’s expertise with provisioning management network architectures and Raytheon’s extensive experience with learning agents to define a system to autonomously manage a complex network of current and future space-based assets to optimize their utilization.
Parallel computing of physical maps--a comparative study in SIMD and MIMD parallelism.
Bhandarkar, S M; Chirravuri, S; Arnold, J
1996-01-01
Ordering clones from a genomic library into physical maps of whole chromosomes presents a central computational problem in genetics. Chromosome reconstruction via clone ordering is usually isomorphic to the NP-complete Optimal Linear Arrangement problem. Parallel SIMD and MIMD algorithms for simulated annealing based on Markov chain distribution are proposed and applied to the problem of chromosome reconstruction via clone ordering. Perturbation methods and problem-specific annealing heuristics are proposed and described. The SIMD algorithms are implemented on a 2048 processor MasPar MP-2 system which is an SIMD 2-D toroidal mesh architecture whereas the MIMD algorithms are implemented on an 8 processor Intel iPSC/860 which is an MIMD hypercube architecture. A comparative analysis of the various SIMD and MIMD algorithms is presented in which the convergence, speedup, and scalability characteristics of the various algorithms are analyzed and discussed. On a fine-grained, massively parallel SIMD architecture with a low synchronization overhead such as the MasPar MP-2, a parallel simulated annealing algorithm based on multiple periodically interacting searches performs the best. For a coarse-grained MIMD architecture with high synchronization overhead such as the Intel iPSC/860, a parallel simulated annealing algorithm based on multiple independent searches yields the best results. In either case, distribution of clonal data across multiple processors is shown to exacerbate the tendency of the parallel simulated annealing algorithm to get trapped in a local optimum.
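To make the algorithmic idea concrete, the sketch below runs several independent simulated annealing searches in parallel (the variant reported to perform best on the coarse-grained MIMD machine) on a toy linear arrangement instance. The overlap weights, annealing schedule and use of Python multiprocessing are simplifications for illustration and do not reproduce the paper's implementation.

```python
# Toy version of simulated annealing with multiple independent searches,
# applied to a small linear arrangement instance (order items so that
# strongly "overlapping" items end up close together).
import math
import random
from multiprocessing import Pool

# Symmetric overlap weights between six hypothetical clones.
W = [[0, 3, 1, 0, 0, 0],
     [3, 0, 2, 1, 0, 0],
     [1, 2, 0, 3, 1, 0],
     [0, 1, 3, 0, 2, 1],
     [0, 0, 1, 2, 0, 3],
     [0, 0, 0, 1, 3, 0]]
N = len(W)

def cost(order):
    """Linear-arrangement cost: sum of overlap weight times position distance."""
    pos = {c: i for i, c in enumerate(order)}
    return sum(W[a][b] * abs(pos[a] - pos[b])
               for a in range(N) for b in range(a + 1, N))

def anneal(seed, steps=20000, t0=5.0, alpha=0.9995):
    """One independent annealing search starting from a random arrangement."""
    rng = random.Random(seed)
    order = list(range(N))
    rng.shuffle(order)
    cur = cost(order)
    best, best_cost, t = order[:], cur, t0
    for _ in range(steps):
        i, j = rng.sample(range(N), 2)
        order[i], order[j] = order[j], order[i]        # propose a swap
        new = cost(order)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                                   # accept the move
            if cur < best_cost:
                best, best_cost = order[:], cur
        else:
            order[i], order[j] = order[j], order[i]     # reject: undo the swap
        t *= alpha
    return best_cost, best

if __name__ == "__main__":
    with Pool(4) as pool:                  # four independent searches
        results = pool.map(anneal, range(4))
    print(min(results))                    # best arrangement found across searches
```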
Designing flexible engineering systems utilizing embedded architecture options
NASA Astrophysics Data System (ADS)
Pierce, Jeff G.
This dissertation develops and applies an integrated framework for embedding flexibility in an engineered system architecture. Systems are constantly faced with unpredictability in the operational environment, threats from competing systems, obsolescence of technology, and general uncertainty in future system demands. Current systems engineering and risk management practices have focused almost exclusively on mitigating or preventing the negative consequences of uncertainty. This research recognizes that high uncertainty also presents an opportunity to design systems that can flexibly respond to changing requirements and capture additional value throughout the design life. There does not exist however a formalized approach to designing appropriately flexible systems. This research develops a three stage integrated flexibility framework based on the concept of architecture options embedded in the system design. Stage One defines an eight step systems engineering process to identify candidate architecture options. This process encapsulates the operational uncertainty though scenario development, traces new functional requirements to the affected design variables, and clusters the variables most sensitive to change. The resulting clusters can generate insight into the most promising regions in the architecture to embed flexibility in the form of architecture options. Stage Two develops a quantitative option valuation technique, grounded in real options theory, which is able to value embedded architecture options that exhibit variable expiration behavior. Stage Three proposes a portfolio optimization algorithm, for both discrete and continuous options, to select the optimal subset of architecture options, subject to budget and risk constraints. Finally, the feasibility, extensibility and limitations of the framework are assessed by its application to a reconnaissance satellite system development problem. Detailed technical data, performance models, and cost estimates were compiled for the Tactical Imaging Constellation Architecture Study and leveraged to complete a realistic proof-of-concept.
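The option-valuation stage is not reproduced here, but the final portfolio-selection step (Stage Three) can be illustrated as a small 0/1 knapsack: choose the subset of candidate architecture options that maximizes total option value within a budget. The option names, costs and values below are invented for illustration.

```python
# Minimal sketch of discrete portfolio selection: pick the subset of
# architecture options with the highest total value within a budget.
# Names, costs and values are hypothetical; brute force is fine at this size.
from itertools import combinations

options = {          # name: (cost in $M, estimated option value in $M)
    "extra payload port": (40, 65),
    "upgradable bus power": (25, 30),
    "modular propellant tank": (60, 80),
    "spare sensor slot": (15, 25),
}
BUDGET = 100

best_value, best_set = 0, ()
for r in range(len(options) + 1):
    for subset in combinations(options, r):
        cost = sum(options[o][0] for o in subset)
        value = sum(options[o][1] for o in subset)
        if cost <= BUDGET and value > best_value:
            best_value, best_set = value, subset

print(best_set, best_value)
```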
NASA IVHM Technology Experiment for X-vehicles (NITEX)
NASA Technical Reports Server (NTRS)
Hayden, Sandra; Bajwa, Anupa
2001-01-01
The purpose of the NASA IVHM Technology Experiment for X-vehicles (NITEX) is to advance the development of selected IVHM technologies in a flight environment and to demonstrate the potential for reusable launch vehicle ground processing savings. The technologies to be developed and demonstrated include system-level and detailed diagnostics for real-time fault detection and isolation, prognostics for fault prediction, automated maintenance planning based on diagnostic and prognostic results, and a microelectronics hardware platform. Complete flight IVHM consists of advanced sensors, distributed data acquisition, data processing that includes model-based diagnostics, prognostics and vehicle autonomy for control or suggested action, and advanced data storage. Complete ground IVHM consists of evolved control room architectures and advanced applications, including automated maintenance planning and automated ground support equipment. This experiment will advance the development of a subset of complete IVHM.
Near-Earth Phase Risk Comparison of Human Mars Campaign Architectures
NASA Technical Reports Server (NTRS)
Manning, Ted A.; Nejad, Hamed S.; Mattenberger, Chris
2013-01-01
A risk analysis of the launch, orbital assembly, and Earth-departure phases of human Mars exploration campaign architectures was completed as an extension of a probabilistic risk assessment (PRA) originally carried out under the NASA Constellation Program Ares V Project. The objective of the updated analysis was to study the sensitivity of loss-of-campaign risk to such architectural factors as composition of the propellant delivery portion of the launch vehicle fleet (Ares V heavy-lift launch vehicle vs. smaller/cheaper commercial launchers) and the degree of launcher or Mars-bound spacecraft element sparing. Both a static PRA analysis and a dynamic, event-based Monte Carlo simulation were developed and used to evaluate the probability of loss of campaign under different sparing options. Results showed that with no sparing, loss-of-campaign risk is strongly driven by launcher count and on-orbit loiter duration, favoring an all-Ares V launch approach. Further, the reliability of the all-Ares V architecture showed significant improvement with the addition of a single spare launcher/payload. Among architectures utilizing a mix of Ares V and commercial launchers, those that minimized the on-orbit loiter duration of Mars-bound elements were found to exceed the reliability of the no-spare all-Ares V campaign if unlimited commercial vehicle sparing was assumed.
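A stripped-down version of the dynamic, event-based Monte Carlo idea is sketched below: a campaign is lost if launch failures exceed the available spares or if a loitering element is lost before departure. The reliabilities, loiter-loss rates and launch counts are invented placeholders, not the study's inputs, so only the qualitative trends (fewer launches and added spares reduce risk) should be read from it.

```python
# Hedged Monte Carlo sketch of a loss-of-campaign estimate. All probabilities
# and launch counts below are invented for illustration.
import random

def campaign_fails(n_launches, p_launch, p_loiter_loss, spares, rng):
    """Return True if the simulated campaign is lost."""
    failures = 0
    for _ in range(n_launches):
        if rng.random() > p_launch:          # launch or payload failure
            failures += 1
            if failures > spares:
                return True
        if rng.random() < p_loiter_loss:     # element lost while loitering on orbit
            return True
    return False

def loss_probability(n_launches, spares, trials=100_000, seed=1):
    rng = random.Random(seed)
    losses = sum(campaign_fails(n_launches, 0.95, 0.01, spares, rng)
                 for _ in range(trials))
    return losses / trials

# Few heavy-lift launches vs. many smaller launches, with and without a spare.
for n, spares in [(3, 0), (3, 1), (8, 0), (8, 1)]:
    print(n, "launches,", spares, "spare(s):", round(loss_probability(n, spares), 3))
```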
Future Short Range Ground-Based Air Defence: System Drivers, Characteristics and Architectures
2001-03-01
vulnerable being on the right. Although for completeness the defended asset characteristics shown in Table 1 are based upon a conventional armoured formation... [Remainder of this excerpt is flattened residue of Table 1, which tabulates defended asset attributes such as camouflage, EMCON level, visibility/line of sight, contact state, armour, protection (degree of digging in), and air defence coverage.]
Ultra-Stable Segmented Telescope Sensing and Control Architecture
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Bolcar, Matthew; Knight, Scott; Redding, David
2017-01-01
The LUVOIR team is conducting two full architecture studies. Architecture A, a 15 meter telescope that folds up in an 8.4 m SLS Block 2 shroud, is nearly complete; Architecture B, a 9.2 meter telescope that uses an existing fairing size, will begin study this Fall. This talk will summarize the ultra-stable architecture of the 15 m segmented telescope, including the basic requirements, the basic rationale for the architecture, the technologies employed, and the expected performance. This work builds on several dynamics and thermal studies performed for ATLAST segmented telescope configurations. The most important new element was an approach to actively control segments against segment-to-segment motions, which will be discussed later.
2012-10-01
expected as new nanomaterial capabilities as well as new nanoscale-centric circuit architectures are developed. However, the emerging trend in IT-focused...was eventually achieved. Proceeding from LRS to HRS afterwards shows a similar trend: several pulses are needed to completely switch the device from...resulted in unstable switching behavior. Similar trends were observed by Vallee, et al. for HfOx-based devices in which the switching behavior for Pt TEs
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.
2004-01-01
The NASA Glenn Research Center is investigating the development and suitability of a software-based open-architecture for space-based reconfigurable transceivers (RTs) and software-defined radios (SDRs). The main objectives of this project are to enable advanced operations and reduce mission costs. SDRs are becoming more common because of the capabilities of reconfigurable digital signal processing technologies such as field programmable gate arrays and digital signal processors, which place radio functions in firmware and software that were traditionally performed with analog hardware components. Features of interest of this communications architecture include nonproprietary open standards and application programming interfaces to enable software reuse and portability, independent hardware and software development, and hardware and software functional separation. The goals for RT and SDR technologies for NASA space missions include prelaunch and on-orbit frequency and waveform reconfigurability and programmability, high data rate capability, and overall communications and processing flexibility. These operational advances over current state-of-the-art transceivers will be provided to reduce the power, mass, and cost of RTs and SDRs for space communications. The open architecture for NASA communications will support existing (legacy) communications needs and capabilities while providing a path to more capable, advanced waveform development and mission concepts (e.g., ad hoc constellations with self-healing networks and high-rate science data return). A study was completed to assess the state of the art in RT architectures, implementations, and technologies. In-house researchers conducted literature searches and analysis, interviewed Government and industry contacts, and solicited information and white papers from industry on space-qualifiable RTs and SDRs and their associated technologies for space-based NASA applications. The white papers were evaluated, compiled, and used to assess RT and SDR system architectures and core technology elements to determine an appropriate investment strategy to advance these technologies to meet future mission needs. The use of these radios in the space environment represents a challenge because of the space radiation suitability of the components, which drastically reduces the processing capability. The radios available for space are considered to be RTs (as opposed to SDRs), which are digitally programmable radios with selectable changes from an architecture combining analog and digital components. The limited flexibility of this design contrasts with the desire to have a power-efficient solution and open architecture.
A new flight control and management system architecture and configuration
NASA Astrophysics Data System (ADS)
Kong, Fan-e.; Chen, Zongji
2006-11-01
The advanced fighter should possess capabilities such as supersonic cruise, stealth, agility, STOVL (Short Take-Off and Vertical Landing), and powerful communication and information processing. For this purpose, it is not enough to improve only the aerodynamic and propulsion systems; it is also necessary to enhance the control system. A complete flight control system provides not only autopilot, auto-throttle and control augmentation, but also management of the given mission. The F-22 and JSF possess considerably outstanding flight control systems built on the Pave Pillar and Pave Pace avionics architectures, but their control architectures are not sufficiently integrated. The main purpose of this paper is to build a novel fighter control system architecture. The control system constructed on this architecture should be highly integrated, inexpensive, fault-tolerant, safe, reliable and effective, and it will take charge of both flight control and mission management. Starting from this purpose, this paper completes the following work: First, based on human nervous control, a three-leveled hierarchical control architecture is proposed. At the top of the architecture, the decision level is in charge of decision-making; in the middle, the organization and coordination level schedules resources, monitors the states of the fighter, switches the control modes, and so on; and at the bottom, the execution level holds the concrete drive and measurement functions. Then, according to their function and resources, all the tasks involving flight control and mission management are sorted into individual levels. Finally, in order to validate the three-leveled architecture, a physical configuration is also shown. The configuration is distributed and applies some new advances from the information technology industry, such as line replaceable modules and cluster technology.
Services, architectures, and protocols for space data systems
NASA Technical Reports Server (NTRS)
Helgert, Hermann J.
1991-01-01
The author presents a comprehensive discussion of three major aspects of the work of the Consultative Committee for Space Data Systems (CCSDS), a worldwide cooperative effort of national space agencies. The author examines the CCSDS space data communications network concept on which the data communications facilities of future advanced orbiting systems will be based. He derives the specifications of an open communications architecture as a reference model for the development of services and protocols that support the transfer of information over space data communications networks. Detailed specifications of the communication services and information transfer protocols that have reached a high degree of maturity and stability are offered. The author also includes a complete list of currently available CCSDS standards and supporting documentation.
IDC Reengineering Iteration I2 Architectural Prototype Reports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamlet, Benjamin R.
To fulfill the inception phase deliverable “Demonstration of architectural prototype”, the SNL IDC Reengineering project team is providing seven reports describing system prototyping work completed between October 2012 and October 2014 as part of the SNL US NDC Modernization project.
Control of Architecture in Rhombic Dodecahedral Pt–Ni Nanoframe Electrocatalysts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becknell, Nigel; Son, Yoonkook; Kim, Dohyung
Platinum-based alloys are known to demonstrate advanced properties in electrochemical reactions that are relevant for proton exchange membrane fuel cells and electrolyzers. Further development of Pt alloy electrocatalysts relies on the design of architectures with highly active surfaces and optimized utilization of the expensive element, Pt. Here, we show that the three-dimensional Pt anisotropy of Pt-Ni rhombic dodecahedra can be tuned by controlling the ratio between Pt and Ni precursors such that either a completely hollow nanoframe or a new architecture, the excavated nanoframe, can be obtained. The excavated nanoframe showed approximately 10 times higher specific and approximately 6 times higher mass activity for the oxygen reduction reaction than Pt/C, and twice the mass activity of the hollow nanoframe. The high activity is attributed to enhanced Ni content in the near-surface region and the extended two-dimensional sheet structure within the nanoframe that minimizes the number of buried Pt sites.
A physical model of sensorimotor interactions during locomotion
NASA Astrophysics Data System (ADS)
Klein, Theresa J.; Lewis, M. Anthony
2012-08-01
In this paper, we describe the development of a bipedal robot that models the neuromuscular architecture of human walking. The body is based on principles derived from human muscular architecture, using muscles on straps to mimic agonist/antagonist muscle action as well as bifunctional muscles. Load sensors in the straps model Golgi tendon organs. The neural architecture is a central pattern generator (CPG) composed of a half-center oscillator combined with phase-modulated reflexes that is simulated using a spiking neural network. We show that the interaction between the reflex system, body dynamics and CPG results in a walking cycle that is entrained to the dynamics of the system. We also show that the CPG helped stabilize the gait against perturbations relative to a purely reflexive system, and compared the joint trajectories to human walking data. This robot represents a complete physical, or ‘neurorobotic’, model of the system, demonstrating the usefulness of this type of robotics research for investigating the neurophysiological processes underlying walking in humans and animals.
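The robot's CPG is implemented as a spiking neural network; as a simpler stand-in, the sketch below integrates a rate-based half-center (Matsuoka) oscillator with forward Euler, producing the alternating flexor/extensor activity that such a half-center provides. The time constants and weights are typical textbook values chosen to oscillate, not the robot's parameters.

```python
# Rate-based half-center oscillator (Matsuoka two-neuron CPG) integrated with
# forward Euler. Parameters are generic values that produce alternating
# bursts; they are not taken from the robot described above.
import numpy as np

TAU_R, TAU_A = 0.12, 0.3       # rise and adaptation time constants (s)
BETA, W_INH, DRIVE = 2.5, 2.5, 1.0
DT, T_END = 0.001, 5.0

x = np.array([0.1, 0.0])       # membrane-like states (slightly asymmetric start)
v = np.zeros(2)                # adaptation states
flexor, extensor = [], []

for _ in range(int(T_END / DT)):
    y = np.maximum(x, 0.0)                      # rectified firing rates
    other = y[::-1]                             # output of the opposing neuron
    dx = (-x - W_INH * other - BETA * v + DRIVE) / TAU_R
    dv = (-v + y) / TAU_A
    x += DT * dx
    v += DT * dv
    flexor.append(y[0])
    extensor.append(y[1])

# The difference of the two outputs could drive an antagonistic muscle pair.
print("flexor peak:", max(flexor), "extensor peak:", max(extensor))
```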
SCOS 2: A distributed architecture for ground system control
NASA Astrophysics Data System (ADS)
Keyte, Karl P.
The current generation of spacecraft ground control systems in use at the European Space Agency/European Space Operations Centre (ESA/ESOC) is based on SCOS 1. Such systems have become difficult to manage in both functional and financial terms. The next generation of spacecraft is demanding more flexibility in the use, configuration and distribution of control facilities, as well as functional requirements capable of matching those being planned for future missions. SCOS 2 is more than a successor to SCOS 1. Many of the shortcomings of the existing system were carefully analyzed by the user and technical communities, and a complete redesign was made. Different technologies were used in many areas, including the hardware platform, network architecture, user interfaces and implementation techniques, methodologies and language. As far as possible, a flexible design approach was taken, using popular industry standards to provide vendor independence in both hardware and software areas. This paper describes many of the new approaches made in the architectural design of SCOS 2.
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.
NTR-Enhanced Lunar-Base Supply using Existing Launch Fleet Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
John D. Bess; Emily Colvin; Paul G. Cummings
During the summer of 2006, students at the Center for Space Nuclear Research sought to augment the current NASA lunar exploration architecture with a nuclear thermal rocket (NTR). An additional study investigated the possible use of an NTR with existing launch vehicles to provide 21 metric tons of supplies to the lunar surface in support of a lunar outpost. Current cost estimates show that the complete mission cost for an NTR-enhanced assembly of Delta-IV and Atlas V vehicles may cost 47-86% more than the estimated Ares V launch cost of $1.5B; however, development costs for the current NASA architecture have not been assessed. The additional cost of coordinating the rendezvous of four to six launch vehicles with an in-orbit assembly facility also needs more thorough analysis and review. Future trends in launch vehicle use will also significantly impact the results from this comparison. The utility of multiple launch vehicles allows for the development of a more robust and lower risk exploration architecture.
Application of Risk within Net Present Value Calculations for Government Projects
NASA Technical Reports Server (NTRS)
Grandl, Paul R.; Youngblood, Alisha D.; Componation, Paul; Gholston, Sampson
2007-01-01
In January 2004, President Bush announced a new vision for space exploration. This included retirement of the current Space Shuttle fleet by 2010 and the development of a new set of launch vehicles. The President's vision did not include significant increases in the NASA budget, so these development programs need to be cost conscious. Current trade study procedures address factors such as performance, reliability, safety, manufacturing, maintainability, operations, and costs. It would be desirable, however, to have increased insight into the cost factors behind each of the proposed system architectures. This paper reports on a set of component trade studies completed on the upper stage engine for the new launch vehicles. Increased insight into architecture costs was developed by including a Net Present Value (NPV) method and applying a set of associated risks to the base parametric cost data. The use of the NPV method along with the risks was found to add fidelity to the trade study and provide additional information to support the selection of a more robust design architecture.
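One way to picture the combination of NPV and risk described above is a Monte Carlo sketch in which each architecture's yearly cost estimates are perturbed by triangular risk distributions before discounting, so the architectures can be compared in distribution rather than by point estimates. All cash flows, spreads and the discount rate below are invented for illustration and are unrelated to the actual engine trade study.

```python
# Hedged sketch of folding risk into a Net Present Value comparison: yearly
# costs are drawn from triangular distributions around the parametric
# estimate, and the resulting NPVs are compared across many trials.
import random

DISCOUNT = 0.07
TRIALS = 50_000

def npv(cash_flows, rate=DISCOUNT):
    """NPV of a list of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def risked_npv(base_costs, spread, rng):
    """Draw each year's cost from a triangular distribution around the estimate."""
    flows = [-rng.triangular(c * (1 - spread), c * (1 + 2 * spread), c)
             for c in base_costs]
    return npv(flows)

rng = random.Random(42)
arch_a = [120, 80, 60, 40]      # $M per year, hypothetical engine option A
arch_b = [90, 90, 70, 60]       # $M per year, hypothetical engine option B

samples_a = [risked_npv(arch_a, 0.15, rng) for _ in range(TRIALS)]
samples_b = [risked_npv(arch_b, 0.30, rng) for _ in range(TRIALS)]
print("mean NPV A:", round(sum(samples_a) / TRIALS, 1))
print("mean NPV B:", round(sum(samples_b) / TRIALS, 1))
```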
Relation of melatonin to sleep architecture in children with autism.
Leu, Roberta M; Beyderman, Liya; Botzolakis, Emmanuel J; Surdyka, Kyla; Wang, Lily; Malow, Beth A
2011-04-01
Children with autism often suffer from sleep disturbances, and compared to age-matched controls, have decreased melatonin levels, as indicated by urine levels of the primary melatonin metabolite, 6-sulfatoxymelatonin (6-SM). We therefore investigated the relationship between 6-SM levels and sleep architecture in children with autism spectrum disorders (ASD). Twenty-three children, aged 4-10 years, completed two nights of polysomnography and one overnight urine collection for measurement of urinary 6-SM excretion rate. Parents completed the Children's Sleep Habits Questionnaire. We found that higher urinary 6-SM excretion rates were associated with increased N3 sleep, decreased N2 sleep, and decreased daytime sleepiness. The results warrant further study of the effects of supplemental melatonin on sleep architecture and daytime sleepiness.
Advances in Orion's On-Orbit Guidance and Targeting System Architecture
NASA Technical Reports Server (NTRS)
Scarritt, Sara K.; Fill, Thomas; Robinson, Shane
2015-01-01
NASA's manned spaceflight programs have a rich history of advancing onboard guidance and targeting technology. In order to support future missions, the guidance and targeting architecture for the Orion Multi-Purpose Crew Vehicle must be able to operate in complete autonomy, without any support from the ground. Orion's guidance and targeting system must be sufficiently flexible to easily adapt to a wide array of undecided future missions, yet also not cause an undue computational burden on the flight computer. This presents a unique design challenge from the perspective of both algorithm development and system architecture construction. The present work shows how Orion's guidance and targeting system addresses these challenges. On the algorithm side, the system advances the state-of-the-art by: (1) steering burns with a simple closed-loop guidance strategy based on Shuttle heritage, and (2) planning maneuvers with a cutting-edge two-level targeting routine. These algorithms are then placed into an architecture designed to leverage the advantages of each and ensure that they function in concert with one another. The resulting system is characterized by modularity and simplicity. As such, it is adaptable to the on-orbit phases of any future mission that Orion may attempt.
Space Mobile Network: A Near Earth Communication and Navigation Architecture
NASA Technical Reports Server (NTRS)
Israel, Dave J.; Heckler, Greg; Menrad, Robert J.
2016-01-01
This paper describes a Space Mobile Network architecture, the result of a recently completed NASA study exploring architectural concepts to produce a vision for the future Near Earth communications and navigation systems. The Space Mobile Network (SMN) incorporates technologies, such as Disruption Tolerant Networking (DTN) and optical communications, and new operations concepts, such as User Initiated Services, to provide user services analogous to a terrestrial smartphone user. The paper will describe the SMN Architecture, envisioned future operations concepts, opportunities for industry and international collaboration and interoperability, and technology development areas and goals.
NASA Astrophysics Data System (ADS)
Aquino Neto, Sidney; Milton, Ross D.; Hickey, David P.; De Andrade, Adalgisa R.; Minteer, Shelley D.
2016-08-01
The bioelectrooxidation of ethanol was investigated in a fully enzymatic membraneless ethanol/O2 biofuel cell assembly using hybrid bioanodes containing multi-walled carbon nanotube (MWCNT)-decorated gold metallic nanoparticles with either a pyrroloquinoline quinone (PQQ)-dependent alcohol dehydrogenase (ADH) enzyme or a nicotinamide adenine dinucleotide (NAD+)-dependent ADH enzyme. The biofuel cell anode was prepared with the PQQ-dependent enzyme and designed using either a direct electron transfer (DET) architecture or via a mediated electron transfer (MET) configuration through a redox polymer, 1,1′-dimethylferrocene-modified linear polyethyleneimine (FcMe2-C3-LPEI). In the case of the bioanode containing the NAD+-dependent enzyme, only the mediated electron transfer mechanism was employed, using an electropolymerized methylene green film to regenerate the NAD+ cofactor. Regardless of the enzyme being employed at the anode, a bilirubin oxidase-based biocathode prepared within a DET architecture afforded efficient electrocatalytic oxygen reduction in an ethanol/O2 biofuel cell. The power curves showed that DET-based bioanodes via the PQQ-dependent ADH still lack high current densities, whereas the MET architecture furnished maximum power density values as high as 226 ± 21 μW cm⁻². Considering the complete membraneless enzymatic biofuel cell with the NAD+-dependent ADH-based bioanode, power densities as high as 111 ± 14 μW cm⁻² were obtained. This shows the advantage of PQQ-dependent ADH for membraneless ethanol/O2 biofuel cell applications.
Using an Integrated Distributed Test Architecture to Develop an Architecture for Mars
NASA Technical Reports Server (NTRS)
Othon, William L.
2016-01-01
The creation of a crew-rated spacecraft architecture capable of sending humans to Mars requires the development and integration of multiple vehicle systems and subsystems. Important new technologies will be identified and matured within each technical discipline to support the mission. Architecture maturity also requires coordination with mission operations elements and ground infrastructure. During early architecture formulation, many of these assets will not be co-located and will require integrated, distributed testing to show that the technologies and systems are being developed in a coordinated way. When complete, technologies must be shown to function together to achieve mission goals. In this presentation, an architecture will be described that promotes and advances integration of disparate systems within JSC and across NASA centers.
Lymphatic Malformation Architecture: Implications for Treatment With OK-432.
Malic, Claudia C; Guilfoyle, Regan; Courtemanche, Rebecca J M; Arneja, Jugpal S; Heran, Manraj K S; Courtemanche, Douglas J
2017-10-01
Herein, the authors aim to describe their findings of novel architectural types of lymphatic malformations (LM) and explain the relationship between these architectures and OK-432 treatment outcomes. A retrospective review was conducted of all patients diagnosed with a LM treated with OK-432 at the Vascular Anomalies Clinic at BC Children's Hospital from December 2002 to January 2012. Twenty-seven patients were included in the study. Sixty percent of lesions were present by 2 years of age with the majority located in the head and neck (59%). The average number of sclerotherapy procedures was 1.4 per patient. Treatment under fluoroscopic guidance revealed 3 new LM architectures: open-cell microcystic, closed-cell microcystic, and lymphatic channel. Response to treatment was complete or good for 14/19 macrocystic and for 1/2 mixed lesions. Open-cell microcystic LMs gave a complete or good response for 3/3, which was attributed to OK-432 freely communicating between cysts. Closed-cell microcystic LM had localized cysts that did not allow OK-432 to freely communicate and were associated with partial responses, 2/2. The lymphatic channel had a partial response. There were 2 minor complications and 1 instance of recurrence. The identification of 3 new LM architectures expands the current accepted classification to include: open-cell microcystic, closed-cell microcystic, and lymphatic channels. The majority of complete responses to OK-432 were found with macrocystic lesions. Open-cell microcystic lesions respond better to OK-432 than closed-cell microcystic lesions, and lymphatic channels may respond to OK-432. These key architecture-response relationships have direct clinical implications for treatment with OK-432 sclerotherapy.
2013-09-01
processes used in space system acquisitions, simply implementing a data exchange specification would not fundamentally improve how information is...instruction, searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information...and manage the configuration of all critical program models, processes, and tools used throughout the DoD. Second, mandate a data exchange
Li, Zhifei; Qin, Dongliang
2014-01-01
In defense related programs, the use of capability-based analysis, design, and acquisition has been significant. In order to confront one of the most challenging features of a huge design space in capability based analysis (CBA), a literature review of design space exploration was first conducted. Then, in the process of an aerospace system of systems design space exploration, a bilayer mapping method was put forward, based on the existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data mining RST (rough sets theory) and SOM (self-organized mapping) techniques, the alternatives of the aerospace system of systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation. PMID:24790572
Li, Zhifei; Qin, Dongliang; Yang, Feng
2014-01-01
In defense related programs, the use of capability-based analysis, design, and acquisition has been significant. In order to confront one of the most challenging features of a huge design space in capability based analysis (CBA), a literature review of design space exploration was first conducted. Then, in the process of an aerospace system of systems design space exploration, a bilayer mapping method was put forward, based on the existing experimental and operating data. Finally, the feasibility of the foregoing approach was demonstrated with an illustrative example. With the data mining RST (rough sets theory) and SOM (self-organized mapping) techniques, the alternatives of the aerospace system of systems architecture were mapped from P-space (performance space) to C-space (configuration space), and then from C-space to D-space (design space), respectively. Ultimately, the performance space was mapped to the design space, which completed the exploration and preliminary reduction of the entire design space. This method provides a computational analysis and implementation scheme for large-scale simulation.
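The rough-set reduction step is omitted here, but the SOM half of the bilayer mapping can be illustrated with a minimal self-organized map that projects synthetic performance vectors onto a small 2-D grid of prototypes, in the spirit of the P-space to C-space mapping. The data, grid size and learning schedule are invented for illustration.

```python
# Minimal self-organized map (SOM) sketch: map performance vectors onto a
# small 2-D grid of prototypes. Synthetic data and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((200, 4))           # 200 hypothetical performance vectors, 4 metrics

GRID = 5                           # 5x5 map of prototype configurations
weights = rng.random((GRID, GRID, 4))
coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij"), axis=-1)

def best_matching_unit(x):
    """Grid cell whose prototype is closest to the vector x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)               # decaying learning rate
    sigma = 2.0 * (1 - epoch / 50) + 0.5      # decaying neighbourhood radius
    for x in P:
        bmu = np.array(best_matching_unit(x))
        dist2 = np.sum((coords - bmu) ** 2, axis=-1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood kernel
        weights += lr * h * (x - weights)

# Each performance vector now maps to a grid cell (a coarse configuration class).
print(best_matching_unit(P[0]))
```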
Theory of electronically phased coherent beam combination without a reference beam
NASA Astrophysics Data System (ADS)
Shay, Thomas M.
2006-12-01
The first theory for two novel coherent beam combination architectures, the first electronic beam combination architectures that completely eliminate the need for a separate reference beam, is presented. Detailed theoretical models are developed and presented for the first time.
Mars power system concept definition study. Volume 1: Study results
NASA Technical Reports Server (NTRS)
Littman, Franklin D.
1994-01-01
A preliminary top level study was completed to define power system concepts applicable to Mars surface applications. This effort included definition of power system requirements and selection of power systems with the potential for high commonality. These power systems included dynamic isotope, Proton Exchange Membrane (PEM) regenerative fuel cell, sodium sulfur battery, photovoltaic, and reactor concepts. Design influencing factors were identified. Characterization studies were then done for each concept to determine system performance, size/volume, and mass. Operations studies were done to determine emplacement/deployment maintenance/servicing, and startup/shutdown requirements. Technology development roadmaps were written for each candidate power system (included in Volume 2). Example power system architectures were defined and compared on a mass basis. The dynamic isotope power system and nuclear reactor power system architectures had significantly lower total masses than the photovoltaic system architectures. Integrated development and deployment time phasing plans were completed for an example DIPS and reactor architecture option to determine the development strategies required to meet the mission scenario requirements.
Study on the standard architecture for geoinformation common services
NASA Astrophysics Data System (ADS)
Zha, Z.; Zhang, L.; Wang, C.; Jiang, J.; Huang, W.
2014-04-01
The construction of platforms for geoinformation common services has been completed or is ongoing in most provinces and cities in China in recent years, and these platforms play an important role in economic and social activities. Geoinformation and geoinformation-based services are the key issues in the platform. The standards on geoinformation common services act as bridges among the users, systems and designers of the platform. The standard architecture for geoinformation common services is the guideline for designing and using the standard system, in which the standards are integrated with each other to promote the development, sharing and services of geoinformation resources. Establishing the standard architecture for geoinformation common services is one of the tasks of "Study on important standards for geoinformation common services and management of public facilities in city". The scope of the standard architecture is defined, covering data or information models, interoperability interfaces or services, and information management. Research was carried out on the status of international standards for geoinformation common services in organizations such as ISO/TC 211 and OGC, and in countries or unions such as the USA, the EU and Japan. Principles such as availability, suitability and extensibility are set up to evaluate the standards. Then the development requirements and practical situation are analyzed, and a framework of the standard architecture for geoinformation common services is proposed. Finally, a summary and prospects of the geoinformation standards are given.
Jeddah Historical Building Information Modelling "JHBIM" - Object Library
NASA Astrophysics Data System (ADS)
Baik, A.; Alitany, A.; Boehm, J.; Robson, S.
2014-05-01
The theory of using Building Information Modelling "BIM" has been applied in several heritage places worldwide for conserving, documenting and managing historical buildings and for creating full engineering drawings and information. However, one of the most serious issues facing many experts who wish to use Historical Building Information Modelling "HBIM" is creating the complicated architectural elements of these historical buildings. In fact, many of these outstanding architectural elements were designed and created on site to fit their exact location. The experts in Old Jeddah have faced the same issue in applying the BIM method to Old Jeddah's historical buildings. The Saudi Arabian city has a long history, as it contains a large number of historic houses and buildings built since the 16th century. Furthermore, producing the BIM model of a historical building in Old Jeddah always takes a lot of time, because the Hijazi architectural elements are unique and no library of such elements exists, so they must be modelled individually. This paper focuses on building the Hijazi architectural element library based on laser scanner and image survey data. This solution will reduce the time to complete the HBIM model and offer an in-depth and rich digital architectural element library to be used in any heritage project in the Al-Balad district, Jeddah City.
NASA Technical Reports Server (NTRS)
Zelkin, Natalie; Henriksen, Stephen
2010-01-01
This NASA Contractor Report summarizes and documents the work performed to develop concepts of use (ConUse) and high-level system requirements and architecture for the proposed L-band (960 to 1164 MHz) terrestrial en route communications system. This work was completed as a follow-on to the technology assessment conducted by NASA Glenn Research Center and ITT for the Future Communications Study (FCS). ITT assessed air-to-ground (A/G) communications concepts of use and operations presented in relevant NAS-level, international, and NAS-system-level documents to derive the appropriate ConUse relevant to potential A/G communications applications and services for domestic continental airspace. ITT also leveraged prior concepts of use developed during the earlier phases of the FCS. A middle-out functional architecture was adopted by merging the functional system requirements identified in the bottom-up assessment of existing requirements with those derived as a result of the top-down analysis of ConUse and higher level functional requirements. Initial end-to-end system performance requirements were derived to define system capabilities based on the functional requirements and on NAS-SR-1000 and the Operational Performance Assessment conducted as part of the COCR. A high-level notional architecture of the L-DACS supporting A/G communication was derived from the functional architecture and requirements.
Relation of Melatonin to Sleep Architecture in Children with Autism
Leu, Roberta M.; Beyderman, Liya; Botzolakis, Emmanuel J.; Surdyka, Kyla; Wang, Lily; Malow, Beth A.
2013-01-01
Children with autism often suffer from sleep disturbances, and compared to age-matched controls, have decreased melatonin levels, as indicated by urine levels of the primary melatonin metabolite, 6-sulfatoxymelatonin (6-SM). We therefore investigated the relationship between 6-SM levels and sleep architecture in children with autism spectrum disorders (ASD). Twenty-three children, aged 4–10 years, completed two nights of polysomnography and one overnight urine collection for measurement of urinary 6-SM excretion rate. Parents completed the Children’s Sleep Habits Questionnaire. We found that higher urinary 6-SM excretion rates were associated with increased N3 sleep, decreased N2 sleep, and decreased daytime sleepiness. The results warrant further study of the effects of supplemental melatonin on sleep architecture and daytime sleepiness. PMID:20683768
Architectural Drafting: Commercial Applications. Teacher Guide.
ERIC Educational Resources Information Center
Whitney, Terry A.
This curriculum guide contains the technical information and tasks necessary for a student (who has already completed basic drafting) to be employed as an architectural drafter trainee. The curriculum is written in terms of student performance using measurable objectives, technical information, tasks developed to accomplish those objectives, and…
Theory of electronic phase locking of an optical array without a reference beam
NASA Astrophysics Data System (ADS)
Shay, Thomas M.
2006-08-01
The first theory for two novel coherent beam combination architectures, the first electronic beam combination architectures that completely eliminate the need for a separate reference beam, is presented. Detailed theoretical models are developed and presented for the first time.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-12
... design feature associated with the architecture and connectivity capabilities of the airplanes' computer... SUPPLEMENTARY INFORMATION: The proposed network architecture includes the following...
Radar Unix: a complete package for GPR data processing
NASA Astrophysics Data System (ADS)
Grandjean, Gilles; Durand, Herve
1999-03-01
A complete package for ground penetrating radar data interpretation, including data processing, forward modeling and a case history database consultation, is presented. Running on a Unix operating system, its architecture consists of a graphical user interface generating batch files that are transmitted to a library of processing routines. This design allows better software maintenance and gives users the possibility to run processing or modeling batch files on their own, deferred in time. A case history database is available; it consists of a hypertext document that can be consulted using a standard HTML browser. All the software specifications are presented through a realistic example.
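The batch-file design described above can be sketched as a tiny dispatcher: a front end writes a plain-text batch file, and the dispatcher replays each line against a library of processing routines. The batch syntax and the routine names (dewow, gain) are invented for this illustration and are not Radar Unix's actual commands or file format.

```python
# Minimal sketch of a batch-driven GPR processing design: a dispatcher
# applies each 'routine key=value ...' line of a batch file to a trace.
# Routine names and batch syntax are hypothetical.
import numpy as np

def dewow(trace, window=10):
    """Remove low-frequency 'wow' by subtracting a running mean."""
    window = int(window)
    kernel = np.ones(window) / window
    return trace - np.convolve(trace, kernel, mode="same")

def gain(trace, factor=2.0):
    """Apply a constant gain to the trace."""
    return factor * trace

ROUTINES = {"dewow": dewow, "gain": gain}    # the 'library' of processing routines

def run_batch(batch_text, trace):
    """Apply each line of the batch to the trace, in order."""
    for line in batch_text.strip().splitlines():
        name, *args = line.split()
        kwargs = {k: float(v) for k, v in (a.split("=") for a in args)}
        trace = ROUTINES[name](trace, **kwargs)
    return trace

batch = """
dewow window=15
gain factor=3.0
"""
processed = run_batch(batch, np.sin(np.linspace(0, 20, 200)))
print(processed[:5])
```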
Architecture of PAU survey camera readout electronics
NASA Astrophysics Data System (ADS)
Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo
2012-07-01
PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus Kapton cables complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.
Project Integration Architecture: Implementation of the CORBA-Served Application Infrastructure
NASA Technical Reports Server (NTRS)
Jones, William Henry
2005-01-01
The Project Integration Architecture (PIA) has been demonstrated in a single-machine C++ implementation prototype. The architecture is in the process of being migrated to a Common Object Request Broker Architecture (CORBA) implementation. The migration of the Foundation Layer interfaces is fundamentally complete. The implementation of the Application Layer infrastructure for that migration is reported. The Application Layer provides for distributed user identification and authentication, per-user/per-instance access controls, server administration, the formation of mutually-trusting application servers, a server locality protocol, and an ability to search for interface implementations through such trusted server networks.
The Comparative Rowhouse Study: An Introduction to Architectural Design.
ERIC Educational Resources Information Center
Hirshorn, Paul
1982-01-01
A course is described that involves a comparative study of Philadelphia rowhouses. Students each visit a house and complete its architectural drawing according to established guidelines. The drawings are later reduced and offset printed for sale. A series of exercises focuses on a number of design elements. (MSE)
Development of a Conceptual Structure for Architectural Solar Energy Systems.
ERIC Educational Resources Information Center
Ringel, Robert F.
Solar subsystems and components were identified and conceptual structure was developed for architectural solar energy heating and cooling systems. Recent literature related to solar energy systems was reviewed and analyzed. Solar heating and cooling system, subsystem, and component data were compared for agreement and completeness. Significant…
The Stumbling Campaign for Free-Access Buildings
ERIC Educational Resources Information Center
Building Design and Construction, 1974
1974-01-01
Although a concerted effort has been made during the past 15 years to knock down the architectural barriers facing the handicapped, the existence of over 400,000 wheelchair handicapped and more than two million people wearing leg braces or with artificial legs makes it imperative that architectural designs produce completely barrier-free…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-29
... complete transformation in recent years. The booming economy and growing middle class has prompted... a transformation in the way projects are designed and built in India. Many foreign architecture... need for all building types, but corporate campuses, education, housing, infrastructure, and master...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-18
... architecture and connectivity capabilities of the airplane's computer systems and networks, which may allow... an association, business, labor union, etc.). DOT's complete Privacy Act Statement can be found in... or unusual design features: Digital systems architecture composed of several connected networks. The...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... design feature associated with the architecture and connectivity capabilities of the airplanes' computer... the comment (or signing the comment for an association, business, labor union, etc.). DOT's complete... passengers and two crew members. The proposed Learjet Model 45 avionics architecture is new and novel for...
Cool Apps: Building Cryospheric Data Applications With Standards-Based Service Oriented Architecture
NASA Astrophysics Data System (ADS)
Collins, J. A.; Truslove, I.; Billingsley, B. W.; Oldenburg, J.; Brodzik, M.; Lewis, S.; Liu, M.
2012-12-01
The National Snow and Ice Data Center (NSIDC) holds a large collection of cryospheric data, and is involved in a number of informatics research and development projects aimed at improving the discoverability and accessibility of these data. To develop high-quality software in a timely manner, we have adopted a Service-Oriented Architecture (SOA) approach for our core technical infrastructure development. Data services at NSIDC are internally exposed to other tools and applications through standards-based service interfaces. These standards include OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting), various OGC (Open Geospatial Consortium) standards including WMS (Web Map Service) and WFS (Web Feature Service), ESIP (Federation of Earth Sciences Information Partners) OpenSearch, and NSIDC-specific RESTful services. By taking a standards-based approach, we are able to use off-the-shelf tools and libraries to consume, translate and broker these data services, and thus develop applications faster. Additionally, by exposing public interfaces to these services we provide valuable data services to technical collaborators; for example, NASA Reverb (http://reverb.echo.nasa.gov) uses NSIDC's WMS services. Our latest generation of web applications consume these data services directly. The most complete example of this is the Operation IceBridge Data Portal (http://nsidc.org/icebridge/portal) which depends on many of the aforementioned services, and clearly exhibits many of the advantages of building applications atop a service-oriented architecture. This presentation outlines the architectural approach and components and open standards and protocols adopted at NSIDC, demonstrates the interactions and uses of public and internal service interfaces currently powering applications including the IceBridge Data Portal, and outlines the benefits and challenges of this approach.
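Because the services follow open standards such as WMS, any generic client can consume them; the short Python sketch below issues a standard WMS GetCapabilities request and lists the advertised layers. The endpoint URL is a placeholder assumption, not necessarily NSIDC's actual service address.

# Hedged sketch: consuming a standards-based OGC WMS endpoint.
# Replace the placeholder URL with a real WMS service before running.
import requests
import xml.etree.ElementTree as ET

WMS_ENDPOINT = "https://example.org/ogc/wms"  # placeholder endpoint

params = {
    "service": "WMS",
    "request": "GetCapabilities",
    "version": "1.3.0",
}

response = requests.get(WMS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()

# List the layer names advertised by the service; any standards-compliant
# client or library could do the same, which is the point of the SOA approach.
root = ET.fromstring(response.content)
ns = {"wms": "http://www.opengis.net/wms"}
for layer in root.findall(".//wms:Layer/wms:Name", ns):
    print(layer.text)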
Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.
Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen
2018-01-19
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can reduce the coding and sampling times sharply. The coded aperture applied in the proposed TCAI architecture loads either purposive or random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and then are synthesized together to obtain the complete high-resolution image. As for each imaging cell, the multi-resolution imaging method helps to reduce the computational burden on a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging with much less time for 3D targets and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
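For orientation, a generic coded-aperture measurement model (not necessarily the authors' exact notation) makes the cell-by-cell reconstruction explicit:
\[
\mathbf{y} = \mathbf{S}\,\boldsymbol{\sigma} + \mathbf{n},
\qquad
\mathbf{y} = \sum_{k=1}^{K} \mathbf{S}_k\,\boldsymbol{\sigma}_k + \mathbf{n},
\]
where \(\mathbf{y}\) collects the received time samples, \(\mathbf{S}\) is the reference-signal matrix built from the spatiotemporally independent coded waveforms, \(\boldsymbol{\sigma}\) holds the scattering coefficients, \(\mathbf{n}\) is noise, and the second form shows the decomposition into \(K\) imaging cells that are reconstructed one by one and then synthesized into the complete image.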
Green infrastructure and low energy architecture for eco-tourism in Asinara island
NASA Astrophysics Data System (ADS)
Trombadore, Antonella; Rolovic, Dusan; Congiatu, Pier Paolo
2018-05-01
The paper presents the sustainable and low-energy architecture approach developed for a small island in Sardinia, Italy. The island hosted several prison complexes over the past two centuries and has now been converted into a National Park; since its creation as a national park, its architectural and urban patrimony has been completely abandoned. Its few built-up areas and urban developments have enormous potential, but past administrations failed in their attempts to offer a commercially attractive model. The project focuses mainly on the development of a Strategic Plan for the regeneration of the island: the main goal is to create completely new activities and functions that are compatible both with its touristic potential and, especially, with the natural fragility of the ecosystem. These functions have been planned to give life and continuous activity to the island, with extreme care for cohesion with the environment and its biodiversity. The results consist of various minor agricultural activities reinstated on the basis of past practice, and different touristic functions focused on a specific, eco-responsible market niche. These activities are supported by a network of structures and services dedicated to keeping the balance of the ecosystem intact while increasing the quality of the island's offer, thus allowing the creation of a model of sustainable management of natural resources and commercial exploitation without risk to the environment.
Phadnis, Nitin
2011-11-01
Understanding the genetic basis of reproductive isolation between recently diverged species is a central problem in evolutionary genetics. Here, I present analyses of the genetic architecture underlying hybrid male sterility and segregation distortion between the Bogota and USA subspecies of Drosophila pseudoobscura. Previously, a single gene, Overdrive (Ovd), was shown to be necessary but not sufficient for both male sterility and segregation distortion in F(1) hybrids between these subspecies, requiring several interacting partner loci for full manifestation of hybrid phenomena. I map these partner loci separately on the Bogota X chromosome and USA autosomes using a combination of different mapping strategies. I find that hybrid sterility involves a single hybrid incompatibility of at least seven interacting partner genes that includes three large-effect loci. Segregation distortion involves three loci on the Bogota X chromosome and one locus on the autosomes. The genetic bases of hybrid sterility and segregation distortion are at least partially--but not completely--overlapping. My results lay the foundation for fine-mapping experiments to identify the complete set of genes that interact with Overdrive. While individual genes that cause hybrid sterility or inviability have been identified in a few cases, my analysis provides a comprehensive look at the genetic architecture of all components of a hybrid incompatibility underlying F(1) hybrid sterility. Such an analysis would likely be unfeasible for most species pairs due to their divergence time and emphasizes the importance of young species pairs such as the D. pseudoobscura subspecies studied here.
Ganther, Jr., Kenneth R.; Snapp, Lowell D.
2002-01-01
Architecture for frequency multiplexing multiple flux locked loops in a system comprising an array of DC SQUID sensors. The architecture involves dividing the traditional flux locked loop into multiple unshared components and a single shared component which, in operation, form a complete flux locked loop relative to each DC SQUID sensor. Each unshared flux locked loop component operates on a different flux modulation frequency. The architecture of the present invention allows a reduction from 2N to N+1 in the number of connections between the cryogenic DC SQUID sensors and their associated room temperature flux locked loops. Furthermore, the 1×N architecture of the present invention can be paralleled to form an M×N array architecture without increasing the required number of flux modulation frequencies.
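To make the wiring saving concrete (the instantiation is simple arithmetic based on the stated formula; the interpretation in parentheses is an assumption):
\[
\text{connections} = 2N \;\longrightarrow\; N+1, \qquad \text{e.g. } N = 16:\; 32 \;\longrightarrow\; 17
\]
(presumably one line per unshared component plus a single line serving the shared component).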
Integrated wide-angle scanner based on translating a curved mirror of acylindrical shape.
Sabry, Yasser M; Khalil, Diaa; Saadany, Bassam; Bourouina, Tarik
2013-06-17
A wide angle microscanning architecture is presented in which the angular deflection is achieved by displacing the principal axis of a curved silicon micromirror of acylindrical shape with respect to the incident beam optical axis. The micromirror curvature is designed to overcome possible deformation of the scanned beam spot size during scanning. In the presented architecture, the optical axis of the beam lies in-plane with respect to the substrate, opening the door for a completely integrated and self-aligned miniaturized scanner. A micro-optical bench scanning device, based on translating a 200 μm focal length micromirror by an electrostatic comb-drive actuator, is implemented on a silicon chip. The microelectromechanical system has a resonance frequency of 329 Hz and a quality factor of 22. A single-mode optical fiber is used as the optical source and is inserted into a micromachined groove fabricated and lithographically aligned with the microbench. Optical deflection angles up to 110 degrees are demonstrated.
VIPRAM_L1CMS: a 2-Tier 3D Architecture for Pattern Recognition for Track Finding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoff, J. R.; Joshi, S.; Liu
In HEP tracking trigger applications, flagging an individual detector hit is not important. Rather, the path of a charged particle through many detector layers is what must be found. Moreover, given the increased luminosity projected for future LHC experiments, this type of track finding will be required within the Level 1 Trigger system. This means that future LHC experiments require not just a chip capable of high-speed track finding but also one with a high-speed readout architecture. VIPRAM_L1CMS is a 2-Tier Vertically Integrated chip designed to fulfill these requirements. It is a complete pipelined Pattern Recognition Associative Memory (PRAM) architecture including pattern recognition, result sparsification, and readout for Level 1 trigger applications in CMS, with 15-bit wide detector addresses and eight detector layers included in the track finding. Pattern recognition is based on classic Content Addressable Memories with a Current Race Scheme to reduce timing complexity and a 4-bit Selective Precharge to minimize power consumption. VIPRAM_L1CMS uses a pipelined set of priority-encoded binary readout structures to sparsify and read out active road flags at frequencies of at least 100 MHz. VIPRAM_L1CMS is designed to work directly with the Pulsar2b Architecture.
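The road-matching idea behind a pattern recognition associative memory can be sketched in software; the Python fragment below is only a simplified analogue with hypothetical pattern data, not the parallel hardware logic of VIPRAM_L1CMS.

# Simplified software analogue of pattern-recognition associative memory (PRAM):
# each stored "road" is a tuple of coarse detector addresses, one per layer.
# The chip performs this comparison in parallel hardware; this sketch only
# illustrates the matching idea, with hypothetical pattern data.
from typing import Dict, List, Tuple

N_LAYERS = 8

# Pattern bank: road id -> expected address on each of the 8 layers.
PATTERN_BANK: Dict[int, Tuple[int, ...]] = {
    0: (12, 14, 15, 17, 20, 22, 25, 27),
    1: (40, 41, 43, 44, 46, 48, 49, 51),
}

def match_roads(event_hits: List[set], min_layers: int = N_LAYERS) -> List[int]:
    """Return road ids whose stored addresses appear in enough layers.

    event_hits[i] is the set of addresses hit on layer i in this event.
    """
    flagged = []
    for road_id, pattern in PATTERN_BANK.items():
        n_matched = sum(
            1 for layer, addr in enumerate(pattern) if addr in event_hits[layer]
        )
        if n_matched >= min_layers:
            flagged.append(road_id)
    return flagged

if __name__ == "__main__":
    hits = [{12}, {14}, {15}, {17}, {20}, {22}, {25}, {27}]
    print(match_roads(hits))  # -> [0]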
GREAT: a web portal for Genome Regulatory Architecture Tools
Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François
2016-01-01
GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analysis of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout (defined as the respective positioning of co-functional genes) and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. PMID:27151196
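As a rough illustration of the kind of regularity detection described above (this is a generic periodogram sketch in Python, not the portal's actual algorithm):

# Generic illustration of detecting a regular spacing among genomic positions
# using a periodogram of a binned position signal; not the GREAT algorithm.
import numpy as np

def dominant_period(positions, genome_length, bin_size=1000):
    """Bin positions along the genome and return the strongest period (in bp)."""
    n_bins = genome_length // bin_size
    signal = np.zeros(n_bins)
    for p in positions:
        signal[int(p) // bin_size % n_bins] += 1.0
    spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(n_bins, d=bin_size)
    k = np.argmax(spectrum[1:]) + 1          # skip the zero-frequency term
    return 1.0 / freqs[k]

if __name__ == "__main__":
    # Toy data: genes planted roughly every 100 kb along a 4 Mb genome; the
    # strongest spectral peak is expected near that spacing (or a harmonic of it).
    rng = np.random.default_rng(0)
    genes = (np.arange(40) * 100_000 + rng.integers(-15_000, 15_000, 40)) % 4_000_000
    print(f"estimated period ~ {dominant_period(genes, 4_000_000):.0f} bp")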
Rattner, J B; Matyas, J R; Barclay, L; Holowaychuk, S; Sciore, P; Lo, I K Y; Shrive, N G; Frank, C B; Achari, Y; Hart, D A
2011-08-01
Menisci help maintain the structural integrity of the knee. However, the poor healing potential of the meniscus following a knee injury can not only end a career in sports but lead to osteoarthritis later in life. Complete understanding of meniscal structure is essential for evaluating its risk for injury and subsequent successful repair. This study used novel approaches to elucidate meniscal architecture. The radial and circumferential collagen fibrils in the meniscus were investigated using novel tissue-preparative techniques for light and electron microscopic studies. The results demonstrate a unique architecture based on differences in the packaging of the fundamental collagen fibrils. For radial arrays, the collagen fibrils are arranged in parallel into ∼10 μm bundles, which associate laterally to form flat sheets of varying dimensions that bifurcate and come together to form a honeycomb network within the body of the meniscus. In contrast, the circumferential arrays display a complex network of collagen fibrils arranged into ∼5 μm bundles. Interestingly, both types of architectural organization of collagen fibrils in meniscus are conserved across mammalian species and are age and sex independent. These findings imply that disruptions in meniscal architecture following an injury contribute to poor prognosis for functional repair. © 2010 John Wiley & Sons A/S.
2015-12-24
network, allowing each to communicate with all nodes on the network. Additionally, the transmission power will be turned down to the lowest value. This...reserved for these unmanned agents are generally too dull, dirty, dangerous, or difficult for onboard human pilots to complete. Additionally, the use...architectures do have a much higher level of complexity than single vehicle architectures. Additionally, the weight, size, and power limitations of the
Memristor-Based Computing Architecture: Design Methodologies and Circuit Techniques
2013-03-01
Technical report, Polytechnic Institute of New York University; dates covered Oct 2010 – Oct 2012. ...schemes for a memristor-based reconfigurable architecture design have not been fully explored yet. Therefore, in this project, we investigated
Assigning a Thesis Project In the Two-Year Architectural Technology Program
ERIC Educational Resources Information Center
Obermeyer, Thomas
1977-01-01
The "thesis project" assigned in the sixth quarter of the eight-quarter architectural technology program at the Dakota County Area Vocational-Technical Institute in Rosemont, Minnesota, requires the students to design a building for a local public service organization or government agency. The complete project will include a program, a…
Designing the invisible architecture of your hospital.
Tye, Joe
2011-01-01
Before building or remodeling a hospital, architects develop a complete set of blueprints. That same sort of detailed attention should be given to the "invisible architecture" of core values, corporate culture, and emotional attitude because this has a much greater impact on the patient and employee experience than do the bricks and mortar.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-25
... design feature associated with the architecture and connectivity capabilities of the airplanes' computer... the comment for an association, business, labor union, etc.). DOT's complete Privacy Act Statement can... architecture for the Embraer Model EMB-550 series of airplanes is composed of several connected networks. This...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-17
..., business, labor union, etc.). DOT's complete Privacy Act Statement can be found in the Federal Register... unusual design feature: an electronics network system architecture which is new and novel for commercial... series architecture is new and novel for commercial transport airplanes because it allows connection to...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-17
... the comment (or signing the comment for an association, business, labor union, etc.). DOT's complete... design feature: The digital systems architecture for the Airbus Model A350-900 series airplanes is composed of several connected networks. This proposed network architecture is used for a diverse set of...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... an association, business, labor union, etc.). DOT's complete Privacy Act Statement can be found in... supplemental type certificate (STC) change in the digital systems architecture in the Learjet Model 45 series... plus two crew members. The proposed Learjet Model 45 architecture is new and novel for commercial...
Imaging through Fog Using Polarization Imaging in the Visible/NIR/SWIR Spectrum
2017-01-11
few haze effects as possible. One post-processing step on the image is applied in order to complete image dehazing. Figure 6: Basic architecture of the... Figure 7: Basic architecture of post-processing techniques to recover a dehazed image from a raw image. This first study was limited on the
A Dual Launch Robotic and Human Lunar Mission Architecture
NASA Technical Reports Server (NTRS)
Jones, David L.; Mulqueen, Jack; Percy, Tom; Griffin, Brand; Smitherman, David
2010-01-01
This paper describes a comprehensive lunar exploration architecture developed by Marshall Space Flight Center's Advanced Concepts Office that features a science-based surface exploration strategy and a transportation architecture that uses two launches of a heavy lift launch vehicle to deliver human and robotic mission systems to the moon. The principal advantage of the dual launch lunar mission strategy is the reduced cost and risk resulting from the development of just one launch vehicle system. The dual launch lunar mission architecture may also enhance opportunities for commercial and international partnerships by using expendable launch vehicle services for robotic missions or development of surface exploration elements. Furthermore, this architecture is particularly suited to the integration of robotic and human exploration to maximize science return. For surface operations, an innovative dual-mode rover is presented that is capable of performing robotic science exploration as well as transporting human crew conducting surface exploration. The dual-mode rover can be deployed to the lunar surface to perform precursor science activities, collect samples, scout potential crew landing sites, and meet the crew at a designated landing site. With this approach, the crew is able to evaluate the robotically collected samples to select the best samples for return to Earth to maximize the scientific value. The rovers can continue robotic exploration after the crew leaves the lunar surface. The transportation system for the dual launch mission architecture uses a lunar-orbit-rendezvous strategy. Two heavy lift launch vehicles depart from Earth within a six hour period to transport the lunar lander and crew elements separately to lunar orbit. In lunar orbit, the crew transfer vehicle docks with the lander and the crew boards the lander for descent to the surface. After the surface mission, the crew returns to the orbiting transfer vehicle for the return to the Earth. This paper describes a complete transportation architecture including the analysis of transportation element options and sensitivities including: transportation element mass to surface landed mass; lander propellant options; and mission crew size. Based on this analysis, initial design concepts for the launch vehicle, crew module and lunar lander are presented. The paper also describes how the dual launch lunar mission architecture would fit into a more general overarching human space exploration philosophy that would allow expanded application of mission transportation elements for missions beyond the Earth-moon realm.
Feasibility study on sensor data fusion for the CP-140 aircraft: fusion architecture analyses
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa
1995-09-01
Loral Canada completed (May 1995) a Department of National Defense (DND) Chief of Research and Development (CRAD) contract to study the feasibility of implementing a multi-sensor data fusion (MSDF) system onboard the CP-140 Aurora aircraft. This system is expected to fuse data from: (a) attributed measurement oriented sensors (ESM, IFF, etc.); (b) imaging sensors (FLIR, SAR, etc.); (c) tracking sensors (radar, acoustics, etc.); (d) data from remote platforms (data links); and (e) non-sensor data (intelligence reports, environmental data, visual sightings, encyclopedic data, etc.). Based on purely theoretical considerations, a central-level fusion architecture will lead to a higher-performance fusion system. However, there are a number of system and fusion-architecture issues involved in fusing such dissimilar data: (1) the currently existing sensors are not designed to provide the type of data required by a fusion system; (2) the different types of data (attribute, imaging, tracking, etc.) may require different degrees of processing before they can be used efficiently within a fusion system; (3) the data quality from different sensors, and more importantly from remote platforms via the data links, must be taken into account before fusing; and (4) the non-sensor data may impose specific requirements on the fusion architecture (e.g. variable weight/priority for the data from different sensors). This paper presents the analyses performed for the selection of the fusion architecture for the enhanced sensor suite planned for the CP-140 aircraft in the context of the mission requirements and environmental conditions.
Enterprise Management Network Architecture Distributed Knowledge Base Support
1990-11-01
Advantages: Potentially, this makes a distributed system more powerful than a conventional, centralized one in two ways: First, it can be more reliable...does not completely apply [35]. The grain size of the processors measures the individual problem-solving power of the agents. In this definition...problem-solving power amounts to the conceptual size of a single action taken by an agent visible to the other agents in the system. If the grain is coarse
Mechanics Methodology for Textile Preform Composite Materials
NASA Technical Reports Server (NTRS)
Poe, Clarence C., Jr.
1996-01-01
NASA and its contractors have completed a program to develop a basic mechanics underpinning for textile composites. Three major deliverables were produced by the program: 1. A set of test methods for measuring material properties and design allowables; 2. Mechanics models to predict the effects of the fiber preform architecture and constituent properties on engineering moduli, strength, damage resistance, and fatigue life; and 3. An electronic data base of coupon type test data. This report describes these three deliverables.
In-body tissue-engineered aortic valve (Biovalve type VII) architecture based on 3D printer molding.
Nakayama, Yasuhide; Takewa, Yoshiaki; Sumikura, Hirohito; Yamanami, Masashi; Matsui, Yuichi; Oie, Tomonori; Kishimoto, Yuichiro; Arakawa, Mamoru; Ohmuma, Kentaro; Tajikawa, Tsutomu; Kanda, Keiichi; Tatsumi, Eisuke
2015-01-01
In-body tissue architecture--a novel and practical regeneration medicine technology--can be used to prepare a completely autologous heart valve, based on the shape of a mold. In this study, a three-dimensional (3D) printer was used to produce the molds. A 3D printer can easily reproduce the 3D-shape and size of native heart valves within several processing hours. For a tri-leaflet, valved conduit with a sinus of Valsalva (Biovalve type VII), the mold was assembled using two conduit parts and three sinus parts produced by the 3D printer. Biovalves were generated from completely autologous connective tissue, containing collagen and fibroblasts, within 2 months following the subcutaneous embedding of the molds (success rate, 27/30). In vitro evaluation, using a pulsatile circulation circuit, showed excellent valvular function with a durability of at least 10 days. Interposed between two expanded polytetrafluoroethylene grafts, the Biovalves (N = 3) were implanted in goats through an apico-aortic bypass procedure. Postoperative echocardiography showed smooth movement of the leaflets with minimal regurgitation under systemic circulation. After 1 month of implantation, smooth white leaflets were observed with minimal thrombus formation. Functional, autologous, 3D-shaped heart valves with clinical application potential were formed following in-body embedding of specially designed molds that were created within several hours by 3D printer. © 2014 Wiley Periodicals, Inc.
Memristor-Based Synapse Design and Training Scheme for Neuromorphic Computing Architecture
2012-06-01
system level built upon the conventional Von Neumann computer architecture [2][3]. Developing the neuromorphic architecture at chip level by...creation of memristor-based neuromorphic computing architecture. Rather than the existing crossbar-based neuron network designs, we focus on memristor
A high-availability architecture for continuous monitoring of sleep disorders.
Iovanovici, Alexandru; Topirceanu, Alexandru; Udrescu, Mihai; Prodan, Lucian; Mihaicuta, Stefan
2015-01-01
We present a complete technical solution for continuously monitoring the vital signs required for observing sleep apnoea events, one of the major sleep respiratory disorders. Based on industry-accepted medical devices, we developed a GSM-based remote data acquisition and transfer module that is integrated via a set of web services into the server side of the application. The back-end is responsible for aggregating all the data and, based on machine learning techniques, it provides a first level of filtering in order to warn about possible abnormalities. The proposed solution is currently under test at the "Victor Babes" Hospital in Timisoara, Romania.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe Lederman
This report contains the comprehensive summary of the work performed on the SBIR Phase II project (“Distributed Relevance Ranking in Heterogeneous Document Collections”) at Deep Web Technologies (http://www.deepwebtech.com). We have successfully completed all of the tasks defined in our SBIR Proposal work plan (See Table 1 - Phase II Tasks Status). The project was completed on schedule and we have successfully deployed an initial production release of the software architecture at DOE-OSTI for the Science.gov Alliance's search portal (http://www.science.gov). We have implemented a set of grid services that supports the extraction, filtering, aggregation, and presentation of search results from numerous heterogeneous document collections. Illustration 3 depicts the services required to perform QuickRank™ filtering of content as defined in our architecture documentation. Functionality that has been implemented is indicated by the services highlighted in green. We have successfully tested our implementation in a multi-node grid deployment both within the Deep Web Technologies offices, and in a heterogeneous geographically distributed grid environment. We have performed a series of load tests in which we successfully simulated 100 concurrent users submitting search requests to the system. This testing was performed on deployments of one, two, and three node grids with services distributed in a number of different configurations. The preliminary results from these tests indicate that our architecture will scale well across multi-node grid deployments, but more work will be needed, beyond the scope of this project, to perform testing and experimentation to determine scalability and resiliency requirements. We are pleased to report that a production quality version (1.4) of the science.gov Alliance's search portal based on our grid architecture was released in June of 2006. This demonstration portal is currently available at http://science.gov/search30 . The portal allows the user to select from a number of collections grouped by category and enter a query expression (See Illustration 1 - Science.gov 3.0 Search Page). After the user clicks “search” a results page is displayed that provides a list of results from the selected collections ordered by relevance based on the query expression the user provided. Our grid-based solution to deep web search and document ranking has already gained attention within DOE, other Government Agencies and a Fortune 50 company. We are committed to the continued development of grid-based solutions to large scale data access, filtering, and presentation problems within the domain of Information Retrieval and the more general categories of content management, data mining and data analysis.
Prodinger, Birgit; Tennant, Alan; Stucki, Gerold; Cieza, Alarcos; Üstün, Tevfik Bedirhan
2016-10-01
Our aim was to specify the requirements of an architecture to serve as the foundation for standardized reporting of health information and to provide an exemplary application of this architecture. The World Health Organization's International Classification of Functioning, Disability and Health (ICF) served as the conceptual framework. Methods to establish content comparability were the ICF Linking Rules. The Rasch measurement model, as a special case of additive conjoint measurement, which satisfies the required criteria for fundamental measurement, allowed for the development of a common metric foundation for measurement unit conversion. Secondary analysis of data from the North Yorkshire Survey was used to illustrate these methods. Patients completed three instruments and the items were linked to the ICF. The Rasch measurement model was applied, first to each scale, and then to items across scales which were linked to a common domain. Based on the linking of items to the ICF, the majority of items were grouped into two domains, Mobility and Self-care. Analysis of the individual scales and of items linked to a common domain across scales satisfied the requirements of the Rasch measurement model. The measurement unit conversion between items from the three instruments linked to the Mobility and Self-care domains, respectively, was demonstrated. The realization of an ICF-based architecture for information on patients' functioning enables harmonization of health information while allowing clinicians and researchers to continue using their existing instruments. This architecture will facilitate access to comprehensive and consistently reported health information to serve as the foundation for informed decision-making. © The Author(s) 2016.
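For readers unfamiliar with the model, the dichotomous Rasch model below shows how items from different instruments linked to one ICF domain end up on a common logit metric; the survey's polytomous items would use an extension such as the partial credit model, so this is background rather than the study's exact specification:
\[
P(X_{pi}=1 \mid \theta_p, b_i) \;=\; \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},
\]
where \(\theta_p\) is the person's level of functioning and \(b_i\) the item difficulty. Once all items are calibrated on this shared logit scale, a raw score on one instrument can be converted into the expected score on another by summing the modelled item probabilities at the same \(\theta_p\), which is the basis of the measurement unit conversion described above.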
Baseline Architecture of ITER Control System
NASA Astrophysics Data System (ADS)
Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.
2011-08-01
The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers both the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.
NASA Astrophysics Data System (ADS)
Kouimtzoglou, T.; Stathopoulou, E. K.; Agrafiotis, P.; Georgopoulos, A.
2017-02-01
Modern advances in the field of image-based 3D reconstruction of complex architectures are valuable tools that may offer the researchers great possibilities integrating the use of such procedures in their studies. In the same way that photogrammetry was a well-known useful tool among the cultural heritage community for years, the state of the art reconstruction techniques generate complete and easy to use 3D data, thus enabling engineers, architects and other cultural heritage experts to approach their case studies in an exhaustive and efficient way. The generated data can be a valuable and accurate basis upon which further plans and studies will be drafted. These and other aspects of the use of image-based 3D data for architectural studies are to be presented and analysed in this paper, based on the experience gained from a specific case study, the Plaka Bridge. This historic structure is of particular interest, as it was recently lost due to extreme weather conditions and serves as a strong proof that preventive actions are of utmost importance in order to preserve our common past.
Using a cognitive architecture for general purpose service robot control
NASA Astrophysics Data System (ADS)
Puigbo, Jordi-Ysard; Pumarola, Albert; Angulo, Cecilio; Tellez, Ricardo
2015-04-01
A humanoid service robot equipped with a set of simple action skills including navigating, grasping, recognising objects or people, among others, is considered in this paper. By using those skills the robot should complete a voice command expressed in natural language encoding a complex task (defined as the concatenation of a number of those basic skills). As a main feature, no traditional planner has been used to decide which skills should be activated, or in which sequence. Instead, the SOAR cognitive architecture acts as the reasoner by selecting which action the robot should complete, addressing it towards the goal. Our proposal allows new goals to be included for the robot just by adding new skills (without the need to encode new plans). The proposed architecture has been tested on a human-sized humanoid robot, REEM, acting as a general purpose service robot.
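The idea of selecting one basic skill at a time from the current state, rather than executing a precomputed plan, can be illustrated with a toy Python sketch; this is neither SOAR nor the REEM software, and the skill and state names are hypothetical.

# Toy illustration of choosing the next basic skill from the current state,
# loosely mimicking a reasoner that addresses actions towards a goal without
# a precomputed plan. Not SOAR and not the REEM implementation.
def next_skill(state: dict) -> str:
    """Rule-based choice of the next basic skill for a 'bring me a drink' goal."""
    if not state["at_kitchen"]:
        return "navigate(kitchen)"
    if not state["object_recognised"]:
        return "recognise(drink)"
    if not state["holding_object"]:
        return "grasp(drink)"
    if not state["at_person"]:
        return "navigate(person)"
    return "hand_over(drink)"

state = {"at_kitchen": False, "object_recognised": False,
         "holding_object": False, "at_person": False}

# Each executed skill updates the state; adding a new goal only requires
# adding new skills and rules, not encoding a new plan.
for key in ["at_kitchen", "object_recognised", "holding_object", "at_person"]:
    print(next_skill(state))
    state[key] = True
print(next_skill(state))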
Manifold parametrization of the left ventricle for a statistical modelling of its complete anatomy
NASA Astrophysics Data System (ADS)
Gil, D.; Garcia-Barnes, J.; Hernández-Sabate, A.; Marti, E.
2010-03-01
Distortion of Left Ventricle (LV) external anatomy is related to some dysfunctions, such as hypertrophy. The architecture of myocardial fibers determines LV electromechanical activation patterns as well as mechanics. Thus, their joint modelling would allow the design of specific interventions (such as pacemaker implantation and LV remodelling) and therapies (such as resynchronization). On one hand, accurate modelling of external anatomy requires either a dense sampling or a continuous infinite-dimensional approach, which requires non-Euclidean statistics. On the other hand, computation of fiber models requires statistics on Riemannian spaces. Most approaches compute separate statistical models for external anatomy and fiber architecture. In this work we propose a general mathematical framework based on differential geometry concepts for computing a statistical model including both external and fiber anatomy. Our framework provides a continuous approach to external anatomy supporting standard statistics. We also provide a straightforward formula for the computation of the Riemannian fiber statistics. We have applied our methodology to the computation of a complete anatomical atlas of canine hearts from diffusion tensor studies. The orientation of fibers over the average external geometry agrees with the segmental description of orientations reported in the literature.
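As an example of statistics on the Riemannian space of diffusion tensors, the log-Euclidean mean below is one common choice; the specific formula proposed by the authors may differ:
\[
\bar{D} \;=\; \exp\!\left(\frac{1}{N}\sum_{k=1}^{N} \log D_k\right),
\]
where the \(D_k\) are the symmetric positive-definite diffusion tensors at corresponding anatomical locations across the \(N\) hearts, and \(\exp\) and \(\log\) denote the matrix exponential and logarithm.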
Lessons Learned from Engineering a Multi-Mission Satellite Operations Center
NASA Technical Reports Server (NTRS)
Madden, Maureen; Cary, Everett, Jr.; Esposito, Timothy; Parker, Jeffrey; Bradley, David
2006-01-01
NASA's Small Explorers (SMEX) satellites have surpassed their designed science-lifetimes and their flight operations teams are now facing the challenge of continuing operations with reduced funding. At present, these missions are being reengineered into a fleet-oriented ground system at Goddard Space Flight Center (GSFC). When completed, this ground system will provide command and control of four SMEX missions and will demonstrate fleet automation and control concepts. As a path-finder for future mission consolidation efforts, this ground system will also demonstrate new ground-based technologies that show promise of supporting longer mission lifecycles and simplifying component integration. One of the core technologies being demonstrated in the SMEX Mission Operations Center is the GSFC Mission Services Evolution Center (GMSEC) architecture. The GMSEC architecture uses commercial Message Oriented Middleware with a common messaging standard to realize a higher level of component interoperability, allowing for interchangeable components in ground systems. Moreover, automation technologies utilizing the GMSEC architecture are being evaluated and implemented to provide extended lights-out operations. This mode of operation will provide routine monitoring and control of the heterogeneous spacecraft fleet. The operational concepts being developed will reduce the need for staffed contacts and are seen as a necessity for fleet management. This paper will describe the experiences of the integration team throughout the reengineering effort of the SMEX ground system. Additionally, lessons learned will be presented based on the team's experiences with integrating multiple missions into a fleet-based automated ground system.
A security architecture for interconnecting health information systems.
Gritzalis, Dimitris; Lambrinoudakis, Costas
2004-03-31
Several hereditary and other chronic diseases necessitate continuous and complicated health care procedures, typically offered in different, often distant, health care units. Inevitably, the medical records of patients suffering from such diseases become complex, grow in size very fast and are scattered all over the units involved in the care process, hindering communication of information between health care professionals. Web-based electronic medical records have been recently proposed as the solution to the above problem, facilitating the interconnection of the health care units in the sense that health care professionals can now access the complete medical record of the patient, even if it is distributed in several remote units. However, by allowing users to access information from virtually anywhere, the universe of ineligible people who may attempt to harm the system is dramatically expanded, thus severely complicating the design and implementation of a secure environment. This paper presents a security architecture that has been mainly designed for providing authentication and authorization services in web-based distributed systems. The architecture has been based on a role-based access scheme and on the implementation of an intelligent security agent per site (i.e. health care unit). This intelligent security agent: (a). authenticates the users, local or remote, that can access the local resources; (b). assigns, through temporary certificates, access privileges to the authenticated users in accordance to their role; and (c). communicates to other sites (through the respective security agents) information about the local users that may need to access information stored in other sites, as well as about local resources that can be accessed remotely.
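The role-based scheme with temporary credentials can be sketched in a few lines of Python; the roles, privileges and token format below are illustrative assumptions (a real deployment would use signed certificates), not the paper's implementation.

# Hedged sketch of the role-based access idea: a security agent authenticates
# a user, looks up the role's privileges, and issues a short-lived credential.
# The credential here is a plain dict for illustration only; a real system
# would issue X.509 certificates or cryptographically signed tokens.
import time

ROLE_PRIVILEGES = {
    "physician": {"read_record", "write_record"},
    "nurse": {"read_record"},
}

def issue_certificate(user: str, role: str, ttl_seconds: int = 900) -> dict:
    """Temporary certificate granting the role's privileges for a limited time."""
    return {
        "user": user,
        "privileges": ROLE_PRIVILEGES.get(role, set()),
        "expires_at": time.time() + ttl_seconds,
    }

def authorize(cert: dict, action: str) -> bool:
    """Local or remote security agents apply the same check."""
    return time.time() < cert["expires_at"] and action in cert["privileges"]

cert = issue_certificate("dr_smith", "physician")
print(authorize(cert, "write_record"))   # True
print(authorize(cert, "delete_record"))  # False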
Single-image-based Modelling Architecture from a Historical Photograph
NASA Astrophysics Data System (ADS)
Dzwierzynska, Jolanta
2017-10-01
Historical photographs have proved to be very useful for providing a dimensional and geometrical analysis of buildings as well as for generating a 3D reconstruction of the whole structure. The paper addresses the problem of analysing a single historical photograph and modelling an architectural object from it. In particular, it focuses on reconstructing the original appearance of the New Town synagogue from a single historic photograph when the camera calibration is completely unknown. Because the photograph faithfully followed the geometric rules of perspective, it was possible to develop and apply a method to obtain a correct 3D reconstruction of the building. The modelling process consisted of a series of familiar steps: feature extraction, determination of the base elements of perspective, dimensional analysis and 3D reconstruction. Simple formulas were proposed to estimate the location of characteristic points of the building in a 3D Cartesian system of axes on the basis of their location in a 2D Cartesian system of axes. The reconstruction process proceeded well, although slight corrections were necessary. It was possible to reconstruct the shape of the building in general, and two of its facades in detail. The reconstruction of the other two facades requires some additional information or an additional picture. The success of the presented reconstruction method depends on the geometrical content of the photograph as well as on the quality of the picture, which ensures the legibility of the building edges. The presented method of reconstruction is a combination of a descriptive method of reconstruction and computer aid; therefore, it seems to be universal. It can prove useful for single-image-based modelling of architecture.
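For orientation, the standard pinhole projection relation links a 3D building point to its 2D image coordinates; the paper works with descriptive-geometry perspective constructions, so the matrix form below is included only as general background, not as the author's formulas:
\[
s\begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
= \mathbf{K}\,\bigl[\mathbf{R}\;\big|\;\mathbf{t}\bigr]
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix},
\]
where \((u,v)\) are image coordinates, \((X,Y,Z)\) is a point on the building, \(\mathbf{K}\) is the (here unknown) intrinsic calibration, \([\mathbf{R}\,|\,\mathbf{t}]\) the camera pose, and \(s\) a scale factor; with the calibration unknown, additional constraints such as vanishing points of the facade edges and known reference dimensions take the place of \(\mathbf{K}\).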
Issues in Defining Software Architectures in a GIS Environment
NASA Technical Reports Server (NTRS)
Acosta, Jesus; Alvorado, Lori
1997-01-01
The primary mission of the Pan-American Center for Earth and Environmental Studies (PACES) is to advance the research areas that are relevant to NASA's Mission to Planet Earth program. One of the activities at PACES is the establishment of a repository for geographical, geological and environmental information that covers various regions of Mexico and the southwest region of the U.S. and that is acquired from NASA and other sources through remote sensing, ground studies or paper-based maps. The center will be providing access of this information to other government entities in the U.S. and Mexico, and research groups from universities, national laboratories and industry. Geographical Information Systems(GIS) provide the means to manage, manipulate, analyze and display geographically referenced information that will be managed by PACES. Excellent off-the-shelf software exists for a complete GIS as well as software for storing and managing spatial databases, processing images, networking and viewing maps with layered information. This allows the user flexibility in combining systems to create a GIS or to mix these software packages with custom-built application programs. Software architectural languages provide the ability to specify the computational components and interactions among these components, an important topic in the domain of GIS because of the need to integrate numerous software packages. This paper discusses the characteristics that architectural languages address with respect to the issues relating to the data that must be communicated between software systems and components when systems interact. The paper presents a background on GIS in section 2. Section 3 gives an overview of software architecture and architectural languages. Section 4 suggests issues that may be of concern when defining the software architecture of a GIS. The last section discusses the future research effort and finishes with a summary.
Residence Hall Architecture and Sense of Community: Everything Old Is New Again
ERIC Educational Resources Information Center
Devlin, Ann Sloan; Donovan, Sarah; Nicolov, Arianne; Nold, Olivia; Zandan, Gabrielle
2008-01-01
This study of almost 600 students examines the relationship between sense of community and college dormitory architecture on the campus of a small residential liberal arts college in the Northeast. Respondents of all class years completed an online survey that included the Sense of Community Index and the Relationship dimension of the University…
NASA Astrophysics Data System (ADS)
Rashid, Md. M.; Rahaman, H.
2013-07-01
This study embarked upon the premise that the architecture of a building is a dynamic phenomenon. From its conception, a building is susceptible to change for various reasons. A historical building that is several hundred years old must have undergone changes for political, social, religious and, most importantly, functional reasons. Capturing a building and its dynamic evolution is therefore necessary to appreciate its architecture as well as its heritage value, whereas the conventional method of fact-based historiography captures the building only at a particular moment. This leaves architectural historians perplexed over which particular moment should be documented. It is a great challenge for architectural historians to recover, from the present point in time, these dynamic characteristics of a building, which are mostly inconspicuous in nature; in this situation the historical discourse also remains elusive and blurred. The idea of 4D capturing comes to the fore in this scenario, and the current research ventures into this emerging idea to record the architecture of the early period. This paper highlights the need for a flexible tool to capture this dynamic character of a building. Citing the case study of a 7th-century Buddhist monastery in Bengal, the paper argues for capturing the narrative of a historical building, rather than only the facts, to obtain a complete picture of its architecture. This study aims at capturing the narrative of Sompur Mahavihara, the UNESCO World Heritage site in Bangladesh, which is currently in a ruinous condition. Its few hundred years of life suggest that, as architecture, it was subject to change for different reasons, mainly political, religious and ritual. As a monument belonging to the flourishing phase of a society, this monastery certainly played a role as a stage for religious and political pageantry as well as for different religious performances. As architecture, it works as a complex process of interaction between different layers of ideas, agendas and authorship through time. The paper further explores different tools for historians to capture this process of interaction and to preserve/conserve the narrative of this building using virtual modelling.
An S N Algorithm for Modern Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Randal Scott
2016-08-29
LANL discrete ordinates transport packages are required to perform large, computationally intensive time-dependent calculations on massively parallel architectures, where even a single such calculation may need many months to complete. While KBA methods scale out well to very large numbers of compute nodes, we are limited by practical constraints on the number of such nodes we can actually apply to any given calculation. Instead, we describe a modified KBA algorithm that allows realization of the reductions in solution time offered by both the current, and future, architectural changes within a compute node.
Satellite control system nucleus for the Brazilian complete space mission
NASA Astrophysics Data System (ADS)
Yamaguti, Wilson; Decarvalhovieira, Anastacio Emanuel; Deoliveira, Julia Leocadia; Cardoso, Paulo Eduardo; Dacosta, Petronio Osorio
1990-10-01
The nucleus of the satellite control system for the Brazilian data collecting and remote sensing satellites is described. The system is based on Digital Equipment computers and the VAX/VMS operating system. The nucleus provides access control, system configuration, event management, history file management, time synchronization, wall display control, and X.25 data communication network access facilities. The architecture of the nucleus and its main implementation aspects are described. The implementation experience acquired is considered.
2013-02-01
Rebuilding the Tower of Babel – Better Communication with Standards, Matthew Hause (Object Management Group); A Proposed Pattern of Enterprise Architecture, Dr Clive Boughton. ...complete a project at lower cost inevitably results in longer schedules or reduced capability/lower quality. As the standard saying goes today, “faster
2012-11-01
Rebuilding the Tower of Babel – Better Communication with Standards, Matthew Hause (Object Management Group); A Proposed Pattern of Enterprise Architecture, Dr Clive Boughton. ...complete a project at lower cost inevitably results in longer schedules or reduced capability/lower quality. As the standard saying goes today, “faster
Parallelization of Program to Optimize Simulated Trajectories (POST3D)
NASA Technical Reports Server (NTRS)
Hammond, Dana P.; Korte, John J. (Technical Monitor)
2001-01-01
This paper describes the parallelization of the Program to Optimize Simulated Trajectories (POST3D). POST3D uses a gradient-based optimization algorithm that reaches an optimum design point by moving from one design point to the next. The gradient calculations required to complete the optimization process dominate the computational time and have been parallelized using a Single Program Multiple Data (SPMD) approach on a distributed-memory NUMA (non-uniform memory access) architecture. The Origin2000 was used for the tests presented.
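The SPMD strategy of distributing finite-difference gradient components across processes can be sketched with mpi4py; this is a generic illustration under assumed names, not the POST3D code.

# Illustrative SPMD sketch: each rank evaluates the finite-difference gradient
# components for its own subset of design variables, then all components are
# gathered on every rank. Run with e.g.:  mpirun -np 4 python fd_gradient.py
import numpy as np
from mpi4py import MPI

def objective(x: np.ndarray) -> float:
    """Stand-in for a trajectory simulation returning the cost."""
    return float(np.sum(x ** 2))

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

x0 = np.linspace(1.0, 2.0, 12)           # current design point
h = 1.0e-6
f0 = objective(x0)

local_grad = {}
for i in range(rank, x0.size, size):     # round-robin split of components
    xp = x0.copy()
    xp[i] += h
    local_grad[i] = (objective(xp) - f0) / h

# Gather the pieces so every rank holds the full gradient.
pieces = comm.allgather(local_grad)
grad = np.empty_like(x0)
for piece in pieces:
    for i, g in piece.items():
        grad[i] = g

if rank == 0:
    print(grad)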
NASA Technical Reports Server (NTRS)
Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.
2006-01-01
System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.
NASA Astrophysics Data System (ADS)
Litinski, Daniel; Kesselring, Markus S.; Eisert, Jens; von Oppen, Felix
2017-07-01
We present a scalable architecture for fault-tolerant topological quantum computation using networks of voltage-controlled Majorana Cooper pair boxes and topological color codes for error correction. Color codes have a set of transversal gates which coincides with the set of topologically protected gates in Majorana-based systems, namely, the Clifford gates. In this way, we establish color codes as providing a natural setting in which advantages offered by topological hardware can be combined with those arising from topological error-correcting software for full-fledged fault-tolerant quantum computing. We provide a complete description of our architecture, including the underlying physical ingredients. We start by showing that in topological superconductor networks, hexagonal cells can be employed to serve as physical qubits for universal quantum computation, and we present protocols for realizing topologically protected Clifford gates. These hexagonal-cell qubits allow for a direct implementation of open-boundary color codes with ancilla-free syndrome read-out and logical T gates via magic-state distillation. For concreteness, we describe how the necessary operations can be implemented using networks of Majorana Cooper pair boxes, and we give a feasibility estimate for error correction in this architecture. Our approach is motivated by nanowire-based networks of topological superconductors, but it could also be realized in alternative settings such as quantum-Hall-superconductor hybrids.
The μ-RWELL: A compact, spark protected, single amplification-stage MPGD
NASA Astrophysics Data System (ADS)
Poli Lener, M.; Bencivenni, G.; de Olivera, R.; Felici, G.; Franchino, S.; Gatta, M.; Maggi, M.; Morello, G.; Sharma, A.
2016-07-01
In this work we present two innovative architectures of resistive MPGDs based on the WELL-amplification concept: - the micro-Resistive WELL (μ-RWELL), a compact spark-protected single amplification-stage Micro-Pattern Gas Detector (MPGD). The amplification stage, realized with a structure very similar to a GEM foil (called WELL), is embedded through a resistive layer in the readout board. A cathode electrode, defining the gas conversion/drift gap, completes the detector mechanics. The new architecture shows an excellent space resolution of 50 μm, is a very compact device, is robust against discharges, exhibits a large gain (>10^4), and is simple to construct and easy to engineer, making it suitable for large-area tracking devices as well as digital calorimeters. - the Fast Timing Micro-pattern (FTM), a new device with an architecture based on a stack of several coupled fully resistive layers in which drift and multiplication stages (WELL type) alternate. The signals from each multiplication stage can be read out from any external readout board through the capacitive couplings, providing a signal with a gain of 10^4-10^5. The main advantage of this new device is the improvement in timing provided by the competition of the ionization processes in the different drift regions, which can be exploited for fast timing at high-luminosity accelerators (e.g. the HL-LHC upgrade) as well as for applications like medical imaging.
NASA Technical Reports Server (NTRS)
Sorini, Chris; Chattopadhyay, Aditi; Goldberg, Robert K.; Kohlman, Lee W.
2016-01-01
Understanding the high velocity impact response of polymer matrix composites with complex architectures is critical to many aerospace applications, including engine fan blade containment systems where the structure must be able to completely contain fan blades in the event of a blade-out. Despite the benefits offered by these materials, the complex nature of textile composites presents a significant challenge for the prediction of deformation and damage under both quasi-static and impact loading conditions. The relatively large mesoscale repeating unit cell (in comparison to the size of structural components) causes the material to behave like a structure rather than a homogeneous material. Impact experiments conducted at NASA Glenn Research Center have shown the damage patterns to be a function of the underlying material architecture. Traditional computational techniques that involve modeling these materials using smeared homogeneous, orthotropic material properties at the macroscale result in simulated damage patterns that are a function of the structural geometry, but not the material architecture. In order to preserve heterogeneity at the highest length scale in a robust yet computationally efficient manner, and capture the architecturally dependent damage patterns, a previously-developed subcell modeling approach where the braided composite unit cell is approximated as a series of four adjacent laminated composites is utilized. This work discusses the implementation of the subcell methodology into the commercial transient dynamic finite element code LS-DYNA (Livermore Software Technology Corp.). Verification and validation studies are also presented, including simulation of the tensile response of straight-sided and notched quasi-static coupons composed of a T700/PR520 triaxially braided [0°/60°/-60°] composite. Based on the results of the verification and validation studies, advantages and limitations of the methodology as well as plans for future work are discussed.
On implementation of DCTCP on three-tier and fat-tree data center network topologies.
Zafar, Saima; Bashir, Abeer; Chaudhry, Shafique Ahmad
2016-01-01
A data center is a facility for housing computational and storage systems interconnected through a communication network called the data center network (DCN). Due to tremendous growth in computational power, storage capacity and the number of interconnected servers, the DCN faces challenges concerning efficiency, reliability and scalability. Although transmission control protocol (TCP) is a time-tested transport protocol in the Internet, DCN challenges such as inadequate buffer space in switches and bandwidth limitations have prompted researchers to propose techniques to improve TCP performance or to design new transport protocols for the DCN. Data center TCP (DCTCP) has emerged as one of the most promising solutions in this domain; it employs the explicit congestion notification feature of TCP to enhance the TCP congestion control algorithm. While DCTCP has been analyzed for a two-tier tree-based DCN topology with traffic between servers in the same rack, which is common in cloud applications, it remains oblivious to the traffic patterns common in university and private enterprise networks, which traverse the complete network interconnect spanning the upper-tier layers. We also recognize that DCTCP performance cannot remain unaffected by the underlying DCN architecture; hence there is a need to test and compare DCTCP performance when implemented over diverse DCN architectures. Some of the most notable DCN architectures are the legacy three-tier, fat-tree, BCube, DCell, VL2, and CamCube. In this research, we simulate the two switch-centric DCN architectures, the widely deployed legacy three-tier architecture and the promising fat-tree architecture, using a network simulator and analyze the performance of DCTCP in terms of throughput and delay for realistic traffic patterns. We also examine how DCTCP prevents incast and outcast congestion when realistic DCN traffic patterns are employed in the above-mentioned topologies. Our results show that the underlying DCN architecture significantly impacts DCTCP performance. We find that DCTCP gives optimal performance in the fat-tree topology and is most suitable for large networks.
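For readers unfamiliar with DCTCP, the rule it adds to TCP congestion control can be sketched in a few lines. The toy sender below (a generic sketch, not the simulator code used in this study) smooths the per-window fraction of ECN-marked ACKs into an estimate alpha and backs off its window in proportion to it:

```python
# Minimal sketch of the DCTCP congestion-window update.
class DctcpSender:
    def __init__(self, cwnd=10.0, g=1.0 / 16):
        self.cwnd = cwnd      # congestion window (segments)
        self.alpha = 0.0      # smoothed estimate of congestion extent
        self.g = g            # gain of the moving average

    def on_window_acked(self, acked, marked):
        f = marked / acked if acked else 0.0
        self.alpha = (1 - self.g) * self.alpha + self.g * f
        if marked:  # congestion observed: scale back proportionally to alpha
            self.cwnd = max(1.0, self.cwnd * (1 - self.alpha / 2))
        else:       # no marks: plain additive increase
            self.cwnd += 1.0

s = DctcpSender()
for acked, marked in [(10, 0), (10, 3), (11, 6), (8, 0)]:
    s.on_window_acked(acked, marked)
    print(round(s.cwnd, 2), round(s.alpha, 3))
```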
GREAT: a web portal for Genome Regulatory Architecture Tools.
Bouyioukos, Costas; Bucchini, François; Elati, Mohamed; Képès, François
2016-07-08
GREAT (Genome REgulatory Architecture Tools) is a novel web portal for tools designed to generate user-friendly and biologically useful analysis of genome architecture and regulation. The online tools of GREAT are freely accessible and compatible with essentially any operating system which runs a modern browser. GREAT is based on the analysis of genome layout -defined as the respective positioning of co-functional genes- and its relation with chromosome architecture and gene expression. GREAT tools allow users to systematically detect regular patterns along co-functional genomic features in an automatic way consisting of three individual steps and respective interactive visualizations. In addition to the complete analysis of regularities, GREAT tools enable the use of periodicity and position information for improving the prediction of transcription factor binding sites using a multi-view machine learning approach. The outcome of this integrative approach features a multivariate analysis of the interplay between the location of a gene and its regulatory sequence. GREAT results are plotted in web interactive graphs and are available for download either as individual plots, self-contained interactive pages or as machine readable tables for downstream analysis. The GREAT portal can be reached at the following URL https://absynth.issb.genopole.fr/GREAT and each individual GREAT tool is available for downloading. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander
2008-04-01
We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adopted to virtually every computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit as well as the open source JOELib package. As an application, for this set of compounds, the agreement of log P and TPSA between the packages was compared. Outliers were found to be mostly non-druglike compounds and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, which is also easily adaptable to user demands due to its "plug-in architecture". The complete source codes as well as calculated properties along with links to PubChem resources are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.
NASA Astrophysics Data System (ADS)
Dore, C.; Murphy, M.
2013-02-01
This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.
Pipelined CPU Design with FPGA in Teaching Computer Architecture
ERIC Educational Resources Information Center
Lee, Jong Hyuk; Lee, Seung Eun; Yu, Heon Chang; Suh, Taeweon
2012-01-01
This paper presents a pipelined CPU design project with a field programmable gate array (FPGA) system in a computer architecture course. The class project is a five-stage pipelined 32-bit MIPS design with experiments on the Altera DE2 board. For proper scheduling, milestones were set every one or two weeks to help students complete the project on…
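A minimal software analogy of the five classic MIPS stages (purely illustrative; the actual class project is a hardware design on the Altera DE2 board, not Python) shows instructions advancing one stage per clock cycle:

```python
# Toy sketch of instructions flowing through IF/ID/EX/MEM/WB each cycle.
STAGES = ["IF", "ID", "EX", "MEM", "WB"]

def simulate(program, cycles):
    pipeline = [None] * len(STAGES)          # one slot per stage
    fetch_queue = list(program)
    for clk in range(1, cycles + 1):
        # Fetch a new instruction into IF and shift the rest forward;
        # whatever leaves WB has completed.
        pipeline = [fetch_queue.pop(0) if fetch_queue else None] + pipeline[:-1]
        occupancy = ", ".join(f"{s}:{i or '-'}" for s, i in zip(STAGES, pipeline))
        print(f"cycle {clk:2d}  {occupancy}")

simulate(["lw", "add", "sub", "beq", "sw"], cycles=9)
```

Hazard detection and forwarding, which the real project must handle, are deliberately omitted here.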
Design Skills and Prototyping for Defense Systems
2015-04-30
however, the utility of prototyping has had a demonstrably mixed record in defense acquisition. Some programs, such as the Manhattan Project, were ... almost completely undefined. The first production reactors for the Manhattan Project suffered a near-catastrophic engineering design flaw stemming ... architecture, as was seen in the F-117 and Manhattan Project development efforts. Architectural Prototyping: simply maintaining design teams or developing ...
Space station data system analysis/architecture study. Task 3: Trade studies, DR-5, volume 2
NASA Technical Reports Server (NTRS)
1985-01-01
Results of a Space Station Data System Analysis/Architecture Study for the Goddard Space Flight Center are presented. This study, which emphasized a system engineering design for a complete, end-to-end data system, was divided into six tasks: (1) Functional requirements definition; (2) Options development; (3) Trade studies; (4) System definitions; (5) Program plan; and (6) Study maintenance. The task inter-relationship and documentation flow are described. Information in Volume 2 is devoted to Task 3: Trade Studies. Trade studies have been carried out in the following areas: (1) software development test and integration capability; (2) fault tolerant computing; (3) space qualified computers; (4) distributed data base management system; (5) system integration test and verification; (6) crew workstations; (7) mass storage; (8) command and resource management; and (9) space communications. Results are presented for each task.
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; ...
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
NASA Astrophysics Data System (ADS)
Coulais, Corentin; Kettenis, Chris; van Hecke, Martin
2018-01-01
The architecture of mechanical metamaterials is designed to harness geometry, nonlinearity and topology to obtain advanced functionalities such as shape morphing, programmability and one-way propagation. Although a purely geometric framework successfully captures the physics of small systems under idealized conditions, large systems or heterogeneous driving conditions remain essentially unexplored. Here we uncover strong anomalies in the mechanics of a broad class of metamaterials, such as auxetics, shape changers or topological insulators: a non-monotonic variation of their stiffness with system size, and the ability of textured boundaries to completely alter their properties. These striking features stem from the competition between rotation-based deformations (relevant for small systems) and ordinary elasticity, and are controlled by a characteristic length scale which is entirely tunable by the architectural details. Our study provides new vistas for designing, controlling and programming the mechanics of metamaterials.
Enhancing Cassini Operations & Science Planning Tools
NASA Technical Reports Server (NTRS)
Castello, Jonathan
2012-01-01
The Cassini team uses a variety of software utilities as they manage and coordinate their mission to Saturn. Most of these tools have been unchanged for many years, and although stability is a virtue for long-lived space missions, there are some less-fragile tools that could greatly benefit from modern improvements. This report shall describe three such upgrades, including their architectural differences and their overall impact. Emphasis is placed on the motivation and rationale behind architectural choices rather than the final product, so as to illuminate the lessons learned and discoveries made. These three enhancements included developing a strategy for migrating Science Planning utilities to a new execution model, rewriting the team's internal portal for ease of use and maintenance, and developing a web-based agenda application for tracking the sequence of files being transmitted to the Cassini spacecraft. Of this set, the first two have been fully completed, while the agenda application is currently in the early prototype stage.
Development of Network Interface Cards for TRIDAQ systems with the NaNet framework
NASA Astrophysics Data System (ADS)
Ammendola, R.; Biagioni, A.; Cretaro, P.; Di Lorenzo, S.; Fiorini, M.; Frezza, O.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Valente, P.; Vicini, P.
2017-03-01
NaNet is a framework for the development of FPGA-based PCI Express (PCIe) Network Interface Cards (NICs) with real-time data transport architecture that can be effectively employed in TRIDAQ systems. Key features of the architecture are the flexibility in the configuration of the number and kind of the I/O channels, the hardware offloading of the network protocol stack, the stream processing capability, and the zero-copy CPU and GPU Remote Direct Memory Access (RDMA). Three NIC designs have been developed with the NaNet framework: NaNet-1 and NaNet-10 for the CERN NA62 low level trigger and NaNet3 for the KM3NeT-IT underwater neutrino telescope DAQ system. We will focus our description on the NaNet-10 design, as it is the most complete of the three in terms of capabilities and integrated IPs of the framework.
Integrated microelectronics for smart textiles.
Lauterbach, Christl; Glaser, Rupert; Savio, Domnic; Schnell, Markus; Weber, Werner
2005-01-01
The combination of textile fabrics with microelectronics will lead to completely new applications, thus achieving elements of ambient intelligence. The integration of sensor or actuator networks, using fabrics with conductive fibres as a textile motherboard, enables the fabrication of large active areas. In this paper we describe an integration technology for the fabrication of a "smart textile" based on a wired peer-to-peer network of microcontrollers with integrated sensors or actuators. A self-organizing and fault-tolerant architecture is accomplished which detects the physical shape of the network. Routing paths are formed for data transmission, automatically circumventing defective or missing areas. The network architecture allows the smart textiles to be produced by reel-to-reel processes, cut into arbitrary shapes subsequently and implemented in systems at low installation costs. The possible applications are manifold, ranging from alarm systems to intelligent guidance systems, passenger recognition in car seats, air conditioning control in interior lining and smart wallpaper with software-defined light switches.
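The routing behaviour described above can be illustrated with a plain breadth-first search over a grid of nodes from which defective cells are excluded; this is only a conceptual sketch, not the firmware of the smart textile:

```python
# Hedged sketch: route data across a grid of microcontroller nodes while
# skipping defective or missing cells.
from collections import deque

def route(grid_w, grid_h, defective, start, goal):
    def neighbors(n):
        x, y = n
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < grid_w and 0 <= ny < grid_h and (nx, ny) not in defective:
                yield nx, ny

    prev, frontier = {start: None}, deque([start])
    while frontier:
        n = frontier.popleft()
        if n == goal:                      # reconstruct the discovered path
            path = []
            while n is not None:
                path.append(n)
                n = prev[n]
            return path[::-1]
        for m in neighbors(n):
            if m not in prev:
                prev[m] = n
                frontier.append(m)
    return None                            # goal unreachable

print(route(4, 4, {(1, 1), (2, 1)}, (0, 0), (3, 3)))
```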
Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Ousterhout, John K.; Patterson, David A.
1993-01-01
Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPU's. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and scalable, multipurpose system.
An epidemic model for biological data fusion in ad hoc sensor networks
NASA Astrophysics Data System (ADS)
Chang, K. C.; Kotari, Vikas
2009-05-01
Bioterrorism can be a highly refined and catastrophic approach to attacking a nation. Countering it requires the development of a complete architecture dedicated to this purpose, which includes but is not limited to sensing/detection, tracking and fusion, communication, and other functions. In this paper we focus on one such architecture and evaluate its performance. Various sensors for this specific purpose have been studied. The focus has been on the use of distributed systems, such as ad hoc networks, and on the application of epidemic data fusion algorithms to better manage bio-threat data. The emphasis has been on understanding the performance characteristics of these algorithms under diversified real-time scenarios, which are implemented through extensive Java-based simulations. Through comparative studies of communication and fusion, the performance of the channel filter algorithm for biological sensor data fusion is validated.
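As a hedged sketch of the channel-filter class of algorithms evaluated here (scalar Gaussian case only, not the paper's Java simulation), fusion in information form adds the two local estimates and subtracts the information the nodes already share through the channel, avoiding double counting:

```python
# Illustrative channel-filter fusion for scalar Gaussian estimates.
def to_info(mean, var):
    y = 1.0 / var            # information (inverse variance), scalar case
    return y * mean, y       # (information state, information value)

def fuse(node_a, node_b, channel):
    (ia, Ya), (ib, Yb), (ic, Yc) = node_a, node_b, channel
    i, Y = ia + ib - ic, Ya + Yb - Yc   # remove the shared (channel) information
    return i / Y, 1.0 / Y               # back to (mean, variance)

a = to_info(mean=2.1, var=0.5)       # local estimate at node A
b = to_info(mean=1.8, var=0.4)       # estimate received from node B
common = to_info(mean=2.0, var=1.0)  # information the two nodes already exchanged
print(fuse(a, b, common))
```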
2015-05-01
Achieving Better Buying Power through Acquisition of Open Architecture Software Systems for Web-Based and Mobile Devices, Walt Scacchi and Thomas ... Topics include open architecture (OA) software systems and emerging challenges in achieving Better Buying Power (BBP) via OA software systems for Web-based and mobile devices.
Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B
2013-01-01
The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point-of-care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method that allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE--a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model, and interacts with hospital information systems. Proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
Analysis of Android Device-Based Solutions for Fall Detection
Casilari, Eduardo; Luque, Rafael; Morón, María-José
2015-01-01
Falls are a major cause of health and psychological problems as well as hospitalization costs among older adults. Thus, the investigation of automatic Fall Detection Systems (FDSs) has received special attention from the research community during the last decade. In this area, the widespread popularity, decreasing price, computing capabilities, built-in sensors and multiplicity of wireless interfaces of Android-based devices (especially smartphones) have fostered the adoption of this technology to deploy wearable and inexpensive architectures for fall detection. This paper presents a critical and thorough analysis of those existing fall detection systems that are based on Android devices. The review systematically classifies and compares the proposals of the literature taking into account different criteria such as the system architecture, the employed sensors, the detection algorithm or the response in case of a fall alarm. The study emphasizes the analysis of the evaluation methods that are employed to assess the effectiveness of the detection process. The review reveals the complete lack of a reference framework to validate and compare the proposals. In addition, the study also shows that most research works do not evaluate the actual applicability of the Android devices (with limited battery and computing resources) to fall detection solutions. PMID:26213928
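Many of the reviewed systems rely on some variant of a threshold detector over the accelerometer magnitude. The following generic sketch (the thresholds and window size are invented for illustration and are not taken from any specific proposal) flags a free-fall dip followed shortly by an impact peak:

```python
# Generic threshold-based fall detector over accelerometer samples in g.
import math

FREE_FALL_G, IMPACT_G, WINDOW = 0.45, 2.5, 15  # illustrative thresholds / sample window

def detect_fall(samples):
    """samples: list of (ax, ay, az) tuples in units of g."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G:                                  # free-fall dip
            if any(p > IMPACT_G for p in mags[i + 1:i + 1 + WINDOW]):
                return True                                  # followed by impact
    return False

still = [(0.0, 0.0, 1.0)] * 50
fall = still[:20] + [(0.0, 0.0, 0.2)] * 3 + [(1.8, 1.9, 1.5)] * 2 + still[:20]
print(detect_fall(still), detect_fall(fall))   # False True
```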
Micromechanical slit positioning system as a transmissive spatial light modulator
NASA Astrophysics Data System (ADS)
Riesenberg, Rainer
2001-11-01
Micro-slits have been prepared with a slit width and a slit length of 2 ... 1000 micrometers. Linear and two-dimensional arrays up to 10 x 110 slits have been developed and completed with a piezo-actuator for shifting. This system is a so-called mechanical slit positioning system. The light is switched by simple one- or two-dimensional displacement of coded slit masks in a one- or two-layer architecture. The slit positioning system belongs to the transmissive class of MEMS-based spatial light modulators (SLM). It has fundamental advantages for optical contrast and also can be used in the full spectral region. Therefore transmissive versions of SLM should be a future solution. Instrument architectures based on the slit positioning system can increase the resolution by subpixel generation, the throughput by Hadamard transform mode, or select objects for multi-object spectroscopy. The linear slit positioning system was space qualified within an advanced micro-spectrometer. A NIR multi-object spectrometer for the Next Generation Space Telescope (NGST) is based on a field selector for selecting objects. The field selector is an SLM, which could be implemented by a slit positioning system.
Architecture of the ring formed by the tubulin homologue FtsZ in bacterial cell division
Szwedziak, Piotr; Wang, Qing; Bharat, Tanmay A M; Tsim, Matthew; Löwe, Jan
2014-01-01
Membrane constriction is a prerequisite for cell division. The most common membrane constriction system in prokaryotes is based on the tubulin homologue FtsZ, whose filaments in E. coli are anchored to the membrane by FtsA and enable the formation of the Z-ring and divisome. The precise architecture of the FtsZ ring has remained enigmatic. In this study, we report three-dimensional arrangements of FtsZ and FtsA filaments in C. crescentus and E. coli cells and inside constricting liposomes by means of electron cryomicroscopy and cryotomography. In vivo and in vitro, the Z-ring is composed of a small, single-layered band of filaments parallel to the membrane, creating a continuous ring through lateral filament contacts. Visualisation of the in vitro reconstituted constrictions as well as a complete tracing of the helical paths of the filaments with a molecular model favour a mechanism of FtsZ-based membrane constriction that is likely to be accompanied by filament sliding. DOI: http://dx.doi.org/10.7554/eLife.04601.001 PMID:25490152
Klein, Karsten; Wolff, Astrid C; Ziebold, Oliver; Liebscher, Thomas
2008-01-01
The ICW eHealth Framework (eHF) is a powerful infrastructure and platform for the development of service-oriented solutions in the health care business. It is the culmination of many years of experience of ICW in the development and use of in-house health care solutions and represents the foundation of ICW product developments based on the Java Enterprise Edition (Java EE). The ICW eHealth Framework has been leveraged to allow development by external partners - enabling adopters a straightforward integration into ICW solutions. The ICW eHealth Framework consists of reusable software components, development tools, architectural guidelines and conventions defining a full software-development and product lifecycle. From the perspective of a partner, the framework provides services and infrastructure capabilities for integrating applications within an eHF-based solution. This article introduces the ICW eHealth Framework's basic architectural concepts and technologies. It provides an overview of its module and component model, describes the development platform that supports the complete software development lifecycle of health care applications and outlines technological aspects, mainly focusing on application development frameworks and open standards.
Wellness health care and the architectural environment.
Verderber, S; Grice, S; Gutentag, P
1987-01-01
The stress management-wellness health care environment is emerging as a distinct facility type in the 1980s. Yet the idea is not a new one, with roots based in the Greek Asklepieon dating from 480 B.C. This and later Western transformations for health promotion embraced the therapeutic amenity inherent in meditation, solace and communality with nature based on the premise that the need for refuge from the stress inherent in one's daily life is deep-rooted in humans. A two-phase study is reported on wellness health care provider priorities, relative to the architectural features of stress-wellness centers. Representatives of 11 health care organizations responded to a telephone survey questionnaire, and 128 respondents completed a user needs questionnaire. Four major issues were addressed: image and appearance, location and setting, services provided and costs, and patterns of use. Convenience to one's place of work, a balanced mixture of clinical and nonclinical programs, a noninstitutional retreat-like environment, and membership cost structures were found to be major user considerations with respect to planning and design concepts for wellness health care environments. Directions for further research are discussed.
A universal data access and protocol integration mechanism for smart home
NASA Astrophysics Data System (ADS)
Shao, Pengfei; Yang, Qi; Zhang, Xuan
2013-03-01
With the lack of standardized or completely missing communication interfaces in home electronics, there is no perfect solution that addresses every aspect of a smart home based on existing protocols and technologies. In addition, the central control unit (CCU) of a smart home system working point-to-point between the multiple application interfaces and the underlying hardware interfaces leads to a complicated architecture and poor performance. A flexible data access and protocol integration mechanism is therefore required. The current paper offers a universal, comprehensive data access and protocol integration mechanism for a smart home. The universal mechanism works as a middleware adapter with unified agreements on the communication interfaces and protocols, offering an abstraction of the application level from hardware specifics and decoupling the hardware interface modules from the application level. Further abstraction of the application interfaces and the underlying hardware interfaces is performed in an adaptation layer to provide unified interfaces for more flexible user applications and hardware protocol integration. This new universal mechanism fundamentally changes the architecture of the smart home and better meets the practical requirements of smart homes for flexibility and usability.
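A minimal sketch of the middleware-adapter idea follows; the class names and the ZigBee/X10 back-ends are hypothetical stand-ins (the paper does not name specific protocols) and only illustrate how the CCU can talk to heterogeneous devices through one unified interface:

```python
# Adapter-style abstraction between the application level and device protocols.
from abc import ABC, abstractmethod

class DeviceAdapter(ABC):
    @abstractmethod
    def read(self, point: str): ...
    @abstractmethod
    def write(self, point: str, value): ...

class ZigbeeAdapter(DeviceAdapter):              # assumed protocol back-end
    def read(self, point): return {"temp": 21.5}.get(point)
    def write(self, point, value): print(f"zigbee set {point}={value}")

class X10Adapter(DeviceAdapter):                 # assumed protocol back-end
    def read(self, point): return {"lamp": "off"}.get(point)
    def write(self, point, value): print(f"x10 set {point}={value}")

class CentralControlUnit:
    def __init__(self): self.adapters = {}
    def register(self, name, adapter): self.adapters[name] = adapter
    def get(self, name, point): return self.adapters[name].read(point)
    def set(self, name, point, value): self.adapters[name].write(point, value)

ccu = CentralControlUnit()
ccu.register("bedroom", ZigbeeAdapter())
ccu.register("hall", X10Adapter())
print(ccu.get("bedroom", "temp"))
ccu.set("hall", "lamp", "on")
```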
Developing a comprehensive conceptual architecture to support Earth sciences
NASA Astrophysics Data System (ADS)
Yang, C. P.; Xu, C.; Sun, M.; Li, Z.
2014-12-01
Global challenges require a comprehensive understanding of the Earth system to make smarter decisions about scientific research, operational management, and educational activities. Over the past one and a half years we conducted a comprehensive investigation of how to develop a conceptual architecture for a cyberinfrastructure that can help address such global challenges. This includes three aspects of research and outreach: we first analyzed the conceptual architecture requirements from the Earth science domains and the existing global and national systems from different agencies and organizations to consolidate a list of requirements from scientific, technological, and educational aspects. A conceptual design was then developed by considering these requirements and the latest developments in enterprise architecture, based on our past decade's investigation of cyberinfrastructure architectures supporting different aspects. We also organized several levels of review by experts from different organizations and backgrounds to comment on the completeness, reasonableness, and practicality of the design. A comprehensive conceptual design will be released for public comment this spring to solicit general comments, with the goal of reaching a design that is as comprehensive as possible. The final design is scheduled to be published in 2015 to benefit scientists and CI builders worldwide in the geoscience domain and beyond.
A SOA-Based Solution to Monitor Vaccination Coverage Among HIV-Infected Patients in Liguria.
Giannini, Barbara; Gazzarata, Roberta; Sticchi, Laura; Giacomini, Mauro
2016-01-01
Vaccination in HIV-infected patients constitutes an essential tool in the prevention of the most common infectious diseases. The Ligurian Vaccination in HIV Program is a proposed vaccination schedule specifically dedicated to this risk group. Selective strategies are proposed within this program, employing ICT (Information and Communication Technology) tools to identify this susceptible target group, to monitor immunization coverage over time and to manage failures and defaulting. The proposal is to connect an immunization registry system to an existing regional platform that allows clinical data re-use among several medical structures, to completely manage the vaccination process. This architecture will adopt a Service Oriented Architecture (SOA) approach and standard HSSP (Health Services Specification Program) interfaces to support interoperability. According to the presented solution, vaccination administration information retrieved from the immunization registry will be structured according to the specifications within the immunization section of the HL7 (Health Level 7) CCD (Continuity of Care Document) document. Immunization coverage will be evaluated through the continuous monitoring of serology and antibody titers gathered from the hospital LIS (Laboratory Information System), structured into an HL7 Version 3 (v3) Clinical Document Architecture Release 2 (CDA R2).
39 CFR 501.7 - Postage Evidencing System requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
References: Performance Criteria for Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems; Performance Criteria for Information-Based Indicia and Security Architecture for Closed IBI Postage Evidencing Systems.
39 CFR 501.7 - Postage Evidencing System requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
References: Performance Criteria for Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems; Performance Criteria for Information-Based Indicia and Security Architecture for Closed IBI Postage Evidencing Systems.
Potapov, V; Reichmann, D; Abramovich, R; Filchtinski, D; Zohar, N; Ben Halevy, D; Edelman, M; Sobolev, V; Schreiber, G
2008-12-05
A new method is presented for the redesign of protein-protein interfaces, resulting in specificity of the designed pair while maintaining high affinity. The design is based on modular interface architecture and was carried out on the interaction between TEM1 beta-lactamase and its inhibitor protein, beta-lactamase inhibitor protein. The interface between these two proteins is composed of several mostly independent modules. We previously showed that it is possible to delete a complete module without affecting the overall structure of the interface. Here, we replace a complete module with structure fragments taken from nonrelated proteins. Nature-optimized fragments were chosen from 10^7 starting templates found in the Protein Data Bank. A procedure was then developed to identify sets of interacting template residues with a backbone arrangement mimicking the original module. This generated a final list of 361 putative replacement modules that were ranked using a novel scoring function based on grouped atom-atom contact surface areas. The top-ranked designed complex exhibited an affinity of at least the wild-type level and a mode of binding that was remarkably specific despite the absence of negative design in the procedure. In retrospect, the combined application of three factors led to the success of the design approach: utilizing the modular construction of the interface, capitalizing on native rather than artificial templates, and ranking with an accurate atom-atom contact surface scoring function.
Building validation tools for knowledge-based systems
NASA Technical Reports Server (NTRS)
Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.
1987-01-01
The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (a higher-order language) is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.
Parallel Logic Programming and Parallel Systems Software and Hardware
1989-07-29
Conference, Dallas TX, January 1985. [Rous75] Roussel, P., "PROLOG: Manuel de Reference et d'Utilisation", Groupe d'Intelligence Artificielle, Universite d... Tools were provided for software development using artificial intelligence techniques. AI software for massively parallel architectures was started.
Jupiter Europa Orbiter Architecture Definition Process
NASA Technical Reports Server (NTRS)
Rasmussen, Robert; Shishko, Robert
2011-01-01
The proposed Jupiter Europa Orbiter mission, planned for launch in 2020, is using a new architectural process and framework tool to drive its model-based systems engineering effort. The process focuses on getting the architecture right before writing requirements and developing a point design. A new architecture framework tool provides for the structured entry and retrieval of architecture artifacts based on an emerging architecture meta-model. This paper describes the relationships among these artifacts and how they are used in the systems engineering effort. Some early lessons learned are discussed.
NASA Astrophysics Data System (ADS)
Yen, Y.-N.; Wu, Y.-W.; Weng, K.-H.
2013-07-01
E-learning assisted teaching and learning is the trend of the 21st century and has many advantages - freedom from the constraints of time and space, and hypertext- and multimedia-rich resources - enhancing the interaction between students and the teaching materials. The purpose of this study is to explore how rich Internet resources assisted students with the Western Architectural History course. First, we explored the Internet resources which could assist teaching and learning activities. Second, according to the course objectives, we built a web-based platform which integrated the Google spreadsheets form, the SIMILE widget, Wikipedia and Google Maps, and applied it to the course of Western Architectural History. Finally, action research was applied to understanding the effectiveness of this teaching/learning mode. Participants were the students of the Department of Architecture in the Private University of Technology in northern Taiwan. Results showed that students were willing to use the web-based platform to assist their learning. They found this platform to be useful in understanding the relationship between buildings of different periods. Through the map view mode, this platform also helped students expand their international perspective. However, we found that the information shared by students via the Internet was not completely correct. One possible reason was that students could easily acquire information on the Internet but could not determine its correctness. To conclude, this study found some useful and rich resources that could be well integrated, from which we built a web-based platform to collect information and present it in diverse modes to stimulate students' learning motivation. We recommend that future studies consider hiring teaching assistants in order to ease the burden on teachers and to assist in maintaining information quality.
Development of an FPGA-based multipoint laser pyroshock measurement system for explosive bolts
NASA Astrophysics Data System (ADS)
Abbas, Syed Haider; Jang, Jae-Kyeong; Lee, Jung-Ryul; Kim, Zaeill
2016-07-01
Pyroshock can cause an aerospace structure to fail in its objective by damaging its sensitive electronic equipment, which is responsible for performing decisive operations. A pyroshock is the high-intensity shock wave that is generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology to measure a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs). These LDVs detect the pyroshock wave generated due to an explosive-based pyrotechnical event. Field programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement. The firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated using explosive excitation on a 22-mm-thick steel plate. Three LDVs were deployed to capture the pyroshock wave at different points. The pyroshocks captured were displayed as acceleration plots. The results showed that our system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303,741 g. The contribution of this paper is a specialized firmware architecture, programmed in an FPGA, for the acquisition of a large amount of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests.
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 10) Background: Titan Model-based Executive; 11) Model-based Execution Architecture; 12) Compatibility Analysis of MDS and Titan Architectures; 13) Integrating Model-based Programming and Execution into the Architecture; 14) State Analysis and Modeling; 15) IMU Subsystem State Effects Diagram; 16) Titan Subsystem Model: IMU Health; 17) Integrating Model-based Programming and Execution into the Software IMU; 18) Testing Program; 19) Computationally Tractable State Estimation & Fault Diagnosis; 20) Diagnostic Algorithm Performance; 21) Integration and Test Issues; 22) Demonstrated Benefits; and 23) Next Steps
QCAPUF: QCA-based physically unclonable function as a hardware security primitive
NASA Astrophysics Data System (ADS)
Abutaleb, M. M.
2018-04-01
Physically unclonable functions (PUFs) are increasingly used as innovative security primitives to provide hardware authentication and identification as well as secret key generation based on unique and random variations in identically fabricated devices. Security and low power have become two crucial necessities of modern designs. As an emerging nanoelectronic technology, quantum-dot cellular automata (QCA) can achieve ultra-low power consumption as well as an extremely small area for implementing digital designs. However, there are various classes of permanent defects that can occur during the manufacture of QCA devices, and recent extensive research has focused on how to eliminate errors in QCA structures resulting from fabrication variances. Taking a completely different view, and turning this disadvantage into an advantage, this paper presents a novel QCA-based PUF (QCAPUF) architecture that exploits the unique physical characteristics of fabricated QCA cells in order to produce different hardware fingerprint instances. This architecture is composed of proposed logic and interconnect blocks that have critical vulnerabilities and perform unexpected logical operations. The behaviour of QCAPUF is thoroughly analysed through physical relations and simulations. Results confirm that the proposed QCAPUF has state-of-the-art PUF characteristics in the QCA technology. This paper will serve as a basis for further research into QCA-based hardware security primitives and applications.
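Independently of the QCA implementation details, PUF instances are usually characterized by the Hamming distance between their responses. The sketch below is a generic software simulation (not QCAPUF itself, and the noise model is an assumption) illustrating the intra-chip versus inter-chip comparison:

```python
# Standard PUF quality check: intra-chip distance should be small (reliability),
# inter-chip distance should be near 0.5 (uniqueness).
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b)) / len(a)

def simulated_puf(seed, n_bits=128, noise=0.02):
    rng = random.Random(seed)
    ideal = [rng.randint(0, 1) for _ in range(n_bits)]
    # Each read-out flips every bit with a small probability (measurement noise).
    return lambda: [b ^ (random.random() < noise) for b in ideal]

chip_a, chip_b = simulated_puf(1), simulated_puf(2)
print("intra-chip:", hamming(chip_a(), chip_a()))   # expected to be close to 0
print("inter-chip:", hamming(chip_a(), chip_b()))   # expected to be close to 0.5
```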
Study on the contract characteristics of Internet architecture
NASA Astrophysics Data System (ADS)
Fu, Chuan; Zhang, Guoqing; Yang, Jing; Liu, Xiaona
2011-11-01
The importance of Internet architecture goes beyond its technical aspects. The architecture of the Internet has a profound influence on the Internet-based economy in terms of how profits are shared by different market participants (Internet Service Providers, Internet Content Providers), since it is the physical foundation upon which profit-sharing contracts are derived. In order to facilitate the continuing growth of the Internet, it is necessary to systematically study the factors that curtail the Internet-based economy, including the existing Internet architecture. In this paper, we use transaction cost economics and contract economics as new tools to analyse the contracts derived from the current Internet architecture. This study sheds light on how the macro characteristics of the Internet architecture affect the microeconomic decisions of market participants. Based on the existing Internet architecture, we discuss the possibility of promoting the Internet-based economy by encouraging users to connect their private stub networks to the Internet and giving users more rights of self-governance.
SiC: An Agent Based Architecture for Preventing and Detecting Attacks to Ubiquitous Databases
NASA Astrophysics Data System (ADS)
Pinzón, Cristian; de Paz, Yanira; Bajo, Javier; Abraham, Ajith; Corchado, Juan M.
One of the main attacks on ubiquitous databases is the structured query language (SQL) injection attack, which causes severe damage both in the commercial aspect and in users' confidence. This chapter proposes the SiC architecture as a solution to the SQL injection attack problem. This is a hierarchical distributed multiagent architecture, which involves an entirely new approach with respect to existing architectures for the prevention and detection of SQL injections. SiC incorporates a kind of intelligent agent, which integrates a case-based reasoning system. This agent, which is the core of the architecture, allows the application of detection techniques based on anomalies as well as those based on patterns, providing a great degree of autonomy, flexibility, robustness and dynamic scalability. The characteristics of the multiagent system allow the architecture to detect attacks from different types of devices, regardless of their physical location. The architecture has been tested on a medical database, guaranteeing safe access from various devices such as PDAs and notebook computers.
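A toy illustration of combining pattern-based and anomaly-based checks follows; the rules and the token-count feature are invented here and are far simpler than SiC's case-based reasoning, but they show how the two detection styles complement each other:

```python
# Pattern-based check (known injection signatures) plus a crude anomaly check
# (query shape deviating from the usual one for that application query).
import re

SUSPICIOUS_PATTERNS = [r"(?i)\bunion\b.+\bselect\b", r"--", r"(?i)\bor\b\s+1\s*=\s*1"]

def pattern_check(query: str) -> bool:
    return any(re.search(p, query) for p in SUSPICIOUS_PATTERNS)

def anomaly_check(query: str, expected_tokens: int = 8, tolerance: int = 4) -> bool:
    # Stand-in for a learned profile of what this query normally looks like.
    return abs(len(query.split()) - expected_tokens) > tolerance

q = "SELECT name FROM patients WHERE id = '1' OR 1=1 --'"
print(pattern_check(q) or anomaly_check(q))   # True -> raise an alert
```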
Content addressable memory project
NASA Technical Reports Server (NTRS)
Hall, Josh; Levy, Saul; Smith, D.; Wei, S.; Miyake, K.; Murdocca, M.
1991-01-01
The progress on the Rutgers CAM (Content Addressable Memory) Project is described. The overall design of the system has been completed at the architectural level and is described. The machine is composed of two kinds of cells: (1) the CAM cells, which include both memory and processor and support local processing within each cell; and (2) the tree cells, which have a smaller instruction set and provide global processing over the CAM cells. A parameterized design of the basic CAM cell has been completed. Progress was made on the final specification of the CPS. The machine architecture was driven by the design of algorithms whose requirements are reflected in the resulting instruction set(s). A few of these algorithms are described.
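As a conceptual aid only (a software analogy, not the Rutgers hardware design), a content-addressable lookup returns the addresses whose stored words match a search key, optionally under a mask:

```python
# Software model of a masked content-addressable lookup: every stored word is
# compared against the key (conceptually in parallel) and matches are returned.
class SoftwareCAM:
    def __init__(self, width):
        self.width = width
        self.cells = {}                        # address -> stored word

    def write(self, address, word):
        self.cells[address] = word & ((1 << self.width) - 1)

    def search(self, key, mask=None):
        mask = mask if mask is not None else (1 << self.width) - 1
        return [a for a, w in self.cells.items() if (w & mask) == (key & mask)]

cam = SoftwareCAM(width=8)
cam.write(0, 0b1010_0001)
cam.write(1, 0b1010_1111)
cam.write(2, 0b0001_0001)
print(cam.search(0b1010_0000, mask=0b1111_0000))   # masked match -> [0, 1]
```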
Marshall Application Realignment System (MARS) Architecture
NASA Technical Reports Server (NTRS)
Belshe, Andrea; Sutton, Mandy
2010-01-01
The Marshall Application Realignment System (MARS) Architecture project was established to meet the certification requirements of the Department of Defense Architecture Framework (DoDAF) V2.0 Federal Enterprise Architecture Certification (FEAC) Institute program and to provide added value to the Marshall Space Flight Center (MSFC) Application Portfolio Management process. The MARS Architecture aims to: (1) address the NASA MSFC Chief Information Officer (CIO) strategic initiative to improve Application Portfolio Management (APM) by optimizing investments and improving portfolio performance, and (2) develop a decision-aiding capability by which applications registered within the MSFC application portfolio can be analyzed and considered for retirement or decommission. The MARS Architecture describes a to-be target capability that supports application portfolio analysis against scoring measures (based on value) and overall portfolio performance objectives (based on enterprise needs and policies). This scoring and decision-aiding capability supports the process by which MSFC application investments are realigned or retired from the application portfolio. The MARS Architecture is a multi-phase effort to: (1) conduct strategic architecture planning and knowledge development based on the DoDAF V2.0 six-step methodology, (2) describe one architecture through multiple viewpoints, (3) conduct portfolio analyses based on a defined operational concept, and (4) enable a new capability to support the MSFC enterprise IT management mission, vision, and goals. This report documents Phase 1 (Strategy and Design), which includes discovery, planning, and development of initial architecture viewpoints. Phase 2 will move forward the process of building the architecture, widening the scope to include application realignment (in addition to application retirement), and validating the underlying architecture logic before moving into Phase 3. The MARS Architecture key stakeholders are most interested in Phase 3 because this is where the data analysis, scoring, and recommendation capability is realized. Stakeholders want to see the benefits derived from reducing the steady-state application base and identify opportunities for portfolio performance improvement and application realignment.
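A hedged sketch of the kind of weighted value-scoring step Phase 3 is intended to enable follows; the measures, weights, and retirement threshold below are invented for illustration and are not taken from the MARS Architecture itself:

```python
# Illustrative portfolio scoring: each application is rated 0..1 on a few
# measures, combined with weights, and compared against a retention threshold.
WEIGHTS = {"business_value": 0.4, "technical_health": 0.3, "usage": 0.2, "cost_fit": 0.1}

def score(app):
    return sum(WEIGHTS[k] * app[k] for k in WEIGHTS)

portfolio = {
    "TimecardTool":  {"business_value": 0.9, "technical_health": 0.6, "usage": 0.8, "cost_fit": 0.7},
    "LegacyReports": {"business_value": 0.3, "technical_health": 0.2, "usage": 0.1, "cost_fit": 0.4},
}
for name, app in portfolio.items():
    s = score(app)
    print(name, round(s, 2), "retain" if s >= 0.5 else "candidate for retirement")
```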
A research update for southeast poultry research laboratory
USDA-ARS's Scientific Manuscript database
The Southeast Poultry Research Laboratory continues with their modernization plan. The 35% architectural drawings have been completed and the project is currently out for bid for the completion of the design and building of the new facility. Research activities in the Exotic and Emerging Avian Vir...
Eigensolution of finite element problems in a completely connected parallel architecture
NASA Technical Reports Server (NTRS)
Akl, Fred A.; Morel, Michael R.
1989-01-01
A parallel algorithm for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi)=(M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q is presented. The parallel algorithm is based on a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm has been successfully implemented on a tightly coupled multiple-instruction-multiple-data (MIMD) parallel processing computer, Cray X-MP. A finite element model is divided into m domains each of which is assumed to process n elements. Each domain is then assigned to a processor, or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macro-tasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. For a 64-element rectangular plate, speed-ups of 1.86, 3.13, 3.18 and 3.61 are achieved on two, four, six and eight processors, respectively.
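The serial numerical core of such a generalized eigensolver can be sketched as subspace iteration; the snippet below is a plain NumPy/SciPy illustration (not the Cray X-MP multi-domain implementation) that recovers the eigenpairs of a small test problem:

```python
# Serial sketch of subspace iteration for K*phi = lambda*M*phi.
import numpy as np
from scipy.linalg import eigh, lu_factor, lu_solve

def subspace_iteration(K, M, q, iters=50, seed=0):
    n = K.shape[0]
    X = np.random.default_rng(seed).standard_normal((n, q))
    lu = lu_factor(K)                            # factor K once
    for _ in range(iters):
        X = lu_solve(lu, M @ X)                  # inverse-iteration step
        Kr, Mr = X.T @ K @ X, X.T @ M @ X        # project onto the subspace
        w, Q = eigh(Kr, Mr)                      # reduced eigenproblem
        X = X @ Q                                # Ritz rotation
    return w, X

# Two-DOF spring-mass sanity check; eigenvalues are (3 +/- sqrt(5)) / 2.
K = np.array([[2.0, -1.0], [-1.0, 1.0]])
M = np.eye(2)
print(subspace_iteration(K, M, q=2)[0])
```

In the paper's setting this kernel is distributed across domains, with each processor handling the elements of its own domain and exchanging front data with the others.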
Parallel eigenanalysis of finite element models in a completely connected architecture
NASA Technical Reports Server (NTRS)
Akl, F. A.; Morel, M. R.
1989-01-01
A parallel algorithm is presented for the solution of the generalized eigenproblem in linear elastic finite element analysis, (K)(phi) = (M)(phi)(omega), where (K) and (M) are of order N, and (omega) is of order q. The concurrent solution of the eigenproblem is based on the multifrontal/modified subspace method and is achieved in a completely connected parallel architecture in which each processor is allowed to communicate with all other processors. The algorithm was successfully implemented on a tightly coupled multiple-instruction multiple-data parallel processing machine, Cray X-MP. A finite element model is divided into m domains, each of which is assumed to process n elements. Each domain is then assigned to a processor or to a logical processor (task) if the number of domains exceeds the number of physical processors. The macrotasking library routines are used in mapping each domain to a user task. Computational speed-up and efficiency are used to determine the effectiveness of the algorithm. The effect of the number of domains, the number of degrees-of-freedom located along the global fronts and the dimension of the subspace on the performance of the algorithm are investigated. A parallel finite element dynamic analysis program, p-feda, is documented and the performance of its subroutines in a parallel environment is analyzed.
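For readers who want to see the generalized eigenproblem itself, here is a minimal serial sketch using SciPy's dense solver on toy matrices; it illustrates only the problem being solved, not the multifrontal/modified subspace method or the Cray X-MP implementation described above.

```python
import numpy as np
from scipy.linalg import eigh

# Toy symmetric stiffness (K) and mass (M) matrices of order N = 4;
# in the paper these come from finite element assembly over m domains.
K = np.array([[ 4., -1.,  0.,  0.],
              [-1.,  4., -1.,  0.],
              [ 0., -1.,  4., -1.],
              [ 0.,  0., -1.,  4.]])
M = np.eye(4)

# eigh solves the generalized symmetric eigenproblem K(phi) = M(phi)(omega).
omega, phi = eigh(K, M)
print("eigenvalues :", np.round(omega, 3))
print("first mode  :", np.round(phi[:, 0], 3))
```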
G.A.M.E.: GPU-accelerated mixture elucidator.
Schurz, Alioune; Su, Bo-Han; Tu, Yi-Shu; Lu, Tony Tsung-Yu; Lin, Olivia A; Tseng, Yufeng J
2017-09-15
GPU acceleration is useful in solving complex chemical information problems. Identifying unknown structures from the mass spectra of natural product mixtures has been a desirable yet unresolved issue in metabolomics. However, this elucidation process has been hampered by complex experimental data and the inability of instruments to completely separate different compounds. Fortunately, with current high-resolution mass spectrometry, one feasible strategy is to define this problem as extending a scaffold database with sidechains of different probabilities to match the high-resolution mass obtained from a high-resolution mass spectrum. By introducing a dynamic programming (DP) algorithm, it is possible to solve this NP-complete problem in pseudo-polynomial time. However, the running time of the DP algorithm grows by orders of magnitude as the number of mass decimal digits increases, thus limiting the boost in structural prediction capabilities. By harnessing the heavily parallel architecture of modern GPUs, we designed a "compute unified device architecture" (CUDA)-based GPU-accelerated mixture elucidator (G.A.M.E.) that considerably improves the performance of the DP, allowing up to five decimal digits for input mass data. As exemplified by four testing datasets with verified constitutions from natural products, G.A.M.E. allows for efficient and automatic structural elucidation of unknown mixtures for practical procedures.
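The pseudo-polynomial behaviour described above can be illustrated with a toy dynamic program over integerized masses; the side-chain masses, target mass, and precision below are hypothetical, and the sketch ignores everything G.A.M.E. does on the GPU. It only shows why each extra decimal digit multiplies the DP table size by ten.

```python
# Toy pseudo-polynomial DP: enumerate combinations of (hypothetical)
# side-chain masses that sum to a target residual mass at a chosen precision.
def mass_combinations(sidechain_masses, target_mass, digits=2):
    scale = 10 ** digits                      # one more digit -> 10x larger table
    target = round(target_mass * scale)
    masses = [round(m * scale) for m in sidechain_masses]

    # dp[t] holds the index multisets whose integerized masses sum to t
    dp = [[] for _ in range(target + 1)]
    dp[0].append([])
    for i, m in enumerate(masses):
        for t in range(m, target + 1):        # a side chain may repeat
            for combo in dp[t - m]:
                dp[t].append(combo + [i])
    return dp[target]

# Hypothetical side-chain masses (Da) and residual mass to explain
print(mass_combinations([15.023, 31.018, 57.021], 103.06, digits=2))
```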
Hutchison, Kimberly N; Song, Yanna; Wang, Lily; Malow, Beth A
2008-04-15
Polysomnography is associated with changes in sleep architecture called the first-night effect. This effect is believed to result from sleeping in an unusual environment and the technical equipment used to study sleep. Sleep experts hope to decrease this variable by providing a more familiar, comfortable atmosphere for sleep testing through hotel-based sleep centers. In this study, we compared the sleep parameters of patients studied in our hotel-based and hospital-based sleep laboratories. We retrospectively reviewed polysomnograms completed in our hotel-based and hospital-based sleep laboratories from August 2003 to July 2005. All patients were undergoing evaluation for obstructive sleep apnea. Hospital-based patients were matched for age and apnea-hypopnea index with hotel-based patients. We compared the sleep architecture changes associated with the first-night effect in the two groups. The associated conditions and symptoms listed on the polysomnography referral forms are also compared. No significant differences were detected between the two groups in sleep onset latency, sleep efficiency, REM sleep latency, total amount of slow wave sleep (NREM stages 3 and 4), arousal index, and total stage 1 sleep. This pilot study failed to show a difference in sleep parameters associated with the first-night effect in patients undergoing sleep studies in our hotel and hospital-based sleep laboratories. Future studies need to compare the first-night effect in different sleep disorders, preferably in multi-night recordings.
Dynamic malware analysis using IntroVirt: a modified hypervisor-based system
NASA Astrophysics Data System (ADS)
White, Joshua S.; Pape, Stephen R.; Meily, Adam T.; Gloo, Richard M.
2013-05-01
In this paper, we present a system for Dynamic Malware Analysis which incorporates the use of IntroVirt™. IntroVirt is an introspective hypervisor architecture and infrastructure that supports advanced analysis techniques for stealth-malware analysis. This system allows for complete guest monitoring and interaction, including the manipulation and blocking of system calls. IntroVirt is capable of bypassing virtual machine detection capabilities of even the most sophisticated malware, by spoofing returns to system call responses. Additional fuzzing capabilities can be employed to detect both malware vulnerabilities and polymorphism.
Fiber optic control system integration
NASA Technical Reports Server (NTRS)
Poppel, G. L.; Glasheen, W. M.; Russell, J. C.
1987-01-01
A total fiber optic, integrated propulsion/flight control system concept for advanced fighter aircraft is presented. Fiber optic technology pertaining to this system is identified and evaluated for application readiness. A fiber optic sensor vendor survey was completed, and the results are reported. The advantages of centralized/direct architecture are reviewed, and the concept of the protocol branch is explained. Preliminary protocol branch selections are made based on the F-18/F404 application. Concepts for new optical tools are described. Development plans for the optical technology and the described system are included.
Developing a Complete and Effective ACT-R Architecture
2008-01-01
of computational primitives, as contrasted with the predominant "one-off" and "grab-bag" cognitive models in the field. These architectures have...transport/semaphore protocols connected via a glue script. Both protocols rely on the fact that file rename and file remove operations are atomic...the Trial Log file until just prior to processing the next input request. Thus, to perform synchronous identifications it is necessary to run an
NASA Technical Reports Server (NTRS)
Adams, Robert B.; LaPointe, Michael; Wilks, Rod; Allen, Brian
2009-01-01
This poster reviews the planning and design for an integrated architecture for characterization, mitigation, scientific evaluation and resource utilization of near-Earth objects. This includes tracks to observe and characterize the nature of the threat posed by a NEO, and to deflect it if a significant threat is posed. The observation stack can also be used for a more complete scientific analysis of the NEO.
A Research Update for Southeast Poultry Research Laboratory
USDA-ARS?s Scientific Manuscript database
The Southeast Poultry Research Laboratory continues with their modernization plan. The 35% architectural drawings have been completed and the project is currently out for bid for the completion of the design and building of the new facility. Research activities include responding to the H7N9 highl...
Formal Foundations for the Specification of Software Architecture.
1995-03-01
Architectures Formally: A Case-Study Using KWIC." Kestrel Institute, Palo Alto, CA 94304, April 1994. 58. Kang, Kyo C. Feature-Oriented Domain Analysis (FODA)...6.3.5 Constraint-Based Architectures...6.4 Summary...VII. Analysis of Process-Based...between these architecture theories were investigated. A feasibility analysis on an image processing application demonstrated that architecture theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-11
GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.
A Design Architecture for an Integrated Training System Decision Support System
1990-07-01
Sensory modes include visual, auditory, tactile, or kinesthetic; performance categories include time to complete, speed of response, or correct action...procedures, and finally application and examples from the aviation proponency with emphasis on the LHX program. Appendix B is a complete bibliography...integrated analysis of ITS development. The approach was designed to provide an accurate and complete representation of the ITS development process and
FPGA wavelet processor design using language for instruction-set architectures (LISA)
NASA Astrophysics Data System (ADS)
Meyer-Bäse, Uwe; Vera, Alonzo; Rao, Suhasini; Lenk, Karl; Pattichis, Marios
2007-04-01
The design of a microprocessor is a long, tedious, and error-prone task consisting of typically three design phases: architecture exploration, software design (assembler, linker, loader, profiler), architecture implementation (RTL generation for FPGA or cell-based ASIC) and verification. The Language for Instruction-Set Architectures (LISA) makes it possible to model a microprocessor not only from the instruction set but also from the architecture description, including pipelining behavior, which provides design and development tool consistency across all levels of the design. To explore the capability of the LISA processor design platform, a.k.a. CoWare Processor Designer, we present in this paper three microprocessor designs that implement an 8/8 wavelet transform processor that is typically used in today's FBI fingerprint compression scheme. We have designed a 3-stage pipelined 16-bit RISC processor (NanoBlaze). Although RISC μPs are usually considered "fast" processors due to design concepts like constant instruction word size, deep pipelines and many general-purpose registers, it turns out that DSP operations consume substantial processing time in a RISC processor. In a second step we have used design principles from programmable digital signal processors (PDSPs) to improve the throughput of the DWT processor. A multiply-accumulate operation along with indirect addressing operations was the key to achieving higher throughput. A further improvement is possible with today's FPGA technology. Today's FPGAs offer a large number of embedded array multipliers and it is now feasible to design a "true" vector processor (TVP). A multiplication of two vectors can be done in just one clock cycle with our TVP, a complete scalar product in two clock cycles. Code profiling and Xilinx FPGA ISE synthesis results are provided that demonstrate the substantial improvement that a TVP has compared with traditional RISC or PDSP designs.
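The throughput argument above comes down to how many cycles a dot product costs. The sketch below spells out the multiply-accumulate (MAC) pattern in plain Python; the 3-tap filter is an illustrative placeholder, not the fingerprint-compression filter bank, and the point is that a PDSP retires one MAC per cycle while the "true" vector processor collapses the whole inner loop into one or two cycles.

```python
# Multiply-accumulate (MAC) view of one low-pass filtering step of a
# wavelet transform; coefficients are illustrative placeholders only.
def fir_mac(samples, coeffs):
    out = []
    for n in range(len(coeffs) - 1, len(samples)):
        acc = 0                          # the accumulator a PDSP updates per MAC
        for k, c in enumerate(coeffs):
            acc += c * samples[n - k]    # one multiply-accumulate per tap
        out.append(acc)
    return out

signal = [1, 2, 3, 4, 5, 6, 7, 8]
lowpass = [0.25, 0.5, 0.25]              # illustrative 3-tap smoother
print(fir_mac(signal, lowpass))
```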
NASA Astrophysics Data System (ADS)
Magnoni, L.; Suthakar, U.; Cordeiro, C.; Georgiou, M.; Andreeva, J.; Khan, A.; Smith, D. R.
2015-12-01
Monitoring the WLCG infrastructure requires the gathering and analysis of a high volume of heterogeneous data (e.g. data transfers, job monitoring, site tests) coming from different services and experiment-specific frameworks to provide a uniform and flexible interface for scientists and sites. The current architecture, where relational database systems are used to store, to process and to serve monitoring data, has limitations in coping with the foreseen increase in the volume (e.g. higher LHC luminosity) and the variety (e.g. new data-transfer protocols and new resource types, such as cloud computing) of WLCG monitoring events. This paper presents a new scalable data store and analytics platform designed by the Support for Distributed Computing (SDC) group, at the CERN IT department, which uses a variety of technologies each one targeting specific aspects of big-scale distributed data-processing (commonly referred to as the lambda-architecture approach). Results of data processing on Hadoop for WLCG data activities monitoring are presented, showing how the new architecture can easily analyze hundreds of millions of transfer logs in a few minutes. Moreover, a comparison of data partitioning, compression and file format (e.g. CSV, Avro) is presented, with particular attention given to how the file structure impacts the overall MapReduce performance. In conclusion, the evolution of the current implementation, which focuses on data storage and batch processing, towards a complete lambda-architecture is discussed, with consideration of candidate technology for the serving layer (e.g. Elasticsearch) and a description of a proof of concept implementation, based on Apache Spark and Esper, for the real-time part which compensates for batch-processing latency and automates problem detection and failures.
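As a feel for the batch layer of such a lambda architecture, the following sketch aggregates transfer logs with Apache Spark; the file name, column names, and aggregation are hypothetical stand-ins for WLCG monitoring data, not the SDC group's actual jobs.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transfer-summary").getOrCreate()

# Hypothetical schema: one row per transfer log record.
transfers = spark.read.csv("transfers.csv", header=True, inferSchema=True)

# Batch-layer style aggregation: bytes moved per source site per hour.
summary = (transfers
           .withColumn("hour", F.date_trunc("hour", F.to_timestamp("timestamp")))
           .groupBy("src_site", "hour")
           .agg(F.sum("bytes").alias("bytes_moved"),
                F.count("*").alias("n_transfers")))

summary.write.mode("overwrite").parquet("transfer_summary.parquet")
```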
Lessons Learned from Engineering a Multi-Mission Satellite Operations Center
NASA Technical Reports Server (NTRS)
Madden, Maureen; Cary, Everett, Jr.; Esposito, Timothy; Parker, Jeffrey; Bradley, David
2006-01-01
NASA's Small Explorers (SMEX) satellites have surpassed their designed science-lifetimes and their flight operations teams are now facing the challenge of continuing operations with reduced funding. At present, these missions are being re-engineered into a fleet-oriented ground system at Goddard Space Flight Center (GSFC). When completed, this ground system will provide command and control of four SMEX missions and will demonstrate fleet automation and control concepts. As a path-finder for future mission consolidation efforts, this ground system will also demonstrate new ground-based technologies that show promise of supporting longer mission lifecycles and simplifying component integration. One of the core technologies being demonstrated in the SMEX Mission Operations Center is the GSFC Mission Services Evolution Center (GMSEC) architecture. The GMSEC architecture uses commercial Message Oriented Middleware with a common messaging standard to realize a higher level of component interoperability, allowing for interchangeable components in ground systems. Moreover, automation technologies utilizing the GMSEC architecture are being evaluated and implemented to provide extended lights-out operations. This mode of operation will provide routine monitoring and control of the heterogeneous spacecraft fleet. The operational concepts being developed will reduce the need for staffed contacts and are seen as a necessity for fleet management. This paper will describe the experiences of the integration team throughout the re-engineering effort of the SMEX ground system. Additionally, lessons learned will be presented based on the team's experiences with integrating multiple missions into a fleet-automated ground system.
Hardware Architecture Study for NASA's Space Software Defined Radios
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Scardelletti, Maximilian C.; Mortensen, Dale J.; Kacpura, Thomas J.; Andro, Monty; Smith, Carl; Liebetreu, John
2008-01-01
This study defines a hardware architecture approach for software defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general purpose processors, digital signal processors, field programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs) in addition to flexible and tunable radio frequency (RF) front-ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that comprise a typical communication radio. This paper describes the architecture details, module definitions, and the typical functions on each module as well as the module interfaces. Trade-offs between component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify the internal physical implementation within each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.
NASA Technical Reports Server (NTRS)
Reinhart, Richard C.; Kacpura, Thomas J.; Smith, Carl R.; Liebetreu, John; Hill, Gary; Mortensen, Dale J.; Andro, Monty; Scardelletti, Maximilian C.; Farrington, Allen
2008-01-01
This report defines a hardware architecture approach for software-defined radios to enable commonality among NASA space missions. The architecture accommodates a range of reconfigurable processing technologies including general-purpose processors, digital signal processors, field programmable gate arrays, and application-specific integrated circuits (ASICs) in addition to flexible and tunable radiofrequency front ends to satisfy varying mission requirements. The hardware architecture consists of modules, radio functions, and interfaces. The modules are a logical division of common radio functions that compose a typical communication radio. This report describes the architecture details, the module definitions, the typical functions on each module, and the module interfaces. Tradeoffs between component-based, custom architecture and a functional-based, open architecture are described. The architecture does not specify a physical implementation internally on each module, nor does the architecture mandate the standards or ratings of the hardware used to construct the radios.
A reference architecture for integrated EHR in Colombia.
de la Cruz, Edgar; Lopez, Diego M; Uribe, Gustavo; Gonzalez, Carolina; Blobel, Bernd
2011-01-01
The implementation of national EHR infrastructures has to start with a detailed definition of the overall structure and behavior of the EHR system (system architecture). Architectures have to be open, scalable, flexible, user accepted and user friendly, trustworthy, and based on standards including terminologies and ontologies. The GCM provides an architectural framework created with the purpose of analyzing any kind of system, including EHR system architectures. The objective of this paper is to propose a reference architecture for the implementation of an integrated EHR in Colombia, based on the current state of system architectural models and EHR standards. The proposed EHR architecture defines a set of services (elements) and their interfaces, to support the exchange of clinical documents, offering an open, scalable, flexible and semantically interoperable infrastructure. The architecture was tested in a pilot tele-consultation project in Colombia, where dental EHRs are exchanged.
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how best to integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture where functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
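A weighted-sum score is the simplest form the MCDM step above can take. The sketch below compares the two candidate architectures on mission performance, risk, and cost; the weights and normalized scores are illustrative placeholders, not values from the case study.

```python
# Minimal weighted-sum MCDM sketch (illustrative weights and scores only).
criteria_weights = {"performance": 0.5, "risk": 0.3, "cost": 0.2}

# Scores normalized to [0, 1], higher is better (risk and cost inverted).
alternatives = {
    "distributed (multi-UAV)": {"performance": 0.8, "risk": 0.7, "cost": 0.5},
    "monolithic (single UAV)": {"performance": 0.6, "risk": 0.4, "cost": 0.8},
}

for name, scores in alternatives.items():
    total = sum(criteria_weights[c] * scores[c] for c in criteria_weights)
    print(f"{name}: weighted score = {total:.2f}")
```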
NASA Astrophysics Data System (ADS)
Bauerdick, L. A. T.; Bloom, K.; Bockelman, B.; Bradley, D. C.; Dasu, S.; Dost, J. M.; Sfiligoi, I.; Tadel, A.; Tadel, M.; Wuerthwein, F.; Yagil, A.; Cms Collaboration
2014-06-01
Following the success of the XRootd-based US CMS data federation, the AAA project investigated extensions of the federation architecture by developing two sample implementations of an XRootd, disk-based, caching proxy. The first one simply starts fetching a whole file as soon as a file open request is received and is suitable when completely random file access is expected or it is already known that a whole file will be read. The second implementation supports on-demand downloading of partial files. Extensions to the Hadoop Distributed File System have been developed to allow for an immediate fallback to network access when local HDFS storage fails to provide the requested block. Both cache implementations are in pre-production testing at UCSD.
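The second, on-demand strategy can be pictured with a toy block-level cache: serve a block locally if it has already been fetched, otherwise pull it from the federation and keep a copy. This is an illustration of the idea only (block size, the in-memory cache, and the remote_fetch callable are assumptions), not the XRootd or HDFS code.

```python
class BlockCachingProxy:
    """Toy on-demand, partial-file cache: fetch missing blocks as requested."""

    def __init__(self, remote_fetch, block_size=4 * 1024 * 1024):
        self.remote_fetch = remote_fetch   # callable(path, offset, size) -> bytes
        self.block_size = block_size
        self.cache = {}                    # (path, block_index) -> bytes

    def read(self, path, offset, size):
        data = bytearray()
        end = offset + size
        while offset < end:
            index = offset // self.block_size
            block_start = index * self.block_size
            block = self.cache.get((path, index))
            if block is None:
                # Cache miss: go to the federation for this block and keep it,
                # analogous to the network fallback described in the abstract.
                block = self.remote_fetch(path, block_start, self.block_size)
                self.cache[(path, index)] = block
            lo = offset - block_start
            hi = min(end - block_start, self.block_size)
            data += block[lo:hi]
            offset = block_start + hi
        return bytes(data)

# Stand-in for a remote read; a real deployment would issue an XRootd request.
fake_remote = lambda path, off, n: bytes(n)
proxy = BlockCachingProxy(fake_remote)
print(len(proxy.read("/store/file.root", offset=5 * 1024 * 1024, size=1024)))
```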
NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations
NASA Astrophysics Data System (ADS)
Frisbie, T. E.; Hall, C. M.
2006-12-01
Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.
Image Understanding Architecture
1991-09-01
architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize...[report keywords: Image Understanding Architecture, Knowledge-Based Vision, AI Real-Time Computer Vision, Software Simulator, Parallel Processor]...information. In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers
Enterprise application architecture development based on DoDAF and TOGAF
NASA Astrophysics Data System (ADS)
Tao, Zhi-Gang; Luo, Yun-Feng; Chen, Chang-Xin; Wang, Ming-Zhe; Ni, Feng
2017-05-01
For the purpose of supporting the design and analysis of enterprise application architecture, here, we report a tailored enterprise application architecture description framework and its corresponding design method. The presented framework can effectively support service-oriented architecting and cloud computing by creating the metadata model based on architecture content framework (ACF), DoDAF metamodel (DM2) and Cloud Computing Modelling Notation (CCMN). The framework also makes an effort to extend and improve the mapping between The Open Group Architecture Framework (TOGAF) application architectural inputs/outputs, deliverables and Department of Defence Architecture Framework (DoDAF)-described models. The roadmap of 52 DoDAF-described models is constructed by creating the metamodels of these described models and analysing the constraint relationship among metamodels. By combining the tailored framework and the roadmap, this article proposes a service-oriented enterprise application architecture development process. Finally, a case study is presented to illustrate the results of implementing the tailored framework in the Southern Base Management Support and Information Platform construction project using the development process proposed by the paper.
Analysis of Defenses Against Code Reuse Attacks on Modern and New Architectures
2015-09-01
soundness or completeness. An incomplete analysis will produce extra edges in the CFG that might allow an attacker to slip through. An unsound analysis...
Two-dimensional optical architectures for the receive mode of phased-array antennas.
Pastur, L; Tonda-Goldstein, S; Dolfi, D; Huignard, J P; Merlet, T; Maas, O; Chazelas, J
1999-05-10
We propose and experimentally demonstrate two optical architectures that process the receive mode of a p x p element phased-array antenna. The architectures are based on free-space propagation and switching of the channelized optical carriers of microwave signals. With the first architecture a direct transposition of the received signals in the optical domain is assumed. The second architecture is based on the optical generation and distribution of a microwave local oscillator matched in frequency and direction. Preliminary experimental results at microwave frequencies of approximately 3 GHz are presented.
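The beam-matching idea above reduces, per element, to applying the right phase for the desired look direction. A small numeric sketch follows (uniform linear array, half-wavelength spacing, and a 20 degree steering angle are illustrative choices; only the roughly 3 GHz carrier comes from the abstract).

```python
import numpy as np

c = 3.0e8
f = 3.0e9                     # microwave carrier near 3 GHz, as in the experiment
lam = c / f
d = lam / 2                   # element spacing (illustrative)
theta = np.deg2rad(20.0)      # desired steering angle (illustrative)

n = np.arange(8)              # 8 channelized carriers / elements (illustrative)
phase = 2 * np.pi * d * n * np.sin(theta) / lam    # phase per element, radians
print(np.round(np.rad2deg(phase) % 360, 1))        # degrees, wrapped to [0, 360)
```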
Observations of the Performance of the U.S. Laboratory Architecture
NASA Technical Reports Server (NTRS)
Jones, Rod
2002-01-01
The United States Laboratory Module "Destiny" was the product of many architectural, technology, manufacturing, schedule and cost constraints which spanned 15 years. Requirements for the Space Station pressurized elements were developed and baselined in the mid to late 1980s. Although the station program went through several design changes, the fundamental requirements that drove the architecture did not change. Manufacturing of the U.S. Laboratory began in the early 1990s. Final assembly and checkout testing completed in December of 2000. Destiny was launched, mated to the International Space Station and successfully activated on the STS-98 mission in February of 2001. The purpose of this paper is to identify key requirements which directly or indirectly established the architecture of the U.S. Laboratory; to provide an overview of how that architecture affected the manufacture, assembly, test, and activation of the module on-orbit; and finally, through observations made during the last year of operation, to provide considerations for the development of future requirements and mission integration controls for space habitats.
ERIC Educational Resources Information Center
Pihl, Ole
2015-01-01
How do architecture students experience the contradictions between the individual and the group at the Department of Architecture and Design of Aalborg University? The Problem-Based Learning model has been extensively applied to the department's degree programs in coherence with the Integrated Design Process, but is a group-based architecture and…
Monleón, Daniel; Colson, Kimberly; Moseley, Hunter N B; Anklin, Clemens; Oswald, Robert; Szyperski, Thomas; Montelione, Gaetano T
2002-01-01
Rapid data collection, spectral referencing, processing by time domain deconvolution, peak picking and editing, and assignment of NMR spectra are necessary components of any efficient integrated system for protein NMR structure analysis. We have developed a set of software tools designated AutoProc, AutoPeak, and AutoAssign, which function together with the data processing and peak-picking programs NMRPipe and Sparky, to provide an integrated software system for rapid analysis of protein backbone resonance assignments. In this paper we demonstrate that these tools, together with high-sensitivity triple resonance NMR cryoprobes for data collection and a Linux-based computer cluster architecture, can be combined to provide nearly complete backbone resonance assignments and secondary structures (based on chemical shift data) for a 59-residue protein in less than 30 hours of data collection and processing time. In this optimum case of a small protein providing excellent spectra, extensive backbone resonance assignments could also be obtained using less than 6 hours of data collection and processing time. These results demonstrate the feasibility of high throughput triple resonance NMR for determining resonance assignments and secondary structures of small proteins, and the potential for applying NMR in large scale structural proteomics projects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbas, Syed Haider; Lee, Jung-Ryul; Jang, Jae-Kyeong
Pyroshock can cause failure to the objective of an aerospace structure by damaging its sensitive electronic equipment, which is responsible for performing decisive operations. A pyroshock is the high intensity shock wave that is generated when a pyrotechnic device is explosively triggered to separate, release, or activate structural subsystems of an aerospace architecture. Pyroshock measurement plays an important role in experimental simulations to understand the characteristics of pyroshock on the host structure. This paper presents a technology to measure a pyroshock wave at multiple points using laser Doppler vibrometers (LDVs). These LDVs detect the pyroshock wave generated due to an explosive-based pyrotechnical event. Field programmable gate array (FPGA) based data acquisition is used in the study to acquire pyroshock signals simultaneously from multiple channels. This paper describes the complete system design for multipoint pyroshock measurement. The firmware architecture for the implementation of multichannel data acquisition on an FPGA-based development board is also discussed. An experiment using explosive bolts was configured to test the reliability of the system. Pyroshock was generated using explosive excitation on a 22-mm-thick steel plate. Three LDVs were deployed to capture the pyroshock wave at different points. The pyroshocks captured were displayed as acceleration plots. The results showed that our system effectively captured the pyroshock wave with a peak-to-peak magnitude of 303 741 g. The contribution of this paper is a specialized architecture of firmware design programmed in FPGA for data acquisition of a large amount of multichannel pyroshock data. The advantages of the developed system are the near-field, multipoint, non-contact, and remote measurement of a pyroshock wave, which is dangerous and expensive to produce in aerospace pyrotechnic tests.
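Once the three channels are digitized, the headline number above is just a per-channel peak-to-peak computation. A minimal sketch with synthetic, decaying shock-like bursts standing in for the LDV records (the sampling rate and amplitudes are assumptions):

```python
import numpy as np

fs = 1_000_000                            # 1 MS/s sampling rate (illustrative)
t = np.arange(0, 0.002, 1 / fs)
channels = [a * np.exp(-t / 2e-4) * np.sin(2 * np.pi * 20_000 * t)
            for a in (1.0e5, 1.3e5, 1.5e5)]   # synthetic accelerations in g

for i, x in enumerate(channels, start=1):
    print(f"LDV channel {i}: peak-to-peak = {x.max() - x.min():,.0f} g")
```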
Ultrathin Metallic Nanowire-Based Architectures as High-Performing Electrocatalysts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Luyao; Wong, Stanislaus S.
Fuel cells (FCs) convert chemical energy into electricity through electrochemical reactions. They maintain desirable functional advantages that render them as attractive candidates for renewable energy alternatives. However, the high cost and general scarcity of conventional FC catalysts largely limit the ubiquitous application of this device configuration. For example, under current consumption requirements, there is an insufficient global reserve of Pt to provide for the needs of an effective FC for every car produced. Therefore, it is absolutely necessary in the future to replace Pt either completely or in part with far more plentiful, abundant, cheaper, and potentially less toxic first-row transition metals, because the high cost-to-benefit ratio of conventional catalysts is and will continue to be a major limiting factor preventing mass commercialization. We and other groups have explored a number of nanowire-based catalytic architectures, which are either Pt-free or with reduced Pt content, as an energy efficient solution with improved performance metrics versus conventional, currently commercially available Pt nanoparticles that are already well established in the community. Specifically, in this Perspective, we highlight strategies aimed at the rational modification of not only the physical structure but also the chemical composition as a means of developing superior electrocatalysts for a number of small-molecule-based anodic oxidation and cathodic reduction reactions, which underlie the overall FC behavior. In particular, we focus on efforts to precisely, synergistically, and simultaneously tune not only the size, morphology, architectural motif, surface chemistry, and chemical composition of the as-generated catalysts but also the nature of the underlying support so as to controllably improve performance metrics of the hydrogen oxidation reaction, the methanol oxidation reaction, the ethanol oxidation reaction, and the formic acid oxidation reaction, in addition to the oxygen reduction reaction.
AHaH computing-from metastable switches to attractors to machine learning.
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures-all key capabilities of biological nervous systems and modern machine learning algorithms with real world application.
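As a very rough illustration of the plasticity rule discussed above, the sketch below applies one simplified functional reading of an anti-Hebbian term (which pulls the node output toward zero) plus a Hebbian term (which pushes it toward a decision state) to a random bipolar data stream. It is a toy numerical caricature, not the metastable-switch or memristor-pair circuit model of the paper, and the update form and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ahah_update(w, x, alpha=0.05, beta=0.05):
    """One evaluate-then-adapt cycle of a toy AHaH-style node."""
    y = float(np.dot(w, x))
    w += -alpha * x * y          # anti-Hebbian: decay proportional to activity
    w += beta * x * np.sign(y)   # Hebbian: reinforce the chosen decision state
    return y

w = rng.normal(scale=0.1, size=4)                   # small random initial weights
for x in rng.choice([-1.0, 1.0], size=(500, 4)):    # random bipolar inputs
    ahah_update(w, x)
print("weights after adaptation:", np.round(w, 2))
```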
Ultrathin Metallic Nanowire-Based Architectures as High-Performing Electrocatalysts
Li, Luyao; Wong, Stanislaus S.
2018-03-19
Fuel cells (FCs) convert chemical energy into electricity through electrochemical reactions. They maintain desirable functional advantages that render them as attractive candidates for renewable energy alternatives. However, the high cost and general scarcity of conventional FC catalysts largely limit the ubiquitous application of this device configuration. For example, under current consumption requirements, there is an insufficient global reserve of Pt to provide for the needs of an effective FC for every car produced. Therefore, it is absolutely necessary in the future to replace Pt either completely or in part with far more plentiful, abundant, cheaper, and potentially less toxic first-row transition metals, because the high cost-to-benefit ratio of conventional catalysts is and will continue to be a major limiting factor preventing mass commercialization. We and other groups have explored a number of nanowire-based catalytic architectures, which are either Pt-free or with reduced Pt content, as an energy efficient solution with improved performance metrics versus conventional, currently commercially available Pt nanoparticles that are already well established in the community. Specifically, in this Perspective, we highlight strategies aimed at the rational modification of not only the physical structure but also the chemical composition as a means of developing superior electrocatalysts for a number of small-molecule-based anodic oxidation and cathodic reduction reactions, which underlie the overall FC behavior. In particular, we focus on efforts to precisely, synergistically, and simultaneously tune not only the size, morphology, architectural motif, surface chemistry, and chemical composition of the as-generated catalysts but also the nature of the underlying support so as to controllably improve performance metrics of the hydrogen oxidation reaction, the methanol oxidation reaction, the ethanol oxidation reaction, and the formic acid oxidation reaction, in addition to the oxygen reduction reaction.
Activity-Centric Approach to Distributed Programming
NASA Technical Reports Server (NTRS)
Levy, Renato; Satapathy, Goutam; Lang, Jun
2004-01-01
The first phase of an effort to develop a NASA version of the Cybele software system has been completed. To give meaning to even a highly abbreviated summary of the modifications to be embodied in the NASA version, it is necessary to present the following background information on Cybele: Cybele is a proprietary software infrastructure for use by programmers in developing agent-based application programs [complex application programs that contain autonomous, interacting components (agents)]. Cybele provides support for event handling from multiple sources, multithreading, concurrency control, migration, and load balancing. A Cybele agent follows a programming paradigm, called activity-centric programming, that enables an abstraction over system-level thread mechanisms. Activity centric programming relieves application programmers of the complex tasks of thread management, concurrency control, and event management. In order to provide such functionality, activity-centric programming demands support of other layers of software. This concludes the background information. In the first phase of the present development, a new architecture for Cybele was defined. In this architecture, Cybele follows a modular service-based approach to coupling of the programming and service layers of software architecture. In a service-based approach, the functionalities supported by activity-centric programming are apportioned, according to their characteristics, among several groups called services. A well-defined interface among all such services serves as a path that facilitates the maintenance and enhancement of such services without adverse effect on the whole software framework. The activity-centric application-program interface (API) is part of a kernel. The kernel API calls the services by use of their published interface. This approach makes it possible for any application code written exclusively under the API to be portable for any configuration of Cybele.
AHaH Computing–From Metastable Switches to Attractors to Machine Learning
Nugent, Michael Alexander; Molter, Timothy Wesley
2014-01-01
Modern computing architecture based on the separation of memory and processing leads to a well known problem called the von Neumann bottleneck, a restrictive limit on the data bandwidth between CPU and RAM. This paper introduces a new approach to computing we call AHaH computing where memory and processing are combined. The idea is based on the attractor dynamics of volatile dissipative electronics inspired by biological systems, presenting an attractive alternative architecture that is able to adapt, self-repair, and learn from interactions with the environment. We envision that both von Neumann and AHaH computing architectures will operate together on the same machine, but that the AHaH computing processor may reduce the power consumption and processing time for certain adaptive learning tasks by orders of magnitude. The paper begins by drawing a connection between the properties of volatility, thermodynamics, and Anti-Hebbian and Hebbian (AHaH) plasticity. We show how AHaH synaptic plasticity leads to attractor states that extract the independent components of applied data streams and how they form a computationally complete set of logic functions. After introducing a general memristive device model based on collections of metastable switches, we show how adaptive synaptic weights can be formed from differential pairs of incremental memristors. We also disclose how arrays of synaptic weights can be used to build a neural node circuit operating AHaH plasticity. By configuring the attractor states of the AHaH node in different ways, high level machine learning functions are demonstrated. This includes unsupervised clustering, supervised and unsupervised classification, complex signal prediction, unsupervised robotic actuation and combinatorial optimization of procedures–all key capabilities of biological nervous systems and modern machine learning algorithms with real world application. PMID:24520315
The first steps towards a de minimus, affordable NEA exploration architecture
NASA Astrophysics Data System (ADS)
Landis, Rob R.; Abell, Paul A.; Adamo, Daniel R.; Barbee, Brent W.; Johnson, Lindley N.
2013-03-01
The impetus for asteroid exploration is scientific, political, and pragmatic. The notion of sending human explorers to asteroids is not new. Piloted missions to these primitive bodies were first discussed in the 1960s, pairing Saturn V rockets with enhanced Apollo spacecraft to explore what were then called "Earth-approaching asteroids." Two decades ago, NASA's Space Exploration Initiative (SEI) also briefly examined the possibility of visiting these small celestial bodies. Most recently, the US Human Space Flight Review Committee (the second Augustine Commission) suggested that near-Earth objects (NEOs) represent a target-rich environment for exploration via the "Flexible Path" option. However, prior to seriously considering human missions to NEOs, it has become clear that we currently lack a robust catalog of human-accessible targets. The majority of the known NEOs identified by a study team across several NASA centers as "human-accessible" are probably too small and have orbits that are too uncertain to consider mounting piloted expeditions to these small worlds. The first step in developing a comprehensive catalog is, therefore, to complete a space-based NEO survey. The resulting catalog of candidate NEOs would then be transformed into a matrix of opportunities for robotic and human missions for the next several decades and shared with the international community. This initial step of a space-based NEO survey is therefore the linchpin to laying the foundation of a low-risk architecture to venture out and explore these primitive bodies. We suggest such a minimalist framework architecture from (1) extensive ground-based and precursor spacecraft investigations (while applying operational knowledge from science-driven robotic missions), (2) astronaut servicing of spacecraft operating at geosynchronous Earth orbit to retain essential skills and experience, and (3) applying the sum of these skills, knowledge and experience to piloted missions to NEOs.
SVGA and XGA LCOS microdisplays for HMD applications
NASA Astrophysics Data System (ADS)
Bolotski, Michael; Alvelda, Phillip
1999-07-01
MicroDisplay liquid crystal on silicon (LCOS) display devices are based on a combination of technologies coupled with the extreme integration capability of conventionally fabricated CMOS substrates. Two recent SVGA (800 X 600) pixel resolution designs were demonstrated based on 10-micron and 12.5-micron pixel pitch architectures. The resulting microdisplays measure approximately 10 mm and 12 mm in diagonal respectively. Further, an XGA (1024 X 768) resolution display fabricated with a 12.5-micron pixel pitch and a 16-mm diagonal was also demonstrated. Both the larger SVGA and the XGA design were based on the same 12.5-micron pixel-pitch design, demonstrating a quickly scalable design architecture for rapid prototyping life-cycles. All three microdisplay designs described above function in grayscale and high-performance Field-Sequential-Color (FSC) operating modes. The fast liquid crystal operating modes and new scalable high-performance pixel addressing architectures presented in this paper enable substantially improved color, contrast, and brightness while still satisfying the optical, packaging, and power requirements of portable commercial and defense applications including ultra-portable helmet, eyeglass, and head-mounted systems. The entire suite of The MicroDisplay Corporation's technologies was devised to create a line of mixed-signal application-specific integrated circuits (ASIC) in single-chip display systems. Mixed-signal circuits can integrate computing, memory, and communication circuitry on the same substrate as the display drivers and pixel array for a multifunctional complete system-on-a-chip. For helmet and head-mounted displays this can include capabilities such as the incorporation of customized symbology and information storage directly on the display substrate. System-on-a-chip benefits also include reduced head-supported weight requirements through the elimination of off-chip drive electronics.
Design of an Airborne L-Band Cross-Track Scanning Scatterometer
NASA Technical Reports Server (NTRS)
Hilliard, Lawrence M. (Technical Monitor)
2002-01-01
In this report, we describe the design of an airborne L-band cross-track scanning scatterometer suitable for airborne operation aboard the NASA P-3 aircraft. The scatterometer is being designed for joint operation with existing L-band radiometers developed by NASA for soil moisture and ocean salinity remote sensing. In addition, design tradeoffs for a space-based radar system have been considered, with particular attention given to antenna architectures suitable for sharing the antenna between the radar and radiometer. During this study, we investigated a number of imaging techniques, including the use of real and synthetic aperture processing in both the along track and cross-track dimensions. The architecture selected will permit a variety of beamforming algorithms to be implemented, although real aperture processing, with hardware beamforming, provides better sidelobe suppression than synthetic array processing and superior signal-to-noise performance. In our discussions with the staff of NASA GSFC, we arrived at an architecture that employs complete transmit/receive modules for each subarray. Amplitude and phase control at each of the transmit modules will allow a low-sidelobe transmit pattern to be generated over scan angles of +/- 50 degrees. Each receiver module will include all electronics necessary to downconvert the received signal to an IF offset of 30 MHz where it will be digitized for further processing.
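The low-sidelobe transmit pattern mentioned above is obtained by shading the per-module amplitudes. The sketch below compares the array factor of a uniform linear array with and without a Hamming taper; the element count, spacing, and the taper itself are illustrative, not the actual L-band antenna design.

```python
import numpy as np

n_elem = 16
d_over_lambda = 0.5
theta = np.deg2rad(np.linspace(-90, 90, 721))
k = 2 * np.pi * d_over_lambda * np.sin(theta)
n = np.arange(n_elem)

def array_factor_db(weights):
    af = np.abs(np.exp(1j * np.outer(k, n)) @ weights)
    return 20 * np.log10(np.maximum(af / af.max(), 1e-6))

def peak_sidelobe(af_db):
    return af_db[np.abs(theta) > np.deg2rad(15)].max()   # outside the main lobe

print("uniform weights, peak sidelobe (dB):",
      round(peak_sidelobe(array_factor_db(np.ones(n_elem))), 1))
print("Hamming taper,   peak sidelobe (dB):",
      round(peak_sidelobe(array_factor_db(np.hamming(n_elem))), 1))
```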
Station Explorer for X-Ray Timing and Navigation Technology Architecture Overview
NASA Technical Reports Server (NTRS)
Hasouneh, Monther Abdel Hamid
2014-01-01
The Station Explorer for X-ray Timing and Navigation Technology (SEXTANT) is a technology demonstration enhancement to the Neutron-star Interior Composition Explorer (NICER) mission. NICER is a NASA astrophysics Explorer Mission of Opportunity, scheduled for launch in mid-2016, that will be hosted on the International Space Station (ISS) via the ExPRESS Logistics Carrier (ELC). By exploiting the regular pulsations emitted by the ultra dense remnants of dead stars, which rotate many hundreds of times per second, SEXTANT will, for the first time, demonstrate real-time, on-board X-ray pulsar-based navigation, a significant milestone in the quest to establish a GPS-like navigation capability available throughout our Solar System and beyond, and will include the world's first completely functional system architecture for navigation using X-ray pulsars. In addition, NICER SEXTANT will investigate the suitability of these millisecond X-ray pulsars (MSPs) as a Solar System-wide timing infrastructure to rival terrestrial atomic clocks on long timescales. This paper provides a brief overview of the SEXTANT demonstration and the design of the system architecture that consists of the NICER X-ray timing instrument, the SEXTANT flight software and algorithms, supporting ground system, and the GSFC X-ray Navigation Laboratory Testbed (GXLT).
A System for the Semantic Multimodal Analysis of News Audio-Visual Content
NASA Astrophysics Data System (ADS)
Mezaris, Vasileios; Gidaros, Spyros; Papadopoulos, Georgios Th.; Kasper, Walter; Steffen, Jörg; Ordelman, Roeland; Huijbregts, Marijn; de Jong, Franciska; Kompatsiaris, Ioannis; Strintzis, Michael G.
2010-12-01
News-related content is nowadays among the most popular types of content for users in everyday applications. Although the generation and distribution of news content has become commonplace, due to the availability of inexpensive media capturing devices and the development of media sharing services targeting both professional and user-generated news content, the automatic analysis and annotation that is required for supporting intelligent search and delivery of this content remains an open issue. In this paper, a complete architecture for knowledge-assisted multimodal analysis of news-related multimedia content is presented, along with its constituent components. The proposed analysis architecture employs state-of-the-art methods for the analysis of each individual modality (visual, audio, text) separately and proposes a novel fusion technique based on the particular characteristics of news-related content for the combination of the individual modality analysis results. Experimental results on news broadcast video illustrate the usefulness of the proposed techniques in the automatic generation of semantic annotations.
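In its simplest form, combining the modality results comes down to a weighted late fusion of per-modality confidences. The weights, concept labels, and scores below are illustrative placeholders, not the fusion technique actually proposed in the paper.

```python
# Minimal late-fusion sketch over per-modality annotation confidences.
modality_weights = {"visual": 0.4, "audio": 0.25, "text": 0.35}

scores = {
    "politics": {"visual": 0.2, "audio": 0.6, "text": 0.9},
    "sports":   {"visual": 0.8, "audio": 0.5, "text": 0.1},
}

for concept, by_modality in scores.items():
    fused = sum(modality_weights[m] * by_modality[m] for m in modality_weights)
    print(f"{concept}: fused confidence = {fused:.2f}")
```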
Recent advances in integrated photonic sensors.
Passaro, Vittorio M N; de Tullio, Corrado; Troia, Benedetto; La Notte, Mario; Giannoccaro, Giovanni; De Leonardis, Francesco
2012-11-09
Nowadays, optical devices and circuits are becoming fundamental components in several application fields such as medicine, biotechnology, automotive, aerospace, food quality control, chemistry, to name a few. In this context, we propose a complete review on integrated photonic sensors, with specific attention to materials, technologies, architectures and optical sensing principles. To this aim, sensing principles commonly used in optical detection are presented, focusing on sensor performance features such as sensitivity, selectivity and rangeability. Since photonic sensors provide substantial benefits regarding compatibility with CMOS technology and integration on chips characterized by micrometric footprints, design and optimization strategies of photonic devices are widely discussed for sensing applications. In addition, several numerical methods employed in photonic circuits and devices, simulations and design are presented, focusing on their advantages and drawbacks. Finally, recent developments in the field of photonic sensing are reviewed, considering advanced photonic sensor architectures based on linear and non-linear optical effects and to be employed in chemical/biochemical sensing, angular velocity and electric field detection.
Automation for System Safety Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul
2009-01-01
This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
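Step 3 above, finding possible paths from hazard sources to vulnerable entities, is essentially path enumeration over a directed component-connection graph. A minimal sketch with a hypothetical model follows (the node names and edges are invented for illustration).

```python
import networkx as nx

# Hypothetical extracted architecture model: components and connections.
g = nx.DiGraph()
g.add_edges_from([
    ("battery_fault", "power_bus"),
    ("power_bus", "flight_computer"),
    ("sensor_glitch", "flight_computer"),
    ("flight_computer", "thruster_controller"),
])

hazard_sources = ["battery_fault", "sensor_glitch"]
vulnerable = ["thruster_controller"]

# Enumerate candidate hazard-propagation paths for integration-test selection.
for src in hazard_sources:
    for dst in vulnerable:
        for path in nx.all_simple_paths(g, src, dst):
            print(" -> ".join(path))
```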
Object-oriented approach for gas turbine engine simulation
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Felder, James L.
1995-01-01
An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules will interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report is the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
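A cartoon of the component-based design described above: each engine component maps inlet flow-station conditions to outlet conditions, and the simulation module walks the flowpath. The class names, ports, and the compressor relation are simplified illustrations, not the prototype's actual class hierarchy.

```python
class EngineComponent:
    def __init__(self, name):
        self.name = name

    def update(self, station):
        """Map inlet flow-station conditions to outlet conditions."""
        raise NotImplementedError

class Duct(EngineComponent):
    def __init__(self, name, pressure_loss=0.02):
        super().__init__(name)
        self.loss = pressure_loss

    def update(self, station):
        return {"P": station["P"] * (1 - self.loss), "T": station["T"]}

class Compressor(EngineComponent):
    def __init__(self, name, pressure_ratio, efficiency):
        super().__init__(name)
        self.pr, self.eff = pressure_ratio, efficiency

    def update(self, station):
        gamma = 1.4      # ideal-gas temperature rise corrected by efficiency
        t_ratio = 1 + (self.pr ** ((gamma - 1) / gamma) - 1) / self.eff
        return {"P": station["P"] * self.pr, "T": station["T"] * t_ratio}

flowpath = [Duct("inlet"), Compressor("hpc", pressure_ratio=8.0, efficiency=0.85)]
station = {"P": 101325.0, "T": 288.15}    # sea-level static conditions
for component in flowpath:
    station = component.update(station)
print(station)
```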
Recent Advances in Integrated Photonic Sensors
Passaro, Vittorio M. N.; de Tullio, Corrado; Troia, Benedetto; La Notte, Mario; Giannoccaro, Giovanni; De Leonardis, Francesco
2012-01-01
Nowadays, optical devices and circuits are becoming fundamental components in several application fields such as medicine, biotechnology, automotive, aerospace, food quality control, chemistry, to name a few. In this context, we propose a complete review on integrated photonic sensors, with specific attention to materials, technologies, architectures and optical sensing principles. To this aim, sensing principles commonly used in optical detection are presented, focusing on sensor performance features such as sensitivity, selectivity and rangeability. Since photonic sensors provide substantial benefits regarding compatibility with CMOS technology and integration on chips characterized by micrometric footprints, design and optimization strategies of photonic devices are widely discussed for sensing applications. In addition, several numerical methods employed in photonic circuits and devices, simulations and design are presented, focusing on their advantages and drawbacks. Finally, recent developments in the field of photonic sensing are reviewed, considering advanced photonic sensor architectures based on linear and non-linear optical effects and to be employed in chemical/biochemical sensing, angular velocity and electric field detection. PMID:23202223
A TMS320-based modem for the aeronautical-satellite core data service
NASA Astrophysics Data System (ADS)
Moher, Michael L.; Lodge, John H.
The International Civil Aviation Organization (ICAO) Future Air Navigation Systems (FANS) committee, the Airlines Electronics Engineering Committee (AEEC), and Inmarsat have been developing standards for an aeronautical satellite communications service. These standards encompass a satellite communications system architecture to provide comprehensive aeronautical communications services. Incorporated into the architecture is a core service capability, providing only low rate data communications, which all service providers and all aircraft earth terminals are required to support. In this paper an implementation of the physical layer of this standard for the low data rate core service is described. This is a completely digital modem (up to a low intermediate frequency). The implementation uses a single TMS320C25 chip for the transmit baseband functions of scrambling, encoding, interleaving, block formatting and modulation. The receiver baseband unit uses a dual processor configuration to implement the functions of demodulation, synchronization, de-interleaving, decoding and de-scrambling. The hardware requirements, the software structure and the algorithms of this implementation are described.
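To make the transmit-side ordering described above concrete, here is a toy numpy sketch of scramble, encode, interleave and modulate applied in sequence. The scrambler taps, the repetition stand-in for the channel code, the interleaver depth and the BPSK mapping are all placeholders, not the parameters of the aeronautical-satellite core-service waveform.

```python
# Toy sketch of the transmit baseband ordering described above:
# scramble -> encode -> interleave -> modulate.  All parameters (scrambler
# taps, repetition "code", interleaver depth, BPSK mapping) are placeholders,
# not the actual aeronautical-satellite core-service waveform values.
import numpy as np

def scramble(bits, seed=0b1011011):
    state, out = seed, []
    for b in bits:
        fb = (state ^ (state >> 3)) & 1          # toy LFSR feedback
        out.append(b ^ fb)
        state = ((state << 1) | fb) & 0x7F
    return np.array(out, dtype=np.uint8)

def encode(bits):
    return np.repeat(bits, 2)                    # stand-in for a rate-1/2 code

def interleave(bits, rows=8):
    pad = (-len(bits)) % rows
    m = np.concatenate([bits, np.zeros(pad, np.uint8)]).reshape(rows, -1)
    return m.T.reshape(-1)                       # write by rows, read by columns

def modulate(bits):
    return 1.0 - 2.0 * bits.astype(float)        # BPSK symbols +/-1

frame = np.random.randint(0, 2, 96).astype(np.uint8)
symbols = modulate(interleave(encode(scramble(frame))))
print(symbols[:8])
```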
Real-time FPGA architectures for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2000-03-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The FPGA-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on a dedicated VLSI device to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
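A minimal software sketch of the memory-access idea behind such architectures: each image row is read from memory once and held in internal line buffers while the mask slides across it. Sizes and the averaging mask are illustrative; this is not the FPGA design itself.

```python
# Software sketch of the hardware idea: keep the last few image rows in
# internal line buffers so each pixel is fetched from image memory only once
# while a 3x3 mask slides over the image.  Sizes are illustrative.
import numpy as np

def convolve_with_line_buffers(image, mask):
    kh, kw = mask.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    buffers = [image[r] for r in range(kh - 1)]        # pre-load kh-1 rows
    for r in range(kh - 1, h):
        buffers.append(image[r])                       # one memory read per row
        window_rows = np.stack(buffers[-kh:])
        for c in range(w - kw + 1):
            out[r - kh + 1, c] = np.sum(window_rows[:, c:c + kw] * mask)
        buffers.pop(0)                                 # discard the oldest row
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
mask = np.ones((3, 3)) / 9.0                           # simple averaging mask
print(convolve_with_line_buffers(img, mask).shape)     # (6, 6)
```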
LISA Framework for Enhancing Gravitational Wave Signal Extraction Techniques
NASA Technical Reports Server (NTRS)
Thompson, David E.; Thirumalainambi, Rajkumar
2006-01-01
This paper describes the development of a Framework for benchmarking and comparing signal-extraction and noise-interference-removal methods that are applicable to interferometric Gravitational Wave detector systems. The primary use is comparing signal and noise extraction techniques at LISA frequencies for multiple (possibly confused) gravitational wave sources. The Framework includes extensive hybrid learning/classification algorithms, as well as post-processing regularization methods, and is based on a unique plug-and-play (component) architecture. Published methods for signal extraction and interference removal at LISA frequencies are being encoded, as well as multiple source noise models, so that the stiffness of GW Sensitivity Space can be explored under each combination of methods. Furthermore, synthetic datasets and source models can be created and imported into the Framework, and specific degraded numerical experiments can be run to test the flexibility of the analysis methods. The Framework also supports use of current LISA testbeds, synthetic data systems, and simulators already in existence through plug-ins and wrappers, thus preserving those legacy codes and systems intact. Because of the component-based architecture, all selected procedures can be registered or de-registered at run-time, and are completely reusable, reconfigurable, and modular.
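A minimal sketch of the run-time register/de-register idea behind such a plug-and-play component architecture; the class and method names are hypothetical, not the Framework's actual API.

```python
# Minimal sketch of a run-time plug-in registry of the kind a component
# architecture implies.  Class and method names are hypothetical, not the
# Framework's actual API.
class MethodRegistry:
    def __init__(self):
        self._methods = {}

    def register(self, name, func):
        self._methods[name] = func                 # add an extraction method at run time

    def deregister(self, name):
        self._methods.pop(name, None)              # remove it without restarting

    def run(self, name, data):
        return self._methods[name](data)

registry = MethodRegistry()
registry.register("matched_filter", lambda d: max(d))     # placeholder "method"
print(registry.run("matched_filter", [0.1, 0.7, 0.3]))
registry.deregister("matched_filter")
```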
NASA Astrophysics Data System (ADS)
Xi, Wenze; McKisson, J. E.; Weisenberger, Andrew G.; Zhang, Shukui; Zorn, Carl
2014-06-01
A new laser-based externally-modulated electro-optically coupled detector (EOCD) architecture is being developed to enable high-density readout for radiation detectors with accurate analog radiation pulse shape and timing preservation. Unlike digital conversion before electro-optical modulation, the EOCD implements complete analog optical signal modulation and multiplexing in its detector front-end. The result is a compact, high performance detector readout that can be both radiation tolerant and immune to magnetic fields. In this work, the feasibility of EOCD was explored by constructing a two-wavelength laser-based externally-modulated EOCD, and testing analog pulse shape preservation and wavelength-division multiplexing (WDM) crosstalk. Comparisons were first made between the corresponding initial pulses and the electro-optically coupled analog pulses. This confirmed an excellent analog pulse preservation over 29% of the modulator's switching voltage range. Optical spectrum analysis revealed less than -14 dB crosstalk with 1.2 nm WDM wavelength bandgap, and provided insight on experimental conditions that could lead to increased inter-wavelength crosstalk. Further discussions and previous research on the radiation tolerance and magnetic field immunity of the candidate materials were also given, and quantitative device testing is proposed in the future.
An AFDX Network for Spacecraft Data Handling
NASA Astrophysics Data System (ADS)
Deredempt, Marie-Helene; Kollias, Vangelis; Sun, Zhili; Canamares, Ernest; Ricco, Philippe
2014-08-01
In the aeronautical domain, the ARINC-664 Part 7 specification (AFDX) [4] provides the enabling technology for interfacing equipment in Integrated Modular Avionics (IMA) architectures. The complementary part of AFDX needed for complete interoperability - the Time and Space Partitioning (ARINC 653) concepts [1] - has already been studied as part of the ESA roadmap for the space domain (i.e., the IMA4Space project). A standardized IMA-based architecture is already considered in the aeronautical domain to be more flexible, reliable and secure. Integration and validation become simpler, using a common set of tools and databases, and can be done in parts on different facilities with the same definition (hardware and software test benches, flight control or alarm test benches, simulator and flight test installation). In some areas, data processing requirements are quite similar in the space domain, and the concept could be applied there to take benefit of the technology itself and of the range of hardware and software solutions and tools available on the market. The Mission project (Methodology and assessment for the applicability of ARINC-664 (AFDX) in Satellite/Spacecraft on-board communicatION networks), an FP7 initiative for bringing terrestrial SME research into the space domain, has started to evaluate the applicability of the standard in the space domain.
NASA Technical Reports Server (NTRS)
1994-01-01
Formed in Jan. 1992, the Panel to Review EOSDIS Plans was charged with advising NASA on its plans for developing the Earth Observing System (EOS) Data and Information System (EOSDIS). Specifically, the panel was asked to do the following: assess the validity of the engineering and technical underpinnings of the EOSDIS; assess its potential value to scientific users; suggest how technical risk can be minimized; and assess whether current plans are sufficiently resilient to be adaptable to changing technology and requirements such as budget environments, data volumes, new users, and new databases. The panel completed an interim report (Addendum A) and transmitted it to NASA and other interested parties in the government on 9 Apr. 1992. Because of a delay in NASA's plans to select the contractor for EOSDIS, the panel was not able to complete its review of the program according to the original government request. With the issuance of a letter report (Addendum B) on 28 Sep. 1992, the panel became inactive until such time as NASA could release the details of the contractor's proposed architecture, schedule, and costs for developing EOSDIS. In early 1993, NASA awarded the contract for the EOSDIS Core System (ECS). On 20 Apr. 1993, NASA asked the panel to reconvene to do the following: ( 1) complete its review of NASA's approach to the EOSDIS architecture and implementation; (2) appraise NASA's responses to the panel's previous recommendations; and (3) review the planning for EOSDIS in the context of NASA's role in the Global Change Data and Information System (GCDIS) implementation plan. To respond to the NASA charge, the panel met three times in 1993 including sessions with NASA officials and the EOSDIS contractor. In addition, several of the panel members visited individual Distributed Active Archive Centers (DAAC's) to obtain additional views of EOSDIS. The panel has now obtained substantial information on the EOSDIS budget, contractor work program, and current baseline architecture that was not previously available, due to procurement restrictions. This report presents the panel's findings and recommendations based on this additional information.
Medication Reconciliation: Work Domain Ontology, Prototype Development, and a Predictive Model
Markowitz, Eliz; Bernstam, Elmer V.; Herskovic, Jorge; Zhang, Jiajie; Shneiderman, Ben; Plaisant, Catherine; Johnson, Todd R.
2011-01-01
Medication errors can result from administration inaccuracies at any point of care and are a major cause for concern. To develop a successful Medication Reconciliation (MR) tool, we believe it necessary to build a Work Domain Ontology (WDO) for the MR process. A WDO defines the explicit, abstract, implementation-independent description of the task by separating the task from work context, application technology, and cognitive architecture. We developed a prototype based upon the WDO and designed to adhere to standard principles of interface design. The prototype was compared to Legacy Health System’s and Pre-Admission Medication List Builder MR tools via a Keystroke-Level Model analysis for three MR tasks. The analysis found the prototype requires the fewest mental operations, completes tasks in the fewest steps, and completes tasks in the least amount of time. Accordingly, we believe that developing a MR tool, based upon the WDO and user interface guidelines, improves user efficiency and reduces cognitive load. PMID:22195146
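The Keystroke-Level Model comparison above lends itself to a small worked sketch: a task time is estimated by summing standard per-operator times (K keystroke, P point, H home, M mental preparation). The operator times below are commonly cited KLM defaults and the operator sequences are invented examples, not the sequences or results reported in the study.

```python
# Worked Keystroke-Level Model (KLM) sketch: estimate task time by summing
# per-operator times.  Operator times are commonly cited KLM defaults
# (seconds) and the operator strings are invented examples, not the sequences
# or results from the study above.
KLM_TIMES = {"K": 0.2,    # keystroke (skilled typist)
             "P": 1.1,    # point with mouse
             "H": 0.4,    # home hands between keyboard and mouse
             "M": 1.35}   # mental preparation

def klm_estimate(sequence):
    return sum(KLM_TIMES[op] for op in sequence)

# Hypothetical comparison: fewer mental operators and steps -> shorter estimate.
legacy_tool = "M H P K M P K K K K"
prototype   = "M P K K"
for name, seq in [("legacy", legacy_tool), ("prototype", prototype)]:
    print(name, round(klm_estimate(seq.split()), 2), "s")
```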
Updates to the NASA Space Telecommunications Radio System (STRS) Architecture
NASA Technical Reports Server (NTRS)
Kacpura, Thomas J.; Handler, Louis M.; Briones, Janette; Hall, Charles S.
2008-01-01
This paper describes an update of the Space Telecommunications Radio System (STRS) open architecture for NASA space based radios. The STRS architecture has been defined as a framework for the design, development, operation and upgrade of space based software defined radios, where processing resources are constrained. The architecture has been updated based upon reviews by NASA missions, radio providers, and component vendors. The STRS Standard prescribes the architectural relationship between the software elements used in software execution and defines the Application Programmer Interface (API) between the operating environment and the waveform application. Modeling tools have been adopted to present the architecture. The paper will present a description of the updated API, configuration files, and constraints. Minimum compliance is discussed for early implementations. The paper then closes with a summary of the changes made and discussion of the relevant alignment with the Object Management Group (OMG) SWRadio specification, and enhancements to the specialized signal processing abstraction.
A Distributed Intelligent E-Learning System
ERIC Educational Resources Information Center
Kristensen, Terje
2016-01-01
An E-learning system based on a multi-agent (MAS) architecture, combined with the Dynamic Content Manager (DCM) model of E-learning, is presented. We discuss the benefits of using such a multi-agent architecture. Finally, the MAS architecture is compared with a pure service-oriented architecture (SOA). This MAS architecture may also be used within…
Extensive Evaluation of Using a Game Project in a Software Architecture Course
ERIC Educational Resources Information Center
Wang, Alf Inge
2011-01-01
This article describes an extensive evaluation of introducing a game project to a software architecture course. In this project, university students have to construct and design a type of software architecture, evaluate the architecture, implement an application based on the architecture, and test this implementation. In previous years, the domain…
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
The C370 Program was awarded in October 2010 with the ambitious goal of designing and testing the most electrically efficient recuperated microturbine engine at a rated power of less than 500 kW. The aggressive targets for electrical efficiency, emission regulatory compliance, and the estimated price point make the system state-of-the-art for microturbine engine systems. These goals will be met by designing a two stage microturbine engine, identified as the low pressure spool and high pressure spool, that is based on derivative hardware of Capstone's current commercially available engines. The development and testing of the engine occurred in two phases. Phase I focused on developing a higher-power, more efficient engine that would become the low pressure spool, which is based on Capstone's C200 (200 kW) engine architecture. Phase II integrated the low pressure spool created in Phase I with the high pressure spool, which is based on Capstone's C65 (65 kW) commercially available engine. Integration of the engines, based on preliminary research, would allow the dual spool engine to provide electrical power in excess of 370 kW, with electrical efficiency approaching 42%. If both of these targets were met, coupled with the overall CHP target of 85% total combined heating and electrical efficiency, California Air Resources Board (CARB) level emissions, and a price target of $600 per kW, the system would represent a step change in the currently available commercial generation technology. Phase I of the C370 program required the development of the C370 low pressure spool. The goal was to increase the C200 engine power by a minimum of 25% — 250 kW — and efficiency from 32% to 37%. These increases in the C200 engine output were imperative to meet the power requirements of the engine when both spools were integrated. An additional benefit of designing and testing the C370 low pressure spool was the possibility of developing a stand-alone product for possible commercialization. The low pressure spool design activity focused on an aeropath derivative of the current C200 engine. The aeropath derivative included changes to the compressor section — compressor and inducer — and to the turbine nozzle. The increased power also necessitated a larger, more powerful generator and generator controller to support the increased power requirements. These two major design changes were completed by utilizing both advanced 3D modeling and computational fluid dynamics modeling. After design, modeling, and analysis, the decision was made to acquire and integrate the components for testing. The second task of Phase I was to integrate and test the components of the low pressure spool to validate power and efficiency. Acquisition of the components for the low pressure spool was completed utilizing Capstone's current supplier base. Utilization of Capstone's supply base for integration of the test article would allow — if the decision was made — expedited commercialization of the product. After integration of the engine components, the engine was tested and evaluated for performance and emissions. Test data analysis confirmed that the engine met all power and efficiency requirements and did so while maintaining CARB level emissions. The emissions were met without the use of any post processing or catalyst. After testing was completed, the DOE authorized — via a milestone review — proceeding to Phase II: the development of the integrated C370 engine.
The C370 high pressure spool design activity required significant changes to the C65 engine architecture. The engine required a high power density generator and a completely redesigned compressor stage, turbine section, recuperator, controls architecture, and intercooler stage. The two most critical design challenges were the turbine section (the nozzle and turbine) and the controls architecture. The design and analysis of all of the components was completed and integrated into a system model. The system model — after numerous iterations — indicated that, once integrated, the engine would meet or exceed all system requirements. Unfortunately, the turbine section’s life requirements remain a technical challenge and will require continued refinement of the bi-metallic turbine wheel design and manufacturing approach to meet the life requirement at these high temperatures. The current controls architecture requires substantial effort to develop a system capable of handling the high-speed, near real-time controls requirement, but it was determined not to be a technical roadblock for the project. The C370 Program has been a significant effort with state-of-the-art technical targets. The targets have pushed Capstone’s designers to the limits of current technology. The program has been fortunate to see many successes: the successful testing of the low pressure spool (C250), the development of new material processes, and the implementation of new design practices. The technology and practices learned during the program will be utilized in Capstone’s current product lines and future products. The C370 Program has been a resounding success on many fronts for the DOE and for Capstone.
Web-Based Course Management and Web Services
ERIC Educational Resources Information Center
Mandal, Chittaranjan; Sinha, Vijay Luxmi; Reade, Christopher M. P.
2004-01-01
The architecture of a web-based course management tool that has been developed at IIT [Indian Institute of Technology], Kharagpur and which manages the submission of assignments is discussed. Both the distributed architecture used for data storage and the client-server architecture supporting the web interface are described. Further developments…
AlJarullah, Asma; El-Masri, Samir
2013-08-01
The goal of a national electronic health records integration system is to aggregate electronic health records concerning a particular patient at different healthcare providers' systems to provide a complete medical history of the patient. It holds the promise to address the two most crucial challenges to the healthcare systems: improving healthcare quality and controlling costs. Typical approaches for the national integration of electronic health records are a centralized architecture and a distributed architecture. This paper proposes a new approach for the national integration of electronic health records, the semi-centralized approach, an intermediate solution between the centralized architecture and the distributed architecture that has the benefits of both approaches. The semi-centralized approach is provided with a clearly defined architecture. The main data elements needed by the system are defined and the main system modules that are necessary to achieve an effective and efficient functionality of the system are designed. Best practices and essential requirements are central to the evolution of the proposed architecture. The proposed architecture will provide the basis for designing the simplest and the most effective systems to integrate electronic health records on a nation-wide basis that maintain integrity and consistency across locations, time and systems, and that meet the challenges of interoperability, security, privacy, maintainability, mobility, availability, scalability, and load balancing.
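A minimal sketch of the semi-centralized idea described above, under the assumption (common to record-locator designs) that a central index stores only pointers per patient while full records stay at the providers and are pulled on demand; all class, module and field names here are illustrative, not the proposed system's design.

```python
# Sketch of a semi-centralized integration idea: a central registry holds only
# a record locator per patient; full records remain at provider systems and
# are pulled on demand.  All names and data structures are illustrative.
class ProviderRepository:
    def __init__(self, name):
        self.name = name
        self._records = {}

    def store(self, patient_id, record):
        self._records.setdefault(patient_id, []).append(record)

    def fetch(self, patient_id):
        return [(self.name, r) for r in self._records.get(patient_id, [])]

class CentralIndex:
    def __init__(self):
        self._locator = {}                 # patient_id -> providers holding data

    def register(self, patient_id, provider):
        self._locator.setdefault(patient_id, set()).add(provider)

    def aggregate(self, patient_id):
        # The central node resolves *where* records live, then pulls them together.
        providers = self._locator.get(patient_id, set())
        return [entry for p in providers for entry in p.fetch(patient_id)]

hospital = ProviderRepository("hospital")
clinic = ProviderRepository("clinic")
index = CentralIndex()
hospital.store("p1", "2013-05-02 discharge summary")
index.register("p1", hospital)
clinic.store("p1", "2013-06-11 lab panel")
index.register("p1", clinic)
print(index.aggregate("p1"))
```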
Design of a modular digital computer system, CDRL no. D001, final design plan
NASA Technical Reports Server (NTRS)
Easton, R. A.
1975-01-01
The engineering breadboard implementation for the CDRL no. D001 modular digital computer system developed during design of the logic system was documented. This effort followed the architecture study completed and documented previously, and was intended to verify the concepts of a fault tolerant, automatically reconfigurable, modular version of the computer system conceived during the architecture study. The system has a microprogrammed 32 bit word length, general register architecture and an instruction set consisting of a subset of the IBM System 360 instruction set plus additional fault tolerance firmware. The following areas were covered: breadboard packaging, central control element, central processing element, memory, input/output processor, and maintenance/status panel and electronics.
Collected Papers of the Soar/IFOR Project. Spring 1994
1994-04-25
[OCR-garbled page fragments; only partial phrases are legible: "leads directly to composite tactical actions", "the knowledge necessary for the agent to complete the 1-v-1 aggressive bo-fighter", "expressive power and ease of maintenance", "mapping agent goals to architectural goals in a simple manner", "power that no single agent has alone", and a citation ending "and Hutchinson, 1993]".]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draeger, E. W.
The Advanced Architecture and Portability Specialists team (AAPS) worked with a select set of LLNL application teams to develop and/or implement a portability strategy for next-generation architectures. The team also investigated new and updated programming models and helped develop programming abstractions targeting maintainability and performance portability. Significant progress was made on both fronts in FY17, resulting in multiple applications being significantly more prepared for the next-generation machines than before.
Characterizing the audibility of sound field with diffusion in architectural spaces
NASA Astrophysics Data System (ADS)
Utami, Sentagi Sesotya
The significance of diffusion control in room acoustics is that it attempts to avoid echoes by dispersing reflections while removing less valuable sound energy. Some applications place emphasis on the enhancement of late reflections to promote a sense of envelopment, and on methods required to measure the performance of diffusers. What remains unclear is the impact of diffusion on audible quality arising from the geometric arrangement of architectural elements. The objective of this research is to characterize the audibility of the sound field with diffusion in architectural space. In order to address this objective, an approach utilizing various methods and new techniques relevant to room acoustics standards was applied. An array of microphones based on beamforming (i.e., an acoustic camera) was utilized for field measurements in a recording studio, classrooms, auditoriums, concert halls and sport arenas. Given the ability to combine a visual image with acoustical data, the impulse responses measured were analyzed to identify the impact of diffusive surfaces on the early, late, and reverberant sound fields. The effects of the room geometry and the proportions of the diffusive and absorptive surfaces were observed by utilizing geometrical room acoustics simulations. The degree of diffuseness in each space was measured by coherences from different measurement positions, along with the acoustical conditions predicted by well-known objective parameters such as T30, EDT, C80, and C50. Noticeable differences in the auditory experience were investigated by utilizing computer-based survey techniques, including the use of an immersive virtual environment system, given the current software auralization capabilities. The results based on statistical analysis demonstrate the users' ability to localize the sound and to distinguish the intensity, clarity, and reverberation created within the virtual environment. The impact of architectural elements on diffusion control is evaluated through design-variable interactions, both objectively and subjectively. The effectiveness of the diffusive surfaces is determined by the echo reduction and the sense of complete immersion in a given room acoustic volume. Application of such methodology at various stages of design provides the ability to create a better auditory experience for the users. The results from the cases studied have contributed to the development of new acoustical treatments based on diffusion characteristics.
Developing Performance Based Requirements for Open Architecture Design
2006-04-30
Acquisition Research Symposium: Developing Performance Based Requirements for Open Architecture Design. Published 30 April 2006, by Brad Naegle. Presenter: Brad Naegle, Lieutenant
ERIC Educational Resources Information Center
Hill, Raymond; Klein, Raymond S.
A study examined the feasibility of adopting National Occupational Competency Testing Institute (NOCTI) examinations for use by completers of vocational programs in Florida comprehensive high schools. A total of 34 candidates in five occupational areas (architectural drafting, carpentry, plumbing, small engine repair, and welding) at four…
NASA Astrophysics Data System (ADS)
Cataldo, A.; De Benedetto, E.; Cannazza, G.; Huebner, C.; Trebbels, D.
2017-01-01
In this work, the performance of three time domain reflectometry (TDR) instruments (with different hardware architectures, specifications and costs) is comparatively assessed. The goal is to evaluate the performance of low-cost TDR instrumentation, in view of the development of a completely permanent TDR-based monitoring solution, wherein the cost of the instrument is so low that it can be left on-site, even unguarded, and controlled remotely. Without losing generality, the applications considered for the comparative experiments are the TDR-based detection of leaks in underground pipes and, more generally, of soil water content variations. For this reason, both laboratory and in-the-field experiments are carried out by comparatively using the three TDR instruments, in conjunction with wire-like sensing elements (SEs).
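For orientation, the measurement behind any such TDR instrument reduces to a time-of-flight calculation: an impedance discontinuity along the sensing element (for example a wetted section near a leak) reflects the pulse, and the apparent distance follows from d = v*t/2 with v = c/sqrt(eps). The sketch below uses illustrative numbers, not values from the experiments above.

```python
# Worked sketch of the time-of-flight relation behind TDR sensing:
# d = v * t / 2,  with propagation velocity v = c / sqrt(eps_apparent).
# The dielectric constant and delay below are illustrative, not measured values.
C0 = 299_792_458.0                  # speed of light in vacuum, m/s

def tdr_distance(round_trip_s, eps_apparent):
    v = C0 / eps_apparent ** 0.5    # velocity along the sensing element
    return v * round_trip_s / 2.0   # one-way distance to the reflection

print(round(tdr_distance(150e-9, eps_apparent=9.0), 2), "m")  # ~7.49 m
```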
Research on TCP/IP network communication based on Node.js
NASA Astrophysics Data System (ADS)
Huang, Jing; Cai, Lixiong
2018-04-01
Under big data, long-lived connections and high concurrency, conventional TCP/IP network communication suffers performance bottlenecks due to its blocking, multi-threaded service model. This paper presents an approach to TCP/IP network communication based on Node.js. After analyzing the characteristics of the Node.js architecture and its asynchronous, non-blocking I/O model, the source of its efficiency is discussed; the TCP/IP network communication model is then compared and analyzed to explain why the TCP/IP protocol stack is so widely used in network communication. Finally, to handle the large data volumes and high concurrency of a large-scale grape-growing environment monitoring process, a TCP server design based on Node.js is completed. The results show that the example runs stably and efficiently.
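The paper's server is written in Node.js; as a language-neutral illustration of the same single-threaded, event-driven, non-blocking service model, here is a minimal echo-style TCP server using Python's asyncio event loop. The port number and line-based framing are arbitrary choices for the sketch, not the paper's design.

```python
# Illustration of the asynchronous, non-blocking TCP service model the paper
# attributes to Node.js, written here with Python's asyncio event loop.
# Port number and line-based framing are arbitrary choices for the sketch.
import asyncio

async def handle_client(reader, writer):
    # Each connection is a coroutine; awaiting I/O never blocks the event loop,
    # so one thread can serve many concurrent monitoring clients.
    while (data := await reader.readline()):
        writer.write(b"ack: " + data)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "127.0.0.1", 9000)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```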
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draeger, Erik W.
This report documents that the work of creating a strategic plan and beginning customer engagements has been completed. The milestone description is: The newly formed advanced architecture and portability specialists (AAPS) team will develop a strategic plan to meet the goals of 1) sharing knowledge and experience with code teams to ensure that ASC codes run well on new architectures, and 2) supplying skilled computational scientists to put the strategy into practice. The plan will be delivered to ASC management in the first quarter. By the fourth quarter, the team will identify their first customers within PEM and IC, perform an initial assessment of scalability and performance bottlenecks for next-generation architectures, and embed AAPS team members with customer code teams to assist with initial portability development within standalone kernels or proxy applications.
New concept of aging care architecture landscape design based on sustainable development
NASA Astrophysics Data System (ADS)
Xu, Ying
2017-05-01
As the aging of the population becomes a serious problem in China, aging care is now one of the most pressing issues facing society. Many private and public aging care buildings and facilities have been built. At present, however, attention is paid mainly to architectural and interior design rather than to scientific, ecological and sustainable design of aged care landscape architecture. Considering the social economy, population, resources and the coordinated development of the environment, and taking the elderly as a special group, this paper follows the principles of sustainable development, conducts comprehensive design planning for aged care landscape architecture, and develops a deeper understanding and exploration through changes in the form of architectural space, ecological landscape planting, new materials and technology, and ecological energy utilization.
NASA Astrophysics Data System (ADS)
Dumoulin, Jean; Crinière, Antoine; Averty, Rodolphe
2015-04-01
An infrared system has been developed to monitor transport infrastructures in a standalone configuration. Results obtained on bridges open to traffic allow the inner structure of the decks to be retrieved. To complete this study, experiments were carried out over several months to monitor two reinforced concrete beams, each 16 m long and weighing 21 t. Detection of a damaged area on one of the two beams was achieved with a Pulse Phase Thermography approach, with measurements carried out over several months. Finally, conclusions on the robustness of the system are drawn and perspectives are presented.
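Pulse Phase Thermography itself reduces to taking the FFT of each pixel's temperature history and keeping the phase at a low-frequency bin, where delayed heat diffusion over a defect shows up as a phase contrast. A minimal numpy sketch follows, with a synthetic image sequence standing in for the recorded thermograms; it is an illustration of the general method, not the system described above.

```python
# Minimal sketch of the Pulse Phase Thermography (PPT) computation: take the
# FFT of every pixel's cooling curve and keep the phase of a low-frequency bin.
# The synthetic sequence below stands in for real infrared frames.
import numpy as np

rng = np.random.default_rng(0)
n_frames, h, w = 128, 32, 32
t = np.arange(n_frames)[:, None, None]
frames = np.exp(-t / 20.0) + 0.01 * rng.standard_normal((n_frames, h, w))
frames[:, 10:16, 10:16] += 0.2 * np.exp(-t / 60.0)   # slower cooling over a "defect"

spectrum = np.fft.fft(frames, axis=0)        # FFT along the time axis, per pixel
phase_map = np.angle(spectrum[1])            # phase image at the first non-zero bin
print(phase_map.shape, float(phase_map[12, 12] - phase_map[0, 0]))
```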
NASA Astrophysics Data System (ADS)
Drobitch, Justine L.; Ahsanul Abeed, Md; Bandyopadhyay, Supriyo
2017-10-01
We describe an approach to implement precessional switching of a perpendicular-magnetic-anisotropy magneto-tunneling-junction (p-MTJ) without using any magnetic field. The switching is accomplished with voltage-controlled-magnetic-anisotropy (VCMA), spin transfer torque (STT) and mechanical strain. The soft layer of the p-MTJ is magnetostrictive and the strain acts as an effective in-plane magnetic field around which the magnetization of the soft layer precesses to complete a flip. A two-terminal energy-efficient p-MTJ based memory cell, that is compatible with crossbar architecture and high cell density, is designed.
A flexible acquisition cycle for incompletely defined fieldbus protocols.
Gaitan, Vasile-Gheorghita; Gaitan, Nicoleta-Cristina; Ungurean, Ioan
2014-05-01
Real time data-acquisition from fieldbuses strongly depends on the network type and protocol used. Currently, there is an impressive number of fieldbuses; some of them are completely defined and others are incompletely defined. In the second category, the time variable, the main element in real-time data acquisition, does not appear explicitly. Examples include protocols such as Modbus ASCII/RTU, M-bus, ASCII character-based, and so on. This paper defines a flexible acquisition cycle based on the Master-Slave architecture that can be implemented on a Master station, called a Base Station Gateway (BSG). The BSG can add a timestamp for temporal location of data. The paper also presents a possible extension of the Modbus protocol, developed as a simple and low-cost solution based on existing hardware. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
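A schematic sketch of the acquisition-cycle idea: the master polls each slave in turn over a time-less protocol and attaches its own timestamp, so the data are still temporally located. The function and field names are invented and the "bus" is simulated; this is not the BSG implementation.

```python
# Schematic sketch of a master-side acquisition cycle: poll each slave in turn
# over a protocol with no explicit time variable (e.g. a Modbus-like
# request/response) and have the base station attach the timestamp itself.
# Names are invented and the bus exchange is simulated.
import time

def read_slave(address):
    # Placeholder for a real request/response exchange on the fieldbus.
    return {"address": address, "value": 20.0 + address}

def acquisition_cycle(slave_addresses, period_s=1.0, cycles=2):
    samples = []
    for _ in range(cycles):
        for addr in slave_addresses:
            reply = read_slave(addr)
            reply["timestamp"] = time.time()   # temporal location added by the master
            samples.append(reply)
        time.sleep(period_s)
    return samples

for s in acquisition_cycle([1, 2, 3], period_s=0.1, cycles=1):
    print(s)
```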
A comparative analysis of loop heat pipe based thermal architectures for spacecraft thermal control
NASA Technical Reports Server (NTRS)
Pauken, Mike; Birur, Gaj
2004-01-01
Loop Heat Pipes (LHP) have gained acceptance as a viable means of heat transport in many spacecraft in recent years. However, applications using LHP technology tend to only remove waste heat from a single component to an external radiator. Removing heat from multiple components has been done by using multiple LHPs. This paper discusses the development and implementation of a Loop Heat Pipe based thermal architecture for spacecraft. In this architecture, a Loop Heat Pipe with multiple evaporators and condensers is described in which heat load sharing and thermal control of multiple components can be achieved. A key element in using a LHP thermal architecture is defining the need for such an architecture early in the spacecraft design process. This paper describes an example in which a LHP based thermal architecture can be used and how such a system can have advantages in weight, cost and reliability over other kinds of distributed thermal control systems. The example used in this paper focuses on a Mars Rover Thermal Architecture. However, the principles described here are applicable to Earth orbiting spacecraft as well.
Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs
NASA Technical Reports Server (NTRS)
Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan
2006-01-01
Image-based wavefront sensing (WFS) provides significant advantages over interferometric-based wavefront sensors such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore, specialized high-performance computing architectures are required in applications utilizing the image-based approach. The development and testing of these high-performance computing architectures are essential to such missions as James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). The development of these specialized computing architectures requires numerous two-dimensional Fourier Transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis will be presented. The solutions offered could be applied to other all-to-all communication and scientifically computationally complex problems.
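The all-to-all communication arises because a 2-D FFT factors into 1-D FFTs over rows, a global transpose, and 1-D FFTs over the transposed rows; on a distributed machine the transpose is the all-to-all exchange. The single-process numpy sketch below verifies that factorization; it does not reproduce the DSP/FPGA implementations discussed above.

```python
# Why the 2-D FFT forces an all-to-all exchange: it factors into 1-D FFTs over
# rows, a global transpose (the all-to-all step on a distributed machine),
# and 1-D FFTs over the transposed rows.  Verified here in a single process.
import numpy as np

x = np.random.default_rng(1).standard_normal((64, 64))

rows = np.fft.fft(x, axis=1)          # each node transforms its local rows
transposed = rows.T                   # distributed: the all-to-all data exchange
result = np.fft.fft(transposed, axis=1).T

print(np.allclose(result, np.fft.fft2(x)))   # True
```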
Open architecture design and approach for the Integrated Sensor Architecture (ISA)
NASA Astrophysics Data System (ADS)
Moulton, Christine L.; Krzywicki, Alan T.; Hepp, Jared J.; Harrell, John; Kogut, Michael
2015-05-01
Integrated Sensor Architecture (ISA) is designed in response to stovepiped integration approaches. The design, based on the principles of Service Oriented Architectures (SOA) and Open Architectures, addresses the problem of integration, and is not designed for specific sensors or systems. The use of SOA and Open Architecture approaches has led to a flexible, extensible architecture. Using these approaches, and supported with common data formats, open protocol specifications, and Department of Defense Architecture Framework (DoDAF) system architecture documents, an integration-focused architecture has been developed. ISA can help move the Department of Defense (DoD) from costly stovepipe solutions to a more cost-effective plug-and-play design to support interoperability.
A knowledge-base generating hierarchical fuzzy-neural controller.
Kandadai, R M; Tien, J M
1997-01-01
We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base is in the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we further extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Abney, Morgan B.; Frederick, Kenneth R.; Greenwood, Zachary W.; Kayatin, Matthew J.; Newton, Robert L.; Parrish, Keith J.; Roman, Monsi C.; Takada, Kevin C.; Miller, Lee A.;
2013-01-01
A subsystem architecture derived from the International Space Station's (ISS) Atmosphere Revitalization Subsystem (ARS) has been functionally demonstrated. This ISS-derived architecture features re-arranged unit operations for trace contaminant control and carbon dioxide removal functions, a methane purification component as a precursor to enhance resource recovery over ISS capability, operational modifications to a water electrolysis-based oxygen generation assembly, and an alternative major atmospheric constituent monitoring concept. Results from this functional demonstration are summarized and compared to the performance observed during ground-based testing conducted on an ISS-like subsystem architecture. Considerations for further subsystem architecture and process technology development are discussed.
Utopian Kinetic Structures and Their Impact on the Contemporary Architecture
NASA Astrophysics Data System (ADS)
Cudzik, Jan; Nyka, Lucyna
2017-10-01
This paper delves into relationships between twentieth century utopian concepts of movable structures and the kinematic solutions implemented in contemporary architectural projects. The reason for conducting this study is to determine the impact of early architectural conceptions on today’s solutions. This paper points out close links that stem from the imagination of artists and architects working in the 1960s and 70s and the solutions implemented by contemporary architects of that era. The research method of this paper is based on comparative analyses of architectural forms with adopted kinematic solutions. It is based on studies of archive drawings and the examination of theoretical concepts. The research pertains to the different forms of such mobility that evolved in the 1960s and 70s. Many of them, usually based on simple forms of movement, were realized. The more complicated ones remained in the sphere of utopian visionary architecture. In those cases, projects often exceeded the technical limitations and capabilities of the design tools of the time. Finally, after some decades, with the development of innovative architectural design tools and new building technologies, many early visions materialized into architectural forms. In conclusion, this research indicates that modern kinematic design solutions are often based on conceptual designs formed in the beginning of the second half of the twentieth century.
An integrated content and metadata based retrieval system for art.
Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James
2004-03-01
A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.
TEXSYS. [a knowledge based system for the Space Station Freedom thermal control system test-bed
NASA Technical Reports Server (NTRS)
Bull, John
1990-01-01
The Systems Autonomy Demonstration Project has recently completed a major test and evaluation of TEXSYS, a knowledge-based system (KBS) which demonstrates real-time control and FDIR for the Space Station Freedom thermal control system test-bed. TEXSYS is the largest KBS ever developed by NASA and offers a unique opportunity for the study of technical issues associated with the use of advanced KBS concepts including: model-based reasoning and diagnosis, quantitative and qualitative reasoning, integrated use of model-based and rule-based representations, temporal reasoning, and scale-up performance issues. TEXSYS represents a major achievement in advanced automation that has the potential to significantly influence Space Station Freedom's design for the thermal control system. An overview of the Systems Autonomy Demonstration Project, the thermal control system test-bed, the TEXSYS architecture, preliminary test results, and thermal domain expert feedback are presented.
NASA Astrophysics Data System (ADS)
Liu, Chen; Han, Runze; Zhou, Zheng; Huang, Peng; Liu, Lifeng; Liu, Xiaoyan; Kang, Jinfeng
2018-04-01
In this work we present a novel convolution computing architecture based on metal oxide resistive random access memory (RRAM) to process the image data stored in the RRAM arrays. The proposed image-storage architecture offers better speed and device-consumption efficiency than the previous kernel-storage architecture. Further, we improve the architecture for high-accuracy, low-power computing by utilizing binary storage and a series resistor. For a 28 × 28 image and 10 kernels of size 3 × 3, compared with the previous kernel-storage approach, the newly proposed architecture shows excellent performance, including: 1) almost 100% accuracy within 20% LRS variation and 90% HRS variation; 2) a more than 67-times speed boost; and 3) 71.4% energy saving.
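At its core, such in-memory convolution maps each kernel window onto a dot product that a crossbar evaluates in one step: stored conductances times applied voltages sum as currents on a bit line. The numpy sketch below illustrates that mapping with binary states and arbitrary sizes; it is not the device-level architecture or the parameters reported above.

```python
# Sketch of the convolution-in-memory idea: a 3x3 kernel applied to a stored
# image patch is a dot product, which a crossbar computes in one step as
# I = G . V (conductances times applied voltages, currents summed per line).
# Binary states and sizes are illustrative, not the device parameters above.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 2, (28, 28))                # binary-stored image (LRS/HRS)
kernel = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])

def crossbar_convolve(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    weights = kernel.reshape(-1)                    # voltages applied to word lines
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            patch = image[r:r + kh, c:c + kw].reshape(-1)   # stored conductances
            out[r, c] = patch @ weights             # one analog multiply-accumulate
    return out

print(crossbar_convolve(image, kernel).shape)       # (26, 26)
```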
Real-time field programmable gate array architecture for computer vision
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar
2001-01-01
This paper presents an architecture for real-time generic convolution of a mask and an image. The architecture is intended for fast low-level image processing. The field programmable gate array (FPGA)-based architecture takes advantage of the availability of registers in FPGAs to implement an efficient and compact module to process the convolutions. The architecture is designed to minimize the number of accesses to the image memory and it is based on parallel modules with internal pipeline operation in order to improve its performance. The architecture is prototyped in an FPGA, but it can be implemented on dedicated very-large-scale integrated devices to reach higher clock frequencies. Complexity issues, FPGA resource utilization, FPGA limitations, and real-time performance are discussed. Some results are presented and discussed.
Cool Apps: Building Cryospheric Data Applications with Standards-Based Service Oriented Architecture
NASA Astrophysics Data System (ADS)
Oldenburg, J.; Truslove, I.; Collins, J. A.; Liu, M.; Lewis, S.; Brodzik, M.
2012-12-01
The National Snow and Ice Data Center (NSIDC) holds a large collection of cryospheric data, and is involved in a number of informatics research and development projects aimed at improving the discoverability and accessibility of these data. To develop high-quality software in a timely manner, we have adopted a Service-Oriented Architecture (SOA) approach for our core technical infrastructure development. Data services at NSIDC are internally exposed to other tools and applications through standards-based service interfaces. These standards include OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting), various OGC (Open Geospatial Consortium) standards including WMS (Web Map Service) and WFS (Web Feature Service), ESIP (Federation of Earth Sciences Information Partners) OpenSearch, and NSIDC-defined service endpoints which follow a RESTful architecture. By taking a standards-based approach, we are able to use off-the-shelf tools and libraries to consume, translate and broker these data services, and thus develop applications faster. Additionally, by exposing public interfaces to these services we provide valuable data services to technical collaborators; for example, NASA Reverb (http://reverb.echo.nasa.gov) uses NSIDC's WMS services. Our latest generation of web applications consumes these data services directly. The most complete example of this is the Operation IceBridge Data Portal (http://nsidc.org/icebridge/portal), which depends on many of the aforementioned services, retrieving data in several ways. The maps it displays are obtained through the use of WMS and WFS protocols from a MapServer instance hosted at NSIDC. Links to the scientific data collected on Operation IceBridge campaigns are obtained through ESIP OpenSearch requests to service providers that encapsulate our metadata databases. These standards-based web services are also developed at NSIDC and are designed to be used independently of the Portal. This poster provides a visual representation of the relationships described above, with additional details and examples, and more generally outlines the benefits and challenges of this SOA approach.
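As a concrete illustration of consuming such standards-based services, the snippet below assembles an OGC WMS 1.3.0 GetMap URL from its standard query parameters. The endpoint and layer name are placeholders, not NSIDC's actual service addresses or configuration.

```python
# Illustrative construction of an OGC WMS 1.3.0 GetMap request, the kind of
# standards-based call a portal issues for its map layers.  The endpoint and
# layer name are placeholders, not NSIDC's actual service configuration.
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=512, height=512):
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "CRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),   # lat/lon axis order for EPSG:4326 in WMS 1.3.0
        "WIDTH": width, "HEIGHT": height,
        "FORMAT": "image/png",
    }
    return f"{endpoint}?{urlencode(params)}"

print(wms_getmap_url("https://example.org/wms", "sea_ice_extent",
                     bbox=(60.0, -180.0, 90.0, 180.0)))
```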
Nanometric summation architecture based on optical near-field interaction between quantum dots.
Naruse, Makoto; Miyazaki, Tetsuya; Kubota, Fumito; Kawazoe, Tadashi; Kobayashi, Kiyoshi; Sangu, Suguru; Ohtsu, Motoichi
2005-01-15
A nanoscale data summation architecture is proposed and experimentally demonstrated based on the optical near-field interaction between quantum dots. Based on local electromagnetic interactions between a few nanometric elements via optical near fields, we can combine multiple excitations at a certain quantum dot, which allows construction of a summation architecture. Summation plays a key role for content-addressable memory, which is one of the most important functions in optical networks.
An e-consent-based shared EHR system architecture for integrated healthcare networks.
Bergmann, Joachim; Bott, Oliver J; Pretschner, Dietrich P; Haux, Reinhold
2007-01-01
Virtual integration of distributed patient data promises advantages over a consolidated health record, but raises questions mainly about practicability and authorization concepts. Our work aims at the specification and development of a virtual shared health record architecture using a patient-centred integration and authorization model. A literature survey summarizes considerations of current architectural approaches. Complemented by a methodical analysis in two regional settings, a formal architecture model was specified and implemented. Results presented in this paper are a survey of architectural approaches for shared health records and an architecture model for a virtual shared EHR, which combines a patient-centred integration policy with provider-oriented document management. An electronic consent system ensures that access to the shared record remains under the control of the patient. A corresponding system prototype has been developed and is currently being introduced and evaluated in a regional setting. The proposed architecture is capable of partly replacing message-based communications. Operating highly available provider repositories for the virtual shared EHR requires advanced technology and probably means additional costs for care providers. Acceptance of the proposed architecture depends on transparently embedding document validation and digital signature into the work processes. The paradigm shift from paper-based messaging to a "pull model" needs further evaluation.
Building Interactive Simulations in Web Pages without Programming.
Mailen Kootsey, J; McAuley, Grant; Bernal, Julie
2005-01-01
A software system is described for building interactive simulations and other numerical calculations in Web pages. The system is based on a new Java-based software architecture named NumberLinX (NLX) that isolates each function required to build the simulation so that a library of reusable objects could be assembled. The NLX objects are integrated into a commercial Web design program for coding-free page construction. The model description is entered through a wizard-like utility program that also functions as a model editor. The complete system permits very rapid construction of interactive simulations without coding. A wide range of applications are possible with the system beyond interactive calculations, including remote data collection and processing and collaboration over a network.
Scaling of muscle architecture and fiber types in the rat hindlimb.
Eng, Carolyn M; Smallwood, Laura H; Rainiero, Maria Pia; Lahey, Michele; Ward, Samuel R; Lieber, Richard L
2008-07-01
The functional capacity of a muscle is determined by its architecture and metabolic properties. Although extensive analyses of muscle architecture and fiber type have been completed in a large number of muscles in numerous species, there have been few studies that have looked at the interrelationship of these functional parameters among muscles of a single species. Nor have the architectural properties of individual muscles been compared across species to understand scaling. This study examined muscle architecture and fiber type in the rat (Rattus norvegicus) hindlimb to examine each muscle's functional specialization. Discriminant analysis demonstrated that architectural properties are a greater predictor of muscle function (as defined by primary joint action and anti-gravity or non anti-gravity role) than fiber type. Architectural properties were not strictly aligned with fiber type, but when muscles were grouped according to anti-gravity versus non-anti-gravity function there was evidence of functional specialization. Specifically, anti-gravity muscles had a larger percentage of slow fiber type and increased muscle physiological cross-sectional area. Incongruities between a muscle's architecture and fiber type may reflect the variability of functional requirements on single muscles, especially those that cross multiple joints. Additionally, discriminant analysis and scaling of architectural variables in the hindlimb across several mammalian species was used to explore whether any functional patterns could be elucidated within single muscles or across muscle groups. Several muscles deviated from previously described muscle architecture scaling rules and there was large variability within functional groups in how muscles should be scaled with body size. This implies that functional demands placed on muscles across species should be examined on the single muscle level.
The Live Access Server - A Web-Services Framework for Earth Science Data
NASA Astrophysics Data System (ADS)
Schweitzer, R.; Hankin, S. C.; Callahan, J. S.; O'Brien, K.; Manke, A.; Wang, X. Y.
2005-12-01
The Live Access Server (LAS) is a general purpose Web-server for delivering services related to geo-science data sets. Data providers can use the LAS architecture to build custom Web interfaces to their scientific data. Users and client programs can then access the LAS site to search the provider's on-line data holdings, make plots of data, create sub-sets in a variety of formats, compare data sets and perform analysis on the data. The Live Access server software has continued to evolve by expanding the types of data (in-situ observations and curvilinear grids) it can serve and by taking advantages of advances in software infrastructure both in the earth sciences community (THREDDS, the GrADS Data Server, the Anagram framework and Java netCDF 2.2) and in the Web community (Java Servlet and the Apache Jakarta frameworks). This presentation will explore the continued evolution of the LAS architecture towards a complete Web-services-based framework. Additionally, we will discuss the redesign and modernization of some of the support tools available to LAS installers. Soon after the initial implementation, the LAS architecture was redesigned to separate the components that are responsible for the user interaction (the User Interface Server) from the components that are responsible for interacting with the data and producing the output requested by the user (the Product Server). During this redesign, we changed the implementation of the User Interface Server from CGI and JavaScript to the Java Servlet specification using Apache Jakarta Velocity backed by a database store for holding the user interface widget components. The User Interface server is now quite flexible and highly configurable because we modernized the components used for the implementation. Meanwhile, the implementation of the Product Server has remained a Perl CGI-based system. Clearly, the time has come to modernize this part of the LAS architecture. Before undertaking such a modernization it is important to understand what we hope to gain. Specifically we would like to make it even easier to add new output products into our core system based on the Ferret analysis and visualization package. By carefully factoring the tasks needed to create a product we will be able to create new products simply by adding a description of the product into the configuration and by writing the Ferret script needed to create the product. No code will need to be added to the Product Server to bring the new product on-line. The new architecture should be faster at extracting and processing configuration information needed to address each request. Finally, the new Product Server architecture should make it even easier to pass specialized configuration information to the Product Server to deal with unanticipated special data structures or processing requirements.
Guiding Principles for Data Architecture to Support the Pathways Community HUB Model
Zeigler, Bernard P.; Redding, Sarah; Leath, Brenda A.; Carter, Ernest L.; Russell, Cynthia
2016-01-01
Introduction: The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB use Pathways to coordinate agencies and service providers in the community to eliminate the inefficiencies and duplication that exist among them. Pathways Community HUB Model and Formalization: Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves through decision-support tools for care coordinators and other users to track activities and outcomes, and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. Requirements for Data Architecture to Support the Pathways Community HUB Model: The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool for communities interested in adopting the Model to learn from and to adapt in their own development and implementation efforts. Problems with Quality of Data Extracted from the CHAP Database: Experience with the Community Health Access Project (CHAP) data base system (the core implementation of the Model) has identified several issues and remedies that have been developed to address these issues. Based on analysis of issues and remedies, we present several key features for a data architecture meeting the just mentioned recommendations. Implementation of Features: Presentation of features is followed by a practical guide to their implementation allowing an organization to consider either tailoring off-the-shelf generic systems to meet the requirements or offerings that are specialized for community-based care coordination. Discussion: Looking to future extensions, we discuss the utility and prospects for an ontology to include care coordination in the Unified Medical Language System (UMLS) of the National Library of Medicine and other existing medical and nursing taxonomies. Conclusions and Recommendations: Pathways structures are an important principle, not only for organizing the care coordination activities, but also for structuring the data stored in electronic form in the conduct of such care. We showed how the proposed architecture encourages design of effective decision support systems for coordinated care and suggested how interested organizations can set about acquiring such systems. Although the presentation focuses on the Pathways Community HUB Model, the principles for data architecture are stated in generic form and are applicable to any health information system for improving care coordination services and population health. PMID:26870743
A mission operations architecture for the 21st century
NASA Technical Reports Server (NTRS)
Tai, W.; Sweetnam, D.
1996-01-01
An operations architecture is proposed for low cost missions beyond the year 2000. The architecture consists of three elements: a service based architecture; a demand access automata; and distributed science hubs. The service based architecture is based on a set of standard multimission services that are defined, packaged and formalized by the deep space network and the advanced multi-mission operations system. The demand access automata is a suite of technologies which reduces the need to be in contact with the spacecraft, and thus reduces operating costs. The beacon signaling, the virtual emergency room, and the high efficiency tracking automata technologies are described. The distributed science hubs provide information system capabilities to the small science oriented flight teams: individual access to all traditional mission functions and services; multimedia intra-team communications, and automated direct transparent communications between the scientists and the instrument.
Model-Driven Architecture for Agent-Based Systems
NASA Technical Reports Server (NTRS)
Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.
2004-01-01
The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. The software artifact generation is based on a metamodel. Each component maps to a UML structured component which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
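The one-component-to-many-artifacts mapping can be sketched in a few lines. This is only a toy generator under assumed template strings; it is not the actual Cougaar MDA toolchain, and the component description and output formats are hypothetical.

```python
# Toy sketch of metamodel-driven artifact generation: one platform-independent
# component description is translated into several platform-specific artifacts
# (code stub, documentation, test skeleton). Templates are illustrative only.
COMPONENT = {
    "name": "InventoryPlanner",
    "operations": ["planReplenishment", "reportStatus"],
}

def generate_java_stub(component):
    ops = "\n".join(f"    public void {op}() {{ /* TODO */ }}"
                    for op in component["operations"])
    return f"public class {component['name']} {{\n{ops}\n}}\n"

def generate_docs(component):
    return (f"{component['name']} - operations:\n" +
            "\n".join(f"- {op}" for op in component["operations"]) + "\n")

def generate_test_skeleton(component):
    return "\n".join(f"def test_{op}():\n    assert True  # placeholder"
                     for op in component["operations"]) + "\n"

if __name__ == "__main__":
    for generator in (generate_java_stub, generate_docs, generate_test_skeleton):
        print(generator(COMPONENT))
```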
ERIC Educational Resources Information Center
Carlhed, Carina
2017-01-01
The aim of the study was to analyse enrolment patterns, and study efficiency and completion among students in programmes with professional qualifications, using microdata from Statistics Sweden. The programmes were Architecture, Medicine, Nursing, Law, Social work, Psychology, and Engineering (year 2001-2002, n = 15,918). Using the concepts from…
A support architecture for reliable distributed computing systems
NASA Technical Reports Server (NTRS)
Mckendry, Martin S.
1986-01-01
The Clouds kernel design went through several design phases and is nearly complete. The object manager, the process manager, the storage manager, the communications manager, and the actions manager are examined.
Cognitive Architectures and Human-Computer Interaction. Introduction to Special Issue.
ERIC Educational Resources Information Center
Gray, Wayne D.; Young, Richard M.; Kirschenbaum, Susan S.
1997-01-01
In this introduction to a special issue on cognitive architectures and human-computer interaction (HCI), editors and contributors provide a brief overview of cognitive architectures. The following four architectures represented by articles in this issue are: Soar; LICAI (linked model of comprehension-based action planning and instruction taking);…
Development of Design Expertise by Architecture Students
ERIC Educational Resources Information Center
Oluwatayo, Adedapo Adewunmi; Ezema, Isidore; Opoko, Akunnaya
2017-01-01
What constitutes design ability and design expertise in architecture? Which categories of design expertise can be identified amongst architecture students? And which input factors differentiate one level of expertise from another? These questions were addressed in a survey of architecture students in Nigeria. Based on the results, students were…
Hutchison, Kimberly N.; Song, Yanna; Wang, Lily; Malow, Beth A.
2008-01-01
Background: Polysomnography is associated with changes in sleep architecture called the first-night effect. This effect is believed to result from sleeping in an unusual environment and the technical equipment used to study sleep. Sleep experts hope to decrease this variable by providing a more familiar, comfortable atmosphere for sleep testing through hotel-based sleep centers. In this study, we compared the sleep parameters of patients studied in our hotel-based and hospital-based sleep laboratories. Methods: We retrospectively reviewed polysomnograms completed in our hotel-based and hospital-based sleep laboratories from August 2003 to July 2005. All patients were undergoing evaluation for obstructive sleep apnea. Hospital-based patients were matched for age and apnea-hypopnea index with hotel-based patients. We compared the sleep architecture changes associated with the first-night effect in the two groups. The associated conditions and symptoms listed on the polysomnography referral forms are also compared. Results: No significant differences were detected between the two groups in sleep onset latency, sleep efficiency, REM sleep latency, total amount of slow wave sleep (NREM stages 3 and 4), arousal index, and total stage 1 sleep. Conclusions: This pilot study failed to show a difference in sleep parameters associated with the first-night effect in patients undergoing sleep studies in our hotel and hospital-based sleep laboratories. Future studies need to compare the first-night effect in different sleep disorders, preferably in multi-night recordings. Citation: Hutchison KN; Song Y; Wang L; Malow BA. Analysis of sleep parameters in patients with obstructive sleep apnea studied in a hospital vs. A hotel-based sleep center. J Clin Sleep Med 2008;4(2):119–122. PMID:18468309
MODULAR MANIPULATOR FOR ROBOTICS APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joseph W. Geisinger, Ph.D.
ARM Automation, Inc. is developing a framework of modular actuators that can address the DOE's wide range of robotics needs. The objective of this effort is to demonstrate the effectiveness of this technology by constructing a manipulator from these actuators within a glovebox for Automated Plutonium Processing (APP). At the end of the project, the system of actuators was used to construct several different manipulator configurations, which accommodate common glovebox tasks such as repackaging. The modular nature and quick-connects of this system simplify installation into "hot" boxes and any potential modifications or repair therein. This work focused on the development of self-contained robotic actuator modules including the embedded electronic controls for the purpose of building a manipulator system. Both of the actuators developed under this project contain the control electronics, sensors, motor, gear train, wiring, system communications and mechanical interfaces of a complete robotics servo device. Test actuators and accompanying DISC{trademark}s underwent validation testing at The University of Texas at Austin and ARM Automation, Inc. following final design and fabrication. The system also included custom links, an umbilical cord, an open architecture PC-based system controller, and operational software that permitted integration into a completely functional robotic manipulator system. The open architecture on which this system is based avoids proprietary interfaces and communication protocols which only serve to limit the capabilities and flexibility of automation equipment. The system was integrated and tested in the contractor's facility for intended performance and operations. The manipulator was tested using the full-scale equipment and process mock-ups. The project produced a practical and operational system including a quantitative evaluation of its performance and cost.
NASA Astrophysics Data System (ADS)
Carlowitz, Christian; Girg, Thomas; Ghaleb, Hatem; Du, Xuan-Quang
2017-09-01
For ultra-high speed communication systems at high center frequencies above 100 GHz, we propose a disruptive change in system architecture to address major issues regarding amplifier chains with a large number of amplifier stages. They cause a high noise figure and high power consumption when operating close to the frequency limits of the underlying semiconductor technologies. Instead of scaling a classic homodyne transceiver system, we employ repeated amplification in single-stage amplifiers through positive feedback as well as synthesizer-free self-mixing demodulation at the receiver to simplify the system architecture notably. Since the amplitude and phase information for the emerging oscillation is defined by the input signal and the oscillator is only turned on for a very short time, it can be left unstabilized and thus come without a PLL. As soon as gain is no longer the most prominent issue, relaxed requirements for all the other major components allow reconsidering their implementation concepts to achieve further improvements compared to classic systems. This paper provides the first comprehensive overview of all major design aspects that need to be addressed upon realizing a SPARS-based transceiver. At system level, we show how to achieve high data rates and a noise performance comparable to classic systems, backed by scaled demonstrator experiments. Regarding the transmitter, design considerations for efficient quadrature modulation are discussed. For the frontend components that replace PA and LNA amplifier chains, implementation techniques for regenerative sampling circuits based on super-regenerative oscillators are presented. Finally, an analog-to-digital converter with outstanding performance and complete interfaces both to the analog baseband as well as to the digital side completes the set of building blocks for efficient ultra-high speed communication.
Kullgren, Jeffrey T; Hafez, Dina; Fedewa, Allison; Heisler, Michele
2017-09-01
The purpose of this paper was to review studies of behavioral economic interventions (financial incentives, choice architecture modifications, or commitment devices) to prevent type 2 diabetes mellitus (T2DM) among at-risk patients or improve self-management among patients with T2DM. We found 15 studies that used varied study designs and outcomes to test behavioral economic interventions in clinical, workplace, or health plan settings. Of four studies that focused on prevention of T2DM, two found that financial incentives increased weight loss and completion of a fasting blood glucose test, and two choice architecture modifications had mixed effects in encouraging completion of tests to screen for T2DM. Of 11 studies that focused on improving self-management of T2DM, four of six tests of financial incentives demonstrated increased engagement in recommended care processes or improved biometric measures, and three of five tests of choice architecture modifications found improvements in self-management behaviors. Though few studies have tested behavioral economic interventions for prevention or treatment of T2DM, those that have suggest that such approaches have the potential to improve patient behaviors, and such approaches should be tested more broadly.
Kullgren, Jeffrey T.; Hafez, Dina; Fedewa, Allison; Heisler, Michele
2017-01-01
Purpose of review To review studies of behavioral economic interventions (financial incentives, choice architecture modifications, or commitment devices) to prevent type 2 diabetes mellitus (T2DM) among at-risk patients or improve self-management among patients with T2DM. Recent findings We found 15 studies that used varied study designs and outcomes to test behavioral economic interventions in clinical, workplace, or health plan settings. Of four studies that focused on prevention of T2DM, two found that financial incentives increased weight loss and completion of a fasting blood glucose test, and two choice architecture modifications had mixed effects in encouraging completion of tests to screen for T2DM. Of 11 studies that focused on improving self-management of T2DM, four of six tests of financial incentives demonstrated increased engagement in recommended care processes or improved biometric measures, and three of five tests of choice architecture modifications found improvements in self-management behaviors. Summary Though few studies have tested behavioral economic interventions for prevention or treatment of T2DM, those that have suggest such approaches have potential to improve patient behaviors and should be tested more broadly. PMID:28755061
Information Quality Evaluation of C2 Systems at Architecture Level
2014-06-01
Capability evaluation of C2 systems at the architecture level is necessary and important for improving system capability at the stage of architecture design. This paper proposes a method for information quality evaluation of C2 systems at the architecture level, based on architecture models of C2 systems, which can help to identify key factors impacting information quality and improve system capability at the design stage. First, the information quality model is…
2007-12-01
…and a Behavioral model. Finally, we build a small agent-based model using the component architecture to demonstrate the library's functionality. This work prototypes an architectural design which is generalizable, reusable, and extensible. We have created an initial set of model elements that demonstrate…
Transmission control unit drive based on the AUTOSAR standard
NASA Astrophysics Data System (ADS)
Guo, Xiucai; Qin, Zhen
2018-03-01
Automotive electronics embedded system development based on the AUTOSAR standard is a future trend in the automotive electronics industry. The AUTOSAR automotive architecture standard has proposed the transmission control unit (TCU) development architecture and designed its interfaces and configurations in detail. This essay discusses how to drive the TCU based on the AUTOSAR standard architecture. The results show that driving the TCU with the AUTOSAR system improves reliability and shortens development cycles.
Unified web-based network management based on distributed object orientated software agents
NASA Astrophysics Data System (ADS)
Djalalian, Amir; Mukhtar, Rami; Zukerman, Moshe
2002-09-01
This paper presents an architecture that provides a unified web interface to managed network devices that support CORBA, OSI or Internet-based network management protocols. A client gains access to managed devices through a web browser, which is used to issue management operations and receive event notifications. The proposed architecture is compatible with both the OSI Management Reference Model and CORBA. The steps required for designing the building blocks of such an architecture are identified.
Distributed information system architecture for Primary Health Care.
Grammatikou, M; Stamatelopoulos, F; Maglaris, B
2000-01-01
We present a distributed architectural framework for Primary Health Care (PHC) Centres. Distribution is handled through the introduction of the Roaming Electronic Health Care Record (R-EHCR) and the use of local caching and incremental update of a global index. The proposed architecture is designed to accommodate a specific PHC workflow model. Finally, we discuss a pilot implementation in progress, which is based on CORBA and web-based user interfaces. However, the conceptual architecture is generic and open to other middleware approaches like the DHE or HL7.
Fuzzy-Neural Controller in Service Requests Distribution Broker for SOA-Based Systems
NASA Astrophysics Data System (ADS)
Fras, Mariusz; Zatwarnicka, Anna; Zatwarnicki, Krzysztof
The evolution of software architectures led to the rising importance of the Service Oriented Architecture (SOA) concept. This architecture paradigm supports building flexible distributed service systems. In the paper the architecture of a service request distribution broker designed for use in SOA-based systems is proposed. The broker is built around the idea of fuzzy control. The functional and non-functional request requirements, in conjunction with monitoring of execution and communication links, are used to distribute requests. Decisions are made with the use of a fuzzy-neural network.
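The routing idea can be illustrated with a small sketch of only the fuzzy-inference part (the neural adaptation described in the paper is omitted): monitored load and link latency are mapped to fuzzy "suitability" degrees and the request goes to the most suitable instance. The membership ranges and instance data are assumptions for illustration.

```python
# Rough sketch of fuzzy request distribution: combine fuzzy degrees of
# "low load" and "low latency" and route to the best service instance.
def low_membership(value, low, high):
    """Degree (0..1) to which `value` counts as 'low' on a [low, high] scale."""
    if value <= low:
        return 1.0
    if value >= high:
        return 0.0
    return (high - value) / (high - low)

def suitability(load, latency_ms):
    # min() acts as a fuzzy AND of the two criteria.
    return min(low_membership(load, 0.2, 0.9),
               low_membership(latency_ms, 10, 200))

def choose_instance(instances):
    """instances: list of dicts with monitored 'load' and 'latency_ms'."""
    return max(instances, key=lambda s: suitability(s["load"], s["latency_ms"]))

if __name__ == "__main__":
    servers = [
        {"name": "svc-a", "load": 0.35, "latency_ms": 40},
        {"name": "svc-b", "load": 0.80, "latency_ms": 15},
        {"name": "svc-c", "load": 0.25, "latency_ms": 120},
    ]
    print("route to:", choose_instance(servers)["name"])   # -> svc-a
```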
Observation of beta and X rays with 3-D-architecture silicon microstrip sensors
NASA Astrophysics Data System (ADS)
Kenney, C. J.; Parker, S. I.; Krieger, B.; Ludewigt, B.; Dubbs, T. P.; Sadrozinski, H.
2001-04-01
The first silicon radiation sensors based on the three-dimensional (3-D) architecture have been successfully fabricated. X-ray spectra from iron-55 and americium-241 have been recorded by reading out a 3-D architecture detector via wire bonds to a low-noise, charge-sensitive preamplifier. Using a beta source, coincidences between a 3-D sensor and a plastic scintillator were observed. This is the first observation of ionizing radiation using a silicon sensor based on the 3-D architecture. Details of the apparatus and measurements are described.
NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
An architecture for intelligent task interruption
NASA Technical Reports Server (NTRS)
Sharma, D. D.; Narayan, Srini
1990-01-01
In the design of real-time systems the capability for task interruption is often considered essential. The problem of task interruption in knowledge-based domains is examined. It is proposed that task interruption can often be avoided by using appropriate functional architectures and knowledge engineering principles. For situations in which task interruption is indispensable, a preliminary architecture based on priority hierarchies is described.
A Model for Communications Satellite System Architecture Assessment
2011-09-01
The total system cost includes all development, acquisition, fielding, operations, maintenance and upgrades, and system protection. A mathematical model was implemented to enable the analysis of communications satellite system architectures based on multiple system attributes. Utilization of the model in…
ERIC Educational Resources Information Center
Bin Hassan, Isham Shah; Ismail, Mohd Arif; Mustafa, Ramlee
2011-01-01
The purpose of this research is to examine the effect of integrating the mobile and CAD technology on teaching architectural design process for Malaysian polytechnic architectural students in producing a creative product. The website is set up based on Caroll's minimal theory, while mobile and CAD technology integration is based on Brown and…
Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan
2013-01-01
In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina, visual information processing, the trajectory generator layer which uses a neuroinspired algorithm (SVITE) that can be replicated as many times as the number of DoF the robot has; and finally the actuation layer to supply the spikes to the robot (using PFM). All the layers do their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are: low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also demonstrate the suitability of AER as a communication protocol between processing and actuation. PMID:24264330
Perez-Peña, Fernando; Morgado-Estevez, Arturo; Linares-Barranco, Alejandro; Jimenez-Fernandez, Angel; Gomez-Rodriguez, Francisco; Jimenez-Moreno, Gabriel; Lopez-Coronado, Juan
2013-11-20
In this paper we present a complete spike-based architecture: from a Dynamic Vision Sensor (retina) to a stereo head robotic platform. The aim of this research is to reproduce intended movements performed by humans taking into account as many features as possible from the biological point of view. This paper fills the gap between current spike silicon sensors and robotic actuators by applying a spike processing strategy to the data flows in real time. The architecture is divided into layers: the retina, visual information processing, the trajectory generator layer which uses a neuroinspired algorithm (SVITE) that can be replicated as many times as the number of DoF the robot has; and finally the actuation layer to supply the spikes to the robot (using PFM). All the layers do their tasks in a spike-processing mode, and they communicate with each other through the neuro-inspired AER protocol. The open-loop controller is implemented on FPGA using AER interfaces developed by RTC Lab. Experimental results reveal the viability of this spike-based controller. Two main advantages are: low hardware resources (2% of a Xilinx Spartan 6) and power requirements (3.4 W) to control a robot with a high number of DoF (up to 100 for a Xilinx Spartan 6). The results also demonstrate the suitability of AER as a communication protocol between processing and actuation.
BowMapCL: Burrows-Wheeler Mapping on Multiple Heterogeneous Accelerators.
Nogueira, David; Tomas, Pedro; Roma, Nuno
2016-01-01
The computational demand of exact-search procedures has driven the exploitation of parallel processing accelerators to reduce the execution time of many applications. However, this often imposes strict restrictions in terms of the problem size and implementation efforts, mainly due to their possibly distinct architectures. To circumvent this limitation, a new exact-search alignment tool (BowMapCL) based on the Burrows-Wheeler Transform and FM-Index is presented. In contrast to other alternatives, BowMapCL is based on a unified implementation using OpenCL, allowing the exploitation of multiple and possibly different devices (e.g., NVIDIA, AMD/ATI, and Intel GPUs/APUs). Furthermore, to efficiently exploit such heterogeneous architectures, BowMapCL incorporates several techniques to promote its performance and scalability, including multiple buffering, work-queue task-distribution, and dynamic load-balancing, together with index partitioning, bit-encoding, and sampling. When compared with state-of-the-art tools, the attained results showed that BowMapCL (using a single GPU) is 2 × to 7.5 × faster than mainstream multi-threaded CPU BWT-based aligners, like Bowtie, BWA, and SOAP2; and up to 4 × faster than the best performing state-of-the-art GPU implementations (namely, SOAP3 and HPG-BWT). When multiple and completely distinct devices are considered, BowMapCL efficiently scales the offered throughput, ensuring a convenient load balance of the involved processing across the several distinct devices.
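For readers unfamiliar with the exact-search core that such BWT/FM-index aligners share, the didactic sketch below builds a (naive) suffix array and BWT and counts pattern occurrences with backward search. Real tools use compressed, sampled index structures and GPU kernels; this only shows the algorithmic idea.

```python
# Didactic FM-index backward search over a toy reference string.
def build_fm_index(text):
    text += "$"                                   # unique terminator
    sa = sorted(range(len(text)), key=lambda i: text[i:])
    bwt = "".join(text[i - 1] for i in sa)        # char preceding each suffix
    # C[c]: number of characters in text strictly smaller than c
    C, total = {}, 0
    for c in sorted(set(text)):
        C[c] = total
        total += text.count(c)
    return bwt, C

def occ(bwt, c, i):
    """Number of occurrences of c in bwt[:i] (real indexes sample this)."""
    return bwt[:i].count(c)

def backward_search(bwt, C, pattern):
    """Return the number of exact occurrences of pattern in the text."""
    lo, hi = 0, len(bwt)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + occ(bwt, c, lo)
        hi = C[c] + occ(bwt, c, hi)
        if lo >= hi:
            return 0
    return hi - lo

if __name__ == "__main__":
    bwt, C = build_fm_index("ACGTACGTAC")
    print(backward_search(bwt, C, "ACG"))   # -> 2
```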
Geometry induced sequence of nanoscale Frank–Kasper and quasicrystal mesophases in giant surfactants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yue, Kan; Huang, Mingjun; Marson, Ryan L.
Frank–Kasper (F-K) and quasicrystal phases were originally identified in metal alloys and only sporadically reported in soft materials. These unconventional sphere-packing schemes open up possibilities to design materials with different properties. The challenge in soft materials is how to correlate complex phases built from spheres with the tunable parameters of chemical composition and molecular architecture. Here, we report a complete sequence of various highly ordered mesophases by the self-assembly of specifically designed and synthesized giant surfactants, which are conjugates of hydrophilic polyhedral oligomeric silsesquioxane cages tethered with hydrophobic polystyrene tails. We show that the occurrence of these mesophases results from nanophase separation between the heads and tails and thus is critically dependent on molecular geometry. Variations in molecular geometry achieved by changing the number of tails from one to four not only shift compositional phase boundaries but also stabilize F-K and quasicrystal phases in regions where simple phases of spheroidal micelles are typically observed. These complex self-assembled nanostructures have been identified by combining X-ray scattering techniques and real-space electron microscopy images. Brownian dynamics simulations based on a simplified molecular model confirm the architecture-induced sequence of phases. Our results demonstrate the critical role of molecular architecture in dictating the formation of supramolecular crystals with “soft” spheroidal motifs and provide guidelines to the design of unconventional self-assembled nanostructures.
Nano-Photonic Structures for Light Trapping in Ultra-Thin Crystalline Silicon Solar Cells
Pathi, Prathap; Peer, Akshit; Biswas, Rana
2017-01-01
Thick wafer-silicon is the dominant solar cell technology. It is of great interest to develop ultra-thin solar cells that can reduce materials usage, but still achieve acceptable performance and high solar absorption. Accordingly, we developed a highly absorbing ultra-thin crystalline Si based solar cell architecture using periodically patterned front and rear dielectric nanocone arrays which provide enhanced light trapping. The rear nanocones are embedded in a silver back reflector. In contrast to previous approaches, we utilize dielectric photonic crystals with a completely flat silicon absorber layer, providing expected high electronic quality and low carrier recombination. This architecture creates a dense mesh of wave-guided modes at near-infrared wavelengths in the absorber layer, generating enhanced absorption. For thin silicon (<2 μm) and 750 nm pitch arrays, scattering matrix simulations predict enhancements exceeding 90%. Absorption approaches the Lambertian limit at small thicknesses (<10 μm) and is slightly lower (by ~5%) at wafer-scale thicknesses. Parasitic losses are ~25% for ultra-thin (2 μm) silicon and just 1%–2% for thicker (>100 μm) cells. There is potential for 20 μm thick cells to provide 30 mA/cm2 photo-current and >20% efficiency. This architecture has great promise for ultra-thin silicon solar panels with reduced material utilization and enhanced light-trapping. PMID:28336851
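For context, the Lambertian (4n²) benchmark referred to above is commonly approximated, following Tiedje–Yablonovitch-type ray-optics arguments, by the expression below for a slab of thickness d, absorption coefficient α(λ), and refractive index n; this is a standard textbook approximation, not a formula taken from the paper itself.

```latex
% Common ray-optics approximation of Lambertian-limit absorption in a slab:
A_{\mathrm{Lam}}(\lambda) \;\approx\;
  \frac{4 n^{2}\,\alpha(\lambda)\, d}{4 n^{2}\,\alpha(\lambda)\, d + 1}
```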
Geometry induced sequence of nanoscale Frank–Kasper and quasicrystal mesophases in giant surfactants
Yue, Kan; Huang, Mingjun; Marson, Ryan L.; He, Jinlin; Huang, Jiahao; Zhou, Zhe; Wang, Jing; Liu, Chang; Yan, Xuesheng; Wu, Kan; Guo, Zaihong; Liu, Hao; Ni, Peihong; Wesdemiotis, Chrys; Zhang, Wen-Bin; Glotzer, Sharon C.; Cheng, Stephen Z. D.
2016-01-01
Frank–Kasper (F-K) and quasicrystal phases were originally identified in metal alloys and only sporadically reported in soft materials. These unconventional sphere-packing schemes open up possibilities to design materials with different properties. The challenge in soft materials is how to correlate complex phases built from spheres with the tunable parameters of chemical composition and molecular architecture. Here, we report a complete sequence of various highly ordered mesophases by the self-assembly of specifically designed and synthesized giant surfactants, which are conjugates of hydrophilic polyhedral oligomeric silsesquioxane cages tethered with hydrophobic polystyrene tails. We show that the occurrence of these mesophases results from nanophase separation between the heads and tails and thus is critically dependent on molecular geometry. Variations in molecular geometry achieved by changing the number of tails from one to four not only shift compositional phase boundaries but also stabilize F-K and quasicrystal phases in regions where simple phases of spheroidal micelles are typically observed. These complex self-assembled nanostructures have been identified by combining X-ray scattering techniques and real-space electron microscopy images. Brownian dynamics simulations based on a simplified molecular model confirm the architecture-induced sequence of phases. Our results demonstrate the critical role of molecular architecture in dictating the formation of supramolecular crystals with “soft” spheroidal motifs and provide guidelines to the design of unconventional self-assembled nanostructures. PMID:27911786
NASA Astrophysics Data System (ADS)
Wallace, William; Miller, Jared; Diallo, Ahmed
2015-11-01
MultiPoint Thomson Scattering (MPTS) is an established, accurate method of finding the temperature, density, and pressure of a magnetically confined plasma. Two Nd:YAG (1064 nm) lasers are fired into the plasma with an effective frequency of 60 Hz, and the light is Doppler shifted by Thomson scattering. Polychromators on the NSTX-U midplane collect the scattered photons at various radii/scattering angles, and the avalanche photodiode voltages are saved to an MDSplus tree for later analysis. IDL code is then used to determine plasma temperature, pressure, and density from the captured polychromator measurements via the Selden formulas [1]. Previous work [2] converted the single-processor IDL code into Python code, and prepared a new architecture for multiprocessing MPTS in parallel. However, that work did not reach the point of generating output data and curve fits that matched the previous IDL results. This project refactored the Python code into an object-oriented architecture, and created a software test suite for the new architecture which allowed identification of the code that generated the differences in output. Another effort currently underway is to display the Thomson data in an intuitive, interactive format. This work was supported in part by the U.S. Department of Energy, Office of Science, Office of Workforce Development for Teachers and Scientists (WDTS) under the Community College Internship (CCI) program.
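The test-suite idea mentioned above boils down to regression tests that compare the refactored Python pipeline against archived reference output. The sketch below shows the pattern only; the fit function and the reference values are synthetic placeholders, not the actual MPTS analysis or NSTX-U data.

```python
# Minimal regression-test sketch: verify that a refactored analysis function
# reproduces archived reference results within tolerance.
import numpy as np

def fit_temperature(polychromator_counts):
    """Placeholder for the Python Selden-based fit; returns one value per channel."""
    counts = np.asarray(polychromator_counts, dtype=float)
    return counts.sum(axis=-1)          # stand-in for the real fitting step

def test_matches_reference():
    # In the real suite the reference would be archived IDL output for a shot;
    # here a synthetic array plays that role.
    counts = np.arange(12, dtype=float).reshape(3, 4)   # 3 channels x 4 gates
    reference = np.array([6.0, 22.0, 38.0])             # expected fit result
    assert np.allclose(fit_temperature(counts), reference, rtol=1e-3), \
        "Python pipeline diverges from reference output"

if __name__ == "__main__":
    test_matches_reference()
    print("regression test passed")
```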
An Efficient Implementation of Deep Convolutional Neural Networks for MRI Segmentation.
Hoseini, Farnaz; Shahbahrami, Asadollah; Bayat, Peyman
2018-02-27
Image segmentation is one of the most common steps in digital image processing, classifying a digital image into different segments. The main goal of this paper is to segment brain tumors in magnetic resonance images (MRI) using deep learning. Tumors having different shapes, sizes, brightness and textures can appear anywhere in the brain. These complexities are the reasons to choose a high-capacity Deep Convolutional Neural Network (DCNN) containing more than one layer. The proposed DCNN contains two parts: architecture and learning algorithms. The architecture and the learning algorithms are used to design a network model and to optimize parameters for the network training phase, respectively. The architecture contains five convolutional layers, all using 3 × 3 kernels, and one fully connected layer. Stacking small kernels achieves the effect of larger kernels with a smaller number of parameters and fewer computations. Using the Dice Similarity Coefficient metric, we report accuracy results on the BRATS 2016 brain tumor segmentation challenge dataset for the complete, core, and enhancing regions as 0.90, 0.85, and 0.84, respectively. The learning algorithm includes task-level parallelism. All the pixels of an MR image are classified using a patch-based approach for segmentation. We attain a good performance and the experimental results show that the proposed DCNN increases the segmentation accuracy compared to previous techniques.
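An illustrative PyTorch sketch of this kind of patch-based DCNN is given below: five 3×3 convolutional layers followed by one fully connected layer that classifies the centre pixel of an input MRI patch. The channel widths, patch size, and class count are assumptions, not the paper's exact hyper-parameters.

```python
# Patch-based DCNN sketch: five 3x3 conv layers + one fully connected layer.
import torch
import torch.nn as nn

class PatchDCNN(nn.Module):
    def __init__(self, in_channels=4, num_classes=4, patch=33):
        super().__init__()
        widths = [in_channels, 32, 32, 64, 64, 128]
        layers = []
        for cin, cout in zip(widths[:-1], widths[1:]):
            layers += [nn.Conv2d(cin, cout, kernel_size=3, padding=1), nn.ReLU()]
        self.features = nn.Sequential(*layers)          # five 3x3 conv layers
        self.classifier = nn.Linear(widths[-1] * patch * patch, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, start_dim=1))

if __name__ == "__main__":
    model = PatchDCNN()
    patches = torch.randn(8, 4, 33, 33)   # batch of multi-modal MRI patches
    print(model(patches).shape)           # -> torch.Size([8, 4])
```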
Learning classifier systems for single and multiple mobile robots in unstructured environments
NASA Astrophysics Data System (ADS)
Bay, John S.
1995-12-01
The learning classifier system (LCS) is a learning production system that generates behavioral rules via an underlying discovery mechanism. The LCS architecture operates similarly to a blackboard architecture; i.e., by posted-message communications. But in the LCS, the message board is wiped clean at every time interval, thereby requiring no persistent shared resource. In this paper, we adapt the LCS to the problem of mobile robot navigation in completely unstructured environments. We consider the model of the robot itself, including its sensor and actuator structures, to be part of this environment, in addition to the world-model that includes a goal and obstacles at unknown locations. This requires a robot to learn its own I/O characteristics in addition to solving its navigation problem, but results in a learning controller that is equally applicable, unaltered, in robots with a wide variety of kinematic structures and sensing capabilities. We show the effectiveness of this LCS-based controller through both simulation and experimental trials with a small robot. We then propose a new architecture, the Distributed Learning Classifier System (DLCS), which generalizes the message-passing behavior of the LCS from internal messages within a single agent to broadcast messages among multiple agents. This communications mode requires little bandwidth and is easily implemented with inexpensive, off-the-shelf hardware. The DLCS is shown to have potential application as a learning controller for multiple intelligent agents.
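The posted-message cycle described above can be caricatured in a few lines: condition/action rules are matched against the current message board, a matching rule posts its action, and the board is wiped every time step so no shared resource persists. The rules, encoding, and selection scheme below are toy assumptions, not the paper's controller.

```python
# Toy sketch of one LCS posted-message cycle with a wiped board each step.
import random

RULES = [
    # (condition, action, strength) -- '#' is a wildcard bit
    ("1#0", "turn_left",  0.6),
    ("10#", "turn_right", 0.9),
    ("0##", "forward",    0.5),
]

def matches(condition, message):
    return all(c in ("#", m) for c, m in zip(condition, message))

def step(board):
    """Match rules against the board, post one action, return the new board."""
    matched = [(action, strength) for cond, action, strength in RULES
               for msg in board if matches(cond, msg)]
    if not matched:
        return []                       # board is wiped regardless
    actions, strengths = zip(*matched)
    chosen = random.choices(actions, weights=strengths, k=1)[0]
    print("posted action:", chosen)
    return [chosen]                     # next board holds only newly posted messages

if __name__ == "__main__":
    board = ["100"]                     # detector message from the sensors
    board = step(board)
```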
Nano-photonic structures for light trapping in ultra-thin crystalline silicon solar cells
Pathi, Prathap; Peer, Akshit; Biswas, Rana
2017-01-13
Thick wafer-silicon is the dominant solar cell technology. It is of great interest to develop ultra-thin solar cells that can reduce materials usage, but still achieve acceptable performance and high solar absorption. Accordingly, we developed a highly absorbing ultra-thin crystalline Si based solar cell architecture using periodically patterned front and rear dielectric nanocone arrays which provide enhanced light trapping. The rear nanocones are embedded in a silver back reflector. In contrast to previous approaches, we utilize dielectric photonic crystals with a completely flat silicon absorber layer, providing expected high electronic quality and low carrier recombination. This architecture creates a dense mesh of wave-guided modes at near-infrared wavelengths in the absorber layer, generating enhanced absorption. For thin silicon (<2 μm) and 750 nm pitch arrays, scattering matrix simulations predict enhancements exceeding 90%. Absorption approaches the Lambertian limit at small thicknesses (<10 μm) and is slightly lower (by ~5%) at wafer-scale thicknesses. Parasitic losses are ~25% for ultra-thin (2 μm) silicon and just 1%–2% for thicker (>100 μm) cells. There is potential for 20 μm thick cells to provide 30 mA/cm2 photo-current and >20% efficiency. Furthermore, this architecture has great promise for ultra-thin silicon solar panels with reduced material utilization and enhanced light-trapping.
Nano-photonic structures for light trapping in ultra-thin crystalline silicon solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pathi, Prathap; Peer, Akshit; Biswas, Rana
Thick wafer-silicon is the dominant solar cell technology. It is of great interest to develop ultra-thin solar cells that can reduce materials usage, but still achieve acceptable performance and high solar absorption. Accordingly, we developed a highly absorbing ultra-thin crystalline Si based solar cell architecture using periodically patterned front and rear dielectric nanocone arrays which provide enhanced light trapping. The rear nanocones are embedded in a silver back reflector. In contrast to previous approaches, we utilize dielectric photonic crystals with a completely flat silicon absorber layer, providing expected high electronic quality and low carrier recombination. This architecture creates a dense mesh of wave-guided modes at near-infrared wavelengths in the absorber layer, generating enhanced absorption. For thin silicon (<2 μm) and 750 nm pitch arrays, scattering matrix simulations predict enhancements exceeding 90%. Absorption approaches the Lambertian limit at small thicknesses (<10 μm) and is slightly lower (by ~5%) at wafer-scale thicknesses. Parasitic losses are ~25% for ultra-thin (2 μm) silicon and just 1%–2% for thicker (>100 μm) cells. There is potential for 20 μm thick cells to provide 30 mA/cm2 photo-current and >20% efficiency. Furthermore, this architecture has great promise for ultra-thin silicon solar panels with reduced material utilization and enhanced light-trapping.
Nano-Photonic Structures for Light Trapping in Ultra-Thin Crystalline Silicon Solar Cells.
Pathi, Prathap; Peer, Akshit; Biswas, Rana
2017-01-13
Thick wafer-silicon is the dominant solar cell technology. It is of great interest to develop ultra-thin solar cells that can reduce materials usage, but still achieve acceptable performance and high solar absorption. Accordingly, we developed a highly absorbing ultra-thin crystalline Si based solar cell architecture using periodically patterned front and rear dielectric nanocone arrays which provide enhanced light trapping. The rear nanocones are embedded in a silver back reflector. In contrast to previous approaches, we utilize dielectric photonic crystals with a completely flat silicon absorber layer, providing expected high electronic quality and low carrier recombination. This architecture creates a dense mesh of wave-guided modes at near-infrared wavelengths in the absorber layer, generating enhanced absorption. For thin silicon (<2 μm) and 750 nm pitch arrays, scattering matrix simulations predict enhancements exceeding 90%. Absorption approaches the Lambertian limit at small thicknesses (<10 μm) and is slightly lower (by ~5%) at wafer-scale thicknesses. Parasitic losses are ~25% for ultra-thin (2 μm) silicon and just 1%-2% for thicker (>100 μm) cells. There is potential for 20 μm thick cells to provide 30 mA/cm² photo-current and >20% efficiency. This architecture has great promise for ultra-thin silicon solar panels with reduced material utilization and enhanced light-trapping.
Network-centric decision architecture for financial or 1/f data models
NASA Astrophysics Data System (ADS)
Jaenisch, Holger M.; Handley, James W.; Massey, Stoney; Case, Carl T.; Songy, Claude G.
2002-12-01
This paper presents a decision architecture algorithm for training neural equation based networks to make autonomous multi-goal oriented, multi-class decisions. These architectures make decisions based on their individual goals and draw from the same network centric feature set. Traditionally, these architectures are comprised of neural networks that offer marginal performance due to lack of convergence of the training set. We present an approach for autonomously extracting sample points as I/O exemplars for generation of multi-branch, multi-node decision architectures populated by adaptively derived neural equations. To test the robustness of this architecture, open source data sets in the form of financial time series were used, requiring a three-class decision space analogous to the lethal, non-lethal, and clutter discrimination problem. This algorithm and the results of its application are presented here.
System design in an evolving system-of-systems architecture and concept of operations
NASA Astrophysics Data System (ADS)
Rovekamp, Roger N., Jr.
Proposals for space exploration architectures have increased in complexity and scope. Constituent systems (e.g., rovers, habitats, in-situ resource utilization facilities, transfer vehicles, etc) must meet the needs of these architectures by performing in multiple operational environments and across multiple phases of the architecture's evolution. This thesis proposes an approach for using system-of-systems engineering principles in conjunction with system design methods (e.g., Multi-objective optimization, genetic algorithms, etc) to create system design options that perform effectively at both the system and system-of-systems levels, across multiple concepts of operations, and over multiple architectural phases. The framework is presented by way of an application problem that investigates the design of power systems within a power sharing architecture for use in a human Lunar Surface Exploration Campaign. A computer model has been developed that uses candidate power grid distribution solutions for a notional lunar base. The agent-based model utilizes virtual control agents to manage the interactions of various exploration and infrastructure agents. The philosophy behind the model is based both on lunar power supply strategies proposed in literature, as well as on the author's own approaches for power distribution strategies of future lunar bases. In addition to proposing a framework for system design, further implications of system-of-systems engineering principles are briefly explored, specifically as they relate to producing more robust cross-cultural system-of-systems architecture solutions.
39 CFR 501.7 - Postage Evidencing System requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Information-Based Indicia and Security Architecture for Open IBI Postage Evidencing Systems or Performance Criteria for Information-Based Indicia and Security Architecture for Closed IBI Postage Metering Systems...
Space Communications Capability Roadmap Interim Review
NASA Technical Reports Server (NTRS)
Spearing, Robert; Regan, Michael
2005-01-01
Contents include the following: Identify the need for a robust communications and navigation architecture for the success of exploration and science missions. Describe an approach for specifying architecture alternatives and analyzing them. Establish a top level architecture based on a network of networks. Identify key enabling technologies. Synthesize capability, architecture and technology into an initial capability roadmap.
Design of an integrated airframe/propulsion control system architecture
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Lee, C. William; Strickland, Michael J.; Torkelson, Thomas C.
1990-01-01
The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that uses both reliability and performance. A detailed account is given for the testing associated with a subset of the architecture and concludes with general observations of applying the methodology to the architecture.
Teaching Case: Enterprise Architecture Specification Case Study
ERIC Educational Resources Information Center
Steenkamp, Annette Lerine; Alawdah, Amal; Almasri, Osama; Gai, Keke; Khattab, Nidal; Swaby, Carval; Abaas, Ramy
2013-01-01
A graduate course in enterprise architecture had a team project component in which a real-world business case, provided by an industry sponsor, formed the basis of the project charter and the architecture statement of work. The paper aims to share the team project experience on developing the architecture specifications based on the business case…
Using Multimedia for Teaching Analysis in History of Modern Architecture.
ERIC Educational Resources Information Center
Perryman, Garry
This paper presents a case for the development and support of a computer-based interactive multimedia program for teaching analysis in community college architecture design programs. Analysis in architecture design is an extremely important strategy for the teaching of higher-order thinking skills, which senior schools of architecture look for in…
NASA Technical Reports Server (NTRS)
Doggett, William; Vazquez, Sixto
2000-01-01
A visualization system is being developed out of the need to monitor, interpret, and make decisions based on the information from several thousand sensors during experimental testing to facilitate development and validation of structural health monitoring algorithms. As an added benefit, the system will enable complete real-time sensor assessment of complex test specimens. Complex structural specimens are routinely tested that have hundreds or thousands of sensors. During a test, it is impossible for a single researcher to effectively monitor all the sensors, and consequently interesting phenomena occur that are not recognized until post-test analysis. The ability to detect and alert the researcher to these unexpected phenomena as the test progresses will significantly enhance the understanding and utilization of complex test articles. Utilization is increased by the ability to halt a test when the health monitoring algorithm response is not satisfactory or when an unexpected phenomenon occurs, enabling focused investigation potentially through the installation of additional sensors. Often if the test continues, structural changes make it impossible to reproduce the conditions that exhibited the phenomena. The prohibitive time and costs associated with fabrication, sensoring, and subsequent testing of additional test articles generally makes it impossible to further investigate the phenomena. A scalable architecture is described to address the complex computational demands of structural health monitoring algorithm development and laboratory experimental test monitoring. The researcher monitors the test using a photographic-quality 3D graphical model with actual sensor locations identified. In addition, researchers can quickly activate plots displaying time or load versus selected sensor response along with the expected values and predefined limits. The architecture has several key features. First, distributed dissimilar computers may be seamlessly integrated into the information flow. Second, virtual sensors may be defined that are complex functions of existing sensors or other virtual sensors. Virtual sensors represent a calculated value not directly measured by a particular physical instrument. They can be used, for example, to represent the maximum difference in a range of sensors or the calculated buckling load based on the current strains. Third, the architecture enables autonomous response to preconceived events, whereby the system can be configured to suspend or abort a test if a failure is detected in the load introduction system. Fourth, the architecture is designed to allow cooperative monitoring and control of the test progression from multiple stations both remote and local to the test system. To illustrate the architecture, a preliminary implementation is described monitoring the Stitched Composite Wing recently tested at LaRC.
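The "virtual sensor" idea lends itself to a compact sketch: a virtual sensor is a named function of other (physical or virtual) sensors, and exceeding a predefined limit triggers an autonomous response such as suspending the test. The sensor names and limit values below are illustrative assumptions, not values from the wing test.

```python
# Minimal sketch of virtual sensors defined as functions of other sensors.
class SensorNetwork:
    def __init__(self):
        self.readings = {}          # physical sensor values, updated each scan
        self.virtual = {}           # name -> function of this network

    def define_virtual(self, name, func):
        self.virtual[name] = func

    def value(self, name):
        if name in self.readings:
            return self.readings[name]
        return self.virtual[name](self)

if __name__ == "__main__":
    net = SensorNetwork()
    net.readings.update({f"strain_{i}": v for i, v in
                         enumerate([410.0, 455.0, 398.0, 470.0])})
    # Virtual sensor: maximum spread across a range of strain gauges.
    net.define_virtual(
        "strain_spread",
        lambda n: max(n.readings.values()) - min(n.readings.values()))
    spread = net.value("strain_spread")
    print("strain spread:", spread)
    if spread > 60.0:               # predefined limit -> autonomous response
        print("ALERT: suspend load introduction")
```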
Modeling and Analysis of Space Based Transceivers
NASA Technical Reports Server (NTRS)
Moore, Michael S.; Price, Jeremy C.; Reinhart, Richard; Liebetreu, John; Kacpura, Tom J.
2005-01-01
This paper presents the tool chain, methodology, and results of an on-going study being performed jointly by Space Communication Experts at NASA Glenn Research Center (GRC), General Dynamics C4 Systems (GD), and Southwest Research Institute (SwRI). The team is evaluating the applicability and tradeoffs concerning the use of Software Defined Radio (SDR) technologies for Space missions. The Space Telecommunications Radio Systems (STRS) project is developing an approach toward building SDR-based transceivers for space communications applications based on an accompanying software architecture that can be used to implement transceivers for NASA space missions. The study is assessing the overall cost and benefit of employing SDR technologies in general, and of developing a software architecture standard for its space SDR transceivers. The study is considering the cost and benefit of existing architectures, such as the Joint Tactical Radio Systems (JTRS) Software Communications Architecture (SCA), as well as potential new space-specific architectures.
FPGA implementation of motifs-based neuronal network and synchronization analysis
NASA Astrophysics Data System (ADS)
Deng, Bin; Zhu, Zechen; Yang, Shuangming; Wei, Xile; Wang, Jiang; Yu, Haitao
2016-06-01
Motifs in complex networks play a crucial role in determining brain functions. In this paper, 13 kinds of motifs are implemented on a Field Programmable Gate Array (FPGA) to investigate the relationships between network properties and motif properties. We use a discretization method and a pipelined architecture to construct various motifs with the Hindmarsh-Rose (HR) neuron as the node model. We also build a small-world network based on these motifs and conduct the synchronization analysis of the motifs as well as the constructed network. We find that the synchronization properties of a motif determine those of the motif-based small-world network, which demonstrates the effectiveness of our proposed hardware simulation platform. By imitating vital nuclei in the brain to generate normal discharges, our proposed FPGA-based artificial neuronal networks have the potential to replace injured nuclei and carry out the corresponding brain function in the treatment of Parkinson's disease and epilepsy.
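The discretization step mentioned above can be illustrated with a fixed-step Euler update of the Hindmarsh-Rose equations, which is the kind of multiply-add recurrence that maps naturally onto pipelined FPGA arithmetic. The step size and external current below are illustrative; the parameter values are the commonly used HR defaults, not necessarily those of the paper.

```python
# Explicit-Euler discretization of the Hindmarsh-Rose neuron model.
def hr_step(x, y, z, I_ext, dt=0.01, r=0.006, s=4.0, x_rest=-1.6):
    dx = y + 3.0 * x * x - x ** 3 - z + I_ext   # membrane potential dynamics
    dy = 1.0 - 5.0 * x * x - y                  # fast recovery variable
    dz = r * (s * (x - x_rest) - z)             # slow adaptation variable
    return x + dt * dx, y + dt * dy, z + dt * dz

if __name__ == "__main__":
    x, y, z = -1.6, -10.0, 2.0
    trace = []
    for _ in range(20000):
        x, y, z = hr_step(x, y, z, I_ext=3.0)
        trace.append(x)
    print("membrane variable range:", min(trace), max(trace))
```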
Barthélémy, Daniel; Caraglio, Yves
2007-01-01
Background and Aims The architecture of a plant depends on the nature and relative arrangement of each of its parts; it is, at any given time, the expression of an equilibrium between endogenous growth processes and exogenous constraints exerted by the environment. The aim of architectural analysis is, by means of observation and sometimes experimentation, to identify and understand these endogenous processes and to separate them from the plasticity of their expression resulting from external influences. Scope Using the identification of several morphological criteria and considering the plant as a whole, from germination to death, architectural analysis is essentially a detailed, multilevel, comprehensive and dynamic approach to plant development. Despite their recent origin, architectural concepts and analysis methods provide a powerful tool for studying plant form and ontogeny. Complemented by precise morphological observations and appropriate quantitative methods of analysis, recent research in this field has greatly increased our understanding of plant structure and development and has led to the establishment of a real conceptual and methodological framework for plant form and structure analysis and representation. This paper is a summarized update of current knowledge on plant architecture and morphology; its implications and possible role in various aspects of modern plant biology are also discussed. PMID:17218346
Upper and Lower Limb Muscle Architecture of a 104 Year-Old Cadaver
Infantolino, Benjamin
2016-01-01
Muscle architecture is an important component to typical musculoskeletal models. Previous studies of human muscle architecture have focused on a single joint, two adjacent joints, or an entire limb. To date, no study has presented muscle architecture for the upper and lower limbs of a single cadaver. Additionally, muscle architectural parameters from elderly cadavers are lacking, making it difficult to accurately model elderly populations. Therefore, the purpose of this study was to present muscle architecture of the upper and lower limbs of a 104 year old female cadaver. The major muscles of the upper and lower limbs were removed and the musculotendon mass, tendon mass, musculotendon length, tendon length, pennation angle, optimal fascicle length, physiological cross-sectional area, and tendon cross-sectional area were determined for each muscle. Data from this complete cadaver are presented in table format. The data from this study can be used to construct a musculoskeletal model of a specific individual who was ambulatory, something which has not been possible to date. This should increase the accuracy of the model output as the model will be representing a specific individual, not a synthesis of measurements from multiple individuals. Additionally, an elderly individual can be modeled which will provide insight into muscle function as we age. PMID:28033339
Upper and Lower Limb Muscle Architecture of a 104 Year-Old Cadaver.
Ruggiero, Marissa; Cless, Daniel; Infantolino, Benjamin
2016-01-01
Muscle architecture is an important component to typical musculoskeletal models. Previous studies of human muscle architecture have focused on a single joint, two adjacent joints, or an entire limb. To date, no study has presented muscle architecture for the upper and lower limbs of a single cadaver. Additionally, muscle architectural parameters from elderly cadavers are lacking, making it difficult to accurately model elderly populations. Therefore, the purpose of this study was to present muscle architecture of the upper and lower limbs of a 104 year old female cadaver. The major muscles of the upper and lower limbs were removed and the musculotendon mass, tendon mass, musculotendon length, tendon length, pennation angle, optimal fascicle length, physiological cross-sectional area, and tendon cross-sectional area were determined for each muscle. Data from this complete cadaver are presented in table format. The data from this study can be used to construct a musculoskeletal model of a specific individual who was ambulatory, something which has not been possible to date. This should increase the accuracy of the model output as the model will be representing a specific individual, not a synthesis of measurements from multiple individuals. Additionally, an elderly individual can be modeled which will provide insight into muscle function as we age.
Digital Device Architecture and the Safe Use of Flash Devices in Munitions
NASA Technical Reports Server (NTRS)
Katz, Richard B.; Flowers, David; Bergevin, Keith
2017-01-01
Flash technology is being utilized in fuzed munition applications and, based on the development of digital logic devices in the commercial world, usage of flash technology will increase. Digital devices of interest to designers include flash-based microcontrollers and field programmable gate arrays (FPGAs). Almost a decade ago, a study was undertaken to determine if flash-based microcontrollers could be safely used in fuzes and, if so, how should such devices be applied. The results were documented in the Technical Manual for the Use of Logic Devices in Safety Features. This paper will first review the Technical Manual and discuss the rationale behind the suggested architectures for microcontrollers and a brief review of the concern about data retention in flash cells. An architectural feature in the microcontroller under study will be discussed and its use will show how to screen for weak or failed cells during manufacture, storage, or immediately prior to use. As was done for microcontrollers a decade ago, architectures for a flash-based FPGA will be discussed, showing how it can be safely used in fuzes. Additionally, architectures for using non-volatile (including flash-based) storage will be discussed for SRAM-based FPGAs.
Collaboration pathway(s) using new tools for optimizing operational climate monitoring from space
NASA Astrophysics Data System (ADS)
Helmuth, Douglas B.; Selva, Daniel; Dwyer, Morgan M.
2014-10-01
Consistently collecting the earth's climate signatures remains a priority for world governments and international scientific organizations. Architecting a solution requires transforming scientific missions into an optimized robust `operational' constellation that addresses the needs of decision makers, scientific investigators and global users for trusted data. The application of new tools offers pathways for global architecture collaboration. Recent (2014) rule-based decision engine modeling runs, which targeted optimizing the intended NPOESS architecture, become a surrogate for global operational climate monitoring architecture(s). These rule-based system tools provide valuable insight for global climate architectures, through the comparison and evaluation of alternatives considered and the exhaustive range of trade space explored. A representative optimization of global ECV (essential climate variable) climate monitoring architecture(s) is explored and described in some detail with thoughts on appropriate rule-based valuations. The optimization tool(s) suggest and support global collaboration pathways and hopefully elicit responses from the audience and climate science stakeholders.
Guiding Principles for Data Architecture to Support the Pathways Community HUB Model.
Zeigler, Bernard P; Redding, Sarah; Leath, Brenda A; Carter, Ernest L; Russell, Cynthia
2016-01-01
The Pathways Community HUB Model provides a unique strategy to effectively supplement health care services with the social services needed to overcome barriers for those most at risk of poor health outcomes. Pathways are standardized measurement tools used to define and track health and social issues from identification through to a measurable completion point. The HUB uses Pathways to coordinate agencies and service providers in the community to eliminate the inefficiencies and duplication that exist among them. Experience with the Model has brought out the need for better information technology solutions to support implementation of the Pathways themselves through decision-support tools for care coordinators and other users to track activities and outcomes, and to facilitate reporting. Here we provide a basis for discussing recommendations for such a data infrastructure by developing a conceptual model that formalizes the Pathway concept underlying current implementations. The main contribution is a set of core recommendations as a framework for developing and implementing a data architecture to support implementation of the Pathways Community HUB Model. The objective is to present a tool for communities interested in adopting the Model to learn from and to adapt in their own development and implementation efforts. Experience with the Community Health Access Project (CHAP) database system (the core implementation of the Model) has identified several issues and remedies that have been developed to address these issues. Based on analysis of issues and remedies, we present several key features for a data architecture meeting the recommendations just mentioned. Presentation of features is followed by a practical guide to their implementation, allowing an organization to consider either tailoring off-the-shelf generic systems to meet the requirements or adopting offerings that are specialized for community-based care coordination. Looking to future extensions, we discuss the utility and prospects for an ontology to include care coordination in the Unified Medical Language System (UMLS) of the National Library of Medicine and other existing medical and nursing taxonomies. Pathways structures are an important principle, not only for organizing the care coordination activities, but also for structuring the data stored in electronic form in the conduct of such care. We showed how the proposed architecture encourages design of effective decision support systems for coordinated care and suggested how interested organizations can set about acquiring such systems. Although the presentation focuses on the Pathways Community HUB Model, the principles for data architecture are stated in generic form and are applicable to any health information system for improving care coordination services and population health.
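The abstract does not specify the underlying schema; the following is a minimal sketch, assuming hypothetical field names, of how a Pathway record with a measurable completion point might be represented in software.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical, minimal representation of a Pathway record; the actual
# CHAP/HUB data model is not specified in the abstract.
@dataclass
class Pathway:
    client_id: str
    pathway_type: str                       # e.g. "housing", "medical home"
    opened_on: date
    completed_on: Optional[date] = None     # measurable completion point
    steps: list = field(default_factory=list)

    def complete(self, when: date) -> None:
        """Mark the Pathway as finished so it can be counted in outcome reports."""
        self.completed_on = when

    @property
    def is_open(self) -> bool:
        return self.completed_on is None

# Example: a care coordinator opens and later closes a Pathway.
p = Pathway(client_id="C-001", pathway_type="medical home", opened_on=date(2016, 1, 5))
p.steps.append("barrier identified")
p.complete(date(2016, 3, 2))
print(p.is_open)  # False
```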
An Architecture for Autonomous Rovers on Future Planetary Missions
NASA Astrophysics Data System (ADS)
Ocon, J.; Avilés, M.; Graziano, M.
2018-04-01
This paper proposes an architecture for autonomous planetary rovers. The architecture combines a set of characteristics required in this type of system: a high level of abstraction, reactive event-based activity execution, and autonomous navigation.
An Agent-Based Dynamic Model for Analysis of Distributed Space Exploration Architectures
NASA Astrophysics Data System (ADS)
Sindiy, Oleg V.; DeLaurentis, Daniel A.; Stein, William B.
2009-07-01
A range of complex challenges, but also potentially unique rewards, underlie the development of exploration architectures that use a distributed, dynamic network of resources across the solar system. From a methodological perspective, the prime challenge is to systematically model the evolution (and quantify comparative performance) of such architectures, under uncertainty, to effectively direct further study of specialized trajectories, spacecraft technologies, concept of operations, and resource allocation. A process model for System-of-Systems Engineering is used to define time-varying performance measures for comparative architecture analysis and identification of distinguishing patterns among interoperating systems. Agent-based modeling serves as the means to create a discrete-time simulation that generates dynamics for the study of architecture evolution. A Solar System Mobility Network proof-of-concept problem is introduced representing a set of longer-term, distributed exploration architectures. Options within this set revolve around deployment of human and robotic exploration and infrastructure assets, their organization, interoperability, and evolution, i.e., a system-of-systems. Agent-based simulations quantify relative payoffs for a fully distributed architecture (which can be significant over the long term), the latency period before they are manifest, and the up-front investment (which can be substantial compared to alternatives). Verification and sensitivity results provide further insight on development paths and indicate that the framework and simulation modeling approach may be useful in architectural design of other space exploration mass, energy, and information exchange settings.
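The agent definitions and payoff measures of the cited model are not given in the abstract; the sketch below is only a generic discrete-time, agent-based simulation skeleton in which assets serve stochastic demand and a time-varying performance measure is accumulated.

```python
import random

# Minimal discrete-time agent-based simulation skeleton (illustrative only;
# the actual system-of-systems model, agents and payoffs are not given).
class Asset:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity   # resources it can deliver per time step
        self.delivered = 0.0

    def step(self, demand):
        served = min(self.capacity, demand)
        self.delivered += served
        return served

def simulate(assets, horizon=100, seed=0):
    """Run the architecture for `horizon` time steps and return a crude
    time-varying performance measure (fraction of demand served)."""
    rng = random.Random(seed)
    history = []
    for _ in range(horizon):
        demand = rng.uniform(0.5, 2.0)
        served = sum(a.step(demand / len(assets)) for a in assets)
        history.append(served / demand)
    return history

distributed = [Asset("depot-1", 0.6), Asset("relay-1", 0.5), Asset("rover-1", 0.4)]
print(sum(simulate(distributed)) / 100)  # mean service fraction over the run
```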
Code of Federal Regulations, 2012 CFR
2012-10-01
...: The preceding rules of § 25.254 are based on cdma2000 and IS-95 system architecture. To the extent that a Big LEO MSS licensee is able to demonstrate that the use of different system architectures would... section, an MSS licensee is permitted to apply for ATC authorization based on another system architecture...
Code of Federal Regulations, 2011 CFR
2011-10-01
...: The preceding rules of § 25.254 are based on cdma2000 and IS-95 system architecture. To the extent that a Big LEO MSS licensee is able to demonstrate that the use of different system architectures would... section, an MSS licensee is permitted to apply for ATC authorization based on another system architecture...
Expert Systems on Multiprocessor Architectures. Phase 1
1988-08-01
...great rate) as early experience indicates what alternative aspect of system operation should have been monitored in any given completed run. The design goals that emerged then were (1) that the simulation system should... (Stanford University, Knowledge Systems Laboratory.)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ernest A. Mancini
The University of Alabama in cooperation with Texas A&M University, McGill University, Longleaf Energy Group, Strago Petroleum Corporation, and Paramount Petroleum Company are undertaking an integrated, interdisciplinary geoscientific and engineering research project. The project is designed to characterize and model reservoir architecture, pore systems and rock-fluid interactions at the pore to field scale in Upper Jurassic Smackover reef and carbonate shoal reservoirs associated with varying degrees of relief on pre-Mesozoic basement paleohighs in the northeastern Gulf of Mexico. The project effort includes the prediction of fluid flow in carbonate reservoirs through reservoir simulation modeling that utilizes geologic reservoir characterization and modeling and the prediction of carbonate reservoir architecture, heterogeneity and quality through seismic imaging. The primary objective of the project is to increase the profitability, producibility and efficiency of recovery of oil from existing and undiscovered Upper Jurassic fields characterized by reef and carbonate shoals associated with pre-Mesozoic basement paleohighs. The principal research effort for Year 3 of the project has been reservoir characterization, 3-D modeling, testing of the geologic-engineering model, and technology transfer. This effort has included six tasks: (1) the study of seismic attributes, (2) petrophysical characterization, (3) data integration, (4) the building of the geologic-engineering model, (5) the testing of the geologic-engineering model and (6) technology transfer. This work was scheduled for completion in Year 3. Progress on the project is as follows: geoscientific reservoir characterization is completed. The architecture, porosity types and heterogeneity of the reef and shoal reservoirs at Appleton and Vocation Fields have been characterized using geological and geophysical data. The study of rock-fluid interactions has been completed. Observations regarding the diagenetic processes influencing pore system development and heterogeneity in these reef and shoal reservoirs have been made. Petrophysical and engineering property characterization has been completed. Porosity and permeability data at Appleton and Vocation Fields have been analyzed, and well performance analysis has been conducted. Data integration is up to date, in that the geological, geophysical, petrophysical and engineering data collected to date for Appleton and Vocation Fields have been compiled into a fieldwide digital database. 3-D geologic modeling of the structures and reservoirs at Appleton and Vocation Fields has been completed. The models represent an integration of geological, petrophysical and seismic data. 3-D reservoir simulation of the reservoirs at Appleton and Vocation Fields has been completed. The 3-D geologic models served as the framework for the simulations. The geologic-engineering models of the Appleton and Vocation Field reservoirs have been developed. These models are being tested. The geophysical interpretation for the paleotopographic feature being tested has been made, and the study of the data resulting from drilling of a well on this paleohigh is in progress. Numerous presentations on reservoir characterization and modeling at Appleton and Vocation Fields have been made at professional meetings and conferences and a short course on microbial reservoir characterization and modeling based on these fields has been prepared.
Reconfigurable firmware-defined radios synthesized from standard digital logic cells
NASA Astrophysics Data System (ADS)
Faisal, Muhammad; Park, Youngmin; Wentzloff, David D.
2011-06-01
This paper presents recent work on reconfigurable all-digital radio architectures. We leverage the flexibility and scalability of synthesized digital cells to construct reconfigurable radio architectures that consume significantly less power than a software-defined radio implementing similar architectures. We present two prototypes of such architectures that can receive and demodulate FM and FRS band signals. Moreover, a radio architecture based on a reconfigurable all-digital phase-locked loop for coherent demodulation is presented.
Enabling GEODSS for Space Situational Awareness (SSA)
NASA Astrophysics Data System (ADS)
Wootton, S.
2016-09-01
The Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) System has been in operation since the mid-1980's. While GEODSS has been the Space Surveillance Network's (SSN's) workhorse in terms of deep space surveillance, it has not undergone a significant modernization since the 1990's. This means GEODSS continues to operate under a mostly obsolete, legacy data processing baseline. The System Program Office (SPO) responsible for GEODSS, SMC/SYGO, has a number of advanced Space Situational Awareness (SSA)-related efforts in progress, in the form of innovative optical capabilities, data processing algorithms, and hardware upgrades. Each of these efforts is in various stages of evaluation and acquisition. These advanced capabilities rely upon a modern computing environment in which to integrate, but GEODSS does not have one—yet. The SPO is also executing a Service Life Extension Program (SLEP) to modernize the various subsystems within GEODSS, along with a parallel effort to implement a complete, modern software re-architecture. The goal is to use a modern, service-based architecture to provide expedient integration as well as easier and more sustainable expansion. This presentation will describe these modernization efforts in more detail and discuss how adopting such modern paradigms and practices will help ensure the GEODSS system remains relevant and sustainable far beyond 2027.
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during the operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45 times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures achieves 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the conventional best result, it still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while operating the predefined gestures, which substantially enhances the user experience.
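The fusion algorithm used in the paper is not described in the abstract; as a generic illustration of motion-sensor fusion, the sketch below applies a standard complementary filter that blends accelerometer tilt with integrated gyroscope rate.

```python
import math

def complementary_filter(acc_samples, gyro_samples, dt=0.01, alpha=0.98):
    """Fuse accelerometer tilt (noisy but drift-free) with gyroscope rate
    (smooth but drifting) into a single pitch-angle estimate.
    Generic illustration only, not the fusion scheme of the cited paper."""
    pitch = 0.0
    estimates = []
    for (ax, ay, az), gy in zip(acc_samples, gyro_samples):
        acc_pitch = math.atan2(-ax, math.hypot(ay, az))  # tilt from gravity
        pitch = alpha * (pitch + gy * dt) + (1.0 - alpha) * acc_pitch
        estimates.append(pitch)
    return estimates

# Tiny synthetic example: level device while the gyro reports ~0.1 rad/s.
acc = [(0.0, 0.0, 9.81)] * 50
gyro = [0.1] * 50
print(complementary_filter(acc, gyro)[-1])
```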
NASA Technical Reports Server (NTRS)
Ivancic, William D.
2003-01-01
Traditional NASA missions, both near Earth and deep space, have been stovepipe in nature and point-to-point in architecture. Recently, NASA and others have conceptualized missions that required space-based networking. The notion of networks in space is a drastic shift in thinking and requires entirely new architectures, radio systems (antennas, modems, and media access), and possibly even new protocols. A full system engineering approach for some key mission architectures will occur that considers issues such as the science being performed, stationkeeping, antenna size, contact time, data rates, radio-link power requirements, media access techniques, and appropriate networking and transport protocols. This report highlights preliminary architecture concepts and key technologies that will be investigated.
Developing Information Power Grid Based Algorithms and Software
NASA Technical Reports Server (NTRS)
Dongarra, Jack
1998-01-01
This was an exploratory study to enhance our understanding of problems involved in developing large scale applications in a heterogeneous distributed environment. It is likely that the large scale applications of the future will be built by coupling specialized computational modules together. For example, efforts now exist to couple ocean and atmospheric prediction codes to simulate a more complete climate system. These two applications differ in many respects. They have different grids, the data is in different unit systems and the algorithms for integrating in time are different. In addition the code for each application is likely to have been developed on different architectures and tend to have poor performance when run on an architecture for which the code was not designed, if it runs at all. Architectural differences may also induce differences in data representation which affect precision and convergence criteria as well as data transfer issues. In order to couple such dissimilar codes some form of translation must be present. This translation should be able to handle interpolation from one grid to another as well as construction of the correct data field in the correct units from available data. Even if a code is to be developed from scratch, a modular approach will likely be followed in that standard scientific packages will be used to do the more mundane tasks such as linear algebra or Fourier transform operations. This approach allows the developers to concentrate on their science rather than becoming experts in linear algebra or signal processing. Problems associated with this development approach include difficulties associated with data extraction and translation from one module to another, module performance on different nodal architectures, and others. In addition to these data and software issues there exist operational issues such as platform stability and resource management.
Flexible Software Architecture for Visualization and Seismic Data Analysis
NASA Astrophysics Data System (ADS)
Petunin, S.; Pavlov, I.; Mogilenskikh, D.; Podzyuban, D.; Arkhipov, A.; Baturuin, N.; Lisin, A.; Smith, A.; Rivers, W.; Harben, P.
2007-12-01
Research in the field of seismology requires software and signal processing utilities for seismogram manipulation and analysis. Seismologists and data analysts often encounter a major problem in the use of any particular software application specific to seismic data analysis: the tuning of commands and windows to the specific waveforms and hot key combinations so as to fit their familiar informational environment. The ability to modify the user's interface independently from the developer requires an adaptive code structure. An adaptive code structure also allows for expansion of software capabilities such as new signal processing modules and implementation of more efficient algorithms. Our approach is to use a flexible "open" architecture for development of geophysical software. This report presents an integrated solution for organizing a logical software architecture based on the Unix version of the Geotool software implemented on the Microsoft .NET 2.0 platform. Selection of this platform greatly expands the variety and number of computers that can implement the software, including laptops that can be utilized in field conditions. It also facilitates implementation of communication functions for seismic data requests from remote databases through the Internet. The main principle of the new architecture for Geotool is that scientists should be able to add new routines for digital waveform analysis via software plug-ins that utilize the basic Geotool display for GUI interaction. The use of plug-ins allows the efficient integration of diverse signal-processing software, including software still in preliminary development, into an organized platform without changing the fundamental structure of that platform itself. An analyst's use of Geotool is tracked via a metadata file so that future studies can reconstruct, and alter, the original signal processing operations. The work has been completed in the framework of a joint Russian-American project.
Space station needs, attributes and architectural options: Architectural options and selection
NASA Technical Reports Server (NTRS)
Nelson, W. G.
1983-01-01
The approach, study results, and recommendations for defining and selecting space station architectural options are described. Space station system architecture is defined as the arrangement of elements (manned and unmanned on-orbit facilities, shuttle vehicles, orbital transfer vehicles, etc.), the number of these elements, their location (orbital inclination and altitude), and their functional performance capability (power, volume, crew, etc.). Architectural options are evaluated based on the degree of mission capture versus cost and required funding rate. Mission capture refers to the number of missions accommodated by the particular architecture.
ERIC Educational Resources Information Center
Yuping, Cai; Shuang, Liang
2017-01-01
The traditional undergraduate education mode of architecture has been unable to adapt to the rapid development of society. Taking a junior professional course of architecture (the preliminary course of architectural design) as an example, this paper analyzes the problems existing in the current lower-grade professional courses and puts forward…
An investigation of hardwood plywood markets. Part 1. Architectural woodworkers
Craig L. Forbes; Larry G. Jahn; Philip A. Araman
2001-01-01
This is the first part of a two-part study investigating markets for hardwood plywood. North American architectural woodworkers were surveyed to better understand the structure and use of wood-based panels in the industry. A questionnaire was mailed to a sample of U.S. and Canadian architectural woodworkers. The sample consisted of members of the Architectural Woodwork...
Organic Rankine cycle - review and research directions in engine applications
NASA Astrophysics Data System (ADS)
Panesar, Angad
2017-11-01
Waste heat to power conversion using Organic Rankine Cycles (ORC) is expected to play an important role in CO2 reductions from diesel engines. Firstly, a review of automotive ORCs is presented focusing on the pure working fluids, thermal architectures and expanders. The discussion includes, but is not limited to: R245fa, ethanol and water as fluids; series, parallel and cascade as architectures; dry saturated, superheated and supercritical as expansion conditions; and scroll, radial turbine and piston as expansion machines. Secondly, research directions in versatile expanders and holistic architectures (NOx + CO2) are proposed. Benefits of using the proposed unconventional approaches are quantified using Ricardo Wave and Aspen HYSYS for diesel engine and ORC modelling. Results indicate that the implementation of a versatile piston expander tolerant to two-phase operation and using cyclopentane can potentially increase the highway drive cycle power by 8%. Furthermore, the holistic architecture, offering complete utilisation of charge air and exhaust recirculation heat, increased the performance noticeably to 5% of engine power at the design point condition.
Human Transportation System (HTS) study: Executive summary
NASA Technical Reports Server (NTRS)
Lance, N.; Geyer, M. S.; Gaunce, M. T.
1993-01-01
Work completed under the Human Transportation System Study is summarized. This study was conducted by the New Initiatives Office at JSC with the technical support of Boeing, General Dynamics, Lockheed, McDonnell-Douglas, Martin Marietta, and Rockwell. The study was designed to generate information on determining the appropriate path to follow for new system development to meet the Nation's space transportation needs. The study evaluates 18 transportation architecture options using a parametric set of mission requirements. These options include use of current systems as well as proposed systems to assess the impact of various considerations, such as the cost of alternate access, or the benefit of separating people and cargo. The architecture options are compared to each other with six measurable evaluation criteria or attributes. They are the following: funding profile, human safety, probability of mission success, architecture cost risk, launch schedule confidence, and environmental impact. Values for these attributes are presented for the architecture options, with pertinent conclusions and recommendations.
Human Transportation System (HTS) study, volume 2
NASA Technical Reports Server (NTRS)
Lance, N.; Geyer, M. S.; Gaunce, M. T.
1993-01-01
This report summarizes work completed under the Human Transportation System Study. This study was conducted by the New Initiatives Office at JSC with the technical support of Boeing, General Dynamics, Lockheed, McDonnell-Douglas, Martin Marietta, and Rockwell. The study was designed to generate information on determining the appropriate path to follow for new system development to meet the Nation's space transportation needs. The study evaluates 18 transportation architecture options using a parametric set of mission requirements. These options include use of current systems (e.g., Shuttle, Titan, etc.) as well as proposed systems (e.g., PLS, Single-Stage-to-Orbit, etc.) to assess the impact of various considerations, such as the cost of alternate access, or the benefit of separating people and cargo. The architecture options are compared to each other with six measurable evaluation criteria or attributes. They are: funding profile, human safety, probability of mission success, architecture cost risk, launch schedule confidence, and environmental impact. Values for these attributes are presented for the architecture options, with pertinent conclusions and recommendations.
Human Transportation System (HTS) study, volume 1
NASA Technical Reports Server (NTRS)
Lance, N.; Geyer, M. S.; Gaunce, M. T.
1993-01-01
Work completed under the Human Transportation System Study is summarized. This study was conducted by the New Initiatives Office at JSC with the technical support of Boeing, General Dynamics, Lockheed, McDonnell-Douglas, Martin Marietta, and Rockwell. The study was designed to generate information on determining the appropriate path to follow for new system development to meet the Nation's space transportation needs. The study evaluates 18 transportation architecture options using a parametric set of mission requirements. These options include use of current systems as well as proposed systems to assess the impact of various considerations, such as the cost of alternate access, or the benefit of separating people and cargo. The architecture options are compared to each other with six measurable evaluation criteria or attributes. They are the following: funding profile, human safety, probability of mission success, architecture cost risk, launch schedule confidence, and environmental impact. Values for these attributes are presented for the architecture options, with pertinent conclusions and recommendations.
Multi-Body Orbit Architectures for Lunar South Pole Coverage
NASA Technical Reports Server (NTRS)
Grebow, D. J.; Ozimek, M. T.; Howell, K. C.; Folta, D. C.
2006-01-01
A potential ground station at the lunar south pole has prompted studies of orbit architectures that ensure adequate coverage. Constant communications can be achieved with two spacecraft in different combinations of Earth-Moon libration point orbits. Halo and vertical families, as well as other orbits near L1 and L2 are considered. The investigation includes detailed results using nine different orbits with periods ranging from 7 to 16 days. Natural solutions are generated in a full ephemeris model, including solar perturbations. A preliminary station-keeping analysis is also completed.
NASA Technical Reports Server (NTRS)
Wercinski, Paul F.
2017-01-01
The ADEPT architecture represents a completely new approach for entry vehicle design using a high-performance carbon fabric to serve as the primary drag surface of the mechanically deployed decelerator and to protect the payload from hypersonic aerothermal heating during entry. The initial system-level development of the nano-ADEPT architecture will culminate in the launch of a 0.7-m deployed diameter ADEPT sounding rocket flight experiment. The SR-1 sounding rocket flight experiment is a critical milestone in the technology maturation plan for ADEPT and will generate performance data on in-space deployment and aerodynamic stability.
1989-01-01
Historical and Architectural Field Survey of a Portion of Fort Scott Lake Project, Bourbon County, Kansas. LeAnne Baird, Principal Investigator; S. Alan Skinner, Project Director; with LeAnne Baird, Kathy Morgan, Allyn Mateu, Marian ... Maps and graphs were completed by Ms. Morgan, Ms. Donahue, and David Higginbotham.
The semantic architecture of the World-Wide Molecular Matrix (WWMM).
Murray-Rust, Peter; Adams, Sam E; Downing, Jim; Townsend, Joe A; Zhang, Yong
2011-10-14
The World-Wide Molecular Matrix (WWMM) is a ten year project to create a peer-to-peer (P2P) system for the publication and collection of chemical objects, including over 250,000 molecules. It has now been instantiated in a number of repositories which include data encoded in Chemical Markup Language (CML) and linked by URIs and RDF. The technical specification and implementation is now complete. We discuss the types of architecture required to implement nodes in the WWMM and consider the social issues involved in adoption.
SKA Telescope Manager (TM): status and architecture overview
NASA Astrophysics Data System (ADS)
Natarajan, Swaminathan; Barbosa, Domingos; Barraca, Joao P.; Bridger, Alan; Choudhury, Subhrojyoti R.; Di Carlo, Matteo; Dolci, Mauro; Gupta, Yashwant; Guzman, Juan; Van den Heever, Lize; Le Roux, Gerhard; Nicol, Mark; Patil, Mangesh; Smareglia, Riccardo; Swart, Paul; Thompson, Roger; Vrcic, Sonja; Williams, Stewart
2016-07-01
The SKA radio telescope project is building two telescopes, SKA-Low in Australia and SKA-Mid in South Africa respectively. The Telescope Manager is responsible for the observations lifecycle and for monitoring and control of each instrument, and is being developed by an international consortium. The project is currently in the design phase, with the Preliminary Design Review having been successfully completed, along with re-baselining to match project scope to available budget. This report presents the status of the Telescope Manager work, key architectural challenges and our approach to addressing them.
NASA Astrophysics Data System (ADS)
Hegde, Ganapathi; Vaya, Pukhraj
2013-10-01
This article presents a parallel architecture for the 3-D discrete wavelet transform (3-DDWT). The proposed design is based on the 1-D pipelined lifting scheme. The architecture is fully scalable beyond the present coherent Daubechies (9,7) filter bank. This 3-DDWT architecture has advantages such as no group-of-pictures restriction and reduced memory referencing. It offers low power consumption, low latency and high throughput. The computing technique is based on the concept that the lifting scheme minimises the storage requirement. The application-specific integrated circuit implementation of the proposed architecture is done by synthesising it using a 65 nm Taiwan Semiconductor Manufacturing Company standard cell library. It offers a speed of 486 MHz with a power consumption of 2.56 mW. This architecture is suitable for real-time video compression even with large frame dimensions.
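The architecture is built on the (9,7) lifting filter bank; as a structural illustration only, the far simpler Haar lifting steps below show why lifting allows in-place computation with minimal storage (the 9/7 case adds further predict/update stages with different coefficients).

```python
def haar_lifting_forward(x):
    """Haar wavelet via lifting (structural illustration only; the cited
    architecture uses the 9/7 filter bank, which has more lifting stages).
    Returns (approximation, detail) coefficient lists."""
    assert len(x) % 2 == 0
    s, d = x[0::2], x[1::2]                     # split into even/odd samples
    d = [di - si for si, di in zip(s, d)]       # predict: detail = odd - even
    s = [si + di / 2 for si, di in zip(s, d)]   # update: keeps the running average
    return s, d

def haar_lifting_inverse(s, d):
    s = [si - di / 2 for si, di in zip(s, d)]   # undo update
    d = [di + si for si, di in zip(s, d)]       # undo predict
    x = [0.0] * (2 * len(s))
    x[0::2], x[1::2] = s, d
    return x

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
approx, detail = haar_lifting_forward(signal)
print(approx, detail)
print(haar_lifting_inverse(approx, detail))  # reconstructs the original signal
```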
Interior Reconstruction Using the 3d Hough Transform
NASA Astrophysics Data System (ADS)
Dumitru, R.-C.; Borrmann, D.; Nüchter, A.
2013-02-01
Laser scanners are often used to create accurate 3D models of buildings for civil engineering purposes, but the process of manually vectorizing a 3D point cloud is time consuming and error-prone (Adan and Huber, 2011). Therefore, the need to characterize and quantify complex environments in an automatic fashion arises, posing challenges for data analysis. This paper presents a system for 3D modeling by detecting planes in 3D point clouds, based on which the scene is reconstructed at a high architectural level by automatically removing clutter and foreground data. The implemented software detects openings, such as windows and doors, and completes the 3D model by inpainting.
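A minimal sketch of the standard accumulator-based 3D Hough transform for plane detection is shown below; the paper's actual implementation and its optimizations are not detailed in the abstract, so this is only the textbook form of the technique.

```python
import numpy as np

def hough_planes(points, n_theta=30, n_phi=60, rho_res=0.1):
    """Minimal 3D Hough transform for plane detection (illustrative sketch).
    Each plane is parameterized as rho = x*sin(t)cos(p) + y*sin(t)sin(p) + z*cos(t)."""
    thetas = np.linspace(0, np.pi, n_theta)
    phis = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    # Unit normals for every (theta, phi) cell.
    normals = np.stack([np.outer(np.sin(thetas), np.cos(phis)),
                        np.outer(np.sin(thetas), np.sin(phis)),
                        np.outer(np.cos(thetas), np.ones(n_phi))], axis=-1)
    rho = points @ normals.reshape(-1, 3).T          # signed distances, (N, T*P)
    rho_max = np.abs(rho).max()
    n_rho = int(2 * rho_max / rho_res) + 1
    rho_idx = ((rho + rho_max) / rho_res).astype(int)
    acc = np.zeros((n_theta * n_phi, n_rho), dtype=int)
    for cell in range(n_theta * n_phi):              # vote into the accumulator
        np.add.at(acc[cell], rho_idx[:, cell], 1)
    cell, r = np.unravel_index(acc.argmax(), acc.shape)
    return normals.reshape(-1, 3)[cell], r * rho_res - rho_max, acc.max()

# Synthetic test: noisy points on the plane z = 1.
pts = np.random.rand(500, 3)
pts[:, 2] = 1.0 + 0.01 * np.random.randn(500)
normal, dist, votes = hough_planes(pts)
print(normal, dist, votes)   # expect a normal near (0, 0, +/-1) and |dist| near 1
```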
Design of testbed and emulation tools
NASA Technical Reports Server (NTRS)
Lundstrom, S. F.; Flynn, M. J.
1986-01-01
The research summarized was concerned with the design of testbed and emulation tools suitable to assist in projecting, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such testbed and emulation tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, either as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease moving to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.
Beyond the Renderer: Software Architecture for Parallel Graphics and Visualization
NASA Technical Reports Server (NTRS)
Crockett, Thomas W.
1996-01-01
As numerous implementations have demonstrated, software-based parallel rendering is an effective way to obtain the needed computational power for a variety of challenging applications in computer graphics and scientific visualization. To fully realize their potential, however, parallel renderers need to be integrated into a complete environment for generating, manipulating, and delivering visual data. We examine the structure and components of such an environment, including the programming and user interfaces, rendering engines, and image delivery systems. We consider some of the constraints imposed by real-world applications and discuss the problems and issues involved in bringing parallel rendering out of the lab and into production.
Chip architecture - A revolution brewing
NASA Astrophysics Data System (ADS)
Guterl, F.
1983-07-01
Techniques being explored by microchip designers and manufacturers to both speed up memory access and instruction execution while protecting memory are discussed. Attention is given to hardwiring control logic, pipelining for parallel processing, devising orthogonal instruction sets for interchangeable instruction fields, and the development of hardware for implementation of virtual memory and multiuser systems to provide memory management and protection. The inclusion of microcode in mainframes eliminated logic circuits that control timing and gating of the CPU. However, improvements in memory architecture have reduced access time to below that needed for instruction execution. Hardwiring the functions as a virtual memory enhances memory protection. Parallelism involves a redundant architecture, which allows identical operations to be performed simultaneously, and can be directed with microcode to avoid abortion of intermediate instructions once one set of instructions has been completed.
NASA Astrophysics Data System (ADS)
Gel, Aytekin; Hu, Jonathan; Ould-Ahmed-Vall, ElMoustapha; Kalinkin, Alexander A.
2017-02-01
Legacy codes remain a crucial element of today's simulation-based engineering ecosystem due to the extensive validation process and investment in such software. The rapid evolution of high-performance computing architectures necessitates the modernization of these codes. One approach to modernization is a complete overhaul of the code. However, this could require extensive investments, such as rewriting in modern languages, new data constructs, etc., which will necessitate systematic verification and validation to re-establish the credibility of the computational models. The current study advocates using a more incremental approach and is a culmination of several modernization efforts of the legacy code MFIX, which is an open-source computational fluid dynamics code that has evolved over several decades, is widely used in multiphase flows and is still being developed by the National Energy Technology Laboratory. Two different modernization approaches, 'bottom-up' and 'top-down', are illustrated. Preliminary results show up to 8.5x improvement at the selected kernel level with the first approach, and up to 50% improvement in total simulated time with the latter, for the demonstration cases and target HPC systems employed.
Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2009-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
Robinson, Elise B.; Kirby, Andrew; Ruparel, Kosha; Yang, Jian; McGrath, Lauren; Anttila, Verneri; Neale, Benjamin M.; Merikangas, Kathleen; Lehner, Thomas; Sleiman, Patrick M.A.; Daly, Mark J.; Gur, Ruben; Gur, Raquel; Hakonarson, Hakon
2014-01-01
The objective of this analysis was to examine the genetic architecture of diverse cognitive abilities in children and adolescents, including the magnitude of common genetic effects and patterns of shared and unique genetic influences. Subjects included 3,689 members of the Philadelphia Neurodevelopmental Cohort, a general population sample of ages 8-21 years who completed an extensive battery of cognitive tests. We used genome-wide complex trait analysis (GCTA) to estimate the SNP-based heritability of each domain, as well as the genetic correlation between all domains that showed significant genetic influence. Several of the individual domains suggested strong influence of common genetic variants (e.g. reading ability, h2g=0.43, p=4e-06; emotion identification, h2g=0.36, p=1e-05; verbal memory, h2g=0.24, p=0.005). The genetic correlations highlighted trait domains that are candidates for joint interrogation in future genetic studies (e.g. language reasoning and spatial reasoning, r(g)=0.72, p=0.007). These results can be used to structure future genetic and neuropsychiatric investigations of diverse cognitive abilities. PMID:25023143
Systems budgets architecture and development for the Maunakea Spectroscopic Explorer
NASA Astrophysics Data System (ADS)
Mignot, Shan; Flagey, Nicolas; Szeto, Kei; Murowinski, Rick; McConnachie, Alan
2016-08-01
The Maunakea Spectroscopic Explorer (MSE) project is an enterprise to upgrade the existing Canada-France-Hawaii observatory into a spectroscopic facility based on a 10 meter-class telescope. As such, the project relies on engineering requirements not limited only to its instruments (the low, medium and high resolution spectrographs) but for the whole observatory. The science requirements, the operations concept, the project management and the applicable regulations are the basis from which these requirements are initially derived, yet they do not form hierarchies as each may serve several purposes, that is, pertain to several budgets. Completeness and consistency are hence the main systems engineering challenges for such a large project as MSE. Special attention is devoted to ensuring the traceability of requirements via parametric models, derivation documents, simulations, and finally maintaining KAOS diagrams and a database under IBM Rational DOORS linking them together. This paper will present the architecture of the main budgets under development and the associated processes, expand to highlight those that are interrelated and how the system, as a whole, is then optimized by modelling and analysis of the pertinent system parameters.
Overview of the Development of the Advanced Electric Propulsion System (AEPS)
NASA Technical Reports Server (NTRS)
Herman, Daniel; Tofil, Todd; Santiago, Walter; Kamhawi, Hani; Polk, James; Snyder, John Steven; Hofer, Richard; Picha, Frank; Schmidt, George
2017-01-01
NASA is committed to the demonstration and application of high-power solar electric propulsion to meet its future mission needs. It is continuing to develop the 14 kW Advanced Electric Propulsion System (AEPS) under a project that recently completed an Early Integrated System Test (EIST) and System Preliminary Design Review (PDR). In addition, NASA is pursuing external partnerships in order to demonstrate Solar Electric Propulsion (SEP) technology and the advantages of high-power electric propulsion-based spacecraft. The recent announcement of a Power and Propulsion Element (PPE) as the first major piece of an evolvable human architecture to Mars has replaced the Asteroid Redirect Robotic Mission (ARRM) as the most likely first application of the AEPS Hall thruster system. This high-power SEP capability, or an extensible derivative of it, has been recognized as a critical part of a new, affordable human exploration architecture for missions beyond-low-Earth-orbit. This paper presents the status of AEPS development activities, and describes how AEPS hardware will be integrated into the PPE ion propulsion system.
Automatic Texture Mapping of Architectural and Archaeological 3d Models
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Stallmann, D.
2012-07-01
Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models by digital photographs with software packages, such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya, still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results of automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.
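The ML3DImage and Point Cloud Painter algorithms are not described in the abstract; the sketch below only illustrates the generic Z-buffer idea mentioned above, keeping the nearest point per pixel as visible.

```python
import numpy as np

def zbuffer_visibility(points_cam, fx, fy, cx, cy, width, height):
    """Generic Z-buffer visibility test for points already in camera coordinates
    (illustrative sketch; not the ML3DImage or Point Cloud Painter algorithm).
    Returns a boolean mask: True where a point is the nearest one in its pixel."""
    z = points_cam[:, 2]
    u = np.round(fx * points_cam[:, 0] / z + cx).astype(int)
    v = np.round(fy * points_cam[:, 1] / z + cy).astype(int)
    in_image = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    zbuf = np.full((height, width), np.inf)
    for i in np.flatnonzero(in_image):          # first pass: nearest depth per pixel
        zbuf[v[i], u[i]] = min(zbuf[v[i], u[i]], z[i])
    visible = np.zeros(len(points_cam), dtype=bool)
    for i in np.flatnonzero(in_image):          # second pass: mark the winners
        visible[i] = z[i] <= zbuf[v[i], u[i]]
    return visible

pts = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 5.0]])   # two points on the same ray
print(zbuffer_visibility(pts, 500, 500, 320, 240, 640, 480))  # [ True False]
```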
Walls, Alexandra C.; Tortorici, M. Alejandra; Frenz, Brandon; Snijder, Joost; Li, Wentao; Rey, Félix A.; DiMaio, Frank; Bosch, Berend-Jan; Veesler, David
2017-01-01
The threat of a major coronavirus pandemic urges the development of suitable strategies to combat these pathogens. HCoV-NL63 is an α-coronavirus that can cause severe lower respiratory tract infections requiring hospitalization. We report here the 3.4 Å resolution cryo-electron microscopy reconstruction of the HCoV-NL63 coronavirus spike glycoprotein trimer, which is the conformational machine responsible for entry into host cells and the sole target of neutralizing antibodies during infection. The map resolves the extensive glycan shield obstructing the protein surface and, in combination with mass-spectrometry, provides a structural framework to understand accessibility to antibodies. The structure also reveals a remarkable modular architecture of the receptor-binding subunit and the complete architecture of the fusion machinery including the triggering loop and the C-terminal domains, which contribute to anchoring the trimer to the viral membrane. Our data further suggest that HCoV-NL63 and other coronaviruses use molecular trickery, based on masking of epitopes with glycans and activating conformational changes, to evade the immune system of infected hosts. PMID:27617430
An ISRU Propellant Production System to Fully Fuel a Mars Ascent Vehicle
NASA Technical Reports Server (NTRS)
Kleinhenz, Julie; Paz, Aaron
2017-01-01
In-situ resource utilization (ISRU) of Mars resources was baselined in the 2009 Design Reference Architecture (DRA) 5.0, but only for oxygen production using atmospheric CO2; the methane (LCH4) needed for ascent propulsion of the Mars Ascent Vehicle (MAV) would need to be brought from Earth. However, extracting water from the Martian regolith enables the production of both oxygen and methane from Mars resources, and water could also be used for other applications including life support, radiation shielding, and plant growth. Water extraction was not baselined in DRA 5.0 due to perceived difficulties and complexity in processing regolith. The NASA Evolvable Mars Campaign (EMC) requested studies to look at the quantitative benefits and trades of using Mars water ISRU. Phase 1, completed in October 2015, examined architecture scenarios for regolith water retrieval. Phase 2 is a deep dive of one architecture concept, examining the end-to-end system size, mass, and power of an LCH4/LO2 ISRU production system under the Evolvable Mars Campaign assumptions of a pre-deployed Mars ascent vehicle (MAV), four crew members, and oxygen/methane propellants. The study generates a system model to roll up the mass and power of a full ISRU system and to enable parametric trade studies, leveraging models from previous studies and technology development programs, anchoring with mass and power performance from existing hardware, and using referenceable (published) numbers for traceability whenever possible. A modular approach allows subsystem trades and parametric studies. Propellant mass needs are taken from the most recently published MAV study (Polsgrove, T. et al. (2015), AIAA 2015-4416). MAV engines operate at mixture ratios (oxygen:methane) between 3:1 and 3.5:1, whereas the Sabatier reactor produces at a 4:1 ratio; therefore methane production is the driving requirement and excess oxygen will be produced.
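The 4:1 production ratio can be checked with back-of-the-envelope stoichiometry, assuming regolith-derived water is electrolyzed and the Sabatier reactor consumes atmospheric CO2; the arithmetic below is a reconstruction, not the paper's system model.

```python
# Back-of-the-envelope reconstruction of the 4:1 oxygen:methane figure
# (my own arithmetic, not taken from the paper).
#
# Sabatier:      CO2 + 4 H2 -> CH4 + 2 H2O
# Electrolysis:  2 H2O      -> 2 H2 + O2
#
# Recycling the Sabatier water and supplying 2 extra H2O from regolith gives
# the net reaction  CO2 + 2 H2O -> CH4 + 2 O2.
M_CH4, M_O2 = 16.04, 32.00          # g/mol
o2_per_ch4 = 2 * M_O2 / M_CH4       # mass of O2 produced per mass of CH4
print(round(o2_per_ch4, 2))         # ~3.99, i.e. the ~4:1 production ratio

engine_mixture_ratio = 3.5          # upper end of the MAV O2:CH4 mixture ratio
excess_o2_fraction = (o2_per_ch4 - engine_mixture_ratio) / o2_per_ch4
print(round(excess_o2_fraction, 2)) # ~0.12 -> roughly 12% of the O2 is surplus
```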
CE-ACCE: The Cloud Enabled Advanced sCience Compute Environment
NASA Astrophysics Data System (ADS)
Cinquini, L.; Freeborn, D. J.; Hardman, S. H.; Wong, C.
2017-12-01
Traditionally, Earth Science data from NASA remote sensing instruments has been processed by building custom data processing pipelines (often based on a common workflow engine or framework) which are typically deployed and run on an internal cluster of computing resources. This approach has some intrinsic limitations: it requires each mission to develop and deploy a custom software package on top of the adopted framework; it makes use of dedicated hardware, network and storage resources, which must be specifically purchased, maintained and re-purposed at mission completion; and computing services cannot be scaled on demand beyond the capability of the available servers.More recently, the rise of Cloud computing, coupled with other advances in containerization technology (most prominently, Docker) and micro-services architecture, has enabled a new paradigm, whereby space mission data can be processed through standard system architectures, which can be seamlessly deployed and scaled on demand on either on-premise clusters, or commercial Cloud providers. In this talk, we will present one such architecture named CE-ACCE ("Cloud Enabled Advanced sCience Compute Environment"), which we have been developing at the NASA Jet Propulsion Laboratory over the past year. CE-ACCE is based on the Apache OODT ("Object Oriented Data Technology") suite of services for full data lifecycle management, which are turned into a composable array of Docker images, and complemented by a plug-in model for mission-specific customization. We have applied this infrastructure to both flying and upcoming NASA missions, such as ECOSTRESS and SMAP, and demonstrated deployment on the Amazon Cloud, either using simple EC2 instances, or advanced AWS services such as Amazon Lambda and ECS (EC2 Container Services).
A multi-agent architecture for geosimulation of moving agents
NASA Astrophysics Data System (ADS)
Vahidnia, Mohammad H.; Alesheikh, Ali A.; Alavipanah, Seyed Kazem
2015-10-01
In this paper, a novel architecture is proposed in which an axiomatic derivation system in the form of first-order logic facilitates declarative explanation and spatial reasoning. Simulation of environmental perception and interaction between autonomous agents is designed with a geographic belief-desire-intention and a request-inform-query model. The architecture has a complementary quantitative component that supports collaborative planning based on the concept of equilibrium and game theory. This new architecture presents a departure from current best practice in geographic agent-based modelling. Implementation tasks are discussed in some detail, as well as scenarios for fleet management and disaster management.
Biomimetic design processes in architecture: morphogenetic and evolutionary computational design.
Menges, Achim
2012-03-01
Design computation has profound impact on architectural design methods. This paper explains how computational design enables the development of biomimetic design processes specific to architecture, and how they need to be significantly different from established biomimetic processes in engineering disciplines. The paper first explains the fundamental difference between computer-aided and computational design in architecture, as the understanding of this distinction is of critical importance for the research presented. Thereafter, the conceptual relation and possible transfer of principles from natural morphogenesis to design computation are introduced and the related developments of generative, feature-based, constraint-based, process-based and feedback-based computational design methods are presented. This morphogenetic design research is then related to exploratory evolutionary computation, followed by the presentation of two case studies focusing on the exemplary development of spatial envelope morphologies and urban block morphologies.
Detailed Primitive-Based 3d Modeling of Architectural Elements
NASA Astrophysics Data System (ADS)
Remondino, F.; Lo Buglio, D.; Nony, N.; De Luca, L.
2012-07-01
The article describes a pipeline, based on image data, for the 3D reconstruction of building façades or architectural elements and the successive modeling using geometric primitives. The approach overcomes some existing problems in modeling architectural elements and delivers efficient-in-size, reality-based textured 3D models useful for metric applications. For the 3D reconstruction, an open-source pipeline developed within the TAPENADE project is employed. In the successive modeling steps, the user manually selects an area containing an architectural element (capital, column, bas-relief, window tympanum, etc.) and then the procedure fits geometric primitives and computes disparity and displacement maps in order to tie visual and geometric information together in a light but detailed 3D model. Examples are reported and commented.
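The paper's own primitive library and fitting method are not detailed in the abstract; as one hedged example of a primitive-fitting building block, the sketch below performs a total-least-squares plane fit via SVD.

```python
import numpy as np

def fit_plane(points):
    """Total least-squares plane fit: returns (centroid, unit normal).
    A standard primitive-fitting building block, shown as a generic example."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[-1]

# Noisy samples of the plane z = 0.5 x + 2.
xy = np.random.rand(200, 2)
z = 0.5 * xy[:, 0] + 2.0 + 0.01 * np.random.randn(200)
c, n = fit_plane(np.column_stack([xy, z]))
print(c, n)   # normal is proportional to (0.5, 0, -1), up to sign
```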
Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes
NASA Astrophysics Data System (ADS)
Huang, Shaoming
2003-06-01
An effective way to fabricate large-area, three-dimensional (3D) aligned CNT patterns based on the pyrolysis of iron(II) phthalocyanine (FePc) by a two-step process is reported. The controllable generation of different lengths and the selective growth of the aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the bases for generating such 3D aligned CNT architectures. By controlling experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated on a large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied for developing novel nanotube-based devices.
NASA Technical Reports Server (NTRS)
Lehmer, R.; Ingram, C.; Jovic, S.; Alderete, J.; Brown, D.; Carpenter, D.; LaForce, S.; Panda, R.; Walker, J.; Chaplin, P.;
2006-01-01
The Virtual Airspace Simulation Technology - Real-Time (VAST-RT) Project, an element of NASA's Virtual Airspace Modeling and Simulation (VAMS) Project, has been developing a distributed simulation capability that supports an extensible and expandable real-time, human-in-the-loop airspace simulation environment. The VAST-RT system architecture is based on the DoD High Level Architecture (HLA) and the VAST-RT HLA Toolbox, a common interface implementation that incorporates a number of novel design features. The scope of the initial VAST-RT integration activity (Capability 1) included the high-fidelity human-in-the-loop simulation facilities located at NASA/Ames Research Center and medium-fidelity pseudo-piloted target generators, such as the Airspace Traffic Generator (ATG) being developed as part of VAST-RT, as well as other real-time tools. This capability has been demonstrated in a gate-to-gate simulation. VAST-RT's Capability 2A has recently been completed, and this paper will discuss the improved integration of the real-time assets into VAST-RT, including the development of tools to integrate data collected across the simulation environment into a single data set for the researcher. Current plans for the completion of the VAST-RT distributed simulation environment (Capability 2B) and its use to evaluate future airspace capacity enhancing concepts being developed by VAMS will be discussed. Additionally, the simulation environment's application to other airspace and airport research projects is addressed.
Low resistivity ZnO-GO electron transport layer based CH₃NH₃PbI₃ solar cells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahmed, Muhammad Imran, E-mail: imranrahbar@scme.nust.edu.pk, E-mail: amirhabib@scme.nust.edu.pk; Hussain, Zakir; Mujahid, Mohammad
Perovskite-based solar cells have demonstrated impressive performances. Controlled environment synthesis and expensive hole transport material impede their potential commercialization. We report ambient air synthesis of hole transport layer free devices using ZnO-GO as electron selective contacts. Solar cells fabricated with a hole transport layer free architecture under ambient air conditions with ZnO as the electron selective contact achieved an efficiency of 3.02%. We have demonstrated that by incorporating GO in the ZnO matrix, low resistivity electron selective contacts, critical to improving the performance, can be achieved. We could achieve a maximum efficiency of 4.52% with our completed devices for the ZnO:GO composite. Impedance spectroscopy confirmed the decrease in series resistance and an increase in recombination resistance with inclusion of GO in the ZnO matrix. The effect of temperature on completed devices was investigated by recording impedance spectra at 40 and 60 °C, providing indirect evidence of the performance of solar cells at elevated temperatures.
A Low Cost VLSI Architecture for Spike Sorting Based on Feature Extraction with Peak Search.
Chang, Yuan-Jyun; Hwang, Wen-Jyi; Chen, Chih-Chang
2016-12-07
The goal of this paper is to present a novel VLSI architecture for spike sorting with high classification accuracy, low area costs and low power consumption. A novel feature extraction algorithm with low computational complexity is proposed for the design of the architecture. In the feature extraction algorithm, a spike is separated into two portions based on its peak value. The area of each portion is then used as a feature. The algorithm is simple to implement and less susceptible to noise interference. Based on the algorithm, a novel architecture capable of identifying peak values and computing spike areas concurrently is proposed. To further accelerate the computation, a spike can be divided into a number of segments for the local feature computation. The local features are subsequently merged with the global ones by a simple hardware circuit. The architecture can also be easily operated in conjunction with the circuits for commonly-used spike detection algorithms, such as the Non-linear Energy Operator (NEO). The architecture has been implemented as an Application-Specific Integrated Circuit (ASIC) in 90-nm technology. Comparisons to the existing works show that the proposed architecture is well suited for real-time multi-channel spike detection and feature extraction requiring low hardware area costs, low power consumption and high classification accuracy.
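A minimal software rendering of the described feature extraction, splitting an aligned spike at its peak and using the area on each side as the feature pair, is sketched below; the segment-wise hardware merging is not reproduced, and the use of absolute amplitudes for the areas is an assumption.

```python
import numpy as np

def peak_split_area_features(spike):
    """Split an aligned spike waveform at its peak sample and return the
    (area before peak, area after peak) feature pair. Areas are sums of
    absolute amplitudes, which is a simplifying assumption."""
    spike = np.asarray(spike, dtype=float)
    peak = int(np.argmax(np.abs(spike)))          # index of the peak value
    return np.abs(spike[:peak + 1]).sum(), np.abs(spike[peak + 1:]).sum()

# Two toy spikes with different shapes map to different points in feature space.
t = np.linspace(0, 1, 64)
narrow = np.exp(-((t - 0.3) / 0.05) ** 2)
wide = np.exp(-((t - 0.3) / 0.15) ** 2)
print(peak_split_area_features(narrow))
print(peak_split_area_features(wide))
```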
An architecture for rule based system explanation
NASA Technical Reports Server (NTRS)
Fennel, T. R.; Johannes, James D.
1990-01-01
A system architecture is presented which incorporates both graphics and text into explanations provided by rule based expert systems. This architecture facilitates explanation of the knowledge base content, the control strategies employed by the system, and the conclusions made by the system. The suggested approach combines hypermedia and inference engine capabilities. Advantages include: closer integration of the user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and more direct control of explanation depth and duration by the user. User models are suggested to control the type, amount, and order of information presented.
Li, Cai; Lowe, Robert; Ziemke, Tom
2014-01-01
In this article, we propose an architecture of a bio-inspired controller that addresses the problem of learning different locomotion gaits for different robot morphologies. The modeling objective is split into two: baseline motion modeling and dynamics adaptation. Baseline motion modeling aims to achieve fundamental functions of a certain type of locomotion and dynamics adaptation provides a "reshaping" function for adapting the baseline motion to desired motion. Based on this assumption, a three-layer architecture is developed using central pattern generators (CPGs, a bio-inspired locomotor center for the baseline motion) and dynamic motor primitives (DMPs, a model with universal "reshaping" functions). In this article, we use this architecture with the actor-critic algorithms for finding a good "reshaping" function. In order to demonstrate the learning power of the actor-critic based architecture, we tested it on two experiments: (1) learning to crawl on a humanoid and, (2) learning to gallop on a puppy robot. Two types of actor-critic algorithms (policy search and policy gradient) are compared in order to evaluate the advantages and disadvantages of different actor-critic based learning algorithms for different morphologies. Finally, based on the analysis of the experimental results, a generic view/architecture for locomotion learning is discussed in the conclusion.
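The split between a baseline rhythmic pattern and a learned "reshaping" term can be illustrated with a toy sketch. The Hopf oscillator and the hand-written reshaping function below are illustrative stand-ins, not the CPG/DMP formulation or the actor-critic learner used in the article.

```python
import numpy as np

def hopf_cpg(mu=1.0, omega=2*np.pi, dt=0.002, steps=5000, reshape=None):
    """Integrate a Hopf oscillator (baseline rhythmic motion) and add an
    optional 'reshaping' forcing term, in the spirit of the baseline/adaptation
    split described in the abstract (illustrative form only)."""
    x, y = 1.0, 0.0
    out = []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        if reshape is not None:
            dx += reshape(x, y)        # learned perturbation reshapes the baseline limit cycle
        x += dx * dt
        y += dy * dt
        out.append(x)
    return np.array(out)

baseline = hopf_cpg()
# Hypothetical reshaping function (would be learned by an actor-critic in the article)
reshaped = hopf_cpg(reshape=lambda x, y: 3.0 * np.tanh(y) * (x > 0))
print(baseline[:5], reshaped[:5])
```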
Modelling of internal architecture of kinesin nanomotor as a machine language.
Khataee, H R; Ibrahim, M Y
2012-09-01
Kinesin is a protein-based natural nanomotor that transports molecular cargoes within cells by walking along microtubules. The kinesin nanomotor is considered a bio-nanoagent which is able to sense the cell through its sensors (i.e. its heads and tail), make decisions internally and perform actions on the cell through its actuator (i.e. its motor domain). The study maps the agent-based architectural model of the internal decision-making process of the kinesin nanomotor to a machine language using an automata algorithm. The applied automata algorithm receives the internal agent-based architectural model of the kinesin nanomotor as a deterministic finite automaton (DFA) model and generates a regular machine language. The generated regular machine language was accepted by the architectural DFA model of the nanomotor and was also in good agreement with its natural behaviour. The internal agent-based architectural model of the kinesin nanomotor indicates the degree of autonomy and intelligence of the nanomotor's interactions with its cell. Thus, the developed regular machine language can model the degree of autonomy and intelligence of kinesin nanomotor interactions with its cell as a language. Modelling the internal architectures of autonomous and intelligent bio-nanosystems as machine languages can lay the foundation for the concept of bio-nanoswarms and the next phases of bio-nanorobotic systems development.
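For readers unfamiliar with the formalism, the following minimal Python DFA sketch shows how a state-transition model can accept "words" of a regular language. The states and input symbols are hypothetical simplifications and do not reproduce the paper's actual kinesin model.

```python
# A minimal deterministic finite automaton (DFA) sketch in the spirit of the
# agent-based model described above. States and symbols are hypothetical.
TRANSITIONS = {
    ("both_heads_bound", "atp_binds"):       "leading_head_detaches",
    ("leading_head_detaches", "step"):       "trailing_head_swings",
    ("trailing_head_swings", "adp_release"): "both_heads_bound",
}
ACCEPTING = {"both_heads_bound"}             # a completed mechanochemical cycle

def dfa_accepts(symbols, start="both_heads_bound"):
    """Return True if the symbol sequence drives the DFA into an accepting state."""
    state = start
    for s in symbols:
        key = (state, s)
        if key not in TRANSITIONS:           # undefined transition: reject
            return False
        state = TRANSITIONS[key]
    return state in ACCEPTING

# One full step cycle of the motor, expressed as a word of the regular language
print(dfa_accepts(["atp_binds", "step", "adp_release"]))   # True
print(dfa_accepts(["atp_binds", "adp_release"]))           # False
```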
A Ka-Band Wide-Bandgap Solid-State Power Amplifier: Architecture Performance Estimates
NASA Technical Reports Server (NTRS)
Epp, L.; Khan, P.; Silva, A.
2005-01-01
Motivated by recent advances in wide-bandgap (WBG) gallium nitride (GaN) semiconductor technology, there is considerable interest in developing efficient solid-state power amplifiers (SSPAs) as an alternative to the traveling-wave tube amplifier (TWTA) for space applications. This article documents the results of a study to investigate power-combining technology and SSPA architectures that can enable a 120-W, 40 percent power-added efficiency (PAE) SSPA. Results of the study indicate that architectures based on at least three power combiner designs are likely to enable the target SSPA. The proposed architectures can power combine 16 to 32 individual monolithic microwave integrated circuits (MMICs) with >80 percent combining efficiency. This corresponds to MMIC requirements of 5- to 10-W output power and >48 percent PAE. For the three proposed architectures [1], detailed analysis and design of the power combiner are presented. The first architecture studied is based on a 16-way septum combiner that offers low loss and high isolation over the design band of 31 to 36 GHz. Analysis of a 2-way prototype septum combiner showed an input match >25 dB, output match >30 dB, insertion loss <0.02 dB, and isolation >30 dB over the design band. A 16-way design, based on cascading this combiner in a binary fashion, is documented. The second architecture is based on a 24-way waveguide radial combiner. A prototype 24-way radial base was analyzed to have an input match >30 dB (under equal excitation of all input ports). The match of the mode transducer that forms the output of a radial combiner was found to be >27 dB. The functional bandwidth of the radial base and mode transducer, which together will form a radial combiner/divider, exceeded the design band. The third architecture employs a 32-way, parallel-plate radial combiner. Simulation results indicated an input match >24 dB, output match >22 dB, insertion loss <0.23 dB, and adjacent port isolation >20 dB over the design band. All three architectures utilize a low-loss MMIC amplifier module based on commercial MMIC packaging and a custom microstrip-to-rectangular-waveguide transition. The insertion loss of the module is expected to be 0.45 dB over the design band.
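A quick back-of-envelope check, using illustrative numbers consistent with the ranges quoted above, shows how MMIC count, per-MMIC power, and combining efficiency trade off against the 120-W target.

```python
def sspa_output_power(n_mmics, p_mmic_w, combining_eff):
    """Back-of-envelope combined output power for an N-way power combiner."""
    return n_mmics * p_mmic_w * combining_eff

# Illustrative only: 16 MMICs near 9.4 W each, or 32 MMICs near 4.7 W each,
# both with 80% combining efficiency, land close to the 120-W target.
for n, p in [(16, 9.4), (32, 4.7)]:
    print(n, "x", p, "W ->", round(sspa_output_power(n, p, 0.80), 1), "W combined")
```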
Chromium: A Stress-Processing Framework for Interactive Rendering on Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, G.; Houston, M.; Ng, Y.-R.
2002-01-11
We describe Chromium, a system for manipulating streams of graphics API commands on clusters of workstations. Chromium's stream filters can be arranged to create sort-first and sort-last parallel graphics architectures that, in many cases, support the same applications while using only commodity graphics accelerators. In addition, these stream filters can be extended programmatically, allowing the user to customize the stream transformations performed by nodes in a cluster. Because our stream processing mechanism is completely general, any cluster-parallel rendering algorithm can be either implemented on top of or embedded in Chromium. In this paper, we give examples of real-world applications that use Chromium to achieve good scalability on clusters of workstations, and describe other potential uses of this stream processing technology. By completely abstracting the underlying graphics architecture, network topology, and API command processing semantics, we allow a variety of applications to run in different environments.
An Architecture for Case-Based Learning
ERIC Educational Resources Information Center
Cifuentes, Laurent; Mercer, Rene; Alverez, Omar; Bettati, Riccardo
2010-01-01
We report on the design, development, implementation, and evaluation of a case-based instructional environment designed for learning network engineering skills for cybersecurity. We describe the societal problem addressed, the theory-based solution, and the preliminary testing and evaluation of that solution. We identify an architecture for…
Digital Preservation of Ancient Maya Cave Architecture: Recent Field Efforts in Quintana Roo, Mexico
NASA Astrophysics Data System (ADS)
Rissolo, D.; Lo, E.; Hess, M. R.; Meyer, D. E.; Amador, F. E.
2017-08-01
The presence of ancient Maya shrines in caves serves as unequivocal evidence for the ritual appropriation of these subterranean spaces and their significance with respect to Maya religious practice. Detailed study of the miniature masonry temples and altar features in the caves of Quintana Roo, Mexico reveals a strong stylistic and likely functional correspondence between these structures and their terrestrial counterparts at Postclassic sites. The Proyecto Arquitectura Subterranea de Quintana Roo (coordinated by the Center of Interdisciplinary Science for Art, Architecture, and Archaeology, or CISA3, at the University of California, San Diego and in collaboration with the Instituto Nacional de Antropologia e Historia in Mexico) is conducting a survey and program of digital documentation of both the pristine and impacted cave shrines of the region. Once an area is developed and populated, and access is opened to caves containing ancient architectural features, they are soon vandalized - often resulting in the complete obliteration of these rare miniature buildings and their diagnostic architectural elements. This emergent situation necessitates the use of rapid reality-capture tools; however, the physical challenges of working in caves require researchers to adapt increasingly common architectural documentation methodologies to more adverse field conditions.
Storage system architectures and their characteristics
NASA Technical Reports Server (NTRS)
Sarandrea, Bryan M.
1993-01-01
Not all users' storage requirements call for 20 MB/s data transfer rates, multi-tier file or data migration schemes, or even automated retrieval of data. The number of available storage solutions reflects the broad range of user requirements. It is foolish to think that any one solution can address the complete range of requirements. For users with simple off-line storage requirements, the cost and complexity of high-end solutions would provide no advantage over a simpler solution. The correct answer is to match the requirements of a particular storage need to the various attributes of the available solutions. The goal of this paper is to introduce basic concepts of archiving and storage management in combination with the most common architectures and to provide some insight into how these concepts and architectures address various storage problems. The intent is to provide potential consumers of storage technology with a framework within which to begin the hunt for a solution that meets their particular needs. This paper is not intended to be an exhaustive study or to address all possible solutions or new technologies, but is intended to be a more practical treatment of today's storage system alternatives. Since most commercial storage systems today are built on Open Systems concepts, the majority of these solutions are hosted on the UNIX operating system. For this reason, some of the architectural issues discussed focus on specific UNIX architectural concepts. However, most of the architectures are operating system independent and the conclusions are applicable to such architectures on any operating system.
Software architecture of INO340 telescope control system
NASA Astrophysics Data System (ADS)
Ravanmehr, Reza; Khosroshahi, Habib
2016-08-01
The software architecture plays an important role in the distributed control system of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 model"; for this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We conclude the paper with the science data operation of INO340 and closing remarks.
Robotic Form-Finding and Construction Based on the Architectural Projection Logic
NASA Astrophysics Data System (ADS)
Zexin, Sun; Mei, Hongyuan
2017-06-01
In this article we analyze the relationship between architectural drawings and form-finding, and indicate that architects should reuse and redefine traditional architectural drawings as a form-finding tool. We explain the projection systems and analyze how these systems have affected architectural design. We then use a robotic arm to carry out the experiment and establish a cylindrical-projection form-finding system.
2012-06-01
Digital Motion Imagery Compression Best Practices Guide - A Motion Imagery Standards Profile (MISP) Compliant Architecture. Applicable ranges include White Sands Missile Range, Reagan Test Site, Yuma Proving Ground, Dugway Proving Ground, and Aberdeen Test Center. The guide documents best practices for delivery and archival purposes; these practices are based on a Motion Imagery Standards Profile (MISP) compliant architecture, which has been defined.
Defense Against National Vulnerabilities in Public Data
2017-02-28
Architecture objective: develop a data acquisition architecture that can successfully ingest 1,000,000 records per hour from up to 100 different open data sources, including ingestion of subscription-based precision data sources (Business Intelligence databases, Monster, and others), with a flexible data architecture. A data acquisition architecture comprising four major components, including a robust website, has been developed and is in operation.
The System of Systems Architecture Feasibility Assessment Model
2016-06-01
The System of Systems Architecture Feasibility Assessment Model, by Stephen E. Gillespie, June 2016; dissertation supervisor: Eugene Paulo. The dissertation presents the SoS architecture feasibility assessment model (SoS-AFAM), which extends current model-based systems engineering (MBSE) and SoS engineering approaches.
MIRAGE: developments in IRSP systems, RIIC design, emitter fabrication, and performance
NASA Astrophysics Data System (ADS)
Bryant, Paul; Oleson, Jim; James, Jay; McHugh, Steve; Lannon, John; Vellenga, David; Goodwin, Scott; Huffman, Alan; Solomon, Steve; Goldsmith, George C., II
2005-05-01
SBIR's family of MIRAGE infrared scene projection systems is undergoing significant growth and expansion. The first two lots of production IR emitters have completed fabrication at the Microelectronics Center of North Carolina/Research and Development Institute (MCNC-RDI), and the next round(s) of emitter production has begun. These latest emitter arrays support programs such as Large Format Resistive Array (LFRA), Optimized Array for Space-based Infrared Simulation (OASIS), MIRAGE 1.5, and MIRAGE II. We present the latest performance data on emitters fabricated at MCNC-RDI, plus integrated system performance on recently completed IRSP systems. Teamed with FLIR Systems/Indigo Operations, SBIR and the Tri-Services IRSP Working Group have completed development of the CMOS Read-In Integrated Circuit (RIIC) portion of the Wide Format Resistive Array (WFRA) program to extend LFRA performance to a 768 x 1536 "wide screen" projection configuration. WFRA RIIC architecture and performance are presented. Finally, we summarize development of the LFRA Digital Emitter Engine (DEE) and OASIS cryogenic package assemblies, and the next-generation Command & Control Electronics (C&CE).
An Introduction to Message-Bus Architectures for Space Systems
NASA Technical Reports Server (NTRS)
Smith, Danford; Gregory, Brian
2005-01-01
This course presents technical and programmatic information on the development of message-based architectures for space mission ground and flight software systems. Message-based architecture approaches provide many significant advantages over the more traditional socket-based, one-of-a-kind integrated system development approaches. The course provides an overview of publish/subscribe concepts, the use of common isolation-layer APIs, approaches to message standardization, and other technical topics. Several examples of currently operational systems are discussed, along with possible changes to those systems, and time for questions and answers is provided.
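The publish/subscribe decoupling at the heart of such message-bus architectures can be sketched in a few lines; the class, subject, and payload names below are hypothetical and do not correspond to any specific NASA middleware API.

```python
from collections import defaultdict

class MessageBus:
    """A minimal in-process publish/subscribe bus illustrating the
    isolation-layer idea: publishers and subscribers only know message
    subjects, never each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, subject, callback):
        self._subscribers[subject].append(callback)

    def publish(self, subject, payload):
        for callback in self._subscribers[subject]:
            callback(payload)

bus = MessageBus()
bus.subscribe("telemetry.housekeeping", lambda msg: print("archiver got", msg))
bus.subscribe("telemetry.housekeeping", lambda msg: print("display got", msg))
bus.publish("telemetry.housekeeping", {"bus_voltage": 28.1})
```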
Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.
Carrascal, A; Manrique, D; Ríos, J; Rossi, C
2003-01-01
This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted both on artificially generated and real world problems confirm the effectiveness of the proposed approach.
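The combination of a genetic algorithm with a per-generation local search step can be sketched as follows; the real-valued encoding and quadratic objective are placeholders, not the paper's neuro-fuzzy encoding or fitness measure.

```python
import random

def fitness(chrom):
    # Hypothetical stand-in objective; the paper evolves neuro-fuzzy encodings.
    return -sum((g - 0.5) ** 2 for g in chrom)

def local_search(chrom, step=0.05, tries=10):
    """Hill-climb each chromosome a little every generation (local improvement step)."""
    best = chrom[:]
    for _ in range(tries):
        cand = [min(1.0, max(0.0, g + random.uniform(-step, step))) for g in best]
        if fitness(cand) > fitness(best):
            best = cand
    return best

def evolve(pop_size=20, genes=8, generations=30):
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_search(c) for c in pop]          # improve chromosomes each generation
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.1:                 # mutation
                child[random.randrange(genes)] = random.random()
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print([round(g, 2) for g in evolve()])
```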
Formalism Challenges of the Cougaar Model Driven Architecture
NASA Technical Reports Server (NTRS)
Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.
2004-01-01
The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.
Architectures for wrist-worn energy harvesting
NASA Astrophysics Data System (ADS)
Rantz, R.; Halim, M. A.; Xue, T.; Zhang, Q.; Gu, L.; Yang, K.; Roundy, S.
2018-04-01
This paper reports the simulation-based analysis of six dynamical structures with respect to their wrist-worn vibration energy harvesting capability. This work approaches the problem of maximizing energy harvesting potential at the wrist by considering multiple mechanical substructures; rotational and linear motion-based architectures are examined. Mathematical models are developed and experimentally corroborated. An optimization routine is applied to the proposed architectures to maximize average power output and allow for comparison. The addition of a linear spring element to the structures has the potential to improve power output; for example, in the case of rotational structures, a 211% improvement in power output was estimated under real walking excitation. The analysis concludes that a sprung rotational harvester architecture outperforms a sprung linear architecture by 66% when real walking data is used as input to the simulations.
FPGA Implementation of Generalized Hebbian Algorithm for Texture Classification
Lin, Shiow-Jyu; Hwang, Wen-Jyi; Lee, Wei-Hao
2012-01-01
This paper presents a novel hardware architecture for principal component analysis. The architecture is based on the Generalized Hebbian Algorithm (GHA) because of its simplicity and effectiveness. The architecture is separated into three portions: the weight vector updating unit, the principal computation unit and the memory unit. In the weight vector updating unit, the computation of different synaptic weight vectors shares the same circuit for reducing the area costs. To show the effectiveness of the circuit, a texture classification system based on the proposed architecture is physically implemented by Field Programmable Gate Array (FPGA). It is embedded in a System-On-Programmable-Chip (SOPC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient design for attaining both high speed performance and low area costs. PMID:22778640
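As a software reference for the weight-vector update that the hardware parallelizes, a minimal GHA (Sanger's rule) implementation might look like the following; the toy data and learning-rate settings are illustrative, and the shared-circuit hardware scheduling is not modeled.

```python
import numpy as np

def gha_train(X, n_components=2, lr=0.01, epochs=50, seed=0):
    """Generalized Hebbian Algorithm (Sanger's rule): a software reference
    for the weight-vector update realized by the architecture above."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_components, d))    # synaptic weight vectors
    for _ in range(epochs):
        for x in X:
            y = W @ x                                    # principal component outputs
            # Sanger's rule: dW = lr * (y x^T - lower_triangular(y y^T) W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Toy data whose leading principal direction is roughly (1, 1, 0)
rng = np.random.default_rng(1)
t = rng.normal(size=(500, 1))
X = np.hstack([t, t, rng.normal(scale=0.1, size=(500, 1))])
X -= X.mean(axis=0)
print(gha_train(X)[0])   # approximates the leading principal direction
```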
Measuring the style of innovative thinking among engineering students
NASA Astrophysics Data System (ADS)
Passig, David; Cohen, Lizi
2014-01-01
Background: Many tools have been developed to measure the ability of workers to innovate. However, all of them are based on self-reporting questionnaires, which raises questions about their validity. Purpose: The aim was to develop and validate a tool, called Ideas Generation Implementation (IGI), to objectively measure the style and potential of engineering students in generating innovative technological ideas. The cognitive framework of IGI is based on the Architectural Innovation Model (AIM). Tool description: The IGI tool was designed to measure the level of innovation in generating technological ideas and their potential to be implemented. These variables rely on the definition of innovation as 'creativity, implemented in a high degree of success'. The levels of innovative thinking are based on the AIM and consist of four levels: incremental innovation, modular innovation, architectural innovation and radical innovation. Sample: Sixty experts in technological innovation developed the tool. We checked its face validity and calculated its reliability in a pilot study (kappa = 0.73). Then, 145 undergraduate students were sampled at random from the seven Israeli universities offering engineering programs and asked to complete the questionnaire. Design and methods: We examined the construct validity of the tool by conducting a variance analysis and measuring the correlations between the innovator's style of each student, as suggested by the AIM, and the three subscale factors of creative styles (efficient, conformist and original), as suggested by the Kirton Adaptors and Innovators (KAI) questionnaire. Results: Students with a radical innovator's style inclined more than those with an incremental innovator's style towards the three creative cognitive styles. Students with an architectural innovator's style inclined moderately, but not significantly, towards the three creative styles. Conclusions: The IGI tool objectively measures innovative thinking among students, thus allowing screening of potential employees at an early stage, during their undergraduate studies. The tool was found to be reliable and valid in measuring the style and potential of technological innovation among engineering students.
Saved by Iridium? An Alternative to GPS
2012-05-17
The enemy presents itself at any time, at any place, in many shapes and forms, often for no apparent reason. Operation Desert Storm marked the first coordinated Tomahawk and manned-aircraft strike in history. The physical architectures considered correspond to ground operations in scenarios ranging from complete air superiority to completely denied airspace.
Component-Based Approach in Learning Management System Development
ERIC Educational Resources Information Center
Zaitseva, Larisa; Bule, Jekaterina; Makarov, Sergey
2013-01-01
The paper describes a component-based approach (CBA) for learning management system development. Learning objects as components of e-learning courses and their metadata are considered. The architecture of a learning management system based on CBA being developed at Riga Technical University, namely its architecture, elements and possibilities, are…
Advanced computer architecture specification for automated weld systems
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
This report describes the requirements for an advanced automated weld system and the associated computer architecture, and defines the overall system specification from a broad perspective. According to the requirements of welding procedures as they relate to an integrated multiaxis motion control and sensor architecture, the computer system requirements are developed based on a proven multiple-processor architecture with an expandable, distributed-memory, single global bus architecture, containing individual processors which are assigned to specific tasks that support sensor or control processes. The specified architecture is sufficiently flexible to integrate previously developed equipment, be upgradable and allow on-site modifications.
Optical linear algebra processors - Architectures and algorithms
NASA Technical Reports Server (NTRS)
Casasent, David
1986-01-01
Attention is given to the component design and optical configuration features of a generic optical linear algebra processor (OLAP) architecture, as well as the large number of OLAP architectures, number representations, algorithms and applications encountered in current literature. Number-representation issues associated with bipolar and complex-valued data representations, high-accuracy (including floating point) performance, and the base or radix to be employed, are discussed, together with case studies on a space-integrating frequency-multiplexed architecture and a hybrid space-integrating and time-integrating multichannel architecture.
New architecture for utility scale electricity from concentrator photovoltaics
NASA Astrophysics Data System (ADS)
Angel, Roger; Connors, Thomas; Davison, Warren; Olbert, Blain; Sivanandam, Suresh
2010-08-01
The paper describes a new system architecture optimized for utility-scale generation with concentrating photovoltaic cells (CPV) at fossil fuel prices. We report on-sun tests of the architecture and development at the University of Arizona of the manufacturing processes adapted for high volume production. The new system takes advantage of triple-junction cells to convert concentrated sunlight into electricity. These commercially available cells have twice the conversion efficiency of silicon panels (40%) and one-tenth the cost per watt, when used at 1000x concentration. Telescope technology is adapted to deliver concentrated light to the cells at minimum cost. The architecture combines three novel elements: large (3.1 m x 3.1 m square) dish reflectors made as back-silvered glass monoliths; 2.5 kW receivers at each dish focus, each one incorporating a spherical field lens to deliver uniform illumination to multiple cells; and a lightweight steel spaceframe structure to hold multiple dish/receiver units in coalignment and oriented to the sun. Development of the process for replicating single-piece reflector dishes is well advanced at the Steward Observatory Mirror Lab. End-to-end system tests have been completed with single cells. A lightweight steel spaceframe to hold and track eight dish/receiver units to generate 20 kW has been completed. A single 2.5 kW receiver is presently under construction, and is expected to be operated in an end-to-end on-sun test with a monolithic dish before the end of 2010. The University of Arizona has granted an exclusive license to REhnu, LLC to commercialize this technology.
Transforming medical imaging applications into collaborative PACS-based telemedical systems
NASA Astrophysics Data System (ADS)
Maani, Rouzbeh; Camorlinga, Sergio; Arnason, Neil
2011-03-01
Telemedical systems are not practical for use in a clinical workflow unless they are able to communicate with the Picture Archiving and Communications System (PACS). On the other hand, there are many medical imaging applications that are not developed as telemedical systems. Some medical imaging applications do not support collaboration and some do not communicate with the PACS and therefore limit their usability in clinical workflows. This paper presents a general architecture based on a three-tier architecture model. The architecture and the components developed within it, transform medical imaging applications into collaborative PACS-based telemedical systems. As a result, current medical imaging applications that are not telemedical, not supporting collaboration, and not communicating with PACS, can be enhanced to support collaboration among a group of physicians, be accessed remotely, and be clinically useful. The main advantage of the proposed architecture is that it does not impose any modification to the current medical imaging applications and does not make any assumptions about the underlying architecture or operating system.
NASA Astrophysics Data System (ADS)
Jiang, Yuning; Kang, Jinfeng; Wang, Xinan
2017-03-01
Resistive switching memory (RRAM) is considered as one of the most promising devices for parallel computing solutions that may overcome the von Neumann bottleneck of today’s electronic systems. However, the existing RRAM-based parallel computing architectures suffer from practical problems such as device variations and extra computing circuits. In this work, we propose a novel parallel computing architecture for pattern recognition by implementing k-nearest neighbor classification on metal-oxide RRAM crossbar arrays. Metal-oxide RRAM with gradual RESET behaviors is chosen as both the storage and computing components. The proposed architecture is tested by the MNIST database. High speed (~100 ns per example) and high recognition accuracy (97.05%) are obtained. The influence of several non-ideal device properties is also discussed, and it turns out that the proposed architecture shows great tolerance to device variations. This work paves a new way to achieve RRAM-based parallel computing hardware systems with high performance.
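Functionally, the computation the crossbar accelerates is ordinary k-nearest-neighbor classification, sketched below on a tiny synthetic data set; the analog in-memory current summation and the device non-idealities discussed above are not modeled.

```python
import numpy as np

def knn_classify(train_x, train_y, query, k=5):
    """Functional view of the k-nearest-neighbor classification that the
    RRAM crossbar accelerates in analog; distances here are computed
    digitally, with no device variation modeled."""
    dists = np.linalg.norm(train_x - query, axis=1)   # distance to every stored pattern
    nearest = np.argsort(dists)[:k]                   # k closest stored examples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                  # majority vote

# Tiny synthetic stand-in for MNIST-style data (two 4-pixel "digit" classes)
rng = np.random.default_rng(0)
class0 = rng.normal(loc=0.2, scale=0.05, size=(50, 4))
class1 = rng.normal(loc=0.8, scale=0.05, size=(50, 4))
train_x = np.vstack([class0, class1])
train_y = np.array([0] * 50 + [1] * 50)
print(knn_classify(train_x, train_y, query=np.array([0.75, 0.8, 0.7, 0.85])))
```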
NASA Technical Reports Server (NTRS)
Ashworth, Barry R.
1989-01-01
A description is given of the SSM/PMAD power system automation testbed, which was developed using a systems engineering approach. The architecture includes a knowledge-based system and has been successfully used in power system management and fault diagnosis. Architectural issues which affect overall system activities and performance are examined. The knowledge-based system is discussed along with its associated automation implications, and interfaces throughout the system are presented.
ERIC Educational Resources Information Center
Wallace, Guy W.
2001-01-01
Explains lean instructional systems design/development (ISD) as it relates to curriculum architecture design, based on Japan's lean production system. Discusses performance-based systems; ISD models; processes for organizational training and development; curriculum architecture to support job performance; and modular curriculum development. (LRW)
A neural network architecture for implementation of expert systems for real time monitoring
NASA Technical Reports Server (NTRS)
Ramamoorthy, P. A.
1991-01-01
Since neural networks have the advantages of massive parallelism and simple architecture, they are good tools for implementing real time expert systems. In a rule based expert system, the antecedents of rules are in the conjunctive or disjunctive form. We constructed a multilayer feedforward type network in which neurons represent AND or OR operations of rules. Further, we developed a translator which can automatically map a given rule base into the network. Also, we proposed a new and powerful yet flexible architecture that combines the advantages of both fuzzy expert systems and neural networks. This architecture uses the fuzzy logic concepts to separate input data domains into several smaller and overlapped regions. Rule-based expert systems for time critical applications using neural networks, the automated implementation of rule-based expert systems with neural nets, and fuzzy expert systems vs. neural nets are covered.
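The mapping of conjunctive and disjunctive rule antecedents onto AND/OR threshold neurons can be illustrated with a toy rule base; the rules, weights, and thresholds below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def and_neuron(inputs):
    """AND of binary inputs via a threshold unit: fires only if all inputs are 1."""
    w = np.ones(len(inputs))
    return float(np.dot(w, inputs) >= len(inputs))

def or_neuron(inputs):
    """OR of binary inputs via a threshold unit: fires if any input is 1."""
    w = np.ones(len(inputs))
    return float(np.dot(w, inputs) >= 1)

# Hypothetical rule base (not from the paper):
#   R1: IF high_temp AND high_pressure THEN alarm
#   R2: IF sensor_fault OR comm_loss   THEN alarm
def alarm(high_temp, high_pressure, sensor_fault, comm_loss):
    r1 = and_neuron([high_temp, high_pressure])   # conjunctive antecedent -> AND neuron
    r2 = or_neuron([sensor_fault, comm_loss])     # disjunctive antecedent -> OR neuron
    return or_neuron([r1, r2])                    # output layer combines the fired rules

print(alarm(1, 1, 0, 0), alarm(0, 0, 1, 0), alarm(0, 0, 0, 0))   # 1.0 1.0 0.0
```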
NASA Astrophysics Data System (ADS)
Xiao, Jian; Zhang, Mingqiang; Tian, Haiping; Huang, Bo; Fu, Wenlong
2018-02-01
In this paper, a novel prognostics and health management (PHM) system architecture for hydropower plant equipment is proposed based on fog computing and Docker containers. Fog nodes are employed to improve the real-time processing ability of the cloud-based PHM architecture and to overcome problems such as long delays and network congestion. Storm-based stream processing at the fog node is then presented, which can calculate the health index at the edge of the network. Moreover, a distributed micro-service and Docker container architecture for hydropower plant equipment PHM is also proposed. Using the micro-service architecture proposed in this paper, the hydropower unit can achieve business intercommunication and seamless integration of different equipment from different manufacturers. Finally, a real application case is given in this paper.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
NASA Astrophysics Data System (ADS)
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of the above two techniques enables the decoder to reduce the power dissipation while keeping the decoding throughput. The simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to that of decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.
Benchmarking hardware architecture candidates for the NFIRAOS real-time controller
NASA Astrophysics Data System (ADS)
Smith, Malcolm; Kerley, Dan; Herriot, Glen; Véran, Jean-Pierre
2014-07-01
As a part of the trade study for the Narrow Field Infrared Adaptive Optics System, the adaptive optics system for the Thirty Meter Telescope, we investigated the feasibility of performing real-time control computation using a Linux operating system and Intel Xeon E5 CPUs. We also investigated a Xeon Phi based architecture which allows higher levels of parallelism. This paper summarizes both the CPU based real-time controller architecture and the Xeon Phi based RTC. The Intel Xeon E5 CPU solution meets the requirements and performs the computation for one AO cycle in an average of 767 microseconds. The Xeon Phi solution did not meet the 1200 microsecond time requirement and also suffered from unpredictable execution times. More detailed benchmark results are reported for both architectures.
Modeling and Optimization of Multiple Unmanned Aerial Vehicles System Architecture Alternatives
Wang, Weiping; He, Lei
2014-01-01
Unmanned aerial vehicle (UAV) systems have already been used in civilian activities, although in a very limited way. Confronted with different types of tasks, multiple UAVs usually need to be coordinated. This can be abstracted as a multi-UAV system architecture problem. Based on the general system architecture problem, a specific description of the multi-UAV system architecture problem is presented. Then the corresponding optimization problem and an efficient genetic algorithm with a refined crossover operator (GA-RX) are proposed to accomplish the architecting process iteratively in the rest of this paper. The availability and effectiveness of the overall method are validated using two simulations based on two different scenarios. PMID:25140328
Agent Architectures for Compliance
NASA Astrophysics Data System (ADS)
Burgemeestre, Brigitte; Hulstijn, Joris; Tan, Yao-Hua
A Normative Multi-Agent System consists of autonomous agents who must comply with social norms. Different kinds of norms make different assumptions about the cognitive architecture of the agents. For example, a principle-based norm assumes that agents can reflect upon the consequences of their actions; a rule-based formulation only assumes that agents can avoid violations. In this paper we present several cognitive agent architectures for self-monitoring and compliance. We show how different assumptions about the cognitive architecture lead to different information needs when assessing compliance. The approach is validated with a case study of horizontal monitoring, an approach to corporate tax auditing recently introduced by the Dutch Customs and Tax Authority.
Education Design Showcase: Annual Awards 2003.
ERIC Educational Resources Information Center
School Planning & Management, 2003
2003-01-01
This fourth annual special supplement recognizes outstanding architecture and design in K-12 schools and college facilities. Each entry contains photographs, a text description, and summarized project data. Most also include floor plans. Architect and manufacturer indexes complete the supplement. (EV)
Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés
2015-02-25
This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support to measure QoS and QoC parameters. The novelty consists of using, simultaneously, the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of using QoS and QoC parameters jointly in distributed control systems.
Selective randomized load balancing and mesh networks with changing demands
NASA Astrophysics Data System (ADS)
Shepherd, F. B.; Winzer, P. J.
2006-05-01
We consider the problem of building cost-effective networks that are robust to dynamic changes in demand patterns. We compare several architectures using demand-oblivious routing strategies. Traditional approaches include single-hop architectures based on a (static or dynamic) circuit-switched core infrastructure and multihop (packet-switched) architectures based on point-to-point circuits in the core. To address demand uncertainty, we seek minimum cost networks that can carry the class of hose demand matrices. Apart from shortest-path routing, Valiant's randomized load balancing (RLB), and virtual private network (VPN) tree routing, we propose a third, highly attractive approach: selective randomized load balancing (SRLB). This is a blend of dual-hop hub routing and randomized load balancing that combines the advantages of both architectures in terms of network cost, delay, and delay jitter. In particular, we give empirical analyses for the cost (in terms of transport and switching equipment) for the discussed architectures, based on three representative carrier networks. Of these three networks, SRLB maintains the resilience properties of RLB while achieving significant cost reduction over all other architectures, including RLB and multihop Internet protocol/multiprotocol label switching (IP/MPLS) networks using VPN-tree routing.
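The two-phase idea behind (selective) randomized load balancing can be sketched in a few lines; the node names and restricted hub set below are arbitrary illustrations, not the carrier networks or cost model studied in the paper.

```python
import random

def two_phase_route(src, dst, hubs):
    """Valiant-style two-phase routing: send traffic first to a randomly
    chosen intermediate hub, then on to its destination. In selective RLB
    the hub set is a small, well-placed subset of all nodes (chosen here
    arbitrarily for illustration)."""
    hub = random.choice(hubs)
    return [src, hub, dst] if hub not in (src, dst) else [src, dst]

nodes = ["SEA", "SFO", "DEN", "CHI", "NYC", "ATL"]
hubs = ["DEN", "CHI"]                 # hypothetical restricted hub set
random.seed(2)
for _ in range(3):
    print(two_phase_route("SEA", "NYC", hubs))
```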
On the development of a reactive sensor-based robotic system
NASA Technical Reports Server (NTRS)
Hexmoor, Henry H.; Underwood, William E., Jr.
1989-01-01
Flexible robotic systems for space applications need to use local information to guide their action in uncertain environments where the state of the environment and even the goals may change. They have to be tolerant of unexpected events and robust enough to carry their task to completion. Tactical goals should be modified while maintaining strategic goals. Furthermore, reactive robotic systems need to have a broader view of their environments than sensory-based systems. An architecture and a theory of representation extending the basic cycles of action and perception are described. This scheme allows for dynamic description of the environment and determining purposive and timely action. Applications of this scheme for assembly and repair tasks using a Universal Machine Intelligence RTX robot are being explored, but the ideas are extendable to other domains. The nature of reactivity for sensor-based robotic systems and implementation issues encountered in developing a prototype are discussed.
Web-based healthcare hand drawing management system.
Hsieh, Sheau-Ling; Weng, Yung-Ching; Chen, Chi-Huang; Hsu, Kai-Ping; Lin, Jeng-Wei; Lai, Feipei
2010-01-01
The paper addresses the Medical Hand Drawing Management System architecture and implementation. In the system, we developed four modules: a hand drawing management module; a patient medical records query module; a hand drawing editing and upload module; and a hand drawing query module. The system adopts Windows-based applications and encompasses web pages through the ASP.NET hosting mechanism on web-services platforms. The hand drawings, implemented as files, are stored in an FTP server. The file names with associated data, e.g. patient identification, drawing physician, and access rights, are stored in a database. The modules can be conveniently embedded and integrated into any system. Therefore, the system provides the hand drawing features needed to support daily medical operations and effectively improves healthcare quality. Moreover, the system includes printing capability to achieve a complete, computerized medical document process. In summary, the system allows web-based applications to facilitate graphic processes for healthcare operations.
Design of CMOS imaging system based on FPGA
NASA Astrophysics Data System (ADS)
Hu, Bo; Chen, Xiaolai
2017-10-01
In order to meet the needs of engineering applications for a high dynamic range CMOS camera under rolling shutter mode, a complete imaging system is designed based on the CMOS imaging sensor NSC1105. The paper adopts CMOS+ADC+FPGA+Camera Link as the processing architecture and introduces the design and implementation of the hardware system. The camera software system, which consists of a CMOS timing drive module, an image acquisition module and a transmission control module, is designed in Verilog and driven to work properly on a Xilinx FPGA. The ISE 14.6 simulator ISim is used for signal simulation. The imaging experimental results show that the system exhibits a 1280*1024 pixel resolution, has a frame frequency of 25 fps and a dynamic range of more than 120 dB. The imaging quality of the system satisfies the specified performance requirements.
Balasubramanian, Viswanathan; Ruedi, Pierre-Francois; Temiz, Yuksel; Ferretti, Anna; Guiducci, Carlotta; Enz
2013-10-01
This paper presents a novel sensor front-end circuit that addresses the issues of 1/f noise and distortion in a unique way by using canceling techniques. The proposed front-end is a fully differential transimpedance amplifier (TIA) targeted at current-mode electrochemical biosensing applications. In this paper, we discuss the architecture of this canceling-based front-end and present the optimization methods followed to achieve low-noise, low-distortion performance at minimum current consumption. To validate the canceling-based front-end, it has been realized in a 0.18 μm CMOS process and the characterization results are presented. The front-end has also been tested as part of a complete wireless sensing system, and cyclic voltammetry (CV) test results from electrochemical sensors are provided. The overall current consumption of the front-end is 50 μA while operating from a 1.8 V supply.
Raster Scan Computer Image Generation (CIG) System Based On Refresh Memory
NASA Astrophysics Data System (ADS)
Dichter, W.; Doris, K.; Conkling, C.
1982-06-01
A full color, Computer Image Generation (CIG) raster visual system has been developed which provides a high level of training sophistication by utilizing advanced semiconductor technology and innovative hardware and firmware techniques. Double buffered refresh memory and efficient algorithms eliminate the problem of conventional raster line ordering by allowing the generated image to be stored in a random fashion. Modular design techniques and simplified architecture provide significant advantages in reduced system cost, standardization of parts, and high reliability. The major system components are a general purpose computer to perform interfacing and data base functions; a geometric processor to define the instantaneous scene image; a display generator to convert the image to a video signal; an illumination control unit which provides final image processing; and a CRT monitor for display of the completed image. Additional optional enhancements include texture generators, increased edge and occultation capability, curved surface shading, and data base extensions.
Digital Architecture for a Trace Gas Sensor Platform
NASA Technical Reports Server (NTRS)
Gonzales, Paula; Casias, Miguel; Vakhtin, Andrei; Pilgrim, Jeffrey
2012-01-01
A digital architecture has been implemented for a trace gas sensor platform, as a companion to standard analog control electronics, which accommodates optical absorption whose fractional absorbance equivalent would result in excess error if assumed to be linear. In cases where the absorption (1-transmission) is not equivalent to the fractional absorbance within a few percent error, it is necessary to accommodate the actual measured absorption while reporting the measured concentration of a target analyte with reasonable accuracy. This requires incorporation of programmable intelligence into the sensor platform so that flexible interpretation of the acquired data may be accomplished. Several different digital component architectures were tested and implemented. Commercial off-the-shelf digital electronics including data acquisition cards (DAQs), complex programmable logic devices (CPLDs), field-programmable gate arrays (FPGAs), and microcontrollers have been used to achieve the desired outcome. The most completely integrated architecture achieved during the project used the CPLD along with a microcontroller. The CPLD provides the initial digital demodulation of the raw sensor signal, and then communicates over a parallel communications interface with a microcontroller. The microcontroller analyzes the digital signal from the CPLD, and applies a non-linear correction obtained through extensive data analysis at the various relevant EVA operating pressures. The microcontroller then presents the quantitatively accurate carbon dioxide partial pressure regardless of optical density. This technique could extend the linear dynamic range of typical absorption spectrometers, particularly those whose low end noise equivalent absorbance is below one-part-in-100,000. In the EVA application, it allows introduction of a path-length-enhancing architecture whose optical interference effects are well understood and quantified without sacrificing the dynamic range that allows quantitative detection at the higher carbon dioxide partial pressures. The digital components are compact and allow reasonably complete integration with separately developed analog control electronics without sacrificing size, mass, or power draw.
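The nonlinearity being corrected is the gap between absorption (1 − T) and true absorbance (−ln T) under Beer-Lambert behavior; the short sketch below, with illustrative values only, shows how the linear assumption degrades as optical density grows, which is why the microcontroller applies a nonlinear correction at higher CO2 partial pressures.

```python
import numpy as np

# Beer-Lambert: T = exp(-A), where the true absorbance A = alpha * L * C = -ln(T).
# The linear approximation 1 - T ~= A only holds for small A.
for A_true in [0.001, 0.01, 0.1, 0.5, 1.0]:
    T = np.exp(-A_true)
    absorption = 1.0 - T                       # what the raw transmission signal gives
    error_pct = 100.0 * (A_true - absorption) / A_true
    print(f"A={A_true:5.3f}  1-T={absorption:6.4f}  linear-assumption error={error_pct:5.1f}%")
```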
Boolean and brain-inspired computing using spin-transfer torque devices
NASA Astrophysics Data System (ADS)
Fan, Deliang
Several completely new approaches (such as spintronics, carbon nanotubes, graphene, TFETs, etc.) to information processing and data storage technologies are emerging to address the time frame beyond the current Complementary Metal-Oxide-Semiconductor (CMOS) roadmap. The high speed magnetization switching of a nano-magnet due to current-induced spin-transfer torque (STT) has been demonstrated in recent experiments. Such STT devices can be explored in compact, low power memory and logic design. In order to truly leverage STT-device-based computing, researchers require a rethink of circuits, architectures, and computing models, since STT devices are unlikely to be drop-in replacements for CMOS. The potential of STT-device-based computing will be best realized by considering new computing models that are inherently suited to the characteristics of STT devices, and new applications that are enabled by their unique capabilities, thereby attaining performance that CMOS cannot achieve. The goal of this research is to conduct synergistic exploration at the architecture, circuit and device levels for Boolean and brain-inspired computing using nanoscale STT devices. Specifically, we first show that non-volatile STT devices can be used in designing configurable Boolean logic blocks. We propose a spin-memristor threshold logic (SMTL) gate design, where a memristive cross-bar array is used to perform current-mode summation of binary inputs and the low power current-mode spintronic threshold device carries out the energy efficient threshold operation. Next, for brain-inspired computing, we have exploited different spin-transfer torque device structures that can implement the hard-limiting and soft-limiting artificial neuron transfer functions respectively. We apply such STT-based neurons (or 'spin-neurons') in various neural network architectures, such as hierarchical temporal memory and feed-forward neural networks, for performing "human-like" cognitive computing, which show more than two orders of magnitude lower energy consumption compared to state-of-the-art CMOS implementations. Finally, we show that the dynamics of injection-locked Spin Hall Effect Spin-Torque Oscillator (SHE-STO) clusters can be exploited as a robust multi-dimensional distance metric for associative computing, image/video analysis, etc. Our simulation results show that the proposed system architecture with injection-locked SHE-STOs and the associated CMOS interface circuits can be suitable for robust and energy efficient associative computing and pattern matching.
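The SMTL idea of crossbar current summation followed by a hard threshold can be sketched functionally as follows; the conductance values, threshold, and majority-gate example are illustrative, not device-level models.

```python
import numpy as np

def smtl_gate(inputs, conductances, threshold):
    """Functional sketch of a spin-memristor threshold logic gate: the
    memristive crossbar sums currents from binary inputs weighted by the
    programmed conductances, and the spin-based device applies a hard
    threshold. Values are illustrative, not device-level."""
    current = np.dot(inputs, conductances)     # crossbar current-mode summation
    return int(current >= threshold)           # hard-limiting 'spin-neuron' output

# A 3-input majority gate realized as a threshold function (unit weights, threshold 2)
for bits in [(0, 0, 1), (0, 1, 1), (1, 1, 1)]:
    print(bits, "->", smtl_gate(np.array(bits), np.array([1.0, 1.0, 1.0]), 2.0))
```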
Advanced EMU Portable Life Support System (PLSS) and Shuttle/ISS EMU Schematics, a Comparison
NASA Technical Reports Server (NTRS)
Campbell, Colin
2012-01-01
In order to be able to adapt to differing vehicle interfaces such as suitport and airlock, adjust to varying vehicle pressure schedules, tolerate lower quality working fluids, and adapt to differing suit architectures as dictated by a range of mission architectures, the next generation space suit requires more adaptability and robustness over that of the current Shuttle/ISS Extra-vehicular Mobility Unit (EMU). While some features have been added to facilitate interfaces to differing vehicle and suit architectures, the key performance gains have been made via incorporation of new technologies such as the variable pressure regulators, Rapid Cycle Amine swing-bed, and Suit Water Membrane Evaporator. This paper performs a comparison between the Shuttle/ISS EMU PLSS schematic and the Advanced EMU PLSS schematic complete with a discussion for each difference.
NASA Astrophysics Data System (ADS)
Guilfoyle, Peter S.; Stone, Richard V.
1991-12-01
OptiComp is currently completing a 32-bit, fully programmable digital optical computer (DOC II) that is designed to operate in a UNIX environment running RISC microcode. OptiComp's DOC II architecture is focused toward parallel microcode implementation where data is input in a dual-rail format. By exploiting the physical principles inherent to optics (speed and low power consumption), an architectural balance of optical interconnects and software code efficiency can be achieved, including high fan-in and fan-out. OptiComp's DOC II program is jointly sponsored by the Office of Naval Research (ONR), the Strategic Defense Initiative Office (SDIO), the NASA space station group and Rome Laboratory (USAF). This paper not only describes the motivational basis behind DOC II but also provides an optical overview and architectural summary of the device that allows the emulation of any digital instruction set.
Rosetta3: An Object-Oriented Software Suite for the Simulation and Design of Macromolecules
Leaver-Fay, Andrew; Tyka, Michael; Lewis, Steven M.; Lange, Oliver F.; Thompson, James; Jacak, Ron; Kaufman, Kristian; Renfrew, P. Douglas; Smith, Colin A.; Sheffler, Will; Davis, Ian W.; Cooper, Seth; Treuille, Adrien; Mandell, Daniel J.; Richter, Florian; Ban, Yih-En Andrew; Fleishman, Sarel J.; Corn, Jacob E.; Kim, David E.; Lyskov, Sergey; Berrondo, Monica; Mentzer, Stuart; Popović, Zoran; Havranek, James J.; Karanicolas, John; Das, Rhiju; Meiler, Jens; Kortemme, Tanja; Gray, Jeffrey J.; Kuhlman, Brian; Baker, David; Bradley, Philip
2013-01-01
We have recently completed a full re-architecturing of the Rosetta molecular modeling program, generalizing and expanding its existing functionality. The new architecture enables the rapid prototyping of novel protocols by providing easy to use interfaces to powerful tools for molecular modeling. The source code of this rearchitecturing has been released as Rosetta3 and is freely available for academic use. At the time of its release, it contained 470,000 lines of code. Counting currently unpublished protocols at the time of this writing, the source includes 1,285,000 lines. Its rapid growth is a testament to its ease of use. This document describes the requirements for our new architecture, justifies the design decisions, sketches out central classes, and highlights a few of the common tasks that the new software can perform. PMID:21187238
The Need for Software Architecture Evaluation in the Acquisition of Software-Intensive Systems
2014-01-01
DOT National Transportation Integrated Search
1999-09-01
This is one of seven studies exploring processes for developing Intelligent Transportation Systems (ITS) architectures for regional, statewide, or commercial vehicle applications. This study was prepared for a broad-based, non-technical audience. In ...
The future of computing--new architectures and new technologies.
Warren, P
2004-02-01
All modern computers are designed using the 'von Neumann' architecture and built using silicon transistor technology. Both architecture and technology have been remarkably successful. Yet there are a range of problems for which this conventional architecture is not particularly well adapted, and new architectures are being proposed to solve these problems, in particular based on insight from nature. Transistor technology has enjoyed 50 years of continuing progress. However, the laws of physics dictate that within a relatively short time period this progress will come to an end. New technologies, based on molecular and biological sciences as well as quantum physics, are vying to replace silicon, or at least coexist with it and extend its capability. The paper describes these novel architectures and technologies, places them in the context of the kinds of problems they might help to solve, and predicts their possible manner and time of adoption. Finally it describes some key questions and research problems associated with their use.
Lunar Navigation Architecture Design Considerations
NASA Technical Reports Server (NTRS)
D'Souza, Christopher; Getchius, Joel; Holt, Greg; Moreau, Michael
2009-01-01
The NASA Constellation Program is aiming to establish a long-term presence on the lunar surface. The Constellation elements (Orion, Altair, Earth Departure Stage, and Ares launch vehicles) will require a lunar navigation architecture for navigation state updates during lunar-class missions. Orion in particular has baselined earth-based ground direct tracking as the primary source for much of its absolute navigation needs. However, due to the uncertainty in the lunar navigation architecture, the Orion program has had to make certain assumptions on the capabilities of such architectures in order to adequately scale the vehicle design trade space. The following paper outlines lunar navigation requirements, the Orion program assumptions, and the impacts of these assumptions to the lunar navigation architecture design. The selection of potential sites was based upon geometric baselines, logistical feasibility, redundancy, and abort support capability. Simulated navigation covariances mapped to entry interface flight-path-angle uncertainties were used to evaluate knowledge errors. A minimum ground station architecture was identified consisting of Goldstone, Madrid, Canberra, Santiago, Hartebeeshoek, Dongora, Hawaii, Guam, and Ascension Island (or the geometric equivalent).
Sprinting performance on the Woodway Curve 3.0 is related to muscle architecture.
Mangine, Gerald T; Fukuda, David H; Townsend, Jeremy R; Wells, Adam J; Gonzalez, Adam M; Jajtner, Adam R; Bohner, Jonathan D; LaMonica, Michael; Hoffman, Jay R; Fragala, Maren S; Stout, Jeffrey R
2015-01-01
To determine if unilateral measures of muscle architecture in the rectus femoris (RF) and vastus lateralis (VL) were related to (and predictive of) sprinting speed and unilateral (and bilateral) force (FRC) and power (POW) during a 30 s maximal sprint on the Woodway Curve 3.0 non-motorized treadmill. Twenty-eight healthy, physically active men (n = 14) and women (n = 14) (age = 22.9 ± 2.4 years; body mass = 77.1 ± 16.2 kg; height = 171.6 ± 11.2 cm; body-fat = 19.4 ± 8.1%) completed one familiarization and one 30-s maximal sprint on the TM to obtain maximal sprinting speed, POW and FRC. Muscle thickness (MT), cross-sectional area (CSA) and echo intensity (ECHO) of the RF and VL in the dominant (DOM; determined by unilateral sprinting power) and non-dominant (ND) legs were measured via ultrasound. Pearson correlations indicated several significant (p < 0.05) relationships between sprinting performance [POW (peak, DOM and ND), FRC (peak, DOM, ND) and sprinting time] and muscle architecture. Stepwise regression indicated that POW(DOM) was predictive of ipsilateral RF (MT and CSA) and VL (CSA and ECHO), while POW(ND) was predictive of ipsilateral RF (MT and CSA) and VL (CSA); sprinting power/force asymmetry was not predictive of architecture asymmetry. Sprinting time was best predicted by peak power and peak force, though muscle quality (ECHO) and the bilateral percent difference in VL (CSA) were strong architectural predictors. Muscle architecture is related to (and predictive of) TM sprinting performance, while unilateral POW is predictive of ipsilateral architecture. However, the extent to which architecture and other factors (i.e. neuromuscular control and sprinting technique) affect TM performance remains unknown.
Brooks, R.A.; Bell, S.S.
2005-01-01
A descriptive study of the architecture of the red mangrove, Rhizophora mangle L., habitat of Tampa Bay, FL, was conducted to assess if plant architecture could be used to discriminate overwash from fringing forest type. Seven above-water (e.g., tree height, diameter at breast height, and leaf area) and 10 below-water (e.g., root density, root complexity, and maximum root order) architectural features were measured in eight mangrove stands. A multivariate technique (discriminant analysis) was used to test the ability of different models comprising above-water, below-water, or whole tree architecture to classify forest type. Root architectural features appear to be better than classical forestry measurements at discriminating between fringing and overwash forests but, regardless of the features loaded into the model, misclassification rates were high as forest type was only correctly classified in 66% of the cases. Based upon habitat architecture, the results of this study do not support a sharp distinction between overwash and fringing red mangrove forests in Tampa Bay but rather indicate that the two are architecturally indistinguishable. Therefore, within this northern portion of the geographic range of red mangroves, a more appropriate classification system based upon architecture may be one in which overwash and fringing forest types are combined into a single, "tide dominated" category. © 2005 Elsevier Ltd. All rights reserved.
A semantically-aided architecture for a web-based monitoring system for carotid atherosclerosis.
Kolias, Vassileios D; Stamou, Giorgos; Golemati, Spyretta; Stoitsis, Giannis; Gkekas, Christos D; Liapis, Christos D; Nikita, Konstantina S
2015-08-01
Carotid atherosclerosis is a multifactorial disease and its clinical diagnosis depends on the evaluation of heterogeneous clinical data, such as imaging exams, biochemical tests and the patient's clinical history. The lack of interoperability between Health Information Systems (HIS) does not allow the physicians to acquire all the necessary data for the diagnostic process. In this paper, a semantically-aided architecture is proposed for a web-based monitoring system for carotid atherosclerosis that is able to gather and unify heterogeneous data with the use of an ontology and to create a common interface for data access, enhancing the interoperability of HIS. The architecture is based on an application ontology of carotid atherosclerosis that is used to (a) integrate heterogeneous data sources on the basis of semantic representation and ontological reasoning and (b) access the critical information using SPARQL query rewriting and ontology-based data access services. The architecture was tested over a carotid atherosclerosis dataset consisting of the imaging exams and the clinical profile of 233 patients, using a set of complex queries constructed by the physicians. The proposed architecture was evaluated with respect to the complexity of the queries that the physicians could make and the retrieval speed. The proposed architecture gave promising results in terms of interoperability, ontology-based integration of heterogeneous data sources, and expanded capabilities of query and retrieval in HIS.
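As a concrete illustration of ontology-backed querying of the kind described above, the sketch below uses rdflib to load a few triples and answer a SPARQL query; the namespace, class, and property names are placeholders rather than the paper's carotid-atherosclerosis ontology, and the SPARQL rewriting layer is not shown.

```python
# Minimal sketch of ontology-backed data access with rdflib.
# The namespace, class, and property names below are placeholders; the paper's
# actual ontology and query-rewriting services are not reproduced.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/atherosclerosis#")

g = Graph()
patient = EX.patient_001
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.hasStenosisDegree, Literal(72)))

# A clinician-level query: patients whose stenosis degree exceeds 70%.
query = """
PREFIX ex: <http://example.org/atherosclerosis#>
SELECT ?p ?deg WHERE {
  ?p a ex:Patient ;
     ex:hasStenosisDegree ?deg .
  FILTER (?deg > 70)
}
"""
for row in g.query(query):
    print(row.p, row.deg)
```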
Study on the E-commerce platform based on the agent
NASA Astrophysics Data System (ADS)
Fu, Ruixue; Qin, Lishuan; Gao, Yinmin
2011-10-01
To solve the problem of dynamic integration in e-commerce, a multi-agent architecture for an electronic commerce platform system based on Agent and Ontology is introduced; it includes three major types of agent, an Ontology, and a rule collection. In this architecture, service agents and rules are used to realize business process reengineering, the reuse of software components, and the agility of the electronic commerce platform. To illustrate the architecture, a simulation has been carried out, and the results imply that the architecture provides a very efficient way to design and implement a flexible, distributed, open and intelligent electronic commerce platform system. The objective of this paper is to illustrate the architecture of the electronic commerce platform system and how Agent and Ontology support it.
National Launch System comparative economic analysis
NASA Technical Reports Server (NTRS)
Prince, A.
1992-01-01
Results are presented from an analysis of economic benefits (or losses), in the form of the life cycle cost savings, resulting from the development of the National Launch System (NLS) family of launch vehicles. The analysis was carried out by comparing various NLS-based architectures with the current Shuttle/Titan IV fleet. The basic methodology behind this NLS analysis was to develop a set of annual payload requirements for the Space Station Freedom and LEO, to design launch vehicle architectures around these requirements, and to perform life-cycle cost analyses on all of the architectures. A SEI requirement was included. Launch failure costs were estimated and combined with the relative reliability assumptions to measure the effects of losses. Based on the analysis, a Shuttle/NLS architecture evolving into a pressurized-logistics-carrier/NLS architecture appears to offer the best long-term cost benefit.
NASA Astrophysics Data System (ADS)
Tramm, John R.; Gunow, Geoffrey; He, Tim; Smith, Kord S.; Forget, Benoit; Siegel, Andrew R.
2016-05-01
In this study we present and analyze a formulation of the 3D Method of Characteristics (MOC) technique applied to the simulation of full core nuclear reactors. Key features of the algorithm include a task-based parallelism model that allows independent MOC tracks to be assigned to threads dynamically, ensuring load balancing, and a wide vectorizable inner loop that takes advantage of modern SIMD computer architectures. The algorithm is implemented in a set of highly optimized proxy applications in order to investigate its performance characteristics on CPU, GPU, and Intel Xeon Phi architectures. Speed, power, and hardware cost efficiencies are compared. Additionally, performance bottlenecks are identified for each architecture in order to determine the prospects for continued scalability of the algorithm on next generation HPC architectures.
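The two algorithmic ideas named above, dynamic assignment of independent tracks to threads for load balancing and a wide vectorizable inner loop over segments, can be sketched as follows; the per-segment update is a simple stand-in for the actual MOC transport kernel, and all sizes are illustrative.

```python
# Schematic sketch of dynamic task assignment plus a vectorized inner loop.
# The per-segment update below is a stand-in, not the actual MOC kernel.
import numpy as np
from concurrent.futures import ThreadPoolExecutor
from queue import Queue, Empty

rng = np.random.default_rng(0)
tracks = [rng.random(10_000) for _ in range(64)]   # segment optical thicknesses per track
work = Queue()
for t in tracks:
    work.put(t)

def sweep_worker():
    total = 0.0
    while True:
        try:
            seg_tau = work.get_nowait()   # threads pull tracks dynamically
        except Empty:
            return total
        # Wide, vectorizable inner loop: all segments of one track at once.
        total += float(np.sum(1.0 - np.exp(-seg_tau)))

with ThreadPoolExecutor(max_workers=4) as pool:
    partial = list(pool.map(lambda _: sweep_worker(), range(4)))
print(sum(partial))
```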
A practical approach for active camera coordination based on a fusion-driven multi-agent system
NASA Astrophysics Data System (ADS)
Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.
2014-04-01
In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sohn, A.; Gaudiot, J.-L.
1991-12-31
Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern matching step of production systems. In this paper, the authors investigate possible improvements to the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm have been identified, based on which they introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple-root-node-based production system interpreter is presented to investigate its relative algorithmic behavior over the Rete-based Ops5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential Sun 4/490 machine using both interpreters, and various experimental results are presented. The investigation indicates that the multiple-root-node-based production system interpreter gives up to a 6-fold improvement over the Lisp implementation of the Rete-based Ops5 for the match step.
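A toy sketch of the multiple-root-node idea follows: working-memory elements are dispatched directly to a root node for their class, so only the condition tests registered under that class are evaluated. This illustrates the dispatch concept only; it is not OPS5 semantics or the authors' interpreter.

```python
# Toy sketch of multiple-root-node dispatch for production-system matching.
# Instead of pushing every working-memory element (WME) through a single root,
# elements are routed directly to the root node for their class, so only the
# relevant condition tests run.
from collections import defaultdict

class MultiRootMatcher:
    def __init__(self):
        self.roots = defaultdict(list)   # WME class -> list of (test, production)

    def add_condition(self, wme_class, test, production):
        self.roots[wme_class].append((test, production))

    def match(self, wme_class, wme):
        # Only the tests registered under this class are evaluated.
        return [prod for test, prod in self.roots[wme_class] if test(wme)]

m = MultiRootMatcher()
m.add_condition("goal", lambda w: w["status"] == "active", "expand-goal")
m.add_condition("block", lambda w: w["on"] == "table", "pick-up-block")

print(m.match("block", {"on": "table"}))   # -> ['pick-up-block']
print(m.match("goal", {"status": "done"})) # -> []
```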
A monitoring system for vegetable greenhouses based on a wireless sensor network.
Li, Xiu-hong; Cheng, Xiao; Yan, Ke; Gong, Peng
2010-01-01
A wireless sensor network-based automatic monitoring system is designed for monitoring the life conditions of greenhouse vegetables. The complete system architecture includes a group of sensor nodes, a base station, and an Internet data center. For the design of the wireless sensor nodes, the JN5139 micro-processor is adopted as the core component and the Zigbee protocol is used for wireless communication between nodes. With an ARM7 microprocessor and the embedded ZKOS operating system, a proprietary gateway node is developed to achieve data influx, screen display, system configuration and GPRS-based remote data forwarding. Through a Client/Server mode, the management software of the remote data center achieves real-time data distribution and time-series analysis. In addition, a GSM-short-message-based interface is developed for sending real-time environmental measurements, and for alarming when a measurement is beyond a pre-defined threshold. The whole system has been tested for over one year and satisfactory results have been observed, which indicate that this system is very useful for greenhouse environment monitoring.
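The threshold-alarm behavior described above can be sketched as follows; the sensor names, threshold values, and the SMS call are illustrative stand-ins for the gateway's GSM interface.

```python
# Minimal sketch of the pre-defined-threshold alarm logic for greenhouse readings.
# The SMS transport is stubbed; thresholds and sensor names are illustrative.
THRESHOLDS = {
    "air_temp_c":   (10.0, 35.0),   # (low, high)
    "humidity_pct": (40.0, 90.0),
    "soil_moist":   (0.20, 0.80),
}

def send_sms(number, text):
    # Stand-in for the GSM-short-message interface on the gateway node.
    print(f"SMS to {number}: {text}")

def check_measurements(readings, alarm_number="+0000000000"):
    for name, value in readings.items():
        low, high = THRESHOLDS[name]
        if not (low <= value <= high):
            send_sms(alarm_number, f"{name}={value} outside [{low}, {high}]")

check_measurements({"air_temp_c": 38.2, "humidity_pct": 55.0, "soil_moist": 0.5})
```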
Design of Complex BPF with Automatic Digital Tuning Circuit for Low-IF Receivers
NASA Astrophysics Data System (ADS)
Kondo, Hideaki; Sawada, Masaru; Murakami, Norio; Masui, Shoichi
This paper describes the architecture and implementations of an automatic digital tuning circuit for a complex bandpass filter (BPF) in a low-power and low-cost transceiver for applications such as personal authentication and wireless sensor network systems. The architectural design analysis demonstrates that an active RC filter in a low-IF architecture can be at least 47.7% smaller in area than a conventional gm-C filter; in addition, it features a simple implementation of an associated tuning circuit. The principle of simultaneous tuning of both the center frequency and bandwidth through calibration of a capacitor array is illustrated based on an analysis of the filter characteristics, and a scalable automatic digital tuning circuit with simple analog blocks and control logic having only 835 gates is introduced. The developed capacitor tuning technique can achieve a tuning error of less than ±3.5% and lower the peaking in the passband filter characteristics. An experimental complex BPF using 0.18µm CMOS technology can successfully reduce the tuning error from an initial value of -20% to less than ±2.5% after tuning. The filter block dimensions are 1.22mm × 1.01mm; in measurement results of the developed complex BPF with the automatic digital tuning circuit, current consumption is 705µA and the image rejection ratio is 40.3dB. Complete evaluation of the BPF indicates that this technique can be applied to low-power, low-cost transceivers.
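The idea of tuning by capacitor-array calibration can be illustrated with a small behavioral sketch that picks the array code whose realized center frequency is closest to a target; all component values below are invented and unrelated to the reported implementation.

```python
# Behavioral sketch of capacitor-array tuning for an active-RC filter:
# pick the capacitor code whose RC product gives a center frequency closest
# to the target.  All component values here are invented.
import math

R = 100e3                 # on-chip resistance (ohms), illustrative
C_UNIT = 0.05e-12         # unit capacitor of the array (farads), illustrative
C_FIXED = 1.0e-12         # fixed capacitance in parallel with the array
F_TARGET = 1.0e6          # desired center frequency (Hz)

def center_freq(code):
    c_total = C_FIXED + code * C_UNIT
    return 1.0 / (2.0 * math.pi * R * c_total)

best_code = min(range(64), key=lambda code: abs(center_freq(code) - F_TARGET))
err = (center_freq(best_code) - F_TARGET) / F_TARGET
print(best_code, f"{center_freq(best_code):.3e} Hz", f"{100 * err:+.2f} % error")
```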
Nissan, Noam; Furman-Haran, Edna; Feinberg-Shapiro, Myra; Grobgeld, Dov; Eyal, Erez; Zehavi, Tania; Degani, Hadassa
2014-12-15
Breast cancer is the most common cause of cancer among women worldwide. Early detection of breast cancer has a critical role in improving the quality of life and survival of breast cancer patients. In this paper a new approach for the detection of breast cancer is described, based on tracking the mammary architectural elements using diffusion tensor imaging (DTI). The paper focuses on the scanning protocols and image processing algorithms and software that were designed to fit the diffusion properties of the mammary fibroglandular tissue and its changes during malignant transformation. The final output yields pixel by pixel vector maps that track the architecture of the entire mammary ductal glandular trees and parametric maps of the diffusion tensor coefficients and anisotropy indices. The efficiency of the method to detect breast cancer was tested by scanning women volunteers including 68 patients with breast cancer confirmed by histopathology findings. Regions with cancer cells exhibited a marked reduction in the diffusion coefficients and in the maximal anisotropy index as compared to the normal breast tissue, providing an intrinsic contrast for delineating the boundaries of malignant growth. Overall, the sensitivity of the DTI parameters to detect breast cancer was found to be high, particularly in dense breasts, and comparable to the current standard breast MRI method that requires injection of a contrast agent. Thus, this method offers a completely non-invasive, safe and sensitive tool for breast cancer detection.
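The parametric maps mentioned above are built from per-pixel diffusion tensors; a minimal sketch of that step, computing the eigenvalues, mean diffusivity, and a fractional-anisotropy-style index for one synthetic 3x3 tensor, is shown below. The tensor values are invented, and the paper's full ductal-tree tracking pipeline is not reproduced.

```python
# Sketch of per-pixel diffusion-tensor processing: eigendecompose a 3x3 tensor
# and compute mean diffusivity and fractional anisotropy (FA).  The tensor
# below is synthetic; it is not patient data and not the paper's pipeline.
import numpy as np

D = np.array([[1.6, 0.1, 0.0],
              [0.1, 0.9, 0.0],
              [0.0, 0.0, 0.7]]) * 1e-3   # mm^2/s, illustrative

evals = np.linalg.eigvalsh(D)            # eigenvalues of the symmetric tensor
md = evals.mean()                        # mean diffusivity
fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
print(f"MD = {md:.2e} mm^2/s, FA = {fa:.3f}")
```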
A compact linear accelerator based on a scalable microelectromechanical-system RF-structure
Persaud, A.; Ji, Q.; Feinberg, E.; ...
2017-06-08
Here, a new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB), we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using Microelectromechanical systems scalable fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof of concept results of the principal components using the PCB: RF acceleration and ESQ focusing. Finally, ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.
A compact linear accelerator based on a scalable microelectromechanical-system RF-structure
NASA Astrophysics Data System (ADS)
Persaud, A.; Ji, Q.; Feinberg, E.; Seidl, P. A.; Waldron, W. L.; Schenkel, T.; Lal, A.; Vinayakumar, K. B.; Ardanuc, S.; Hammer, D. A.
2017-06-01
A new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB), we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using Microelectromechanical systems scalable fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof of concept results of the principal components using the PCB: RF acceleration and ESQ focusing. Ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.
A compact linear accelerator based on a scalable microelectromechanical-system RF-structure.
Persaud, A; Ji, Q; Feinberg, E; Seidl, P A; Waldron, W L; Schenkel, T; Lal, A; Vinayakumar, K B; Ardanuc, S; Hammer, D A
2017-06-01
A new approach for a compact radio-frequency (RF) accelerator structure is presented. The new accelerator architecture is based on the Multiple Electrostatic Quadrupole Array Linear Accelerator (MEQALAC) structure that was first developed in the 1980s. The MEQALAC utilized RF resonators producing the accelerating fields and providing for higher beam currents through parallel beamlets focused using arrays of electrostatic quadrupoles (ESQs). While the early work obtained ESQs with lateral dimensions on the order of a few centimeters, using a printed circuit board (PCB), we reduce the characteristic dimension to the millimeter regime, while massively scaling up the potential number of parallel beamlets. Using Microelectromechanical systems scalable fabrication approaches, we are working on further reducing the characteristic dimension to the sub-millimeter regime. The technology is based on RF-acceleration components and ESQs implemented in the PCB or silicon wafers where each beamlet passes through beam apertures in the wafer. The complete accelerator is then assembled by stacking these wafers. This approach has the potential for fast and inexpensive batch fabrication of the components and flexibility in system design for application specific beam energies and currents. For prototyping the accelerator architecture, the components have been fabricated using the PCB. In this paper, we present proof of concept results of the principal components using the PCB: RF acceleration and ESQ focusing. Ongoing developments on implementing components in silicon and scaling of the accelerator technology to high currents and beam energies are discussed.
The Reed-Solomon encoders: Conventional versus Berlekamp's architecture
NASA Technical Reports Server (NTRS)
Perlman, M.; Lee, J. J.
1982-01-01
Concatenated coding, with a convolutional inner code and a Reed-Solomon (RS) outer code, was adopted for spacecraft telemetry on interplanetary space missions. Conventional RS encoders are compared with those that incorporate two architectural features which approximately halve the number of multiplications of a set of fixed arguments by any RS codeword symbol. The fixed arguments and the RS symbols are taken from a nonbinary finite field. Each set of multiplications is bit-serially performed and completed during one (bit-serial) symbol shift. All firmware employed by conventional RS encoders is eliminated.
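The core operation such encoders repeat each symbol shift is multiplication of a symbol by fixed generator-polynomial coefficients in a nonbinary finite field. The sketch below shows this in GF(2^8) with the common primitive polynomial 0x11D; the mission code's actual field, generator, and coefficients may differ, so the values here are purely illustrative.

```python
# Sketch of the core finite-field operation a Reed-Solomon encoder repeats:
# multiplying a data symbol by a set of fixed generator-polynomial coefficients.
# GF(2^8) with primitive polynomial 0x11D is used here for illustration only.

def gf256_mul(a, b, poly=0x11D):
    """Carry-less 'Russian peasant' multiplication in GF(2^8)."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        if a & 0x100:          # reduce modulo the field polynomial
            a ^= poly
        b >>= 1
    return result

print(hex(gf256_mul(0x57, 0x83)))   # -> 0x31 under this field polynomial

# One symbol shift multiplies the same symbol by every fixed coefficient.
gen_coeffs = [0x01, 0x0F, 0x36, 0x78, 0x40]   # illustrative fixed arguments
symbol = 0x2A
print([hex(gf256_mul(symbol, g)) for g in gen_coeffs])
```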
Design of the ARES Mars Airplane and Mission Architecture
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Wright, Henry S.; Croom, Mark A.; Levine, Joel S.; Spencer, David A.
2006-01-01
Significant technology advances have enabled planetary aircraft to be considered as viable science platforms. Such systems fill a unique planetary science measurement gap, that of regional-scale, near-surface observation, while providing a fresh perspective for potential discovery. Recent efforts have produced mature mission and flight system concepts, ready for flight project implementation. This paper summarizes the development of a Mars airplane mission architecture that balances science, implementation risk and cost. Airplane mission performance, flight system design and technology maturation are described. The design, analysis and testing completed demonstrates the readiness of this science platform for use in a Mars flight project.
Development of NASA's Next Generation L-Band Digital Beamforming Synthetic Aperture Radar (DBSAR-2)
NASA Technical Reports Server (NTRS)
Rincon, Rafael; Fatoyinbo, Temilola; Osmanoglu, Batuhan; Lee, Seung-Kuk; Ranson, K. Jon; Marrero, Victor; Yeary, Mark
2014-01-01
NASA's next generation Digital Beamforming SAR (DBSAR-2) is a state-of-the-art airborne L-band radar developed at the NASA Goddard Space Flight Center (GSFC). The instrument builds upon the advanced architectures in NASA's DBSAR-1 and EcoSAR instruments. The new instrument employs a 16-channel radar architecture characterized by multi-mode operation, software-defined waveform generation, digital beamforming, and configurable radar parameters. The instrument has been designed to support several disciplines in Earth and planetary sciences. The instrument was recently completed, then tested and calibrated in an anechoic chamber.
NASA Astrophysics Data System (ADS)
Yeh, Chien-Hung; Yang, Zi-Qing; Huang, Tzu-Jung; Chow, Chi-Wai
2018-03-01
To achieve a steady single-longitudinal-mode (SLM) erbium-doped fiber (EDF) laser, a wheel-ring architecture is proposed for the laser cavity. According to the Vernier effect, the proposed wheel-ring can produce three different free spectral ranges (FSRs) that serve as a mode filter for suppressing the dense multi-longitudinal modes (MLM). Here, to realize a wavelength-tunable EDF laser, an optical tunable bandpass filter (OTBF) is utilized inside the cavity for arbitrary tuning. In addition, the output performance of the proposed EDF wheel-ring laser is discussed and analyzed experimentally.
Path planning and execution monitoring for a planetary rover
NASA Technical Reports Server (NTRS)
Gat, Erann; Slack, Marc G.; Miller, David P.; Firby, R. James
1990-01-01
A path planner and an execution monitoring planner that will enable the rover to navigate to its various destinations safely and correctly while detecting and avoiding hazards are described. An overview of the complete architecture is given. Implementation and testbeds are described. The robot can detect unforeseen obstacles and take appropriate action. This includes having the rover back away from the hazard and mark the area as untraversable in the rover's internal map. The experiments have consisted of paths roughly 20 m in length. The architecture works with a large variety of rover configurations with different kinematic constraints.
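The hazard-handling behavior described above can be sketched with a toy grid map: on detecting a hazard the rover marks the offending cell untraversable and backs away one cell. The grid size, coordinates, and map representation are illustrative, not the rover's actual data structures.

```python
# Toy sketch of the reactive behavior: on detecting a hazard, back up one cell
# and mark the hazard cell untraversable in the internal grid map, so later
# path planning avoids it.
FREE, UNTRAVERSABLE = 0, 1

class RoverMap:
    def __init__(self, width, height):
        self.grid = [[FREE] * width for _ in range(height)]

    def mark_untraversable(self, x, y):
        self.grid[y][x] = UNTRAVERSABLE

    def is_traversable(self, x, y):
        return self.grid[y][x] == FREE

def handle_hazard(rover_pos, hazard_pos, rover_map):
    rx, ry = rover_pos
    hx, hy = hazard_pos
    rover_map.mark_untraversable(hx, hy)
    # Back away: step one cell opposite to the hazard direction.
    return (rx - (hx - rx), ry - (hy - ry))

m = RoverMap(10, 10)
new_pos = handle_hazard((4, 4), (5, 4), m)
print(new_pos, m.is_traversable(5, 4))   # -> (3, 4) False
```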
The architecture of a video image processor for the space station
NASA Technical Reports Server (NTRS)
Yalamanchili, S.; Lee, D.; Fritze, K.; Carpenter, T.; Hoyme, K.; Murray, N.
1987-01-01
The architecture of a video image processor for space station applications is described. The architecture was derived from a study of the requirements of algorithms that are necessary to produce the desired functionality of many of these applications. Architectural options were selected based on a simulation of the execution of these algorithms on various architectural organizations. A great deal of emphasis was placed on the ability of the system to evolve and grow over the lifetime of the space station. The result is a hierarchical parallel architecture that is characterized by high level language programmability, modularity, extensibility and can meet the required performance goals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yao; Balaprakash, Prasanna; Meng, Jiayuan
We present Raexplore, a performance modeling framework for architecture exploration. Raexplore enables rapid, automated, and systematic search of the architecture design space by combining hardware counter-based performance characterization and analytical performance modeling. We demonstrate Raexplore for two recent manycore processors, the IBM Blue Gene/Q compute chip and the Intel Xeon Phi, targeting a set of scientific applications. Our framework is able to capture complex interactions between architectural components including the instruction pipeline, cache, and memory, and to achieve a 3–22% error for same-architecture and cross-architecture performance predictions. Furthermore, we apply our framework to assess the two processors, and discover and evaluate a list of architectural scaling options for future processor designs.
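Raexplore's own models are not reproduced here, but the general pattern of counter-driven analytical modeling can be sketched with a simple roofline-style estimate: runtime is bounded by the larger of compute time and memory time, and the same measured counters are re-evaluated under a hypothetical scaled architecture. All numbers below are invented.

```python
# Generic sketch of counter-driven analytical performance modeling (not
# Raexplore's actual model): estimate runtime as the max of compute time and
# memory time, then re-evaluate under a hypothetical scaled architecture.
def predict_runtime(flops, bytes_moved, peak_flops, peak_bw):
    compute_time = flops / peak_flops
    memory_time = bytes_moved / peak_bw
    return max(compute_time, memory_time)

# Counters measured on a baseline machine (illustrative numbers).
flops, bytes_moved = 2.0e12, 8.0e11

baseline = predict_runtime(flops, bytes_moved, peak_flops=1.0e12, peak_bw=2.0e11)
scaled   = predict_runtime(flops, bytes_moved, peak_flops=2.0e12, peak_bw=3.0e11)
print(f"baseline {baseline:.2f} s -> scaled {scaled:.2f} s")
```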
Water and the Thirsting Spirit.
ERIC Educational Resources Information Center
Najem, Robert E.
1984-01-01
Highlights a four-part slide show focusing on the humanistic legacy of water: (1) water in literature; (2) architecture; (3) religion; and (4) painting. Discusses representative slides in each category and presents a complete list of all slides that comprise the program. (BC)
An Enterprise Architecture Perspective to Electronic Health Record Based Care Governance.
Motoc, Bogdan
2017-01-01
This paper proposes an Enterprise Architecture viewpoint of Electronic Health Record (EHR) based care governance. The improvements expected are derived from the collaboration framework and the clinical health model proposed as foundation for the concept of EHR.
The research of service provision based on service-oriented architecture for NGN
NASA Astrophysics Data System (ADS)
Jie, Yin; Nian, Zhou; Qian, Mao
2007-11-01
Service convergence is an important characteristic of NGN (Next Generation Networking), and a key challenge is how to integrate the service capabilities of the telecommunication network and the Internet. This article first puts forward the concepts and characteristics of SOA (Service-Oriented Architecture) and Web Services, then discusses the relationship between them. Secondly, combined with five kinds of service provision in NGN, a service platform architecture design for NGN and a service development mode based on SOA are proposed. Finally, a specific example is analyzed with BPEL (Business Process Execution Language) in order to describe the SOA-based service development flow for NGN.
Ryan, Amanda; Eklund, Peter
2008-01-01
Healthcare information is composed of many types of varying and heterogeneous data. Semantic interoperability in healthcare is especially important when all these different types of data need to interact. Presented in this paper is a solution to interoperability in healthcare based on a standards-based middleware software architecture used in enterprise solutions. This architecture has been translated into the healthcare domain using a messaging and modeling standard which upholds the ideals of the Semantic Web (HL7 V3) combined with a well-known standard terminology of clinical terms (SNOMED CT).
Introduction to Message-Bus Architectures for Space Systems
NASA Technical Reports Server (NTRS)
Smith, Dan; Gregory, Brian
2005-01-01
This course presents technical and programmatic information on the development of message-based architectures for space mission ground and flight software systems. Message-based architecture approaches provide many significant advantages over the more traditional socket-based, one-of-a-kind integrated system development approaches. The course provides an overview of publish/subscribe concepts, the use of common isolation-layer APIs, approaches to message standardization, and other technical topics. Several examples of currently operational systems are discussed and possible changes to the system development process are presented. Benefits and lessons learned will be discussed and time for questions and answers will be provided.
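A minimal sketch of the publish/subscribe pattern behind such message-bus architectures is shown below: components publish to named subjects through a bus and never address each other directly. The subject names and payload are illustrative, not taken from any specific mission middleware.

```python
# Minimal sketch of publish/subscribe through an isolation layer: publishers
# and subscribers know only subject names, never each other.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, subject, callback):
        self.subscribers[subject].append(callback)

    def publish(self, subject, message):
        for callback in self.subscribers[subject]:
            callback(message)

bus = MessageBus()
bus.subscribe("telemetry.power", lambda m: print("display:", m))
bus.subscribe("telemetry.power", lambda m: print("archiver:", m))
bus.publish("telemetry.power", {"bus_voltage": 28.1})
```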
Weather Information Communications (WINCOMM) Overview and Status
NASA Technical Reports Server (NTRS)
Martzaklis, K.
2003-01-01
The second annual project review of Weather Information Communications (WINCOMM) is presented. The topics of discussion include: 1) In-Flight Weather Information; 2) System Elements; 3) Technology Investment Areas; 4) NAS Information Exchange; 5) FIS Datalink Architecture Analyses; 6) Hybrid FIS Datalink Architecture; 7) FIS Datalink Architecture Analyses; 8) Air Transport: Ground and Satellite-based Datalinks; 9) General Aviation: Ground and Satellite-based Datalinks; 10) Low Altitude AutoMET Reporting; 11) AutoMET: Airborne-based Datalinks; 12) Network Protocols Development; and 13) FAA/NASA Collaboration. A summary of WINCOMM is also included. This paper is in viewgraph form.
A supportive architecture for CFD-based design optimisation
NASA Astrophysics Data System (ADS)
Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong
2014-03-01
Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO that requires the analysis of fluid dynamics poses a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application across various fields. Especially for the exterior design of vehicles, CFD has become one of the three main design tools, alongside analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward, and a CFD-based design problem usually has high dimensionality and multiple objectives and constraints. It is therefore desirable to have an integrated architecture for CFD-based design optimisation, yet our review of existing work found that very few researchers have studied assistive tools to facilitate it. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided; the results show that the proposed architecture and developed algorithms perform successfully and efficiently on a design optimisation with over 200 design variables.
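The layering described above can be sketched schematically: an optimizer proposes design vectors and a parallel pool evaluates them, with a cheap stand-in function taking the place of the real CFD solver call that would go through the code- or data-level integration layer. This is a sketch of the coupling pattern, not the paper's algorithms.

```python
# Schematic sketch of coupling an optimizer to parallel "CFD" evaluations.
# The objective below is a cheap stand-in for a real solver call.
import random
from concurrent.futures import ProcessPoolExecutor

def cfd_objective(design):
    # Placeholder for drag or another aerodynamic figure of merit.
    return sum((x - 0.3) ** 2 for x in design)

def random_search(n_vars=20, pop=32, iters=10, seed=1):
    rng = random.Random(seed)
    best_val = float("inf")
    with ProcessPoolExecutor() as pool:
        for _ in range(iters):
            candidates = [[rng.uniform(-1, 1) for _ in range(n_vars)]
                          for _ in range(pop)]
            # Evaluate the whole population in parallel.
            for val in pool.map(cfd_objective, candidates):
                best_val = min(best_val, val)
    return best_val

if __name__ == "__main__":
    print(f"best objective after search: {random_search():.4f}")
```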
Becoming and Disappearing: Between Art, Architecture and Research
ERIC Educational Resources Information Center
Beinart, Katy
2014-01-01
This paper examines some parallels and differences in pursuing practice-based research in art or architecture. Using a series of different headlines and examples, I examine the potential of working "between" art and architecture, which I argue could generate new, hybridised methodologies of practice through interrogating the…
Hybridization of Architectural Styles for Integrated Enterprise Information Systems
NASA Astrophysics Data System (ADS)
Bagusyte, Lina; Lupeikiene, Audrone
Current enterprise systems engineering theory does not provide adequate support for the development of information systems on demand; more precisely, it is still taking shape. This chapter proposes the main architectural decisions that underlie the design of integrated enterprise information systems. It argues for extending service-oriented architecture by merging it with the component-based paradigm at the design stage and using connectors of different architectural styles. The suitability of the general-purpose language SysML for modeling integrated enterprise information systems architectures is described and arguments in its favor are presented.
Hardware architecture design of image restoration based on time-frequency domain computation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Jing; Jiao, Zipeng
2013-10-01
Image restoration algorithms based on time-frequency domain computation (TFDC) are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the common processing and numerical calculations. Then, to improve generality, an iteration control module is designed for iterative algorithms. In addition, to reduce the computational cost and memory requirements, the necessary optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and complex-number arithmetic. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to TFDC-based image restoration algorithms with good algorithmic generality, hardware realizability and high efficiency.
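The kind of time/frequency-domain computation such an architecture accelerates can be sketched in a few lines of NumPy: a 2D FFT, a pointwise complex multiplication, and an inverse FFT, arranged here as a Wiener-style deconvolution of a synthetically blurred image. The image, kernel, and noise constant are illustrative; this is a software reference for the computation, not the hardware pipeline.

```python
# Sketch of the FFT/IFFT-centred computation: Wiener-style deconvolution of a
# synthetically blurred image.  All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# Build a small Gaussian blur kernel padded to the image size.
y, x = np.mgrid[-2:3, -2:3]
kernel = np.exp(-(x**2 + y**2) / 2.0)
kernel /= kernel.sum()
psf = np.zeros_like(image)
psf[:5, :5] = kernel
psf = np.roll(psf, (-2, -2), axis=(0, 1))     # centre the kernel at (0, 0)

H = np.fft.fft2(psf)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# Wiener deconvolution: G = conj(H) / (|H|^2 + k)
k = 1e-3
G = np.conj(H) / (np.abs(H) ** 2 + k)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * G))
print("mean restoration error:", np.mean(np.abs(restored - image)))
```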
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
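The selection process described above, constraint filtering followed by a weighted balance of power, weight, and cost, can be sketched as follows; the candidate architectures, attribute values, constraint, and weights are all invented for illustration and are not the report's data.

```python
# Toy sketch of constraint filtering plus weighted-sum scoring for architecture
# selection.  Candidate values, constraints, and weights are invented.
candidates = {
    "dual-redundant": {"availability": 0.9995, "power_w": 120, "mass_kg": 9.0, "cost": 1.0},
    "triple-voting":  {"availability": 0.9999, "power_w": 180, "mass_kg": 13.5, "cost": 1.6},
    "self-checking":  {"availability": 0.9990, "power_w": 100, "mass_kg": 8.0, "cost": 0.9},
}

MIN_AVAILABILITY = 0.9995
weights = {"power_w": 0.4, "mass_kg": 0.3, "cost": 0.3}   # lower is better for all three

def score(attrs):
    # Normalize each criterion by the worst value among feasible candidates.
    return sum(w * attrs[c] / worst[c] for c, w in weights.items())

feasible = {n: a for n, a in candidates.items() if a["availability"] >= MIN_AVAILABILITY}
worst = {c: max(a[c] for a in feasible.values()) for c in weights}
best = min(feasible, key=lambda n: score(feasible[n]))
print(best, {n: round(score(a), 3) for n, a in feasible.items()})
```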
NASA Astrophysics Data System (ADS)
Liu, Lei; Hong, Xiaobin; Wu, Jian; Lin, Jintong
As Grid computing continues to gain popularity in industry and the research community, it also attracts more attention at the consumer level. The large number of users and the high frequency of job requests in the consumer market make it challenging. Clearly, current Client/Server (C/S)-based architectures will become infeasible for supporting large-scale Grid applications due to their poor scalability and poor fault tolerance. In this paper, based on our previous works [1, 2], a novel self-organized architecture to realize a highly scalable and flexible platform for Grids is proposed. Experimental results show that this architecture is suitable and efficient for consumer-oriented Grids.
Internet-enabled collaborative agent-based supply chains
NASA Astrophysics Data System (ADS)
Shen, Weiming; Kremer, Rob; Norrie, Douglas H.
2000-12-01
This paper presents some results of our recent research work related to the development of a new Collaborative Agent System Architecture (CASA) and an Infrastructure for Collaborative Agent Systems (ICAS). Initially proposed as a general architecture for Internet-based collaborative agent systems (particularly complex industrial collaborative agent systems), the architecture is very suitable for managing the Internet-enabled complex supply chain of a large manufacturing enterprise. The general collaborative agent system architecture with the basic communication and cooperation services, domain-independent components, prototypes and mechanisms is described. Benefits of implementing Internet-enabled supply chains with the proposed infrastructure are discussed. A case study on Internet-enabled supply chain management is presented.
Large liquid rocket engine transient performance simulation system
NASA Technical Reports Server (NTRS)
Mason, J. R.; Southwick, R. D.
1989-01-01
Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing verification. During this period specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJTOO - main injector; MCHBOO - main chamber; NOZLOO - nozzle thrust calculations; PBRNOO - preburner; PIPE02 - compressible flow without inertia; PUMPOO - polytropic pump; ROTROO - rotor torque balance/speed derivative; and TURBOO - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor. All functions necessary for multiple module operation were completed but the SOLVER implementation is still under development. This system, the Verification Checkout Facility (VCF), allows interactive comparison of module results to stored data as well as providing an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE.
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory A.; Ingham, Michel D.; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
Innovative systems and software engineering solutions are required to meet the increasingly challenging demands of deep-space robotic missions. While recent advances in the development of an integrated systems and software engineering approach have begun to address some of these issues, they are still at the core highly manual and, therefore, error-prone. This paper describes a task aimed at infusing MIT's model-based executive, Titan, into JPL's Mission Data System (MDS), a unified state-based architecture, systems engineering process, and supporting software framework. Results of the task are presented, including a discussion of the benefits and challenges associated with integrating mature model-based programming techniques and technologies into a rigorously-defined domain specific architecture.
ERIC Educational Resources Information Center
Smith, Carl A.; Boyer, Mark E.
2015-01-01
In light of concerns with architectural students' emotional jeopardy during traditional desk and final-jury critiques, the authors pursue alternative approaches intended to provide more supportive and mentoring verbal assessment in landscape architecture studios. In addition to traditional studio-based critiques throughout a semester, we provide…
Dynamic Weather Routes Architecture Overview
NASA Technical Reports Server (NTRS)
Eslami, Hassan; Eshow, Michelle
2014-01-01
This Dynamic Weather Routes (DWR) architecture overview presents the high-level software architecture of DWR, based on the CTAS software framework and the Direct-To automation tool. The document also covers external and internal data flows, the required datasets, changes to the Direct-To software for DWR, the collection of software statistics, and the code structure.
78 FR 7820 - Notice of Intelligent Mail Indicia Performance Criteria
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
... FURTHER INFORMATION CONTACT: Marlo Kay Ivey, Business Programs Specialist, Payment Technology, U.S. Postal... Performance Criteria and Security Architecture for Open Information Based Indicia (IBI) Postage Evidencing Systems and the Performance Criteria and Security Architecture for Closed Information Based Indicia (IBI...
NASA Astrophysics Data System (ADS)
Fathil, M. F. M.; Arshad, M. K. Md.; Hashim, U.; Ruslinda, A. R.; Gopinath, Subash C. B.; M. Nuzaihan M., N.; Ayub, R. M.; Adzhri, R.; Zaki, M.; Azman, A. H.
2016-07-01
This paper presents the preparation method of the photolithography chrome mask design used in the fabrication process of a double-spiral interdigitated electrode biosensor with back-gate biasing. Based on the fabrication process flow of the biosensor, the chrome masks are designed by drawing in the AutoCAD software. The overall width and length of the device are optimized at 7.0 mm and 10.0 mm, respectively. The fabrication process of the biosensor requires three chrome masks, covering back-gate opening, spiral IDE formation, and passivation area formation. The complete chrome mask design will be sent for chrome mask fabrication and for future use in biosensor fabrication.