NASA Technical Reports Server (NTRS)
LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.
2011-01-01
This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.
NASA Technical Reports Server (NTRS)
White, A. L.
1983-01-01
This paper examines the reliability of three architectures for six components. For each architecture, the probabilities of the failure states are given by algebraic formulas involving the component fault rate, the system recovery rate, and the operating time. The dominant failure modes are identified, and the change in reliability is considered with respect to changes in fault rate, recovery rate, and operating time. The major conclusions concern the influence of system architecture on failure modes and parameter requirements. Without this knowledge, a system designer may pick an inappropriate structure.
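The abstract does not reproduce the algebraic formulas, but the kind of model they come from can be sketched numerically. Below is a minimal, illustrative continuous-time Markov sketch (all rates are assumed values, not the paper's) of a triplex that races fault recovery against a coincident second fault:

```python
import numpy as np
from scipy.linalg import expm

# States: 3 good; 3 with 1 active fault; 2 good; 2 with 1 active fault;
# 1 good (simplex); system failed. Recovery means removing the faulty
# unit before a second, coincident fault defeats majority voting.
lam = 1e-4     # per-unit fault rate (per hour) -- illustrative
delta = 3.6e3  # recovery/reconfiguration rate (per hour) -- illustrative
T = 10.0       # operating time (hours)

Q = np.zeros((6, 6))
Q[0, 1] = 3 * lam   # first fault among 3 units
Q[1, 2] = delta     # fault removed -> duplex
Q[1, 5] = 2 * lam   # second fault before recovery -> system failure
Q[2, 3] = 2 * lam   # fault among remaining 2 units
Q[3, 4] = delta     # fault removed -> simplex
Q[3, 5] = lam       # coincident fault -> system failure
Q[4, 5] = lam       # simplex fault -> system failure
np.fill_diagonal(Q, -Q.sum(axis=1))

p0 = np.array([1.0, 0, 0, 0, 0, 0])
p_T = p0 @ expm(Q * T)
print(f"P(system failure by T={T} h) = {p_T[5]:.3e}")
```

Varying `lam`, `delta`, and `T` in such a model exhibits exactly the sensitivities the paper studies: the dominant failure mode shifts between fault-arrival exhaustion and coincident faults during recovery.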
Reliability Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2012-01-01
Quantitative assessments of system reliability and equivalent system mass (ESM) were made for different life support architectures based primarily on International Space Station technologies. The analysis was applied to a one-year deep-space mission. System reliability was increased by adding redundancy and spares, which added to the ESM. Results were thus obtained allowing a comparison of the ESM for each architecture at equivalent levels of reliability. Although the analysis contains numerous simplifications and uncertainties, the results suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support ESM and could influence the optimal degree of life support closure. Approaches for reducing reliability impacts were investigated and are discussed.
Avionics architecture studies for the entry research vehicle
NASA Technical Reports Server (NTRS)
Dzwonczyk, M. J.; Mckinney, M. F.; Adams, S. J.; Gauthier, R. J.
1989-01-01
This report is the culmination of a year-long investigation of the avionics architecture for NASA's Entry Research Vehicle (ERV). The Entry Research Vehicle is conceived as an unmanned, autonomous spacecraft to be deployed from the Shuttle. It will perform various aerodynamic and propulsive maneuvers in orbit and land at Edwards AFB after a 5- to 10-hour mission. The design and analysis of the vehicle's avionics architecture are detailed here. The architecture consists of a central triply redundant ultra-reliable fault-tolerant processor attached to three replicated and distributed MIL-STD-1553 buses for input and output. A reliability analysis found the architecture to be sufficiently reliable for the ERV mission plan.
A reliability analysis tool for SpaceWire network
NASA Astrophysics Data System (ADS)
Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou
2017-04-01
SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power, and fault protection. High reliability is a vital issue for spacecraft, so it is important to analyze and improve the reliability performance of the SpaceWire network. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. Based on the functional division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of every task populates a system reliability matrix, and the reliability of the network system is deduced by integrating the reliability indexes in this matrix. With this method, we developed a reliability analysis tool for SpaceWire networks based on VC, which also implements the computation schemes for the reliability matrix and for multi-path-task reliability. Using this tool, we analyzed several cases on typical architectures, and the analytic results indicate that a redundant architecture has better reliability performance than the basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability of the system or task. This reliability analysis tool thus has a direct influence on both task division and topology selection in the design phase of a SpaceWire network system.
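As a rough illustration of the task-based approach (the paper's exact matrix integration is not given in the abstract; the component names and reliabilities below are invented for the example), a task's reliability can be computed from the series elements shared by all of its routes and the parallel combination of its disjoint alternative routes:

```python
from math import prod

# Invented component reliabilities for a hypothetical SpaceWire network.
r = {"nodeA": 0.999, "nodeB": 0.998, "router1": 0.995,
     "router2": 0.995, "linkAB1": 0.997, "linkAB2": 0.997}

def task_reliability(shared, alt_paths):
    """Series elements common to all paths, times the parallel
    combination of the disjoint alternatives (independence assumed)."""
    r_shared = prod(r[e] for e in shared)
    r_alt = 1.0 - prod(1.0 - prod(r[e] for e in p) for p in alt_paths)
    return r_shared * r_alt

basic = task_reliability(["nodeA", "nodeB"], [["router1", "linkAB1"]])
redundant = task_reliability(["nodeA", "nodeB"],
                             [["router1", "linkAB1"],
                              ["router2", "linkAB2"]])
print(f"basic: {basic:.5f}  dual-redundant: {redundant:.5f}")
```

Repeating this per task fills one row of a reliability matrix; the system figure then aggregates the task indexes (e.g., as a product or a mission-weighted combination).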
Reliability Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin E.; Anderson, Molly S.
2011-01-01
Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
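A minimal sketch of the kind of spares calculation described, assuming a constant failure rate (1/MTBF) and ideal cold spares with perfect fault detection and switchover (the MTBF value is illustrative, not from the ISS database):

```python
from math import exp, factorial

def reliability_with_spares(mtbf_hours, mission_hours, spares):
    """R(t) for one function with constant failure rate 1/MTBF and
    `spares` identical cold spares, ideal detection and switchover:
    the Poisson probability of at most `spares` failures in t."""
    x = mission_hours / mtbf_hours
    return sum(exp(-x) * x**k / factorial(k) for k in range(spares + 1))

mission = 365 * 24  # 1-year deep-space mission
for s in range(4):
    print(f"spares={s}: R = {reliability_with_spares(10_000, mission, s):.4f}")
```

Multiplying such single-function figures across the many serial functions of a closed-loop system shows why the single-string, no-spares reliability collapses toward the quoted sub-1% level, and why each added technology forces extra spares elsewhere.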
Architecture-Based Reliability Analysis of Web Services
ERIC Educational Resources Information Center
Rahmani, Cobra Mariam
2012-01-01
In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. The current approaches often treat the entire WS environment as a black-box. Thus, the sensitivity…
Reliability analysis of multicellular system architectures for low-cost satellites
NASA Astrophysics Data System (ADS)
Erlank, A. O.; Bridges, C. P.
2018-06-01
Multicellular system architectures are proposed as a solution to the problem of low reliability currently seen amongst small, low cost satellites. In a multicellular architecture, a set of independent k-out-of-n systems mimic the cells of a biological organism. In order to be beneficial, a multicellular architecture must provide more reliability per unit of overhead than traditional forms of redundancy. The overheads include power consumption, volume and mass. This paper describes the derivation of an analytical model for predicting a multicellular system's lifetime. The performance of such architectures is compared against that of several common forms of redundancy and proven to be beneficial under certain circumstances. In addition, the problem of peripheral interfaces and cross-strapping is investigated using a purpose-developed, multicellular simulation environment. Finally, two case studies are presented based on a prototype cell implementation, which demonstrate the feasibility of the proposed architecture.
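For reference, the k-out-of-n reliability that underlies the cell analysis can be written directly as a binomial sum; the cell reliability below is an assumed placeholder, not a value from the paper:

```python
from math import comb

def k_of_n(k, n, r):
    """Reliability of a k-out-of-n system of independent cells,
    each with reliability r."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i)
               for i in range(k, n + 1))

r_cell = 0.95  # illustrative single-cell mission reliability
print("simplex        :", f"{r_cell:.4f}")
print("TMR (2-of-3)   :", f"{k_of_n(2, 3, r_cell):.4f}")
print("2-of-5 cellular:", f"{k_of_n(2, 5, r_cell):.4f}")
```

The paper's benefit argument amounts to comparing the reliability gain of such configurations against their power, volume, and mass overheads relative to simple duplication.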
Space transportation architecture: Reliability sensitivities
NASA Technical Reports Server (NTRS)
Williams, A. M.
1992-01-01
A sensitivity analysis is given of the benefits and drawbacks associated with a proposed Earth-to-orbit vehicle architecture. The architecture represents a fleet of six vehicles (two existing, four proposed) that would be responsible for performing various missions as mandated by NASA and the U.S. Air Force. Each vehicle has a prescribed flight rate per year for a period of 31 years. By exposing this fleet of vehicles to a probabilistic environment in which the fleet experiences failures, downtimes, setbacks, etc., the analysis determines the resiliency and costs associated with the fleet for specific vehicle/subsystem reliabilities. The resources required were actual observed data on the failures and downtimes associated with existing vehicles, data based on engineering judgement for proposed vehicles, and the development of a sensitivity analysis program.
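A toy version of such a probabilistic fleet exposure, with invented parameters rather than the study's observed data, might look like the following Monte Carlo sketch for a single vehicle's flight manifest:

```python
import random

def simulate_fleet(years=31, flights_per_year=8, p_fail=0.02,
                   downtime_years=0.75, n_trials=5_000):
    """Mean flights missed over the program: a failure stands the
    vehicle down for a fixed recovery period; flights scheduled during
    a stand-down accumulate as backlog. All parameters illustrative."""
    total_missed = 0.0
    for _ in range(n_trials):
        down_until = 0.0
        missed = 0
        for y in range(years):
            for f in range(flights_per_year):
                t = y + f / flights_per_year
                if t < down_until:
                    missed += 1            # grounded: flight slips
                elif random.random() < p_fail:
                    down_until = t + downtime_years
        total_missed += missed
    return total_missed / n_trials

print("mean flights missed over 31 years:", round(simulate_fleet(), 1))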
1994-01-29
… other processes, but that he arrived at his results in a different manner. Batory didn't start with idioms; he performed a domain analysis and … abstracted idioms. Through domain analysis and domain modeling, new idioms can be found and the form of architecture can be the same. It was also questioned …
Performance Evaluation of Reliable Multicast Protocol for Checkout and Launch Control Systems
NASA Technical Reports Server (NTRS)
Shu, Wei Wennie; Porter, John
2000-01-01
The overall objective of this project is to study reliability and performance of Real Time Critical Network (RTCN) for checkout and launch control systems (CLCS). The major tasks include reliability and performance evaluation of Reliable Multicast (RM) package and fault tolerance analysis and design of dual redundant network architecture.
National Launch System comparative economic analysis
NASA Technical Reports Server (NTRS)
Prince, A.
1992-01-01
Results are presented from an analysis of economic benefits (or losses), in the form of the life cycle cost savings, resulting from the development of the National Launch System (NLS) family of launch vehicles. The analysis was carried out by comparing various NLS-based architectures with the current Shuttle/Titan IV fleet. The basic methodology behind this NLS analysis was to develop a set of annual payload requirements for the Space Station Freedom and LEO, to design launch vehicle architectures around these requirements, and to perform life-cycle cost analyses on all of the architectures. An SEI requirement was included. Launch failure costs were estimated and combined with the relative reliability assumptions to measure the effects of losses. Based on the analysis, a Shuttle/NLS architecture evolving into a pressurized-logistics-carrier/NLS architecture appears to offer the best long-term cost benefit.
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1995-01-01
This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
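The tutorial's Markov models include fault-handling behavior (recovery rates) that simple closed forms cannot capture, but the simplest special case, a non-repairable TMR triad, has a well-known closed-form reliability that illustrates what such models compute:

```python
import math

lam = 1e-4  # per-processor failure rate (per hour), illustrative

def r_simplex(t):
    return math.exp(-lam * t)

def r_tmr(t):
    # Majority voting with no repair: survives while >= 2 of 3 work.
    # R^3 + 3 R^2 (1 - R) = 3 exp(-2 lam t) - 2 exp(-3 lam t)
    return 3 * math.exp(-2 * lam * t) - 2 * math.exp(-3 * lam * t)

for t in (10, 100, 1000):
    print(f"t={t:5d} h  simplex={r_simplex(t):.6f}  TMR={r_tmr(t):.6f}")
```

Tools like SURE and ASSIST exist precisely because realistic models add recovery transitions and degraded modes to this picture, after which closed forms become unwieldy.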
Near-Earth Phase Risk Comparison of Human Mars Campaign Architectures
NASA Technical Reports Server (NTRS)
Manning, Ted A.; Nejad, Hamed S.; Mattenberger, Chris
2013-01-01
A risk analysis of the launch, orbital assembly, and Earth-departure phases of human Mars exploration campaign architectures was completed as an extension of a probabilistic risk assessment (PRA) originally carried out under the NASA Constellation Program Ares V Project. The objective of the updated analysis was to study the sensitivity of loss-of-campaign risk to such architectural factors as composition of the propellant delivery portion of the launch vehicle fleet (Ares V heavy-lift launch vehicle vs. smaller/cheaper commercial launchers) and the degree of launcher or Mars-bound spacecraft element sparing. Both a static PRA analysis and a dynamic, event-based Monte Carlo simulation were developed and used to evaluate the probability of loss of campaign under different sparing options. Results showed that with no sparing, loss-of-campaign risk is strongly driven by launcher count and on-orbit loiter duration, favoring an all-Ares V launch approach. Further, the reliability of the all-Ares V architecture showed significant improvement with the addition of a single spare launcher/payload. Among architectures utilizing a mix of Ares V and commercial launchers, those that minimized the on-orbit loiter duration of Mars-bound elements were found to exceed the reliability of the no-spare all-Ares V campaign if unlimited commercial vehicle sparing was assumed.
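Ignoring loiter-duration effects (which the paper shows to be a key driver), the static part of such a sparing trade reduces to a binomial calculation; the launch reliabilities below are assumptions for illustration only:

```python
from math import comb

def campaign_success(n_required, spares, p_launch):
    """P(at least n_required successes in n_required + spares independent
    launch attempts); the complement is loss-of-campaign risk."""
    n = n_required + spares
    return sum(comb(n, i) * p_launch**i * (1 - p_launch)**(n - i)
               for i in range(n_required, n + 1))

# Illustrative: 1 heavy-lift launch vs. 6 smaller propellant launches.
print("all-heavy, no spare   :", f"{campaign_success(1, 0, 0.98):.4f}")
print("all-heavy, 1 spare    :", f"{campaign_success(1, 1, 0.98):.4f}")
print("6 commercial, no spare:", f"{campaign_success(6, 0, 0.98):.4f}")
print("6 commercial, 2 spares:", f"{campaign_success(6, 2, 0.98):.4f}")
```

The dynamic Monte Carlo in the paper layers on top of this the time-dependent effects (loiter, boil-off, assembly sequencing) that the static view cannot express.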
The Domain-Specific Software Architecture Program
1992-06-01
Kang, K.C; Cohen, S.C: Jess, J.A; Novak, W.E; Peterson, A.S. Feature- Oriented Domain Analysis ( FODA ) Feasibility Study. (CMU/SEI-90-TR-21, ADA235785...perspective of a con- trols engineer solving a problem using an iterative process of simulation and analysis . The CMU/SEI-92-SR-9 1 I ~math AnalysislP...for schedulability analysis and Markov processes for the determination of reliability. Software architectures are derived from these formal models. ORA
Technology advances and market forces: Their impact on high performance architectures
NASA Technical Reports Server (NTRS)
Best, D. R.
1978-01-01
Reasonable projections into future supercomputer architectures and technology require an analysis of the computer industry market environment, the current capabilities and trends within the component industry, and the research activities on computer architecture in the industrial and academic communities. Management, programmer, architect, and user must cooperate to increase the efficiency of supercomputer development efforts. Care must be taken to match the funding, compiler, architecture and application with greater attention to testability, maintainability, reliability, and usability than supercomputer development programs of the past.
Design and reliability analysis of DP-3 dynamic positioning control architecture
NASA Astrophysics Data System (ADS)
Wang, Fang; Wan, Lei; Jiang, Da-Peng; Xu, Yu-Ru
2011-12-01
As the exploration and exploitation of oil and gas proliferate in deepwater areas, the requirements on the reliability of dynamic positioning systems become increasingly stringent. The control objective of ensuring safe operation in deep water cannot be met by a single controller for dynamic positioning. In order to increase the availability and reliability of the dynamic positioning control system, triple-redundant hardware and software control architectures were designed and developed according to the safety specifications of the DP-3 classification notation for dynamically positioned ships and rigs. The hardware takes the form of a triple-redundant hot-standby configuration comprising three identical operator stations and three real-time control computers connected to each other through dual networks. The motion-control and redundancy-management functions of the control computers were implemented in software on the real-time operating system VxWorks. The software realization of loose task synchronization, majority voting, and fault detection is presented in detail. A hierarchical software architecture was planned during development, consisting of an application layer, a real-time layer, and a physical layer. The behavior of the DP-3 dynamic positioning control system was modeled as a Markov model to analyze its reliability, and the effects of parameter variations on the reliability measures were investigated. A time-domain dynamic simulation was carried out on a deepwater drilling rig to prove the feasibility of the proposed control architecture.
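As a complement to the paper's Markov reliability model, a minimal steady-state availability sketch for a triple-redundant system with a single repair resource (rates are illustrative assumptions, not the paper's parameters) can be set up as a birth-death chain:

```python
import numpy as np

lam, mu = 1e-4, 0.1  # per-channel failure and repair rates (per hour)

# Birth-death CTMC on the number of failed channels (0..3), one repair
# crew; the system is up while at least one channel is healthy.
Q = np.zeros((4, 4))
for k in range(3):
    Q[k, k + 1] = (3 - k) * lam  # one of the healthy channels fails
for k in range(1, 4):
    Q[k, k - 1] = mu             # repair completes
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: pi Q = 0 with sum(pi) = 1, solved in least squares.
A = np.vstack([Q.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("steady-state availability (>=1 channel up):", 1 - pi[3])
```

Sweeping `lam` and `mu` in such a chain reproduces the kind of parameter-sensitivity study the paper reports for its DP-3 architecture.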
Use of Model-Based Design Methods for Enhancing Resiliency Analysis of Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Knox, Lenora A.
The most common traditional non-functional requirement analysis is reliability. With systems becoming more complex, networked, and adaptive to environmental uncertainties, system resiliency has recently become the non-functional requirement analysis of choice. Analysis of system resiliency has challenges, which include defining resilience for domain areas, identifying resilience metrics, determining resilience modeling strategies, and understanding how best to integrate the concepts of risk and reliability into resiliency. Formal methods that integrate all of these concepts do not currently exist in specific domain areas. Leveraging RAMSoS, a model-based reliability analysis methodology for Systems of Systems (SoS), we propose an extension that accounts for resiliency analysis through evaluation of mission performance, risk, and cost using multi-criteria decision-making (MCDM) modeling and design trade study variability modeling evaluation techniques. This proposed methodology, coined RAMSoS-RESIL, is applied to a case study in the multi-agent unmanned aerial vehicle (UAV) domain to investigate the potential benefits of a mission architecture in which the functionality to complete a mission is disseminated across multiple UAVs (distributed) as opposed to being contained in a single UAV (monolithic). The case-study-based research demonstrates proof of concept for the proposed model-based technique and provides sufficient preliminary evidence to conclude which architectural design (distributed vs. monolithic) is most resilient, based on insight into mission resilience performance, risk, and cost in addition to the traditional analysis of reliability.
ERIC Educational Resources Information Center
Ramalhoto, M. F.
1999-01-01
Introduces a special theme journal issue on research and education in quality control, maintenance, reliability, risk analysis, and safety. Discusses each of these theme concepts and their applications to naval architecture, marine engineering, and industrial engineering. Considers the effects of the rapid transfer of research results through…
Fiber Access Networks: Reliability Analysis and Swedish Broadband Market
NASA Astrophysics Data System (ADS)
Wosinska, Lena; Chen, Jiajia; Larsen, Claus Popp
Fiber access network architectures such as active optical networks (AONs) and passive optical networks (PONs) have been developed to support the growing bandwidth demand. Whereas Swedish operators in particular prefer AON, this may not be the case for operators in other countries. The choice depends on a combination of technical requirements, practical constraints, business models, and cost. Due to the increasing importance of reliable access to network services, connection availability is becoming one of the most crucial issues for access networks, and this should be reflected in the network owner's architecture decision. In many cases protection against failures is realized by adding backup resources. However, there is a trade-off between the cost of protection and the level of service reliability, since improving reliability performance by duplication of network resources (and capital expenditures, CAPEX) may be too expensive. In this paper we present the evolution of fiber access networks and compare reliability performance in relation to investment and management cost for some representative cases. We consider both standard and novel architectures for deployment in both sparsely and densely populated areas. While some recent works have focused on PON protection schemes with reduced CAPEX, current and future effort should be put on minimizing the operational expenditures (OPEX) incurred during the access network lifetime.
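Connection-availability trades of this kind are typically computed from per-element MTBF/MTTR figures; the values below are assumptions for illustration, not the paper's case-study data:

```python
def avail(mtbf_h, mttr_h):
    """Steady-state availability of one element."""
    return mtbf_h / (mtbf_h + mttr_h)

# Illustrative element availabilities along one access connection.
olt = avail(100_000, 4)       # central-office equipment, fast repair
feeder = avail(500_000, 24)   # feeder fiber, long repair time
drop = avail(1_000_000, 24)   # drop fiber
ont = avail(200_000, 8)       # customer-side unit

unprotected = olt * feeder * drop * ont
# Protect only the feeder by duplicating it (simplified 1+1 scheme).
protected = olt * (1 - (1 - feeder) ** 2) * drop * ont

min_per_year = 365 * 24 * 60
for name, a in (("unprotected", unprotected),
                ("feeder-protected", protected)):
    print(f"{name:17s} A={a:.6f}  downtime ~{(1 - a) * min_per_year:.0f} min/yr")
```

The CAPEX/OPEX question is then whether the downtime saved by the duplicated resource justifies its installation and lifetime management cost for the population density at hand.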
Reliability and Productivity Modeling for the Optimization of Separated Spacecraft Interferometers
NASA Technical Reports Server (NTRS)
Kenny, Sean (Technical Monitor); Wertz, Julie
2002-01-01
As technological systems grow in capability, they also grow in complexity. Due to this complexity, it is no longer possible for a designer to use engineering judgement to identify the components that have the largest impact on system life cycle metrics, such as reliability, productivity, cost, and cost effectiveness. One way of identifying these key components is to build quantitative models and analysis tools that can be used to aid the designer in making high-level architecture decisions. Once these key components have been identified, two main approaches to improving a system using these components exist: add redundancy or improve the reliability of the component. In reality, the most effective approach for almost any system will be some combination of these two approaches, in varying orders of magnitude for each component. Therefore, this research tries to answer the question of how to divide funds, between adding redundancy and improving the reliability of components, to most cost-effectively improve the life cycle metrics of a system. While this question is relevant to any complex system, this research focuses on one type of system in particular: Separated Spacecraft Interferometers (SSI). Quantitative models are developed to analyze the key life cycle metrics of different SSI system architectures. Next, tools are developed to compare a given set of architectures in terms of total performance, by coupling different life cycle metrics together into one performance metric. Optimization tools, such as simulated annealing and genetic algorithms, are then used to search the entire design space to find the "optimal" architecture design. Sensitivity analysis tools have been developed to determine how sensitive the results of these analyses are to uncertain user-defined parameters. Finally, several possibilities for future work in this area of research are presented.
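A highly simplified sketch of the redundancy-versus-improvement allocation question using simulated annealing (the cost and reliability models are invented for the sketch; the real SSI design space is far larger):

```python
import math
import random

random.seed(1)
BUDGET = 6  # discrete funding units split between the two approaches
R0 = 0.90   # baseline component reliability (assumed)

def system_reliability(n_redundant, n_improve):
    r = 1 - (1 - R0) * 0.5 ** n_improve      # each unit halves unreliability
    return 1 - (1 - r) ** (1 + n_redundant)  # parallel redundancy

def anneal(steps=5000, temp0=0.1):
    x = best = (0, 0)
    for i in range(steps):
        t = temp0 * (1 - i / steps) + 1e-9
        n_r = random.randint(0, BUDGET)
        cand = (n_r, BUDGET - n_r)           # spend the whole budget
        d = system_reliability(*cand) - system_reliability(*x)
        if d > 0 or random.random() < math.exp(d / t):
            x = cand
        if system_reliability(*x) > system_reliability(*best):
            best = x
    return best

nr, ni = anneal()
print(f"redundant units: {nr}  improvement units: {ni}  "
      f"R = {system_reliability(nr, ni):.6f}")
```

This two-variable space could of course be enumerated; annealing (or a genetic algorithm) earns its keep when each of many components has its own allocation and the metrics couple through cost and productivity models.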
The art of fault-tolerant system reliability modeling
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Johnson, Sally C.
1990-01-01
A step-by-step tutorial of the methods and tools used for the reliability analysis of fault-tolerant systems is presented. Emphasis is on the representation of architectural features in mathematical models. Details of the mathematical solution of complex reliability models are not presented. Instead the use of several recently developed computer programs--SURE, ASSIST, STEM, PAWS--which automate the generation and solution of these models is described.
NASA Technical Reports Server (NTRS)
1972-01-01
The design is reported of an advanced modular computer system designated the Automatically Reconfigurable Modular Multiprocessor System, which anticipates requirements for higher computing capacity and reliability for future spaceborne computers. Subjects discussed include: an overview of the architecture, mission analysis, synchronous and nonsynchronous scheduling control, reliability, and data transmission.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2012-01-11
GENI Project: Georgia Tech is developing a decentralized, autonomous, internet-like control architecture and control software system for the electric power grid. Georgia Tech’s new architecture is based on the emerging concept of electricity prosumers—economically motivated actors that can produce, consume, or store electricity. Under Georgia Tech’s architecture, all of the actors in an energy system are empowered to offer associated energy services based on their capabilities. The actors achieve their sustainability, efficiency, reliability, and economic objectives, while contributing to system-wide reliability and efficiency goals. This is in marked contrast to the current one-way, centralized control paradigm.
Predictable and reliable ECG monitoring over IEEE 802.11 WLANs within a hospital.
Park, Juyoung; Kang, Kyungtae
2014-09-01
Telecardiology provides mobility for patients who require constant electrocardiogram (ECG) monitoring. However, its safety is dependent on the predictability and robustness of data delivery, which must overcome errors in the wireless channel through which the ECG data are transmitted. We report here a framework that can be used to gauge the applicability of IEEE 802.11 wireless local area network (WLAN) technology to ECG monitoring systems in terms of delay constraints and transmission reliability. For this purpose, a medical-grade WLAN architecture achieved predictable delay through the combination of a medium access control mechanism based on the point coordination function provided by IEEE 802.11 and an error control scheme based on Reed-Solomon coding and block interleaving. The size of the jitter buffer needed was determined by this architecture to avoid service dropout caused by buffer underrun, through analysis of variations in transmission delay. Finally, we assessed this architecture in terms of service latency and reliability by modeling the transmission of uncompressed two-lead electrocardiogram data from the MIT-BIH Arrhythmia Database and highlight the applicability of this wireless technology to telecardiology.
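The residual error rate after Reed-Solomon coding, assuming the block interleaving has randomized burst errors so symbol errors can be treated as independent, follows from a binomial tail. The code parameters below are the common RS(255,223); the symbol error rates are illustrative, not taken from the paper:

```python
from math import comb

def rs_block_failure(n, k, p_sym):
    """P(more than t symbol errors in an RS(n, k) codeword), where
    t = (n - k) // 2 is the symbol-correction capability."""
    t = (n - k) // 2
    return sum(comb(n, i) * p_sym**i * (1 - p_sym)**(n - i)
               for i in range(t + 1, n + 1))

for p in (1e-2, 3e-2, 5e-2):
    print(f"p_sym={p:.0e}  P(uncorrectable RS(255,223) block) = "
          f"{rs_block_failure(255, 223, p):.2e}")
```

Combined with the worst-case polling delay of the point coordination function, such per-block failure figures bound the jitter-buffer size needed to keep the ECG stream free of underruns.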
Advanced Launch System Multi-Path Redundant Avionics Architecture Analysis and Characterization
NASA Technical Reports Server (NTRS)
Baker, Robert L.
1993-01-01
The objective of the Multi-Path Redundant Avionics Suite (MPRAS) program is the development of a set of avionic architectural modules which will be applicable to the family of launch vehicles required to support the Advanced Launch System (ALS). To enable ALS cost/performance requirements to be met, the MPRAS must support autonomy, maintenance, and testability capabilities which exceed those present in conventional launch vehicles. The multi-path redundant or fault tolerance characteristics of the MPRAS are necessary to offset a reduction in avionics reliability due to the increased complexity needed to support these new cost reduction and performance capabilities and to meet avionics reliability requirements which will provide cost-effective reductions in overall ALS recurring costs. A complex, real-time distributed computing system is needed to meet the ALS avionics system requirements. General Dynamics, Boeing Aerospace, and C.S. Draper Laboratory have proposed system architectures as candidates for the ALS MPRAS. The purpose of this document is to report the results of independent performance and reliability characterization and assessment analyses of each proposed candidate architecture and qualitative assessments of testability, maintainability, and fault tolerance mechanisms. These independent analyses were conducted as part of the MPRAS Part 2 program and were carried out under NASA Langley Research Contract NAS1-17964, Task Assignment 28.
Partially Decentralized Control Architectures for Satellite Formations
NASA Technical Reports Server (NTRS)
Carpenter, J. Russell; Bauer, Frank H.
2002-01-01
In a partially decentralized control architecture, more than one but less than all nodes have supervisory capability. This paper describes an approach to choosing the number of supervisors in such an architecture, based on a reliability vs. cost trade. It also considers the implications of these results for the design of navigation systems for satellite formations that could be controlled with a partially decentralized architecture. Using an assumed cost model, analytic and simulation-based results indicate that it may be cheaper to achieve a given overall system reliability with a partially decentralized architecture containing only a few supervisors than with either fully decentralized or purely centralized architectures. Nominally, the subset of supervisors may act as centralized estimation and control nodes for corresponding subsets of the remaining subordinate nodes, and act as decentralized estimation and control peers with respect to each other. However, in the context of partially decentralized satellite formation control, the absolute positions and velocities of each spacecraft are unique, so that correlations which make estimates using only local information suboptimal only occur through common biases and process noise. Covariance and Monte Carlo analyses of a simplified system show that this lack of correlation may allow simplification of the local estimators while preserving the global optimality of the maneuvers commanded by the supervisors.
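The reliability-versus-cost trade can be illustrated with an assumed model (node reliabilities, relative costs, and the operability rule below are all invented for the sketch, in the spirit of the paper's assumed cost model): the formation works if at least one supervisor and enough subordinates survive.

```python
from math import comb

def at_least(k, n, r):
    """P(at least k of n independent nodes with reliability r are up)."""
    return sum(comb(n, i) * r**i * (1 - r)**(n - i)
               for i in range(k, n + 1))

N, R_NODE = 8, 0.95      # formation size, per-node reliability (assumed)
C_SUP, C_SUB = 3.0, 1.0  # assumed relative unit costs
K_SUB = 4                # subordinates needed for the mission (assumed)

for m in range(1, N - K_SUB + 1):  # m = number of supervisors
    rel = at_least(1, m, R_NODE) * at_least(K_SUB, N - m, R_NODE)
    cost = m * C_SUP + (N - m) * C_SUB
    print(f"supervisors={m}  reliability={rel:.5f}  cost={cost:.1f}")
```

Under cost models of this shape, a small number of supervisors tends to capture most of the reliability benefit of full decentralization at a fraction of its cost, which is the paper's central observation.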
NASA Astrophysics Data System (ADS)
Armstrong, Michael James
Increases in power demands and changes in the design practices of overall equipment manufacturers have led to a new paradigm in vehicle systems definition. The development of unique power systems architectures is of increasing importance to overall platform feasibility and must be pursued early in the aircraft design process. Many vehicle systems architecture trades must be conducted concurrent to platform definition. With an increased complexity introduced during conceptual design, accurate predictions of unit-level sizing requirements must be made. Architecture-specific emergent requirements must be identified which arise due to the complex integrated effect of unit behaviors. Off-nominal operating scenarios present sizing-critical requirements to the aircraft vehicle systems. These requirements are architecture specific and emergent. Standard heuristically defined failure mitigation is sufficient for sizing traditional and evolutionary architectures. However, architecture concepts which vary significantly in terms of structure and composition require that unique failure mitigation strategies be defined for accurate estimations of unit-level requirements. Identifying these off-nominal emergent operational requirements requires extensions to traditional safety and reliability tools and the systematic identification of optimal performance degradation strategies. Discrete operational constraints posed by traditional Functional Hazard Assessment (FHA) are replaced by continuous relationships between function loss and operational hazard. These relationships pose the objective function for hazard minimization. Load shedding optimization is performed for all statistically significant failures by varying the allocation of functional capability throughout the vehicle systems architecture. Expressing hazards, and thereby reliability requirements, as continuous relationships with the magnitude and duration of functional failure requires augmentations to the traditional means of system safety assessment (SSA). The traditional two-state, discrete system reliability assessment proves insufficient. Reliability is therefore handled in an analog fashion: as a function of magnitude of failure and failure duration. A series of metrics are introduced which characterize system performance in terms of analog hazard probabilities. These include analog and cumulative system and functional risk, hazard correlation, and extensions to the traditional component importance metrics. Continuous FHA, load shedding optimization, and analog SSA constitute the SONOMA process (Systematic Off-Nominal Requirements Analysis). Analog system safety metrics inform both architecture optimization (changes in unit-level capability and reliability) and architecture augmentation (changes in architecture structure and composition). This process was applied to two vehicle systems concepts (conventional and 'more-electric') in terms of loss/hazard relationships with varying degrees of fidelity. Application of this process shows that the traditional assumptions regarding the structure of the function-loss vs. hazard relationship apply undue design bias to functions and components during exploratory design. This bias is illustrated in terms of inaccurate estimations of the system and function level risk and unit-level importance. It was also shown that off-nominal emergent requirements must be defined specific to each architecture concept.
Quantitative comparisons of architecture-specific off-nominal performance were obtained which provide evidence of the need for accurate definition of load shedding strategies during architecture exploratory design. Formally expressing performance degradation strategies in terms of the minimization of a continuous hazard space enhances the system architect's ability to accurately predict sizing-critical emergent requirements concurrent to architecture definition. Furthermore, the methods and frameworks generated here provide a structured and flexible means for eliciting these architecture-specific requirements during the performance of architecture trades.
Sensor Network Architectures for Monitoring Underwater Pipelines
Mohamed, Nader; Jawhar, Imad; Al-Jaroodi, Jameela; Zhang, Liren
2011-01-01
This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (Radio Frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network architectures. The reliability evaluation, characteristics, advantages, and disadvantages among these architectures are discussed and compared. Three reliability factors are used for the discussion and comparison: the network connectivity, the continuity of power supply for the network, and the physical network security. In addition, the paper also develops and evaluates a hierarchical sensor network framework for underwater pipeline monitoring. PMID:22346669
Fault tolerant architectures for integrated aircraft electronics systems, task 2
NASA Technical Reports Server (NTRS)
Levitt, K. N.; Melliar-Smith, P. M.; Schwartz, R. L.
1984-01-01
The architectural basis for an advanced fault tolerant on-board computer to succeed the current generation of fault tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand driven data-flow architecture, which appears to have possible application in fault tolerant systems is described and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.
A fault-tolerant avionics suite for an entry research vehicle
NASA Technical Reports Server (NTRS)
Dzwonczyk, Mark; Stone, Howard
1988-01-01
A highly-reliable avionics suite has been designed for an Entry Research Vehicle. The autonomous spacecraft would be deployed from the Space Shuttle Orbiter and perform a variety of aerodynamic and propulsive maneuvers which may be required for future space transportation system vehicles. The flight electronics consist of a central fault-tolerant processor, which is resilient to all first failures, reliably cross-strapped to redundant and distributed sets of sensors and effectors. This paper describes the preliminary design and analysis of the architecture which resulted from a fifteen month study by the Charles Stark Draper Laboratory for the NASA Langley Research Center. After a brief introduction to the design task, the architecture of the central flight computer and its interface to the vehicle are discussed. Following this, the method and results of the baseline reliability study for the avionic suite are presented.
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
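A minimal sketch of the aggregation idea, with an invented component library and architecture description standing in for the patent's low-level reliability models:

```python
# Leaf entries name low-level component models (reduced here to a single
# reliability number); 'series'/'parallel' nodes play the role of the
# system architecture description that defines their interconnection.
LIBRARY = {"cpu": 0.995, "bus": 0.999, "sensor": 0.99, "actuator": 0.98}

ARCHITECTURE = ("series",
                ("parallel", "cpu", "cpu", "cpu"),  # any 1 of 3 suffices
                "bus",
                ("parallel", "sensor", "sensor"),
                "actuator")

def aggregate(node):
    """Recursively fold the architecture description into one figure."""
    if isinstance(node, str):
        return LIBRARY[node]
    op, *children = node
    rs = [aggregate(c) for c in children]
    if op == "series":
        out = 1.0
        for r in rs:
            out *= r
        return out
    # parallel: the group fails only if every branch fails
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

print("system reliability:", round(aggregate(ARCHITECTURE), 6))
```

A real generator of this kind would emit a full model (e.g., a Markov chain) for a downstream evaluation tool rather than a single number, but the compositional structure is the same.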
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan H.; Harper, Richard E.; Jaskowiak, Kenneth R.; Rosch, Gene; Alger, Linda S.; Schor, Andrei L.
1990-01-01
An avionics architecture for the advanced launch system (ALS) that uses validated hardware and software building blocks developed under the advanced information processing system program is presented. The AIPS for ALS architecture defined is preliminary, and reliability requirements can be met by the AIPS hardware and software building blocks that are built using the state-of-the-art technology available in the 1992-93 time frame. The level of detail in the architecture definition reflects the level of detail available in the ALS requirements. As the avionics requirements are refined, the architecture can also be refined and defined in greater detail with the help of analysis and simulation tools. A useful methodology is demonstrated for investigating the impact of the avionics suite on the recurring cost of the ALS. It is shown that allowing the vehicle to launch with selected detected failures can potentially reduce the recurring launch costs. A comparative analysis shows that validated fault-tolerant avionics built out of Class B parts can result in lower life-cycle cost in comparison to simplex avionics built out of Class S parts or other redundant architectures.
NASA Technical Reports Server (NTRS)
Dennehy, Cornelius J.
2010-01-01
This final report summarizes the results of a comparative assessment of the fault tolerance and reliability of different Guidance, Navigation and Control (GN&C) architectural approaches. This study was proactively performed by a combined Massachusetts Institute of Technology (MIT) and Draper Laboratory team as a GN&C "Discipline-Advancing" activity sponsored by the NASA Engineering and Safety Center (NESC). This systematic comparative assessment of GN&C system architectural approaches was undertaken as a fundamental step towards understanding the opportunities for, and limitations of, architecting highly reliable and fault tolerant GN&C systems composed of common avionic components. The primary goal of this study was to obtain architectural 'rules of thumb' that could positively influence future designs in the direction of an optimized (i.e., most reliable and cost-efficient) GN&C system. A secondary goal was to demonstrate the application and the utility of a systematic modeling approach that maps the entire possible architecture solution space.
Evaluation of fault-tolerant parallel-processor architectures over long space missions
NASA Technical Reports Server (NTRS)
Johnson, Sally C.
1989-01-01
The impact of a five-year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^(-7). The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.
Trade Studies of Space Launch Architectures using Modular Probabilistic Risk Analysis
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Go, Susie
2006-01-01
A top-down risk assessment in the early phases of space exploration architecture development can provide understanding and intuition of the potential risks associated with new designs and technologies. In this approach, risk analysts draw from their past experience and the heritage of similar existing systems as a source for reliability data. This top-down approach captures the complex interactions of the risk driving parts of the integrated system without requiring detailed knowledge of the parts themselves, which is often unavailable in the early design stages. Traditional probabilistic risk analysis (PRA) technologies, however, suffer several drawbacks that limit their timely application to complex technology development programs. The most restrictive of these is a dependence on static planning scenarios, expressed through fault and event trees. Fault trees incorporating comprehensive mission scenarios are routinely constructed for complex space systems, and several commercial software products are available for evaluating fault statistics. These static representations cannot capture the dynamic behavior of system failures without substantial modification of the initial tree. Consequently, the development of dynamic models using fault tree analysis has been an active area of research in recent years. This paper discusses the implementation and demonstration of dynamic, modular scenario modeling for integration of subsystem fault evaluation modules using the Space Architecture Failure Evaluation (SAFE) tool. SAFE is a C++ code that was originally developed to support NASA's Space Launch Initiative. It provides a flexible framework for system architecture definition and trade studies. SAFE supports extensible modeling of dynamic, time-dependent risk drivers of the system and functions at the level of fidelity for which design and failure data exists. The approach is scalable, allowing inclusion of additional information as detailed data becomes available. The tool performs a Monte Carlo analysis to provide statistical estimates. Example results of an architecture system reliability study are summarized for an exploration system concept using heritage data from liquid-fueled expendable Saturn V/Apollo launch vehicles.
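For contrast with SAFE's dynamic Monte Carlo approach, the static fault-tree evaluation it generalizes can be written in a few lines (the gate structure and basic-event probabilities below are illustrative, not from the paper):

```python
# Static fault-tree evaluation: basic-event probabilities propagate
# through AND/OR gates under an independence assumption.
def ft(node):
    kind = node[0]
    if kind == "basic":
        return node[1]
    probs = [ft(child) for child in node[1:]]
    if kind == "AND":
        p = 1.0
        for q in probs:
            p *= q
        return p
    # OR gate: fails if any input fails
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Illustrative tree: loss of mission if the engine fails, or if both
# the primary and backup avionics strings fail.
tree = ("OR",
        ("basic", 2e-3),                            # engine failure
        ("AND", ("basic", 1e-3), ("basic", 1e-3)))  # both avionics strings
print("P(top event) =", ft(tree))
```

What such a tree cannot express, and what motivates the Monte Carlo layer, is time dependence: probabilities that evolve with mission phase, or failures whose consequences depend on when they occur.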
Real-time traffic sign detection and recognition
NASA Astrophysics Data System (ADS)
Herbschleb, Ernst; de With, Peter H. N.
2009-01-01
The continuous growth of imaging databases increasingly requires analysis tools for the extraction of features. In this paper, a new architecture for the detection of traffic signs is proposed. The architecture is designed to process a large database with tens of millions of images with resolutions up to 4,800×2,400 pixels. Because of the size of the database, both high reliability and high throughput are required. The novel architecture consists of a three-stage algorithm with multiple steps per stage, combining both color and specific spatial information. The first stage contains an area-limitation step which is performance-critical for both the detection rate and the overall processing time. The second stage locates suggestions for traffic signs using recently published feature processing. The third stage contains a validation step to enhance the reliability of the algorithm; during this stage, the traffic signs are recognized. Experiments show a convincing detection rate of 99%. With respect to computational speed, the throughput for line-of-sight images of 800×600 pixels is 35 Hz, and for panorama images it is 4 Hz. Our novel architecture outperforms existing algorithms with respect to both detection rate and throughput.
Reliability Engineering for Service Oriented Architectures
2013-02-01
Common Object Request Broker Architecture Ecosystem In software , an ecosystem is a set of applications and/or services that grad- ually build up over time...Enterprise Service Bus Foreign In an SOA context: Any SOA, service or software which the owners of the calling software do not have control of, either...SOA Service Oriented Architecture SRE Software Reliability Engineering System Mode Many systems exhibit different modes of operation. E.g. the cockpit
NASA Technical Reports Server (NTRS)
Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.
1992-01-01
Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common-mode faults is described. Analytical models for AFTA performance, reliability, availability, life cycle cost, weight, power, and volume are developed. An approach is presented for using the VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
Sophisticated Calculation of the 1oo4-architecture for Safety-related Systems Conforming to IEC61508
NASA Astrophysics Data System (ADS)
Hayek, A.; Bokhaiti, M. Al; Schwarz, M. H.; Boercsoek, J.
2012-05-01
With the publication and enforcement of the standard IEC 61508 for safety-related systems, recent system architectures have been presented and evaluated. Among a number of techniques and measures for the evaluation of the safety integrity level (SIL) of safety-related systems, measures such as reliability block diagrams and Markov models are used to analyze the probability of failure on demand (PFD) and the mean time to failure (MTTF) in conformance with IEC 61508. The current paper deals with the quantitative analysis of the novel 1oo4 (one-out-of-four) architecture presented in recent work, and sophisticated calculations for the required parameters are therefore introduced. The provided 1oo4 architecture represents an advanced safety architecture based on on-chip redundancy which is 3-failure safe, meaning that at least one of the four channels has to work correctly in order to trigger the safety function.
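Under the simplifying assumptions of identical channels, no common-cause (beta-factor) contribution, and no repair before proof test, the well-known average-PFD approximation for a 1ooN group in the IEC 61508 framework is (lambda_DU · T_proof)^N / (N + 1); the rate and interval below are illustrative, not the paper's values:

```python
def pfd_1oo_n(n, lam_du, t_proof_h):
    """Simplified IEC 61508-style average PFD for a 1ooN group of
    identical channels: dangerous-undetected rate lam_du, proof-test
    interval t_proof_h, ignoring common-cause and repair terms."""
    return (lam_du * t_proof_h) ** n / (n + 1)

lam_du, t_proof = 1e-6, 8760.0  # per hour; one-year proof test (assumed)
for n in (1, 2, 4):
    print(f"1oo{n}: PFD_avg ~ {pfd_1oo_n(n, lam_du, t_proof):.2e}")
```

In practice the common-cause beta-factor term dominates at high N, which is why the paper's more sophisticated calculations matter: the idealized formula above badly overstates the benefit of the fourth channel.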
Design of an integrated airframe/propulsion control system architecture
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Lee, C. William; Strickland, Michael J.
1990-01-01
The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that used both reliability and performance tools. An account is given of the motivation for the final design and problems associated with both reliability and performance modeling. The appendices contain a listing of the code for both the reliability and performance model used in the design.
System data communication structures for active-control transport aircraft, volume 1
NASA Technical Reports Server (NTRS)
Hopkins, A. L.; Martin, J. H.; Brock, L. D.; Jansson, D. G.; Serben, S.; Smith, T. B.; Hanley, L. D.
1981-01-01
Candidate data communication techniques are identified, including dedicated links, local buses, broadcast buses, multiplex buses, and mesh networks. The design methodology for mesh networks is then discussed, including network topology and node architecture. Several concepts of power distribution are reviewed, including current limiting and mesh networks for power. The technology issues of packaging, transmission media, and lightning are addressed, and, finally, the analysis tools developed to aid in the communication design process are described. There are special tools to analyze the reliability and connectivity of networks and more general reliability analysis tools for all types of systems.
Conceptual Launch Vehicle and Spacecraft Design for Risk Assessment
NASA Technical Reports Server (NTRS)
Motiwala, Samira A.; Mathias, Donovan L.; Mattenberger, Christopher J.
2014-01-01
One of the most challenging aspects of developing human space launch and exploration systems is minimizing and mitigating the many potential risk factors to ensure the safest possible design while also meeting the required cost, weight, and performance criteria. In order to accomplish this, effective risk analyses and trade studies are needed to identify key risk drivers, dependencies, and sensitivities as the design evolves. The Engineering Risk Assessment (ERA) team at NASA Ames Research Center (ARC) develops advanced risk analysis approaches, models, and tools to provide such meaningful risk and reliability data throughout vehicle development. The goal of the project presented in this memorandum is to design a generic launch vehicle and spacecraft architecture that can be used to develop and demonstrate these new risk analysis techniques without relying on other proprietary or sensitive vehicle designs. To accomplish this, initial spacecraft and launch vehicle (LV) designs were established using historical sizing relationships for a mission delivering four crewmembers and equipment to the International Space Station (ISS). Mass-estimating relationships (MERs) were used to size the crew capsule and launch vehicle, and a combination of optimization techniques and iterative design processes were employed to determine a possible two-stage-to-orbit (TSTO) launch trajectory into a 350-kilometer orbit. Primary subsystems were also designed for the crewed capsule architecture, based on a 24-hour on-orbit mission with a 7-day contingency. Safety analysis was also performed to identify major risks to crew survivability and assess the system's overall reliability. These procedures and analyses validate that the architecture's basic design and performance are reasonable to be used for risk trade studies. While the vehicle designs presented are not intended to represent a viable architecture, they will provide a valuable initial platform for developing and demonstrating innovative risk assessment capabilities.
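The sizing step can be illustrated with the rocket equation; the delta-v split, specific impulses, and structural fractions below are assumptions for a sketch, not the memorandum's MERs:

```python
import math

G0 = 9.81  # m/s^2

def stage(payload_kg, dv_ms, isp_s, f_struct):
    """Size one stage via the rocket equation. f_struct is the assumed
    structural fraction: dry mass / (dry + propellant)."""
    r = math.exp(dv_ms / (G0 * isp_s))  # required initial/final mass ratio
    k = f_struct / (1 - f_struct)       # dry mass per unit propellant
    prop = (r - 1) * payload_kg / (1 - (r - 1) * k)
    if prop <= 0:
        raise ValueError("stage infeasible with these assumptions")
    return prop, k * prop

# Illustrative TSTO split of ~9.2 km/s (incl. losses) to a ~350 km orbit.
capsule = 9_000.0  # kg, notional 4-crew capsule
p2, d2 = stage(capsule, 5200.0, 450.0, 0.10)            # upper stage
p1, d1 = stage(capsule + p2 + d2, 4000.0, 300.0, 0.07)  # booster
print(f"upper stage: {p2:,.0f} kg prop, {d2:,.0f} kg dry")
print(f"booster    : {p1:,.0f} kg prop, {d1:,.0f} kg dry")
print(f"gross liftoff mass ~ {capsule + p1 + d1 + p2 + d2:,.0f} kg")
```

Iterating such closed-form sizing against trajectory losses and subsystem MERs is what produces a self-consistent reference vehicle on which the risk models can then be exercised.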
Sustainable, Reliable Mission-Systems Architecture
NASA Technical Reports Server (NTRS)
O'Neil, Graham; Orr, James K.; Watson, Steve
2005-01-01
A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.
NASA Technical Reports Server (NTRS)
Watson, Steve; Orr, Jim; O'Neil, Graham
2004-01-01
A mission-systems architecture based on a highly modular "systems of systems" infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is absolutely essential for an affordable and sustainable space exploration program. This architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimum sustaining engineering. This paper proposes such an architecture. Lessons learned from the space shuttle program are applied to help define and refine the model.
Sustainable, Reliable Mission-Systems Architecture
NASA Technical Reports Server (NTRS)
O'Neil, Graham; Orr, James K.; Watson, Steve
2007-01-01
A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology, is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.
CRAB3: Establishing a new generation of services for distributed analysis at CMS
NASA Astrophysics Data System (ADS)
Cinquilli, M.; Spiga, D.; Grandi, C.; Hernàndez, J. M.; Konstantinov, P.; Mascheroni, M.; Riahi, H.; Vaandering, E.
2012-12-01
In CMS Computing the highest priorities for analysis tools are the improvement of the end users' ability to produce and publish reliable samples and analysis results as well as a transition to a sustainable development and operations model. To achieve these goals CMS decided to incorporate analysis processing into the same framework as data and simulation processing. This strategy foresees that all workload tools (Tier0, Tier1, production, analysis) share a common core with long-term maintainability as well as the standardization of the operator interfaces. The re-engineered analysis workload manager, called CRAB3, makes use of newer technologies, such as RESTful web services and NoSQL databases, aiming to increase the scalability and reliability of the system. As opposed to CRAB2, in CRAB3 all work is centrally injected and managed in a global queue. A pool of agents, which can be geographically distributed, consumes work from the central services serving the user tasks. The new architecture of CRAB substantially changes the deployment model and operations activities. In this paper we present the implementation of CRAB3, emphasizing how the new architecture improves the workflow automation and simplifies maintainability. In particular, we highlight the impact of the new design on daily operations.
Local connectome phenotypes predict social, health, and cognitive factors
Powell, Michael A.; Garcia, Javier O.; Yeh, Fang-Cheng; Vettel, Jean M.; Verstynen, Timothy
2018-01-01
The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample (N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions. PMID:29911679
A systematic review and meta-analysis of sleep architecture and chronic traumatic brain injury.
Mantua, Janna; Grillakis, Antigone; Mahfouz, Sanaa H; Taylor, Maura R; Brager, Allison J; Yarnell, Angela M; Balkin, Thomas J; Capaldi, Vincent F; Simonelli, Guido
2018-02-02
Sleep quality appears to be altered by traumatic brain injury (TBI). However, whether persistent post-injury changes in sleep architecture are present is unknown and relatively unexplored. We conducted a systematic review and meta-analysis to assess the extent to which chronic TBI (>6 months since injury) is characterized by changes to sleep architecture. We also explored the relationship between sleep architecture and TBI severity. In the fourteen included studies, sleep was assessed with at least one night of polysomnography in both chronic TBI participants and controls. Statistical analyses, performed using Comprehensive Meta-Analysis software, revealed that chronic TBI is characterized by relatively increased slow wave sleep (SWS). A meta-regression showed moderate-severe TBI is associated with elevated SWS, reduced stage 2, and reduced sleep efficiency. In contrast, mild TBI was not associated with any significant alteration of sleep architecture. The present findings are consistent with the hypothesis that increased SWS after moderate-severe TBI reflects post-injury cortical reorganization and restructuring. Suggestions for future research are discussed, including adoption of common data elements in future studies to facilitate cross-study comparability, reliability, and replicability, thereby increasing the likelihood that meaningful sleep (and other) biomarkers of TBI will be identified. Copyright © 2018 Elsevier Ltd. All rights reserved.
Comments on the MIT Assessment of the Mars One Plan
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
The MIT assessment of the Mars One mission plan reveals design assumptions that would cause significant difficulties. Growing crops in the crew chamber produces excessive oxygen levels. The assumed in-situ resource utilization (ISRU) equipment has too low a Technology Readiness Level (TRL). The required spare parts cause a large and increasing launch mass logistics burden. The assumed International Space Station (ISS) Environmental Control and Life Support (ECLS) technologies were developed for microgravity and therefore are not suitable for Mars gravity. Growing food requires more mass than sending food from Earth. The large number of spares is due to the relatively low reliability of ECLS and the low TRL of ISRU. The Mars One habitat design is similar to past concepts but does not incorporate current knowledge. The MIT architecture analysis tool for long-term settlements on the Martian surface includes an ECLS system simulation, an ISRU sizing model, and an analysis of required spares. The MIT tool showed the need for separate crop and crew chambers, the large spare parts logistics, that crops require more mass than Earth food, and that more spares are needed if reliability is lower. That ISRU has low TRL and ISS ECLS was designed for microgravity are well known. Interestingly, the results produced by the architecture analysis tool - separate crop chamber, large spares mass, large crop chamber mass, and low reliability requiring more spares - were also well known. A common approach to ECLS architecture analysis is to build a complex model that is intended to be all-inclusive and is hoped will help solve all design problems. Such models can struggle to replicate obvious and well-known results and are often unable to answer unanticipated new questions. A better approach would be to survey the literature for background knowledge and then directly analyze the important problems.
A comparative analysis of loop heat pipe based thermal architectures for spacecraft thermal control
NASA Technical Reports Server (NTRS)
Pauken, Mike; Birur, Gaj
2004-01-01
Loop Heat Pipes (LHP) have gained acceptance as a viable means of heat transport in many spacecraft in recent years. However, applications using LHP technology tend to only remove waste heat from a single component to an external radiator. Removing heat from multiple components has been done by using multiple LHPs. This paper discusses the development and implementation of a Loop Heat Pipe based thermal architecture for spacecraft. In this architecture, a Loop Heat Pipe with multiple evaporators and condensers is described in which heat load sharing and thermal control of multiple components can be achieved. A key element in using a LHP thermal architecture is defining the need for such an architecture early in the spacecraft design process. This paper describes an example in which a LHP based thermal architecture can be used and how such a system can have advantages in weight, cost and reliability over other kinds of distributed thermal control systems. The example used in this paper focuses on a Mars Rover Thermal Architecture. However, the principles described here are applicable to Earth orbiting spacecraft as well.
Evaluation of reliability modeling tools for advanced fault tolerant systems
NASA Technical Reports Server (NTRS)
Baker, Robert; Scheper, Charlotte
1986-01-01
The Computer Aided Reliability Estimation (CARE III) and Automated Reliability Interactive Estimation System (ARIES 82) reliability tools were evaluated for application to advanced fault-tolerant aerospace systems. To determine reliability modeling requirements, the evaluation focused on the Draper Laboratories' Advanced Information Processing System (AIPS) architecture as an example architecture for fault-tolerant aerospace systems. Advantages and limitations were identified for each reliability evaluation tool. The CARE III program was designed primarily for analyzing ultrareliable flight control systems. The ARIES 82 program's primary use was to support university research and teaching. Neither CARE III nor ARIES 82 was suited for determining the reliability of complex nodal networks of the type used to interconnect processing sites in the AIPS architecture. It was concluded that ARIES was not suitable for modeling advanced fault-tolerant systems. It was further concluded that, subject to some limitations (difficulty in modeling systems with unpowered spare modules, systems where equipment maintenance must be considered, systems where failure depends on the sequence in which faults occurred, and systems where multiple faults beyond double near-coincident faults must be considered), CARE III is best suited for evaluating the reliability of advanced fault-tolerant systems for air transport.
SURE reliability analysis: Program and mathematics
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; White, Allan L.
1988-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The computational methods on which the program is based provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
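To make the death-state computation concrete, the sketch below solves the pure-Markov special case of a typical SURE-style model: a triplex processor with an illustrative component fault rate and reconfiguration rate, where a second fault arriving before recovery is a death state. This is only a minimal sketch with assumed rates; SURE itself bounds general semi-Markov models whose recovery times need not be exponential.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters (per hour), not from the report.
lam = 1e-4     # component fault rate
delta = 3600.0 # reconfiguration rate (~1 s mean recovery)
T = 10.0       # operating time, hours

# States: 0 = three good units, 1 = one latent fault awaiting recovery,
# 2 = reconfigured duplex, 3 = system failure (absorbing).
# The duplex is conservatively treated as failing on its next fault.
Q = np.array([
    [-3*lam,          3*lam,          0.0,    0.0  ],
    [0.0,    -(delta + 2*lam),        delta,  2*lam],
    [0.0,             0.0,           -2*lam,  2*lam],
    [0.0,             0.0,            0.0,    0.0  ],
])

P = expm(Q * T)  # state transition probabilities over the mission
print(f"P(system failure by T={T} h) = {P[0, 3]:.3e}")
```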
Godino-Llorente, J I; Gómez-Vilda, P
2004-02-01
It is well known that vocal and voice diseases do not necessarily cause perceptible changes in the acoustic voice signal. Acoustic analysis is a useful tool for diagnosing voice diseases and is a complementary technique to other methods based on direct observation of the vocal folds by laryngoscopy. In the present paper, two neural-network-based classification approaches applied to the automatic detection of voice disorders are studied. The structures studied are the multilayer perceptron and learning vector quantization, fed with short-term vectors calculated according to the well-known Mel Frequency Cepstral Coefficient (MFCC) parameterization. The paper shows that these architectures allow the detection of voice disorders--including glottic cancer--under highly reliable conditions. Within this context, the learning vector quantization methodology proved more reliable than the multilayer perceptron architecture, yielding 96% frame accuracy under similar working conditions.
Hybrid network defense model based on fuzzy evaluation.
Cho, Ying-Chiang; Pan, Jen-Yi
2014-01-01
With sustained and rapid developments in the field of information technology, the issue of network security has become increasingly prominent. The theme of this study is network data security, with the test subject being a classified and sensitive network laboratory that belongs to the academic network. The analysis is based on the deficiencies and potential risks of the network's existing defense technology, characteristics of cyber attacks, and network security technologies. Subsequently, a distributed network security architecture using the technology of an intrusion prevention system is designed and implemented. In this paper, first, the overall design approach is presented. This design is used as the basis to establish a network defense model, an improvement over the traditional single-technology model that addresses the latter's inadequacies. Next, a distributed network security architecture is implemented, comprising a hybrid firewall, intrusion detection, virtual honeynet projects, and connectivity and interactivity between these three components. Finally, the proposed security system is tested. A statistical analysis of the test results verifies the feasibility and reliability of the proposed architecture. The findings of this study will potentially provide new ideas and stimuli for future designs of network security architecture.
Architectural development of an advanced EVA Electronic System
NASA Technical Reports Server (NTRS)
Lavelle, Joseph
1992-01-01
An advanced electronic system for future EVA missions (including zero gravity, the lunar surface, and the surface of Mars) is under research and development within the Advanced Life Support Division at NASA Ames Research Center. As a first step in the development, an optimum system architecture has been derived from an analysis of the projected requirements for these missions. The open, modular architecture centers around a distributed multiprocessing concept where the major subsystems independently process their own I/O functions and communicate over a common bus. Supervision and coordination of the subsystems is handled by an embedded real-time operating system kernel employing multitasking software techniques. A discussion of how the architecture most efficiently meets the electronic system functional requirements, maximizes flexibility for future development and mission applications, and enhances the reliability and serviceability of the system in these remote, hostile environments is included.
Designing an architectural style for Pervasive Healthcare systems.
Rafe, Vahid; Hajvali, Masoumeh
2013-04-01
Nowadays, Pervasive Healthcare (PH) systems are considered an important research area. These systems have a dynamic structure and configuration; therefore, an appropriate method for designing such systems is necessary. The publish/subscribe architecture (pub/sub) is one of the convenient architectures to support such systems. PH systems are safety critical, so errors can bring disastrous results. To prevent such problems, a powerful analytical tool is required, and using a proper formal language like graph transformation systems for developing these systems seems necessary. But even if software engineers use such high-level methodologies, errors may occur in the system under design. Hence, it should be investigated automatically and formally whether the system model satisfies all of its requirements. In this paper, a dynamic architectural style for developing PH systems is presented. Then, the behavior of these systems is modeled and evaluated using the GROOVE toolset. The results of the analysis show its high reliability.
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia; Holmes, Bruce J.; Hahn, Andrew S.
2016-01-01
We report on an examination of potential benefits of infusing wireless technologies into various areas of aircraft and airspace operations. The analysis is done in support of a NASA seedling project, Efficient Reconfigurable Cockpit Design and Fleet Operations Using Software Intensive, Network Enabled Wireless Architecture (ECON). The study has two objectives. First, we investigate one of the main benefit hypotheses of the ECON proposal: that the replacement of wired technologies with wireless would lead to significant weight reductions on an aircraft, among other benefits. Second, we advance a list of wireless technology applications and discuss their system benefits. With regard to the primary hypothesis, we conclude that the promise of weight reduction is premature. Specificity of the system domain and aircraft, criticality of components, reliability of wireless technologies, the weight of replacement or augmentation equipment, and the cost of infusion must all be taken into account, among other considerations, to produce a reliable estimate of weight savings or increase.
ERIC Educational Resources Information Center
Inozu, Bahadir; Ayyub, Bilal A.
1999-01-01
Examines the current status of existing curricula, accreditation requirements, and new developments in Naval Architecture and Marine Engineering education in the United States. Discusses the emerging needs of the maritime industry in light of advances in information technology and movement toward risk-based, reliability-centered rule making in the…
Forecast analysis of optical waveguide bus performance
NASA Technical Reports Server (NTRS)
Ledesma, R.; Rourke, M. D.
1979-01-01
Elements to be considered in the design of a data bus include: architecture; data rate; modulation, encoding, detection; power distribution requirements; protocol, word structure; bus reliability, maintainability; interterminal transmission medium; cost; and others specific to application. Fiber-optic data bus considerations for a 32 port transmissive star architecture are discussed in a tutorial format. General optical-waveguide bus concepts are reviewed. The electrical and optical performance of a 32 port transmissive star bus and the effects of temperature on the performance of optical-waveguide buses are examined. A bibliography of pertinent references and the bus receiver test results are included.
Partitioning Strategy Using Static Analysis Techniques
NASA Astrophysics Data System (ADS)
Seo, Yongjin; Soo Kim, Hyeon
2016-08-01
Flight software is the software used in satellites' on-board computers. It has requirements such as real-time operation and reliability, and the IMA (Integrated Modular Avionics) architecture is used to satisfy them. The IMA architecture introduces the concept of partitions, and this affects the configuration of flight software: software that had previously been loaded on one system is now divided into many partitions when loaded. For this new issue, existing studies use experience-based partitioning methods. However, these methods have the problem that they cannot be reused. In this respect, this paper proposes a partitioning method that is reusable and consistent.
The SURE Reliability Analysis Program
NASA Technical Reports Server (NTRS)
Butler, R. W.
1986-01-01
The SURE program is a new reliability analysis tool for ultrareliable computer system architectures. The program is based on computational methods recently developed for the NASA Langley Research Center. These methods provide an efficient means for computing accurate upper and lower bounds for the death state probabilities of a large class of semi-Markov models. Once a semi-Markov model is described using a simple input language, the SURE program automatically computes the upper and lower bounds on the probability of system failure. A parameter of the model can be specified as a variable over a range of values directing the SURE program to perform a sensitivity analysis automatically. This feature, along with the speed of the program, makes it especially useful as a design tool.
Design of an integrated airframe/propulsion control system architecture
NASA Technical Reports Server (NTRS)
Cohen, Gerald C.; Lee, C. William; Strickland, Michael J.; Torkelson, Thomas C.
1990-01-01
The design of an integrated airframe/propulsion control system architecture is described. The design is based on a prevalidation methodology that uses both reliability and performance analyses. A detailed account is given of the testing associated with a subset of the architecture, and the paper concludes with general observations on applying the methodology to the architecture.
NASA Technical Reports Server (NTRS)
Poberezhskiy, Ilya; Chang, Daniel; Erlig, Hernan
2011-01-01
Non-Planar Ring Oscillator (NPRO) lasers are highly attractive for metrology applications, but NPRO reliability for prolonged space missions is limited by the reliability of the 808 nm pump diodes. A combined laser farm aging parameter allows different bias approaches to be compared, and Monte-Carlo software was developed to calculate the reliability of the laser pump architecture and perform parameter sensitivity studies. To meet the stringent Space Interferometry Mission (SIM) Lite lifetime reliability and output power requirements, we developed a single-mode Laser Pump Module architecture that (1) provides 2 W of power at 808 nm with >99.7% reliability for 5.5 years and (2) consists of 37 de-rated diode lasers operating at -5 C, with outputs combined in a very low loss 37x1 all-fiber coupler.
Architecture for fiber-optic sensors and actuators in aircraft propulsion systems
NASA Technical Reports Server (NTRS)
Glomb, W. L., Jr.
1990-01-01
This paper describes a design for fiber-optic sensing and control in advanced aircraft Electronic Engine Control (EEC). The recommended architecture is an on-engine EEC which contains electro-optic interface circuits for fiber-optic sensors. Size and weight are reduced by multiplexing arrays of functionally similar sensors on pairs of optical fibers to common electro-optical interfaces. The architecture contains interfaces to seven sensor groups. Nine distinct fiber-optic sensor types were found to provide the sensing functions. Analysis revealed no strong discriminator (except reliability of laser diodes and remote electronics) on which to base a selection of a preferred common interface type. A hardware test program is recommended to assess the relative maturity of the technologies and to determine real performance in the engine environment.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools. Researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component based system. It is used, for instance, to analyze the reliability of the state machines of real time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
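The component-to-Markov-chain mapping described here is in the spirit of the classic architecture-based (Cheung-style) reliability model. As a hedged illustration only, the sketch below invents a three-component tool with assumed per-visit reliabilities and control-transfer probabilities; none of the numbers or component names come from the paper.

```python
import numpy as np

# Hypothetical 3-component forensic tool: parse -> analyze -> report.
R = np.array([0.99, 0.97, 0.995])  # assumed per-visit component reliabilities
P = np.array([                     # assumed control-transfer probabilities
    [0.0, 1.0, 0.0],
    [0.2, 0.0, 0.8],               # 'analyze' loops back to 'parse' 20% of the time
    [0.0, 0.0, 0.0],               # 'report' is the exit component
])

M = np.diag(R) @ P                       # survive a visit, then transfer control
N = np.linalg.inv(np.eye(3) - M)         # sums probability over all path lengths
system_reliability = N[0, 2] * R[2]      # reach 'report' alive, then execute it
print(f"Estimated tool reliability: {system_reliability:.4f}")
```

Designers can re-run the calculation with alternative transition matrices to compare candidate topologies, which is the kind of trade the abstract describes.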
Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang
2017-05-02
Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components.
Towards automatic Markov reliability modeling of computer architectures
NASA Technical Reports Server (NTRS)
Liceaga, C. A.; Siewiorek, D. P.
1986-01-01
The analysis and evaluation of reliability measures using time-varying Markov models is required for Processor-Memory-Switch (PMS) structures that have competing processes such as standby redundancy and repair, or renewal processes such as transient or intermittent faults. The task of generating these models is tedious and prone to human error due to the large number of states and transitions involved in any reasonable system. Therefore model formulation is a major analysis bottleneck, and model verification is a major validation problem. The general unfamiliarity of computer architects with Markov modeling techniques further increases the necessity of automating the model formulation. This paper presents an overview of the Automated Reliability Modeling (ARM) program, under development at NASA Langley Research Center. ARM will accept as input a description of the PMS interconnection graph, the behavior of the PMS components, the fault-tolerant strategies, and the operational requirements. The output of ARM will be the reliability or availability Markov model formulated for direct use by evaluation programs. The advantages of such an approach are (a) utility to a large class of users, not necessarily expert in reliability analysis, and (b) a lower probability of human error in the computation.
NASA Technical Reports Server (NTRS)
Poberezhskiy, Ilya Y; Chang, Daniel H.; Erlig, Herman
2011-01-01
Optical metrology system reliability during a prolonged space mission is often limited by the reliability of pump laser diodes. We developed a metrology laser pump module architecture that meets NASA SIM Lite instrument optical power and reliability requirements by combining the outputs of multiple single-mode pump diodes in a low-loss, high port count fiber coupler. We describe Monte-Carlo simulations used to calculate the reliability of the laser pump module and introduce a combined laser farm aging parameter that serves as a load-sharing optimization metric. Employing these tools, we select pump module architecture, operating conditions, biasing approach and perform parameter sensitivity studies to investigate the robustness of the obtained solution.
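A minimal Monte-Carlo sketch of this kind of laser-farm reliability estimate follows. The diode count (37) and mission life (5.5 years) come from the abstract; the required-survivor threshold, the Weibull wear-out shape, and the de-rated mean life are invented placeholders, not the authors' model or values.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(0)

N_DIODES, K_REQUIRED = 37, 33   # K_REQUIRED: hypothetical count needed for 2 W
MISSION_YR, MTTF_YR = 5.5, 40.0 # assumed de-rated single-diode mean life
SHAPE = 2.0                     # Weibull shape > 1 mimics wear-out aging
N_TRIALS = 200_000

# Weibull mean = scale * Gamma(1 + 1/shape), so back out the scale from MTTF.
scale = MTTF_YR / gamma(1 + 1 / SHAPE)
lifetimes = scale * rng.weibull(SHAPE, size=(N_TRIALS, N_DIODES))

survivors = (lifetimes > MISSION_YR).sum(axis=1)
print(f"P(>= {K_REQUIRED} of {N_DIODES} diodes alive at {MISSION_YR} yr) "
      f"~ {(survivors >= K_REQUIRED).mean():.4f}")
```

Load-sharing and bias optimization, central to the paper, would enter through lifetime distributions that depend on each diode's drive current; the sketch keeps lifetimes independent for brevity.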
SANDS: an architecture for clinical decision support in a National Health Information Network.
Wright, Adam; Sittig, Dean F
2007-10-11
A new architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support) is introduced and its performance evaluated. The architecture provides a method for performing clinical decision support across a network, as in a health information exchange. Using the prototype we demonstrated that, first, a number of useful types of decision support can be carried out using our architecture; and, second, that the architecture exhibits desirable reliability and performance characteristics.
Advanced flight control system study
NASA Technical Reports Server (NTRS)
Mcgough, J.; Moses, K.; Klafin, J. F.
1982-01-01
The architecture, requirements, and system elements of an ultrareliable, advanced flight control system are described. The basic criteria are a functional reliability of 10^-10 failures per hour of flight and scheduled maintenance only every 6 months. A distributed system architecture is described, including a multiplexed communication system, a reliable bus controller, the use of skewed sensor arrays, and actuator interfaces. A test bed and flight evaluation program are proposed.
Modeling and Verification of Dependable Electronic Power System Architecture
NASA Astrophysics Data System (ADS)
Yuan, Ling; Fan, Ping; Zhang, Xiao-fang
The electronic power system can be viewed as a system composed of a set of concurrently interacting subsystems that generate, transmit, and distribute electric power. The complex interaction among subsystems makes the design of an electronic power system complicated. Furthermore, in order to guarantee the safe generation and distribution of electric power, fault tolerant mechanisms are incorporated in the system design to satisfy high reliability requirements. As a result, this incorporation makes the design of such systems even more complicated. We propose a dependable electronic power system architecture, which can provide a generic framework to guide the development of electronic power systems and ease development complexity. In order to provide common idioms and patterns to system designers, we formally model the electronic power system architecture using the PVS formal language. Based on the PVS model of this system architecture, we formally verify the fault tolerant properties of the system architecture using the PVS theorem prover, which can guarantee that the system architecture satisfies the high reliability requirements.
NASA Astrophysics Data System (ADS)
Chung, Pil Seung; Song, Wonyup; Biegler, Lorenz T.; Jhon, Myung S.
2017-05-01
During the operation of a hard disk drive (HDD), the perfluoropolyether (PFPE) lubricant experiences elastic or viscous shear/elongation deformations, which affect the performance and reliability of the HDD. Therefore, the viscoelastic responses of PFPE could provide a fingerprint analysis for designing the optimal molecular architecture of lubricants to control the tribological phenomena. In this paper, we examine the rheological responses of PFPEs, including the storage (elastic) and loss (viscous) moduli (G' and G″), by monitoring the time-dependent stress-strain relationship via non-equilibrium molecular dynamics simulations. We analyzed the rheological responses using the Cox-Merz rule and investigated the molecular structural and thermal effects on the solid-like and liquid-like behaviors of PFPEs. The temperature dependence of the endgroup agglomeration phenomena was examined, where the functional endgroups decouple as the temperature increases. By analyzing the relaxation processes, these molecular rheological studies will provide optimal lubricant selection criteria to enhance HDD performance and reliability for heat-assisted magnetic recording applications.
Silicon Nanophotonics for Many-Core On-Chip Networks
NASA Astrophysics Data System (ADS)
Mohamed, Moustafa
The number of cores in many-core architectures is scaling to unprecedented levels, requiring ever-increasing communication capacity. Traditionally, architects follow the path of higher throughput at the expense of latency, a trend that has become problematic for performance in many-core architectures. Moreover, power consumption increases with system scaling, mandating nontraditional solutions. Nanophotonics can address these problems, offering benefits on the three frontiers of many-core processor design: latency, bandwidth, and power. Nanophotonics leverages circuit-switching flow control, allowing low latency; in addition, the power consumption of optical links is significantly lower compared to their electrical counterparts for intermediate and long links. Finally, through wave division multiplexing, we can maintain high bandwidth without sacrificing throughput. This thesis focuses on realizing nanophotonics for communication in many-core architectures at different design levels, considering reliability challenges that our fabrication and measurements reveal. First, we study how to design on-chip networks for low latency, low power, and high bandwidth by exploiting the full potential of nanophotonics. The design process considers device-level limitations and capabilities on one hand, and system-level demands in terms of power and performance on the other. The design involves the choice of devices, the design of the optical link, the topology, the arbitration technique, and the routing mechanism. Next, we address the problem of reliability in on-chip networks. Reliability not only degrades performance but can block communication. Hence, we propose a reliability-aware design flow and present a reliability management technique based on this flow to address reliability in the system. In the proposed flow, reliability is modeled and analyzed at the device, architecture, and system levels. Our reliability management technique is superior to existing solutions in terms of power and performance. In fact, our solution can scale to a thousand cores with low overhead.
Advanced flight control system study
NASA Technical Reports Server (NTRS)
Hartmann, G. L.; Wall, J. E., Jr.; Rang, E. R.; Lee, H. P.; Schulte, R. W.; Ng, W. K.
1982-01-01
A fly by wire flight control system architecture designed for high reliability includes spare sensor and computer elements to permit safe dispatch with failed elements, thereby reducing unscheduled maintenance. A methodology capable of demonstrating that the architecture does achieve the predicted performance characteristics consists of a hierarchy of activities ranging from analytical calculations of system reliability and formal methods of software verification to iron bird testing followed by flight evaluation. Interfacing this architecture to the Lockheed S-3A aircraft for flight test is discussed. This testbed vehicle can be expanded to support flight experiments in advanced aerodynamics, electromechanical actuators, secondary power systems, flight management, new displays, and air traffic control concepts.
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.
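As a rough illustration of the kind of quantitative availability roll-up described here (not the Ground Operations Project's actual model, subsystems, or numbers), a steady-state series-availability estimate might look like the following sketch:

```python
from math import prod

# Hypothetical ground subsystems: (MTBF hours, MTTR hours), illustrative only.
subsystems = {
    "propellant_loading": (2000.0, 8.0),
    "crew_access_arm":    (5000.0, 4.0),
    "launch_control":     (10000.0, 2.0),
}

def steady_state_availability(mtbf, mttr):
    """Fraction of time a repairable subsystem is up: MTBF / (MTBF + MTTR)."""
    return mtbf / (mtbf + mttr)

A_each = {name: steady_state_availability(*v) for name, v in subsystems.items()}
A_launch = prod(A_each.values())  # all subsystems must be up (series logic)

for name, a in A_each.items():
    print(f"{name:20s} A = {a:.5f}")
print(f"series launch availability = {A_launch:.5f}")
```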
A new software-based architecture for quantum computer
NASA Astrophysics Data System (ADS)
Wu, Nan; Song, FangMin; Li, Xiangdong
2010-04-01
In this paper, we study a reliable architecture for a quantum computer and a new instruction set and machine language for that architecture, which can improve the performance and reduce the cost of quantum computing. We also try to address some key issues of software-driven universal quantum computers in detail.
Architecture for Survivable System Processing (ASSP)
NASA Astrophysics Data System (ADS)
Wood, Richard J.
1991-11-01
The Architecture for Survivable System Processing (ASSP) Program is a multi-phase effort to implement Department of Defense (DOD) and commercially developed high-tech hardware, software, and architectures for reliable space avionics and ground based systems. System configuration options provide processing capabilities to address Time Dependent Processing (TDP), Object Dependent Processing (ODP), and Mission Dependent Processing (MDP) requirements through Open System Architecture (OSA) alternatives that allow for the enhancement, incorporation, and capitalization of a broad range of development assets. High technology developments in hardware, software, and networking models address technology challenges of long processor life times, fault tolerance, reliability, throughput, memories, radiation hardening, size, weight, power (SWAP) and security. Hardware and software design, development, and implementation focus on the interconnectivity/interoperability of an open system architecture and are being developed to apply new technology to practical OSA components. To ensure a widely acceptable architecture capable of interfacing with various commercial and military components, this program provides for regular interactions with standardization working groups, e.g., the International Standards Organization (ISO), American National Standards Institute (ANSI), Society of Automotive Engineers (SAE), and Institute of Electrical and Electronic Engineers (IEEE). Selection of a viable open architecture is based on the widely accepted standards that implement the ISO/OSI Reference Model.
A Primer on Architectural Level Fault Tolerance
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
2008-01-01
This paper introduces the fundamental concepts of fault tolerant computing. Key topics covered are voting, fault detection, clock synchronization, Byzantine Agreement, diagnosis, and reliability analysis. Low level mechanisms such as Hamming codes or low level communications protocols are not covered. The paper is tutorial in nature and does not cover any topic in detail. The focus is on rationale and approach rather than detailed exposition.
Study of a unified hardware and software fault-tolerant architecture
NASA Technical Reports Server (NTRS)
Lala, Jaynarayan; Alger, Linda; Friend, Steven; Greeley, Gregory; Sacco, Stephen; Adams, Stuart
1989-01-01
A unified architectural concept, called the Fault Tolerant Processor Attached Processor (FTP-AP), that can tolerate hardware as well as software faults is proposed for applications requiring ultrareliable computation capability. An emulation of the FTP-AP architecture, consisting of a breadboard Motorola 68010-based quadruply redundant Fault Tolerant Processor, four VAX 750s as attached processors, and four versions of a transport aircraft yaw damper control law, is used as a testbed in the AIRLAB to examine a number of critical issues. Solutions of several basic problems associated with N-Version software are proposed and implemented on the testbed. This includes a confidence voter to resolve coincident errors in N-Version software. A reliability model of N-Version software that is based upon the recent understanding of software failure mechanisms is also developed. The basic FTP-AP architectural concept appears suitable for hosting N-Version application software while at the same time tolerating hardware failures. Architectural enhancements for greater efficiency, software reliability modeling, and N-Version issues that merit further research are identified.
An end-to-end communications architecture for condition-based maintenance applications
NASA Astrophysics Data System (ADS)
Kroculick, Joseph
2014-06-01
This paper explores challenges in implementing an end-to-end communications architecture for Condition-Based Maintenance Plus (CBM+) data transmission which aligns with the Army's Network Modernization Strategy. The Army's Network Modernization strategy is based on rolling out network capabilities which connect the smallest unit and Soldier level to enterprise systems. CBM+ is a continuous improvement initiative over the life cycle of a weapon system or equipment to improve the reliability and maintenance effectiveness of Department of Defense (DoD) systems. CBM+ depends on the collection, processing and transport of large volumes of data. An important capability that enables CBM+ is an end-to-end network architecture that enables data to be uploaded from the platform at the tactical level to enterprise data analysis tools. To connect end-to-end maintenance processes in the Army's supply chain, a CBM+ network capability can be developed from available network capabilities.
De Coninck, Kyra; Hambly, Karen; Dickinson, John W; Passfield, Louis
2018-06-01
Chronic lower back pain is still regarded as a poorly understood multifactorial condition. Recently, the thoracolumbar fascia complex has been found to be a contributing factor. Ultrasound imaging has shown that people with chronic lower back pain demonstrate both a significant decrease in shear strain and a 25% increase in thickness of the thoracolumbar fascia. There is sparse data on whether medical practitioners agree on the level of disorganisation in ultrasound images of thoracolumbar fascia. The purpose of this study was to establish the inter-rater reliability of rankings of architectural disorganisation of the thoracolumbar fascia on a scale from 'very disorganised' to 'very organised'. An exploratory analysis was performed using a fully crossed design of inter-rater reliability. Thirty observers were recruited, consisting of 21 medical doctors, 7 physiotherapists and 2 radiologists, with an average of 13.03 ± 9.6 years of clinical experience. All 30 observers independently rated the architectural disorganisation of the thoracolumbar fascia in 30 ultrasound scans, on a Likert-type scale with rankings from 1 = very disorganised to 10 = very organised. Internal consistency was assessed using Cronbach's alpha. Krippendorff's alpha was used to calculate the overall inter-rater reliability. Krippendorff's alpha was 0.61, indicating a modest degree of agreement between observers on the different morphologies of thoracolumbar fascia. Cronbach's alpha (0.98) indicated that there was a high degree of consistency between observers. Experience in ultrasound image analysis did not affect consistency between observers (Cronbach's alpha for experienced and inexperienced raters: 0.95 and 0.96, respectively). Medical practitioners agree on morphological features such as levels of organisation and disorganisation in ultrasound images of thoracolumbar fascia, regardless of experience. Further analysis by an expert panel is required to develop specific classification criteria for thoracolumbar fascia.
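For readers unfamiliar with the internal-consistency statistic used in this study, the sketch below computes Cronbach's alpha over a scans-by-raters matrix, treating raters as "items". The data are synthetic toy values, not the study's ratings.

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha; ratings is (n_items, n_raters), e.g. 30 scans x 30 observers."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                      # number of raters
    rater_vars = ratings.var(axis=0, ddof=1)  # variance of each rater's scores
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of per-scan totals
    return k / (k - 1) * (1 - rater_vars.sum() / total_var)

# Toy check: 30 scans rated 1-10 by 30 observers who share a common signal.
rng = np.random.default_rng(1)
true_disorg = rng.integers(1, 11, size=(30, 1))      # latent disorganisation level
noise = rng.integers(-1, 2, size=(30, 30))           # per-observer jitter of +/-1
scores = np.clip(true_disorg + noise, 1, 10)
print(f"Cronbach's alpha ~ {cronbach_alpha(scores):.2f}")
```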
Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong
2011-01-01
Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated the TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term (<1 hour apart) and long-term (>5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that reliability of global network metrics was overall low, threshold-sensitive and dependent on several factors: scanning time interval (TI, long-term > short-term), network membership (NM, networks excluding negative correlations > networks including negative correlations) and network type (NT, binarized networks > weighted networks). The dependence was modulated by another factor, node definition (ND) strategy. The local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the four factors mentioned above. Simulation analysis revealed that global network metrics were extremely sensitive (though to varying degrees) to noise in functional connectivity, and that weighted networks generated more reliable results than binarized networks. Nodal network metrics showed high resistance to noise in functional connectivity, and no NT-related differences were found in this resistance. These findings provide important implications on how to choose reliable analytical schemes and network metrics of interest. PMID:21818285
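The abstract does not name the TRT statistic used; the intraclass correlation coefficient is the usual choice for such studies, so the sketch below implements a generic ICC(2,1) under that assumption, applied to toy test-retest values of a network metric.

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    Y is (n_subjects, k_sessions), e.g. a graph metric at test and retest."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy data: 20 subjects' clustering coefficient at two sessions with mild noise.
rng = np.random.default_rng(2)
true_val = rng.normal(0.5, 0.05, size=(20, 1))
Y = true_val + rng.normal(0, 0.02, size=(20, 2))
print(f"ICC(2,1) ~ {icc_2_1(Y):.2f}")
```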
IR-drop analysis for validating power grids and standard cell architectures in sub-10nm node designs
NASA Astrophysics Data System (ADS)
Ban, Yongchan; Wang, Chenchen; Zeng, Jia; Kye, Jongwook
2017-03-01
Since chip performance and power are highly dependent on the operating voltage, a robust power distribution network (PDN) is of utmost importance to provide designs with a reliable supply voltage free of excessive voltage (IR) drop. However, the rapid increase of parasitic resistance and capacitance (RC) in interconnects makes IR drop much worse with technology scaling. This paper presents various IR-drop analyses in sub-10nm designs. The major objectives are to validate standard cell architectures, where different sizes of power/ground and metal tracks are validated, and to validate the PDN architecture, where types of power hook-up approaches are evaluated with IR-drop calculation. To estimate IR drop in 10nm-and-below technologies, we first prepare physically routed designs from standard cell libraries, where we use open RISC RTL, synthesize the CPU, and apply placement & routing with process-design kits (PDK). Then, static and dynamic IR-drop flows are set up with commercial tools. Using the IR-drop flow, we compare standard cell architectures and analyze impacts on performance, power, and area (PPA) relative to previous technology-node designs. With this IR-drop flow, we can optimize the best PDN structure against IR drop as well as the types of standard cell library.
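For intuition about static IR drop, a toy one-dimensional rail model (assumed segment resistance and per-cell current, not values from the paper) shows how the drop accumulates away from the power pad:

```python
import numpy as np

# Toy static IR-drop on a 1-D power rail: a pad drives N cells through
# uniform segment resistances; each cell sinks the same current.
N, R_SEG, I_CELL, VDD = 64, 0.02, 1e-4, 0.75  # assumed illustrative values

# The segment nearest the pad carries the current of all N downstream cells;
# each successive segment carries one cell's worth less.
seg_current = I_CELL * np.arange(N, 0, -1)
v_drop = np.cumsum(seg_current * R_SEG)  # drop accumulated up to each cell

print(f"worst-case IR drop: {v_drop[-1]*1e3:.2f} mV "
      f"({100 * v_drop[-1] / VDD:.2f}% of VDD)")
```

Real PDN analysis solves the full two-dimensional grid (a sparse nodal conductance system) with per-cell current waveforms, but the same cumulative-resistance effect drives the result.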
WindTalker: A P2P-Based Low-Latency Anonymous Communication Network
NASA Astrophysics Data System (ADS)
Zhang, Jia; Duan, Haixin; Liu, Wu; Wu, Jianping
Compared with traditional static anonymous communication networks, the P2P architecture can provide higher anonymity in communication. However, the P2P architecture also brings more challenges, such as routing, stability, trust and so on. In this paper, we present WindTalker, a P2P-based low-latency anonymous communication network. It is a pure decentralized mix network and can provide low-latency services that help users hide their real identity in communication. In order to ensure stability and reliability, WindTalker uses “seed nodes” to help a peer join the P2P network, and the peer nodes use a gossip-based protocol to exchange liveness information. Moreover, WindTalker uses layered encryption to ensure that the information of relayed messages cannot be leaked. In addition, since malicious nodes in the network are the major threat to the anonymity of P2P anonymous communication, WindTalker incorporates a trust mechanism that helps the P2P network exclude malicious nodes and optimize the strategies for peer discovery, tunnel construction, and relaying in anonymous communication. We deployed peer nodes of WindTalker in our campus network to test reliability and analyzed its anonymity in theory. The network measurements and simulation analysis show that WindTalker can provide low-latency and reliable anonymous communication services.
NASA Technical Reports Server (NTRS)
Tai, Ann T.; Chau, Savio N.; Alkalai, Leon
2000-01-01
Using COTS products, standards and intellectual properties (IPs) for all the system and component interfaces is a crucial step toward significant reduction of both system cost and development cost, as the COTS interfaces enable other COTS products and IPs to be readily accommodated by the target system architecture. With respect to long-term survivable systems for deep-space missions, the major challenge for us is, under stringent power and mass constraints, to achieve ultra-high reliability of a system comprising COTS products and standards that are not developed for mission-critical applications. The spirit of our solution is to exploit the pertinent standard features of a COTS product to circumvent its shortcomings, though these standard features may not be originally designed for highly reliable systems. In this paper, we discuss our experiences and findings on the design of an IEEE 1394 compliant fault-tolerant COTS-based bus architecture. We first derive and qualitatively analyze a "stack-tree topology" that not only complies with IEEE 1394 but also enables the implementation of a fault-tolerant bus architecture without node redundancy. We then present a quantitative evaluation that demonstrates significant reliability improvement from the COTS-based fault tolerance.
Scaling Impacts in Life Support Architecture and Technology Selection
NASA Technical Reports Server (NTRS)
Lange, Kevin
2016-01-01
For long-duration space missions outside of Earth orbit, reliability considerations will drive higher levels of redundancy and/or on-board spares for life support equipment. Component scaling will be a critical element in minimizing overall launch mass while maintaining an acceptable level of system reliability. Building on an earlier reliability study (AIAA 2012-3491), this paper considers the impact of alternative scaling approaches, including the design of technology assemblies and their individual components to maximum, nominal, survival, or other fractional requirements. The optimal level of life support system closure is evaluated for deep-space missions of varying duration using equivalent system mass (ESM) as the comparative basis. Reliability impacts are included in ESM by estimating the number of component spares required to meet a target system reliability. Common cause failures are included in the analysis. ISS and ISS-derived life support technologies are considered along with selected alternatives. This study focuses on minimizing launch mass, which may be enabling for deep-space missions.
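One common way to turn a target reliability into a spares count, consistent in spirit with the analysis described (though the paper's actual method and numbers are not given here), is a Poisson sparing calculation: carry the smallest number of spares such that the probability of exhausting them during the mission stays below the allowed risk.

```python
from math import exp

def spares_needed(failure_rate, mission_hours, target_conf, n_units=1):
    """Smallest spare count s with P(Poisson(rate*t*n) <= s) >= target_conf."""
    mean_failures = failure_rate * mission_hours * n_units
    term = exp(-mean_failures)  # P(0 failures)
    s, cdf = 0, term
    while cdf < target_conf:
        s += 1
        term *= mean_failures / s  # P(s) = P(s-1) * mean / s
        cdf += term
    return s

# Hypothetical component: 1 failure per 10,000 h, one-year deep-space leg,
# 99.9% confidence of not running out of spares.
print(spares_needed(1e-4, 8760, 0.999))  # -> 5 spares for this example
```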
A Mechanism for Reliable Mobility Management for Internet of Things Using CoAP
Chun, Seung-Man; Park, Jong-Tae
2017-01-01
Under unreliable constrained wireless networks for Internet of Things (IoT) environments, the loss of the signaling message may frequently occur. Mobile Internet Protocol version 6 (MIPv6) and its variants do not consider this situation. Consequently, as a constrained device moves around different wireless networks, its Internet Protocol (IP) connectivity may be frequently disrupted and power can be drained rapidly. This can result in the loss of important sensing data or a large delay for time-critical IoT services such as healthcare monitoring and disaster management. This paper presents a reliable mobility management mechanism in Internet of Things environments with lossy low-power constrained device and network characteristics. The idea is to use the Internet Engineering Task Force (IETF) Constrained Application Protocol (CoAP) retransmission mechanism to achieve both reliability and simplicity for reliable IoT mobility management. Detailed architecture, algorithms, and message extensions for reliable mobility management are presented. Finally, performance is evaluated using both mathematical analysis and simulation. PMID:28085109
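The CoAP retransmission behavior this mechanism builds on is the RFC 7252 confirmable-message timer: an initial timeout drawn between ACK_TIMEOUT and ACK_TIMEOUT x ACK_RANDOM_FACTOR that doubles after each retry, up to MAX_RETRANSMIT retransmissions. The sketch below reproduces that schedule only; the paper's mobility-signaling extensions are not modeled.

```python
import random

ACK_TIMEOUT = 2.0        # RFC 7252 default parameters
ACK_RANDOM_FACTOR = 1.5
MAX_RETRANSMIT = 4

def con_retransmit_schedule(rng=random.Random(0)):
    """Transmission instants (s) of one confirmable (CON) message if no ACK arrives."""
    timeout = rng.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
    t, times = 0.0, []
    for _ in range(MAX_RETRANSMIT + 1):  # initial send plus 4 retries
        times.append(t)
        t += timeout
        timeout *= 2                     # binary exponential backoff
    return times

print([f"{t:.1f}s" for t in con_retransmit_schedule()])
```

Riding mobility signaling on this existing timer is what lets the scheme stay simple on constrained devices: no new reliability machinery is added below the application layer.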
Investigation of an advanced fault tolerant integrated avionics system
NASA Technical Reports Server (NTRS)
Dunn, W. R.; Cottrell, D.; Flanders, J.; Javornik, A.; Rusovick, M.
1986-01-01
Presented is an advanced, fault-tolerant multiprocessor avionics architecture as could be employed in an advanced rotorcraft such as LHX. The processor structure is designed to interface with existing digital avionics systems and concepts including the Army Digital Avionics System (ADAS) cockpit/display system, navaid and communications suites, integrated sensing suite, and the Advanced Digital Optical Control System (ADOCS). The report defines mission, maintenance and safety-of-flight reliability goals as might be expected for an operational LHX aircraft. Based on use of a modular, compact (16-bit) microprocessor card family, results of a preliminary study examining simplex, dual and standby-sparing architectures are presented. Given the stated constraints, it is shown that the dual architecture is best suited to meet reliability goals with minimum hardware and software overhead. The report presents hardware and software design considerations for realizing the architecture including redundancy management requirements and techniques as well as verification and validation needs and methods.
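The three candidate structures compared in the study have textbook closed forms under a constant-failure-rate assumption. A sketch (the rate and mission time are invented and the standby model assumes an ideal switch, so these are not the report's LHX numbers):

```python
import math

def simplex(lam: float, t: float) -> float:
    return math.exp(-lam * t)

def dual_parallel(lam: float, t: float) -> float:
    r = simplex(lam, t)
    return 1.0 - (1.0 - r) ** 2          # either of two active units suffices

def standby_spare(lam: float, t: float) -> float:
    # Cold standby with an ideal (never-failing) switch: at most one failure tolerated.
    return math.exp(-lam * t) * (1.0 + lam * t)

lam, t = 1e-4, 1000.0                    # illustrative values, not from the report
for name, f in [("simplex", simplex), ("dual", dual_parallel), ("standby", standby_spare)]:
    print(f"{name:8s} {f(lam, t):.6f}")
```

The abstract's conclusion that the dual structure best meets the goals reflects trades like these combined with the hardware and software overhead of each option.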
Modeling, simulation, and high-autonomy control of a Martian oxygen production plant
NASA Technical Reports Server (NTRS)
Schooley, L. C.; Cellier, F. E.; Wang, F.-Y.; Zeigler, B. P.
1992-01-01
Progress on a project for the development of a high-autonomy intelligent command and control architecture for process plants used to produce oxygen from local planetary resources is reported. A distributed command and control architecture is being developed and implemented so that an oxygen production plant, or other equipment, can be reliably commanded and controlled over an extended time period in a high-autonomy mode with high-level task-oriented teleoperation from one or several remote locations. During the reporting period, progress was made at all levels of the architecture. At the remote site, several remote observers can now participate in monitoring the plant. At the local site, a command and control center was introduced for increased flexibility, reliability, and robustness. The local control architecture was enhanced to control multiple tubes in parallel and was refined for increased robustness. The simulation model was enhanced with full dynamics descriptions.
Achieving Reliable Communication in Dynamic Emergency Responses
Chipara, Octav; Plymoth, Anders N.; Liu, Fang; Huang, Ricky; Evans, Brian; Johansson, Per; Rao, Ramesh; Griswold, William G.
2011-01-01
Emergency responses require the coordination of first responders to assess the condition of victims, stabilize their condition, and transport them to hospitals based on the severity of their injuries. WIISARD is a system designed to facilitate the collection of medical information and its reliable dissemination during emergency responses. A key challenge in WIISARD is to deliver data with high reliability as first responders move and operate in a dynamic radio environment fraught with frequent network disconnections. The initial WIISARD system employed a client-server architecture and an ad-hoc routing protocol was used to exchange data. The system had low reliability when deployed during emergency drills. In this paper, we identify the underlying causes of unreliability and propose a novel peer-to-peer architecture that in combination with a gossip-based communication protocol achieves high reliability. Empirical studies show that compared to the initial WIISARD system, the redesigned system improves reliability by as much as 37% while reducing the number of transmitted packets by 23%. PMID:22195075
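A toy Monte Carlo model illustrates why gossip-style redundancy raises delivery reliability on lossy links: each record gets many independent chances to reach every peer. Every parameter below is invented, and the model ignores mobility, disconnection patterns, and WIISARD's actual protocol details:

```python
import random

def gossip_delivery_rate(nodes: int, rounds: int, fanout: int,
                         link_ok: float, trials: int, seed: int = 0) -> float:
    """Monte Carlo estimate of the fraction of nodes reached by gossip when each
    unicast transmission independently succeeds with probability link_ok."""
    rng = random.Random(seed)
    reached_total = 0
    for _ in range(trials):
        informed = {0}                            # node 0 originates the record
        for _ in range(rounds):
            new = set()
            for n in informed:
                for _ in range(fanout):
                    peer = rng.randrange(nodes)
                    if peer != n and rng.random() < link_ok:
                        new.add(peer)
            informed |= new
        reached_total += len(informed)
    return reached_total / (trials * nodes)

# Illustrative parameters only.
print(gossip_delivery_rate(nodes=30, rounds=6, fanout=2, link_ok=0.6, trials=200))
```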
Research and application of embedded real-time operating system
NASA Astrophysics Data System (ADS)
Zhang, Bo
2013-03-01
In this paper, based on an analysis of existing embedded real-time operating systems, the architecture of an operating system is designed and implemented. The experimental results show that the design fully complies with the requirements of an embedded real-time operating system and achieves the goals of reducing the complexity of embedded software design while improving maintainability, reliability, and flexibility. The design therefore has high practical value.
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, within a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used to calculate failure rates for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to assess compliance with requirements and to highlight design or performance shortcomings for further decision making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observations based on analysis leading to the Ground Operations Project Preliminary Design Review milestone.
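The arithmetic behind such an availability verification is compact: estimate each subsystem's steady-state availability from failure and repair data, then combine the launch-critical chain in series. The subsystem names and numbers here are hypothetical placeholders, not Constellation data:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability of one subsystem."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical ground subsystems (names and numbers are made up).
subsystems = {
    "propellant_loading": availability(2000.0, 8.0),
    "environmental_ctrl": availability(5000.0, 4.0),
    "power_distribution": availability(10000.0, 2.0),
}

launch_availability = 1.0
for name, a in subsystems.items():
    print(f"{name:20s} A = {a:.5f}")
    launch_availability *= a            # series logic: all must be up to launch

print(f"{'combined':20s} A = {launch_availability:.5f}")
```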
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taft, Jeffrey D.
The report describes work done on Grid Architecture under the auspices of the Department of Energy Office of Electricity Delivery and Energy Reliability in 2015. As described in the first Grid Architecture report, the primary purpose of this work is to provide stakeholder insight about grid issues so as to enable superior decision making on their part. Doing this requires the creation of various work products, including oft-times complex diagrams, analyses, and explanations. This report provides architectural insights into several important grid topics and also describes work done to advance the science of Grid Architecture as well.
Halim, Isa; Arep, Hambali; Kamat, Seri Rahayu; Abdullah, Rohana; Omar, Abdul Rahman; Ismail, Ahmad Rasdan
2014-06-01
Prolonged standing has been hypothesized as a vital contributor to discomfort and muscle fatigue in the workplace. The objective of this study was to develop a decision support system that could provide systematic analysis and solutions to minimize the discomfort and muscle fatigue associated with prolonged standing. The integration of object-oriented programming and a Model Oriented Simultaneous Engineering System were used to design the architecture of the decision support system. Validation of the decision support system was carried out in two manufacturing companies. The validation process showed that the decision support system produced reliable results. The decision support system is a reliable advisory tool for providing analysis and solutions to problems related to the discomfort and muscle fatigue associated with prolonged standing. Further testing of the decision support system is suggested before it is used commercially.
Quantitative architectural analysis: a new approach to cortical mapping.
Schleicher, A; Palomero-Gallagher, N; Morosan, P; Eickhoff, S B; Kowalski, T; de Vos, K; Amunts, K; Zilles, K
2005-12-01
Recent progress in anatomical and functional MRI has revived the demand for a reliable, topographic map of the human cerebral cortex. To date, interpretations of specific activations found in functional imaging studies and their topographical analysis in a spatial reference system are often still based on classical architectonic maps. The most commonly used reference atlas is that of Brodmann and his successors, despite its severe inherent drawbacks. One obvious weakness of traditional architectural mapping is the subjective nature of localising borders between cortical areas by means of a purely visual, microscopical examination of histological specimens. To overcome this limitation, more objective, quantitative mapping procedures have been established in recent years. The quantification of the neocortical laminar pattern by defining intensity line profiles across the cortical layers has a long tradition. In recent years, this method has been extended to enable reliable, reproducible mapping of the cortex based on image analysis and multivariate statistics. Methodological approaches to such algorithm-based cortical mapping have been published for various architectural modalities. In our contribution, principles of algorithm-based mapping are described for cyto- and receptorarchitecture. In a cytoarchitectural parcellation of the human auditory cortex using a sliding window procedure, the classical areal pattern of the human superior temporal gyrus was revised by replacing Brodmann's areas 41, 42, and 22, and parts of area 21, with a novel, more detailed map. An extension and optimisation of the sliding window procedure to the specific requirements of receptorarchitectonic mapping is also described, using the macaque central sulcus and adjacent superior parietal lobule as a second, biologically independent example. Algorithm-based mapping procedures, however, are not limited to these two architectural modalities; they can be applied to all images in which a laminar cortical pattern can be detected and quantified, e.g. myeloarchitectonic and in vivo high-resolution MR imaging. Defining cortical borders based on changes in cortical lamination in high-resolution, in vivo structural MR images will result in a rapid increase in our knowledge of the structural parcellation of the human cerebral cortex.
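The sliding-window idea can be compressed into a short sketch: represent each cortical profile by a feature vector, slide a candidate border along the traverse, and score it by the Mahalanobis distance between the blocks of profiles on either side. The synthetic data, block size, and regularization term below are illustrative choices, not the authors' parameters:

```python
import numpy as np

def border_distance(profiles: np.ndarray, i: int, block: int) -> float:
    """Mahalanobis distance between the feature-vector blocks on either side of
    candidate border position i (profiles: n_positions x n_features)."""
    a = profiles[i - block:i]
    b = profiles[i:i + block]
    diff = a.mean(axis=0) - b.mean(axis=0)
    pooled = 0.5 * (np.cov(a, rowvar=False) + np.cov(b, rowvar=False))
    pooled += 1e-6 * np.eye(pooled.shape[0])     # regularize for numerical stability
    return float(np.sqrt(diff @ np.linalg.solve(pooled, diff)))

# Synthetic example: 200 traverse positions, 5 profile features,
# with a change in laminar pattern at position 120.
rng = np.random.default_rng(0)
profiles = rng.normal(0.0, 1.0, size=(200, 5))
profiles[120:] += np.array([1.5, 0.0, -1.0, 0.5, 0.0])

block = 12
scores = [border_distance(profiles, i, block) for i in range(block, 200 - block)]
print("detected border near position:", block + int(np.argmax(scores)))
```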
NASA Technical Reports Server (NTRS)
Harper, R. E.; Alger, L. S.; Babikyan, C. A.; Butler, B. P.; Friend, S. A.; Ganska, R. J.; Lala, J. H.; Masotto, T. K.; Meyer, A. J.; Morton, D. P.
1992-01-01
Digital computing systems needed for Army programs such as the Computer-Aided Low Altitude Helicopter Flight Program and the Armored Systems Modernization (ASM) vehicles may be characterized by high computational throughput and input/output bandwidth, hard real-time response, high reliability and availability, and maintainability, testability, and producibility requirements. In addition, such a system should be affordable to produce, procure, maintain, and upgrade. To address these needs, the Army Fault Tolerant Architecture (AFTA) is being designed and constructed under a three-year program comprised of a conceptual study, detailed design and fabrication, and demonstration and validation phases. Described here are the results of the conceptual study phase of the AFTA development. Given here is an introduction to the AFTA program, its objectives, and key elements of its technical approach. A format is designed for representing mission requirements in a manner suitable for first order AFTA sizing and analysis, followed by a discussion of the current state of mission requirements acquisition for the targeted Army missions. An overview is given of AFTA's architectural theory of operation.
Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput Field Phenotyping
Shafiekhani, Ali; Kadam, Suhas; Fritschi, Felix B.; DeSouza, Guilherme N.
2017-01-01
In this paper, a new robotic architecture for plant phenotyping is introduced. The architecture consists of two robotic platforms: an autonomous ground vehicle (Vinobot) and a mobile observation tower (Vinoculer). The ground vehicle collects data from individual plants, while the observation tower oversees an entire field, identifying specific plants for further inspection by the Vinobot. The advantage of this architecture is threefold: first, it allows the system to inspect large areas of a field at any time, during the day and night, while identifying specific regions affected by biotic and/or abiotic stresses; second, it provides high-throughput plant phenotyping in the field by either comprehensive or selective acquisition of accurate and detailed data from groups or individual plants; and third, it eliminates the need for expensive and cumbersome aerial vehicles or similarly expensive and confined field platforms. Preliminary results from our algorithms for data collection and 3D image processing, together with data analysis and comparison against hand-collected phenotype data, demonstrate that the proposed architecture is cost effective, reliable, versatile, and extendable. PMID:28124976
NASA Technical Reports Server (NTRS)
Evans, Richard K.; Hill, Gerald M.
2014-01-01
Very large space environment test facilities present unique engineering challenges in the design of facility data systems. Data systems of this scale must be versatile enough to meet the wide range of data acquisition and measurement requirements from a diverse set of customers and test programs, but also must minimize design changes to maintain reliability and serviceability. This paper presents an overview of the common architecture and capabilities of the facility data acquisition systems available at two of the world's largest space environment test facilities located at the NASA Glenn Research Center's Plum Brook Station in Sandusky, Ohio; namely, the Space Propulsion Research Facility (commonly known as the B-2 facility) and the Space Power Facility (SPF). The common architecture of the data systems is presented along with details on system scalability and efficient measurement systems analysis and verification. The architecture highlights a modular design, which utilizes fully-remotely managed components, enabling the data systems to be highly configurable and support multiple test locations with a wide-range of measurement types and very large system channel counts.
NASA Technical Reports Server (NTRS)
Nauda, A.
1982-01-01
Performance and reliability models of alternate microcomputer architectures as a methodology for optimizing system design were examined. A methodology for selecting an optimum microcomputer architecture for autonomous operation of planetary spacecraft power systems was developed. Various microcomputer system architectures are analyzed to determine their application to spacecraft power systems. It is suggested that no standardization formula or common set of guidelines exists which provides an optimum configuration for a given set of specifications.
NASA Technical Reports Server (NTRS)
Thomas, Dale; Smith, Charles; Thomas, Leann; Kittredge, Sheryl
2002-01-01
The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd-generation system by 2 orders of magnitude - equivalent to a crew risk of 1-in-10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturating of key technologies specifically identified to improve safety and reliability, while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. To best direct technology development decisions, analytical models are employed to accurately predict the benefits of each technology toward potential space transportation architectures as well as the risks associated with each technology. Rigorous systems analysis provides the foundation for assessing progress toward safety and cost goals. The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities. Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.
Automation Hooks Architecture Trade Study for Flexible Test Orchestration
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Maclean, John R.; Graffagnino, Frank J.; McCartney, Patrick A.
2010-01-01
We describe the conclusions of a technology and communities survey, supported by concurrent and follow-on proof-of-concept prototyping, to evaluate the feasibility of defining a durable, versatile, reliable, visible software interface to support strategic modularization of test software development. The objective is that test sets and support software with diverse origins, ages, and abilities can be reliably integrated into test configurations that assemble, tear down, and reassemble with scalable complexity in order to conduct both parametric tests and monitored trial runs. The resulting approach is based on the integration of three recognized technologies that are currently gaining acceptance within the test industry and that, when combined, provide a simple, open, and scalable test orchestration architecture addressing the objectives of the Automation Hooks task. The technologies are automated discovery using multicast DNS Zero Configuration Networking (zeroconf), commanding and data retrieval using resource-oriented RESTful Web Services, and XML data transfer formats based on Automatic Test Markup Language (ATML). This open-source standards-based approach provides direct integration with existing commercial off-the-shelf (COTS) analysis software tools.
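Resource-oriented commanding of this kind is easy to picture in code. In the sketch below the host, resource paths, and payload fields are all hypothetical, and the common third-party `requests` library stands in for whatever HTTP stack a real test set would use:

```python
import requests  # third-party; pip install requests

BASE = "http://test-set.local:8080"   # host, port, and resource paths are hypothetical

def read_measurement(channel: str) -> dict:
    """GET the current state of a measurement resource (e.g. an ATML-style payload)."""
    resp = requests.get(f"{BASE}/channels/{channel}", timeout=5.0)
    resp.raise_for_status()
    return resp.json()

def command(resource: str, settings: dict) -> None:
    """POST new settings to a commandable resource."""
    resp = requests.post(f"{BASE}/{resource}", json=settings, timeout=5.0)
    resp.raise_for_status()

command("signal-generator", {"frequency_hz": 1.0e6, "amplitude_dbm": -10})
print(read_measurement("ch0"))
```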
Verma, Vikash; Mallik, Leena; Hariadi, Rizal F.; Sivaramakrishnan, Sivaraj; Skiniotis, Georgios; Joglekar, Ajit P.
2015-01-01
DNA origami provides a versatile platform for conducting ‘architecture-function’ analysis to determine how the nanoscale organization of multiple copies of a protein component within a multi-protein machine affects its overall function. Such analysis requires that the copy number of protein molecules bound to the origami scaffold exactly matches the desired number, and that it is uniform over an entire scaffold population. This requirement is challenging to satisfy for origami scaffolds with many protein hybridization sites, because it requires the successful completion of multiple, independent hybridization reactions. Here, we show that a cleavable dimerization domain on the hybridizing protein can be used to multiplex hybridization reactions on an origami scaffold. This strategy yields nearly 100% hybridization efficiency on a 6-site scaffold even when using low protein concentration and short incubation time. It can also be developed further to enable reliable patterning of a large number of molecules on DNA origami for architecture-function analysis. PMID:26348722
Reliability Modeling of Double Beam Bridge Crane
NASA Astrophysics Data System (ADS)
Han, Zhu; Tong, Yifei; Luan, Jiahui; Xiangdong, Li
2018-05-01
This paper briefly describes the structure of the double beam bridge crane and defines its basic parameters. According to the structure and system division of the double beam bridge crane, a reliability architecture for the crane system is proposed, and the corresponding reliability mathematical model is constructed.
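Reliability models of this sort are typically built from series and parallel combinations of subsystem reliabilities. A generic sketch, with a hypothetical decomposition and numbers not taken from the paper:

```python
def series(*rs: float) -> float:
    """All subsystems must work."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs: float) -> float:
    """At least one redundant subsystem must work."""
    out = 1.0
    for r in rs:
        out *= (1.0 - r)
    return 1.0 - out

# Hypothetical subsystem reliabilities for one duty period.
bridge, trolley, hoist_a, hoist_b, controls = 0.999, 0.995, 0.99, 0.99, 0.998
print(series(bridge, trolley, parallel(hoist_a, hoist_b), controls))
```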
Software architecture of INO340 telescope control system
NASA Astrophysics Data System (ADS)
Ravanmehr, Reza; Khosroshahi, Habib
2016-08-01
The software architecture plays an important role in the distributed control system of astronomical projects because many subsystems and components must work together in a consistent and reliable way. We have utilized a customized architecture design approach based on the "4+1 view model" in order to design the INOCS software architecture. In this paper, after reviewing the top-level INOCS architecture, we present the software architecture model of INOCS inspired by the "4+1 view model"; for this purpose we provide logical, process, development, physical, and scenario views of our architecture using different UML diagrams and other illustrative visual charts. Each view presents the INOCS software architecture from a different perspective. We finish the paper with the science data operation of INO340 and concluding remarks.
Marinozzi, Franco; Marinozzi, Andrea; Bini, Fabiano; Zuppante, Francesca; Pecci, Raffaella; Bedini, Rossella
2012-01-01
Morphometric and architectural bone parameters change in diseases such as osteoarthritis and osteoporosis. The mechanical strength of bone is primarily influenced by bone quantity and quality. Bone quality is defined by parameters such as trabecular thickness, trabecular separation, trabecular density and degree of anisotropy that describe the micro-architectural structure of bone. Recently, many studies have validated microtomography as a valuable investigative technique for assessing bone morphometry, thanks to the non-destructive, non-invasive and reliable nature of micro-CT in comparison to traditional techniques such as histology. The aim of this study is the analysis, by micro-computed tomography, of six specimens extracted from patients affected by osteoarthritis and osteoporosis, in order to observe their three-dimensional structure and calculate several morphometric parameters.
1988 IEEE Aerospace Applications Conference, Park City, UT, Feb. 7-12, 1988, Digest
NASA Astrophysics Data System (ADS)
The conference presents papers on microwave applications, data and signal processing applications, related aerospace applications, and advanced microelectronic products for the aerospace industry. Topics include a high-performance antenna measurement system, microwave power beaming from earth to space, the digital enhancement of microwave component performance, and a GaAs vector processor based on parallel RISC microprocessors. Consideration is also given to unique techniques for reliable SBNR architectures, a linear analysis subsystem for CSSL-IV, and a structured singular value approach to missile autopilot analysis.
1990-01-25
Ada/Xt Architecture: Design Report. Informal Technical Data, CDRL 01000, document type A005. Author: Kurt Wallnau. Prepared under the STARS (Software Technology for Adaptable, Reliable Systems) Prime contract F19628-88-D-0031, Process Environment Integration task (UR20).
Semi-Markov adjunction to the Computer-Aided Markov Evaluator (CAME)
NASA Technical Reports Server (NTRS)
Rosch, Gene; Hutchins, Monica A.; Leong, Frank J.; Babcock, Philip S., IV
1988-01-01
The rule-based Computer-Aided Markov Evaluator (CAME) program was expanded in its ability to incorporate the effect of fault-handling processes into the construction of a reliability model. The fault-handling processes are modeled as semi-Markov events, and CAME constructs an appropriate semi-Markov model. To solve the model, the program outputs it in a form which can be directly solved with the Semi-Markov Unreliability Range Evaluator (SURE) program. As a means of evaluating the alterations made to the CAME program, the program is used to model the reliability of portions of the Integrated Airframe/Propulsion Control System Architecture (IAPSA 2) reference configuration. The reliability predictions are compared with a previous analysis. The results bear out the feasibility of utilizing CAME to generate appropriate semi-Markov models to model fault-handling processes.
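To make the modeling target concrete, here is a minimal fault-handling reliability model of a triplex system with fault rate lambda and recovery rate mu, integrated numerically. Note this is a plain Markov (exponential holding time) approximation rather than CAME's semi-Markov treatment, and every number is illustrative:

```python
import numpy as np

lam, mu = 1e-4, 360.0     # per-hour fault rate and recovery rate (illustrative)
t_end, dt = 10.0, 1e-4    # 10-hour flight, forward-Euler step

# States: 0 = 3 good, 1 = fault awaiting recovery, 2 = recovered duplex, 3 = failed.
Q = np.array([
    [-3*lam,         3*lam,  0.0,    0.0  ],
    [ 0.0,  -(mu + 2*lam),   mu,     2*lam],   # a second near-coincident fault is fatal
    [ 0.0,           0.0,   -2*lam,  2*lam],   # pessimistic duplex: any fault is fatal
    [ 0.0,           0.0,    0.0,    0.0  ],   # absorbing failure state
])

p = np.array([1.0, 0.0, 0.0, 0.0])             # start with all three units good
for _ in range(int(t_end / dt)):
    p = p + dt * (p @ Q)                       # integrate dp/dt = p Q

print(f"P(system failed by t={t_end} h) = {p[3]:.3e}")
```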
Space Flight Middleware: Remote AMS over DTN for Delay-Tolerant Messaging
NASA Technical Reports Server (NTRS)
Burleigh, Scott
2011-01-01
This paper describes a technique for implementing scalable, reliable, multi-source multipoint data distribution in space flight communications -- Delay-Tolerant Reliable Multicast (DTRM) -- that is fully supported by the "Remote AMS" (RAMS) protocol of the Asynchronous Message Service (AMS) proposed for standardization within the Consultative Committee for Space Data Systems (CCSDS). The DTRM architecture enables applications to easily "publish" messages that will be reliably and efficiently delivered to an arbitrary number of "subscribing" applications residing anywhere in the space network, whether in the same subnet or in a subnet on a remote planet or vehicle separated by many light minutes of interplanetary space. The architecture comprises multiple levels of protocol, each included for a specific purpose and allocated specific responsibilities: "application AMS" traffic performs end-system data introduction and delivery subject to access control; underlying "remote AMS" directs this application traffic to populations of recipients at remote locations in a multicast distribution tree, enabling the architecture to scale up to large networks; further underlying Delay-Tolerant Networking (DTN) Bundle Protocol (BP) advances RAMS protocol data units through the distribution tree using delay-tolerant store-and-forward methods; and further underlying reliable "convergence-layer" protocols ensure successful data transfer over each segment of the end-to-end route. The result is scalable, reliable, delay-tolerant multi-source multicast that is largely self-configuring.
Fang, Yuling; Chen, Qingkui; Xiong, Neal N; Zhao, Deyu; Wang, Jingjuan
2017-08-04
This paper aims to develop a low-cost, high-performance and high-reliability computing system to process large-scale data using common data mining algorithms in the Internet of Things (IoT) computing environment. Considering the characteristics of IoT data processing, similar to mainstream high performance computing, we use a GPU (Graphics Processing Unit) cluster to achieve better IoT services. Firstly, we present an energy consumption calculation method (ECCM) based on WSNs. Then, using the CUDA (Compute Unified Device Architecture) Programming model, we propose a Two-level Parallel Optimization Model (TLPOM) which exploits reasonable resource planning and common compiler optimization techniques to obtain the best blocks and threads configuration considering the resource constraints of each node. The key to this part is dynamic coupling Thread-Level Parallelism (TLP) and Instruction-Level Parallelism (ILP) to improve the performance of the algorithms without additional energy consumption. Finally, combining the ECCM and the TLPOM, we use the Reliable GPU Cluster Architecture (RGCA) to obtain a high-reliability computing system considering the nodes' diversity, algorithm characteristics, etc. The results show that the performance of the algorithms significantly increased by 34.1%, 33.96% and 24.07% for Fermi, Kepler and Maxwell on average with TLPOM and the RGCA ensures that our IoT computing system provides low-cost and high-reliability services.
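The blocks/threads search at the heart of such a model can be caricatured as a resource-constrained occupancy maximization. The per-SM limits and candidate block sizes below are illustrative placeholders (the real limits differ across the Fermi, Kepler, and Maxwell parts the paper evaluates), and real tuning also weighs instruction-level parallelism, as the paper stresses:

```python
# Illustrative per-SM limits; not taken from the paper, and GPU-generation dependent.
MAX_THREADS_PER_SM = 2048
MAX_BLOCKS_PER_SM = 16
REGISTERS_PER_SM = 65536
SHARED_MEM_PER_SM = 49152   # bytes

def blocks_resident(threads_per_block, regs_per_thread, smem_per_block):
    """How many blocks of this shape fit on one SM, given its resource limits."""
    by_threads = MAX_THREADS_PER_SM // threads_per_block
    by_regs = REGISTERS_PER_SM // (regs_per_thread * threads_per_block)
    by_smem = SHARED_MEM_PER_SM // smem_per_block if smem_per_block else MAX_BLOCKS_PER_SM
    return min(MAX_BLOCKS_PER_SM, by_threads, by_regs, by_smem)

def best_block_size(regs_per_thread, smem_per_block):
    """Pick the block size maximizing resident threads (a crude occupancy proxy)."""
    candidates = [64, 128, 256, 512, 1024]
    return max(candidates,
               key=lambda tpb: blocks_resident(tpb, regs_per_thread, smem_per_block) * tpb)

print(best_block_size(regs_per_thread=32, smem_per_block=4096))
```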
DOE Office of Scientific and Technical Information (OSTI.GOV)
March-Leuba, JA
2002-01-15
This report describes the tasks performed and the progress made during Phase 2 of the DOE-NERI project number 99-119, entitled Automatic Development of Highly Reliable Control Architecture for Future Nuclear Power Plants. This project is a collaborative effort among the Oak Ridge National Laboratory (ORNL), the University of Tennessee, Knoxville (UTK), and the North Carolina State University (NCSU). ORNL is the lead organization and is responsible for the coordination and integration of all work.
SABRE: a bio-inspired fault-tolerant electronic architecture.
Bremner, P; Liu, Y; Samie, M; Dragffy, G; Pipe, A G; Tempesti, G; Timmis, J; Tyrrell, A M
2013-03-01
As electronic devices become increasingly complex, ensuring their reliable, fault-free operation is becoming correspondingly more challenging. It can be observed that, in spite of their complexity, biological systems are highly reliable and fault tolerant. Hence, we are motivated to take inspiration from biological systems in the design of electronic ones. In SABRE (self-healing cellular architectures for biologically inspired highly reliable electronic systems), we have designed a bio-inspired fault-tolerant hierarchical architecture for this purpose. As in biology, the foundation for the whole system is cellular in nature, with each cell able to detect faults in its operation and trigger intra-cellular or extra-cellular repair as required. At the next level in the hierarchy, arrays of cells are configured and controlled as function units in a transport triggered architecture (TTA), which is able to perform partial-dynamic reconfiguration to rectify problems that cannot be solved at the cellular level. Each TTA is, in turn, part of a larger multi-processor system which employs coarser-grain reconfiguration to tolerate faults that cause a processor to fail. In this paper, we describe the details of operation of each layer of the SABRE hierarchy, and how these layers interact to provide a high systemic level of fault tolerance.
A Survey on Next-generation Power Grid Data Architecture
DOE Office of Scientific and Technical Information (OSTI.GOV)
You, Shutang; Zhu, Dr. Lin; Liu, Yong
2015-01-01
The operation and control of power grids will increasingly rely on data. A high-speed, reliable, flexible and secure data architecture is the prerequisite of the next-generation power grid. This paper summarizes the challenges in collecting and utilizing power grid data, and then provides a reference data architecture for future power grids. Based on the data architecture deployment, related research on data architecture is reviewed and summarized in several categories including data measurement/actuation, data transmission, data service layer, and data utilization, as well as two cross-cutting issues, interoperability and cyber security. Research gaps and future work are also presented.
Special Issue on a Fault Tolerant Network on Chip Architecture
NASA Astrophysics Data System (ADS)
Janidarmian, Majid; Tinati, Melika; Khademzadeh, Ahmad; Ghavibazou, Maryam; Fekr, Atena Roshan
2010-06-01
In this paper, a fast and efficient spare-switch selection algorithm is presented for FERNA, a reliable NoC architecture based on a specific application mapped onto a mesh topology. Based on the ring concept used in FERNA, this algorithm achieves results equivalent to those of an exhaustive algorithm with much less run time while improving two parameters. The inputs of the FERNA algorithm for minimizing system response time and extra communication cost are derived from transaction-level simulation using SystemC TLM and from mathematical formulation, respectively. The results demonstrate that improving these parameters advances whole-system reliability, which is calculated analytically. The mapping algorithm has also been investigated as a factor affecting extra bandwidth requirements and system reliability.
Hybrid Power Management-Based Vehicle Architecture
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2011-01-01
Hybrid Power Management (HPM) is the integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications (see figure). The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, that provides all power to a common energy storage system that is used to power the drive motors and vehicle accessory systems. This architecture also provides power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. The key element of HPM is the energy storage system. All generated power is sent to the energy storage system, and all loads derive their power from that system. This can significantly reduce the power requirement of the primary power source, while increasing the vehicle reliability. Ultracapacitors are ideal for an HPM-based energy storage system due to their exceptionally long cycle life, high reliability, high efficiency, high power density, and excellent low-temperature performance. Multiple power sources and multiple loads are easily incorporated into an HPM-based vehicle. A gas turbine is a good primary power source because of its high efficiency, high power density, long life, high reliability, and ability to operate on a wide range of fuels. An HPM controller maintains optimal control over each vehicle component. This flexible operating system can be applied to all vehicles to considerably improve vehicle efficiency, reliability, safety, security, and performance. The HPM-based vehicle architecture has many advantages over conventional vehicle architectures. Ultracapacitors have a much longer cycle life than batteries, which greatly improves system reliability, reduces life-of-system costs, and reduces environmental impact as ultracapacitors will probably never need to be replaced and disposed of. The environmentally safe ultracapacitor components reduce disposal concerns, and their recyclable nature reduces the environmental impact. High ultracapacitor power density provides high power during surges, and the ability to absorb high power during recharging. Ultracapacitors are extremely efficient in capturing recharging energy, are rugged, reliable, maintenance-free, have excellent low-temperature characteristics, provide consistent performance over time, and promote safety as they can be left indefinitely in a safe, discharged state whereas batteries cannot.
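A quick sizing check shows why ultracapacitors suit the all-power-through-storage role described above. The module values are representative of commercial high-voltage transportation modules rather than figures from this article, and usable energy assumes discharge to half the rated voltage:

```python
def usable_energy_joules(capacitance_f: float, v_max: float, v_min: float) -> float:
    """Energy extractable from a capacitor bank between v_max and v_min:
    E = 0.5 * C * (v_max**2 - v_min**2)."""
    return 0.5 * capacitance_f * (v_max**2 - v_min**2)

# Representative module values (illustrative, not from the article).
c_bank = 63.0                 # farads
v_max, v_min = 125.0, 62.5    # volts: discharge to half of rated voltage

e_j = usable_energy_joules(c_bank, v_max, v_min)
print(f"usable energy: {e_j/1000:.1f} kJ (~{e_j/3600:.1f} Wh)")
```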
Innovative on board payload optical architecture for high throughput satellites
NASA Astrophysics Data System (ADS)
Baudet, D.; Braux, B.; Prieur, O.; Hughes, R.; Wilkinson, M.; Latunde-Dada, K.; Jahns, J.; Lohmann, U.; Fey, D.; Karafolas, N.
2017-11-01
For the next generation of High Throughput (HTP) telecommunications satellites, space end users' needs will result in higher link speeds and an increase in the number of channels, up to 512 channels running at 10 Gbit/s. By keeping electrical interconnections based on copper, the constraints in terms of power dissipation, number of electrical wires and signal integrity will become too demanding. The replacement of the electrical links by optical links is the most suitable solution, as it provides high-speed links with low power consumption and no EMC/EMI. But replacing all the electrical links of an On Board Payload (OBP) by optical links is challenging. It is not simply a matter of replacing electrical components with optical ones; rather, the whole concept and architecture have to be rethought to achieve a highly reliable and high-performance optical solution. In this context, this paper presents the concept of an Innovative OBP Optical Architecture. The optical architecture was defined to meet the critical requirements of the application: signal speed, number of channels, space reliability, power dissipation, optical signal crossings and component availability. The resulting architecture is challenging and the need for new developments is highlighted. But this innovative optically interconnected architecture will substantially outperform standard electrical ones.
Towards Behavioral Reflexion Models
NASA Technical Reports Server (NTRS)
Ackermann, Christopher; Lindvall, Mikael; Cleaveland, Rance
2009-01-01
Software architecture has become essential in the struggle to manage today's increasingly large and complex systems. Software architecture views are created to capture important system characteristics on an abstract and, thus, comprehensible level. As the system is implemented and later maintained, it often deviates from the original design specification. Such deviations can have implications for the quality of the system, such as reliability, security, and maintainability. Software architecture compliance checking approaches, such as the reflexion model technique, have been proposed to address this issue by comparing the implementation to a model of the system's architecture design. However, architecture compliance checking approaches focus solely on structural characteristics and ignore behavioral conformance. This is especially an issue in Systems-of-Systems. Systems-of-Systems (SoS) are decompositions of large systems into smaller systems for the sake of flexibility. Deviations of the implementation from its behavioral design often reduce the reliability of the entire SoS. An approach is needed that supports reasoning about behavioral conformance at the architecture level. In order to address this issue, we have developed an approach for comparing the implementation of an SoS to an architecture model of its behavioral design. The approach follows the idea of reflexion models and adapts it to support the compliance checking of behaviors. In this paper, we focus on sequencing properties as they play an important role in many SoS. Sequencing deviations potentially have a severe impact on SoS correctness and qualities. The desired behavioral specification is defined in UML sequence diagram notation, and behaviors are extracted from the SoS implementation. The behaviors are then mapped to the model of the desired behavior and the two are compared. Finally, a reflexion model is constructed that shows the deviations between behavioral design and implementation. This paper discusses the approach and shows how it can be applied to investigate reliability issues in SoS.
2011-01-01
Intelligent Approaches in Improving In-Vehicle Network Architecture and Minimizing Power Consumption in Combat Vehicles. (Report documentation fragment; the recoverable contents listing includes a chapter on software reliability prediction for combat vehicles.)
Advanced cloud fault tolerance system
NASA Astrophysics Data System (ADS)
Sumangali, K.; Benny, Niketa
2017-11-01
Cloud computing has become a prevalent on-demand service on the internet to store, manage and process data. A pitfall that accompanies cloud computing is the failures that can be encountered in the cloud. To overcome these failures, we require a fault tolerance mechanism to abstract faults from users. We have proposed a fault tolerant architecture, which is a combination of proactive and reactive fault tolerance. This architecture essentially increases the reliability and the availability of the cloud. In the future, we would like to compare evaluations of our proposed architecture with existing architectures and further improve it.
Heavy Lift Vehicle (HLV) Avionics Flight Computing Architecture Study
NASA Technical Reports Server (NTRS)
Hodson, Robert F.; Chen, Yuan; Morgan, Dwayne R.; Butler, A. Marc; Sdhuh, Joseph M.; Petelle, Jennifer K.; Gwaltney, David A.; Coe, Lisa D.; Koelbl, Terry G.; Nguyen, Hai D.
2011-01-01
A NASA multi-Center study team was assembled from LaRC, MSFC, KSC, JSC and WFF to examine potential flight computing architectures for a Heavy Lift Vehicle (HLV) to better understand avionics drivers. The study examined Design Reference Missions (DRMs) and vehicle requirements that could impact the vehicle's avionics. The study considered multiple self-checking and voting architectural variants and examined reliability, fault-tolerance, mass, power, and redundancy management impacts. Furthermore, a goal of the study was to develop the skills and tools needed to rapidly assess additional architectures should requirements or assumptions change.
A Collaborative Reasoning Maintenance System for a Reliable Application of Legislations
NASA Astrophysics Data System (ADS)
Tamisier, Thomas; Didry, Yoann; Parisot, Olivier; Feltz, Fernand
Decision support systems are nowadays used to disentangle all kinds of intricate situations and perform sophisticated analysis. Moreover, they are applied in areas where the knowledge can be heterogeneous, partially unformalized, implicit, or diffuse. The representation and management of this knowledge become the key point in ensuring the proper functioning of the system and keeping an intuitive view of its expected behavior. This paper presents a generic architecture for implementing knowledge-based systems used in collaborative business, where the knowledge is organized into different databases according to the usage, persistence and quality of the information. This approach is illustrated with Cadral, a customizable automated tool built on this architecture and used for processing family benefits applications at the National Family Benefits Fund of the Grand-Duchy of Luxembourg.
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
NASA Technical Reports Server (NTRS)
Goldberg, J.; Kautz, W. H.; Melliar-Smith, P. M.; Green, M. W.; Levitt, K. N.; Schwartz, R. L.; Weinstock, C. B.
1984-01-01
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
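The masking-and-reconfiguration loop is simple to sketch. This toy voter masks a single faulty result by majority and then drops the disagreeing processor from the active set; the processor names and values are hypothetical, and real SIFT performs this in its executive under strict synchronization:

```python
from collections import Counter

def vote(results: dict) -> tuple:
    """Majority-vote redundant results; report processors that disagreed."""
    majority_value, count = Counter(results.values()).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority -- system failure")
    faulty = {p for p, v in results.items() if v != majority_value}
    return majority_value, faulty

active = {"P1", "P2", "P3"}
results = {"P1": 42, "P2": 42, "P3": 17}      # P3 produced a faulty result
value, faulty = vote({p: results[p] for p in active})
active -= faulty                               # reassign work away from faulty units
print(value, "active set now:", active)
```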
Realistic absorption coefficient of each individual film in a multilayer architecture
NASA Astrophysics Data System (ADS)
Cesaria, M.; Caricato, A. P.; Martino, M.
2015-02-01
A spectrophotometric strategy, termed the multilayer-method (ML-method), is presented and discussed to realistically calculate the absorption coefficient of each individual layer embedded in multilayer architectures without reverse engineering, numerical refinements, or assumptions about layer homogeneity and thickness. The strategy extends, in a non-straightforward way, a consolidated route, already published by the authors and here termed the basic-method, able to accurately characterize an absorbing film covering a transparent substrate. The ML-method inherently accounts for the non-measurable contribution of the interfaces (including multiple reflections), describes the specific film structure as determined by the multilayer architecture and the deposition approach and parameters used, exploits simple mathematics, and has a wide range of applicability (high-to-weak absorption regions, thick-to-ultrathin films). Reliability tests are performed on films and multilayers based on a well-known material (indium tin oxide) by deliberately changing the film structural quality through doping, thickness tuning and the underlying supporting film. Results are found to be consistent with information obtained by standard (optical and structural) analysis, the basic-method, and band gap values reported in the literature. The discussed example applications demonstrate the ability of the ML-method to overcome the drawbacks commonly limiting an accurate description of multilayer architectures.
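For orientation, the single-film starting point that such strategies refine is the common textbook estimate below. It is not the authors' basic- or ML-method: it neglects interference and multiple-reflection corrections, and the ITO-like numbers are invented:

```python
import math

def alpha_single_film(T: float, R: float, thickness_cm: float) -> float:
    """Textbook single-film estimate alpha = (1/d) * ln((1-R)**2 / T),
    valid only when interference and multiple-reflection terms are negligible."""
    return math.log((1.0 - R) ** 2 / T) / thickness_cm

# Illustrative ITO-like numbers: 150 nm film, T = 70%, R = 10% at some wavelength.
print(f"alpha ~ {alpha_single_film(0.70, 0.10, 150e-7):.3e} cm^-1")
```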
NASA Technical Reports Server (NTRS)
Donovan, William J.; Davis, John E.
1991-01-01
Rockwell International is conducting an ongoing program to develop avionics architectures that provide high intrinsic value while meeting all mission objectives. Studies are being conducted to determine alternative configurations that have low life-cycle cost and minimum development risk, and that minimize launch delays while providing the reliability level to assure a successful mission. This effort is based on four decades of providing ballistic missile avionics to the United States Air Force and has focused on the requirements of the NASA Cargo Transfer Vehicle (CTV) program in 1991. During the development of architectural concepts it became apparent that rendezvous strategy issues have an impact on the architecture of the avionics system. This is in addition to the expected impact on propulsion and electrical power duration, flight profiles, and trajectory during approach.
An Open Avionics and Software Architecture to Support Future NASA Exploration Missions
NASA Technical Reports Server (NTRS)
Schlesinger, Adam
2017-01-01
The presentation describes an avionics and software architecture that has been developed through NASA's Advanced Exploration Systems (AES) division. The architecture is open-source, highly reliable with fault tolerance, and utilizes standard capabilities and interfaces, which are scalable and customizable to support future exploration missions. Specific focus areas of discussion will include command and data handling, software, human interfaces, communication and wireless systems, and systems engineering and integration.
Security Policy for a Generic Space Exploration Communication Network Architecture
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Sheehe, Charles J.; Vaden, Karl R.
2016-01-01
This document is one of three. It describes various security mechanisms and a security policy profile for a generic space-based communication architecture. Two other documents accompany this document: an Operations Concept (OpsCon) and a communication architecture document. The OpsCon should be read first, followed by the security policy profile described by this document, and then the architecture document. The overall goal is to design a generic space exploration communication network architecture that is affordable, deployable, maintainable, securable, evolvable, reliable, and adaptable. The architecture should also require limited reconfiguration throughout system development and deployment. System deployment includes subsystem development in a factory setting, system integration in a laboratory setting, launch preparation, launch, and deployment and operation in space.
Trust information-based privacy architecture for ubiquitous health.
Ruotsalainen, Pekka Sakari; Blobel, Bernd; Seppälä, Antto; Nykänen, Pirkko
2013-10-08
Ubiquitous health is defined as a dynamic network of interconnected systems that offers health services independent of time and location to a data subject (DS). The network takes place in an open and unsecured information space. It is created and managed by the DS, who sets rules that regulate the way personal health information is collected and used. Compared to health care, it is impossible in ubiquitous health to assume the existence of a priori trust between the DS and service providers and to produce privacy using static security services. In ubiquitous health, the features, business goals, and regulations that systems follow often remain unknown. Furthermore, health care-specific regulations do not rule the ways health data is processed and shared. To be successful, ubiquitous health requires a novel privacy architecture. The goal of this study was to develop a privacy management architecture that helps the DS to create and dynamically manage the network and to maintain information privacy. The architecture should enable the DS to dynamically define service and system-specific rules that regulate the way subject data is processed. The architecture should provide to the DS reliable trust information about systems and assist in the formulation of privacy policies. Furthermore, the architecture should give feedback on how systems follow the policies of the DS and offer protection against privacy and trust threats existing in ubiquitous environments. A sequential method that combines methodologies used in system theory, systems engineering, requirement analysis, and system design was used in the study. In the first phase, principles, trust and privacy models, and viewpoints were selected. Thereafter, functional requirements and services were developed on the basis of a careful analysis of existing research published in journals and conference proceedings. Based on principles, models, and requirements, architectural components and their interconnections were developed using system analysis. The architecture mimics the way humans use trust information in decision making, and enables the DS to design system-specific privacy policies using computational trust information that is based on systems' measured features. The trust attributes that were developed describe the level at which systems support awareness and transparency, and how they follow general and domain-specific regulations and laws. The monitoring component of the architecture offers dynamic feedback concerning how the system enforces the policies of the DS. The privacy management architecture developed in this study enables the DS to dynamically manage information privacy in ubiquitous health and to define individual policies for all systems considering their trust value and corresponding attributes. The DS can also set policies for secondary use and reuse of health information. The architecture offers protection against privacy threats existing in ubiquitous environments. Although the architecture is targeted to ubiquitous health, it can easily be modified for other ubiquitous applications.
Trust Information-Based Privacy Architecture for Ubiquitous Health
2013-01-01
Background Ubiquitous health is defined as a dynamic network of interconnected systems that offers health services independent of time and location to a data subject (DS). The network takes place in open and unsecure information space. It is created and managed by the DS, who sets rules that regulate the way personal health information is collected and used. Compared to health care, it is impossible in ubiquitous health to assume the existence of a priori trust between the DS and service providers and to produce privacy using static security services. In ubiquitous health, the features, business goals, and regulations of the participating systems often remain unknown. Furthermore, health care-specific regulations do not govern the ways health data is processed and shared. To be successful, ubiquitous health requires a novel privacy architecture. Objective The goal of this study was to develop a privacy management architecture that helps the DS to create and dynamically manage the network and to maintain information privacy. The architecture should enable the DS to dynamically define service- and system-specific rules that regulate the way subject data is processed. The architecture should provide the DS with reliable trust information about systems and assist in the formulation of privacy policies. Furthermore, the architecture should give feedback on how systems follow the policies of the DS and offer protection against privacy and trust threats existing in ubiquitous environments. Methods A sequential method that combines methodologies used in system theory, systems engineering, requirement analysis, and system design was used in the study. In the first phase, principles, trust and privacy models, and viewpoints were selected. Thereafter, functional requirements and services were developed on the basis of a careful analysis of existing research published in journals and conference proceedings. Based on principles, models, and requirements, architectural components and their interconnections were developed using system analysis. Results The architecture mimics the way humans use trust information in decision making, and enables the DS to design system-specific privacy policies using computational trust information that is based on systems' measured features. The trust attributes that were developed describe the level at which systems support awareness and transparency, and how they follow general and domain-specific regulations and laws. The monitoring component of the architecture offers dynamic feedback concerning how the system enforces the policies of the DS. Conclusions The privacy management architecture developed in this study enables the DS to dynamically manage information privacy in ubiquitous health and to define individual policies for all systems considering their trust value and corresponding attributes. The DS can also set policies for secondary use and reuse of health information. The architecture offers protection against privacy threats existing in ubiquitous environments. Although the architecture is targeted to ubiquitous health, it can easily be modified to other ubiquitous applications. PMID:25099213
Automated geospatial Web Services composition based on geodata quality requirements
NASA Astrophysics Data System (ADS)
Cruz, Sérgio A. B.; Monteiro, Antonio M. V.; Santos, Rafael
2012-10-01
Service-Oriented Architecture and Web Services technologies improve the performance of activities involved in geospatial analysis with a distributed computing architecture. However, the design of the geospatial analysis process on this platform, by combining component Web Services, presents some open issues. The automated construction of these compositions represents an important research topic. Some approaches to solving this problem are based on AI planning methods coupled with semantic service descriptions. This work presents a new approach using AI planning methods to improve the robustness of the produced geospatial Web Services composition. For this purpose, we use semantic descriptions of geospatial data quality requirements in a rule-based form. These rules allow the semantic annotation of geospatial data and, coupled with the conditional planning method, more precisely represent the situations of nonconformity with geodata quality that may occur during the execution of the Web Service composition. The service compositions produced by this method are more robust, thus improving process reliability when working with a composition of chained geospatial Web Services.
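As a rough illustration of the idea, the sketch below encodes geodata quality requirements as rules and lets a conditional plan branch to a fallback service when a dataset fails a rule at execution time. The rule fields and service names are hypothetical, not taken from the paper.

```python
# Minimal sketch: rule-based geodata quality requirements driving a
# conditional service composition. If the dataset's metadata violates a
# quality rule at execution time, the plan takes the fallback branch.
# Rule attributes and service names are hypothetical.

quality_rules = [
    {"attribute": "positional_accuracy_m", "op": "<=", "value": 30.0},
    {"attribute": "cloud_cover_pct",       "op": "<=", "value": 10.0},
]

def conforms(metadata: dict, rule: dict) -> bool:
    ops = {"<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b}
    return ops[rule["op"]](metadata[rule["attribute"]], rule["value"])

def run_composition(metadata: dict) -> str:
    """Conditional plan: invoke the primary service only when all
    quality rules hold; otherwise branch to the fallback service."""
    if all(conforms(metadata, r) for r in quality_rules):
        return "invoke: primary_classification_service"
    return "invoke: fallback_reprocessing_service"

print(run_composition({"positional_accuracy_m": 12.5, "cloud_cover_pct": 4.0}))
print(run_composition({"positional_accuracy_m": 55.0, "cloud_cover_pct": 4.0}))
```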
Experience with ATLAS MySQL PanDA database service
NASA Astrophysics Data System (ADS)
Smirnov, Y.; Wlodek, T.; De, K.; Hover, J.; Ozturk, N.; Smith, J.; Wenaus, T.; Yu, D.
2010-04-01
The PanDA distributed production and analysis system has been in production use for ATLAS data processing and analysis since late 2005 in the US, and globally throughout ATLAS since early 2008. Its core architecture is based on a set of stateless web services served by Apache and backed by a suite of MySQL databases that are the repository for all PanDA information: active and archival job queues, dataset and file catalogs, site configuration information, monitoring information, system control parameters, and so on. This database system is one of the most critical components of PanDA, and has successfully delivered the functional and scaling performance required by PanDA, currently operating at a scale of half a million jobs per week, with much growth still to come. In this paper we describe the design and implementation of the PanDA database system, its architecture of MySQL servers deployed at BNL and CERN, backup strategy and monitoring tools. The system has been developed, thoroughly tested, and brought to production to provide highly reliable, scalable, flexible and available database services for ATLAS Monte Carlo production, reconstruction and physics analysis.
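The stateless-services-over-a-database pattern described above can be sketched compactly: handlers keep no local state, so any server replica can claim work from a shared job table. The schema below is invented for illustration, and sqlite3 stands in for MySQL so the sketch runs self-contained; a real MySQL deployment would claim jobs atomically (e.g., with SELECT ... FOR UPDATE).

```python
# Sketch of the stateless-service-plus-database pattern: all state lives
# in the database, so any replica can serve any request. Hypothetical
# schema; sqlite3 used here in place of MySQL for self-containment.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, state TEXT, site TEXT)")
db.executemany("INSERT INTO jobs (state, site) VALUES (?, ?)",
               [("activated", "BNL"), ("activated", "CERN"), ("running", "BNL")])

def dispatch_job(conn, site: str):
    """Stateless handler: claim one activated job for a site and mark it
    running. (MySQL would use SELECT ... FOR UPDATE for a true atomic claim.)"""
    cur = conn.execute(
        "SELECT id FROM jobs WHERE state = 'activated' AND site = ? LIMIT 1",
        (site,))
    row = cur.fetchone()
    if row is None:
        return None
    conn.execute("UPDATE jobs SET state = 'running' WHERE id = ?", (row[0],))
    conn.commit()
    return row[0]

print(dispatch_job(db, "BNL"))   # claims job 1
print(dispatch_job(db, "BNL"))   # no activated BNL jobs left -> None
```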
Altair Lander Life Support: Design Analysis Cycles 4 and 5
NASA Technical Reports Server (NTRS)
Anderson, Molly; Curley, Su; Rotter, Henry; Stambaugh, Imelda; Yagoda, Evan
2011-01-01
Life support systems are a critical part of human exploration beyond low Earth orbit. NASA's Altair Lunar Lander team is pursuing efficient solutions to the technical challenges of human spaceflight. Life support design efforts up through Design Analysis Cycle (DAC) 4 focused on finding lightweight and reliable solutions for the Sortie and Outpost missions within the Constellation Program. In DAC-4 and later follow-on work, changes were made to add functionality for new requirements accepted by the Altair project, and to update the design as knowledge about certain issues or hardware matured. In DAC-5, the Altair project began to consider mission architectures outside the Constellation baseline. Selecting the optimal life support system design is very sensitive to mission duration. When the mission goals and architecture change, several trade studies must be conducted to determine the appropriate design. Finally, several areas of work developed through the Altair project may be applicable to other vehicle concepts for microgravity missions. Maturing the Altair life support system related analysis, design, and requirements can provide important information for developers of a wide range of other human vehicles.
Altair Lander Life Support: Design Analysis Cycles 4 and 5
NASA Technical Reports Server (NTRS)
Anderson, Molly; Curley, Su; Rotter, Henry; Yagoda, Evan
2010-01-01
Life support systems are a critical part of human exploration beyond low Earth orbit. NASA's Altair Lunar Lander team is pursuing efficient solutions to the technical challenges of human spaceflight. Life support design efforts up through Design Analysis Cycle (DAC) 4 focused on finding lightweight and reliable solutions for the Sortie and Outpost missions within the Constellation Program. In DAC-4 and later follow-on work, changes were made to add functionality for new requirements accepted by the Altair project, and to update the design as knowledge about certain issues or hardware matured. In DAC-5, the Altair project began to consider mission architectures outside the Constellation baseline. Selecting the optimal life support system design is very sensitive to mission duration. When the mission goals and architecture change, several trade studies must be conducted to determine the appropriate design. Finally, several areas of work developed through the Altair project may be applicable to other vehicle concepts for microgravity missions. Maturing the Altair life support system related analysis, design, and requirements can provide important information for developers of a wide range of other human vehicles.
Trades Between Opposition and Conjunction Class Trajectories for Early Human Missions to Mars
NASA Technical Reports Server (NTRS)
Mattfeld, Bryan; Stromgren, Chel; Shyface, Hilary; Komar, David R.; Cirillo, William; Goodliff, Kandyce
2014-01-01
Candidate human missions to Mars, including NASA's Design Reference Architecture 5.0, have focused on conjunction-class missions with long crewed durations and minimum energy trajectories to reduce total propellant requirements and total launch mass. However, in order to progressively reduce risk and gain experience in interplanetary mission operations, it may be desirable that initial human missions to Mars, whether to the surface or to Mars orbit, have shorter total crewed durations and minimal stay times at the destination. Opposition-class missions require larger total energy requirements relative to conjunction-class missions but offer the potential for much shorter mission durations, potentially reducing risk and overall systems performance requirements. This paper will present a detailed comparison of conjunction-class and opposition-class human missions to the Mars vicinity with a focus on how such missions could be integrated into the initial phases of a Mars exploration campaign. The paper will present the results of a trade study that integrates trajectory/propellant analysis, element design, logistics and sparing analysis, and risk assessment to produce a comprehensive comparison of opposition and conjunction exploration mission constructs. Included in the trade study is an assessment of the risk to the crew and the trade-offs between the mission duration and element, logistics, and spares mass. The analysis of the mission trade space was conducted using four simulation and analysis tools developed by NASA. Trajectory analyses for Mars destination missions were conducted using VISITOR (Versatile ImpulSive Interplanetary Trajectory OptimizeR), an in-house tool developed by NASA Langley Research Center. Architecture elements were evaluated using EXploration Architecture Model for IN-space and Earth-to-orbit (EXAMINE), a parametric modeling tool that generates exploration architectures through an integrated systems model. Logistics analysis was conducted using NASA's Human Exploration Logistics Model (HELM), and sparing allocation predictions were generated via the Exploration Maintainability Analysis Tool (EMAT), which is a probabilistic simulation engine that evaluates trades in spacecraft reliability and sparing requirements based on spacecraft system maintainability and reparability.
Transmission control unit drive based on the AUTOSAR standard
NASA Astrophysics Data System (ADS)
Guo, Xiucai; Qin, Zhen
2018-03-01
Automotive embedded system development based on the AUTOSAR standard is a growing trend in the automotive electronics industry. The AUTOSAR automotive architecture standard proposes a development architecture for the transmission control unit (TCU) and specifies its interfaces and configurations in detail. This paper discusses how to drive the TCU based on the AUTOSAR standard architecture. The results show that driving the TCU with the AUTOSAR system improves reliability and shortens development cycles.
Thermal Hotspots in CPU Die and Its Future Architecture
NASA Astrophysics Data System (ADS)
Wang, Jian; Hu, Fu-Yuan
Owing to increasing core frequency and chip integration and the limited die dimensions, power densities in CPU chips have been increasing rapidly. The high on-chip temperatures that result from these power densities threaten the processor's performance and the chip's reliability. This paper analyzes the thermal hotspots in the die and their properties. A new architecture for functional units in the die, a hot-units-distributed architecture, is suggested to cope with the problem of high power densities in future processor chips.
Executable Architecture Research at Old Dominion University
NASA Technical Reports Server (NTRS)
Tolk, Andreas; Shuman, Edwin A.; Garcia, Johnny J.
2011-01-01
Executable architectures allow the evaluation of system architectures not only regarding their static, but also their dynamic behavior. However, the systems engineering community does not agree on a common formal specification of executable architectures. Closing this gap by identifying the necessary elements of an executable architecture, a modeling language, and a modeling formalism is the topic of ongoing PhD research. In addition, systems are generally defined and applied in an operational context to provide capabilities and enable missions. To maximize the benefits of executable architectures, a second PhD effort introduces the idea of creating an executable context in addition to the executable architecture. The results move the validation of architectures from the current information domain into the knowledge domain and improve the reliability of such validation efforts. The paper presents research and results of both doctoral research efforts and puts them into a common context of state-of-the-art systems engineering methods supporting more agility.
Fiber to the serving area: telephone-like star architecture for CATV
NASA Astrophysics Data System (ADS)
Fellows, David M.
1992-02-01
CATV systems traditionally use a tree and branch architecture to bring up to 550 MHz of analog bandwidth to every home in a franchise area. This changed slightly with the advent of AM fiber optic equipment, as fiber optics were used in an overlay fashion to reduce coaxial amplifier cascades and improve subscriber quality and reliability. Within the last year, fiber has economically replaced coaxial trunking. The resulting fiber to the serving area architecture combines fiber and coaxial stars for a network that looks much like the carrier serving area architectures used by telephone companies.
NASA Technical Reports Server (NTRS)
Traversi, M.; Piccolo, R.
1980-01-01
Tradeoff study activities and the analysis process used are described, with emphasis on (1) review of the alternatives; (2) vehicle architecture; and (3) evaluation of the propulsion system alternatives. Interim results are presented for the basic hybrid vehicle characterization; vehicle scheme development; propulsion system power and transmission ratios; vehicle weight; energy consumption and emissions; performance; production costs; reliability, availability, and maintainability; life cycle costs; and operational quality. The final vehicle conceptual design is examined.
GASP-PL/I Simulation of Integrated Avionic System Processor Architectures. M.S. Thesis
NASA Technical Reports Server (NTRS)
Brent, G. A.
1978-01-01
A development study sponsored by NASA was completed in July 1977 which proposed a complete integration of all aircraft instrumentation into a single modular system. Instead of using the current single-function aircraft instruments, computers compiled and displayed in-flight information for the pilot. A processor architecture called the Team Architecture was proposed. This is a hardware/software approach to high-reliability computer systems. A follow-up study of the proposed Team Architecture is reported. GASP-PL/I simulation models are used to evaluate the operating characteristics of the Team Architecture. The problem, model development, simulation programs, and results are presented at length. Also included are program input formats, outputs, and listings.
NASA Technical Reports Server (NTRS)
Harper, Richard E.; Elks, Carl
1995-01-01
An Army Fault Tolerant Architecture (AFTA) has been developed to meet real-time fault tolerant processing requirements of future Army applications. AFTA is the enabling technology that will allow the Army to configure existing processors and other hardware to provide high throughput and ultrahigh reliability necessary for TF/TA/NOE flight control and other advanced Army applications. A comprehensive conceptual study of AFTA has been completed that addresses a wide range of issues including requirements, architecture, hardware, software, testability, producibility, analytical models, validation and verification, common mode faults, VHDL, and a fault tolerant data bus. A Brassboard AFTA for demonstration and validation has been fabricated, and two operating systems and a flight-critical Army application have been ported to it. Detailed performance measurements have been made of fault tolerance and operating system overheads while AFTA was executing the flight application in the presence of faults.
Software Architecture of Sensor Data Distribution In Planetary Exploration
NASA Technical Reports Server (NTRS)
Lee, Charles; Alena, Richard; Stone, Thom; Ossenfort, John; Walker, Ed; Notario, Hugo
2006-01-01
Data from mobile and stationary sensors will be vital in planetary surface exploration. The distribution and collection of sensor data in an ad-hoc wireless network presents a challenge. Irregular terrain, mobile nodes, new associations with access points and repeaters with stronger signals as the network reconfigures to adapt to new conditions, signal fade and hardware failures can cause: a) Data errors; b) Out of sequence packets; c) Duplicate packets; and d) Drop out periods (when node is not connected). To mitigate the effects of these impairments, a robust and reliable software architecture must be implemented. This architecture must also be tolerant of communications outages. This paper describes such a robust and reliable software infrastructure that meets the challenges of a distributed ad hoc network in a difficult environment and presents the results of actual field experiments testing the principles and actual code developed.
A safety-based decision making architecture for autonomous systems
NASA Technical Reports Server (NTRS)
Musto, Joseph C.; Lauderbaugh, L. K.
1991-01-01
Engineering systems designed specifically for space applications often exhibit a high level of autonomy in the control and decision-making architecture. As the level of autonomy increases, more emphasis must be placed on assimilating the safety functions normally executed at the hardware level or by human supervisors into the control architecture of the system. The development of a decision-making structure which utilizes information on system safety is detailed. A quantitative measure of system safety, called the safety self-information, is defined. This measure is analogous to the reliability self-information defined by McInroy and Saridis, but includes weighting of task constraints to provide a measure of both reliability and cost. An example is presented in which the safety self-information is used as a decision criterion in a mobile robot controller. The safety self-information is shown to be consistent with the entropy-based Theory of Intelligent Machines defined by Saridis.
Sayyed, Ali; Medeiros de Araújo, Gustavo; Bodanese, João Paulo; Buss Becker, Leandro
2015-01-01
The use of mobile nodes to collect data in a Wireless Sensor Network (WSN) has gained special attention over the last years. Some researchers explore the use of Unmanned Aerial Vehicles (UAVs) as mobile nodes for such data-collection purposes. An analysis of these works shows that the mobile nodes used in such scenarios are typically equipped with at least two different radio interfaces. The present work presents a Dual-Stack Single-Radio Communication Architecture (DSSRCA), which allows a UAV to communicate in a bidirectional manner with a WSN and a Sink node. The proposed architecture was specifically designed to support different network QoS requirements, such as best-effort and more reliable communications, meeting both UAV-to-WSN and UAV-to-Sink communication needs. DSSRCA was implemented and tested on a real UAV, as detailed in this paper. This paper also includes a simulation analysis that addresses bandwidth consumption in an environmental monitoring application scenario. It includes an analysis of the data gathering rate that can be achieved considering different UAV flight speeds. Obtained results show the viability of using a single radio transmitter for collecting data from the WSN and forwarding such data to the Sink node. PMID:26389911
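Conceptually, a dual-stack single-radio design multiplexes two transport behaviors over one transmitter. The sketch below contrasts a fire-and-forget best-effort path with an acknowledged, retrying path such as the UAV-to-Sink traffic would use; the radio interface is entirely hypothetical, and this is not the DSSRCA implementation.

```python
# Conceptual sketch of a dual-stack send path sharing one radio: a
# best-effort stack (e.g., UAV-to-WSN) and a reliable, acknowledged
# stack (e.g., UAV-to-Sink). The radio API and loss rate are invented.
import random

def radio_send(frame: bytes) -> bool:
    """Stand-in for the single physical radio; returns True if the
    (simulated) transmission was acknowledged."""
    return random.random() > 0.3  # assume 30% frame loss

def send_best_effort(frame: bytes) -> None:
    radio_send(frame)  # fire and forget: no ACK, no retry

def send_reliable(frame: bytes, max_retries: int = 4) -> bool:
    """ACK-based stack: retry until acknowledged or retries exhausted."""
    for _ in range(max_retries + 1):
        if radio_send(frame):
            return True
    return False

send_best_effort(b"wsn-beacon")
print(send_reliable(b"sink-data"))  # succeeds with probability 1 - 0.3**5
```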
Sayyed, Ali; de Araújo, Gustavo Medeiros; Bodanese, João Paulo; Becker, Leandro Buss
2015-09-16
The use of mobile nodes to collect data in a Wireless Sensor Network (WSN) has gained special attention over the last years. Some researchers explore the use of Unmanned Aerial Vehicles (UAVs) as mobile nodes for such data-collection purposes. An analysis of these works shows that the mobile nodes used in such scenarios are typically equipped with at least two different radio interfaces. The present work presents a Dual-Stack Single-Radio Communication Architecture (DSSRCA), which allows a UAV to communicate in a bidirectional manner with a WSN and a Sink node. The proposed architecture was specifically designed to support different network QoS requirements, such as best-effort and more reliable communications, meeting both UAV-to-WSN and UAV-to-Sink communication needs. DSSRCA was implemented and tested on a real UAV, as detailed in this paper. This paper also includes a simulation analysis that addresses bandwidth consumption in an environmental monitoring application scenario. It includes an analysis of the data gathering rate that can be achieved considering different UAV flight speeds. Obtained results show the viability of using a single radio transmitter for collecting data from the WSN and forwarding such data to the Sink node.
Architectural Analysis of a LLNL LWIR Sensor System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bond, Essex J.; Curry, Jim R.; LaFortune, Kai N.
The architecture of an LLNL airborne imaging and detection system is considered in this report. The purpose of the system is to find the location of substances of interest by detecting their chemical signatures using a long-wave infrared (LWIR) imager with geo-registration capability. The detection system consists of an LWIR imaging spectrometer as well as a network of computer hardware and analysis software for analyzing the images for the features of interest. The system has been in the operations phase now for well over a year, and as such, there is enough use data and feedback from the primary beneficiary to assess the current successes and shortcomings of the LWIR system architecture. The LWIR system has been successful in providing reliable data collection and the delivery of a report with results. The weakness of the architecture has been identified in two areas: with the network of computer hardware and software and with the feedback of the state of the system health. Regarding the former, the system computers and software that carry out the data acquisition are too complicated for routine operations and maintenance. With respect to the latter, the primary beneficiary of the instrument's data does not have enough metrics to use to filter the large quantity of data to determine its utility. In addition to the needs in these two areas, a latent need of one of the stakeholders is identified. This report documents the strengths and weaknesses, as well as proposes a solution for enhancing the architecture that simultaneously addresses the two areas of weakness and leverages them to meet the newly identified latent need.
Operational Concepts for a Generic Space Exploration Communication Network Architecture
NASA Technical Reports Server (NTRS)
Ivancic, William D.; Vaden, Karl R.; Jones, Robert E.; Roberts, Anthony M.
2015-01-01
This document is one of three. It describes the Operational Concept (OpsCon) for a generic space exploration communication architecture. The purpose of this particular document is to identify communication flows and data types. Two other documents accompany this document, a security policy profile and a communication architecture document. The operational concepts should be read first followed by the security policy profile and then the architecture document. The overall goal is to design a generic space exploration communication network architecture that is affordable, deployable, maintainable, securable, evolvable, reliable, and adaptable. The architecture should also require limited reconfiguration throughout system development and deployment. System deployment includes: subsystem development in a factory setting, system integration in a laboratory setting, launch preparation, launch, and deployment and operation in space.
An infrastructure for accurate characterization of single-event transients in digital circuits.
Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael
2013-11-01
We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure.
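For reference, the standard double-exponential current injection model named above takes the form I(t) = (Q / (tau_f - tau_r)) * (exp(-t/tau_f) - exp(-t/tau_r)), whose integral over t >= 0 recovers the collected charge Q. The sketch below evaluates it with placeholder parameters; the calibrated values from the paper's 3D device simulations and micro-beam experiments are not reproduced here.

```python
# The standard double-exponential current model for a single-event
# transient (SET), as used for fault injection in Spice simulations:
#   I(t) = (Q / (tau_f - tau_r)) * (exp(-t/tau_f) - exp(-t/tau_r))
# Q is the collected charge, tau_f the fall (collection) time constant,
# tau_r the rise time constant. Parameter values here are placeholders.
import math

def set_current(t: float, q: float, tau_f: float, tau_r: float) -> float:
    """Injected SET current in amperes at time t (seconds), t >= 0."""
    if t < 0:
        return 0.0
    return (q / (tau_f - tau_r)) * (math.exp(-t / tau_f) - math.exp(-t / tau_r))

q = 100e-15                       # 100 fC collected charge (placeholder)
tau_f, tau_r = 200e-12, 50e-12    # placeholder time constants
for t in (0.0, 50e-12, 200e-12, 1e-9):
    print(f"t = {t:.1e} s  I = {set_current(t, q, tau_f, tau_r):.3e} A")
```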
An infrastructure for accurate characterization of single-event transients in digital circuits
Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael
2013-01-01
We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure. PMID:24748694
Sustaining Human Presence on Mars Using ISRU and a Reusable Lander
NASA Technical Reports Server (NTRS)
Arney, Dale C.; Jones, Christopher A.; Klovstad, Jordan J.; Komar, D.R.; Earle, Kevin; Moses, Robert; Shyface, Hilary R.
2015-01-01
This paper presents an analysis of the impact of ISRU (In-Situ Resource Utilization), reusability, and automation on sustaining a human presence on Mars, requiring a transition from Earth dependence to Earth independence. The study analyzed the surface and transportation architectures and compared campaigns, revealing the importance of ISRU and reusability. A reusable Mars lander, Hercules, eliminates the need to deliver a new descent and ascent stage with each cargo and crew delivery to Mars, reducing the mass delivered from Earth. As part of an evolvable transportation architecture, this investment is key to enabling continuous human presence on Mars. The extensive use of ISRU reduces the logistics supply chain from Earth in order to support population growth at Mars. Reliable and autonomous systems, in conjunction with robotics, are required to enable ISRU architectures, as systems must operate and maintain themselves while the crew is not present. A comparison of Mars campaigns is presented to show the impact of adding these investments and their ability to contribute to sustaining a human presence on Mars.
A Security Architecture for Fault-Tolerant Systems
1993-06-03
One aspect of our effort to achieve better performance is integrating the system into microkernel-based operating systems.
Parallel Subspace Subcodes of Reed-Solomon Codes for Magnetic Recording Channels
ERIC Educational Resources Information Center
Wang, Han
2010-01-01
Read channel architectures based on a single low-density parity-check (LDPC) code are being considered for the next generation of hard disk drives. However, LDPC-only solutions suffer from the error floor problem, which may compromise reliability, if not handled properly. Concatenated architectures using an LDPC code plus a Reed-Solomon (RS) code…
MTP: An atomic multicast transport protocol
NASA Technical Reports Server (NTRS)
Freier, Alan O.; Marzullo, Keith
1990-01-01
The Multicast Transport Protocol (MTP), a reliable transport protocol that utilizes the multicast strategy of applicable lower-layer network architectures, is described. In addition to transporting data reliably and efficiently, MTP provides the client synchronization necessary for agreement on the receipt of data and the joining of the group of communicants.
Sample Manipulation System for Sample Analysis at Mars
NASA Technical Reports Server (NTRS)
Mumm, Erik; Kennedy, Tom; Carlson, Lee; Roberts, Dustyn
2008-01-01
The Sample Analysis at Mars (SAM) instrument will analyze Martian samples collected by the Mars Science Laboratory Rover with a suite of spectrometers. This paper discusses the driving requirements, design, and lessons learned in the development of the Sample Manipulation System (SMS) within SAM. The SMS stores and manipulates 74 sample cups to be used for solid sample pyrolysis experiments. Focus is given to the unique mechanism architecture developed to deliver a high packing density of sample cups in a reliable, fault tolerant manner while minimizing system mass and control complexity. Lessons learned are presented on contamination control, launch restraint mechanisms for fragile sample cups, and mechanism test data.
Radioisotope Power System Pool Concept
NASA Technical Reports Server (NTRS)
Rusick, Jeffrey J.; Bolotin, Gary S.
2015-01-01
Advanced Radioisotope Power Systems (RPS) for NASA deep space science missions have historically used static thermoelectric-based designs because they are highly reliable, and their radioisotope heat sources can be passively cooled throughout the mission life cycle. Recently, a significant effort to develop a dynamic RPS, the Advanced Stirling Radioisotope Generator (ASRG), was conducted by NASA and the Department of Energy, because Stirling based designs offer energy conversion efficiencies four times higher than heritage thermoelectric designs; and the efficiency would proportionately reduce the amount of radioisotope fuel needed for the same power output. However, the long term reliability of a Stirling based design is a concern compared to thermoelectric designs, because for certain Stirling system architectures the radioisotope heat sources must be actively cooled via the dynamic operation of Stirling converters throughout the mission life cycle. To address this reliability concern, a new dynamic Stirling cycle RPS architecture is proposed called the RPS Pool Concept.
A Robust Compositional Architecture for Autonomous Systems
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Deney, Ewen; Farrell, Kimberley; Giannakopoulos, Dimitra; Jonsson, Ari; Frank, Jeremy; Bobby, Mark; Carpenter, Todd; Estlin, Tara
2006-01-01
Space exploration applications can benefit greatly from autonomous systems. Great distances, limited communications and high costs make direct operations impossible while mandating operations reliability and efficiency beyond what traditional commanding can provide. Autonomous systems can improve reliability and enhance spacecraft capability significantly. However, there is reluctance to utilize autonomous systems. In part this is due to general hesitation about new technologies, but a more tangible concern is that of the reliability and predictability of autonomous software. In this paper, we describe ongoing work aimed at increasing the robustness and predictability of autonomous software, with the ultimate goal of building trust in such systems. The work combines state-of-the-art technologies and capabilities in autonomous systems with advanced validation and synthesis techniques. The focus of this paper is on the autonomous system architecture that has been defined, and on how it enables the application of validation techniques for resulting autonomous systems.
Parallelizing Timed Petri Net simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1993-01-01
The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold: it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
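To make the joint performance/reliability framework concrete, the sketch below runs a naive Monte Carlo estimate of the unreliability of a small repairable system modeled as a Continuous Time Markov Chain. The structure and rates are illustrative only, and the rare-event regime is exactly where the importance sampling mentioned above becomes necessary.

```python
# Naive Monte Carlo over a small CTMC: a 3-component system with
# per-component failure rate lam and a single repair server of rate mu;
# the system is absorbed (failed) once fewer than 2 components work.
# Rates and structure are illustrative, not taken from the report.
import random

def simulate_failure(T: float, lam: float, mu: float) -> bool:
    """Return True if the system is failed (absorbed) by mission time T."""
    t, working = 0.0, 3
    while t < T:
        fail_rate = working * lam
        repair_rate = mu if working < 3 else 0.0
        total = fail_rate + repair_rate
        t += random.expovariate(total)          # time to next transition
        if t >= T:
            return False
        if random.random() < fail_rate / total:
            working -= 1
            if working < 2:                     # 2-of-3 system has failed
                return True
        else:
            working += 1
    return False

trials = 100_000
unrel = sum(simulate_failure(10.0, 1e-2, 1e-1) for _ in range(trials)) / trials
print(f"estimated unreliability at T=10: {unrel:.2e}")
```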
Using benchmarks for radiation testing of microprocessors and FPGAs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quinn, Heather; Robinson, William H.; Rech, Paolo
Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.
Using benchmarks for radiation testing of microprocessors and FPGAs
Quinn, Heather; Robinson, William H.; Rech, Paolo; ...
2015-12-17
Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.
Digital Avionics Information System (DAIS): Development and Demonstration.
1981-09-01
advances in technology. The DAIS architecture results in improved reliability and availability of avionics systems while at the same time reducing life ... (DAIS) represents a significant advance in the technology of avionics system architecture. DAIS is a total systems concept, exploiting standardization ... configurations and fully capable of accommodating new advances in technology. These fundamental system characteristics are described in this report; the
A Vertical Organic Transistor Architecture for Fast Nonvolatile Memory.
She, Xiao-Jian; Gustafsson, David; Sirringhaus, Henning
2017-02-01
A new device architecture for fast organic transistor memory is developed, based on a vertical organic transistor configuration incorporating high-performance ambipolar conjugated polymers and unipolar small molecules as the transport layers, to achieve reliable and fast programming and erasing of the threshold voltage shift in less than 200 ns.
Tian, Liangliang; He, Gege; Cai, Yanhua; Wu, Shenping; Su, Yongyao; Yan, Hengqing; Yang, Cong; Chen, Yanling; Li, Lu
2018-02-16
Inspired by kinetics, the design of hollow hierarchical electrocatalysts through large-scale integration of building blocks is recognized as an effective approach to the achievement of superior electrocatalytic performance. In this work, a hollow, hierarchical Co3O4 architecture (Co3O4 HHA) was constructed using a coordinated etching and precipitation (CEP) method followed by calcination. The resulting Co3O4 HHA electrode exhibited excellent electrocatalytic activity in terms of high sensitivity (839.3 μA mM-1 cm-2) and reliable stability in glucose detection. The high sensitivity could be attributed to the large specific surface area (SSA), ample unimpeded penetration diffusion paths and high electron transfer rate originating from the unique two-dimensional (2D) sheet-like character and hollow porous architecture. The hollow hierarchical structure also affords sufficient interspace for accommodation of volume change and structural strain, resulting in enhanced stability. The results indicate that Co3O4 HHA could have potential for application in the design of non-enzymatic glucose sensors, and that the construction of hollow hierarchical architecture provides an efficient way to design highly active, stable electrocatalysts.
NASA Astrophysics Data System (ADS)
Tian, Liangliang; He, Gege; Cai, Yanhua; Wu, Shenping; Su, Yongyao; Yan, Hengqing; Yang, Cong; Chen, Yanling; Li, Lu
2018-02-01
Inspired by kinetics, the design of hollow hierarchical electrocatalysts through large-scale integration of building blocks is recognized as an effective approach to the achievement of superior electrocatalytic performance. In this work, a hollow, hierarchical Co3O4 architecture (Co3O4 HHA) was constructed using a coordinated etching and precipitation (CEP) method followed by calcination. The resulting Co3O4 HHA electrode exhibited excellent electrocatalytic activity in terms of high sensitivity (839.3 μA mM-1 cm-2) and reliable stability in glucose detection. The high sensitivity could be attributed to the large specific surface area (SSA), ample unimpeded penetration diffusion paths and high electron transfer rate originating from the unique two-dimensional (2D) sheet-like character and hollow porous architecture. The hollow hierarchical structure also affords sufficient interspace for accommodation of volume change and structural strain, resulting in enhanced stability. The results indicate that Co3O4 HHA could have potential for application in the design of non-enzymatic glucose sensors, and that the construction of hollow hierarchical architecture provides an efficient way to design highly active, stable electrocatalysts.
Fault tree models for fault tolerant hypercube multiprocessors
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Tuazon, Jezus O.
1991-01-01
Three candidate fault tolerant hypercube architectures are modeled, their reliability analyses are compared, and the resulting implications of these methods of incorporating fault tolerance into hypercube multiprocessors are discussed. In the course of performing the reliability analyses, the use of HARP and fault trees in modeling sequence dependent system behaviors is demonstrated.
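For intuition about what such reliability analyses compute, here is a textbook static k-of-n evaluation with exponentially distributed component lifetimes. The numbers are illustrative, and sequence-dependent behaviors (the reason HARP-style dynamic modeling is needed) are outside this closed form.

```python
# Combinatorial reliability of a k-of-n redundant subsystem with
# exponentially distributed component lifetimes:
#   R_comp(t) = exp(-lambda * t); system survives while >= k of n do.
# Static model only; imperfect coverage and other sequence-dependent
# effects require dynamic (e.g., HARP) treatment.
import math

def k_of_n_reliability(k: int, n: int, lam: float, t: float) -> float:
    r = math.exp(-lam * t)  # single-component reliability at time t
    return sum(math.comb(n, i) * r**i * (1 - r)**(n - i)
               for i in range(k, n + 1))

# Example: 16 processors, usable while at least 14 survive, failure
# rate 1e-4 per hour, 1000-hour mission (illustrative numbers).
print(f"{k_of_n_reliability(14, 16, 1e-4, 1000.0):.6f}")
```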
Web based aphasia test using service oriented architecture (SOA)
NASA Astrophysics Data System (ADS)
Voos, J. A.; Vigliecca, N. S.; Gonzalez, E. A.
2007-11-01
Based on an aphasia test for Spanish speakers that analyzes the patient's basic resources of verbal communication, web-enabled software was developed to automate its execution. A clinical database was designed as a complement, in order to evaluate the antecedents (risk factors, pharmacological and medical backgrounds, neurological or psychiatric symptoms, brain injury - anatomical and physiological characteristics - etc.) which are necessary to carry out a multi-factor statistical analysis in different samples of patients. The automated test was developed following a service-oriented architecture and implemented in a web site containing a test suite, which would allow both integrating the aphasia test with other neuropsychological instruments and increasing the available site information for scientific research. The test design, the database, and the study of its psychometric properties (validity, reliability, and objectivity) were made in conjunction with neuropsychological researchers, who participated actively in the software design, based on feedback from the patients or other subjects of investigation.
Analysis of NASA communications (Nascom) II network protocols and performance
NASA Technical Reports Server (NTRS)
Omidyar, Guy C.; Butler, Thomas E.
1991-01-01
The NASA Communications (Nascom) Division of the Mission Operations and Data Systems Directorate is to undertake a major initiative to develop the Nascom II (NII) network to achieve its long-range service objectives for operational data transport to support the Space Station Freedom Program, the Earth Observing System, and other projects. NII is the Nascom ground communications network being developed to accommodate the operational traffic of the mid-1990s and beyond. The authors describe various baseline protocol architectures based on current and evolving technologies. They address the internetworking issues suggested for reliable transfer of data over heterogeneous segments. They also describe the NII architecture, topology, system components, and services. A comparative evaluation of the current and evolving technologies was made, and suggestions for further study are described. It is shown that the direction of the NII configuration and the subsystem component design will clearly depend on the advances made in the area of broadband integrated services.
The component-based architecture of the HELIOS medical software engineering environment.
Degoulet, P; Jean, F C; Engelmann, U; Meinzer, H P; Baud, R; Sandblad, B; Wigertz, O; Le Meur, R; Jagermann, C
1994-12-01
The constitution of highly integrated health information networks and the growth of multimedia technologies raise new challenges for the development of medical applications. We describe in this paper the general architecture of the HELIOS medical software engineering environment devoted to the development and maintenance of multimedia distributed medical applications. HELIOS is made of a set of software components, federated by a communication channel called the HELIOS Unification Bus. The HELIOS kernel includes three main components: the Analysis and Design Environment, the Object Information System, and the Interface Manager. HELIOS services consist of a collection of toolkits providing the necessary facilities to medical application developers. They include Image-Related services, a Natural Language Processor, a Decision Support System, and Connection services. The project gives special attention to both object-oriented approaches and software re-usability, which are considered crucial steps towards the development of more reliable, coherent and integrated applications.
System Architecture Modeling for Technology Portfolio Management using ATLAS
NASA Technical Reports Server (NTRS)
Thompson, Robert W.; O'Neil, Daniel A.
2006-01-01
Strategic planners and technology portfolio managers have traditionally relied on consensus-based tools, such as Analytical Hierarchy Process (AHP) and Quality Function Deployment (QFD), in planning the funding of technology development. While useful to a certain extent, these tools are limited in the ability to fully quantify the impact of a technology choice on system mass, system reliability, project schedule, and lifecycle cost. The Advanced Technology Lifecycle Analysis System (ATLAS) aims to provide strategic planners a decision support tool for analyzing technology selections within a Space Exploration Architecture (SEA). Using ATLAS, strategic planners can select physics-based system models from a library, configure the systems with technologies and performance parameters, and plan the deployment of a SEA. Key parameters for current and future technologies have been collected from subject-matter experts and other documented sources in the Technology Tool Box (TTB). ATLAS can be used to compare the technical feasibility and economic viability of a set of technology choices for one SEA, and compare it against another set of technology choices or another SEA. System architecture modeling in ATLAS is a multi-step process. First, the modeler defines the system-level requirements. Second, the modeler identifies technologies of interest whose impact on an SEA is to be evaluated. Third, the system modeling team creates models of architecture elements (e.g., launch vehicles, in-space transfer vehicles, crew vehicles) if they are not already in the model library. Finally, the architecture modeler develops a script for the ATLAS tool to run, and the results for comparison are generated.
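The kind of comparison ATLAS automates can be caricatured in a few lines: score the same architecture element under two technology sets using simple parametric models of mass and cost. Every name and number below is a hypothetical placeholder, not Technology Tool Box data.

```python
# Toy parametric comparison of two technology sets for one architecture
# element. Parameter names and all numbers are hypothetical placeholders.

def element_mass_kg(dry_mass: float, power_kw: float,
                    specific_mass: float) -> float:
    """Element mass = dry mass + power-system mass scaled by technology."""
    return dry_mass + power_kw * specific_mass

tech_sets = {
    "solar_arrays_baseline": {"specific_mass": 25.0, "cost_per_kg": 90e3},
    "solar_arrays_advanced": {"specific_mass": 12.0, "cost_per_kg": 140e3},
}

for name, tech in tech_sets.items():
    mass = element_mass_kg(dry_mass=4500.0, power_kw=40.0,
                           specific_mass=tech["specific_mass"])
    cost = mass * tech["cost_per_kg"]
    print(f"{name}: mass = {mass:.0f} kg, cost = ${cost/1e6:.1f}M")
```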
Preliminary Exploration of Encounter During Transit Across Southern Africa
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stroud, Phillip David; Cuellar-Hengartner, Leticia; Kubicek, Deborah Ann
Los Alamos National Laboratory (LANL) is utilizing the Probability Effectiveness Methodology (PEM) tools, particularly the Pathway Analysis, Threat Response and Interdiction Options Tool (PATRIOT) to support the DNDO Architecture and Planning Directorate's (APD) development of a multi-region terrorist risk assessment tool. The effort is divided into three stages. The first stage is an exploration of what can be done with PATRIOT essentially as is, to characterize encounter rate during transit across a single selected region. The second stage is to develop, condition, and implement required modifications to the data and conduct analysis to generate a well-founded assessment of the transit reliability across that selected region, and to identify any issues in the process. The final stage is to extend the work to a full multi-region global model. This document provides the results of the first stage, namely preliminary explorations with PATRIOT to assess the transit reliability across the region of southern Africa.
Piromalis, Dimitrios; Arvanitis, Konstantinos
2016-08-04
Wireless Sensor and Actuator Networks (WSANs) constitute one of the most challenging technologies, with tremendous socio-economic impact, for the next decade. Functionally and energy-optimized hardware systems and development tools may be the most critical facet of this technology for the achievement of such prospects. Especially in the area of agriculture, where the hostile operating environment adds to the general technological and technical issues, reliable and robust WSAN systems are mandatory. This paper focuses on the hardware design architectures of WSANs for real-world agricultural applications. It presents the available alternatives in hardware design and identifies their difficulties and problems for real-life implementations. The paper introduces SensoTube, a new WSAN hardware architecture, which is proposed as a solution to the various existing design constraints of WSANs. The establishment of the proposed architecture is based, first, on an abstraction approach in the functional requirements context and, second, on the standardization of the subsystems connectivity, in order to allow for an open, expandable, flexible, reconfigurable, energy-optimized, reliable and robust hardware system. The SensoTube implementation reference model, together with its encapsulation design and installation, are analyzed and presented in detail. Furthermore, as a proof of concept, certain use cases have been studied in order to demonstrate the benefits of migrating existing designs based on the available open-source hardware platforms to the SensoTube architecture.
Robot Electronics Architecture
NASA Technical Reports Server (NTRS)
Garrett, Michael; Magnone, Lee; Aghazarian, Hrand; Baumgartner, Eric; Kennedy, Brett
2008-01-01
An electronics architecture has been developed to enable the rapid construction and testing of prototypes of robotic systems. This architecture is designed to be a research vehicle of great stability, reliability, and versatility. A system according to this architecture can easily be reconfigured (including expanded or contracted) to satisfy a variety of needs with respect to input, output, processing of data, sensing, actuation, and power. The architecture affords a variety of expandable input/output options that enable ready integration of instruments, actuators, sensors, and other devices as independent modular units. The separation of different electrical functions onto independent circuit boards facilitates the development of corresponding simple and modular software interfaces. As a result, both hardware and software can be made to expand or contract in modular fashion while expending a minimum of time and effort.
Ferguson, Michael A.; Anderson, Jeffrey S.; Spreng, R. Nathan
2017-01-01
Human intelligence has been conceptualized as a complex system of dissociable cognitive processes, yet studies investigating the neural basis of intelligence have typically emphasized the contributions of discrete brain regions or, more recently, of specific networks of functionally connected regions. Here we take a broader, systems perspective in order to investigate whether intelligence is an emergent property of synchrony within the brain’s intrinsic network architecture. Using a large sample of resting-state fMRI and cognitive data (n = 830), we report that the synchrony of functional interactions within and across distributed brain networks reliably predicts fluid and flexible intellectual functioning. By adopting a whole-brain, systems-level approach, we were able to reliably predict individual differences in human intelligence by characterizing features of the brain’s intrinsic network architecture. These findings hold promise for the eventual development of neural markers to predict changes in intellectual function that are associated with neurodevelopment, normal aging, and brain disease.
2014-01-01
Background Uncovering the complex transcriptional regulatory networks (TRNs) that underlie plant and animal development remains a challenge. However, a vast amount of data from public microarray experiments is available, which can be subject to inference algorithms in order to recover reliable TRN architectures. Results In this study we present a simple bioinformatics methodology that uses public, carefully curated microarray data and the mutual information algorithm ARACNe in order to obtain a database of transcriptional interactions. We used data from Arabidopsis thaliana root samples to show that the transcriptional regulatory networks derived from this database successfully recover previously identified root transcriptional modules and to propose new transcription factors for the SHORT ROOT/SCARECROW and PLETHORA pathways. We further show that these networks are a powerful tool to integrate and analyze high-throughput expression data, as exemplified by our analysis of a SHORT ROOT induction time-course microarray dataset, and are a reliable source for the prediction of novel root gene functions. In particular, we used our database to predict novel genes involved in root secondary cell-wall synthesis and identified the MADS-box TF XAL1/AGL12 as an unexpected participant in this process. Conclusions This study demonstrates that network inference using carefully curated microarray data yields reliable TRN architectures. In contrast to previous efforts to obtain root TRNs, that have focused on particular functional modules or tissues, our root transcriptional interactions provide an overview of the transcriptional pathways present in Arabidopsis thaliana roots and will likely yield a plethora of novel hypotheses to be tested experimentally. PMID:24739361
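The core of the inference step can be sketched in miniature: score gene pairs by mutual information over discretized expression profiles, then apply ARACNe's data processing inequality (DPI) to prune the weakest edge of each fully connected triplet. The toy profiles below are invented, and a real run would add the bootstrapping and significance thresholds this sketch omits.

```python
# Miniature ARACNe-style inference: pairwise mutual information over
# discretized expression profiles, then DPI pruning of the weakest edge
# in every fully connected gene triplet. Toy data for illustration.
import itertools, math
from collections import Counter

def mutual_info(x, y):
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

# Toy discretized profiles (low/mid/high = 0/1/2) per gene.
expr = {"SHR": [0, 1, 2, 2, 1, 0], "SCR": [0, 1, 2, 2, 1, 1],
        "TF_X": [2, 1, 0, 0, 1, 2]}

mi = {frozenset(p): mutual_info(expr[p[0]], expr[p[1]])
      for p in itertools.combinations(expr, 2)}
edges = set(mi)
for a, b, c in itertools.combinations(expr, 3):  # DPI pruning
    tri = [frozenset((a, b)), frozenset((a, c)), frozenset((b, c))]
    if all(e in edges for e in tri):
        edges.discard(min(tri, key=mi.get))      # drop weakest edge

print([tuple(e) for e in edges])
```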
NASA Technical Reports Server (NTRS)
Shyy, Dong-Jye; Redman, Wayne
1993-01-01
For the next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-point services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternate multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and other switch architectures which have become commercially available as large scale integration (LSI) devices.
Bravo, Ignacio; Mazo, Manuel; Lázaro, José L.; Gardel, Alfredo; Jiménez, Pedro; Pizarro, Daniel
2010-01-01
This paper presents a complete implementation of the Principal Component Analysis (PCA) algorithm in Field Programmable Gate Array (FPGA) devices applied to high rate background segmentation of images. The classical sequential execution of different parts of the PCA algorithm has been parallelized. This parallelization has led to the specific development and implementation in hardware of the different stages of PCA, such as computation of the correlation matrix, matrix diagonalization using the Jacobi method, and subspace projections of images. On the application side, the paper presents a motion detection algorithm, also entirely implemented on the FPGA, and based on the developed PCA core. The algorithm dynamically thresholds the differences between the input image and its reconstruction in the PCA linear subspace previously obtained as a background model. The proposal achieves a high image-processing rate (up to 120 frames per second) and high quality segmentation results, with a completely embedded and reliable hardware architecture based on commercial CMOS sensors and FPGA devices. PMID:22163406
Inferring Domain-Domain Interactions from Protein-Protein Interactions with Formal Concept Analysis
Khor, Susan
2014-01-01
Identifying reliable domain-domain interactions will increase our ability to predict novel protein-protein interactions, to unravel interactions in protein complexes, and thus gain more information about the function and behavior of genes. One of the challenges of identifying reliable domain-domain interactions is domain promiscuity. Promiscuous domains are domains that can occur in many domain architectures and are therefore found in many proteins. This becomes a problem for a method where the score of a domain-pair is the ratio between observed and expected frequencies because the protein-protein interaction network is sparse. As such, many protein-pairs will be non-interacting and domain-pairs with promiscuous domains will be penalized. This domain promiscuity challenge to the problem of inferring reliable domain-domain interactions from protein-protein interactions has been recognized, and a number of work-arounds have been proposed. This paper reports on an application of Formal Concept Analysis to this problem. It is found that the relationship between formal concepts provides a natural way for rare domains to elevate the rank of promiscuous domain-pairs and enrich highly ranked domain-pairs with reliable domain-domain interactions. This piggybacking of promiscuous domain-pairs onto less promiscuous domain-pairs is possible only with concept lattices whose attribute-labels are not reduced and is enhanced by the presence of proteins that comprise both promiscuous and rare domains. PMID:24586450
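As a rough illustration of the formal concept machinery underlying this approach, the sketch below enumerates all formal concepts of a toy protein-domain context by brute force; the protein and domain names are invented placeholders, and the enumeration is a generic textbook construction rather than the paper's algorithm.

```python
from itertools import combinations

# Toy formal context: proteins (objects) x domains (attributes).
# All names are illustrative placeholders, not data from the paper.
context = {
    "protA": {"d_kinase", "d_SH3"},
    "protB": {"d_kinase", "d_SH3", "d_WD40"},
    "protC": {"d_WD40"},
}

objects = set(context)
attributes = set().union(*context.values())

def common_attrs(objs):
    """Attributes shared by every object in objs (the 'prime' operator)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def common_objs(attrs):
    """Objects possessing every attribute in attrs."""
    return {o for o in objects if attrs <= context[o]}

# A formal concept is a pair (A, B) with A' = B and B' = A; closing every
# object subset produces all of them (feasible only for tiny contexts).
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(sorted(objects), r):
        b = common_attrs(set(objs))
        a = common_objs(b)
        concepts.add((frozenset(a), frozenset(b)))

for a, b in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(a), "<->", sorted(b))
```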
NASA Astrophysics Data System (ADS)
Messerotti, Mauro; Otruba, Wolfgang; Hanslmeier, Arnold
2000-06-01
The Kanzelhoehe Solar Observatory is an observing facility located in Carinthia (Austria) and operated by the Institute of Geophysics, Astrophysics and Meteorology of the Karl-Franzens University Graz. A set of instruments for solar surveillance at different wavelength bands is continuously operated in automatic mode and is presently being upgraded to supply near-real-time solar activity indexes for space weather applications. In this frame, we tested a low-end software/hardware architecture running on the PC platform in a non-homogeneous, remotely distributed environment that allows efficient or moderately efficient application sharing at the Intranet and Extranet (i.e., Wide Area Network) levels, respectively. Due to the geographical distribution of the participating teams (Trieste, Italy; Kanzelhoehe and Graz, Austria), we have been using these features for collaborative remote software development and testing, data analysis and calibration, and observing run emulation from multiple sites as well. In this work, we describe the architecture used and its performance, based on a series of application sharing tests we carried out to ascertain its effectiveness in real collaborative remote work, observations and data exchange. The system proved to be reliable at the Intranet level for most distributed tasks, limited to less demanding ones at the Extranet level, but quite effective in remote instrument control when real-time response is not needed.
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by system components during their execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed in various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component of the monitoring architecture that reduces the volume of event traffic flow in the system, and thereby the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications); this architecture is used to monitor a collaborative distance-learning application for obtaining debugging and feedback information, and it supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
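As a rough sketch of predicate-based event filtering of the kind described above, the example below drops non-matching events at the filter instead of forwarding all traffic to the management applications; the Event fields and the subscribe/publish API are hypothetical, not the paper's actual interfaces.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Event:
    source: str
    kind: str
    payload: dict

@dataclass
class EventFilter:
    """Forwards only events matching at least one subscription predicate."""
    subscriptions: List[Tuple[Callable[[Event], bool],
                              Callable[[Event], None]]] = field(default_factory=list)

    def subscribe(self, predicate, sink):
        self.subscriptions.append((predicate, sink))

    def publish(self, event):
        for predicate, sink in self.subscriptions:
            if predicate(event):
                sink(event)  # deliver; non-matching events are dropped here

f = EventFilter()
f.subscribe(lambda e: e.kind == "error", lambda e: print("alert:", e.source, e.payload))
f.publish(Event("node-7", "heartbeat", {}))          # filtered out, no traffic
f.publish(Event("node-7", "error", {"code": 503}))   # delivered to the tool
```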
Advanced Sea Base Enabler (ASE) Capstone Design Project
2009-09-21
Additionally, a study that examines a potential fleet architecture, which looks at a combination of sea base enabler platforms in order to close current...This change in premise spawned a post-Cold War naval intellectual renaissance, reflected in several Department of the Navy (DON) “white papers...information collected regarding the various systems is reliable. 3. Primary Areas of Focus Detailed engineering analyses, naval architecture or other
FY04 Advanced Life Support Architecture and Technology Studies: Mid-Year Presentation
NASA Technical Reports Server (NTRS)
Lange, Kevin; Anderson, Molly; Duffield, Bruce; Hanford, Tony; Jeng, Frank
2004-01-01
Long-Term Objective: Identify optimal advanced life support system designs that meet existing and projected requirements for future human spaceflight missions. a) Include failure-tolerance, reliability, and safe-haven requirements. b) Compare designs based on multiple criteria including equivalent system mass (ESM), technology readiness level (TRL), simplicity, commonality, etc. c) Develop and evaluate new, more optimal, architecture concepts and technology applications.
"Fly-by-Wireless" : A Revolution in Aerospace Architectures for Instrumentation and Control
NASA Technical Reports Server (NTRS)
Studor, George F.
2007-01-01
The conference presentation provides background information on Fly-by-Wireless technologies as well as reasons for implementation, CANEUS project goals, cost of change for instrumentation, reliability, focus areas, conceptual Hybrid SHMS architecture for future space habitats, real world problems that the technology can solve, evolution of Micro-WIS systems, and a WLEIDS system overview and end-to-end system design.
Three real-time architectures - A study using reward models
NASA Technical Reports Server (NTRS)
Sjogren, J. A.; Smith, R. M.
1990-01-01
Numerous applications in the area of computer system analysis can be effectively studied with Markov reward models. These models describe the evolutionary behavior of the computer system by a continuous-time Markov chain, and a reward rate is associated with each state. In reliability/availability models, up states have reward rate 1 and down states have reward rate 0. In a combined model of performance and reliability, the reward rate of a state may be the computational capacity or a related performance measure. Steady-state expected reward rate and expected instantaneous reward rate are clearly useful measures which can be extracted from the Markov reward model. The diversity of areas where Markov reward models may be used is illustrated with a comparative study of three examples of interest to the fault-tolerant computing community.
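A minimal numerical sketch of such a model, assuming a two-state up/down availability chain with illustrative failure and repair rates: the steady-state expected reward rate follows from solving pi Q = 0 and weighting each state by its reward rate.

```python
import numpy as np

# Two-state availability model: state 0 = up (reward 1), state 1 = down (reward 0).
lam, mu = 1e-4, 1e-1          # illustrative failure and repair rates (per hour)
Q = np.array([[-lam,  lam],   # generator matrix of the continuous-time Markov chain
              [  mu,  -mu]])
reward = np.array([1.0, 0.0])

# Steady state: solve pi Q = 0 subject to the probabilities summing to 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state expected reward rate:", pi @ reward)  # ~ mu / (lam + mu)
```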
A candidate architecture for monitoring and control in chemical transfer propulsion systems
NASA Technical Reports Server (NTRS)
Binder, Michael P.; Millis, Marc G.
1990-01-01
To support the exploration of space, a reusable space-based rocket engine must be developed. This engine must sustain superior operability and man-rated levels of reliability over several missions with limited maintenance or inspection between flights. To meet these requirements, an expander cycle engine incorporating a highly capable control and health monitoring system is planned. Alternatives for the functional organization and the implementation architecture of the engine's monitoring and control system are discussed. On the basis of this discussion, a decentralized architecture is favored. The trade-offs between several implementation options are outlined and future work is proposed.
Advanced techniques in reliability model representation and solution
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Nicol, David M.
1992-01-01
The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
Achieving Lights-Out Operation of SMAP Using Ground Data System Automation
NASA Technical Reports Server (NTRS)
Sanders, Antonio
2013-01-01
The approach used in the SMAP ground data system to provide reliable, automated capabilities to conduct unattended operations has been presented. The impacts of automation on the ground data system architecture were discussed, including the three major automation patterns identified for SMAP and how these patterns address the operations use cases. The architecture and approaches used by SMAP will set the baseline for future JPL Earth Science missions.
Advanced Exploration Systems Atmosphere Resource Recovery and Environmental Monitoring
NASA Technical Reports Server (NTRS)
Perry, J.; Abney, M.; Conrad, R.; Garber, A.; Howard, D.; Kayatin, M.; Knox, J.; Newton, R.; Parrish, K.; Roman, M.
2016-01-01
In September 2011, the Atmosphere Resource Recovery and Environmental Monitoring (ARREM) project was commissioned by NASA's Advanced Exploration Systems program to advance Atmosphere Revitalization Subsystem (ARS) and Environmental Monitoring Subsystem (EMS) technologies for enabling future crewed space exploration missions beyond low Earth orbit. The ARREM project's period of performance covered U.S. Government fiscal years 2012-2014. The ARREM project critically assessed the International Space Station (ISS) ARS and EMS architectures and process technologies as the foundation for an architecture suitable for deep space exploration vehicles. The project's technical content included tasks focused on improving the reliability and life cycle cost of ARS and EMS technologies as well as reducing future flight project developmental risk and design, development, test, and evaluation costs. Targeted technology development and maturation tasks, including key technical trade assessments, were accomplished and integrated ARS architectures were demonstrated. The ARREM project developed, demonstrated, and tested leading process technology candidates and subsystem architectures that met or exceeded key figures of merit, addressed capability gaps, and significantly improved the efficiency, safety, and reliability over the state-of-the-art ISS figures of merit. Promising EMS instruments were developed and functionally demonstrated in a simulated cabin environment. The project's technical approach and results are described and recommendations for continued development are provided.
NASA Technical Reports Server (NTRS)
Perry, Jay L.; Abney, Morgan B.; Knox, James C.; Parrish, Keith J.; Roman, Monserrate C.; Jan, Darrell L.
2012-01-01
Exploring the frontiers of deep space continues to be defined by the technological challenges presented by safely transporting a crew to and from destinations of scientific interest. Living and working on that frontier requires highly reliable and efficient life support systems that employ robust, proven process technologies. The International Space Station (ISS), including its environmental control and life support (ECLS) system, is the platform from which humanity's deep space exploration missions begin. The ISS ECLS system Atmosphere Revitalization (AR) subsystem and environmental monitoring (EM) technical architecture aboard the ISS is evaluated as the starting basis for a developmental effort being conducted by the National Aeronautics and Space Administration (NASA) via the Advanced Exploration Systems (AES) Atmosphere Resource Recovery and Environmental Monitoring (ARREM) project. An evolutionary approach is employed by the ARREM project to address the strengths and weaknesses of the ISS AR subsystem and EM equipment, core technologies, and operational approaches to reduce developmental risk, improve functional reliability, and lower lifecycle costs of an ISS-derived subsystem architecture suitable for use for crewed deep space exploration missions. The most promising technical approaches to an ISS-derived subsystem design architecture that incorporates promising core process technology upgrades will be matured through a series of integrated tests and architectural trade studies encompassing expected exploration mission requirements and constraints.
Li, Zheng; Zhang, Hai; Zhou, Qifan; Che, Huan
2017-01-01
The main objective of the introduced study is to design an adaptive Inertial Navigation System/Global Navigation Satellite System (INS/GNSS) tightly-coupled integration system that can provide more reliable navigation solutions by making full use of an adaptive Kalman filter (AKF) and satellite selection algorithm. To achieve this goal, we develop a novel redundant measurement noise covariance estimation (RMNCE) theorem, which adaptively estimates measurement noise properties by analyzing the difference sequences of system measurements. The proposed RMNCE approach is then applied to design both a modified weighted satellite selection algorithm and a type of adaptive unscented Kalman filter (UKF) to improve the performance of the tightly-coupled integration system. In addition, an adaptive measurement noise covariance expanding algorithm is developed to mitigate outliers when facing heavy multipath and other harsh situations. Both semi-physical simulation and field experiments were conducted to evaluate the performance of the proposed architecture and were compared with state-of-the-art algorithms. The results validate that the RMNCE provides a significant improvement in the measurement noise covariance estimation and the proposed architecture can improve the accuracy and reliability of the INS/GNSS tightly-coupled systems. The proposed architecture can effectively limit positioning errors under conditions of poor GNSS measurement quality and outperforms all the compared schemes. PMID:28872629
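The RMNCE derivation itself is given in the paper; the sketch below only illustrates the underlying observation that differencing redundant measurements cancels the common signal and exposes the noise statistics. The signal model and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=1000) * 0.05)   # slowly varying true quantity

# Two redundant sensors observing x with independent noise (sigma assumed 0.3).
sigma = 0.3
z1 = x + rng.normal(scale=sigma, size=x.size)
z2 = x + rng.normal(scale=sigma, size=x.size)

# The truth cancels in the difference sequence, leaving only noise:
# var(z1 - z2) = R1 + R2, so for identical sensors R ~ var(d) / 2.
d = z1 - z2
R_est = np.var(d) / 2
print(f"estimated R = {R_est:.3f} (true R = {sigma**2:.3f})")
```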
Fault-tolerant onboard digital information switching and routing for communications satellites
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Quintana, Jorge A.; Soni, Nitin J.; Kim, Heechul
1993-01-01
The NASA Lewis Research Center is developing an information-switching processor for future meshed very-small-aperture terminal (VSAT) communications satellites. The information-switching processor will switch and route baseband user data onboard the VSAT satellite to connect thousands of Earth terminals. Fault tolerance is a critical issue in developing information-switching processor circuitry that will provide and maintain reliable communications services. In parallel with the conceptual development of the meshed VSAT satellite network architecture, NASA designed and built a simple test bed for developing and demonstrating baseband switch architectures and fault-tolerance techniques. The meshed VSAT architecture and the switching demonstration test bed are described, and the initial switching architecture and the fault-tolerance techniques that were developed and tested are discussed.
NASA Technical Reports Server (NTRS)
Boulanger, Richard P., Jr.; Kwauk, Xian-Min; Stagnaro, Mike; Kliss, Mark (Technical Monitor)
1998-01-01
The BIO-Plex control system requires real-time, flexible, and reliable data delivery. There is no simple "off-the-shelf" solution. However, several commercial packages will be evaluated using a testbed at ARC for publish-and-subscribe and client-server communication architectures. A point-to-point communication architecture is not suitable for the real-time BIO-Plex control system. A client-server architecture provides more flexible data delivery, but it does not provide direct communication among nodes on the network. A publish-and-subscribe implementation allows direct information exchange among nodes on the net, providing the best time-critical communication. In this work, Network Data Delivery Service (NDDS) from Real-Time Innovations, Inc. (RTI) will be used to implement the publish-and-subscribe architecture. It offers update guarantees and deadlines for real-time data delivery. BridgeVIEW, a data acquisition and control software package from National Instruments, will be tested for the client-server arrangement. A microwave incinerator located at ARC will be instrumented with a fieldbus network of control devices. BridgeVIEW will be used to implement an enterprise server. An enterprise network consisting of several nodes at ARC and a WAN connecting ARC and RISC will then be set up to evaluate the proposed control system architectures. Several network configurations will be evaluated for fault tolerance, quality of service, reliability and efficiency. Data acquired from these network evaluation tests will then be used to determine preliminary design criteria for the BIO-Plex distributed control system.
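As a rough sketch of why publish-and-subscribe permits direct information exchange among nodes, the toy topic bus below decouples producers from consumers so that one sample reaches every interested node; it is illustrative only and is not the NDDS or BridgeVIEW API.

```python
from collections import defaultdict

class Bus:
    """Minimal topic-based publish-and-subscribe bus (illustrative only):
    publishers and subscribers never address each other directly."""
    def __init__(self):
        self.topics = defaultdict(list)

    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, value):
        for callback in self.topics[topic]:
            callback(value)

bus = Bus()
bus.subscribe("incinerator/temp", lambda v: print("controller sees", v))
bus.subscribe("incinerator/temp", lambda v: print("logger sees", v))
bus.publish("incinerator/temp", 431.5)   # one sample, many consumers
```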
Piromalis, Dimitrios; Arvanitis, Konstantinos
2016-01-01
Wireless Sensor and Actuator Networks (WSANs) constitute one of the most challenging technologies with tremendous socio-economic impact for the next decade. Functionally and energy-optimized hardware systems and development tools are perhaps the most critical facet of this technology for the achievement of such prospects. Especially in the area of agriculture, where the hostile operating environment compounds the general technological and technical issues, reliable and robust WSAN systems are mandatory. This paper focuses on the hardware design architectures of WSANs for real-world agricultural applications. It presents the available alternatives in hardware design and identifies their difficulties and problems for real-life implementations. The paper introduces SensoTube, a new WSAN hardware architecture, which is proposed as a solution to the various existing design constraints of WSANs. The establishment of the proposed architecture is based, first, on an abstraction approach in the functional requirements context and, second, on the standardization of the subsystems connectivity, in order to allow for an open, expandable, flexible, reconfigurable, energy-optimized, reliable and robust hardware system. The SensoTube implementation reference model, together with its encapsulation design and installation, are analyzed and presented in detail. Furthermore, as a proof of concept, certain use cases have been studied in order to demonstrate the benefits of migrating existing designs based on the available open-source hardware platforms to the SensoTube architecture. PMID:27527180
Developing Architectures and Technologies for an Evolvable NASA Space Communication Infrastructure
NASA Technical Reports Server (NTRS)
Bhasin, Kul; Hayden, Jeffrey
2004-01-01
Space communications architecture concepts play a key role in the development and deployment of NASA's future exploration and science missions. Once a mission is deployed, the communication link to the user needs to provide maximum information delivery and flexibility to handle the expected large and complex data sets and to enable direct interaction with the spacecraft and experiments. In human and robotic missions, communication systems need to offer maximum reliability with robust two-way links for software uploads and virtual interactions. Identifying the capabilities needed to cost-effectively meet the demanding space communication needs of 21st-century missions, properly formulating the requirements for these missions, and identifying the early technology developments that will be needed are tasks that can only be resolved through architecture design. This paper will describe the development of evolvable space communication architecture models and the technologies needed to support Earth sensor web and collaborative observation formation missions; robotic scientific missions for detailed investigation of planets, moons, and small bodies in the solar system; human missions for exploration of the Moon, Mars, Ganymede, Callisto, and asteroids; human settlements in space, on the Moon, and on Mars; and great in-space observatories for observing other star systems and the universe. The resulting architectures will enable the reliable, multipoint, high-data-rate capabilities needed on demand to provide continuous, maximum coverage of areas of concentrated activities, such as in the vicinity of outposts in space, on the Moon or on Mars.
World Ships - Architectures & Feasibility Revisited
NASA Astrophysics Data System (ADS)
Hein, A. M.; Pak, M.; Putz, D.; Buhler, C.; Reiss, P.
A world ship is a concept for manned interstellar flight. It is a huge, self-contained and self-sustained interstellar vehicle. It travels at a fraction of a per cent of the speed of light and needs several centuries to reach its target star system. The well-known world ship concept by Alan Bond and Anthony Martin was intended to show its principal feasibility. However, several important issues have not been addressed so far: the relationship between crew size and robustness of knowledge transfer, reliability, and alternative mission architectures. This paper addresses these gaps. Furthermore, it gives an update on target star system choice and develops possible mission architectures. The derived conclusions are: a large population size leads to robust knowledge transfer and cultural adaptation, and these processes can be improved by new technologies. World ship reliability depends on the availability of an automatic repair system, as in the case of the Daedalus probe. Star systems with habitable planets are probably farther away than systems with enough resources to construct space colonies. Therefore, missions to habitable planets have longer trip times and a higher risk of mission failure. On the other hand, the risk of constructing colonies is higher than that of establishing an initial settlement on a habitable planet. Mission architectures with precursor probes have the potential to significantly reduce trip and colonization risk without being significantly more costly than architectures without them. In summary, world ships remain an interesting concept, although they require a space colony-based civilization within our own solar system before becoming feasible.
Toward a Fault Tolerant Architecture for Vital Medical-Based Wearable Computing.
Abdali-Mohammadi, Fardin; Bajalan, Vahid; Fathi, Abdolhossein
2015-12-01
Advancements in computers and electronic technologies have led to the emergence of a new generation of efficient small intelligent systems. The products of such technologies include smartphones and wearable devices, which have attracted the attention of medical applications. These products are used less in critical medical applications because of their resource constraints and failure sensitivity: without safety considerations, small integrated hardware can endanger patients' lives. Therefore, guiding principles are required for constructing wearable healthcare systems so that these concerns are addressed. Accordingly, this paper proposes an architecture for constructing wearable systems in critical medical applications. The proposed architecture is a three-tier one, supporting data flow from body sensors to the cloud. The tiers of this architecture include wearable computers, mobile computing, and mobile cloud computing. One of the features of this architecture is its high achievable fault tolerance, due to the nature of its components. Moreover, the required protocols are presented to coordinate the components of this architecture. Finally, the reliability of this architecture is assessed by simulating the architecture and its components, and other aspects of the proposed architecture are discussed.
NASA Technical Reports Server (NTRS)
Mathur, F. P.
1972-01-01
Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.
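To indicate the kind of closed-form redundancy equations such a repository holds, the sketch below evaluates simplex versus triple-modular-redundancy reliability under a constant failure rate; the rate and mission time are illustrative, and this is a generic textbook model rather than CARE's actual equation library.

```python
import math

def simplex(lmbda, t):
    """Reliability of a single (simplex) unit with constant failure rate."""
    return math.exp(-lmbda * t)

def tmr(lmbda, t):
    """Triple modular redundancy with a perfect voter: 2 of 3 must survive."""
    r = simplex(lmbda, t)
    return 3 * r**2 - 2 * r**3

lmbda = 1e-4   # illustrative failure rate (per hour)
t = 1000.0     # mission time (hours)
print(f"simplex: {simplex(lmbda, t):.6f}   TMR: {tmr(lmbda, t):.6f}")
```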
A forward view on reliable computers for flight control
NASA Technical Reports Server (NTRS)
Goldberg, J.; Wensley, J. H.
1976-01-01
The requirements for fault-tolerant computers for flight control of commercial aircraft are examined; it is concluded that the reliability requirements far exceed those typically quoted for space missions. Examination of circuit technology and alternative computer architectures indicates that the desired reliability can be achieved with several different computer structures, though there are obvious advantages to those that are more economic, more reliable, and, very importantly, more certifiable as to fault tolerance. Progress in this field is expected to bring about better computer systems that are more rigorously designed and analyzed even though computational requirements are expected to increase significantly.
Combating the Reliability Challenge of GPU Register File at Low Supply Voltage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Jingweijia; Song, Shuaiwen; Yan, Kaige
Supply voltage reduction is an effective approach to significantly reduce GPU energy consumption. As the largest on-chip storage structure, the GPU register file becomes the reliability hotspot that prevents further supply voltage reduction below the safe limit (Vmin) due to process variation effects. This work addresses the reliability challenge of the GPU register file at low supply voltages, which is an essential first step for aggressive supply voltage reduction of the entire GPU chip. We propose GR-Guard, an architectural solution that leverages long register dead time to enable reliable operations from an unreliable register file at low voltages.
Control and Communication for a Secure and Reconfigurable Power Distribution System
NASA Astrophysics Data System (ADS)
Giacomoni, Anthony Michael
A major transformation is taking place throughout the electric power industry to overlay existing electric infrastructure with advanced sensing, communications, and control system technologies. This transformation to a smart grid promises to enhance system efficiency, increase system reliability, support the electrification of transportation, and provide customers with greater control over their electricity consumption. Upgrading control and communication systems for the end-to-end electric power grid, however, will present many new security challenges that must be dealt with before extensive deployment and implementation of these technologies can begin. In this dissertation, a comprehensive systems approach is taken to minimize and prevent cyber-physical disturbances to electric power distribution systems using sensing, communications, and control system technologies. To accomplish this task, an intelligent distributed secure control (IDSC) architecture is presented and validated in silico for distribution systems to provide greater adaptive protection, with the ability to proactively reconfigure, and rapidly respond to disturbances. Detailed descriptions of functionalities at each layer of the architecture as well as the whole system are provided. To compare the performance of the IDSC architecture with that of other control architectures, an original simulation methodology is developed. The simulation model integrates aspects of cyber-physical security, dynamic price and demand response, sensing, communications, intermittent distributed energy resources (DERs), and dynamic optimization and reconfiguration. Applying this comprehensive systems approach, performance results for the IEEE 123 node test feeder are simulated and analyzed. The results show the trade-offs between system reliability, operational constraints, and costs for several control architectures and optimization algorithms. Additional simulation results are also provided. In particular, the advantages of an IDSC architecture are highlighted when an intermittent DER is present on the system.
Commanding Constellations (Pipeline Architecture)
NASA Technical Reports Server (NTRS)
Ray, Tim; Condron, Jeff
2003-01-01
Providing ground command software for constellations of spacecraft is a challenging problem. Reliable command delivery requires a feedback loop; for a constellation there will likely be an independent feedback loop for each constellation member. Each command must be sent via the proper Ground Station, which may change from one contact to the next (and may be different for different members). Dynamic configuration of the ground command software is usually required (e.g. directives to configure each member's feedback loop and assign the appropriate Ground Station). For testing purposes, there must be a way to insert command data at any level in the protocol stack. The Pipeline architecture described in this paper can support all these capabilities with a sequence of software modules (the pipeline), and a single self-identifying message format (for all types of command data and configuration directives). The Pipeline architecture is quite simple, yet it can solve some complex problems. The resulting solutions are conceptually simple, and therefore, reliable. They are also modular, and therefore, easy to distribute and extend. We first used the Pipeline architecture to design a CCSDS (Consultative Committee for Space Data Systems) Ground Telecommand system (to command one spacecraft at a time with a fixed Ground Station interface). This pipeline was later extended to include gateways to any of several Ground Stations. The resulting pipeline was then extended to handle a small constellation of spacecraft. The use of the Pipeline architecture allowed us to easily handle the increasing complexity. This paper will describe the Pipeline architecture, show how it was used to solve each of the above commanding situations, and how it can easily be extended to handle larger constellations.
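A minimal sketch of the pipeline idea under the stated conventions: every message is self-identifying, and each module forwards whatever it does not handle, so stages can be inserted or replaced without disturbing the rest. The module names and message fields are hypothetical.

```python
def gateway(msg, send):
    # Wrap commands in a frame for the proper Ground Station; pass directives on.
    if msg["type"] == "command":
        send({"type": "frame", "station": msg.get("station", "GS-1"),
              "bytes": msg["data"].encode()})
    else:
        send(msg)

def radio(msg, send):
    # Terminal stage for frames; anything else continues down the pipeline.
    if msg["type"] == "frame":
        print(f"uplink via {msg['station']}: {msg['bytes']!r}")
    else:
        send(msg)

def run(pipeline, msg):
    # Feed a message to the head stage; its `send` hands results to the rest.
    if not pipeline:
        return
    head, *rest = pipeline
    head(msg, lambda m: run(rest, m))

run([gateway, radio], {"type": "command", "data": "SAFE_MODE", "station": "GS-2"})
```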
Wafer level reliability for high-performance VLSI design
NASA Technical Reports Server (NTRS)
Root, Bryan J.; Seefeldt, James D.
1987-01-01
As very-large-scale-integration architectures require higher packing density, the reliability of these devices has approached a critical level. Previous processing techniques allowed a large window for varying reliability. However, as scaling and higher current densities push reliability to its limit, tighter control and instant feedback become critical. Several test structures developed to monitor reliability at the wafer level are described. For example, a test structure was developed to monitor metal integrity in seconds as opposed to weeks or months for conventional testing. Another structure monitors mobile ion contamination at critical steps in the process. Thus the reliability jeopardy can be assessed during fabrication, preventing defective devices from ever being placed in the field. Most importantly, the reliability can be assessed on each wafer as opposed to an occasional sample.
A miniature on-chip multi-functional ECG signal processor with 30 µW ultra-low power consumption.
Liu, Xin; Zheng, Yuan Jin; Phyu, Myint Wai; Zhao, Bin; Je, Minkyu; Yuan, Xiao Jun
2010-01-01
In this paper, a miniature low-power electrocardiogram (ECG) signal processing application-specific integrated circuit (ASIC) chip is proposed. This chip provides multiple critical functions for ECG analysis using a systematic wavelet transform algorithm and a novel SRAM-based ASIC architecture, while achieving low cost and high performance. Using 0.18 µm CMOS technology and a 1 V power supply, this ASIC chip consumes only 29 µW and occupies an area of 3 mm². This on-chip ECG processor is highly suitable for reliable real-time cardiac status monitoring applications.
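The chip's wavelet algorithm is not specified in the abstract; as a generic illustration of the operation class, the sketch below applies one level of a Haar discrete wavelet transform, whose detail band highlights sharp transitions such as the QRS complex. The toy waveform is an invented stand-in for an ECG trace.

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform: the approximation
    band keeps low-frequency morphology, the detail band sharp transitions."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2]                 # truncate to even length
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

t = np.linspace(0, 1, 256)
ecg_like = np.sin(2 * np.pi * 1.2 * t) + (np.abs(t - 0.5) < 0.01) * 2.0  # toy beat
approx, detail = haar_dwt(ecg_like)
print(np.argmax(np.abs(detail)) * 2 / 256)   # detail peak lands near t = 0.5
```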
Programming Recognition Arrays through Double Chalcogen-Bonding Interactions.
Biot, Nicolas; Bonifazi, Davide
2018-04-11
In this work, we have programmed and synthesized a recognition motif constructed around a chalcogenazolo-pyridine scaffold (CGP) that, through the formation of frontal double chalcogen-bonding interactions, associates into dimeric EX-type complexes. The reliability of the double chalcogen-bonding interaction has been shown in the solid state by X-ray analysis, depicting the strongest recognition persistence for the Te congener. The high recognition fidelity, chemical and thermal stability, and easy derivatization at the 2-position make CGP a convenient motif for constructing supramolecular architectures through programmed chalcogen-bonding interactions.
NASA Technical Reports Server (NTRS)
Markley, R. W.; Williams, B. F.
1993-01-01
NASA has proposed missions to the Moon and Mars that reflect three areas of emphasis: human presence, exploration, and space resource development for the benefit of Earth. A major requirement for such missions is a robust and reliable communications architecture. Network management--the ability to maintain some degree of human and automatic control over the span of the network from the space elements to the end users on Earth--is required to realize such robust and reliable communications. Round-trip delays, such as the 5- to 40-min delays in the Mars case, introduce a host of problems that must be solved by delegating significant control authority to remote nodes; management hierarchy is therefore one of the important architectural issues. This article addresses these concerns and proposes a network management approach based on emerging standards that covers the needs for fault, configuration, and performance management, delegated control authority, and hierarchical reporting of events. A relatively simple approach based on standards was demonstrated in the DSN 2000 Information Systems Laboratory, and the results are described.
Quality Attributes for Mission Flight Software: A Reference for Architects
NASA Technical Reports Server (NTRS)
Wilmot, Jonathan; Fesq, Lorraine; Dvorak, Dan
2016-01-01
In the international standards for architecture descriptions in systems and software engineering (ISO/IEC/IEEE 42010), "concern" is a primary concept that often manifests itself in relation to the quality attributes or "ilities" that a system is expected to exhibit - qualities such as reliability, security and modifiability. One of the main uses of an architecture description is to serve as a basis for analyzing how well the architecture achieves its quality attributes, and that requires architects to be as precise as possible about what they mean in claiming, for example, that an architecture supports "modifiability." This paper describes a table, generated by NASA's Software Architecture Review Board, which lists fourteen key quality attributes, identifies different important aspects of each quality attribute and considers each aspect in terms of requirements, rationale, evidence, and tactics to achieve the aspect. This quality attribute table is intended to serve as a guide to software architects, software developers, and software architecture reviewers in the domain of mission-critical real-time embedded systems, such as space mission flight software.
Flow measurements in sewers based on image analysis: automatic flow velocity algorithm.
Jeanbourquin, D; Sage, D; Nguyen, L; Schaeli, B; Kayal, S; Barry, D A; Rossi, L
2011-01-01
Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques. The image-based water velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and of the image-processing algorithms in order to calculate flow velocity automatically on-line. Results of investigations conducted in a CSO are presented. The system was found to measure water velocities reliably, thereby providing the means to understand particular hydraulic behaviors.
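As an illustration of the displacement-estimation step that underlies such image-based velocimetry, the sketch below recovers an integer pixel shift between two frames by exhaustive cross-correlation; it is a simplified stand-in, not the paper's feature-tracking algorithm, and the frames are synthetic.

```python
import numpy as np

def patch_shift(prev, curr, max_shift=5):
    """Estimate the integer (dy, dx) displacement between two frames by
    exhaustive zero-mean cross-correlation over a small search window."""
    best, best_score = (0, 0), -np.inf
    p = prev - prev.mean()
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
            score = np.sum(p * (shifted - shifted.mean()))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(1)
frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (2, 3), axis=(0, 1))   # surface moved 2 px down, 3 px right
print(patch_shift(frame0, frame1))              # -> (2, 3)
# velocity = displacement * (metres per pixel) / (seconds between frames)
```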
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Dugan, Joanne Bechta; Trivedi, Kishor S.; Mittal, Nitin; Boyd, Mark A.; Geist, Robert M.; Smotherman, Mark D.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide range of reliable fault-tolerant system architectures; it is also applicable to electronic systems in general. The tool system was designed to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. Volume 1 provides an introduction to the HARP program. Comprehensive information on HARP mathematical models can be found in the references.
CCARES: A computer algorithm for the reliability analysis of laminated CMC components
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Gyekenyesi, John P.
1993-01-01
Structural components produced from laminated CMC (ceramic matrix composite) materials are being considered for a broad range of aerospace applications that include various structural components for the national aerospace plane, the space shuttle main engine, and advanced gas turbines. Specifically, these applications include segmented engine liners, small missile engine turbine rotors, and exhaust nozzles. Use of these materials allows for improvements in fuel efficiency due to increased engine temperatures and pressures, which in turn generate more power and thrust. Furthermore, this class of materials offers significant potential for raising the thrust-to-weight ratio of gas turbine engines by tailoring directions of high specific reliability. The emerging composite systems, particularly those with silicon nitride or silicon carbide matrix, can compete with metals in many demanding applications. Laminated CMC prototypes have already demonstrated functional capabilities at temperatures approaching 1400 C, which is well beyond the operational limits of most metallic materials. Laminated CMC material systems have several mechanical characteristics which must be carefully considered in the design process. Test bed software programs are needed that incorporate stochastic design concepts that are user friendly, computationally efficient, and have flexible architectures that readily incorporate changes in design philosophy. The CCARES (Composite Ceramics Analysis and Reliability Evaluation of Structures) program is representative of an effort to fill this need. CCARES is a public domain computer algorithm, coupled to a general purpose finite element program, which predicts the fast fracture reliability of a structural component under multiaxial loading conditions.
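Fast-fracture reliability prediction for ceramics conventionally rests on Weibull statistics; assuming a two-parameter uniaxial Weibull model summed over finite elements (a deliberate simplification of the multiaxial theories a code like CCARES implements), a sketch:

```python
import numpy as np

def survival_probability(stress, volume, sigma0, m):
    """Two-parameter Weibull fast-fracture model: an element of volume V at
    uniaxial stress sigma survives with P = exp(-V * (sigma / sigma0)**m);
    the component survives only if every element does."""
    stress = np.asarray(stress, dtype=float)
    volume = np.asarray(volume, dtype=float)
    risk = np.sum(volume * (np.clip(stress, 0.0, None) / sigma0) ** m)
    return np.exp(-risk)

# Illustrative element stresses (MPa) and volumes (mm^3) from a notional FE run.
stresses = [120.0, 180.0, 240.0]
volumes = [4.0, 2.0, 0.5]
print(survival_probability(stresses, volumes, sigma0=300.0, m=10.0))
```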
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garcia, Humberto E.; Simpson, Michael F.; Lin, Wen-Chiao
In this paper, we apply an advanced safeguards approach and associated methods for process monitoring to a hypothetical nuclear material processing system. The assessment regarding the state of the processing facility is conducted at a system-centric level formulated in a hybrid framework. This utilizes an architecture for integrating both time- and event-driven data and analysis for decision making. While the time-driven layers of the proposed architecture encompass more traditional process monitoring methods based on time series data and analysis, the event-driven layers encompass operation monitoring methods based on discrete event data and analysis. By integrating process- and operation-related information and methodologies within a unified framework, the task of anomaly detection is greatly improved. This is because decision-making can benefit not only from known time-series relationships among measured signals but also from known event sequence relationships among generated events. This available knowledge at both time series and discrete event layers can then be effectively used to synthesize observation solutions that optimally balance sensor and data processing requirements. The application of the proposed approach is then implemented on an illustrative monitored system based on pyroprocessing and results are discussed.
RAIN: A Bio-Inspired Communication and Data Storage Infrastructure.
Monti, Matteo; Rasmussen, Steen
2017-01-01
We summarize the results and perspectives from a companion article, where we presented and evaluated an alternative architecture for data storage in distributed networks. We name the bio-inspired architecture RAIN, and it offers file storage service that, in contrast with current centralized cloud storage, has privacy by design, is open source, is more secure, is scalable, is more sustainable, has community ownership, is inexpensive, and is potentially faster, more efficient, and more reliable. We propose that a RAIN-style architecture could form the backbone of the Internet of Things that likely will integrate multiple current and future infrastructures ranging from online services and cryptocurrency to parts of government administration.
STGT program: Ada coding and architecture lessons learned
NASA Technical Reports Server (NTRS)
Usavage, Paul; Nagurney, Don
1992-01-01
STGT (Second TDRSS Ground Terminal) is currently halfway through the System Integration Test phase (Level 4 Testing). To date, many software architecture and Ada language issues have been encountered and solved. This paper, which is the transcript of a presentation at the 3 Dec. meeting, attempts to define these lessons plus others learned regarding software project management and risk management issues, training, performance, reuse, and reliability. Observations are included regarding the use of particular Ada coding constructs, software architecture trade-offs during the prototyping, development and testing stages of the project, and dangers inherent in parallel or concurrent systems, software, hardware, and operations engineering.
Tera-node Network Technology (Task 3) Scalable Personal Telecommunications
2000-03-14
Simulation results of this work may be found in http://north.east.isi.edu/spt/audio.html. 6. Internet Research Task Force Reliable Multicast...Adaptation, 4. Multimedia Proxy Caching, 5. Experiments with the Rate Adaptation Protocol (RAP) 6. Providing leadership and innovation to the Internet ... Research Task Force (IRTF) Reliable Multicast Research Group (RMRG) 1. End-to-end Architecture for Quality-adaptive Streaming Applications over the
Uncertainties in building a strategic defense.
Zraket, C A
1987-03-27
Building a strategic defense against nuclear ballistic missiles involves complex and uncertain functional, spatial, and temporal relations. Such a defensive system would evolve and grow over decades. It is too complex, dynamic, and interactive to be fully understood initially by design, analysis, and experiments. Uncertainties exist in the formulation of requirements and in the research and design of a defense architecture that can be implemented incrementally and be fully tested to operate reliably. The analysis and measurement of system survivability, performance, and cost-effectiveness are critical to this process. Similar complexities exist for an adversary's system that would suppress or use countermeasures against a missile defense. Problems and opportunities posed by these relations are described, with emphasis on the unique characteristics and vulnerabilities of space-based systems.
Distributed Engine Control Empirical/Analytical Verification Tools
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan
2013-01-01
NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to assemble easily a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating distributed engine control system (DCS) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and the hardware simulator provide the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the C-MAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include the communication data network, smart sensor and actuator nodes, the centralized control system (FADEC, full-authority digital engine control), and the aircraft engine itself. The DECsim tool allows simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.
Real-Time and Secure Wireless Health Monitoring
Dağtaş, S.; Pekhteryev, G.; Şahinoğlu, Z.; Çam, H.; Challa, N.
2008-01-01
We present a framework for a wireless health monitoring system using wireless networks such as ZigBee. Vital signals are collected and processed using a 3-tiered architecture. The first stage is the mobile device carried on the body that runs a number of wired and wireless probes. This device is also designed to perform some basic processing such as the heart rate and fatal failure detection. At the second stage, further processing is performed by a local server using the raw data transmitted by the mobile device continuously. The raw data is also stored at this server. The processed data as well as the analysis results are then transmitted to the service provider center for diagnostic reviews as well as storage. The main advantages of the proposed framework are (1) the ability to detect signals wirelessly within a body sensor network (BSN), (2) low-power and reliable data transmission through ZigBee network nodes, (3) secure transmission of medical data over BSN, (4) efficient channel allocation for medical data transmission over wireless networks, and (5) optimized analysis of data using an adaptive architecture that maximizes the utility of processing and computational capacity at each platform. PMID:18497866
Medical Signal-Conditioning and Data-Interface System
NASA Technical Reports Server (NTRS)
Braun, Jeffrey; Jacobus, Charles; Booth, Scott; Suarez, Michael; Smith, Derek; Hartnagle, Jeffrey; LePrell, Glenn
2006-01-01
A general-purpose portable, wearable electronic signal-conditioning and data-interface system is being developed for medical applications. The system can acquire multiple physiological signals (e.g., electrocardiographic, electroencephalographic, and electromyographic signals) from sensors on the wearer's body, digitize those signals that are received in analog form, preprocess the resulting data, and transmit the data to one or more remote locations via a radio-communication link and/or the Internet. The system includes a computer running data-object-oriented software that can be programmed to configure the system to accept almost any analog or digital input signals from medical devices. The computing hardware and software implement a general-purpose data-routing-and-encapsulation architecture that supports tagging of input data and routing the data in a standardized way through the Internet and other modern packet-switching networks to one or more computers for review by physicians. The architecture supports multiple-site buffering of data for redundancy and reliability, and supports both real-time and slower-than-real-time collection, routing, and viewing of signal data. Routing and viewing stations support insertion of automated analysis routines to aid in encoding, analysis, viewing, and diagnosis.
Smart photonic networks and computer security for image data
NASA Astrophysics Data System (ADS)
Campello, Jorge; Gill, John T.; Morf, Martin; Flynn, Michael J.
1998-02-01
Work reported here is part of a larger project on 'Smart Photonic Networks and Computer Security for Image Data', studying the interactions of coding and security, switching architecture simulations, and basic technologies. Coding and security: coding methods that are appropriate for data security in data fusion networks were investigated. These networks have several characteristics that distinguish them from other currently employed networks, such as Ethernet LANs or the Internet. The most significant characteristics are very high maximum data rates; predominance of image data; narrowcasting - transmission of data from one source to a designated set of receivers; data fusion - combining related data from several sources; and simple sensor nodes with limited buffering. These characteristics affect both the lower-level network design and the higher-level coding methods. Data security encompasses privacy, integrity, reliability, and availability. Privacy, integrity, and reliability can be provided through encryption and coding for error detection and correction. Availability is primarily a network issue; network nodes must be protected against failure or routed around in the case of failure. One of the more promising techniques is the use of 'secret sharing'. We consider this method as a special case of our new space-time code diversity based algorithms for secure communication. These algorithms enable us to exploit parallelism and scalable multiplexing schemes to build photonic network architectures. A number of very high-speed switching and routing architectures and their relationships with very high performance processor architectures were studied. Indications are that routers for very high speed photonic networks can be designed using the very robust and distributed TCP/IP protocol, if suitable processor architecture support is available.
2nd Generation Reusable Launch Vehicle (2G RLV). Revised
NASA Technical Reports Server (NTRS)
Matlock, Steve; Sides, Steve; Kmiec, Tom; Arbogast, Tim; Mayers, Tom; Doehnert, Bill
2001-01-01
This is a revised final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, the reliability baseline (Space Shuttle Main Engine QRAS), component-level reliability/performance/cost for the six baseline cycles, and selection of three cycles for further study. This report further addresses technology improvement selection and component-level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans and recommendations for future studies.
Impact of coverage on the reliability of a fault tolerant computer
NASA Technical Reports Server (NTRS)
Bavuso, S. J.
1975-01-01
A mathematical reliability model is established for a reconfigurable fault-tolerant avionic computer system utilizing state-of-the-art computers. System reliability is studied in light of the coverage probabilities associated with the first and second independent hardware failures. Coverage models are presented as a function of detection, isolation, and recovery probabilities. Upper and lower bounds are established for the coverage probabilities, and a method for computing values for the coverage probabilities is investigated. Further, an architectural variation is proposed which is shown to enhance coverage.
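The dominance of coverage over raw fault rate is easy to see numerically. The sketch below uses a standard three-state Markov duplex model, not the paper's exact formulation: two active units with per-unit failure rate lam, where the first fault is survived only with coverage probability c.

```python
# Closed-form duplex reliability with imperfect coverage:
#   2 good --2*lam*c-->     1 good --lam--> failed
#   2 good --2*lam*(1-c)--> failed  (uncovered first fault)
# Yields R(t) = exp(-2*lam*t) + 2*c*(exp(-lam*t) - exp(-2*lam*t)).
import math

def duplex_reliability(lam, c, t):
    return math.exp(-2*lam*t) + 2*c*(math.exp(-lam*t) - math.exp(-2*lam*t))

lam, t = 1e-4, 10.0   # illustrative fault rate (per hour) and 10 h mission
for c in (1.0, 0.999, 0.99, 0.9):
    print(f"coverage={c}: unreliability={1 - duplex_reliability(lam, c, t):.2e}")
```

With perfect coverage the 10-hour unreliability is about 1e-6; dropping coverage to 0.99 degrades it by more than an order of magnitude, which is the qualitative point of the coverage study.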
A support architecture for reliable distributed computing systems
NASA Technical Reports Server (NTRS)
Mckendry, Martin S.
1986-01-01
The Clouds kernel design went through several design phases and is nearly complete. The object manager, the process manager, the storage manager, the communications manager, and the actions manager are examined.
Project report: Alaska Iways architecture
DOT National Transportation Integrated Search
2005-01-01
The Alaska Department of Transportation and Public Facilities (ADOT&PF) is continually looking at ways to improve the efficiency, safety, and reliability of Alaska's transportation system. This effort includes the application of advanced communicat...
Djasim, Urville Mardijanto; Wolvius, Eppo Bonne; Van Neck, Johan Wilhelm; Van Wamel, Annemieke; Weinans, Harrie; Van Der Wal, Karel George Hendrik
2008-04-01
To study the effect of two different frequencies of distraction on the quantity and architecture of the bone regenerate using micro-computed tomography, and to determine whether radiographic and ultrasonographic bone-fill scores provide reliable predictive value for the amount of new bone in the distraction area. Twenty-six skeletally mature rabbits underwent three full days of latency, after which midface distraction was started. Low-frequency group (n=12): a distraction rate of 0.9 mm/d achieved by one daily activation for 11 days to create a 10-mm distraction gap. High-frequency group (n=12): the same, but with three daily activations instead of one. Control group (n=2): no distraction. After 21 days of consolidation, bone fill in the distraction area was assessed by means of ultrasonography and radiography. Micro-computed tomography was used to quantify new bone formation and bone architecture. Relative bone volume (BV/TV) showed a tendency towards a difference (P=0.09) between the low- and high-frequency groups. No significant differences were found for bone architecture. No significant correlation between BV/TV values and bone-fill scores was found. An increase in rhythm from one to three activations daily does not create significantly more bone. Bone-fill score values provided no reliable predictive value for the amount of new bone formation.
Redondo, Jonatan Pajares; González, Lisardo Prieto; Guzman, Javier García; Boada, Beatriz L; Díaz, Vicente
2018-02-06
Modern vehicles incorporate control systems to improve their stability and handling. These control systems need to know the vehicle dynamics through variables (lateral acceleration, roll rate, roll angle, sideslip angle, etc.) that are measured or estimated from sensors. To this end, vehicles must carry not only low-cost sensors but also low-cost embedded systems that can acquire the sensor data and execute the estimation and control algorithms at sufficient speed. All these devices have to be integrated in an adequate architecture with enough performance in terms of accuracy, reliability, and processing time. In this article, an architecture to carry out the estimation and control of vehicle dynamics has been developed. This architecture was designed considering the basic principles of IoT and integrates low-cost sensors and embedded hardware for orchestrating the experiments. A comparison of two different low-cost systems in terms of accuracy, acquisition time, and reliability was performed. Both devices have been compared with the VBOX device from Racelogic, which has been used as the ground truth. The comparison has been made from tests carried out in a real vehicle. The lateral acceleration and roll rate have been analyzed in order to quantify the error of these devices.
Towards advanced biological detection using surface enhanced Raman scattering (SERS)-based sensors
NASA Astrophysics Data System (ADS)
Hankus, Mikella E.; Stratis-Cullum, Dimitra N.; Pellegrino, Paul M.
2010-08-01
The Army has a need for an accurate, fast, reliable, and robust means to identify and quantify defense-related materials. Raman spectroscopy is a form of vibrational spectroscopy that is rapidly becoming a valuable tool for homeland defense applications, as it is well suited for the molecular identification of a variety of compounds, including explosives and chemical and biological hazards. To measure trace levels of these types of materials, surface enhanced Raman scattering (SERS), a specialized form of Raman scattering, can be employed. The SERS enhancements are produced on, or in close proximity to, a nanoscale roughened metal surface and are typically associated with increased local electromagnetic field strengths. However, before application of SERS in the field, and in particular to biological and other hazard-sensing applications, significant improvements in substrate performance are needed. In this work, we report the use of several SERS substrate architectures (colloids, film-over-nanospheres (FONs), and commercially available substrates) for detecting and differentiating numerous endospore samples. The variance in spectra obtained using the different sensing architectures is also discussed. Additionally, the feasibility of using a modified substrate architecture tailored with a molecular-recognition probe system for detecting biological samples is explored. We discuss progress towards an advanced, hybrid molecular-recognition SERS/fluorescence nanoprobe system, including the optimization, fabrication, and spectroscopic analysis of samples on a commercially available substrate, and the feasibility of using this single-step switching architecture for hazardous-material detection.
NASA Advanced Exploration Systems: Advancements in Life Support Systems
NASA Technical Reports Server (NTRS)
Shull, Sarah A.; Schneider, Walter F.
2016-01-01
The NASA Advanced Exploration Systems (AES) Life Support Systems (LSS) project strives to develop reliable, energy-efficient, and low-mass spacecraft systems to provide environmental control and life support systems (ECLSS) critical to enabling long duration human missions beyond low Earth orbit (LEO). Highly reliable, closed-loop life support systems are among the capabilities required for the longer duration human space exploration missions assessed by NASA’s Habitability Architecture Team.
Cost Estimation of Software Development and the Implications for the Program Manager
1992-06-01
Software Lifecycle Model (SLIM), the Jensen System-4 model, the Software Productivity, Quality, and Reliability Estimator (SPQR/20), the Constructive...function models in current use are the Software Productivity, Quality, and Reliability Estimator (SPQR/20) and the Software Architecture Sizing and...Estimator (SPQR/20) was developed by T. Capers Jones of Software Productivity Research, Inc., in 1985. The model is intended to estimate the outcome
Exploring Life Support Architectures for Evolution of Deep Space Human Exploration
NASA Technical Reports Server (NTRS)
Anderson, Molly S.; Stambaugh, Imelda C.
2015-01-01
Life support system architectures for long duration space missions are often explored analytically in the human spaceflight community to find optimum solutions for mass, performance, and reliability. But in reality, many other constraints can guide the design when the life support system is examined within the context of an overall vehicle, as well as specific programmatic goals and needs. Between the end of the Constellation program and the development of the "Evolvable Mars Campaign", NASA explored a broad range of mission possibilities. Most of these missions will never be implemented but the lessons learned during these concept development phases may color and guide future analytical studies and eventual life support system architectures. This paper discusses several iterations of design studies from the life support system perspective to examine which requirements and assumptions, programmatic needs, or interfaces drive design. When doing early concept studies, many assumptions have to be made about technology and operations. Data can be pulled from a variety of sources depending on the study needs, including parametric models, historical data, new technologies, and even predictive analysis. In the end, assumptions must be made in the face of uncertainty. Some of these may introduce more risk as to whether the solution for the conceptual design study will still work when designs mature and data becomes available.
Reliability Analysis and Modeling of ZigBee Networks
NASA Astrophysics Data System (ADS)
Lin, Cheng-Min
The architecture of ZigBee networks focuses on developing low-cost, low-speed ubiquitous communication between devices. The ZigBee technique is based on IEEE 802.15.4, which specifies the physical layer and medium access control (MAC) for a low-rate wireless personal area network (LR-WPAN). Currently, numerous wireless sensor networks have adopted the ZigBee open standard to develop various services to promote improved communication quality in our daily lives. The problem of system and network reliability in providing stable services has become more important, because these services will stop if the system and network reliability is unstable. The ZigBee standard has three kinds of networks: star, tree, and mesh. This paper models the ZigBee protocol stack from the physical layer to the application layer and analyzes each layer's reliability and mean time to failure (MTTF). Channel resource usage, device role, network topology, and application objects are used to evaluate reliability in the physical, medium access control, network, and application layers, respectively. In star or tree networks, a series-system model and the reliability block diagram (RBD) technique can be used to solve the reliability problem. For mesh networks, whose complexity is higher than that of the others, a division technique is applied: the mesh network is decomposed into several non-reducible series systems and edge-parallel systems, so its reliability is easily solved as a series-parallel system through the proposed scheme. The numerical results demonstrate that mesh-network reliability increases as the number of edges in the parallel systems increases, while reliability drops quickly as the numbers of edges and nodes increase for all three networks. Greater resource usage is another factor that decreases reliability, as do network complexity and complex object relationships.
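The series-parallel RBD algebra the paper applies reduces to two combinators. The sketch below is a hedged illustration (link reliabilities are invented, not the paper's data): a star/tree path is a pure series system, while a mesh path decomposed by the division technique becomes a series chain containing parallel edge groups.

```python
# Series-parallel reliability block diagram (RBD) combinators.
from functools import reduce

def series(blocks):
    # All blocks must work: product of reliabilities.
    return reduce(lambda a, b: a * b, blocks, 1.0)

def parallel(blocks):
    # At least one block must work: 1 - product of unreliabilities.
    return 1.0 - reduce(lambda a, b: a * (1.0 - b), blocks, 1.0)

r_link = 0.99                        # illustrative single-hop reliability
star = series([r_link, r_link])      # device -> coordinator -> device
# Mesh path with two redundant middle hops: series of a parallel group.
mesh = series([r_link, parallel([r_link, r_link]), r_link])
print(f"star path: {star:.4f}, mesh path: {mesh:.4f}")
```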
Business intelligence modeling in launch operations
NASA Astrophysics Data System (ADS)
Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.
2005-05-01
The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns of sets of processes, rather than organizational units, leading to end-to-end automation are becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems. This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC-developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise-level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long-term sustainability. A planning and analysis test bed is needed for evaluation of enterprise-level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integration of operations, process models, systems and environment models, and cost models as a comprehensive, disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root-cause analysis from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete-event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program. The proposed technology will produce long-term benefits in support of the NASA objectives for simulation-based acquisition, will improve the ability to assess architectural options versus safety/risk for future exploration systems, and will facilitate incorporation of operability as a systems design consideration, reducing overall life-cycle cost for future systems.
Business Intelligence Modeling in Launch Operations
NASA Technical Reports Server (NTRS)
Bardina, Jorge E.; Thirumalainambi, Rajkumar; Davis, Rodney D.
2005-01-01
This technology project is to advance an integrated Planning and Management Simulation Model for evaluation of risks, costs, and reliability of launch systems from Earth to Orbit for Space Exploration. The approach builds on research done in the NASA ARC/KSC-developed Virtual Test Bed (VTB) to integrate architectural, operations process, and mission simulations for the purpose of evaluating enterprise-level strategies to reduce cost, improve systems operability, and reduce mission risks. The objectives are to understand the interdependency of architecture and process on recurring launch cost of operations, provide management a tool for assessing systems safety and dependability versus cost, and leverage lessons learned and empirical models from Shuttle and International Space Station to validate models applied to Exploration. The systems-of-systems concept is built to balance the conflicting objectives of safety, reliability, and process strategy in order to achieve long-term sustainability. A planning and analysis test bed is needed for evaluation of enterprise-level options and strategies for transit and launch systems as well as surface and orbital systems. This environment can also support agency simulation-based acquisition process objectives. The technology development approach is based on the collaborative effort set forth in the VTB's integration of operations, process models, systems and environment models, and cost models as a comprehensive, disciplined enterprise analysis environment. Significant emphasis is being placed on adapting root-cause analysis from existing Shuttle operations to exploration. Technical challenges include cost model validation, integration of parametric models with discrete-event process and systems simulations, and large-scale simulation integration. The enterprise architecture is required for coherent integration of systems models. It will also require a plan for evolution over the life of the program. The proposed technology will produce long-term benefits in support of the NASA objectives for simulation-based acquisition, will improve the ability to assess architectural options versus safety/risk for future exploration systems, and will facilitate incorporation of operability as a systems design consideration, reducing overall life-cycle cost for future systems. The future of business intelligence in space exploration will focus on the intelligent system-of-systems real-time enterprise. In present business intelligence, a number of technologies that are most relevant to space exploration are experiencing the greatest change. Emerging patterns of sets of processes, rather than organizational units, leading to end-to-end automation are becoming a major objective of enterprise information technology. The cost element is a leading factor of future exploration systems.
Development of a New VLBI Data Analysis Software
NASA Technical Reports Server (NTRS)
Bolotin, Sergei; Gipson, John M.; MacMillan, Daniel S.
2010-01-01
We present an overview of a new VLBI analysis software under development at NASA GSFC. The new software will replace CALC/SOLVE and many related utility programs. It will have the capabilities of the current system as well as incorporate new models and data analysis techniques. In this paper we give a conceptual overview of the new software. We formulate the main goals of the software. The software should be flexible and modular to implement models and estimation techniques that currently exist or will appear in future. On the other hand it should be reliable and possess production quality for processing standard VLBI sessions. Also, it needs to be capable of processing observations from a fully deployed network of VLBI2010 stations in a reasonable time. We describe the software development process and outline the software architecture.
On Some Aspects of Study on Dimensions and Proportions of Church Architecture
NASA Astrophysics Data System (ADS)
Kolobaeva, T. V.
2017-11-01
Architecture forms and arranges the environment required for a comfortable life and human activity. The modern principles of architectural space arrangement and form-making are embodied in a reliable system of rules for buildings that is used in design. Architects apply these principles, together with knowledge of space arrangement drawn from the special and regulatory literature, when performing a particular creative task. This system of accumulated knowledge is often received as a ready-made stereotype, with no regard for the form-making understanding and experience of the architects and thinkers of previous ages. We attempt to restore this connection, since the specific form-making regularities known to ancient architects should be taken into account. The paper gives an insight into some aspects of the traditional dimensions and proportions of church architecture.
Generalized hypercube structures and hyperswitch communication network
NASA Technical Reports Server (NTRS)
Young, Steven D.
1992-01-01
This paper discusses an ongoing study that uses a recent development in communication control technology to implement hybrid hypercube structures. These architectures are similar to binary hypercubes, but they also provide added connectivity between the processors. This added connectivity increases communication reliability while decreasing the latency of interprocessor message passing. Because these factors directly determine the speed that can be obtained by multiprocessor systems, these architectures are attractive for applications such as remote exploration and experimentation, where high performance and ultrareliability are required. This paper describes and enumerates these architectures and discusses how they can be implemented with a modified version of the hyperswitch communication network (HCN). The HCN is analyzed because it has three attractive features that enable these architectures to be effective: speed, fault tolerance, and the ability to pass multiple messages simultaneously through the same hyperswitch controller.
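The latency effect of added connectivity is easy to demonstrate on a small graph. The sketch below is illustrative and not the paper's exact construction: it compares a binary 4-cube against the same cube with one extra link per node to its bitwise complement (a folded-hypercube-style augmentation), showing how added edges cut the average hop count.

```python
# Average BFS hop count: 4-cube vs. a complement-link augmented 4-cube.
from collections import deque

def neighbors(v, dim, folded=False):
    nbrs = [v ^ (1 << i) for i in range(dim)]   # standard cube edges
    if folded:
        nbrs.append(v ^ ((1 << dim) - 1))       # added complement link
    return nbrs

def avg_hops(dim, folded=False):
    n, total = 1 << dim, 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:                                 # plain breadth-first search
            u = q.popleft()
            for w in neighbors(u, dim, folded):
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))

print("4-cube:", avg_hops(4), "augmented:", avg_hops(4, folded=True))
```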
Loya, Salvador Rodriguez; Kawamoto, Kensaku; Chatwin, Chris; Huser, Vojtech
2014-12-01
The use of a service-oriented architecture (SOA) has been identified as a promising approach for improving health care by facilitating reliable clinical decision support (CDS). A review of the literature through October 2013 identified 44 articles on this topic. The review suggests that SOA-related technologies such as Business Process Model and Notation (BPMN) and Service Component Architecture (SCA) have not been generally adopted to impact health IT systems' performance for better care solutions. Additionally, technologies such as Enterprise Service Bus (ESB) and architectural approaches like Service Choreography have not been generally exploited among researchers and developers. Based on the experience of other industries and our observation of the evolution of SOA, we found that greater use of these approaches has the potential to significantly impact SOA implementations for CDS.
Domain specific software architectures: Command and control
NASA Technical Reports Server (NTRS)
Braun, Christine; Hatch, William; Ruegsegger, Theodore; Balzer, Bob; Feather, Martin; Goldman, Neil; Wile, Dave
1992-01-01
GTE is the Command and Control contractor for the Domain Specific Software Architectures program. The objective of this program is to develop and demonstrate an architecture-driven, component-based capability for the automated generation of command and control (C2) applications. Such a capability will significantly reduce the cost of C2 applications development and will lead to improved system quality and reliability through the use of proven architectures and components. A major focus of GTE's approach is the automated generation of application components in particular subdomains. Our initial work in this area has concentrated in the message handling subdomain; we have defined and prototyped an approach that can automate one of the most software-intensive parts of C2 systems development. This paper provides an overview of the GTE team's DSSA approach and then presents our work on automated support for message processing.
NASA Astrophysics Data System (ADS)
Pearce, John; Thomsen, Sharon
2017-02-01
Large vessels can be reliably sealed with radio frequency current. High apposition pressures are necessary to ensure a high probability of a successful seal. However, the complex architecture of the vessels, particularly arteries, means that results can vary substantially even with similar thermal histories. The relative volume fractions and spatial distributions of collagen, elastin, and smooth muscle dominate the vessel function in vivo and can even vary from proximal to distal locations in the same vessel. We begin by reviewing the architectural features characteristic of porcine and canine large vessels and conclude with an experimental and numerical modeling demonstration of the reasons why cylindrical electrodes are a sub-optimal choice.
NASA Technical Reports Server (NTRS)
Smith, T. B., III; Lala, J. H.
1984-01-01
The FTMP architecture is a high-reliability computer concept modeled after a homogeneous multiprocessor architecture. Elements of the FTMP are operated in tight synchronism with one another, and hardware fault detection and fault masking are provided which are transparent to the software. Operating system design and user software design are thus greatly simplified. Performance of the FTMP is also comparable to that of a simplex equivalent due to the efficiency of the fault-handling hardware. The FTMP project constructed an engineering module of the FTMP, programmed the machine, and extensively tested the architecture through fault injection and other stress testing. This testing confirmed the soundness of the FTMP concepts.
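A software analogy of the fault masking FTMP performs in hardware (hypothetical code, not the FTMP implementation): three synchronized channels compute the same frame and a majority vote masks any single faulty channel, so the application never sees the fault.

```python
# Triple-modular-redundancy (TMR) majority voter sketch.
from collections import Counter

def vote(channel_outputs):
    value, count = Counter(channel_outputs).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: multiple channel failures")
    return value

print(vote([42, 42, 42]))   # fault-free frame
print(vote([42, 17, 42]))   # single faulty channel is masked
```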
Dhaval, Rakesh; Borlawsky, Tara; Ostrander, Michael; Santangelo, Jennifer; Kamal, Jyoti; Payne, Philip R O
2008-11-06
In order to enhance interoperability between enterprise systems, and improve data validity and reliability throughout The Ohio State University Medical Center (OSUMC), we have initiated the development of an ontology-anchored metadata architecture and knowledge collection for our enterprise data warehouse. The metadata and corresponding semantic relationships stored in the OSUMC knowledge collection are intended to promote consistency and interoperability across the heterogeneous clinical, research, business and education information managed within the data warehouse.
A High Power Density Power System Electronics for NASA's Lunar Reconnaissance Orbiter
NASA Technical Reports Server (NTRS)
Hernandez-Pellerano, A.; Stone, R.; Travis, J.; Kercheval, B.; Alkire, G.; Ter-Minassian, V.
2009-01-01
A high-power-density, modular, state-of-the-art Power System Electronics (PSE) has been developed for the Lunar Reconnaissance Orbiter (LRO) mission. This paper addresses the hardware architecture and performance, the power-handling capabilities, and the fabrication technology. The PSE was developed by NASA's Goddard Space Flight Center (GSFC) and is the central location for power handling and distribution on the LRO spacecraft. The PSE packaging design manages and distributes 2200 W of solar-array input power in a volume of less than a cubic foot. The PSE architecture incorporates reliable standard internal and external communication buses, solid-state circuit breakers, and Li-ion battery charge management. Although a single-string design, the PSE achieves high reliability by elegantly implementing functional redundancy and internal fault detection and correction. The PSE has been environmentally tested and delivered to the LRO spacecraft for flight integration and test. This modular design is scheduled to fly in early 2009 on board the LRO and Lunar Crater Observation and Sensing Satellite (LCROSS) spacecraft and is the baseline architecture for future NASA missions such as Global Precipitation Measurement (GPM) and Magnetospheric MultiScale (MMS).
Hybrid RAID With Dual Control Architecture for SSD Reliability
NASA Astrophysics Data System (ADS)
Chatterjee, Santanu
2010-10-01
Solid-state devices (SSDs), which are increasingly being adopted in today's data storage systems, offer higher capacity and performance but lower reliability, which leads to more frequent rebuilds and higher risk. Although SSDs are very energy efficient compared with hard disk drives, the bit error rate (BER) of an SSD requires expensive erase operations between successive writes. Parity-based RAID (for example, RAID 4, 5, and 6) provides data integrity using parity information and tolerates the loss of any one (RAID 4, 5) or two drives (RAID 6); however, the parity blocks are updated more often than the data blocks under random access patterns, so SSDs holding more parity receive more writes and consequently age faster. To address this problem, this paper proposes a model-based hybrid disk array architecture that uses the RAID 4 (striping with parity) technique with SSDs as the data drives, while any fast hard disk drive of the same capacity can be used as the dedicated parity drive. The proposed architecture opens the door to using commodity SSDs past their erasure limit, and it can also reduce the need for expensive hardware error-correction code (ECC) in the devices.
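The striping-with-parity scheme the paper adopts rests on XOR parity. A minimal sketch (illustrative block contents, not the paper's implementation): the XOR of the SSD data blocks is stored on the dedicated parity drive, and any single lost data block is rebuilt from the survivors plus parity.

```python
# RAID-4 style XOR parity: compute, then rebuild a lost block.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"SSD0", b"SSD1", b"SSD2"]        # stripe across three data SSDs
parity = xor_blocks(data)                  # stored on the HDD parity drive

lost = 1                                   # pretend the second SSD fails
survivors = [d for i, d in enumerate(data) if i != lost]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data[lost]
print("rebuilt:", rebuilt)
```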
Analysis of typical fault-tolerant architectures using HARP
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Bechta Dugan, Joanne; Trivedi, Kishor S.; Rothmann, Elizabeth M.; Smith, W. Earl
1987-01-01
Difficulties encountered in the modeling of fault-tolerant systems are discussed. The Hybrid Automated Reliability Predictor (HARP) approach to modeling fault-tolerant systems is described. The HARP is written in FORTRAN, consists of nearly 30,000 lines of code and comments, and is based on behavioral decomposition. Using the behavioral decomposition, the dependability model is divided into fault-occurrence/repair and fault/error-handling models; the characteristics and combining of these two models are examined. Examples in which the HARP is applied to the modeling of some typical fault-tolerant systems, including a local-area network, two fault-tolerant computer systems, and a flight control system, are presented.
HYDRA: A Middleware-Oriented Integrated Architecture for e-Procurement in Supply Chains
NASA Astrophysics Data System (ADS)
Alor-Hernandez, Giner; Aguilar-Lasserre, Alberto; Juarez-Martinez, Ulises; Posada-Gomez, Ruben; Cortes-Robles, Guillermo; Garcia-Martinez, Mario Alberto; Gomez-Berbis, Juan Miguel; Rodriguez-Gonzalez, Alejandro
The Service-Oriented Architecture (SOA) development paradigm has emerged to improve the critical issues of creating, modifying, and extending solutions for business process integration, incorporating process automation and automated exchange of information between organizations. Web services technology follows the SOA's principles for developing and deploying applications. Moreover, Web services are considered the platform for SOA, for both intra- and inter-enterprise communication. However, an SOA does not incorporate information about occurring events into business processes, and such events are a main feature of supply chain management. These events and information delivery are addressed in an Event-Driven Architecture (EDA). Taking this into account, we propose a middleware-oriented integrated architecture that offers a brokering service for the procurement of products in a Supply Chain Management (SCM) scenario. As salient contributions, our system provides a hybrid architecture combining features of both SOA and EDA, together with a set of mechanisms for business-process pattern management, monitoring based on UML sequence diagrams, Web services-based management, event publish/subscription, and a reliable messaging service.
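The EDA side of such a hybrid architecture is built on event publish/subscription. The toy broker below is a hypothetical sketch, far simpler than HYDRA: supply-chain events are published to topics and pushed to every subscribed handler, decoupling the producing and consuming services.

```python
# Minimal publish/subscribe broker sketch (illustrative topic/event names).
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Push the event to every handler subscribed to this topic.
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
broker.subscribe("order.created", lambda e: print("procurement sees", e))
broker.subscribe("order.created", lambda e: print("monitoring logs", e))
broker.publish("order.created", {"sku": "A-100", "qty": 25})
```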
Providing the full DDF link protection for bus-connected SIEPON based system architecture
NASA Astrophysics Data System (ADS)
Hwang, I.-Shyan; Pakpahan, Andrew Fernando; Liem, Andrew Tanny; Nikoukar, AliAkbar
2016-09-01
Currently a massive amount of traffic per second is delivered through EPON systems, one of the prominent access network technologies for delivering the next-generation network. It is therefore vital to keep the EPON optical distribution network (ODN) working by providing the necessary protection mechanisms in the deployed devices; otherwise, failures will cause great losses for both network operators and business customers. In this paper, we propose a bus-connected architecture to protect against and recover from distribution drop fiber (DDF) link faults or transceiver failures at the ONU(s) in a SIEPON system. The proposed architecture is cost-effective, delivers high fault tolerance in handling multiple DDF faults, and provides flexibility in choosing the backup ONU assignments. Simulation results show that the proposed architecture provides reliability and maintains quality-of-service (QoS) performance in terms of mean packet delay, system throughput, packet loss, and EF jitter when DDF link failures occur.
On-orbit assembly considerations of manned Mars transfer vehicles
NASA Technical Reports Server (NTRS)
D'Amara, Mark
1990-01-01
Ever since the United States space program started some forty years ago, there have been many ideas on how the U.S. should proceed to explore space. Throughout the years, many innovative designs have surfaced for transfer vehicles, space stations, and surface bases. Usually the differences in designs are due to differences in mission objectives and requirements. The problem for Mars is how to choose an architecture for human travel to Mars, and what kind of base construction to design, that will be reliable and cost-effective. Eventually, if the Space Exploration Initiative is to become a reality, NASA will have to select and fund a single mission architecture involving manned and unmanned Mars fly-by precursors, a Mars landing vehicle, and, ultimately, the plan for constructing a Mars base. The decision to commit to a single architecture is a vital one and, therefore, the design issues, the decision-making process, and the analysis tools must be available to explore all of the options that are available. A large part of any space mission architecture is the Earth-to-Mars transfer vehicle. The decision on the type of transfer vehicle to design is a crucial one. The many options must take into account the constraints encountered when assembling the vehicle in Earth orbit, such as effective joining methods, test and evaluation methods, preventative maintenance measures, etc. Therefore, the process of trading off various designs must include every facet of that design. The on-orbit assembly/construction constraints will drive designs and architectures. This viewgraph presentation highlights the above critical issues so that designs may be evaluated from these viewpoints. Evaluating designs from the issues contained in this paper will help decision makers detect inadequate designs. Stressing these issues in the evaluation procedure will have a great impact on the decisions of future space mission transfer vehicles and consequent architectures.
2016-04-30
Dabkowski, and Dixit (2015), we demonstrate that the DoDAF models required pre-MS A map to 14 of the 18 parameters of the Constructive Systems...engineering effort in complex systems. Saarbrücken, Germany: VDM Verlag. Valerdi, R., Dabkowski, M., & Dixit, I. (2015). Reliability improvement of...R., Dabkowski, M., & Dixit, I. (2015). Reliability Improvement of Major Defense Acquisition Program Cost Estimates – Mapping DoDAF to COSYSMO
NASA Technical Reports Server (NTRS)
Matlock, Steve
2001-01-01
This is the final report and addresses all of the work performed on this program. Specifically, it covers vehicle architecture background, definition of six baseline engine cycles, the reliability baseline (Space Shuttle Main Engine QRAS), component-level reliability/performance/cost for the six baseline cycles, and selection of three cycles for further study. This report further addresses technology improvement selection and component-level reliability/performance/cost for the three cycles selected for further study, as well as risk reduction plans and recommendations for future studies.
Intelligent Operation and Maintenance of Micro-grid Technology and System Development
NASA Astrophysics Data System (ADS)
Fu, Ming; Song, Jinyan; Zhao, Jingtao; Du, Jian
2018-01-01
To achieve intelligent micro-grid operation and management, we study a micro-grid operation and maintenance knowledge base. Based on advanced Petri net theory, a fault diagnosis model of the micro-grid is established, and an intelligent diagnosis and analysis method for micro-grid faults is put forward. On this basis, the functional system and architecture of the intelligent operation and maintenance system for the micro-grid are studied, and the micro-grid fault diagnosis function is introduced in detail. Finally, the system is deployed on the micro-grid of an industrial park, and micro-grid fault diagnosis and analysis are carried out on its operating data. The system's operation and maintenance interface is presented, which verifies the correctness and reliability of the system.
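A bare-bones Petri net interpreter gives the flavor of such a diagnosis model (illustrative only; the paper's model is far richer, and the place and transition names below are invented): places hold tokens, a transition fires when all of its input places are marked, and firing propagates a fault hypothesis.

```python
# Minimal Petri net firing sketch for fault diagnosis.
def enabled(marking, transition):
    return all(marking.get(p, 0) > 0 for p in transition["in"])

def fire(marking, transition):
    m = dict(marking)
    for p in transition["in"]:
        m[p] -= 1                       # consume input tokens
    for p in transition["out"]:
        m[p] = m.get(p, 0) + 1          # produce output tokens
    return m

# Toy diagnosis fragment: breaker trip + voltage dip => feeder fault.
t1 = {"in": ["breaker_trip", "voltage_dip"], "out": ["feeder_fault"]}
marking = {"breaker_trip": 1, "voltage_dip": 1}
if enabled(marking, t1):
    marking = fire(marking, t1)
print(marking)   # {'breaker_trip': 0, 'voltage_dip': 0, 'feeder_fault': 1}
```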
Measuring the style of innovative thinking among engineering students
NASA Astrophysics Data System (ADS)
Passig, David; Cohen, Lizi
2014-01-01
Background: Many tools have been developed to measure the ability of workers to innovate. However, all of them are based on self-reporting questionnaires, which raises questions about their validity. Purpose: The aim was to develop and validate a tool, called Ideas Generation Implementation (IGI), to objectively measure the style and potential of engineering students in generating innovative technological ideas. The cognitive framework of IGI is based on the Architectural Innovation Model (AIM). Tool description: The IGI tool was designed to measure the level of innovation in generating technological ideas and their potential to be implemented. These variables rely on the definition of innovation as 'creativity, implemented in a high degree of success'. The levels of innovative thinking are based on the AIM and consist of four levels: incremental innovation, modular innovation, architectural innovation and radical innovation. Sample: Sixty experts in technological innovation developed the tool. We checked its face validity and calculated its reliability in a pilot study (kappa = 0.73). Then, 145 undergraduate students were sampled at random from the seven Israeli universities offering engineering programs and asked to complete the questionnaire. Design and methods: We examined the construct validity of the tool by conducting a variance analysis and measuring the correlations between the innovator's style of each student, as suggested by the AIM, and the three subscale factors of creative styles (efficient, conformist and original), as suggested by the Kirton Adaptors and Innovators (KAI) questionnaire. Results: Students with a radical innovator's style inclined more than those with an incremental innovator's style towards the three creative cognitive styles. Students with an architectural innovator's style inclined moderately, but not significantly, towards the three creative styles. Conclusions: The IGI tool objectively measures innovative thinking among students, thus allowing screening of potential employees at an early stage, during their undergraduate studies. The tool was found to be reliable and valid in measuring the style and potential of technological innovation among engineering students.
Diskless supercomputers: Scalable, reliable I/O for the Tera-Op technology base
NASA Technical Reports Server (NTRS)
Katz, Randy H.; Ousterhout, John K.; Patterson, David A.
1993-01-01
Computing is seeing an unprecedented improvement in performance; over the last five years there has been an order-of-magnitude improvement in the speeds of workstation CPUs. At least another order of magnitude seems likely in the next five years, to machines with 500 MIPS or more. The goal of the ARPA Teraop program is to realize even larger, more powerful machines, executing as many as a trillion operations per second. Unfortunately, we have seen no comparable breakthroughs in I/O performance; the speeds of I/O devices and the hardware and software architectures for managing them have not changed substantially in many years. We have completed a program of research to demonstrate hardware and software I/O architectures capable of supporting the kinds of internetworked 'visualization' workstations and supercomputers that will appear in the mid 1990s. The project had three overall goals: high performance, high reliability, and a scalable, multipurpose system.
NASA Technical Reports Server (NTRS)
1983-01-01
Mission scenario analysis and architectural concepts, alternative systems concepts, mission operations and architectural development, architectural analysis trades, evolution, configuration, and technology development are assessed.
Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.
Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H
2012-09-26
Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.
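Of the three metrics, penetrance-table variance (PTV) is the simplest to state. The sketch below computes it as the genotype-frequency-weighted variance of the penetrance values; the exact weighting used in GAMETES is an assumption here, and the model and frequencies are invented for the example.

```python
# Penetrance-table variance (PTV) sketch: frequency-weighted variance
# of the penetrance values (weighting scheme is assumed, not confirmed).
def ptv(penetrances, genotype_freqs):
    mean = sum(p * f for p, f in zip(penetrances, genotype_freqs))
    return sum(f * (p - mean) ** 2
               for p, f in zip(penetrances, genotype_freqs))

# Toy 2-locus model: 3x3 = 9 genotypes, flattened row-major; frequencies
# follow Hardy-Weinberg with minor allele frequency 0.5 at both loci.
pen  = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1]
freq = [0.0625, 0.125, 0.0625, 0.125, 0.25, 0.125, 0.0625, 0.125, 0.0625]
print(f"PTV = {ptv(pen, freq):.4f}")   # higher PTV ~ easier to detect
```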
A Comparison of Bus Architectures for Safety-Critical Embedded Systems
NASA Technical Reports Server (NTRS)
Rushby, John; Miner, Paul S. (Technical Monitor)
2003-01-01
We describe and compare the architectures of four fault-tolerant, safety-critical buses with a view to deducing principles common to all of them, the main differences in their design choices, and the tradeoffs made. Two of the buses come from an avionics heritage, and two from automobiles, though all four strive for similar levels of reliability and assurance. The avionics buses considered are the Honeywell SAFEbus (the backplane data bus used in the Boeing 777 Airplane Information Management System) and the NASA SPIDER (an architecture being developed as a demonstrator for certification under the new DO-254 guidelines); the automobile buses considered are the TTTech Time-Triggered Architecture (TTA), recently adopted by Audi for automobile applications, and by Honeywell for avionics and aircraft control functions, and FlexRay, which is being developed by a consortium of BMW, DaimlerChrysler, Motorola, and Philips.
The Double-System Architecture for Trusted OS
NASA Astrophysics Data System (ADS)
Zhao, Yong; Li, Yu; Zhan, Jing
With the development of computer science and technology, current secure operating systems have failed to respond to many new security challenges. The trusted operating system (TOS) has been proposed to solve these problems. However, there is as yet no mature, unified architecture for the TOS, since most proposals do not make clear the relationship between the security mechanism and the trusted mechanism. Therefore, this paper proposes a double-system architecture (DSA) for the TOS to solve this problem. The DSA is composed of the Trusted System (TS) and the Security System (SS). We construct the TS by establishing a trusted environment and implement the corresponding SS. Furthermore, we propose the Trusted Information Channel (TIC) to protect the information flow between the TS and the SS. In short, the proposed double-system architecture provides reliable protection for the OS through the SS, with support provided by the TS.
NASA Technical Reports Server (NTRS)
Seasly, Elaine
2015-01-01
To combat contamination of physical assets and provide reliable data to decision makers in the space and missile defense community, a modular open system architecture for creation of contamination models and standards is proposed. Predictive tools for quantifying the effects of contamination can be calibrated from NASA data of long-term orbiting assets. This data can then be extrapolated to missile defense predictive models. By utilizing a modular open system architecture, sensitive data can be de-coupled and protected while benefitting from open source data of calibrated models. This system architecture will include modules that will allow the designer to trade the effects of baseline performance against the lifecycle degradation due to contamination while modeling the lifecycle costs of alternative designs. In this way, each member of the supply chain becomes an informed and active participant in managing contamination risk early in the system lifecycle.
Wang, Jiaqiu
2015-01-01
The success or failure of a street network depends on its reliability. In this article, using resilience analysis, the author studies how the shape and appearance of street networks in self-organised and top-down planned cities influence urban transport. Considering London and Beijing as proxies for self-organised and top-down planned cities, the structural properties of the London and Beijing networks are first investigated based on their primal and dual representations as planar graphs. The robustness of the street networks is then evaluated in primal space and dual space by deactivating road links under random and intentional attack scenarios. The results show that the reliability of the London street network differs from that of Beijing, and seems to rely more on its architecture and connectivity. It is found that top-down planned Beijing, with its higher average degree in the dual space and assortativity in the primal space, is more robust than self-organised London by the measures of maximum and second-largest cluster size and network efficiency. The article offers an insight, from a network perspective, into the reliability of street patterns in self-organised and top-down planned city systems. PMID:26682551
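The shape of the attack experiment is easy to re-create on a toy graph. The sketch below is a hedged illustration (not the authors' code or data): nodes are removed either at random or in descending degree order, and the largest connected component, the paper's main robustness measure, is tracked as the attack proceeds.

```python
# Random vs. targeted node removal on a toy street grid.
import random

def largest_component(adj, removed):
    seen, best = set(removed), 0
    for s in adj:
        if s in seen:
            continue
        stack, size = [s], 0
        seen.add(s)
        while stack:                      # iterative depth-first search
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best

def attack(adj, targeted):
    order = (sorted(adj, key=lambda v: -len(adj[v])) if targeted
             else random.sample(list(adj), len(adj)))
    for k in (0, len(adj) // 4, len(adj) // 2):
        print("targeted" if targeted else "random", k,
              largest_component(adj, set(order[:k])))

random.seed(0)
N = 10   # 10x10 grid as a stand-in for a street network
adj = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < N and 0 <= j + dj < N]
       for i in range(N) for j in range(N)}
attack(adj, targeted=False)
attack(adj, targeted=True)
```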
Development of Equivalent Material Properties of Microbump for Simulating Chip Stacking Packaging
Lee, Chang-Chun; Tzeng, Tzai-Liang; Huang, Pei-Chen
2015-01-01
A three-dimensional integrated circuit (3D-IC) structure with a significant scale mismatch causes difficulty in analytic model construction. This paper proposes a simulation technique that introduces an equivalent material representing the microbumps and their surrounding wafer-level underfill (WLUF). The mechanical properties of this equivalent material, including Young's modulus (E), Poisson's ratio, shear modulus, and coefficient of thermal expansion (CTE), are obtained directly by applying either a tensile load or a constant displacement, and by increasing the temperature during simulations, respectively. Analytic results indicate that at least eight microbumps at the outermost region of the chip-stacking structure need to be considered to obtain an accurate stress/strain contour in the region of concern. In addition, a factorial experimental design with analysis of variance is proposed to optimize chip-stacking structure reliability with four factors: chip thickness, substrate thickness, CTE, and E-value. Analytic results show that the most significant factor is the CTE of the WLUF. This factor affects microbump reliability and structural warpage under temperature cycling loads and the high-temperature bonding process. WLUF with low CTE and high E-value is recommended to enhance the assembly reliability of the 3D-IC architecture. PMID:28793495
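A two-level full-factorial main-effect screen in the spirit of the paper's DOE looks like the sketch below. The factor names come from the paper, but the response function is a made-up surrogate standing in for the finite-element warpage result, chosen so that CTE dominates, as the paper found.

```python
# Illustrative 2^4 full-factorial main-effect screen.
from itertools import product

factors = ["chip_thickness", "substrate_thickness", "CTE", "E_value"]

def response(levels):
    # Hypothetical warpage surrogate (coefficients invented).
    x = dict(zip(factors, levels))
    return 1.0 + 2.5 * x["CTE"] + 0.4 * x["E_value"] - 0.2 * x["chip_thickness"]

runs = list(product((-1, +1), repeat=len(factors)))   # 2^4 = 16 runs
for i, f in enumerate(factors):
    hi = sum(response(r) for r in runs if r[i] > 0) / 8
    lo = sum(response(r) for r in runs if r[i] < 0) / 8
    print(f"main effect of {f}: {hi - lo:+.2f}")
```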
Using Multimedia for Teaching Analysis in History of Modern Architecture.
ERIC Educational Resources Information Center
Perryman, Garry
This paper presents a case for the development and support of a computer-based interactive multimedia program for teaching analysis in community college architecture design programs. Analysis in architecture design is an extremely important strategy for the teaching of higher-order thinking skills, which senior schools of architecture look for in…
Fiber-Optic Network Architectures for Onboard Avionics Applications Investigated
NASA Technical Reports Server (NTRS)
Nguyen, Hung D.; Ngo, Duc H.
2003-01-01
This project is part of a study within the Advanced Air Transportation Technologies program undertaken at the NASA Glenn Research Center. The main focus of the program is the improvement of air transportation, with particular emphasis on air transportation safety. Current and future advances in digital data communications between an aircraft and the outside world will require high-bandwidth onboard communication networks. Radiofrequency (RF) systems, with their interconnection network based on coaxial cables and waveguides, increase the complexity of communication systems onboard modern civil and military aircraft with respect to weight, power consumption, and safety. In addition, safety and reliability concerns from electromagnetic interference between the RF components embedded in these communication systems exist. A simple, reliable, and lightweight network that is free from the effects of electromagnetic interference and capable of supporting the broadband communications needs of future onboard digital avionics systems cannot be easily implemented using existing coaxial cable-based systems. Fiber-optical communication systems can meet all these challenges of modern avionics applications in an efficient, cost-effective manner. The objective of this project is to present a number of optical network architectures for onboard RF signal distribution. Because of the emergence of a number of digital avionics devices requiring high-bandwidth connectivity, fiber-optic RF networks onboard modern aircraft will play a vital role in ensuring a low-noise, highly reliable RF communication system. Two approaches are being used for network architectures for aircraft onboard fiber-optic distribution systems: a hybrid RF-optical network and an all-optical wavelength division multiplexing (WDM) network.
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Gage, Peter; Wright, Michael J.
2017-01-01
Mars Sample Return is our Grand Challenge for the coming decade. TPS (Thermal Protection System) nominal performance is not the key challenge. The main difficulty for designers is the need to verify unprecedented reliability for the entry system: current guidelines for prevention of backward contamination require that the probability of spores larger than 1 micron in diameter escaping into the Earth environment be lower than one in a million for the entire system, and the allocation to the TPS would be more stringent than that. For reference, the reliability allocation for the Orion TPS is closer to 1 in 1,000, and the demonstrated reliability for previous human Earth return systems was closer to 1 in 100. Improving reliability by more than 3 orders of magnitude is a grand challenge indeed. The TPS community must embrace the possibility of new architectures that prioritize reliability above thermal performance and mass efficiency. The MSR (Mars Sample Return) EEV (Earth Entry Vehicle) will be hit by MMOD (Micrometeoroid and Orbital Debris) prior to reentry. A chute-less aero-shell design that allows for a self-righting shape was baselined in prior MSR studies, with the assumption that a passive system will maximize EEV robustness. Hence the aero-shell, along with the TPS, has to withstand ground impact without breaking apart. System verification will require testing to establish ablative performance and thermal failure limits, but also testing of damage from MMOD and of structural performance at ground impact. Mission requirements will demand analysis, testing, and verification focused on establishing the reliability of the design. In this proposed talk, we will focus on the grand challenge of the MSR EEV TPS and the need for innovative approaches to address challenges in modeling, testing, manufacturing, and verification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Hsien-Hsin S
The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, and introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implication for security, (5) Digital rights management, and (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.
Life Cycle Systems Engineering Approach to NASA's 2nd Generation Reusable Launch Vehicle
NASA Technical Reports Server (NTRS)
Thomas, Dale; Smith, Charles; Safie, Fayssal; Kittredge, Sheryl
2002-01-01
The overall goal of the 2nd Generation RLV Program is to substantially reduce technical and business risks associated with developing a new class of reusable launch vehicles. NASA's specific goals are to improve the safety of a 2nd-generation system by 2 orders of magnitude - equivalent to a crew risk of 1-in-10,000 missions - and decrease the cost tenfold, to approximately $1,000 per pound of payload launched. Architecture definition is being conducted in parallel with the maturation of key technologies specifically identified to improve safety and reliability while reducing operational costs. An architecture broadly includes an Earth-to-orbit reusable launch vehicle, on-orbit transfer vehicles and upper stages, mission planning, ground and flight operations, and support infrastructure, both on the ground and in orbit. The systems engineering approach ensures that the technologies developed - such as lightweight structures, long-life rocket engines, reliable crew escape, and robust thermal protection systems - will synergistically integrate into the optimum vehicle. Given a candidate architecture that possesses credible physical processes and realistic technology assumptions, the next set of analyses addresses the system's functionality across the spread of operational scenarios characterized by the design reference missions. The safety/reliability and cost/economics associated with operating the system will also be modeled and analyzed to answer the questions "How safe is it?" and "How much will it cost to acquire and operate?" The systems engineering review process factors in comprehensive budget estimates, detailed project schedules, and business and performance plans, against the goals of safety, reliability, and cost, in addition to overall technical feasibility. This approach forms the basis for investment decisions in the 2nd Generation RLV Program's risk-reduction activities. Through this process, NASA will continually refine its specialized needs and identify where Defense and commercial requirements overlap those of civil missions.
A Reliable Service-Oriented Architecture for NASA's Mars Exploration Rover Mission
NASA Technical Reports Server (NTRS)
Mak, Ronald; Walton, Joan; Keely, Leslie; Hehner, Dennis; Chan, Louise
2005-01-01
The Collaborative Information Portal (CIP) was enterprise software developed jointly by the NASA Ames Research Center and the Jet Propulsion Laboratory (JPL) for NASA's highly successful Mars Exploration Rover (MER) mission. Both MER and CIP have performed far beyond their original expectations. Mission managers and engineers ran CIP inside the mission control room at JPL, and the scientists ran CIP in their laboratories, homes, and offices. All the users connected securely over the Internet. Since the mission ran on Mars time, CIP displayed the current time in various Mars and Earth time zones, and it presented staffing and event schedules with Martian time scales. Users could send and receive broadcast messages, and they could view and download data and image files generated by the rovers' instruments. CIP had a three-tiered, service-oriented architecture (SOA) based on industry standards, including J2EE and web services, and it integrated commercial off-the-shelf software. A user's interactions with the graphical interface of the CIP client application generated web services requests to the CIP middleware. The middleware accessed the back-end data repositories if necessary and returned results for these requests. The client application could make multiple service requests for a single user action and then present a composition of the results. This happened transparently, and many users did not even realize that they were connecting to a server. CIP performed well and was extremely reliable; it attained better than 99% uptime during the course of the mission. In this paper, we present overviews of the MER mission and of CIP. We show how CIP helped to fulfill some of the mission needs and how people used it. We discuss the criteria for choosing its architecture, and we describe how the developers made the software so reliable. CIP's reliability did not come about by chance, but was the result of several key design decisions. We conclude with some of the important lessons we learned from developing, deploying, and supporting the software.
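A minimal sketch of the composition pattern described above, in which one user action fans out into several middleware service requests whose results are merged; the host, paths, and JSON fields are hypothetical and are not CIP's actual API.

```python
# Hypothetical fan-out/compose client in the style described in the abstract.
import concurrent.futures
import requests

MIDDLEWARE = "https://cip-middleware.example.nasa.gov"  # invented host

def fetch(path):
    # One middleware web-service request; raise on transport/HTTP failure.
    resp = requests.get(f"{MIDDLEWARE}/{path}", timeout=10)
    resp.raise_for_status()
    return resp.json()

def open_schedule_view(sol):
    # A single user action issues several service requests concurrently.
    paths = [f"schedule/staffing?sol={sol}",
             f"schedule/events?sol={sol}",
             f"time/mars-clocks?sol={sol}"]
    with concurrent.futures.ThreadPoolExecutor() as pool:
        staffing, events, clocks = pool.map(fetch, paths)
    # The client then presents a composition of the results transparently.
    return {"staffing": staffing, "events": events, "clocks": clocks}
```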
Serial Back-Plane Technologies in Advanced Avionics Architectures
NASA Technical Reports Server (NTRS)
Varnavas, Kosta
2005-01-01
Current backplane technologies such as VME, and current personal computer backplanes such as PCI, are shared-bus systems that can exhibit nondeterministic latencies: a card can take control of the bus and use resources indefinitely, affecting the ability of other cards in the backplane to acquire the bus. This nondeterminism directly degrades system reliability. Additionally, these parallel buses only offer bandwidths in the hundreds-of-megahertz range, and EMI and noise effects worsen as bandwidth increases. To provide scalable, fault-tolerant, advanced computing systems more applicable to today's connected computing environment, and to better meet the needs of future requirements for advanced space instruments and vehicles, serial backplane technologies should be implemented in advanced avionics architectures. Serial backplane technologies eliminate the problem of one card acquiring the bus and never relinquishing it, or one minor problem on the backplane bringing the whole system down. Being serial instead of parallel also reduces many of the signal-integrity issues associated with parallel backplanes, further improving reliability. The increased speeds associated with a serial backplane are an added bonus.
Space station electrical power system availability study
NASA Technical Reports Server (NTRS)
Turnquist, Scott R.; Twombly, Mark A.
1988-01-01
ARINC Research Corporation performed a preliminary reliability, availability, and maintainability (RAM) analysis of the NASA space station Electric Power System (EPS). The analysis was performed using the ARINC Research-developed UNIRAM RAM assessment methodology and software program, in two phases: EPS modeling and EPS RAM assessment. The EPS was modeled in four parts: the insolar power generation system, the eclipse power generation system, the power management and distribution system (both ring and radial power distribution control unit (PDCU) architectures), and the power distribution to the inner keel PDCUs. The EPS RAM assessment was conducted in five steps: the use of UNIRAM to perform baseline EPS model analyses and to determine the orbital replacement unit (ORU) criticalities; the determination of EPS sensitivity to on-orbit sparing of ORUs, with an indication of which ORUs may need to be spared on-orbit; the determination of EPS sensitivity to changes in ORU reliability; the determination of the expected annual number of ORU failures; and the integration of the power generation system model results with the distribution system model results to assess the full EPS. Conclusions were drawn and recommendations were made.
A Near-Term, High-Confidence Heavy Lift Launch Vehicle
NASA Technical Reports Server (NTRS)
Rothschild, William J.; Talay, Theodore A.
2009-01-01
The use of well understood, legacy elements of the Space Shuttle system could yield a near-term, high-confidence Heavy Lift Launch Vehicle that offers significant performance, reliability, schedule, risk, cost, and work force transition benefits. A side-mount Shuttle-Derived Vehicle (SDV) concept has been defined that has major improvements over previous Shuttle-C concepts. This SDV is shown to carry crew plus large logistics payloads to the ISS, support an operationally efficient and cost effective program of lunar exploration, and offer the potential to support commercial launch operations. This paper provides the latest data and estimates on the configurations, performance, concept of operations, reliability and safety, development schedule, risks, costs, and work force transition opportunities for this optimized side-mount SDV concept. The results presented in this paper have been based on established models and fully validated analysis tools used by the Space Shuttle Program, and are consistent with similar analysis tools commonly used throughout the aerospace industry. While these results serve as a factual basis for comparisons with other launch system architectures, no such comparisons are presented in this paper. The authors welcome comparisons between this optimized SDV and other Heavy Lift Launch Vehicle concepts.
TSSAR: TSS annotation regime for dRNA-seq data.
Amman, Fabian; Wolfinger, Michael T; Lorenz, Ronny; Hofacker, Ivo L; Stadler, Peter F; Findeiß, Sven
2014-03-27
Differential RNA sequencing (dRNA-seq) is a high-throughput screening technique designed to examine the architecture of bacterial operons in general and the precise position of transcription start sites (TSS) in particular. Hitherto, dRNA-seq data were analyzed by visualizing the sequencing reads mapped to the reference genome and manually annotating reliable positions. This is very labor intensive and, due to its subjectivity, biased. Here, we present TSSAR, a tool for automated de novo TSS annotation from dRNA-seq data that respects the statistics of dRNA-seq libraries. TSSAR uses the premise that the number of sequencing reads starting at a certain genomic position within a transcriptionally active region follows a Poisson distribution with a parameter that depends on the local strength of expression. The difference of the two dRNA-seq library counts thus follows a Skellam distribution. This provides a statistical basis for identifying significantly enriched primary transcripts. We assessed the performance by analyzing a publicly available dRNA-seq data set using TSSAR and two simple approaches that utilize user-defined score cutoffs, evaluating the power of reproducing the manual TSS annotation. Furthermore, the same data set was used to reproduce 74 TSS in H. pylori that had been experimentally validated by reliable techniques such as RACE or primer extension. Both analyses showed that TSSAR outperforms the static, cutoff-dependent approaches. Having an automated and efficient tool for analyzing dRNA-seq data facilitates the use of the dRNA-seq technique and promotes its application to more sophisticated analyses; for instance, monitoring the plasticity and dynamics of the transcriptomal architecture triggered by different stimuli and growth conditions becomes possible. The main asset of a novel tool for dRNA-seq analysis that reaches out to a broad user community is usability. As such, we provide TSSAR both as an intuitive RESTful web service (http://rna.tbi.univie.ac.at/TSSAR), together with a set of post-processing and analysis tools, and as a stand-alone version for use in high-throughput dRNA-seq data analysis pipelines.
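A minimal sketch of the statistical core described above, assuming read-start counts from the enriched and background libraries are Poisson so their difference is Skellam-distributed; the counts and local expression parameters below are invented, and the function is not TSSAR's actual implementation.

```python
# Skellam-based enrichment test for a candidate TSS position (illustrative).
from scipy.stats import skellam

def tss_p_value(plus_count, minus_count, mu_plus, mu_minus):
    """P(difference in read starts >= observed) under the null Skellam model.

    mu_plus / mu_minus are the locally estimated Poisson means of the
    enriched (TEX+) and background (TEX-) libraries at this position.
    """
    diff = plus_count - minus_count
    # skellam.sf(k) = P(D > k), so P(D >= diff) = sf(diff - 1).
    return skellam.sf(diff - 1, mu_plus, mu_minus)

# Example: 40 read starts in TEX+, 5 in TEX-, local means estimated as 8 and 6.
p = tss_p_value(40, 5, 8.0, 6.0)
print(f"enrichment p-value: {p:.3e}")  # small p -> candidate primary TSS
```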
Value-centric design architecture based on analysis of space system characteristics
NASA Astrophysics Data System (ADS)
Xu, Q.; Hollingsworth, P.; Smith, K.
2018-03-01
Emerging design concepts such as miniaturisation, modularity, and standardisation have contributed to the rapid development of small and inexpensive platforms, particularly cubesats. This is stimulating a revolution in space design and development, leading satellites into the era of "smaller, faster, and cheaper". However, the current requirement-centric design philosophy, focused on bespoke monolithic systems, along with the associated development and production process, does not inherently fit the innovative modular, standardised, and mass-produced technologies. This paper presents a new categorisation, characterisation, and value-centric design architecture to address this need for both traditional and novel system designs. Based on the categorisation of system configurations, a characterisation of space systems, comprising duplication, fractionation, and derivation, is proposed to capture the overall system configuration characteristics and promote potential hybrid designs. Complying with the definitions of the system characterisation, mathematical mapping relations between the system characterisation and the system properties are described to establish the mathematical foundation of the proposed value-centric design methodology. To illustrate the methodology, subsystem reliability relationships are analysed to explore potential system configurations in the design space. The results of the applications of system characteristic analysis clearly show that the effects of different configuration characteristics on the system properties can be effectively analysed and evaluated, enabling the optimisation of system configurations.
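As a hedged illustration of one such mapping from a configuration characteristic to a system property, the formula below gives the reliability of a duplicated configuration in which any k of n identical modules, each with reliability r(t), suffice for the mission; this is the generic k-out-of-n form, not necessarily the paper's exact formulation.

```latex
% Illustrative duplication-to-reliability mapping (generic k-out-of-n form).
\[
  R_{\mathrm{sys}}(t) \;=\; \sum_{i=k}^{n} \binom{n}{i}\, r(t)^{i}\,\bigl(1 - r(t)\bigr)^{\,n-i}
\]
% Special cases: full duplication with any one module sufficing (k = 1) gives
% R_sys = 1 - (1 - r)^n; a monolithic design is n = k = 1, i.e. R_sys = r.
```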
Flavel, Richard J; Guppy, Chris N; Rabbi, Sheikh M R; Young, Iain M
2017-01-01
The objective of this study was to develop a flexible and free image processing and analysis solution, based on the Public Domain ImageJ platform, for the segmentation and analysis of complex biological plant root systems in soil from X-ray tomography 3D images. Contrasting root architectures from wheat, barley, and chickpea root systems were grown in soil and scanned using a high-resolution micro-tomography system. A macro (Root1) was developed that reliably identified complex root systems with good to high accuracy (10% overestimation for chickpea, 1% underestimation for wheat, 8% underestimation for barley) and provided analysis of root length and angle. In-built flexibility allowed the user to (a) amend any aspect of the macro to account for specific user preferences, and (b) take account of computational limitations of the platform. The platform is free, flexible, and accurate in analysing root system metrics.
NASA Technical Reports Server (NTRS)
Benbenek, Daniel; Soloff, Jason; Lieb, Erica
2010-01-01
Selecting a communications and network architecture for future manned space flight requires an evaluation of the varying goals and objectives of the program, development of communications and network architecture evaluation criteria, and assessment of critical architecture trades. This paper uses Cx Program proposed exploration activities as a guideline; lunar sortie, outpost, Mars, and flexible path options are described. A set of communications and network architecture evaluation criteria is proposed and described, including interoperability, security, reliability, and ease of automating topology changes. Finally, a key set of architecture options is traded, including (1) multiplexing data at a common network layer vs. at the data link layer, (2) implementing multiple network layers vs. a single network layer, and (3) the use of a particular network layer protocol, primarily IPv6 vs. Delay Tolerant Networking (DTN). In summary, the protocol options are evaluated against the proposed exploration activities, and their relative performance with respect to the criteria is assessed. An architectural approach is recommended that includes (a) the capability of multiplexing at both the network layer and the data link layer and (b) a single network layer for operations at each program phase, as these solutions are best suited to respond to the widest array of program needs and meet each of the evaluation criteria.
Autonomic Computing for Spacecraft Ground Systems
NASA Technical Reports Server (NTRS)
Li, Zhenping; Savkli, Cetin; Jones, Lori
2007-01-01
Autonomic computing for spacecraft ground systems increases system reliability and reduces the cost of spacecraft operations and software maintenance. In this paper, we present an autonomic computing solution for spacecraft ground systems at NASA Goddard Space Flight Center (GSFC), which consists of an open standard for a message-oriented architecture referred to as the GMSEC architecture (Goddard Mission Services Evolution Center) and an autonomic computing tool, the Criteria Action Table (CAT). This solution has been used in many upgraded ground systems for NASA's missions, and it provides a framework for developing solutions with higher autonomic maturity.
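A hypothetical sketch in the spirit of a criteria-action table: each row pairs a predicate over an incoming status message with a corrective action. The message fields, thresholds, and action names are invented, not taken from GMSEC or CAT documentation.

```python
# Toy criteria-action table: rules fire corrective actions on status messages.

def notify_operator(msg):
    print(f"ALERT: {msg['source']} reported {msg['value']}")

def restart_component(msg):
    print(f"Restarting {msg['source']} ...")

CRITERIA_ACTION_TABLE = [
    # (criterion over a message, action to take when it holds)
    (lambda m: m["type"] == "heartbeat" and m["value"] == "missed", restart_component),
    (lambda m: m["type"] == "temperature" and m["value"] > 85.0, notify_operator),
]

def on_message(msg):
    # Evaluate every criterion against the message; fire all matching actions.
    for criterion, action in CRITERIA_ACTION_TABLE:
        if criterion(msg):
            action(msg)

on_message({"type": "temperature", "source": "gs-router-1", "value": 91.2})
```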
Designing a low cost bedside workstation for intensive care units.
Michel, A.; Zörb, L.; Dudeck, J.
1996-01-01
The paper describes the design and implementation of a software architecture for a low-cost bedside workstation for intensive care units. The development is fully integrated into the information infrastructure of the existing hospital information system (HIS) at the University Hospital of Giessen. It provides cost-efficient and reliable access for data entry and review from the HIS database from within patient rooms, even in very space-limited environments. The architecture further supports automatic data input from medical devices. First results from three different intensive care units are reported. PMID:8947771
Shilton, Katie
2015-02-01
The technical details of Internet architecture affect social debates about privacy and autonomy, intellectual property, cybersecurity, and the basic performance and reliability of Internet services. This paper explores one method for practicing anticipatory ethics in order to understand how a new infrastructure for the Internet might impact these social debates. It systematically examines values expressed by an Internet architecture engineering team, the Named Data Networking project, based on data gathered from publications and internal documents. Networking engineers making technical choices also weigh non-technical values when working on Internet infrastructure. Analysis of the team's documents reveals both values invoked in response to technical constraints and possibilities, such as efficiency and dynamism, and values, including privacy, security, and anonymity, which stem from a concern for personal liberties. More peripheral communitarian values espoused by the engineers include democratization and trust. The paper considers the contextual and social origins of these values, and then uses them as a method of practicing anticipatory ethics: considering the impact such priorities may have on a future Internet.
A Wearable System for Gait Training in Subjects with Parkinson's Disease
Casamassima, Filippo; Ferrari, Alberto; Milosevic, Bojan; Ginis, Pieter; Farella, Elisabetta; Rocchi, Laura
2014-01-01
In this paper, a system for gait training and rehabilitation for Parkinson's disease (PD) patients in a daily-life setting is presented. It is based on a wearable architecture aimed at the provision of real-time auditory feedback. Recent studies have, in fact, shown that PD patients can benefit from motor therapy based on auditory cueing and feedback, as happens in traditional rehabilitation contexts with verbal instructions given by clinical operators. To this end, a system based on a wireless body sensor network and a smartphone has been developed. The system enables real-time extraction of gait spatio-temporal features and their comparison with a patient's reference walking parameters captured in the lab under clinical operator supervision. Feedback is returned to the user in the form of vocal messages, encouraging the user to keep her/his walking behavior or to correct it. This paper describes the overall concept, the proposed usage scenario, and the parameters estimated for the gait analysis. It also presents, in detail, the hardware-software architecture of the system and an evaluation of system reliability based on tests with a few subjects. PMID:24686731
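A minimal sketch of the feedback rule described above, assuming cadence as the monitored gait parameter and a tolerance band around the lab-captured reference; the values, band width, and messages are illustrative, not the system's actual parameters.

```python
# Toy auditory-feedback rule: compare streaming cadence with a lab reference.

REFERENCE_CADENCE = 108.0   # steps/min, captured under clinician supervision
TOLERANCE = 0.10            # +/-10% band (assumed, not from the paper)

def feedback(current_cadence):
    low = REFERENCE_CADENCE * (1 - TOLERANCE)
    high = REFERENCE_CADENCE * (1 + TOLERANCE)
    if current_cadence < low:
        return "Speed up your steps"
    if current_cadence > high:
        return "Slow down your steps"
    return "Keep walking like this"

for cadence in (96.0, 107.5, 122.0):
    print(cadence, "->", feedback(cadence))
```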
Digitalization Culture VS Archaeological Visualization: Integration of Pipelines and Open Issues
NASA Astrophysics Data System (ADS)
Cipriani, L.; Fantini, F.
2017-02-01
Scholars with different backgrounds have carried out extensive surveys centred on how 3D digital models, data acquisition, and processing have changed over the years in the fields of archaeology and architecture, and more generally in the Cultural Heritage panorama: the current framework, focused on reality-based modelling, is split into several branches: acquisition, communication, and analysis of buildings (Pintus et al., 2014). Despite the wide set of well-structured and all-encompassing surveys on IT applications in Cultural Heritage, several issues still seem to remain open, in particular when the purpose of digital simulacra is to fit the "pre-informatics" legacy of architectural/archaeological representation (historical drawings with their graphic codes and aesthetics). Starting from a series of heterogeneous issues that came up while studying two Italian UNESCO sites, this paper underlines the importance of integrating pipelines from different technological fields in order to achieve multipurpose models capable of complying with the graphic codes of traditional survey, as well as semantic enrichment and, last but not least, data compression/portability and texture reliability under different lighting simulations.
Formal Foundations for the Specification of Software Architecture.
1995-03-01
Surviving fragments cite "Specifying Architectures Formally: A Case-Study Using KWIC" (Kestrel Institute, Palo Alto, CA, April 1994) and Kang, Kyo C., "Feature-Oriented Domain Analysis (FODA)", and indicate coverage of constraint-based and process-based architectures. Relations between these architecture theories were investigated, and a feasibility analysis on an image processing application demonstrated the use of architecture theories.
NASA Technical Reports Server (NTRS)
Nguyen, Hanson C.; Fraction, James; Ortiz-Acosta, Melyane; Dakermanji, George; Kercheval, Bradford P.; Hernandez-Pellerano, Amri; Kim, David S.; Jung, David S.; Meyer, Steven E.; Mallik, Udayan;
2016-01-01
The Goddard Modular Smallsat Architecture (GMSA) is being developed at NASA Goddard Space Flight Center (GSFC) to address reliability while minimizing cost and schedule challenges for future NASA CubeSat and SmallSat missions.
High-Precision Phenotyping of Grape Bunch Architecture Using Fast 3D Sensor and Automation.
Rist, Florian; Herzog, Katja; Mack, Jenny; Richter, Robert; Steinhage, Volker; Töpfer, Reinhard
2018-03-02
Wine growers prefer cultivars with looser bunch architecture because of the decreased risk of bunch rot. As a consequence, grapevine breeders have to select seedlings and new cultivars with regard to appropriate bunch traits. Bunch architecture is a mosaic of different single traits, which makes phenotyping labor-intensive and time-consuming. In the present study, a fast and high-precision phenotyping pipeline was developed. The optical sensor Artec Spider 3D scanner (Artec 3D, L-1466, Luxembourg) was used to generate dense 3D point clouds of grapevine bunches under lab conditions, and an automated analysis software called 3D-Bunch-Tool was developed to extract different single 3D bunch traits, i.e., the number of berries, berry diameter, single berry volume, total volume of berries, convex hull volume of grapes, bunch width, and bunch length. The method was validated on whole bunches of different grapevine cultivars and phenotypically variable breeding material. Reliable phenotypic data were obtained, showing highly significant correlations with ground-truth data (up to r² = 0.95 for berry number). Moreover, it was shown that the Artec Spider can be used directly in the field, where the acquired data show precision comparable to the lab application. This non-invasive and non-contact field application facilitates the first high-precision phenotyping pipeline based on 3D bunch traits in large plant sets.
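One of the traits named above, the convex hull volume, can be computed directly from a 3D point cloud; the sketch below uses a synthetic cloud in place of an Artec Spider scan and is not the 3D-Bunch-Tool implementation.

```python
# Convex-hull volume of a (synthetic) bunch point cloud, in the spirit of the
# trait extraction described above. Units and shape parameters are invented.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
# Fake elongated "bunch" of 5000 surface points, scaled in millimetres.
points = rng.normal(size=(5000, 3)) * [30.0, 30.0, 80.0]

hull = ConvexHull(points)
print(f"convex hull volume: {hull.volume / 1000.0:.1f} cm^3")
print(f"hull surface area:  {hull.area / 100.0:.1f} cm^2")
```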
NASA Technical Reports Server (NTRS)
Liu, Kuojuey Ray
1990-01-01
Least-squares (LS) estimation and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays, with efficient fault-tolerant schemes, are the major concerns of this dissertation. There are four major results. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array; fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained using the multi-phase operations. Performance issues are also considered.
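For readers unfamiliar with the building block, the sketch below shows a plain (non-systolic) NumPy version of the Householder reflection at the core of QR-decomposition RLS; the matrix and sign convention are illustrative, not the dissertation's array-level implementation.

```python
# Householder reflection annihilating a column below its first entry.
import numpy as np

def householder_vector(x):
    """Reflection vector v such that (I - 2 vv^T / v^T v) x = -sign(x0)||x|| e1."""
    v = x.astype(float).copy()
    v[0] += np.sign(x[0]) * np.linalg.norm(x)
    return v

def apply_householder(A, v):
    # Rank-1 update form avoids forming the full reflector matrix.
    return A - 2.0 * np.outer(v, v @ A) / (v @ v)

A = np.random.default_rng(1).normal(size=(5, 3))
v = householder_vector(A[:, 0])
R1 = apply_householder(A, v)
print(np.round(R1[:, 0], 10))  # first column is zero below the first entry
```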
DOE Office of Scientific and Technical Information (OSTI.GOV)
Learn, Mark Walter
Sandia National Laboratories is currently developing new processing and data communication architectures for use in future satellite payloads. These architectures will leverage the flexibility and performance of state-of-the-art static-random-access-memory-based Field Programmable Gate Arrays (FPGAs). One such FPGA is the radiation-hardened version of the Virtex-5 being developed by Xilinx. However, not all features of this FPGA are being radiation-hardened by design and could still be susceptible to on-orbit upsets. One such feature is the embedded hard-core PPC440 processor. Since this processor is implemented in the FPGA as a hard core, traditional mitigation approaches such as Triple Modular Redundancy (TMR) are not available to improve the processor's on-orbit reliability. The goal of this work is to investigate techniques other than TMR that can help mitigate upsets in the embedded hard-core PPC440 processor within the Virtex-5 FPGA. Implementing various mitigation schemes reliably within the PPC440 offers a powerful reconfigurable computing resource to these node-based processing architectures. This document summarizes the work done on the cache mitigation scheme for the embedded hard-core PPC440 processor within Virtex-5 FPGAs, and describes in detail the design of the cache mitigation scheme and the testing conducted at the radiation effects facility on the Texas A&M campus.
New Trends in Robotics for Agriculture: Integration and Assessment of a Real Fleet of Robots
Emmi, Luis; Gonzalez-de-Soto, Mariano; Pajares, Gonzalo; Gonzalez-de-Santos, Pablo
2014-01-01
Computer-based sensors and actuators such as global positioning systems, machine vision, and laser-based sensors have progressively been incorporated into mobile robots with the aim of configuring autonomous systems capable of shifting operator activities in agricultural tasks. However, the incorporation of many electronic systems into a robot impairs its reliability and increases its cost. Hardware minimization, as well as software minimization and ease of integration, is essential to obtain feasible robotic systems. A step forward in the application of automatic equipment in agriculture is the use of fleets of robots, in which a number of specialized robots collaborate to accomplish one or several agricultural tasks. This paper strives to develop a system architecture for both individual robots and robots working in fleets to improve reliability, decrease complexity and costs, and permit the integration of software from different developers. Several solutions are studied, from a fully distributed to a whole integrated architecture in which a central computer runs all processes. This work also studies diverse topologies for controlling fleets of robots and advances other prospective topologies. The architecture presented in this paper is being successfully applied in the RHEA fleet, which comprises three ground mobile units based on a commercial tractor chassis. PMID:25143976
Model Based Mission Assurance: Emerging Opportunities for Robotic Systems
NASA Technical Reports Server (NTRS)
Evans, John W.; DiVenti, Tony
2016-01-01
The emergence of Model Based Systems Engineering (MBSE) in a Model Based Engineering framework has created new opportunities to improve effectiveness and efficiencies across the assurance functions. The MBSE environment supports not only system architecture development, but also Systems Safety, Reliability, and Risk Analysis concurrently in the same framework. Linking to detailed design will further improve assurance capabilities to support failure avoidance and mitigation in flight systems. This is also leading to new assurance functions, including model assurance and management of uncertainty in the modeling environment. Further, assurance cases, structured hierarchical arguments or models, are emerging as a basis for a comprehensive viewpoint in which to support Model Based Mission Assurance (MBMA).
Non-functional Avionics Requirements
NASA Astrophysics Data System (ADS)
Paulitsch, Michael; Ruess, Harald; Sorea, Maria
Embedded systems in aerospace become more and more integrated in order to reduce the weight, volume/size, and power of hardware for more fuel efficiency. Such integration tendencies change architectural approaches to system architectures, which subsequently change non-functional requirements for platforms. This paper provides some insight into the state of the practice of non-functional requirements for developing ultra-critical embedded systems in the aerospace industry, including recent changes and trends. In particular, formal requirement capture and formal analysis of non-functional requirements of avionic systems - including hard real-time, fault-tolerance, reliability, and performance - are exemplified by means of recent developments in SAL and HiLiTE.
NASA Technical Reports Server (NTRS)
Bavuso, Salvatore J.; Rothmann, Elizabeth; Mittal, Nitin; Koppen, Sandra Howell
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems, and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical preprocessor Graphics Oriented (GO) program. GO is a graphical user interface for the HARP engine that enables the drawing of reliability/availability models on a monitor. A mouse is used to select fault tree gates or Markov graphical symbols from a menu for drawing.
NASA Astrophysics Data System (ADS)
Jansen, Florian; Kanal, Florian; Kahmann, Max; Tan, Chuong; Diekamp, Holger; Scelle, Raphael; Budnicki, Aleksander; Sutter, Dirk
2018-02-01
In this work we present an ultrafast laser system distinguished by its industry-ready reliability and its outstanding flexibility with regard to real-time, process-inherent parameters. The robust system design and linear amplifier architecture make the all-fiber series TruMicro 2000 ideally suited for passive coupling to hollow-core delivery fibers. In addition to details on the laser system itself, various application examples are shown, including the welding of different glasses and the ablation of silicon carbide and silicon.
NASA Technical Reports Server (NTRS)
Schurmeier, H. M.
1974-01-01
The long life of Pioneer interplanetary spacecraft is considered along with a general accelerated methodology for long-life mechanical components, dependable long-lived household appliances, and the design and development philosophy to achieve reliability and long life in large turbine generators. Other topics discussed include an integrated management approach to long life in space, artificial heart reliability factors, and architectural concepts and redundancy techniques in fault-tolerant computers. Individual items are announced in this issue.
Need for Cost Optimization of Space Life Support Systems
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Anderson, Grant
2017-01-01
As the nation plans manned missions that go far beyond Earth orbit to Mars, there is an urgent need for a robust, disciplined systems engineering methodology that can identify an optimized Environmental Control and Life Support (ECLSS) architecture for long duration deep space missions. But unlike the previously used Equivalent System Mass (ESM), the method must be inclusive of all driving parameters and emphasize the economic analysis of life support system design. The key parameter for this analysis is Life Cycle Cost (LCC). LCC takes into account the cost for development and qualification of the system, launch costs, operational costs, maintenance costs and all other relevant and associated costs. Additionally, an effective methodology must consider system technical performance, safety, reliability, maintainability, crew time, and other factors that could affect the overall merit of the life support system.
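As a hedged illustration of the cost roll-up implied above, a generic life-cycle-cost form might be written as follows; the term breakdown is illustrative, not NASA's official ECLSS cost model.

```latex
% Generic LCC roll-up (illustrative breakdown, assumed symbols).
\[
  \mathrm{LCC} \;=\; C_{\mathrm{dev}} + C_{\mathrm{qual}}
      + c_{\mathrm{launch}}\, m_{\mathrm{total}}
      + \sum_{y=1}^{Y} \bigl( C_{\mathrm{ops},y} + C_{\mathrm{maint},y} \bigr)
\]
% where c_launch is cost per kilogram to the mission orbit, m_total includes
% spares carried to meet the reliability target, and Y is the mission
% duration in years. Raising reliability via spares raises m_total, which is
% how the reliability and cost terms trade against each other.
```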
Design-for-reliability (DfR) of aerospace electronics: Attributes and challenges
NASA Astrophysics Data System (ADS)
Bensoussan, A.; Suhir, E.
The next generation of multi-beam satellite systems that would be able to provide effective interactive communication services will have to operate within a highly flexible architecture. One option to develop such flexibility is to employ microwave and/or optoelectronic components and to make them reliable. The use of optoelectronic devices, equipment, and systems will indeed result in significant improvement in the state of the art only provided that the new designs suggest a novel and effective architecture that combines the merits of good functional performance, satisfactory mechanical (structural) reliability, and high cost effectiveness. The obvious challenge is the ability to design and fabricate equipment based on EEE components that would be able to successfully withstand harsh space environments for the entire duration of the mission. It is imperative that the major players in the space industry, such as manufacturers, industrial users, and space agencies, understand the importance and the limits of the achievable quality and reliability of optoelectronic devices operated in harsh environments. It is equally imperative that the physics of possible failures is well understood and, if necessary, minimized, and that adequate quality standards are developed and employed. The space community has to identify and develop a strategic approach for validating optoelectronic products. This should be done with consideration of numerous intrinsic and extrinsic requirements for the systems' performance. When considering a particular next-generation optoelectronic space system, the space community needs to address the following major issues: proof of concept for the system, proof of reliability, and proof of performance, taking into account the specifics of the anticipated application. High operational reliability cannot be left to the prognostics and health monitoring/management (PHM) effort and stage, no matter how important and effective such an effort might be. Reliability should be pursued at all stages of the equipment lifetime: design, product development, manufacturing, burn-in testing and, of course, subsequent PHM after the space apparatus is launched and operated.
Architectural Analysis of Dynamically Reconfigurable Systems
NASA Technical Reports Server (NTRS)
Lindvall, Mikael; Godfrey, Sally; Ackermann, Chris; Ray, Arnab; Yonkwa, Lyly
2010-01-01
Topics include: the problem (increased flexibility of architectural styles decreases analyzability; behavior emerges and varies depending on the configuration; whether the resulting system runs according to the intended design; and architectural decisions can impede or facilitate testing); a top-down approach to architecture analysis, detection of defects and deviations, and architecture and its testability; currently targeted projects GMSEC and CFS; analyzing software architectures; analyzing runtime events; actual architecture recognition; GMPUB in Dynamic SAVE; sample output from the new approach; taking message timing delays into account; CFS examples of architecture and testability; some recommendations for improved testability; CFS examples of abstract interfaces and testability; and a CFS example of opening some internal details.
NASA Technical Reports Server (NTRS)
1983-01-01
User alignment plan, physical and life sciences and applications, commercial requirements, national security, space operations, user needs, foreign contacts, mission scenario analysis and architectural concepts, alternative systems concepts, mission operations architectural development, architectural analysis trades, evolution, configuration, and technology development are discussed.
The CHT2 Project: Diachronic 3d Reconstruction of Historic Sites
NASA Astrophysics Data System (ADS)
Guidi, G.; Micoli, L.; Gonizzi Barsanti, S.; Malik, U.
2017-08-01
Digital modelling of archaeological and architectural monuments in their current state and in their presumed past aspect has been recognized not only as a way of explaining the genesis of a historical site to the public, but also as an effective tool for research. The search for historical sources, their proper analysis, and the interdisciplinary relationship between technological disciplines and the humanities are fundamental for obtaining reliable hypothetical reconstructions. This paper presents an experimental activity defined by the project Cultural Heritage Through Time - CHT2 (http://cht2-project.eu), funded in the framework of the Joint Programming Initiative on Cultural Heritage (JPI-CH) of the European Commission. Its goal is to develop time-varying 3D products, from landscape to architectural scale; the work deals with the implementation of the methodology on one of the case studies: the late Roman circus of Milan, built in the era when the city was the capital of the Western Roman Empire (286-402 A.D.). The work presented here covers one of the cases in which the physical evidence has now almost entirely disappeared. The diachronic reconstruction is based on a proper mix of quantitative data originated by 3D surveys at the present time and historical sources such as ancient maps, drawings, archaeological reports, archaeological restriction decrees, and old photographs. Such heterogeneous sources were first georeferenced and then properly integrated according to the methodology defined in the framework of the CHT2 project, to hypothesize a reliable reconstruction of the area in different historical periods.
Seamless Digital Environment – Data Analytics Use Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxstrand, Johanna
Multiple research efforts in the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program study the need for and design of an underlying architecture to support the increased amount and use of data in the nuclear power plant. More specifically, the LWRS research efforts Digital Architecture for an Automated Plant, Automated Work Packages, Computer-Based Procedures for Field Workers, and Online Monitoring have all identified the need for a digital architecture and, more importantly, for a Seamless Digital Environment (SDE). An SDE provides a means to access multiple applications, gather the data points needed, conduct the analysis requested, and present the result to the user with minimal or no effort by the user. During the 2016 annual Nuclear Information Technology Strategic Leadership (NITSL) group meeting, the nuclear utilities identified the need for research focused on data analytics. The effort was to develop and evaluate use cases for data mining and analytics employing information from plant sensors and databases to develop improved business analytics. The goal of the study is to research potential approaches to building an analytics solution for equipment reliability, on a small scale, focusing on either a single piece of equipment or a single system. The analytics solution will likely consist of a data integration layer, a predictive and machine learning layer, and a user interface layer that displays the output of the analysis in a straightforward, easy-to-consume manner. This report describes the use case study initiated by NITSL and conducted in a collaboration between Idaho National Laboratory, Arizona Public Service - Palo Verde Nuclear Generating Station, and NextAxiom Inc.
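A hedged sketch of what a small-scale equipment-reliability analytics layer might look like, with a data-integration step feeding an anomaly detector for a single component; the sensor names, data, and model choice are invented, as the report does not prescribe them.

```python
# Toy equipment-reliability analytics: flag anomalous operating points for one
# component from integrated sensor history. Data and thresholds are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Integrated sensor history for one pump: vibration (mm/s), bearing temp (C).
normal = rng.normal([2.0, 55.0], [0.3, 2.0], size=(500, 2))
drift = rng.normal([3.5, 68.0], [0.4, 2.5], size=(10, 2))  # degrading unit
history = np.vstack([normal, drift])

model = IsolationForest(contamination=0.02, random_state=0).fit(history)
flags = model.predict(history)            # -1 marks an anomalous sample
print(f"flagged {np.sum(flags == -1)} of {len(history)} samples for review")
```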
SARA - SURE/ASSIST RELIABILITY ANALYSIS WORKSTATION (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
SARA, the SURE/ASSIST Reliability Analysis Workstation, is a bundle of programs used to solve reliability problems. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. The Systems Validation Methods group at NASA Langley Research Center has created a set of four software packages that form the basis for a reliability analysis workstation, including three for use in analyzing reconfigurable, fault-tolerant systems and one for analyzing non-reconfigurable systems. The SARA bundle includes the three for reconfigurable, fault-tolerant systems: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), and PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920). As indicated by the program numbers in parentheses, each of these three packages is also available separately in two machine versions. The fourth package, which is only available separately, is FTC, the Fault Tree Compiler (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree which describes a non-reconfigurable system. PAWS/STEM and SURE are analysis programs which utilize different solution methods, but have a common input language, the SURE language. ASSIST is a preprocessor that generates SURE language from a more abstract definition. ASSIST, SURE, and PAWS/STEM are described briefly in the following paragraphs. For additional details about the individual packages, including pricing, please refer to their respective abstracts. ASSIST, the Abstract Semi-Markov Specification Interface to the SURE Tool program, allows a reliability engineer to describe the failure behavior of a fault-tolerant computer system in an abstract, high-level language. The ASSIST program then automatically generates a corresponding semi-Markov model. A one-page ASSIST-language description may result in a semi-Markov model with thousands of states and transitions. The ASSIST program also includes model-reduction techniques to facilitate efficient modeling of large systems. The semi-Markov model generated by ASSIST is in the format needed for input to SURE and PAWS/STEM. The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. SURE output is tabular. The PAWS/STEM package includes two programs for the creation and evaluation of pure Markov models describing the behavior of fault-tolerant reconfigurable computer systems: the Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. 
Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The programs that comprise the SARA package were originally developed for use on DEC VAX series computers running VMS and were later ported for use on Sun series computers running SunOS. They are written in C-language, Pascal, and FORTRAN 77. An ANSI compliant C compiler is required in order to compile the C portion of the Sun version source code. The Pascal and FORTRAN code can be compiled on Sun computers using Sun Pascal and Sun Fortran. For the VMS version, VAX C, VAX PASCAL, and VAX FORTRAN can be used to recompile the source code. The standard distribution medium for the VMS version of SARA (COS-10041) is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The standard distribution medium for the Sun version of SARA (COS-10039) is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. Electronic copies of the ASSIST user's manual in TeX and PostScript formats are provided on the distribution medium. DEC, VAX, VMS, and TK50 are registered trademarks of Digital Equipment Corporation. Sun, Sun3, Sun4, and SunOS are trademarks of Sun Microsystems, Inc. TeX is a trademark of the American Mathematical Society. PostScript is a registered trademark of Adobe Systems Incorporated.
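As a toy illustration of the kind of computation PAWS and STEM perform, the following solves a small continuous-time Markov reliability model via the matrix exponential; the triad-with-spare state space and the rates are invented, not taken from the SARA documentation.

```python
# Solve p'(t) = p(t) Q for a tiny fault-tolerant-architecture Markov model.
import numpy as np
from scipy.linalg import expm

lam, delta = 1e-4, 3.6e3   # fault rate and reconfiguration rate, /hour (assumed)

# States: 0 = triad OK; 1 = one fault, awaiting reconfiguration;
#         2 = reconfigured duplex; 3 = system failure (absorbing).
Q = np.array([
    [-3 * lam,       3 * lam,           0.0,      0.0],
    [0.0,      -(2 * lam + delta),      delta,    2 * lam],
    [0.0,            0.0,              -2 * lam,  2 * lam],
    [0.0,            0.0,               0.0,      0.0],
])

p0 = np.array([1.0, 0.0, 0.0, 0.0])  # start in the fully working triad state
t = 10.0                             # mission time, hours
p_t = p0 @ expm(Q * t)               # state distribution at time t
print(f"P(system failure by t={t} h) = {p_t[3]:.3e}")
```

Models like this become numerically stiff precisely because delta (reconfiguration, ~seconds) and lam (faults, ~years) differ by many orders of magnitude, which is why PAWS/STEM use specialized matrix-exponential methods.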
TD-LTE Wireless Private Network QoS Transmission Protection
NASA Astrophysics Data System (ADS)
Zhang, Jianming; Cheng, Chao; Wu, Zanhong
With the commencement of smart grid construction, the demands of the power business for reliability and security continue to grow, and the reliable transmission of the power TD-LTE wireless private network is receiving more and more attention. A TD-LTE power private network can provide different QoS services according to the user's business type, to protect the reliable transmission of each service. This article describes in detail the AF module of PCC in the EPC network, specifically introduces the setup of the AF module and the QoS mechanisms of the EPS bearer, fully considers the business characteristics of the power private network, and establishes a suitable architecture for mapping QoS parameters, ensuring the implementation of each QoS service. Through radio bearer management, reliable transmission of each service on the physical channel can be achieved.
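The QoS-parameter mapping the article describes can be pictured as a small table from power service class to EPS bearer parameters. A minimal sketch follows, assuming illustrative service classes and QCI assignments; the values below are inventions for illustration, not taken from the article or the 3GPP specifications:

```python
# Illustrative sketch of the kind of QoS parameter mapping described above:
# power-grid service classes mapped onto LTE bearer parameters.  The service
# names and (QCI, resource type, priority) assignments are assumptions.
QOS_MAP = {
    # service type           (QCI, resource type, priority)
    "protection_signalling": (3, "GBR",     3),  # low-latency, guaranteed rate
    "scada_telemetry":       (6, "non-GBR", 6),
    "metering_collection":   (8, "non-GBR", 8),
    "office_data":           (9, "non-GBR", 9),  # best effort
}

def bearer_request(service: str) -> dict:
    """Build the QoS part of a dedicated-bearer request for a service."""
    qci, rtype, prio = QOS_MAP[service]
    return {"qci": qci, "resource_type": rtype, "priority": prio}

print(bearer_request("scada_telemetry"))
```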
Fog-computing concept usage as means to enhance information and control system reliability
NASA Astrophysics Data System (ADS)
Melnik, E. V.; Klimenko, A. B.; Ivanov, D. Ya
2018-05-01
This paper focuses on the reliability of information and control systems (ICS). The authors propose using elements of the fog-computing concept to enhance the reliability function. The key idea of fog-computing is to shift computations to the fog layer of the network, and thus to decrease the workload of the communication environment and of the data-processing components. In an ICS, workload can likewise be distributed among sensors, actuators, and network infrastructure facilities near the sources of data. The authors simulated typical workload distribution situations for the “traditional” ICS architecture and for an architecture that uses elements of the fog-computing concept. The paper contains some models, selected simulation results, and conclusions about the prospects of fog-computing as a means to enhance ICS reliability.
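The workload-distribution idea can be made concrete with a toy reliability model. The sketch below is not the authors' simulation; it assumes a node's failure rate grows linearly with its utilisation and compares a single centralised ICS node against four fog-layer devices that can absorb a neighbour's share if one fails (the rates, loads, and 3-out-of-4 survival rule are all assumptions):

```python
# Toy model: failure rate rises with load, so spreading the same total
# workload over fog nodes improves the reliability function R(t).
import math

def node_reliability(load, t, base_rate=1e-5, k=4e-5):
    """R(t) for one node whose failure rate rises with load in [0, 1]."""
    return math.exp(-(base_rate + k * load) * t)

def k_of_n(r, k, n):
    """System survives if at least k of n identical nodes survive."""
    return sum(math.comb(n, i) * r**i * (1 - r)**(n - i)
               for i in range(k, n + 1))

t = 8760.0  # one year of operation, hours
central = node_reliability(1.0, t)            # one node carries everything
r_fog = node_reliability(0.25, t)             # each fog node carries 25%
fog = k_of_n(r_fog, 3, 4)                     # load redistributes on failure
print(f"centralised R(1y) = {central:.3f}, fog R(1y) = {fog:.3f}")
```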
Wireless Sensors Network (Sensornet)
NASA Technical Reports Server (NTRS)
Perotti, J.
2003-01-01
The Wireless Sensor Network System presented in this paper provides a flexible, reconfigurable architecture that could be used in a broad range of applications. It also provides a sensor network with increased reliability, decreased maintenance costs, and assured data availability by autonomously and automatically reconfiguring to overcome communication interference.
NASA Technical Reports Server (NTRS)
Sproles, Darrell W.; Bavuso, Salvatore J.
1994-01-01
The Hybrid Automated Reliability Predictor (HARP) integrated Reliability (HiRel) tool system for reliability/availability prediction offers a toolbox of integrated reliability/availability programs that can be used to customize the user's application in a workstation or nonworkstation environment. HiRel consists of interactive graphical input/output programs and four reliability/availability modeling engines that provide analytical and simulative solutions to a wide host of highly reliable fault-tolerant system architectures and is also applicable to electronic systems in general. The tool system was designed at the outset to be compatible with most computing platforms and operating systems and some programs have been beta tested within the aerospace community for over 8 years. This document is a user's guide for the HiRel graphical postprocessor program HARPO (HARP Output). HARPO reads ASCII files generated by HARP. It provides an interactive plotting capability that can be used to display alternate model data for trade-off analyses. File data can also be imported to other commercial software programs.
A New On-Line Diagnosis Protocol for the SPIDER Family of Byzantine Fault Tolerant Architectures
NASA Technical Reports Server (NTRS)
Geser, Alfons; Miner, Paul S.
2004-01-01
This paper presents the formal verification of a new protocol for online distributed diagnosis for the SPIDER family of architectures. An instance of the Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) architecture consists of a collection of processing elements communicating over a Reliable Optical Bus (ROBUS). The ROBUS is a specialized fault-tolerant device that guarantees Interactive Consistency, Distributed Diagnosis (Group Membership), and Synchronization in the presence of a bounded number of physical faults. Formal verification of the original SPIDER diagnosis protocol provided a detailed understanding that led to the discovery of a significantly more efficient protocol. The original protocol was adapted from the formally verified protocol used in the MAFT architecture. It required O(N) message exchanges per defendant to correctly diagnose failures in a system with N nodes. The new protocol achieves the same diagnostic fidelity, but only requires O(1) exchanges per defendant. This paper presents this new diagnosis protocol and a formal proof of its correctness using PVS.
Interplay between efficiency and device architecture for small molecule organic solar cells.
Williams, Graeme; Sutty, Sibi; Aziz, Hany
2014-06-21
Small molecule organic solar cells (OSCs) have experienced a resurgence of interest over their polymer solar cell counterparts, owing to their improved batch-to-batch (thus, cell-to-cell) reliability. In this systematic study on OSC device architecture, we investigate five different small molecule OSC structures, including the simple planar heterojunction (PHJ) and bulk heterojunction (BHJ), as well as several planar-mixed structures. The different OSC structures are studied over a wide range of donor:acceptor mixing concentrations to gain a comprehensive understanding of their charge transport behavior. Transient photocurrent decay measurements provide crucial information regarding the interplay between charge sweep-out and charge recombination, and ultimately hint toward space charge effects in planar-mixed structures. Results show that the BHJ/acceptor architecture, comprising a BHJ layer with high C60 acceptor content, generates OSCs with the highest performance by balancing charge generation with charge collection. The performance of other device architectures is largely limited by hole transport, with associated hole accumulation and space charge effects.
Chessa, Manuela; Bianchi, Valentina; Zampetti, Massimo; Sabatini, Silvio P; Solari, Fabio
2012-01-01
The intrinsic parallelism of visual neural architectures based on distributed hierarchical layers is well suited to be implemented on the multi-core architectures of modern graphics cards. The design strategies that allow us to optimally take advantage of such parallelism, in order to efficiently map on GPU the hierarchy of layers and the canonical neural computations, are proposed. Specifically, the advantages of a cortical map-like representation of the data are exploited. Moreover, a GPU implementation of a novel neural architecture for the computation of binocular disparity from stereo image pairs, based on populations of binocular energy neurons, is presented. The implemented neural model achieves good performances in terms of reliability of the disparity estimates and a near real-time execution speed, thus demonstrating the effectiveness of the devised design strategies. The proposed approach is valid in general, since the neural building blocks we implemented are a common basis for the modeling of visual neural functionalities.
Determination of an Optimal Commercial Data Bus Architecture for a Flight Data System
NASA Technical Reports Server (NTRS)
Crawford, Kevin; Johnson, Martin; Humphries, Rick (Technical Monitor)
2001-01-01
NASA/Marshall Space Flight Center (MSFC) is continually looking for methods to reduce cost and schedule while keeping the quality of work high. MSFC is NASA's lead center for space transportation and microgravity research. When supporting NASA's programs, several decisions concerning the avionics system must be made. Usually many trade studies must be conducted to determine the best ways to meet the customer's requirements. When defining the flight data system, one of the first trade studies normally conducted is the determination of the data bus architecture. The schedule, cost, reliability, and environments are some of the factors that are reviewed in the determination of the data bus architecture. Based on the studies, the data bus architecture could be a proprietary data bus or a commercial data bus. The cost factor usually removes the proprietary data bus from consideration. Commercial data buses range from Versa Module Eurocard (VME) to Compact PCI to STD 32 to PC 104. If cost, schedule, and size are the prime factors, VME is usually not considered, leaving Compact PCI, STD 32, and PC 104 as the candidate data bus architectures. MSFC's center director has funded a study from his discretionary fund to determine an optimal low-cost commercial data bus architecture. The goal of the study is to functionally and environmentally test the Compact PCI, STD 32, and PC 104 data bus architectures. This paper will summarize the results of the data bus architecture study.
NASA Astrophysics Data System (ADS)
Knosp, B.; Gangl, M.; Hristova-Veleva, S. M.; Kim, R. M.; Li, P.; Turk, J.; Vu, Q. A.
2015-12-01
The JPL Tropical Cyclone Information System (TCIS) brings together satellite, aircraft, and model forecast data from several NASA, NOAA, and other data centers to assist researchers in comparing and analyzing data and model forecasts related to tropical cyclones. Since 2010, the TCIS has run a near-real-time (NRT) data portal during the North Atlantic hurricane season, which typically lasts from June through October each year. Data collected by the TCIS vary by type, format, contents, and frequency and are served to the user in two ways: (1) as image overlays on a virtual globe and (2) as derived output from a suite of analysis tools. In order to support these two functions, the data must be collected and then made searchable by criteria such as date, mission, product, pressure level, and geospatial region. Creating a database architecture that is flexible enough to manage, intelligently interrogate, and ultimately present this disparate data to the user in a meaningful way has been the primary challenge. The database solution for the TCIS has been to use a hybrid MySQL + Solr implementation. After testing other relational database and NoSQL solutions, such as PostgreSQL and MongoDB respectively, this solution has given the TCIS the best offerings in terms of query speed and result reliability. This database solution also supports the challenging (and memory-intensive) geospatial queries that are necessary to support the analysis tools requested by users. Though hardly new technologies on their own, our implementation of MySQL + Solr had to be customized and tuned to accurately store, index, and search the TCIS data holdings. In this presentation, we will discuss how we arrived at our MySQL + Solr database architecture, why it offers us the most consistently fast and reliable results, and how it supports our front end so that we can offer users a look into our "big data" holdings.
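The hybrid query pattern described above might look roughly like the following sketch, in which Solr answers the geospatial and date search and MySQL supplies the relational metadata. The Solr core name, field names, bounding-box syntax, and the granules table schema are all assumptions, not the actual TCIS configuration:

```python
# A minimal sketch (with an assumed schema, not the actual TCIS one) of a
# hybrid lookup: Solr handles the geospatial/date query, MySQL joins in
# relational metadata for the matching granules.
import pysolr
import mysql.connector

solr = pysolr.Solr("http://localhost:8983/solr/tcis", timeout=10)

# Geospatial + date filter in Solr: granules inside a lat/lon box.
hits = solr.search(
    "product:rain_rate",
    fq=["obs_time:[2015-08-01T00:00:00Z TO 2015-08-07T23:59:59Z]",
        "location:[20,-80 TO 35,-60]"],  # assumed bbox field/syntax
    rows=50,
)
granule_ids = [h["granule_id"] for h in hits]

# Relational side: mission/product metadata for the matching granules.
db = mysql.connector.connect(host="localhost", user="tcis",
                             password="...", database="tcis")
cur = db.cursor(dictionary=True)
fmt = ",".join(["%s"] * len(granule_ids))
cur.execute(f"SELECT granule_id, mission, pressure_level, file_path "
            f"FROM granules WHERE granule_id IN ({fmt})", granule_ids)
for row in cur.fetchall():
    print(row)
```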
Disrupted resting-state functional architecture of the brain after 45-day simulated microgravity
Zhou, Yuan; Wang, Yun; Rao, Li-Lin; Liang, Zhu-Yuan; Chen, Xiao-Ping; Zheng, Dang; Tan, Cheng; Tian, Zhi-Qiang; Wang, Chun-Hui; Bai, Yan-Qiang; Chen, Shan-Guang; Li, Shu
2014-01-01
Long-term spaceflight induces both physiological and psychological changes in astronauts. To understand the neural mechanisms underlying these physiological and psychological changes, it is critical to investigate the effects of microgravity on the functional architecture of the brain. In this study, we used resting-state functional MRI (rs-fMRI) to study whether the functional architecture of the brain is altered after 45 days of −6° head-down tilt (HDT) bed rest, which is a reliable model for the simulation of microgravity. Sixteen healthy male volunteers underwent rs-fMRI scans before and after 45 days of −6° HDT bed rest. Specifically, we used a commonly employed graph-based measure of network organization, i.e., degree centrality (DC), to perform a full-brain exploration of the regions that were influenced by simulated microgravity. We subsequently examined the functional connectivities of these regions using a seed-based resting-state functional connectivity (RSFC) analysis. We found decreased DC in two regions, the left anterior insula (aINS) and the anterior part of the middle cingulate cortex (MCC; also called the dorsal anterior cingulate cortex in many studies), in the male volunteers after 45 days of −6° HDT bed rest. Furthermore, seed-based RSFC analyses revealed that a functional network anchored in the aINS and MCC was particularly influenced by simulated microgravity. These results provide evidence that simulated microgravity alters the resting-state functional architecture of the brains of males and suggest that the processing of salience information, which is primarily subserved by the aINS–MCC functional network, is particularly influenced by spaceflight. The current findings provide a new perspective for understanding the relationships between microgravity, cognitive function, autonomic neural function, and central neural activity. PMID:24926242
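Degree centrality as used here is a simple graph measure, sketched below on toy data. The voxel count, time-series length, and correlation threshold are assumptions; the study's actual preprocessing and thresholding may differ:

```python
# A minimal sketch of the degree-centrality (DC) measure: correlate every
# voxel's time series with every other, threshold, and count connections.
import numpy as np

def degree_centrality(ts: np.ndarray, r_thresh: float = 0.25) -> np.ndarray:
    """ts: (n_voxels, n_timepoints) array of preprocessed BOLD signals."""
    r = np.corrcoef(ts)                # full correlation matrix
    np.fill_diagonal(r, 0.0)           # ignore self-connections
    return (r > r_thresh).sum(axis=1)  # binary DC: count of strong edges

rng = np.random.default_rng(0)
ts = rng.standard_normal((500, 180))   # toy data: 500 voxels, 180 volumes
dc = degree_centrality(ts)
print(dc.mean(), dc.max())
```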
Reliable and Fault-Tolerant Software-Defined Network Operations Scheme for Remote 3D Printing
NASA Astrophysics Data System (ADS)
Kim, Dongkyun; Gil, Joon-Min
2015-03-01
The recent wide expansion of applicable three-dimensional (3D) printing and software-defined networking (SDN) technologies has led to a great deal of attention being focused on efficient remote control of manufacturing processes. SDN is a renowned paradigm for network softwarization, which has helped facilitate remote manufacturing in association with high network performance, since SDN is designed to control network paths and traffic flows, guaranteeing improved quality of services by obtaining network requests from end-applications on demand through the separated SDN controller or control plane. However, current SDN approaches are generally focused on the controls and automation of the networks, which indicates that there is a lack of management plane development designed for a reliable and fault-tolerant SDN environment. Therefore, in addition to the inherent advantage of SDN, this paper proposes a new software-defined network operations center (SD-NOC) architecture to strengthen the reliability and fault-tolerance of SDN in terms of network operations and management in particular. The cooperation and orchestration between SDN and SD-NOC are also introduced for the SDN failover processes based on four principal SDN breakdown scenarios derived from the failures of the controller, SDN nodes, and connected links. The abovementioned SDN troubles significantly reduce the network reachability to remote devices (e.g., 3D printers, super high-definition cameras, etc.) and the reliability of relevant control processes. Our performance consideration and analysis results show that the proposed scheme can shrink operations and management overheads of SDN, which leads to the enhancement of responsiveness and reliability of SDN for remote 3D printing and control processes.
Mars Hybrid Propulsion System Trajectory Analysis. Part I; Crew Missions
NASA Technical Reports Server (NTRS)
Chai, Patrick R.; Merrill, Raymond G.; Qu, Min
2015-01-01
NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture in which both chemical and electric propulsion systems are used to send crew and cargo to Mars destinations such as Phobos, Deimos, the surface of Mars, and other orbits around Mars. By combining chemical and electrical propulsion into a single spaceship and applying each where it is more effective, the hybrid architecture enables a series of Mars trajectories that are more fuel-efficient than an all-chemical architecture without significant increases in flight times. This paper provides the analysis of the interplanetary segments of the three Evolvable Mars Campaign crew missions to Mars using the hybrid transportation architecture. The trajectory analysis provides departure and arrival dates and propellant needs for the three crew missions that are used by the campaign analysis team for campaign build-up and logistics aggregation analysis. Sensitivity analyses were performed to investigate the impact of mass growth, departure window, and propulsion system performance on the hybrid transportation architecture. The results and system analysis from this paper contribute to analyses of the other Human Spaceflight Architecture Team tasks and feed into the definition of the Evolvable Mars Campaign.
Reckfort, Julia; Wiese, Hendrik; Pietrzyk, Uwe; Zilles, Karl; Amunts, Katrin; Axer, Markus
2015-01-01
Structural connectivity of the brain can be conceptualized as a multiscale organization. The present study is built on 3D-Polarized Light Imaging (3D-PLI), a neuroimaging technique targeting the reconstruction of nerve fiber orientations and therefore contributing to the analysis of brain connectivity. Spatial orientations of the fibers are derived from birefringence measurements of unstained histological sections that are interpreted by means of a voxel-based analysis. This implies that a single fiber orientation vector is obtained for each voxel, which reflects the net effect of all comprised fibers. We have utilized two polarimetric setups providing object-space resolutions of 1.3 μm/px (microscopic setup) and 64 μm/px (macroscopic setup) to carry out 3D-PLI and retrieve fiber orientations of the same tissue samples, but at complementary voxel sizes (i.e., scales). The present study identifies the main sources of the discrepancy between the fiber orientations observed when measuring the same sample with the two polarimetric systems. The differing optical resolutions and the diverging retardances of the implemented waveplates were identified as such sources. A methodology was implemented that enables compensation for the different systems' responses to the same birefringent sample. This opens up new ways to conduct multiscale analysis of brains by means of 3D-PLI and to provide a reliable basis for the transition between different scales of the nerve fiber architecture. PMID:26388744
NASA Technical Reports Server (NTRS)
Monell, D.; Mathias, D.; Reuther, J.; Garn, M.
2003-01-01
A new engineering environment constructed for the purposes of analyzing and designing Reusable Launch Vehicles (RLVs) is presented. The new environment has been developed to allow NASA to perform independent analysis and design of emerging RLV architectures and technologies. The new Advanced Engineering Environment (AEE) is both collaborative and distributed. It facilitates integration of the analyses by both vehicle performance disciplines and life-cycle disciplines. Current performance disciplines supported include: weights and sizing, aerodynamics, trajectories, propulsion, structural loads, and CAD-based geometries. Current life-cycle disciplines supported include: DDT&E cost, production costs, operations costs, flight rates, safety and reliability, and system economics. Involving six NASA centers (ARC, LaRC, MSFC, KSC, GRC and JSC), AEE has been tailored to serve as a web-accessed agency-wide source for all of NASA's future launch vehicle systems engineering functions. Thus, it is configured to facilitate (a) data management, (b) automated tool/process integration and execution, and (c) data visualization and presentation. The core components of the integrated framework are a customized PTC Windchill product data management server, a set of RLV analysis and design tools integrated using Phoenix Integration's Model Center, and an XML-based data capture and transfer protocol. The AEE system has seen production use during the Initial Architecture and Technology Review for the NASA 2nd Generation RLV program, and it continues to undergo development and enhancements in support of its current main customer, the NASA Next Generation Launch Technology (NGLT) program.
Reconfigurable tree architectures using subtree oriented fault tolerance
NASA Technical Reports Server (NTRS)
Lowrie, Matthew B.
1987-01-01
An approach to the design of reconfigurable tree architectures is presented in which spare processors are allocated at the leaves. The approach is unique in that spares are associated with subtrees and sharing of spares between these subtrees can occur. The Subtree Oriented Fault Tolerance (SOFT) approach is more reliable than previous approaches capable of tolerating link and switch failures, for both single-chip and multichip tree implementations, while reducing redundancy in terms of both spare processors and links. VLSI layout is O(n) for binary trees and is directly extensible to N-ary trees and to fault tolerance through performance degradation.
A Messaging Infrastructure for WLCG
NASA Astrophysics Data System (ADS)
Casey, James; Cons, Lionel; Lapka, Wojciech; Paladin, Massimo; Skaburskas, Konstantin
2011-12-01
During the EGEE-III project operational tools such as SAM, Nagios, Gridview, the regional Dashboard and GGUS moved to a communication architecture based on ActiveMQ, an open-source enterprise messaging solution. LHC experiments, in particular ATLAS, developed prototypes of systems using the same messaging infrastructure, validating the system for their use-cases. In this paper we describe the WLCG messaging use cases and outline an improved messaging architecture based on the experience gained during the EGEE-III period. We show how this provides a solid basis for many applications, including the grid middleware, to improve their resilience and reliability.
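A minimal sketch of this messaging pattern follows, assuming an ActiveMQ broker reached over STOMP with the stomp.py library (v8 API assumed; the broker address, credentials, and topic name are made up):

```python
# Publishing and consuming monitoring messages over an ActiveMQ broker via
# STOMP (one of the protocols ActiveMQ supports), using stomp.py (v8 API).
import time
import stomp

class MonitoringListener(stomp.ConnectionListener):
    def on_message(self, frame):
        print(f"site status update: {frame.body}")

conn = stomp.Connection([("mq.example.org", 61613)])  # assumed broker
conn.set_listener("", MonitoringListener())
conn.connect("user", "password", wait=True)

conn.subscribe(destination="/topic/grid.probe.results", id="1", ack="auto")
conn.send(destination="/topic/grid.probe.results",
          body='{"site": "CERN-PROD", "metric": "SRM-put", "status": "OK"}')
time.sleep(2)   # give the listener a moment to receive our own message
conn.disconnect()
```

Decoupling producers from consumers through the broker is what gives the applications their resilience: a consumer that is briefly down simply drains the queue when it returns.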
Development of the Functional Flow Block Diagram for the J-2X Rocket Engine System
NASA Technical Reports Server (NTRS)
White, Thomas; Stoller, Sandra L.; Greene, WIlliam D.; Christenson, Rick L.; Bowen, Barry C.
2007-01-01
The J-2X program calls for the upgrade of the Apollo-era Rocketdyne J-2 engine to higher power levels, using new materials and manufacturing techniques, and with more restrictive safety and reliability requirements than prior human-rated engines in NASA history. Such requirements demand a comprehensive systems engineering effort to ensure success. Pratt & Whitney Rocketdyne system engineers performed a functional analysis of the engine to establish the functional architecture. J-2X functions were captured in six major operational blocks. Each block was divided into sub-blocks or states. In each sub-block, the functions necessary to perform each state were determined. A functional engine schematic consistent with the fidelity of the system model was defined for this analysis. The blocks, sub-blocks, and functions were sequentially numbered to differentiate the states in which the functions were performed and to indicate the sequence of events. The engine system was functionally partitioned to provide separate and unique functional operators. Establishing unique functional operators as work output of the system architecture process is novel in liquid propulsion engine design. Each functional operator was described such that its unique functionality was identified. The decomposed functions were then allocated to the functional operators, both of which were the inputs to the subsystem or component performance specifications. PWR also used a novel approach to identify and map the engine functional requirements to customer-specified functions. The final result was a comprehensive Functional Flow Block Diagram (FFBD) for the J-2X engine system, decomposed to the component level and mapped to all functional requirements. This FFBD greatly facilitates component specification development, providing a well-defined trade space for functional trades at the subsystem and component level. It also provides a framework for function-based failure modes and effects analysis (FMEA), and a rigorous baseline for the functional architecture.
ESAS-Derived Earth Departure Stage Design for Human Mars Exploration
NASA Technical Reports Server (NTRS)
Flaherty, Kevin; Grant, Michael; Korzun, Ashley; Malo-Molina, Faure; Steinfeldt, Bradley; Stahl, Benjamin; Wilhite, Alan
2007-01-01
The Vision for Space Exploration has set the nation on a course to have humans on Mars as early as 2030. To reduce the cost and risk associated with human Mars exploration, NASA is planning for the Mars architecture to leverage the lunar architecture as fully as possible. This study takes the defined launch vehicles and system capabilities from ESAS and extends their application to DRM 3.0 to design an Earth Departure Stage (EDS) suitable for the cargo and crew missions to Mars. The impact of a propellant depot in LEO was assessed, and the depot was sized for use with the EDS. To quantitatively assess and compare the effectiveness of alternative designs, an initial baseline architecture was defined using the ESAS launch vehicles and DRM 3.0. The baseline architecture uses three NTR engines, LH2 propellant, no propellant depot in LEO, and launches on the Ares I and Ares V. The Mars transfer and surface elements from DRM 3.0 were considered to be fixed payloads in the design of the EDS. Feasible architecture alternatives were identified from previous architecture studies and anticipated capabilities and compiled in a morphological matrix. ESAS FOMs were used to determine the most critical design attributes for the effectiveness of the EDS. The ESAS-derived FOMs used in this study to assess alternative designs are effectiveness and performance, affordability, reliability, and risk. The individual FOMs were prioritized using the AHP, a method for pairwise comparison. All trades performed were evaluated with respect to the weighted FOMs, creating a Pareto frontier of equivalently ideal solutions. Additionally, each design on the frontier was evaluated based on its fulfillment of the weighted FOMs using TOPSIS, a quantitative method for ordinal ranking of the alternatives. The designs were assessed in an integrated environment using physics-based models for subsystem analysis where possible. However, for certain attributes such as engine type, historical, performance-based mass estimating relations were more easily employed. The elements from the design process were integrated into a single loop, allowing for rapid iteration of subsystem analyses and compilation of resulting designs.
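The TOPSIS ranking step reads naturally as a short matrix computation. The sketch below assumes three invented EDS alternatives, four criteria matching the ESAS-derived FOMs, and made-up AHP weights; it illustrates the method, not the study's data:

```python
# A minimal TOPSIS sketch of the ranking step described above.  The three
# candidate EDS designs, criteria values, and weights are invented.
import numpy as np

# rows = alternatives, cols = criteria: (performance, affordability,
# reliability, risk); higher is better except risk (a cost criterion).
X = np.array([
    [0.80, 0.55, 0.92, 0.30],   # NTR / LH2 baseline (values assumed)
    [0.70, 0.75, 0.88, 0.20],   # chemical variant (assumed)
    [0.85, 0.50, 0.90, 0.45],   # NTR + LEO propellant depot (assumed)
])
w = np.array([0.40, 0.25, 0.25, 0.10])   # AHP-style weights (assumed)
benefit = np.array([True, True, True, False])

V = w * X / np.linalg.norm(X, axis=0)           # weighted, vector-normalised
ideal = np.where(benefit, V.max(0), V.min(0))   # ideal best per criterion
nadir = np.where(benefit, V.min(0), V.max(0))   # ideal worst per criterion
d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - nadir, axis=1)
closeness = d_worst / (d_best + d_worst)        # 1.0 = coincides with ideal
print(np.argsort(-closeness))                   # ordinal ranking of designs
```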
Architecture Framework for Trapped-Ion Quantum Computer based on Performance Simulation Tool
NASA Astrophysics Data System (ADS)
Ahsan, Muhammad
The challenge of building a scalable quantum computer lies in striking an appropriate balance between designing a reliable system architecture from a large number of faulty computational resources and improving the physical quality of system components. Detailed investigation of performance variation with the physics of the components and the system architecture requires an adequate performance simulation tool. In this thesis we demonstrate a software tool capable of (1) mapping and scheduling a quantum circuit onto a realistic quantum hardware architecture with physical resource constraints, (2) evaluating performance metrics such as the execution time and the success probability of the algorithm execution, and (3) analyzing the constituents of these metrics and visualizing resource utilization to identify the system components that crucially define the overall performance. Using this versatile tool, we explore the vast design space for a modular quantum computer architecture based on trapped ions. We find that while success probability is uniformly determined by the fidelity of the physical quantum operations, the execution time is a function of the system resources invested at various layers of the design hierarchy. At the physical level, the number of lasers performing quantum gates impacts the latency of fault-tolerant circuit block execution. When these blocks are used to construct a meaningful arithmetic circuit such as a quantum adder, the number of ancilla qubits for complicated non-Clifford gates and the entanglement resources needed to establish long-distance communication channels become the major performance-limiting factors. Next, in order to factorize large integers, these adders are assembled into the modular exponentiation circuit that comprises the bulk of Shor's algorithm. At this stage, the overall scaling of resource-constrained performance with problem size describes the effectiveness of the chosen design. By matching the resource investment with the pace of advancement in hardware technology, we find optimal designs for different types of quantum adders. Conclusively, we show that the 2,048-bit Shor's algorithm can be reliably executed within a resource budget of 1.5 million qubits.
Differentiated protection method in passive optical networks based on OPEX
NASA Astrophysics Data System (ADS)
Zhang, Zhicheng; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng
2011-12-01
Reliable service delivery is becoming more significant as dependency on electronic services increases throughout society. As the capability of PONs increases, both residential and business customers may be included in a single PON. Meanwhile, OPEX has proven to be a very important component of the total cost for a telecommunication operator. Thus, in this paper we present a partial-protection PON architecture and compare the operational expenditures (OPEX) of fully duplicated and partly duplicated protection for ONUs with different distributed fiber lengths, reliability requirements, and penalty costs per hour. Finally, we propose a differentiated protection method to minimize OPEX.
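The OPEX trade studied here can be illustrated with a toy expected-penalty calculation. The availability figures, spare-fibre upkeep cost, and penalty rates below are assumptions, not the paper's numbers:

```python
# Toy version of the trade: expected yearly penalty cost of an ONU's
# downtime, with and without a duplicated distribution fibre.
HOURS_PER_YEAR = 8760.0

def expected_penalty(availability: float, penalty_per_hour: float) -> float:
    return (1.0 - availability) * HOURS_PER_YEAR * penalty_per_hour

a_unprotected = 0.9995    # single feeder+distribution path (assumed)
a_protected = 0.999995    # fully duplicated path (assumed)
protection_opex = 120.0   # yearly upkeep of the spare fibre, $ (assumed)

for label, a, extra in [("unprotected", a_unprotected, 0.0),
                        ("protected", a_protected, protection_opex)]:
    for rate in (5.0, 200.0):   # residential vs business penalty, $/h
        cost = expected_penalty(a, rate) + extra
        print(f"{label:12s} penalty={rate:6.1f}$/h -> OPEX {cost:8.1f} $/yr")
```

Even in this toy model the differentiated conclusion appears: duplication pays off for the business-grade penalty rate but not for the residential one.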
NASA Technical Reports Server (NTRS)
Sizlo, T. R.; Berg, R. A.; Gilles, D. L.
1979-01-01
An augmentation system for a 230 passenger, twin engine aircraft designed with a relaxation of conventional longitudinal static stability was developed. The design criteria are established and candidate augmentation system control laws and hardware architectures are formulated and evaluated with respect to reliability, flying qualities, and flight path tracking performance. The selected systems are shown to satisfy the interpreted regulatory safety and reliability requirements while maintaining the present DC 10 (study baseline) level of maintainability and reliability for the total flight control system. The impact of certification of the relaxed static stability augmentation concept is also estimated with regard to affected federal regulations, system validation plan, and typical development/installation costs.
Gordon, Evan M.; Stollstorff, Melanie; Vaidya, Chandan J.
2012-01-01
Many researchers have noted that the functional architecture of the human brain is relatively invariant during task performance and the resting state. Indeed, intrinsic connectivity networks (ICNs) revealed by resting-state functional connectivity analyses are spatially similar to regions activated during cognitive tasks. This suggests that patterns of task-related activation in individual subjects may result from the engagement of one or more of these ICNs; however, this has not been tested. We used a novel analysis, spatial multiple regression, to test whether the patterns of activation during an N-back working memory task could be well described by a linear combination of ICNs delineated using Independent Components Analysis at rest. We found that across subjects, the cingulo-opercular Set Maintenance ICN, as well as right and left Frontoparietal Control ICNs, were reliably activated during working memory, while Default Mode and Visual ICNs were reliably deactivated. Further, involvement of Set Maintenance, Frontoparietal Control, and Dorsal Attention ICNs was sensitive to varying working memory load. Finally, the degree of left Frontoparietal Control network activation predicted response speed, while activation in both left Frontoparietal Control and Dorsal Attention networks predicted task accuracy. These results suggest that a close relationship between resting-state networks and task-evoked activation is functionally relevant for behavior, and that spatial multiple regression analysis is a suitable method for revealing that relationship. PMID:21761505
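Spatial multiple regression here amounts to regressing a task-activation map on the ICN spatial maps. A toy sketch with synthetic data follows (the map shapes and loadings are invented):

```python
# A minimal sketch of the "spatial multiple regression" idea: express a
# subject's task-activation map as a linear combination of resting-state
# ICN spatial maps via least squares.
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_icn = 10000, 8
icn_maps = rng.standard_normal((n_vox, n_icn))   # ICA spatial maps (toy)
beta_true = np.array([1.2, 0.8, 0.0, 0.0, -0.9, 0.0, 0.0, 0.3])
task_map = icn_maps @ beta_true + 0.5 * rng.standard_normal(n_vox)

# Design matrix: the ICN maps plus an intercept column.
X = np.column_stack([np.ones(n_vox), icn_maps])
beta, *_ = np.linalg.lstsq(X, task_map, rcond=None)
print("estimated ICN loadings:", np.round(beta[1:], 2))
# positive loadings ~ ICNs "activated" by the task, negative ~ deactivated
```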
Increasing Small Satellite Reliability- A Public-Private Initiative
NASA Technical Reports Server (NTRS)
Johnson, Michael A.; Beauchamp, Patricia; Schone, Harald; Sheldon, Doug; Fuhrman, Linda; Sullivan, Erica; Fairbanks, Tom; Moe, Miquel; Leitner, Jesse
2017-01-01
At present, CubeSat components and buses are generally appropriate only for missions where significant risk of failure, or the inability to quantify risk or confidence, is acceptable. However, in the future we anticipate that CubeSats will be used for missions requiring reliability of 1-3 years for Earth-observing missions and even longer for planetary, heliophysics, and astrophysics missions. Their growing potential utility is driving an interagency effort to improve and quantify CubeSat reliability and, more generally, small satellite mission risk. The Small Satellite Reliability Initiative (SSRI)—an ongoing activity with broad collaborative participation from civil, DoD, and commercial space systems providers and stakeholders—targets this challenge. The Initiative seeks to define implementable and broadly accepted approaches to achieve reliability and acceptable risk postures associated with several SmallSat mission risk classes—from “do no harm” missions to those whose failure would result in loss or delay of key national objectives. These approaches will maintain, to the extent practical, the cost efficiencies associated with small satellite missions and consider constraints associated with supply chain elements, as appropriate. The SSRI addresses this challenge from two architectural levels—the mission or system level, and the component or subsystem level. The mission- or system-level scope targets assessment approaches that are efficient and effective, with mitigation strategies that facilitate resiliency to mission or system anomalies, while the component- or subsystem-level scope addresses the challenge at lower architectural levels. The initiative does not limit strategies and approaches to proven and traditional methodologies, but is focused on fomenting thought on novel and innovative solutions. This paper discusses the genesis of and drivers for this initiative, how the public-private collaboration is being executed, findings and recommendations derived to date, and next steps toward broadening small satellite mission potential.
Minimum Control Requirements for Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Boulange, Richard; Jones, Harry; Jones, Harry
2002-01-01
Advanced control technologies are not necessary for the safe, reliable and continuous operation of Advanced Life Support (ALS) systems. ALS systems can and are adequately controlled by simple, reliable, low-level methodologies and algorithms. The automation provided by advanced control technologies is claimed to decrease system mass and necessary crew time by reducing buffer size and minimizing crew involvement. In truth, these approaches increase control system complexity without clearly demonstrating an increase in reliability across the ALS system. Unless these systems are as reliable as the hardware they control, there is no savings to be had. A baseline ALS system is presented with the minimal control system required for its continuous safe reliable operation. This baseline control system uses simple algorithms and scheduling methodologies and relies on human intervention only in the event of failure of the redundant backup equipment. This ALS system architecture is designed for reliable operation, with minimal components and minimal control system complexity. The fundamental design precept followed is "If it isn't there, it can't fail".
Implementation of Integrated System Fault Management Capability
NASA Technical Reports Server (NTRS)
Figueroa, Fernando; Schmalzel, John; Morris, Jon; Smith, Harvey; Turowski, Mark
2008-01-01
Fault management supports the rocket engine test mission with highly reliable and accurate measurements while improving availability and lifecycle costs. The core elements are: an architecture, taxonomy, and ontology (ATO) for DIaK management; intelligent sensor processes; intelligent element processes; intelligent controllers; intelligent subsystem processes; intelligent system processes; and intelligent component processes.
Design of an FMCW radar baseband signal processing system for automotive application.
Lin, Jau-Jr; Li, Yuan-Ping; Hsu, Wei-Chiang; Lee, Ta-Sung
2016-01-01
For a typical FMCW automotive radar system, a new design of baseband signal processing architecture and algorithms is proposed to overcome the ghost targets and overlapping problems in the multi-target detection scenario. To satisfy the short measurement time constraint without increasing the RF front-end loading, a three-segment waveform with different slopes is utilized. By introducing a new pairing mechanism and a spatial filter design algorithm, the proposed detection architecture not only provides high accuracy and reliability, but also requires low pairing time and computational loading. This proposed baseband signal processing architecture and algorithms balance the performance and complexity, and are suitable to be implemented in a real automotive radar system. Field measurement results demonstrate that the proposed automotive radar signal processing system can perform well in a realistic application scenario.
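Why a multi-slope waveform suppresses ghost targets can be seen from the beat-frequency equations: each chirp slope S yields f_b = S(2R/c) + 2v/λ, so two slopes fix a candidate (R, v) pair and the third slope vets the pairing. The sketch below uses assumed slopes, band, and gate width, not the paper's design values:

```python
# Toy sketch: solve (range, velocity) from two chirp slopes and validate
# the pairing against a third segment; a ghost pairing fails the check.
import numpy as np

C = 3e8
LAM = C / 77e9                   # 77 GHz automotive band (assumed)

def beat(R, v, slope):
    return slope * (2 * R / C) + 2 * v / LAM

slopes = (20e12, 10e12, 35e12)   # Hz/s for the three segments (assumed)
R_true, v_true = 60.0, -15.0     # target: 60 m, closing at 15 m/s
f = [beat(R_true, v_true, s) for s in slopes]

# 2x2 linear system from segments 1 and 2 for a candidate (R, v) pair.
A = np.array([[2 * slopes[0] / C, 2 / LAM],
              [2 * slopes[1] / C, 2 / LAM]])
R_hat, v_hat = np.linalg.solve(A, np.array(f[:2]))

# Consistency gate against the third segment (1 kHz tolerance, assumed).
ok = abs(beat(R_hat, v_hat, slopes[2]) - f[2]) < 1e3
print(f"R = {R_hat:.1f} m, v = {v_hat:.1f} m/s, pairing valid: {ok}")
```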
NASA Astrophysics Data System (ADS)
Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott
2003-09-01
A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.
An assessment of the real-time application capabilities of the SIFT computer system
NASA Technical Reports Server (NTRS)
Butler, R. W.
1982-01-01
The real-time capabilities of the SIFT computer system, a highly reliable multicomputer architecture developed to support the flight controls of a relaxed static stability aircraft, are discussed. The SIFT computer system was designed to meet extremely high reliability requirements and to facilitate a formal proof of its correctness. Although SIFT represents a significant achievement in fault-tolerant system research it presents an unusual and restrictive interface to its users. The characteristics of the user interface and its impact on application system design are assessed.
Managing Complexity in Next Generation Robotic Spacecraft: From a Software Perspective
NASA Technical Reports Server (NTRS)
Reinholtz, Kirk
2008-01-01
This presentation highlights the challenges in the design of software to support robotic spacecraft. Robotic spacecraft offer a higher degree of autonomy, however currently more capabilities are required, primarily in the software, while providing the same or higher degree of reliability. The complexity of designing such an autonomous system is great, particularly while attempting to address the needs for increased capabilities and high reliability without increased needs for time or money. The efforts to develop programming models for the new hardware and the integration of software architecture are highlighted.
Reliability and Validity of Nonsymbolic and Symbolic Comparison Tasks in School-Aged Children.
Castro, Danilka; Estévez, Nancy; Gómez, David; Dartnell, Pablo Ricardo
2017-12-04
Basic numerical processing has been regularly assessed using numerical nonsymbolic and symbolic comparison tasks. It has been assumed that these tasks index similar underlying processes. However, the evidence concerning the reliability and convergent validity across different versions of these tasks is inconclusive. We explored the reliability and convergent validity between two numerical comparison tasks (nonsymbolic vs. symbolic) in school-aged children. The relations between performance in both tasks and mental arithmetic were described, and a developmental trajectories analysis was also conducted. The influence of verbal and visuospatial working memory processes and age was controlled for in the analyses. Results show significant reliability (p < .001) between Blocks 1 and 2 for the nonsymbolic task (global adjusted RT (adjRT): r = .78, global efficiency measures (EMs): r = .74) and for the symbolic task (adjRT: r = .86, EMs: r = .86). Also, significant convergent validity between tasks (p < .001) was found for both adjRT (r = .71) and EMs (r = .70) after controlling for working memory and age. Finally, it was found that the relationship between nonsymbolic and symbolic efficiencies varies across the sample's age range. Overall, these findings suggest both tasks index the same underlying cognitive architecture and are appropriate for exploring the characteristics of the Approximate Number System (ANS). The evidence supports the central role of the ANS in arithmetic efficiency and suggests there are differences, across the age range assessed, in the extent to which efficiency in nonsymbolic and symbolic tasks reflects ANS acuity.
Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures
NASA Technical Reports Server (NTRS)
Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland
1998-01-01
Given the immanent gene expression mapping covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10^15) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
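The core of REVEAL, selecting for each element the smallest input set whose mutual information with the element's next state exhausts that state's entropy, fits in a short script. The sketch below is a simplified reimplementation on a toy three-element Boolean net, not the authors' C program:

```python
# Minimal REVEAL-style inference: find, for each target element, the
# smallest input set whose mutual information with the target's next state
# equals that state's entropy (i.e., the inputs determine the output).
from collections import Counter
from itertools import combinations
import math

def H(symbols):
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

def infer_inputs(transitions, target, k_max=3):
    """transitions: list of (state, next_state) tuples of 0/1 tuples."""
    outs = [nxt[target] for _, nxt in transitions]
    n_elems = len(transitions[0][0])
    for k in range(1, k_max + 1):
        for ins in combinations(range(n_elems), k):
            cond = [tuple(s[i] for i in ins) for s, _ in transitions]
            joint = [c + (o,) for c, o in zip(cond, outs)]
            # I(out; in) = H(out) + H(in) - H(out, in); determinism <=> I = H(out)
            if H(outs) + H(cond) - H(joint) >= H(outs) - 1e-9:
                return ins
    return None

# Toy 3-element net: x0' = x1 AND x2, exercised over all 8 states.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
trans = [(s, (s[1] & s[2], s[0], s[1])) for s in states]
print(infer_inputs(trans, target=0))   # -> (1, 2)
```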
Jeong, Chan-Seok; Kim, Dongsup
2016-02-24
Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements for coevolution analysis; however, thus far, the performance of these models has mainly been assessed by focusing on the aspect of protein structure. In this study, we built an MRF model whose graphical topology is determined by the residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weight of the MRF model. This structure-based MRF method was evaluated for three data sets, each of which annotates catalytic site, allosteric site, and comprehensively determined functional site information. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can more accurately represent positional coevolution information compared to the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adoption of a structure-based architecture could be an acceptable approximation for coevolution modeling with efficient computation complexity.
Extending LMS to Support IRT-Based Assessment Test Calibration
NASA Astrophysics Data System (ADS)
Fotaris, Panagiotis; Mastoras, Theodoros; Mavridis, Ioannis; Manitsaris, Athanasios
Developing unambiguous and challenging assessment material for measuring educational attainment is a time-consuming, labor-intensive process. As a result Computer Aided Assessment (CAA) tools are becoming widely adopted in academic environments in an effort to improve the assessment quality and deliver reliable results of examinee performance. This paper introduces a methodological and architectural framework which embeds a CAA tool in a Learning Management System (LMS) so as to assist test developers in refining items to constitute assessment tests. An Item Response Theory (IRT) based analysis is applied to a dynamic assessment profile provided by the LMS. Test developers define a set of validity rules for the statistical indices given by the IRT analysis. By applying those rules, the LMS can detect items with various discrepancies which are then flagged for review of their content. Repeatedly executing the aforementioned procedure can improve the overall efficiency of the testing process.
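The validity-rule step can be pictured as a simple filter over calibrated item parameters. In the sketch below the rule thresholds and the 3PL estimates are invented; a real deployment would take both from the LMS assessment profile and the IRT analysis:

```python
# Sketch of validity rules applied after an IRT calibration has produced
# discrimination (a), difficulty (b), and guessing (c) estimates per item.
# Thresholds and item data are assumptions for illustration.
def flag_items(items, a_min=0.5, b_range=(-3.0, 3.0), c_max=0.30):
    """items: dict id -> (a, b, c) from a 3PL calibration."""
    flagged = {}
    for item_id, (a, b, c) in items.items():
        reasons = []
        if a < a_min:
            reasons.append(f"low discrimination a={a:.2f}")
        if not b_range[0] <= b <= b_range[1]:
            reasons.append(f"extreme difficulty b={b:.2f}")
        if c > c_max:
            reasons.append(f"high guessing c={c:.2f}")
        if reasons:
            flagged[item_id] = reasons
    return flagged

items = {"Q1": (1.20, 0.3, 0.18), "Q2": (0.25, 1.1, 0.22),
         "Q3": (0.90, 3.8, 0.15), "Q4": (1.05, -0.4, 0.41)}
for item, why in flag_items(items).items():
    print(item, "->", "; ".join(why))   # Q2, Q3, Q4 flagged for review
```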
NASA Astrophysics Data System (ADS)
Mirzaei, Masoud; Eshghi, Hossein; Akhlaghi Bagherjeri, Fateme; Mirzaei, Mahdi; Farhadipour, Abolghasem
2018-07-01
α-Aminophosphonates have rarely been explored in the field of crystal engineering. These organic molecules are capable of forming reliable and reproducible supramolecular synthons through non-covalent interactions that can be employed for designing high-dimensional supramolecular architectures. Here, we systematically study the influence of conventional and unconventional hydrogen bonding interactions on the formation of these synthons and the stability of the crystal packing. Theoretical studies were employed to further confirm the presence of these synthons by comparing the stabilization energies of the dimers and monomers. The dependence of the stability of the NH⋯O hydrogen bonds on the aromatic substituents was investigated using NBO analysis. The most stable compound was determined by comparing the HOMO-LUMO energy gaps of all compounds, and the result was compared with the NBO analysis.
[Relational database for urinary stone ambulatory consultation. Assessment of initial outcomes].
Sáenz Medina, J; Páez Borda, A; Crespo Martinez, L; Gómez Dos Santos, V; Barrado, C; Durán Poveda, M
2010-05-01
Our objective was to create a relational database for monitoring lithiasic patients; we describe the architectural details and the initial results of the statistical analysis. Microsoft Access 2002 was used as the template. Four different tables were constructed to gather demographic data (table 1), clinical and laboratory findings (table 2), stone features (table 3), and therapeutic approach (table 4). For a reliability analysis of the database, the number of correctly stored data items was gathered. To evaluate the performance of the database, a prospective analysis was conducted, from May 2004 to August 2009, on 171 patients who were stone free after treatment (ESWL, surgery, or medical) from a total of 511 patients stored in the database. Lithiasic status (stone free or stone relapse) was used as the primary end point, while demographic factors (age, gender), lithiasic history, upper urinary tract alterations, and characteristics of the stone (side, location, composition, and size) were considered as predictive factors. A univariate analysis was conducted initially by chi-square test and supplemented by Kaplan-Meier estimates of time to stone recurrence. A multiple Cox proportional hazards regression model was generated to jointly assess the prognostic value of the demographic factors and the predictive value of stone characteristics. For the reliability analysis, 22,084 data items were available, corresponding to 702 consultations on 511 patients. Analysis of the data showed a recurrence rate of 85.4% (146/171, median time to recurrence 608 days, range 70-1758). In the univariate and multivariate analyses, none of the factors under consideration had a significant effect on the recurrence rate (p=ns). The relational database is useful for monitoring patients with urolithiasis. It allows easy control and update, as well as data storage for later use. The analysis conducted for its evaluation showed no influence of demographic factors or stone features on stone recurrence.
An enhanced performance through agent-based secure approach for mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Bisen, Dhananjay; Sharma, Sanjeev
2018-01-01
This paper proposes an agent-based secure enhanced performance approach (AB-SEP) for mobile ad hoc networks. In this approach, agent nodes are selected using an optimal node-reliability factor. This factor is calculated on the basis of node performance features such as degree difference, normalised distance value, energy level, mobility, and the node's optimal hello interval. After selection of agent nodes, malicious behaviour detection is performed using a fuzzy-based secure architecture (FBSA). To evaluate the performance of the proposed approach, a comparative analysis is done against conventional schemes using performance parameters such as packet delivery ratio, throughput, total packet forwarding, network overhead, end-to-end delay, and percentage of malicious detection.
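The agent-selection step can be sketched as a weighted combination of the listed node features. The weights, normalisation, and feature values below are assumptions; the paper's exact factor definition may differ:

```python
# Toy sketch of agent selection: score each node by a weighted combination
# of the performance features the paper lists, then rank candidates.
def reliability_factor(node, w=(0.25, 0.20, 0.25, 0.20, 0.10)):
    # all features pre-normalised to [0, 1]; mobility hurts, so invert it
    return (w[0] * node["degree_difference"]
            + w[1] * node["norm_distance"]
            + w[2] * node["energy_level"]
            + w[3] * (1.0 - node["mobility"])
            + w[4] * node["hello_interval_fit"])

nodes = {
    "n1": dict(degree_difference=0.8, norm_distance=0.6, energy_level=0.9,
               mobility=0.2, hello_interval_fit=0.7),
    "n2": dict(degree_difference=0.4, norm_distance=0.9, energy_level=0.5,
               mobility=0.7, hello_interval_fit=0.6),
}
agents = sorted(nodes, key=lambda n: reliability_factor(nodes[n]),
                reverse=True)
print("agent preference order:", agents)
```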
Novel high-density packaging of solid state diode pumped eye-safe laser for LIBS
NASA Astrophysics Data System (ADS)
Bares, Kim; Torgerson, Justin; McNeil, Laine; Maine, Patrick; Patterson, Steve
2018-02-01
Laser-Induced Breakdown Spectroscopy (LIBS) has proven to be a useful research tool for material analysis for decades. However, because of the amount of energy required in a few-nanosecond pulse to generate a stable and reliable LIBS signal, the lasers are often large and inefficient, relegating their implementation to research facilities, factory floors, and assembly lines. Small portable LIBS systems are now possible, without compromising on energy needs, by leveraging advances in high-density packaging of electronics, opto-mechanics, and highly efficient laser resonator architectures. This paper explores the integration of these techniques to achieve a mJ-class eye-safe LIBS laser source while retaining a small, lightweight package suitable for handheld systems.
Image-Based High-Throughput Field Phenotyping of Crop Roots
Bucksch, Alexander; Burridge, James; York, Larry M.; Das, Abhiram; Nord, Eric; Weitz, Joshua S.; Lynch, Jonathan P.
2014-01-01
Current plant phenotyping technologies to characterize agriculturally relevant traits have been primarily developed for use in laboratory and/or greenhouse conditions. In the case of root architectural traits, this limits phenotyping efforts, largely, to young plants grown in specialized containers and growth media. Hence, novel approaches are required to characterize mature root systems of older plants grown under actual soil conditions in the field. Imaging methods able to address the challenges associated with characterizing mature root systems are rare due, in part, to the greater complexity of mature root systems, including the larger size, overlap, and diversity of root components. Our imaging solution combines a field-imaging protocol and algorithmic approach to analyze mature root systems grown in the field. Via two case studies, we demonstrate how image analysis can be utilized to estimate localized root traits that reliably capture heritable architectural diversity as well as environmentally induced architectural variation of both monocot and dicot plants. In the first study, we show that our algorithms and traits (including 13 novel traits inaccessible to manual estimation) can differentiate nine maize (Zea mays) genotypes 8 weeks after planting. The second study focuses on a diversity panel of 188 cowpea (Vigna unguiculata) genotypes to identify which traits are sufficient to differentiate genotypes even when comparing plants whose harvesting date differs up to 14 d. Overall, we find that automatically derived traits can increase both the speed and reproducibility of the trait estimation pipeline under field conditions. PMID:25187526
Designing a Pedagogical Model for Web Engineering Education: An Evolutionary Perspective
ERIC Educational Resources Information Center
Hadjerrouit, Said
2005-01-01
In contrast to software engineering, which relies on relatively well established development approaches, there is a lack of a proven methodology that guides Web engineers in building reliable and effective Web-based systems. Currently, Web engineering lacks process models, architectures, suitable techniques and methods, quality assurance, and a…
A Study of Alternative Computer Architectures for System Reliability and Software Simplification.
1981-04-22
compression. Several known applications of neighborhood processing, such as noise removal, and boundary smoothing, are shown to be special cases of...Processing [21] A small effort was undertaken to implement image array processing at a very low cost. To this end, a standard Qwip Facsimile
PHOBOS Exploration using Two Small Solar Electric Propulsion (SEP) Spacecraft
NASA Technical Reports Server (NTRS)
Lang, J. J.; Baker, J. D.; McElrath, T. P.; Piacentine, J. S.; Snyder, J. S.
2012-01-01
The Phobos Surveyor mission concept provides an innovative, low-cost, highly reliable approach to exploring the inner solar system: a dual-manifest launch; the use of only flight-proven, well-characterized commercial off-the-shelf components; and a flexible mission architecture that allows for a slew of unique measurements.
Enhancing the Internet of Things Architecture with Flow Semantics
ERIC Educational Resources Information Center
DeSerranno, Allen Ronald
2017-01-01
Internet of Things ("IoT") systems are complex, asynchronous solutions often comprised of various software and hardware components developed in isolation of each other. These components function with different degrees of reliability and performance over an inherently unreliable network, the Internet. Many IoT systems are developed within…
Risk-Based Neuro-Grid Architecture for Multimodal Biometrics
NASA Astrophysics Data System (ADS)
Venkataraman, Sitalakshmi; Kulkarni, Siddhivinayak
Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, government and even home environments. However, such systems would require large distributed datasets with multiple computational realms spanning organisational boundaries and individual privacies.
Supporting Space Systems Design via Systems Dependency Analysis Methodology
NASA Astrophysics Data System (ADS)
Guariniello, Cesare
The increasing size and complexity of space systems and space missions pose severe challenges to space systems engineers. When complex systems and systems-of-systems are involved, the behavior of the whole entity is due not only to that of the individual systems involved but also to the interactions and dependencies between the systems. Dependencies can be varied and complex, and designers usually do not perform analysis of the impact of dependencies at the level of complex systems, or this analysis involves excessive computational cost, or it occurs at a later stage of the design process, after designers have already set detailed requirements, following a bottom-up approach. While classical systems engineering attempts to integrate the perspectives involved across the variety of engineering disciplines and the objectives of multiple stakeholders, there is still a need for more effective tools and methods capable of identifying, analyzing, and quantifying properties of the complex system as a whole and of modeling explicitly the effect of some of the features that characterize complex systems. This research describes the development and usage of Systems Operational Dependency Analysis and Systems Developmental Dependency Analysis, two methods based on parametric models of the behavior of complex systems, one in the operational domain and one in the developmental domain. The parameters of the developed models have intuitive meaning, are usable with subjective and quantitative data alike, and give direct insight into the causes of observed, and possibly emergent, behavior. The approach proposed in this dissertation combines models of one-to-one dependencies among systems and between systems and capabilities to analyze and evaluate the impact of failures or delays on the outcome of the whole complex system. The analysis accounts for cascading effects, partial operational failures, multiple failures or delays, and partial developmental dependencies. The user of these methods can assess the behavior of each system based on its internal status and on the topology of its dependencies on the systems connected to it. Designers and decision makers can therefore quickly analyze and explore the behavior of complex systems and evaluate different architectures under various working conditions. The methods support educated decision making both in the design and in the update process of systems architecture, reducing the need to execute extensive simulations. In particular, in the phase of concept generation and selection, the information given by the methods can be used to identify promising architectures to be further tested and improved, while discarding architectures that do not show the required level of global features. The methods, when used in conjunction with appropriate metrics, also allow for improved reliability and risk analysis, as well as for automatic scheduling and re-scheduling based on the features of the dependencies and on the accepted level of risk. This dissertation illustrates the use of the two methods in sample aerospace applications, both in the operational and in the developmental domain. The applications show how to use the developed methodology to evaluate the impact of failures, assess the criticality of systems, quantify metrics of interest, quantify the impact of delays, support informed decision making when scheduling the development of systems, and evaluate the achievement of partial capabilities.
A larger, well-framed case study illustrates how the Systems Operational Dependency Analysis method and the Systems Developmental Dependency Analysis method can support analysis and decision making, at the mid and high level, in the design process of architectures for the exploration of Mars. The case study also shows how the methods do not replace the classical systems engineering methodologies, but support and improve them.
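To make the dependency-propagation idea concrete, here is a minimal sketch in Python. It is not the dissertation's actual SODA formulation: the architecture, internal-status values, and per-link strength parameters are all invented, and the propagation rule (a dependent system inherits part of an upstream shortfall, scaled by dependency strength) is just one plausible reading of the approach described above.

```python
# Illustrative sketch of operational dependency propagation (not the
# dissertation's exact formulation): each system has an internal status
# (0-100) and depends on upstream systems with a given strength (0-1).
from graphlib import TopologicalSorter

# Hypothetical architecture: system -> (internal status, {upstream: strength})
systems = {
    "power":   (100, {}),
    "comms":   (90,  {"power": 0.8}),
    "nav":     (95,  {"power": 0.6, "comms": 0.4}),
    "science": (100, {"comms": 0.9, "nav": 0.5}),
}

def operability(internal, failed=()):
    """Propagate degradation through the dependency graph."""
    deps = {name: set(up) for name, (_, up) in systems.items()}
    oper = {}
    for name in TopologicalSorter(deps).static_order():
        status = 0.0 if name in failed else internal[name]
        for up, strength in systems[name][1].items():
            # A system inherits part of an upstream shortfall, scaled by
            # how strongly it depends on that upstream system.
            status = min(status, 100 - strength * (100 - oper[up]))
        oper[name] = status
    return oper

internal = {name: s for name, (s, _) in systems.items()}
print(operability(internal))                    # nominal case
print(operability(internal, failed={"power"}))  # cascading impact of a failure
```

Running the failure case shows the cascade: the power outage degrades comms, which in turn degrades nav and science, exactly the kind of ripple effect the methods are meant to expose without full simulation.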
Analysis of Organizational Architectures for the Air Force Tuition Assistance Program
2003-03-01
Thesis by Krista Zimmerman LaPietra, BS (AFIT/GOR/ENS/03-15), presented to the faculty, Department of the Air Force, Air University.
NASA Technical Reports Server (NTRS)
Phillips, Dave; Haas, William; Barth, Tim; Benjamin, Perakath; Graul, Michael; Bagatourova, Olga
2005-01-01
Range Process Simulation Tool (RPST) is a computer program that assists managers in rapidly predicting and quantitatively assessing the operational effects of proposed technological additions to, and/or upgrades of, complex facilities and engineering systems such as the Eastern Test Range. Originally designed for application to space transportation systems, RPST is also suitable for assessing effects of proposed changes in industrial facilities and large organizations. RPST follows a model-based approach that includes finite-capacity schedule analysis and discrete-event process simulation. A component-based, scalable, open architecture makes RPST easily and rapidly tailorable for diverse applications. Specific RPST functions include: (1) definition of analysis objectives and performance metrics; (2) selection of process templates from a process-template library; (3) configuration of process models for detailed simulation and schedule analysis; (4) design of operations-analysis experiments; (5) schedule- and simulation-based process analysis; and (6) optimization of performance by use of genetic algorithms and simulated annealing. The main benefits afforded by RPST are the provision of information that can be used to reduce costs of operation and maintenance, and the capability for affordable, accurate, and reliable prediction and exploration of the consequences of many alternative proposed decisions.
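RPST itself is not publicly specified here, so the following is only a toy discrete-event sketch of the kind of finite-capacity process simulation the abstract describes; the job counts, service rates, and server capacity are invented.

```python
# Toy discrete-event sketch of a finite-capacity process flow, in the spirit
# of RPST's schedule/simulation analysis (all numbers are illustrative).
import heapq, random

random.seed(1)
SERVERS = 2                       # finite capacity of the processing facility
events, t = [], 0.0
for job in range(20):             # 20 jobs with exponential interarrivals
    t += random.expovariate(1 / 5.0)
    events.append((t, "arrive", job))
heapq.heapify(events)

busy, queue, finished = 0, [], []
while events:
    now, kind, job = heapq.heappop(events)
    if kind == "arrive":
        queue.append(job)
    else:                         # a departure frees one server
        busy -= 1
        finished.append(now)
    while queue and busy < SERVERS:
        busy += 1
        service = random.expovariate(1 / 8.0)   # mean 8 h process step
        heapq.heappush(events, (now + service, "depart", queue.pop(0)))

print(f"completed {len(finished)} jobs; makespan {max(finished):.1f} h")
```

Varying SERVERS or the service rate and re-running is the simulation analogue of the "operations-analysis experiments" the tool automates.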
OPSAID Initial Design and Testing Report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, Steven A.; Stamp, Jason Edwin; Chavez, Adrian R.
2007-11-01
Process Control System (PCS) security is critical to our national security. Yet there are a number of technological, economic, and educational impediments to PCS owners implementing effective security on their systems. OPSAID (Open PCS Security Architecture for Interoperable Design), a project sponsored by the US Department of Energy's Office of Electricity Delivery and Energy Reliability, aims to address this issue by developing and testing an open-source architecture for PCS security. Sandia National Laboratories, along with a team of PCS vendors and owners, has developed and tested this PCS security architecture; this report describes their progress to date. The authors acknowledge colleagues at Sandia National Laboratories and Teumim Technical, LLC, as well as the members of the OPSAID Core Team (Schweitzer Engineering Laboratory, TelTone, and Entergy), whose assistance has been critical to the success and industry acceptance of the project; the work was funded by the U.S. Department of Energy/Office of Electricity Delivery and Energy Reliability (DOE/OE) as part of the National SCADA Test Bed (NSTB) Program. Executive summary: Process control systems are very important for critical infrastructure and manufacturing operations, yet cyber security technology in PCS is generally poor. The OPSAID program is intended to address these security shortcomings by accelerating the availability and deployment of comprehensive security technology for PCS, both for existing PCS and inherently secure PCS in the future. All activities are closely linked to industry outreach and advisory efforts. Generally speaking, the OPSAID project is focused on providing comprehensive security functionality to PCS that communicate using IP. This is done by creating an interoperable PCS security architecture and developing a reference implementation, which is tested extensively for performance and reliability. This report first provides background on the PCS security problem and OPSAID, followed by the goals and objectives of the project. The report also includes an overview of the results, including the OPSAID architecture and testing activities, along with results from industry outreach activities. Conclusion and recommendation sections follow, and a series of appendices provide more detailed information regarding architecture and testing activities. Summarizing the project results, the OPSAID architecture was defined, including modular security functionality and corresponding component modules. The reference implementation, comprising the collection of component modules, was tested extensively and proved to provide more than acceptable performance in a variety of test scenarios. The primary challenge in implementation and testing was correcting initial configuration errors. OPSAID industry outreach efforts were very successful. A small group of industry partners was extensively involved in both the design and testing of OPSAID, and conference presentations created a larger group of potential industry partners. Based upon experience implementing and testing OPSAID, as well as industry feedback, the OPSAID project has done well and is well received. Recommendations for future work include further development of advanced functionality, refinement of interoperability guidance, additional laboratory and field testing, and industry outreach that includes PCS owner education.
Analysis of Air Traffic Track Data with the AutoBayes Synthesis System
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin Philip; Cate, Karen; Lee, Alan G.
2010-01-01
The Next Generation Air Traffic System (NGATS) aims to provide substantial computer support for air traffic controllers. Algorithms for the accurate prediction of aircraft movements are of central importance for such software systems, but trajectory prediction has to work reliably in the presence of unknown parameters and uncertainties. We are using the AutoBayes program synthesis system to generate customized data analysis algorithms that process large sets of aircraft radar track data in order to estimate parameters and uncertainties. In this paper, we present how the tasks of finding structure in track data, estimating important parameters in climb trajectories, and detecting continuous descent approaches can be accomplished with compact task-specific AutoBayes specifications. We present an overview of the AutoBayes architecture and describe how its schema-based approach generates customized analysis algorithms, documented C/C++ code, and detailed mathematical derivations. Results of experiments with actual air traffic control data are discussed.
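AutoBayes generates such estimators automatically from declarative statistical specifications; as a hand-written stand-in for one of the tasks mentioned (finding structure in track data), the sketch below runs a small EM loop for a two-component Gaussian mixture over synthetic climb-rate values. The data and initial values are fabricated for illustration.

```python
# Hand-written analogue of a mixture-model estimator of the kind AutoBayes
# can synthesize: EM for a 1-D two-component Gaussian mixture.
import math, random

random.seed(0)
data = [random.gauss(1800, 200) for _ in range(100)] + \
       [random.gauss(3200, 300) for _ in range(60)]    # ft/min, two regimes

w, mu, sd = [0.5, 0.5], [1000.0, 4000.0], [500.0, 500.0]
for _ in range(50):                                    # EM iterations
    # E-step: posterior responsibility of each component for each point
    resp = []
    for x in data:
        p = [w[k] / (sd[k] * math.sqrt(2 * math.pi)) *
             math.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2) for k in (0, 1)]
        s = p[0] + p[1]
        resp.append((p[0] / s, p[1] / s))
    # M-step: re-estimate weights, means, and standard deviations
    for k in (0, 1):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sd[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2
                              for r, x in zip(resp, data)) / nk)

print([f"{m:.0f}±{s:.0f} ft/min (w={wk:.2f})" for m, s, wk in zip(mu, sd, w)])
```

The point of the synthesis approach is that this entire loop, plus its derivation, is generated from a few lines of specification rather than written by hand.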
Summary of Martian Dust Filtering Challenges and Current Filter Development
NASA Technical Reports Server (NTRS)
O'Hara, William J., IV
2017-01-01
Traditional air particulate filtering in manned spaceflight (Apollo, Shuttle, ISS, etc.) has used cleanable or replaceable catch filters such as screens and High-Efficiency Particulate Arrestance (HEPA) filters. However, the human mission to Mars architecture will require a new approach. It is Martian dust that is the particulate of concern, but the need also applies to particulates generated by crew. The Mars Exploration Program Analysis Group (MEPAG) highlighted this concern in its Mars Science Goals, Objectives, Investigations and Priorities document [7], saying specifically that one high-priority investigation will be to "Test ISRU atmospheric processing systems to measure resilience with respect to dust and other environmental challenge performance parameters that are critical to the design of a full-scale system." By stating this as a high priority, the MEPAG acknowledges that developing and adequately verifying this capability is critical to the success of a human mission to Mars. This architecture will require filtering capabilities that are highly reliable, will not restrict the flow path with clogging, and require little to no maintenance. This paper will summarize why this is the case, the general requirements for developing the technology, and the status of the progress made in this area.
NASA Astrophysics Data System (ADS)
Hall, Justin R.; Hastrup, Rolf C.
The United States Space Exploration Initiative (SEI) calls for the charting of a new and evolving manned course to the Moon, Mars, and beyond. This paper discusses key challenges in providing effective deep space telecommunications, navigation, and information management (TNIM) architectures and designs for Mars exploration support. The fundamental objectives are to provide the mission with means to monitor and control mission elements, acquire engineering, science, and navigation data, compute state vectors and navigate, and move these data efficiently and automatically between mission nodes for timely analysis and decision-making. Although these objectives do not depart, fundamentally, from those evolved over the past 30 years in supporting deep space robotic exploration, there are several new issues. This paper focuses on summarizing new requirements, identifying related issues and challenges, responding with concepts and strategies which are enabling, and, finally, describing candidate architectures, and driving technologies. The design challenges include the attainment of: 1) manageable interfaces in a large distributed system, 2) highly unattended operations for in-situ Mars telecommunications and navigation functions, 3) robust connectivity for manned and robotic links, 4) information management for efficient and reliable interchange of data between mission nodes, and 5) an adequate Mars-Earth data rate.
The growing need for microservices in bioinformatics.
Williams, Christopher L; Sica, Jeffrey C; Killen, Robert T; Balis, Ulysses G J
2016-01-01
Within the information technology (IT) industry, best practices and standards are constantly evolving and being refined. In contrast, computer technology utilized within the healthcare industry often evolves at a glacial pace, with reduced opportunities for justified innovation. Although the use of timely technology refreshes within an enterprise's overall technology stack can be costly, thoughtful adoption of select technologies with a demonstrated return on investment can be very effective in increasing productivity and at the same time, reducing the burden of maintenance often associated with older and legacy systems. In this brief technical communication, we introduce the concept of microservices as applied to the ecosystem of data analysis pipelines. Microservice architecture is a framework for dividing complex systems into easily managed parts. Each individual service is limited in functional scope, thereby conferring a higher measure of functional isolation and reliability to the collective solution. Moreover, maintenance challenges are greatly simplified by virtue of the reduced architectural complexity of each constitutive module. This fact notwithstanding, overall solutions rendered using a microservices-based approach provide equal or greater levels of functionality compared to conventional programming approaches. Bioinformatics, with its ever-increasing demand for performance and new testing algorithms, is the perfect use-case for such a solution. Moreover, if promulgated within the greater development community as an open-source solution, such an approach holds potential to be transformative to current bioinformatics software development. Bioinformatics relies on a nimble IT framework that can adapt to changing requirements. This communication presents a well-established software design and deployment strategy as a solution to current challenges within bioinformatics. Use of the microservices framework is an effective methodology for the fabrication and implementation of reliable and innovative software, made possible in a highly collaborative setting.
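As an illustration of the "limited functional scope" principle the authors describe, the sketch below implements a single-purpose bioinformatics service using only the Python standard library; the endpoint name and port are invented, not taken from the paper.

```python
# A deliberately tiny, single-purpose service in the microservice spirit:
# it computes GC content of a DNA sequence and nothing else, so it can be
# deployed, scaled, and replaced independently of the rest of a pipeline.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GCContentService(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/gc-content":          # hypothetical endpoint
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        seq = self.rfile.read(length).decode().strip().upper()
        gc = sum(seq.count(base) for base in "GC") / max(len(seq), 1)
        body = json.dumps({"length": len(seq), "gc_fraction": round(gc, 4)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("", 8080), GCContentService).serve_forever()
```

A client would then POST a raw sequence, e.g. `curl -d ACGTGC localhost:8080/gc-content`; the narrow contract is what makes the service easy to isolate, test, and swap out.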
Tai, Dean C.S.; Wang, Shi; Cheng, Chee Leong; Peng, Qiwen; Yan, Jie; Chen, Yongpeng; Sun, Jian; Liang, Xieer; Zhu, Youfu; Rajapakse, Jagath C.; Welsch, Roy E.; So, Peter T.C.; Wee, Aileen; Hou, Jinlin; Yu, Hanry
2014-01-01
Background & Aims: There is increasing need for accurate assessment of liver fibrosis/cirrhosis. We aimed to develop qFibrosis, a fully-automated assessment method combining quantification of histopathological architectural features, to address unmet needs in core biopsy evaluation of fibrosis in chronic hepatitis B (CHB) patients. Methods: qFibrosis was established as a combined index based on 87 parameters of architectural features. Images acquired from 25 thioacetamide-treated rat samples and 162 CHB core biopsies were used to train and test qFibrosis and to demonstrate its reproducibility. qFibrosis scoring was analyzed employing Metavir and Ishak fibrosis staging as standard references, and collagen proportionate area (CPA) measurement for comparison. Results: qFibrosis faithfully and reliably recapitulates Metavir fibrosis scores, as it can identify differences between all stages in both animal samples (p <0.001) and human biopsies (p <0.05). It is robust to sampling size, allowing for discrimination of different stages in samples of different sizes (area under the curve (AUC): 0.93–0.99 for animal samples: 1–16 mm²; AUC: 0.84–0.97 for biopsies: 10–44 mm in length). qFibrosis can significantly predict staging underestimation in suboptimal biopsies (<15 mm) and under- and over-scoring by different pathologists (p <0.001). qFibrosis can also differentiate between Ishak stages 5 and 6 (AUC: 0.73, p = 0.008), suggesting the possibility of monitoring intra-stage cirrhosis changes. Best of all, qFibrosis demonstrates superior performance to CPA on all counts. Conclusions: qFibrosis can improve fibrosis scoring accuracy and throughput, thus allowing for reproducible and reliable analysis of efficacies of anti-fibrotic therapies in clinical research and practice. PMID:24583249
Israel, Benjamin; Buysse, Daniel J; Krafty, Robert T; Begley, Amy; Miewald, Jean; Hall, Martica
2012-09-01
Quantify the short-term stability of multiple indices of sleep and nocturnal physiology in good sleeper controls and primary insomnia patients. Intra-class correlation coefficients (ICC) were used to quantify the short-term stability of study outcomes. Sleep laboratory. Fifty-four adults with primary insomnia (PI) and 22 good sleeper controls (GSC). Visually scored sleep outcomes included indices of sleep duration, continuity, and architecture. Quantitative EEG outcomes included power in the delta, theta, alpha, sigma, and beta bands during NREM sleep. Power spectral analysis was used to estimate high-frequency heart rate variability (HRV) and the ratio of low- to high-frequency HRV power during NREM and REM sleep. With the exception of percent stage 3+4 sleep, visually scored sleep outcomes did not exhibit short-term stability across study nights. Most QEEG outcomes demonstrated short-term stability in both groups. Although power in the beta band was stable in the PI group (ICC = 0.75), it tended to be less stable in GSCs (ICC = 0.55). Both measures of cardiac autonomic tone exhibited short-term stability in GSCs and PIs during NREM and REM sleep. Most QEEG bandwidths and HRV during sleep show high short-term stability in good sleepers and patients with insomnia alike. One night of data is, thus, sufficient to derive reliable estimates of these outcomes in studies focused on group differences or correlates of QEEG and/or HRV. In contrast, one night of data is unlikely to generate reliable estimates of PSG-assessed sleep duration, continuity or architecture, with the exception of slow wave sleep.
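For readers unfamiliar with the stability index used here, the following sketch computes a one-way random-effects ICC(1,1) on invented two-night data; the study's actual ICC variant and data are its own.

```python
# Minimal one-way random-effects ICC(1,1), the kind of index used to
# quantify night-to-night stability (the toy two-night data are invented).
import numpy as np

def icc_1_1(x):
    """x: subjects x repeated-measures matrix."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # between-subject
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(0)
trait = rng.normal(50, 10, size=(40, 1))          # stable per-subject level
nights = trait + rng.normal(0, 4, size=(40, 2))   # two nights of noisy measures
print(f"ICC(1,1) = {icc_1_1(nights):.2f}")        # high value => stable across nights
```

An ICC near 1 means between-subject differences dominate night-to-night noise, which is exactly the property that licenses single-night protocols for the stable outcomes above.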
Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S
2011-01-01
By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples often comprise different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM) to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area between the different species. This may reflect relatively close evolutionary relationships of the non-human great apes; a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000
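The two approaches contrasted in this paper can be sketched in a few lines: (a) estimating a scaling exponent by log-log regression, and (b) an ANCOVA-style general linear model with species as a factor and log body mass as the covariate. The ape "data" below are fabricated purely to show the mechanics.

```python
# (a) log-log regression for a scaling exponent; (b) ANCOVA as a GLM with
# species dummies plus a common log-mass slope. All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
mass = rng.uniform(30, 120, 24)                     # body mass, kg
species = np.repeat([0, 1, 2], 8)                   # 3 species, 8 samples each
muscle = 0.02 * mass ** 0.95 * np.exp(rng.normal(0, 0.1, 24))  # muscle mass

# (a) Scaling: slope of log(muscle) on log(mass); ~1.0 suggests geometric,
# a clear departure from 1.0 suggests allometric scaling.
slope, intercept = np.polyfit(np.log(mass), np.log(muscle), 1)
print(f"allometric exponent ≈ {slope:.2f}")

# (b) ANCOVA as a GLM: species intercepts plus a shared log-mass covariate.
X = np.column_stack([species == 0, species == 1, species == 2,
                     np.log(mass)]).astype(float)
coef, *_ = np.linalg.lstsq(X, np.log(muscle), rcond=None)
print("species intercepts:", np.round(coef[:3], 2),
      "common slope:", round(float(coef[3]), 2))
```

With small samples per species, the dummy-coefficient estimates in (b) become noisy, which is the power problem the authors note for ANCOVA at low n.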
High volume data storage architecture analysis
NASA Technical Reports Server (NTRS)
Malik, James M.
1990-01-01
A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high volume data requirements such as those anticipated for the Space Station Control Center. High volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high speed file server, and for high volume data storage are attached.
The Aeronautical Data Link: Decision Framework for Architecture Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Goode, Plesent W.
2003-01-01
A decision analytic approach that develops optimal data link architecture configuration and behavior to meet multiple conflicting objectives of concurrent and different airspace operations functions has previously been developed. The approach, premised on a formal taxonomic classification that correlates data link performance with operations requirements, information requirements, and implementing technologies, provides a coherent methodology for data link architectural analysis from top-down and bottom-up perspectives. This paper follows the previous research by providing more specific approaches for mapping and transitioning between the lower levels of the decision framework. The goal of the architectural analysis methodology is to assess the impact of specific architecture configurations and behaviors on the efficiency, capacity, and safety of operations. This necessarily involves understanding the various capabilities, system level performance issues and performance and interface concepts related to the conceptual purpose of the architecture and to the underlying data link technologies. Efficient and goal-directed data link architectural network configuration is conditioned on quantifying the risks and uncertainties associated with complex structural interface decisions. Deterministic and stochastic optimal design approaches will be discussed that maximize the effectiveness of architectural designs.
Lunar Exploration Architecture Level Key Drivers and Sensitivities
NASA Technical Reports Server (NTRS)
Goodliff, Kandyce; Cirillo, William; Earle, Kevin; Reeves, J. D.; Shyface, Hilary; Andraschko, Mark; Merrill, R. Gabe; Stromgren, Chel; Cirillo, Christopher
2009-01-01
Strategic level analysis of the integrated behavior of lunar transportation and lunar surface systems architecture options is performed to assess the benefit, viability, affordability, and robustness of system design choices. This analysis employs both deterministic and probabilistic modeling techniques so that the extent of potential future uncertainties associated with each option is properly characterized. The results of these analyses are summarized in a predefined set of high-level Figures of Merit (FOMs) so as to provide senior NASA Constellation Program (CxP) and Exploration Systems Mission Directorate (ESMD) management with pertinent information to better inform strategic level decision making. The strategic level exploration architecture model is designed to perform analysis at as high a level as possible but still capture those details that have major impacts on system performance. The strategic analysis methodology focuses on integrated performance, affordability, and risk analysis, and captures the linkages and feedbacks between these three areas. Each of these results leads into the determination of the high-level FOMs. This strategic level analysis methodology has been previously applied to Space Shuttle and International Space Station assessments and is now being applied to the development of the Constellation Program point-of-departure lunar architecture. This paper provides an overview of the strategic analysis methodology and the lunar exploration architecture analyses to date. In studying these analysis results, the strategic analysis team has identified and characterized key drivers affecting the integrated architecture behavior. These key drivers include inclusion of a cargo lander, mission rate, mission location, fixed-versus-variable costs/return on investment, and the requirement for probabilistic analysis. Results of sensitivity analysis performed on lunar exploration architecture scenarios are also presented.
High power diode lasers emitting from 639 nm to 690 nm
NASA Astrophysics Data System (ADS)
Bao, L.; Grimshaw, M.; DeVito, M.; Kanskar, M.; Dong, W.; Guan, X.; Zhang, S.; Patterson, J.; Dickerson, P.; Kennedy, K.; Li, S.; Haden, J.; Martinsen, R.
2014-03-01
There is increasing market demand for high-power, reliable red lasers for display and cinema applications. Due to the fundamental material-system limit in this wavelength range, red diode lasers have lower efficiency and are more temperature sensitive compared to 790-980 nm diode lasers. In terms of reliability, red lasers are also more sensitive to catastrophic optical mirror damage (COMD) due to the higher photon energy. Thus, developing high-power reliable red lasers is very challenging. This paper will present nLIGHT's released red products from 639 nm to 690 nm, with established high performance and long-term reliability. These single-emitter diode lasers can work as stand-alone single-emitter units or be efficiently integrated into our compact, passively cooled Pearl™ fiber-coupled module architectures for higher output power and improved reliability. To further improve power and reliability, new chip optimizations have focused on improving epitaxial design/growth, chip configuration/processing, and optical facet passivation. Initial optimization has demonstrated promising results for 639 nm diode lasers to be reliably rated at 1.5 W and 690 nm diode lasers to be reliably rated at 4.0 W. Accelerated life-testing has started and further design optimizations are underway.
NASA Astrophysics Data System (ADS)
Roh, Won B.
Computational systems based on photonic technologies are projected to offer order-of-magnitude improvements in processing speed, due to their intrinsic architectural parallelism and ultrahigh switching speeds; these architectures also minimize connectors, thereby enhancing reliability, and preclude EMP vulnerability. The use of optoelectronic ICs would also extend weapons capabilities in such areas as automated target recognition, system-state monitoring, and detection avoidance. Fiber-optic technologies have an information-carrying capacity fully five orders of magnitude greater than copper-wire-based systems; energy loss in transmission is two orders of magnitude lower, and error rates are one order of magnitude lower. Attention is being given to ZrF glasses for optical fibers with unprecedentedly low scattering levels.
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Preheim, Larry E.
1990-01-01
Data systems requirements in the Earth Observing System (EOS) and Space Station Freedom (SSF) eras indicate increasing data volume, increased discipline interplay, higher complexity, and broader data integration and interpretation. A response to the needs of the interdisciplinary investigator is proposed, considering the increasing complexity and rising costs of scientific investigation. The EOS Data Information System, conceived to be a widely distributed system with reliable communication links between central processing and the science user community, is described. Details are provided on information architecture, system models, intelligent data management of large complex databases, and standards for archiving ancillary data, using a research library, a laboratory and collaboration services.
NASA Astrophysics Data System (ADS)
Mohamed, Ahmed
Efficient and reliable techniques for power delivery and utilization are needed to account for the increased penetration of renewable energy sources in electric power systems. Such methods are also required for current and future demands of plug-in electric vehicles and high-power electronic loads. Distributed control and optimal power network architectures will lead to viable solutions to the energy management issue with a high level of reliability and security. This dissertation is aimed at developing and verifying new techniques for distributed control by deploying DC microgrids, involving distributed renewable generation and energy storage, through the operating AC power system. To achieve the findings of this dissertation, an energy system architecture was developed involving AC and DC networks, both with distributed generation and demands. The various components of the DC microgrid were designed and built, including DC-DC converters, voltage source inverters (VSI) and AC-DC rectifiers featuring novel designs developed by the candidate. New control techniques were developed and implemented to maximize the operating range of the power conditioning units used for integrating renewable energy into the DC bus. The control and operation of the DC microgrids in the hybrid AC/DC system involve intelligent energy management. Real-time energy management algorithms were developed and experimentally verified. These algorithms are based on intelligent decision-making elements along with an optimization process. This was aimed at enhancing the overall performance of the power system and mitigating the effect of heavy non-linear loads with variable intensity and duration. The developed algorithms were also used for managing the charging/discharging process of plug-in electric vehicle emulators. The protection of the proposed hybrid AC/DC power system was studied. Fault analysis and protection scheme and coordination were presented, in addition to ideas on how to retrofit currently available protection concepts and devices for AC systems in a DC network. A study was also conducted on how changing the distribution architecture and distributing the storage assets across the various zones of the network affect the system's dynamic security and stability. A practical shipboard power system was studied as an example of a hybrid AC/DC power system involving pulsed loads. The proposed hybrid AC/DC power system, along with most of the ideas, controls, and algorithms presented in this dissertation, was experimentally verified at the Smart Grid Testbed of the Energy Systems Research Laboratory.
NASA Astrophysics Data System (ADS)
Yang, Wei; Hall, Trevor J.
2013-12-01
The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.
Li, Bo; Wang, Xin; Jung, Hyun Young; Kim, Young Lae; Robinson, Jeremy T.; Zalalutdinov, Maxim; Hong, Sanghyun; Hao, Ji; Ajayan, Pulickel M.; Wan, Kai-Tak; Jung, Yung Joon
2015-01-01
Suspended single-walled carbon nanotubes (SWCNTs) offer unique functionalities for electronic and electromechanical systems. Due to their outstanding flexible nature, suspended SWCNT architectures have great potential for integration into flexible electronic systems. However, current techniques for integrating SWCNT architectures with flexible substrates are largely absent, especially in a manner that is both scalable and well controlled. Here, we present a new nanostructured transfer paradigm to print scalable and well-defined suspended nano/microscale SWCNT networks on 3D patterned flexible substrates with micro- to nanoscale precision. The underlying printing/transfer mechanism, as well as the mechanical, electromechanical, and mechanical resonance properties of the suspended SWCNTs are characterized, including identifying metrics relevant for reliable and sensitive device structures. Our approach represents a fast, scalable and general method for building suspended nano/micro SWCNT architectures suitable for flexible sensing and actuation systems. PMID:26511284
The Light Node Communication Framework: A New Way to Communicate Inside Smart Homes.
Plantevin, Valère; Bouzouane, Abdenour; Gaboury, Sebastien
2017-10-20
The Internet of things has profoundly changed the way we imagine information science and architecture, and smart homes are an important part of this domain. The few existing prototypes, created a decade ago, rely on the technologies of their day, forcing designers to create centralized and costly architectures that raise issues concerning reliability, scalability, and ease of access which cannot be tolerated in the context of assistance. In this paper, we briefly introduce a new kind of architecture in which the focus is placed on distribution. More specifically, we respond to the first issue we encountered by proposing a lightweight and portable messaging protocol. After running several tests, we observed maximized bandwidth with no packets lost and good encryption. These results tend to prove that our innovation may be employed in a real context of distribution with small entities.
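The paper's protocol is not reproduced here; the sketch below is only a generic example of the kind of lightweight, fixed-header binary framing a resource-constrained smart-home node might use, with an invented field layout and a CRC for integrity.

```python
# Generic illustration of a compact binary message frame (not the paper's
# actual protocol): fixed header + payload + CRC32 integrity check.
import struct, zlib

HEADER = struct.Struct("!BBHI")   # version, msg type, sensor id, timestamp

def pack(version, mtype, sensor, ts, payload: bytes) -> bytes:
    frame = HEADER.pack(version, mtype, sensor, ts) + payload
    return frame + struct.pack("!I", zlib.crc32(frame))

def unpack(frame: bytes):
    body, (crc,) = frame[:-4], struct.unpack("!I", frame[-4:])
    if zlib.crc32(body) != crc:
        raise ValueError("corrupted frame")
    version, mtype, sensor, ts = HEADER.unpack(body[:HEADER.size])
    return version, mtype, sensor, ts, body[HEADER.size:]

msg = pack(1, 0x02, 17, 1500000000, b"\x01\x9b")   # e.g. a sensor reading
print(unpack(msg))
```

An 8-byte header plus checksum keeps per-message overhead small enough for low-power nodes while still supporting versioning and corruption detection.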
Research on Separation of Three Powers Architecture for Trusted OS
NASA Astrophysics Data System (ADS)
Li, Yu; Zhao, Yong; Xin, Siyuan
Privilege concentration in the operating system (OS) often results in breaches of the confidentiality and integrity of the system. To solve this problem, several security mechanisms have been proposed, such as Role-Based Access Control and Separation of Duty. However, these mechanisms cannot eliminate privilege at the OS kernel layer. This paper proposes a Separation of Three Powers Architecture (STPA). The authorizations in the OS are divided into three parts: a System Management Subsystem (SMS), a Security Management Subsystem (SEMS) and an Audit Subsystem (AS). Mutual support and mutual checks and balances, the design principles of STPA, eliminate the all-powerful administrator at the kernel layer. Furthermore, the paper gives a formal description of the authorization division using graph theory. Finally, the implementation of STPA is given. As demonstrated by experiments, the proposed Separation of Three Powers Architecture can provide reliable protection for the OS through authorization division.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lala, J.H.; Nagle, G.A.; Harper, R.E.
1993-05-01
The Maglev control computer system should be designed to verifiably possess high reliability and safety as well as high availability to make Maglev a dependable and attractive transportation alternative to the public. A Maglev control computer system has been designed using a design-for-validation methodology developed earlier under NASA and SDIO sponsorship for real-time aerospace applications. The present study starts by defining the maglev mission scenario and ends with the definition of a maglev control computer architecture. Key intermediate steps included definitions of functional and dependability requirements, synthesis of two candidate architectures, development of qualitative and quantitative evaluation criteria, and analytical modeling of the dependability characteristics of the two architectures. Finally, the applicability of the design-for-validation methodology was also illustrated by applying it to the German Transrapid TR07 maglev control system.
The WLCG Messaging Service and its Future
NASA Astrophysics Data System (ADS)
Cons, Lionel; Paladin, Massimo
2012-12-01
Enterprise messaging is seen as an attractive mechanism to simplify and extend several portions of the Grid middleware, from low-level monitoring to experiment dashboards. The production messaging service currently used by WLCG includes four tightly coupled brokers operated by EGI (running Apache ActiveMQ and designed to host Grid operational tools such as SAM) as well as two dedicated services for ATLAS-DDM and experiment dashboards (currently also running Apache ActiveMQ). In the future, this service is expected to grow in the number of applications supported, brokers, and technologies. The WLCG Messaging Roadmap identified three areas with room for improvement (security, scalability and availability/reliability) as well as ten practical recommendations to address them. This paper describes a messaging service architecture that is in line with these recommendations, as well as a software architecture based on reusable components that eases interactions with the messaging service. These two architectures will support the growth of the WLCG messaging service.
Feasibility of Using Distributed Wireless Mesh Networks for Medical Emergency Response
Braunstein, Brian; Trimble, Troy; Mishra, Rajesh; Manoj, B. S.; Rao, Ramesh; Lenert, Leslie
2006-01-01
Achieving reliable, efficient data communications networks at a disaster site is a difficult task. Network paradigms, such as Wireless Mesh Network (WMN) architectures, form one exemplar for providing high-bandwidth, scalable data communication for medical emergency response activity. WMNs are created by self-organized wireless nodes that use multi-hop wireless relaying for data transfer. In this paper, we describe our experience using a mesh network architecture we developed for homeland security and medical emergency applications. We briefly discuss the architecture and present the traffic behavioral observations made by a client-server medical emergency application tested during a large-scale homeland security drill. We present our traffic measurements, describe lessons learned, and offer functional requirements (based on field testing) for practical 802.11 mesh medical emergency response networks. With certain caveats, the results suggest that 802.11 mesh networks are feasible and scalable systems for field communications in disaster settings. PMID:17238308
Biomorphic architectures for autonomous Nanosat designs
NASA Technical Reports Server (NTRS)
Hasslacher, Brosl; Tilden, Mark W.
1995-01-01
Modern space tool design is the science of making a machine massively complex while at the same time extremely robust and dependable. We propose a novel nonlinear control technique that produces capable, self-organizing, micron-scale space machines at low cost and in large numbers by parallel silicon assembly. Experiments using biomorphic architectures (with ideal space attributes) have produced a wide spectrum of survival-oriented machines that are reliably domesticated for work applications in specific environments. In particular, several one-chip satellite prototypes show interesting control properties that can be turned into numerous application-specific machines for autonomous, disposable space tasks. We believe that the real power of these architectures lies in their potential to self-assemble into larger, robust, loosely coupled structures. Assembly takes place at hierarchical space scales, with different attendant properties, allowing for inexpensive solutions to many daunting work tasks. The nature of biomorphic control, design, engineering options, and applications are discussed.
NASA Astrophysics Data System (ADS)
van Gend, Carel; Lombaard, Briehan; Sickafoose, Amanda; Whittal, Hamish
2016-07-01
Until recently, software for instruments on the smaller telescopes at the South African Astronomical Observatory (SAAO) has not been designed for remote accessibility and frequently has not been developed using modern software best practices. We describe a software architecture we have implemented for use with new and upgraded instruments at the SAAO. The architecture was designed to allow for multiple components and to be fast, reliable, remotely operable, to support different user interfaces, to employ as much non-proprietary software as possible, and to take future-proofing into consideration. Individual component drivers exist as standalone processes, communicating over a network. A controller layer coordinates the various components and allows a variety of user interfaces to be used. The Sutherland High-speed Optical Cameras (SHOC) instruments incorporate an Andor electron-multiplying CCD camera, a GPS unit for accurate timing, and a pair of filter wheels. We have applied the new architecture to the SHOC instruments, with the camera driver developed using Andor's software development kit. We have used this to develop an innovative web-based user interface to the instrument.
NASA Astrophysics Data System (ADS)
Garg, Amit Kumar; Madavi, Amresh Ashok; Janyani, Vijay
2017-02-01
A flexible hybrid wavelength division multiplexing-time division multiplexing passive optical network architecture is proposed that allows dual-rate signals to be sent at 1 and 10 Gbps to each optical network unit depending upon the traffic load. The proposed design allows dynamic wavelength allocation with pay-as-you-grow deployment capability. This architecture is capable of providing up to 40 Gbps of equal data rates to all optical distribution networks (ODNs) and an asymmetrical data rate of up to 70 Gbps to a specific ODN. The proposed design provides broadcasting capability with simultaneous point-to-point transmission, which further reduces energy consumption. In this architecture, each module sends a wavelength to each ODN, making the architecture fully flexible; this flexibility allows network providers to use only the required OLT components and switch off the others. The design is also resilient to any module or TRx failure, providing services without any disruption. Dynamic wavelength allocation and pay-as-you-grow deployment support network extensibility and bandwidth scalability to handle future-generation access networks.
Design of a fault tolerant airborne digital computer. Volume 1: Architecture
NASA Technical Reports Server (NTRS)
Wensley, J. H.; Levitt, K. N.; Green, M. W.; Goldberg, J.; Neumann, P. G.
1973-01-01
This volume is concerned with the architecture of a fault tolerant digital computer for an advanced commercial aircraft. All of the computations of the aircraft, including those presently carried out by analogue techniques, are to be carried out in this digital computer. Among the important qualities of the computer are the following: (1) the capacity is to be matched to the aircraft environment; (2) the reliability is to be selectively matched to the criticality and deadline requirements of each of the computations; (3) the system is to be readily expandable and contractible; and (4) the design is to be appropriate to post-1975 technology. Three candidate architectures are discussed and assessed in terms of the above qualities. Of the three candidates, a newly conceived architecture, Software Implemented Fault Tolerance (SIFT), provides the best match to the above qualities. In addition, SIFT is particularly simple and believable. The other candidates, the Bus Checker System (BUCS), also newly conceived in this project, and the Hopkins multiprocessor, are potentially more efficient than SIFT in the use of redundancy, but otherwise are not as attractive.
A Survey of Techniques for Modeling and Improving Reliability of Computing Systems
Mittal, Sparsh; Vetter, Jeffrey S.
2015-04-24
Recent trends of aggressive technology scaling have greatly exacerbated the occurrences and impact of faults in computing systems. This has made `reliability' a first-order design constraint. To address the challenges of reliability, several techniques have been proposed. In this study, we provide a survey of architectural techniques for improving resilience of computing systems. We especially focus on techniques proposed for microarchitectural components, such as processor registers, functional units, cache and main memory, etc. In addition, we discuss techniques proposed for non-volatile memory, GPUs and 3D-stacked processors. To underscore the similarities and differences of the techniques, we classify them based on their key characteristics. We also review the metrics proposed to quantify vulnerability of processor structures. Finally, we believe that this survey will help researchers, system-architects and processor designers in gaining insights into the techniques for improving reliability of computing systems.
A Conceptual Design for a Reliable Optical Bus (ROBUS)
NASA Technical Reports Server (NTRS)
Miner, Paul S.; Malekpour, Mahyar; Torres, Wilfredo
2002-01-01
The Scalable Processor-Independent Design for Electromagnetic Resilience (SPIDER) is a new family of fault-tolerant architectures under development at NASA Langley Research Center (LaRC). The SPIDER is a general-purpose computational platform suitable for use in ultra-reliable embedded control applications. The design scales from a small configuration supporting a single aircraft function to a large distributed configuration capable of supporting several functions simultaneously. SPIDER consists of a collection of simplex processing elements communicating via a Reliable Optical Bus (ROBUS). The ROBUS is an ultra-reliable, time-division multiple access broadcast bus with strictly enforced write access (no babbling idiots) providing basic fault-tolerant services using formally verified fault-tolerance protocols including Interactive Consistency (Byzantine Agreement), Internal Clock Synchronization, and Distributed Diagnosis. The conceptual design of the ROBUS is presented in this paper including requirements, topology, protocols, and the block-level design. Verification activities, including the use of formal methods, are also discussed.
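As a toy illustration of the interactive-consistency service mentioned above, here is the classic single-round oral-messages scheme (OM(1)) for four nodes tolerating one Byzantine fault. This is emphatically not the formally verified ROBUS protocol, only the textbook majority-vote idea such buses build on.

```python
# Toy OM(1) interactive consistency: 4 nodes, at most 1 Byzantine fault.
from collections import Counter

def om1(commander_value, faulty, nodes=4):
    """Node 0 is the commander; 'faulty' names the one Byzantine node."""
    lieutenants = range(1, nodes)
    # Round 1: the commander sends its value (a faulty commander may send
    # a different value to each lieutenant).
    received = {i: (i if faulty == 0 else commander_value) for i in lieutenants}
    # Round 2: lieutenants relay what they received; each takes a majority.
    decided = {}
    for i in lieutenants:
        reports = []
        for j in lieutenants:
            if j == i:
                reports.append(received[i])   # i's own copy
            elif j == faulty:
                reports.append("LIE")         # a faulty lieutenant relays garbage
            else:
                reports.append(received[j])   # honest relay
        decided[i] = Counter(reports).most_common(1)[0][0]
    return decided

print(om1("A", faulty=3))  # faulty lieutenant: loyal nodes still decide "A"
print(om1("A", faulty=0))  # faulty commander: loyal nodes agree with each other
```

The two runs exhibit the two agreement properties: with a lying relay the loyal nodes recover the commander's value, and with a lying commander the loyal nodes at least agree among themselves.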
Designing a Measurement Framework for Response to Intervention in Early Childhood Programs
ERIC Educational Resources Information Center
McConnell, Scott R.; Wackerle-Hollman, Alisha K.; Roloff, Tracy A.; Rodriguez, Michael
2014-01-01
The overall architecture and major components of a measurement system designed and evaluated to support Response to Intervention (RTI) in the areas of language and literacy in early childhood programs are described. Efficient and reliable measurement is essential for implementing any viable RTI system, and implementing such a system in early…
Software Technology for Adaptable, Reliable Systems (STARS)
1994-03-25
Cost-estimation models reported in use (with counts of respondents) include Timeline (3), SECOMO (3), SEER (3), the GSFC Software Engineering Lab Model (1), SLIM (4), SEER-SEM (1), SPQR (2), PRICE-S (2), internally developed models (3), APMSS (1), SASET (Software Architecture Sizing Estimating Tool) (2), MicroMan II (2), and LCM (Logistics Cost Model) (2).
Force Project Technology Presentation to the NRCC
2014-02-04
Presentation topics include functional bridge components, a smart odometer, advanced pretreatment, a smart bridge, multi-functional gap crossing, and a Fuel Automated Tracking System. One effort develops a comprehensive matrix of candidate composite material systems and textile reinforcement architectures via modeling/analyses and testing; its product is a validated dynamic modeling tool, based on a parametric study using material models, to reliably predict the textile mechanics of the hose.
Creativity from Constraints: What Can We Learn from Motherwell? From Modrian? From Klee?
ERIC Educational Resources Information Center
Stokes, Patricia D.
2008-01-01
This article presents a problem-solving model of variability and creativity built on the classic Reitman and Simon analyses of musical composition and architectural design. The model focuses on paired constraints: one precluding (or limiting search among) reliable, existing solutions, the other promoting (or directing search to) novel, often…
A support architecture for reliable distributed computing systems
NASA Technical Reports Server (NTRS)
Dasgupta, Partha; Leblanc, Richard J., Jr.
1988-01-01
The Clouds project is well underway toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept for structuring software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.
The Road from the NASA Access to Space Study to a Reusable Launch Vehicle
NASA Technical Reports Server (NTRS)
Powell, Richard W.; Cook, Stephen A.; Lockwood, Mary Kae
1998-01-01
NASA is cooperating with the aerospace industry to develop a space transportation system that provides reliable access-to-space at a much lower cost than is possible with today's launch vehicles. While this quest has been ongoing for many years, it received a major impetus when the U.S. Congress mandated as part of the 1993 NASA appropriations bill that: "In view of budget difficulties, present and future..., the National Aeronautics and Space Administration shall ... recommend improvements in space transportation." NASA, working with other organizations, including the Department of Transportation and the Department of Defense, identified three major transportation architecture options that were to be evaluated in the areas of reliability, operability, and cost. These architectural options were: (1) retain and upgrade the Space Shuttle and the current expendable launch vehicles; (2) develop new expendable launch vehicles using conventional technologies and transition to these new vehicles beginning in 2005; and (3) develop new reusable vehicles using advanced technology, and transition to these vehicles beginning in 2008. The launch needs mission model was based on 1993 projections of civil, defense, and commercial payload requirements. This "Access to Space" study concluded that the option that provided the greatest potential for meeting the cost, operability, and reliability goals was a rocket-powered single-stage-to-orbit fully reusable launch vehicle (RLV) fleet designed with advanced technologies.
A Novel Solution-Technique Applied to a Novel WAAS Architecture
NASA Technical Reports Server (NTRS)
Bavuso, J.
1998-01-01
The Federal Aviation Administration has embarked on an historic task of modernizing and significantly improving the national air transportation system. One system that uses the Global Positioning System (GPS) to determine aircraft navigational information is called the Wide Area Augmentation System (WAAS). This paper describes a reliability assessment of one candidate system architecture for the WAAS. A unique aspect of this study concerns the modeling and solution of a candidate system that allows a novel cold-sparing scheme. The cold spare is a WAAS communications satellite that is fabricated and launched after a predetermined number of orbiting-satellite failures have occurred and after some stochastic fabrication time transpires. Because these satellites are complex systems with redundant components, they exhibit an increasing failure rate with a Weibull time-to-failure distribution. Moreover, the cold spare satellite's build time is Weibull-distributed, and upon launch the spare is considered to be a good-as-new system, again with an increasing failure rate and a Weibull time-to-failure distribution. The reliability model for this system is non-Markovian because three distinct system clocks are required: the time to failure of the orbiting satellites, the build time for the cold spare, and the time to failure of the launched spare satellite. A powerful dynamic fault tree modeling notation and a Monte Carlo simulation technique with importance sampling are shown to arrive at a reliability prediction for a 10-year mission.
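A stripped-down Monte Carlo version of the model described, with plain sampling rather than the study's importance sampling and dynamic fault trees, might look as follows; all constellation sizes, Weibull parameters, and thresholds are invented.

```python
# Plain Monte Carlo sketch of the non-Markovian cold-spare model: Weibull
# satellite lifetimes, a spare whose fabrication starts after a failure
# threshold, and a good-as-new Weibull life after launch. All parameters
# are illustrative, not the study's.
import random

random.seed(42)
MISSION = 10 * 8760.0                 # 10 years, in hours
SHAPE, SCALE = 1.5, 15 * 8760.0       # increasing failure rate (shape > 1)
BUILD_SHAPE, BUILD_SCALE = 2.0, 1.5 * 8760.0
N_SATS, MIN_SATS, SPARE_AFTER = 4, 3, 1

def mission_ok():
    deaths = sorted(random.weibullvariate(SCALE, SHAPE) for _ in range(N_SATS))
    # Spare fabrication starts at the SPARE_AFTER-th on-orbit failure.
    launch = deaths[SPARE_AFTER - 1] + random.weibullvariate(BUILD_SCALE, BUILD_SHAPE)
    spare_death = launch + random.weibullvariate(SCALE, SHAPE)  # good as new
    alive_at = lambda t: sum(d > t for d in deaths) + (launch <= t < spare_death)
    # The alive count only drops at failure instants, so checking there suffices.
    times = [d for d in deaths + [spare_death] if d <= MISSION]
    return all(alive_at(t) >= MIN_SATS for t in times)

trials = 20000
print("estimated 10-year reliability:",
      sum(mission_ok() for _ in range(trials)) / trials)
```

With plain sampling, rare-failure estimates need very many trials; importance sampling, as used in the study, concentrates samples on the failure region to get tight estimates far more cheaply.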
Water System Architectures for Moon and Mars Bases
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Hodgson, Edward W.; Kliss, Mark H.
2015-01-01
Water systems for human bases on the moon and Mars will recycle multiple sources of wastewater. Systems for both the moon and Mars will also store water to support and backup the recycling system. Most water system requirements, such as number of crew, quantity and quality of water supply, presence of gravity, and surface mission duration of 6 or 18 months, will be similar for the moon and Mars. If the water system fails, a crew on the moon can quickly receive spare parts and supplies or return to Earth, but a crew on Mars cannot. A recycling system on the moon can have a reasonable reliability goal, such as only one unrecoverable failure every five years, if there is enough stored water to allow time for attempted repairs and for the crew to return if repair fails. The water system that has been developed and successfully operated on the International Space Station (ISS) could be used on a moon base. To achieve the same high level of crew safety on Mars without an escape option, either the recycling system must have much higher reliability or enough water must be stored to allow the crew to survive the full duration of the Mars surface mission. A three loop water system architecture that separately recycles condensate, wash water, and urine and flush can improve reliability and reduce cost for a Mars base.
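A back-of-envelope calculation shows why the escape-option argument works, under the simplifying assumption of a constant unrecoverable-failure rate (the paper's analysis is more detailed):

```python
# Constant-rate approximation: a recycler with one unrecoverable failure
# per five years looks tolerable for a 6-month lunar stay (with resupply
# or return as backup) but marginal for an 18-month Mars surface stay.
import math

rate = 1 / 5.0                        # unrecoverable failures per year
for months, site in [(6, "moon"), (18, "Mars")]:
    r = math.exp(-rate * months / 12)
    print(f"{site}: P(no unrecoverable failure in {months} mo) = {r:.2f}")
```

The 6-month survival probability is about 0.90 versus about 0.74 at 18 months, which is why Mars demands either much higher recycler reliability or stored water covering the full surface stay.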
Scheduling Independent Partitions in Integrated Modular Avionics Systems
Du, Chenglie; Han, Pengcheng
2016-01-01
Recently the integrated modular avionics (IMA) architecture has been widely adopted by the avionics industry due to its strong partition mechanism. Although the IMA architecture can achieve effective cost reduction and reliability enhancement in the development of avionics systems, it results in a complex allocation and scheduling problem. All partitions in an IMA system should be integrated together according to a proper schedule such that their deadlines will be met even under worst-case conditions. In order to help provide a proper scheduling table for all partitions in IMA systems, we study the schedulability of independent partitions on a multiprocessor platform in this paper. We first present an exact formulation to calculate the maximum scaling factor and determine whether all partitions are schedulable on a limited number of processors. Then, with a Game Theory analogy, we design an approximation algorithm to solve the partition scheduling problem by allowing each partition to optimize its own schedule according to the allocations of the others. Finally, simulation experiments are conducted to show the efficiency and reliability of the proposed approach in terms of time consumption and acceptance ratio. PMID:27942013
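The scaling-factor idea can be illustrated with a toy utilization-based stand-in for the paper's exact formulation: scale every partition's budget by a common factor alpha and binary-search for the largest alpha at which a first-fit packing onto the processors still succeeds. The partition set, processor count, and packing heuristic here are all assumptions for illustration:

```python
from typing import List, Tuple

def fits(parts: List[Tuple[float, float]], procs: int, alpha: float) -> bool:
    """First-fit decreasing packing of alpha-scaled partition utilizations."""
    utils = sorted((alpha * c / p for c, p in parts), reverse=True)
    load = [0.0] * procs
    for u in utils:
        for i in range(procs):
            if load[i] + u <= 1.0:
                load[i] += u
                break
        else:
            return False          # no processor can host this partition
    return True

def max_scaling_factor(parts, procs, lo=0.0, hi=10.0, iters=50):
    """Binary search for the largest feasible scaling factor."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if fits(parts, procs, mid) else (lo, mid)
    return lo

partitions = [(10, 50), (20, 100), (5, 25), (40, 200)]   # (budget, period) in ms
print(f"max scaling factor: {max_scaling_factor(partitions, procs=2):.3f}")
```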
NASA Astrophysics Data System (ADS)
Zhang, Chongfu; Wang, Zhengsuan; Jin, Wei; Qiu, Kun
2012-11-01
A novel realization method for optical virtual private networks (OVPN) over multiprotocol label switching/optical packet switching (MPLS/OPS) networks is proposed. In this scheme, the introduction of an MPLS control plane makes OVPN over OPS networks more reliable and easier to realize; the OVPN exploits the highly reconfigurable light-paths offered by MPLS to set up secure, high-bandwidth tunnels across intelligent OPS networks. Through resource management, the signaling mechanism, connection control, and the architecture for creating and maintaining the OVPN are efficiently realized. We also present an OVPN architecture with two traffic priorities, which is used to analyze the capacity, throughput, and delay time of the proposed networks, and the packet-loss-rate performance of the OVPN over MPLS/OPS networks based on a full mesh topology. The results validate the applicability of such reliable connectivity to high-quality services in the OVPN over MPLS/OPS networks. Along with these results, the feasibility of the approach as a basis for next-generation networks is demonstrated and discussed.
E-Governance and Service Oriented Computing Architecture Model
NASA Astrophysics Data System (ADS)
Tejasvee, Sanjay; Sarangdevot, S. S.
2010-11-01
E-Governance is the effective application of information and communication technology (ICT) in government processes to accomplish safe and reliable information lifecycle management. The information lifecycle involves processes such as capturing, preserving, manipulating, and delivering information. E-Governance is meant to transform governance, making it more transparent, reliable, participatory, and accountable from the citizen's point of view. The purpose of this paper is to propose an e-governance model focused on a Service Oriented Computing Architecture (SOCA) that combines the information and services provided by the government, fosters innovation, and identifies ways to deliver services optimally to citizens in a transparent and accountable manner. The paper also focuses on the E-government Service Manager as a key element of the service-oriented computing model, providing a dynamically extensible design in which every branch can introduce innovative services. The heart of the paper is a conceptual model that enables e-government communication for business, citizens, government, and autonomous bodies.
Distributed controller clustering in software defined networks.
Abdelaziz, Ahmed; Fong, Ang Tan; Gani, Abdullah; Garba, Usman; Khan, Suleman; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance, and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real SDN setting. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to distributed controllers without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers, respectively. Moreover, the proposed method shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution toward addressing the issues of reliability, scalability, fault tolerance, and interoperability in SDNs.
NASA Astrophysics Data System (ADS)
Kaabi, Abderrahmen; Bienvenu, Yves; Ryckelynck, David; Pierre, Bertrand
2014-03-01
Power electronics modules (>100 A, >500 V) are essential components for the development of electric and hybrid vehicles. These modules are formed from silicon chips (transistors and diodes) assembled on copper substrates by soldering. Because the assembly is heterogeneous and subject to thermal gradients, shear stresses are generated in the solders and cause premature damage to such electronics modules. This work focuses on architectured materials for the substrate and on lead-free solders to reduce the mechanical effects of differential expansion, improve the reliability of the assembly, and achieve a suitable operating temperature (<175°C). These materials are composites whose thermomechanical properties have been optimized by numerical simulation and validated experimentally. The substrates have good thermal conductivity (>280 W m-1 K-1) and a macroscopic coefficient of thermal expansion intermediate between those of Cu and Si, as well as limited structural evolution in service conditions. An approach combining design, optimization, and manufacturing of new materials has been followed in this study, leading to improved thermal cycling behavior of the component.
NASA Astrophysics Data System (ADS)
Murrill, Steven R.; Jacobs, Eddie L.; Franck, Charmaine C.; Petkie, Douglas T.; De Lucia, Frank C.
2015-10-01
The U.S. Army Research Laboratory (ARL) has continued to develop and enhance a millimeter-wave (MMW) and submillimeter-wave (SMMW)/terahertz (THz)-band imaging system performance prediction and analysis tool for both the detection and identification of concealed weaponry and for pilotage obstacle avoidance. The details of the MATLAB-based model, which accounts for the effects of all critical sensor and display components, atmospheric attenuation, concealment material attenuation, and active illumination, were reported at the 2005 SPIE Europe Security and Defence Symposium (Brugge). An advanced version of the base model, which accounts both for the dramatic impact that target and background orientation can have on target observability, as related to specular and Lambertian reflections captured by an active-illumination-based imaging system, and for the impact of target and background thermal emission, was reported at the 2007 SPIE Defense and Security Symposium (Orlando). Further development of this tool, including a MODTRAN-based atmospheric attenuation calculator and advanced system architecture configuration inputs that allow straightforward performance analysis of active or passive systems based on scanning (single- or line-array detector element(s)) or staring (focal-plane-array detector elements) imaging architectures, was reported at the 2011 SPIE Europe Security and Defence Symposium (Prague). This paper provides a comprehensive review of a newly enhanced MMW and SMMW/THz imaging system analysis and design tool that now includes an improved noise sub-model for more accurate and reliable performance predictions, the capability to account for post-capture image contrast enhancement, and the capability to account for concealment material backscatter with active-illumination-based systems. Present plans for additional expansion of the model's predictive capabilities are also outlined.
Practical Application of Model-based Programming and State-based Architecture to Space Missions
NASA Technical Reports Server (NTRS)
Horvath, Gregory; Ingham, Michel; Chung, Seung; Martin, Oliver; Williams, Brian
2006-01-01
A viewgraph presentation to develop models from systems engineers that accomplish mission objectives and manage the health of the system is shown. The topics include: 1) Overview; 2) Motivation; 3) Objective/Vision; 4) Approach; 5) Background: The Mission Data System; 6) Background: State-based Control Architecture System; 7) Background: State Analysis; 8) Overview of State Analysis; 9) Background: MDS Software Frameworks; 10) Background: Model-based Programming; 11) Background: Titan Model-based Executive; 12) Model-based Execution Architecture; 13) Compatibility Analysis of MDS and Titan Architectures; 14) Integrating Model-based Programming and Execution into the Architecture; 15) State Analysis and Modeling; 16) IMU Subsystem State Effects Diagram; 17) Titan Subsystem Model: IMU Health; 18) Integrating Model-based Programming and Execution into the Software IMU; 19) Testing Program; 20) Computationally Tractable State Estimation & Fault Diagnosis; 21) Diagnostic Algorithm Performance; 22) Integration and Test Issues; 23) Demonstrated Benefits; and 24) Next Steps
A Hybrid Power Management (HPM) Based Vehicle Architecture
NASA Technical Reports Server (NTRS)
Eichenberg, Dennis J.
2011-01-01
Society desires vehicles with reduced fuel consumption and reduced emissions. This presents a challenge and an opportunity for industry and the government. The NASA John H. Glenn Research Center (GRC) has developed a Hybrid Power Management (HPM) based vehicle architecture for space and terrestrial vehicles. GRC's Electrical and Electromagnetics Branch of the Avionics and Electrical Systems Division initiated the HPM Program for the GRC Technology Transfer and Partnership Office. HPM is the innovative integration of diverse, state-of-the-art power devices in an optimal configuration for space and terrestrial applications. The appropriate application and control of the various power devices significantly improves overall system performance and efficiency. The basic vehicle architecture consists of a primary power source, and possibly other power sources, providing all power to a common energy storage system, which is used to power the drive motors and vehicle accessory systems, as well as provide power as an emergency power system. Each component is independent, permitting it to be optimized for its intended purpose. This flexible vehicle architecture can be applied to all vehicles to considerably improve system efficiency, reliability, safety, security, and performance. This unique vehicle architecture has the potential to alleviate global energy concerns, improve the environment, stimulate the economy, and enable new missions.
A Unified Approach to Model-Based Planning and Execution
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Dorais, Gregory A.; Fry, Chuck; Levinson, Richard; Plaunt, Christian; Norvig, Peter (Technical Monitor)
2000-01-01
Writing autonomous software is complex, requiring the coordination of functionally and technologically diverse software modules. System and mission engineers must rely on specialists familiar with the different software modules to translate requirements into application software. Also, each module often encodes the same requirement in different forms. The results are high costs and reduced reliability due to the difficulty of tracking discrepancies in these encodings. In this paper we describe a unified approach to planning and execution that we believe provides a coherent representational and computational framework for an autonomous agent. We identify the four main components whose interplay provides the basis for the agent's autonomous behavior: the domain model, the plan database, the plan running module, and the planner modules. This representational and problem-solving approach can be applied at all levels of the architecture of a complex agent, such as Remote Agent. In the rest of the paper we briefly describe the Remote Agent architecture. The new agent architecture proposed here aims at achieving the full Remote Agent functionality. We then give the fundamental ideas behind the new agent architecture and point out some implications of its structure, mainly in the area of reactivity and the interaction between reactive and deliberative decision making. We conclude with related work and current status.
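A minimal sketch of how such components can share one representation follows. It is not the authors' framework: the token/timeline vocabulary is borrowed from timeline-based planners of the Remote Agent lineage, all class and handler names are illustrative, and only the plan database and the plan running module are shown; a planner would insert tokens into the same database.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Token:
    """A plan-database token: a state predicate holding over a time interval."""
    timeline: str
    value: str
    start: float
    end: float

@dataclass
class PlanDatabase:
    """Single shared representation queried by both planner and executive."""
    tokens: List[Token] = field(default_factory=list)

    def active(self, now: float) -> List[Token]:
        return [t for t in self.tokens if t.start <= now < t.end]

class PlanRunner:
    """Dispatches whatever tokens are currently active to primitive handlers."""
    def __init__(self, db: PlanDatabase, handlers: Dict[str, Callable[[Token], None]]):
        self.db, self.handlers = db, handlers

    def step(self, now: float) -> None:
        for token in self.db.active(now):
            self.handlers[token.timeline](token)

db = PlanDatabase([Token("camera", "TakeImage", 0.0, 2.0),
                   Token("attitude", "PointAt(asteroid)", 0.0, 5.0)])
runner = PlanRunner(db, {"camera": lambda t: print("cmd:", t.value),
                         "attitude": lambda t: print("cmd:", t.value)})
runner.step(now=1.0)   # both tokens are active and get dispatched
```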
ALLIANCE: An architecture for fault tolerant, cooperative control of heterogeneous mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, L.E.
1995-02-01
This research addresses the problem of achieving fault tolerant cooperation within small- to medium-sized teams of heterogeneous mobile robots. The author describes a novel behavior-based, fully distributed architecture, called ALLIANCE, that utilizes adaptive action selection to achieve fault tolerant cooperative control in robot missions involving loosely coupled, largely independent tasks. The robots in this architecture possess a variety of high-level functions that they can perform during a mission, and must at all times select an appropriate action based on the requirements of the mission, the activities of other robots, the current environmental conditions, and their own internal states. Since such cooperative teams often work in dynamic and unpredictable environments, the software architecture allows the team members to respond robustly and reliably to unexpected environmental changes and modifications in the robot team that may occur due to mechanical failure, the learning of new skills, or the addition or removal of robots from the team by human intervention. After presenting ALLIANCE, the author describes in detail experimental results of an implementation of this architecture on a team of physical mobile robots performing a cooperative box pushing demonstration. These experiments illustrate the ability of ALLIANCE to achieve adaptive, fault-tolerant cooperative control amidst dynamic changes in the capabilities of the robot team.
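The adaptive action selection in ALLIANCE is driven by motivations that grow with a robot's impatience and reset when the task is covered by another robot or abandoned through acquiescence. The following is a schematic rendering of that mechanism, not Parker's formulation; thresholds, rates, and the scenario are made up:

```python
def motivation_step(m, impatience, sensory_feedback, suppressed, acquiesced, dt=1.0):
    """One update of an ALLIANCE-style motivation toward a behavior set.

    Motivation grows at the robot's impatience rate while the task is
    applicable (sensory feedback present), and resets to zero when another
    robot's activity suppresses it or this robot acquiesces the task.
    """
    if suppressed or acquiesced or not sensory_feedback:
        return 0.0
    return m + impatience * dt

# Toy run: this robot grows impatient while another robot works the task,
# and takes over once the other robot stops making progress (step 8).
threshold, m = 10.0, 0.0
for t in range(30):
    other_robot_active = t < 8
    m = motivation_step(m, impatience=1.5, sensory_feedback=True,
                        suppressed=other_robot_active, acquiesced=False)
    if m >= threshold:
        print(f"t={t}: behavior set activated")
        break
```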
Analysis of key technologies for virtual instruments metrology
NASA Astrophysics Data System (ADS)
Liu, Guixiong; Xu, Qingui; Gao, Furong; Guan, Qiuju; Fang, Qiang
2008-12-01
Virtual instruments (VIs) require metrological verification when applied as measuring instruments. Owing to their software-centered architecture, metrological evaluation of VIs covers two aspects: measurement functions and software characteristics. The complexity of the software imposes difficulties on metrological testing of VIs. Key approaches and technologies for the metrological evaluation of virtual instruments are investigated and analyzed in this paper. The principal issue is the evaluation of measurement uncertainty. The nature and regularity of measurement uncertainty caused by software and algorithms can be evaluated by modeling, simulation, analysis, testing, and statistics, with the support of the powerful computing capability of the PC. Another concern is the evaluation of software features such as the correctness, reliability, stability, security, and real-time behavior of VIs. Technologies from the software engineering, software testing, and computer security domains can be used for these purposes. For example, a variety of black-box testing, white-box testing, and modeling approaches can be used to evaluate the reliability of modules, components, applications, and the whole VI software. The security of a VI can be assessed by methods like vulnerability scanning and penetration analysis. In order for metrology institutions to perform metrological verification of VIs efficiently, an automatic metrological tool for the above validation is essential. Based on technologies of numerical simulation, software testing, and system benchmarking, a framework for the automatic tool is proposed in this paper. Investigation of the implementation of existing automatic tools that perform calculation of measurement uncertainty, software testing, and security assessment demonstrates the feasibility of the proposed automatic framework.
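Simulation-based evaluation of measurement uncertainty typically means propagating input distributions through the VI's processing chain, in the spirit of GUM Supplement 1. A minimal sketch, assuming a hypothetical power measurement P = V * I with made-up input uncertainties:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Hypothetical VI measurement chain: P = V * I with noisy digitized inputs.
V = rng.normal(10.0, 0.02, N)    # volts, standard uncertainty 0.02 V
I = rng.normal(2.0, 0.005, N)    # amps,  standard uncertainty 0.005 A
P = V * I                        # the VI's software computes power

print(f"P = {P.mean():.4f} W, u(P) = {P.std(ddof=1):.4f} W")
# First-order analytical check: u^2(P) = (I*uV)^2 + (V*uI)^2
print(f"analytic u(P) ~ {np.hypot(2.0 * 0.02, 10.0 * 0.005):.4f} W")
```

The Monte Carlo estimate should agree with the first-order propagation here; the value of the simulation approach is that it still works when the VI's algorithm is nonlinear or has no closed-form sensitivity coefficients.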
The Design of a Fault-Tolerant COTS-Based Bus Architecture
NASA Technical Reports Server (NTRS)
Chau, Savio N.; Alkalai, Leon; Burt, John B.; Tai, Ann T.
1999-01-01
In this paper, we report our experiences and findings on the design of a fault-tolerant bus architecture comprised of two COTS buses, the IEEE 1394 and the I2C. This fault-tolerant bus is the backbone system bus for the avionics architecture of the X2000 program at the Jet Propulsion Laboratory. COTS buses are attractive because of the availability of low-cost commercial products. However, they are not specifically designed for highly reliable applications such as long-life deep-space missions. The X2000 design team has devised a multi-level fault tolerance approach to compensate for this shortcoming of COTS buses. First, the approach enhances the fault tolerance capabilities of the IEEE 1394 and I2C buses by adding a layer of fault-handling hardware and software. Second, algorithms are developed to enable the IEEE 1394 and I2C buses to assist each other in isolating and recovering from faults. Third, the set of IEEE 1394 and I2C buses is duplicated to further enhance system reliability. The X2000 design team has paid special attention to guaranteeing that the fault tolerance provisions do not cause the bus design to deviate from the commercial standard specifications; otherwise, the economic attractiveness of using COTS would be diminished. The hardware and software design of the X2000 fault-tolerant bus is being implemented, and flight hardware will be delivered to the ST4 and Europa Orbiter missions.
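At its simplest, the cross-assistance idea reduces to cross-strapped failover: route traffic over the preferred bus and isolate it on a fault. The sketch below shows only that failover skeleton, not the X2000 fault-handling layer itself; the class names and the fault-injection step are illustrative:

```python
class Bus:
    def __init__(self, name: str):
        self.name, self.healthy = name, True

    def send(self, msg: str) -> str:
        if not self.healthy:
            raise IOError(f"{self.name} fault")
        return f"{self.name}: {msg}"

class DualBus:
    """Cross-strapped pair: traffic prefers the high-rate bus, falls back."""
    def __init__(self, primary: Bus, backup: Bus):
        self.primary, self.backup = primary, backup

    def send(self, msg: str) -> str:
        for bus in (self.primary, self.backup):
            try:
                return bus.send(msg)
            except IOError:
                continue            # isolate the faulty bus, try the other
        raise IOError("both buses failed")

pair = DualBus(Bus("IEEE-1394"), Bus("I2C"))
pair.primary.healthy = False        # inject a fault on the 1394 bus
print(pair.send("heater on"))       # -> "I2C: heater on"
```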
A new flight control and management system architecture and configuration
NASA Astrophysics Data System (ADS)
Kong, Fan-e.; Chen, Zongji
2006-11-01
The advanced fighter should possess capabilities such as supersonic cruise, stealth, agility, STOVL (Short Take-Off and Vertical Landing), and powerful communication and information processing. For this purpose, it is not enough to improve only the aerodynamic and propulsion systems; it is also necessary to enhance the control system. A complete flight control system provides not only autopilot, auto-throttle, and control augmentation, but also mission management. The F-22 and JSF possess outstanding flight control systems built on the Pave Pillar and Pave Pace avionics architectures, but their control architectures are not sufficiently integrated. The main purpose of this paper is to build a novel fighter control system architecture. A control system constructed on this architecture should be highly integrated, inexpensive, fault-tolerant, safe, reliable, and effective, and it will take charge of both flight control and mission management. Starting from this purpose, the paper proceeds as follows. First, modeled on human nervous control, a three-level hierarchical control architecture is proposed. At the top of the architecture, the decision level is in charge of decision-making; in the middle, the organization and coordination level schedules resources, monitors the state of the fighter, and switches control modes; the bottom is the execution level, which performs the concrete actuation and measurement. Then, according to their functions and resources, all tasks involving flight control and mission management are assigned to the appropriate level. Finally, to validate the three-level architecture, a physical configuration is also shown. The configuration is distributed and applies recent advances from the information technology industry, such as line-replaceable modules and cluster technology.
Mayo, Ann M
2015-01-01
It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.
Integrated performance and reliability specification for digital avionics systems
NASA Technical Reports Server (NTRS)
Brehm, Eric W.; Goettge, Robert T.
1995-01-01
This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.
Analysis of Employment Flow of Landscape Architecture Graduates in Agricultural Universities
ERIC Educational Resources Information Center
Yao, Xia; He, Linchun
2012-01-01
A statistical analysis of the employment flow of landscape architecture graduates was conducted on the employment data of graduates majoring in landscape architecture from 2008 to 2011. The employment flow of graduates comprised admission to graduate study, industry direction, and regional distribution, etc. Then, the features of talent flow and its factors…
Architecture Studies for Commercial Production of Propellants From the Lunar Poles
NASA Astrophysics Data System (ADS)
Duke, Michael B.; Diaz, Javier; Blair, Brad R.; Oderman, Mark; Vaucher, Marc
2003-01-01
Two architectures are developed that could be used to convert water held in regolith deposits within permanently shadowed lunar craters into propellant for use in near-Earth space. In particular, the model has been applied to an analysis of the commercial feasibility of using lunar derived propellant to convey payloads from low Earth orbit to geosynchronous Earth orbit. Production and transportation system masses were estimated for each architecture and cost analysis was made using the NAFCOM cost model. Data from the cost model were analyzed using a financial analysis tool reported in a companion paper (Lamassoure et al., 2002) to determine under what conditions the architectures might be commercially viable. Analysis of the architectural assumptions is used to identify the principal areas for further research, which include technological development of lunar mining and water extraction systems, power systems, reusable space transportation systems, and orbital propellant depots. The architectures and commercial viability are sensitive to the assumed concentration of ice in the lunar deposits, suggesting that further lunar exploration to determine whether higher-grade deposits exist would be economically justified.
NASA Technical Reports Server (NTRS)
Wheatley, Thomas E.; Michaloski, John L.; Lumia, Ronald
1989-01-01
Analysis of a robot control system leads to a broad range of processing requirements. One fundamental requirement of a robot control system is a microcomputer system that provides sufficient processing capability. The use of multiple processors in a parallel architecture is beneficial for a number of reasons, including better cost-performance, modular growth, increased reliability through replication, and flexibility for testing alternate control strategies via different partitioning. A survey of the progression from low-level control synchronizing primitives to higher-level communication tools is presented. The system communication and control mechanisms of existing robot control systems are compared to the hierarchical control model. The impact of this design methodology on current robot control systems is explored.
Future large broadband switched satellite communications networks
NASA Technical Reports Server (NTRS)
Staelin, D. H.; Harvey, R. R.
1979-01-01
Critical technical, market, and policy issues relevant to future large broadband switched satellite networks are summarized. Market projections for the period 1980 to 2000 are compared. Clusters of switched satellites, in lieu of large platforms, etc., are shown to have significant advantages. Analysis of an optimum terrestrial network architecture suggests the proper densities of ground stations, and indicates that link reliabilities of 99.99% may entail less than a 10% cost premium for diversity protection at 20/30 GHz. These analyses suggest that system costs increase as the 0.6 power of traffic. Cost estimates for nominal 20/30 GHz satellite and ground facilities suggest that optimum system configurations might employ satellites with 285 beams, multiple TDMA bands each carrying 256 Mbps, and 16-ft ground station antennas. A nominal development program is outlined.
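To make the quoted scaling law concrete: if system cost grows as the 0.6 power of traffic, unit cost falls by roughly a quarter with each doubling of traffic. A one-liner illustration (the traffic multipliers are arbitrary):

```python
# Cost ~ k * traffic^0.6, so doubling traffic raises cost by 2**0.6 ~ 1.52x
# while unit cost per circuit falls by ~24% with each doubling.
for mult in (1, 2, 4, 8):
    print(f"traffic x{mult}: cost x{mult**0.6:.2f}, unit cost x{mult**0.6 / mult:.2f}")
```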
Electrophysiological Endophenotypes for Schizophrenia
Owens, Emily; Bachman, Peter; Glahn, David C; Bearden, Carrie E
2016-01-01
Endophenotypes are quantitative, heritable traits that may help to elucidate the pathophysiologic mechanisms underlying complex disease syndromes, such as schizophrenia. They can be assessed at numerous levels of analysis; here, we review electrophysiological endophenotypes that have shown promise in helping us understand schizophrenia from a more mechanistic point of view. For each endophenotype, we describe typical experimental procedures, reliability, heritability, and reported gene and neurobiological associations. We discuss recent findings regarding the genetic architecture of specific electrophysiological endophenotypes, as well as converging evidence from EEG studies implicating disrupted balance of glutamatergic signaling and GABA-ergic inhibition in the pathophysiology of schizophrenia. We conclude that refining the measurement of electrophysiological endophenotypes, expanding genetic association studies, and integrating datasets are important next steps for understanding the mechanisms that connect identified genetic risk loci for schizophrenia to the disease phenotype. PMID:26954597
A Common Foundation of Information and Analytical Capability for AFSPC Decision Making
2005-06-23
[Briefing charts; recoverable content is limited to named analysis products and tools: task force CONOPS, MUA task weights, engagement analysis, ASIIS optimization, ACEIT cost analysis, architecture analysis, AFSPC POM, S&T planning, and military utility analysis.]
Fan, Bo; Hu, Bin; Yuan, Qingmin; Wen, Shuang; Liu, Tianqing; Bai, Shanshan; Qi, Xiaofeng; Wang, Xin; Yang, Deyong; Sun, Xiuzhen; Song, Xishuang
2017-07-01
Upper tract urothelial carcinoma (UTUC) is a relatively uncommon but aggressive disease. Recent publications have assessed the prognostic significance of tumor architecture in UTUC, but there is still controversy regarding its significance for disease recurrence. We retrospectively reviewed the medical records of 101 patients with clinical UTUC who had undergone surgery. Univariate and multivariate analyses were conducted to identify factors associated with disease recurrence and cancer-specific mortality. Because our single-center design and limited sample size may limit the clinical significance of the findings, we further combined our results quantitatively with the existing published literature through a meta-analysis compiled from searching several databases. At a median follow-up of 41.3 months, 25 patients experienced disease recurrence. Spearman's correlation analysis showed that tumor architecture was positively correlated with tumor location and histological grade. Kaplan-Meier curves showed that patients with sessile tumor architecture had significantly poorer recurrence-free survival (RFS) and cancer-specific survival (CSS). Furthermore, multivariate analysis suggested that tumor architecture was an independent prognostic factor for RFS (hazard ratio, HR = 2.648) and CSS (HR = 2.072) in UTUC patients. A meta-analysis investigating tumor architecture and its effect on UTUC prognosis was conducted. After searching the PubMed, Medline, Embase, Cochrane Library, and Scopus databases, 17 articles met the eligibility criteria. The eligible studies included a total of 14,368 patients, and the combined results showed that sessile tumor architecture was associated with both disease recurrence, with a pooled HR estimate of 1.454, and cancer-specific mortality, with a pooled HR estimate of 1.416. Tumor architecture is an independent predictor of disease recurrence after radical nephroureterectomy for UTUC. Therefore, closer surveillance is necessary, especially in patients with sessile tumor architecture.
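Pooled HR estimates of this kind are commonly produced by inverse-variance weighting of log hazard ratios. A minimal fixed-effect sketch; the per-study HRs and confidence intervals below are hypothetical, not the paper's data:

```python
import math

# Hypothetical per-study hazard ratios with 95% CIs (not the paper's data).
studies = [(1.60, 1.20, 2.13), (1.35, 1.05, 1.74), (1.50, 0.98, 2.30)]

num = den = 0.0
for hr, lo, hi in studies:
    log_hr = math.log(hr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE from the CI width
    w = 1.0 / se**2                                    # inverse-variance weight
    num += w * log_hr
    den += w

pooled_log, se_pooled = num / den, math.sqrt(1.0 / den)
ci = (math.exp(pooled_log - 1.96 * se_pooled), math.exp(pooled_log + 1.96 * se_pooled))
print(f"pooled HR = {math.exp(pooled_log):.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```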
NASA Technical Reports Server (NTRS)
Divito, Ben L.; Butler, Ricky W.; Caldwell, James L.
1990-01-01
A high-level design is presented for a reliable computing platform for real-time control applications. Design tradeoffs and analyses related to the development of the fault-tolerant computing platform are discussed. The architecture is formalized and shown to satisfy a key correctness property. The reliable computing platform uses replicated processors and majority voting to achieve fault tolerance. Under the assumption of a majority of processors working in each frame, it is shown that the replicated system computes the same results as a single processor system not subject to failures. Sufficient conditions are obtained to establish that the replicated system recovers from transient faults within a bounded amount of time. Three different voting schemes are examined and proved to satisfy the bounded recovery time conditions.
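The majority-voting step at the heart of such a platform is easy to state in code. A minimal sketch of an exact-match voter over replicated frame outputs (the frame values are illustrative):

```python
from collections import Counter

def majority_vote(values):
    """Exact-match majority voter across replicated processor outputs."""
    value, count = Counter(values).most_common(1)[0]
    if count * 2 <= len(values):
        raise RuntimeError("no majority -- replicated system failure")
    return value

# Frame N: one of three replicas suffers a transient fault.
print(majority_vote([42, 42, 17]))   # -> 42; the faulty replica is outvoted
```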
"Reliability Of Fiber Optic Lans"
NASA Astrophysics Data System (ADS)
Coden, Michael; Scholl, Frederick; Hatfield, W. Bryan
1987-02-01
Fiber optic Local Area Network Systems are being used to interconnect increasing numbers of nodes. These nodes may include office computer peripherals and terminals, PBX switches, process control equipment and sensors, automated machine tools and robots, and military telemetry and communications equipment. The extensive shared base of capital resources in each system requires that the fiber optic LAN meet stringent reliability and maintainability requirements. These requirements are met by proper system design and by suitable manufacturing and quality procedures at all levels of a vertically integrated manufacturing operation. We will describe the reliability and maintainability of Codenoll's passive star based systems. These include LAN systems compatible with Ethernet (IEEE 802.3) and MAP (IEEE 802.4), and software compatible with IBM Token Ring (IEEE 802.5). No single point of failure exists in this system architecture.
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it tackles available instruments and technologies to propose an integrated, quick, and low-cost solution for virtual architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability, and image-based modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (multi-image spherical photogrammetry, structure from motion, image-based modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions for simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their integration are the subject of this research. Suitably processed and integrated, the data provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy to the photorealistic performance of the shaped surfaces. Lastly, a new solution for virtual navigation is tested: within a single environment, it offers the chance to interact with a high-resolution oriented spherical panorama and the 3D reconstructed model at once.
Comparison of Oxygen Liquefaction Methods for Use on the Martian Surface
NASA Technical Reports Server (NTRS)
Johnson, W. L.; Hauser, D. M.; Plachta, D. W.; Wang, X-Y. J.; Banker, B. F.; Desai, P. S.; Stephens, J. R.; Swanger, A. M.
2017-01-01
In order to use oxygen that is produced on the surface of Mars by in-situ production processes in a chemical propulsion system, the oxygen must first be converted from the vapor phase to the liquid phase and then stored within the propellant tanks of the propulsion system. There are multiple ways this can be accomplished, from simply attaching a liquefaction system onto the propellant tanks to carrying separate tanks for liquefaction and storage of the propellant and loading just prior to launch (the way traditional rocket launches occur on Earth). A study was done of the various methods by which the oxygen (and methane) could be liquefied and stored on the Martian surface. Five different architectures or cycles were considered: Tube-on-Tank (also known as Broad Area Cooling or Distributed Refrigeration), Tube-in-Tank (also known as Integrated Refrigeration and Storage), a modified Linde open liquefaction/refrigeration cycle, the direct mounting of a pulse tube cryocooler onto the tank, and an in-line liquefier at ambient pressure. Models of each architecture were developed to give insight into the performance and losses of each option. The results were then compared across eight categories: mass, power (both input and heat rejection), operability, cost, manufacturability, reliability, volumility, and scalability. The result was that, given the current state of technology maturity, Tube-on-Tank architectures were the most attractive solution, closely followed by Tube-in-Tank. As a result of this technical analysis and other factors, NASA has decided to focus its Martian surface liquefaction activities and technology development on Tube-on-Tank liquefaction cycles.
Pons, Tirso; Naumoff, Daniil G; Martínez-Fleites, Carlos; Hernández, Lázaro
2004-02-15
Multiple-sequence alignment of glycoside hydrolase (GH) families 32, 43, 62, and 68 revealed three conserved blocks, each containing an acidic residue at an equivalent position in all the enzymes. A detailed analysis of the site-directed mutations so far performed on invertases (GH32), arabinanases (GH43), and bacterial fructosyltransferases (GH68) indicated a direct implication of the conserved residues Asp/Glu (block I), Asp (block II), and Glu (block III) in substrate binding and hydrolysis. These residues are close in space in the 5-bladed beta-propeller fold determined for Cellvibrio japonicus alpha-L-arabinanase Arb43A [Nurizzo et al., Nat Struct Biol 2002;9:665-668] and Bacillus subtilis endo-1,5-alpha-L-arabinanase. A sequence-structure compatibility search using 3D-PSSM, mGenTHREADER, INBGU, and SAM-T02 programs predicted indistinctly the 5-bladed beta-propeller fold of Arb43A and the 6-bladed beta-propeller fold of sialidase/neuraminidase (GH33, GH34, and GH83) as the most reliable topologies for GH families 32, 62, and 68. We conclude that the identified acidic residues are located at the active site of a beta-propeller architecture in GH32, GH43, GH62, and GH68, operating with a canonical reaction mechanism of either inversion (GH43 and likely GH62) or retention (GH32 and GH68) of the anomeric configuration. Also, we propose that the beta-propeller architecture accommodates distinct binding sites for the acceptor saccharide in glycosyl transfer reaction. Copyright 2003 Wiley-Liss, Inc.
Proposed Array-based Deep Space Network for NASA
NASA Technical Reports Server (NTRS)
Bagri, Durgadas S.; Statman, Joseph I.; Gatti, Mark S.
2007-01-01
The current assets of the Deep Space Network (DSN) of the National Aeronautics and Space Administration (NASA), especially the 70-m antennas, are aging and becoming less reliable. Furthermore, they are expensive to operate and difficult to upgrade for operation at Ka-band (32 GHz). Replacing them with comparable monolithic large antennas would be expensive. On the other hand, implementation of similar high-sensitivity assets can be achieved economically using an array-based architecture, where sensitivity is measured by G/T, the ratio of antenna gain to system temperature. An array-based architecture would also provide flexibility in operations and allow for easy addition of more G/T whenever required. Therefore, an array-based plan for the next-generation DSN for NASA has been proposed. The DSN array would provide more flexible downlink capability than the current DSN for robust telemetry, tracking, and command services to the space missions of NASA and its international partners in a cost-effective way. Instead of using the array as an element of the DSN and relying on the existing concept of operations, we explore a broader departure in establishing a more modern concept of operations to reduce operations costs. This paper presents the array-based architecture for the next-generation DSN. It includes the system block diagram, operations philosophy, user's view of operations, operations management, and logistics such as maintenance philosophy and anomaly analysis and reporting. To develop the various required technologies and understand the logistics of building the array-based low-cost system, a breadboard array of three antennas has been built. This paper briefly describes the breadboard array system and its performance.
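The economics rest on the fact that G/T scales, to first order, with total collecting area, so many small apertures can replace one large one. A quick geometric illustration (the 12-m element diameter is an assumption, and equal aperture efficiency and system temperature are idealizations):

```python
import math

# Collecting area scales with diameter squared, and -- to first order, at
# equal aperture efficiency and system temperature -- so does array G/T.
d_large, d_small = 70.0, 12.0      # metres; 12 m is an assumed element size
n = math.ceil((d_large / d_small) ** 2)
print(f"{n} x {d_small:.0f} m antennas ~ one {d_large:.0f} m antenna in G/T")
# -> 35 x 12 m antennas ~ one 70 m antenna in G/T
```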
NASA Technical Reports Server (NTRS)
Sargusingh, Miriam J.; Nelson, Jason R.
2014-01-01
NASA has highlighted reliability as critical to future human space exploration, particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, no consensus has been reached on what is meant by improving on reliability, or on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the spring of 2013, the AES Water Recovery Project hosted a series of events at Johnson Space Center with the intended goal of establishing a common language and understanding of NASA's reliability goals, and equipping the projects with acceptable means of assessing the respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools, and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop that included members of the Environmental Control and Life Support System and AES communities. The goal of this workshop was to develop a consensus on what reliability means to AES and identify methods for assessing low- to mid-technology readiness level technologies for reliability. This paper details the results of that workshop.
ECLSS Reliability for Long Duration Missions Beyond Lower Earth Orbit
NASA Technical Reports Server (NTRS)
Sargusingh, Miriam J.; Nelson, Jason
2014-01-01
Reliability has been highlighted by NASA as critical to future human space exploration, particularly in the area of environmental controls and life support systems. The Advanced Exploration Systems (AES) projects have been encouraged to pursue higher reliability components and systems as part of technology development plans. However, there is no consensus on what is meant by improving on reliability, nor on how to assess reliability within the AES projects. This became apparent when trying to assess reliability as one of several figures of merit for a regenerable water architecture trade study. In the spring of 2013, the AES Water Recovery Project (WRP) hosted a series of events at the NASA Johnson Space Center (JSC) with the intended goal of establishing a common language and understanding of our reliability goals, and equipping the projects with acceptable means of assessing our respective systems. This campaign included an educational series in which experts from across the agency and academia provided information on terminology, tools, and techniques associated with evaluating and designing for system reliability. The campaign culminated in a workshop at JSC with members of the ECLSS and AES communities with the goal of developing a consensus on what reliability means to AES and identifying methods for assessing our low- to mid-technology readiness level (TRL) technologies for reliability. This paper details the results of the workshop.
If it walks like a duck: nanosensor threat assessment
NASA Astrophysics Data System (ADS)
Chachis, George C.
2003-09-01
A convergence of technologies is making the deployment of unattended ground nanosensors operationally feasible in terms of energy and communications, for both arbitrated and self-organizing, distributed collective behaviors. A number of nano-scale communication technologies are already making network-centric systems possible for MicroElectroMechanical (MEM) sensor devices today; similar technologies may make NanoElectroMechanical (NEM) sensor devices operationally feasible a few years from now. Just as the organizational behaviors of large numbers of nanodevices can derive strategies from social insects and other group-oriented animals, bio-inspired heuristics for threat assessment provide a conceptual approach for the successful integration of nanosensors into unattended smart sensor networks. Biological models such as the organization of social insects or the dynamics of immune systems show promise as paradigms for protecting nanosensor networks used for security scene analysis and battlespace awareness. The paradox of nanosensors is that the smaller a device is, the more useful it is, but also the more vulnerable it is to a variety of threats. In other words, simpler, networked nanosensors are more likely to fall prey to a wide range of attacks, including jamming, spoofing, Janissarian recruitment, and Pied-Piper distraction, as well as attacks typical of computer network security. Thus, unattended sensor technologies call for network architectures that include security and countermeasures to provide reliable scene analysis or battlespace awareness information. Such network-centric architectures may well draw upon a variety of bio-inspired approaches to safeguard, validate, and make sense of large quantities of information.
Connecting Architecture and Implementation
NASA Astrophysics Data System (ADS)
Buchgeher, Georg; Weinreich, Rainer
Software architectures are still typically defined and described independently from implementation. To avoid architectural erosion and drift, architectural representation needs to be continuously updated and synchronized with system implementation. Existing approaches for architecture representation like informal architecture documentation, UML diagrams, and Architecture Description Languages (ADLs) provide only limited support for connecting architecture descriptions and implementations. Architecture management tools like Lattix, SonarJ, and Sotoarc and UML-tools tackle this problem by extracting architecture information directly from code. This approach works for low-level architectural abstractions like classes and interfaces in object-oriented systems but fails to support architectural abstractions not found in programming languages. In this paper we present an approach for linking and continuously synchronizing a formalized architecture representation to an implementation. The approach is a synthesis of functionality provided by code-centric architecture management and UML tools and higher-level architecture analysis approaches like ADLs.
NASA Technical Reports Server (NTRS)
Smith, Paul H.
1988-01-01
The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.
A space exploration strategy that promotes international and commercial participation
NASA Astrophysics Data System (ADS)
Arney, Dale C.; Wilhite, Alan W.; Chai, Patrick R.; Jones, Christopher A.
2014-01-01
NASA has created a plan to implement the Flexible Path strategy, which utilizes a heavy lift launch vehicle to deliver crew and cargo to orbit. In this plan, NASA would develop much of the transportation architecture (launch vehicle, crew capsule, and in-space propulsion), leaving the other in-space elements open to commercial and international partnerships. This paper presents a space exploration strategy that reverses that philosophy, where commercial and international launch vehicles provide launch services. Utilizing a propellant depot to aggregate propellant on orbit, smaller launch vehicles are capable of delivering all of the mass necessary for space exploration. This strategy has benefits to the architecture in terms of cost, schedule, and reliability.
Feature detection in satellite images using neural network technology
NASA Technical Reports Server (NTRS)
Augusteijn, Marijke F.; Dimalanta, Arturo S.
1992-01-01
A feasibility study of automated classification of satellite images is described. Satellite images were characterized by the textures they contain. In particular, the detection of cloud textures was investigated. The method of second-order gray level statistics, using co-occurrence matrices, was applied to extract feature vectors from image segments. Neural network technology was employed to classify these feature vectors. The cascade-correlation architecture was successfully used as a classifier. The use of a Kohonen network was also investigated but this architecture could not reliably classify the feature vectors due to the complicated structure of the classification problem. The best results were obtained when data from different spectral bands were fused.
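Second-order gray-level statistics of the kind used in that study are available today off the shelf. A minimal sketch of extracting co-occurrence-matrix features for one image segment, using scikit-image (the >=0.19 "gray" spelling of the functions; the patch here is random data standing in for a cloud-texture segment):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch: np.ndarray) -> np.ndarray:
    """Second-order gray-level statistics for one image segment."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Random stand-in for a 64x64 image segment.
patch = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(texture_features(patch))   # 8-element feature vector for a classifier
```

The resulting feature vectors would then be fed to whatever classifier is at hand; the paper used a cascade-correlation neural network for this step.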
Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion
NASA Astrophysics Data System (ADS)
Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger
2007-12-01
Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (Center for Art and Media) in Karlsruhe.
Optically controlled phased-array antenna technology for space communication systems
NASA Technical Reports Server (NTRS)
Kunath, Richard R.; Bhasin, Kul B.
1988-01-01
Using MMICs in phased-array applications above 20 GHz requires complex RF and control signal distribution systems. Conventional waveguide, coaxial cable, and microstrip methods are undesirable due to their high weight, high loss, limited mechanical flexibility and large volume. An attractive alternative to these transmission media, for RF and control signal distribution in MMIC phased-array antennas, is optical fiber. Presented are potential system architectures and their associated characteristics. The status of high frequency opto-electronic components needed to realize the potential system architectures is also discussed. It is concluded that an optical fiber network will reduce weight and complexity, and increase reliability and performance, but may require higher power.
The manned transportation system study - Defining human pathways into space
NASA Technical Reports Server (NTRS)
Lance, Nick; Geyer, Mark S.; Gaunce, Michael T.; Anson, H. W.; Bienhoff, D. G.; Carey, D. A.; Emmett, B. R.; Mccandless, B.; Wetzel, E. D.
1992-01-01
Substantiating data developed by a NASA-industry team (NIT) for subsequent NASA decisions on the 'right' set of manned transportation elements needed for human access to space are discussed. Attention is given to the framework for detailed definition of these manned transportation elements. Identifying and defining architecture evaluation criteria, i.e., attributes, specified the amount and type of data needed for each concept under consideration. Several architectures, each beginning with today's transportation systems, were defined using representative systems to explore future options and address specific questions currently being debated. The present solutions emphasize affordability, safety, routineness, and reliability. Key issues associated with current business practices were challenged and the impact associated with these practices quantified.
A survey of system architecture requirements for health care-based wireless sensor networks.
Egbogah, Emeka E; Fapojuwo, Abraham O
2011-01-01
Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of a HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that require further investigation are highlighted.
Katherine Sinacore; Jefferson Scott Hall; Catherine Potvin; Alejandro A. Royo; Mark J. Ducey; Mark S. Ashton; Shijo Joseph
2017-01-01
The potential benefits of planting trees have generated significant interest with respect to sequestering carbon and restoring other forest based ecosystem services. Reliable estimates of carbon stocks are pivotal for understanding the global carbon balance and for promoting initiatives to mitigate CO2 emissions through forest management. There...
RICIS Symposium 1992: Mission and Safety Critical Systems Research and Applications
NASA Technical Reports Server (NTRS)
1992-01-01
This conference deals with computer systems that control systems whose failure to operate correctly could produce loss of life and/or property, i.e., mission and safety critical systems. Topics covered are: the work of standards groups, computer systems design and architecture, software reliability, process control systems, knowledge based expert systems, and computer and telecommunication protocols.
A qualitative approach to systemic diagnosis of the SSME
NASA Technical Reports Server (NTRS)
Bickmore, Timothy W.; Maul, William A.
1993-01-01
A generic software architecture has been developed for posttest diagnostics of rocket engines, and is presently being applied to the posttest analysis of the SSME. This investigation deals with the Systems Section module of the architecture, which is presently under development. Overviews of the manual SSME systems analysis process and the overall SSME diagnostic system architecture are presented.
Liu, Chao; Abu-Jamous, Basel; Brattico, Elvira; Nandi, Asoke K
2017-03-01
In the past decades, neuroimaging of humans has gained a position of status within neuroscience, and data-driven approaches and functional connectivity analyses of functional magnetic resonance imaging (fMRI) data are increasingly favored to depict the complex architecture of human brains. However, the reliability of these findings is jeopardized by too many analysis methods and sometimes too few samples used, which leads to discord among researchers. We propose a tunable consensus clustering paradigm that aims at overcoming the clustering methods selection problem as well as reliability issues in neuroimaging by means of first applying several analysis methods (three in this study) on multiple datasets and then integrating the clustering results. To validate the method, we applied it to a complex fMRI experiment involving affective processing of hundreds of music clips. We found that brain structures related to visual, reward, and auditory processing have intrinsic spatial patterns of coherent neuroactivity during affective processing. The comparisons between the results obtained from our method and those from each individual clustering algorithm demonstrate that our paradigm has notable advantages over traditional single clustering algorithms in being able to evidence robust connectivity patterns even with complex neuroimaging data involving a variety of stimuli and affective evaluations of them. The consensus clustering method is implemented in the R package "UNCLES" available on http://cran.r-project.org/web/packages/UNCLES/index.html .
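The consensus idea described above, applying several clustering algorithms and integrating their partitions, can be sketched with a co-association matrix. The following is a minimal illustration on synthetic data, not the UNCLES implementation cited in the abstract; the choice of three scikit-learn algorithms and all parameters are assumptions for the sketch.

    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10))   # toy stand-in for voxel feature vectors
    k = 3
    methods = [
        KMeans(n_clusters=k, n_init=10, random_state=0),
        AgglomerativeClustering(n_clusters=k),
        SpectralClustering(n_clusters=k, random_state=0),
    ]

    # Co-association matrix: entry (i, j) is the fraction of methods that
    # place samples i and j in the same cluster.
    C = np.zeros((len(X), len(X)))
    for m in methods:
        labels = m.fit_predict(X)
        C += (labels[:, None] == labels[None, :]).astype(float)
    C /= len(methods)

    # Consensus partition: cluster the co-association distances themselves
    # (scikit-learn >= 1.2; older versions name the parameter "affinity").
    consensus = AgglomerativeClustering(
        n_clusters=k, metric="precomputed", linkage="average"
    ).fit_predict(1.0 - C)
    print(consensus)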
Enabling Co-Design of Multi-Layer Exascale Storage Architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carothers, Christopher
Growing demands for computing power in applications such as energy production, climate analysis, computational chemistry, and bioinformatics have propelled computing systems toward the exascale: systems with 10^18 floating-point operations per second. These systems, to be designed and constructed over the next decade, will create unprecedented challenges in component counts, power consumption, resource limitations, and system complexity. Data storage and access are an increasingly important and complex component in extreme-scale computing systems, and significant design work is needed to develop successful storage hardware and software architectures at exascale. Co-design of these systems will be necessary to find the best possible design points for exascale systems. The goal of this work has been to enable the exploration and co-design of exascale storage systems by providing a detailed, accurate, and highly parallel simulation of exascale storage and the surrounding environment. Specifically, this simulation has (1) portrayed realistic application checkpointing and analysis workloads, (2) captured the complexity, scale, and multilayer nature of exascale storage hardware and software, and (3) executed in a timeframe that enables "what if" exploration of design concepts. We developed models of the major hardware and software components in an exascale storage system, as well as the application I/O workloads that drive them. We used our simulation system to investigate critical questions in reliability and concurrency at exascale, helping guide the design of future exascale hardware and software architectures. Additionally, we provided this system to interested vendors and researchers so that others can explore the design space. We validated the capabilities of our simulation environment by configuring the simulation to represent the Argonne Leadership Computing Facility Blue Gene/Q system and comparing simulation results for application I/O patterns to the results of executions of these I/O kernels on the actual system.
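As a much-reduced illustration of the kind of storage simulation described, the sketch below models checkpoint traffic contending for a shared storage resource using the SimPy discrete-event library. All parameters (checkpoint size, bandwidth, job count) are invented for the example and bear no relation to the project's validated models.

    import simpy

    CKPT_GB, BW_GBS, NODES = 2.0, 10.0, 8     # invented workload parameters

    def app(env, name, storage):
        while True:
            yield env.timeout(3600)           # compute phase (seconds)
            with storage.request() as req:    # contend for the shared channel
                yield req
                yield env.timeout(CKPT_GB * NODES / BW_GBS)  # write time
                print(f"{env.now:9.1f}s  {name} checkpoint complete")

    env = simpy.Environment()
    storage = simpy.Resource(env, capacity=1) # one shared storage channel
    for i in range(3):
        env.process(app(env, f"job{i}", storage))
    env.run(until=12000)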
Evolutionary Space Communications Architectures for Human/Robotic Exploration and Science Missions
NASA Technical Reports Server (NTRS)
Bhasin, Kul; Hayden, Jeffrey L.
2004-01-01
NASA enterprises have growing needs for an advanced, integrated, communications infrastructure that will satisfy the capabilities needed for multiple human, robotic and scientific missions beyond 2015. Furthermore, the reliable, multipoint infrastructure is required to provide continuous, maximum coverage of areas of concentrated activities, such as around Earth and in the vicinity of the Moon or Mars, with access made available on demand of the human or robotic user. As a first step, the definitions of NASA's future space communications and networking architectures are underway. Architectures that describe the communications and networking needed between the nodal regions consisting of Earth, Moon, Lagrange points, Mars, and the places of interest within the inner and outer solar system have been laid out. These architectures will need the modular flexibility that must be included in the communication and networking technologies to enable the infrastructure to grow in capability with time and to transform from supporting robotic missions in the solar system to supporting human ventures to Mars, Jupiter, Jupiter's moons, and beyond. The protocol-based networking capability seamlessly connects the backbone, access, inter-spacecraft and proximity network elements of the architectures employed in the infrastructure. In this paper, we present the summary of NASA's near and long term needs and capability requirements that were gathered by participative methods. We describe an integrated architecture concept and model that will enable communications for evolutionary robotic and human science missions. We then define the communication nodes, their requirements, and various options to connect them.
Achieving a Launch on Demand Capability
NASA Astrophysics Data System (ADS)
Greenberg, Joel S.
2002-01-01
The ability to place payloads [satellites] into orbit as and when required, often referred to as launch on demand, continues to be an elusive and yet largely unfulfilled goal. But what is the value of achieving launch on demand [LOD], and what metrics are appropriate? Achievement of a desired level of LOD capability must consider transportation system thruput, alternative transportation systems that comprise the transportation architecture, transportation demand, reliability and failure recovery characteristics of the alternatives, schedule guarantees, launch delays, payload integration schedules, procurement policies, and other factors. Measures of LOD capability should relate to the objective of the transportation architecture: the placement of payloads into orbit as and when required. Launch on demand capability must be defined in probabilistic terms such as the probability of not incurring a delay in excess of T when it is determined that it is necessary to place a payload into orbit. Three specific aspects of launch on demand are considered: [1] the ability to recover from adversity [i.e., a launch failure] and to keep up with the steady-state demand for placing satellites into orbit [this has been referred to as operability and resiliency], [2] the ability to respond to the requirement to launch a satellite when the need arises unexpectedly, either because of an unexpected [random] on-orbit satellite failure that requires replacement or because of the sudden recognition of an unanticipated requirement, and [3] the ability to recover from adversity [i.e., a launch failure] during the placement of a constellation into orbit. The objective of this paper is to outline a formal approach for analyzing alternative transportation architectures in terms of their ability to provide a LOD capability. The economic aspect of LOD is developed by establishing a relationship between scheduling and the elimination of on-orbit spares while achieving the desired level of on-orbit availability. Results of an analysis are presented. The implications of launch on demand are addressed for each of the above three situations, and related architecture performance metrics and computer simulation models are described that may be used to evaluate the implications of architecture and policy changes in terms of LOD requirements. The models and metrics are aimed at providing answers to such questions as: How well does a specified space transportation architecture respond to satellite launch demand and changes thereto? How well does a normally functioning and apparently healthy architecture respond to unanticipated needs? What is the effect of a modification to the architecture on its ability to respond to satellite launch demand, including responding to unanticipated needs? What is the cost of the architecture [including facilities, operations, inventory, and satellites]? What is the sensitivity of overall architecture effectiveness and cost to various transportation system delays? What is the effect of adding [or eliminating] a launch vehicle or family of vehicles to [from] the architecture on its effectiveness and cost? What is the value of improving launch vehicle and satellite compatibility, and what are the effects on probability of delay statistics and cost of designing for multi-launch vehicle compatibility?
The Exploration of Mars Launch and Assembly Simulation
NASA Technical Reports Server (NTRS)
Cates, Grant; Stromgren, Chel; Mattfeld, Bryan; Cirillo, William; Goodliff, Kandyce
2016-01-01
Advancing human exploration of space beyond Low Earth Orbit, and ultimately to Mars, is of great interest to NASA, other organizations, and space exploration advocates. Various strategies for getting to Mars have been proposed. These include NASA's Design Reference Architecture 5.0, a near-term flyby of Mars advocated by the group Inspiration Mars, and potential options developed for NASA's Evolvable Mars Campaign. Regardless of which approach is used to get to Mars, they all share a need to visualize and analyze the proposed campaign and evaluate the feasibility of its launch and on-orbit assembly segment. The launch and assembly segment starts with flight hardware manufacturing and ends with final departure of a Mars Transfer Vehicle (MTV), or set of MTVs, from an assembly orbit near Earth. This paper describes a discrete event simulation based strategic visualization and analysis tool that can be used to evaluate the launch campaign reliability of any proposed strategy for exploration beyond low Earth orbit. The input to the simulation can be any manifest of multiple launches and their associated transit operations between Earth and the exploration destinations, including Earth orbit, lunar orbit, asteroids, moons of Mars, and ultimately Mars. The simulation output includes expected launch dates and ascent outcomes, i.e., success or failure. Running 1,000 replications of the simulation provides the capability to perform launch campaign reliability analysis to determine the probability that all launches occur in a timely manner to support departure opportunities and to deliver their payloads to the intended orbit. This allows for quantitative comparisons between alternative scenarios, as well as the capability to analyze options for improving launch campaign reliability. Results are presented for representative strategies.
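In the same spirit as the replication analysis described above, a toy Monte Carlo version of a launch campaign can be written in a few lines. The manifest size, delay distribution, ascent reliability, and deadline below are all invented placeholders, not values from the paper.

    import random

    LAUNCHES = 6                # launches needed to assemble the MTV stack
    ASCENT_RELIABILITY = 0.98   # probability a single ascent succeeds
    PAD_TURNAROUND = 60         # nominal days between launches
    DEPARTURE_DEADLINE = 500    # days until the departure window closes

    def one_campaign(rng):
        day = 0.0
        for _ in range(LAUNCHES):
            day += PAD_TURNAROUND + rng.expovariate(1 / 15.0)  # random delays
            if rng.random() > ASCENT_RELIABILITY:
                day += 180      # stand down and build a replacement article
        return day <= DEPARTURE_DEADLINE

    rng = random.Random(42)
    successes = sum(one_campaign(rng) for _ in range(1000))
    print(f"campaign success probability ~ {successes / 1000:.3f}")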
Barthélémy, Daniel; Caraglio, Yves
2007-01-01
Background and Aims The architecture of a plant depends on the nature and relative arrangement of each of its parts; it is, at any given time, the expression of an equilibrium between endogenous growth processes and exogenous constraints exerted by the environment. The aim of architectural analysis is, by means of observation and sometimes experimentation, to identify and understand these endogenous processes and to separate them from the plasticity of their expression resulting from external influences. Scope Using the identification of several morphological criteria and considering the plant as a whole, from germination to death, architectural analysis is essentially a detailed, multilevel, comprehensive and dynamic approach to plant development. Despite their recent origin, architectural concepts and analysis methods provide a powerful tool for studying plant form and ontogeny. Complemented by precise morphological observations and appropriate quantitative methods of analysis, recent research in this field has greatly increased our understanding of plant structure and development and has led to the establishment of a real conceptual and methodological framework for plant form and structure analysis and representation. This paper is a summarized update of current knowledge on plant architecture and morphology; its implications and possible role in various aspects of modern plant biology are also discussed. PMID:17218346
A Distributed Architecture for Tsunami Early Warning and Collaborative Decision-support in Crises
NASA Astrophysics Data System (ADS)
Moßgraber, J.; Middleton, S.; Hammitzsch, M.; Poslad, S.
2012-04-01
The presentation will describe work on the system architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". The challenges for a Tsunami Early Warning System (TEWS) are manifold, and the success of a system depends crucially on the system's architecture. A modern warning system following a system-of-systems approach has to integrate various components and sub-systems such as different information sources, services and simulation systems. Furthermore, it has to take into account the distributed and collaborative nature of warning systems. In order to create an architecture that supports the whole spectrum of a modern, distributed and collaborative warning system, one must deal with multiple challenges. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. At the bottom layer it has to reliably integrate a large set of conventional sensors, such as seismic sensors and sensor networks, buoys and tide gauges, and also innovative and unconventional sensors, such as streams of messages from social media services. At the top layer it has to support collaboration on high-level decision processes and facilitate information sharing between organizations. In between, the system has to process all data and integrate information on a semantic level in a timely manner. This complex communication follows an event-driven mechanism allowing events to be published, detected and consumed by various applications within the architecture. Therefore, at the upper layer the event-driven architecture (EDA) aspects are combined with principles of service-oriented architectures (SOA) using standards for communication and data exchange. The most prominent challenges on this layer include providing a framework for information integration on a syntactic and semantic level, leveraging distributed processing resources for a scalable data processing platform, and automating data processing and decision support workflows.
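A minimal sketch of the publish/detect/consume event mechanism described above, with invented topic names and payloads; a production TEWS would use a distributed message broker rather than this in-process toy.

    from collections import defaultdict
    from typing import Any, Callable

    class EventBus:
        """In-process stand-in for an event-driven messaging layer."""
        def __init__(self):
            self._subscribers = defaultdict(list)

        def subscribe(self, topic: str, handler: Callable[[Any], None]) -> None:
            self._subscribers[topic].append(handler)

        def publish(self, topic: str, event: Any) -> None:
            for handler in self._subscribers[topic]:
                handler(event)   # each consumer reacts to the event

    bus = EventBus()
    # A seismic sensor service publishes; a decision-support service consumes.
    bus.subscribe("sensor.seismic", lambda e: print("assessing event:", e))
    bus.publish("sensor.seismic", {"magnitude": 7.8, "lat": 38.3, "lon": 142.4})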
A Model for Communications Satellite System Architecture Assessment
2011-09-01
The total system cost includes all development, acquisition, fielding, operations, maintenance and upgrades, and system ... protection. A mathematical model was implemented to enable the analysis of communications satellite system architectures based on multiple system attributes. Utilization of the model in ...
2011-12-01
systems engineering technical and technical management processes. Technical Planning, Stakeholders Requirements Development, and Architecture Design were ... Stakeholder Requirements Definition, Architecture Design and Technical Planning. A purposive sampling of AFRL rapid development program managers and engineers ... emphasize one process over another; however, Architecture Design and Implementation scored higher among Technical Processes. Decision Analysis, Technical ...
NASA Technical Reports Server (NTRS)
1983-01-01
Space station systems characteristics and architecture are described. A manned space station operational analysis is performed to determine crew size, crew task complexity and time tables, and crew equipment to support the definition of systems and subsystems concepts. This analysis is used to select and evaluate the architectural options for development.
A Resource Service Model in the Industrial IoT System Based on Transparent Computing.
Li, Weimin; Wang, Bin; Sheng, Jinfang; Dong, Ke; Li, Zitong; Hu, Yixiang
2018-03-26
The Internet of Things (IoT) has received a lot of attention, especially in industrial scenarios. One of the typical applications is the intelligent mine, which constructs the Six-Hedge underground systems with IoT platforms. Based on a case study of the Six Systems in an underground metal mine, this paper summarizes the main challenges of industrial IoT in terms of heterogeneity of devices and resources, security, reliability, and deployment and maintenance costs. Then, a novel resource service model for industrial IoT applications based on Transparent Computing (TC) is presented, which supports centralized management of all resources, including the operating system (OS), programs and data, on the server side for IoT devices, thus offering an effective, reliable, secure and cross-OS IoT service and reducing the costs of IoT system deployment and maintenance. The model has five layers: a sensing layer, an aggregation layer, a network layer, a service and storage layer, and an interface and management layer. We also present a detailed analysis of the system architecture and key technologies of the model. Finally, the efficiency of the model is shown by an experimental prototype system.
Development and validation of sustainability criteria of administrative green schools in Iran.
Meiboudi, Hossein; Lahijanian, Akramolmolok; Shobeiri, Seyed Mohammad; Jozi, Seyed Ali; Azizinezhad, Reza
2017-07-15
Environmental responsibility in schools has led to the emergence of a variety of criteria for administering green schools' contributions to sustainability. Sustainability criteria for administrative green schools need validity, reliability and norms. The aim of the current study was to develop and validate assessment criteria for green schools in Iran based on the role of academia. A national survey was conducted to obtain data on sustainability criteria initiatives for green schools, and the Iranian profile was defined. An initial pool of 71 items was generated; after its first edition, 63 items were selected to comprise the sustainability criteria. Engineering-architectural and behavioral aspects of these sustainability criteria were evaluated through a sample of 1218 graduate students with environmental degrees from Iran's universities. Exploratory factor analysis using principal components and the promax rotation method showed that these 9 criteria have simple structures and are consistent with the theoretical framework. The reliability coefficients of the subscales ranged between 0.62 (participation) and 0.84 (building location and position). The correlation coefficients between items and subscales varied between 0.24 and 0.68.
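Subscale reliability coefficients of the kind reported above are conventionally computed as Cronbach's alpha. A minimal sketch follows, run on synthetic Likert-style responses rather than the study's data; the item count and score ranges are invented.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of scores."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    base = rng.integers(1, 6, size=(1218, 1))     # shared "trait" signal
    scores = np.clip(base + rng.integers(-1, 2, size=(1218, 7)), 1, 5)
    print(f"alpha = {cronbach_alpha(scores):.2f}")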
Analog Module Architecture for Space-Qualified Field-Programmable Mixed-Signal Arrays
NASA Technical Reports Server (NTRS)
Edwards, R. Timothy; Strohbehn, Kim; Jaskulek, Steven E.; Katz, Richard
1999-01-01
Spacecraft require all manner of both digital and analog circuits. Onboard digital systems are constructed almost exclusively from field-programmable gate array (FPGA) circuits providing numerous advantages over discrete design including high integration density, high reliability, fast turn-around design cycle time, lower mass, volume, and power consumption, and lower parts acquisition and flight qualification costs. Analog and mixed-signal circuits perform tasks ranging from housekeeping to signal conditioning and processing. These circuits are painstakingly designed and built using discrete components due to a lack of options for field-programmability. FPAA (Field-Programmable Analog Array) and FPMA (Field-Programmable Mixed-signal Array) parts exist but not in radiation-tolerant technology and not necessarily in an architecture optimal for the design of analog circuits for spaceflight applications. This paper outlines an architecture proposed for an FPAA fabricated in an existing commercial digital CMOS process used to make radiation-tolerant antifuse-based FPGA devices. The primary concerns are the impact of the technology and the overall array architecture on the flexibility of programming, the bandwidth available for high-speed analog circuits, and the accuracy of the components for high-performance applications.
A Multi-Purpose Modular Electronics Integration Node for Exploration Extravehicular Activity
NASA Technical Reports Server (NTRS)
Hodgson, Edward; Papale, William; Wichowski, Robert; Rosenbush, David; Hawes, Kevin; Stankiewicz, Tom
2013-01-01
As NASA works to develop an effective integrated portable life support system design for exploration extravehicular activity (EVA), alternatives to the current system's electrical power and control architecture are needed to support new requirements for flexibility, maintainability, reliability, and reduced mass and volume. Experience with the current Extravehicular Mobility Unit (EMU) has demonstrated that the current architecture, based on a central power supply, monitoring and control unit with dedicated analog wiring harness connections to active components in the system, has a significant impact on system packaging and seriously constrains design flexibility in adapting to component obsolescence and changing system needs over time. An alternative architecture based on the use of a digital data bus offers possible wiring harness and system power savings, but risks significant penalties in component complexity and cost. A hybrid architecture that relies on a set of electronic and power interface nodes serving functional modules within the Portable Life Support System (PLSS) is proposed to minimize both packaging and component level penalties. A common interface node hardware design can further reduce penalties by reducing the nonrecurring development costs, making miniaturization more practical, maximizing opportunities for maturation and reliability growth, providing enhanced fault tolerance, and providing stable design interfaces for system components and a central control. Adaptation to varying specific module requirements can be achieved with modest changes in firmware code within the module. A preliminary design effort has developed a common set of hardware interface requirements and functional capabilities for such a node based on anticipated modules comprising an exploration PLSS, and a prototype node has been designed, assembled, programmed, and tested. One instance of such a node has been adapted to support testing the swingbed carbon dioxide and humidity control element in NASA's advanced PLSS 2.0 test article. This paper will describe the common interface node design concept, results of the prototype development and test effort, and plans for use in NASA PLSS 2.0 integrated tests.
Avionics System Architecture for the NASA Orion Vehicle
NASA Technical Reports Server (NTRS)
Baggerman, Clint; McCabe, Mary; Verma, Dinesh
2009-01-01
It has been 30 years since the National Aeronautics and Space Administration (NASA) last developed a crewed spacecraft capable of launch, on-orbit operations, and landing. During that time, aerospace avionics technologies have greatly advanced in capability, and these technologies have enabled integrated avionics architectures for aerospace applications. The inception of NASA's Orion Crew Exploration Vehicle (CEV) spacecraft offers the opportunity to leverage the latest integrated avionics technologies into a crewed space vehicle architecture. The outstanding question is to what extent to implement these advances in avionics while still meeting the unique crewed spaceflight requirements for safety, reliability and maintainability. Historically, aircraft and spacecraft have very similar avionics requirements. Both aircraft and spacecraft must have high reliability. They also must have as much computing power as possible and provide low latency between user control and effector response while minimizing weight, volume, and power. However, there are several key differences between aircraft and spacecraft avionics. Typically, the overall spacecraft operational time is much shorter than aircraft operation time, but the typical mission time (and hence, the time between preventive maintenance) is longer for a spacecraft than an aircraft. Also, the radiation environment is typically more severe for spacecraft than aircraft. A "loss of mission" scenario (i.e., the mission is not a success, but there are no casualties) arguably has a greater impact on a multi-million dollar spaceflight mission than a typical commercial flight. Such differences need to be weighed when determining if an aircraft-like integrated modular avionics (IMA) system is suitable for a crewed spacecraft. This paper will explore the preliminary design process of the Orion vehicle avionics system by first identifying the Orion driving requirements and the differences between Orion requirements and those of other previous crewed spacecraft avionics systems. Common systems engineering methods will be used to evaluate the value propositions, or the factors that weigh most heavily in design consideration, of Orion and other aerospace systems. Then, the current Orion avionics architecture will be presented and evaluated.
GOES-R GS Product Generation Infrastructure Operations
NASA Astrophysics Data System (ADS)
Blanton, M.; Gundy, J.
2012-12-01
GOES-R GS Product Generation Infrastructure Operations: The GOES-R Ground System (GS) will produce a much larger set of products with higher data density than previous GOES systems. This requires considerably greater compute and memory resources to achieve the necessary latency and availability for these products. Over time, new algorithms could be added and existing ones removed or updated, but the GOES-R GS cannot go down during this time. To meet these GOES-R GS processing needs, the Harris Corporation will implement a Product Generation (PG) infrastructure that is scalable, extensible, modular and reliable. The core of the PG infrastructure is the Service Based Architecture (SBA), which includes the Distributed Data Fabric (DDF). The SBA is the middleware that encapsulates and manages the science algorithms that generate products. The SBA is divided into three parts: the Executive, which manages and configures the algorithm as a service; the Dispatcher, which provides data to the algorithm; and the Strategy, which determines when the algorithm can execute with the available data. The SBA is a distributed architecture, with services connected to each other over a compute grid, and is highly scalable. This plug-and-play architecture allows algorithms to be added, removed, or updated without affecting any other services or software currently running and producing data. Algorithms require product data from other algorithms, so scalable and reliable messaging is necessary. The SBA uses the DDF to provide this data communication layer between algorithms. The DDF provides an abstract interface over a distributed and persistent multi-layered storage system (memory-based caching above disk-based storage) and an event system that allows algorithm services to know when data is available and to get the data they need to begin processing when they need it. Together, the SBA and the DDF provide a flexible, high performance architecture that can meet the needs of product processing now and as they grow in the future.
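The Executive/Dispatcher/Strategy split named above can be caricatured in a few lines. This is an invented toy to make the three roles concrete, not Harris code; the algorithm and data names are placeholders.

    class Strategy:
        """Decides when an algorithm has the inputs it needs to run."""
        def __init__(self, required): self.required = set(required)
        def ready(self, available): return self.required <= set(available)

    class Executive:
        """Manages an algorithm as a service."""
        def __init__(self, algorithm, strategy):
            self.algorithm, self.strategy = algorithm, strategy
        def try_execute(self, data):
            if self.strategy.ready(data):
                return self.algorithm(data)
            return None   # wait for the Dispatcher to supply more inputs

    class Dispatcher:
        """Accumulates arriving data and offers it to the Executive."""
        def __init__(self, executive):
            self.executive, self.data = executive, {}
        def on_data(self, name, value):
            self.data[name] = value
            return self.executive.try_execute(self.data)

    cloud_mask = Executive(lambda d: "cloud product",
                           Strategy(["radiance", "geo"]))
    d = Dispatcher(cloud_mask)
    d.on_data("radiance", ...)        # not ready yet -> None
    print(d.on_data("geo", ...))      # -> "cloud product"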
NASA Astrophysics Data System (ADS)
Strotov, Valery V.; Taganov, Alexander I.; Konkin, Yuriy V.; Kolesenkov, Aleksandr N.
2017-10-01
Processing and analyzing Earth remote sensing data on board an ultra-small spacecraft is a pressing task, given the significant energy cost of data transfer and the low performance of onboard computers. This raises the issue of effective and reliable storage, in a specialized database, of the general information flow obtained from onboard data collection systems, including Earth remote sensing data. The paper considers the peculiarities of database management system operation with a multilevel memory structure. A storage format has been developed that describes the physical structure of the database and contains the parameters required for loading information. This structure reduces the memory occupied by the database because key values need not be stored separately. The paper presents the architecture of a relational database management system intended for embedding into the onboard software of an ultra-small spacecraft. A database for storing different kinds of information, including Earth remote sensing data, can be developed with this database management system for subsequent processing. The suggested database management system architecture places low demands on the computing power and memory resources available on board an ultra-small spacecraft. Data integrity is ensured during input and modification of the structured information.
Framework for a space shuttle main engine health monitoring system
NASA Technical Reports Server (NTRS)
Hawman, Michael W.; Galinaitis, William S.; Tulpule, Sharayu; Mattedi, Anita K.; Kamenetz, Jeffrey
1990-01-01
A framework developed for a health management system (HMS) directed at improving the safety of operation of the Space Shuttle Main Engine (SSME) is summarized. An emphasis was placed on near term technology through requirements to use existing SSME instrumentation and to demonstrate the HMS during SSME ground tests within five years. The HMS framework was developed through an analysis of SSME failure modes, fault detection algorithms, sensor technologies, and hardware architectures. A key feature of the HMS framework design is that a clear path from the ground test system to a flight HMS was maintained. Fault detection techniques based on time series, nonlinear regression, and clustering algorithms were developed and demonstrated on data from SSME ground test failures. The fault detection algorithms exhibited 100 percent detection of faults, had an extremely low false alarm rate, and were robust to sensor loss. These algorithms were incorporated into a hierarchical decision making strategy for overall assessment of SSME health. A preliminary design for a hardware architecture capable of supporting real time operation of the HMS functions was developed. Utilizing modular, commercial off-the-shelf components produced a reliable low cost design with the flexibility to incorporate advances in algorithm and sensor technology as they become available.
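As a schematic of the kind of time-series fault detection such a framework combines hierarchically, the sketch below flags deviations from a trailing-window estimate. The window size, threshold, and injected fault are invented for illustration and are unrelated to actual SSME data.

    import numpy as np

    def detect_faults(signal, window=50, n_sigma=5.0):
        """Flag samples deviating n_sigma from a trailing-window estimate."""
        flags = np.zeros(len(signal), dtype=bool)
        for t in range(window, len(signal)):
            ref = signal[t - window:t]
            mu, sigma = ref.mean(), ref.std() + 1e-9
            flags[t] = abs(signal[t] - mu) > n_sigma * sigma
        return flags

    rng = np.random.default_rng(2)
    pressure = rng.normal(3000.0, 5.0, 400)    # nominal sensor trace
    pressure[300:] -= 80.0                     # injected step fault
    print(np.argmax(detect_faults(pressure)))  # first flagged sample (~300)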
Mathematical Modeling of RNA-Based Architectures for Closed Loop Control of Gene Expression.
Agrawal, Deepak K; Tang, Xun; Westbrook, Alexandra; Marshall, Ryan; Maxwell, Colin S; Lucks, Julius; Noireaux, Vincent; Beisel, Chase L; Dunlop, Mary J; Franco, Elisa
2018-05-08
Feedback allows biological systems to control gene expression precisely and reliably, even in the presence of uncertainty, by sensing and processing environmental changes. Taking inspiration from natural architectures, synthetic biologists have engineered feedback loops to tune the dynamics and improve the robustness and predictability of gene expression. However, experimental implementations of biomolecular control systems are still far from satisfying performance specifications typically achieved by electrical or mechanical control systems. To address this gap, we present mathematical models of biomolecular controllers that enable reference tracking, disturbance rejection, and tuning of the temporal response of gene expression. These controllers employ RNA transcriptional regulators to achieve closed loop control where feedback is introduced via molecular sequestration. Sensitivity analysis of the models allows us to identify which parameters influence the transient and steady state response of a target gene expression process, as well as which biologically plausible parameter values enable perfect reference tracking. We quantify performance using typical control theory metrics to characterize response properties and provide clear selection guidelines for practical applications. Our results indicate that RNA regulators are well-suited for building robust and precise feedback controllers for gene expression. Additionally, our approach illustrates several quantitative methods useful for assessing the performance of biomolecular feedback control systems.
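A hedged sketch of the kind of model the paper analyzes: an ODE system in which the output species drives production of an antisense RNA that sequesters its activator, closing the loop. The structure and all rate constants here are illustrative assumptions, not the authors' equations or fitted parameters.

    import numpy as np
    from scipy.integrate import solve_ivp

    def closed_loop(t, y, k_ref=1.0, k_seq=50.0, gamma=0.5, beta=2.0):
        r_act, r_anti, x = y            # activator RNA, antisense RNA, output
        dr_act = k_ref - k_seq * r_act * r_anti - gamma * r_act
        dr_anti = beta * x - k_seq * r_act * r_anti - gamma * r_anti
        dx = r_act - gamma * x          # output produced by free activator
        return [dr_act, dr_anti, dx]

    sol = solve_ivp(closed_loop, (0, 40), [0.0, 0.0, 0.0])
    print(f"steady-state output ~ {sol.y[2, -1]:.3f}")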
Automotive System for Remote Surface Classification.
Bystrov, Aleksandr; Hoare, Edward; Tran, Thuy-Yung; Clarke, Nigel; Gashinova, Marina; Cherniakov, Mikhail
2017-04-01
In this paper we discuss a novel approach to road surface recognition, based on the analysis of backscattered microwave and ultrasonic signals. The novelty of our method lies in the fusion of sonar and polarimetric radar data, the extraction of features for separate swathes of the illuminated surface (segmentation), and the use of a multi-stage artificial neural network for surface classification. The developed system consists of a 24 GHz radar and a 40 kHz ultrasonic sensor. Features are extracted from the backscattered signals, and then principal component analysis and supervised classification are applied to the feature data. Special attention is paid to the multi-stage artificial neural network, which allows an overall increase in classification accuracy. The proposed technique was tested for recognition of a large number of real surfaces in different weather conditions, with an average correct-classification accuracy of 95%. The obtained results thereby demonstrate that the proposed system architecture and statistical methods allow for reliable discrimination of various road surfaces in real conditions.
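The PCA-plus-neural-network chain described above can be sketched as a scikit-learn pipeline. The synthetic "backscatter feature" data, class labels, and network sizes below are invented stand-ins for the authors' radar and sonar measurements.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    X = rng.normal(size=(600, 24))          # fused radar + sonar features
    y = rng.integers(0, 4, size=600)        # e.g. asphalt/gravel/snow/ice
    X += y[:, None] * 0.5                   # make the toy classes separable

    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(), PCA(n_components=8),
                        MLPClassifier(hidden_layer_sizes=(32, 16),
                                      max_iter=2000, random_state=0))
    clf.fit(Xtr, ytr)
    print(f"accuracy = {clf.score(Xte, yte):.2f}")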
Principles, Techniques, and Applications of Tissue Microfluidics
NASA Technical Reports Server (NTRS)
Wade, Lawrence A.; Kartalov, Emil P.; Shibata, Darryl; Taylor, Clive
2011-01-01
The principle of tissue microfluidics and its resultant techniques have been applied to cell analysis. Building microfluidics to suit a particular tissue sample would allow the rapid, reliable, inexpensive, highly parallelized, selective extraction of chosen regions of tissue for purposes of further biochemical analysis. Furthermore, the applicability of the techniques ranges beyond the described pathology application. For example, they would also allow the posing and successful answering of new sets of questions in many areas of fundamental research. The proposed integration of microfluidic techniques and tissue slice samples is called "tissue microfluidics" because it molds the microfluidic architectures in accordance with each particular structure of each specific tissue sample. Thus, microfluidics can be built around the tissues, following the tissue structure, or alternatively, the microfluidics can be adapted to the specific geometry of particular tissues. By contrast, the traditional approach is that microfluidic devices are structured in accordance with engineering considerations, while the biological components in applied devices are forced to comply with these engineering presets.
External Dependencies-Driven Architecture Discovery and Analysis of Implemented Systems
NASA Technical Reports Server (NTRS)
Ganesan, Dharmalingam; Lindvall, Mikael; Ron, Monica
2014-01-01
A method for architecture discovery and analysis of implemented systems (AIS) is disclosed. The premise of the method is that architecture decisions are inspired and influenced by the external entities that the software system makes use of. Examples of such external entities are COTS components, frameworks, and ultimately even the programming language itself and its libraries. Traces of these architecture decisions can thus be found in the implemented software and is manifested in the way software systems use such external entities. While this fact is often ignored in contemporary reverse engineering methods, the AIS method actively leverages and makes use of the dependencies to external entities as a starting point for the architecture discovery. The AIS method is demonstrated using the NASA's Space Network Access System (SNAS). The results show that, with abundant evidence, the method offers reusable and repeatable guidelines for discovering the architecture and locating potential risks (e.g. low testability, decreased performance) that are hidden deep in the implementation. The analysis is conducted by using external dependencies to identify, classify and review a minimal set of key source code files. Given the benefits of analyzing external dependencies as a way to discover architectures, it is argued that external dependencies deserve to be treated as first-class citizens during reverse engineering. The current structure of a knowledge base of external entities and analysis questions with strategies for getting answers is also discussed.
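As a toy rendering of the method's starting point, the sketch below groups the files of a Python code base by the external packages they import, so the most dependency-laden files can be reviewed first. The src directory, and indeed the choice of Python, are assumptions for illustration; the AIS method itself is not tied to any particular language.

    import ast, pathlib
    from collections import defaultdict

    def external_imports(py_file: pathlib.Path) -> set:
        """Top-level module names imported by one source file."""
        tree = ast.parse(py_file.read_text(encoding="utf-8"))
        mods = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                mods.update(a.name.split(".")[0] for a in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                mods.add(node.module.split(".")[0])
        return mods

    by_dependency = defaultdict(list)
    for f in pathlib.Path("src").rglob("*.py"):   # hypothetical source tree
        for dep in external_imports(f):
            by_dependency[dep].append(f.name)

    for dep, files in sorted(by_dependency.items()):
        print(f"{dep}: {len(files)} file(s)")     # review hot spots first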
Performance Analysis of Multilevel Parallel Applications on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Biegel, Bryan A. (Technical Monitor); Jost, G.; Jin, H.; Labarta J.; Gimenez, J.; Caubet, J.
2003-01-01
Parallel programming paradigms include process level parallelism, thread level parallelization, and multilevel parallelism. This viewgraph presentation describes a detailed performance analysis of these paradigms for Shared Memory Architecture (SMA). This analysis uses the Paraver Performance Analysis System. The presentation includes diagrams of a flow of useful computations.
Optimization of shared autonomy vehicle control architectures for swarm operations.
Sengstacken, Aaron J; DeLaurentis, Daniel A; Akbarzadeh-T, Mohammad R
2010-08-01
The need for greater capacity in automotive transportation (in the midst of constrained resources) and the convergence of key technologies from multiple domains may eventually produce the emergence of a "swarm" concept of operations. The swarm, which is a collection of vehicles traveling at high speeds and in close proximity, will require technology and management techniques to ensure safe, efficient, and reliable vehicle interactions. We propose a shared autonomy control approach, in which the strengths of both human drivers and machines are employed in concert for this management. Building from a fuzzy logic control implementation, optimal architectures for shared autonomy addressing differing classes of drivers (represented by the driver's response time) are developed through a genetic-algorithm-based search for preferred fuzzy rules. Additionally, a form of "phase transition" from a safe to an unsafe swarm architecture as the amount of sensor capability is varied uncovers key insights on the required technology to enable successful shared autonomy for swarm operations.
Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop
NASA Astrophysics Data System (ADS)
Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.
2018-04-01
The data center is a new concept of data processing and application proposed in recent years. It is a new processing approach based on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes cluster computing nodes and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it calls many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and addressing the need for concurrent multi-user high-speed access to remotely sensed data. The rationality, reliability and superiority of the system design were verified by testing the storage efficiency of different image data with multiple users and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images, through building an actual Hadoop service system.
Materassi, Donatello; Baschieri, Paolo; Tiribilli, Bruno; Zuccheri, Giampaolo; Samorì, Bruno
2009-08-01
We describe the realization of an atomic force microscope architecture designed to perform customizable experiments in a flexible and automatic way. Novel technological contributions are given by the software implementation platform (RTAI-LINUX), which is free and open source, and, from a functional point of view, by the implementation of hard real-time control algorithms. Other technical solutions, such as a new way to estimate the optical lever constant, are described as well. The adoption of this architecture provides many degrees of freedom in the device behavior and, furthermore, allows one to obtain a flexible experimental instrument at a relatively low cost. In particular, we show how such a system has been employed to obtain measurements in sophisticated single-molecule force spectroscopy experiments [Fernandez and Li, Science 303, 1674 (2004)]. Experimental results on proteins already studied using the same methodologies are provided in order to show the reliability of the measurement system.
An architectural approach to create self organizing control systems for practical autonomous robots
NASA Technical Reports Server (NTRS)
Greiner, Helen
1991-01-01
For practical industrial applications, the development of trainable robots is an important and immediate objective. Therefore, the development of flexible intelligence directly applicable to training is emphasized. It is generally agreed upon by the AI community that the fusion of expert systems, neural networks, and conventionally programmed modules (e.g., a trajectory generator) is promising in the quest for autonomous robotic intelligence. Autonomous robot development is hindered by integration and architectural problems. Some obstacles to the construction of more general robot control systems are as follows: (1) the growth problem; (2) software generation; (3) interaction with the environment; (4) reliability; and (5) resource limitation. Neural networks can be successfully applied to some of these problems. However, current implementations of neural networks are hampered by the resource limitation problem and must be trained extensively to produce computationally accurate output. A generalization of conventional neural nets is proposed, and an architecture is offered in an attempt to address the above problems.
NASA Technical Reports Server (NTRS)
Ruiz, Ian B.; Burke, Gary R.; Lung, Gerald; Whitaker, William D.; Nowicki, Robert M.
2004-01-01
The Jet Propulsion Laboratory (JPL) has developed a command interface chip-set that primarily consists of two mixed-signal ASICs: the Command Interface ASIC (CIA) and the Analog Interface ASIC (AIA). The open-systems architecture employed during the design of this chip-set enables its use both as an intelligent gateway between the system's flight computer and the control, actuation, and activation of the spacecraft's loads, valves, and pyrotechnics, respectively, and as the regulator of the spacecraft power bus. Furthermore, the architecture is highly adaptable and employs fault-tolerant design methods, enabling a host of other mission uses including reliable remote data collection. The objective of this design is both to provide a needed flight component that meets the stringent environmental requirements of current deep space missions and to add a new element to a growing library that can be used as a standard building block for future missions to the outer planets.
Newborn screening healthcare information system based on service-oriented architecture.
Hsieh, Sung-Huai; Hsieh, Sheau-Ling; Chien, Yin-Hsiu; Weng, Yung-Ching; Hsu, Kai-Ping; Chen, Chi-Huang; Tu, Chien-Ming; Wang, Zhenyu; Lai, Feipei
2010-08-01
In this paper, we established a newborn screening system under the HL7/Web Services frameworks. We rebuilt the NTUH Newborn Screening Laboratory's original standalone architecture, in which various heterogeneous systems operated individually, and restructured it into a Service-Oriented Architecture (SOA) distributed platform for further integrity and enhancement of sample collection, testing, diagnosis, evaluation, treatment and follow-up services, and screening database management, as well as collaboration and communication among hospitals; decision support and improved screening accuracy across the Taiwan neonatal systems are also addressed. In addition, the new system not only integrates the newborn screening procedures among phlebotomy clinics, referral hospitals, and the newborn screening center in Taiwan, but also introduces new models of screening procedures for the associated medical practitioners. Furthermore, it reduces the burden of manual operations, especially the reporting services that were heavily depended upon previously. The new system can accelerate the whole procedure effectively and efficiently, and it improves the accuracy and reliability of screening by ensuring quality control throughout processing.
Hernandez-Jayo, Unai; De-la-Iglesia, Idoia; Perez, Jagoba
2015-07-29
V-Alert is a cooperative application to be deployed in the frame of Smart Cities with the aim of reducing the probability of accidents involving Vulnerable Road Users (VRU) and vehicles. The architecture of V-Alert combines short- and long-range communication technologies in order to give drivers and VRUs more time to take the appropriate maneuver and avoid a possible collision. The information generated by mobile sensors (vehicles and cyclists) is sent over this heterogeneous communication architecture and processed in a central server, the Drivers Cloud, which is in charge of generating the messages shown on the drivers' and cyclists' Human Machine Interface (HMI). V-Alert was first tested in simulation to verify the communications architecture in a complex scenario; once validated, all of its elements were moved to a real scenario to check the application's reliability. These results are presented throughout the paper.
Application of Risk within Net Present Value Calculations for Government Projects
NASA Technical Reports Server (NTRS)
Grandl, Paul R.; Youngblood, Alisha D.; Componation, Paul; Gholston, Sampson
2007-01-01
In January 2004, President Bush announced a new vision for space exploration. This included retirement of the current Space Shuttle fleet by 2010 and the development of a new set of launch vehicles. The President's vision did not include significant increases in the NASA budget, so these development programs need to be cost conscious. Current trade study procedures address factors such as performance, reliability, safety, manufacturing, maintainability, operations, and costs. It would be desirable, however, to have increased insight into the cost factors behind each of the proposed system architectures. This paper reports on a set of component trade studies completed on the upper stage engine for the new launch vehicles. Increased insight into architecture costs was developed by including a Net Present Value (NPV) method and applying a set of associated risks to the base parametric cost data. The use of the NPV method along with the risks was found to add fidelity to the trade study and provide additional information to support the selection of a more robust design architecture.
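For reference, the NPV of a cash-flow stream is NPV = sum over t of CF_t / (1 + r)^t. The sketch below discounts an invented cash-flow profile and samples a cost-risk multiplier Monte Carlo style; all dollar figures, the 7% discount rate, and the triangular risk distribution are assumptions for illustration, not values from the study.

    import random

    def npv(cash_flows, rate):
        """NPV = sum_t CF_t / (1 + r)^t, with t starting at year 0."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

    baseline = [-250.0, -120.0, 40.0, 90.0, 90.0, 90.0]   # $M per year
    rng = random.Random(7)
    samples = []
    for _ in range(10_000):
        growth = rng.triangular(1.0, 1.6, 1.15)  # development cost-risk factor
        flows = [baseline[0] * growth] + baseline[1:]
        samples.append(npv(flows, rate=0.07))

    samples.sort()
    print(f"median NPV = {samples[len(samples) // 2]:.1f} $M")
    print(f"5th percentile = {samples[500]:.1f} $M")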
Airport Surface Network Architecture Definition
NASA Technical Reports Server (NTRS)
Nguyen, Thanh C.; Eddy, Wesley M.; Bretmersky, Steven C.; Lawas-Grodek, Fran; Ellis, Brenda L.
2006-01-01
Currently, airport surface communications are fragmented across multiple types of systems. These communication systems for airport operations at most airports today are based on dedicated and separate architectures that cannot support system-wide interoperability and information sharing. The requirements placed upon the Communications, Navigation, and Surveillance (CNS) systems in airports are rapidly growing, and integration is urgently needed if the future vision of the National Airspace System (NAS) and the Next Generation Air Transportation System (NGATS) 2025 concept are to be realized. To address this and other problems, such as airport surface congestion, the Space Based Technologies Project's Surface ICNS Network Architecture team at NASA Glenn Research Center has assessed airport surface communications requirements, analyzed existing and future surface applications, and defined a set of architecture functions that will help design a scalable, reliable and flexible surface network architecture to meet the current and future needs of airport operations. This paper describes the systems approach, or methodology, to networking that was employed to assess airport surface communications requirements, analyze applications, and define the surface network architecture functions as the building blocks or components of the network. The systems approach used for defining these functions is relatively new to networking. It views the surface network, along with its environment (everything that the surface network interacts with or impacts), as a system. Associated with this system are sets of services that are offered by the network to the rest of the system. Therefore, the surface network is considered part of the larger system (such as the NAS), with interactions and dependencies between the surface network and its users, applications, and devices. The surface network architecture includes components such as addressing/routing, network management, network performance and security.
A flexible architecture for advanced process control solutions
NASA Astrophysics Data System (ADS)
Faron, Kamyar; Iourovitski, Ilia
2005-05-01
Advanced Process Control (APC) is now mainstream practice in the semiconductor manufacturing industry. Over the past decade and a half APC has evolved from a "good idea" and "wouldn't it be great" concept to mandatory manufacturing practice. APC developments have primarily dealt with two major thrusts, algorithms and infrastructure, and often the line between them has been blurred. The algorithms have evolved from very simple single variable solutions to sophisticated and cutting edge adaptive multivariable (input and output) solutions. Spending patterns in recent times have demanded that the economics of a comprehensive APC infrastructure be completely justified for any and all cost conscious manufacturers. There are studies suggesting integration costs as high as 60% of the total APC solution costs. Such cost prohibitive figures clearly diminish the return on APC investments. This has limited the acceptance and development of pure APC infrastructure solutions for many fabs. Modern APC solution architectures must satisfy a wide array of requirements, from very manual R&D environments to very advanced and automated "lights out" manufacturing facilities. A majority of commercially available control solutions, and most in-house developed solutions, lack the important attributes of scalability, flexibility, and adaptability, and hence require significant resources for integration, deployment, and maintenance. Many APC improvement efforts have been abandoned and delayed due to legacy systems and inadequate architectural design. Recent advancements in the software industry (Service Oriented Architectures) have delivered ideal technologies for building scalable, flexible, and reliable solutions that can seamlessly integrate into any fab's existing systems and business practices. In this publication we evaluate the various attributes of the architectures required by fabs and illustrate the benefits of a Service Oriented Architecture in satisfying these requirements. Blue Control Technologies has developed an advanced service-oriented-architecture Run-to-Run control system which addresses these requirements.
NASA Astrophysics Data System (ADS)
Ghasemi, S.; Khorasani, K.
2015-10-01
In this paper, the problem of fault detection and isolation (FDI) of the attitude control subsystem (ACS) of spacecraft formation flying systems is considered. For developing the FDI schemes, an extended Kalman filter (EKF) is utilised, which belongs to a class of nonlinear state estimation methods. Three architectures, namely centralised, decentralised, and semi-decentralised, are considered and the corresponding FDI strategies are designed and constructed. Appropriate residual generation techniques and threshold selection criteria are proposed for these architectures. The capabilities of the proposed architectures for accomplishing the FDI tasks are studied through extensive numerical simulations for a team of four satellites in formation flight. Using a confusion matrix evaluation criterion, it is shown that the centralised architecture achieves the most reliable results relative to the semi-decentralised and decentralised architectures, at the expense of requiring a centralised processing module with access to the entire team information set. On the other hand, the semi-decentralised performance is close to that of the centralised scheme without relying on the availability of the entire team information set. Furthermore, the results confirm that FDI in formations with angular velocity measurement sensors achieves higher levels of accuracy, true-fault detection, and precision, along with lower levels of false-healthy misclassification, as compared to formations that utilise attitude measurement sensors.
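To illustrate the residual-generation and threshold-selection ideas mentioned in this abstract, the following sketch runs a linear Kalman filter (standing in for the paper's EKF) and flags a fault when the normalized innovation statistic crosses a chi-square threshold. The dynamics, noise levels, injected fault, and threshold are all assumed values, not the paper's ACS models.

```python
# Simplified residual-based fault detection sketch: a linear Kalman
# filter stands in for the EKF, and a chi-square test on the innovation
# flags faults. All models and thresholds here are illustrative.
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed state transition
H = np.array([[1.0, 0.0]])               # assumed measurement model
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[1e-2]])                   # measurement noise covariance
THRESH = 6.63                            # ~chi-square(1) at 1% false-alarm rate

x = np.zeros(2)
P = np.eye(2)
rng = np.random.default_rng(0)
truth = np.zeros(2)

for k in range(60):
    truth = A @ truth
    bias = 0.5 if k >= 40 else 0.0       # injected sensor fault at k = 40
    z = H @ truth + bias + rng.normal(0, 0.1, 1)

    # Predict
    x = A @ x
    P = A @ P @ A.T + Q

    # Residual (innovation) and its covariance
    r = z - H @ x
    S = H @ P @ H.T + R
    d = float(r @ np.linalg.inv(S) @ r)  # normalized residual statistic
    if d > THRESH:
        print(f"k={k}: fault declared (statistic {d:.1f})")

    # Update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ r).ravel()
    P = (np.eye(2) - K @ H) @ P
```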
Toward reliable and repeatable automated STEM-EDS metrology with high throughput
NASA Astrophysics Data System (ADS)
Zhong, Zhenxin; Donald, Jason; Dutrow, Gavin; Roller, Justin; Ugurlu, Ozan; Verheijen, Martin; Bidiuk, Oleksii
2018-03-01
New materials and designs in complex 3D architectures in logic and memory devices have raised the complexity of S/TEM metrology. In this paper, we report on a newly developed, automated, scanning transmission electron microscopy (STEM) based, energy dispersive X-ray spectroscopy (STEM-EDS) metrology method that addresses these challenges. Different methodologies toward repeatable and efficient, automated STEM-EDS metrology with high throughput are presented: we introduce the best-known auto-EDS acquisition and quantification methods for robust and reliable metrology and show how electron exposure dose impacts EDS metrology reproducibility, either due to poor signal-to-noise ratio (SNR) at low dose or due to sample modifications at high dose conditions. Finally, we discuss the limitations of the STEM-EDS metrology technique and propose strategies to optimize the process both in terms of throughput and metrology reliability.
NASA Technical Reports Server (NTRS)
Takada, Kevin C.; Ghariani, Ahmed E.; Van Keuren,
2015-01-01
The state-of-the-art Oxygen Generation Assembly (OGA) has been reliably producing breathing oxygen for the crew aboard the International Space Station (ISS) for over eight years. Lessons learned from operating the ISS OGA have led to proposing incremental improvements to advance the baseline design for use in a future long duration mission. These improvements are intended to reduce system weight, crew maintenance time and resupply mass from Earth while increasing reliability. The proposed improvements include replacing the cell stack membrane material, deleting the nitrogen purge equipment, replacing the hydrogen sensors, deleting the wastewater interface, replacing the hydrogen dome and redesigning the cell stack power supply. The development work to date will be discussed and forward work will be outlined. Additionally, a redesigned system architecture will be proposed.
Delay and Disruption Tolerant Networking MACHETE Model
NASA Technical Reports Server (NTRS)
Segui, John S.; Jennings, Esther H.; Gao, Jay L.
2011-01-01
To verify satisfaction of communication requirements imposed by unique missions, as early as 2000, the Communications Networking Group at the Jet Propulsion Laboratory (JPL) saw the need for an environment to support interplanetary communication protocol design, validation, and characterization. JPL's Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE), described in "Simulator of Space Communication Networks" (NPO-41373), NASA Tech Briefs, Vol. 29, No. 8 (August 2005), p. 44, combines various commercial, non-commercial, and in-house custom tools for simulation and performance analysis of space networks. The MACHETE environment supports orbital analysis, link budget analysis, communications network simulations, and hardware-in-the-loop testing. As NASA is expanding its Space Communications and Navigation (SCaN) capabilities to support planned and future missions, building infrastructure to maintain services and developing enabling technologies, an important and broader role is seen for MACHETE in design-phase evaluation of future SCaN architectures. To support evaluation of the developing Delay Tolerant Networking (DTN) field and its applicability for space networks, JPL developed MACHETE models for the DTN Bundle Protocol (BP) and the Licklider Transmission Protocol (LTP). DTN is an Internet Research Task Force (IRTF) architecture providing communication in and/or through highly stressed networking environments such as space exploration and battlefield networks. Stressed networking environments include those with intermittent (predictable and unknown) connectivity, large and/or variable delays, and high bit error rates. To provide its services over existing domain-specific protocols, the DTN protocols reside at the application layer of the TCP/IP stack, forming a store-and-forward overlay network. The key capabilities of the Bundle Protocol include custody-based reliability, the ability to cope with intermittent connectivity, the ability to take advantage of scheduled and opportunistic connectivity, and late binding of names to addresses.
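A toy sketch of the store-and-forward behavior described above may help: each node holds custody of bundles while its contact is down and forwards them when a scheduled or opportunistic contact opens. This is a conceptual illustration only; it is not MACHETE code nor a real Bundle Protocol implementation, and all names are invented.

```python
# Toy store-and-forward sketch of the Bundle Protocol idea: a node keeps
# custody of bundles while its contact is down and forwards them when a
# contact opens. Conceptual illustration only.
from collections import deque

class DtnNode:
    def __init__(self, name):
        self.name = name
        self.custody = deque()   # bundles this node has taken custody of

    def accept(self, bundle):
        """Custody-based reliability: hold the bundle until handed off."""
        self.custody.append(bundle)
        print(f"{self.name}: custody of {bundle!r}")

    def forward(self, link_up, next_hop):
        """Drain the queue only while the (possibly intermittent) link is up."""
        while link_up and self.custody:
            bundle = self.custody.popleft()
            next_hop.accept(bundle)        # custody transfers downstream

ground = DtnNode("ground")
relay = DtnNode("relay")
lander = DtnNode("lander")

ground.accept("cmd-001")
ground.accept("cmd-002")
ground.forward(link_up=False, next_hop=relay)   # no contact yet: bundles wait
ground.forward(link_up=True, next_hop=relay)    # contact opens: bundles move
relay.forward(link_up=True, next_hop=lander)
```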
Meaningful Cost-Benefit Analysis for Service-Oriented Architecture Projects
2010-05-01
SOA to identify these activities and shows how those costs come to be commingled with other development and maintenance activities. The paper argues...affected by SOA. To be consistent with the separation suggested above, this paper suggests the following new activities: • Enterprise architecture... This paper argues that proper cost-benefit analysis of service-oriented architecture projects is not...
Digital avionics systems - Principles and practices (2nd revised and enlarged edition)
NASA Technical Reports Server (NTRS)
Spitzer, Cary R.
1993-01-01
The state of the art in digital avionics systems is surveyed. The general topics addressed include: establishing avionics system requirements; avionics systems essentials in data bases, crew interfaces, and power; fault tolerance, maintainability, and reliability; architectures; packaging and fitting the system into the aircraft; hardware assessment and validation; software design, assessment, and validation; and determining the costs of avionics.
Using microgrids to enhance energy security and resilience
Lu, Xiaonan; Wang, Jianhui; Guo, Liping
2016-12-05
Although microgrids are now widely studied, challenges still exist. A reliable control architecture needs to be developed to coordinate different devices. Advanced forecasting and demand response management approaches should be implemented to cope with the intermittence of renewable generation. Furthermore, interconnection issues should be further studied to eliminate the influence of microgrid integration and achieve coordinated operation throughout the system.
An operating system for future aerospace vehicle computer systems
NASA Technical Reports Server (NTRS)
Foudriat, E. C.; Berman, W. J.; Will, R. W.; Bynum, W. L.
1984-01-01
The requirements for future aerospace vehicle computer operating systems are examined in this paper. The computer architecture is assumed to be distributed, with a local area network connecting the nodes. Each node is assumed to provide a specific functionality. The network provides for communication so that the overall tasks of the vehicle are accomplished. The O/S structure is based upon the concept of objects. The mechanisms for integrating node-unique objects with node-common objects, in order to implement both autonomy and cooperation between nodes, are developed. The requirements for time-critical performance and for reliability and recovery are discussed. Time-critical performance impacts all parts of the distributed operating system; e.g., its structure, the functional design of its objects, the language structure, etc. Throughout the paper the tradeoffs - concurrency, language structure, object recovery, binding, file structure, communication protocol, programmer freedom, etc. - are considered to arrive at a feasible, maximum-performance design. Reliability of the network system is considered. A parallel multipath bus structure is proposed for the control of delivery time for time-critical messages. The architecture also supports immediate recovery of the time-critical message system after a communication failure.
Distributed controller clustering in software defined networks
Gani, Abdullah; Akhunzada, Adnan; Talebian, Hamid; Choo, Kim-Kwang Raymond
2017-01-01
Software Defined Networking (SDN) is an emerging and promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we propose a novel clustered distributed controller architecture in a real setting of SDNs. The distributed cluster implementation comprises multiple popular SDN controllers. The proposed mechanism is evaluated using a real-world network topology running on top of an emulated SDN environment. The results show that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6% and the packet loss from 5.22% to 4.15%, compared to a distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers respectively. Moreover, the proposed method also shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes it possible to handle unexpected load fluctuations while maintaining continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and interoperability. PMID:28384312
DOT National Transportation Integrated Search
2007-11-01
The purpose of this document is to provide an Architecture Analysis for the Next Generation 911 (NG911) System (or system of systems). The U.S. Department of Transportation (USDOT) understands that access to emergency services...
Resting-state fMRI correlations: From link-wise unreliability to whole brain stability.
Pannunzi, Mario; Hindriks, Rikkert; Bettinardi, Ruggero G; Wenger, Elisabeth; Lisofsky, Nina; Martensson, Johan; Butler, Oisin; Filevich, Elisa; Becker, Maxi; Lochstet, Martyna; Kühn, Simone; Deco, Gustavo
2017-08-15
The functional architecture of spontaneous BOLD fluctuations has been characterized in detail by numerous studies, demonstrating its potential relevance as a biomarker. However, the systematic investigation of its consistency is still in its infancy. Here, we analyze within- and between-subject variability and test-retest reliability of resting-state functional connectivity (FC) in a unique data set comprising multiple fMRI scans (42) from 5 subjects, and 50 single scans from 50 subjects. We adopt a statistical framework that enables us to identify different sources of variability in FC. We show that the low reliability of single links can be significantly improved by using multiple scans per subject. Moreover, in contrast to earlier studies, we show that spatial heterogeneity in FC reliability is not significant. Finally, we demonstrate that despite the low reliability of individual links, the information carried by the whole-brain FC matrix is robust and can be used as a functional fingerprint to identify individual subjects from the population. Copyright © 2017 Elsevier Inc. All rights reserved.
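The whole-brain "fingerprint" result lends itself to a compact illustration: build a functional-connectivity (FC) vector per scan from region time series, then identify each subject by the best-correlated FC vector from another session. The synthetic data, shapes, and identification rule below are assumptions for demonstration, not the study's pipeline.

```python
# Minimal sketch of FC-based subject "fingerprinting": per-scan FC
# vectors from region time series, identification by FC correlation.
import numpy as np

def fc_matrix(ts):
    """Upper triangle of the region-by-region Pearson correlation matrix."""
    fc = np.corrcoef(ts)                     # ts: (regions, timepoints)
    iu = np.triu_indices_from(fc, k=1)
    return fc[iu]                            # vectorized FC for one scan

rng = np.random.default_rng(1)
n_subj, n_reg, n_tp = 5, 20, 200
# Each subject gets a stable latent structure; two scans per subject.
scans = {}
for s in range(n_subj):
    mix = rng.normal(size=(n_reg, n_reg))    # subject-specific structure
    for session in ("a", "b"):
        ts = mix @ rng.normal(size=(n_reg, n_tp))  # noisy realization
        scans[(s, session)] = fc_matrix(ts)

# Identify: for each session-b scan, pick the session-a scan whose FC
# vector correlates best; count how often the subject matches.
hits = 0
for s in range(n_subj):
    sims = [np.corrcoef(scans[(s, "b")], scans[(t, "a")])[0, 1]
            for t in range(n_subj)]
    hits += int(np.argmax(sims) == s)
print(f"identified {hits}/{n_subj} subjects from FC fingerprints")
```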
The Aeronautical Data Link: Taxonomy, Architectural Analysis, and Optimization
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Goode, Plesent W.
2002-01-01
The future Communication, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) System will rely on global satellite navigation and on ground-based and satellite-based communications via multi-protocol networks (e.g., a combined Aeronautical Telecommunications Network (ATN)/Internet Protocol (IP) network) to bring about needed improvements in efficiency and safety of operations to meet increasing levels of air traffic. This paper discusses the development of an approach that completely describes optimal data link architecture configuration and behavior to meet the multiple conflicting objectives of concurrent and different operations functions. The practical application of the approach enables the design and assessment of configurations relative to airspace operations phases. The approach includes a formal taxonomic classification, an architectural analysis methodology, and optimization techniques. The formal taxonomic classification provides a multidimensional correlation of data link performance with data link service, information protocol, spectrum, and technology mode, and with flight operations phase and environment. The architectural analysis methodology assesses the impact of a specific architecture configuration and behavior on local ATM system performance. Deterministic and stochastic optimization techniques maximize architectural design effectiveness while addressing operational, technology, and policy constraints.
Bai, Xufeng; Zhao, Hu; Huang, Yong; Xie, Weibo; Han, Zhongmin; Zhang, Bo; Guo, Zilong; Yang, Lin; Dong, Haijiao; Xue, Weiya; Li, Guangwei; Hu, Gang; Hu, Yong; Xing, Yongzhong
2016-07-01
Panicle architecture determines the number of spikelets per panicle (SPP) and is highly associated with grain yield in rice (Oryza sativa L.). Understanding the genetic basis of panicle architecture is important for improving rice grain yield. In this study, we dissected panicle architecture into eight component traits, which were phenotyped in a germplasm collection of 529 cultivars. Multiple regression analysis revealed that the number of secondary branches (NSB) was the major factor contributing to SPP. Genome-wide association analysis was performed independently for the eight panicle architecture traits in the indica and japonica subpopulations as well as in the whole rice population. In total, 30 loci were associated with these traits. Of these, 13 loci were closely linked to known panicle architecture genes, and 17 novel loci were repeatedly identified in different environments. An association signal cluster was identified for NSB and number of spikelets per secondary branch (NSSB) in the region of 31.6 to 31.7 Mb on chromosome 4. In addition to the common associations detected in both subpopulations, many associated loci were unique to one subpopulation; for example, distinct loci were specifically associated with panicle length (PL) in indica and japonica rice, respectively. Moreover, flowering genes were also associated with the formation of panicle architecture in rice. These results suggest that different gene networks regulate panicle architecture in indica and japonica rice. Copyright © 2016 Crop Science Society of America.
Rapid earthquake detection through GPU-Based template matching
NASA Astrophysics Data System (ADS)
Mu, Dawei; Lee, En-Jui; Chen, Po
2017-12-01
The template-matching algorithm (TMA) has been widely adopted for improving the reliability of earthquake detection. The TMA is based on calculating the normalized cross-correlation coefficient (NCC) between a collection of selected template waveforms and the continuous waveform recordings of seismic instruments. In realistic applications, the computational cost of the TMA is much higher than that of traditional techniques. In this study, we provide an analysis of the TMA and show how the GPU architecture provides an almost ideal environment for accelerating the TMA and NCC-based pattern recognition algorithms in general. So far, our best-performing GPU code has achieved a speedup factor of more than 800 with respect to a common sequential CPU code. We demonstrate the performance of our GPU code using seismic waveform recordings from the ML 6.6 Meinong earthquake sequence in Taiwan.
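The NCC computation at the heart of the TMA is simple to state; a plain CPU reference version is sketched below (the paper's contribution is accelerating exactly this kernel on GPUs). The synthetic template and trace are assumed examples.

```python
# Reference NumPy sketch of the normalized cross-correlation (NCC) step
# at the core of the template-matching algorithm (CPU version).
import numpy as np

def ncc(template, trace):
    """Slide `template` along `trace`, returning the NCC at each lag."""
    m = len(template)
    t = (template - template.mean()) / (template.std() * m)
    out = np.empty(len(trace) - m + 1)
    for lag in range(out.size):
        win = trace[lag:lag + m]
        std = win.std()
        out[lag] = 0.0 if std == 0 else np.dot(t, win - win.mean()) / std
    return out

# Usage: a template buried in noise is recovered at the right offset.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 6 * np.pi, 100))
trace = rng.normal(0, 1.0, 2000)
trace[700:800] += 3 * template        # hidden event at sample 700
cc = ncc(template, trace)
print("best match at sample", int(np.argmax(cc)), "NCC =", round(cc.max(), 2))
```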
NASA Astrophysics Data System (ADS)
Cofré, Aarón; Vargas, Asticio; Torres-Ruiz, Fabián A.; Campos, Juan; Lizana, Angel; del Mar Sánchez-López, María; Moreno, Ignacio
2017-11-01
We present a quantitative analysis of the performance of a complete snapshot polarimeter based on a polarization diffraction grating (PDGr). The PDGr is generated in a common-path polarization interferometer with a Z optical architecture that uses two liquid-crystal on silicon (LCoS) displays to imprint two different phase-only diffraction gratings onto two orthogonal linear states of polarization. As a result, we obtain a programmable PDGr capable of acting as a simultaneous polarization state generator (PSG), yielding diffraction orders with different states of polarization. The same system is also shown to operate as a polarization state analyzer (PSA), and is therefore useful for the realization of a snapshot polarimeter. We analyze its performance using quantitative metrics such as the condition number, and verify its reliability for the detection of states of polarization.
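The condition-number metric mentioned above can be made concrete: a polarization state analyzer is summarized by a measurement matrix W whose rows are analyzer Stokes vectors, and cond(W) bounds how intensity noise is amplified when Stokes parameters are recovered. The tetrahedron analyzer states below are a textbook example, not the PDGr configuration of the paper.

```python
# Condition-number sketch for a polarization state analyzer (PSA).
# The analyzer states (a regular tetrahedron on the Poincare sphere)
# are an assumed near-optimal example.
import numpy as np

s = 1 / np.sqrt(3)
W = 0.5 * np.array([          # rows: analyzer Stokes vectors
    [1,  s,  s,  s],
    [1,  s, -s, -s],
    [1, -s,  s, -s],
    [1, -s, -s,  s],
])

print("condition number:", np.linalg.cond(W))   # ~sqrt(3) for this PSA

# Recovering a Stokes vector from the four measured intensities:
S_in = np.array([1.0, 0.3, -0.2, 0.5])          # assumed input state
I = W @ S_in                                     # measured intensities
print("recovered:", np.linalg.pinv(W) @ I)       # pseudo-inverse recovery
```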
An automated environment for multiple spacecraft engineering subsystem mission operations
NASA Technical Reports Server (NTRS)
Bahrami, K. A.; Hioe, K.; Lai, J.; Imlay, E.; Schwuttke, U.; Hsu, E.; Mikes, S.
1990-01-01
Flight operations at the Jet Propulsion Laboratory (JPL) are now performed by teams of specialists, each team dedicated to a particular spacecraft. Certain members of each team are responsible for monitoring the performance of their respective spacecraft subsystems. Ground operations, which are very complex, are manual, labor-intensive, slow, and tedious, and therefore costly and inefficient. The challenge of the new decade is to operate a large number of spacecraft simultaneously while sharing limited human and computer resources, without compromising overall reliability. The Engineering Analysis Subsystem Environment (EASE) is an architecture that enables fewer controllers to monitor and control spacecraft engineering subsystems. A prototype of EASE has been installed in the JPL Space Flight Operations Facility for on-line testing. This article describes the underlying concept, development, testing, and benefits of the EASE prototype.
Architecture of a framework for providing information services for public transport.
García, Carmelo R; Pérez, Ricardo; Lorenzo, Alvaro; Quesada-Arencibia, Alexis; Alayón, Francisco; Padrón, Gabino
2012-01-01
This paper presents OnRoute, a framework for developing and running ubiquitous software that provides information services to passengers of public transportation, including payment systems and on-route guidance services. To achieve a high level of interoperability, accessibility and context awareness, OnRoute uses the ubiquitous computing paradigm. To guarantee the quality of the software produced, the framework also follows the reliable-software principles used in critical contexts such as automotive systems. The main components of its architecture (run-time, system services, software components and development discipline) and how they are deployed in the transportation network (stations and vehicles) are described in this paper. Finally, to illustrate the use of OnRoute, the development of a guidance service for travellers is explained.
Centralized vs decentralized lunar power system study
NASA Astrophysics Data System (ADS)
Metcalf, Kenneth; Harty, Richard B.; Perronne, Gerald E.
1991-09-01
Three power-system options are considered for use on a lunar base: the fully centralized option, the fully decentralized option, and a hybrid comprising features of the first two. Power source, power conditioning, and power transmission are considered separately, and each architecture option is examined with ac and dc distribution, high- and low-voltage transmission, and buried and suspended cables. Assessments are made on the basis of mass, technological complexity, cost, reliability, and installation complexity; however, a preferred power-system architecture is not proposed. Preferred options include ac transmission at voltages of 2000-7000 V, with buried high-voltage lines and suspended low-voltage lines. Assessments of the total installation cost are required to determine the most suitable power system.
Spacecraft on-board SAR image generation for EOS-type missions
NASA Technical Reports Server (NTRS)
Liu, K. Y.; Arens, W. E.; Assal, H. M.; Vesecky, J. F.
1987-01-01
Spacecraft on-board synthetic aperture radar (SAR) image generation is an extremely difficult problem because of the requirements for high computational rates (usually on the order of giga-operations per second), high reliability (some missions last up to 10 years), and low power dissipation and mass (typically less than 500 watts and 100 kilograms). Recently, a JPL study was performed to assess the feasibility of on-board SAR image generation for EOS-type missions. This paper summarizes the results of that study. Specifically, it proposes a processor architecture using a VLSI time-domain parallel array for azimuth correlation. Using available space-qualifiable technology to implement the proposed architecture, an on-board SAR processor having acceptable power and mass characteristics appears feasible for EOS-type applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
April M. Whaley; Stacey M. L. Hendrickson; Ronald L. Boring
In response to Staff Requirements Memorandum (SRM) SRM-M061020, the U.S. Nuclear Regulatory Commission (NRC) is sponsoring work to update the technical basis underlying human reliability analysis (HRA) in an effort to improve the robustness of HRA. The ultimate goal of this work is to develop a hybrid of existing methods addressing limitations of current HRA models and in particular issues related to intra- and inter-method variabilities and results. This hybrid method is now known as the Integrated Decision-tree Human Event Analysis System (IDHEAS). Existing HRA methods have looked at elements of the psychological literature, but there has not previously been a systematic attempt to translate the complete span of cognition from perception to action into mechanisms that can inform HRA. Therefore, a first step of this effort was to perform a literature search of psychology, cognition, behavioral science, teamwork, and operating performance to incorporate current understanding of human performance in operating environments, thus affording an improved technical foundation for HRA. However, this literature review went one step further by mining the literature findings to establish causal relationships and explicit links between the different types of human failures, performance drivers and associated performance measures ultimately used for quantification. This is the first of two papers that detail the literature review (paper 1) and its product (paper 2). This paper describes the literature review and the high-level architecture used to organize the literature review, and the second paper (Whaley, Hendrickson, Boring, & Xing, these proceedings) describes the resultant cognitive framework.
Technology Challenges for Deep-Throttle Cryogenic Engines for Space Exploration
NASA Technical Reports Server (NTRS)
Brown, Kendall K.; Nelson, Karl W.
2005-01-01
Historically, cryogenic rocket engines have not been used for in-space applications due to their additional complexity, the mission need for high reliability, and the challenges of propellant boil-off. While the mission and vehicle architectures are not yet defined for the lunar and Martian robotic and human exploration objectives, cryogenic rocket engines offer the potential for higher performance and greater architecture/mission flexibility. In-situ cryogenic propellant production could enable a more robust exploration program by significantly reducing the propellant mass delivered to low earth orbit, thus warranting the evaluation of cryogenic rocket engines versus the hypergolic bi-propellant engines used in the Apollo program. A multi-use engine, one which can provide the functionality that separate engines provided in the Apollo mission architecture, is desirable for lunar and Mars exploration missions because it increases overall architecture effectiveness through commonality and modularity. The engine requirement derivation process must address each unique mission application and each unique phase within each mission. The resulting requirements, such as thrust level, performance, packaging, burn duration, number of operations; required impulses for each trajectory phase; operation after extended space or surface exposure; availability for inspection and maintenance; throttle range for planetary descent, ascent, acceleration limits; and many more must be addressed. Within engine system studies, the system and component technology, capability, and risks must be evaluated and a balance between the appropriate amount of technology-push and technology-pull must be addressed. This paper will summarize many of the key technology challenges associated with using high-performance cryogenic liquid propellant rocket engine systems and components in the exploration program architectures. The paper is divided into two areas. The first area describes how the mission requirements affect the engine system requirements and create system-level technology challenges. An engine system architecture for multiple applications, or a family of engines based upon a set of core technologies, design, and fabrication approaches, may reduce overall programmatic cost and risk. The engine system discussion will also address the characterization of engine cycle figures of merit, configurations, and design approaches for some in-space vehicle alternatives under consideration. The second area evaluates the component-level technology challenges induced by the system requirements. Component technology issues are discussed addressing injector, thrust chamber, ignition system, turbopump assembly, and valve design for the challenging requirements of high reliability, robustness, fault tolerance, deep throttling, and reasonable performance (with respect to weight and specific impulse).
NASA Advanced Explorations Systems: Advancements in Life Support Systems
NASA Technical Reports Server (NTRS)
Shull, Sarah A.; Schneider, Walter F.
2016-01-01
The NASA Advanced Exploration Systems (AES) Life Support Systems (LSS) project strives to develop reliable, energy-efficient, and low-mass spacecraft systems to provide environmental control and life support systems (ECLSS) critical to enabling long duration human missions beyond low Earth orbit (LEO). Highly reliable, closed-loop life support systems are among the capabilities required for the longer duration human space exploration missions assessed by NASA's Habitability Architecture Team (HAT). The LSS project is focused on four areas: architecture and systems engineering for life support systems, environmental monitoring, air revitalization, and wastewater processing and water management. Starting with the International Space Station (ISS) LSS systems as a point of departure (where applicable), the mission of the LSS project is three-fold: (1) address discrete LSS technology gaps, (2) improve the reliability of LSS systems, and (3) advance LSS systems towards integrated testing on the ISS. This paper summarizes the work being done in the four areas listed above to meet these objectives. Details are given on the following focus areas: Systems Engineering and Architecture- With so many complex systems comprising life support in space, it is important to understand the overall system requirements to define life support system architectures for different space mission classes, ensure that all the components integrate well together, and verify that testing is as representative of destination environments as possible. Environmental Monitoring- In an enclosed spacecraft that is constantly operating complex machinery for its own basic functionality as well as science experiments and technology demonstrations, it is possible for the environment to become compromised. While current environmental monitors aboard the ISS will alert crew members and mission control if there is an emergency, long-duration environmental monitoring cannot be done in orbit, as current methodologies rely largely on sending environmental samples back to Earth. The LSS project is developing onboard analysis capabilities that will replace the need to return air and water samples from space for ground analysis. Air Revitalization- The air revitalization task comprises work in carbon dioxide removal, oxygen generation and recovery, and trace contamination and particulate control. The CO2 removal and associated air-drying development efforts under the LSS project are focused both on improving the current SOA technology on the ISS and on assessing the viability of other sorbents and technologies available in academia and industry. The Oxygen Generation and Recovery technology development area encompasses several sub-tasks in an effort to supply O2 to the crew at the required conditions, to recover O2 from metabolic CO2, and to recycle recovered O2 back to the cabin environment. Current state-of-the-art oxygen generation systems aboard the space station are capable of generating or recovering approximately 40% of required oxygen; for exploration missions this percentage needs to be greatly increased. A spacecraft cabin trace contaminant and particulate control system serves to keep the environment below the spacecraft maximum allowable concentration (SMAC) for chemicals and particulates. Both passive (filters) and active (scrubbers) methods contribute to the overall TC & PC design.
Work in the area of trace contamination and particulate control under the LSS project is focused on making improvements to the SOA TC & PC systems on the ISS to improve performance and reduce consumables. Wastewater Processing and Water Management- A major goal of the LSS project is the development of water recovery systems to support long-duration human exploration beyond LEO. Current space station wastewater processing and water management systems distill urine and process wastewater to recover water from urine and humidity condensate in the spacecraft at an approximately 74% recovery rate. For longer missions farther into deep space, that recovery rate must be greatly increased so that astronauts can journey for months without resupply cargo ships from Earth.
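To see why the recovery rate dominates resupply mass, a back-of-the-envelope sketch helps: make-up water scales with (1 - recovery rate). The crew size, per-person water demand, and mission length below are assumed illustrative values, not figures from the LSS project.

```python
# Back-of-the-envelope sketch: resupply water scales with (1 - recovery).
# All constants are assumed illustrative values.
CREW = 4
WATER_KG_PER_DAY = 3.5          # assumed per-person daily water demand
MISSION_DAYS = 900              # assumed deep-space mission duration

def resupply_kg(recovery_rate):
    """Make-up water mass that must be carried over the mission."""
    daily_makeup = CREW * WATER_KG_PER_DAY * (1 - recovery_rate)
    return daily_makeup * MISSION_DAYS

for rate in (0.74, 0.90, 0.98):
    print(f"recovery {rate:.0%}: {resupply_kg(rate):,.0f} kg of make-up water")
```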
Innovative fiber-laser architecture-based compact wind lidar
NASA Astrophysics Data System (ADS)
Prasad, Narasimha S.; Tracy, Allen; Vetorino, Steve; Higgins, Richard; Sibell, Russ
2016-03-01
This paper describes an innovative, compact and eye-safe coherent lidar system developed for use in wind and wake vortex sensing applications. This advanced lidar system is field-ruggedized with reduced size, weight, and power consumption (SWaP), configured around an all-fiber and modular architecture. The all-fiber architecture is built on a fiber seed laser coupled to uniquely configured fiber amplifier modules and associated photonic elements, including an integrated 3D scanner. The scanner provides user-programmable continuous 360-degree azimuth and 180-degree elevation scan angles. The system architecture eliminates free-space beam alignment issues and allows plug-and-play operation using graphical user interface software modules. Besides its all-fiber architecture, the lidar system also provides pulse-width agility to aid in improving range resolution. Operating at 1.54 microns with a PRF of up to 20 kHz, the wind lidar is air cooled, with overall dimensions of 30" x 46" x 60", and is designed as a Class 1 system. This lidar is capable of measuring wind velocities greater than 120 m/s, with a precision of +/- 0.2 m/s, over ranges greater than 10 km and with a range resolution of less than 15 m. This compact and modular system is anticipated to provide mobility, reliability, and ease of field deployment for wind and wake vortex measurements. The current lidar architecture is amenable to trace gas sensing, and as such it is being evolved for airborne and space-based platforms. In this paper, the key features of the wind lidar instrumentation and its functionality are discussed, followed by results of recent wind measurements on a wind farm.
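For context on the quoted velocity figures, coherent wind lidars convert a measured Doppler shift to line-of-sight speed via v = lambda * f_d / 2. The sketch below applies this relation at the paper's 1.54-micron wavelength; the Doppler-shift values are assumed examples.

```python
# Coherent-lidar Doppler relation: v = lambda * f_d / 2.
# Wavelength from the paper; shift values are assumed examples.
WAVELENGTH_M = 1.54e-6          # operating wavelength (1.54 microns)

def los_velocity(doppler_shift_hz):
    """Line-of-sight velocity (m/s) for a measured Doppler shift (Hz)."""
    return WAVELENGTH_M * doppler_shift_hz / 2.0

# A 0.2 m/s velocity resolution corresponds to a ~260 kHz shift at 1.54 um:
print(f"{los_velocity(0.26e6):.2f} m/s")     # ~0.20 m/s
# And 120 m/s maps to a ~156 MHz Doppler shift:
print(f"{2 * 120 / WAVELENGTH_M / 1e6:.0f} MHz")
```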