Feedback loops and temporal misalignment in component-based hydrologic modeling
NASA Astrophysics Data System (ADS)
Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.
2011-12-01
In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
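To illustrate the second issue above, the sketch below (plain Python, not the OpenMI API) passes a boundary flux reported by a component updating every 1.0 h to a component updating every 1.5 h, and compares two interpolation schemes by the mass-balance error they introduce; the flux series and time steps are invented for illustration.

```python
# Minimal coupling sketch: a "water" component reports a boundary flux on its own
# time grid, a temporally misaligned "sediment" component samples it via either a
# zero-order hold or linear interpolation, and the mass-balance error is tracked.
import numpy as np

t_water = np.arange(0.0, 12.0 + 1e-9, 1.0)      # water-side update times (h)
flux_water = np.exp(-0.3 * t_water)             # boundary flux it reports (kg/h)
t_sed = np.arange(0.0, 12.0 + 1e-9, 1.5)        # sediment-side update times (h)

def integrate(flux, t):
    """Trapezoidal integral of a flux time series -> mass (kg)."""
    return float(np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(t)))

mass_out = integrate(flux_water, t_water)       # mass the water side actually exported

hold_idx = np.searchsorted(t_water, t_sed, side="right") - 1
schemes = {
    "hold (previous value)": flux_water[hold_idx],
    "linear interpolation ": np.interp(t_sed, t_water, flux_water),
}
for name, flux_sed in schemes.items():
    mass_in = integrate(flux_sed, t_sed)        # mass the sediment side believes it received
    print(f"{name}: received {mass_in:.4f} kg, balance error {mass_in - mass_out:+.4f} kg")
```

Running the sketch shows the linear scheme tracking the exported mass more closely than the hold, which is the kind of comparison the study carries out for the coupled water-sediment system.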
Feature-based component model for design of embedded systems
NASA Astrophysics Data System (ADS)
Zha, Xuan Fang; Sriram, Ram D.
2004-11-01
An embedded system is a hybrid of hardware and software that combines the flexibility of software with the real-time performance of hardware. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., the Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variations of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.
Design of a component-based integrated environmental modeling framework
Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...
Baad-Hansen, Thomas; Kold, Søren; Kaptein, Bart L; Søballe, Kjeld
2007-08-01
In RSA, tantalum markers attached to metal-backed acetabular cups are often difficult to detect on stereo radiographs due to the high density of the metal shell. This results in occlusion of the prosthesis markers and may lead to inconclusive migration results. Within the last few years, new software systems have been developed to solve this problem. We compared the precision of 3 RSA systems in migration analysis of the acetabular component. A hemispherical and a non-hemispherical acetabular component were mounted in a phantom. Both acetabular components underwent migration analyses with 3 different RSA systems: conventional RSA using tantalum markers, an RSA system using a hemispherical cup algorithm, and a novel model-based RSA system. We found narrow confidence intervals, indicating high precision of the conventional marker system and model-based RSA with regard to migration and rotation. The confidence intervals of conventional RSA and model-based RSA were narrower than those of the hemispherical cup algorithm-based system regarding cup migration and rotation. The model-based RSA software combines the precision of the conventional RSA software with the convenience of the hemispherical cup algorithm-based system. Based on our findings, we believe that these new tools offer an improvement in the measurement of acetabular component migration.
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.
2011-01-01
The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.
Design and Application of an Ontology for Component-Based Modeling of Water Systems
NASA Astrophysics Data System (ADS)
Elag, M.; Goodall, J. L.
2012-12-01
Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community named the Water Resources Component (WRC) ontology in order to advance the application of component-based modeling frameworks across water related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.
Performance-based maintenance of gas turbines for reliable control of degraded power systems
NASA Astrophysics Data System (ADS)
Mo, Huadong; Sansavini, Giovanni; Xie, Min
2018-03-01
Maintenance actions are necessary for ensuring proper operations of control systems under component degradation. However, current condition-based maintenance (CBM) models based on component health indices are not suitable for degraded control systems. Indeed, failures of control systems are determined only by the controller outputs, and the feedback mechanism compensates for the control performance loss caused by component deterioration. Thus, control systems may still operate normally even if the component health indices exceed failure thresholds. This work investigates a CBM model of control systems and employs the reduced control performance as a direct degradation measure for deciding maintenance activities. The reduced control performance depends on the underlying component degradation, modelled as a Wiener process, and on the feedback mechanism. To this aim, the controller features are quantified by developing a dynamic and stochastic control block diagram-based simulation model, consisting of the degraded components and the control mechanism. At each inspection, the system receives a maintenance action if the control performance deterioration exceeds its preventive-maintenance or failure thresholds. Inspired by realistic cases, the component degradation model considers random start time and unit-to-unit variability. The cost analysis of the maintenance model is conducted via Monte Carlo simulation. Optimal maintenance strategies are investigated to minimize the expected maintenance costs, which are a direct consequence of the control performance. The proposed framework is able to design preventive maintenance actions for a gas power plant, ensuring the required load frequency control performance against a sudden load increase. The optimization results identify the trade-off between system downtime and maintenance costs as a function of preventive maintenance thresholds and inspection frequency. Finally, the control performance-based maintenance model can reduce maintenance costs as compared to CBM and pre-scheduled maintenance.
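As a rough illustration of triggering maintenance on control performance rather than on a raw health index, the following Monte Carlo sketch uses a Wiener degradation process and a stand-in performance-loss function; all thresholds, costs, and rates are hypothetical and not taken from the paper.

```python
# Toy performance-based CBM simulation: wear follows a Wiener process, "control
# performance loss" is a saturating stand-in function of wear, and maintenance is
# decided at inspections from the performance loss, not from the wear level itself.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.02, 0.05              # Wiener drift and diffusion per hour (hypothetical)
dt, horizon = 1.0, 2000.0           # time step and mission length (h)
inspect_every = 100.0               # inspection interval (h)
pm_level, fail_level = 0.15, 0.30   # thresholds on performance loss
c_inspect, c_pm, c_cm = 1.0, 20.0, 100.0  # costs (arbitrary units)

def perf_loss(wear):
    # Stand-in for the closed-loop performance degradation produced by the paper's
    # control block diagram simulation; here just a monotone function of wear.
    return 1.0 - np.exp(-wear)

def run_once():
    wear, t, cost = 0.0, 0.0, 0.0
    while t < horizon:
        wear = max(0.0, wear + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal())
        t += dt
        if t % inspect_every < 1e-9:
            cost += c_inspect
            loss = perf_loss(wear)
            if loss >= fail_level:
                cost += c_cm; wear = 0.0     # corrective replacement
            elif loss >= pm_level:
                cost += c_pm; wear = 0.0     # preventive maintenance
    return cost

print("mean cost:", np.mean([run_once() for _ in range(200)]))
```

Sweeping `pm_level` and `inspect_every` in such a sketch reproduces, in miniature, the trade-off between downtime and maintenance cost that the optimization in the paper explores.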
Maximum flow-based resilience analysis: From component to system
Jin, Chong; Li, Ruiying; Kang, Rui
2017-01-01
Resilience, the ability to withstand disruptions and recover quickly, must be considered during system design because any disruption of the system may cause considerable losses, both economic and societal. This work develops analytic maximum flow-based resilience models for series and parallel systems using Zobel's resilience measure. The two analytic models can be used to quantitatively evaluate and compare the resilience of systems with the corresponding performance structures. For systems with identical components, the resilience of the parallel system increases with an increasing number of components, while the resilience of the series system remains constant. A Monte Carlo-based simulation method is also provided to verify the correctness of our analytic resilience models and to analyze the resilience of networked systems based on that of their components. A road network example is used to illustrate the analysis process, and the resilience comparison among networks with different topologies but the same components indicates that a system with redundant performance is usually more resilient than one without redundant performance. However, not all redundant component capacity improves system resilience; the effectiveness of the capacity redundancy depends on where the redundant capacity is located. PMID:28545135
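The following small example shows the flavour of a flow-based resilience calculation: s-t maximum flow (via networkx) is the performance measure, a disruption reduces one edge capacity, and a triangular-loss resilience score of the Zobel type is computed under an assumed linear recovery; the network and all numbers are illustrative only.

```python
# Flow-based resilience illustration on a made-up four-node network.
import networkx as nx

G = nx.DiGraph()
edges = [("s", "a", 10), ("s", "b", 8), ("a", "t", 7), ("b", "t", 9), ("a", "b", 4)]
G.add_weighted_edges_from(edges, weight="capacity")

baseline = nx.maximum_flow_value(G, "s", "t", capacity="capacity")

G.edges["b", "t"]["capacity"] = 3          # disruption: one edge loses most of its capacity
disrupted = nx.maximum_flow_value(G, "s", "t", capacity="capacity")

X = (baseline - disrupted) / baseline      # fractional performance loss
T, T_star = 10.0, 100.0                    # assumed recovery time and planning horizon (days)
R = 1.0 - X * T / (2.0 * T_star)           # triangular-loss (Zobel-type) resilience score
print(f"max flow {baseline} -> {disrupted}, loss X = {X:.2f}, resilience R = {R:.3f}")
```

Moving the lost capacity to a different edge and rerunning the calculation shows why the location of redundant capacity, not just its amount, determines whether it improves resilience.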
Robustness of Flexible Systems With Component-Level Uncertainties
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.
2000-01-01
Robustness of flexible systems in the presence of model uncertainties at the component level is considered. Specifically, an approach for formulating robustness of flexible systems in the presence of frequency and damping uncertainties at the component level is presented. The synthesis of the components is based on a modification of a controls-based algorithm for component mode synthesis. The formulation deals first with robustness of synthesized flexible systems. It is then extended to deal with global (non-synthesized) dynamic models with component-level uncertainties by projecting uncertainties from the component level to the system level. A numerical example involving a two-dimensional simulated docking problem is worked out to demonstrate the feasibility of the proposed approach.
Methodology Evaluation Framework for Component-Based System Development.
ERIC Educational Resources Information Center
Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran
2003-01-01
Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…
An ontology for component-based models of water resource systems
NASA Astrophysics Data System (ADS)
Elag, Mostafa; Goodall, Jonathan L.
2013-08-01
Component-based modeling is an approach for simulating water resource systems where a model is composed of a set of components, each with a defined modeling objective, interlinked through data exchanges. Component-based modeling frameworks are used within the hydrologic, atmospheric, and earth surface dynamics modeling communities. While these efforts have been advancing, it has become clear that the water resources modeling community in particular, and arguably the larger earth science modeling community as well, faces a challenge of fully and precisely defining the metadata for model components. The lack of a unified framework for model component metadata limits interoperability between modeling communities and the reuse of models across modeling frameworks due to ambiguity about the model and its capabilities. To address this need, we propose an ontology for water resources model components that describes core concepts and relationships using the Web Ontology Language (OWL). The ontology that we present, which is termed the Water Resources Component (WRC) ontology, is meant to serve as a starting point that can be refined over time through engagement by the larger community until a robust knowledge framework for water resource model components is achieved. This paper presents the methodology used to arrive at the WRC ontology, the WRC ontology itself, and examples of how the ontology can aid in component-based water resources modeling by (i) assisting in identifying relevant models, (ii) encouraging proper model coupling, and (iii) facilitating interoperability across earth science modeling frameworks.
The Livingstone Model of a Main Propulsion System
NASA Technical Reports Server (NTRS)
Bajwa, Anupa; Sweet, Adam; Korsmeyer, David (Technical Monitor)
2003-01-01
Livingstone is a discrete, propositional logic-based inference engine that has been used for diagnosis of physical systems. We present a component-based model of a Main Propulsion System (MPS) and describe how it is used with Livingstone (L2) to implement a diagnostic system for integrated vehicle health management (IVHM) for the Propulsion IVHM Technology Experiment (PITEX). We start by discussing the process of conceptualizing such a model. We describe graphical tools that facilitated the generation of the model. The model is composed of components (which map onto physical components), connections between components, and constraints. A component is specified by variables, with a set of discrete, qualitative values for each variable in its local nominal and failure modes. For each mode, the model specifies the component's behavior and transitions. We describe the MPS components' nominal and fault modes and associated Livingstone variables and data structures. Given this model, and observed external commands and observations from the system, Livingstone tracks the state of the MPS over discrete time-steps by choosing trajectories that are consistent with observations. We briefly discuss how the compiled model fits into the overall PITEX architecture. Finally we summarize our modeling experience, discuss advantages and disadvantages of our approach, and suggest enhancements to the modeling process.
Applications of SPICE for modeling miniaturized biomedical sensor systems
NASA Technical Reports Server (NTRS)
Mundt, C. W.; Nagle, H. T.
2000-01-01
This paper proposes a model for a miniaturized signal conditioning system for biopotential and ion-selective electrode arrays. The system consists of three main components: sensors, interconnections, and signal conditioning chip. The model for this system is based on SPICE. Transmission-line based equivalent circuits are used to represent the sensors, lumped resistance-capacitance circuits describe the interconnections, and a model for the signal conditioning chip is extracted from its layout. A system for measurements of biopotentials and ionic activities can be miniaturized and optimized for cardiovascular applications based on the development of an integrated SPICE system model of its electrochemical, interconnection, and electronic components.
Controlled cooling of an electronic system based on projected conditions
David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.
2016-05-17
Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.
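A hedged sketch of the control decision described in the claim: among a table of cooling settings with experimentally fitted thermal-resistance and power values (hypothetical here), the controller picks the lowest total-power setting whose predicted temperature, for the projected load and heat-sink temperature, stays within the allowed range.

```python
# Choose a cooling setting from projected future conditions; the settings, thermal
# resistances, and limits below are invented placeholders for the fitted model the
# patent describes.

SETTINGS = [                     # (setting, cooling power W, thermal resistance K/W)
    ("low",    20.0, 0.060),
    ("medium", 45.0, 0.045),
    ("high",   90.0, 0.035),
]

def choose_setting(projected_it_power_w, projected_sink_temp_c, t_limit_c=85.0):
    """Return the adjusted control setting for the projected future conditions."""
    feasible = []
    for name, cool_power, r_th in SETTINGS:
        t_pred = projected_sink_temp_c + r_th * projected_it_power_w  # simple fitted model
        if t_pred <= t_limit_c:
            feasible.append((cool_power + projected_it_power_w, name, t_pred))
    if not feasible:
        return SETTINGS[-1][0]   # fall back to maximum cooling
    total_power, name, t_pred = min(feasible)   # lowest combined power that meets the limit
    return name

print(choose_setting(projected_it_power_w=1200.0, projected_sink_temp_c=30.0))
```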
Controlled cooling of an electronic system based on projected conditions
David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.
2015-08-18
Energy efficient control of a cooling system cooling an electronic system is provided based, in part, on projected conditions. The control includes automatically determining an adjusted control setting(s) for an adjustable cooling component(s) of the cooling system. The automatically determining is based, at least in part, on projected power consumed by the electronic system at a future time and projected temperature at the future time of a heat sink to which heat extracted is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on an experimentally obtained model(s) relating the targeted temperature and power consumption of the adjustable cooling component(s) of the cooling system.
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components. The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the addition of nitrogen (N) and sediment modeling compo...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Parsons, T.; King, R.
This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
Robust high-performance control for robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1991-01-01
Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies have been merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.
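A simplified rendition of the combined control law: a model-based feedforward torque computed from the known part of the dynamics plus a feedback PID correction (fixed gains here, whereas the patented feedback controller is adaptive); the plant parameters are placeholders.

```python
# Feedforward-plus-feedback control sketch for a single joint. The inertia and
# friction terms stand in for the "known part of the manipulator dynamics".
class FeedforwardPlusPID:
    def __init__(self, kp, ki, kd, inertia, friction, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.inertia, self.friction = inertia, friction   # known (modeled) dynamics
        self.integral, self.prev_err = 0.0, 0.0

    def command(self, q_des, qd_des, qdd_des, q_meas, qd_meas):
        # Feedforward: nominal torque from the modeled dynamics.
        tau_ff = self.inertia * qdd_des + self.friction * qd_des
        # Feedback: PID on the tracking error complements the nominal torque.
        err = q_des - q_meas
        self.integral += err * self.dt
        derr = (err - self.prev_err) / self.dt
        self.prev_err = err
        tau_fb = self.kp * err + self.ki * self.integral + self.kd * derr
        return tau_ff + tau_fb

ctrl = FeedforwardPlusPID(kp=50.0, ki=5.0, kd=2.0, inertia=1.2, friction=0.3, dt=0.01)
print(ctrl.command(q_des=0.5, qd_des=0.1, qdd_des=0.0, q_meas=0.45, qd_meas=0.08))
```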
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong
2011-12-01
Recently, health monitoring of the major gas path components of gas turbines has mostly used model-based methods such as Gas Path Analysis (GPA). This method finds changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the clean, fault-free engine performance parameters calculated by a base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic and Genetic Algorithms (GAs) are being studied to improve the model-based method. Among them, NNs are most often used for engine fault diagnosis because of their good learning performance, but they suffer from low accuracy and long training times when the learning database is large. In addition, a very complex structure is needed to find single-type or multiple-type faults of gas path components effectively. This work inversely builds a base performance model of a turboprop engine for a high-altitude UAV from measured performance data, and proposes a fault diagnostic system that uses the base engine performance model together with artificial intelligence methods, namely Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database generated from the developed base performance model. The NN is trained with the Feed-Forward Back-Propagation (FFBP) method. Finally, several test examples verify that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.
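The isolation step can be caricatured as residual matching against fault signatures, as in the sketch below; crisp sign rules stand in for the paper's fuzzy logic, the NN-based quantification stage is omitted, and all parameter values and signature patterns are invented.

```python
# Compare measured gas-path parameters against a clean-engine baseline and flag the
# component whose (invented) residual sign pattern matches best.
baseline = {"EGT": 850.0, "fuel_flow": 0.32, "N1": 0.97}   # clean-engine model output
measured = {"EGT": 872.0, "fuel_flow": 0.335, "N1": 0.96}

residual = {k: (measured[k] - baseline[k]) / baseline[k] for k in baseline}

# Hypothetical signature table: expected sign of each residual for a fault in each component
signatures = {
    "compressor": {"EGT": +1, "fuel_flow": +1, "N1": -1},
    "turbine":    {"EGT": +1, "fuel_flow": +1, "N1": +1},
}

def match(sig):
    return sum(1 for k, s in sig.items() if residual[k] * s > 0)

isolated = max(signatures, key=lambda c: match(signatures[c]))
print("residuals:", {k: round(v, 4) for k, v in residual.items()})
print("isolated component:", isolated)
```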
A model-based executive for commanding robot teams
NASA Technical Reports Server (NTRS)
Barrett, Anthony
2005-01-01
The paper presents a way to robustly command a system of systems as a single entity. Instead of modeling each component system in isolation and then manually crafting interaction protocols, this approach starts with a model of the collective population as a single system. By compiling the model into separate elements for each component system and utilizing a teamwork model for coordination, it circumvents the complexities of manually crafting robust interaction protocols. The resulting systems are both globally responsive by virtue of a team oriented interaction model and locally responsive by virtue of a distributed approach to model-based fault detection, isolation, and recovery.
Low-Dimensional Models for Physiological Systems: Nonlinear Coupling of Gas and Liquid Flows
NASA Astrophysics Data System (ADS)
Staples, A. E.; Oran, E. S.; Boris, J. P.; Kailasanath, K.
2006-11-01
Current computational models of biological organisms focus on the details of a specific component of the organism. For example, very detailed models of the human heart, an aorta, a vein, or part of the respiratory or digestive system, are considered either independently from the rest of the body, or as interacting simply with other systems and components in the body. In actual biological organisms, these components and systems are strongly coupled and interact in complex, nonlinear ways leading to complicated global behavior. Here we describe a low-order computational model of two physiological systems, based loosely on a circulatory and respiratory system. Each system is represented as a one-dimensional fluid system with an interconnected series of mass sources, pumps, valves, and other network components, as appropriate, representing different physical organs and system components. Preliminary results from a first version of this model system are presented.
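A toy version of such a low-order coupled model: two lumped circulatory compartments driven by a periodic pump, with a respiratory oscillation nonlinearly modulating the flow resistance between them; the equations and parameters are illustrative, not those of the authors' model.

```python
# Two-compartment lumped "circulatory" network with a breathing-modulated resistance.
import numpy as np
from scipy.integrate import solve_ivp

C1, C2 = 1.0, 0.5          # compliances of the two compartments
R0 = 1.0                   # baseline flow resistance between them
f_resp = 0.25              # breathing frequency (Hz)

def pump(t):
    return 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)           # periodic "heart" source

def rhs(t, y):
    p1, p2 = y
    r = R0 * (1.0 + 0.3 * np.sin(2 * np.pi * f_resp * t))    # nonlinear respiratory coupling
    q = (p1 - p2) / r                                         # flow between compartments
    dp1 = (pump(t) - q) / C1
    dp2 = (q - p2 / 2.0) / C2                                 # outflow through a fixed resistance
    return [dp1, dp2]

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.5], max_step=0.01)
print("final pressures:", sol.y[:, -1])
```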
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-10-28
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support the design and deployment of the integration of new components, as well as for the analysis, verification, simulation and testing needed to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability, and correctness of component cooperation.
NASA Enterprise Architecture and Its Use in Transition of Research Results to Operations
NASA Astrophysics Data System (ADS)
Frisbie, T. E.; Hall, C. M.
2006-12-01
Enterprise architecture describes the design of the components of an enterprise, their relationships and how they support the objectives of that enterprise. NASA Stennis Space Center leads several projects involving enterprise architecture tools used to gather information on research assets within NASA's Earth Science Division. In the near future, enterprise architecture tools will link and display the relevant requirements, parameters, observatories, models, decision systems, and benefit/impact information relationships and map to the Federal Enterprise Architecture Reference Models. Components configured within the enterprise architecture serving the NASA Applied Sciences Program include the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool. The Earth Science Components Knowledge Base systematically catalogues NASA missions, sensors, models, data products, model products, and network partners appropriate for consideration in NASA Earth Science applications projects. The Systems Components database is a centralized information warehouse of NASA's Earth Science research assets and a critical first link in the implementation of enterprise architecture. The Earth Science Architecture Tool is used to analyze potential NASA candidate systems that may be beneficial to decision-making capabilities of other Federal agencies. Use of the current configuration of NASA enterprise architecture (the Earth Science Components Knowledge Base, the Systems Components database, and the Earth Science Architecture Tool) has far exceeded its original intent and has tremendous potential for the transition of research results to operational entities.
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of a photovoltaic system subject to nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode function components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than the single SVM prediction model and the EMD-SVM prediction model without optimization.
2017-01-01
To predict the output power of a photovoltaic system subject to nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode function components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803
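The decompose-predict-recombine structure of the method can be sketched as follows; a crude moving-average split stands in for EMD, scikit-learn's SVR stands in for the ABC-optimized SVM (the bee-colony hyperparameter search is omitted), and the data are synthetic.

```python
# Structural sketch of a decompose-predict-recombine forecaster on synthetic PV data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
t = np.arange(0, 4 * 96)                            # four days of 15-minute samples
power = np.clip(np.sin(np.pi * (t % 96) / 96), 0, None) + 0.05 * rng.standard_normal(t.size)

def decompose(x, window=8):
    """Stand-in for EMD: split the series into a smooth trend and a residual component."""
    trend = np.convolve(x, np.ones(window) / window, mode="same")
    return [x - trend, trend]                       # "IMF-like" part + trend (Res)

def to_supervised(x, lags=4):
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    return X, x[lags:]

components = decompose(power)
forecast = np.zeros(len(power) - 4)
for comp in components:
    X, y = to_supervised(comp)
    model = SVR(C=10.0, epsilon=0.01).fit(X, y)     # one SVR per component
    forecast += model.predict(X)                    # in-sample reconstruction for brevity

rmse = np.sqrt(np.mean((forecast - power[4:]) ** 2))
print(f"reconstruction RMSE: {rmse:.3f}")
```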
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-01-01
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support the design and deployment of the integration of new components, as well as for the analysis, verification, simulation and testing needed to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using the Eclipse Extensible Coordination Tools (ECT), i.e., the Reo coordination language. With the rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system, assuring, by design, the scalability, interoperability, and correctness of component cooperation. PMID:27801829
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.
2017-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125
THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.
Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R
2016-07-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS ® ); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
NASA Technical Reports Server (NTRS)
Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.;
2016-01-01
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.
An architecture for object-oriented intelligent control of power systems in space
NASA Technical Reports Server (NTRS)
Holmquist, Sven G.; Jayaram, Prakash; Jansen, Ben H.
1993-01-01
A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free the astronauts from localizing faults and reconfiguring loads if problems with the power distribution and generation components occur. The control system uses an object-oriented simulation model of the power system and first principle knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three different levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g., a resistive load is thought of as a resistor. However, at the PCM level detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes, a reactive and a proactive mode, simultaneously. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM. Compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level. Individual components determine their future health status using a physical model and measured historical data. In case changes in the health status seem imminent, the component warns the control system about its impending failure. The fault isolation process uses the FSM level for its reasoning base.
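The reactive mode described above amounts to comparing measurements against each component object's simulated nominal behaviour; the minimal sketch below shows that pattern at the electrical-equivalent (EEM) level with invented classes, tolerances, and readings.

```python
# Each component object knows its electrical-equivalent behaviour; a fault is
# flagged when a measurement deviates from the simulated expectation.
class ResistiveLoad:
    def __init__(self, name, resistance_ohm, tol=0.05):
        self.name, self.r, self.tol = name, resistance_ohm, tol

    def expected_current(self, bus_voltage):
        return bus_voltage / self.r                 # EEM-level behaviour: Ohm's law

    def check(self, bus_voltage, measured_current):
        expected = self.expected_current(bus_voltage)
        if abs(measured_current - expected) > self.tol * expected:
            return f"{self.name}: FAULT (expected {expected:.2f} A, got {measured_current:.2f} A)"
        return f"{self.name}: nominal"

loads = [ResistiveLoad("heater", 12.0), ResistiveLoad("pump", 8.0)]
measurements = {"heater": 2.49, "pump": 2.10}       # the pump should draw ~3.75 A at 30 V
for load in loads:
    print(load.check(bus_voltage=30.0, measured_current=measurements[load.name]))
```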
Benchmarking a Soil Moisture Data Assimilation System for Agricultural Drought Monitoring
NASA Technical Reports Server (NTRS)
Hun, Eunjin; Crow, Wade T.; Holmes, Thomas; Bolten, John
2014-01-01
Despite considerable interest in the application of land surface data assimilation systems (LDAS) for agricultural drought applications, relatively little is known about the large-scale performance of such systems and, thus, the optimal methodological approach for implementing them. To address this need, this paper evaluates an LDAS for agricultural drought monitoring by benchmarking individual components of the system (i.e., a satellite soil moisture retrieval algorithm, a soil water balance model and a sequential data assimilation filter) against a series of linear models which perform the same function (i.e., have the same basic input/output structure) as the full system component. Benchmarking is based on the calculation of the lagged rank cross-correlation between the normalized difference vegetation index (NDVI) and soil moisture estimates acquired for various components of the system. Lagged soil moisture/NDVI correlations obtained using individual LDAS components versus their linear analogs reveal the degree to which non-linearities and/or complexities contained within each component actually contribute to the performance of the LDAS system as a whole. Here, a particular system based on surface soil moisture retrievals from the Land Parameter Retrieval Model (LPRM), a two-layer Palmer soil water balance model and an Ensemble Kalman filter (EnKF) is benchmarked. Results suggest significant room for improvement in each component of the system.
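The benchmarking metric itself is simple to reproduce; the sketch below computes the lagged rank (Spearman) cross-correlation between a soil moisture series and NDVI observed several steps later, using synthetic series as stand-ins for the LPRM retrievals and NDVI composites.

```python
# Lagged rank cross-correlation between soil moisture and later NDVI (synthetic data).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n, lag = 200, 4                                     # e.g. four composite periods of lead time
soil_moisture = rng.standard_normal(n)
ndvi = np.roll(soil_moisture, lag) * 0.6 + 0.4 * rng.standard_normal(n)  # NDVI lags soil moisture

def lagged_rank_corr(sm, veg, k):
    """Spearman correlation between soil moisture and NDVI observed k steps later."""
    rho, _ = spearmanr(sm[:-k], veg[k:])
    return rho

for k in (1, lag, 8):
    print(f"lag {k}: rho = {lagged_rank_corr(soil_moisture, ndvi, k):+.2f}")
```

Comparing such correlations for an LDAS component and for its linear analog is exactly the benchmarking contrast the paper uses to judge whether the component's added complexity pays off.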
Systems engineering interfaces: A model based approach
NASA Astrophysics Data System (ADS)
Fosse, E.; Delp, C. L.
The engineering of interfaces is a critical function of the discipline of Systems Engineering. Included in interface engineering are instances of interaction. Interfaces provide the specifications of the relevant properties of a system or component that can be connected to other systems or components while instances of interaction are identified in order to specify the actual integration to other systems or components. Current Systems Engineering practices rely on a variety of documents and diagrams to describe interface specifications and instances of interaction. The SysML[1] specification provides a precise model based representation for interfaces and interface instance integration. This paper will describe interface engineering as implemented by the Operations Revitalization Task using SysML, starting with a generic case and culminating with a focus on a Flight System to Ground Interaction. The reusability of the interface engineering approach presented as well as its extensibility to more complex interfaces and interactions will be shown. Model-derived tables will support the case studies shown and are examples of model-based documentation products.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...
2016-08-22
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.
The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theurich, Gerhard; DeLuca, C.; Campbell, T.
The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.
Advanced and secure architectural EHR approaches.
Blobel, Bernd
2006-01-01
Electronic Health Records (EHRs), provided as lifelong patient records, are advancing towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and be model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model of Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives contain the enterprise view (business processes, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, and others. Thus, security-related component models reflecting all of the views mentioned have to be established to enable both application and communication security services as an integral part of the system's architecture. Besides the decomposition and simplification of systems regarding the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both the structure and the behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles. In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, as well as the HL7 version 3 activities, are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's Clinical Document Architecture (CDA), and the set of models ranging from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining models to their underlying concepts. A future-proof EHR architecture, as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks, has to follow advanced architectural paradigms.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
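The modularization idea, combining separately computed module reliabilities according to the system structure, can be illustrated with the simple series/parallel composition below; REST/RML itself supports far richer failure-mode simulation, and the failure rates and mission time here are invented.

```python
# Combine component-level reliabilities into a system-level estimate.
import math

def exp_reliability(failure_rate, t):
    """Reliability of a single component with a constant failure rate at time t."""
    return math.exp(-failure_rate * t)

def series(rels):
    out = 1.0
    for r in rels:
        out *= r                      # all modules must survive
    return out

def parallel(rels):
    out = 1.0
    for r in rels:
        out *= (1.0 - r)              # fails only if every redundant module fails
    return 1.0 - out

t = 1000.0                             # mission time (h), illustrative
cpu = exp_reliability(1e-5, t)
bus = exp_reliability(2e-6, t)
sensor = parallel([exp_reliability(5e-5, t)] * 3)   # triple-redundant sensors
print(f"system reliability: {series([cpu, bus, sensor]):.6f}")
```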
Robust high-performance control for robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1989-01-01
Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies were merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.
A UML-based ontology for describing hospital information system architectures.
Winter, A; Brigl, B; Wendt, T
2001-01-01
To control the heterogeneity inherent to hospital information systems, information management needs appropriate methods or techniques for modeling hospital information systems. This paper shows that, for several reasons, available modeling approaches are not able to answer relevant questions of information management. To overcome this major deficiency we offer a UML-based ontology for describing hospital information system architectures. This ontology views a hospital information system at three layers: the domain layer, the logical tool layer, and the physical tool layer, and defines the relevant components. The relations between these components, especially between components of different layers, make it possible to answer our information management questions.
Model-Driven Architecture for Agent-Based Systems
NASA Technical Reports Server (NTRS)
Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.
2004-01-01
The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes certain kinds of models to be used, how those models may be prepared, and the relationships of the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. The generation of software artifacts is based on a metamodel. Each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
Object-oriented biomedical system modelling--the language.
Hakman, M; Groth, T
1999-11-01
The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal description of the language syntax and semantics.
NASA Astrophysics Data System (ADS)
Peckham, S. D.; DeLuca, C.; Gochis, D. J.; Arrigo, J.; Kelbert, A.; Choi, E.; Dunlap, R.
2014-12-01
In order to better understand and predict environmental hazards of weather/climate, ecology and deep earth processes, geoscientists develop and use physics-based computational models. These models are used widely both in academic and federal communities. Because of the large effort required to develop and test models, there is widespread interest in component-based modeling, which promotes model reuse and simplified coupling to tackle problems that often cross discipline boundaries. In component-based modeling, the goal is to make relatively small changes to models that make it easy to reuse them as "plug-and-play" components. Sophisticated modeling frameworks exist to rapidly couple these components to create new composite models. They allow component models to exchange variables while accommodating different programming languages, computational grids, time-stepping schemes, variable names and units. Modeling frameworks have arisen in many modeling communities. CSDMS (Community Surface Dynamics Modeling System) serves the academic earth surface process dynamics community, while ESMF (Earth System Modeling Framework) serves many federal Earth system modeling projects. Others exist in both the academic and federal domains and each satisfies design criteria that are determined by the community they serve. While they may use different interface standards or semantic mediation strategies, they share fundamental similarities. The purpose of the Earth System Bridge project is to develop mechanisms for interoperability between modeling frameworks, such as the ability to share a model or service component. This project has three main goals: (1) Develop a Framework Description Language (ES-FDL) that allows modeling frameworks to be described in a standard way so that their differences and similarities can be assessed. (2) Demonstrate that if a model is augmented with a framework-agnostic Basic Model Interface (BMI), then simple, universal adapters can go from BMI to a modeling framework's native component interface. (3) Create semantic mappings between modeling frameworks that support semantic mediation. This third goal involves creating a crosswalk between the CF Standard Names and the CSDMS Standard Names (a set of naming conventions). This talk will summarize progress towards these goals.
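To make the BMI idea above concrete, the following minimal Python sketch shows a toy model exposing a BMI-style interface (initialize/update/finalize/get_value/set_value) and a hypothetical framework adapter wrapping it. The class names, method signatures, and the toy "heat" model are illustrative assumptions, not the official CSDMS BMI specification or any ES-FDL artifact.

# Minimal sketch of a BMI-style component and a framework adapter.
# Names and signatures are illustrative assumptions, not the official BMI spec.

class HeatModelBMI:
    """Toy 0-D heat-storage model exposing a BMI-like interface."""

    def initialize(self, config=None):
        cfg = config or {}
        self.time = 0.0
        self.dt = cfg.get("dt", 1.0)
        self.temperature = cfg.get("T0", 280.0)
        self.forcing = 0.0

    def update(self):
        # Relax temperature toward the external forcing value.
        self.temperature += 0.1 * (self.forcing - self.temperature) * self.dt
        self.time += self.dt

    def finalize(self):
        pass

    def get_value(self, name):
        return {"temperature": self.temperature, "time": self.time}[name]

    def set_value(self, name, value):
        if name == "forcing":
            self.forcing = value


class FrameworkComponentAdapter:
    """Hypothetical adapter mapping a framework's step/exchange calls onto BMI."""

    def __init__(self, bmi_model, config=None):
        self.model = bmi_model
        self.model.initialize(config)

    def exchange_in(self, name, value):
        self.model.set_value(name, value)

    def step(self):
        self.model.update()

    def exchange_out(self, name):
        return self.model.get_value(name)


if __name__ == "__main__":
    comp = FrameworkComponentAdapter(HeatModelBMI(), {"dt": 0.5, "T0": 275.0})
    for _ in range(10):
        comp.exchange_in("forcing", 285.0)   # value another component would supply
        comp.step()
    print("temperature after 10 steps:", comp.exchange_out("temperature"))

Because the adapter only touches the BMI-style methods, the same model class could in principle be wrapped for different frameworks, which is the interoperability argument made above.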
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
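The CPDG-based computation described above can be illustrated, in spirit only, with a small Python sketch: a directed acyclic dependency graph whose nodes carry component reliabilities and whose edges carry transition probabilities, traversed with an explicit stack. The graph, the numbers, and the traversal rule are assumptions for demonstration and are not the paper's exact CPDG construction or stack-based algorithm.

# Illustrative sketch: system reliability from a component probabilistic
# dependency graph (CPDG) using an explicit stack.  Assumes a DAG; all values
# are invented for demonstration.

node_reliability = {"A": 0.99, "B": 0.97, "C": 0.98, "End": 1.0}
# edges: node -> list of (next_node, transition_probability); probabilities
# out of each node sum to 1.
edges = {
    "A": [("B", 0.6), ("C", 0.4)],
    "B": [("End", 1.0)],
    "C": [("End", 1.0)],
    "End": [],
}

def system_reliability(start="A", end="End"):
    total = 0.0
    # stack holds (current_node, accumulated probability of reaching it successfully)
    stack = [(start, node_reliability[start])]
    while stack:
        node, prob = stack.pop()
        if node == end:
            total += prob
            continue
        for nxt, p in edges[node]:
            stack.append((nxt, prob * p * node_reliability[nxt]))
    return total

print("estimated system reliability:", round(system_reliability(), 4))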
A component-based system for agricultural drought monitoring by remote sensing.
Dong, Heng; Li, Jun; Yuan, Yanbin; You, Lin; Chen, Chao
2017-01-01
In recent decades, various kinds of remote sensing-based drought indexes have been proposed and widely used in the field of drought monitoring. However, the development of drought-related software and platforms lags behind the theoretical research. Current drought monitoring systems focus mainly on information management and publishing, and cannot implement professional drought monitoring or parameter inversion modelling, especially models based on multi-dimensional feature space. In view of the above problems, this paper aims at filling this gap with a component-based system named RSDMS to facilitate the application of drought monitoring by remote sensing. The system is designed and developed based on the Component Object Model (COM) to ensure the flexibility and extendibility of modules. RSDMS realizes general image-related functions such as data management, image display, spatial reference management, and image processing and analysis, and further provides drought monitoring and evaluation functions based on internal and external models. Finally, China's Ningxia region is selected as the study area to validate the performance of RSDMS. The experimental results show that RSDMS provides efficient and scalable support for agricultural drought monitoring.
Reliability Evaluation of Machine Center Components Based on Cascading Failure Analysis
NASA Astrophysics Data System (ADS)
Zhang, Ying-Zhi; Liu, Jin-Tong; Shen, Gui-Xiang; Long, Zhe; Sun, Shu-Guang
2017-07-01
In order to rectify the problems that the component reliability model exhibits deviation and that the evaluation result is low because failure propagation is overlooked in the traditional reliability evaluation of machine center components, a new reliability evaluation method based on cascading failure analysis and failure influenced degree assessment is proposed. A directed graph model of cascading failure among components is established according to cascading failure mechanism analysis and graph theory. The failure influenced degrees of the system components are assessed by the adjacency matrix and its transposition, combined with the PageRank algorithm. Based on the comprehensive failure probability function and the total probability formula, the inherent failure probability function is determined to realize the reliability evaluation of the system components. Finally, the method is applied to a machine center; the results show the following: 1) the reliability evaluation values of the proposed method are at least 2.5% higher than those of the traditional method; 2) the difference between the comprehensive and inherent reliability of a system component presents a positive correlation with its failure influenced degree, which provides a theoretical basis for reliability allocation of the machine center system.
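A rough Python sketch of the failure-influenced-degree idea follows: a cascading-failure digraph is encoded as an adjacency matrix and a plain power-iteration PageRank is run on the matrix and on its transpose. The adjacency matrix, damping factor, and interpretation of the two scores are illustrative assumptions, not data or code from the paper.

import numpy as np

# A[i, j] = 1 if a failure of component i can propagate to component j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Column-stochastic transition matrix; dangling nodes spread rank uniformly.
    M = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg[:, None], 1e-12),
                 1.0 / n).T
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = (1 - damping) / n + damping * M @ r
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

influence_received = pagerank(A)    # how strongly a component is affected by others
influence_exerted = pagerank(A.T)   # how strongly a component affects others
for i, (g, e) in enumerate(zip(influence_received, influence_exerted)):
    print(f"component {i}: affected-by score {g:.3f}, affects-others score {e:.3f}")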
Knowledge Management System Model for Learning Organisations
ERIC Educational Resources Information Center
Amin, Yousif; Monamad, Roshayu
2017-01-01
Based on the literature of knowledge management (KM), this paper reports on the progress of developing a new knowledge management system (KMS) model with a component architecture distributed over the widely recognised socio-technical system (STS) aspects to guide developers in selecting the most applicable components to support their KM…
2010-06-01
[Fragment of a flattened requirements/functions/components mapping table, apparently from a ship machinery system model. Recoverable entries: Provide Fuel Function - 3.6 Fuel Oil System Component (REQ 1.4 Fuel Efficiency Requirement); 1.1 Generate Mechanical En... Function - 1.1 Prime Mover Component; 3.3 Provide Lubrication Function - 3.7 Lube Oil System Component; 3.4 Provide Cooling Water Function - 3.3 Cooling System Component; 3.5 Provide Combustion... The preceding text notes that power predictions may use data such as the NSMB B-series or hydrodynamic (lifting line) predictions, with the power including still air drag and any margin.]
An architecture for the development of real-time fault diagnosis systems using model-based reasoning
NASA Technical Reports Server (NTRS)
Hall, Gardiner A.; Schuetzle, James; Lavallee, David; Gupta, Uday
1992-01-01
Presented here is an architecture for implementing real-time telemetry-based diagnostic systems using model-based reasoning. First, we describe Paragon, a knowledge acquisition tool for offline entry and validation of physical system models. Paragon provides domain experts with a structured editing capability to capture the physical component's structure, behavior, and causal relationships. We next describe the architecture of the run-time diagnostic system. The diagnostic system, written entirely in Ada, uses the behavioral model developed offline by Paragon to simulate expected component states as reflected in the telemetry stream. The diagnostic algorithm traces causal relationships contained within the model to isolate system faults. Since the diagnostic process relies exclusively on the behavioral model and is implemented without the use of heuristic rules, it can be used to isolate unpredicted faults in a wide variety of systems. Finally, we discuss the implementation of a prototype system constructed using this technique for diagnosing faults in a science instrument. The prototype demonstrates the use of model-based reasoning to develop maintainable systems with greater diagnostic capabilities at a lower cost.
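The general diagnostic pattern described above (simulate expected states, compare with telemetry, trace causal links upstream) can be sketched in a few lines of Python. The causal graph, expected values, and tracing rule below are hypothetical; this is not the Paragon tool or the Ada run-time system.

# Sketch: compare simulated expected values with telemetry and walk the causal
# graph upstream from a discrepant measurement to collect candidate faults.
# All names and numbers are invented for illustration.

# causal graph: component -> components whose outputs it depends on
depends_on = {
    "heater": [],
    "sensor_supply": [],
    "temperature_sensor": ["heater", "sensor_supply"],
    "telemetry_channel": ["temperature_sensor"],
}

expected = {"heater": 50.0, "sensor_supply": 5.0,
            "temperature_sensor": 50.0, "telemetry_channel": 50.0}
observed = {"telemetry_channel": 12.0}   # anomalous reading from the stream

def candidate_faults(symptom, tolerance=1.0):
    """Walk upstream from a discrepant measurement to collect suspects."""
    suspects, stack, seen = [], [symptom], set()
    while stack:
        comp = stack.pop()
        if comp in seen:
            continue
        seen.add(comp)
        meas = observed.get(comp)
        if meas is None or abs(meas - expected[comp]) > tolerance:
            suspects.append(comp)           # unobserved or discrepant: still suspect
            stack.extend(depends_on[comp])  # keep tracing the causal chain
    return suspects

print(candidate_faults("telemetry_channel"))
# -> ['telemetry_channel', 'temperature_sensor', 'sensor_supply', 'heater']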
A Linguistic Model in Component Oriented Programming
NASA Astrophysics Data System (ADS)
Crăciunean, Daniel Cristian; Crăciunean, Vasile
2016-12-01
It is a fact that well-organized component-oriented programming can bring a large increase in efficiency in the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. This paper introduces concepts such as the abstract aggregation scheme and the aggregation application. Basically, an aggregation application is an application that is obtained by combining corresponding components. In our model an aggregation application is a word in a language.
Hybrid Modeling for Testing Intelligent Software for Lunar-Mars Closed Life Support
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Nicholson, Leonard S. (Technical Monitor)
1999-01-01
Intelligent software is being developed for closed life support systems with biological components, for human exploration of the Moon and Mars. The intelligent software functions include planning/scheduling, reactive discrete control and sequencing, management of continuous control, and fault detection, diagnosis, and management of failures and errors. Four types of modeling information have been essential to system modeling and simulation to develop and test the software and to provide operational model-based what-if analyses: discrete component operational and failure modes; continuous dynamic performance within component modes, modeled qualitatively or quantitatively; configuration of flows and power among components in the system; and operations activities and scenarios. CONFIG, a multi-purpose discrete event simulation tool that integrates all four types of models for use throughout the engineering and operations life cycle, has been used to model components and systems involved in the production and transfer of oxygen and carbon dioxide in a plant-growth chamber and between that chamber and a habitation chamber with physicochemical systems for gas processing.
HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Brownston, Lee
2012-01-01
Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model-based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/subsystem models) and linking them through shared variables/parameters. The component model is expressed as operating modes of the component and conditions for transitions between these various modes. Faults are modeled as transitions whose conditions for transitions are unknown (and have to be inferred through the reasoning process). Finally, the behavior of the components is expressed as a set of variables/parameters and relations governing the interaction between the variables. The hybrid nature of the systems being modeled is captured by a combination of the above transitional model and behavioral model. Stochasticity is captured as probabilities associated with transitions (indicating the likelihood of that transition being taken), as well as noise on the sensed variables.
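A minimal Python sketch of the candidate-tracking loop described above follows: a set of mode-assignment candidates is checked against each new observation, and inconsistent candidates are replaced by single-fault successors. The component modes, the toy flow model, and the thresholds are assumptions for illustration, not HyDE's models or API.

# Minimal candidate-tracking loop in the spirit of the description above.
# Modes, dynamics, observations, and tolerances are all invented.

MODES = {
    "valve": {"open": 1.0, "stuck_closed": 0.0},   # expected flow multiplier
    "pump":  {"on": 2.0, "degraded": 1.0},         # expected pressure gain
}

def predict_flow(candidate, command_flow=3.0):
    return command_flow * MODES["valve"][candidate["valve"]] \
                        * MODES["pump"][candidate["pump"]]

def consistent(candidate, observed_flow, tol=0.5):
    return abs(predict_flow(candidate) - observed_flow) <= tol

def successors(candidate):
    """Single-fault successors: change one component to another of its modes."""
    out = []
    for comp, mode in candidate.items():
        for alt in MODES[comp]:
            if alt != mode:
                nxt = dict(candidate)
                nxt[comp] = alt
                out.append(nxt)
    return out

candidates = [{"valve": "open", "pump": "on"}]        # start from nominal
for observed_flow in [6.0, 6.1, 3.2, 3.0]:            # a fault appears at step 3
    surviving = [c for c in candidates if consistent(c, observed_flow)]
    if not surviving:
        expanded = [s for c in candidates for s in successors(c)]
        surviving = [c for c in expanded if consistent(c, observed_flow)]
    candidates = surviving
    print(observed_flow, "->", candidates)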
NASA Astrophysics Data System (ADS)
Chatterjee, A.; Anderson, J. L.; Moncrieff, M.; Collins, N.; Danabasoglu, G.; Hoar, T.; Karspeck, A. R.; Neale, R. B.; Raeder, K.; Tribbia, J. J.
2014-12-01
We present a quantitative evaluation of the simulated MJO in analyses produced with a coupled data assimilation (CDA) framework developed at the National Center for Atmospheric Research. This system is based on the Community Earth System Model (CESM; previously known as the Community Climate System Model, CCSM) interfaced to a community facility for ensemble data assimilation (Data Assimilation Research Testbed - DART). The system (multi-component CDA) assimilates data into each of the respective ocean/atmosphere/land model components during the assimilation step followed by an exchange of information between the model components during the forecast step. Note that this is an advancement over many existing prototypes of coupled data assimilation systems, which typically assimilate observations only in one of the model components (i.e., single-component CDA). The more realistic treatment of air-sea interactions and improvements to the model mean state in the multi-component CDA recover many aspects of MJO representation, from its space-time structure and propagation (see Figure 1) to the governing relationships between precipitation and sea surface temperature on intra-seasonal scales. Standard qualitative and process-based diagnostics identified by the MJO Task Force (currently under the auspices of the Working Group on Numerical Experimentation) have been used to detect the MJO signals across a suite of coupled model experiments involving both multi-component and single-component DA experiments as well as a free run of the coupled CESM model (i.e., CMIP5 style without data assimilation). Short predictability experiments during the boreal winter are used to demonstrate that the decay rates of the MJO convective anomalies are slower in the multi-component CDA system, which allows it to retain the MJO dynamics for a longer period. We anticipate that the knowledge gained through this study will enhance our understanding of the MJO feedback mechanisms across the air-sea interface, especially regarding ocean impacts on the MJO, as well as highlight the capability of coupled data assimilation systems for related tropical intraseasonal variability predictions.
NASA Technical Reports Server (NTRS)
Palusinski, O. A.; Allgyer, T. T.; Mosher, R. A.; Bier, M.; Saville, D. A.
1981-01-01
A mathematical model of isoelectric focusing at the steady state has been developed for an M-component system of electrochemically defined ampholytes. The model is formulated from fundamental principles describing the components' chemical equilibria, mass transfer resulting from diffusion and electromigration, and electroneutrality. The model consists of ordinary differential equations coupled with a system of algebraic equations. The model is implemented on a digital computer using FORTRAN-based simulation software. Computer simulation data are presented for several two-component systems showing the effects of varying the isoelectric points and dissociation constants of the constituents.
Object-Oriented Modeling of an Energy Harvesting System Based on Thermoelectric Generators
NASA Astrophysics Data System (ADS)
Nesarajah, Marco; Frey, Georg
This paper deals with the modeling of an energy harvesting system based on thermoelectric generators (TEG), and the validation of the model by means of a test bench. TEGs are capable to improve the overall energy efficiency of energy systems, e.g. combustion engines or heating systems, by using the remaining waste heat to generate electrical power. Previously, a component-oriented model of the TEG itself was developed in Modelica® language. With this model any TEG can be described and simulated given the material properties and the physical dimension. Now, this model was extended by the surrounding components to a complete model of a thermoelectric energy harvesting system. In addition to the TEG, the model contains the cooling system, the heat source, and the power electronics. To validate the simulation model, a test bench was built and installed on an oil-fired household heating system. The paper reports results of the measurements and discusses the validity of the developed simulation models. Furthermore, the efficiency of the proposed energy harvesting system is derived and possible improvements based on design variations tested in the simulation model are proposed.
A Model-Based Prognostics Approach Applied to Pneumatic Valves
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Goebel, Kai
2011-01-01
Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
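The following Python sketch illustrates generic particle-filter prognostics of the kind described above: a scalar damage state is tracked with an assumed fault-growth law and remaining useful life is predicted by propagating particles to a failure threshold. The growth law, noise levels, and threshold are illustrative assumptions, not the paper's pneumatic-valve physics model.

import numpy as np

# Generic particle-filter prognostics sketch; all numbers are invented.
rng = np.random.default_rng(0)
N = 1000                       # number of particles
growth_rate = 0.01             # nominal damage growth per cycle (assumed)
damage_threshold = 1.0         # failure when damage exceeds this

particles = np.abs(rng.normal(0.0, 0.02, N))   # initial damage hypotheses
weights = np.full(N, 1.0 / N)

def step_dynamics(d):
    return d + growth_rate * (1 + rng.normal(0, 0.2, d.shape))  # process noise

def likelihood(measured, predicted, sigma=0.05):
    return np.exp(-0.5 * ((measured - predicted) / sigma) ** 2)

# Synthetic "telemetry": 40 cycles of noisy damage measurements.
true_damage, measurements = 0.0, []
for _ in range(40):
    true_damage += growth_rate * (1 + rng.normal(0, 0.2))
    measurements.append(true_damage + rng.normal(0, 0.05))

for z in measurements:
    particles = step_dynamics(particles)                      # predict
    weights = weights * likelihood(z, particles) + 1e-300     # update
    weights /= weights.sum()
    idx = rng.choice(N, N, p=weights)                         # multinomial resampling
    particles, weights = particles[idx], np.full(N, 1.0 / N)

# Prognosis: propagate each particle forward until it crosses the threshold.
rul = []
for d in particles:
    k = 0
    while d < damage_threshold and k < 1000:
        d += growth_rate
        k += 1
    rul.append(k)
print("median predicted RUL (cycles):", int(np.median(rul)))

The spread of the per-particle RUL values is what provides the uncertainty management mentioned above: a wide spread signals low confidence in the end-of-life prediction.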
Documenting Models for Interoperability and Reusability ...
Many modeling frameworks compartmentalize science via individual models that link sets of small components to create larger modeling workflows. Developing integrated watershed models increasingly requires coupling multidisciplinary, independent models, as well as collaboration between scientific communities, since component-based modeling can integrate models from different disciplines. Integrated Environmental Modeling (IEM) systems focus on transferring information between components by capturing a conceptual site model; establishing local metadata standards for input/output of models and databases; managing data flow between models and throughout the system; facilitating quality control of data exchanges (e.g., checking units, unit conversions, transfers between software languages); warning and error handling; and coordinating sensitivity/uncertainty analyses. Although many computational software systems facilitate communication between, and execution of, components, there are no common approaches, protocols, or standards for turn-key linkages between software systems and models, especially if modifying components is not the intent. Using a standard ontology, this paper reviews how models can be described for discovery, understanding, evaluation, access, and implementation to facilitate interoperability and reusability. In the proceedings of the International Environmental Modelling and Software Society (iEMSs), 8th International Congress on Environmental Mod
Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B
2013-01-01
The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point-of-care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method that allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model, and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
A particle swarm model for estimating reliability and scheduling system maintenance
NASA Astrophysics Data System (ADS)
Puzis, Rami; Shirtz, Dov; Elovici, Yuval
2016-05-01
Modifying data and information system components may introduce new errors and deteriorate the reliability of the system. Reliability can be efficiently regained with reliability centred maintenance, which requires reliability estimation for maintenance scheduling. A variant of the particle swarm model is used to estimate reliability of systems implemented according to the model view controller paradigm. Simulations based on data collected from an online system of a large financial institute are used to compare three component-level maintenance policies. Results show that appropriately scheduled component-level maintenance greatly reduces the cost of upholding an acceptable level of reliability by reducing the need in system-wide maintenance.
Agent-based models of cellular systems.
Cannata, Nicola; Corradini, Flavio; Merelli, Emanuela; Tesei, Luca
2013-01-01
Software agents are particularly suitable for engineering models and simulations of cellular systems. In a very natural and intuitive manner, individual software components are delegated to reproduce "in silico" the behavior of individual components of living systems at a given level of resolution. Individuals' actions and interactions among individuals allow complex collective behavior to emerge. In this chapter we first introduce the readers to software agents and multi-agent systems, reviewing the evolution of agent-based modeling of biomolecular systems in the last decade. We then describe the main tools, platforms, and methodologies available for programming societies of agents, possibly also profiting from toolkits that do not require advanced programming skills.
A Study on Components of Internal Control-Based Administrative System in Secondary Schools
ERIC Educational Resources Information Center
Montri, Paitoon; Sirisuth, Chaiyuth; Lammana, Preeda
2015-01-01
The aim of this study was to study the components of the internal control-based administrative system in secondary schools, and make a Confirmatory Factor Analysis (CFA) to confirm the goodness of fit of empirical data and component model that resulted from the CFA. The study consisted of three steps: 1) studying of principles, ideas, and theories…
Sustainability-based decision making is a challenging process that requires balancing trade-offs among social, economic, and environmental components. System Dynamics (SD) models can be useful tools to inform sustainability-based decision making because they provide a holistic co...
Online model-based diagnosis to support autonomous operation of an advanced life support system.
Biswas, Gautam; Manders, Eric-Jan; Ramirez, John; Mahadevan, Nagabhusan; Abdelwahed, Sherif
2004-01-01
This article describes methods for online model-based diagnosis of subsystems of the advanced life support system (ALS). The diagnosis methodology is tailored to detect, isolate, and identify faults in components of the system quickly so that fault-adaptive control techniques can be applied to maintain system operation without interruption. We describe the components of our hybrid modeling scheme and the diagnosis methodology, and then demonstrate the effectiveness of this methodology by building a detailed model of the reverse osmosis (RO) system of the water recovery system (WRS) of the ALS. This model is validated with real data collected from an experimental testbed at NASA JSC. A number of diagnosis experiments run on simulated faulty data are presented and the results are discussed.
Prabhakar, P.; Sames, William J.; Dehoff, Ryan R.; ...
2015-03-28
Here, a computational modeling approach to simulate residual stress formation during the electron beam melting (EBM) process within the additive manufacturing (AM) technologies for Inconel 718 is presented in this paper. The EBM process has demonstrated a high potential to fabricate components with complex geometries, but the resulting components are influenced by the thermal cycles observed during the manufacturing process. When processing nickel based superalloys, very high temperatures (approx. 1000 °C) are observed in the powder bed, base plate, and build. These high temperatures, when combined with substrate adherence, can result in warping of the base plate and affect the final component by causing defects. It is important to have an understanding of the thermo-mechanical response of the entire system, that is, its mechanical behavior towards thermal loading occurring during the EBM process prior to manufacturing a component. Therefore, computational models to predict the response of the system during the EBM process will aid in eliminating the undesired process conditions, a priori, in order to fabricate the optimum component. Such a comprehensive computational modeling approach is demonstrated to analyze warping of the base plate, stress and plastic strain accumulation within the material, and thermal cycles in the system during different stages of the EBM process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, C.H.; Ready, A.B.; Rea, J.
1995-06-01
Versions of the computer program PROATES (PROcess Analysis for Thermal Energy Systems) have been used since 1979 to analyse plant performance improvement proposals relating to existing plant and also to evaluate new plant designs. Several plant modifications have been made to improve performance based on the model predictions, and the predicted performance has been realised in practice. The program was born out of a need to model the overall steady state performance of complex plant to enable proposals to change plant component items or operating strategy to be evaluated. To do this with confidence it is necessary to model the multiple thermodynamic interactions between the plant components. The modelling system is modular in concept, allowing the configuration of individual plant components to represent any particular power plant design. A library exists of physics-based modules which have been extensively validated and which provide representations of a wide range of boiler, turbine and CW system components. Changes to model data and construction are achieved via a user-friendly graphical model editing/analysis front-end with results being presented via the computer screen or hard copy. The paper describes briefly the modelling system but concentrates mainly on the application of the modelling system to assess design re-optimisation, firing with different fuels and the re-powering of an existing plant.
NASA Astrophysics Data System (ADS)
Kong, Changduk; Lim, Semyeong; Kim, Keunwoo
2013-03-01
Neural networks are widely used in engine fault diagnostic systems because of their good learning performance, but they suffer from low accuracy and the long learning time needed to build a learning database. This work inversely builds a base performance model of a turboprop engine for a high-altitude operation UAV from measured performance data, and proposes a fault diagnostic system that uses the base performance model together with artificial intelligence methods such as fuzzy logic and neural networks. Each real engine's performance model, named the base performance model because it can simulate a new engine's performance, is built inversely from its performance test data, so the condition monitoring of each engine can be carried out more precisely through comparison with measured performance data. The proposed diagnostic system first identifies the faulted components using fuzzy logic, and then quantifies the faults of the identified components using neural networks trained with a fault learning database obtained from the developed base performance model. FFBP (Feed Forward Back Propagation) is used to learn the measured performance data of the faulted components. For ease of use, the proposed diagnostic program is implemented with a GUI in MATLAB.
NASA Technical Reports Server (NTRS)
Bole, Brian; Goebel, Kai; Vachtsevanos, George
2012-01-01
This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.
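A small Python sketch of this kind of formulation follows, assuming an abstracted three-state component-health Markov chain whose transition probabilities depend on how far the control output is derated from nominal. The states, transition probabilities, and derating actions are invented for illustration and are not the paper's model.

import numpy as np

# Sketch: component-health Markov chain with action-dependent transitions.
states = ["healthy", "worn", "failed"]

def transition_matrix(derating):
    """More derating (0..1) slows the drift toward 'failed'."""
    wear = 0.10 * (1.0 - derating)      # probability of degrading this step
    return np.array([
        [1 - wear, wear,       0.0     ],
        [0.0,      1 - 2*wear, 2*wear  ],
        [0.0,      0.0,        1.0     ],
    ])

def expected_life(derating, start_state=0, horizon=500):
    """Approximate expected number of steps before absorption in 'failed'."""
    p = np.zeros(len(states)); p[start_state] = 1.0
    T = transition_matrix(derating)
    alive = 0.0
    for _ in range(horizon):
        alive += 1.0 - p[2]             # probability of not yet having failed
        p = p @ T
    return alive

for derating in (0.0, 0.25, 0.5):
    print(f"derating {derating:.2f}: expected life ~ {expected_life(derating):.1f} steps")

Comparing the expected-life curve against the performance lost to derating is the trade-off a prognostics-based control routine of the kind discussed above would have to negotiate.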
Markstrom, Steven L.; Niswonger, Richard G.; Regan, R. Steven; Prudic, David E.; Barlow, Paul M.
2008-01-01
The need to assess the effects of variability in climate, biota, geology, and human activities on water availability and flow requires the development of models that couple two or more components of the hydrologic cycle. An integrated hydrologic model called GSFLOW (Ground-water and Surface-water FLOW) was developed to simulate coupled ground-water and surface-water resources. The new model is based on the integration of the U.S. Geological Survey Precipitation-Runoff Modeling System (PRMS) and the U.S. Geological Survey Modular Ground-Water Flow Model (MODFLOW). Additional model components were developed, and existing components were modified, to facilitate integration of the models. Methods were developed to route flow among the PRMS Hydrologic Response Units (HRUs) and between the HRUs and the MODFLOW finite-difference cells. This report describes the organization, concepts, design, and mathematical formulation of all GSFLOW model components. An important aspect of the integrated model design is its ability to conserve water mass and to provide comprehensive water budgets for a location of interest. This report includes descriptions of how water budgets are calculated for the integrated model and for individual model components. GSFLOW provides a robust modeling system for simulating flow through the hydrologic cycle, while allowing for future enhancements to incorporate other simulation techniques.
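The mass-conservation bookkeeping mentioned above can be illustrated with a short Python sketch that checks component and system water-budget residuals after a coupled time step. The component names, fluxes, and storage changes are made-up values, not GSFLOW variables or output.

# Water-budget residual check for two coupled components; values are invented.
budgets = {
    # volumes per time step: external inflow, external outflow, transfer to the
    # other component (positive = water leaving this component), storage change
    "soil_zone":   {"inflow": 12.0, "outflow": 9.5, "transfer_out": 2.4, "dS": 0.1},
    "groundwater": {"inflow": 2.4,  "outflow": 2.2, "transfer_out": 0.0, "dS": 0.2},
}

def component_residual(b):
    return b["inflow"] - b["outflow"] - b["transfer_out"] - b["dS"]

for name, b in budgets.items():
    print(f"{name}: budget residual = {component_residual(b):+.3f}")

# System-level budget: internal transfers cancel, so only external fluxes and
# total storage change remain.  Here the groundwater 'inflow' is exactly the
# soil-zone transfer, so it is excluded from the external inflow.
external_in = budgets["soil_zone"]["inflow"]
external_out = sum(b["outflow"] for b in budgets.values())
dS_total = sum(b["dS"] for b in budgets.values())
print(f"system budget residual = {external_in - external_out - dS_total:+.3f}")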
Modeling Hydraulic Components for Automated FMEA of a Braking System
Struss, Peter; Fraracci, Alessandro
2014-12-23
This paper presents work on model-based automation of failure-modes-and-effects analysis (FMEA) applied to the hydraulic part of a vehicle braking system. We describe the FMEA task and the application problem and outline the foundations for automating the...
A GIS-based modeling system for petroleum waste management. Geographical information system.
Chen, Z; Huang, G H; Li, J B
2003-01-01
With an urgent need for effective management of petroleum-contaminated sites, a GIS-aided simulation (GISSIM) system is presented in this study. The GISSIM contains two components: an advanced 3D numerical model and a geographical information system (GIS), which are integrated within a general framework. The modeling component undertakes simulation for the fate of contaminants in subsurface unsaturated and saturated zones. The GIS component is used in three areas throughout the system development and implementation process: (i) managing spatial and non-spatial databases; (ii) linking inputs, model, and outputs; and (iii) providing an interface between the GISSIM and its users. The developed system is applied to a North American case study. Concentrations of benzene, toluene, and xylenes in groundwater under a petroleum-contaminated site are dynamically simulated. Reasonable outputs have been obtained and presented graphically. They provide quantitative and scientific bases for further assessment of site-contamination impacts and risks, as well as decisions on practical remediation actions.
PRMS-IV, the precipitation-runoff modeling system, version 4
Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.
2015-01-01
Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.
Update on Small Modular Reactors Dynamic System Modeling Tool: Web Application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hale, Richard Edward; Cetiner, Sacit M.; Fugate, David L.
Previous reports focused on the development of component and system models as well as end-to-end system models using Modelica and Dymola for two advanced reactor architectures: (1) Advanced Liquid Metal Reactor and (2) fluoride high-temperature reactor (FHR). The focus of this report is the release of the first beta version of the web-based application for model use and collaboration, as well as an update on the FHR model. The web-based application allows novice users to configure end-to-end system models from preconfigured choices to investigate the instrumentation and controls implications of these designs and allows for the collaborative development of individual component models that can be benchmarked against test systems for potential inclusion in the model library. A description of this application is provided along with examples of its use and a listing and discussion of all the models that currently exist in the library.
Active imaging system performance model for target acquisition
NASA Astrophysics Data System (ADS)
Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.
2007-04-01
The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effect of active illumination, atmospheric attenuation, and turbulence effects relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.
Information Retrieval Using UMLS-based Structured Queries
Fagan, Lawrence M.; Berrios, Daniel C.; Chan, Albert; Cucina, Russell; Datta, Anupam; Shah, Maulik; Surendran, Sujith
2001-01-01
During the last three years, we have developed and described components of ELBook, a semantically based information-retrieval system [1-4]. Using these components, domain experts can specify a query model, indexers can use the query model to index documents, and end-users can search these documents for instances of indexed queries.
Min, Yul Ha; Park, Hyeoun-Ae; Chung, Eunja; Lee, Hyunsook
2013-12-01
The purpose of this paper is to describe the components of a next-generation electronic nursing records system ensuring full semantic interoperability and integrating evidence into the nursing records system. A next-generation electronic nursing records system based on detailed clinical models and clinical practice guidelines was developed at Seoul National University Bundang Hospital in 2013. This system has two components, a terminology server and a nursing documentation system. The terminology server manages nursing narratives generated from entity-attribute-value triplets of detailed clinical models using a natural language generation system. The nursing documentation system provides nurses with a set of nursing narratives arranged around the recommendations extracted from clinical practice guidelines. An electronic nursing records system based on detailed clinical models and clinical practice guidelines was successfully implemented in a hospital in Korea. The next-generation electronic nursing records system can support nursing practice and nursing documentation, which in turn will improve data quality.
NASA Astrophysics Data System (ADS)
Avitabile, P.; O'Callahan, J.
2003-07-01
Inclusion of rotational effects is critical for the accuracy of the predicted system characteristics, in almost all system modelling studies. However, experimentally derived information for the description of one or more of the components for the system will generally not have any rotational effects included in the description of the component. The lack of rotational effects has long affected the results from any system model development whether using a modal-based approach or an impedance-based approach. Several new expansion processes are described herein for the development of FRFs needed for impedance-based system models. These techniques expand experimentally derived mode shapes, residual modes from the modal parameter estimation process and FRFs directly to allow for the inclusion of the necessary rotational dof. The FRFs involving translational to rotational dofs are developed as well as the rotational to rotational dof. Examples are provided to show the use of these techniques.
A stable systemic risk ranking in China's banking sector: Based on principal component analysis
NASA Astrophysics Data System (ADS)
Fang, Libing; Xiao, Binqing; Yu, Honghai; You, Qixing
2018-02-01
In this paper, we compare five popular systemic risk rankings, and apply principal component analysis (PCA) model to provide a stable systemic risk ranking for the Chinese banking sector. Our empirical results indicate that five methods suggest vastly different systemic risk rankings for the same bank, while the combined systemic risk measure based on PCA provides a reliable ranking. Furthermore, according to factor loadings of the first component, PCA combined ranking is mainly based on fundamentals instead of market price data. We clearly find that price-based rankings are not as practical a method as fundamentals-based ones. This PCA combined ranking directly shows systemic risk contributions of each bank for banking supervision purpose and reminds banks to prevent and cope with the financial crisis in advance.
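A hedged Python sketch of the PCA-combination step follows: several standardized risk measures are projected onto the first principal component and entities are ranked by their scores. The bank labels, the five measures, and the numbers are invented for illustration only, not the paper's data.

import numpy as np

# PCA-combined ranking sketch; rows = banks, columns = five risk measures.
banks = ["bank_A", "bank_B", "bank_C", "bank_D", "bank_E"]
X = np.array([
    [0.8, 1.2, 0.9, 1.1, 0.7],
    [0.4, 0.5, 0.6, 0.4, 0.5],
    [1.5, 1.4, 1.3, 1.6, 1.2],
    [0.9, 0.8, 1.0, 0.7, 0.9],
    [0.2, 0.3, 0.2, 0.3, 0.4],
])

Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize each measure
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
loadings = Vt[0]                              # weights of the first component
if loadings.sum() < 0:                        # fix the sign so higher = riskier
    loadings = -loadings
scores = Z @ loadings

for bank, score in sorted(zip(banks, scores), key=lambda t: -t[1]):
    print(f"{bank}: combined systemic risk score {score:+.2f}")
print("first-component loadings:", np.round(loadings, 2))

Inspecting the printed loadings is the step that corresponds to the observation above that the combined ranking is driven mainly by fundamentals-based measures.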
Design, fabrication and test of a trace contaminant control system
NASA Technical Reports Server (NTRS)
1975-01-01
A trace contaminant control system was designed, fabricated, and evaluated to determine the suitability of the system concept for future manned spacecraft. Two different models were considered. The load model initially required by the contract was based on the Space Station Prototype (SSP) general specifications SVSK HS4655, reflecting a change from the 9-man crew of the model developed in previous phases of this effort to a 6-man crew. Trade studies and a system preliminary design were accomplished based on this contaminant load, including computer analyses to define the optimum system configuration in terms of component arrangements, flow rates, and component sizing. At the completion of the preliminary design effort a revised contaminant load model was developed for the SSP. Additional analyses were then conducted to define the impact of this new contaminant load model on the system configuration. A full-scale foam-core mock-up with the appropriate SSP system interfaces was also fabricated.
A modular method for evaluating the performance of picture archiving and communication systems.
Sanders, W H; Kant, L A; Kudrimoti, A
1993-08-01
Modeling can be used to predict the performance of picture archiving and communication system (PACS) configurations under various load conditions at an early design stage. This is important because choices made early in the design of a system can have a significant impact on the performance of the resulting implementation. Because PACS consist of many types of components, it is important to do such evaluations in a modular manner, so that alternative configurations and designs can be easily investigated. Stochastic activity networks (SANs) and reduced base model construction methods can aid in doing this. SANs are a model type particularly suited to the evaluation of systems in which several activities may be in progress concurrently, and each activity may affect the others through the results of its completion. Together with SANs, reduced base model construction methods provide a means to build highly modular models, in which models of particular components can be easily reused. In this article, we investigate the use of SANs and reduced base model construction techniques in evaluating PACS. Construction and solution of the models is done using UltraSAN, a graphic-oriented software tool for model specification, analysis, and simulation. The method is illustrated via the evaluation of a realistically sized PACS for a typical United States hospital of 300 to 400 beds, and the derivation of system response times and component utilizations.
NASA Astrophysics Data System (ADS)
García, Constantino A.; Otero, Abraham; Félix, Paulo; Presedo, Jesús; Márquez, David G.
2018-07-01
In the past few decades, it has been recognized that 1/f fluctuations are ubiquitous in nature. The most widely used mathematical models to capture the long-term memory properties of 1/f fluctuations have been stochastic fractal models. However, physical systems do not usually consist of just stochastic fractal dynamics, but they often also show some degree of deterministic behavior. The present paper proposes a model based on fractal stochastic and deterministic components that can provide a valuable basis for the study of complex systems with long-term correlations. The fractal stochastic component is assumed to be a fractional Brownian motion process and the deterministic component is assumed to be a band-limited signal. We also provide a method that, under the assumptions of this model, is able to characterize the fractal stochastic component and to provide an estimate of the deterministic components present in a given time series. The method is based on a Bayesian wavelet shrinkage procedure that exploits the self-similar properties of the fractal processes in the wavelet domain. This method has been validated over simulated signals and over real signals with economic and biological origins. Real examples illustrate how our model may be useful for exploring the deterministic-stochastic duality of complex systems, and uncovering interesting patterns present in time series.
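The wavelet-domain separation idea can be sketched in Python as follows, using a simple universal soft threshold (via the PyWavelets package, assumed available) in place of the paper's Bayesian shrinkage procedure. The synthetic test signal and parameter choices are assumptions, and the separation obtained this way is only approximate.

import numpy as np
import pywt  # PyWavelets, assumed available

# Decompose, shrink detail coefficients, reconstruct a smooth estimate of the
# deterministic component; the residual stands in for the stochastic part.
rng = np.random.default_rng(1)
n = 1024
t = np.arange(n) / n
deterministic = np.sin(2 * np.pi * 4 * t)                 # band-limited signal
stochastic = np.cumsum(rng.normal(0, 0.05, n))            # crude 1/f-like walk
x = deterministic + stochastic

coeffs = pywt.wavedec(x, "db4", level=6)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise scale estimate
thresh = sigma * np.sqrt(2 * np.log(n))                   # universal threshold
shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]

estimate = pywt.waverec(shrunk, "db4")[:n]
residual = x - estimate                                    # rough stochastic estimate
print("RMSE of deterministic estimate:",
      round(float(np.sqrt(np.mean((estimate - deterministic) ** 2))), 3))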
Hierarchical graphs for rule-based modeling of biochemical systems
2011-01-01
Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for specifying rule-based models, such as the BioNetGen language (BNGL). Thus, the proposed use of hierarchical graphs should promote clarity and better understanding of rule-based models. PMID:21288338
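A minimal Python sketch of the attributed-graph representation follows, using networkx (assumed available): component and subcomponent nodes carry type, state, and parent attributes, and two molecules are compared up to relabeling with an attribute-aware isomorphism check. The protein, sites, and states are illustrative, and this is neither BNGL nor the HNauty algorithm.

import networkx as nx  # assumed available

def receptor(prefix, phosphorylated):
    """Toy molecule: a parent component with two subcomponents (sites)."""
    g = nx.Graph()
    g.add_node(f"{prefix}:TCR",  kind="component",    state=None,
               parent=None)
    g.add_node(f"{prefix}:bind", kind="subcomponent", state="free",
               parent=f"{prefix}:TCR")
    g.add_node(f"{prefix}:Y",    kind="subcomponent",
               state="p" if phosphorylated else "u", parent=f"{prefix}:TCR")
    # structural edges from parent to subcomponents (the hierarchy)
    g.add_edge(f"{prefix}:TCR", f"{prefix}:bind", edge_type="has_part")
    g.add_edge(f"{prefix}:TCR", f"{prefix}:Y",    edge_type="has_part")
    return g

def same_structure(g1, g2):
    # compare on kind/state only; node names and 'parent' labels may differ
    node_match = lambda a, b: (a["kind"], a["state"]) == (b["kind"], b["state"])
    edge_match = lambda a, b: a["edge_type"] == b["edge_type"]
    return nx.is_isomorphic(g1, g2, node_match=node_match, edge_match=edge_match)

a = receptor("r1", phosphorylated=False)
b = receptor("r2", phosphorylated=False)
c = receptor("r3", phosphorylated=True)
print(same_structure(a, b))  # True: same hierarchy and states, different labels
print(same_structure(a, c))  # False: the tyrosine site state differs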
Development of a Water Recovery System Resource Tracking Model
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Sargusingh, Miriam; Shull, Sarah; Moore, Michael
2015-01-01
A simulation model has been developed to track water resources in an exploration vehicle using Regenerative Life Support (RLS) systems. The Resource Tracking Model (RTM) integrates the functions of all the vehicle components that affect the processing and recovery of water during simulated missions. The approach used in developing the RTM enables its use as part of a complete vehicle simulation for real-time mission studies. Performance data for the components in the RTM is focused on water processing. The data provided to the model has been based on the most recent information available regarding the technology of the component. This paper will describe the process of defining the RLS system to be modeled, the way the modeling environment was selected, and how the model has been implemented. Results showing how the RLS components exchange water are provided in a set of test cases.
Development of a Water Recovery System Resource Tracking Model
NASA Technical Reports Server (NTRS)
Chambliss, Joe; Stambaugh, Imelda; Sargusingh, Miriam; Shull, Sarah; Moore, Michael
2014-01-01
A simulation model has been developed to track water resources in an exploration vehicle using regenerative life support (RLS) systems. The model integrates the functions of all the vehicle components that affect the processing and recovery of water during simulated missions. The approach used in developing the model makes the Resource Tracking Model (RTM) part of a complete vehicle simulation that can be used in real-time mission studies. Performance data for the variety of components in the RTM is focused on water processing and has been defined based on the most recent information available for the technology of each component. This paper will describe the process of defining the RLS system to be modeled, the way the modeling environment was selected, and how the model has been implemented. Results showing how the variety of RLS components exchange water are provided in a set of test cases.
Space Shuttle critical function audit
NASA Technical Reports Server (NTRS)
Sacks, Ivan J.; Dipol, John; Su, Paul
1990-01-01
A large fault-tolerance model of the main propulsion system of the US space shuttle has been developed. This model is being used to identify single components and pairs of components that will cause loss of shuttle critical functions. In addition, this model is the basis for risk quantification of the shuttle. The process used to develop and analyze the model is digraph matrix analysis (DMA). The DMA modeling and analysis process is accessed via a graphics-based computer user interface. This interface provides coupled display of the integrated system schematics, the digraph models, the component database, and the results of the fault tolerance and risk analyses.
NASA Astrophysics Data System (ADS)
Biermann, D.; Gausemeier, J.; Heim, H.-P.; Hess, S.; Petersen, M.; Ries, A.; Wagner, T.
2014-05-01
In this contribution a framework for the computer-aided planning and optimisation of functionally graded components is presented. The framework is divided into three modules - the "Component Description", the "Expert System" for the synthesis of several process chains and the "Modelling and Process Chain Optimisation". The Component Description module enhances a standard computer-aided design (CAD) model by a voxel-based representation of the graded properties. The Expert System synthesises process steps stored in the knowledge base to generate several alternative process chains. Each process chain is capable of producing components according to the enhanced CAD model and usually consists of a sequence of heating, cooling, and forming processes. The dependencies between the component and the applied manufacturing processes, as well as between the processes themselves, need to be considered. The Expert System utilises an ontology for that purpose. The ontology represents all dependencies in a structured way and connects the information of the knowledge base via relations. The third module performs the evaluation of the generated process chains. To accomplish this, the parameters of each process are optimised with respect to the component specification, whereby the result of the best parameterisation is used as the representative value. Finally, the process chain which is capable of manufacturing a functionally graded component in an optimal way with respect to the property distributions of the component description is presented by means of a dedicated specification technique.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated to support the early development, during system design, of software and procedures for the management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
NASA Astrophysics Data System (ADS)
Balaji, V.; Benson, Rusty; Wyman, Bruce; Held, Isaac
2016-10-01
Climate models represent a large variety of processes on a variety of timescales and space scales, a canonical example of multi-physics multi-scale modeling. Current hardware trends, such as Graphical Processing Units (GPUs) and Many Integrated Core (MIC) chips, are based on, at best, marginal increases in clock speed, coupled with vast increases in concurrency, particularly at the fine grain. Multi-physics codes face particular challenges in achieving fine-grained concurrency, as different physics and dynamics components have different computational profiles, and universal solutions are hard to come by. We propose here one approach for multi-physics codes. These codes are typically structured as components interacting via software frameworks. The component structure of a typical Earth system model consists of a hierarchical and recursive tree of components, each representing a different climate process or dynamical system. This recursive structure generally encompasses a modest level of concurrency at the highest level (e.g., atmosphere and ocean on different processor sets) with serial organization underneath. We propose to extend concurrency much further by running more and more lower- and higher-level components in parallel with each other. Each component can further be parallelized on the fine grain, potentially offering a major increase in the scalability of Earth system models. We present here first results from this approach, called coarse-grained component concurrency, or CCC. Within the Geophysical Fluid Dynamics Laboratory (GFDL) Flexible Modeling System (FMS), the atmospheric radiative transfer component has been configured to run in parallel with a composite component consisting of every other atmospheric component, including the atmospheric dynamics and all other atmospheric physics components. We will explore the algorithmic challenges involved in such an approach, and present results from such simulations. Plans to achieve even greater levels of coarse-grained concurrency by extending this approach within other components, such as the ocean, will be discussed.
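A minimal sketch of the coarse-grained idea in Python (illustrative only; the component functions, fields, and three-interval loop below are assumptions, not the FMS implementation): a stand-in radiative transfer component is stepped concurrently with a composite of the remaining atmospheric components, and the two exchange data only at the end of each coupling interval.

    # Illustrative sketch of coarse-grained component concurrency (CCC).
    # Component functions and the coupling interval are hypothetical, not FMS code.
    from concurrent.futures import ThreadPoolExecutor

    def step_radiation(state):
        # stand-in for the radiative transfer component
        return {"heating_rate": 0.1 * state["temperature"]}

    def step_rest_of_atmosphere(state):
        # stand-in for dynamics plus all other atmospheric physics
        return {"temperature": state["temperature"] + state["heating_rate"]}

    state = {"temperature": 288.0, "heating_rate": 0.0}
    with ThreadPoolExecutor(max_workers=2) as pool:
        for _ in range(3):  # three coupling intervals
            f_rad = pool.submit(step_radiation, dict(state))
            f_atm = pool.submit(step_rest_of_atmosphere, dict(state))
            state.update(f_rad.result())  # exchange only at interval boundaries
            state.update(f_atm.result())
    print(state)

Passing each task a copy of the start-of-interval state mirrors the lagged data exchange that allows the two components to run in parallel.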
A conceptual model for megaprogramming
NASA Technical Reports Server (NTRS)
Tracz, Will
1990-01-01
Megaprogramming is component-based software engineering and life-cycle management. Megaprogramming and its relationship to other research initiatives (common prototyping system/common prototyping language, domain-specific software architectures, and software understanding) are analyzed. The desirable attributes of megaprogramming software components are identified, and a software development model and resulting prototype megaprogramming system (library interconnection language extended by annotated Ada) are described.
NASA Astrophysics Data System (ADS)
Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward
2018-04-01
A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of a machine learning technique known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.
1986-01-01
An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code, which is based upon the SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code, which is based upon the SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and progress toward future hybrid and 3D versions are discussed in this paper.
A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems
NASA Astrophysics Data System (ADS)
Pawlicki, Ted
1988-03-01
Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object centered model from image centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state space vector where fields in the vector correspond to ordered component objects and relative, object based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object based and the image based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition by components. It also seems to support Marr's notions of hierarchical indexing (i.e., the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.
A Practical Application of Microcomputers to Control an Active Solar System.
ERIC Educational Resources Information Center
Goldman, David S.; Warren, William
1984-01-01
Describes the design and implementation of a microcomputer-based model active solar heating system. Includes discussions of: (1) the active solar components (solar collector, heat exchanger, pump, and fan necessary to provide forced air heating); (2) software components; and (3) hardware components (in the form of sensors and actuators). (JN)
A dashboard-based system for supporting diabetes care.
Dagliati, Arianna; Sacchi, Lucia; Tibollo, Valentina; Cogni, Giulia; Teliti, Marsida; Martinez-Millana, Antonio; Traver, Vicente; Segagni, Daniele; Posada, Jorge; Ottaviano, Manuel; Fico, Giuseppe; Arredondo, Maria Teresa; De Cata, Pasquale; Chiovato, Luca; Bellazzi, Riccardo
2018-05-01
The objective was to describe the development, as part of the European Union MOSAIC (Models and Simulation Techniques for Discovering Diabetes Influence Factors) project, of a dashboard-based system for the management of type 2 diabetes and to assess its impact on clinical practice. The MOSAIC dashboard system is based on predictive modeling, longitudinal data analytics, and the reuse and integration of data from hospitals and public health repositories. Data are merged into an i2b2 data warehouse, which feeds a set of advanced temporal analytic models, including temporal abstractions, care-flow mining, drug exposure pattern detection, and risk-prediction models for type 2 diabetes complications. The dashboard has 2 components, designed for (1) clinical decision support during follow-up consultations and (2) outcome assessment on populations of interest. To assess the impact of the clinical decision support component, a pre-post study was conducted considering visit duration, number of screening examinations, and lifestyle interventions. A pilot sample of 700 Italian patients was investigated. Judgments on the outcome assessment component were obtained via focus groups with clinicians and health care managers. The use of the decision support component in clinical activities produced a reduction in visit duration (P < .01) and an increase in the number of screening exams for complications (P < .01). We also observed a relevant, although not statistically significant, increase in the proportion of patients receiving lifestyle interventions (from 69% to 77%). Regarding the outcome assessment component, focus groups highlighted the system's capability of identifying and understanding the characteristics of patient subgroups treated at the center. Our study demonstrates that decision support tools based on the integration of multiple-source data and visual and predictive analytics do improve the management of a chronic disease such as type 2 diabetes by enacting a successful implementation of the learning health care system cycle.
Model-Based Engineering for Supply Chain Risk Management
2015-09-30
Software Engineering Institute (abstract fragment): Expanded use of commercial components has increased the complexity of system assurance and verification. Model-based engineering (MBE) offers a means to design, develop, analyze, and maintain a complex system architecture.
An Object Model for a Rocket Engine Numerical Simulator
NASA Technical Reports Server (NTRS)
Mitra, D.; Bhalla, P. N.; Pratap, V.; Reddy, P.
1998-01-01
Rocket Engine Numerical Simulator (RENS) is a package of software which numerically simulates the behavior of a rocket engine. Different parameters of the components of an engine are the input to these programs. Depending on these given parameters, the programs output the behaviors of those components. These behavioral values are then used to guide the design of, or to diagnose, a model of a rocket engine "built" by a composition of these programs simulating different components of the engine system. In order to use this software package effectively, one needs to have a flexible model of a rocket engine. These programs simulating different components then should be plugged into this modular representation. Our project is to develop an object-based model of such an engine system. We are following an iterative and incremental approach in developing the model, as is the standard practice in the area of object-oriented design and analysis of software. This process involves three stages: object modeling to represent the components and sub-components of a rocket engine, dynamic modeling to capture the temporal and behavioral aspects of the system, and functional modeling to represent the transformational aspects. This article reports on the first phase of our activity under a grant (RENS) from the NASA Lewis Research Center. We have utilized Rumbaugh's object modeling technique and UML for this purpose. The classes of a rocket engine propulsion system are developed and some of them are presented in this report. The next step, developing a dynamic model for RENS, is also touched upon here. In this paper we will also discuss the advantages of using object-based modeling for developing this type of integrated simulator over other tools such as an expert system shell or a procedural language, e.g., FORTRAN. Attempts have been made in the past to use such techniques.
Component Models for Semantic Web Languages
NASA Astrophysics Data System (ADS)
Henriksson, Jakob; Aßmann, Uwe
Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.
Engine Data Interpretation System (EDIS)
NASA Technical Reports Server (NTRS)
Cost, Thomas L.; Hofmann, Martin O.
1990-01-01
A prototype of an expert system was developed which applies qualitative or model-based reasoning to the task of post-test analysis and diagnosis of data resulting from a rocket engine firing. A combined component-based and process theory approach is adopted as the basis for system modeling. Such an approach provides a framework for explaining both normal and deviant system behavior in terms of individual component functionality. The diagnosis function is applied to digitized sensor time-histories generated during engine firings. The generic system is applicable to any liquid rocket engine but was adapted specifically in this work to the Space Shuttle Main Engine (SSME). The system is applied to idealized data resulting from turbomachinery malfunction in the SSME.
An industrial information integration approach to in-orbit spacecraft
NASA Astrophysics Data System (ADS)
Du, Xiaoning; Wang, Hong; Du, Yuhao; Xu, Li Da; Chaudhry, Sohail; Bi, Zhuming; Guo, Rong; Huang, Yongxuan; Li, Jisheng
2017-01-01
To operate an in-orbit spacecraft, the spacecraft status has to be monitored autonomously by collecting and analysing real-time data, and then detecting abnormalities and malfunctions of system components. To develop an information system for spacecraft state detection, we investigate the feasibility of using ontology-based artificial intelligence in the system development. We propose a new modelling technique based on a model of the semantic web, agents, scenarios, and ontologies. In the modelling, the subjects of astronautics fields are classified, corresponding agents and scenarios are defined, and they are connected by the semantic web to analyse data and detect failures. We introduce the modelling methodologies and the resulting framework of the status detection information system in this paper. We discuss system components as well as their interactions in detail. The system has been prototyped and tested to illustrate its feasibility and effectiveness. The proposed modelling technique is generic and can be extended and applied to the system development of other large-scale and complex information systems.
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high-fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
A System-Science Approach towards Model Construction for Curriculum Development.
ERIC Educational Resources Information Center
Chang, Ren-Jung; Yang, Hui-Chin
A new morphological model based on modern system science and engineering is constructed and proposed for curriculum research and development. A curriculum system is recognized as an engineering system that constitutes three components: clients, resources, and knowledge. Unlike the objective models that are purely rational and neatly sequential in…
An approach to the mathematical modelling of a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Averner, M. M.
1981-01-01
An approach to the design of a computer based model of a closed ecological life-support system suitable for use in extraterrestrial habitats is presented. The model is based on elemental mass balance and contains representations of the metabolic activities of biological components. The model can be used as a tool in evaluating preliminary designs for closed regenerative life support systems and as a method for predicting the behavior of such systems.
NEMS - National Energy Modeling System: An Overview
2009-01-01
The National Energy Modeling System: An Overview 2009 provides a summary description of NEMS and each of its components. NEMS is a computer-based, energy-economy modeling system of energy markets for the midterm period through 2030. NEMS is used to produce the Annual Energy Outlook.
Accurate and efficient modeling of the detector response in small animal multi-head PET systems.
Cecchetti, Matteo; Moehrs, Sascha; Belcari, Nicola; Del Guerra, Alberto
2013-10-07
In fully three-dimensional PET imaging, iterative image reconstruction techniques usually outperform analytical algorithms in terms of image quality provided that an appropriate system model is used. In this study we concentrate on the calculation of an accurate system model for the YAP-(S)PET II small animal scanner, with the aim to obtain fully resolution- and contrast-recovered images at low levels of image roughness. For this purpose we calculate the system model by decomposing it into a product of five matrices: (1) a detector response component obtained via Monte Carlo simulations, (2) a geometric component which describes the scanner geometry and which is calculated via a multi-ray method, (3) a detector normalization component derived from the acquisition of a planar source, (4) a photon attenuation component calculated from x-ray computed tomography data, and finally, (5) a positron range component is formally included. This system model factorization allows the optimization of each component in terms of computation time, storage requirements and accuracy. The main contribution of this work is a new, efficient way to calculate the detector response component for rotating, planar detectors, that consists of a GEANT4 based simulation of a subset of lines of flight (LOFs) for a single detector head whereas the missing LOFs are obtained by using intrinsic detector symmetries. Additionally, we introduce and analyze a probability threshold for matrix elements of the detector component to optimize the trade-off between the matrix size in terms of non-zero elements and the resulting quality of the reconstructed images. In order to evaluate our proposed system model we reconstructed various images of objects, acquired according to the NEMA NU 4-2008 standard, and we compared them to the images reconstructed with two other system models: a model that does not include any detector response component and a model that approximates analytically the depth of interaction as detector response component. The comparisons confirm previous research results, showing that the usage of an accurate system model with a realistic detector response leads to reconstructed images with better resolution and contrast recovery at low levels of image roughness.
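In compact notation (ours, not the paper's; the exact factor ordering depends on whether each effect is modeled in image space or projection space), the factored system model described above amounts to

    \[
      \mathbf{A} \;=\; \mathbf{A}_{\mathrm{det}}\,\mathbf{A}_{\mathrm{norm}}\,\mathbf{A}_{\mathrm{attn}}\,\mathbf{A}_{\mathrm{geom}}\,\mathbf{A}_{\mathrm{range}},
      \qquad
      \bar{\mathbf{y}} \;=\; \mathbf{A}\,\mathbf{x},
    \]

where x is the activity image, ȳ the expected projection data, and the five factors correspond to the detector response, normalization, attenuation, geometric, and positron range components listed in the abstract. Factoring the system matrix in this way is what allows each component to be computed, stored, and optimized separately.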
Frydel, Derek; Levin, Yan
2018-01-14
In the present work, we investigate a gas-liquid transition in a two-component Gaussian core model, where particles of the same species repel and those of different species attract. Unlike a similar transition in a one-component system with particles having attractive interactions at long separations and repulsive interactions at short separations, a transition in the two-component system is not driven solely by interactions but by a specific feature of the interactions, the correlations. This leads to extremely low critical temperature, as correlations are dominant in the strong-coupling limit. By carrying out various approximations based on standard liquid-state methods, we show that a gas-liquid transition of the two-component system poses a challenging theoretical problem.
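For reference, the pair interactions in a two-component Gaussian core model are typically of the form (our notation; the parameters below are generic, not the specific values used in the paper)

    \[
      u_{ij}(r) \;=\; \epsilon_{ij}\, e^{-r^{2}/\sigma_{ij}^{2}},
      \qquad \epsilon_{11},\,\epsilon_{22} > 0, \quad \epsilon_{12} < 0,
    \]

so that particles of the same species repel and particles of different species attract, as described above.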
Dynamics of Rotating Multi-component Turbomachinery Systems
NASA Technical Reports Server (NTRS)
Lawrence, Charles
1993-01-01
The ultimate objective of turbomachinery vibration analysis is to predict both the overall and the component-level dynamic response. Accomplishing this objective requires complete engine structural models, including multiple stages of bladed disk assemblies, flexible rotor shafts and bearings, and engine support structures and casings. In the present approach each component is analyzed as a separate structure and boundary information is exchanged at the inter-component connections. The advantage of this tactic is that even though readily available detailed component models are utilized, accurate and comprehensive system response information may be obtained. Sample problems, which include a fixed-base rotating blade and a blade on a flexible rotor, are presented.
A cloud-based approach for interoperable electronic health records (EHRs).
Bahga, Arshdeep; Madisetti, Vijay K
2013-09-01
We present a cloud-based approach for the design of interoperable electronic health record (EHR) systems. Cloud computing environments provide several benefits to all the stakeholders in the healthcare ecosystem (patients, providers, payers, etc.). Lack of data interoperability standards and solutions has been a major obstacle in the exchange of healthcare data between different stakeholders. We propose an EHR system - cloud health information systems technology architecture (CHISTAR) - that achieves semantic interoperability through the use of a generic design methodology which uses a reference model that defines a general-purpose set of data structures and an archetype model that defines the clinical data attributes. CHISTAR application components are designed using the cloud component model approach, which comprises loosely coupled components that communicate asynchronously. In this paper, we describe the high-level design of CHISTAR and the approaches for semantic interoperability, data integration, and security.
Agent-Based Modeling in Systems Pharmacology.
Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M
2015-11-01
Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling.
Bringing Back the Social Affordances of the Paper Memo to Aerospace Systems Engineering Work
NASA Technical Reports Server (NTRS)
Davidoff, Scott; Holloway, Alexandra
2014-01-01
Model-based systems engineering (MBSE) is a relatively new field that brings together the interdisciplinary study of technological components of a project (systems engineering) with a model-based ontology to express the hierarchical and behavioral relationships between the components (computational modeling). Despite the compelling promises of the benefits of MBSE, such as improved communication and productivity due to an underlying language and data model, we observed hesitation to its adoption at the NASA Jet Propulsion Laboratory. To investigate, we conducted a six-month ethnographic field investigation and needs validation with 19 systems engineers. This paper contributes our observations of a generational shift in one of JPL's core technologies. We report on a cultural misunderstanding between communities of practice that bolsters the existing technology drag. Given the high cost of failure, we springboard our observations into a design hypothesis - an intervention that blends the social affordances of the narrative-based work flow with the rich technological advantages of explicit data references and relationships of the model-based approach. We provide a design rationale, and the results of our evaluation.
Respiratory protective device design using control system techniques
NASA Technical Reports Server (NTRS)
Burgess, W. A.; Yankovich, D.
1972-01-01
The feasibility of a control system analysis approach to provide a design base for respiratory protective devices is considered. A system design approach requires that all functions and components of the system be mathematically identified in a model of the RPD. The mathematical notations describe the operation of the components as closely as possible. The individual component mathematical descriptions are then combined to describe the complete RPD. Finally, analysis of the mathematical notation by control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.
Adaptive model-based control systems and methods for controlling a gas turbine
NASA Technical Reports Server (NTRS)
Brunell, Brent Jerome (Inventor); Mathews, Jr., Harry Kirk (Inventor); Kumar, Aditya (Inventor)
2004-01-01
Adaptive model-based control systems and methods are described so that performance and/or operability of a gas turbine in an aircraft engine, power plant, marine propulsion, or industrial application can be optimized under normal, deteriorated, faulted, failed and/or damaged operation. First, a model of each relevant system or component is created, and the model is adapted to the engine. Then, if/when deterioration, a fault, a failure, or some kind of damage to an engine component or system is detected, that information is input to the model-based control as changes to the model, constraints, objective function, or other control parameters. With all of the information about the engine condition and state, and directives on the control goals in terms of an objective function and constraints, the control then solves an optimization so that the optimal control action can be determined and taken. This model and control may be updated in real-time to account for engine-to-engine variation, deterioration, damage, faults and/or failures using optimal corrective control action command(s).
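A minimal sketch of the optimization step described in the abstract (the surrogate engine model, cost weights, and limits below are invented placeholders, not the patented controller):

    # Toy model-based control step: choose a fuel-flow command u that tracks a
    # thrust demand subject to a turbine-temperature constraint.
    from scipy.optimize import minimize

    def engine_model(u):
        thrust = 50.0 * u               # crude surrogate engine model
        turbine_temp = 900.0 + 40.0 * u
        return thrust, turbine_temp

    def objective(u, thrust_demand=400.0):
        thrust, _ = engine_model(u[0])
        return (thrust - thrust_demand) ** 2

    constraints = [{"type": "ineq",
                    "fun": lambda u: 1300.0 - engine_model(u[0])[1]}]  # temp limit

    result = minimize(objective, x0=[5.0], bounds=[(0.0, 20.0)],
                      constraints=constraints)
    print("optimal fuel flow:", result.x[0])

Fault or deterioration information would enter such a loop as updates to the surrogate model, the constraint limits, or the objective weights, which is the adaptation the abstract describes.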
Proposal for Holistic Assessment of Urban System Resilience to Natural Disasters
NASA Astrophysics Data System (ADS)
Koren, David; Kilar, Vojko; Rus, Katarina
2017-10-01
An urban system is a complex mix of interdependent components and dynamic interactions between them that enable it to function effectively. Resilience of an urban system indicates the ability of the system to resist, absorb, accommodate to, and recover from the effects of a hazard in a timely and efficient manner. In the relevant literature, most studies consider individual components separately. On the other hand, the purpose of this paper is to assess the urban system as a whole, considering all relevant components and their interactions. The goal is a study of possibilities for holistic evaluation of urban system resilience to natural disasters. Findings from the preliminary study are presented: (i) the definition of the urban system and categorization of its components, (ii) a set of attributes of individual components with an impact on the disaster resilience of the entire system, and (iii) a review of different methods and approaches for resilience assessment. Based on the literature review and extensive preliminary studies, a new conceptual framework for urban resilience assessment is proposed. In the presented paper, a conceptual model of the urban system is created by abstraction of its components as nodes (buildings), patches - specific nodes with spatial properties (open space), links (infrastructures), and a base layer (community). In the suggested model, each component is defined by its own quantitative attributes, which have been identified as having an important impact on the urban system's resilience to natural disasters. The system is presented as a mathematical graph model. A natural disaster is considered an external factor that affects the existing system and leads to some system distortion. In further analyses, mathematical simulation of various natural disaster scenarios is going to be carried out, followed by comparison of the system functionality before and after the accident. Various properties of the system (accessibility, transition, complexity, etc.) are going to be analysed with graph theory. The final result is going to be an identification of critical points and system bottlenecks as a basis for further risk mitigation actions.
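A small illustration of the graph abstraction (our own toy example, not taken from the paper): buildings and open spaces become nodes, infrastructure links become edges, and a centrality measure is one possible way to flag critical points and bottlenecks.

    # Toy urban-system graph: nodes are buildings/open spaces, edges are
    # infrastructure links; betweenness centrality flags potential bottlenecks.
    import networkx as nx

    g = nx.Graph()
    g.add_edges_from([
        ("hospital", "square"), ("square", "housing_a"),
        ("square", "housing_b"), ("housing_b", "school"),
        ("school", "hospital"),
    ])

    centrality = nx.betweenness_centrality(g)
    bottleneck = max(centrality, key=centrality.get)
    print("most critical node:", bottleneck, centrality[bottleneck])

    # Simulate a disaster that removes the critical node, then compare connectivity.
    damaged = g.copy()
    damaged.remove_node(bottleneck)
    print("still connected after loss:", nx.is_connected(damaged))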
NASA Astrophysics Data System (ADS)
Anderson, Thomas S.
2016-05-01
The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, DoD network certified by USARMY as the Dragon Pulse Informa- tion Management System. This network available modeling environment for modeling models, where models are configured using domain relevant semantics and use network available systems, sensors, databases and services as loosely coupled component objects and are executable applications. Solutions are based on mission tactics, techniques, and procedures and subject matter input. Three recent ARMY use cases are discussed a) ISR SoS. b) Modeling and simulation behavior validation. c) Networked digital library with behaviors.
Impact of multilayered compression bandages on sub-bandage interface pressure: a model.
Al Khaburi, J; Nelson, E A; Hutchinson, J; Dehghani-Sanij, A A
2011-03-01
Multi-component medical compression bandages are widely used to treat venous leg ulcers. The sub-bandage interface pressures induced by individual components of multi-component compression bandage systems are not always simply additive. Current models of compression bandage performance do not take account of the increase in leg circumference as each bandage is applied, and this may account for the difference between predicted and actual pressures. The objective was to calculate the interface pressure when a multi-component compression bandage system is applied to a leg, using thick-wall cylinder theory to estimate the sub-bandage pressure over the leg. A mathematical model was developed based on thick-cylinder theory to include bandage thickness in the calculation of the interface pressure in multi-component compression systems. In multi-component compression systems, the interface pressure corresponds to the sum of the pressures applied by the individual bandage layers; however, the change in limb diameter caused by additional bandage layers should be considered in the calculation. Adding the interface pressures produced by single components without considering the bandage thickness will result in an overestimate of the overall interface pressure produced by multi-component compression systems. At the ankle (circumference 25 cm) this error can be 19.2% or more in the case of four-component bandaging systems. Bandage thickness should therefore be considered when calculating the pressure applied using multi-component compression systems.
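A hedged sketch of the correction the authors argue for (our notation; the paper's full thick-wall-cylinder derivation is more detailed): under a Laplace-type approximation, each component's contribution depends on the radius it is actually wrapped around, which grows with the thickness of the layers beneath it,

    \[
      P_{\mathrm{total}} \;\approx\; \sum_{i=1}^{N}
      \frac{T_i\, n_i}{\bigl(r + \sum_{j<i} t_j\bigr)\, w_i},
    \]

where, for component i, T_i is the bandage tension, n_i the number of layers, w_i the bandage width, t_j the thickness of each component already applied, and r the unbandaged limb radius. Summing the single-component pressures with the bare radius r in every denominator is what produces the overestimate described above.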
Parametric System Model for a Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.
2015-01-01
A Parametric System Model (PSM) was created in order to explore conceptual designs, the impact of component changes and power level on the performance of the Stirling Radioisotope Generator (SRG). Using the General Purpose Heat Source (GPHS approximately 250 Wth) modules as the thermal building block from which a SRG is conceptualized, trade studies are performed to understand the importance of individual component scaling on isotope usage. Mathematical relationships based on heat and power throughput, temperature, mass, and volume were developed for each of the required subsystems. The PSM uses these relationships to perform component- and system-level trades.
Parametric System Model for a Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Schmitz, Paul C.
2014-01-01
A Parametric System Model (PSM) was created in order to explore conceptual designs, the impact of component changes and power level on the performance of Stirling Radioisotope Generator (SRG). Using the General Purpose Heat Source (GPHS approximately 250 watt thermal) modules as the thermal building block around which a SRG is conceptualized, trade studies are performed to understand the importance of individual component scaling on isotope usage. Mathematical relationships based on heat and power throughput, temperature, mass and volume were developed for each of the required subsystems. The PSM uses these relationships to perform component and system level trades.
NASA Astrophysics Data System (ADS)
Baehr, J.; Fröhlich, K.; Botzet, M.; Domeisen, D. I. V.; Kornblueh, L.; Notz, D.; Piontek, R.; Pohlmann, H.; Tietsche, S.; Müller, W. A.
2015-05-01
A seasonal forecast system is presented, based on the global coupled climate model MPI-ESM as used for CMIP5 simulations. We describe the initialisation of the system and analyse its predictive skill for surface temperature. The presented system is initialised in the atmospheric, oceanic, and sea ice component of the model from reanalysis/observations with full field nudging in all three components. For the initialisation of the ensemble, bred vectors with a vertically varying norm are implemented in the ocean component to generate initial perturbations. In a set of ensemble hindcast simulations, starting each May and November between 1982 and 2010, we analyse the predictive skill. Bias-corrected ensemble forecasts for each start date reproduce the observed surface temperature anomalies at 2-4 months lead time, particularly in the tropics. Niño3.4 sea surface temperature anomalies show a small root-mean-square error and predictive skill up to 6 months. Away from the tropics, predictive skill is mostly limited to the ocean, and to regions which are strongly influenced by ENSO teleconnections. In summary, the presented seasonal prediction system based on a coupled climate model shows predictive skill for surface temperature at seasonal time scales comparable to other seasonal prediction systems using different underlying models and initialisation strategies. As the same model underlying our seasonal prediction system—with a different initialisation—is presently also used for decadal predictions, this is an important step towards seamless seasonal-to-decadal climate predictions.
A modeling framework for exposing risks in complex systems.
Sharit, J
2000-08-01
This article introduces and develops a modeling framework for exposing risks in the form of human errors and adverse consequences in high-risk systems. The modeling framework is based on two components: a two-dimensional theory of accidents in systems developed by Perrow in 1984, and the concept of multiple system perspectives. The theory of accidents differentiates systems on the basis of two sets of attributes. One set characterizes the degree to which systems are interactively complex; the other emphasizes the extent to which systems are tightly coupled. The concept of multiple perspectives provides alternative descriptions of the entire system that serve to enhance insight into system processes. The usefulness of these two model components derives from a modeling framework that cross-links them, enabling a variety of work contexts to be exposed and understood that would otherwise be very difficult or impossible to identify. The model components and the modeling framework are illustrated in the case of a large and comprehensive trauma care system. In addition to its general utility in the area of risk analysis, this methodology may be valuable in applications of current methods of human and system reliability analysis in complex and continually evolving high-risk systems.
A Distributed Approach to System-Level Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil
2012-01-01
Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
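An illustrative sketch of the distributed idea (not the authors' algorithm): each local prognoser estimates a remaining useful life (RUL) from its own submodel, and a simple aggregation - here the minimum - gives a system-level estimate. The submodels, degradation rates, and threshold are hypothetical.

    # Toy distributed prognostics: each submodel degrades linearly and reports
    # the time remaining until its health crosses a failure threshold.
    def local_rul(health, degradation_rate, threshold=0.2):
        return max(0.0, (health - threshold) / degradation_rate)

    submodels = {
        "wheel_motor": {"health": 0.90, "degradation_rate": 0.010},
        "battery":     {"health": 0.70, "degradation_rate": 0.005},
        "electronics": {"health": 0.95, "degradation_rate": 0.002},
    }

    local_estimates = {name: local_rul(**params)
                       for name, params in submodels.items()}
    system_rul = min(local_estimates.values())  # deliberately simplistic aggregation
    print(local_estimates)
    print("system-level RUL estimate:", system_rul)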
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Taylor; Guo, Yi; Veers, Paul
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, Yasin; Mathur, Jyotirmay; Bhandari, Mahabir S
2016-01-01
The paper describes a case study of an information technology office building with a radiant cooling system and a conventional variable air volume (VAV) system installed side by side so that performance can be compared. First, a 3D model of the building involving architecture, occupancy, and HVAC operation was developed in EnergyPlus, a simulation tool. Second, a different calibration methodology was applied to develop the base case for assessing the energy saving potential. This paper details the calibration of the whole building energy model down to the component level, including lighting, equipment, and HVAC components such as chillers, pumps, cooling towers, fans, etc. In addition, a new methodology for the systematic selection of influence parameters has been developed for the calibration of a simulation model that requires a long execution time. The error at the whole building level [measured as mean bias error (MBE)] is 0.2%, and the coefficient of variation of root mean square error (CvRMSE) is 3.2%. The total errors in HVAC at the hourly level are MBE = 8.7% and CvRMSE = 23.9%, which meet the criteria of ASHRAE Guideline 14 (2002) for hourly calibration. Suggestions are offered for generalizing the energy savings of the radiant cooling system based on the existing building system, and a base case model was developed from the calibrated model to quantify the energy saving potential of the radiant cooling system. It was found that a base case radiant cooling system integrated with a DOAS can save 28% energy compared with the conventional VAV system.
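For reference, the calibration statistics quoted above are commonly defined (e.g., in ASHRAE Guideline 14) roughly as follows, with m_i the measured values, s_i the simulated values, n the number of intervals, and \bar{m} the mean of the measurements:

    \[
      \mathrm{MBE} \;=\; \frac{\sum_{i=1}^{n}(m_i - s_i)}{\sum_{i=1}^{n} m_i}\times 100\%,
      \qquad
      \mathrm{CvRMSE} \;=\; \frac{\sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(m_i - s_i)^{2}}}{\bar{m}}\times 100\%.
    \]

Exact denominator conventions (e.g., n versus n minus the number of fitted parameters) vary between guidelines, so these expressions should be read as a sketch rather than the authors' precise formulas.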
Nambe Pueblo Water Budget and Forecasting model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brainard, James Robert
2009-10-01
This report documents The Nambe Pueblo Water Budget and Water Forecasting model. The model has been constructed using Powersim Studio (PS), a software package designed to investigate complex systems where flows and accumulations are central to the system. Here PS has been used as a platform for modeling various aspects of Nambe Pueblo's current and future water use. The model contains three major components, the Water Forecast Component, Irrigation Scheduling Component, and the Reservoir Model Component. In each of the components, the user can change variables to investigate the impacts of water management scenarios on future water use. The Water Forecast Component includes forecasting for industrial, commercial, and livestock use. Domestic demand is also forecasted based on user specified current population, population growth rates, and per capita water consumption. Irrigation efficiencies are quantified in the Irrigated Agriculture component using critical information concerning diversion rates, acreages, ditch dimensions and seepage rates. Results from this section are used in the Water Demand Forecast, Irrigation Scheduling, and the Reservoir Model components. The Reservoir Component contains two sections, (1) Storage and Inflow Accumulations by Categories and (2) Release, Diversion and Shortages. Results from both sections are derived from the calibrated Nambe Reservoir model where historic, pre-dam or above dam USGS stream flow data is fed into the model and releases are calculated.
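A minimal sketch of the domestic-demand piece of the Water Forecast Component described above (the compound-growth form and all numbers are illustrative assumptions, not values from the Powersim Studio model):

    # Domestic water demand forecast: population grows at a fixed annual rate and
    # each person uses a fixed volume per day. All parameters are hypothetical.
    def domestic_demand(pop0, growth_rate, per_capita_gpd, years):
        demands = []
        for t in range(years + 1):
            population = pop0 * (1 + growth_rate) ** t
            demands.append(population * per_capita_gpd * 365)  # gallons per year
        return demands

    forecast = domestic_demand(pop0=1800, growth_rate=0.015,
                               per_capita_gpd=100, years=5)
    for year, demand in enumerate(forecast):
        print(f"year {year}: {demand:,.0f} gallons")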
Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš
2016-01-01
Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found as most influential under inactivating perturbations, whereas the kinase and small lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified in influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis-related components and tumor-suppressor genes, suggesting that this combinatorial perturbation may lead to a better target for decreasing cell proliferation and inducing apoptosis. Finally, our approach shows a potential to identify and prioritize therapeutic targets through systemic perturbation analysis of large-scale computational models of signal transduction. Although some components of the presented computational results have been validated against independent gene expression data sets, more laboratory experiments are warranted to more comprehensively validate the presented results. PMID:26904540
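A toy sketch of the perturbation procedure (a three-node Boolean network invented for illustration, not the authors' large-scale signal transduction model): each node is clamped OFF (loss of function) or ON (gain of function), and the change in the average activity of the remaining nodes serves as a crude influence score.

    # Toy Boolean-network perturbation analysis; the network and update rules are
    # hypothetical. Influence = summed change in other nodes' average activity.
    import itertools

    RULES = {
        "A": lambda s: s["C"],                 # A activated by C
        "B": lambda s: s["A"] and not s["C"],  # B activated by A, inhibited by C
        "C": lambda s: s["A"] or s["B"],       # C activated by A or B
    }

    def simulate(clamp=None, steps=20):
        state = {n: 1 for n in RULES}
        history = []
        for _ in range(steps):
            if clamp:
                state.update(clamp)
            state = {n: int(rule(state)) for n, rule in RULES.items()}
            if clamp:
                state.update(clamp)
            history.append(dict(state))
        return {n: sum(h[n] for h in history) / steps for n in RULES}

    baseline = simulate()
    for node, value in itertools.product(RULES, (0, 1)):  # 0 = LoF, 1 = GoF
        perturbed = simulate(clamp={node: value})
        influence = sum(abs(perturbed[m] - baseline[m]) for m in RULES if m != node)
        print(f"{node} clamped to {value}: influence = {influence:.2f}")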
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable, high-performance space systems.
Current Standardization and Cooperative Efforts Related to Industrial Information Infrastructures.
1993-05-01
Data management systems: components used to store, manage, and retrieve data; data management includes knowledge bases and database management. Application development tools and methods: X/Open and POSIX APIs, Integrated Design Support System (IDS), knowledge-based systems (KBS). Design methods: IDEF1x, Yourdon, Jackson System Design (JSD), Structured Systems Development (SSD), Semantic Unification Meta-Model.
NASA Astrophysics Data System (ADS)
Elag, M.; Goodall, J. L.
2013-12-01
Hydrologic modeling often requires the re-use and integration of models from different disciplines to simulate complex environmental systems. Component-based modeling introduces a flexible approach for integrating physical-based processes across disciplinary boundaries. Several hydrologic-related modeling communities have adopted the component-based approach for simulating complex physical systems by integrating model components across disciplinary boundaries in a workflow. However, it is not always straightforward to create these interdisciplinary models due to the lack of sufficient knowledge about a hydrologic process. This shortcoming is a result of using informal methods for organizing and sharing information about a hydrologic process. A knowledge-based ontology provides such standards and is considered the ideal approach for overcoming this challenge. The aims of this research are to present the methodology used in analyzing the basic hydrologic domain in order to identify hydrologic processes, the ontology itself, and how the proposed ontology is integrated with the Water Resources Component (WRC) ontology. The proposed ontology standardizes the definitions of a hydrologic process, the relationships between hydrologic processes, and their associated scientific equations. The objective of the proposed Hydrologic Process (HP) Ontology is to advance the idea of creating a unified knowledge framework for components' metadata by introducing a domain-level ontology for hydrologic processes. The HP ontology is a step toward an explicit and robust domain knowledge framework that can be evolved through the contribution of domain users. Analysis of the hydrologic domain is accomplished using the Formal Concept Approach (FCA), in which the infiltration process, an important hydrologic process, is examined. Two infiltration methods, the Green-Ampt and Philip's methods, were used to demonstrate the implementation of information in the HP ontology. Furthermore, a SPARQL service is provided for semantic-based querying of the ontology.
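The SPARQL service mentioned above might be exercised along the following lines; the namespace, class and property names, and file name are hypothetical placeholders rather than the actual HP ontology identifiers.

    # Hypothetical query against a local copy of a hydrologic-process ontology.
    from rdflib import Graph

    g = Graph()
    g.parse("hydrologic_process.owl", format="xml")  # placeholder file name

    query = """
    PREFIX hp: <http://example.org/hydrologic-process#>
    SELECT ?process ?equation WHERE {
        ?process a hp:InfiltrationMethod ;
                 hp:hasGoverningEquation ?equation .
    }
    """
    for process, equation in g.query(query):
        print(process, equation)

A query of this shape would, for example, return the Green-Ampt and Philip infiltration methods together with their associated governing equations if the ontology encodes them in that way.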
A reduced order, test verified component mode synthesis approach for system modeling applications
NASA Astrophysics Data System (ADS)
Butland, Adam; Avitabile, Peter
2010-05-01
Component mode synthesis (CMS) is a very common approach used for the generation of large system models. In general, these modeling techniques can be separated into two categories: those utilizing a combination of constraint modes and fixed interface normal modes and those based on a combination of free interface normal modes and residual flexibility terms. The major limitation of the methods utilizing constraint modes and fixed interface normal modes is the inability to easily obtain the required information from testing; the result of this limitation is that constraint mode-based techniques are primarily used with numerical models. An alternate approach is proposed which utilizes frequency and shape information acquired from modal testing to update reduced order finite element models using exact analytical model improvement techniques. The connection degrees of freedom are then rigidly constrained in the test verified, reduced order model to provide the boundary conditions necessary for constraint modes and fixed interface normal modes. The CMS approach is then used with this test verified, reduced order model to generate the system model for further analysis. A laboratory structure is used to show the application of the technique with both numerical and simulated experimental components to describe the system and validate the proposed approach. Actual test data are then used in the proposed approach. Because typical measurement contaminants are always present in any test, the measured data are further processed to remove them before being used in the proposed approach. The final case, using improved data with the reduced order, test verified components, is shown to produce very acceptable results from the Craig-Bampton component mode synthesis approach. Use of the technique, along with its strengths and weaknesses, is discussed.
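For readers unfamiliar with the fixed-interface formulation, the sketch below outlines a standard Craig-Bampton reduction in Python/NumPy: constraint modes from a static condensation of the interior DOF onto the boundary DOF, plus a truncated set of fixed-interface normal modes. The matrices and index arrays are generic placeholders, not the laboratory structure used in the paper.

```python
# Sketch of a standard Craig-Bampton (fixed-interface) reduction.
# M, K are full mass/stiffness matrices; b_idx and i_idx are arrays of
# boundary and interior DOF indices. Generic illustration only.
import numpy as np
from scipy.linalg import eigh

def craig_bampton(M, K, b_idx, i_idx, n_modes):
    """Reduce (M, K) to boundary DOF plus n_modes fixed-interface modes."""
    Kii = K[np.ix_(i_idx, i_idx)]
    Kib = K[np.ix_(i_idx, b_idx)]
    Mii = M[np.ix_(i_idx, i_idx)]

    # Constraint modes: static interior response to unit boundary displacements.
    Psi = -np.linalg.solve(Kii, Kib)

    # Fixed-interface normal modes: eigenproblem with the boundary DOF clamped.
    _, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :n_modes]

    # Physical DOF (ordered boundary, then interior) = T @ [u_b; q].
    nb, ni = len(b_idx), len(i_idx)
    T = np.zeros((nb + ni, nb + n_modes))
    T[:nb, :nb] = np.eye(nb)
    T[nb:, :nb] = Psi
    T[nb:, nb:] = Phi

    order = np.concatenate([b_idx, i_idx])
    Mr = T.T @ M[np.ix_(order, order)] @ T
    Kr = T.T @ K[np.ix_(order, order)] @ T
    return Mr, Kr, T
```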
MDA-based EHR application security services.
Blobel, Bernd; Pharow, Peter
2004-01-01
Component-oriented, distributed, virtual EHR systems have to meet enhanced security and privacy requirements. In the context of advanced architectural paradigms such as component orientation, model-driven development, and knowledge-based systems, the standardised security services needed have to be specified and implemented in an integrated way following the same paradigm. This concerns the deployment of formal models, meta-languages, reference models such as the ISO RM-ODP, and development as well as implementation tools. The results of the international projects presented here proceed along these lines.
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
CRAX/Cassandra Reliability Analysis Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, D.
1999-02-10
Over the past few years Sandia National Laboratories has been moving toward an increased dependence on model- or physics-based analyses as a means to assess the impact of long-term storage on the nuclear weapons stockpile. These deterministic models have also been used to evaluate replacements for aging systems, often involving commercial off-the-shelf components (COTS). In addition, the models have been used to assess the performance of replacement components manufactured via unique, small-lot production runs. In either case, the limited amount of available test data dictates that the only logical course of action to characterize the reliability of these components is to specifically consider the uncertainties in material properties, operating environment, etc. within the physics-based (deterministic) model. This not only provides the ability to statistically characterize the expected performance of the component or system, but also provides direction regarding the benefits of additional testing on specific components within the system. An effort was therefore initiated to evaluate the capabilities of existing probabilistic methods and, if required, to develop new analysis methods to support the inclusion of uncertainty in the classical design tools used by analysts and design engineers at Sandia. The primary result of this effort is the CMX (Cassandra Exoskeleton) reliability analysis software.
Urbina, Angel; Mahadevan, Sankaran; Paez, Thomas L.
2012-03-01
Here, performance assessment of complex systems is ideally accomplished through system-level testing, but because such tests are expensive, they are seldom performed. On the other hand, for economic reasons, data from tests on individual components that are parts of complex systems are more readily available. The lack of system-level data leads to a need to build computational models of systems and use them for performance prediction in lieu of experiments. Because of their complexity, models are sometimes built in a hierarchical manner, starting with simple components, progressing to collections of components, and finally, to the full system. Quantification of uncertainty in the predicted response of a system model is required in order to establish confidence in the representation of actual system behavior. This paper proposes a framework for the complex, but very practical, problem of quantification of uncertainty in system-level model predictions. It is based on Bayes networks and uses the available data at multiple levels of complexity (i.e., components, subsystem, etc.). Because epistemic sources of uncertainty were shown to be secondary in this application, only aleatoric uncertainty is included in the present uncertainty quantification. An example showing application of the techniques to uncertainty quantification of measures of response of a real, complex aerospace system is included.
Linking a modified EPIC-based growth model (UPGM) with a component-based watershed model (AGES-W)
USDA-ARS?s Scientific Manuscript database
Agricultural models and decision support systems (DSS) for assessing water use and management are increasingly being applied to diverse geographic regions at different scales. This requires models that can simulate different crops, however, very few plant growth models are available that “easily” ...
2011-01-01
ABSTRACT Title of Document: MODELING OF WATER-BREATHING PROPULSION SYSTEMS UTILIZING THE ALUMINUM-SEAWATER REACTION AND SOLID...Hybrid Aluminum Combustor (HAC): a novel underwater power system based on the exothermic reaction of aluminum with seawater. The system is modeled ...using a NASA-developed framework called Numerical Propulsion System Simulation (NPSS) by assembling thermodynamic models developed for each component
A Tool for Model-Based Generation of Scenario-driven Electric Power Load Profiles
NASA Technical Reports Server (NTRS)
Rozek, Matthew L.; Donahue, Kenneth M.; Ingham, Michel D.; Kaderka, Justin D.
2015-01-01
Power consumption during all phases of spacecraft flight is of great interest to the aerospace community. As a result, significant analysis effort is exerted to understand the rates of electrical energy generation and consumption under many operational scenarios of the system. Previously, no standard tool existed for creating and maintaining a power equipment list (PEL) of spacecraft components that consume power, and no standard tool existed for generating power load profiles based on this PEL information during mission design phases. This paper presents the Scenario Power Load Analysis Tool (SPLAT) as a model-based systems engineering tool aiming to solve those problems. SPLAT is a plugin for MagicDraw (No Magic, Inc.) that aids in creating and maintaining a PEL, and also generates a power and temporal variable constraint set, in Maple language syntax, based on specified operational scenarios. The constraint set can be solved in Maple to show electric load profiles (i.e. power consumption from loads over time). SPLAT creates these load profiles from three modeled inputs: 1) a list of system components and their respective power modes, 2) a decomposition hierarchy of the system into these components, and 3) the specification of at least one scenario, which consists of temporal constraints on component power modes. In order to demonstrate how this information is represented in a system model, a notional example of a spacecraft planetary flyby is introduced. This example is also used to explain the overall functionality of SPLAT, and how this is used to generate electric power load profiles. Lastly, a cursory review of the usage of SPLAT on the Cold Atom Laboratory project is presented to show how the tool was used in an actual space hardware design application.
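The sketch below illustrates the underlying computation with plain Python/NumPy rather than SPLAT's MagicDraw/Maple toolchain: given component power modes and a scenario of timed mode assignments, it sums component power over a time grid to produce a load profile. Component names, mode values, and timing are invented for illustration.

```python
# Illustrative load-profile computation from a power equipment list and a scenario.
# Component names, power-mode values, and timing are hypothetical; SPLAT itself
# builds an equivalent constraint set in Maple syntax from a MagicDraw model.
import numpy as np

# Power modes (watts) per component.
pel = {
    "camera":   {"off": 0.0, "idle": 2.0, "imaging": 9.0},
    "radio":    {"off": 0.0, "standby": 1.5, "transmit": 25.0},
    "computer": {"idle": 8.0, "science": 15.0},
}

# Scenario: (component, mode, start_s, end_s) intervals.
scenario = [
    ("computer", "idle",     0, 600),
    ("computer", "science",  600, 1200),
    ("camera",   "imaging",  600, 900),
    ("radio",    "transmit", 900, 1200),
]

t = np.arange(0, 1200, 1.0)           # 1 s time grid
load = np.zeros_like(t)
for comp, mode, t0, t1 in scenario:
    load += np.where((t >= t0) & (t < t1), pel[comp][mode], 0.0)

# Energy in watt-hours, using the 1 s grid spacing.
print(f"peak load {load.max():.1f} W, energy {load.sum() / 3600:.2f} Wh")
```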
NASA Technical Reports Server (NTRS)
Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.
1990-01-01
An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has delivered engineering productivity gains during ECLSS design activities. A component verification program was performed to assure component modeling validity based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program which allows the operator to analyze on-screen data trends or obtain hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing have been incorporated to enhance the engineer's productivity during a modeling program.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huff, Kathryn D.
Component-level and system-level abstraction of detailed computational geologic repository models has resulted in four rapid computational models of hydrologic radionuclide transport at varying levels of detail. Those models are described, as is their implementation in Cyder, a software library of interchangeable radionuclide transport models appropriate for representing natural and engineered barrier components of generic geologic repository concepts. A proof-of-principle demonstration was also conducted in which these models were used to represent the natural and engineered barrier components of a repository concept in a reducing, homogeneous, generic geology. This base case demonstrates integration of the Cyder open-source library with the Cyclus computational fuel cycle systems analysis platform to facilitate calculation of repository performance metrics with respect to fuel cycle choices. (authors)
Evaluating model accuracy for model-based reasoning
NASA Technical Reports Server (NTRS)
Chien, Steve; Roden, Joseph
1992-01-01
Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.
Wu, Zujian; Pang, Wei; Coghill, George M
Computational modelling of biochemical systems based on top-down and bottom-up approaches has been well studied over the last decade. In this research, after illustrating how to generate atomic components by a set of given reactants and two user pre-defined component patterns, we propose an integrative top-down and bottom-up modelling approach for stepwise qualitative exploration of interactions among reactants in biochemical systems. Evolution strategy is applied to the top-down modelling approach to compose models, and simulated annealing is employed in the bottom-up modelling approach to explore potential interactions based on models constructed from the top-down modelling process. Both the top-down and bottom-up approaches support stepwise modular addition or subtraction for the model evolution. Experimental results indicate that our modelling approach is feasible to learn the relationships among biochemical reactants qualitatively. In addition, hidden reactants of the target biochemical system can be obtained by generating complex reactants in corresponding composed models. Moreover, qualitatively learned models with inferred reactants and alternative topologies can be used for further web-lab experimental investigations by biologists of interest, which may result in a better understanding of the system.
NASA Astrophysics Data System (ADS)
Liang, Likai; Bi, Yushen
Considering the distributed network management system's demands for high distribution, extensibility, and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and the N-tier application software framework design concept. We also give the concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.
Intelligent tutoring systems for systems engineering methodologies
NASA Technical Reports Server (NTRS)
Meyer, Richard J.; Toland, Joel; Decker, Louis
1991-01-01
The general goal is to provide the technology required to build systems that can provide intelligent tutoring in IDEF (Integrated Computer Aided Manufacturing Definition Method) modeling. The following subject areas are covered: intelligent tutoring systems for systems analysis methodologies; IDEF tutor architecture and components; developing cognitive skills for IDEF modeling; experimental software; and PC based prototype.
Improved Modeling in a Matlab-Based Navigation System
NASA Technical Reports Server (NTRS)
Deutschmann, Julie; Bar-Itzhack, Itzhack; Harman, Rick; Larimore, Wallace E.
1999-01-01
An innovative approach to autonomous navigation is available for low earth orbit satellites. The system is developed in Matlab and utilizes an Extended Kalman Filter (EKF) to estimate the attitude and trajectory based on spacecraft magnetometer and gyro data. Preliminary tests of the system with real spacecraft data from the Rossi X-Ray Timing Explorer Satellite (RXTE) indicate the existence of unmodeled errors in the magnetometer data. Incorporating into the EKF a statistical model that describes the colored component of the effective measurement of the magnetic field vector could improve the accuracy of the trajectory and attitude estimates and also improve the convergence time. This model is identified as a first order Markov process. With the addition of the model, the EKF attempts to identify the non-white components of the noise allowing for more accurate estimation of the original state vector, i.e. the orbital elements and the attitude. Working in Matlab allows for easy incorporation of new models into the EKF and the resulting navigation system is generic and can easily be applied to future missions resulting in an alternative in onboard or ground-based navigation.
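As a concrete illustration of the kind of model referred to, a first-order Markov (Gauss-Markov) process can be discretized and propagated alongside the filter state so that the EKF can estimate the colored component of the magnetometer error. The sketch below shows only the discretized noise model; the time constant and variance are placeholder values, not the RXTE parameters.

```python
# First-order Gauss-Markov model for a colored measurement-error component,
# as would be appended to an EKF state vector. Parameter values are placeholders.
import numpy as np

tau = 300.0          # correlation time constant [s] (assumed)
sigma = 50.0         # steady-state standard deviation [nT] (assumed)
dt = 1.0             # filter update interval [s]

phi = np.exp(-dt / tau)                          # state transition for the bias state
q = sigma**2 * (1.0 - np.exp(-2.0 * dt / tau))   # discrete process-noise variance

rng = np.random.default_rng(0)
b = 0.0
history = []
for _ in range(3600):
    b = phi * b + rng.normal(0.0, np.sqrt(q))    # propagate the colored error component
    history.append(b)

# In the augmented EKF, phi enters the state-transition matrix and q the
# process-noise covariance for this additional state.
```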
Hierarchical graphs for better annotations of rule-based models of biochemical systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Bin; Hlavacek, William
2009-01-01
In the graph-based formalism of the BioNetGen language (BNGL), graphs are used to represent molecules, with a colored vertex representing a component of a molecule, a vertex label representing the internal state of a component, and an edge representing a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions, with a rule that specifies addition (removal) of an edge representing a class of association (dissociation) reactions and with a rule that specifies a change of vertex label representing a class of reactions that affect the internal state of a molecular component. A set of rules comprises a mathematical/computational model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Here, for purposes of model annotation, we propose an extension of BNGL that involves the use of hierarchical graphs to represent (1) relationships among components and subcomponents of molecules and (2) relationships among classes of reactions defined by rules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR)/CD3 complex. Likewise, we illustrate how hierarchical graphs can be used to document the similarity of two related rules for kinase-catalyzed phosphorylation of a protein substrate. We also demonstrate how a hierarchical graph representing a protein can be encoded in an XML-based format.
Multiple Damage Progression Paths in Model-Based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Goebel, Kai Frank
2011-01-01
Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in their own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active
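A minimal bootstrap particle filter for joint state-parameter estimation is sketched below on a toy linear wear model; the pump physics, damage modes, and variance-control mechanism of the paper are not reproduced, and the random-walk parameter evolution is one common heuristic rather than necessarily the authors' choice.

```python
# Toy joint state-parameter estimation with a bootstrap particle filter.
# Damage model: w_{k+1} = w_k + theta*dt + process noise; measurement y_k = w_k + noise.
# The wear-rate parameter theta is estimated jointly with the damage state w.
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, n_particles = 1.0, 100, 500
true_theta = 0.05

# Simulate "measured" damage growth (for illustration only).
w_true = np.cumsum(np.full(n_steps, true_theta * dt)) + rng.normal(0, 0.02, n_steps)
y = w_true + rng.normal(0, 0.05, n_steps)

# Particles carry both a damage state w and a wear-rate parameter theta.
w = np.zeros(n_particles)
theta = rng.uniform(0.0, 0.2, n_particles)

for k in range(n_steps):
    # Propagate: state dynamics plus a small random walk on the parameter.
    theta += rng.normal(0, 1e-3, n_particles)
    w += theta * dt + rng.normal(0, 0.02, n_particles)

    # Weight by measurement likelihood, then resample.
    weights = np.exp(-0.5 * ((y[k] - w) / 0.05) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)
    w, theta = w[idx], theta[idx]

print(f"estimated wear rate {theta.mean():.3f} (true {true_theta})")
```

Remaining useful life would then be predicted by propagating the resampled particles forward, without further measurement updates, until each reaches a damage threshold.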
Remote information service access system based on a client-server-service model
Konrad, Allan M.
1996-01-01
A local host computing system and a remote host computing system are connected by a network and provide service functionalities: a human interface service functionality, a starter service functionality, and a desired utility service functionality. A Client-Server-Service (CSS) model is imposed on each service functionality. In one embodiment, this results in nine logical components and three physical components (a local host, a remote host, and an intervening network), where two of the logical components are integrated into one Remote Object Client component. That Remote Object Client component and the other seven logical components are deployed among the local host and remote host in a manner which eases compatibility and upgrade problems and provides the illusion to a user that a desired utility service supported on a remote host resides locally on the user's local host, thereby providing ease of use and minimal software maintenance for users of that remote service.
TRANSFORM - TRANsient Simulation Framework of Reconfigurable Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenwood, Michael S; Cetiner, Mustafa S; Fugate, David L
Existing development tools for early-stage design and scoping of energy systems are often time consuming to use, proprietary, and do not contain the necessary functions to model complete systems (i.e., controls, primary, and secondary systems) in a common platform. The TRANSFORM tool, based on the Modelica programming language, (1) provides a standardized, common simulation environment for early design of energy systems (i.e., power plants), (2) provides a library of baseline component modules to be assembled into full plant models using available geometry, design, and thermal-hydraulic data, (3) defines modeling conventions for interconnecting component models, and (4) establishes user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.
NASA Astrophysics Data System (ADS)
Wallace, Jon Michael
2003-10-01
Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
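As an indicative sketch of multi-response screening in the spirit of ACCA, scikit-learn's CCA can correlate a block of input parameters with a block of component responses, after which inputs can be rank-ordered by the magnitude of their weights on the first canonical variate. The data below are synthetic, and this generic CCA stands in for, rather than reproduces, the approximate formulation developed in the study.

```python
# Generic canonical-correlation screening of input parameters against a vector of
# responses, illustrating multi-response parameter ranking on synthetic data.
# Not the gas turbine airfoil analyses from the study.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 5))                      # 5 input parameters
Y = np.column_stack([                            # 3 correlated responses
    2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=n),
    1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(scale=0.1, size=n),
    0.8 * X[:, 2] + rng.normal(scale=0.1, size=n),
])

cca = CCA(n_components=1)
cca.fit(X, Y)
Xc, Yc = cca.transform(X, Y)
rho = np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]

ranking = np.argsort(-np.abs(cca.x_weights_[:, 0]))
print(f"first canonical correlation: {rho:.3f}")
print("parameters ranked by |weight| on first canonical variate:", ranking)
```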
Framework for a clinical information system.
Van De Velde, R; Lansiers, R; Antonissen, G
2002-01-01
The design and implementation of a Clinical Information System architecture is presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker Architecture (CORBA) and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and the reuse of both data and business logic. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.
DOT National Transportation Integrated Search
2001-09-01
The goal of this project is to comprehensively model the activity-travel patterns of workers as well as non-workers in a household. The activity-travel system will take as input various land use, socio-demographic, activity system, and transportation...
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components under the Object Modeling System Version 3 (OMS3). The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the ad...
Toward the Modularization of Decision Support Systems
NASA Astrophysics Data System (ADS)
Raskin, R. G.
2009-12-01
Decision support systems are typically developed entirely from scratch without the use of modular components. This "stovepiped" approach is inefficient and costly because it prevents a developer from leveraging the data, models, tools, and services of other developers. Even when a decision support component is made available, it is difficult to know what problem it solves, how it relates to other components, or even that the component exists. The Spatial Decision Support (SDS) Consortium was formed in 2008 to organize the body of knowledge in SDS within a common portal. The portal identifies the canonical steps in the decision process and enables decision support components to be registered, categorized, and searched. This presentation describes how a decision support system can be assembled from modular models, data, tools, and services, based on the needs of the Earth science application.
A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines
NASA Astrophysics Data System (ADS)
Wang, Bin; Zhao, Haocen; Ye, Zhifeng
2017-08-01
Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of the key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretical model, a feature-based co-modeling methodology is suggested to satisfy the design requirements and accommodate the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into a synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has high accuracy and is clearly superior to a single model. Together with its compatible interface to the engine mathematical model, the feature-based co-modeling methodology is shown to be an effective technical measure in the development process of the device.
Formulation of an experimental substructure model using a Craig-Bampton based transmission simulator
NASA Astrophysics Data System (ADS)
Kammer, Daniel C.; Allen, Mathew S.; Mayes, Randy L.
2015-12-01
Experimental-analytical substructuring is attractive when there is motivation to replace one or more system subcomponents with an experimental model. This experimentally derived substructure can then be coupled to finite element models of the rest of the structure to predict the system response. The transmission simulator method couples a fixture to the component of interest during a vibration test in order to improve the experimental model for the component. The transmission simulator is then subtracted from the tested system to produce the experimental component. The method reduces ill-conditioning by imposing a least squares fit of constraints between substructure modal coordinates to connect substructures, instead of directly connecting physical interface degrees of freedom. This paper presents an alternative means of deriving the experimental substructure model, in which a Craig-Bampton representation of the transmission simulator is created and subtracted from the experimental measurements. The corresponding modal basis of the transmission simulator is described by the fixed-interface modes, rather than free modes that were used in the original approach. These modes do a better job of representing the shape of the transmission simulator as it responds within the experimental system, leading to more accurate results using fewer modes. The new approach is demonstrated using a simple finite element model based example with a redundant interface.
CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2015-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) are invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.
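To convey the process-composition idea without relying on CLIMLAB's actual class names (which are not quoted here), the toy sketch below couples two hand-written "process" components, absorbed shortwave and a linearized outgoing longwave parameterization, in a single explicit time loop for a zero-dimensional energy balance. All names and parameter values are invented for illustration.

```python
# Toy illustration of composing simple process models into a climate model.
# This does NOT use CLIMLAB's API; class and parameter names are invented,
# and the parameter values are rough textbook-style numbers.
C = 4.0e8          # heat capacity of a ~100 m ocean mixed layer [J m-2 K-1]
dt = 86400.0       # one-day time step [s]

class AbsorbedShortwave:
    def tendency(self, T):
        return 342.0 * (1.0 - 0.3)            # W m-2: insolation * (1 - albedo)

class LinearizedOLR:
    def tendency(self, T):
        return -(210.0 + 2.0 * (T - 273.15))  # W m-2: A + B*(T - 273.15)

processes = [AbsorbedShortwave(), LinearizedOLR()]
T = 288.0
for _ in range(365 * 30):                     # integrate ~30 years
    net_flux = sum(p.tendency(T) for p in processes)
    T += dt * net_flux / C

print(f"near-equilibrium temperature: {T - 273.15:.1f} C")
```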
Physics-of-Failure Approach to Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.
2017-01-01
As electric vehicles progressively become part of daily operations, a critical challenge lies in accurately predicting the behavior of the electrical components present in the system. In the case of electric vehicles, computing the remaining battery charge is safety-critical. In order to tackle the prediction problem, it is essential to be aware of the current state and health of the system, especially since condition-based predictions are necessary. To predict the future state of the system, knowledge of the current and future operations of the vehicle is also required. In this presentation, our approach to developing a system-level health monitoring safety indicator for different electronic components is presented, which runs estimation and prediction algorithms to determine state of charge and estimate the remaining useful life of the respective components. Given models of the current and future system behavior, the general approach of model-based prognostics can be employed as a solution to the prediction problem and, further, for decision making.
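A very reduced illustration of the condition-based prediction idea is sketched below: Coulomb counting gives the current state of charge, and extrapolating the discharge at an assumed future load gives a remaining-time estimate to a cutoff. Capacity, currents, and threshold are invented numbers, and this is far simpler than the model-based prognostics architecture described in the presentation.

```python
# Reduced sketch: state of charge by Coulomb counting, then a remaining-time
# prediction assuming a constant future load. All numbers are illustrative.
import numpy as np

capacity_ah = 60.0          # assumed pack capacity [Ah]
soc = 0.9                   # initial state of charge
dt_h = 1.0 / 3600.0         # 1 s step expressed in hours

# Measured current draw so far (A), e.g. from a short drive segment.
measured_current = np.full(1800, 45.0)
for i_amp in measured_current:
    soc -= i_amp * dt_h / capacity_ah     # Coulomb-counting update

# Prognostic step: assume a constant future load and predict time to cutoff SOC.
future_load_amp = 50.0
soc_cutoff = 0.2
remaining_h = (soc - soc_cutoff) * capacity_ah / future_load_amp
print(f"SOC now: {soc:.2f}, predicted time to {soc_cutoff:.0%} SOC: {remaining_h:.2f} h")
```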
ERIC Educational Resources Information Center
Su, Chung-Ho; Cheng, Ching-Hsue
2016-01-01
This study aims to explore the factors in a patient's rehabilitation achievement after a total knee replacement (TKR) patient exercises, using a PCA-ANFIS emotion model-based game rehabilitation system, which combines virtual reality (VR) and motion capture technology. The researchers combine a principal component analysis (PCA) and an adaptive…
Improved estimation of random vibration loads in launch vehicles
NASA Technical Reports Server (NTRS)
Mehta, R.; Erwin, E.; Suryanarayan, S.; Krishna, Murali M. R.
1993-01-01
Random vibration induced load is an important component of the total design load environment for payload and launch vehicle components and their support structures. The current approach to random vibration load estimation is based, particularly at the preliminary design stage, on the use of Miles' equation which assumes a single degree-of-freedom (DOF) system and white noise excitation. This paper examines the implications of the use of multi-DOF system models and response calculation based on numerical integration using the actual excitation spectra for random vibration load estimation. The analytical study presented considers a two-DOF system and brings out the effects of modal mass, damping and frequency ratios on the random vibration load factor. The results indicate that load estimates based on the Miles' equation can be significantly different from the more accurate estimates based on multi-DOF models.
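The contrast drawn in the abstract can be made concrete with a short numerical check: Miles' equation gives the RMS response of a single-DOF system to an assumed flat acceleration spectral density, while integrating the transmissibility-weighted input PSD over frequency handles arbitrary spectra and multi-DOF extensions. The frequency, damping, and PSD level below are arbitrary illustration values.

```python
# Compare Miles' equation with direct frequency-domain integration for a SDOF
# system under base excitation. Numbers are arbitrary illustration values.
import numpy as np

fn = 100.0            # natural frequency [Hz]
zeta = 0.05           # damping ratio -> Q = 1/(2*zeta) = 10
Q = 1.0 / (2.0 * zeta)
W0 = 0.04             # flat input acceleration PSD [g^2/Hz]

# Miles' equation (white-noise, single-DOF assumption).
grms_miles = np.sqrt(0.5 * np.pi * fn * Q * W0)

# Direct integration of |H(f)|^2 * W(f) over a finite band.
f = np.linspace(1.0, 2000.0, 200_000)
r = f / fn
H2 = (1.0 + (2.0 * zeta * r) ** 2) / ((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)
df = f[1] - f[0]
grms_integrated = np.sqrt(np.sum(H2 * W0) * df)

print(f"Miles: {grms_miles:.2f} g rms, integrated: {grms_integrated:.2f} g rms")
```

With a non-flat input PSD, only the integration approach remains valid, which is the point the abstract makes about multi-DOF, actual-spectrum response calculations.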
Elementary Students' Mental Models of the Solar System
ERIC Educational Resources Information Center
Calderon-Canales, Elena; Flores-Camacho, Fernando; Gallegos-Cazares, Leticia
2013-01-01
This research project aimed to identify and analyze Mexican primary school students' ideas about the components of the solar system. In particular, this study focused on conceptions of the solar system and representations of the dynamics of the solar system based on the functional and structural models that students make in school. Using a…
Radiology information system: a workflow-based approach.
Zhang, Jinyan; Lu, Xudong; Nie, Hongchao; Huang, Zhengxing; van der Aalst, W M P
2009-09-01
Introducing workflow management technology in healthcare appears promising for dealing with the problem that current healthcare information systems cannot provide sufficient support for process management, although several challenges still exist. The purpose of this paper is to study the method of developing a workflow-based information system in a radiology department as a use case. First, a workflow model of a typical radiology process was established. Second, based on the model, the system could be designed and implemented as a group of loosely coupled components. Each component corresponded to one task in the process and could be assembled by the workflow management system. The legacy systems could be treated as special components, which also corresponded to tasks and were integrated by converting non-workflow-aware interfaces to standard ones. Finally, a workflow dashboard was designed and implemented to provide an integral view of radiology processes. The workflow-based Radiology Information System was deployed in the radiology department of Zhejiang Chinese Medicine Hospital in China. The results showed that it could be adjusted flexibly in response to the needs of changing processes and enhanced process management in the department. It can also provide a more workflow-aware integration method compared with other methods such as IHE-based ones. The workflow-based approach is a new method of developing radiology information systems with more flexibility, more process-management functionality, and more workflow-aware integration. The work in this paper is an initial endeavor toward introducing workflow management technology in healthcare.
NASA Astrophysics Data System (ADS)
Göll, S.; Samsun, R. C.; Peters, R.
Fuel-cell-based auxiliary power units can help to reduce fuel consumption and emissions in transportation. For this application, the combination of solid oxide fuel cells (SOFCs) with upstream fuel processing by autothermal reforming (ATR) is seen as a highly favorable configuration. Notwithstanding the necessity to improve each single component, an optimized architecture of the fuel cell system as a whole must be achieved. To enable model-based analyses, a system-level approach is proposed in which the fuel cell system is modeled as a multi-stage thermo-chemical process using the "flowsheeting" environment PRO/II™. Therein, the SOFC stack and the ATR are characterized entirely by corresponding thermodynamic processes together with global performance parameters. The developed model is then used to achieve an optimal system layout by comparing different system architectures. A system with anode and cathode off-gas recycling was identified to have the highest electric system efficiency. Taking this system as a basis, the potential for further performance enhancement was evaluated by varying four parameters characterizing different system components. Using methods from the design and analysis of experiments, the effects of these parameters and of their interactions were quantified, leading to an overall optimized system with encouraging performance data.
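To indicate how parameter and interaction effects can be quantified with design-of-experiments methods, the sketch below evaluates a two-level full factorial in four generic coded parameters and computes main effects and one two-factor interaction from a placeholder response function; the actual SOFC/ATR system model, parameter names, and values are not reproduced.

```python
# Two-level full factorial: main effects and a two-factor interaction for a
# placeholder response function of four coded parameters (values in {-1, +1}).
# Illustrative only; not the PRO/II fuel-cell system model from the paper.
import itertools
import numpy as np

def response(x):
    a, b, c, d = x          # stand-ins for four system parameters (coded units)
    return 40 + 3*a + 1.5*b - 2*c + 0.5*d + 1.2*a*b   # hypothetical efficiency [%]

design = np.array(list(itertools.product([-1, 1], repeat=4)))
y = np.array([response(x) for x in design])

# Main effect of factor j: mean(y | x_j = +1) - mean(y | x_j = -1).
for j in range(4):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of factor {j}: {effect:+.2f}")

# Two-factor interaction between factors 0 and 1.
ab = design[:, 0] * design[:, 1]
print(f"interaction 0x1: {y[ab == 1].mean() - y[ab == -1].mean():+.2f}")
```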
SLS Model Based Design: A Navigation Perspective
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin
2018-01-01
The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.
World Energy Projection System Plus: An Overview
2016-01-01
This report contains a summary description of the methodology and scope of WEPS and each of its component models. WEPS is a computer-based, energy modeling system of long-term international energy markets for the period through 2035. The system was used to produce the International Energy Outlook 2011.
An assembly system based on industrial robot with binocular stereo vision
NASA Astrophysics Data System (ADS)
Tang, Hong; Xiao, Nanfeng
2017-01-01
This paper proposes an electronic part and component assembly system based on an industrial robot with binocular stereo vision. Firstly, binocular stereo vision with a visual attention mechanism model is used to get quickly the image regions which contain the electronic parts and components. Secondly, a deep neural network is adopted to recognize the features of the electronic parts and components. Thirdly, in order to control the end-effector of the industrial robot to grasp the electronic parts and components, a genetic algorithm (GA) is proposed to compute the transition matrix and the inverse kinematics of the industrial robot (end-effector), which plays a key role in bridging the binocular stereo vision and the industrial robot. Finally, the proposed assembly system is tested in LED component assembly experiments, and the results denote that it has high efficiency and good applicability.
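The genetic-algorithm idea can be illustrated on a much smaller problem than the paper's: the sketch below evolves the joint angles of a planar three-link arm so that its end-effector reaches a target point. Link lengths, GA settings, and the fitness function are assumptions for illustration, not the transition-matrix calibration used in the assembly system.

```python
# Toy GA solving planar 3-link inverse kinematics: evolve joint angles so the
# end-effector reaches a target. Settings and geometry are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
L = np.array([0.4, 0.3, 0.2])          # link lengths [m] (assumed)
target = np.array([0.5, 0.4])

def forward(theta):
    angles = np.cumsum(theta)          # absolute link angles
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def fitness(theta):
    return -np.linalg.norm(forward(theta) - target)   # higher is better

pop = rng.uniform(-np.pi, np.pi, size=(100, 3))
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection of parents.
    parents = pop[[max(rng.choice(len(pop), 3), key=lambda i: scores[i])
                   for _ in range(len(pop))]]
    # Arithmetic crossover between consecutive parents, then Gaussian mutation.
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1 - alpha) * np.roll(parents, 1, axis=0)
    children += rng.normal(0, 0.05, size=children.shape)
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("end-effector error [m]:", np.linalg.norm(forward(best) - target))
```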
Development of Parametric Mass and Volume Models for an Aerospace SOFC/Gas Turbine Hybrid System
NASA Technical Reports Server (NTRS)
Tornabene, Robert; Wang, Xiao-yen; Steffen, Christopher J., Jr.; Freeh, Joshua E.
2005-01-01
In aerospace power systems, mass and volume are key considerations to produce a viable design. The utilization of fuel cells is being studied for a commercial aircraft electrical power unit. Based on preliminary analyses, a SOFC/gas turbine system may be a potential solution. This paper describes the parametric mass and volume models that are used to assess an aerospace hybrid system design. The design tool utilizes input from the thermodynamic system model and produces component sizing, performance, and mass estimates. The software is designed such that the thermodynamic model is linked to the mass and volume model to provide immediate feedback during the design process. It allows for automating an optimization process that accounts for mass and volume in its figure of merit. Each component in the system is modeled with a combination of theoretical and empirical approaches. A description of the assumptions and design analyses is presented.
Passive simulation of the nonlinear port-Hamiltonian modeling of a Rhodes Piano
NASA Astrophysics Data System (ADS)
Falaize, Antoine; Hélie, Thomas
2017-03-01
This paper deals with the time-domain simulation of an electro-mechanical piano: the Fender Rhodes. A simplified description of this multi-physical system is considered. It is composed of a hammer (nonlinear mechanical component), a cantilever beam (linear damped vibrating component) and a pickup (nonlinear magneto-electronic transducer). The approach is to propose a power-balanced formulation of the complete system, from which a guaranteed-passive simulation is derived to generate physically-based realistic sound synthesis. These issues are addressed in four steps. First, a class of Port-Hamiltonian Systems is introduced: these input-to-output systems fulfill a power balance that can be decomposed into conservative, dissipative and source parts. Second, physical models are proposed for each component and are recast in the port-Hamiltonian formulation. In particular, a finite-dimensional model of the cantilever beam is derived, based on a standard modal decomposition applied to the Euler-Bernoulli model. Third, these systems are interconnected, providing a nonlinear finite-dimensional Port-Hamiltonian System of the piano. Fourth, a passive-guaranteed numerical method is proposed. This method is built to preserve the power balance in the discrete-time domain, and more precisely, its decomposition structured into conservative, dissipative and source parts. Finally, simulations are performed for a set of physical parameters, based on empirical but realistic values. They provide a variety of audio signals which are perceptually relevant and qualitatively similar to some signals measured on a real instrument.
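For reference, the general input-state-output port-Hamiltonian form alluded to above can be written as follows (a standard textbook statement with generic symbols, not the paper's specific hammer/beam/pickup variables):

\[
\dot{x} = \bigl(J(x) - R(x)\bigr)\,\nabla H(x) + G(x)\,u,
\qquad
y = G(x)^{\mathsf{T}}\,\nabla H(x),
\]
\[
\frac{\mathrm{d}H}{\mathrm{d}t}
 = -\nabla H(x)^{\mathsf{T}} R(x)\,\nabla H(x) + y^{\mathsf{T}} u
 \;\le\; y^{\mathsf{T}} u,
\]

where \(x\) is the state, \(H\) the stored energy (Hamiltonian), \(J(x) = -J(x)^{\mathsf{T}}\) the conservative interconnection, \(R(x) \succeq 0\) the dissipation, and \(y^{\mathsf{T}} u\) the power exchanged through the source ports; preserving this decomposition in discrete time is what yields the guaranteed-passive scheme.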
Brahms Mobile Agents: Architecture and Field Tests
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2002-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple, mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2016-12-01
Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
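For orientation, the sketch below estimates first-order variance-based (Sobol) sensitivity indices for a toy function with a plain Monte Carlo pick-and-freeze estimator; it illustrates only the conventional single-parameter case that the described framework generalizes to grouped scenario/model/parameter uncertainty components. The toy model and sample sizes are assumptions.

```python
# Plain pick-and-freeze Monte Carlo estimate of first-order Sobol indices for a
# toy model y = f(x1, x2, x3). Illustrates the conventional variance-based method
# that the described framework extends to grouped uncertainty components.
import numpy as np

rng = np.random.default_rng(4)

def f(x):
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(-1, 1, size=(n, d))
B = rng.uniform(-1, 1, size=(n, d))
yA = f(A)
var_y = yA.var()

for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]          # freeze column i from A, resample the rest
    Si = np.mean(yA * (f(ABi) - f(B))) / var_y
    print(f"first-order index S{i+1}: {Si:.2f}")
```

Grouping columns before the freeze step (e.g., all parameters belonging to one process) gives the kind of component-level index the paper's Bayesian-network framework formalizes.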
A practically unconditionally gradient stable scheme for the N-component Cahn-Hilliard system
NASA Astrophysics Data System (ADS)
Lee, Hyun Geun; Choi, Jeong-Whan; Kim, Junseok
2012-02-01
We present a practically unconditionally gradient stable conservative nonlinear numerical scheme for the N-component Cahn-Hilliard system modeling the phase separation of an N-component mixture. The scheme is based on a nonlinear splitting method and is solved by an efficient and accurate nonlinear multigrid method. The scheme allows us to convert the N-component Cahn-Hilliard system into a system of N-1 binary Cahn-Hilliard equations and significantly reduces the required computer memory and CPU time. We observe that our numerical solutions are consistent with the linear stability analysis results. We also demonstrate the efficiency of the proposed scheme with various numerical experiments.
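For context, one common statement of the N-component Cahn-Hilliard system is reproduced below with generic symbols; the specific free energy and scaling used in the paper may differ.

\[
\frac{\partial c_i}{\partial t} = M\,\Delta \mu_i,
\qquad
\mu_i = f(c_i) - \epsilon^{2} \Delta c_i + \beta(\mathbf{c}),
\qquad i = 1,\dots,N,
\]

with \(\sum_{i=1}^{N} c_i = 1\), where \(M\) is a mobility, \(f\) the derivative of the homogeneous free energy, \(\epsilon\) the gradient-energy (interface width) parameter, and \(\beta(\mathbf{c})\) a Lagrange-multiplier-type term enforcing the simplex constraint. Using the constraint to eliminate one component is what underlies the reduction to N-1 binary equations noted in the abstract.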
Optimum Vehicle Component Integration with InVeST (Integrated Vehicle Simulation Testbed)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, W; Paddack, E; Aceves, S
2001-12-27
We have developed an Integrated Vehicle Simulation Testbed (InVeST). InVeST is based on the concept of Co-simulation, and it allows the development of virtual vehicles that can be analyzed and optimized as an overall integrated system. The virtual vehicle is defined by selecting different vehicle components from a component library. Vehicle component models can be written in multiple programming languages running on different computer platforms. At the same time, InVeST provides full protection for proprietary models. Co-simulation is a cost-effective alternative to competing methodologies, such as developing a translator or selecting a single programming language for all vehicle components. InVeST has recently been demonstrated using a transmission model and a transmission controller model. The transmission model was written in SABER and ran on a Sun/Solaris workstation, while the transmission controller was written in MATRIXx and ran on a PC running Windows NT. The demonstration was successfully performed. Future plans include the applicability of Co-simulation and InVeST to analysis and optimization of multiple complex systems, including those of Intelligent Transportation Systems.
NASA Astrophysics Data System (ADS)
Kropotov, Y. A.; Belov, A. A.; Proskuryakov, A. Y.; Kolpakov, A. A.
2018-05-01
The paper considers models and methods for estimating signals during the transmission of information messages in audio-exchange telecommunication systems. One-dimensional probability distribution functions that can be used to separate useful signals from acoustic noise interference are presented. An approach to estimating the correlation and spectral functions of acoustic signal parameters is proposed, based on a parametric representation of the acoustic signals and the noise components. The paper suggests an approach to improving the efficiency of interference cancellation and extracting the required information when processing signals in telecommunication systems. In this case, acoustic noise suppression is based on adaptive filtering and adaptive compensation methods. The paper also describes models of echo signals and the structure of subscriber devices in operational command telecommunication systems.
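A minimal example of the adaptive-compensation idea is an LMS adaptive noise canceller: a reference noise input is filtered and subtracted from the primary (speech plus noise) channel, with the filter weights updated by the LMS rule. The signal model, filter length, and step size below are illustrative assumptions, not the parameters of the described telecommunication system.

```python
# Minimal LMS adaptive noise canceller: the reference noise is adaptively
# filtered and subtracted from the primary (signal + correlated noise) channel.
# Signal model and LMS parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n = 20000
t = np.arange(n)

speech = 0.5 * np.sin(2 * np.pi * 0.01 * t)                 # stand-in useful signal
noise_ref = rng.normal(size=n)                              # reference noise pickup
noise_in_primary = np.convolve(noise_ref, [0.6, -0.3, 0.1])[:n]  # causal coloring filter
primary = speech + noise_in_primary

L, mu = 8, 0.01                                             # filter length, step size
w = np.zeros(L)
out = np.zeros(n)
for k in range(L - 1, n):
    x = noise_ref[k - L + 1:k + 1][::-1]                    # current and past reference
    e = primary[k] - w @ x                                  # error = cleaned output
    w += 2 * mu * e * x                                     # LMS weight update
    out[k] = e

print("noise power before:", np.var(primary - speech).round(3),
      "after:", np.var(out[L:] - speech[L:]).round(3))
```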
C-Language Integrated Production System, Version 6.0
NASA Technical Reports Server (NTRS)
Riley, Gary; Donnell, Brian; Ly, Huyen-Anh Bebe; Ortiz, Chris
1995-01-01
C Language Integrated Production System (CLIPS) computer programs are specifically intended to model human expertise or other knowledge. CLIPS is designed to enable research on, and development and delivery of, artificial intelligence on conventional computers. CLIPS 6.0 provides a cohesive software tool for handling a wide variety of knowledge with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming: representation of knowledge as heuristics - essentially, rules of thumb that specify a set of actions to be performed in a given situation. Object-oriented programming: modeling of complex systems composed of modular components that can easily be reused to model other systems or to create new components. Procedural programming: representation of knowledge in ways similar to those of languages such as C, Pascal, Ada, and LISP. The version of CLIPS 6.0 for IBM PC-compatible computers requires DOS v3.3 or later and/or Windows 3.1 or later.
Systems Engineering Model and Training Application for Desktop Environment
NASA Technical Reports Server (NTRS)
May, Jeffrey T.
2010-01-01
This application provides a graphical-user-interface-based simulator for desktop training, operations, procedure development, and system reference. The simulator allows engineers to train and better understand the dynamics of their system from their local desktops. It allows users to train on and evaluate their system at a pace and skill level based on the user's competency, and from a perspective based on the user's need. The simulator does not require any special resources to execute and should generally be available for use. The interface is based on the concept of presenting the model of the system in the way that best suits the user's application or training needs. The three levels of views are the Component View, the System View (overall system), and the Console View (monitor). These views are portals into a single model, so changing the model from one view or from a model manager Graphical User Interface is reflected in all other views.
EON: a component-based approach to automation of protocol-directed therapy.
Musen, M A; Tu, S W; Das, A K; Shahar, Y
1996-01-01
Provision of automated support for planning protocol-directed therapy requires a computer program to take as input clinical data stored in an electronic patient-record system and to generate as output recommendations for therapeutic interventions and laboratory testing that are defined by applicable protocols. This paper presents a synthesis of research carried out at Stanford University to model the therapy-planning task and to demonstrate a component-based architecture for building protocol-based decision-support systems. We have constructed general-purpose software components that (1) interpret abstract protocol specifications to construct appropriate patient-specific treatment plans; (2) infer from time-stamped patient data higher-level, interval-based, abstract concepts; (3) perform time-oriented queries on a time-oriented patient database; and (4) allow acquisition and maintenance of protocol knowledge in a manner that facilitates efficient processing both by humans and by computers. We have implemented these components in a computer system known as EON. Each of the components has been developed, evaluated, and reported independently. We have evaluated the integration of the components as a composite architecture by implementing T-HELPER, a computer-based patient-record system that uses EON to offer advice regarding the management of patients who are following clinical trial protocols for AIDS or HIV infection. A test of the reuse of the software components in a different clinical domain demonstrated rapid development of a prototype application to support protocol-based care of patients who have breast cancer. PMID:8930854
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...
USDA-ARS?s Scientific Manuscript database
AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality (H/WQ) simulation components under the Object Modeling System (OMS3) environmental modeling framework. AgES-W has recently been enhanced with the addition of nitrogen (N) a...
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
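A minimal sketch of the unscented transform step described above, assuming a Gaussian state estimate; the scaling parameters and the `simulate_eol` function are placeholders rather than the authors' valve model.

```python
import numpy as np

def ut_eol_prediction(mean, cov, simulate_eol, alpha=1.0, beta=2.0, kappa=0.0):
    """Approximate mean and variance of end of life (EOL) by passing
    sigma points of the current state estimate through an EOL simulation."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)      # matrix square root

    sigma = [mean] + [mean + S[:, i] for i in range(n)] \
                   + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)

    eol = np.array([simulate_eol(s) for s in sigma])   # only 2n+1 simulations
    eol_mean = wm @ eol
    eol_var = wc @ (eol - eol_mean) ** 2
    return eol_mean, eol_var

# Toy usage with a hypothetical 2-state damage model (damage level, loading factor)
mean = np.array([0.2, 1.0])
cov = np.diag([0.01, 0.04])
toy_eol = lambda x: (1.0 - x[0]) / (0.05 * x[1])   # hours until damage reaches 1
print(ut_eol_prediction(mean, cov, toy_eol))
```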
An Overview of the Iowa Flood Forecasting and Monitoring System
NASA Astrophysics Data System (ADS)
Krajewski, W. F.
2016-12-01
Following the 2008 flood that devastated eastern Iowa, the state legislators established the Iowa Flood Center at the University of Iowa with the mission of translational research towards flood mitigation. The Center has advanced several components towards this goal. In particular, the Center has developed (1) state-wide flood inundation maps based on airborne lidar-based topography data and hydraulic models; (2) a network of nearly 250 real-time ultrasonic river stage sensors; (3) a detailed rainfall-runoff model for real-time streamflow forecasting; and (4) cyberinfrastructure to acquire and manage data, which includes High Performance Computing and a browser-based information system designed for use by the general public. The author discusses these components, their operational performance, and their potential to assist in the development of similar nation-wide systems. Specifically, many developments taking place at the National Water Center can benefit from the Iowa system serving as a reference.
NASA Astrophysics Data System (ADS)
Ascough, J. C.; David, O.; Heathman, G. C.; Smith, D. R.; Green, T. R.; Krause, P.; Kipka, H.; Fink, M.
2010-12-01
The Object Modeling System 3 (OMS3), currently being developed by the USDA-ARS Agricultural Systems Research Unit and Colorado State University (Fort Collins, CO), provides a component-based environmental modeling framework which allows the implementation of single- or multi-process modules that can be developed and applied as custom-tailored model configurations. OMS3 as a “lightweight” modeling framework contains four primary foundations: modeling resources (e.g., components) annotated with modeling metadata; domain specific knowledge bases and ontologies; tools for calibration, sensitivity analysis, and model optimization; and methods for model integration and performance scalability. The core is able to manage modeling resources and development tools for model and simulation creation, execution, evaluation, and documentation. OMS3 is based on the Java platform but is highly interoperable with C, C++, and FORTRAN on all major operating systems and architectures. The ARS Conservation Effects Assessment Project (CEAP) Watershed Assessment Study (WAS) Project Plan provides detailed descriptions of ongoing research studies at 14 benchmark watersheds in the United States. In order to satisfy the requirements of CEAP WAS Objective 5 (“develop and verify regional watershed models that quantify environmental outcomes of conservation practices in major agricultural regions”), a new watershed model development approach was initiated to take advantage of OMS3 modeling framework capabilities. Specific objectives of this study were to: 1) disaggregate and refactor various agroecosystem models (e.g., J2K-S, SWAT, WEPP) and implement hydrological, N dynamics, and crop growth science components under OMS3, 2) assemble a new modular watershed scale model for fully-distributed transfer of water and N loading between land units and stream channels, and 3) evaluate the accuracy and applicability of the modular watershed model for estimating stream flow and N dynamics. The Cedar Creek watershed (CCW) in northeastern Indiana, USA was selected for application of the OMS3-based AgroEcoSystem-Watershed (AgES-W) model. AgES-W performance for stream flow and N loading was assessed using Nash-Sutcliffe model efficiency (ENS) and percent bias (PBIAS) model evaluation statistics. Comparisons of daily and average monthly simulated and observed stream flow and N loads for the 1997-2005 simulation period resulted in PBIAS and ENS values that were similar or better than those reported in the literature for SWAT stream flow and N loading predictions at a similar scale. The results show that the AgES-W model was able to reproduce the hydrological and N dynamics of the CCW with sufficient quality, and should serve as a foundation upon which to better quantify additional water quality indicators (e.g., sediment transport and P dynamics) at the watershed scale.
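The two evaluation statistics used here, Nash-Sutcliffe efficiency (ENS) and percent bias (PBIAS), follow standard definitions; a small sketch with hypothetical stream flow arrays (not AgES-W code):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """ENS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias; positive values indicate underestimation (common convention)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

# Hypothetical daily stream flow values (m^3/s)
obs = [1.2, 3.4, 2.8, 5.1, 4.0]
sim = [1.0, 3.0, 3.1, 4.8, 4.4]
print(nash_sutcliffe(obs, sim), pbias(obs, sim))
```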
Abby, Sophie S.; Néron, Bertrand; Ménager, Hervé; Touchon, Marie; Rocha, Eduardo P. C.
2014-01-01
Motivation: Biologists often wish to use their knowledge on a few experimental models of a given molecular system to identify homologs in genomic data. We developed a generic tool for this purpose. Results: Macromolecular System Finder (MacSyFinder) provides a flexible framework to model the properties of molecular systems (cellular machinery or pathway) including their components, evolutionary associations with other systems and genetic architecture. Modelled features also include functional analogs, and the multiple uses of the same component by different systems. Models are used to search for molecular systems in complete genomes or in unstructured data like metagenomes. The components of the systems are searched by sequence similarity using Hidden Markov model (HMM) protein profiles. The assignment of hits to a given system is decided based on compliance with the content and organization of the system model. A graphical interface, MacSyView, facilitates the analysis of the results by showing overviews of component content and genomic context. To exemplify the use of MacSyFinder we built models to detect and classify CRISPR-Cas systems following a previously established classification. We show that MacSyFinder makes it possible to easily define an accurate “Cas-finder” using publicly available protein profiles. Availability and Implementation: MacSyFinder is a standalone application implemented in Python. It requires Python 2.7, Hmmer and makeblastdb (version 2.2.28 or higher). It is freely available with its source code under a GPLv3 license at https://github.com/gem-pasteur/macsyfinder. It is compatible with all platforms supporting Python and Hmmer/makeblastdb. The “Cas-finder” (models and HMM profiles) is distributed as a compressed tarball archive as Supporting Information. PMID:25330359
Modelling Wind Turbine Failures based on Weather Conditions
NASA Astrophysics Data System (ADS)
Reder, Maik; Melero, Julio J.
2017-11-01
A large proportion of the overall costs of a wind farm is directly related to operation and maintenance (O&M) tasks. By applying predictive O&M strategies rather than corrective approaches these costs can be decreased significantly. Here, wind turbine (WT) failure models in particular can help to understand the components’ degradation processes and enable the operators to anticipate upcoming failures. Usually, these models are based on the age of the systems or components. However, recent research shows that the on-site weather conditions also affect the turbine failure behaviour significantly. This study presents a novel approach to model WT failures based on the environmental conditions to which they are exposed. The results focus on general WT failures, as well as on four main components: gearbox, generator, pitch and yaw system. A penalised likelihood estimation is used in order to avoid problems due to, for example, highly correlated input covariates. The relative importance of the model covariates is assessed in order to analyse the effect of each weather parameter on the model output.
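As a rough illustration of penalised likelihood estimation with correlated weather covariates, the sketch below fits an elastic-net-regularised logistic failure model to synthetic data; the covariates, sample size, and penalty settings are assumptions, not the study's actual model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical monthly records: wind speed, temperature, humidity, gust intensity
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                 # weather covariates (synthetic)
y = rng.binomial(1, 0.2, size=500)            # 1 = failure observed in that month

# The elastic-net penalty shrinks coefficients of correlated covariates
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
model.fit(X, y)
# Shrunk coefficients give a rough ranking of covariate importance
print(model.named_steps["logisticregression"].coef_)
```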
Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur
2012-01-01
This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions.
Predicting phase equilibria in one-component systems
NASA Astrophysics Data System (ADS)
Korchuganova, M. R.; Esina, Z. N.
2015-07-01
It is shown that the Simon equation coefficients for n-alkanes and n-alcohols can be modeled using critical and triple point parameters. Predictions of the liquid-vapor, solid-vapor, and liquid-solid phase equilibria in one-component systems are based on the Clausius-Clapeyron relation, the Van der Waals and Simon equations, and the principle of thermodynamic similarity.
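For reference, the two relations named above take the following standard forms; the Simon-Glatzel parameterisation and symbols are generic, not necessarily those used in the paper.

```latex
% Clausius-Clapeyron relation along a two-phase coexistence line
\frac{dP}{dT} = \frac{\Delta H}{T\,\Delta V}

% Simon (Simon-Glatzel) melting curve anchored at the triple point (T_{tr}, P_{tr})
P - P_{tr} = a\left[\left(\frac{T}{T_{tr}}\right)^{c} - 1\right]
```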
OFMspert: An architecture for an operator's associate that evolves to an intelligent tutor
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1991-01-01
With the emergence of new technology for both human-computer interaction and knowledge-based systems, a range of opportunities exist which enhance the effectiveness and efficiency of controllers of high-risk engineering systems. The design of an architecture for an operator's associate is described. This associate is a stand-alone model-based system designed to interact with operators of complex dynamic systems, such as airplanes, manned space systems, and satellite ground control systems in ways comparable to that of a human assistant. The operator function model expert system (OFMspert) architecture and the design and empirical validation of OFMspert's understanding component are described. The design and validation of OFMspert's interactive and control components are also described. A description of current work in which OFMspert provides the foundation in the development of an intelligent tutor that evolves to an assistant, as operator expertise evolves from novice to expert, is provided.
Force Project Technology Presentation to the NRCC
2014-02-04
Functional Bridge components: Smart Odometer, Adv Pretreatment, Smart Bridge, Multi-functional Gap Crossing, Fuel Automated Tracking System, Adv... Comprehensive matrix of candidate composite material systems and textile reinforcement architectures via modeling/analyses and testing. Product(s)...: Validated Dynamic Modeling tool based on a parametric study using material models to reliably predict the textile mechanics of the hose
Hyper-Book: A Formal Model for Electronic Books.
ERIC Educational Resources Information Center
Catenazzi, Nadia; Sommaruga, Lorenzo
1994-01-01
Presents a model for electronic books based on the paper book metaphor. Discussion includes how the book evolves under the effects of its functional components; the use and impact of the model for organizing and presenting electronic documents in the context of electronic publishing; and the possible applications of a system based on the model.…
Life and reliability models for helicopter transmissions
NASA Technical Reports Server (NTRS)
Savage, M.; Knorr, R. J.; Coy, J. J.
1982-01-01
Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission the input and output shafts are co-axial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load-versus-life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. To illustrate their use, the models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with a 1500 rpm input speed.
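A sketch of the reliability structure described above in generic notation (two-parameter Weibull components combined in series and a power-law load-life relation; the symbols are illustrative, not the paper's):

```latex
% Two-parameter Weibull reliability of transmission component i at life t
R_i(t) = \exp\!\left[-\left(\frac{t}{\theta_i}\right)^{\beta_i}\right]

% Series system of independent components
R_{sys}(t) = \prod_i R_i(t)

% Power-law load-life relation (C_i = basic dynamic capacity, p_i = load-life exponent)
L_i \propto \left(\frac{C_i}{P}\right)^{p_i}
```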
System cost/performance analysis (study 2.3). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Kazangey, T.
1973-01-01
The relationships between performance, safety, cost, and schedule parameters were identified and quantified in support of an overall effort to generate program models and methodology that provide insight into a total space vehicle program. A specific space vehicle system, the attitude control system (ACS), was used, and a modeling methodology was selected that develops a consistent set of quantitative relationships among performance, safety, cost, and schedule, based on the characteristics of the components utilized in candidate mechanisms. These descriptive equations were developed for a three-axis, earth-pointing, mass expulsion ACS. A data base describing typical candidate ACS components was implemented, along with a computer program to perform sample calculations. This approach, implemented on a computer, is capable of determining the effect of a change in functional requirements to the ACS mechanization and the resulting cost and schedule. By a simple extension of this modeling methodology to the other systems in a space vehicle, a complete space vehicle model can be developed. Study results and recommendations are presented.
Meteorological Processes Affecting Air Quality – Research and Model Development Needs
Meteorology modeling is an important component of air quality modeling systems that defines the physical and dynamical environment for atmospheric chemistry. The meteorology models used for air quality applications are based on numerical weather prediction models that were devel...
Structural Similitude and Scaling Laws
NASA Technical Reports Server (NTRS)
Simitses, George J.
1998-01-01
Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis, and numerous careful experimental evaluations of components and prototypes, in order to achieve the necessary system reliability, performance and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, expected system performance, and identification of components and their connections, as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the characteristics of the entire system. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large-component static and dynamic testing. Such tests are extremely difficult, time consuming, and absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum. Full-scale large-component testing is necessary in other industries as well: shipbuilding, automobile, and railway car construction all rely heavily on testing. Regardless of the application, a scaled-down (by a large factor) model (scale model) which closely represents the structural behavior of the full-scale system (prototype) can prove to be an extremely beneficial tool. This possible development must be based on the existence of certain structural parameters that control the behavior of a structural system when acted upon by static and/or dynamic loads. If such structural parameters exist, a scaled-down replica can be built which will duplicate the response of the full-scale system. The two systems are then said to be structurally similar. The term, then, that best describes this similarity is structural similitude. Similarity of systems requires that the relevant system parameters be identical and that these systems be governed by a unique set of characteristic equations. Thus, if a relation or equation of variables is written for a system, it is valid for all systems which are similar to it. Each variable in a model is proportional to the corresponding variable of the prototype. This ratio, which plays an essential role in predicting the relationship between the model and its prototype, is called the scale factor.
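The scale factor defined at the end of the abstract can be stated compactly; a minimal formulation in generic notation (the direction of the ratio depends on convention):

```latex
% Scale factor for a variable x: ratio of the model value to the prototype value
\lambda_x = \frac{x_m}{x_p}

% Structural similitude requires the governing dimensionless groups to coincide
\Pi_i^{(m)} = \Pi_i^{(p)} \quad \text{for all } i
```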
The Invasive Species Forecasting System
NASA Technical Reports Server (NTRS)
Schnase, John; Most, Neal; Gill, Roger; Ma, Peter
2011-01-01
The Invasive Species Forecasting System (ISFS) provides computational support for the generic work processes found in many regional-scale ecosystem modeling applications. Decision support tools built using ISFS allow a user to load point occurrence field sample data for a plant species of interest and quickly generate habitat suitability maps for geographic regions of management concern, such as a national park, monument, forest, or refuge. This type of decision product helps resource managers plan invasive species protection, monitoring, and control strategies for the lands they manage. Until now, scientists and resource managers have lacked the data-assembly and computing capabilities to produce these maps quickly and cost efficiently. ISFS focuses on regional-scale habitat suitability modeling for invasive terrestrial plants. ISFS's component architecture emphasizes simplicity and adaptability. Its core services can be easily adapted to produce model-based decision support tools tailored to particular parks, monuments, forests, refuges, and related management units. ISFS can be used to build standalone run-time tools that require no connection to the Internet, as well as fully Internet-based decision support applications. ISFS provides the core data structures, operating system interfaces, network interfaces, and inter-component constraints comprising the canonical workflow for habitat suitability modeling. The predictors, analysis methods, and geographic extents involved in any particular model run are elements of the user space and arbitrarily configurable by the user. ISFS provides small, lightweight, readily hardened core components of general utility. These components can be adapted to unanticipated uses, are tailorable, and require at most a loosely coupled, nonproprietary connection to the Web. Users can invoke capabilities from a command line; programmers can integrate ISFS's core components into more complex systems and services. Taken together, these features enable a degree of decentralization and distributed ownership that have helped other types of scientific information services succeed in recent years.
A Nonlinear Model for Gene-Based Gene-Environment Interaction.
Sa, Jian; Liu, Xu; He, Tao; Liu, Guifen; Cui, Yuehua
2016-06-04
A vast amount of literature has confirmed the role of gene-environment (G×E) interaction in the etiology of complex human diseases. Traditional methods are predominantly focused on the analysis of interaction between a single nucleotide polymorphism (SNP) and an environmental variable. Given that genes are the functional units, it is crucial to understand how gene effects (rather than single SNP effects) are influenced by an environmental variable to affect disease risk. Motivated by the increasing awareness of the power of gene-based association analysis over the single-variant-based approach, in this work we propose a sparse principal component regression (sPCR) model to understand the gene-based G×E interaction effect on complex disease. We first extract the sparse principal components for the SNPs in a gene; the effect of each principal component is then modeled by a varying-coefficient (VC) model. The model can jointly model variants in a gene whose effects are nonlinearly influenced by an environmental variable. In addition, the varying-coefficient sPCR (VC-sPCR) model has a nice interpretation property, since the sparsity of the principal component loadings indicates the relative importance of the corresponding SNPs in each component. We applied our method to a human birth weight dataset in a Thai population. We analyzed 12,005 genes across 22 chromosomes and found one significant interaction effect using the Bonferroni correction method and one suggestive interaction. The model performance was further evaluated through simulation studies. Our model provides a systematic approach to evaluate gene-based G×E interaction.
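A schematic of the varying-coefficient sparse principal component regression described above, in generic notation (the link, number of components, and symbols are illustrative, not the paper's):

```latex
% Sparse principal components Z_1, ..., Z_K summarise the SNP genotypes G of a gene;
% their effects vary smoothly with the environmental variable E
y = \beta_0(E) + \sum_{k=1}^{K} \beta_k(E)\, Z_k + \varepsilon,
\qquad Z_k = \mathbf{G}\,\mathbf{v}_k \ \ (\mathbf{v}_k \text{ sparse loading vector})
```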
Formulation of an experimental substructure model using a Craig-Bampton based transmission simulator
Kammer, Daniel C.; Allen, Matthew S.; Mayes, Randall L.
2015-09-26
An experimental–analytical substructuring approach is attractive when there is motivation to replace one or more system subcomponents with an experimental model. This experimentally derived substructure can then be coupled to finite element models of the rest of the structure to predict the system response. The transmission simulator method couples a fixture to the component of interest during a vibration test in order to improve the experimental model for the component. The transmission simulator is then subtracted from the tested system to produce the experimental component. This method reduces ill-conditioning by imposing a least squares fit of constraints between substructure modal coordinates to connect substructures, instead of directly connecting physical interface degrees of freedom. This paper presents an alternative means of deriving the experimental substructure model, in which a Craig–Bampton representation of the transmission simulator is created and subtracted from the experimental measurements. The corresponding modal basis of the transmission simulator is described by the fixed-interface modes, rather than the free modes that were used in the original approach. Moreover, these modes do a better job of representing the shape of the transmission simulator as it responds within the experimental system, leading to more accurate results using fewer modes. The new approach is demonstrated using a simple finite-element-model-based example with a redundant interface.
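For context, the Craig–Bampton representation referenced here expresses a substructure's interior motion through fixed-interface normal modes plus static constraint modes; a standard textbook statement of the transformation (generic notation, not the paper's):

```latex
% Partition the physical DOF into interior (i) and boundary (b) sets
\begin{Bmatrix} u_i \\ u_b \end{Bmatrix}
=
\begin{bmatrix} \Phi_{ik} & \Psi_{ib} \\ 0 & I \end{bmatrix}
\begin{Bmatrix} q_k \\ u_b \end{Bmatrix}
% \Phi_{ik}: fixed-interface normal modes,  \Psi_{ib}: static constraint modes
```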
Harbison, K; Kelly, J; Burnell, L; Silva, J
1995-01-01
The Scenario-based Engineering Process (SEP) is a user-focused methodology for large and complex system design. This process supports new application development from requirements analysis with domain models to component selection, design and modification, implementation, integration, and archival placement. It is built upon object-oriented methodologies, domain modeling strategies, and scenario-based techniques to provide an analysis process for mapping application requirements to available components. We are using SEP in the health care applications that we are developing. The process has already achieved success in the manufacturing and military domains and is being adopted by many organizations. SEP should prove viable in any domain containing scenarios that can be decomposed into tasks.
Zholtkevych, G N; Nosov, K V; Bespalov, Yu G; Rak, L I; Abhishek, M; Vysotskaya, E V
2018-05-24
State-of-the-art research in the field of life's organization confronts the need to investigate a number of interacting components, their properties, and the conditions for sustainable behaviour within a natural system. In biology, ecology, and the life sciences, the performance of such a stable system is usually related to homeostasis, the ability of the system to actively regulate its state within certain allowable limits. In our previous work, we proposed a deterministic model for systems' homeostasis. The model was based on dynamical systems theory and pairwise relationships of competition, amensalism, and antagonism taken from theoretical biology and ecology. The present paper adds a different dimension to our previous results based on the same model. Here, we introduce the influence of inter-component relationships in a system, wherein the impact is characterized by direction (neutral, positive, or negative) as well as by its (absolute) value, or strength. This makes the model stochastic, which, in our opinion, is more consistent with real-world elements affected by various random factors. The case study includes two examples from the areas of hydrobiology and medicine. The models acquired for these cases enabled us to propose a convincing explanation for the corresponding phenomena observed in different types of natural systems.
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. T. Clark; M. J. Russell; R. E. Spears
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depend on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
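The load/flexibility-factor iteration described above can be sketched as a simple fixed-point loop; the two functions below are toy stand-ins for the linear piping-system analysis and the nonlinear component finite element model, not actual Code calculations.

```python
def run_linear_system_analysis(k):
    # Toy stand-in: the system moment load decreases as the component becomes more flexible
    return 1000.0 / (1.0 + 0.5 * k)

def flexibility_from_fe_model(load):
    # Toy stand-in: the FE-derived flexibility factor grows mildly with the applied load
    return 2.0 + 0.001 * load

def iterate_flexibility_factors(k0=2.0, tol=1e-3, max_iter=50):
    """Fixed-point iteration between the linear system analysis (loads depend on
    the flexibility factor) and the component FE model (factor depends on loads)."""
    k = k0
    for _ in range(max_iter):
        load = run_linear_system_analysis(k)
        k_new = flexibility_from_fe_model(load)
        if abs(k_new - k) < tol * abs(k):     # converged when k stabilises
            return k_new, load
        k = k_new
    raise RuntimeError("flexibility-factor iteration did not converge")

print(iterate_flexibility_factors())
```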
From scenarios to domain models: processes and representations
NASA Astrophysics Data System (ADS)
Haddock, Gail; Harbison, Karan
1994-03-01
The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
A Risk-Based Approach for Aerothermal/TPS Analysis and Testing
NASA Technical Reports Server (NTRS)
Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak
2007-01-01
The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.
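The Monte-Carlo based uncertainty analysis can be illustrated generically; the surrogate heating model, input parameters, and distributions below are hypothetical placeholders rather than actual aerothermal or material response models.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                   # number of Monte Carlo samples

# Hypothetical uncertain model inputs (distributions are illustrative)
catalytic_eff = rng.uniform(0.6, 1.0, N)     # surface catalytic efficiency
transition_Re = rng.normal(3e5, 5e4, N)      # boundary-layer transition Reynolds number
emissivity    = rng.normal(0.85, 0.03, N)    # TPS surface emissivity

def peak_heat_flux(cat, re_tr, eps):
    """Toy surrogate standing in for an aerothermal + material response simulation."""
    return 45.0 * (0.5 + 0.5 * cat) * (1.0 + 2e-7 * re_tr) / eps

q = peak_heat_flux(catalytic_eff, transition_Re, emissivity)
print("mean =", q.mean(), " 99th percentile =", np.percentile(q, 99))

# Crude sensitivity ranking via correlation between each input and the output
for name, x in [("catalytic_eff", catalytic_eff),
                ("transition_Re", transition_Re),
                ("emissivity", emissivity)]:
    print(name, np.corrcoef(x, q)[0, 1])
```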
Research on modeling and conduction disturbance simulation of secondary power system in a device
NASA Astrophysics Data System (ADS)
Ding, Xu; Yu, Zhi-Yong; Jin, Rui
2017-06-01
To quickly and effectively identify electromagnetic interference (EMI) and other problems in the design of a secondary power supply system, simulations are carried out on the Saber simulation software platform. A DC/DC converter model with complete performance and electromagnetic characteristics is established by combining parametric modeling with the Mast language. Using the macro-modeling method, the Hall current sensor and power supply filter models are established based on the function and schematic diagrams of the components. Simulations of both the component models and the whole secondary power supply system are then carried out. The simulation results show that the proposed model satisfies the functional requirements of the system and has high accuracy. At the same time, because the DC/DC converter model captures ripple characteristics, it can be used as a conducted-interference model to simulate the CE102 power-bus conducted emission test under full load, which provides a useful reference for electromagnetic interference suppression in the system.
JAMS - a software platform for modular hydrological modelling
NASA Astrophysics Data System (ADS)
Kralisch, Sven; Fischer, Christian
2015-04-01
Current challenges of understanding and assessing the impacts of climate and land use changes on environmental systems demand an ever-increasing integration of data and process knowledge in corresponding simulation models. Software frameworks that allow for a seamless creation of integrated models based on less complex components (domain models, process simulation routines) have therefore gained increasing attention during the last decade. JAMS is an Open-Source software framework that has been especially designed to cope with the challenges of eco-hydrological modelling. This is reflected by (i) its flexible approach for representing time and space, (ii) a strong separation of process simulation components from the declarative description of more complex models using domain-specific XML, (iii) powerful analysis and visualization functions for spatial and temporal input and output data, and (iv) parameter optimization and uncertainty analysis functions commonly used in environmental modelling. Based on JAMS, different hydrological and nutrient-transport simulation models were implemented and successfully applied in recent years. We will present the JAMS core concepts and give an overview of models, simulation components, and support tools available for the framework. Sample applications will be used to underline the advantages of component-based model designs and to show how JAMS can be used to address the challenges of integrated hydrological modelling.
A Distributed Approach to System-Level Prognostics
2012-09-01
the end of (useful) life (EOL) and/or the remaining useful life (RUL) of components, subsystems, or systems. The prognostics problem itself can be ... system state estimate, computes EOL and/or RUL. In this paper, we focus on a model-based prognostics approach (Orchard & Vachtsevanos, 2009; Daigle ... been focused on individual components, and determining their EOL and RUL, e.g., (Orchard & Vachtsevanos, 2009; Saha & Goebel, 2009; Daigle & Goebel
ERIC Educational Resources Information Center
Oner, Diler; Adadan, Emine
2016-01-01
This study investigated the effectiveness of an integrated web-based portfolio system, namely the BOUNCE System, which primarily focuses on improving preservice teachers' reflective thinking skills. BOUNCE©, the software component of the system, was designed and developed to support a teaching practice model including a cycle of activities to be…
AiGERM: A logic programming front end for GERM
NASA Technical Reports Server (NTRS)
Hashim, Safaa H.
1990-01-01
AiGerm (Artificially Intelligent Graphical Entity Relation Modeler) is a relational data base query and programming language front end for MCC (Mission Control Center)/STP's (Space Test Program) Germ (Graphical Entity Relational Modeling) system. It is intended as an add-on component of the Germ system to be used for navigating very large networks of information. It can also function as an expert system shell for prototyping knowledge-based systems. AiGerm provides an interface between the programming language and Germ.
Research on The Construction of Flexible Multi-body Dynamics Model based on Virtual Components
NASA Astrophysics Data System (ADS)
Dong, Z. H.; Ye, X.; Yang, F.
2018-05-01
Focusing on the harsh operating conditions of space manipulators, which cannot tolerate relatively large collision momenta, this paper proposes a new concept and technology called soft-contact technology. To address the collision dynamics of the flexible multi-body system introduced by this technology, the paper also proposes the concepts of virtual components and virtual hinges, constructs a flexible dynamics model based on virtual components, and studies its solution. On this basis, NX is used to model and compare simulations of the space manipulator in three different modes. The results show that a model combining multiple rigid bodies, flexible body hinges, and controllable damping can effectively control the amplitude of the force and torque caused by collision with the target satellite.
A Stakeholder-Based System Dynamics Model of Return-to-Work: A Research Protocol.
Jetha, Arif; Pransky, Glenn; Fish, Jon; Jeffries, Susan; Hettinger, Lawrence J
2015-07-16
Returning to work following a job-related injury or illness can be a complex process, influenced by a range of interrelated personal, psychosocial, and organizational components. System dynamics modelling (SDM) takes a sociotechnical systems perspective to view return-to-work (RTW) as a system made up of multiple feedback relationships between influential components. To build the RTW SDM, a mixed-method approach will be used. The first stage, which has already been completed, involved creating a baseline model using key informant interviews. Second, in two manufacturing companies, stakeholder-based models will be developed through interviews and focus groups with senior management, frontline workers, and frontline supervisors. Participants will be asked about the RTW process in general and more targeted questions regarding influential components. Participants will also be led through a reference mode exercise where they will be asked to estimate the direction, shape and magnitude of relationships between influential components. Data will be entered into the software program Vensim, which provides a platform for visualizing system structure and simulating the effects of adapting components. Finally, preliminary model validity testing will be conducted to provide insights on model generalizability and sensitivity. The proposed methodology will create a SDM of the RTW process using feedback relationships of influential components. It will also provide an important simulation tool to understand the system behaviour that underlies complex RTW cases, and to examine anticipated and unanticipated consequences of disability management policies. Significance for public health: While the incidence of occupational injuries and illnesses has declined over the past two decades, the proportion resulting in sickness absence has actually increased. Implementing strategies to address sickness absences and promote return-to-work (RTW) can significantly benefit physical and mental health, and work outcomes like worker engagement, job satisfaction and job strain. As a key social determinant of health, participation in paid work can also ensure that work-disabled individuals generate income necessary for access to housing, education, food, and social services that also benefit health. Improving RTW outcomes can also have significant societal benefits such as a reduction in workers compensation costs, increased economic activity and less burden on social assistance programs. Despite its benefits, returning to work after injury or illness is not a straightforward process and can be complicated by the individual, psychosocial, organizational and regulatory components that influence a disabled person's ability to resume work activities.
An automatic chip structure optical inspection system for electronic components
NASA Astrophysics Data System (ADS)
Song, Zhichao; Xue, Bindang; Liang, Jiyuan; Wang, Ke; Chen, Junzhang; Liu, Yunhe
2018-01-01
An automatic chip structure inspection system based on machine vision is presented to ensure the reliability of electronic components. It consists of four major modules: a metallographic microscope, a Gigabit Ethernet high-resolution camera, a control system, and a high performance computer. An auto-focusing technique is presented to solve the problem that the chip surface does not lie on a single focal plane under the high magnification of the microscope. A panoramic high-resolution image stitching algorithm is adopted to deal with the trade-off between resolution and field of view caused by the different sizes of electronic components. In addition, we establish a database to store and recall appropriate parameters, ensuring the consistency of chip images for electronic components of the same model. We use image change detection technology to inspect the chip images of electronic components. The system achieves high-resolution imaging for chips of various sizes, clear imaging of chip surfaces at different horizontal levels, standardized imaging for components of the same model, and recognition of chip defects.
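Auto-focusing under high magnification is commonly driven by a sharpness metric evaluated over a focal sweep; the variance-of-Laplacian measure below is one standard choice, offered only as a sketch and not the system's actual algorithm.

```python
import numpy as np

# 3x3 Laplacian kernel used as a sharpness detector
LAPLACIAN_CENTER = -4.0

def focus_measure(gray):
    """Variance of the Laplacian response: higher means sharper focus."""
    g = np.asarray(gray, dtype=float)
    resp = (LAPLACIAN_CENTER * g[1:-1, 1:-1]
            + g[:-2, 1:-1] + g[2:, 1:-1]      # pixel above and below
            + g[1:-1, :-2] + g[1:-1, 2:])     # pixel left and right
    return resp.var()

def best_focus(frames):
    """Pick the frame (e.g. one per z-position of the stage) with maximal sharpness."""
    scores = [focus_measure(f) for f in frames]
    return int(np.argmax(scores)), scores
```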
NASA Astrophysics Data System (ADS)
Nemravová, J. A.; Harmanec, P.; Brož, M.; Vokrouhlický, D.; Mourard, D.; Hummel, C. A.; Cameron, C.; Matthews, J. M.; Bolton, C. T.; Božić, H.; Chini, R.; Dembsky, T.; Engle, S.; Farrington, C.; Grunhut, J. H.; Guenther, D. B.; Guinan, E. F.; Korčáková, D.; Koubský, P.; Kříček, R.; Kuschnig, R.; Mayer, P.; McCook, G. P.; Moffat, A. F. J.; Nardetto, N.; Prša, A.; Ribeiro, J.; Rowe, J.; Rucinski, S.; Škoda, P.; Šlechta, M.; Tallon-Bosc, I.; Votruba, V.; Weiss, W. W.; Wolf, M.; Zasche, P.; Zavala, R. T.
2016-10-01
Context. Compact hierarchical systems are important because the effects caused by the dynamical interaction among their members occur on a human timescale. These interactions play a role in the formation of close binaries through Kozai cycles with tides. One such system is ξ Tauri: it has three hierarchical orbits: 7.14 d (eclipsing components Aa, Ab), 145 d (components Aa+Ab, B), and 51 yr (components Aa+Ab+B, C). Aims: We aim to obtain the physical properties of the system and to study the dynamical interaction between its components. Methods: Our analysis is based on a large series of spectroscopic, photometric (including space-borne) observations and long-baseline optical and infrared spectro-interferometric observations. We used two approaches to infer the system properties: a set of observation-specific models, where all components have elliptical trajectories, and an N-body model, which computes the trajectory of each component by integrating Newton's equations of motion. Results: The triple subsystem exhibits clear signs of dynamical interaction. The most pronounced are the advance of the apsidal line and eclipse-timing variations. We determined the geometry of all three orbits using both observation-specific and N-body models. The latter correctly accounted for observed effects of the dynamical interaction, predicted cyclic variations of orbital inclinations, and determined the sense of motion of all orbits. Using perturbation theory, we demonstrate that prominent secular and periodic dynamical effects are explainable with a quadrupole interaction. We constrained the basic properties of all components, especially of the members of the inner triple subsystem, and detected rapid low-amplitude light variations that we attribute to co-rotating surface structures of component B. We also estimated the radius of component B. Properties of component C remain uncertain because of its low relative luminosity. We provide an independent estimate of the distance to the system. Conclusions: The accuracy and consistency of our results make ξ Tau an excellent test bed for models of the formation and evolution of hierarchical systems. Full Tables D.1-D.7 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/594/A55. Based on data from the MOST satellite, a former Canadian Space Agency mission, jointly operated by Microsatellite Systems Canada Inc. (MSCI; formerly Dynacon Inc.), the University of Toronto Institute for Aerospace Studies and the University of British Columbia, with the assistance of the University of Vienna.
Emily B. Schultz; J. Clint Iles; Thomas G. Matney; Andrew W. Ezell; James S. Meadows; Theodor D. Leininger; et al.
2010-01-01
Greater emphasis is being placed on Southern bottomland hardwood management, but relatively few growth and yield prediction systems exist that are based on sufficient measurements. We present the aggregate stand-level expected yield and structural component equations for a red oak (Quercus section Lobatae)-sweetgum (Liquidambar styraciflua L.) growth and yield model....
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lai, Jih-Sheng
This paper introduces control-system-design software packages, SIMNON and MATLAB/SIMULINK, for power electronics system simulation. A complete power electronics system typically consists of a rectifier bridge along with its smoothing capacitor, an inverter, and a motor. The system components, whether discrete or continuous, linear or nonlinear, are modeled with mathematical equations. Inverter control methods, such as pulse-width modulation and hysteresis current control, are expressed in either computer algorithms or digital circuits. After describing the component models and control methods, computer programs are developed for complete system simulation. Simulation results are mainly used for studying system performance, such as input and output current harmonics, torque ripple, and speed response. Key computer programs and simulation results are demonstrated for educational purposes.
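As an illustration of the hysteresis current control mentioned above, a minimal single-phase R-L load simulation is sketched below; the circuit values, band width, and time step are illustrative, and this is not the SIMNON or MATLAB/SIMULINK model from the paper.

```python
import numpy as np

# Illustrative R-L load fed by a two-level inverter leg with DC bus Vdc
R, L, Vdc = 1.0, 5e-3, 100.0
dt, T = 1e-6, 0.04                        # time step and simulated duration (s)
band = 0.2                                # hysteresis band (A)

t = np.arange(0.0, T, dt)
i_ref = 5.0 * np.sin(2 * np.pi * 50 * t)  # 50 Hz sinusoidal current reference
i = np.zeros_like(t)
v = np.zeros_like(t)
state = 1                                 # +1 applies +Vdc, -1 applies -Vdc

for k in range(1, len(t)):
    err = i_ref[k - 1] - i[k - 1]
    if err > band:                         # current too low  -> apply +Vdc
        state = 1
    elif err < -band:                      # current too high -> apply -Vdc
        state = -1
    v[k] = state * Vdc
    # Explicit Euler integration of L di/dt = v - R i
    i[k] = i[k - 1] + dt * (v[k] - R * i[k - 1]) / L

print("peak current tracking error:", np.max(np.abs(i_ref - i)))
```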
Component model reduction via the projection and assembly method
NASA Technical Reports Server (NTRS)
Bernard, Douglas E.
1989-01-01
The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the dynamic system of interest is a multibody system composed of several components. A low-order system model may be created by reducing the order of the component models and making use of various available multibody dynamics programs to assemble them into a system model. The difficulty is in choosing the reduced-order component models to meet system-level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full-order system model, performing model reduction at the system level using system-level requirements, and then projecting the desired modes onto the components for component-level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.
Control strategy optimization of HVAC plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio
In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve higher energy efficiency in use. Semi-empirical numerical models of the plant components are used to predict their performance as a function of their set-points and the environmental and occupied-space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable to a real-time setting.
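The graph-based set-point search can be illustrated with a small dynamic program over discretised set-points and time steps; the cost, demand, and switching-penalty functions below are toy placeholders rather than the semi-empirical component models of the paper.

```python
import numpy as np

hours = 24
setpoints = [0.0, 0.25, 0.5, 0.75, 1.0]                         # discretised part-load ratios
demand = 0.4 + 0.3 * np.sin(np.linspace(0, 2 * np.pi, hours))   # toy load profile

def stage_cost(s, d):
    """Toy energy cost: penalise unmet demand heavily, part-load inefficiency mildly."""
    unmet = max(d - s, 0.0)
    return 1.0 * s + 0.2 * abs(s - 0.7) + 10.0 * unmet

def switch_cost(s_prev, s_next):
    """Small penalty for changing the set-point between consecutive hours."""
    return 0.05 * abs(s_next - s_prev)

# Dynamic programming over the set-point graph (nodes = (hour, set-point))
n = len(setpoints)
cost = np.full((hours, n), np.inf)
back = np.zeros((hours, n), dtype=int)
cost[0] = [stage_cost(s, demand[0]) for s in setpoints]
for h in range(1, hours):
    for j, s in enumerate(setpoints):
        c = cost[h - 1] + [switch_cost(sp, s) for sp in setpoints]
        back[h, j] = int(np.argmin(c))
        cost[h, j] = c[back[h, j]] + stage_cost(s, demand[h])

# Recover the cost-optimal set-point schedule by backtracking
j = int(np.argmin(cost[-1]))
schedule = [j]
for h in range(hours - 1, 0, -1):
    j = back[h, j]
    schedule.append(j)
print([setpoints[j] for j in reversed(schedule)])
```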
NASA Astrophysics Data System (ADS)
Tallapragada, V.
2017-12-01
NOAA's Next Generation Global Prediction System (NGGPS) has provided the unique opportunity to develop and implement a non-hydrostatic global model based on Geophysical Fluid Dynamics Laboratory (GFDL) Finite Volume Cubed Sphere (FV3) Dynamic Core at National Centers for Environmental Prediction (NCEP), making a leap-step advancement in seamless prediction capabilities across all spatial and temporal scales. Model development efforts are centralized with unified model development in the NOAA Environmental Modeling System (NEMS) infrastructure based on Earth System Modeling Framework (ESMF). A more sophisticated coupling among various earth system components is being enabled within NEMS following National Unified Operational Prediction Capability (NUOPC) standards. The eventual goal of unifying global and regional models will enable operational global models operating at convective resolving scales. Apart from the advanced non-hydrostatic dynamic core and coupling to various earth system components, advanced physics and data assimilation techniques are essential for improved forecast skill. NGGPS is spearheading ambitious physics and data assimilation strategies, concentrating on creation of a Common Community Physics Package (CCPP) and Joint Effort for Data Assimilation Integration (JEDI). Both initiatives are expected to be community developed, with emphasis on research transitioning to operations (R2O). The unified modeling system is being built to support the needs of both operations and research. Different layers of community partners are also established with specific roles/responsibilities for researchers, core development partners, trusted super-users, and operations. Stakeholders are engaged at all stages to help drive the direction of development, resources allocations and prioritization. This talk presents the current and future plans of unified model development at NCEP for weather, sub-seasonal, and seasonal climate prediction applications with special emphasis on implementation of NCEP FV3 Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS) into operations by 2019.
System principles, mathematical models and methods to ensure high reliability of safety systems
NASA Astrophysics Data System (ADS)
Zaslavskyi, V.
2017-04-01
Modern safety and security systems are composed of a large number of components designed for the detection, localization, tracking, collection, and processing of information from monitoring, telemetry, and control systems, among others. They are required to be highly reliable in order to correctly perform data aggregation, processing, and analysis for subsequent decision-making support. During the design and construction phases of such systems, various types of components (elements, devices, and subsystems) are considered and used to ensure high reliability of signal detection, noise isolation, and reduction of erroneous commands. When generating design solutions for highly reliable systems, a number of restrictions and conditions, such as the types of components and various constraints on resources, should be considered. Different component types perform identical functions; however, they are implemented using diverse principles and approaches and have distinct technical and economic indicators such as cost or power consumption. The systematic use of different component types increases the probability of successful task performance and reduces common-cause failures. We consider the type-variety principle as an engineering principle of system analysis, mathematical models based on this principle, and algorithms for solving optimization problems in the design of highly reliable safety and security systems. The mathematical models are formalized as a class of two-level discrete optimization problems of large dimension. The proposed approach, mathematical models, and algorithms can be used for solving optimal redundancy problems on the basis of a variety of methods and control devices for fault and defect detection in technical systems, telecommunication networks, and energy systems.
Cui, Yanyan; Gong, Dongwei; Yang, Bo; Chen, Hua; Tu, Ming-Hsiang; Zhang, Chaonan; Li, Huan; Liang, Naiwen; Jiang, Liping; Chang, Polun
2018-01-01
Comprehensive Geriatric Assessments (CGAs) have been recommended for better monitoring the health status of elderly residents and providing quality care. This study reports how our nurses perceived the usability of the CGA component of a mobile integrated-care long-term care support system developed in China. We used the Continuity Assessment Record and Evaluation (CARE), developed in the US, as the core CGA component of our Android-based support system, in which apps were designed for all key stakeholders involved in delivering quality long-term care. A convenience sample of 18 subjects from local long-term care facilities in Shanghai, China was invited to assess the CGA component in terms of the Technology Acceptance Model for Mobile, based on a real field-trial assessment. All (100%) were satisfied with the mobile CGA component, 88.9% perceived that the system was easy to learn and use, and 99.4% expressed willingness to use it for their work. We conclude that it is technically feasible to implement a CGA-based mobile integrated-care support system in China.
System parameter identification from projection of inverse analysis
NASA Astrophysics Data System (ADS)
Liu, K.; Law, S. S.; Zhu, X. Q.
2017-05-01
The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from a first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is revisited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model-updating procedure. The proposed method is validated numerically with a planar truss structure and experimentally with dynamic tests on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response-sensitivity-based method.
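A minimal Python sketch of the projected inverse-sensitivity iteration described above, using a toy three-parameter linear system in place of the truss and steel-frame models; the forward model, noise level, and number of retained principal components are assumptions for illustration only.

```python
import numpy as np

# Projected inverse-sensitivity sketch: S * dtheta = y_meas - y_model is
# projected onto leading principal components of analytical responses and
# solved iteratively. All models here are illustrative stand-ins.
rng = np.random.default_rng(0)

def response(theta, load):
    """Toy forward model: response of a 3-parameter system to a load vector."""
    A = np.diag(theta) + 0.05 * np.ones((3, 3))
    return np.linalg.solve(A, load)

load = np.array([1.0, 2.0, 1.5])
theta_true = np.array([2.0, 1.5, 1.8])
theta_est = np.array([2.2, 1.3, 2.0])                 # initial (wrong) model
y_meas = response(theta_true, load) + 1e-4 * rng.standard_normal(3)

# Principal components of analytical responses generated from the known model.
samples = np.array([response(theta_est * (1 + 0.05 * rng.standard_normal(3)), load)
                    for _ in range(200)])
U, _, _ = np.linalg.svd((samples - samples.mean(0)).T, full_matrices=False)
Uk = U[:, :2]                                         # retained components

for _ in range(20):                                   # iterative model updating
    y_model = response(theta_est, load)
    # Finite-difference sensitivity matrix (first-order Taylor approximation).
    S = np.column_stack([(response(theta_est + 1e-6 * e, load) - y_model) / 1e-6
                         for e in np.eye(3)])
    # Project the identification equation onto the principal-component subspace.
    dtheta, *_ = np.linalg.lstsq(Uk.T @ S, Uk.T @ (y_meas - y_model), rcond=None)
    theta_est = theta_est + dtheta

print("true:", theta_true, "estimated:", np.round(theta_est, 3))
```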
The Software Architecture of Global Climate Models
NASA Astrophysics Data System (ADS)
Alexander, K. A.; Easterbrook, S. M.
2011-12-01
It has become common to compare and contrast the output of multiple global climate models (GCMs), such as in the Climate Model Intercomparison Project Phase 5 (CMIP5). However, intercomparisons of the software architecture of GCMs are almost nonexistent. In this qualitative study of seven GCMs from Canada, the United States, and Europe, we attempt to fill this gap in research. We describe the various representations of the climate system as computer programs, and account for architectural differences between models. Most GCMs now practice component-based software engineering, where Earth system components (such as the atmosphere or land surface) are present as highly encapsulated sub-models. This architecture facilitates a mix-and-match approach to climate modelling that allows for convenient sharing of model components between institutions, but it also leads to difficulty when choosing where to draw the lines between systems that are not encapsulated in the real world, such as sea ice. We also examine different styles of couplers in GCMs, which manage interaction and data flow between components. Finally, we pay particular attention to the varying levels of complexity in GCMs, both between and within models. Many GCMs have some components that are significantly more complex than others, a phenomenon which can be explained by the respective institution's research goals as well as the origin of the model components. In conclusion, although some features of software architecture have been adopted by every GCM we examined, other features show a wide range of different design choices and strategies. These architectural differences may provide new insights into variability and spread between models.
A computer-based time study system for timber harvesting operations
Jingxin Wang; Joe McNeel; John Baumgras
2003-01-01
A computer-based time study system was developed for timber harvesting operations. Object-oriented techniques were used to model and design the system. The front end of the time study system runs on MS Windows CE, and the back end is supported by MS Access. The system consists of three major components: a handheld system, a data transfer interface, and data storage...
Sub-component modeling for face image reconstruction in video communications
NASA Astrophysics Data System (ADS)
Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.
2008-08-01
Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired networks, such as cable or Ethernet, and over wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel and requires concealment on the receiving end. We demonstrate a generative-model-based transmission scheme to compress human face images in video, which has the advantage of a potentially higher compression ratio while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component active appearance model (AAM) that models the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using weighted and non-weighted versions of the sub-component AAM.
NASA Astrophysics Data System (ADS)
Sell, K.; Herbert, B.; Schielack, J.
2004-05-01
Students organize scientific knowledge and reason about environmental issues through the manipulation of mental models. The nature of the environmental sciences, which focus on the study of complex, dynamic systems, may present cognitive difficulties to students in their development of authentic, accurate mental models of environmental systems. This inquiry project seeks to develop and assess the coupling of information technology (IT)-based learning with physical models in order to foster rich mental-model development of environmental systems in undergraduate geoscience students. The manipulation of multiple representations, the development and testing of conceptual models based on available evidence, and exposure to authentic, complex, and ill-constrained problems were the components of investigation used to reach the learning goals. Upper-level undergraduate students enrolled in an environmental geology course at Texas A&M University participated in this research, which served as a pilot study. Rubric evaluations interpreted through principal component analyses suggest that students' understanding of the nature of scientific inquiry is limited and that the ability to cross scales and link systems proved problematic. Results were categorized into content knowledge and cognitive processes, where reasoning, critical thinking, and cognitive load were the driving factors behind difficulties in student learning. Students' mental-model development revealed multiple misconceptions and lacked the complexity and completeness needed to represent the studied systems. Further, the positive learning impacts of the implemented modules favored the physical model over the IT-based learning projects, likely due to cognitive-load issues. This study illustrates the need to better understand student difficulties in solving complex problems when using IT, so that appropriate scaffolding can be implemented to enhance student learning of the earth system sciences.
The Iterative Research Cycle: Process-Based Model Evaluation
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2014-12-01
The ever-increasing pace of growth in computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data themselves to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.
Simulink-Based Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV)
NASA Technical Reports Server (NTRS)
Christhilf, David M.; Bacon, Barton J.
2006-01-01
The Simulation Architecture for Evaluating Controls for Aerospace Vehicles (SAREC-ASV) is a Simulink-based approach to providing an engineering-quality desktop simulation capability for finding trim solutions, extracting linear models for vehicle analysis and control law development, and generating open-loop and closed-loop time history responses for control system evaluation. It represents a useful level of maturity rather than a finished product. The layout is hierarchical and supports concurrent component development and validation, with support from the Concurrent Versions System (CVS) software management tool. Real Time Workshop (RTW) is used to generate pre-compiled code for substantial component modules, and templates permit switching seamlessly between original Simulink and code compiled for various platforms. Two previous limitations are addressed. Turnaround time for incorporating tabular model components was improved through auto-generation of the required Simulink diagrams based on data received in XML format. The layout was modified to exploit a Simulink "compile once, evaluate multiple times" capability for zero elapsed time for use in trimming and linearizing. Trim is achieved through a Graphical User Interface (GUI) with a narrow, script-definable interface to the vehicle model which facilitates incorporating new models.
NASA Astrophysics Data System (ADS)
Hipsey, Matthew R.; Hamilton, David P.; Hanson, Paul C.; Carey, Cayelan C.; Coletti, Janaine Z.; Read, Jordan S.; Ibelings, Bas W.; Valesini, Fiona J.; Brookes, Justin D.
2015-09-01
Maintaining the health of aquatic systems is an essential component of sustainable catchment management, however, degradation of water quality and aquatic habitat continues to challenge scientists and policy-makers. To support management and restoration efforts aquatic system models are required that are able to capture the often complex trajectories that these systems display in response to multiple stressors. This paper explores the abilities and limitations of current model approaches in meeting this challenge, and outlines a strategy based on integration of flexible model libraries and data from observation networks, within a learning framework, as a means to improve the accuracy and scope of model predictions. The framework is comprised of a data assimilation component that utilizes diverse data streams from sensor networks, and a second component whereby model structural evolution can occur once the model is assessed against theoretically relevant metrics of system function. Given the scale and transdisciplinary nature of the prediction challenge, network science initiatives are identified as a means to develop and integrate diverse model libraries and workflows, and to obtain consensus on diagnostic approaches to model assessment that can guide model adaptation. We outline how such a framework can help us explore the theory of how aquatic systems respond to change by bridging bottom-up and top-down lines of enquiry, and, in doing so, also advance the role of prediction in aquatic ecosystem management.
An Evaluation Research Model for System-Wide Textbook Selection.
ERIC Educational Resources Information Center
Talmage, Harriet; Walberg, Herbert T.
One component of an evaluation research model for system-wide selection of curriculum materials is reported: implementation of an evaluation design for obtaining data that permits professional and lay persons to base curriculum materials decisions on a "best fit" principle. The design includes teacher characteristics, learning environment…
SOA-based model for value-added ITS services delivery.
Herrera-Quintero, Luis Felipe; Maciá-Pérez, Francisco; Marcos-Jorquera, Diego; Gilart-Iglesias, Virgilio
2014-01-01
Integration is currently a key factor in intelligent transportation systems (ITS), especially because of the ever-increasing service demands originating from the ITS industry and ITS users. The current ITS landscape is made up of multiple technologies that are tightly coupled, and their interoperability is extremely low, which limits the generation of ITS services. Given this fact, novel information technologies (IT) based on the service-oriented architecture (SOA) paradigm have begun to introduce new ways to address this problem. The SOA paradigm allows the construction of loosely coupled distributed systems that can help to integrate the heterogeneous systems that are part of ITS. In this paper, we focus on developing an SOA-based model for integrating IT into ITS to achieve ITS service delivery. To develop our model, the ITS technologies and services involved were identified, catalogued, and decoupled. In doing so, we applied our SOA-based model to integrate all of the ITS technologies and services, ranging from the lowest-level technical components, such as roadside unit as a service (RSUAAS), to the most abstract ITS services that will be offered to ITS users (value-added services). To validate our model, a functionality case study that included all of the components of our model was designed.
Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems
NASA Astrophysics Data System (ADS)
Yang, Le; Wang, Shuo; Feng, Jianghua
2017-11-01
Electromagnetic interference (EMI) causes electromechanical damage to motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing currents are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone-bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.
Distillation Designs for the Lunar Surface
NASA Technical Reports Server (NTRS)
Boul, Peter J.; Lange, Kevin E.; Conger, Bruce; Anderson, Molly
2010-01-01
Gravity-based distillation methods may be applied to the purification of wastewater at a lunar base. These solutions to water processing are robust physical separation techniques, which may be more advantageous than many other techniques because of their simplicity in design and operation. The two techniques can be used in conjunction with each other to obtain high-purity water. The components and feed compositions for modeling wastewater streams are presented in conjunction with the Aspen property system for traditional stage distillation. While the individual components of each waste stream will vary naturally within certain bounds, an analog model for wastewater processing is suggested based on typical concentration ranges for these components. Target purity levels for recycled water are determined for each individual component based on NASA's required maximum contaminant levels for potable water. Optimum parameters such as reflux ratio, feed stage location, and processing rates are determined with respect to the power consumption of the process. Multistage distillation is evaluated for components in wastewater to determine the minimum number of stages necessary for each of 65 components in mixed humidity-condensate and urine wastewater streams.
A Comparative Study of High and Low Fidelity Fan Models for Turbofan Engine System Simulation
NASA Technical Reports Server (NTRS)
Reed, John A.; Afjeh, Abdollah A.
1991-01-01
In this paper, a heterogeneous propulsion system simulation method is presented. The method is based on the formulation of a cycle model of a gas turbine engine. The model includes the nonlinear characteristics of the engine components via the use of empirical data. The potential to simulate the entire engine operation on a computer without the aid of data is demonstrated by numerically generating "performance maps" for a fan component using two flow models of varying fidelity. The suitability of the fan models was evaluated by comparing the computed performance with experimental data. A discussion of the potential benefits and/or difficulties in connecting simulation solutions of differing fidelity is given.
An integrative assessment of the commercial air transportation system via adaptive agents
NASA Astrophysics Data System (ADS)
Lim, Choon Giap
The overarching research objective is to address the tightly coupled interactions between the demand-side and supply-side components of the United States Commercial Air Transportation System (CATS) in a time-variant environment. A system-of-systems perspective is adopted, where the scope is extended beyond the National Airspace System (NAS) level to the National Transportation System (NTS) level to capture the intermodal and multimodal relationships between the NTS stakeholders. The Agent-Based Modeling and Simulation technique is employed, where the NTS/NAS is treated as an integrated Multi-Agent System comprising consumer and service-provider agents, representing the demand-side and supply-side components respectively. Successful calibration and validation of both model components against observable real-world data resulted in a CATS simulation tool in which aviation demand is estimated from socioeconomic and demographic properties of the population instead of merely from enplanement growth multipliers. This valuable achievement enabled a 20-year outlook simulation study to investigate the implications of a global fuel price hike on the airline industry and the U.S. CATS at large. Simulation outcomes revealed insights into airline competitive behaviors and the subsequent responses from transportation consumers.
Effect of the dietary inclusion of soybean components on the innate immune system in zebrafish.
Fuentes-Appelgren, Pamela; Opazo, Rafael; Barros, Luis; Feijoó, Carmen G; Urzúa, Victoria; Romero, Jaime
2014-02-01
Some components of plant-based meals, such as saponins and vegetal proteins, have been proposed as inducers of intestinal inflammation in some fish. However, the molecular and cellular bases for this phenomenon have not been reported. In this work, zebrafish were used as a model to evaluate the effects of individual soybean meal components, such as saponins and soy proteins. Zebrafish larvae fed a fish meal feed containing soy components were assessed according to low and high inclusion levels. The granulocytes associated with the digestive tract and the induction of genes related to the immune system were quantitated as markers of the effects of the dietary components. A significant increase in the number of granulocytes was observed after feeding fish diets containing high saponin or soy protein contents. These dietary components also induced the expression of genes related to the innate immune system, including myeloid-specific peroxidase, as well as the complement protein and cytokines. These results reveal the influence of dietary components on the stimulation of the immune system. These observations could be significant to understanding the contributions of saponin and soy protein to the onset of enteritis in aqua-cultured fish, and this knowledge may aid in defining the role of the innate immune system in other inflammatory diseases involving dietary components in mammals.
Two-phase thermodynamic model for computing entropies of liquids reanalyzed
NASA Astrophysics Data System (ADS)
Sun, Tao; Xian, Jiawei; Zhang, Huai; Zhang, Zhigang; Zhang, Yigang
2017-11-01
The two-phase thermodynamic (2PT) model [S.-T. Lin et al., J. Chem. Phys. 119, 11792-11805 (2003)] provides a promising paradigm to efficiently determine the ionic entropies of liquids from molecular dynamics. In this model, the vibrational density of states (VDoS) of a liquid is decomposed into a diffusive gas-like component and a vibrational solid-like component. By treating the diffusive component as hard sphere (HS) gas and the vibrational component as harmonic oscillators, the ionic entropy of the liquid is determined. Here we examine three issues crucial for practical implementations of the 2PT model: (i) the mismatch between the VDoS of the liquid system and that of the HS gas; (ii) the excess entropy of the HS gas; (iii) the partition of the gas-like and solid-like components. Some of these issues have not been addressed before, yet they profoundly change the entropy predicted from the model. Based on these findings, a revised 2PT formalism is proposed and successfully tested in systems with Lennard-Jones potentials as well as many-atom potentials of liquid metals. Aside from being capable of performing quick entropy estimations for a wide range of systems, the formalism also supports fine-tuning to accurately determine entropies at specific thermal states.
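For orientation, the following Python sketch implements the basic 2PT bookkeeping described above (gas/solid split of the density of states and harmonic-oscillator weighting of the solid-like part), using the standard functional forms of Lin et al.; the total DoS, the fluidicity value, and the constant hard-sphere entropy weight are placeholders, and none of the revisions proposed in this paper are reproduced.

```python
import numpy as np

# 2PT sketch: split the vibrational density of states S(nu) into gas-like and
# solid-like parts, weight the solid part by the harmonic-oscillator entropy
# weight and the gas part by a (placeholder) hard-sphere weight.
kB = 1.380649e-23          # J/K
h  = 6.62607015e-34        # J*s
T  = 300.0                 # K (assumed)
N  = 256                   # number of particles (assumed)

nu = np.linspace(1e9, 3e13, 4000)                    # frequency grid, Hz
S_tot = 6 * N / 1e13 * np.exp(-nu / 6e12)            # placeholder total DoS
s0 = S_tot[0]                                        # zero-frequency value
f = 0.2                                              # fluidicity (placeholder)

# Gas-like component (2PT functional form) and solid-like remainder.
S_gas = s0 / (1.0 + (np.pi * s0 * nu / (6.0 * f * N)) ** 2)
S_gas = np.minimum(S_gas, S_tot)                     # keep the split physical
S_sol = S_tot - S_gas

# Harmonic-oscillator entropy weight for the solid-like modes.
x = h * nu / (kB * T)
W_solid = x / np.expm1(x) - np.log1p(-np.exp(-x))

W_gas = 5.0                   # placeholder hard-sphere entropy weight (unitless)

S_ion = kB * (np.trapz(S_sol * W_solid, nu) + np.trapz(S_gas * W_gas, nu))
print("ionic entropy estimate (J/K):", S_ion)
```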
Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2003-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, a robotic assistant, crew in a local habitat, and a mission support team. Software processes ('agents'), implemented in the Brahms language, run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.
Current Pressure Transducer Application of Model-based Prognostics Using Steady State Conditions
NASA Technical Reports Server (NTRS)
Teubert, Christopher; Daigle, Matthew J.
2014-01-01
Prognostics is the process of predicting a system's future states, health degradation/wear, and remaining useful life (RUL). This information plays an important role in preventing failure, reducing downtime, scheduling maintenance, and improving system utility. Prognostics relies heavily on wear estimation. In some components, the sensors used to estimate wear may not be fast enough to capture brief transient states that are indicative of wear. For this reason it is beneficial to be capable of detecting and estimating the extent of component wear using steady-state measurements. This paper details a method for estimating component wear using steady-state measurements, describes how this is used to predict future states, and presents a case study of a current/pressure (I/P) Transducer. I/P Transducer nominal and off-nominal behaviors are characterized using a physics-based model, and validated against expected and observed component behavior. This model is used to map observed steady-state responses to corresponding fault parameter values in the form of a lookup table. This method was chosen because of its fast, efficient nature, and its ability to be applied to both linear and non-linear systems. Using measurements of the steady state output, and the lookup table, wear is estimated. A regression is used to estimate the wear propagation parameter and characterize the damage progression function, which are used to predict future states and the remaining useful life of the system.
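A compact Python sketch of the three-step workflow described above: sweep a model to build a lookup table, estimate wear from steady-state measurements, then regress wear against time and extrapolate a remaining-useful-life estimate. The "physics model", noise level, and end-of-life threshold are made-up stand-ins rather than the I/P transducer model.

```python
import numpy as np

def steady_state_output(wear):
    """Placeholder physics model: steady-state output vs. fault parameter."""
    return 10.0 - 4.0 * wear + 0.8 * wear**2

# (1) Lookup table: fault parameter -> steady-state output.
wear_grid = np.linspace(0.0, 1.0, 101)
out_grid = steady_state_output(wear_grid)

def estimate_wear(measured_output):
    """(2) Invert the table by interpolation (output decreases with wear)."""
    return np.interp(measured_output, out_grid[::-1], wear_grid[::-1])

# Simulated steady-state measurements taken over time (hours).
t = np.array([0.0, 100.0, 200.0, 300.0, 400.0])
true_wear = 0.001 * t
measured = steady_state_output(true_wear) \
    + 0.02 * np.random.default_rng(1).standard_normal(t.size)

wear_est = estimate_wear(measured)

# (3) Regression for the wear-propagation parameter, then RUL to a threshold.
rate, offset = np.polyfit(t, wear_est, 1)
wear_limit = 0.8                                   # assumed end-of-life wear
rul = (wear_limit - wear_est[-1]) / rate
print(f"estimated wear rate {rate:.2e}/h, RUL ~ {rul:.0f} h")
```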
SLS Navigation Model-Based Design Approach
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center, where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs), which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process and the design and analysis process by efficiently combining the control (i.e., the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable, whereas a requirement is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design is described from the perspective of the SLS Navigation Team. The format of the models and the requirements is described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned associated with the implementation of the Model-based Design approach and process, from infancy to verification and certification, are discussed.
NASA Technical Reports Server (NTRS)
Cohen, Gerald C. (Inventor); McMann, Catherine M. (Inventor)
1991-01-01
An improved method and system for automatically generating reliability models for use with a reliability evaluation tool is described. The reliability model generator of the present invention includes means for storing a plurality of low level reliability models which represent the reliability characteristics for low level system components. In addition, the present invention includes means for defining the interconnection of the low level reliability models via a system architecture description. In accordance with the principles of the present invention, a reliability model for the entire system is automatically generated by aggregating the low level reliability models based on the system architecture description.
A novel energy recovery system for parallel hybrid hydraulic excavator.
Li, Wei; Cao, Baoyu; Zhu, Zhencai; Chen, Guoan
2014-01-01
Hydraulic excavator energy saving is important for relieving resource shortages and protecting the environment. This paper mainly discusses energy saving for the hybrid hydraulic excavator. By analyzing the excess energy of the three hydraulic cylinders in a conventional hydraulic excavator, a new boom potential-energy recovery system is proposed. Mathematical models of the main components, including the boom cylinder, hydraulic motor, and hydraulic accumulator, are built. The natural frequency of the proposed energy recovery system is calculated based on the mathematical models. Meanwhile, simulation models of the proposed system and a conventional energy recovery system are built with the AMESim software. The results show that the proposed system is more effective than the conventional energy saving system. Finally, the main components of the proposed energy recovery system, including the accumulator and hydraulic motor, are analyzed with a view to improving the energy recovery efficiency. Measures to improve the energy recovery efficiency of the proposed system are presented.
PlanWorks: A Debugging Environment for Constraint Based Planning Systems
NASA Technical Reports Server (NTRS)
Daley, Patrick; Frank, Jeremy; Iatauro, Michael; McGann, Conor; Taylor, Will
2005-01-01
Numerous planning and scheduling systems employ underlying constraint reasoning systems. Debugging such systems involves the search for errors in model rules, constraint reasoning algorithms, search heuristics, and the problem instance (initial state and goals). In order to effectively find such problems, users must see why each state or action is in a plan by tracking causal chains back to part of the initial problem instance. They must be able to visualize complex relationships among many different entities and distinguish between those entities easily. For example, a variable can be in the scope of several constraints, as well as part of a state or activity in a plan; the activity can arise as a consequence of another activity and a model rule. Finally, they must be able to track each logical inference made during planning. We have developed PlanWorks, a comprehensive system for debugging constraint-based planning and scheduling systems. PlanWorks assumes a strong transaction model of the entire planning process, including adding and removing parts of the constraint network, variable assignment, and constraint propagation. A planner logs all transactions to a relational database that is tailored to support queries from specialized views that display different forms of data (e.g., constraints, activities, resources, and causal links). PlanWorks was specifically developed for the Extensible Universal Remote Operations Planning Architecture (EUROPA(sub 2)) developed at NASA, but the underlying principles behind PlanWorks make it useful for many constraint-based planning systems. The paper is organized as follows. We first describe some fundamentals of EUROPA(sub 2), then describe PlanWorks' principal components, discuss each component in detail, and describe inter-component navigation features. We close with a discussion of how PlanWorks is used to find model flaws.
PAIRS, The GIS-Based Incident Response System for Pennsylvania, and NASA
NASA Technical Reports Server (NTRS)
Conrad, Eric; Arbegast, Daniel; Maynard, Nancy; Vicente, Gilberto
2003-01-01
Over the past several years the Pennsylvania Departments of Environmental Protection (DEP), Health (DOH), and Agriculture (PDA) built the GIS-based Pennsylvania West Nile Surveillance System. That system has become a model for collecting data that has a field component, laboratory component, reporting and mapping component, and a public information component. Given the success of the West Nile Virus System and the events of September 11, 2001, DEP then embarked on the development of the Pennsylvania Incident Response System, or PAIRS. PAIRS is an effective GIS-based approach to providing a system for response to incidents of any kind, including terrorism, because it builds upon the existing experience, infrastructure, and databases that were successfully developed to respond to the West Nile Virus by DEP, DOH, and PDA. The proposed system can be described as one that supports data acquisition, laboratory forensics, decision making/response, and communications. Decision makers will have tools to view and analyze data from various sources and, at the same time, to communicate with the large numbers of people responding to the same incident. Recent collaborations with NASA partners are creating mechanisms for the PAIRS system to incorporate space-based and other remote sensing geophysical parameters relevant to public health assessment and management, such as surface temperatures, precipitation, land cover/land use change, and humidity. This presentation will describe the PAIRS system and outline the Pennsylvania-NASA collaboration for integration of space-based data into the PAIRS system.
Hipsey, Matthew R.; Hamilton, David P.; Hanson, Paul C.; Carey, Cayelan C.; Coletti, Janaine Z.; Read, Jordan S.; Ibelings, Bas W.; Valesini, Fiona J.; Brookes, Justin D.
2015-01-01
Maintaining the health of aquatic systems is an essential component of sustainable catchment management, however, degradation of water quality and aquatic habitat continues to challenge scientists and policy-makers. To support management and restoration efforts aquatic system models are required that are able to capture the often complex trajectories that these systems display in response to multiple stressors. This paper explores the abilities and limitations of current model approaches in meeting this challenge, and outlines a strategy based on integration of flexible model libraries and data from observation networks, within a learning framework, as a means to improve the accuracy and scope of model predictions. The framework is comprised of a data assimilation component that utilizes diverse data streams from sensor networks, and a second component whereby model structural evolution can occur once the model is assessed against theoretically relevant metrics of system function. Given the scale and transdisciplinary nature of the prediction challenge, network science initiatives are identified as a means to develop and integrate diverse model libraries and workflows, and to obtain consensus on diagnostic approaches to model assessment that can guide model adaptation. We outline how such a framework can help us explore the theory of how aquatic systems respond to change by bridging bottom-up and top-down lines of enquiry, and, in doing so, also advance the role of prediction in aquatic ecosystem management.
Optimization of replacement and inspection decisions for multiple components on a power system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mauney, D.A.
1994-12-31
The use of optimization in the rescheduling of replacement dates provided a very proactive approach to deciding when components on individual units need to be addressed with a run/repair/replace decision. Including the effects of the time value of money, taxes, and unit need inside the spreadsheet model allowed the decision maker to concentrate on the effects of engineering input and replacement-date decisions on the final net present value (NPV). The personal computer (PC)-based model was applied to a group of 140 forced-outage-critical fossil plant tube components across a power system. The estimated resulting NPV of the optimization was in the tens of millions of dollars. This PC spreadsheet model allows the interaction of inputs from structural reliability risk assessment models, plant foreman interviews, and actual failure history on a component-by-component, unit-by-unit basis across a complete power production system. This model includes not only the forced outage performance of these components caused by tube failures but, in addition, the forecasted need for the individual units on the power system and the expected cost of their replacement power if forced off line. The use of cash flow analysis techniques in the spreadsheet model results in the calculation of an NPV for a whole combination of replacement dates. This allows rapid assessments of "what if" scenarios of major maintenance projects on a systemwide basis and not just on a unit-by-unit basis.
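As a rough illustration of the kind of calculation the spreadsheet model performs, the Python sketch below scores candidate replacement years for two hypothetical tube components by the net present value of replacement plus expected forced-outage costs, and searches the combinations by brute force; all costs, hazard rates, and the planning horizon are invented placeholders (the real study covered 140 components).

```python
import numpy as np
from itertools import product

discount = 0.08
horizon = np.arange(1, 11)                       # planning years 1..10

components = {
    "reheater_tubes":  dict(replace_cost=2.0e6, outage_cost=0.5e6, hazard=0.06),
    "waterwall_tubes": dict(replace_cost=3.5e6, outage_cost=0.8e6, hazard=0.04),
}

def npv_of_costs(replace_year, c):
    """Discounted cost of one component given a chosen replacement year."""
    total = 0.0
    for yr in horizon:
        df = 1.0 / (1.0 + discount) ** yr
        if yr < replace_year:
            p_fail = 1.0 - np.exp(-c["hazard"] * yr)     # aging before replacement
            total += df * p_fail * c["outage_cost"]       # expected outage cost
        elif yr == replace_year:
            total += df * c["replace_cost"]
    return total

# Brute-force search over all combinations of replacement years (minimize cost).
best = min(product(horizon, repeat=len(components)),
           key=lambda yrs: sum(npv_of_costs(y, c)
                               for y, c in zip(yrs, components.values())))
for name, yr in zip(components, best):
    print(f"{name}: replace in year {yr}")
```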
A Model-based Approach to Reactive Self-Configuring Systems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Nayak, P. Pandurang
1996-01-01
This paper describes Livingstone, an implemented kernel for a self-reconfiguring autonomous system, that is reactive and uses component-based declarative models. The paper presents a formal characterization of the representation formalism used in Livingstone, and reports on our experience with the implementation in a variety of domains. Livingstone's representation formalism achieves broad coverage of hybrid software/hardware systems by coupling the concurrent transition system models underlying concurrent reactive languages with the discrete qualitative representations developed in model-based reasoning. We achieve a reactive system that performs significant deductions in the sense/response loop by drawing on our past experience at building fast propositional conflict-based algorithms for model-based diagnosis, and by framing a model-based configuration manager as a propositional, conflict-based feedback controller that generates focused, optimal responses. Livingstone automates all these tasks using a single model and a single core deductive engine, thus making significant progress towards achieving a central goal of model-based reasoning. Livingstone, together with the HSTS planning and scheduling engine and the RAPS executive, has been selected as the core autonomy architecture for Deep Space One, the first spacecraft for NASA's New Millennium program.
Distance Learning Success--A Perspective from Socio-Technical Systems Theory
ERIC Educational Resources Information Center
Wang, Jianfeng; Solan, David; Ghods, Abe
2010-01-01
With widespread adoption of computer-based distance education as a mission-critical component of the institution's educational program, the need for evaluation has emerged. In this research, we aim to expand on the systems approach by offering a model for evaluation based on socio-technical systems theory addressing a stated need in the literature…
Model-based approach to study the impact of biofuels on the sustainability of an ecological system
The importance and complexity of sustainability have been well recognized and a formal study of sustainability based on system theory approaches is imperative as many of the relationships between various components of the ecosystem could be nonlinear, intertwined and non-intuitiv...
Model based approach to Study the Impact of Biofuels on the Sustainability of an Ecological System
The importance and complexity of sustainability has been well recognized and a formal study of sustainability based on system theory approaches is imperative as many of the relationships between various components of the ecosystem could be nonlinear, intertwined and non intuitive...
Thermal control systems for low-temperature heat rejection on a lunar base
NASA Technical Reports Server (NTRS)
Sridhar, K. R.; Gottmann, Matthias
1992-01-01
One of the important issues in lunar base architecture is the design of a Thermal Control System (TCS) to reject the low-temperature heat from the base. The TCS ensures that the base and all components inside are maintained within their operating temperature range. A significant portion of the total mass of the TCS is due to the radiator. Shading the radiator from the sun and the hot lunar soil could decrease the radiator operating temperature significantly. Heat pumps have long been in use for terrestrial applications; in a lunar TCS they can raise the heat-rejection temperature above that of the cooled space, reducing the required radiator area at the cost of additional input power. To optimize the mass of the heat-pump-augmented TCS, all promising options have to be evaluated and compared. Careful attention is given to optimizing system operating parameters, working fluids, and component masses. The systems are modeled for full-load operation.
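A back-of-the-envelope Python sketch of why shading and heat-pump augmentation matter for radiator sizing, based only on the Stefan-Boltzmann law; the emissivity, heat load, temperatures, and COP are illustrative assumptions, not values from the study.

```python
import numpy as np

sigma = 5.670374419e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
eps = 0.9                  # radiator emissivity (assumed)
Q_load = 25e3              # heat load to reject, W (assumed)

def radiator_area(T_reject, T_sink, Q_rejected):
    """Radiating area needed to reject Q_rejected against an effective sink."""
    return Q_rejected / (eps * sigma * (T_reject**4 - T_sink**4))

# Case 1: passive radiator at 280 K facing a hot, unshaded environment.
print("unshaded, passive :", radiator_area(280.0, 270.0, Q_load), "m^2")

# Case 2: shaded radiator sees a much colder effective sink.
print("shaded, passive   :", radiator_area(280.0, 150.0, Q_load), "m^2")

# Case 3: heat pump lifts rejection to 330 K; rejected heat grows by the
# compressor work, Q_rej = Q_load * (1 + 1/COP).
COP = 3.0
Q_rej = Q_load * (1.0 + 1.0 / COP)
print("shaded, heat pump :", radiator_area(330.0, 150.0, Q_rej), "m^2")
```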
Waterflood control system for maximizing total oil recovery
Patzek, Tadeusz Wiktor; Silin, Dimitriy Borisovic; De, Asoke Kumar
2005-06-07
A control system and method for determining optimal fluid injection pressure is based upon a model of a growing hydrofracture due to waterflood injection pressure. This model is used to develop a control system optimizing the injection pressure by using a prescribed injection goal coupled with the historical times, pressures, and volume of injected fluid at a single well. In this control method, the historical data is used to derive two major flow components: the transitional component, where cumulative injection volume is scaled as the square root of time, and a steady-state breakthrough component, which scales linearly with respect to time. These components provide diagnostic information and allow for the prevention of rapid fracture growth and the associated massive water breakthrough, which is an important part of a successful waterflood, thereby extending the life of both injection and associated production wells in waterflood secondary oil recovery operations.
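A short Python sketch of the two-component decomposition described in the abstract: cumulative injection is fit as a*sqrt(t) + b*t by linear least squares, and refitting on moving windows gives a crude breakthrough diagnostic; the injection history below is synthetic.

```python
import numpy as np

t = np.arange(1.0, 366.0)                           # days of injection history
V = 120.0 * np.sqrt(t) + 8.0 * t \
    + np.random.default_rng(2).normal(0, 25, t.size)  # synthetic cumulative volume

# Design matrix for the two flow components: transitional (sqrt) and linear.
X = np.column_stack([np.sqrt(t), t])
(a, b), *_ = np.linalg.lstsq(X, V, rcond=None)
print(f"transitional coefficient a = {a:.1f}, breakthrough coefficient b = {b:.2f}")

# Simple diagnostic: refit on a moving window and watch the linear term grow.
window = 90
b_trend = [np.linalg.lstsq(X[i:i+window], V[i:i+window], rcond=None)[0][1]
           for i in range(0, t.size - window, 30)]
print("windowed breakthrough coefficients:", np.round(b_trend, 2))
```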
Waterflood control system for maximizing total oil recovery
Patzek, Tadeusz Wiktor [Oakland, CA]; Silin, Dimitriy Borisovich [Pleasant Hill, CA]; De, Asoke Kumar [San Jose, CA]
2007-07-24
A control system and method for determining optimal fluid injection pressure is based upon a model of a growing hydrofracture due to waterflood injection pressure. This model is used to develop a control system optimizing the injection pressure by using a prescribed injection goal coupled with the historical times, pressures, and volume of injected fluid at a single well. In this control method, the historical data is used to derive two major flow components: the transitional component, where cumulative injection volume is scaled as the square root of time, and a steady-state breakthrough component, which scales linearly with respect to time. These components provide diagnostic information and allow for the prevention of rapid fracture growth and the associated massive water breakthrough, which is an important part of a successful waterflood, thereby extending the life of both injection and associated production wells in waterflood secondary oil recovery operations.
Engine structures analysis software: Component Specific Modeling (COSMO)
NASA Astrophysics Data System (ADS)
McKnight, R. L.; Maffeo, R. J.; Schwartz, S.
1994-08-01
A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.
Engine Structures Analysis Software: Component Specific Modeling (COSMO)
NASA Technical Reports Server (NTRS)
Mcknight, R. L.; Maffeo, R. J.; Schwartz, S.
1994-01-01
A component specific modeling software program has been developed for propulsion systems. This expert program is capable of formulating the component geometry as finite element meshes for structural analysis which, in the future, can be spun off as NURB geometry for manufacturing. COSMO currently has geometry recipes for combustors, turbine blades, vanes, and disks. Component geometry recipes for nozzles, inlets, frames, shafts, and ducts are being added. COSMO uses component recipes that work through neutral files with the Technology Benefit Estimator (T/BEST) program which provides the necessary base parameters and loadings. This report contains the users manual for combustors, turbine blades, vanes, and disks.
Unifying Model-Based and Reactive Programming within a Model-Based Executive
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)
1999-01-01
Real-time, model-based deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden-state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton, is described that exploits this compact encoding to perform efficient simulation, belief state update, and control sequence generation.
A Model-Driven, Science Data Product Registration Service
NASA Astrophysics Data System (ADS)
Hardman, S.; Ramirez, P.; Hughes, J. S.; Joyner, R.; Cayanan, M.; Lee, H.; Crichton, D. J.
2011-12-01
The Planetary Data System (PDS) has undertaken an effort to overhaul the PDS data architecture (including the data model, data structures, data dictionary, etc.) and to deploy an upgraded software system (including data services, distributed data catalog, etc.) that fully embraces the PDS federation as an integrated system while taking advantage of modern innovations in information technology (including networking capabilities, processing speeds, and software breakthroughs). A core component of this new system is the Registry Service that will provide functionality for tracking, auditing, locating, and maintaining artifacts within the system. These artifacts can range from data files and label files, schemas, dictionary definitions for objects and elements, documents, services, etc. This service offers a single reference implementation of the registry capabilities detailed in the Consultative Committee for Space Data Systems (CCSDS) Registry Reference Model White Book. The CCSDS Reference Model in turn relies heavily on the Electronic Business using eXtensible Markup Language (ebXML) standards for registry services and the registry information model, managed by the OASIS consortium. Registries are pervasive components in most information systems. For example, data dictionaries, service registries, LDAP directory services, and even databases provide registry-like services. These all include an account of informational items that are used in large-scale information systems ranging from data values such as names and codes, to vocabularies, services and software components. The problem is that many of these registry-like services were designed with their own data models associated with the specific type of artifact they track. Additionally these services each have their own specific interface for interacting with the service. This Registry Service implements the data model specified in the ebXML Registry Information Model (RIM) specification that supports the various artifacts above as well as offering the flexibility to support customer-defined artifacts. Key features for the Registry Service include: - Model-based configuration specifying customer-defined artifact types, metadata attributes to capture for each artifact type, supported associations and classification schemes. - A REST-based external interface that is accessible via the Hypertext Transfer Protocol (HTTP). - Federation of Registry Service instances allowing associations between registered artifacts across registries as well as queries for artifacts across those same registries. A federation also enables features such as replication and synchronization if desired for a given deployment. In addition to its use as a core component of the PDS, the generic implementation of the Registry Service facilitates its applicability as a core component in any science data archive or science data system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saurav, Kumar; Chandan, Vikas
District-heating-and-cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivated the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several components, such as buildings, pipes, valves, and the heating source, interacting with each other. In this paper, we focus on building modelling. In particular, we present a gray-box methodology for thermal modelling of buildings. Gray-box modelling is a hybrid of data-driven and physics-based models, where coefficients of the equations from physics-based models are learned from data. This approach allows us to capture the dynamics of the buildings more effectively than a pure data-driven approach. Additionally, this approach results in simpler models compared to pure physics-based models. We first develop the individual component models of the building, such as temperature evolution and the flow controller. These individual models are then integrated into the complete gray-box model for the building. The model is validated using data collected from one of the buildings at Luleå, a city on the coast of northern Sweden.
NASA Technical Reports Server (NTRS)
Xu, Tian-Bing; Su, Ji; Jiang, Xiaoning; Rehrig, Paul W.; Zhang, Shujun; Shrout, Thomas R.; Zhang, Qiming
2006-01-01
An electroactive polymer (EAP)-ceramic hybrid actuation system (HYBAS) was developed recently at NASA Langley Research Center. This paper focuses on the effect of the bending stiffness of the EAP component on the performance of a HYBAS, in which the actuation of the EAP element can match the theoretical prediction at various length/thickness ratios for a constant elastic modulus of the EAP component. The effects of the elastic modulus and the length/thickness ratio of the EAP component on its bending stiffness were studied. A critical bending stiffness, below which the actuation of the EAP element is no longer suitable for rigid-beam-theory-based modeling, was found for the electron-irradiated P(VDF-TrFE) copolymer. For example, agreement between experimental data and the theoretical model is demonstrated for a HYBAS with an EAP-element length/thickness ratio of 375. However, the beam-based theoretical modeling becomes invalid (i.e., the profile of the HYBAS movement no longer follows the prediction of the model) when the bending stiffness is lower than the critical value.
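As a rough companion to the stiffness discussion above, the Python sketch below uses the cantilever tip stiffness k = 3EI/L^3 (with I = w*t^3/12) as a simple proxy for the EAP element's bending stiffness; the modulus, geometry, and the critical threshold are made-up values, not those reported for the irradiated P(VDF-TrFE) element.

```python
# Cantilever-stiffness proxy: at fixed modulus and thickness, stiffness drops
# rapidly as the length/thickness ratio grows, eventually crossing a critical
# value below which a rigid-beam description would be suspect.
E = 0.4e9          # elastic modulus of the EAP film, Pa (assumed)
w = 5e-3           # strip width, m (assumed)
t = 40e-6          # strip thickness, m (assumed)
I = w * t**3 / 12.0

critical_k = 5e-3  # hypothetical critical stiffness, N/m (placeholder)

for ratio in (100, 375, 1000):            # length/thickness ratios to compare
    L = ratio * t
    k = 3.0 * E * I / L**3
    verdict = "beam-based model valid" if k >= critical_k else "below critical stiffness"
    print(f"L/t = {ratio:4d}: k = {k:.2e} N/m -> {verdict}")
```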
Coupling of snow and permafrost processes using the Basic Modeling Interface (BMI)
NASA Astrophysics Data System (ADS)
Wang, K.; Overeem, I.; Jafarov, E. E.; Piper, M.; Stewart, S.; Clow, G. D.; Schaefer, K. M.
2017-12-01
We developed a permafrost modeling tool by implementing the Kudryavtsev empirical permafrost active layer depth model (the so-called "Ku" component). The model is specifically set up to have a basic model interface (BMI), which enhances its potential coupling to other earth surface process model components. This model is accessible through the Web Modeling Tool in the Community Surface Dynamics Modeling System (CSDMS). The Kudryavtsev model has been applied across the entire state of Alaska to model permafrost distribution at high spatial resolution, and model predictions have been verified against Circumpolar Active Layer Monitoring (CALM) in-situ observations. The Ku component uses monthly meteorological forcing, including air temperature, snow depth, and snow density, and predicts active layer thickness (ALT) and temperature at the top of permafrost (TTOP), which are important factors in snow-hydrological processes. BMI provides an easy approach to coupling the models with each other. Here, we present a case of coupling the Ku component to snow process components, including the Snow-Degree-Day (SDD) method and the Snow-Energy-Balance (SEB) method, which are existing components in the hydrological model TOPOFLOW. The workflow is: (1) get variables from the meteorology component, set the values in the snow process component, and advance the snow process component; (2) get variables from the meteorology and snow components, provide these to the Ku component, and advance it; (3) get variables from the snow process component, set the values in the meteorology component, and advance the meteorology component. The next phase is to couple the permafrost component with the fully BMI-compliant TOPOFLOW hydrological model, which could provide a useful tool to investigate permafrost hydrological effects.
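A minimal sketch of the three-step coupling loop described above, using the general BMI calling pattern (initialize, get_value, set_value, update, finalize). The stub class and variable names are placeholders, not CSDMS components, and real BMI signatures (e.g., get_value with a destination array) may differ in detail.

```python
# Sketch of the three-step BMI-style coupling loop.  StubBMI only mimics the
# BMI calling pattern over named variables; it is a placeholder, not a CSDMS
# component, and it does no physics.
import numpy as np


class StubBMI:
    """Minimal stand-in exposing a BMI-like interface over named variables."""

    def __init__(self, **initial_values):
        self._vars = {k: np.atleast_1d(np.asarray(v, dtype=float))
                      for k, v in initial_values.items()}

    def initialize(self, config_file=None): pass
    def get_value(self, name): return self._vars[name].copy()
    def set_value(self, name, values): self._vars[name] = np.atleast_1d(values).copy()
    def update(self): pass   # a real component advances its state one step here
    def finalize(self): pass


met = StubBMI(air_temperature=-8.0, surface_albedo=0.2)
snow = StubBMI(air_temperature=0.0, snow_depth=0.3, snow_density=250.0, surface_albedo=0.8)
ku = StubBMI(air_temperature=0.0, snow_depth=0.0, snow_density=0.0, active_layer_thickness=0.5)
for component in (met, snow, ku):
    component.initialize()

for month in range(12):
    # (1) meteorology -> snow component, then advance the snow component
    snow.set_value("air_temperature", met.get_value("air_temperature"))
    snow.update()
    # (2) meteorology + snow -> Ku permafrost component, then advance it
    ku.set_value("air_temperature", met.get_value("air_temperature"))
    ku.set_value("snow_depth", snow.get_value("snow_depth"))
    ku.set_value("snow_density", snow.get_value("snow_density"))
    ku.update()
    # (3) snow feedback -> meteorology component, then advance it
    met.set_value("surface_albedo", snow.get_value("surface_albedo"))
    met.update()

print("ALT:", ku.get_value("active_layer_thickness"))
for component in (met, snow, ku):
    component.finalize()
```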
Component-Oriented Behavior Extraction for Autonomic System Design
NASA Technical Reports Server (NTRS)
Bakera, Marco; Wagner, Christian; Margaria, Tiziana; Hinchey, Mike; Vassev, Emil; Steffen, Bernhard
2009-01-01
Rich and multifaceted domain-specific specification languages like the Autonomic System Specification Language (ASSL) help to design reliable systems with self-healing capabilities. The GEAR game-based Model Checker has been used successfully to investigate properties of the ESA ExoMars Rover in depth. We show here how to enable GEAR's game-based verification techniques for ASSL via systematic model extraction from a behavioral subset of the language, and we illustrate the approach on a description of the Voyager II space mission.
Marine mammals' influence on ecosystem processes affecting fisheries in the Barents Sea is trivial.
Corkeron, Peter J
2009-04-23
Some interpretations of ecosystem-based fishery management include culling marine mammals as an integral component. The current Norwegian policy on marine mammal management is one example. Scientific support for this policy includes the Scenario Barents Sea (SBS) models. These modelled interactions between cod, Gadus morhua, herring, Clupea harengus, capelin, Mallotus villosus, and northern minke whales, Balaenoptera acutorostrata. Adding harp seals Phoca groenlandica into this top-down modelling approach resulted in unrealistic model outputs. Another set of models of the Barents Sea fish-fisheries system focused on interactions within and between the three fish populations, fisheries and climate. These models capture key processes of the system successfully. Continuing calls to support the SBS models despite their failure suggest a belief that marine mammal predation must be a problem for fisheries. The best available scientific evidence provides no justification for marine mammal culls as a primary component of an ecosystem-based approach to managing the fisheries of the Barents Sea.
NASA Technical Reports Server (NTRS)
Cole, Gary L.; Richard, Jacques C.
1991-01-01
An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one-dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock-capturing, finite-difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry by means of source terms in the equations. The source terms also provide a mechanism for incorporating, together with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped-parameter model. Components can be modeled by performance maps, which in turn are used to compute the source terms. The general approach is described, and the simulation of a compressor/fan stage is then discussed to show the approach in detail.
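For reference, a generic form of quasi-one-dimensional, unsteady conservation equations with source terms of the kind described above (the exact source-term definitions used in LAPIN are not reproduced here):

```latex
\frac{\partial}{\partial t}
\begin{pmatrix} \rho A \\ \rho u A \\ \rho e_0 A \end{pmatrix}
+ \frac{\partial}{\partial x}
\begin{pmatrix} \rho u A \\ (\rho u^2 + p)\,A \\ (\rho e_0 + p)\,u A \end{pmatrix}
=
\begin{pmatrix} S_{\mathrm{mass}} \\ p\,\partial A/\partial x + S_{\mathrm{mom}} \\ S_{\mathrm{energy}} \end{pmatrix}
```

where rho, u, p, and e_0 are density, velocity, pressure, and total specific energy, A(x,t) is the duct cross-sectional area, and the S terms collect the engineering models (bleed, bypass, and component effects such as compressor work derived from performance maps).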
Evaluation of HardSys/HardDraw, An Expert System for Electromagnetic Interactions Modelling
1993-05-01
interactions in complex systems. This report gives a description of HardSys/HardDraw and reviews the main concepts used in its design. Various aspects of its ...HardDraw, an expert system for the modelling of electromagnetic interactions in complex systems. It consists of two main components: HardSys and HardDraw...HardSys is the advisor part of the expert system. It is knowledge-based; that is, it contains a database of models and properties for various types of
Development of a railway wagon-track interaction model: Case studies on excited tracks
NASA Astrophysics Data System (ADS)
Xu, Lei; Chen, Xianmai; Li, Xuwei; He, Xianglin
2018-02-01
In this paper, a theoretical framework for modeling railway wagon-ballast track interactions is presented, in which the dynamic equations of motion of the wagon-track system are constructed by coupling the linear and nonlinear dynamic characteristics of the system components. For the linear components, the energy-variational principle is used directly to derive their dynamic matrices, while for the nonlinear components, the dynamic equilibrium method is applied to deduce the load vectors. On this basis, a novel railway wagon-ballast track interaction model is developed and validated by comparison with experimental data measured on a heavy haul railway and with another advanced model. In this study, the critical speed of instability and the limits and localization of track irregularities associated with derailment accidents are investigated by integrating the dynamic simulation model, the track irregularity probabilistic model, and time-frequency analysis methods. The proposed approaches can provide crucial information to guarantee the running safety and stability of the wagon-track system when considering track geometries and various running speeds.
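In generic form (a sketch of the structure implied by the abstract, not the paper's specific matrices), the coupled equations of motion separate the linear subsystem matrices from a nonlinear load vector:

```latex
\mathbf{M}\,\ddot{\mathbf{X}}(t) + \mathbf{C}\,\dot{\mathbf{X}}(t) + \mathbf{K}\,\mathbf{X}(t)
= \mathbf{F}\big(\mathbf{X}, \dot{\mathbf{X}}, t\big)
```

where M, C, and K are assembled from the linear wagon and track components (e.g., via the energy-variational principle) and F collects the nonlinear wheel-rail contact and suspension forces obtained from the dynamic equilibrium method.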
Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission
NASA Technical Reports Server (NTRS)
Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan
2010-01-01
The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints. The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources and that each workflow component can be hosted and maintained by its respective domain experts.
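A minimal sketch of the uniform-interface idea behind this workflow: each modeling step is wrapped with the same call signature and composed into a chain. The component names and payload fields are invented for illustration; in the real system each wrapper is exposed as a Web Service endpoint rather than a local function call.

```python
# Sketch of the uniform-interface idea: each modeling step is wrapped with the
# same call signature and composed into a chain.  Component names and payload
# fields are invented; in a deployed OSSE each wrapper would be a Web Service.
from typing import Callable, Dict, List

Payload = Dict[str, object]
Component = Callable[[Payload], Payload]


def surface_reflectance(payload: Payload) -> Payload:
    payload["reflectance"] = [0.1, 0.2, 0.3]      # stand-in for a legacy model
    return payload


def radiative_transfer(payload: Payload) -> Payload:
    payload["radiance"] = [r * 100.0 for r in payload["reflectance"]]
    return payload


def instrument_model(payload: Payload) -> Payload:
    payload["measured"] = [r + 0.5 for r in payload["radiance"]]  # add bias/noise
    return payload


def run_workflow(components: List[Component], payload: Payload) -> Payload:
    for component in components:      # each step passes its outputs downstream
        payload = component(payload)
    return payload


result = run_workflow([surface_reflectance, radiative_transfer, instrument_model],
                      {"scene": "test"})
print(result["measured"])
```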
Computer-Aided Modeling and Analysis of Power Processing Systems (CAMAPPS). Phase 1: Users handbook
NASA Technical Reports Server (NTRS)
Kim, S.; Lee, J.; Cho, B. H.; Lee, F. C.
1986-01-01
The EASY5 macro component models developed for the spacecraft power system simulation are described. A brief explanation of how to use the macro components with the EASY5 Standard Components to build a specific system is given through an example. The macro components are organized into the following functional groups: converter power stage models, compensator models, current-feedback models, constant frequency control models, load models, solar array models, and shunt regulator models. Major equations, a circuit model, and a program listing are provided for each macro component.
Object-oriented approach for gas turbine engine simulation
NASA Technical Reports Server (NTRS)
Curlett, Brian P.; Felder, James L.
1995-01-01
An object-oriented gas turbine engine simulation program was developed. This program is a prototype for a more complete, commercial-grade engine performance program now being proposed as part of the Numerical Propulsion System Simulator (NPSS). This report discusses architectural issues of this complex software system and the lessons learned from developing the prototype code. The prototype code is a fully functional, general-purpose engine simulation program; however, only the component models necessary to model a transient compressor test rig have been written. The production system will be capable of steady-state and transient modeling of almost any turbine engine configuration. Chief among the architectural considerations for this code was the framework in which the various software modules interact. These modules include the equation solver, simulation code, data model, event handler, and user interface. Also documented in this report are the component-based design of the simulation module and the inter-component communication paradigm. Object class hierarchies for some of the code modules are given.
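A minimal sketch of a component-and-port design of the kind the report describes, including a simple inter-component communication step. The class, port, and parameter names are invented for illustration and are not the NPSS prototype's actual classes.

```python
# Sketch of component-based design with ports: each engine component exposes
# inlet/outlet "ports" and a run() step; linking ports copies data downstream.
# Class and port names are illustrative, not the NPSS prototype's actual classes.
class Port:
    def __init__(self):
        self.data = {}            # e.g. {"massflow": ..., "Tt": ..., "Pt": ...}


class Component:
    def __init__(self, name):
        self.name, self.inlet, self.outlet = name, Port(), Port()

    def run(self):
        raise NotImplementedError


class Compressor(Component):
    def __init__(self, name, pressure_ratio=10.0, efficiency=0.85):
        super().__init__(name)
        self.pr, self.eff = pressure_ratio, efficiency

    def run(self):
        gamma = 1.4
        Tt_in, Pt_in = self.inlet.data["Tt"], self.inlet.data["Pt"]
        ideal_ratio = self.pr ** ((gamma - 1.0) / gamma)
        self.outlet.data = {
            "massflow": self.inlet.data["massflow"],
            "Pt": Pt_in * self.pr,
            "Tt": Tt_in * (1.0 + (ideal_ratio - 1.0) / self.eff),
        }


def link(upstream, downstream):
    """Inter-component communication: copy the upstream outlet to the downstream inlet."""
    downstream.inlet.data = dict(upstream.outlet.data)


inlet = Component("inlet")
inlet.outlet.data = {"massflow": 50.0, "Tt": 288.15, "Pt": 101325.0}
fan = Compressor("fan", pressure_ratio=1.6, efficiency=0.9)
link(inlet, fan)
fan.run()
print(fan.outlet.data)
```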
Muller, Erik B; Nisbet, Roger M
2014-06-01
Ocean acidification is likely to impact the calcification potential of marine organisms. In part due to the covarying nature of the ocean carbonate system components, including pH, CO2, and CO3(2-) levels, it remains largely unclear how each of these components may affect calcification rates quantitatively. We develop a process-based bioenergetic model that explains how several components of the ocean carbonate system collectively affect growth and calcification rates in Emiliania huxleyi, which plays a major role in marine primary production and biogeochemical carbon cycling. The model predicts that under the IPCC A2 emission scenario, its growth and calcification potential will have decreased by the end of the century, although those reductions are relatively modest. We anticipate that our model will be relevant for many other marine calcifying organisms, and that it can be used to improve our understanding of the impact of climate change on marine systems. © 2014 John Wiley & Sons Ltd.
Concept for a Differential Lock and Traction Control Model in Automobiles
NASA Astrophysics Data System (ADS)
Shukul, A. K.; Hansra, S. K.
2014-01-01
The automobile is a complex integration of electronic and mechanical components. One of the major components is the differential, which is limited by its shortcomings. The paper proposes a concept for a cost-effective differential lock and traction control system for vehicles ranging from passenger cars to sport utility vehicles, employing a parallel braking mechanism that comes into action based on the relative speeds of the wheels driven by the differential. The paper highlights the use of a minimum number of components, unlike existing systems. The system was designed numerically for traction control and differential locking for the world's cheapest car. The paper derives the system parameters and component costs, making it a cost-effective system.
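An illustrative sketch of the engagement logic described above (braking the faster-spinning wheel when the speed difference across the differential exceeds a slip threshold); the threshold and gain values are invented.

```python
# Illustrative sketch of the engagement logic: when the speed difference across
# the differential exceeds a slip threshold, apply a braking torque to the
# faster-spinning wheel.  Threshold and gain values are invented.
def traction_brake_command(omega_left, omega_right, slip_threshold=5.0, gain=50.0):
    """Return (brake_left, brake_right) torques in N*m from wheel speeds in rad/s."""
    slip = omega_left - omega_right
    if abs(slip) <= slip_threshold:
        return 0.0, 0.0                        # normal differential action
    brake_torque = gain * (abs(slip) - slip_threshold)
    return (brake_torque, 0.0) if slip > 0 else (0.0, brake_torque)


print(traction_brake_command(40.0, 12.0))      # left wheel spinning -> brake left
```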
Structural design methodologies for ceramic-based material systems
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Chulya, Abhisak; Gyekenyesi, John P.
1991-01-01
One of the primary pacing items for realizing the full potential of ceramic-based structural components is the development of new design methods and protocols. The focus here is on low temperature, fast-fracture analysis of monolithic, whisker-toughened, laminated, and woven ceramic composites. A number of design models and criteria are highlighted. Public domain computer algorithms, which aid engineers in predicting the fast-fracture reliability of structural components, are mentioned. Emphasis is not placed on evaluating the models, but instead is focused on the issues relevant to the current state of the art.
RF control at SSCL — an object oriented design approach
NASA Astrophysics Data System (ADS)
Dohan, D. A.; Osberg, E.; Biggs, R.; Bossom, J.; Chillara, K.; Richter, R.; Wade, D.
1994-12-01
The Superconducting Super Collider (SSC) in Texas, the construction of which was stopped in 1994, would have represented a major challenge in accelerator research and development. This paper addresses the issues encountered in the parallel design and construction of the control systems for the RF equipment for the five accelerators comprising the SSC. An extensive analysis of the components of the RF control systems has been undertaken, based upon the Shlaer-Mellor object-oriented analysis and design (OOA/OOD) methodology. The RF subsystem components, such as amplifiers, tubes, power supplies, and PID loops, were analyzed to produce OOA information, behavior, and process models. Using these models, OOD was applied iteratively to develop a generic RF control system design. This paper describes the results of this analysis and the development of 'bridges' between the analysis objects and the EPICS-based software and underlying VME-based hardware architectures. The application of this approach to several of the SSCL RF control systems is discussed.
NASA Astrophysics Data System (ADS)
Murrill, Steven R.; Franck, Charmaine C.; Espinola, Richard L.; Petkie, Douglas T.; De Lucia, Frank C.; Jacobs, Eddie L.
2011-11-01
The U.S. Army Research Laboratory (ARL) and the U.S. Army Night Vision and Electronic Sensors Directorate (NVESD) have developed a terahertz-band imaging system performance model/tool for detection and identification of concealed weaponry. The details of the MATLAB-based model which accounts for the effects of all critical sensor and display components, and for the effects of atmospheric attenuation, concealment material attenuation, and active illumination, were reported on at the 2005 SPIE Europe Security & Defence Symposium (Brugge). An advanced version of the base model that accounts for both the dramatic impact that target and background orientation can have on target observability as related to specular and Lambertian reflections captured by an active-illumination-based imaging system, and for the impact of target and background thermal emission, was reported on at the 2007 SPIE Defense and Security Symposium (Orlando). This paper will provide a comprehensive review of an enhanced, user-friendly, Windows-executable, terahertz-band imaging system performance analysis and design tool that now includes additional features such as a MODTRAN-based atmospheric attenuation calculator and advanced system architecture configuration inputs that allow for straightforward performance analysis of active or passive systems based on scanning (single- or line-array detector element(s)) or staring (focal-plane-array detector elements) imaging architectures. This newly enhanced THz imaging system design tool is an extension of the advanced THz imaging system performance model that was developed under the Defense Advanced Research Project Agency's (DARPA) Terahertz Imaging Focal-Plane Technology (TIFT) program. This paper will also provide example system component (active-illumination source and detector) trade-study analyses using the new features of this user-friendly THz imaging system performance analysis and design tool.
SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.
Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi
2010-01-01
Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher-level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.
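As a generic illustration of the continuous-space agent idea (plain Python, not SPARK's own modeling language or API), agents with a position, a size, and a step rule can be advanced over a bounded region:

```python
# Generic sketch of continuous-space agent-based modeling: agents with a
# continuous position, a size, and a step rule.  Plain Python for illustration,
# not SPARK's own modeling language or API.
import random


class Cell:
    def __init__(self, x, y, radius):
        self.x, self.y, self.radius = x, y, radius   # continuous position and size

    def step(self, width, height):
        # Random walk, clipped to the simulated physical space
        self.x = min(max(self.x + random.uniform(-1, 1), 0.0), width)
        self.y = min(max(self.y + random.uniform(-1, 1), 0.0), height)


def run(n_agents=100, n_steps=50, width=100.0, height=100.0):
    agents = [Cell(random.uniform(0, width), random.uniform(0, height),
                   random.choice([1.0, 2.5])) for _ in range(n_agents)]
    for _ in range(n_steps):
        for agent in agents:
            agent.step(width, height)
    return agents


if __name__ == "__main__":
    final = run()
    print(f"{len(final)} agents stepped; first agent at ({final[0].x:.1f}, {final[0].y:.1f})")
```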
A discrete decentralized variable structure robotic controller
NASA Technical Reports Server (NTRS)
Tumeh, Zuheir S.
1989-01-01
A decentralized trajectory controller for robotic manipulators is designed and tested using a multiprocessor architecture and a PUMA 560 robot arm. The controller is made up of a nominal model-based component and a correction component based on a variable structure suction control approach. The second control component is designed using bounds on the difference between the used and actual values of the model parameters. Since the continuous manipulator system is digitally controlled along a trajectory, a discretized equivalent model of the manipulator is used to derive the controller. The motivation for decentralized control is that the derived algorithms can be executed in parallel using a distributed, relatively inexpensive, architecture where each joint is assigned a microprocessor. Nonlinear interaction and coupling between joints is treated as a disturbance torque that is estimated and compensated for.
Information visualisation based on graph models
NASA Astrophysics Data System (ADS)
Kasyanov, V. N.; Kasyanova, E. V.
2013-05-01
Information visualisation is a key component of support tools for many applications in science and engineering. A graph is an abstract structure that is widely used to model information for visualisation. In this paper, we consider a practical and general graph formalism called hierarchical graphs and present the Higres and Visual Graph systems, which are aimed at supporting information visualisation on the basis of hierarchical graph models.
Onboard Nonlinear Engine Sensor and Component Fault Diagnosis and Isolation Scheme
NASA Technical Reports Server (NTRS)
Tang, Liang; DeCastro, Jonathan A.; Zhang, Xiaodong
2011-01-01
A method detects and isolates in-flight sensor, actuator, and component faults for advanced propulsion systems. In sharp contrast to many conventional methods, which deal with either sensor faults or component faults, but not both, this method considers sensor, actuator, and component faults under one systemic and unified framework. The proposed solution consists of two main components: a bank of real-time, nonlinear adaptive fault diagnostic estimators for residual generation, and a residual evaluation module that includes adaptive thresholds and a Transferable Belief Model (TBM)-based residual evaluation scheme. By employing a nonlinear adaptive learning architecture, the developed approach is capable of directly dealing with nonlinear engine models and nonlinear faults without the need for linearization. Software modules have been developed and evaluated with the NASA C-MAPSS engine model. Several typical engine-fault modes, including a subset of sensor/actuator/component faults, were tested with a mild transient operation scenario. The simulation results demonstrated that the algorithm was able to successfully detect and isolate all simulated faults as long as the fault magnitudes were larger than the minimum detectable/isolable sizes, and no misdiagnosis occurred.
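A minimal sketch of the residual-generation and threshold-evaluation idea (not the TBM-based scheme or the C-MAPSS model itself); the signals, the injected fault, and the adaptive threshold law are illustrative only.

```python
# Sketch of residual-based fault detection: compare measured outputs with a
# nominal model prediction and flag a fault when the residual exceeds an
# adaptive threshold.  The threshold law and signals are illustrative only.
import numpy as np


def detect_faults(measured, predicted, base_threshold=0.5, adapt_gain=0.1):
    """Return a boolean fault flag per time step from residuals."""
    residual = np.abs(measured - predicted)
    # Adaptive threshold: grows with the magnitude of the predicted signal
    threshold = base_threshold + adapt_gain * np.abs(predicted)
    return residual > threshold


# Synthetic example: a bias fault injected halfway through the record
t = np.arange(200)
predicted = np.sin(0.05 * t)
measured = predicted + 0.05 * np.random.default_rng(1).standard_normal(t.size)
measured[100:] += 1.0                     # simulated sensor bias fault
flags = detect_faults(measured, predicted)
print("first detection at step", int(np.argmax(flags)))
```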
Modeling the microstructure of surface by applying BRDF function
NASA Astrophysics Data System (ADS)
Plachta, Kamil
2017-06-01
The paper presents the modeling of surface microstructure using a bidirectional reflectance distribution function (BRDF). This function contains full information about the reflectance properties of flat surfaces: it is possible to determine the shares of the specular, directional, and diffuse components in the reflected luminous stream. The software is based on the author's algorithm, which uses selected elements of BRDF component models and allows the share of each component to be determined. Based on the obtained data, the surface microstructure of each material can be modeled, which makes it possible to determine the properties of these materials. The concentrator directs the reflected solar radiation onto the photovoltaic surface, increasing at the same time the value of the incident luminous stream. The paper presents an analysis of selected materials that can be used to construct the solar concentrator system. The use of the concentrator increases the power output of the photovoltaic system by up to 17% as compared to the standard solution.
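As a point of reference, a common three-term decomposition of the kind referred to above (the specific component models used by the author's algorithm are not reproduced here) writes the BRDF as a weighted sum of diffuse, directional (glossy), and ideal specular parts:

```latex
f_r(\omega_i, \omega_o) \;=\;
k_d\,\frac{1}{\pi}
\;+\; k_{\mathrm{dir}}\,\frac{n+2}{2\pi}\,\max(\omega_r \cdot \omega_o,\,0)^{\,n}
\;+\; k_s\, f_{\mathrm{mirror}}(\omega_i, \omega_o),
\qquad k_d + k_{\mathrm{dir}} + k_s \le 1
```

where ω_r is the mirror reflection of the incident direction ω_i and the weights k_d, k_dir, and k_s give the shares of the diffuse, directional, and specular components in the reflected luminous stream.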
NASA Astrophysics Data System (ADS)
Elkhateeb, M. M.; Nouh, M. I.; Nelson, R. H.
2015-02-01
A first photometric study of the newly discovered systems USNO-B1.0 1091-0130715 and GSC-03449-0680 was carried out by means of a recent Windows interface version of the Wilson and Devinney code based on the model atmospheres of Kurucz (1993). The accepted models yield absolute parameters for both systems, which are used in deriving the spectral types of the system components and their evolutionary status. Distances to each system and physical properties were estimated. Comparisons of the computed physical parameters with stellar models are discussed. The components of the system USNO-B1.0 1091-0130715 and the primary of the system GSC-03449-0680 are found to be on or near the ZAMS track, while the secondary of the GSC-03449-0680 system is found to be severely underluminous and too cool compared to its ZAMS mass.
NASA Technical Reports Server (NTRS)
Liu, Xu; Smith, William L.; Zhou, Daniel K.; Larar, Allen
2005-01-01
Modern infrared satellite sensors such as the Atmospheric Infrared Sounder (AIRS), the Cross-track Infrared Sounder (CrIS), the Tropospheric Emission Spectrometer (TES), the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and the Infrared Atmospheric Sounding Interferometer (IASI) are capable of providing high spatial and spectral resolution infrared spectra. To fully exploit the vast amount of spectral information from these instruments, super fast radiative transfer models are needed. This paper presents a novel radiative transfer model based on principal component analysis. Instead of predicting channel radiance or transmittance spectra directly, the Principal Component-based Radiative Transfer Model (PCRTM) predicts the Principal Component (PC) scores of these quantities. This prediction ability leads to significant savings in computational time. The parameterization of the PCRTM model is derived from properties of PC scores and instrument line shape functions. The PCRTM is very accurate and flexible. Due to its high speed and compressed spectral information format, it has great potential for super fast one-dimensional physical retrievals and for Numerical Weather Prediction (NWP) large-volume radiance data assimilation applications. The model has been successfully developed for the National Polar-orbiting Operational Environmental Satellite System Airborne Sounder Testbed - Interferometer (NAST-I) and AIRS instruments. The PCRTM model performs monochromatic radiative transfer calculations and is able to include multiple scattering calculations to account for clouds and aerosols.
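A minimal sketch of the principal-component idea behind this kind of model: represent a high-resolution spectrum by a few PC scores and reconstruct the full channel spectrum from fixed empirical orthogonal functions. The training spectra below are synthetic stand-ins; in PCRTM the scores themselves are predicted by a fast parameterized model rather than computed from an existing spectrum.

```python
# Sketch of the PC idea: compress a high-resolution spectrum into a few
# principal-component scores and reconstruct the full spectrum from fixed
# eigenvectors.  The "training" spectra here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_train, n_pc = 2000, 300, 20

# Synthetic training spectra with a few smooth modes of variability
basis = np.stack([np.sin((k + 1) * np.linspace(0, np.pi, n_channels)) for k in range(5)])
train = rng.standard_normal((n_train, 5)) @ basis + 250.0

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
eofs = Vt[:n_pc]                       # fixed empirical orthogonal functions


def compress(spectrum):
    """Project a spectrum onto the leading PCs (the quantity a PC-based RT model predicts)."""
    return eofs @ (spectrum - mean)


def reconstruct(scores):
    """Recover the full channel spectrum from the PC scores."""
    return mean + scores @ eofs


test = rng.standard_normal(5) @ basis + 250.0
scores = compress(test)
print("max reconstruction error:", np.abs(reconstruct(scores) - test).max())
```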
Burgos, Ana; Páez, Rosaura; Carmona, Estela; Rivas, Hilda
2013-12-01
Community-Based Environmental Monitoring (CBM) is a social practice that makes a valuable contribution to environmental management and to the construction of active societies for a sustainable future. However, its documentation and analysis show deficiencies that hinder the contrast and comparison of processes and effects. Based on a systems approach, this article presents a model of CBM to orient the assessment of programs with heuristic or practical goals. At the focal level, the model comprises three components (the social subject, the object of monitoring, and the means of action) and five processes (data management, social learning, assimilation/decision making, direct action, and linking). Emergent properties were also identified at the focal and suprafocal levels, considering community self-organization, response capacity, and autonomy for environmental management. The model was applied to the assessment of a CBM program of water quality implemented in rural areas in Mexico. Attributes and variables (indicators) for components, processes, and emergent properties were selected to measure changes that emerged since the program implementation. The assessment of the first 3 years (2010-2012) detected changes that indicated movement towards the expected results, but it also revealed the need to adjust the intervention strategy and procedures. The components and processes of the model reflected relevant aspects of CBM in the real world, with the component called means of action emerging as a key element in the transition "from the data to the action." The CBM model offers a conceptual framework with advantages for understanding CBM as a socioecological event and for strengthening its implementation under different conditions and contexts.
The Numerical Propulsion System Simulation: A Multidisciplinary Design System for Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Lytle, John K.
1999-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of the design process and to provide the designer with critical information about the components early in the design process. This paper describes the development of the Numerical Propulsion System Simulation (NPSS), a multidisciplinary system of analysis tools that is focused on extending the simulation capability from components to the full system. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Framework for a clinical information system.
Van de Velde, R
2000-01-01
The current status of our work towards the design and implementation of a reference architecture for a Clinical Information System is presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker Architecture (CORBA) and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin client' workstations. The main characteristics are the focus on modelling and the reuse of both data and business logic, as there is a shift away from data and functional modelling towards object modelling. Scalability as well as adaptability to constantly changing requirements via component-driven computing are the main reasons for this approach.
An integrated weather and sea-state forecasting system for the Arabian Peninsula (WASSF)
NASA Astrophysics Data System (ADS)
Kallos, George; Galanis, George; Spyrou, Christos; Mitsakou, Christina; Solomos, Stavros; Bartsotas, Nikolaos; Kalogrei, Christina; Athanaselis, Ioannis; Sofianos, Sarantis; Vervatis, Vassios; Axaopoulos, Panagiotis; Papapostolou, Alexandros; Qahtani, Jumaan Al; Alaa, Elyas; Alexiou, Ioannis; Beard, Daniel
2013-04-01
Nowadays, large industrial conglomerates such as Saudi ARAMCO require a series of weather and sea-state forecasting products that cannot be obtained from state meteorological offices or even commercial data providers. The two major objectives of the system are the prevention and mitigation of environmental problems and early warning of local conditions associated with extreme weather events. The management and operations part is related to early warning of weather and sea-state events that affect the operations of various facilities. The environmental part is related to air quality and especially the desert dust levels in the atmosphere. The components of the integrated system include: (i) a weather and desert dust prediction system with a forecasting horizon of 5 days, (ii) a wave analysis and prediction component for the Red Sea and Arabian Gulf, (iii) an ocean circulation and tidal analysis and prediction component for both the Red Sea and Arabian Gulf, and (iv) an aviation component specializing in the vertical structure of the atmosphere and extreme events that affect air transport and other operations. Specialized data sets required for on/offshore operations are provided on a regular basis. State-of-the-art modeling components are integrated into a single system that distributes the produced analyses and forecasts to each department. The weather and dust prediction system is SKIRON/Dust, the wave analysis and prediction system is based on the WAM cycle 4 model from ECMWF, and the ocean circulation model is MICOM, while the tidal analysis and prediction is a development of the Ocean Physics and Modeling Group of the University of Athens, incorporating the Tidal Model Driver. A nowcasting subsystem is included. An interactive system based on Google Maps gives the capability to extract and display the necessary information for any location on the Arabian Peninsula, the Red Sea, and the Arabian Gulf.
A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; King, Brent
1995-01-01
This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes an FDMA/TDM link. A single-chip architecture is used to enable the chip to be easily employed in the VSAT system. The demodulator contains lowpass filters, integrate-and-dump units, unique word detectors, a timing recovery unit, a phase recovery unit, and a down-conversion unit. The design stages start with a functional representation of the system in the C programming language, which then progresses into a register-based representation in the VHDL language. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory, and serial access memory in order to shorten the design time. These sub-components are block routed to form the main components of the system, and the main components are in turn block routed to form the final demodulator.
NASA Technical Reports Server (NTRS)
Grubb, Matt
2016-01-01
The NASA Operational Simulator for Small Satellites (NOS3) is a suite of tools to aid in areas such as software development, integration test (IT), mission operations training, verification and validation (V&V), and software systems check-out. NOS3 provides a software development environment, a multi-target build system, an operator interface/ground station, dynamics and environment simulations, and software-based hardware models. NOS3 enables the development of flight software (FSW) early in the project life cycle, when access to hardware is typically not available. For small satellites there are extensive lead times on many of the commercial-off-the-shelf (COTS) components as well as limited funding for engineering test units (ETUs). Considering the difficulty of providing a hardware test-bed to each developer and tester, hardware components are modeled based upon characteristic data or manufacturers' data sheets for each individual component. The fidelity of each hardware model is such that FSW executes unaware that physical hardware is not present. This allows binaries to be compiled for both the simulation environment and the flight computer without changing the FSW source code. For hardware models that provide data dependent on the environment, such as a GPS receiver or magnetometer, an open-source tool from NASA GSFC (42 Spacecraft Simulation) is used to provide the necessary data. The underlying infrastructure used to transfer messages between FSW and the hardware models can also be used to monitor, intercept, and inject messages, which has proven to be beneficial for V&V of larger missions such as the James Webb Space Telescope (JWST). As hardware is procured, drivers can be added to the environment to enable hardware-in-the-loop (HWIL) testing. When strict time synchronization is not vital, any number of combinations of hardware components and software-based models can be tested. The open-source operator interface used in NOS3 is COSMOS from Ball Aerospace. For testing, plug-ins are implemented in COSMOS to control the NOS3 simulations, while the command and telemetry tools available in COSMOS are used to communicate with FSW. NOS3 is actively being used for FSW development and component testing of the Simulation-to-Flight 1 (STF-1) CubeSat. As NOS3 matures, hardware models have been added for common CubeSat components such as Novatel GPS receivers, ClydeSpace electrical power systems and batteries, ISISpace antenna systems, etc. In the future, NASA IV&V plans to distribute NOS3 to other CubeSat developers and release the suite to the open-source community.
Towards a 3d Spatial Urban Energy Modelling Approach
NASA Astrophysics Data System (ADS)
Bahu, J.-M.; Koch, A.; Kremers, E.; Murshed, S. M.
2013-09-01
Today's needs to reduce the environmental impact of energy use impose dramatic changes for energy infrastructure and existing demand patterns (e.g. buildings) corresponding to their specific context. In addition, future energy systems are expected to integrate a considerable share of fluctuating power sources and equally a high share of distributed generation of electricity. Energy system models capable of describing such future systems and allowing the simulation of the impact of these developments thus require a spatial representation in order to reflect the local context and the boundary conditions. This paper describes two recent research approaches developed at EIFER in the fields of (a) geo-localised simulation of heat energy demand in cities based on 3D morphological data and (b) spatially explicit Agent-Based Models (ABM) for the simulation of smart grids. 3D city models were used to assess the solar potential and heat energy demand of residential buildings, which enables cities to target building refurbishment potential. Distributed energy systems require innovative modelling techniques where individual components are represented and can interact. With this approach, several smart grid demonstrators were simulated, where heterogeneous models are spatially represented. Coupling 3D geodata with energy system ABMs holds different advantages for both approaches. On one hand, energy system models can be enhanced with high resolution data from 3D city models and their semantic relations. Furthermore, they allow for spatial analysis and visualisation of the results, with emphasis on spatial and structural correlations among the different layers (e.g. infrastructure, buildings, administrative zones) to provide an integrated approach. On the other hand, 3D models can benefit from a more detailed system description of energy infrastructure, representing dynamic phenomena and high resolution models for energy use at component level. The proposed modelling strategies conceptually and practically integrate urban spatial and energy planning approaches. The combined modelling approach that will be developed based on the described sectorial models holds the potential to represent hybrid energy systems coupling distributed generation of electricity with thermal conversion systems.
NASA Astrophysics Data System (ADS)
Peña, M.; Saha, S.; Wu, X.; Wang, J.; Tripp, P.; Moorthi, S.; Bhattacharjee, P.
2016-12-01
The next version of the operational Climate Forecast System (version 3, CFSv3) will be a fully coupled six-component system with diverse applications to earth system modeling, including weather and climate predictions. This system will couple the earth's atmosphere, land, ocean, sea-ice, waves and aerosols for both data assimilation and modeling. It will also use the NOAA Environmental Modeling System (NEMS) software super structure to couple these components. The CFSv3 is part of the next Unified Global Coupled System (UGCS), which will unify the global prediction systems that are now operational at NCEP. The UGCS is being developed through the efforts of dedicated research and engineering teams and through coordination across many CPO/MAPP and NGGPS groups. During this development phase, the UGCS is being tested for seasonal purposes and undergoes frequent revisions. Each new revision is evaluated to quickly discover, isolate and solve problems that negatively impact its performance. In the UGCS-seasonal model, components (e.g., ocean, sea-ice, atmosphere, etc.) are coupled through a NEMS-based "mediator". In this numerical infrastructure, model diagnostics and forecast validation are carried out, both component by component, and as a whole. The next stage, model optimization, will require enhanced performance diagnostics tools to help prioritize areas of numerical improvements. After the technical development of the UGCS-seasonal is completed, it will become the first realization of the CFSv3. All future development of this system will be carried out by the climate team at NCEP, in scientific collaboration with the groups that developed the individual components, as well as the climate community. A unique challenge in evaluating this unified weather-climate system is the large number of variables, which evolve over a wide range of temporal and spatial scales. A small set of performance measures and scorecard displays have been created, and collaboration and software contributions from research and operational centers are being incorporated. A status of the CFSv3/UGCS-seasonal development and examples of its performance and measuring tools will be presented.
Distributed Damage Estimation for Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Bregon, Anibal; Roychoudhury, Indranil
2011-01-01
Model-based prognostics approaches capture system knowledge in the form of physics-based models of components, and how they fail. These methods consist of a damage estimation phase, in which the health state of a component is estimated, and a prediction phase, in which the health state is projected forward in time to determine end of life. However, the damage estimation problem is often multi-dimensional and computationally intensive. We propose a model decomposition approach adapted from the diagnosis community, called possible conflicts, in order to both improve the computational efficiency of damage estimation, and formulate a damage estimation approach that is inherently distributed. Local state estimates are combined into a global state estimate from which prediction is performed. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the approach.
Neyens, David M; Childers, Ashley Kay
2017-07-01
The objective was to determine the barriers and facilitators associated with willingness to use personal health information management (PHIM) systems to support an existing worksite wellness program (WWP). The study design involved a Web-based survey. The study setting was a regional hospital. Hospital employees comprised the study subjects. Willingness, barriers, and facilitators associated with PHIM were measured. Bivariate logit models were used to model two binary dependent variables. One model predicted the likelihood of believing PHIM systems would positively affect overall health and willingness to use them. Another predicted the likelihood of worrying about online security and not believing PHIM systems would benefit health goals. Based on 333 responses, believing PHIM systems would positively affect health was highly associated with willingness to use PHIM systems (p < .01). Those comfortable online were 7.22 times more willing to use PHIM systems. Participants in exercise-based components of WWPs were 3.03 times more likely to be willing to use PHIM systems. Those who worried about online security were 5.03 times more likely to believe PHIM systems would not help obtain health goals. Comfort with personal health information online and exercise-based WWP experience were associated with willingness to use PHIM systems. However, nutrition-based WWPs did not have similar effects. Implementation barriers relate to technology anxiety and trust in security, as well as experience with specific WWP activities. Identifying differences between WWP components and addressing technology concerns before implementation of PHIM systems into WWPs may facilitate improved adoption and usage.
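A minimal sketch of how odds ratios such as the 7.22 and 3.03 reported above are obtained from a logit model, namely by exponentiating the fitted coefficients. The data below are simulated for illustration and are not the study's survey responses.

```python
# Sketch of deriving odds ratios from a logit model: exponentiate the fitted
# coefficients.  The data are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 333
comfortable_online = rng.integers(0, 2, n)
exercise_wwp = rng.integers(0, 2, n)

# Simulated willingness with "true" log-odds effects roughly matching the paper
log_odds = -1.0 + np.log(7.22) * comfortable_online + np.log(3.03) * exercise_wwp
willing = rng.random(n) < 1.0 / (1.0 + np.exp(-log_odds))

X = sm.add_constant(np.column_stack([comfortable_online, exercise_wwp]))
model = sm.Logit(willing.astype(float), X).fit(disp=0)
odds_ratios = np.exp(model.params)
print(odds_ratios)   # index 1 ~ comfort online, index 2 ~ exercise-based WWP
```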
Modelling of robotic work cells using agent based-approach
NASA Astrophysics Data System (ADS)
Sękala, A.; Banaś, W.; Gwiazda, A.; Monica, Z.; Kost, G.; Hryniewicz, P.
2016-08-01
In the case of modern manufacturing systems, the requirements regarding both the scope and the characteristics of technical procedures change dynamically. As a result, the organization of the production system is unable to keep up with changes in market demand. Accordingly, there is a need for new design methods characterized, on the one hand, by high efficiency and, on the other, by an adequate level of the generated organizational solutions. One of the tools that could be used for this purpose is the concept of agent systems. These systems are tools of artificial intelligence. They allow assigning to agents the proper domains of procedures and knowledge so that, in a self-organizing agent environment, the agents represent components of a real system. An agent-based system for modelling a robotic work cell should be designed taking into consideration the many constraints associated with the characteristics of this production unit. It is possible to distinguish several groups of structural components that constitute such a system, which confirms the structural complexity of a work cell as a specific production system. It is therefore necessary to develop agents depicting the various aspects of the work cell structure. The main groups of agents used to model a robotic work cell should at least include the following representatives: machine tool agents, auxiliary equipment agents, robot agents, transport equipment agents, organizational agents, as well as data and knowledge base agents. In this way it is possible to create the holarchy of the agent-based system.
Commercial Demand Module - NEMS Documentation
2017-01-01
Documents the objectives, analytical approach and development of the National Energy Modeling System (NEMS) Commercial Sector Demand Module. The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.
NASA Astrophysics Data System (ADS)
Xie, Lian; Liu, Huiqing; Peng, Machuan
The effects of wave-current interactions on the storm surge and inundation induced by Hurricane Hugo in and around Charleston Harbor and its adjacent coastal regions are examined by using a three-dimensional (3-D) wave-current coupled modeling system. The 3-D storm surge and inundation modeling component of the coupled system is based on the Princeton Ocean Model (POM), whereas the wave modeling component is based on the third-generation wave model Simulating WAves Nearshore (SWAN). The results indicate that the effects of wave-induced surface, bottom, and radiation stresses can separately or in combination produce significant changes in storm surge and inundation. The effects of waves vary spatially. In some areas, the contribution of waves to the peak storm surge during Hurricane Hugo reached as high as 0.76 m, which led to substantial changes in the inundation and drying areas simulated by the storm surge model.
Evaluating performances of simplified physically based landslide susceptibility models.
NASA Astrophysics Data System (ADS)
Capparelli, Giovanna; Formetta, Giuseppe; Versace, Pasquale
2015-04-01
Rainfall-induced shallow landslides cause significant damage involving loss of life and property. Prediction of locations susceptible to shallow landslides is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually, two main approaches are used to accomplish this task: statistical or physically based models. This paper presents a package of GIS-based models for landslide susceptibility analysis. It was integrated into the NewAge-JGrass hydrological model using the Object Modeling System (OMS) modeling framework. The package includes three simplified physically based models for landslide susceptibility analysis (M1, M2, and M3) and a component for model verification. It computes eight goodness-of-fit (GOF) indices by comparing model results and measurement data pixel by pixel. Moreover, the integration of the package in NewAge-JGrass allows the use of other components, such as geographic information system tools to manage input-output processes and automatic calibration algorithms to estimate model parameters. The system offers the possibility to investigate and fairly compare the quality and robustness of models and model parameters according to a procedure that includes: i) model parameter estimation by optimizing each of the GOF indices separately, ii) model evaluation in the ROC plane using each of the optimal parameter sets, and iii) GOF robustness evaluation by assessing their sensitivity to input parameter variation. This procedure was repeated for all three models. The system was applied to a case study in Calabria (Italy) along the Salerno-Reggio Calabria highway, between Cosenza and the Altilia municipality. The analysis showed that, among all the optimized indices and all three models, Average Index (AI) optimization coupled with model M3 is the best modeling solution for our test case. This research was funded by PON Project No. 01_01503 "Integrated Systems for Hydrogeological Risk Monitoring, Early Warning and Mitigation Along the Main Lifelines", CUP B31H11000370005, in the framework of the National Operational Program for "Research and Competitiveness" 2007-2013.
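A minimal sketch of a pixel-by-pixel comparison of predicted unstable cells against mapped landslides, computing a few representative confusion-matrix indices (the ROC-plane coordinates among them); the eight GOF indices of the actual package are not reproduced here, and the rasters are synthetic.

```python
# Sketch of a pixel-by-pixel goodness-of-fit computation for a landslide
# susceptibility map: compare predicted unstable cells against observed
# landslide cells and derive confusion-matrix based indices.
import numpy as np


def confusion_indices(predicted_unstable, observed_landslide):
    p = predicted_unstable.astype(bool).ravel()
    o = observed_landslide.astype(bool).ravel()
    tp = np.sum(p & o); fp = np.sum(p & ~o)
    fn = np.sum(~p & o); tn = np.sum(~p & ~o)
    return {
        "TPR": tp / (tp + fn),              # hit rate (y-axis of the ROC plane)
        "FPR": fp / (fp + tn),              # false alarm rate (x-axis of the ROC plane)
        "ACC": (tp + tn) / (tp + fp + fn + tn),
    }


# Synthetic 100x100 raster example (illustrative only)
rng = np.random.default_rng(3)
observed = rng.random((100, 100)) < 0.1
predicted = observed ^ (rng.random((100, 100)) < 0.05)   # mostly right, some errors
print(confusion_indices(predicted, observed))
```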
Overview of Threats and Failure Models for Safety-Relevant Computer-Based Systems
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2015-01-01
This document presents a high-level overview of the threats to safety-relevant computer-based systems, including (1) a description of the introduction and activation of physical and logical faults; (2) the propagation of their effects; and (3) function-level and component-level error and failure mode models. These models can be used in the definition of fault hypotheses (i.e., assumptions) for threat-risk mitigation strategies. This document is a contribution to a guide currently under development that is intended to provide a general technical foundation for designers and evaluators of safety-relevant systems.
Systems Engineering and Application of System Performance Modeling in SIM Lite Mission
NASA Technical Reports Server (NTRS)
Moshir, Mehrdad; Murphy, David W.; Milman, Mark H.; Meier, David L.
2010-01-01
The SIM Lite Astrometric Observatory will be the first space-based Michelson interferometer operating in the visible wavelength, with the ability to perform ultra-high precision astrometric measurements on distant celestial objects. SIM Lite data will address in a fundamental way questions such as characterization of Earth-mass planets around nearby stars. To accomplish these goals it is necessary to rely on a model-based systems engineering approach - much more so than most other space missions. This paper will describe in further detail the components of this end-to-end performance model, called "SIM-sim", and show how it has helped the systems engineering process.
Semantically Enhanced Online Configuration of Feedback Control Schemes.
Milis, Georgios M; Panayiotou, Christos G; Polycarpou, Marios M
2018-03-01
Recent progress toward the realization of the "Internet of Things" has improved the ability of physical and soft/cyber entities to operate effectively within large-scale, heterogeneous systems. It is important that such capacity be accompanied by feedback control capabilities sufficient to ensure that the overall systems behave according to their specifications and meet their functional objectives. To achieve this, such systems require new architectures that facilitate the online deployment, composition, interoperability, and scalability of control system components. Most current control systems lack scalability and interoperability because their design is based on a fixed configuration of specific components, with knowledge of their individual characteristics only implicitly passed through the design. This paper addresses the need for flexibility when replacing components or installing new components, which might occur when an existing component is upgraded or when a new application requires a new component, without the need to readjust or redesign the overall system. A semantically enhanced feedback control architecture is introduced for a class of systems, aimed at accommodating new components into a closed-loop control framework by exploiting the semantic inference capabilities of an ontology-based knowledge model. This architecture supports continuous operation of the control system, a crucial property for large-scale systems for which interruptions have a negative impact on key performance metrics that may include human comfort and welfare or economic costs. A case-study example from the smart buildings domain is used to illustrate the proposed architecture and semantic inference mechanisms.
A Logical Account of Diagnosis with Multiple Theories
NASA Technical Reports Server (NTRS)
Pandurang, P.; Lum, Henry Jr. (Technical Monitor)
1994-01-01
Model-based diagnosis is a powerful, first-principles approach to diagnosis. The primary drawback with model-based diagnosis is that it is based on a system model, and this model might be inappropriate. The inappropriateness of models usually stems from the fundamental tradeoff between completeness and efficiency. Recently, Struss has developed an elegant proposal for diagnosis with multiple models. Struss characterizes models as relations and develops a precise notion of abstraction. He defines relations between models and analyzes the effect of a model switch on the space of possible diagnoses. In this paper we extend Struss's proposal in three ways. First, our account of diagnosis with multiple models is based on representing models as more expressive first-order theories, rather than as relations. A key technical contribution is the use of a general notion of abstraction based on interpretations between theories. Second, Struss conflates component modes with models, requiring him to define model relations, such as choices, that result in non-relational models. We avoid this problem by differentiating component modes from models. Third, we present a more general account of simplifications that correctly handles situations where the simplification contradicts the base theory.
Integrated Workforce Modeling System
NASA Technical Reports Server (NTRS)
Moynihan, Gary P.
2000-01-01
There are several computer-based systems, currently in various phases of development at KSC, which encompass some component, aspect, or function of workforce modeling. These systems may offer redundant capabilities and/or incompatible interfaces. A systems approach to workforce modeling is necessary in order to identify and better address user requirements. This research has consisted of two primary tasks. Task 1 provided an assessment of existing and proposed KSC workforce modeling systems for their functionality and applicability to the workforce planning function. Task 2 resulted in the development of a proof-of-concept design for a systems approach to workforce modeling. The model incorporates critical aspects of workforce planning, including hires, attrition, and employee development.
Remotely piloted vehicle: Application of the GRASP analysis method
NASA Technical Reports Server (NTRS)
Andre, W. L.; Morris, J. B.
1981-01-01
The application of General Reliability Analysis Simulation Program (GRASP) to the remotely piloted vehicle (RPV) system is discussed. The model simulates the field operation of the RPV system. By using individual component reliabilities, the overall reliability of the RPV system is determined. The results of the simulations are given in operational days. The model represented is only a basis from which more detailed work could progress. The RPV system in this model is based on preliminary specifications and estimated values. The use of GRASP from basic system definition, to model input, and to model verification is demonstrated.
Improving Adolescent Judgment and Decision Making
Dansereau, Donald F.; Knight, Danica K.; Flynn, Patrick M.
2013-01-01
Human judgment and decision making (JDM) has substantial room for improvement, especially among adolescents. Increased technological and social complexity “ups the ante” for developing impactful JDM interventions and aids. Current explanatory advances in this field emphasize dual processing models that incorporate both experiential and analytic processing systems. According to these models, judgment and decisions based on the experiential system are rapid and stem from automatic reference to previously stored episodes. Those based on the analytic system are viewed as slower and consciously developed. These models also hypothesize that metacognitive (self-monitoring) activities embedded in the analytic system influence how and when the two systems are used. What is not included in these models is the development of an intersection between the two systems. Because such an intersection is strongly suggested by memory and educational research as the basis of wisdom/expertise, the present paper describes an Integrated Judgment and Decision-Making Model (IJDM) that incorporates this component. Wisdom/expertise is hypothesized to contain a collection of schematic structures that can emerge from the accumulation of similar episodes or repeated analytic practice. As will be argued, in comparisons to dual system models, the addition of this component provides a broader basis for selecting and designing interventions to improve adolescent JDM. Its development also has implications for generally enhancing cognitive interventions by adopting principles from athletic training to create automated, expert behaviors. PMID:24391350
NASA Technical Reports Server (NTRS)
Packard, Michael H.
2002-01-01
Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.
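The following sketch shows, under stated assumptions, one way component-level failure-time distributions and mitigation factors could be rolled up into an overall Loss-of-Mission estimate by Monte Carlo; the Weibull parameters and propagation probabilities are placeholders, and the paper's actual QRAS/PRA tooling is not reproduced here.

```python
# Minimal sketch (not the QRAS/PRA tooling in the paper): combining hypothetical
# component time-to-failure distributions with mitigation factors into an
# overall Loss-of-Mission (LOM) estimate by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
mission_hours = 5000.0
# (Weibull shape, scale in hours, probability the failure propagates to LOM); illustrative only
components = {
    "turbine_blade": (2.0, 20000.0, 0.3),
    "controller":    (1.0, 50000.0, 0.8),
    "fuel_pump":     (1.5, 30000.0, 0.5),
}

n = 100_000
lom = np.zeros(n, dtype=bool)
for shape, scale, p_propagate in components.values():
    ttf = scale * rng.weibull(shape, size=n)      # sampled time to component failure
    propagates = rng.random(n) < p_propagate      # mitigation fails to stop the failure mode
    lom |= (ttf < mission_hours) & propagates

print("estimated P(LOM) over the mission:", lom.mean())
```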
A locomotive-track coupled vertical dynamics model with gear transmissions
NASA Astrophysics Data System (ADS)
Chen, Zaigang; Zhai, Wanming; Wang, Kaiyun
2017-02-01
A gear transmission system is a key element in a locomotive for the transmission of traction or braking forces between the motor and the wheel-rail interface. Its dynamic performance has a direct effect on the operational reliability of the locomotive and its components. This paper proposes a comprehensive locomotive-track coupled vertical dynamics model, in which the locomotive is driven by axle-hung motors. In this coupled dynamics model, the dynamic interactions between the gear transmission system and the other components, e.g. motor and wheelset, are considered based on the detailed analysis of its structural properties and working mechanism. Thus, the mechanical transmission system for power delivery from the motor to the wheelset via gear transmission is coupled with a traditional locomotive-track dynamics system via the wheel-rail contact interface and the gear mesh interface. This developed dynamics model enables investigations of the dynamic performance of the entire dynamics system under the excitations from the wheel-rail contact interface and/or the gear mesh interface. Dynamic interactions are demonstrated by numerical simulations using this dynamics model. The results indicate that both of the excitations from the wheel-rail contact interface and the gear mesh interface have a significant effect on the dynamic responses of the components in this coupled dynamics system.
Stability and Control of Human Trunk Movement During Walking.
Wu, Q.; Sepehri, N.; Thornton-Trump, A. B.; Alexander, M.
1998-01-01
A mathematical model has been developed to study the control mechanisms of human trunk movement during walking. The trunk is modeled as a base-excited inverted pendulum with two degrees of rotational freedom. The base point, corresponding to the bony landmark of the sacrum, can move in three-dimensional space in a general way. Since the stability of upright posture is essential for human walking, a controller has been designed such that the stability of the pendulum about the upright position is guaranteed. The control laws are developed based on Lyapunov's stability theory and include feedforward and linear feedback components. It is found that the feedforward component plays a critical role in maintaining postural stability, and the linear feedback component (resulting from the viscoelastic function of the musculoskeletal system) can effectively duplicate the pattern of trunk movement. The mathematical model is validated by comparing the simulation results with those based on gait measurements performed in the Biomechanics Laboratory at the University of Manitoba.
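A toy single-degree-of-freedom version of this control structure is sketched below: a base-excited inverted pendulum stabilized by a feedforward term that cancels the gravity and base-excitation torques, plus a linear (PD-like) feedback term. All parameters, gains and the base-acceleration signal are assumptions for illustration, not the paper's identified values.

```python
# Toy illustration only: a single-DOF base-excited inverted pendulum stabilized
# by feedforward plus linear feedback, in the spirit of the trunk model above.
# Parameters and gains are made up for the sketch.
import numpy as np

g, L, m = 9.81, 0.5, 40.0          # gravity, effective trunk length, mass (assumed)
kp, kd = 400.0, 60.0               # linear feedback gains (assumed)
dt, T = 0.001, 5.0

def base_accel(t):                 # horizontal sacrum acceleration (assumed sinusoid)
    return 0.5 * np.sin(2 * np.pi * 1.8 * t)

theta, omega = 0.05, 0.0           # small initial lean (rad), angular rate
for t in np.arange(0.0, T, dt):
    # feedforward: cancel the gravity toppling torque and the base-excitation torque
    u_ff = -m * g * L * np.sin(theta) + m * L * base_accel(t) * np.cos(theta)
    u_fb = -kp * theta - kd * omega               # viscoelastic-like linear feedback
    torque = u_ff + u_fb
    alpha = (g / L) * np.sin(theta) - base_accel(t) * np.cos(theta) / L \
            + torque / (m * L**2)
    omega += alpha * dt
    theta += omega * dt

print("final lean angle (rad):", round(theta, 5))
```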
CEREF: A hybrid data-driven model for forecasting annual streamflow from a socio-hydrological system
NASA Astrophysics Data System (ADS)
Zhang, Hongbo; Singh, Vijay P.; Wang, Bin; Yu, Yinghao
2016-09-01
Hydrological forecasting is complicated by flow regime alterations in a coupled socio-hydrologic system, which encounters increasingly non-stationary, nonlinear and irregular changes that make decision support difficult for future water resources management. Currently, many hybrid data-driven models, based on the decomposition-prediction-reconstruction principle, have been developed to improve the ability to predict annual streamflow. However, many problems require further investigation, chief among them the direction of the trend component decomposed from the annual streamflow series, which is always difficult to ascertain. In this paper, a hybrid data-driven model was proposed to address this issue, combining empirical mode decomposition (EMD), radial basis function neural networks (RBFNN) and an external forces (EF) variable; it is called the CEREF model. The hybrid model employed EMD for decomposition and RBFNN for intrinsic mode function (IMF) forecasting, and determined the future direction of the trend component by regression with EF, taken as basin water demand representing the social component of the socio-hydrologic system. The Wuding River basin was considered for the case study, and two standard statistical measures, root mean squared error (RMSE) and mean absolute error (MAE), were used to evaluate the performance of the CEREF model and compare it with other models: the autoregressive (AR), RBFNN and EMD-RBFNN. Results indicated that the CEREF model had lower RMSE and MAE statistics (by 42.8% and 7.6%, respectively) than the other models, and provided a superior alternative for forecasting annual runoff in the Wuding River basin. Moreover, the CEREF model can enlarge the effective intervals of streamflow forecasting compared to the EMD-RBFNN model by introducing the water demand planned by the government department, improving long-term prediction accuracy. In addition, we considered the high-frequency component, a frequent subject of concern in EMD-based forecasting, and the results showed that removing the high-frequency component is an effective measure to improve forecasting precision; it is suggested for use with the CEREF model for better performance. Finally, the study concluded that the CEREF model can forecast non-stationary annual streamflow change as a co-evolution of hydrologic and social systems with better accuracy, and that removing the high-frequency component further improves its performance. The CEREF model is thus beneficial for data-driven hydrologic forecasting in complex socio-hydrologic systems and, as a simple data-driven socio-hydrologic forecasting model, deserves more attention.
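A minimal sketch of the decomposition-prediction-reconstruction idea follows, with synthetic data: a moving-average split stands in for EMD, a small Gaussian-RBF least-squares fit forecasts the fluctuating component, and the trend direction is regressed on an external-forces (water-demand) series. It illustrates the structure only and is not the CEREF implementation.

```python
# Rough sketch of decomposition-prediction-reconstruction forecasting. A moving
# average stands in for EMD (a real implementation would use an EMD package),
# the "RBF network" is a tiny Gaussian-kernel least-squares fit, and the trend
# direction is regressed on an external-forces (water demand) series.
# All data below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(60)
demand = 50 + 0.8 * years + rng.normal(0, 2, 60)             # EF: basin water demand
flow = 400 - 1.5 * years + 30 * np.sin(years / 4) + rng.normal(0, 10, 60)

# --- "decomposition" (EMD stand-in): slow trend + fluctuating component
trend = np.convolve(flow, np.ones(9) / 9, mode="same")
fluct = flow - trend

# --- RBF fit of the fluctuating component on its lagged values
def rbf_features(x, centers, width):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

lags, target = fluct[:-1], fluct[1:]
centers = np.linspace(lags.min(), lags.max(), 10)
width = np.ptp(lags) / 10
Phi = rbf_features(lags, centers, width)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
fluct_next = rbf_features(fluct[-1:], centers, width) @ w

# --- trend direction from regression on the external-forces series
a, b = np.polyfit(demand, trend, 1)
trend_next = a * (demand[-1] + 0.8) + b                      # assumed next-year demand

print("forecast annual flow:", (trend_next + fluct_next).item())
```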
Reliability analysis based on the losses from failures.
Todinov, M T
2006-04-01
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected loss given failure is a linear combination of the expected losses from failure associated with the separate failure modes, scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in the case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital costs, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages over cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different numbers of failures occurring during a specified time interval (e.g., during one year).
The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.
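The weighted-combination statement above can be made concrete with a small numeric example; the failure-mode probabilities, losses and failure frequency below are hypothetical.

```python
# Minimal numeric illustration (hypothetical numbers) of the relationship stated
# above: the expected loss given failure is the conditional-probability-weighted
# combination of the expected losses of the mutually exclusive failure modes.
failure_modes = {
    # mode: (probability the mode initiates the failure, expected loss if it does, $)
    "seal_leak":       (0.55, 12_000),
    "bearing_seizure": (0.30, 45_000),
    "shaft_fracture":  (0.15, 180_000),
}

expected_loss_given_failure = sum(p * loss for p, loss in failure_modes.values())
expected_failures_per_year = 0.8    # assumed, e.g. from an early-life failure model
expected_annual_losses = expected_failures_per_year * expected_loss_given_failure

print(f"E[loss | failure] = ${expected_loss_given_failure:,.0f}")
print(f"expected losses per year = ${expected_annual_losses:,.0f}")
```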
Characterization of natural ventilation in wastewater collection systems.
Ward, Matthew; Corsi, Richard; Morton, Robert; Knapp, Tom; Apgar, Dirk; Quigley, Chris; Easter, Chris; Witherspoon, Jay; Pramanik, Amit; Parker, Wayne
2011-03-01
The purpose of the study was to characterize natural ventilation in full-scale gravity collection system components while measuring other parameters related to ventilation. Experiments were completed at four different locations in the wastewater collection systems of Los Angeles County Sanitation Districts, Los Angeles, California, and the King County Wastewater Treatment District, Seattle, Washington. The subject components were concrete gravity pipes ranging in diameter from 0.8 to 2.4 m (33 to 96 in.). Air velocity was measured in each pipe using a carbon-monoxide pulse tracer method. Air velocity was measured entering or exiting the components at vents using a standpipe and hotwire anemometer arrangement. Ambient wind speed, temperature, and relative humidity; headspace temperature and relative humidity; and wastewater flow and temperature were measured. The field experiments resulted in a large database of measured ventilation and related parameters characterizing ventilation in full-scale gravity sewers. Measured ventilation rates ranged from 23 to 840 L/s. The experimental data was used to evaluate existing ventilation models. Three models that were based upon empirical extrapolation, computational fluid dynamics, and thermodynamics, respectively, were evaluated based on predictive accuracy compared to the measured data. Strengths and weaknesses in each model were found and these observations were used to propose a concept for an improved ventilation model.
NASA Astrophysics Data System (ADS)
Juromskiy, V. M.
2016-09-01
A mathematical model is developed for the electric drive of a high-speed separation device using the Simulink dynamic-systems modeling environment in MATLAB. The model is focused on the study of automatic control of the actuator's power factor (cos φ) by compensating the reactive component of the total power through switching a capacitor bank in series with the actuator. The model is based on the methodology of structural modeling of dynamic processes.
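As a back-of-envelope companion to such a model, the snippet below sizes the reactive power a capacitor bank must supply to raise a drive's power factor using the standard Q = P(tan φ1 - tan φ2) balance; the power and power-factor values are assumed, and the connection details of the simulated bank are not addressed here.

```python
# Back-of-envelope sketch (not the Simulink model itself): sizing the reactive
# power a capacitor bank must supply to raise an actuator's power factor, using
# the standard Q = P * (tan(phi1) - tan(phi2)) balance. Numbers are illustrative.
import math

P_kw = 15.0               # active power drawn by the drive (assumed)
pf_initial = 0.72         # power factor before compensation (assumed)
pf_target = 0.95          # desired power factor

phi1 = math.acos(pf_initial)
phi2 = math.acos(pf_target)
Q_capacitor_kvar = P_kw * (math.tan(phi1) - math.tan(phi2))

print(f"required compensation: {Q_capacitor_kvar:.2f} kvar")
```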
GIS-based spatial decision support system for grain logistics management
NASA Astrophysics Data System (ADS)
Zhen, Tong; Ge, Hongyi; Jiang, Yuying; Che, Yi
2010-07-01
Grain logistics is an important component of social logistics, owing to its frequent circulation and large volumes. At present there is no modern grain logistics distribution management system, and logistics costs are high. Geographic Information Systems (GIS) have been widely used for spatial data manipulation and model operations, and provide effective decision support through their spatial database management capabilities and cartographic visualization. In the present paper, a spatial decision support system (SDSS) is proposed to support policy makers and to reduce the cost of grain logistics. The system is composed of two major components, a grain logistics goods-tracking model and a vehicle routing problem optimization model, and also allows incorporation of data coming from external sources. The proposed system is an effective tool to manage grain logistics in order to increase the speed of grain logistics and reduce grain circulation costs.
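To illustrate the routing side of such a system, here is a hypothetical nearest-neighbour vehicle-routing sketch over straight-line distances; a production SDSS would instead use road-network distances and constraints from the GIS layers, and all locations are invented.

```python
# Hypothetical sketch of a vehicle-routing component: a nearest-neighbour
# heuristic over straight-line distances between a depot and grain delivery
# points. A real SDSS would use road-network distances from the GIS layer.
import math

depot = (0.0, 0.0)
stops = {"silo_A": (2.0, 3.5), "mill_B": (5.0, 1.0), "port_C": (4.0, 6.0), "store_D": (1.0, 5.5)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

route, here, remaining = [], depot, dict(stops)
while remaining:
    nxt = min(remaining, key=lambda name: dist(here, remaining[name]))
    route.append(nxt)
    here = remaining.pop(nxt)

points = [depot] + [stops[n] for n in route] + [depot]
total = sum(dist(a, b) for a, b in zip(points[:-1], points[1:]))
print("visit order:", route, "| round-trip distance:", round(total, 2))
```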
Modeling the Personal Health Ecosystem.
Blobel, Bernd; Brochhausen, Mathias; Ruotsalainen, Pekka
2018-01-01
Complex ecosystems like the pHealth one combine different domains represented by a huge variety of different actors (human beings, organizations, devices, applications, components) belonging to different policy domains, coming from different disciplines, deploying different methodologies, terminologies, and ontologies, offering different levels of knowledge, skills, and experiences, acting in different scenarios and accommodating different business cases to meet the intended business objectives. For correctly modeling such systems, a system-oriented, architecture-centric, ontology-based, policy-driven approach is inevitable, thereby following established Good Modeling Best Practices. However, most of the existing standards, specifications and tools for describing, representing, implementing and managing health (information) systems reflect the advancement of information and communication technology (ICT) represented by different evolutionary levels of data modeling. The paper presents a methodology for integrating, adopting and advancing models, standards, specifications as well as implemented systems and components on the way towards the aforementioned ultimate approach, so meeting the challenge we face when transforming health systems towards ubiquitous, personalized, predictive, preventive, participative, and cognitive health and social care.
NASA Technical Reports Server (NTRS)
1974-01-01
The transient and steady state response of the respiratory control system for variations in volumetric fractions of inspired gases and special system parameters are modeled. The program contains the capability to change workload. The program is based on Grodins' respiratory control model and can be envisioned as a feedback control system comprised of a plant (the controlled system) and the regulating component (controlling system). The controlled system is partitioned into 3 compartments corresponding to lungs, brain, and tissue with a fluid interconnecting patch representing the blood.
Distillation and Air Stripping Designs for the Lunar Surface
NASA Technical Reports Server (NTRS)
Boul, Peter J.; Lange, Kevin E.; Conger, Bruce; Anderson, Molly
2009-01-01
Air stripping and distillation are two different gravity-based methods, which may be applied to the purification of wastewater on the lunar base. These gravity-based solutions to water processing are robust physical separation techniques, which may be advantageous relative to many other techniques because of their simplicity in design and operation. The two techniques can be used in conjunction with each other to obtain high purity water. The components and feed compositions for modeling waste water streams are presented in conjunction with the Aspen property system for traditional stage distillation models and air stripping models. While the individual components for each of the waste streams will vary naturally within certain bounds, an analog model for waste water processing is suggested based on typical concentration ranges for these components. Target purity levels for recycled water are determined for each individual component based on NASA's required maximum contaminant levels for potable water. Distillation processes are modeled separately and in tandem with air stripping to demonstrate the potential effectiveness and utility of these methods in recycling wastewater on the Moon. Optimum parameters such as reflux ratio, feed stage location, and processing rates are determined with respect to the power consumption of the process. Multistage distillation is evaluated for components in wastewater to determine the minimum number of stages necessary for each of 65 components in humidity condensate and urine wastewater mixed streams. Components of the wastewater streams are ranked by Henry's Law constant, and the suitability of air stripping for the purification of wastewater in terms of component removal is evaluated. Scaling factors for distillation and air stripping columns are presented to account for the difference in the lunar gravitational environment. Commercially available distillation and air stripping units which are considered suitable for Exploration Life Support are presented. The advantages of the various designs are summarized with respect to water purity levels, power consumption, and processing rates.
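One conventional way to screen per-component stage requirements of the kind described above is the Fenske relation for the minimum number of equilibrium stages; the sketch below applies it to an assumed binary water/contaminant split and is not drawn from the paper's Aspen results.

```python
# Illustrative only: the Fenske relation for the minimum number of equilibrium
# stages to reach a target split. The relative volatility and compositions
# below are assumed, not the paper's 65-component results.
import math

alpha = 4.0          # relative volatility of water to the contaminant (assumed)
x_dist = 0.9999      # water purity required in the recycled-water distillate (assumed)
x_bott = 0.05        # water fraction allowed in the concentrated bottoms (assumed)

N_min = math.log((x_dist / (1 - x_dist)) * ((1 - x_bott) / x_bott)) / math.log(alpha)
print(f"minimum number of stages: {N_min:.1f}")
```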
Human systems immunology: hypothesis-based modeling and unbiased data-driven approaches.
Arazi, Arnon; Pendergraft, William F; Ribeiro, Ruy M; Perelson, Alan S; Hacohen, Nir
2013-10-31
Systems immunology is an emerging paradigm that aims at a more systematic and quantitative understanding of the immune system. Two major approaches have been utilized to date in this field: unbiased data-driven modeling to comprehensively identify molecular and cellular components of a system and their interactions; and hypothesis-based quantitative modeling to understand the operating principles of a system by extracting a minimal set of variables and rules underlying them. In this review, we describe applications of the two approaches to the study of viral infections and autoimmune diseases in humans, and discuss possible ways by which these two approaches can synergize when applied to human immunology. Copyright © 2012 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Roy, Surajit; Hirt, Evelyn H.
2014-09-12
This report describes research results to date in support of the integration and demonstration of diagnostics technologies for prototypical AdvSMR passive components (to establish condition indices for monitoring) with model-based prognostics methods. The focus of the PHM methodology and algorithm development in this study is at the localized scale. Multiple localized measurements of material condition (using advanced nondestructive measurement methods), along with available measurements of the stressor environment, enhance the performance of localized diagnostics and prognostics of passive AdvSMR components and systems.
Jiao, Dazhi; Wild, David J
2009-02-01
This paper proposes a system that automatically extracts CYP protein and chemical interactions from journal article abstracts, using natural language processing (NLP) and text mining methods. In our system, we employ a maximum entropy based learning method, using results from syntactic, semantic, and lexical analysis of texts. We first present our system architecture and then discuss the data set for training our machine learning based models and the methods in building components in our system, such as part of speech (POS) tagging, Named Entity Recognition (NER), dependency parsing, and relation extraction. An evaluation of the system is conducted at the end, yielding very promising results: The POS, dependency parsing, and NER components in our system have achieved a very high level of accuracy as measured by precision, ranging from 85.9% to 98.5%, and the precision and the recall of the interaction extraction component are 76.0% and 82.6%, and for the overall system are 68.4% and 72.2%, respectively.
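A toy stand-in for the interaction-extraction step is sketched below: a maximum-entropy classifier (logistic regression) over simple bag-of-words features. The real system uses features from POS tagging, NER and dependency parsing over a curated corpus; the mini-corpus and labels here are invented.

```python
# Toy stand-in for the interaction-extraction step: a maximum-entropy classifier
# (logistic regression) over bag-of-words features deciding whether a sentence
# asserts a CYP-chemical interaction. The mini-corpus and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

sentences = [
    "CYP3A4 metabolizes midazolam in human liver microsomes",
    "Ketoconazole strongly inhibits CYP3A4 activity",
    "CYP2D6 is expressed in several tissues",
    "The study enrolled forty healthy volunteers",
]
labels = [1, 1, 0, 0]   # 1 = interaction asserted, 0 = no interaction

vectorizer = CountVectorizer(lowercase=True)
X = vectorizer.fit_transform(sentences)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

test = ["Fluoxetine inhibits CYP2D6 mediated metabolism"]
print("P(interaction):", clf.predict_proba(vectorizer.transform(test))[0, 1])
```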
Modeling Adaptive Educational Methods with IMS Learning Design
ERIC Educational Resources Information Center
Specht, Marcus; Burgos, Daniel
2007-01-01
The paper describes a classification system for adaptive methods developed in the area of adaptive educational hypermedia based on four dimensions: What components of the educational system are adapted? To what features of the user and the current context does the system adapt? Why does the system adapt? How does the system get the necessary…
Computational model of precision grip in Parkinson's disease: a utility based approach
Gupta, Ankur; Balasubramani, Pragathi P.; Chakravarthy, V. Srinivasa
2013-01-01
We propose a computational model of Precision Grip (PG) performance in normal subjects and Parkinson's Disease (PD) patients. Prior studies on grip force generation in PD patients show an increase in grip force during ON medication and an increase in the variability of the grip force during OFF medication (Ingvarsson et al., 1997; Fellows et al., 1998). Changes in grip force generation in dopamine-deficient PD conditions strongly suggest contribution of the Basal Ganglia, a deep brain system having a crucial role in translating dopamine signals to decision making. The present approach is to treat the problem of modeling grip force generation as a problem of action selection, which is one of the key functions of the Basal Ganglia. The model consists of two components: (1) the sensory-motor loop component, and (2) the Basal Ganglia component. The sensory-motor loop component converts a reference position and a reference grip force, into lift force and grip force profiles, respectively. These two forces cooperate in grip-lifting a load. The sensory-motor loop component also includes a plant model that represents the interaction between two fingers involved in PG, and the object to be lifted. The Basal Ganglia component is modeled using Reinforcement Learning with the significant difference that the action selection is performed using utility distribution instead of using purely Value-based distribution, thereby incorporating risk-based decision making. The proposed model is able to account for the PG results from normal and PD patients accurately (Ingvarsson et al., 1997; Fellows et al., 1998). To our knowledge the model is the first model of PG in PD conditions. PMID:24348373
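The shift from value-based to utility-based selection can be sketched schematically as below, where the utility combines an expected value with a variance-penalizing risk term before softmax selection; the Q statistics, risk weight and exploration parameter are placeholders rather than the model's fitted values.

```python
# Schematic sketch of risk-sensitive, utility-based action selection (expected
# value minus a risk term) rather than a value-only policy. All numbers are
# placeholders, not the paper's fitted parameters.
import numpy as np

rng = np.random.default_rng(3)
q_mean = np.array([0.60, 0.55, 0.40])   # expected reward of candidate grip-force actions
q_var  = np.array([0.20, 0.02, 0.01])   # variance of those reward estimates
risk_weight = 0.8                       # assumed risk sensitivity
beta = 5.0                              # softmax exploration parameter

utility = q_mean - risk_weight * np.sqrt(q_var)      # risk-sensitive utility
p = np.exp(beta * utility) / np.exp(beta * utility).sum()
action = rng.choice(len(p), p=p)

print("selection probabilities:", np.round(p, 3), "| chosen action:", action)
```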
CICE, The Los Alamos Sea Ice Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunke, Elizabeth; Lipscomb, William; Jones, Philip
The Los Alamos sea ice model (CICE) is the result of an effort to develop a computationally efficient sea ice component for a fully coupled atmosphere–land–ocean–ice global climate model. It was originally designed to be compatible with the Parallel Ocean Program (POP), an ocean circulation model developed at Los Alamos National Laboratory for use on massively parallel computers. CICE has several interacting components: a vertical thermodynamic model that computes local growth rates of snow and ice due to vertical conductive, radiative and turbulent fluxes, along with snowfall; an elastic-viscous-plastic model of ice dynamics, which predicts the velocity field of the ice pack based on a model of the material strength of the ice; an incremental remapping transport model that describes horizontal advection of the areal concentration, ice and snow volume and other state variables; and a ridging parameterization that transfers ice among thickness categories based on energetic balances and rates of strain. It also includes a biogeochemical model that describes evolution of the ice ecosystem. The CICE sea ice model is used for climate research as one component of complex global earth system models that include atmosphere, land, ocean and biogeochemistry components. It is also used for operational sea ice forecasting in the polar regions and in numerical weather prediction models.
Tolerance assignment in optical design
NASA Astrophysics Data System (ADS)
Youngworth, Richard Neil
2002-09-01
Tolerance assignment is necessary in any engineering endeavor because fabricated systems---due to the stochastic nature of manufacturing and assembly processes---necessarily deviate from the nominal design. This thesis addresses the problem of optical tolerancing. The work can logically be split into three different components that all play an essential role. The first part addresses the modeling of manufacturing errors in contemporary fabrication and assembly methods. The second component is derived from the design aspect---the development of a cost-based tolerancing procedure. The third part addresses the modeling of image quality in an efficient manner that is conducive to the tolerance assignment process. The purpose of the first component, modeling manufacturing errors, is twofold---to determine the most critical tolerancing parameters and to understand better the effects of fabrication errors. Specifically, mid-spatial-frequency errors, typically introduced in sub-aperture grinding and polishing fabrication processes, are modeled. The implication is that improving process control and understanding better the effects of the errors makes the task of tolerance assignment more manageable. Conventional tolerancing methods do not directly incorporate cost. Consequently, tolerancing approaches tend to focus more on image quality. The goal of the second part of the thesis is to develop cost-based tolerancing procedures that facilitate optimum system fabrication by generating the loosest acceptable tolerances. This work has the potential to impact a wide range of optical designs. The third element, efficient modeling of image quality, is directly related to the cost-based optical tolerancing method. Cost-based tolerancing requires efficient and accurate modeling of the effects of errors on the performance of optical systems. Thus it is important to be able to compute the gradient and the Hessian, with respect to the parameters that need to be toleranced, of the figure of merit that measures the image quality of a system. An algebraic method for computing the gradient and the Hessian is developed using perturbation theory.
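For the gradient and Hessian computation mentioned above, a generic central-difference version (rather than the thesis's algebraic perturbation method) looks like the following, with a quadratic toy merit function standing in for a real lens-evaluation routine.

```python
# Generic numerical sketch (not the thesis's algebraic perturbation method):
# central-difference estimates of the gradient and Hessian of a scalar image-
# quality merit function with respect to tolerance parameters. The quadratic
# test function is a placeholder for a real lens-evaluation routine.
import numpy as np

def merit(p):                       # placeholder for e.g. RMS wavefront error
    tilt, decenter = p
    return 0.002 + 3.0 * tilt**2 + 1.5 * decenter**2 + 0.8 * tilt * decenter

def gradient_and_hessian(f, p, h=1e-5):
    n = len(p)
    g = np.zeros(n)
    H = np.zeros((n, n))
    for i in range(n):
        e_i = np.eye(n)[i] * h
        g[i] = (f(p + e_i) - f(p - e_i)) / (2 * h)
        for j in range(n):
            e_j = np.eye(n)[j] * h
            H[i, j] = (f(p + e_i + e_j) - f(p + e_i - e_j)
                       - f(p - e_i + e_j) + f(p - e_i - e_j)) / (4 * h**2)
    return g, H

g, H = gradient_and_hessian(merit, np.array([0.001, 0.002]))
print("gradient:", g, "\nHessian:\n", H)
```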
NASA Technical Reports Server (NTRS)
Hogan, John; Kang, Sukwon; Cavazzoni, Jim; Levri, Julie; Finn, Cory; Luna, Bernadette (Technical Monitor)
2000-01-01
The objective of this study is to compare incineration and composting in a Mars-based advanced life support (ALS) system. The variables explored include waste pre-processing requirements, reactor sizing and buffer capacities. The study incorporates detailed mathematical models of biomass production and waste processing into an existing dynamic ALS system model. The ALS system and incineration models (written in MATLAB/SIMULINK(c)) were developed at the NASA Ames Research Center. The composting process is modeled using first order kinetics, with different degradation rates for individual waste components (carbohydrates, proteins, fats, cellulose and lignin). The biomass waste streams are generated using modified "Energy Cascade" crop models, which use light- and dark-cycle temperatures, irradiance, photoperiod, [CO2], planting density, and relative humidity as model inputs. The study also includes an evaluation of equivalent system mass (ESM).
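The first-order kinetics used for the composting component can be illustrated directly: each waste fraction decays with its own rate constant, so the remaining mass is a sum of exponentials. The rate constants and initial composition below are assumed, not the study's values.

```python
# Minimal sketch of first-order degradation: each waste fraction decays at its
# own rate constant, so remaining mass is a sum of exponentials. The rate
# constants and initial composition are assumed, not the study's values.
import numpy as np

# initial dry mass (kg) and first-order rate constant (1/day); illustrative only
waste = {
    "carbohydrate": (3.0, 0.15),
    "protein":      (1.0, 0.10),
    "fat":          (0.5, 0.05),
    "cellulose":    (2.0, 0.03),
    "lignin":       (1.5, 0.005),
}

t = np.linspace(0, 60, 7)                      # days
remaining = sum(m0 * np.exp(-k * t) for m0, k in waste.values())
for day, mass in zip(t, remaining):
    print(f"day {day:4.0f}: {mass:5.2f} kg remaining")
```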
Nichols, J.M.; Moniz, L.; Nichols, J.D.; Pecora, L.M.; Cooch, E.
2005-01-01
A number of important questions in ecology involve the possibility of interactions or 'coupling' among potential components of ecological systems. The basic question of whether two components are coupled (exhibit dynamical interdependence) is relevant to investigations of movement of animals over space, population regulation, food webs and trophic interactions, and is also useful in the design of monitoring programs. For example, in spatially extended systems, coupling among populations in different locations implies the existence of redundant information in the system and the possibility of exploiting this redundancy in the development of spatial sampling designs. One approach to the identification of coupling involves study of the purported mechanisms linking system components. Another approach is based on time series of two potential components of the same system and, in previous ecological work, has relied on linear cross-correlation analysis. Here we present two different attractor-based approaches, continuity and mutual prediction, for determining the degree to which two population time series (e.g., at different spatial locations) are coupled. Both approaches are demonstrated on a one-dimensional predator-prey model system exhibiting complex dynamics. Of particular interest is the spatial asymmetry introduced into the model as linearly declining resource for the prey over the domain of the spatial coordinate. Results from these approaches are then compared to the more standard cross-correlation analysis. In contrast to cross-correlation, both continuity and mutual prediction are clearly able to discern the asymmetry in the flow of information through this system.
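A simplified mutual-prediction sketch is given below: one series is delay-embedded, each point's nearest neighbour on that reconstructed attractor is found, and the neighbour's time index is used to predict the other series, with high skill suggesting dynamical interdependence. The coupled logistic pair is a toy driver-response system, not the paper's predator-prey model.

```python
# Simplified sketch of mutual prediction: delay-embed one series, find each
# point's nearest neighbour on the reconstructed attractor, and use the
# neighbour's time index to predict the other series. The coupled logistic
# pair is a toy system, not the paper's model.
import numpy as np

rng = np.random.default_rng(5)
n, coupling = 1000, 0.3
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):                       # y is weakly driven by x
    x[t + 1] = 3.8 * x[t] * (1 - x[t])
    y[t + 1] = 3.5 * y[t] * (1 - y[t]) * (1 - coupling * x[t]) + 0.1 * coupling * x[t]

def mutual_prediction_skill(embed_series, other, dim=3, tau=1):
    rows = len(embed_series) - (dim - 1) * tau
    emb = np.column_stack([embed_series[i * tau: i * tau + rows] for i in range(dim)])
    tgt = other[(dim - 1) * tau:]
    pred = np.empty(rows)
    for i in range(rows):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                        # exclude the point itself
        pred[i] = tgt[np.argmin(d)]
    return np.corrcoef(pred, tgt)[0, 1]

print("skill predicting y from x's attractor:", round(mutual_prediction_skill(x, y), 3))
print("skill predicting x from y's attractor:", round(mutual_prediction_skill(y, x), 3))
```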
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Schrenkenghost, Debra K.
2001-01-01
The Adjustable Autonomy Testbed (AAT) is a simulation-based testbed located in the Intelligent Systems Laboratory in the Automation, Robotics and Simulation Division at NASA Johnson Space Center. The purpose of the testbed is to support evaluation and validation of prototypes of adjustable autonomous agent software for control and fault management for complex systems. The AAT project has developed prototype adjustable autonomous agent software and human interfaces for cooperative fault management. This software builds on current autonomous agent technology by altering the architecture, components and interfaces for effective teamwork between autonomous systems and human experts. Autonomous agents include a planner, flexible executive, low level control and deductive model-based fault isolation. Adjustable autonomy is intended to increase the flexibility and effectiveness of fault management with an autonomous system. The test domain for this work is control of advanced life support systems for habitats for planetary exploration. The CONFIG hybrid discrete event simulation environment provides flexible and dynamically reconfigurable models of the behavior of components and fluids in the life support systems. Both discrete event and continuous (discrete time) simulation are supported, and flows and pressures are computed globally. This provides fast dynamic simulations of interacting hardware systems in closed loops that can be reconfigured during operations scenarios, producing complex cascading effects of operations and failures. Current object-oriented model libraries support modeling of fluid systems, and models have been developed of physico-chemical and biological subsystems for processing advanced life support gases. In FY01, water recovery system models will be developed.
Holistic versus monomeric strategies for hydrological modelling of human-modified hydrosystems
NASA Astrophysics Data System (ADS)
Nalbantis, I.; Efstratiadis, A.; Rozos, E.; Kopsiafti, M.; Koutsoyiannis, D.
2011-03-01
The modelling of human-modified basins that are inadequately measured constitutes a challenge for hydrological science. Often, models for such systems are detailed and hydraulics-based for only one part of the system while for other parts oversimplified models or rough assumptions are used. This is typically a bottom-up approach, which seeks to exploit knowledge of hydrological processes at the micro-scale at some components of the system. Also, it is a monomeric approach in two ways: first, essential interactions among system components may be poorly represented or even omitted; second, differences in the level of detail of process representation can lead to uncontrolled errors. Additionally, the calibration procedure merely accounts for the reproduction of the observed responses using typical fitting criteria. The paper aims to raise some critical issues regarding the entire modelling approach for such hydrosystems. For this, two alternative modelling strategies are examined that reflect two modelling approaches or philosophies: a dominant bottom-up approach, which is also monomeric and, very often, based on output information, and a top-down and holistic approach based on generalized information. Critical options are examined, which codify the differences between the two strategies: the representation of surface, groundwater and water management processes, the schematization and parameterization concepts and the parameter estimation methodology. The first strategy is based on stand-alone models for surface and groundwater processes and for water management, which are employed sequentially. For each model, a different (detailed or coarse) parameterization is used, which is dictated by the hydrosystem schematization. The second strategy involves model integration for all processes, parsimonious parameterization and hybrid manual-automatic parameter optimization based on multiple objectives. A test case is examined in a hydrosystem in Greece with high complexities, such as extended surface-groundwater interactions, ill-defined boundaries, sinks to the sea and anthropogenic intervention with unmeasured abstractions both from surface water and aquifers. Criteria for comparison are the physical consistency of parameters, the reproduction of runoff hydrographs at multiple sites within the studied basin, the likelihood of uncontrolled model outputs, the required amount of computational effort and the performance within a stochastic simulation setting. Our work allows for investigating the deterioration of model performance in cases where no balanced attention is paid to all components of human-modified hydrosystems and the related information. Also, sources of errors are identified and their combined effect is evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.
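Under stated assumptions, the two quantities named above can be caricatured as follows: a severity factor taken as the fraction of sampled fire scenarios that damage the target, and a non-suppression probability of the exponential form P_ns = exp(-λt) evaluated at the sampled time available for suppression. All numbers, the hand-rolled Latin hypercube sample and the damage-time stand-in are illustrative, not the facility's CFAST results.

```python
# Hedged sketch with made-up numbers: a severity factor as the fraction of
# sampled fire scenarios that damage the target, and a non-suppression
# probability of the form exp(-lambda * t) at the sampled damage time.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# one-dimensional Latin hypercube sample of peak heat release rate (kW)
strata = (np.arange(n) + rng.random(n)) / n
peak_hrr = 100 + strata[rng.permutation(n)] * 900          # 100-1000 kW, uniform (assumed)

damage_threshold_kw = 650.0                 # HRR needed to damage the SSD component (assumed)
damage_time_min = 5 + 25 * rng.random(n)    # stand-in for CFAST-computed damage times
suppression_rate = 0.1                      # lambda, per-minute suppression rate (assumed)

damaging = peak_hrr > damage_threshold_kw
severity_factor = damaging.mean()
p_ns = np.exp(-suppression_rate * damage_time_min[damaging]).mean()

print(f"severity factor ~ {severity_factor:.2f}, mean P_ns ~ {p_ns:.2f}")
print(f"scenario contribution SF * P_ns ~ {severity_factor * p_ns:.3f}")
```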
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-08-01
An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (Component Ware). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems. However, the lack of a standard real-time distributed object operating system, lack of a standard Computer-Aided Software Environment (CASE) tool notation and lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations as well as assemble new tools on demand from existing tools and architecture design repositories.
Data Management System for the National Energy-Water System (NEWS) Assessment Framework
NASA Astrophysics Data System (ADS)
Corsi, F.; Prousevitch, A.; Glidden, S.; Piasecki, M.; Celicourt, P.; Miara, A.; Fekete, B. M.; Vorosmarty, C. J.; Macknick, J.; Cohen, S. M.
2015-12-01
Aiming at providing a comprehensive assessment of the water-energy nexus, the National Energy-Water System (NEWS) project requires the integration of data to support a modeling framework that links climate, hydrological, power production, transmission, and economic models. Large amounts of georeferenced data have to be streamed to the components of the inter-disciplinary model to explore future challenges and tradeoffs in US power production, based on climate scenarios, power plant locations and technologies, available water resources, ecosystem sustainability, and economic demand. We used open source and in-house built software components to build a system that addresses two major data challenges: on-the-fly re-projection, re-gridding, interpolation, extrapolation, no-data patching, merging, and temporal and spatial aggregation of static and time series datasets, in virtually any file format and file structure and for any geographic extent, for the models' I/O directly at run time; and comprehensive data management based on metadata cataloguing and discovery in repositories utilizing the MAGIC Table (Manipulation and Geographic Inquiry Control database). This innovative concept allows models to access data on-the-fly by data ID, irrespective of file path, file structure, and file format, and regardless of its GIS specifications. In addition, a web-based information and computational system is being developed to control the I/O of spatially distributed Earth system, climate, hydrological, power grid, and economic data flow within the NEWS framework. The system allows scenario building, data exploration, visualization, querying, and manipulation of any loaded gridded, point, and vector polygon dataset. The system has demonstrated its potential for applications in other fields of Earth science modeling, education, and outreach. Over time, this implementation of the system will provide near real-time assessment of various current and future scenarios of the water-energy nexus.
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, computing, and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change in the way they view the software design process from a view toward the solution of a problem to one of the dynamic creation of teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational- information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software will be discussed.
NASA Astrophysics Data System (ADS)
Addy, A. L.; Chow, W. L.; Korst, H. H.; White, R. A.
1983-05-01
Significant data and detailed results of a joint research effort investigating the fluid dynamic mechanisms and interactions within separated flows are presented. The results were obtained through analytical, experimental, and computational investigations of base flow related configurations. The research objectives focus on understanding the component mechanisms and interactions which establish and maintain separated flow regions. Flow models and theoretical analyses were developed to describe the base flowfield. The research approach has been to conduct extensive small-scale experiments on base flow configurations and to analyze these flows by component models and finite-difference techniques. The modeling of base flows of missiles (both powered and unpowered) for transonic and supersonic freestreams has been successful by component models. Research on plume effects and plume modeling indicated the need to match initial plume slope and plume surface curvature for valid wind tunnel simulation of an actual rocket plume. The assembly and development of a state-of-the-art laser Doppler velocimeter (LDV) system for experiments with two-dimensional small-scale models has been completed and detailed velocity and turbulence measurements are underway. The LDV experiments include the entire range of base flowfield mechanisms - shear layer development, recompression/reattachment, shock-induced separation, and plume-induced separation.
Systems analysis techniques for annual cycle thermal energy storage solar systems
NASA Astrophysics Data System (ADS)
Baylin, F.
1980-07-01
Community-scale annual cycle thermal energy storage (ACTES) solar systems are options for building heating and cooling. A variety of approaches are feasible in modeling ACTES solar systems. The key parameter in such efforts, average collector efficiency, is examined, followed by several approaches for simple and effective modeling. Methods are also examined for modeling building loads for structures based on both conventional and passive architectural designs. Two simulation models for sizing solar heating systems with annual storage are presented. Validation is presented by comparison with the results of a study of seasonal storage systems based on SOLANSIM, an hour-by-hour simulation. These models are presently used to examine the economic trade-off between collector field area and storage capacity. Programs directed toward developing other system components, such as improved tanks and solar ponds, or design tools for ACTES solar systems are examined.
Literature Review on Systems of Systems (SoS): A Methodology With Preliminary Results
2013-11-01
Front-matter excerpt only: the report's appendices describe the Enhanced ISAAC Neural Simulation Toolkit (EINSTein) and the Map Aware Nonuniform Automata (MANA) agent-based model, including quadrant charts addressing SoS and associated SoSA designs and tables of SoS/SoSA software component maturation scores for MANA.
Tu, Jia-Ying; Hsiao, Wei-De; Chen, Chih-Ying
2014-01-01
Testing techniques for dynamically substructured systems dissect an entire engineering system into parts. Components can be tested via numerical simulation or physical experiments and run synchronously. Additional actuator systems, which interface the numerical and physical parts, are required within the physical substructure. A high-quality controller, which is designed to cancel unwanted dynamics introduced by the actuators, is important in order to synchronize the numerical and physical outputs and ensure successful tests. An adaptive forward prediction (AFP) algorithm based on delay compensation concepts has been proposed to deal with substructuring control issues. Although the settling performance and numerical conditions of the AFP controller are improved using new direct-compensation and singular value decomposition methods, the experimental results show that a linear dynamics-based controller still outperforms the AFP controller. Based on experimental observations, the least-squares fitting technique, the effectiveness of the AFP compensation and the differences between delay and ordinary differential equations are discussed herein, in order to reflect the fundamental issues of actuator modelling in the relevant literature and, more specifically, to show that the actuator and numerical substructure are heterogeneous dynamic components and should not be collectively modelled as a homogeneous delay differential equation. PMID:25104902
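The basic forward-prediction idea (though not the AFP algorithm itself) can be illustrated by fitting a low-order polynomial to the most recent demand samples and evaluating it one actuator delay ahead, as in the hypothetical sketch below; the signal, delay and fit order are assumed.

```python
# Simplified illustration of forward prediction for delay compensation (not the
# AFP algorithm itself): fit a low-order polynomial to the most recent demand
# samples and evaluate it one actuator delay ahead, so the delayed physical
# output lines up with the numerical substructure. Signal, delay and fit order
# are assumed.
import numpy as np

dt, delay = 0.001, 0.015                  # time step and actuator delay in seconds (assumed)
t = np.arange(0, 1, dt)
demand = np.sin(2 * np.pi * 2 * t)        # numerical-substructure output to be tracked

window, order = 20, 2                     # samples and order of the least-squares fit
compensated = np.copy(demand)
for k in range(window, len(t)):
    local_t = t[k - window:k] - t[k]                      # recent times relative to "now"
    coeffs = np.polyfit(local_t, demand[k - window:k], order)
    compensated[k] = np.polyval(coeffs, delay)            # send the predicted future value

# the actuator applies 'compensated' but outputs it 'delay' seconds later; ideally
# that delayed output matches the original demand at the same instant
shift = int(delay / dt)
error = demand[window + shift:] - compensated[window:len(t) - shift]
print("RMS synchronisation error:", float(np.sqrt(np.mean(error**2))))
```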
Reliability Assessment Approach for Stirling Convertors and Generators
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Schreiber, Jeffrey G.; Zampino, Edward; Best, Timothy
2004-01-01
Stirling power conversion is being considered for use in a Radioisotope Power System for deep-space science missions because it offers a multifold increase in the conversion efficiency of heat to electric power. Quantifying the reliability of a Radioisotope Power System that utilizes Stirling power conversion technology is important in developing and demonstrating the capability for long-term success. A description of the Stirling power convertor is provided, along with a discussion about some of the key components. Ongoing efforts to understand component life, design variables at the component and system levels, related sources, and the nature of uncertainties are discussed. The requirement for reliability also is discussed, and some of the critical areas of concern are identified. A section on the objectives of the performance model development and a computation of reliability is included to highlight the goals of this effort. Also, a viable physics-based reliability plan to model the design-level variable uncertainties at the component and system levels is outlined, and potential benefits are elucidated. The plan involves the interaction of different disciplines, maintaining the physical and probabilistic correlations at all the levels, and a verification process based on rational short-term tests. In addition, both top-down and bottom-up coherency were maintained to follow the physics-based design process and mission requirements. The outlined reliability assessment approach provides guidelines to improve the design and identifies governing variables to achieve high reliability in the Stirling Radioisotope Generator design.
MOEMS Modeling Using the Geometrical Matrix Toolbox
NASA Technical Reports Server (NTRS)
Wilson, William C.; Atkinson, Gary M.
2005-01-01
New technologies such as MicroOptoElectro-Mechanical Systems (MOEMS) require new modeling tools. These tools must simultaneously model the optical, electrical, and mechanical domains and the interactions between these domains. To facilitate rapid prototyping of these new technologies an optical toolbox has been developed for modeling MOEMS devices. The toolbox models are constructed using MATLAB's dynamical simulator, Simulink. Modeling toolboxes will allow users to focus their efforts on system design and analysis as opposed to developing component models. This toolbox was developed to facilitate rapid modeling and design of a MOEMS based laser ultrasonic receiver system.
An object-relational model for structured representation of medical knowledge.
Koch, S; Risch, T; Schneider, W; Wagner, I V
2006-07-01
Domain specific knowledge is often not static but continuously evolving. This is especially true for the medical domain. Furthermore, the lack of standardized structures for presenting knowledge makes it difficult or often impossible to assess new knowledge in the context of existing knowledge. Possibilities to compare knowledge easily and directly are often not given. It is therefore of utmost importance to create a model that allows for comparability, consistency and quality assurance of medical knowledge in specific work situations. For this purpose, we have designed an object-relational model based on structured knowledge elements that are dynamically reusable by different multi-media-based tools for case-based documentation, disease course simulation, and decision support. With this model, high-level components, such as patient case reports or simulations of the course of a disease, and low-level components (e.g., diagnoses, symptoms or treatments), as well as the relationships between these components, are modeled. The resulting schema has been implemented in AMOS II, an object-relational multi-database system supporting different views with regard to search and analysis depending on different work situations.
An new MHD/kinetic model for exploring energetic particle production in macro-scale systems
NASA Astrophysics Data System (ADS)
Drake, J. F.; Swisdak, M.; Dahlin, J. T.
2017-12-01
A novel MHD/kinetic model is being developed to explore magnetic reconnection and particle energization in macro-scale systems such as the solar corona and the outer heliosphere. The model blends the MHD description with a macro-particle description. The rationale for this model is based on the recent discovery that energetic particle production during magnetic reconnection is controlled by Fermi reflection and Betatron acceleration and not parallel electric fields. Since the former mechanisms are not dependent on kinetic scales such as the Debye length and the electron and ion inertial scales, a model that sheds these scales is sufficient for describing particle acceleration in macro-systems. Our MHD/kinetic model includes macroparticles laid out on an MHD grid that are evolved with the MHD fields. Crucially, the feedback of the energetic component on the MHD fluid is included in the dynamics. Thus, energy of the total system, the MHD fluid plus the energetic component, is conserved. The system has no kinetic scales and therefore can be implemented to model energetic particle production in macro-systems with none of the constraints associated with a PIC model. Tests of the new model in simple geometries will be presented and potential applications will be discussed.
NASA Astrophysics Data System (ADS)
Boakye-Boateng, Nasir Abdulai
The growing demand for wind power integration into the generation mix prompts the need to subject these systems to stringent performance requirements. This study sought to identify the required tools and procedures needed to perform real-time simulation studies of Doubly-Fed Induction Generator (DFIG) based wind generation systems as a basis for performing more practical tests of reliability and performance for both grid-connected and islanded wind generation systems. The author focused on developing a platform for wind generation studies and, in addition, tested the performance of two DFIG models on the platform real-time simulation model: an average SimpowerSystemsRTM DFIG wind turbine, and a detailed DFIG based wind turbine using ARTEMiSRTM components. The platform model implemented here consists of a high voltage transmission system with four integrated wind farm models consisting in total of 65 DFIG based wind turbines, and it was developed and tested on OPAL-RT's eMEGASimRTM Real-Time Digital Simulator.
Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N
2012-01-01
Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework based on the terminology controlled approach to enable the interoperation between the search interface and heterogeneous data sources. Software components interoperate via common terminology service and abstract criteria model so as to promote component reuse and incremental system evolution.
Lessons Learned from using a Livingstone Model to Diagnose a Main Propulsion System
NASA Technical Reports Server (NTRS)
Sweet, Adam; Bajwa, Anupa
2003-01-01
NASA researchers have demonstrated that qualitative, model-based reasoning can be used for fault detection in a Main Propulsion System (MPS), a complex, continuous system. At the heart of this diagnostic system is Livingstone, a discrete, propositional logic-based inference engine. Livingstone comprises a language for specifying a discrete model of the system and a set of algorithms that use the model to track the system's state. Livingstone uses the model to test assumptions about the state of a component - observations from the system are compared with values predicted by the model. The intent of this paper is to summarize some advantages of Livingstone seen through our modeling experience: for instance, flexibility in modeling, speed and maturity. We also describe some shortcomings we perceived in the implementation of Livingstone, such as modeling continuous dynamics and handling of transients. We list some upcoming enhancements to the next version of Livingstone that may resolve some of the current limitations.
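A tiny caricature of the mode-tracking idea (not Livingstone's actual modeling language or inference engine) is sketched below: components carry discrete modes, the model predicts observables from an assumed mode assignment, and a discrepancy with the observations triggers a search for the fault-mode assignment that best explains them.

```python
# Tiny caricature of discrete, model-based diagnosis (not Livingstone itself):
# components have discrete modes, the model predicts observable values from the
# assumed modes, and a discrepancy drives a search for alternative assignments.
from itertools import product

def predict(valve_mode, pump_mode):
    """Qualitative model: flow is 'high' only if the pump runs and the valve is open."""
    return "high" if (pump_mode == "on" and valve_mode == "open") else "zero"

modes = {"valve": ["open", "stuck_closed"], "pump": ["on", "failed_off"]}
nominal = {"valve": "open", "pump": "on"}
observed_flow = "zero"                      # telemetry disagrees with the nominal prediction

if predict(nominal["valve"], nominal["pump"]) != observed_flow:
    candidates = [
        dict(zip(modes, assignment))
        for assignment in product(*modes.values())
        if predict(*assignment) == observed_flow
    ]
    # prefer diagnoses that deviate least from nominal (fewest assumed fault modes)
    candidates.sort(key=lambda c: sum(c[k] != nominal[k] for k in c))
    print("candidate diagnoses:", candidates)
```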
ERIC Educational Resources Information Center
Ballantine, R. Malcolm
Decision Support Systems (DSSs) are computer-based decision aids to use when making decisions which are partially amenable to rational decision-making procedures but contain elements where intuitive judgment is an essential component. In such situations, DSSs are used to improve the quality of decision-making. The DSS approach is based on Simon's…
SolarTherm: A flexible Modelica-based simulator for CSP systems
NASA Astrophysics Data System (ADS)
Scott, Paul; Alonso, Alberto de la Calle; Hinkley, James T.; Pye, John
2017-06-01
Annual performance simulations provide a valuable tool for analysing the viability and overall impact of different concentrating solar power (CSP) component and system designs. However, existing tools work best with conventional systems and are difficult or impossible to adapt when novel components, configurations and operating strategies are of interest. SolarTherm is a new open source simulation tool that fulfils this need for the solar community. It includes a simulation framework and a library of flexible CSP components and control strategies that can be adapted or replaced with new designs to meet the special needs of end users. This paper provides an introduction to SolarTherm and a comparison of models for an energy-based trough system and a physical tower system to those in the well-established and widely-used simulator SAM. Differences were found in some components where the inner workings of SAM are undocumented or not well understood, while the other parts show strong agreement. These results help to validate the fundamentals of SolarTherm and demonstrate that, while at an early stage of development, it is already a useful tool for performing annual simulations.
Integration of Modelling and Graphics to Create an Infrared Signal Processing Test Bed
NASA Astrophysics Data System (ADS)
Sethi, H. R.; Ralph, John E.
1989-03-01
The work reported in this paper was carried out as part of a contract with MoD (PE) UK. It considers the problems associated with realistic modelling of a passive infrared system in an operational environment. Ideally, all aspects of the system and environment should be integrated into a complete end-to-end simulation, but in the past limited computing power has prevented this. Recent developments in workstation technology and the increasing availability of parallel processing techniques make the end-to-end simulation possible. However, the complexity and speed of such simulations create difficulties for the operator in controlling the software and understanding the results. These difficulties can be greatly reduced by providing an extremely user-friendly interface and a very flexible, high-power, high-resolution colour graphics capability. Most system modelling is based on separate software simulation of the individual components of the system itself and its environment. These component models may have their own characteristic inbuilt assumptions and approximations, may be written in the language favoured by the originator, and may have a wide variety of input and output conventions and requirements. The models and their limitations need to be matched to the range of conditions appropriate to the operational scenario. A comprehensive set of databases needs to be generated by the component models, and these databases must be made readily available to the investigator. Performance measures need to be defined and displayed in some convenient graphical form. Some options are presented for combining available hardware and software to create an environment within which the models can be integrated, and which provides the required man-machine interface, graphics, and computing power. The impact of massively parallel processing and artificial intelligence will be discussed. Parallel processing will make real-time end-to-end simulation possible and will greatly improve the graphical visualisation of the model output data. Artificial intelligence should help to enhance the man-machine interface.
An integrated radar model solution for mission level performance and cost trades
NASA Astrophysics Data System (ADS)
Hodge, John; Duncan, Kerron; Zimmerman, Madeline; Drupp, Rob; Manno, Mike; Barrett, Donald; Smith, Amelia
2017-05-01
A fully integrated Mission-Level Radar model is in development as part of a multi-year effort under the Northrop Grumman Mission Systems (NGMS) sector's Model Based Engineering (MBE) initiative to digitally interconnect and unify previously separate performance and cost models. In 2016, an NGMS internal research and development (IR&D) funded multidisciplinary team integrated radio frequency (RF), power, control, size, weight, thermal, and cost models using commercial off-the-shelf software, ModelCenter, for an Active Electronically Scanned Array (AESA) radar system. Each represented model was digitally connected with standard interfaces and unified to allow end-to-end mission system optimization and trade studies. The radar model was then linked to the Air Force's mission modeling framework, AFSIM. The team first had to identify the necessary models and, with the aid of subject matter experts (SMEs), understand and document the inputs, outputs, and behaviors of the component models. This agile development process and collaboration enabled rapid integration of disparate models and the validation of their combined system performance. This MBE framework will allow NGMS to design systems more efficiently and affordably, optimize architectures, and provide increased value to the customer. The model integrates detailed component models that validate cost and performance at the physics level with high-level models that provide visualization of a platform mission. This connectivity of component to mission models allows hardware and software design solutions to be better optimized to meet mission needs, creating cost-optimal solutions for the customer, while reducing design cycle time through risk mitigation and early validation of design decisions.
Chen, Hao; Xie, Xiaoyun; Shu, Wanneng; Xiong, Naixue
2016-10-15
With the rapid growth of wireless sensor applications, the user interfaces and configurations of smart homes have become so complicated and inflexible that users usually have to spend a great amount of time studying them and adapting to their expected operation. In order to improve user experience, a weighted hybrid recommender system based on a Kalman Filter model is proposed to predict what users might want to do next, especially when users are located in a smart home with an enhanced living environment. Specifically, a weight hybridization method is introduced that combines contextual collaborative filtering and contextual content-based recommendations. This method inherits the advantages of the optimum regression and the stability features of the proposed adaptive Kalman Filter model, and it can predict and revise the weight of each system component dynamically. Experimental results show that the hybrid recommender system can optimize the distribution of weights across components and achieve more reasonable recall and precision rates.
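As an illustration of the weighting idea only, here is a minimal sketch, not the authors' exact formulation: a scalar Kalman filter with a random-walk state model tracks the blending weight of the collaborative-filtering component, the content-based component receives the complement, and the "observed" weights fed to the filter are hypothetical.

```python
import numpy as np

# Sketch (not the paper's exact formulation): a scalar Kalman filter tracks the
# blending weight given to the collaborative-filtering score; the content-based
# score receives the complement. The observations z are hypothetical estimates
# of the weight that best explained the user's recent behaviour.

def kalman_update(w, P, z, q=1e-3, r=1e-2):
    """One predict/update step for a random-walk state model."""
    P = P + q                      # predict: weight assumed to drift slowly
    K = P / (P + r)                # Kalman gain
    w = w + K * (z - w)            # update with the observed best weight z
    P = (1 - K) * P
    return w, P

def hybrid_score(cf_score, cb_score, w):
    return w * cf_score + (1 - w) * cb_score

w, P = 0.5, 1.0                    # start with equal weighting, high uncertainty
for z in [0.7, 0.65, 0.72, 0.68]:  # hypothetical observed best weights
    w, P = kalman_update(w, P, z)

print(round(w, 3), round(hybrid_score(cf_score=0.8, cb_score=0.4, w=w), 3))
```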
USDA-ARS's Scientific Manuscript database
To improve the management strategy of riparian restoration, a better understanding of the dynamics of the eco-hydrological system and the feedbacks between its hydrological and ecological components is needed. The fully distributed eco-hydrological model coupled with a hydrology component was developed based o...
Analyzing endocrine system conservation and evolution.
Bonett, Ronald M
2016-08-01
Analyzing variation in rates of evolution can provide important insights into the factors that constrain trait evolution, as well as those that promote diversification. Metazoan endocrine systems exhibit apparent variation in evolutionary rates of their constituent components at multiple levels, yet relatively few studies have quantified these patterns and analyzed them in a phylogenetic context. This may be in part due to historical and current data limitations for many endocrine components and taxonomic groups. However, recent technological advancements such as high-throughput sequencing provide the opportunity to collect large-scale comparative data sets for even non-model species. Such ventures will produce a fertile data landscape for evolutionary analyses of nucleic acid and amino acid based endocrine components. Here I summarize evolutionary rate analyses that can be applied to categorical and continuous endocrine traits, and also those for nucleic acid and protein-based components. I emphasize analyses that could be used to test whether other variables (e.g., ecology, ontogenetic timing of expression, etc.) are related to patterns of rate variation and endocrine component diversification. The application of phylogenetic-based rate analyses to comparative endocrine data will greatly enhance our understanding of the factors that have shaped endocrine system evolution. Copyright © 2016 Elsevier Inc. All rights reserved.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, and electrical. In addition, these models are based on the available test data, which can be updated and the analysis refined as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has proven to be superior to conventional reliability approaches that rely on failure rates derived from similar equipment or on expert judgment alone.
A Model-Driven Approach to e-Course Management
ERIC Educational Resources Information Center
Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana
2018-01-01
This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…
Economic Modeling as a Component of Academic Strategic Planning.
ERIC Educational Resources Information Center
MacKinnon, Joyce; Sothmann, Mark; Johnson, James
2001-01-01
Computer-based economic modeling was used to enable a school of allied health to define outcomes, identify associated costs, develop cost and revenue models, and create a financial planning system. As a strategic planning tool, it assisted realistic budgeting and improved efficiency and effectiveness. (Contains 18 references.) (SK)
NASA Astrophysics Data System (ADS)
Bonne, François; Alamir, Mazen; Hoa, Christine; Bonnay, Patrick; Bon-Mardion, Michel; Monteiro, Lionel
2015-12-01
In this article, we present a new Simulink library of cryogenic components (such as valves, phase separators, mixers, and heat exchangers) that can be assembled to generate model-based control schemes. Every component is described by its algebraic or differential equations and can be assembled with others to build the dynamical model of a complete refrigerator or of a subpart of it. The obtained model can be used to automatically design advanced model-based control schemes. It can also be used to design a model-based PI controller. Advanced control schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful where cryoplants are subjected to the large pulsed thermal loads expected in future fusion reactors, such as the cryogenic cooling systems of the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). The paper gives the example of the generation of the dynamical model of the 400W@1.8K refrigerator and shows how to build a constrained model predictive control for it. Experimental results based on this scheme will be given. This work is supported by the French national research agency (ANR) through the ANR-13-SEED-0005 CRYOGREEN program.
Detecting Cyber Attacks On Nuclear Power Plants
NASA Astrophysics Data System (ADS)
Rrushi, Julian; Campbell, Roy
This paper proposes an unconventional anomaly detection approach that provides digital instrumentation and control (I&C) systems in a nuclear power plant (NPP) with the capability to probabilistically discern between legitimate protocol frames and attack frames. The stochastic activity network (SAN) formalism is used to model the fusion of protocol activity in each digital I&C system and the operation of physical components of an NPP. SAN models are employed to analyze links between protocol frames as streams of bytes, their semantics in terms of NPP operations, control data as stored in the memory of I&C systems, the operations of I&C systems on NPP components, and NPP processes. Reward rates and impulse rewards are defined in the SAN models based on the activity-marking reward structure to estimate NPP operation profiles. These profiles are then used to probabilistically estimate the legitimacy of the semantics and payloads of protocol frames received by I&C systems.
Reliability analysis in interdependent smart grid systems
NASA Astrophysics Data System (ADS)
Peng, Hao; Kan, Zhe; Zhao, Dandan; Han, Jianmin; Lu, Jianfeng; Hu, Zhaolong
2018-06-01
Complex network theory is a useful way to study many real complex systems. In this paper, a reliability analysis model based on complex network theory is introduced for interdependent smart grid systems. We focus on understanding the structure of smart grid systems, studying the underlying network model, their interactions and relationships, and how cascading failures occur in interdependent smart grid systems. We propose a practical model for interdependent smart grid systems using complex network theory. In addition, based on percolation theory, we study the effect of cascading failures and provide a detailed mathematical analysis of failure propagation in such systems. We analyze the reliability of the proposed model under random attacks or failures by calculating the size of the giant functioning component in interdependent smart grid systems. Our simulation results show that there exists a threshold for the proportion of faulty nodes, beyond which the smart grid systems collapse, and we determine the critical values for different system parameters. In this way, the reliability analysis model based on complex network theory can be effectively utilized for anti-attack and protection purposes in interdependent smart grid systems.
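A minimal sketch of the kind of cascading-failure experiment described above, with illustrative parameters only: two interdependent Erdős-Rényi networks with one-to-one coupling, random failure of a fraction of nodes in one network, and iteration until both networks stabilize; the surviving fraction approximates the giant functioning component.

```python
import random
import networkx as nx

# Sketch of a cascading-failure experiment on two interdependent networks
# (illustrative parameters, one-to-one coupling by node label): a fraction
# p_fail of nodes in network A fails at random and failures propagate until
# both networks stabilise; the surviving giant-component fraction is returned.

def giant(G):
    return max(nx.connected_components(G), key=len) if G.number_of_nodes() else set()

def cascade(GA, GB, p_fail=0.3, seed=1):
    random.seed(seed)
    n = GA.number_of_nodes()
    alive_a = set(GA) - set(random.sample(list(GA), int(p_fail * n)))
    alive_b = set(GB)
    while True:
        alive_a &= giant(GA.subgraph(alive_a))     # must be in A's giant component
        new_b = alive_b & alive_a                  # a B node needs its A partner
        new_b &= giant(GB.subgraph(new_b))
        new_a = alive_a & new_b                    # an A node needs its B partner
        if new_a == alive_a and new_b == alive_b:
            return len(new_a) / n
        alive_a, alive_b = new_a, new_b

GA = nx.erdos_renyi_graph(1000, 4 / 1000, seed=2)  # mean degree ~4
GB = nx.erdos_renyi_graph(1000, 4 / 1000, seed=3)
print(cascade(GA, GB, p_fail=0.3))
```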
Early Childhood Intervention in Portugal: An Overview Based on the Developmental Systems Model
ERIC Educational Resources Information Center
Pinto, Ana Isabel; Grande, Catarina; Aguiar, Cecilia; de Almeida, Isabel Chaves; Felgueiras, Isabel; Pimentel, Julia Serpa; Serrano, Ana Maria; Carvalho, Leonor; Brandao, Maria Teresa; Boavida, Tania; Santos, Paula; Lopes-dos-Santos, Pedro
2012-01-01
Research studies on early childhood intervention (ECI) in Portugal are diffuse regarding both program components and the geographical area under scrutiny. Since the 1990s, a growing body of knowledge and evidence in ECI is being gathered, based on postgraduate teaching, in-service training, and research. This article draws on the systems theory…
NASA Technical Reports Server (NTRS)
Antle, John M.; Basso, Bruno; Conant, Richard T.; Godfray, H. Charles J.; Jones, James W.; Herrero, Mario; Howitt, Richard E.; Keating, Brian A.; Munoz-Carpena, Rafael; Rosenzweig, Cynthia
2016-01-01
This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process needed to achieve sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive" space for knowledge product development, and private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and their integration, and linkages of model integration platforms to new data management and visualization tools.
NASA Technical Reports Server (NTRS)
1991-01-01
The technical effort and computer code enhancements performed during the sixth year of the Probabilistic Structural Analysis Methods program are summarized. Various capabilities are described to probabilistically combine structural response and structural resistance to compute component reliability. A library of structural resistance models is implemented in the Numerical Evaluations of Stochastic Structures Under Stress (NESSUS) code that included fatigue, fracture, creep, multi-factor interaction, and other important effects. In addition, a user interface was developed for user-defined resistance models. An accurate and efficient reliability method was developed and was successfully implemented in the NESSUS code to compute component reliability based on user-selected response and resistance models. A risk module was developed to compute component risk with respect to cost, performance, or user-defined criteria. The new component risk assessment capabilities were validated and demonstrated using several examples. Various supporting methodologies were also developed in support of component risk assessment.
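The response/resistance combination can be illustrated with a simple stress-strength Monte Carlo estimate; this is a generic sketch, not the NESSUS reliability method, and the distributions and parameters are assumptions.

```python
import numpy as np

# Minimal stress-strength sketch of the response/resistance idea (not NESSUS
# itself): draw a structural response S and a resistance R from assumed
# distributions and estimate component reliability as P(R > S).

rng = np.random.default_rng(0)
n = 1_000_000
S = rng.normal(loc=300.0, scale=30.0, size=n)               # response, e.g. MPa
R = rng.lognormal(mean=np.log(450.0), sigma=0.08, size=n)   # resistance, e.g. MPa

p_fail = np.mean(R <= S)
print(f"reliability = {1 - p_fail:.6f}, P_f = {p_fail:.2e}")
```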
NASA Astrophysics Data System (ADS)
Wichmann, Volker
2017-09-01
The Gravitational Process Path (GPP) model can be used to simulate the process path and run-out area of gravitational processes based on a digital terrain model (DTM). The conceptual model combines several components (process path, run-out length, sink filling and material deposition) to simulate the movement of a mass point from an initiation site to the deposition area. For each component several modeling approaches are provided, which makes the tool configurable for different processes such as rockfall, debris flows or snow avalanches. The tool can be applied to regional-scale studies such as natural hazard susceptibility mapping but also contains components for scenario-based modeling of single events. Both the modeling approaches and precursor implementations of the tool have proven their applicability in numerous studies, also including geomorphological research questions such as the delineation of sediment cascades or the study of process connectivity. This is the first open-source implementation, completely re-written, extended and improved in many ways. The tool has been committed to the main repository of the System for Automated Geoscientific Analyses (SAGA) and thus will be available with every SAGA release.
Wang, Futao; Pan, Yuanfeng; Cai, Pingxiong; Guo, Tianxiang; Xiao, Huining
2017-10-01
A highly efficient and eco-friendly sugarcane cellulose-based adsorbent was prepared in an attempt to remove Pb²⁺, Cu²⁺ and Zn²⁺ from aqueous solutions. The effects of the initial concentration of heavy metal ions and of temperature on the adsorption capacity of the bioadsorbent were investigated. The adsorption isotherms showed that the adsorption of Pb²⁺, Cu²⁺ and Zn²⁺ followed the Langmuir model, and the maximum adsorption capacities were as high as 558.9, 446.2 and 363.3 mg·g⁻¹, respectively, in the single-component system. The binary-component system was better described by the competitive Langmuir isotherm model. The three-dimensional sorption surface of the binary-component system demonstrated that the presence of Pb²⁺ decreased the sorption of Cu²⁺, but the adsorption amount of the other metal ions was not affected. The SEM-EDAX results revealed that the adsorption of metal ions on the bioadsorbent was mainly driven by coordination, ion exchange and electrostatic association. Copyright © 2017 Elsevier Ltd. All rights reserved.
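For reference, the isotherms named above are commonly written as follows (standard textbook forms; the symbols are generic and not tied to this paper's fitted parameters):

```latex
% Single-component Langmuir isotherm and the competitive (extended) Langmuir
% form used for binary systems; q_max and K_L are the fitted parameters.
\begin{align}
q_e &= \frac{q_{\max} K_L C_e}{1 + K_L C_e} \\[4pt]
q_{e,i} &= \frac{q_{\max,i} K_{L,i} C_{e,i}}{1 + \sum_{j} K_{L,j} C_{e,j}}
\end{align}
```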
Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C
2012-07-01
The objective of this study was to develop and evaluate a mathematical model for estimating the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on the DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45 and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and body weight trajectory of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the amino acid requirements of each animal, taking into account the intake and growth changes of the animal.
Modeling a Thermoelectric HVAC System for Automobiles
NASA Astrophysics Data System (ADS)
Junior, C. S.; Strupp, N. C.; Lemke, N. C.; Koehler, J.
2009-07-01
In automobiles thermal energy is used at various energy scales. With regard to reduction of CO2 emissions, efficient generation of hot and cold temperatures and wise use of waste heat are of paramount importance for car manufacturers worldwide. Thermoelectrics could be a vital component in automobiles of the future. To evaluate the applicability of thermoelectric modules in automobiles, a Modelica model of a thermoelectric liquid-gas heat exchanger was developed for transient simulations. The model uses component models from the object-oriented Modelica library TIL. It was validated based on experimental data of a prototype heat exchanger and used to simulate transient and steady-state behavior. The use of the model within the energy management of an automobile is successfully shown for the air-conditioning system of a car.
Real-time diagnostics for a reusable rocket engine
NASA Technical Reports Server (NTRS)
Guo, T. H.; Merrill, W.; Duyar, A.
1992-01-01
A hierarchical, decentralized diagnostic system is proposed for the Real-Time Diagnostic System component of the Intelligent Control System (ICS) for reusable rocket engines. The proposed diagnostic system has three layers of information processing: condition monitoring, fault mode detection, and expert system diagnostics. The condition monitoring layer is the first level of signal processing; here, important features of the sensor data are extracted. These processed data are then used by the higher-level fault mode detection layer to do preliminary diagnosis of potential faults at the component level. Because of the closely coupled nature of the rocket engine propulsion system components, it is expected that a given engine condition may trigger more than one fault mode detector. Expert knowledge is needed to resolve the conflicting reports from the various failure mode detectors. This is the function of the diagnostic expert layer; here, the heuristic nature of this decision process makes it desirable to use an expert system approach. Implementation of the real-time diagnostic system described above requires a wide spectrum of information processing capability. Generally, in the condition monitoring layer, fast data processing is often needed for feature extraction and signal conditioning. This is usually followed by some detection logic to determine the selected faults at the component level. Three different techniques are used to attack different fault detection problems in the NASA LeRC ICS testbed simulation. The first technique employed is a neural network application for real-time sensor validation, which includes failure detection, isolation, and accommodation. The second approach demonstrated is a model-based fault diagnosis system using on-line parameter identification. Besides these model-based diagnostic schemes, there are still many failure modes that need to be diagnosed by heuristic expert knowledge, which is implemented using a real-time expert system tool called G2 by Gensym Corp. Finally, the distributed diagnostic system requires another level of intelligence to oversee the fault mode reports generated by the component fault detectors. The decision making at this level can best be done using a rule-based expert system. This level of expert knowledge is also implemented using G2.
Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models
2017-01-01
We describe a fully data driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture that consists of two recurrent neural networks, which has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
Martin, G W; Herie, M A; Turner, B J; Cunningham, J A
1998-11-01
Researchers must develop effective strategies for disseminating research-based treatments. This study evaluates the application of a dissemination model based on principles of social marketing and diffusion theory. A case study describes how the model was implemented. A qualitative design was employed to examine rates of adoption and adaptation of an early intervention program by a targeted system of addictions agencies. The interventions were developed at the Addiction Research Foundation in Toronto and disseminated to Assessment and Referral (A/R) Centres in Ontario, Canada. Study participants included the managers and a designated therapist for 33 participating A/R centres. Managers were asked mainly open-ended questions concerning whether their agency had made a formal decision to adopt the intervention and whether therapists in their agency were using the early intervention program. "Adoption" was operationalized as offering the complete four-session intervention to at least one client. At 12 months after the completion of training workshops, 68% of 34 agencies in the target system had adopted the program while 85% of the agencies were using some components of the intervention with clients. The dissemination model appeared to be effective although its application proved to be time-consuming and labour-intensive. The "market analysis", systems focus and field-test components of the model appeared to contribute to its success.
Multibody model reduction by component mode synthesis and component cost analysis
NASA Technical Reports Server (NTRS)
Spanos, J. T.; Mingori, D. L.
1990-01-01
The classical assumed-modes method is widely used in modeling the dynamics of flexible multibody systems. According to the method, the elastic deformation of each component in the system is expanded in a series of spatial and temporal functions known as modes and modal coordinates, respectively. This paper focuses on the selection of component modes used in the assumed-modes expansion. A two-stage component modal reduction method is proposed combining Component Mode Synthesis (CMS) with Component Cost Analysis (CCA). First, each component model is truncated such that the contribution of the high frequency subsystem to the static response is preserved. Second, a new CMS procedure is employed to assemble the system model and CCA is used to further truncate component modes in accordance with their contribution to a quadratic cost function of the system output. The proposed method is demonstrated with a simple example of a flexible two-body system.
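A minimal sketch of the modal-truncation idea only (not the two-stage CMS/CCA procedure itself), using an assumed lumped-parameter component: solve the generalized eigenproblem and retain the lowest-frequency modes as the reduced basis.

```python
import numpy as np
from scipy.linalg import eigh

# Sketch of the modal-truncation step only (not the full CMS/CCA procedure):
# solve K*phi = w^2 * M*phi for a small lumped-parameter component and keep
# the lowest-frequency modes as the reduced basis. Matrices are illustrative.

M = np.diag([2.0, 1.0, 1.0, 0.5])                      # toy mass matrix
K = np.array([[ 4., -2.,  0.,  0.],
              [-2.,  4., -2.,  0.],
              [ 0., -2.,  3., -1.],
              [ 0.,  0., -1.,  1.]]) * 1e4             # toy stiffness matrix

w2, Phi = eigh(K, M)                                   # generalized eigenproblem
keep = 2                                               # retain two modes
Phi_r = Phi[:, :keep]                                  # reduced modal basis

M_r = Phi_r.T @ M @ Phi_r                              # reduced mass matrix
K_r = Phi_r.T @ K @ Phi_r                              # reduced stiffness matrix
print(np.sqrt(w2[:keep]))                              # kept natural frequencies (rad/s)
```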
Turbofan Engine Simulated in a Graphical Simulation Environment
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Guo, Ten-Huei
2004-01-01
Recently, there has been an increase in the development of intelligent engine technology with advanced active component control. The computer engine models used in these control studies are component-level models (CLM): models that link individual component models of state-space and nonlinear algebraic equations, written in a computer language such as Fortran. The difficulty faced in performing control studies on Fortran-based models is that Fortran is not supported by control design and analysis tools, so there is no means for implementing real-time control. It is desirable to have a simulation environment that is straightforward, has modular graphical components, and allows easy access to health, control, and engine parameters through a graphical user interface. Such a tool should also provide the ability to convert a control design into real-time code, helping to make it an extremely powerful tool in control and diagnostic system development. (Figures in the original show the simulation time management inputs, such as Mach number, power lever angle, altitude, ambient temperature change, and afterburner fuel flow versus time, along with the signal flow among the controller, actuators, and component-level model.) The Controls and Dynamics Technologies Branch at the NASA Glenn Research Center has developed and demonstrated a flexible, generic turbofan engine simulation platform that can meet these objectives, known as the Modular Aero-Propulsion System Simulation (MAPSS). MAPSS is a Simulink-based implementation of a Fortran-based, modern, high-pressure-ratio, dual-spool, low-bypass, military-type variable-cycle engine with a digital controller. Simulink (The MathWorks, Natick, MA) is a computer-aided control design and simulation package that allows the graphical representation of dynamic systems in block diagram form. MAPSS is a nonlinear, non-real-time system composed of controller and actuator dynamics (CAD) and component-level model (CLM) modules. The controller in the CAD module emulates the functionality of a digital controller, which has a typical update rate of 50 Hz. The CLM module simulates the dynamics of the engine components and uses an update rate of 2500 Hz, which is needed to iterate to balance mass and energy among system components. The actuators in the CAD module use the same sampling rate as those in the CLM. MAPSS was validated via open-loop and closed-loop comparisons with the Fortran simulation. Closed-loop comparisons examined three states of the model: low-pressure spool speed, high-pressure spool speed, and the average metal temperature measured from the combustor to the high-pressure turbine. In steady state, the error between the simulations is less than 1 percent. During a transient, the difference between the simulations is due to a correction in MAPSS that prevents the gas flow in the bypass duct inlet from flowing forward instead of toward the aft end, which occurs in the Fortran simulation for power lever angles greater than 35 degrees.
Exploring the sustainability of industrial production and energy generation with a model system
The importance and complexity of sustainability have been well recognized, and a formal study of sustainability based on system theory approaches is imperative, as many of the relationships between the various components of the system could be non-linear, intertwined, and non-intuit...
Structural model of control system for hydraulic stepper motor complex
NASA Astrophysics Data System (ADS)
Obukhov, A. D.; Dedov, D. L.; Kolodin, A. N.
2018-03-01
The article considers the problem of developing a structural model of the control system for a hydraulic stepper drive complex. A comparative analysis of stepper drives and an assessment of the applicability of hydraulic stepper motors (HSM) for solving problems requiring accurate displacement in space with subsequent positioning of the object are carried out. The presented structural model of the automated control system of the multi-spindle complex of hydraulic stepper drives reflects the main components of the system, as well as the process of its control based on the transfer of control signals to the solenoid valves by the controller. The models and methods described in the article can be used to formalize the control process in technical systems based on the application of hydraulic stepper drives and allow switching from mechanical control to automated control.
An effectiveness analysis of healthcare systems using a systems theoretic approach.
Chuang, Sheuwen; Inder, Kerry
2009-10-24
The use of accreditation and quality measurement and reporting to improve healthcare quality and patient safety has been widespread across many countries. A review of the literature reveals no association between the accreditation system and the quality measurement and reporting systems, even when hospital compliance with these systems is satisfactory. Improvement of health care outcomes needs to be based on an appreciation of the whole system that contributes to those outcomes. The research literature currently lacks an appropriate analysis and is fragmented among activities. This paper aims to propose an integrated research model of these two systems and to demonstrate the usefulness of the resulting model for strategic research planning. To achieve these aims, a systematic integration of the healthcare accreditation and quality measurement/reporting systems is structured hierarchically. A holistic systems relationship model of the administration segment is developed to act as an investigation framework. A literature-based empirical study is used to validate the proposed relationships derived from the model. Australian experiences are used as evidence for the system effectiveness analysis and design base for an adaptive-control study proposal to show the usefulness of the system model for guiding strategic research. Three basic relationships were revealed and validated from the research literature. The systemic weaknesses of the accreditation system and quality measurement/reporting system from a system flow perspective were examined. The approach provides a system thinking structure to assist the design of quality improvement strategies. The proposed model discovers a fourth implicit relationship, a feedback between quality performance reporting components and choice of accreditation components that is likely to play an important role in health care outcomes. An example involving accreditation surveyors is developed that provides a systematic search for improving the impact of accreditation on quality of care and hence on the accreditation/performance correlation. There is clear value in developing a theoretical systems approach to achieving quality in health care. The introduction of the systematic surveyor-based search for improvements creates an adaptive-control system to optimize health care quality. It is hoped that these outcomes will stimulate further research in the development of strategic planning using systems theoretic approach for the improvement of quality in health care.
Faults Discovery By Using Mined Data
NASA Technical Reports Server (NTRS)
Lee, Charles
2005-01-01
Fault discovery in complex systems consists of model-based reasoning, fault tree analysis, rule-based inference methods, and other approaches. Model-based reasoning builds models for the systems either from mathematical formulations or from experimental models. Fault tree analysis shows the possible causes of a system malfunction by enumerating the suspect components and their respective failure modes that may have induced the problem. Rule-based inference builds the model from expert knowledge. Those models and methods have one thing in common: they presume some prior conditions. Complex systems often use fault trees to analyze the faults. When an error occurs, fault diagnosis is performed by engineers and analysts through extensive examination of all data gathered during the mission. The International Space Station (ISS) control center operates on the data feedback from the system, and decisions are made based on threshold values by using fault trees. Since those decision-making tasks are safety critical and must be done promptly, the engineers who manually analyze the data face a time challenge. To automate this process, this paper presents an approach that uses decision trees to discover faults from data in real time and captures the contents of fault trees as the initial state of the trees.
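In the spirit of the proposed approach, a minimal sketch with hypothetical features, labels and thresholds: a decision tree is trained on historical telemetry labelled nominal or fault and then classifies incoming samples.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Sketch in the spirit of the approach (features, labels and values are
# hypothetical): learn a decision tree from historical telemetry labelled
# nominal/fault, then classify incoming samples in real time.

rng = np.random.default_rng(0)
nominal = rng.normal([50.0, 300.0], [2.0, 5.0], size=(500, 2))   # pressure, temp
faulty  = rng.normal([35.0, 330.0], [4.0, 8.0], size=(50, 2))
X = np.vstack([nominal, faulty])
y = np.array([0] * 500 + [1] * 50)                               # 1 = fault

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

new_sample = np.array([[37.0, 328.0]])                           # incoming telemetry
print("fault" if tree.predict(new_sample)[0] else "nominal")
```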
The role of perceived interactivity in virtual communities: building trust and increasing stickiness
NASA Astrophysics Data System (ADS)
Wang, Hongwei; Meng, Yuan; Wang, Wei
2013-03-01
Although previous research has explored factors affecting trust building in websites, little research has been analysed from the perceived interactivity perspective in virtual communities (VCs). A research model for verifying interactivity antecedents to trust and its impact on member stickiness behaviour is presented. Two social interactivity components and two system interactivity components are, respectively, theorised as process-based antecedents and institution-based antecedents to trust in the model. Data were collected from 310 members of VCs to test the model. The results show that connectedness and reciprocity are important antecedents to trust in members, while responsiveness and active control are important antecedents to trust in systems. The results also indicate that trust has significant influence on the members' duration and retention, which are two dimensions of member stickiness measured in this research. These findings have theoretical implications for online interaction-related literature and critical business implications for practitioners of VCs.
NASA Astrophysics Data System (ADS)
Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Formhals, Julian; Bär, Kristian; Sass, Ingo
2017-04-01
Large-scale borehole thermal energy storage (BTES) is a promising technology in the development of sustainable, renewable and low-emission district heating concepts. Such systems consist of several components and assemblies, such as the borehole heat exchangers (BHE), other heat sources (e.g. solar thermal collectors, combined heat and power plants, peak load boilers, heat pumps), distribution networks and heating installations. The complexity of these systems necessitates numerical simulations in the design and planning phase. Generally, the subsurface components are simulated separately from the above-ground components of the district heating system. However, as fluid and heat are exchanged, the subsystems interact with each other and thereby mutually affect their performances. For a proper design of the overall system, it is therefore imperative to take into account the interdependencies of the subsystems. Based on TCP/IP communication, we have developed an interface for coupling a simulation package for heating installations with a finite element software package for modeling heat flow in the subsurface and the underground installations. This allows for a co-simulation of all system components, whereby the interaction of the different subsystems is considered. Furthermore, the concept allows for a mathematical optimization of the components and the operational parameters. Consequently, a finer adjustment of the system can be ensured and a more precise prognosis of the system's performance can be realized.
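A minimal sketch of the coupling loop only; the host, port, message fields and time step are hypothetical, and both simulators are stand-ins for the actual packages coupled by the authors.

```python
import json
import socket

# Sketch of the co-simulation coupling idea only; host, port and message
# fields are hypothetical, and the subsurface simulator behind the socket is a
# stand-in. Each time step: send the BHE inlet temperature and flow rate,
# receive the outlet temperature computed by the subsurface model.

HOST, PORT = "127.0.0.1", 5050

with socket.create_connection((HOST, PORT)) as sock:
    rfile = sock.makefile()                            # line-oriented replies
    t_return = 40.0                                    # deg C, initial guess
    for step in range(24):                             # one day, hourly steps
        msg = json.dumps({"T_in": t_return, "m_dot": 2.5}).encode()
        sock.sendall(msg + b"\n")
        t_return = json.loads(rfile.readline())["T_out"]
        print(step, round(t_return, 2))
```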
NASA Technical Reports Server (NTRS)
Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea
2015-01-01
This paper presents the modeling and control of an aerostructure developed from lattice-based cellular materials/components. The proposed aerostructure concept leverages a building-block strategy for lattice-based components, which provides great adaptability to varying flight scenarios, the needs of which are essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes a discrete-time lumped-mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced-order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The approach developed in this paper shows that, as far as the performance of the overall structural system is concerned, the reduced-order model can be as effective as the full-order model in designing an optimal stabilizing controller.
NASA Astrophysics Data System (ADS)
Wendel, C. H.; Kazempoor, P.; Braun, R. J.
2015-02-01
Electrical energy storage (EES) is an important component of the future electric grid. Given that no other widely available technology meets all the EES requirements, reversible (or regenerative) solid oxide cells (ReSOCs) working in both fuel cell (power producing) and electrolysis (fuel producing) modes are envisioned as a technology capable of providing highly efficient and cost-effective EES. However, there are still many challenges and questions from cell materials development to system level operation of ReSOCs that should be addressed before widespread application. This paper presents a novel system based on ReSOCs that employ a thermal management strategy of promoting exothermic methanation within the ReSOC cell-stack to provide thermal energy for the endothermic steam/CO2 electrolysis reactions during charging mode (fuel producing). This approach also serves to enhance the energy density of the stored gases. Modeling and parametric analysis of an energy storage concept is performed using a physically based ReSOC stack model coupled with thermodynamic system component models. Results indicate that roundtrip efficiencies greater than 70% can be achieved at intermediate stack temperature (680 °C) and elevated stack pressure (20 bar). The optimal operating condition arises from a tradeoff between stack efficiency and auxiliary power requirements from balance of plant hardware.
PREDICTION OF MULTICOMPONENT INORGANIC ATMOSPHERIC AEROSOL BEHAVIOR. (R824793)
Many existing models calculate the composition of the atmospheric aerosol system by solving a set of algebraic equations based on reversible reactions derived from thermodynamic equilibrium. Some models rely on an a priori knowledge of the presence of components in certain relati...
CELSS scenario analysis: Breakeven calculations
NASA Technical Reports Server (NTRS)
Mason, R. M.
1980-01-01
A model of the relative mass requirements of food production components in a controlled ecological life support system (CELSS) based on regenerative concepts is described. Included are a discussion of model scope, structure, and example calculations. Computer programs for cultivar and breakeven calculations are also included.
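A toy illustration of the breakeven idea with hypothetical masses: the regenerative system carries a large fixed mass plus a small resupply stream, and breakeven occurs when the cumulative mass of stored food it replaces crosses that total.

```python
# Illustration of the breakeven idea with hypothetical numbers: a regenerative
# food-production system carries a large fixed mass but replaces stored food
# that would otherwise be resupplied; breakeven is when cumulative masses cross.

FIXED_MASS = 4000.0        # kg, growth chambers and support equipment
REGEN_RESUPPLY = 0.5       # kg/day, consumables the regenerative system still needs
STORED_FOOD = 5.0          # kg/day, equivalent resupply for a storage-only system

def breakeven_day():
    day = 0
    regen, storage = FIXED_MASS, 0.0
    while regen > storage:
        day += 1
        regen += REGEN_RESUPPLY
        storage += STORED_FOOD
    return day

print(breakeven_day())     # ~889 days with these assumed values
```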
NASA Astrophysics Data System (ADS)
Hong, Y.; Kirschbaum, D. B.; Fukuoka, H.
2011-12-01
The key to advancing the predictability of rainfall-triggered landslides is to use physically based slope-stability models that simulate the dynamical response of the subsurface moisture to spatiotemporal variability of rainfall in complex terrains. An early warning system applying such physical models has been developed to predict rainfall-induced shallow landslides over Java Island, Indonesia, and Honduras. The prototyped early warning system integrates three major components: (1) a susceptibility mapping or hotspot identification component based on a land surface geospatial database (topographical information, maps of soil properties, a local landslide inventory, etc.); (2) a satellite-based precipitation monitoring system (http://trmm.gsfc.nasa.gov) and a precipitation forecasting model (i.e. Weather Research and Forecasting); and (3) a physically based, rainfall-induced landslide prediction model, SLIDE (SLope-Infiltration-Distributed Equilibrium). The system utilizes the modified physical model to calculate a Factor of Safety (FS) that accounts for the contribution of rainfall infiltration and partial saturation to the shear strength of the soil in topographically complex terrains. The system's prediction performance has been evaluated using a local landslide inventory. In Java Island, Indonesia, evaluation of SLIDE modeling results against local news reports shows that the system successfully predicted landslides in correspondence with the time of occurrence of the real landslide events. A further study of SLIDE was carried out in Honduras, where Hurricane Mitch triggered widespread landslides in 1998. Results show that within the approximately 1,200 square kilometers of study area, hit rates reached as high as 78% and 75%, while the error indices were 35% and 49%. Despite the positive model performance, the SLIDE model is limited in the early warning system by several assumptions, including the use of general parameter calibration rather than in situ tests and the neglect of geologic information. Advantages and limitations of this model are discussed with respect to future applications of landslide assessment and prediction over large scales. In conclusion, the integration of spatially distributed remote sensing precipitation products, in situ datasets, and physical models in this prototype system enables us to further develop a regional early warning tool for forecasting storm-induced landslides.
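As an illustration of the factor-of-safety idea, here is a common infinite-slope formulation (after Iverson, 2000), used as a stand-in for SLIDE's modified equations; all parameter values are illustrative.

```python
import math

# A common infinite-slope factor-of-safety formulation (after Iverson, 2000),
# used here as a stand-in for SLIDE's modified equations; all parameter values
# are illustrative, not calibrated to any site.

def factor_of_safety(slope_deg, z, psi, c=5.0e3, phi_deg=32.0,
                     gamma_s=19.0e3, gamma_w=9.81e3):
    """slope_deg: slope angle; z: failure depth (m); psi: pressure head (m);
    c: cohesion (Pa); phi_deg: friction angle; gamma_*: unit weights (N/m^3)."""
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    frictional = math.tan(phi) / math.tan(beta)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * z * math.sin(beta) * math.cos(beta))
    return frictional + cohesive

# Dry slope vs. the same slope after infiltration raises the pressure head.
print(round(factor_of_safety(35.0, 2.0, psi=0.0), 2))   # stable (FS > 1)
print(round(factor_of_safety(35.0, 2.0, psi=1.5), 2))   # destabilised (FS < 1)
```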
Satoshi Hirabayashi; Chuck Kroll; David Nowak
2011-01-01
The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...
Controlled cooling of an electronic system for reduced energy consumption
DOE Office of Scientific and Technical Information (OSTI.GOV)
David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.
Energy efficient control of a cooling system cooling an electronic system is provided. The control includes automatically determining at least one adjusted control setting for at least one adjustable cooling component of a cooling system cooling the electronic system. The automatically determining is based, at least in part, on power being consumed by the cooling system and temperature of a heat sink to which heat extracted by the cooling system is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on one or more experimentally obtained models relating the targeted temperature and power consumption of the one or more adjustable cooling components of the cooling system.
NASA Technical Reports Server (NTRS)
Schuman, H. K.
1992-01-01
An assessment of the potential and limitations of phased array antennas in space-based geophysical precision radiometry is described. Mathematical models exhibiting the dependence of system and scene temperatures and system sensitivity on phased array antenna parameters and components such as phase shifters and low noise amplifiers (LNA) are developed. Emphasis is given to minimum noise temperature designs wherein the LNA's are located at the array level, one per element or subarray. Two types of combiners are considered: array lenses (space feeds) and corporate networks. The result of a survey of suitable components and devices is described. The data obtained from that survey are used in conjunction with the mathematical models to yield an assessment of effective array antenna noise temperature for representative geostationary and low Earth orbit systems. Practical methods of calibrating a space-based, phased array radiometer are briefly addressed as well.
Mathematical Modeling of Intestinal Iron Absorption Using Genetic Programming
Colins, Andrea; Gerdtzen, Ziomara P.; Nuñez, Marco T.; Salgado, J. Cristian
2017-01-01
Iron is a trace metal, key for the development of living organisms. Its absorption process is complex and highly regulated at the transcriptional, translational and systemic levels. Recently, the internalization of the DMT1 transporter has been proposed as an additional regulatory mechanism at the intestinal level, associated to the mucosal block phenomenon. The short-term effect of iron exposure in apical uptake and initial absorption rates was studied in Caco-2 cells at different apical iron concentrations, using both an experimental approach and a mathematical modeling framework. This is the first report of short-term studies for this system. A non-linear behavior in the apical uptake dynamics was observed, which does not follow the classic saturation dynamics of traditional biochemical models. We propose a method for developing mathematical models for complex systems, based on a genetic programming algorithm. The algorithm is aimed at obtaining models with a high predictive capacity, and considers an additional parameter fitting stage and an additional Jackknife stage for estimating the generalization error. We developed a model for the iron uptake system with a higher predictive capacity than classic biochemical models. This was observed both with the apical uptake dataset used for generating the model and with an independent initial rates dataset used to test the predictive capacity of the model. The model obtained is a function of time and the initial apical iron concentration, with a linear component that captures the global tendency of the system, and a non-linear component that can be associated to the movement of DMT1 transporters. The model presented in this paper allows the detailed analysis, interpretation of experimental data, and identification of key relevant components for this complex biological process. This general method holds great potential for application to the elucidation of biological mechanisms and their key components in other complex systems. PMID:28072870
Global model of zenith tropospheric delay proposed based on EOF analysis
NASA Astrophysics Data System (ADS)
Sun, Langlang; Chen, Peng; Wei, Erhu; Li, Qinzheng
2017-07-01
Tropospheric delay is one of the main error budgets in Global Navigation Satellite System (GNSS) measurements. Many empirical correction models have been developed to compensate for this delay, and models that do not require meteorological parameters have received the most attention. This study established a global troposphere zenith total delay (ZTD) model, called Global Empirical Orthogonal Function Troposphere (GEOFT), based on the empirical orthogonal function (EOF, also known as geographically weighted PCA) analysis method and the Global Geodetic Observing System (GGOS) Atmosphere data from 2012 to 2015. The results showed that ZTD variation could be well represented by the EOF base functions Ek and associated coefficients Pk. Here, E1 mainly signifies the equatorial anomaly; E2 represents north-south asymmetry; and E3 and E4 reflect regional variation. Moreover, P1 mainly reflects annual and semiannual variation components; P2 and P3 mainly contain annual variation components; and P4 displays semiannual variation components. We validated the proposed GEOFT model using tropospheric delay data from the GGOS ZTD grid and the tropospheric product of the International GNSS Service (IGS) over the year 2016. The results showed that the GEOFT model has high accuracy, with bias and RMS of -0.3 and 3.9 cm, respectively, with respect to the GGOS ZTD data, and of -0.8 and 4.1 cm, respectively, with respect to the global IGS tropospheric product. The accuracy of GEOFT demonstrates that using the EOF analysis method to characterize ZTD variation is reasonable.
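A minimal sketch of the EOF decomposition step only (not the full GEOFT model), on synthetic data: gridded ZTD is arranged as a space-by-time matrix, the temporal mean is removed, and an SVD yields spatial base functions Ek and time coefficients Pk.

```python
import numpy as np

# Sketch of the EOF decomposition step only (not the full GEOFT model): rows of
# Z are grid points, columns are epochs; after removing the temporal mean, the
# SVD yields spatial base functions E_k and time-coefficient series P_k, and a
# low-rank reconstruction approximates the anomaly field. Data are synthetic.

rng = np.random.default_rng(0)
Z = rng.normal(2.4, 0.05, size=(500, 365))        # synthetic ZTD in metres
Z_mean = Z.mean(axis=1, keepdims=True)
A = Z - Z_mean                                    # anomalies

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 4
E = U[:, :k]                                      # spatial patterns E_1..E_4
P = (np.diag(s[:k]) @ Vt[:k]).T                   # time coefficients P_1..P_4

Z_hat = Z_mean + E @ P.T                          # rank-4 reconstruction
explained = (s[:k] ** 2).sum() / (s ** 2).sum()   # variance captured by 4 EOFs
print(round(float(explained), 3))
```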
pyBSM: A Python package for modeling imaging systems
NASA Astrophysics Data System (ADS)
LeMaster, Daniel A.; Eismann, Michael T.
2017-05-01
There are components that are common to all electro-optical and infrared imaging system performance models. The purpose of the Python Based Sensor Model (pyBSM) is to provide open source access to these functions for other researchers to build upon. Specifically, pyBSM implements much of the capability found in the ERIM Image Based Sensor Model (IBSM) V2.0 along with some improvements. The paper also includes two use-case examples. First, the performance of an airborne imaging system is modeled using the General Image Quality Equation (GIQE). The results are then decomposed into factors affecting noise and resolution. Second, pyBSM is paired with OpenCV to evaluate the performance of an algorithm used to detect objects in an image.
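For orientation, the GIQE used in the first example has the general structure shown below; the coefficients $c_0,\dots,c_4$ depend on the GIQE version and on the relative edge response regime, so only the form is given here:

$$\mathrm{NIIRS} = c_0 - c_1 \log_{10}(\mathrm{GSD}) + c_2 \log_{10}(\mathrm{RER}) - c_3 H - c_4 \frac{G}{\mathrm{SNR}},$$

where GSD is the ground sample distance, RER the relative edge response, H the edge overshoot from MTF compensation, G the noise gain, and SNR the signal-to-noise ratio.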
Haaland, Ben; Min, Wanli; Qian, Peter Z. G.; Amemiya, Yasuo
2011-01-01
Temperature control for a large data center is both important and expensive. On the one hand, many of the components produce a great deal of heat, and on the other hand, many of the components require temperatures below a fairly low threshold for reliable operation. A statistical framework is proposed within which the behavior of a large cooling system can be modeled and forecast under both steady state and perturbations. This framework is based upon an extension of multivariate Gaussian autoregressive hidden Markov models (HMMs). The estimated parameters of the fitted model provide useful summaries of the overall behavior of and relationships within the cooling system. Predictions under system perturbations are useful for assessing potential changes and improvements to be made to the system. Many data centers have far more cooling capacity than necessary under sensible circumstances, thus resulting in energy inefficiencies. Using this model, predictions for system behavior after a particular component of the cooling system is shut down or reduced in cooling power can be generated. Steady-state predictions are also useful for facility monitors. System traces outside control boundaries flag a change in behavior to examine. The proposed model is fit to data from a group of air conditioners within an enterprise data center from the IT industry. The fitted model is examined, and a particular unit is found to be underutilized. Predictions generated for the system under the removal of that unit appear very reasonable. Steady-state system behavior also is predicted well. PMID:22076026
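A minimal sketch of the core idea, a multivariate Gaussian HMM fit to sensor temperature readings, is shown below using hmmlearn; it omits the autoregressive extension described in the abstract, and the data are synthetic stand-ins:

    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    # Synthetic (n_samples, n_sensors) matrix of return-air temperatures:
    rng = np.random.default_rng(0)
    temps = np.vstack([rng.normal(22.0, 0.3, (500, 6)),   # steady-state regime
                       rng.normal(25.0, 0.8, (100, 6))])  # perturbed regime

    model = GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
    model.fit(temps)                   # estimate transition matrix, means, covariances
    regimes = model.predict(temps)     # most likely hidden regime at each time step
    # model.means_ and model.covars_ summarize behavior within each regime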
Calisto, Vânia; Jaria, Guilaine; Silva, Carla Patrícia; Ferreira, Catarina I A; Otero, Marta; Esteves, Valdemar I
2017-05-01
This work describes the adsorptive removal of three widely consumed psychiatric pharmaceuticals (carbamazepine, paroxetine and oxazepam) from ultrapure water. Two different adsorbents were used: a commercial activated carbon and a non-activated waste-based carbon (PS800-150-HCl), produced by pyrolysis of primary paper mill sludge. These adsorbents were used in single, binary and ternary batch experiments in order to determine the adsorption kinetics and equilibrium isotherms of the considered pharmaceuticals. For the three drugs and both carbons, the equilibrium was quickly attained (with maximum equilibrium times of 15 and 120 min for the waste-based and the commercial carbons, respectively) even in binary and ternary systems. Single component equilibrium data were adequately described by the Langmuir model, with the commercial carbon registering higher maximum adsorption capacities (between 272 ± 10 and 493 ± 12 μmol g⁻¹) than PS800-150-HCl (between 64 ± 2 and 74 ± 1 μmol g⁻¹). Multi-component equilibrium data were also best fitted by the single component Langmuir isotherm, followed by the Langmuir competitive model. Overall, competitive effects did not largely affect the performance of both adsorbents. Binary and ternary systems maintained fast kinetics, the individual maximum adsorption capacities were not lower than half of the single component systems and both carbons presented improved total adsorption capacities for multi-component solutions. Copyright © 2017 Elsevier Ltd. All rights reserved.
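For reference, the single-component Langmuir isotherm used to fit the equilibrium data has the standard form

$$q_e = \frac{q_{\max} K_L C_e}{1 + K_L C_e},$$

where $q_e$ is the amount adsorbed at equilibrium, $C_e$ the equilibrium solute concentration, $q_{\max}$ the maximum adsorption capacity, and $K_L$ the Langmuir constant; the competitive (multi-component) extension replaces the denominator with $1 + \sum_j K_{L,j} C_{e,j}$.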
NASA Astrophysics Data System (ADS)
Korotenko, K.
2003-04-01
An ultra-high-resolution version of DieCAST was adapted for the Adriatic Sea and coupled with an oil spill model. The hydrodynamic module was developed on the basis of the low-dissipative, fourth-order-accurate version of DieCAST with a resolution of ~2 km. The oil spill model was developed using a particle-tracking technique. The effect of evaporation is modeled with an original method based on the pseudo-component approach. A special dialog interface of this hybrid system allows direct coupling to meteorological data collection systems and/or meteorological models. Experiments with a hypothetical oil spill are analyzed for the Northern Adriatic Sea. Results (animations) of mesoscale circulation and oil slick modeling are presented at the website http://thayer.dartmouth.edu/~cushman/adriatic/movies/
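A minimal sketch of the particle-tracking idea, advection by the model currents plus a random-walk term for turbulent diffusion, is shown below; the velocity interpolator and the diffusivity value are placeholders, not DieCAST output:

    import numpy as np

    def step_particles(xy, velocity_at, dt=600.0, K=1.0, rng=np.random.default_rng()):
        """Advance oil-slick particles by one time step.

        xy          : (n, 2) particle positions in metres
        velocity_at : callable mapping positions to (n, 2) current velocities (placeholder)
        K           : horizontal eddy diffusivity in m^2/s (assumed value)
        """
        advection = velocity_at(xy) * dt
        diffusion = rng.normal(scale=np.sqrt(2.0 * K * dt), size=xy.shape)
        return xy + advection + diffusion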
The GPRIME approach to finite element modeling
NASA Technical Reports Server (NTRS)
Wallace, D. R.; Mckee, J. H.; Hurwitz, M. M.
1983-01-01
GPRIME, an interactive modeling system, runs on the CDC 6000 computers and the DEC VAX 11/780 minicomputer. This system includes three components: (1) GPRIME, a user-friendly geometric language and a processor to translate that language into geometric entities; (2) GGEN, an interactive data generator for 2-D models; and (3) SOLIDGEN, a 3-D solid modeling program. Each component has a command-driven user interface with an extensive command set. All of these programs make use of a comprehensive B-spline mathematics subroutine library, which can be used for a wide variety of interpolation problems and other geometric calculations. Many other user aids, such as automatic saving of the geometric and finite element data bases and hidden line removal, are available. This interactive finite element modeling capability can generate a complete finite element model and produce an output file of grid and element data.
NASA Astrophysics Data System (ADS)
Kong, Lingxin; Yang, Bin; Xu, Baoqiang; Li, Yifu
2014-09-01
Based on the molecular interaction volume model (MIVM), the activities of the components of Sn-Sb, Sb-Bi, Sn-Zn, Sn-Cu, and Sn-Ag alloys were predicted. The predicted values are in good agreement with the experimental data, which indicates that the MIVM offers good stability and reliability owing to its sound physical basis. A significant advantage of the MIVM lies in its ability to predict the thermodynamic properties of liquid alloys using only two parameters. The phase equilibria of Sn-Sb and Sn-Bi alloys were calculated based on the properties of the pure components and the activity coefficients, which indicates that Sn-Sb and Sn-Bi alloys can be separated thoroughly by vacuum distillation. This study extends previous investigations and provides an effective and convenient model on which to base refining simulations for Sn-based alloys.
NASA Technical Reports Server (NTRS)
Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)
2003-01-01
A method and system are provided for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The calculation step utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The calculation of the estimated values is performed optimally with respect to one or more of: noise, uncertainty of model parameters, and un-modeled dynamics of the dynamic system, which may be a flight vehicle, a financial market, or a modeled financial system.
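The residual test at the heart of such a scheme can be sketched as a generic linear observer with a fixed estimated gain; the matrices, inputs, and threshold below are illustrative, not taken from the patent:

    import numpy as np

    def detect_faults(A, C, L, Bu_seq, y_seq, threshold):
        """Flag samples whose output error residual exceeds a threshold.

        A, C   : discrete-time state-space model matrices
        L      : estimated filter gain
        Bu_seq : sequence of known input contributions B @ u_k
        y_seq  : (T, p) measured outputs
        """
        x_hat = np.zeros(A.shape[0])
        flags, residuals = [], []
        for Bu, y in zip(Bu_seq, y_seq):
            r = y - C @ x_hat                 # output error residual
            x_hat = A @ x_hat + Bu + L @ r    # filter update with the estimated gain
            residuals.append(r)
            flags.append(np.linalg.norm(r) > threshold)
        return np.array(flags), np.array(residuals)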
DOE Office of Scientific and Technical Information (OSTI.GOV)
David, Milnes P.; Iyengar, Madhusudan K.; Schmidt, Roger R.
Energy efficient control of a cooling system cooling an electronic system is provided. The control includes automatically determining at least one adjusted control setting for at least one adjustable cooling component of a cooling system cooling the electronic system. The automatically determining is based, at least in part, on power being consumed by the cooling system and temperature of a heat sink to which heat extracted by the cooling system is rejected. The automatically determining operates to reduce power consumption of the cooling system and/or the electronic system while ensuring that at least one targeted temperature associated with the cooling system or the electronic system is within a desired range. The automatically determining may be based, at least in part, on one or more experimentally obtained models relating the targeted temperature and power consumption of the one or more adjustable cooling components of the cooling system.
System Model for MEMS based Laser Ultrasonic Receiver
NASA Technical Reports Server (NTRS)
Wilson, William C.
2002-01-01
A need has been identified for more advanced nondestructive evaluation technologies for assuring the integrity of airframe structures, wiring, etc. Laser ultrasonic inspection instruments have been shown to detect flaws in structures. However, these instruments are generally too bulky to be used in the confined spaces that are typical of aerospace vehicles. Microsystems technology is one key to reducing the size of current instruments and enabling increased inspection coverage in areas that were previously inaccessible due to instrument size and weight. This paper investigates the system modeling of a Micro OptoElectroMechanical System (MOEMS) based laser ultrasonic receiver. The system model is constructed in software using MATLAB's dynamical simulator, Simulink. The optical components are modeled using geometrical matrix methods and include some image processing. The system model includes a test bench which simulates input stimuli and models the behavior of the material under test.
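The "geometrical matrix methods" for the optical train can be sketched with standard ABCD ray-transfer matrices; the element spacings and focal lengths below are arbitrary placeholders:

    import numpy as np

    def free_space(d):   # propagation over distance d (metres)
        return np.array([[1.0, d], [0.0, 1.0]])

    def thin_lens(f):    # thin lens of focal length f (metres)
        return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

    # A ray is (height, angle); the system matrix is the ordered product of
    # element matrices, e.g. lens -> 50 mm gap -> lens (placeholder layout):
    system = thin_lens(0.1) @ free_space(0.05) @ thin_lens(0.025)
    ray_in = np.array([1.0e-3, 0.0])       # 1 mm off-axis, initially parallel ray
    ray_out = system @ ray_in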
Schad, Daniel J.; Jünger, Elisabeth; Sebold, Miriam; Garbusow, Maria; Bernhardt, Nadine; Javadi, Amir-Homayoun; Zimmermann, Ulrich S.; Smolka, Michael N.; Heinz, Andreas; Rapp, Michael A.; Huys, Quentin J. M.
2014-01-01
Theories of decision-making and its neural substrates have long assumed the existence of two distinct and competing valuation systems, variously described as goal-directed vs. habitual, or, more recently and based on statistical arguments, as model-free vs. model-based reinforcement-learning. Though both have been shown to control choices, the cognitive abilities associated with these systems are under ongoing investigation. Here we examine the link to cognitive abilities, and find that individual differences in processing speed covary with a shift from model-free to model-based choice control in the presence of above-average working memory function. This suggests shared cognitive and neural processes; provides a bridge between literatures on intelligence and valuation; and may guide the development of process models of different valuation components. Furthermore, it provides a rationale for individual differences in the tendency to deploy valuation systems, which may be important for understanding the manifold neuropsychiatric diseases associated with malfunctions of valuation. PMID:25566131
Patient-Centered Appointment Scheduling Using Agent-Based Simulation
Turkcan, Ayten; Toscos, Tammy; Doebbeling, Brad N.
2014-01-01
Enhanced access and continuity are key components of patient-centered care. Existing studies show that several interventions, such as providing same-day appointments, walk-in services, after-hours care, and group appointments, have been used to redesign healthcare systems for improved access to primary care. However, an intervention focusing on a single component of care delivery (e.g. improving access to acute care) might have a negative impact on other components of the system (e.g. reduced continuity of care for chronic patients). Therefore, primary care clinics should consider implementing multiple interventions tailored to their patient population needs. We used rapid ethnography and observation to better understand clinic workflow and key constraints. We then developed an agent-based simulation model that includes all access modalities (appointments, walk-ins, and after-hours access), incorporates resources and key constraints, and determines the best appointment scheduling method for improving access and continuity of care. This paper demonstrates the value of simulation models for testing a variety of alternative strategies to improve access to care through scheduling. PMID:25954423
Design challenges in nanoparticle-based platforms: Implications for targeted drug delivery systems
NASA Astrophysics Data System (ADS)
Mullen, Douglas Gurnett
Characterization and control of heterogeneous distributions of nanoparticle-ligand components are major design challenges for nanoparticle-based platforms. This dissertation begins with an examination of a poly(amidoamine) (PAMAM) dendrimer-based targeted delivery platform. A folic acid-targeted modular platform was developed to target human epithelial cancer cells. Although active targeting was observed in vitro, it was not found in vivo using a mouse tumor model. A major flaw of this platform design was that it did not provide for characterization or control of the component distribution. Motivated by the problems experienced with the modular design, the actual composition of nanoparticle-ligand distributions was examined using a model dendrimer-ligand system. High-pressure liquid chromatography (HPLC) resolved the distribution of components in samples with mean ligand/dendrimer ratios ranging from 0.4 to 13. A peak-fitting analysis enabled quantification of the component distribution. The quantified distributions were found to be significantly more heterogeneous than commonly expected, and standard analytical parameters, namely the mean ligand/nanoparticle ratio, failed to adequately represent the component heterogeneity. The distribution of components was also found to be sensitive to particle modifications that preceded the ligand conjugation. With the knowledge gained from this detailed distribution analysis, a new platform design was developed to provide a system with dramatically improved control over the number of components and with improved batch reproducibility. Using semi-preparative HPLC, individual dendrimer-ligand components were isolated. The isolated dendrimers with precise numbers of ligands were characterized by NMR and analytical HPLC. In total, nine different dendrimer-ligand components were obtained with degrees of purity ≥80%. This system has the potential to serve as a platform to which a precise number of functional molecules can be attached and to dramatically improve platform efficacy. An additional investigation of reproducibility challenges for current dendrimer-based platform designs is also described. The quality of mass transport during the partial acetylation reaction of the dendrimer was found to have a major impact on subsequent dendrimer-ligand distributions, an effect that cannot be detected by standard analytical techniques. Consequently, this reaction should be eliminated from the platform design. Finally, optimized protocols for purification and characterization of PAMAM dendrimers are detailed.
Integrated secure solution for electronic healthcare records sharing
NASA Astrophysics Data System (ADS)
Yao, Yehong; Zhang, Chenghao; Sun, Jianyong; Jin, Jin; Zhang, Jianguo
2007-03-01
The EHR is a secure, real-time, point-of-care, patient-centric information resource for healthcare providers. Many countries and regional districts have set long-term goals to build EHRs, and most EHRs are built by integrating different information systems with different information models and platforms. A number of hospitals in Shanghai are also piloting the development of an EHR solution based on IHE XDS/XDS-I profiles with a service-oriented architecture (SOA). The first phase of the project targets the diagnostic imaging domain and allows seamless sharing of images and reports across multiple hospitals. To develop EHRs for regionally coordinated healthcare, several factors should be considered in designing the architecture, one of which is security. In this paper, we present approaches and policies to improve and strengthen security among the different hospitals' nodes, compliant with the security requirements defined by the IHE IT Infrastructure (ITI) Technical Framework. Our security solution includes four components: a Time Sync System (TSS), a Digital Signature Manage System (DSMS), a Data Exchange Control Component (DECC) and a Single Sign-On (SSO) System. We give a design method and implementation strategy for these security components, and then evaluate the performance and overheads of the security services or features by integrating the security components into an image-based EHR system.
Biomathematical modeling of pulsatile hormone secretion: a historical perspective.
Evans, William S; Farhy, Leon S; Johnson, Michael L
2009-01-01
Shortly after the recognition of the profound physiological significance of the pulsatile nature of hormone secretion, computer-based modeling techniques were introduced for the identification and characterization of such pulses. Whereas these earlier approaches defined perturbations in hormone concentration-time series, deconvolution procedures were subsequently employed to separate such pulses into their secretion event and clearance components. Stochastic differential equation modeling was also used to define basal and pulsatile hormone secretion. To assess the regulation of individual components within a hormone network, a method that quantitated approximate entropy within hormone concentration-time series was described. To define relationships within coupled hormone systems, methods including cross-correlation and cross-approximate entropy were utilized. To address some of the inherent limitations of these methods, modeling techniques with which to appraise the strength of feedback signaling between and among hormone-secreting components of a network have been developed. Techniques such as dynamic modeling have been utilized to reconstruct dose-response interactions between hormones within coupled systems. A logical extension of these advances will require the development of mathematical methods with which to approximate endocrine networks exhibiting multiple feedback interactions and subsequently reconstruct their parameters based on experimental data for the purpose of testing regulatory hypotheses and estimating alterations in hormone release control mechanisms.
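Approximate entropy, one of the quantities mentioned above, can be computed directly from a concentration-time series; a compact sketch following the standard Pincus definition (not code from the cited work) is:

    import numpy as np

    def approximate_entropy(x, m=2, r=None):
        """ApEn(m, r) of a 1-D series x; r defaults to 0.2 * std(x)."""
        x = np.asarray(x, dtype=float)
        if r is None:
            r = 0.2 * x.std()

        def phi(m):
            n = len(x) - m + 1
            templates = np.array([x[i:i + m] for i in range(n)])
            # Chebyshev distance between every pair of length-m templates
            dist = np.abs(templates[:, None, :] - templates[None, :, :]).max(axis=2)
            counts = (dist <= r).sum(axis=1) / n   # self-matches included, as in Pincus
            return np.log(counts).mean()

        return phi(m) - phi(m + 1)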
A global model for steady state and transient S.I. engine heat transfer studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bohac, S.V.; Assanis, D.N.; Baker, D.M.
1996-09-01
A global, systems-level model which characterizes the thermal behavior of internal combustion engines is described in this paper. Based on resistor-capacitor thermal networks, either steady-state or transient thermal simulations can be performed. A two-zone, quasi-dimensional spark-ignition engine simulation is used to determine in-cylinder gas temperature and convection coefficients. Engine heat fluxes and component temperatures can subsequently be predicted from specification of general engine dimensions, materials, and operating conditions. Emphasis has been placed on minimizing the number of model inputs and keeping them as simple as possible to make the model practical and useful as an early design tool. The success of the global model depends on properly scaling the general engine inputs to accurately model engine heat flow paths across families of engine designs. The development and validation of suitable, scalable submodels is described in detail in this paper. Simulation sub-models and overall system predictions are validated with data from two spark ignition engines. Several sensitivity studies are performed to determine the most significant heat transfer paths within the engine and exhaust system. Overall, it has been shown that the model is a powerful tool in predicting steady-state heat rejection and component temperatures, as well as transient component temperatures.
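The resistor-capacitor network idea can be sketched for a single lumped node, for example a cylinder-wall node exchanging heat with the gas and the coolant; the resistances, capacitance, and time step are placeholders, not values from the paper:

    import numpy as np

    def wall_temperature(T_gas, T_cool, R_gas=0.02, R_cool=0.05, C_wall=5000.0,
                         dt=0.001, T0=400.0):
        """Explicit Euler integration of C dT/dt = (T_gas-T)/R_gas - (T-T_cool)/R_cool."""
        T = np.empty(len(T_gas))
        T[0] = T0
        for k in range(1, len(T_gas)):
            q_in = (T_gas[k - 1] - T[k - 1]) / R_gas      # heat in from combustion gas
            q_out = (T[k - 1] - T_cool[k - 1]) / R_cool   # heat out to coolant
            T[k] = T[k - 1] + dt * (q_in - q_out) / C_wall
        return T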
Links, Jonathan M; Schwartz, Brian S; Lin, Sen; Kanarek, Norma; Mitrani-Reiser, Judith; Sell, Tara Kirk; Watson, Crystal R; Ward, Doug; Slemp, Cathy; Burhans, Robert; Gill, Kimberly; Igusa, Tak; Zhao, Xilei; Aguirre, Benigno; Trainor, Joseph; Nigg, Joanne; Inglesby, Thomas; Carbone, Eric; Kendra, James M
2018-02-01
Policy-makers and practitioners have a need to assess community resilience in disasters. Prior efforts conflated resilience with community functioning, combined resistance and recovery (the components of resilience), and relied on a static model for what is inherently a dynamic process. We sought to develop linked conceptual and computational models of community functioning and resilience after a disaster. We developed a system dynamics computational model that predicts community functioning after a disaster. The computational model outputted the time course of community functioning before, during, and after a disaster, which was used to calculate resistance, recovery, and resilience for all US counties. The conceptual model explicitly separated resilience from community functioning and identified all key components for each, which were translated into a system dynamics computational model with connections and feedbacks. The components were represented by publicly available measures at the county level. Baseline community functioning, resistance, recovery, and resilience evidenced a range of values and geographic clustering, consistent with hypotheses based on the disaster literature. The work is transparent, motivates ongoing refinements, and identifies areas for improved measurements. After validation, such a model can be used to identify effective investments to enhance community resilience. (Disaster Med Public Health Preparedness. 2018;12:127-137).
NASA Astrophysics Data System (ADS)
Gromek, Katherine Emily
A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as the agents' ability to self-activate, deactivate or completely redefine their role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
Enriching Triangle Mesh Animations with Physically Based Simulation.
Li, Yijing; Xu, Hongyi; Barbic, Jernej
2017-10-01
We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.
Model documentation: Electricity Market Module, Electricity Fuel Dispatch Submodule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This report documents the objectives, analytical approach and development of the National Energy Modeling System Electricity Fuel Dispatch Submodule (EFD), a submodule of the Electricity Market Module (EMM). The report catalogues and describes the model assumptions, computational methodology, parameter estimation techniques, model source code, and forecast results generated through the synthesis and scenario development based on these components.
Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Ashe, Thomas L.; Otting, William D.
1993-01-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance in significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
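As a point of reference for such parametric CBC models, the ideal closed Brayton cycle thermal efficiency depends only on the compressor pressure ratio $r_p$ and the specific-heat ratio $\gamma$ of the working gas,

$$\eta_{\mathrm{ideal}} = 1 - r_p^{-(\gamma - 1)/\gamma},$$

while a real cycle's performance also depends on turbine inlet temperature, cycle temperature ratio, component efficiencies, and recuperator effectiveness, which is why the model sizes those components explicitly.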
A Modelica-based Model Library for Building Energy and Control Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael
2009-04-07
This paper describes an open-source library with component models for building energy and control systems that is based on Modelica, an equation-based object-oriented language that is well positioned to become the standard for modeling of dynamic systems in various industrial sectors. The library is currently developed to support computational science and engineering for innovative building energy and control systems. Early applications will include controls design and analysis, rapid prototyping to support innovation of new building systems and the use of models during operation for controls, fault detection and diagnostics. This paper discusses the motivation for selecting an equation-based object-oriented language. It presents the architecture of the library and explains how base models can be used to rapidly implement new models. To demonstrate the capability of analyzing novel energy and control systems, the paper closes with an example where we compare the dynamic performance of a conventional hydronic heating system with thermostatic radiator valves to an innovative heating system. In the new system, instead of a centralized circulation pump, each of the 18 radiators has a pump whose speed is controlled using a room temperature feedback loop, and the temperature of the boiler is controlled based on the speed of the radiator pump. All flows are computed by solving for the pressure distribution in the piping network, and the controls include continuous and discrete time controls.
Analysis tool and methodology design for electronic vibration stress understanding and prediction
NASA Astrophysics Data System (ADS)
Hsieh, Sheng-Jen; Crane, Robert L.; Sathish, Shamachary
2005-03-01
The objectives of this research were to (1) understand the impact of vibration on electronic components under ultrasound excitation; (2) model the thermal profile presented under vibration stress; and (3) predict stress level given a thermal profile of an electronic component. Research tasks included: (1) retrofit of current ultrasonic/infrared nondestructive testing system with sensory devices for temperature readings; (2) design of software tool to process images acquired from the ultrasonic/infrared system; (3) developing hypotheses and conducting experiments; and (4) modeling and evaluation of electronic vibration stress levels using a neural network model. Results suggest that (1) an ultrasonic/infrared system can be used to mimic short burst high vibration loads for electronics components; (2) temperature readings for electronic components under vibration stress are consistent and repeatable; (3) as stress load and excitation time increase, temperature differences also increase; (4) components that are subjected to a relatively high pre-stress load, followed by a normal operating load, have a higher heating rate and lower cooling rate. These findings are based on grayscale changes in images captured during experimentation. Discriminating variables and a neural network model were designed to predict stress levels given temperature and/or grayscale readings. Preliminary results suggest a 15.3% error when using grayscale change rate and 12.8% error when using average heating rate within the neural network model. Data were obtained from a high stress point (the corner) of the chip.
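A minimal sketch of the neural-network regression step, predicting stress level from average heating rate and grayscale change rate with scikit-learn, is shown below; the arrays are synthetic stand-ins for the experimental measurements:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Columns: [average heating rate, grayscale change rate]; target: stress level
    X = np.array([[0.8, 0.10], [1.1, 0.14], [1.6, 0.21], [2.0, 0.27], [2.6, 0.35]])
    y = np.array([10.0, 15.0, 25.0, 32.0, 45.0])

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    predicted_stress = model.predict([[1.8, 0.24]])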
Creating photorealistic virtual model with polarization-based vision system
NASA Astrophysics Data System (ADS)
Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi
2005-08-01
Recently, 3D models have been used in many fields such as education, medical services, entertainment, art, and digital archiving because of advances in computational power, and the demand for photorealistic virtual models with higher realism is increasing. In the computer vision field, a number of techniques have been developed for creating a virtual model by observing a real object. In this paper, we propose a method for creating a photorealistic virtual model using a laser range sensor and a polarization-based image capture system. We capture range and color images of the object, which is rotated on a rotary table. Using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can build a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then the reflectance parameters of each reflection component are estimated separately. To separate the reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
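The separation step can be sketched under the common assumption that the specular component is fully polarized while the diffuse component is unpolarized, so that intensities observed through a rotating linear polarizer oscillate between I_min and I_max; this is a generic formulation, not the authors' exact procedure:

    import numpy as np

    def separate_reflection(images):
        """images: (n_polarizer_angles, H, W) stack captured at different polarizer angles."""
        i_min = images.min(axis=0)
        i_max = images.max(axis=0)
        diffuse = 2.0 * i_min          # unpolarized light is halved by the polarizer
        specular = i_max - i_min       # polarized (specular) contribution
        return diffuse, specular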
Improving Perceptual Skills with 3-Dimensional Animations.
ERIC Educational Resources Information Center
Johns, Janet Faye; Brander, Julianne Marie
1998-01-01
Describes three-dimensional computer aided design (CAD) models for every component in a representative mechanical system; the CAD models made it easy to generate 3-D animations that are ideal for teaching perceptual skills in multimedia computer-based technical training. Fifteen illustrations are provided. (AEF)
Kirkilionis, Markus; Janus, Ulrich; Sbano, Luca
2011-09-01
We model in detail a simple synthetic genetic clock that was engineered in Atkinson et al. (Cell 113(5):597-607, 2003) using Escherichia coli as a host organism. The theoretical description of this engineered clock uses the modelling framework presented in Kirkilionis et al. (Theory Biosci. doi: 10.1007/s12064-011-0125-0 , 2011, this volume). The main goal of this accompanying article is to illustrate that parts of the modelling process can be algorithmically automatised once the model framework we call 'average dynamics' is accepted (Sbano and Kirkilionis, WMI Preprint 7/2007, 2008c; Kirkilionis and Sbano, Adv Complex Syst 13(3):293-326, 2010). The advantage of the 'average dynamics' framework is that system components (especially in genetics) can be represented more easily in the model. In particular, once discovered and characterised, specific molecular players, together with their functions, can be incorporated. This means that the 'gene' concept becomes clearer, for example in the way the genetic component would react under different regulatory conditions. Using the framework, it has become a realistic aim to link mathematical modelling to novel tools of bioinformatics in the future, at least if the number of regulatory units can be estimated. This should hold in any case in synthetic environments, because the different synthetic genetic components are simply known (Elowitz and Leibler, Nature 403(6767):335-338, 2000; Gardner et al., Nature 403(6767):339-342, 2000; Hasty et al., Nature 420(6912):224-230, 2002). The paper therefore illustrates, as a necessary first step, how detailed modelling of molecular interactions with known molecular components leads to a dynamic mathematical model that can be compared with experimental results at various levels or scales. The different genetic modules or components are represented in different detail by model variants. We explain how the framework can be used to investigate other, more complex genetic systems in terms of regulation and feedback.
Integrative approaches for modeling regulation and function of the respiratory system.
Ben-Tal, Alona; Tawhai, Merryn H
2013-01-01
Mathematical models have been central to understanding the interaction between neural control and breathing. Models of the entire respiratory system, which comprises the lungs and the neural circuitry that controls their ventilation, have been derived using simplifying assumptions to compartmentalize each component of the system and to define the interactions between components. These full system models often rely, through necessity, on empirically derived relationships or parameters, in addition to physiological values. In parallel with the development of whole respiratory system models are mathematical models that focus on furthering a detailed understanding of the neural control network, or of the several functions that contribute to gas exchange within the lung. These models are biophysically based, and rely on physiological parameters. They include single-unit models for a breathing lung or neural circuit, through to spatially distributed models of ventilation and perfusion, or multicircuit models for neural control. The challenge is to bring together these more recent advances in models of neural control with models of lung function, into a full simulation for the respiratory system that builds upon the more detailed models but remains computationally tractable. This requires first understanding the mathematical models that have been developed for the respiratory system at different levels, and which could be used to study how physiological levels of O2 and CO2 in the blood are maintained. Copyright © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zhao, Y.; Su, X. H.; Wang, M. H.; Li, Z. Y.; Li, E. K.; Xu, X.
2017-08-01
Water resources vulnerability control and management is essential because it relates to the benign evolution of the socio-economic, environmental, and water resources systems. Research on water resources system vulnerability helps realize the sustainable utilization of water resources. In this study, the DPSIR (driving forces-pressure-state-impact-response) framework was adopted to construct the evaluation index system of water resources system vulnerability. A co-evolutionary genetic algorithm and projection pursuit were then used to establish an evaluation model of water resources system vulnerability. Tengzhou City in Shandong Province was selected as the study area. The system vulnerability was analyzed in terms of driving forces, pressure, state, impact, and response on the basis of the projection values calculated by the model. The results show that all five components belong to vulnerability Grade II; the vulnerability degrees of impact and state were higher than those of the other components due to the severe supply-demand imbalance and the unsatisfactory state of water resources utilization. This indicates that rapid socio-economic development and the overuse of pesticides have already disturbed, to some extent, the benign development of the water environment. In contrast, the indexes in the response component showed a lower vulnerability degree than the other components. The results of the evaluation model are consistent with the status of water resources in the study area, which indicates that the model is feasible and effective.
Evaluating Emulation-based Models of Distributed Computing Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Stephen T.; Gabert, Kasimir G.; Tarman, Thomas D.
Emulation-based models of distributed computing systems are collections of virtual machines, virtual networks, and other emulation components configured to stand in for operational systems when performing experimental science, training, analysis of design alternatives, test and evaluation, or idea generation. As with any tool, we should carefully evaluate whether our uses of emulation-based models are appropriate and justified. Otherwise, we run the risk of using a model incorrectly and creating meaningless results. The various uses of emulation-based models each have their own goals and deserve thoughtful evaluation. In this paper, we enumerate some of these uses and describe approaches that one can take to build an evidence-based case that a use of an emulation-based model is credible. Predictive uses of emulation-based models, where we expect a model to tell us something true about the real world, set the bar especially high, and the principal evaluation method, called validation, is commensurately rigorous. We spend the majority of our time describing and demonstrating the validation of a simple predictive model using a well-established methodology inherited from decades of development in the computational science and engineering community.
Modeling the Stress Strain Behavior of Woven Ceramic Matrix Composites
NASA Technical Reports Server (NTRS)
Morscher, Gregory N.
2006-01-01
Woven SiC fiber reinforced SiC matrix composites represent one of the most mature composite systems to date. Future components fabricated out of these woven ceramic matrix composites are expected to vary in shape, curvature, architecture, and thickness. The design of future components using woven ceramic matrix composites necessitates a modeling approach that can account for these variations which are physically controlled by local constituent contents and architecture. Research over the years supported primarily by NASA Glenn Research Center has led to the development of simple mechanistic-based models that can describe the entire stress-strain curve for composite systems fabricated with chemical vapor infiltrated matrices and melt-infiltrated matrices for a wide range of constituent content and architecture. Several examples will be presented that demonstrate the approach to modeling which incorporates a thorough understanding of the stress-dependent matrix cracking properties of the composite system.
Validation of PV-RPM Code in the System Advisor Model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Freeman, Janine
2017-04-01
This paper describes efforts made by Sandia National Laboratories (SNL) and the National Renewable Energy Laboratory (NREL) to validate the SNL-developed PV Reliability Performance Model (PV-RPM) algorithm as implemented in the NREL System Advisor Model (SAM). The PV-RPM model is a library of functions that estimates component failure and repair in a photovoltaic system over a desired simulation period. The failure and repair distributions in this paper are probabilistic representations of component failure and repair based on data collected by SNL for a PV power plant operating in Arizona. The validation effort focuses on whether the failure and repair distributions used in the SAM implementation result in estimated failures that match the expected failures developed in the proof-of-concept implementation. Results indicate that the SAM implementation of PV-RPM provides the same results as the proof-of-concept implementation, indicating the algorithms were reproduced successfully.
A maintenance model for k-out-of-n subsystems aboard a fleet of advanced commercial aircraft
NASA Technical Reports Server (NTRS)
Miller, D. R.
1978-01-01
Proposed highly reliable fault-tolerant reconfigurable digital control systems for a future generation of commercial aircraft consist of several k-out-of-n subsystems. Each of these flight-critical subsystems will consist of n identical components, k of which must be functioning properly in order for the aircraft to be dispatched. Failed components are recoverable; they are repaired in a shop. Spares are inventoried at a main base where they may be substituted for failed components on planes during layovers. Penalties are assessed when failure of a k-out-of-n subsystem causes a dispatch cancellation or delay. A maintenance model for a fleet of aircraft with such control systems is presented. The goals are to demonstrate economic feasibility and to optimize.
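For context, if the n components of a subsystem fail independently and each is operational with probability p at dispatch time, the k-out-of-n subsystem is available with probability

$$R_{k\text{-of-}n} = \sum_{i=k}^{n} \binom{n}{i} p^{i} (1 - p)^{n - i},$$

and the maintenance model trades the cost of spares and shop repair against the delay and cancellation penalties incurred when this availability requirement is not met.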
A computational framework for modeling targets as complex adaptive systems
NASA Astrophysics Data System (ADS)
Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh
2017-05-01
Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.
SYSTEMS BIOLOGY MODEL DEVELOPMENT AND APPLICATION
Systems biology models holistically describe, in a quantitative fashion, the relationships between different levels of a biological system. Relationships between individual components of a system are delineated. Systems biology models describe how the components of the system interact.
Creation of system of computer-aided design for technological objects
NASA Astrophysics Data System (ADS)
Zubkova, T. M.; Tokareva, M. A.; Sultanov, N. Z.
2018-05-01
Due to competition in the process equipment market, its production should be flexible, allowing retuning to various product configurations, raw materials, and productivity levels depending on current market needs. This process is not possible without CAD (computer-aided design). The formation of a CAD system begins with planning. Synthesizing, analyzing, evaluating, and converting operations, as well as visualization and decision-making operations, can be automated. Based on a formal description of the design procedures, the design route is constructed in the form of an oriented graph. The decomposition of the design process, represented by the formalized description of the design procedures, makes it possible to make an informed choice of the CAD component for the solution of each task. The object-oriented approach allows the CAD system to be considered as an independent system whose properties are inherited from its components. The first step determines the range of tasks to be performed by the system and a set of components for their implementation. The second is the configuration of the selected components. The interaction between the selected components is carried out using the CALS standards. The chosen CAD/CAE-oriented approach allows a single model to be created, which is stored in the subject-area database. Each of the integration stages is implemented as a separate functional block. The transformation of the CAD model into an internal-representation model is performed by a block that searches for the geometric parameters of the technological machine, in which an XML model of the design is obtained on the basis of the feature method from the theory of image recognition. The configuration of the integrated components is divided into three consecutive steps: configuring tasks, components, and interfaces. The configuration of the components is realized using the theory of "soft computing" with the Mamdani fuzzy inference algorithm.
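A minimal sketch of the Mamdani inference step mentioned above, with triangular membership functions, min implication, max aggregation, and centroid defuzzification, is given below; the linguistic variables and the two rules are purely illustrative:

    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet at a, c and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def mamdani_suitability(load, speed):
        """Toy rules mapping normalized (load, speed) to a component-suitability score."""
        z = np.linspace(0.0, 1.0, 201)                          # output universe
        # Rule 1: IF load is high AND speed is low THEN suitability is low
        w1 = min(tri(load, 0.5, 1.0, 1.5), tri(speed, -0.5, 0.0, 0.5))
        # Rule 2: IF load is low AND speed is high THEN suitability is high
        w2 = min(tri(load, -0.5, 0.0, 0.5), tri(speed, 0.5, 1.0, 1.5))
        aggregated = np.maximum(np.minimum(w1, tri(z, 0.0, 0.25, 0.5)),   # clipped consequents
                                np.minimum(w2, tri(z, 0.5, 0.75, 1.0)))
        if aggregated.sum() == 0.0:
            return 0.5                                          # no rule fired; neutral output
        return float((z * aggregated).sum() / aggregated.sum()) # centroid defuzzification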
A systems engineering analysis of three-point and four-point wind turbine drivetrain configurations
Guo, Yi; Parsons, Tyler; Dykes, Katherine; ...
2016-08-24
This study compares the impact of drivetrain configuration on the mass and capital cost of a series of wind turbines ranging from 1.5 MW to 5.0 MW power ratings for both land-based and offshore applications. The analysis is performed with a new physics-based drivetrain analysis and sizing tool, Drive Systems Engineering (DriveSE), which is part of the Wind-Plant Integrated System Design & Engineering Model. DriveSE uses physics-based relationships to size all major drivetrain components according to given rotor loads simulated based on International Electrotechnical Commission design load cases. The model's sensitivity to input loads that contain a high degree of variability was analyzed. Aeroelastic simulations are used to calculate the rotor forces and moments imposed on the drivetrain for each turbine design. DriveSE is then used to size all of the major drivetrain components for each turbine for both three-point and four-point configurations. The simulation results quantify the trade-offs in mass and component costs for the different configurations. On average, a 16.7% decrease in total nacelle mass can be achieved when using a three-point drivetrain configuration, resulting in a 3.5% reduction in turbine capital cost. This analysis is driven by extreme loads and does not consider fatigue. Thus, the effects of configuration choices on reliability and serviceability are not captured. Furthermore, a first order estimate of the sizing, dimensioning and costing of major drivetrain components are made which can be used in larger system studies which consider trade-offs between subsystems such as the rotor, drivetrain and tower.
ESRDC - Designing and Powering the Future Fleet
2018-02-22
... managing short circuit faults in MVDC systems, and 5) modeling of SiC-based electronic power converters to support accurate scalable models in S3D. ... Research in advanced thermal management followed three tracks. We developed models of thermal system components that are suitable for use in early stage ...
Beyond a series of security nets: Applying STAMP & STPA to port security
Williams, Adam D.
2015-11-17
Port security is an increasing concern considering the significant role of ports in global commerce and today’s increasingly complex threat environment. Current approaches to port security mirror traditional models of accident causality -- ‘a series of security nets’ based on component reliability and probabilistic assumptions. Traditional port security frameworks result in isolated and inconsistent improvement strategies. Recent work in engineered safety combines the ideas of hierarchy, emergence, control and communication into a new paradigm for understanding port security as an emergent complex system property. The ‘System-Theoretic Accident Model and Process (STAMP)’ is a new model of causality based on systems and control theory. The associated analysis process -- System Theoretic Process Analysis (STPA) -- identifies specific technical or procedural security requirements designed to work in coordination with (and be traceable to) overall port objectives. This process yields port security design specifications that can mitigate (if not eliminate) port security vulnerabilities related to an emphasis on component reliability, lack of coordination between port security stakeholders or economic pressures endemic in the maritime industry. As a result, this article aims to demonstrate how STAMP’s broader view of causality and complexity can better address the dynamic and interactive behaviors of social, organizational and technical components of port security.
NASA Technical Reports Server (NTRS)
Penny, Stephen G.; Akella, Santha; Buehner, Mark; Chevallier, Matthieu; Counillon, Francois; Draper, Clara; Frolov, Sergey; Fujii, Yosuke; Karspeck, Alicia; Kumar, Arun
2017-01-01
The purpose of this report is to identify fundamental issues for coupled data assimilation (CDA), such as gaps in science and limitations in forecasting systems, in order to provide guidance to the World Meteorological Organization (WMO) on how to facilitate more rapid progress internationally. Coupled Earth system modeling provides the opportunity to extend skillful atmospheric forecasts beyond the traditional two-week barrier by extracting skill from low-frequency state components such as the land, ocean, and sea ice. More generally, coupled models are needed to support seamless prediction systems that span timescales from weather, subseasonal to seasonal (S2S), multiyear, and decadal. Therefore, initialization methods are needed for coupled Earth system models, either applied to each individual component (called Weakly Coupled Data Assimilation - WCDA) or applied to the coupled Earth system model as a whole (called Strongly Coupled Data Assimilation - SCDA). Using CDA, in which model forecasts and potentially the state estimation are performed jointly, each model domain benefits from observations in other domains either directly, using error covariance information known at the time of the analysis (SCDA), or indirectly, through flux interactions at the model boundaries (WCDA). Because the non-atmospheric domains are generally under-observed compared to the atmosphere, CDA provides a significant advantage over single-domain analyses. Next, we provide a synopsis of goals, challenges, and recommendations to advance CDA. Goals: (a) extend predictive skill beyond the current capability of NWP (e.g., as demonstrated by improving forecast skill scores), (b) produce physically consistent initial conditions for coupled numerical prediction systems and reanalyses (including consistent fluxes at the domain interfaces), (c) make best use of existing observations by allowing observations from each domain to influence and improve the full Earth system analysis, (d) develop a robust observation-based identification and understanding of mechanisms that determine the variability of weather and climate, (e) identify critical weaknesses in coupled models and the Earth observing system, (f) generate full-field estimates of unobserved or sparsely observed variables, (g) improve the estimation of the external forcings causing changes to climate, (h) transition successes from idealized CDA experiments to real-world applications. Challenges: (a) modeling at the interfaces between interacting components of coupled Earth system models may be inadequate for estimating uncertainty or error covariances between domains, (b) current data assimilation methods may be insufficient to simultaneously analyze domains containing multiple spatiotemporal scales of interest, (c) there is no standardization of observation data or their delivery systems across domains, (d) the size and complexity of many large-scale coupled Earth system models make it difficult to accurately represent uncertainty due to model parameters and coupling parameters, (e) model errors lead to local biases that can transfer between the different Earth system components and lead to coupled model biases and long-term model drift, (f) information propagation across model components with different spatiotemporal scales is extremely complicated and must be improved in current coupled modeling frameworks, and (g) there is insufficient knowledge on how to represent evolving errors in non-atmospheric model components (e.g., sea ice, land, and ocean) on the timescales of NWP.
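To make the WCDA/SCDA distinction above concrete, the following is a minimal numerical sketch assuming a toy two-variable (atmosphere/ocean) state with Gaussian errors and a single atmospheric observation; the matrices and numbers are invented for illustration and do not represent any operational system.

```python
import numpy as np

# Toy joint background state: [atmospheric variable, oceanic variable].
xb = np.array([280.0, 15.0])              # e.g. air temperature (K), SST (degC)
B = np.array([[1.0, 0.4],                 # cross-covariance couples the domains
              [0.4, 0.5]])
H = np.array([[1.0, 0.0]])                # observe the atmospheric variable only
R = np.array([[0.5]])                     # observation error variance
y = np.array([281.2])                     # the observation

def analysis(B_used):
    """Standard Kalman/3D-Var-style update x_a = x_b + K (y - H x_b)."""
    K = B_used @ H.T @ np.linalg.inv(H @ B_used @ H.T + R)
    return xb + K @ (y - H @ xb)

# SCDA: the full covariance (with cross-domain terms) spreads the atmospheric
# observation into the ocean state at analysis time.
xa_scda = analysis(B)

# WCDA: each domain is analysed separately, i.e. cross-covariances are zeroed;
# the ocean only feels the observation later, through coupled fluxes.
xa_wcda = analysis(np.diag(np.diag(B)))

print("SCDA analysis:", xa_scda)   # ocean increment is nonzero
print("WCDA analysis:", xa_wcda)   # ocean increment is zero
```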
Fundamental Technology Development for Gas-Turbine Engine Health Management
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Simon, Donald L.; Hunter, Gary W.; Arnold, Steven M.; Reveley, Mary S.; Anderson, Lynn M.
2007-01-01
Integrated vehicle health management technologies promise to dramatically improve the safety of commercial aircraft by reducing system and component failures as causal and contributing factors in aircraft accidents. To realize this promise, fundamental technology development is needed to produce reliable health management components. These components include diagnostic and prognostic algorithms, physics-based and data-driven lifing and failure models, sensors, and a sensor infrastructure including wireless communications, power scavenging, and electronics. In addition, system assessment methods are needed to effectively prioritize development efforts. Development work is needed throughout the vehicle, but particular challenges are presented by the hot, rotating environment of the propulsion system. This presentation describes current work in the field of health management technologies for propulsion systems for commercial aviation.
Air Force Research Laboratory Technology Milestones 2007
2007-01-01
Propulsion Fuel Pumps and Fuel Systems Liquid Rockets and Combustion Gas Generators Micropropulsion Gears Monopropellants High-Cycle Fatigue and Its... Systems Electric Propulsion Engine Health Monitoring Systems High-Energy-Density Matter Exhaust Nozzles Injectors and Spray Measurements Fans Laser...of software models to drive development of component-based systems and lightweight domain-specific specification and verification technology. Highly
PDS4 - Some Principles for Agile Data Curation
NASA Astrophysics Data System (ADS)
Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Algermissen, S.; Padams, J.
2015-12-01
PDS4, a research data management and curation system for NASA's Planetary Science Archive, was developed using principles that promote the characteristics of agile development. The result is an efficient system that produces better research data products while using fewer resources (time, effort, and money) and maximizes their usefulness for current and future scientists. The key principle is architectural: the PDS4 information architecture is developed and maintained independently of the infrastructure's process, application, and technology architectures. The information architecture is based on an ontology-based information model developed to leverage best practices from standard reference models for digital archives, digital object registries, and metadata registries, and to capture domain knowledge from a panel of planetary science domain experts. The information model provides a sharable, stable, and formal set of information requirements for the system and is the primary source of information for configuring most system components, including the product registry, search engine, validation and display tools, and production pipelines. Multi-level governance also allows effective management of the informational elements at the common, discipline, and project levels. This presentation will describe the development principles, components, and uses of the information model and how an information model-driven architecture exhibits characteristics of agile curation, including early delivery, evolutionary development, adaptive planning, continuous improvement, and rapid and flexible response to change.
Towards early software reliability prediction for computer forensic tools (case study).
Abu Talib, Manar
2016-01-01
Versatility, flexibility and robustness are essential requirements for software forensic tools, and researchers and practitioners need to put more effort into assessing this type of tool. A Markov model is a robust means for analyzing and anticipating the functioning of an advanced component-based system; it is used, for instance, to analyze the reliability of the state machines of real-time reactive systems. This research extends the architecture-based software reliability prediction model for computer forensic tools, which is based on Markov chains and COSMIC-FFP. Basically, every part of the computer forensic tool is linked to a discrete-time Markov chain. If this can be done, then a probabilistic analysis by Markov chains can be performed to analyze the reliability of the components and of the whole tool. The purposes of the proposed reliability assessment method are to evaluate the tool's reliability in the early phases of its development, to improve the reliability assessment process for large computer forensic tools over time, and to compare alternative tool designs. The reliability analysis can assist designers in choosing the most reliable topology for the components, which can maximize the reliability of the tool and meet the expected reliability level specified by the end-user. The approach of assessing component-based tool reliability in the COSMIC-FFP context is illustrated with the Forensic Toolkit Imager case study.
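A minimal sketch of an architecture-based reliability estimate in the spirit of the discrete-time Markov chain (Cheung-style) model described above; the transition matrix and per-component reliabilities are invented placeholders, and the COSMIC-FFP sizing step from the paper is not reproduced here.

```python
import numpy as np

# Illustrative 3-component tool: component 1 is the entry point,
# component 3 is the terminal (exit) component.
P = np.array([[0.0, 0.7, 0.3],    # P[i, j]: probability control transfers i -> j
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
R = np.array([0.99, 0.97, 0.995])  # per-visit reliability of each component

# Cheung-style model: Q[i, j] = R[i] * P[i, j]; S = (I - Q)^-1 accumulates the
# expected reliable transfers, and system reliability is the reliable transfer
# from entry to exit times the exit component's own reliability.
Q = np.diag(R) @ P
S = np.linalg.inv(np.eye(len(R)) - Q)
system_reliability = S[0, -1] * R[-1]
print(f"estimated system reliability: {system_reliability:.4f}")
```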
Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems
NASA Astrophysics Data System (ADS)
Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard
Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It makes it possible to define, using BIT, the behaviour of several components assembled to process a flow of data. Test cases are defined so that they are simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in user-provided definitions. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
GEOS-5 Seasonal Forecast System: ENSO Prediction Skill and Bias
NASA Technical Reports Server (NTRS)
Borovikov, Anna; Kovach, Robin; Marshak, Jelena
2018-01-01
The GEOS-5 AOGCM known as S2S-1.0 has been in service from June 2012 through January 2018 (Borovikov et al. 2017). The atmospheric component of S2S-1.0 is Fortuna-2.5, the same used for the Modern-Era Retrospective Analysis for Research and Applications (MERRA), but with adjusted parameterization of moist processes and turbulence. The ocean component is the Modular Ocean Model version 4 (MOM4). The sea ice component is the Community Ice CodE, version 4 (CICE). The land surface model is a catchment-based hydrological model coupled to a multi-layer snow model. The AGCM uses a Cartesian grid with a 1 deg × 1.25 deg horizontal resolution and 72 hybrid vertical levels, with the uppermost level at 0.01 hPa. The nominal resolution of the OGCM tripolar grid is 1/2 deg, with a meridional equatorial refinement to 1/4 deg. In the coupled model initialization, selected atmospheric variables are constrained with MERRA. The Goddard Earth Observing System integrated Ocean Data Assimilation System (GEOS-iODAS) is used for both ocean state and sea ice initialization. SST, T and S profiles, and sea ice concentration were assimilated.
NASA Technical Reports Server (NTRS)
Zhu, Dongming; Sakowski, Barbara A.; Fisher, Caleb
2014-01-01
SiC/SiC ceramic matrix composite (CMC) systems will play a crucial role in next-generation turbine engines for hot-section component applications because of their ability to significantly increase engine operating temperatures and reduce engine weight and cooling requirements. However, the environmental stability of Si-based ceramics in the high-pressure, high-velocity turbine engine combustion environment is of major concern. The water-vapor-containing combustion gas leads to accelerated oxidation and corrosion of the SiC-based ceramics, because the water vapor reacts with the silica (SiO2) scales to form non-protective volatile hydroxide species, resulting in recession of the ceramic components. Although environmental barrier coatings are being developed to help protect the CMC components, there is a need to better understand the fundamental recession behavior in more realistic cooled engine component environments. In this paper, we describe a comprehensive film-cooled high pressure burner rig based testing approach, using standardized film-cooled SiC/SiC disc test specimen configurations. The SiC/SiC specimens were designed for implementing the burner rig testing in turbine engine relevant combustion environments, obtaining generic film-cooled recession rate data under the combustion water vapor conditions, and helping develop the Computational Fluid Dynamics (CFD) film-cooled models and perform model validation. Factors affecting the film-cooled recession, such as temperature, water vapor concentration, combustion gas velocity, and pressure, are particularly investigated and modeled, and compared with impingement-cooling-only recession data in similar combustion flow environments. The experimental and modeling work will help predict the SiC/SiC CMC recession behavior and develop durable CMC systems for complex turbine engine operating conditions.
Using Dual Process Models to Examine Impulsivity Throughout Neural Maturation.
Leshem, Rotem
2016-01-01
The multivariate construct of impulsivity is examined through neural systems and connections that comprise the executive functioning system. It is proposed that cognitive and behavioral components of impulsivity can be divided into two distinct groups, mediated by (1) the cognitive control system: deficits in top-down cognitive control processes referred to as action/cognitive impulsivity and (2) the socioemotional system: related to bottom-up affective/motivational processes referred to as affective impulsivity. Examination of impulsivity from a developmental viewpoint can guide future research, potentially enabling the selection of more effective interventions for impulsive individuals, based on the cognitive components requiring improvement.
Benefits of Enterprise Ontology for the Development of ICT-Based Value Networks
NASA Astrophysics Data System (ADS)
Albani, Antonia; Dietz, Jan L. G.
The competitiveness of value networks is highly dependent on the cooperation between business partners and the interoperability of their information systems. Innovations in information and communication technology (ICT), primarily the emergence of the Internet, offer possibilities to increase the interoperability of information systems and therefore enable inter-enterprise cooperation. For the design of inter-enterprise information systems, the concept of business component appears to be very promising. However, the identification of business components is strongly dependent on the appropriateness and the quality of the underlying business domain model. The ontological model of an enterprise - or an enterprise network - as presented in this article, is a high-quality and very adequate business domain model. It provides all essential information that is necessary for the design of the supporting information systems, and at a level of abstraction that makes it also understandable for business people. The application of enterprise ontology for the identification of business components is clarified. To exemplify our approach, a practical case is taken from the domain of strategic supply network development. By doing this, a widespread problem of the practical application of inter-enterprise information systems is being addressed.
Studying the Warm Layer and the Hardening Factor in Cygnus X-1
NASA Technical Reports Server (NTRS)
Yao, Yangsen; Zhang, Shuangnan; Zhang, Xiaoling; Feng, Yuxin
2002-01-01
As the first dynamically determined black hole X-ray binary system, Cygnus X-1 has been studied extensively. However, its broadband spectrum observed with BeppoSax is still not well understood. Besides the soft excess described by the multi-color disk model (MCD), the power-law hard component and a broad excess feature above 10 keV (a disk reflection component), there is also an additional soft component around 1 keV, whose origin is not known currently. Here we propose that the additional soft component is due to the thermal Comptonization between the soft disk photons and a warm plasma cloud just above the disk, i.e., a warm layer. We use the Monte-Carlo technique to simulate this Compton scattering process and build a table model based on our simulation results. With this table model, we study the disk structure and estimate the hardening factor to the MCD component in Cygnus X-1.
DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; Rice, C Keith; Abdelaziz, Omar
2015-01-01
This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high-stage compressor was modelled using a compressor map, and the low-stage compressor was experimentally studied using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.
Exploration of cellular reaction systems.
Kirkilionis, Markus
2010-01-01
We discuss and review different ways to map cellular components and their temporal interaction with other such components to different non-spatially explicit mathematical models. The essential choices made in the literature are between discrete and continuous state spaces, between rule- and event-based state updates, and between deterministic and stochastic series of such updates. The temporal modelling of cellular regulatory networks (dynamic network theory) is compared with static network approaches in two introductory sections on general network modelling. We concentrate next on deterministic rate-based dynamic regulatory networks and their derivation. In the derivation, we include methods from multiscale analysis and also look at structured large particles, here called macromolecular machines. It is clear that mass-action systems and their derivatives, i.e. networks based on enzyme kinetics, play the most dominant role in the literature. The tools to analyse cellular reaction networks are without doubt most complete for mass-action systems. We devote a long section at the end to a comprehensive survey of related tools and mathematical methods. The emphasis is to show how cellular reaction networks can be analysed with the help of different associated graphs and the dissection into modules, i.e. sub-networks.
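As an illustration of the deterministic, rate-based (mass-action) networks that dominate this literature, here is a minimal sketch of an irreversible enzyme mechanism E + S <-> ES -> E + P integrated as mass-action ODEs; the rate constants and initial concentrations are arbitrary illustrative values.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 1.0, 0.5, 0.3   # illustrative mass-action rate constants

def rhs(t, x):
    e, s, es, p = x
    v_bind, v_unbind, v_cat = k1 * e * s, km1 * es, k2 * es
    return [-v_bind + v_unbind + v_cat,   # dE/dt
            -v_bind + v_unbind,           # dS/dt
             v_bind - v_unbind - v_cat,   # dES/dt
             v_cat]                       # dP/dt

# Initial state: [E, S, ES, P]
sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 10.0, 0.0, 0.0])
print("final product concentration:", sol.y[3, -1])
```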
NASA Astrophysics Data System (ADS)
Konesev, S. G.; Khazieva, R. T.; Kirllov, R. V.; Konev, A. A.
2017-01-01
Some electrical consumers (storage capacitor charging systems, powerful pulse generators, electrothermal systems, gas-discharge lamps, electric ovens, plasma torches) require constant power consumption while their resistance changes within a limited range. Current stabilization systems (CSS) with inductive-capacitive transducers (ICT) provide constant power when the load resistance changes over a wide range and increase the efficiency of power supplies for high-power loads. ICT elements are selected according to the maximum load, which leads to exceeding a predetermined value of capacity. The paper suggests supplying the load power through an ICT based on multifunction integrated electromagnetic components (MIEC) to reduce the predetermined capacity of the ICT elements and the CSS weight and dimensions. The authors developed and patented an ICT based on MIEC that reduces the CSS weight and dimensions by reducing the number of components, while allowing transformation of the device's electric energy and changes of the resonance frequency. An ICT mathematical model was produced; the model determines the width of the load stabilization range. A model for studying the electromagnetic processes was built with the MIEC integral parameters (full inductance of the electrical lead, total capacity, current of the electrical lead). It shows that the load current is independent of the load resistance for different ways of connecting the MIEC.
Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz
2014-01-01
Introduction: The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using the National Health Information System, one can improve the quality of health data, information, and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system - for better planning and management of the factors influencing performance - seems necessary, this study explores different attitudes towards the components of this system comparatively. Methods: This is a descriptive, comparative study. The study material consists of printed and electronic documents describing the components of the national health information system in three parts: input, process, and output. In this context, searches using library resources and the internet were conducted, and the analysis was expressed using comparative tables and qualitative data. Results: The findings showed that there are three different perspectives on the components of the national health information system: the Lippeveld, Sauerborn, and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's 2009 model. All three models require, in the input section (resources and structure), components of management and leadership, planning and design of programs, supply of staff, software and hardware, and facilities and equipment. In addition, in the process section, all three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. Conclusion: The results showed that all three models give only a brief discussion of the components of health information in the input section, and the Lippeveld model overlooks the components of the national health information system in the process and output sections. Therefore, the Health Metrics Network model appears to present the components of the health system most comprehensively across all three sections: input, process, and output. PMID:24825937
[Market-based medicine or patient-based medicine?].
Justich, Pablo R
2015-04-01
Health care has evolved over the centuries from a theocentric model to a model centered on man, environment and society. The process of neoliberal globalization has changed the relationship between the components of the health system and the population. The active participation of organizations such as the World Trade Organization, the International Monetary Fund and the World Bank, together with the techno-medical industrial complex, tends to turn health care into a model focused on the economy. This impacts negatively on all components of the health care process and has an adverse effect on humanized care. The analysis of each sector and of their interactions shows the effects of this change. Alternatives are proposed for each sector to contribute to a model of care focused on the patient, their family and the social environment.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario
2015-04-01
The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of a Fault Tree Analysis (FTA), which identifies the correlations between possible faults and their corresponding symptoms at the level of system components. The main outcome of the FTA is an inferential isolation tool (the Fault Signature Matrix - FSM), which univocally links the faults to the symptoms detected during system monitoring. In this work the FTA is considered as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this purpose a dynamic model, previously developed by the authors, is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
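A minimal sketch of how a fault signature matrix can serve as an inferential isolation tool; the fault names, symptom set, and binary signatures below are invented placeholders for illustration and are not the FSM derived in the paper.

```python
import numpy as np

# Rows: candidate faults; columns: binary symptoms from system monitoring.
faults = ["stack degradation", "air blower fault",
          "fuel leak", "reformer fault", "heat exchanger fouling"]
symptoms = ["stack voltage low", "air flow low", "fuel utilization high",
            "anode inlet temp low", "cathode inlet temp high"]
FSM = np.array([[1, 0, 0, 0, 0],
                [1, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [1, 0, 1, 1, 0],
                [0, 0, 0, 0, 1]])

def isolate(observed):
    """Return faults whose signature matches the observed symptom vector."""
    observed = np.asarray(observed)
    return [f for f, row in zip(faults, FSM) if np.array_equal(row, observed)]

print(isolate([1, 0, 1, 1, 0]))   # -> ['reformer fault']
```

If two faults shared the same signature, they would be indistinguishable; adding thresholded symptoms, as the abstract describes, is one way to make the signatures unique.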
Definition of Contravariant Velocity Components
NASA Technical Reports Server (NTRS)
Hung, Ching-moa; Kwak, Dochan (Technical Monitor)
2002-01-01
In this paper we have reviewed the basics of tensor analysis in an attempt to clarify some misconceptions regarding contravariant and covariant vector components as used in fluid dynamics. We have indicated that contravariant components are the components of a given vector expressed as a unique combination of the covariant base vector system and, vice versa, that the covariant components are the components of a vector expressed with the contravariant base vector system. Mathematically, expressing a vector as a combination of base vectors is a decomposition process for a specific base vector system. Hence, the contravariant velocity components are the decomposed components of the velocity vector along the directions of the coordinate lines, with respect to the covariant base vector system. However, the contravariant (and covariant) components are not physical quantities; their magnitudes and dimensions are controlled by their corresponding covariant (and contravariant) base vectors.
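For reference, the standard tensor-analysis statement of this decomposition (written here in generic notation, not copied from the paper) is

```latex
\mathbf{v} \;=\; v^{i}\,\mathbf{g}_{i} \;=\; v_{i}\,\mathbf{g}^{i},
\qquad
v^{i} = \mathbf{v}\cdot\mathbf{g}^{i},
\qquad
v_{i} = \mathbf{v}\cdot\mathbf{g}_{i},
\qquad
\mathbf{g}^{i}\cdot\mathbf{g}_{j} = \delta^{i}_{\,j},
```

so the contravariant components v^i are the coefficients of the velocity along the covariant base vectors g_i (the coordinate-line directions), and their magnitudes and dimensions depend on those base vectors rather than being physical velocities themselves.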
V-SUIT Model Validation Using PLSS 1.0 Test Results
NASA Technical Reports Server (NTRS)
Olthoff, Claas
2015-01-01
The dynamic portable life support system (PLSS) simulation software Virtual Space Suit (V-SUIT) has been under development at the Technische Universitat Munchen since 2011 as a spin-off from the Virtual Habitat (V-HAB) project. The MATLAB(trademark)-based V-SUIT simulates space suit portable life support systems and their interaction with a detailed and also dynamic human model, as well as the dynamic external environment of a space suit moving on a planetary surface. To demonstrate the feasibility of a large, system-level simulation like V-SUIT, a model of NASA's PLSS 1.0 prototype was created. This prototype was run through an extensive series of tests in 2011. Since the test setup was heavily instrumented, it produced a wealth of data, making it ideal for model validation. The implemented model includes all components of the PLSS in both the ventilation and thermal loops. The major components are modeled in greater detail, while smaller and ancillary components are low-fidelity black box models. The major components include the Rapid Cycle Amine (RCA) CO2 removal system, the Primary and Secondary Oxygen Assembly (POS/SOA), the Pressure Garment System Volume Simulator (PGSVS), the Human Metabolic Simulator (HMS), the heat exchanger between the ventilation and thermal loops, the Space Suit Water Membrane Evaporator (SWME) and finally the Liquid Cooling Garment Simulator (LCGS). Using the created model, dynamic simulations were performed using the same test points used during PLSS 1.0 testing. The results of the simulation were then compared to the test data, with special focus on absolute values during the steady state phases and dynamic behavior during the transitions between test points. Quantified simulation results are presented that demonstrate which areas of the V-SUIT model are in need of further refinement and which are sufficiently close to the test results. Finally, lessons learned from the modelling and validation process are given, together with implications for the future development of other PLSS models in V-SUIT.
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
Global Drought Monitoring and Forecasting based on Satellite Data and Land Surface Modeling
NASA Astrophysics Data System (ADS)
Sheffield, J.; Lobell, D. B.; Wood, E. F.
2010-12-01
Monitoring drought globally is challenging because of the lack of dense in-situ hydrologic data in many regions. In particular, soil moisture measurements are absent in many regions and in real time. This is especially problematic for developing regions such as Africa, where water information is arguably most needed, but virtually non-existent on the ground. With the emergence of remote sensing estimates of all components of the water cycle, there is now the potential to monitor the full terrestrial water cycle from space, giving global coverage and providing the basis for drought monitoring. These estimates include microwave-infrared merged precipitation retrievals, evapotranspiration based on satellite radiation, temperature and vegetation data, gravity recovery measurements of changes in water storage, microwave-based retrievals of soil moisture, and altimetry-based estimates of lake levels and river flows. However, many challenges remain in using these data, especially due to biases in individual satellite-retrieved components, their incomplete sampling in time and space, and their failure to provide budget closure in concert. A potential way forward is to use modeling to provide a framework to merge these disparate sources of information to give physically consistent and spatially and temporally continuous estimates of the water cycle and drought. Here we present results from our experimental global water cycle monitor and its African drought monitor counterpart (http://hydrology.princeton.edu/monitor). The system relies heavily on satellite data to drive the Variable Infiltration Capacity (VIC) land surface model to provide near real-time estimates of precipitation, evapotranspiration, soil moisture, snowpack and streamflow. Drought is defined in terms of anomalies of soil moisture and other hydrologic variables relative to a long-term (1950-2000) climatology. We present some examples of recent droughts and how they are identified by the system, including objective quantification and tracking of their spatial-temporal characteristics. Further, we present strategies for merging various sources of information, including bias correction of satellite precipitation and assimilation of remotely sensed soil moisture, which can augment the monitoring in regions where satellite precipitation is most uncertain. Ongoing work is adding a drought forecast component based on a successful implementation over the U.S. and agricultural productivity estimates based on output from crop yield models. The forecast component uses seasonal global climate forecasts from the NCEP Climate Forecast System (CFS). These are merged with observed climatology in a Bayesian framework to produce ensemble atmospheric forcings that better capture the uncertainties. At the same time, the system bias corrects and downscales the monthly CFS data. We show some initial seasonal (up to 6-month lead) hydrologic forecast results for the African system. Agricultural monitoring is based on the precipitation, temperature and soil moisture from the system used to force statistical and process-based crop yield models. We demonstrate the feasibility of monitoring major crop types across the world and show a strategy for providing predictions of yields within our drought forecast mode.
1990-12-01
Implementation of Coupled System 18 15.4. CASE STUDIES & IMPLEMENTATION EXAMPLES 24 15.4.1. The Case Studies of Coupled System 24 15.4.2. Example: Coupled System...occurs during specific phases of the problem-solving process. By decomposing the coupling process into its component layers we effectively study the nature...by the qualitative model, appropriate mathematical model is invoked. 5) The results are verified. If successful, stop. Else go to (2) and use an
NASA Astrophysics Data System (ADS)
Yu, Zhijing; Ma, Kai; Wang, Zhijun; Wu, Jun; Wang, Tao; Zhuge, Jingchang
2018-03-01
A blade is one of the most important components of an aircraft engine. Due to its high manufacturing costs, it is indispensable to come up with methods for repairing damaged blades. In order to obtain a surface model of the blades, this paper proposes a modeling method by using speckle patterns based on the virtual stereo vision system. Firstly, blades are sprayed evenly creating random speckle patterns and point clouds from blade surfaces can be calculated by using speckle patterns based on the virtual stereo vision system. Secondly, boundary points are obtained in the way of varied step lengths according to curvature and are fitted to get a blade surface envelope with a cubic B-spline curve. Finally, the surface model of blades is established with the envelope curves and the point clouds. Experimental results show that the surface model of aircraft engine blades is fair and accurate.
NASA Astrophysics Data System (ADS)
Qi, Le; Zheng, Zhongyi; Gang, Longhui
2017-10-01
It was found that the ships' velocity change, which is impacted by the weather and sea, e.g., wind, sea wave, sea current, tide, etc., is significant and must be considered in the marine traffic model. Therefore, a new marine traffic model based on cellular automaton (CA) was proposed in this paper. The characteristics of the ship's velocity change are taken into account in the model. First, the acceleration of a ship was divided into two components: regular component and random component. Second, the mathematical functions and statistical distribution parameters of the two components were confirmed by spectral analysis, curve fitting and auto-correlation analysis methods. Third, by combining the two components, the acceleration was regenerated in the update rules for ships' movement. To test the performance of the model, the ship traffic flows in the Dover Strait, the Changshan Channel and the Qiongzhou Strait were studied and simulated. The results show that the characteristics of ships' velocities in the simulations are consistent with the measured data by Automatic Identification System (AIS). Although the characteristics of the traffic flow in different areas are different, the velocities of ships can be simulated correctly. It proves that the velocities of ships under the influence of weather and sea can be simulated successfully using the proposed model.
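A minimal sketch of the velocity-update idea just described, assuming the regular acceleration component is a slow sinusoidal variation and the random component is zero-mean Gaussian noise; the actual functional forms and parameters in the paper were fitted from AIS data via spectral and autocorrelation analysis, so everything numeric here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                      # CA update interval (s)
v = 6.0                       # initial ship speed (m/s)
v_min, v_max = 0.0, 10.0      # physical speed limits for this ship class

def regular_acc(t, amp=0.02, period=600.0):
    # slow, deterministic variation (stand-in for wind/wave/current forcing)
    return amp * np.sin(2.0 * np.pi * t / period)

def random_acc(sigma=0.01):
    # zero-mean stochastic component
    return rng.normal(0.0, sigma)

speeds = []
for step in range(3600):
    a = regular_acc(step * dt) + random_acc()   # combined acceleration
    v = np.clip(v + a * dt, v_min, v_max)       # CA update rule for ship speed
    speeds.append(v)

print("mean speed:", np.mean(speeds), "std:", np.std(speeds))
```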
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xia, Youlong; Mocko, David; Huang, Maoyi
2017-03-01
In preparation for the next-generation North American Land Data Assimilation System (NLDAS), three advanced land surface models (CLM4.0, Noah-MP, and CLSM-F2.5) were run from 1979 to 2014 within the NLDAS-based framework. Monthly total water storage anomaly (TWSA) and its individual water storage components were evaluated against satellite-based and in situ observations, and reference reanalysis products, at basin-wide and statewide scales. In general, all three models are able to reasonably capture the monthly and interannual variability and magnitudes of TWSA. However, contributions of the anomalies of individual water components to TWSA are very dependent on the model and basin. A major contributor to the TWSA is the anomaly of total column soil moisture content (SMCA) for CLM4.0 and Noah-MP, or the groundwater storage anomaly (GWSA) for CLSM-F2.5, although other components such as the anomaly of snow water equivalent (SWEA) also play some role. For each individual water storage component, the models are able to capture broad features such as monthly and interannual variability. However, there are large inter-model differences and quantitative uncertainties in this study. Therefore, it should be thought of as a preliminary synthesis and analysis.
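A minimal sketch of the anomaly bookkeeping implied above, assuming monthly series of soil moisture, groundwater, and snow water equivalent; anomalies are taken relative to each calendar month's long-term mean, and TWSA is approximated as their sum. All series below are synthetic placeholders.

```python
import numpy as np

def monthly_anomaly(series, months):
    """Anomaly of a monthly series relative to its calendar-month climatology."""
    series, months = np.asarray(series, float), np.asarray(months)
    clim = {m: series[months == m].mean() for m in range(1, 13)}
    return series - np.array([clim[m] for m in months])

# Illustrative monthly storages (mm) over ten years.
months = np.tile(np.arange(1, 13), 10)
rng = np.random.default_rng(1)
smc = 300 + 30 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)
gws = 150 + rng.normal(0, 8, months.size)
swe = np.clip(40 * np.cos(2 * np.pi * months / 12), 0, None)

smca, gwsa, swea = (monthly_anomaly(x, months) for x in (smc, gws, swe))
twsa = smca + gwsa + swea          # TWSA approximated as the sum of anomalies
print("TWSA standard deviation (mm):", twsa.std())
```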
Niazi, Muaz A
2014-01-01
The body structure of snakes is composed of numerous natural components thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single or a relatively smaller number of components. As a result, not only these artificial structures are constrained by the dimensions of the constituent components but often also require relatively more computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving a minimal of interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications such as in the form of complex, autonomous, evolvable robots with self-organizing, mobile components with minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems.
Niazi, Muaz A.
2014-01-01
The body structure of snakes is composed of numerous natural components thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single or a relatively smaller number of components. As a result, not only these artificial structures are constrained by the dimensions of the constituent components but often also require relatively more computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving a minimal of interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications such as in the form of complex, autonomous, evolvable robots with self-organizing, mobile components with minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems. PMID:24701135
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaenko, Alexander; Windus, Theresa L.; Sosonkina, Masha
2012-10-19
The design and development of scientific software components to provide an interface to the effective fragment potential (EFP) methods are reported. Multiscale modeling of physical and chemical phenomena demands the merging of software packages developed by research groups in significantly different fields. Componentization offers an efficient way to realize new high performance scientific methods by combining the best models available in different software packages without a need for package readaptation after the initial componentization is complete. The EFP method is an efficient electronic structure theory based model potential that is suitable for predictive modeling of intermolecular interactions in large molecular systems, such as liquids, proteins, atmospheric aerosols, and nanoparticles, with an accuracy that is comparable to that of correlated ab initio methods. The developed components make the EFP functionality accessible for any scientific component-aware software package. The performance of the component is demonstrated on a protein interaction model, and its accuracy is compared with results obtained with coupled cluster methods.
Cost decomposition of linear systems with application to model reduction
NASA Technical Reports Server (NTRS)
Skelton, R. E.
1980-01-01
A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
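A minimal numerical sketch of a quadratic cost decomposition in the spirit of component cost analysis, assuming a stable linear system driven by unit-intensity white noise with steady-state output cost V = tr(C^T Q C X); the particular matrices are illustrative, and the exact decomposition used in the paper may differ.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stable system x' = A x + D w (unit-intensity white noise), output y = C x.
A = np.array([[-1.0, 0.2, 0.0],
              [ 0.0, -0.5, 0.1],
              [ 0.0,  0.0, -2.0]])
D = np.eye(3)
C = np.array([[1.0, 1.0, 0.5]])
Q = np.eye(1)

# Steady-state state covariance X solves A X + X A^T + D D^T = 0.
X = solve_continuous_lyapunov(A, -D @ D.T)

# Total quadratic cost V = tr(X C^T Q C); the diagonal entries of X C^T Q C
# sum to V, so each can be read as that state component's share of the cost.
M = X @ C.T @ Q @ C
component_costs = np.diag(M)
print("component costs:", component_costs, " total cost:", M.trace())
```

Components with negligible cost shares would be candidates for truncation in a model reduction exercise of the kind described above.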
Polyenergetic known-component reconstruction without prior shape models
NASA Astrophysics Data System (ADS)
Zhang, C.; Zbijewski, W.; Zhang, X.; Xu, S.; Stayman, J. W.
2017-03-01
Purpose: Previous work has demonstrated that structural models of surgical tools and implants can be integrated into model-based CT reconstruction to greatly reduce metal artifacts and improve image quality. This work extends a polyenergetic formulation of known-component reconstruction (Poly-KCR) by removing the requirement that a physical model (e.g. a CAD drawing) be known a priori, permitting much more widespread application. Methods: We adopt a single-threshold segmentation technique with the help of morphological structuring elements to build a shape model of metal components in a patient scan based on an initial filtered-backprojection (FBP) reconstruction. This shape model is used as an input to Poly-KCR, a formulation of known-component reconstruction that does not require prior knowledge of beam quality or component material composition. An investigation of performance as a function of segmentation thresholds is performed in simulation studies, and qualitative comparisons to Poly-KCR with an a priori shape model are made using physical CBCT data of an implanted cadaver and patient data from a prototype extremities scanner. Results: We find that model-free Poly-KCR (MF-Poly-KCR) provides much better image quality compared to conventional reconstruction techniques (e.g. FBP). Moreover, the performance closely approximates that of Poly-KCR with an a priori shape model. In simulation studies, we find that imaging performance generally follows segmentation accuracy, with slight under- or over-estimation based on the shape of the implant. In both simulation and physical data studies we find that the proposed approach can remove most of the blooming and streak artifacts around the component, permitting visualization of the surrounding soft tissues. Conclusion: This work shows that it is possible to perform known-component reconstruction without prior knowledge of the known component. In conjunction with the Poly-KCR technique that does not require knowledge of beam quality or material composition, very little needs to be known about the metal implant and system beforehand. These generalizations will allow more widespread application of KCR techniques in real patient studies where information on surgical tools and implants is limited or not available.
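A minimal sketch of the single-threshold-plus-morphology idea for extracting a metal-component shape from an initial FBP volume; the threshold value and structuring-element size are illustrative assumptions, and the subsequent Poly-KCR reconstruction step is not shown.

```python
import numpy as np
from scipy import ndimage

def segment_metal(fbp_volume, hu_threshold=2500.0, radius=2):
    """Binary metal mask from an initial FBP reconstruction (values in HU)."""
    mask = fbp_volume > hu_threshold                      # single global threshold
    ball = ndimage.generate_binary_structure(3, 1)
    ball = ndimage.iterate_structure(ball, radius)        # simple structuring element
    mask = ndimage.binary_opening(mask, structure=ball)   # remove isolated speckle
    mask = ndimage.binary_closing(mask, structure=ball)   # fill small gaps
    labels, n = ndimage.label(mask)
    if n > 1:  # keep the largest connected component as the implant shape model
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        mask = labels == (1 + int(np.argmax(sizes)))
    return mask

# Usage (hypothetical): mask = segment_metal(fbp), where fbp is the initial volume.
```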
Conceptual model of knowledge base system
NASA Astrophysics Data System (ADS)
Naykhanova, L. V.; Naykhanova, I. V.
2018-05-01
In this article, a conceptual model of a knowledge-based system of the production-system type is provided. The production system is intended for the automation of problems whose solution is rigidly conditioned by the legislation. A core component of the system is the knowledge base, which consists of a facts set, a rules set, a cognitive map and an ontology. The cognitive map is developed to implement the control strategy, and the ontology serves as the explanation mechanism. Representing knowledge about the recognition of a situation in the form of rules makes it possible to describe knowledge of the pension legislation. This approach provides flexibility, originality and scalability of the system. If the legislation changes, only the rules set needs to be changed, so a change of the legislation would not be a big problem. The main advantage of the system is that it can easily be adapted to changes in the legislation.
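A minimal sketch of the facts/rules core of such a production system; the facts and rules about pension eligibility are invented placeholders, and the cognitive map and ontology components are not represented here.

```python
# Facts set and rules set of a tiny forward-chaining production system.
facts = {"age>=65", "insured_years>=15"}

# Each rule: (set of condition facts, fact asserted when they all hold).
rules = [
    ({"age>=65", "insured_years>=15"}, "entitled_to_old_age_pension"),
    ({"entitled_to_old_age_pension"}, "compute_pension_amount"),
]

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived (fixed point)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

The flexibility argued for above shows up here directly: a change in the legislation only alters the entries of `rules`, not the inference procedure.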
NASA Technical Reports Server (NTRS)
Lee, Allan Y.; Tsuha, Walter S.
1993-01-01
A two-stage model reduction methodology, combining the classical Component Mode Synthesis (CMS) method and the newly developed Enhanced Projection and Assembly (EP&A) method, is proposed in this research. The first stage of this methodology, called the COmponent Modes Projection and Assembly model REduction (COMPARE) method, involves the generation of CMS mode sets, such as the MacNeal-Rubin mode sets. These mode sets are then used to reduce the order of each component model in the Rayleigh-Ritz sense. The resultant component models are then combined to generate reduced-order system models at various system configurations. A composite mode set which retains important system modes at all system configurations is then selected from these reduced-order system models. In the second stage, the EP&A model reduction method is employed to reduce further the order of the system model generated in the first stage. The effectiveness of the COMPARE methodology has been successfully demonstrated on a high-order, finite-element model of the cruise-configured Galileo spacecraft.
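A minimal sketch of the Rayleigh-Ritz reduction step that both stages of such a methodology rely on: a mass/stiffness component model projected onto a truncated mode set. The matrices are an illustrative spring-mass chain; the CMS mode-set generation and the EP&A assembly step are not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative component model: mass and stiffness of a small spring-mass chain.
n = 6
M = np.eye(n)
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

# Solve the generalized eigenproblem K phi = lambda M phi and keep r modes.
w2, Phi = eigh(K, M)
r = 3
Phi_r = Phi[:, :r]                      # truncated (Ritz) basis

# Reduced-order model in the Rayleigh-Ritz sense.
M_r = Phi_r.T @ M @ Phi_r
K_r = Phi_r.T @ K @ Phi_r
print("retained natural frequencies:", np.sqrt(np.diag(K_r) / np.diag(M_r)))
```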
Towards Developing a Regional Drought Information System for Lower Mekong
NASA Astrophysics Data System (ADS)
Dutta, R.; Jayasinghe, S.; Basnayake, S. B.; Apirumanekul, C.; Pudashine, J.; Granger, S. L.; Andreadis, K.; Das, N. N.
2016-12-01
With climate and weather patterns changing over the years, the Lower Mekong Basin has been experiencing frequent and prolonged droughts, resulting in severe damage to the agricultural sector and affecting food security and the livelihoods of the farming community. A Regional Drought Information System (RDIS) for the Lower Mekong countries would help prepare vulnerable communities for frequent and severe droughts through monitoring, assessing and forecasting of drought conditions, allowing decision makers to take effective decisions in terms of providing early warning and incentives to farmers, adjusting cropping calendars, and so on. The RDIS is an integrated system being designed for drought monitoring, analysis and forecasting, based on the need to meet the growing demand by the Lower Mekong countries for an effective drought monitoring system. The RDIS is being built on four major components: an earth observation component, a meteorological data component, database storage, and the Regional Hydrologic Extreme Assessment System (RHEAS) framework, while the outputs from the system will be made openly accessible to the public through a web-based user interface. The system will run on the RHEAS framework, which allows both nowcasting and forecasting using hydrological and crop simulation models, namely the Variable Infiltration Capacity (VIC) model and the Decision Support System for Agro-Technology Transfer (DSSAT) model, respectively. RHEAS allows for a tightly observation-constrained drought and crop yield information system that can provide customized outputs on drought, including root zone soil moisture, the Standardized Precipitation Index (SPI), the Standardized Runoff Index (SRI), the Palmer Drought Severity Index (PDSI) and crop yield, and can integrate remote sensing products along with evapotranspiration and soil moisture data. The anticipated outcomes of the RDIS are to improve the operational, technological and institutional capabilities of the Lower Mekong countries to prepare for and respond to drought situations, and to provide policy makers with current and forecast drought indices for decision making on adjusting cropping calendars as well as planning short- and long-term mitigation measures.
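A minimal sketch of one of the listed indices, the Standardized Precipitation Index: a gamma distribution is fitted to a precipitation accumulation series and its cumulative probabilities are mapped through the standard normal quantile function. Handling of zero-precipitation months and multi-timescale accumulation is omitted, and the input series is synthetic.

```python
import numpy as np
from scipy import stats

def spi(precip):
    """Standardized Precipitation Index for a strictly positive accumulation series."""
    precip = np.asarray(precip, float)
    shape, loc, scale = stats.gamma.fit(precip, floc=0)   # fit gamma, location fixed at 0
    cdf = stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
    return stats.norm.ppf(cdf)                            # equiprobability transform

rng = np.random.default_rng(2)
monthly_precip = rng.gamma(shape=2.0, scale=50.0, size=360)   # illustrative mm/month
print("driest month SPI:", spi(monthly_precip).min())
```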
Design of disturbances control model at automotive company
NASA Astrophysics Data System (ADS)
Marie, I. A.; Sari, D. K.; Astuti, P.; Teorema, M.
2017-12-01
The study was conducted at PT. XYZ, which produces automotive components and motorcycle products. The company produces the X123-type cylinder head, a forming component of motor vehicles. Disturbances in the production system have affected the company's performance in achieving its Key Performance Indicator (KPI) targets. Currently, the percentage of safety stock of cylinder head products is not determined in accordance with the control limits set by the company (60% - 80%) and tends to exceed those limits, which increases inventory waste in the company. This study aims to identify the production system disturbances that occur in the manufacturing process of X123-type cylinder head components and to design a disturbance control model to obtain control actions and determine a safety stock policy in accordance with the needs of the company. The design stage was carried out based on an existing Disturbance Control Model, customized to the company's needs for controlling production system disturbances. The designed disturbance control model consists of a sub-model of the risk level of the disturbance, a sub-model of action status, a sub-model of disturbance control actions, and a sub-model for determining the safety stock. The model can assist the automotive company in deciding on disturbance control actions in the cylinder head production system while controlling the safety stock percentage.
An integrated system for rainfall induced shallow landslides modeling
NASA Astrophysics Data System (ADS)
Formetta, Giuseppe; Capparelli, Giovanna; Rigon, Riccardo; Versace, Pasquale
2014-05-01
Rainfall induced shallow landslides (RISL) cause significant damage, including loss of life and property. Predicting susceptible locations for RISL is a complex task that involves many disciplines: hydrology, geotechnical science, geomorphology, and statistics. Usually two main approaches are used to accomplish this task: statistical or physically based models. In this work an open source (OS), 3-D, fully distributed hydrological model was integrated in an OS modeling framework (the Object Modeling System). The chain is closed by linking the system to a component for safety factor computation with the infinite slope approximation, able to take into account layered soils and the contribution of suction to hillslope stability. The model composition was tested for a case study in Calabria (Italy) in order to simulate the triggering of a landslide that occurred in the Cosenza Province. The integration in OMS allows the use of other components, such as a GIS to manage input-output processes and automatic calibration algorithms to estimate model parameters. Finally, model performance was quantified by comparing modelled and observed trigger times. This research is supported by the Ambito/Settore AMBIENTE E SICUREZZA (PON01_01503) project.
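For reference, a common single-layer form of the infinite-slope factor of safety with pore-water pressure (the component described above generalizes this to layered soils and a suction contribution) is

```latex
FS \;=\; \frac{c' + \left(\gamma\, z \cos^{2}\beta - u\right)\tan\phi'}
              {\gamma\, z \,\sin\beta\,\cos\beta},
```

where c' is the effective cohesion, phi' the effective friction angle, gamma the soil unit weight, z the depth of the potential failure surface, beta the slope angle, and u the pore-water pressure at the failure surface; triggering is predicted when FS falls to 1 or below.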
Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N
2017-09-01
In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) with different window sizes. However, most real systems are nonlinear, and the linear PCA method cannot handle such nonlinearity to a great extent. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component representation of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), as this may further improve fault detection performance by reducing the FAR using an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates and FARs and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and utilized in practice to improve fault detection in biological phenomena modeled by S-systems and to enhance the monitoring of the process mean. The idea behind the KPCA-based EWMA-GLRT fault detection algorithm is to combine the advantages of the proposed EWMA-GLRT fault detection chart with the KPCA model. It is used to enhance fault detection of the Cad System in E. coli model through monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
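A minimal sketch of the exponentially weighted residual statistic underlying an EWMA-GLRT-style chart, applied to residuals that would come from a KPCA (or any other) model of normal operation; the weighting parameter, variance normalization, and threshold below are illustrative assumptions, not the paper's exact test statistic.

```python
import numpy as np

def ewma_glrt(residuals, lam=0.2, sigma=1.0, threshold=9.0):
    """Flag samples where the squared, variance-normalized EWMA of the
    residuals exceeds a threshold (a chi-square-like decision statistic)."""
    z, alarms, stats = 0.0, [], []
    var_z = sigma**2 * lam / (2.0 - lam)     # asymptotic variance of the EWMA
    for t, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z        # exponentially weighted memory
        stat = z**2 / var_z
        stats.append(stat)
        if stat > threshold:
            alarms.append(t)
    return np.array(stats), alarms

rng = np.random.default_rng(3)
res = rng.normal(0, 1, 300)
res[200:] += 1.5                             # injected mean shift (a "fault")
_, alarms = ewma_glrt(res)
print("first alarm at sample:", alarms[0] if alarms else None)
```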
A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language.
Golosio, Bruno; Cangelosi, Angelo; Gamotina, Olesya; Masala, Giovanni Luca
2015-01-01
Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring them together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the roles of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences, based on the literature on early language assessment, at the level of about a 4-year-old child, and produced 521 output sentences, expressing a broad range of language processing functionalities.
Development of a 13 kW Hall Thruster Propulsion System Performance Model for AEPS
NASA Technical Reports Server (NTRS)
Stanley, Steven; Allen, May; Goodfellow, Keith; Chew, Gilbert; Rapetti, Ryan; Tofil, Todd; Herman, Dan; Jackson, Jerry; Myers, Roger
2017-01-01
The Advanced Electric Propulsion System (AEPS) program will develop a flight 13kW Hall thruster propulsion system based on NASA's HERMeS thruster. The AEPS system includes the Hall Thruster, the Power Processing Unit (PPU) and the Xenon Flow Controller (XFC). These three primary components must operate together to ensure that the system generates the required combinations of thrust and specific impulse at the required system efficiencies for the desired system lifetime. At the highest level, the AEPS system will be integrated into the spacecraft and will receive power, propellant, and commands from the spacecraft. Power and propellant flow rates will be determined by the throttle set points commanded by the spacecraft. Within the system, the major control loop is between the mass flow rate and thruster current, with time-dependencies required to handle all expected transients, and additional, much slower interactions between the thruster and cathode temperatures, flow controller and PPU. The internal system interactions generally occur on shorter timescales than the spacecraft interactions, though certain failure modes may require rapid responses from the spacecraft. The AEPS system performance model is designed to account for all these interactions in a way that allows evaluation of the sensitivity of the system to expected changes over the planned mission as well as to assess the impacts of normal component and assembly variability during the production phase of the program. This effort describes the plan for the system performance model development, correlation to NASA test data, and how the model will be used to evaluate the critical internal and external interactions. The results will ensure the component requirements do not unnecessarily drive the system cost or overly constrain the development program. Finally, the model will be available to quickly troubleshoot any future unforeseen development challenges.
Water resources planning based on complex system dynamics: A case study of Tianjin city
NASA Astrophysics Data System (ADS)
Zhang, X. H.; Zhang, H. W.; Chen, B.; Chen, G. Q.; Zhao, X. H.
2008-12-01
A complex system dynamics (SD) model focusing on water resources, termed TianjinSD, is developed for the integrated and scientific management of the water resources of Tianjin. The model contains the information feedback that governs interactions in the system and is capable of synthesizing component-level knowledge into a system-level behavior simulation, thus providing reasonable predictive results for policy-making on water resources allocation and management. For the city of Tianjin, interactions among 96 components over 12 years are explored and four planning alternatives are considered; one is based on the conventional mode, assuming that the existing pattern of human activities will prevail, while the others are alternative planning designs based on interaction between local authorities and planning researchers. The optimal mode is then identified by comparing the simulation results across scenarios to evaluate the different decisions and their dynamic consequences.
Unified modeling language and design of a case-based retrieval system in medical imaging.
LeBozec, C.; Jaulent, M. C.; Zapletal, E.; Degoulet, P.
1998-01-01
One goal of artificial intelligence research into case-based reasoning (CBR) systems is to develop approaches for designing useful and practical interactive case-based environments. Explaining each step of the design of the case base and of the retrieval process is critical for the application of case-based systems to the real world. We describe herein our approach to the design of IDEM--Images and Diagnosis from Examples in Medicine--a medical image case-based retrieval system for pathologists. Our approach is based on the expressiveness of an object-oriented modeling language standard: the Unified Modeling Language (UML). We created a set of diagrams in UML notation illustrating the steps of the CBR methodology we used. The key aspect of this approach was selecting the relevant objects of the system according to user requirements and enabling visualization of cases and of the components of the case retrieval process. Further evaluation of the expressiveness of the design document is required, but UML seems to be a promising formalism for improving communication between developers and users. PMID:9929346
NASA Astrophysics Data System (ADS)
Meiler, M.; Andre, D.; Schmid, O.; Hofer, E. P.
Intelligent energy management is a cost-effective key path to efficient automotive drive trains [R. O'Hayre, S.W. Cha, W. Colella, F.B. Prinz. Fuel Cell Fundamentals, John Wiley & Sons, Hoboken, 2006]. To develop operating strategies for fuel cell drive trains, precise and computationally efficient models of all system components, especially the fuel cell stack, are needed. If these models are further to be used in diagnostic or control applications, some major requirements must be fulfilled. First, the model must predict the mean fuel cell voltage very precisely in all possible operating conditions, even during transients. Second, the model output should be as smooth as possible to support efficient optimization strategies for the complete system. Finally, the model must be computationally efficient. For most applications, a voltage error of less than 10 mV and a rate of 1000 calculations per second will be sufficient. In general, empirical models based on system identification offer better accuracy and consume fewer calculation resources than detailed models derived from theoretical considerations [J. Larminie, A. Dicks. Fuel Cell Systems Explained, John Wiley & Sons, West Sussex, 2003]. In this contribution, the dynamic behaviour of the mean cell voltage of a polymer-electrolyte-membrane fuel cell (PEMFC) stack due to variations in the humidity of the cell's reactant gases is investigated. The overall model structure, a so-called general Hammerstein model (or Uryson model), was introduced recently in [M. Meiler, O. Schmid, M. Schudy, E.P. Hofer. Dynamic fuel cell stack model for real-time simulation based on system identification, J. Power Sources 176 (2007) 523-528]. The fuel cell mean voltage is calculated as the sum of a stationary and a dynamic voltage component. The stationary component of the cell voltage is represented by a lookup table and the dynamic component by a nonlinear transfer function placed in parallel. A suitable experimental setup for applying fast variations of gas humidity is introduced and used to investigate a 10-cell PEMFC stack under various operating conditions. Using methods such as stepwise multiple regression, a good mathematical description with a reduced number of free parameters is achieved.
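The Hammerstein-type structure described above (a stationary lookup table plus a parallel dynamic term) can be sketched as follows. This is a minimal illustration, not the authors' identified model; the polarization points, the humidity nonlinearity, the gain k, and the time constant tau are made-up assumptions.

```python
# Hedged sketch: mean cell voltage as a stationary lookup table plus a first-order
# dynamic correction driven by relative-humidity changes. All numbers are illustrative.
import numpy as np

# Stationary component: mean cell voltage vs. current density (illustrative points)
i_grid = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])        # A/cm^2
u_grid = np.array([0.95, 0.82, 0.76, 0.71, 0.66, 0.60])  # V

def stationary_voltage(i):
    return np.interp(i, i_grid, u_grid)                  # lookup table with interpolation

def simulate_voltage(i_series, rh_series, dt=0.1, tau=2.0, k=0.05):
    """Static voltage plus a dynamic term responding to humidity variations."""
    u_dyn = 0.0
    out = []
    for i, rh in zip(i_series, rh_series):
        # first-order lag toward a (nonlinear) humidity-dependent target
        target = k * np.tanh(rh - 0.5)
        u_dyn += dt / tau * (target - u_dyn)
        out.append(stationary_voltage(i) + u_dyn)
    return np.array(out)
```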
Analytical models for coupling reliability in identical two-magnet systems during slow reversals
NASA Astrophysics Data System (ADS)
Kani, Nickvash; Naeemi, Azad
2017-12-01
This paper follows previous works which investigated the strength of dipolar coupling in two-magnet systems. While those works focused on qualitative analyses, this manuscript elucidates reversal through dipolar coupling, culminating in analytical expressions for reversal reliability in identical two-magnet systems. The dipolar field generated by a mono-domain magnetic body can be represented by a tensor containing both longitudinal and perpendicular field components; this field changes orientation and magnitude based on the magnetization of neighboring nanomagnets. While the dipolar field does reduce to its longitudinal component at short time-scales, for slow magnetization reversals the simple longitudinal field representation greatly underestimates the scope of parameters that ensure reliable coupling. For the first time, analytical models that map the geometric and material parameters required for reliable coupling in two-magnet systems are developed. It is shown that in biaxial nanomagnets the x̂ and ŷ components of the dipolar field contribute to the coupling, while all three dimensions contribute to the coupling between a pair of uniaxial magnets. Additionally, the ratio of the longitudinal and perpendicular components of the dipolar field is also very important. If the perpendicular components in the dipolar tensor are too large, the nanomagnet pair may come to rest in an undesirable meta-stable state away from the free axis. The analytical models formulated in this manuscript map the minimum and maximum parameters for reliable coupling. Using these models, it is shown that there is a very small range of material parameters which can facilitate reliable coupling between perpendicular-magnetic-anisotropy nanomagnets; hence, in-plane nanomagnets are more suitable for coupled systems.
The KATE shell: An implementation of model-based control, monitor and diagnosis
NASA Technical Reports Server (NTRS)
Cornell, Matthew
1987-01-01
The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities, and limited simulation support. These limitations motivated the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to do real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams, and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although for systems that require high-speed reaction times or are not well understood, knowledge-based control and monitor systems may not be appropriate.
Hydrograph separation for karst watersheds using a two-domain rainfall-discharge model
Long, Andrew J.
2009-01-01
Highly parameterized, physically based models may be no more effective at simulating the relations between rainfall and outflow from karst watersheds than are simpler models. Here an antecedent rainfall and convolution model was used to separate a karst watershed hydrograph into two outflow components: one originating from focused recharge in conduits and one originating from slow flow in a porous annex system. In convolution, parameters of a complex system are lumped together in the impulse-response function (IRF), which describes the response of the system to an impulse of effective precipitation. Two parametric functions in superposition approximate the two-domain IRF. The outflow hydrograph can be separated into flow components by forward modeling with isolated IRF components, which provides an objective criterion for separation. As an example, the model was applied to a karst watershed in the Madison aquifer, South Dakota, USA. Simulation results indicate that this watershed is characterized by a flashy response to storms, with a peak response time of 1 day, but that 89% of the flow results from the slow-flow domain, with a peak response time of more than 1 year. This long response time may be the result of perched areas that store water above the main water table. Simulation results indicated that some aspects of the system are stationary but that nonlinearities also exist.
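The two-domain convolution idea can be sketched as follows: effective precipitation is convolved with two parametric impulse-response functions (IRFs) in superposition, and the isolated components give the separated hydrograph. A minimal sketch assuming gamma-shaped IRFs; the shape, scale, and quick-flow fraction values are illustrative, not the calibrated Madison-aquifer parameters.

```python
# Hedged sketch: two-domain hydrograph separation by convolution with parametric IRFs.
# IRF forms (gamma pdfs) and all parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import gamma

def irf(t, shape, scale):
    return gamma.pdf(t, a=shape, scale=scale)            # unit-area response function

def simulate_outflow(precip, dt=1.0, quick=(1.5, 1.0), slow=(2.0, 200.0), f_quick=0.11):
    """Total outflow and its quick/slow components (time unit: days)."""
    t = np.arange(len(precip)) * dt
    h_quick = f_quick * irf(t, *quick)                   # conduit (focused recharge) response
    h_slow = (1.0 - f_quick) * irf(t, *slow)             # porous annex (slow flow) response
    q_quick = np.convolve(precip, h_quick)[:len(precip)] * dt
    q_slow = np.convolve(precip, h_slow)[:len(precip)] * dt
    return q_quick + q_slow, q_quick, q_slow
```

Forward modeling with each isolated IRF component, as in the last two lines, is what provides the objective criterion for separating the hydrograph.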
Reliability and performance evaluation of systems containing embedded rule-based expert systems
NASA Technical Reports Server (NTRS)
Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.
1989-01-01
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
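A minimal sketch of the first-stage idea, assuming a three-state continuous-time Markov chain in which a hypothetical expert-system performance parameter (the probability of correct diagnosis, p_correct) modulates the transition rates. The states and rates are illustrative, not taken from the paper.

```python
# Hedged sketch: a small Markov reliability model whose transition rates depend on an
# expert-system performance parameter. States: 0 nominal, 1 degraded, 2 failed.
import numpy as np
from scipy.linalg import expm

def reliability(t, lam=1e-3, mu=1e-2, p_correct=0.95):
    """P(system operational at time t); lam = fault rate, mu = recovery rate."""
    # Correct diagnosis routes a fault to the recoverable 'degraded' state.
    Q = np.array([
        [-lam,  lam * p_correct, lam * (1 - p_correct)],
        [ mu,  -mu - lam,        lam                  ],
        [ 0.0,  0.0,             0.0                  ],  # failed is absorbing
    ])
    p0 = np.array([1.0, 0.0, 0.0])
    p_t = p0 @ expm(Q * t)
    return p_t[0] + p_t[1]
```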
Dynamic and Contextual Information in HMM Modeling for Handwritten Word Recognition.
Bianne-Bernard, Anne-Laure; Menasri, Farès; Al-Hajj Mohamad, Rami; Mokbel, Chafic; Kermorvant, Christopher; Likforman-Sulem, Laurence
2011-10-01
This study aims at building an efficient word recognition system resulting from the combination of three handwriting recognizers. The main component of this combined system is an HMM-based recognizer which considers dynamic and contextual information for better modeling of writing units. For modeling the contextual units, a state-tying process based on decision tree clustering is introduced. Decision trees are built according to a set of expert-based questions on how characters are written. Questions are divided into global questions, yielding larger clusters, and precise questions, yielding smaller ones. Such clustering enables us to reduce the total number of models and Gaussian densities by a factor of 10. We then apply this modeling to the recognition of handwritten words. Experiments are conducted on three publicly available databases based on Latin or Arabic languages: Rimes, IAM, and OpenHart. The results obtained show that contextual information embedded with dynamic modeling significantly improves recognition.
A development framework for artificial intelligence based distributed operations support systems
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1990-01-01
Advanced automation is required to reduce costly human operations support requirements for complex space-based and ground control systems. Existing knowledge-based technologies have been used successfully to automate individual operations tasks. Considerably less progress has been made in integrating and coordinating multiple operations applications into unified intelligent support systems. To fill this gap, SOCIAL, a tool set for developing Distributed Artificial Intelligence (DAI) systems, is being constructed. SOCIAL consists of three primary language-based components defining: models of interprocess communication across heterogeneous platforms; models for interprocess coordination, concurrency control, and fault management; and models for accessing heterogeneous information resources. DAI application subsystems, either new or existing, will access these distributed services non-intrusively via high-level message-based protocols. SOCIAL will reduce the complexity of distributed communications, control, and integration, enabling developers to concentrate on the design and functionality of the target DAI system itself.
[Compatible biomass models of natural spruce (Picea asperata)].
Wang, Jin Chi; Deng, Hua Feng; Huang, Guo Sheng; Wang, Xue Jun; Zhang, Lu
2017-10-01
Using the nonlinear measurement error method, compatible tree volume and aboveground biomass equations were established based on the volume and biomass data of 150 sample trees of natural spruce (Picea asperata). Two approaches, controlling directly under the total aboveground biomass and controlling jointly from level to level, were used to design the compatible system for the total aboveground biomass and the biomass of four components (stem, bark, branch, and foliage); the total aboveground biomass could be estimated either independently or simultaneously within the system. The results showed that the R² values of the one-variable and bivariate compatible tree volume and aboveground biomass equations were all above 0.85, with a maximum of 0.99. The prediction performance of the volume equations improved significantly when tree height was included as a predictor, while the improvement was not significant for biomass estimation. For the compatible biomass systems, the one-variable model based on controlling jointly from level to level was better than the model based on controlling directly under the total aboveground biomass, but the bivariate models of the two methods were similar. Comparing the fit of the one-variable and bivariate compatible biomass models showed that adding explanatory variables could significantly improve the fit for branch and foliage biomass but had little effect on the other components. In addition, there was almost no difference between the two estimation methods.
Robotics On-Board Trainer (ROBoT)
NASA Technical Reports Server (NTRS)
Johnson, Genevieve; Alexander, Greg
2013-01-01
ROBoT is an on-orbit version of the ground-based Dynamics Skills Trainer (DST) that astronauts use for training on a frequent basis. This software consists of two primary software groups. The first series of components is responsible for displaying the graphical scenes. The remaining components are responsible for simulating the Mobile Servicing System (MSS), the Japanese Experiment Module Remote Manipulator System (JEMRMS), and the H-II Transfer Vehicle (HTV) Free Flyer Robotics Operations. The MSS simulation software includes: Robotic Workstation (RWS) simulation, a simulation of the Space Station Remote Manipulator System (SSRMS), a simulation of the ISS Command and Control System (CCS), and a portion of the Portable Computer System (PCS) software necessary for MSS operations. These components all run under the CentOS 4.5 Linux operating system. The JEMRMS simulation software includes real-time hardware-in-the-loop (HIL) dynamics, manipulator multi-body dynamics, and a moving-object contact model with Trick's discrete-time scheduling. The JEMRMS DST will be used as a functional proficiency and skills trainer for flight crews. The HTV Free Flyer Robotics Operations simulation software adds a functional simulation of HTV vehicle controllers, sensors, and data to the MSS simulation software. These components are intended to support HTV ISS visiting vehicle analysis and training. The scene generation software uses DOUG (Dynamic On-orbit Ubiquitous Graphics) to render the graphical scenes. DOUG runs on a laptop running the CentOS 4.5 Linux operating system. DOUG is an OpenGL-based 3D computer graphics rendering package. It uses pre-built three-dimensional models of on-orbit ISS and Space Shuttle systems elements and provides real-time views of various station and shuttle configurations.
Mark-Up-Based Writing Error Analysis Model in an On-Line Classroom.
ERIC Educational Resources Information Center
Feng, Cheng; Yano, Yoneo; Ogata, Hiroaki
2000-01-01
Describes a new component called the "Writing Error Analysis Model" (WEAM) in the CoCoA system for teaching writing composition in Japanese as a foreign language. The WEAM can be used for analyzing learners' morphological errors and selecting appropriate compositions for learners' revising exercises. (Author/VWL)
School Site Strategic Planning To Improve District Performance.
ERIC Educational Resources Information Center
Lytle, James H.
This paper describes the evolution of a school-based planning model that accommodates independent approaches to School District of Philadelphia goals. The description centers on key strategic planning decisions made during a 6-year period and three components of the planning model: the organizational monitoring and feedback system; organizational…
OTLA: A New Model for Online Teaching, Learning and Assessment in Higher Education
ERIC Educational Resources Information Center
Ghilay, Yaron; Ghilay, Ruth
2013-01-01
The study examined a new asynchronous model for online teaching, learning and assessment, called OTLA. It is designed for higher-education institutions and is based on LMS (Learning Management System) as well as other relevant IT tools. The new model includes six digital basic components: text, hypertext, text reading, lectures (voice/video),…
Mathematical Model of the Jet Engine Fuel System
NASA Astrophysics Data System (ADS)
Klimko, Marek
2015-05-01
The paper discusses the design of a simplified mathematical model of the jet (turbo-compressor) engine fuel system. The solution is based on the regulation law, where the control parameter is the fuel mass flow rate and the regulated parameter is the rotational speed. A differential equation of the jet engine, as well as differential equations of the other fuel system components (fuel pump, throttle valve, pressure regulator), is described subject to simplifications determined in advance.
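A minimal sketch of such a simplified model, assuming a first-order rotor-speed equation driven by fuel mass flow and a proportional speed governor standing in for the regulation law; the inertia, torque coefficients, and gain below are illustrative assumptions, not the paper's values.

```python
# Hedged sketch: normalized rotor speed n(t) under fuel-flow control m_f = k_p*(n_set - n).
# All coefficients are illustrative; the real model includes pump/valve/regulator dynamics.
import numpy as np

def simulate_engine(n_set=0.9, t_end=20.0, dt=0.01, J=1.0, k_f=2.0, k_d=1.5, k_p=5.0):
    n, t, hist = 0.5, 0.0, []
    while t < t_end:
        m_f = np.clip(k_p * (n_set - n), 0.0, 1.0)   # regulator: bounded fuel mass flow
        dn = (k_f * m_f - k_d * n**2) / J            # turbine torque minus load torque
        n += dn * dt
        hist.append((t, n, m_f))
        t += dt
    return np.array(hist)                            # columns: time, speed, fuel flow
```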
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
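The component-selection step can be sketched as below: a k-component Weibull mixture is fitted by direct likelihood maximization and the candidate values of k are compared with AIC and BIC. A minimal sketch; the parameterization, initialization, and optimizer settings are assumptions, not the authors' estimation procedure.

```python
# Hedged sketch: Weibull mixture fit by direct negative log-likelihood minimization,
# with AIC/BIC for choosing the number of components. Settings are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def neg_log_lik(theta, x, k):
    # theta packs k * (weight_logit, log_shape, log_scale); softmax-normalized weights
    th = theta.reshape(k, 3)
    w = np.exp(th[:, 0]); w /= w.sum()
    pdf = sum(w[i] * weibull_min.pdf(x, c=np.exp(th[i, 1]), scale=np.exp(th[i, 2]))
              for i in range(k))
    return -np.sum(np.log(pdf + 1e-300))

def fit_mixture(x, k, seed=0):
    rng = np.random.default_rng(seed)
    theta0 = np.column_stack([
        np.zeros(k),                                        # equal initial weights
        np.log(2.0) + 0.1 * rng.normal(size=k),             # initial shapes near 2
        np.log(np.mean(x)) + 0.3 * rng.normal(size=k),      # scales near the data mean
    ]).ravel()
    res = minimize(neg_log_lik, theta0, args=(x, k), method="Nelder-Mead",
                   options={"maxiter": 20000, "fatol": 1e-6})
    n_par = 3 * k - 1                                       # weights sum to one
    aic = 2 * n_par + 2 * res.fun
    bic = n_par * np.log(len(x)) + 2 * res.fun
    return res, aic, bic

# Example: choose the number of components by BIC
# best_k = min(range(1, 4), key=lambda k: fit_mixture(data, k)[2])
```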
Remaining lifetime modeling using State-of-Health estimation
NASA Astrophysics Data System (ADS)
Beganovic, Nejra; Söffker, Dirk
2017-08-01
Technical systems and their components undergo gradual degradation over time. Continuous degradation is reflected in decreased system reliability and unavoidably leads to system failure. Therefore, continuous evaluation of the State-of-Health (SoH) is necessary to provide at least the predefined lifetime of the system specified by the manufacturer or, better, to extend it. However, a precondition for lifetime extension is accurate estimation of the SoH as well as estimation and prediction of the Remaining Useful Lifetime (RUL). For this purpose, lifetime models describing the relation between system/component degradation and consumed lifetime have to be established. In this contribution, the modeling and selection of suitable lifetime models from a database based on current SoH conditions are discussed. The main contribution of this paper is the development of new modeling strategies capable of describing complex relations between measurable system variables, related system degradation, and RUL. Two approaches with their respective advantages and disadvantages are introduced and compared. Both approaches are capable of modeling stochastic aging processes of a system by simultaneously adapting RUL models to the current SoH. The first approach requires a priori knowledge about the aging processes in the system and accurate estimation of the SoH. The SoH estimation is here conditioned on tracking the actual accumulated damage in the system, so that particular model parameters are defined according to a priori assumptions about the system's aging. Prediction accuracy in this case is highly dependent on accurate SoH estimation but involves a high number of degrees of freedom. The second approach does not require a priori knowledge about the system's aging, as the model parameters are defined by a multi-objective optimization procedure. Its prediction accuracy does not depend strongly on the estimated SoH, and the model has fewer degrees of freedom. Both approaches rely on previously developed lifetime models, each corresponding to a predefined SoH. In the first approach, model selection is aided by a state-machine-based algorithm; in the second, model selection is conditioned on tracking exceedances of predefined thresholds. The approach is applied to data generated from tribological systems. By calculating the Root Squared Error (RSE), Mean Squared Error (MSE), and Absolute Error (ABE), the accuracy of the proposed models/approaches is discussed along with their related advantages and disadvantages. Verification of the approach is done using cross-fold validation, exchanging training and test data. It can be stated that the newly introduced data-driven parametric models can be easily established, providing detailed information about the remaining useful/consumed lifetime, valid for systems with constant load but stochastically occurring damage.
Conceptualizing the dynamics of workplace stress: a systems-based study of nursing aides.
Jetha, Arif; Kernan, Laura; Kurowski, Alicia
2017-01-05
Workplace stress is a complex phenomenon that may often be dynamic and evolving over time. Traditional linear modeling does not allow representation of recursive feedback loops among the implicated factors. The objective of this study was to develop a multidimensional system dynamics model (SDM) of workplace stress among nursing aides and conduct simulations to illustrate how changes in psychosocial perceptions and workplace factors might influence workplace stress over time. Eight key informants with prior experience in a large study of US nursing home workers participated in model building. Participants brainstormed the range of components related to workplace stress. Components were grouped together based on common themes and translated into feedback loops. The SDM was parameterized through key informant insight on the shape and magnitude of the relationships between model components. Model construction was also supported by survey data collected as part of the larger study. All data were entered into the software program Vensim. Simulations were conducted to examine how adaptations to model components would influence workplace stress. The SDM included perceptions of organizational conditions (e.g., job demands and job control), workplace social support (i.e., managerial and coworker social support), workplace safety, and demands outside of work (i.e., work-family conflict). Each component was part of a reinforcing feedback loop. Simulations showed that scenarios with increasing job control and decreasing job demands led to a decline in workplace stress. Within the context of the system, the effects of workplace social support, workplace safety, and work-family conflict were relatively minor. SDM methodology offers a unique perspective for researchers and practitioners to view workplace stress as a dynamic process. The portrayal of multiple recursive feedback loops can guide the development of policies and programs within complex organizational contexts, with attention both to interactions among causes and to the avoidance of adverse unintended consequences. While additional research is needed to further test the modeling approach, the findings might underscore the need to direct workplace interventions toward changing organizational conditions for nursing aides.
Interpretation of BM Orionis. [eclipsing binary model
NASA Technical Reports Server (NTRS)
Huang, S.-S.
1975-01-01
The entire light curve of the BM Ori system both inside and outside primary and secondary eclipses has been examined on the basis of two models for the disk around the secondary component: one with the luminous energy of the disk coming entirely from the secondary, and another with the luminous energy coming at least partly from the primary. It has been found that if the disk is highly opaque, as is suggested by the fitting of the light curve, there exist in the first model discrepancies between what has been derived from the luminosity consideration for the secondary component and what has been derived from the radius consideration. Hence the second model is accepted. Based on this model the nature of both component stars has been examined from a consideration of the luminosity and the dimensions of the disk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Droppo, J.G.; Buck, J.W.
1996-03-01
The Multimedia Environmental Pollutant Assessment System (MEPAS) is an integrated software implementation of physics-based fate and transport models for health and environmental risk assessments of both radioactive and hazardous pollutants. This atmospheric component report is one of a series of formulation reports that document the MEPAS mathematical models. MEPAS is a multimedia model; pollutant transport is modeled within, through, and between multiple media (air, soil, groundwater, and surface water). The estimated concentrations in the various media are used to compute exposures and impacts to the environment, to maximum individuals, and to populations. The MEPAS atmospheric component for the air medium documented in this report includes models for emission from a source to the air, initial plume rise and dispersion, airborne pollutant transport and dispersion, and deposition to soils and crops. The material in this report is documentation for MEPAS Versions 3.0 and 3.1 and the MEPAS version used in the Remedial Action Assessment System (RAAS) Version 1.0.
Software Considerations for Subscale Flight Testing of Experimental Control Laws
NASA Technical Reports Server (NTRS)
Murch, Austin M.; Cox, David E.; Cunningham, Kevin
2009-01-01
The NASA AirSTAR system has been designed to address the challenges associated with safe and efficient subscale flight testing of research control laws in adverse flight conditions. In this paper, software elements of this system are described, with an emphasis on components which allow for rapid prototyping and deployment of aircraft control laws. Through model-based design and automatic coding a common code-base is used for desktop analysis, piloted simulation and real-time flight control. The flight control system provides the ability to rapidly integrate and test multiple research control laws and to emulate component or sensor failures. Integrated integrity monitoring systems provide aircraft structural load protection, isolate the system from control algorithm failures, and monitor the health of telemetry streams. Finally, issues associated with software configuration management and code modularity are briefly discussed.
Regression to fuzziness method for estimation of remaining useful life in power plant components
NASA Astrophysics Data System (ADS)
Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.
2014-10-01
Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time that a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise as applied to the system. It initially identifies critical degradation parameters and their associated value ranges. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then used synergistically with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested by estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data are not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify its effectiveness, the methodology was benchmarked against a data-based simple linear regression model used for predictions, which was shown to perform equally well or worse than the presented methodology. Furthermore, the comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.
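A minimal sketch of the general idea, assuming triangular fuzzy sets over a normalized degradation parameter and a least-squares linear trend extrapolated to a failure threshold; the membership breakpoints and the failure level are illustrative assumptions rather than the paper's expert-derived sets.

```python
# Hedged sketch: linear-regression extrapolation of a degradation trend to a failure
# level, with a triangular fuzzy set grading the current state. Values are illustrative.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0, 1.0)

def estimate_rul(t, y, fail_level=1.0):
    """Return (RUL estimate, fuzzy 'high degradation' membership of the latest sample)."""
    slope, intercept = np.polyfit(t, y, 1)              # least-squares linear trend
    severity = tri(y[-1], 0.5, 0.8, 1.0)                # expert-style fuzzy set (assumed)
    if slope <= 0:
        return np.inf, severity                         # no degradation trend observed
    t_fail = (fail_level - intercept) / slope           # extrapolated failure time
    return max(t_fail - t[-1], 0.0), severity
```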
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2018-03-26
In this paper we present a framework for the reduction and linking of physiologically based pharmacokinetic (PBPK) models with models of systems biology to describe the effects of drug administration across multiple scales. To address the issue of model complexity, we propose the reduction of each type of model separately prior to being linked. We highlight the use of balanced truncation in reducing the linear components of PBPK models, whilst proper lumping is shown to be efficient in reducing typically nonlinear systems biology type models. The overall methodology is demonstrated via two example systems; a model of bacterial chemotactic signalling in Escherichia coli and a model of extracellular regulatory kinase activation mediated via the extracellular growth factor and nerve growth factor receptor pathways. Each system is tested under the simulated administration of three hypothetical compounds; a strong base, a weak base, and an acid, mirroring the parameterisation of pindolol, midazolam, and thiopental, respectively. Our method can produce up to an 80% decrease in simulation time, allowing substantial speed-up for computationally intensive applications including parameter fitting or agent based modelling. The approach provides a straightforward means to construct simplified Quantitative Systems Pharmacology models that still provide significant insight into the mechanisms of drug action. Such a framework can potentially bridge pre-clinical and clinical modelling - providing an intermediate level of model granularity between classical, empirical approaches and mechanistic systems describing the molecular scale.
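The balanced-truncation step for the linear part of a PBPK model can be sketched with the standard square-root algorithm, assuming a stable, controllable, and observable system so that the Gramians are positive definite; the interface below is illustrative, not the authors' code.

```python
# Hedged sketch: balanced truncation of a stable LTI system (A, B, C) to order r via the
# square-root algorithm. Assumes the Gramians are positive definite (Cholesky succeeds).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    Wc = solve_continuous_lyapunov(A, -B @ B.T)          # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)        # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                            # s holds the Hankel singular values
    S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:r]))
    T = Lc @ Vt[:r].T @ S_inv_sqrt                       # reduction transformation
    Ti = S_inv_sqrt @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s                  # (Ar, Br, Cr, Hankel values)
```

Inspecting the returned Hankel singular values indicates how many states can be discarded with little loss, which is the basis for the speed-ups reported above.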
The deconvolution of complex spectra by artificial immune system
NASA Astrophysics Data System (ADS)
Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.
2017-11-01
An application of the artificial immune system method for the decomposition of complex spectra is presented. The results of the decomposition of a model contour consisting of three Gaussian components are demonstrated. The artificial immune system is an optimization method inspired by the behaviour of the biological immune system and belongs to the modern heuristic methods of global search optimization.
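A minimal clonal-selection-style sketch of the idea: a population of candidate parameter vectors (amplitude, centre, and width for each of three Gaussians) is cloned, mutated, and reselected against the squared error to the model contour. The population size, mutation scale, and iteration count are illustrative assumptions, not the authors' algorithm settings.

```python
# Hedged sketch: decomposing a contour y(x) into three Gaussians with a simple
# clonal-selection search. All hyperparameters are illustrative assumptions.
import numpy as np

def gaussians(x, theta):
    a, mu, sig = theta.reshape(3, 3).T                  # three (amplitude, centre, width)
    return np.sum(a[:, None] * np.exp(-(x - mu[:, None])**2 / (2 * sig[:, None]**2)), axis=0)

def fit_immune(x, y, n_pop=40, n_clones=5, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.tile([0.0, x.min(), 0.05 * np.ptp(x)], 3)   # parameter bounds (assumed)
    hi = np.tile([y.max(), x.max(), 0.5 * np.ptp(x)], 3)
    pop = rng.uniform(lo, hi, size=(n_pop, 9))
    for _ in range(n_iter):
        err = np.array([np.sum((y - gaussians(x, p))**2) for p in pop])
        pop = pop[np.argsort(err)]                      # best antibodies first
        clones = np.repeat(pop[:n_clones], n_clones, axis=0)
        scale = 0.05 * (hi - lo) * (1 + np.arange(len(clones)))[:, None] / len(clones)
        clones = np.clip(clones + rng.normal(size=clones.shape) * scale, lo, hi)
        pop[-len(clones):] = clones                     # replace the worst antibodies
    err = np.array([np.sum((y - gaussians(x, p))**2) for p in pop])
    return pop[np.argmin(err)]                          # best 9-parameter decomposition
```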
Nedrelow, David S; Bankwala, Danesh; Hyypio, Jeffrey D; Lai, Victor K; Barocas, Victor H
2018-05-01
The mechanical behavior of collagen-fibrin (col-fib) co-gels is both scientifically interesting and clinically relevant. Collagen-fibrin networks are a staple of tissue engineering research, but the mechanical consequences of changes in co-gel composition have remained difficult to predict or even explain. We previously observed fundamental differences in failure behavior between collagen-rich and fibrin-rich co-gels, suggesting an essential change in how the two components interact as the co-gel's composition changes. In this work, we explored the hypothesis that the co-gel behavior is due to a lack of percolation by the dilute component. We generated a series of computational models based on interpenetrating fiber networks. In these models, the major network component percolated the model space but the minor component did not, instead occupying a small island embedded within the larger network. Each component was assigned properties based on a fit of single-component gel data. Island size was varied to match the relative concentrations of the two components. The model predicted that networks rich in collagen, the stiffer component, would roughly match pure-collagen gel behavior with little additional stress due to the fibrin, as seen experimentally. For fibrin-rich gels, however, the model predicted a smooth increase in the overall network strength with added collagen, as seen experimentally but not consistent with an additive parallel model. We thus conclude that incomplete percolation by the low-concentration component of a co-gel is a major determinant of its macroscopic properties, especially if the low-concentration component is the stiffer component. Models for the behavior of fibrous networks have useful applications in many different fields, including polymer science, textiles, and tissue engineering. In addition to being important structural components in soft tissues and blood clots, these protein networks can serve as scaffolds for bioartificial tissues. Thus, their mechanical behavior, especially in co-gels, is both interesting from a materials science standpoint and significant with regard to tissue engineering.
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.
Jordanous, Anna; Keller, Bill
2016-01-01
Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.
Integration of the Remote Agent for the NASA Deep Space One Autonomy Experiment
NASA Technical Reports Server (NTRS)
Dorais, Gregory A.; Bernard, Douglas E.; Gamble, Edward B., Jr.; Kanefsky, Bob; Kurien, James; Muscettola, Nicola; Nayak, P. Pandurang; Rajan, Kanna; Lau, Sonie (Technical Monitor)
1998-01-01
This paper describes the integration of the Remote Agent (RA), a spacecraft autonomy system which is scheduled to control the Deep Space 1 spacecraft during a flight experiment in 1999. The RA is a reusable, model-based autonomy system that is quite different from software typically used to control an aerospace system. We describe the integration challenges we faced, how we addressed them, and the lessons learned. We focus on those aspects of integrating the RA that were either easier or more difficult than integrating a more traditional large software application because the RA is a model-based autonomous system. A number of characteristics of the RA made the integration process easier. One example is the model-based nature of the RA. Since the RA is model-based, most of its behavior is not hard-coded into procedural program code. Instead, engineers specify high-level models of the spacecraft's components from which the Remote Agent automatically derives correct system-wide behavior on the fly. This high-level, modular, and declarative software description allowed some interfaces between RA components and between the RA and the flight software to be automatically generated and tested for completeness against the Remote Agent's models. In addition, the Remote Agent's model-based diagnosis system automatically diagnoses when the RA models are not consistent with the behavior of the spacecraft. In flight, this feature is used to diagnose failures in the spacecraft hardware. During integration, it proved valuable in finding problems in the spacecraft simulator or flight software. In addition, when modifications are made to the spacecraft hardware or flight software, the RA models are easily changed because they only capture a description of the spacecraft; one does not have to maintain procedural code that implements the correct behavior for every expected situation. On the other hand, several features of the RA made it more difficult to integrate than typical flight software. For example, the definition of correct behavior is more difficult to specify for a system that is expected to reason about and flexibly react to its environment than for a traditional flight software system. Consequently, whenever a change is made to the RA it is more time consuming to determine if the resulting behavior is correct. We conclude the paper with a discussion of future work on the Remote Agent as well as recommendations to ease integration of similar autonomy projects.
An investigation of modelling and design for software service applications.
Anjum, Maria; Budgen, David
2017-01-01
Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the 'design model'. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model.
Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinson, Zachary; Verduijn, Erik; Wood, Obert R.
2016-04-01
Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.
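A sketch of the expansion step for one of the bases mentioned above: a sampled pupil-amplitude map is fitted in a 2-D Legendre basis by linear least squares. The polynomial degree and the normalized pupil coordinates are assumptions; the actual extraction used the authors' iterative model-based fitting, not this direct fit.

```python
# Hedged sketch: least-squares expansion of a pupil-amplitude map in a 2-D Legendre
# basis. The degree and coordinate normalization are illustrative assumptions.
import numpy as np
from numpy.polynomial import legendre

def fit_legendre_pupil(x, y, amplitude, deg=4):
    """Fit amplitude(x, y) sampled on normalized pupil coordinates in [-1, 1]."""
    V = legendre.legvander2d(x.ravel(), y.ravel(), [deg, deg])   # design matrix
    coeffs, *_ = np.linalg.lstsq(V, amplitude.ravel(), rcond=None)
    return coeffs.reshape(deg + 1, deg + 1)

def eval_legendre_pupil(x, y, coeffs):
    return legendre.legval2d(x, y, coeffs)                       # reconstructed map
```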
Using the DPSIR Framework to Develop a Conceptual Model: Technical Support Document
Modern problems (e.g., pollution, urban sprawl, environmental equity) are complex and often transcend spatial and temporal scales. Systems thinking is an approach to problem solving that is based on the belief that the component parts of a system are best understood in the contex...
NASA Astrophysics Data System (ADS)
Lengyel, F.; Yang, P.; Rosenzweig, B.; Vorosmarty, C. J.
2012-12-01
The Northeast Regional Earth System Model (NE-RESM, NSF Award #1049181) integrates weather research and forecasting models, terrestrial and aquatic ecosystem models, a water balance/transport model, and mesoscale and energy-systems input-output economic models developed by an interdisciplinary research team from academia and government with expertise in physics, biogeochemistry, engineering, energy, economics, and policy. NE-RESM is intended to forecast the implications of planning decisions for the region's environment, ecosystem services, energy systems, and economy through the 21st century. Integration of model components and the development of cyberinfrastructure for interacting with the system are facilitated with the integrated Rule Oriented Data System (iRODS), a distributed data grid that provides archival storage with metadata facilities and a rule-based workflow engine for automating and auditing scientific workflows.
Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems
NASA Astrophysics Data System (ADS)
Kreuder, John J.
Thermoelectrics are solid-state devices that can convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, there is a need for industries to quickly and cost-effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, focusing only on module design rather than an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, the Thermoelectric Power System Simulator (TEPSS), is developed for system-level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB®. A modular, shell-based architecture is developed to carry out concept generation, system simulation, and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints. During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented by the system engineering model is square and that all equations are independent. In addition, the third-party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and the fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton cycle. A basic package of components and interconnections is verified and provided as well.
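The steady-state solution step can be sketched as a damped Newton iteration on the square residual system r(x) = 0 with a finite-difference Jacobian. This is a generic sketch standing in for the MATLAB implementation described above; the residual function, damping rule, and tolerances are assumptions.

```python
# Hedged sketch: damped Newton iteration for a square system of residual equations
# r(x) = 0 defining a steady operating point. Tolerances and damping are illustrative.
import numpy as np

def newton_steady_state(residual, x0, tol=1e-8, max_iter=50, fd_eps=1e-6):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        # finite-difference Jacobian (assumes len(r) == len(x), i.e. a square system)
        J = np.empty((len(r), len(x)))
        for j in range(len(x)):
            dx = np.zeros_like(x); dx[j] = fd_eps
            J[:, j] = (residual(x + dx) - r) / fd_eps
        step = np.linalg.solve(J, -r)
        # simple damping to keep physical variables (pressures, temperatures) bounded
        x = x + np.clip(step, -0.5 * np.abs(x) - 1e-3, 0.5 * np.abs(x) + 1e-3)
    return x
```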
Development and implementation of an Integrated Water Resources Management System (IWRMS)
NASA Astrophysics Data System (ADS)
Flügel, W.-A.; Busch, C.
2011-04-01
One of the innovative objectives in the EC project BRAHMATWINN was the development of a stakeholder oriented Integrated Water Resources Management System (IWRMS). The toolset integrates the findings of the project and presents it in a user friendly way for decision support in sustainable integrated water resources management (IWRM) in river basins. IWRMS is a framework, which integrates different types of basin information and which supports the development of IWRM options for climate change mitigation. It is based on the River Basin Information System (RBIS) data models and delivers a graphical user interface for stakeholders. A special interface was developed for the integration of the enhanced DANUBIA model input and the NetSyMod model with its Mulino decision support system (mulino mDss) component. The web based IWRMS contains and combines different types of data and methods to provide river basin data and information for decision support. IWRMS is based on a three tier software framework which uses (i) html/javascript at the client tier, (ii) PHP programming language to realize the application tier, and (iii) a postgresql/postgis database tier to manage and storage all data, except the DANUBIA modelling raw data, which are file based and registered in the database tier. All three tiers can reside on one or different computers and are adapted to the local hardware infrastructure. IWRMS as well as RBIS are based on Open Source Software (OSS) components and flexible and time saving access to that database is guaranteed by web-based interfaces for data visualization and retrieval. The IWRMS is accessible via the BRAHMATWINN homepage: http://www.brahmatwinn.uni-jena.de and a user manual for the RBIS is available for download as well.
Schindlbeck, Christopher; Pape, Christian; Reithmeier, Eduard
2018-04-16
Alignment of optical components is crucial for the assembly of optical systems to ensure their full functionality. In this paper we present a novel predictor-corrector framework for the sequential assembly of serial optical systems. Therein, we use a hybrid optical simulation model that comprises virtual and identified component positions. The hybrid model is constantly adapted throughout the assembly process with the help of nonlinear identification techniques and wavefront measurements. This enables prediction of the future wavefront at the detector plane and therefore allows for taking corrective measures accordingly during the assembly process if a user-defined tolerance on the wavefront error is violated. We present a novel notation for the so-called hybrid model and outline the work flow of the presented predictor-corrector framework. A beam expander is assembled as demonstrator for experimental verification of the framework. The optical setup consists of a laser, two bi-convex spherical lenses each mounted to a five degree-of-freedom stage to misalign and correct components, and a Shack-Hartmann sensor for wavefront measurements.
Research on criticality analysis method of CNC machine tools components under fault rate correlation
NASA Astrophysics Data System (ADS)
Gui-xiang, Shen; Xian-zhuo, Zhao; Zhang, Ying-zhi; Chen-yu, Han
2018-02-01
In order to determine the key components of CNC machine tools under fault rate correlation, a system component criticality analysis method is proposed. Based on fault mechanism analysis, the component fault relations are determined, and an adjacency matrix is introduced to describe them. The fault structure relations are then organized hierarchically using the interpretive structural model (ISM). Assuming that the propagation of faults obeys a Markov process, the fault association matrix is described and transformed, and the PageRank algorithm is used to determine the relative influence values; combining these with the component fault rates under time correlation yields a comprehensive fault rate. Based on the fault mode frequency and fault influence, the criticality of the components under fault rate correlation is determined and the key components are identified, providing a sound basis for formulating reliability assurance measures. Finally, taking machining centers as an example, the effectiveness of the method is verified.
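A minimal sketch of the influence computation: a PageRank-style iteration on a fault-propagation adjacency matrix yields relative influence values, which are then combined with component fault rates into a criticality score. The adjacency matrix, damping factor, and fault rates are illustrative assumptions, not the paper's data.

```python
# Hedged sketch: PageRank-style relative influence on a fault-propagation graph,
# combined with component fault rates. All inputs are illustrative assumptions.
import numpy as np

def pagerank_influence(adj, damping=0.85, tol=1e-10, max_iter=200):
    """adj[i, j] = 1 if a fault in component i propagates to component j."""
    A = np.asarray(adj, dtype=float)
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / A.shape[0])  # row-stochastic
    r = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(max_iter):
        r_new = (1 - damping) / A.shape[0] + damping * r @ P
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

def criticality(adj, fault_rates):
    """Combine relative influence with component fault rates into a normalized score."""
    score = pagerank_influence(adj) * np.asarray(fault_rates)
    return score / score.sum()
```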
Development and Validation of a Slurry Model for Chemical Hydrogen Storage in Fuel Cell Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, Kriston P.; Pires, Richard P.; Simmons, Kevin L.
2014-07-25
The US Department of Energy's (DOE) Hydrogen Storage Engineering Center of Excellence (HSECoE) is developing models of hydrogen storage systems for fuel cell-based light-duty vehicle applications for a variety of promising materials. These transient models simulate the performance of the storage system for comparison to the DOE's Technical Targets and a set of four drive cycles. The purpose of this research is to describe the models developed for slurry-based chemical hydrogen storage materials. Storage systems for both a representative exothermic material, ammonia borane, and an endothermic material, alane, were developed and modeled in Simulink®. Once complete, the reactor and radiator components of the model were validated with experimental data. The model was then run using a highway cycle, an aggressive cycle, a cold-start cycle, and a hot drive cycle, and the system design was adjusted to meet these drive cycles. A sensitivity analysis was then performed to identify the range of material properties over which the DOE targets and drive cycles could be met. Materials with a heat of reaction greater than 11 kJ/mol H2 generated and a slurry hydrogen capacity greater than 11.4% will meet the on-board efficiency and gravimetric capacity targets, respectively.
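As a rough illustration of the transient bookkeeping such a model performs, and emphatically not the HSECoE Simulink implementation, the Python sketch below integrates an assumed drive-cycle hydrogen demand, converts it to slurry consumption using the 11.4 wt% capacity figure quoted above, and tracks the reaction heat a radiator would have to reject for an assumed exothermic heat of reaction. Every number except the capacity is a placeholder.

    # Toy slurry-storage bookkeeping; demand profile and heat of reaction are assumed.
    import numpy as np

    dt = 1.0                                          # time step, s
    t = np.arange(0.0, 1200.0, dt)                    # 20 min of an assumed drive cycle
    h2_demand = 0.6e-3 * (1 + 0.5 * np.sin(2 * np.pi * t / 300))   # kg H2 / s

    wt_frac_h2 = 0.114                                # slurry hydrogen capacity (11.4 wt%)
    dH = -22e3                                        # J per mol H2 (exothermic, assumed)
    M_H2 = 2.016e-3                                   # kg per mol H2

    h2_total = float(np.sum(h2_demand) * dt)          # kg H2 delivered over the cycle
    slurry_used = h2_total / wt_frac_h2               # kg slurry consumed
    heat_rate = -dH * h2_demand / M_H2                # W the radiator must reject

    print(f"H2 delivered: {h2_total:.2f} kg, slurry consumed: {slurry_used:.1f} kg")
    print(f"peak heat rejection: {heat_rate.max() / 1e3:.1f} kW")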
The Ensemble Space Weather Modeling System (eSWMS): Status, Capabilities and Challenges
NASA Astrophysics Data System (ADS)
Fry, C. D.; Eccles, J. V.; Reich, J. P.
2010-12-01
Marking a milestone in space weather forecasting, the Space Weather Modeling System (SWMS) successfully completed validation testing in advance of operational testing at the Air Force Weather Agency's primary space weather production center. This is the first coupling of stand-alone, physics-based space weather models currently in operation at AFWA in support of the warfighter. Significant development effort went into ensuring that the component models were portable and scalable while maintaining consistent results across diverse high-performance computing platforms. Coupling was accomplished under the Earth System Modeling Framework (ESMF). The coupled space weather models are the Hakamada-Akasofu-Fry version 2 (HAFv2) solar wind model and GAIM1, the ionospheric forecast component of the Global Assimilation of Ionospheric Measurements (GAIM) model. The SWMS was developed by team members from AFWA, Exploration Physics International, Inc. (EXPI), and Space Environment Corporation (SEC). The successful development of the SWMS provides new capabilities beyond enabling extended lead-time, data-driven ionospheric forecasts: these include ingesting diverse data sets at higher resolution, incorporating denser computational grids at finer time steps, and performing probability-based ensemble forecasts. The work of the SWMS development team now focuses on implementing the ensemble-based probability forecast capability by feeding multiple scenarios of 5-day solar wind forecasts to the GAIM1 model, based on variations of the input fields to the HAFv2 model. The ensemble SWMS (eSWMS) will provide the most-likely space weather scenario with uncertainty estimates for important forecast fields. The eSWMS will allow DoD mission planners to consider the effects of space weather on their systems with more advance warning than is currently possible. The payoff is enhanced, tailored support to the warfighter with improved capabilities such as point-to-point HF propagation forecasts, single-frequency GPS error corrections, and high-cadence, high-resolution Space Situational Awareness (SSA) products. We present the current status of the eSWMS, its capabilities and limitations, and its path of transition to operational use.
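A minimal sketch of the ensemble strategy, with mocked physics: the Python code below perturbs an assumed solar-wind driver to produce multiple 5-day scenarios, runs a trivial stand-in for the ionospheric forecast on each member, and reports the ensemble mean with its spread as the uncertainty estimate. The perturbation model and the mock forecast are placeholders for HAFv2 and GAIM1, not their actual interfaces.

    # Mock ensemble forecast; the driver and response models are placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    lead_hours = np.arange(0, 5 * 24)                 # 5-day forecast horizon, hourly

    def solar_wind_scenario():
        # Perturbed solar-wind speed driver (km/s); shape and noise are assumed.
        base = 400 + 100 * np.exp(-((lead_hours - 48) / 12.0) ** 2)
        return base * (1 + rng.normal(0, 0.1, lead_hours.size))

    def mock_ionosphere_forecast(speed):
        # Stand-in for the data-driven ionospheric model: a foF2-like response (MHz).
        return 6.0 + 0.01 * (speed - 400.0)

    members = np.array([mock_ionosphere_forecast(solar_wind_scenario())
                        for _ in range(50)])
    most_likely = members.mean(axis=0)                # most-likely scenario
    spread = members.std(axis=0)                      # uncertainty estimate
    print(f"hour 48 forecast: {most_likely[48]:.2f} +/- {spread[48]:.2f} MHz "
          f"(50-member ensemble)")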
Water Plan 2030: A Dynamic Education Model for Teaching Water Management Issues
NASA Astrophysics Data System (ADS)
Rupprecht, C.; Washburne, J.; Lansey, K.; Williams, A.
2006-12-01
Dynamic educational tools that help teachers and students recognize the impacts of water management decisions in a realistic context are not readily available. Water policy issues are often complex and make it difficult for students to draw meaningful connections between system components. To fill this need, we have developed a systems-modeling-based educational decision support system (DSS) with supplementary materials. This model, called Water Plan 2030, represents a generic semi-arid watershed; it allows users to examine water management alternatives by changing input values for various water uses and basin conditions and to immediately receive graphical output for comparing decisions. The main goal of our DSS model is to foster students' ability to make knowledgeable decisions about water resources issues. There are two reasons we developed this model for traditional classroom settings. First, the DSS model provides teachers with a mechanism for educating students about interrelated hydrologic concepts and complex systems, and it facilitates discussion of water resources issues. Second, Water Plan 2030 encourages student discovery of cause-and-effect relationships in a dynamic, hands-on environment and develops students' ability to recognize the implications of water management alternatives. The DSS model has been used in an undergraduate, non-major science class for 5 course hours in each of the past 4 semesters. Accompanying the PC-based model are supplementary materials that improve the effectiveness of implementation by emphasizing important concepts and guiding learners through the model components. These materials include in-class tutorials, introductory questions, role-playing activities, and homework extensions, all of which have been revised after each user session based on student and instructor feedback. Most recently, we have developed individual lessons that teach specific model functions and concepts; these modules give teachers the flexibility to adapt the model to numerous teaching goals. Evaluation results indicate that students improved their understanding of fundamental concepts and system interactions, with the greatest improvement on questions related to water use by sector and sustainability issues. Model modifications have also improved student feedback on the model's effectiveness and user-friendliness. Positive results from this project have created demand for a web-based version, which will be online in late 2006.
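For readers curious what such a classroom DSS computes under the hood, the toy Python water-balance loop below compares two management alternatives by tracking basin storage under user-chosen sector demands. Every inflow, demand, and storage value is invented for illustration; none comes from Water Plan 2030.

    # Toy annual water balance; all quantities are invented (thousand acre-feet).
    def simulate(years, inflow, demands, storage0):
        storage = storage0
        trajectory = []
        for _ in range(years):
            total_use = sum(demands.values())
            storage = max(0.0, storage + inflow - total_use)   # simple mass balance
            trajectory.append(storage)
        return trajectory

    baseline = {"municipal": 60, "agriculture": 250, "industrial": 30}
    conservation = {"municipal": 50, "agriculture": 200, "industrial": 25}

    print("storage after 25 yr, baseline:    ",
          simulate(25, inflow=280, demands=baseline, storage0=1000)[-1])
    print("storage after 25 yr, conservation:",
          simulate(25, inflow=280, demands=conservation, storage0=1000)[-1])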