Towards a Theory-Based Design Framework for an Effective E-Learning Computer Programming Course
ERIC Educational Resources Information Center
McGowan, Ian S.
2016-01-01
Built on Dabbagh (2005), this paper presents a four component theory-based design framework for an e-learning session in introductory computer programming. The framework, driven by a body of exemplars component, emphasizes the transformative interaction between the knowledge building community (KBC) pedagogical model, a mixed instructional…
Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G
2012-01-01
Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance consisting partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component instructional design (4C/ID) model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.
Developing a semantic web model for medical differential diagnosis recommendation.
Mohammed, Osama; Benlamri, Rachid
2014-10-01
In this paper we describe a novel model for differential diagnosis designed to make recommendations by utilizing semantic web technologies. The model is a response to a number of requirements, ranging from incorporating essential clinical diagnostic semantics to the integration of data mining for the process of identifying candidate diseases that best explain a set of clinical features. We introduce two major components, which we find essential to the construction of an integral differential diagnosis recommendation model: the evidence-based recommender component and the proximity-based recommender component. Both approaches are driven by disease diagnosis ontologies designed specifically to enable the process of generating diagnostic recommendations. These ontologies are the disease symptom ontology and the patient ontology. The evidence-based diagnosis process develops dynamic rules based on standardized clinical pathways. The proximity-based component employs data mining to provide clinicians with diagnosis predictions, as well as to generate new diagnosis rules from provided training datasets. This article describes the integration between these two components, along with the developed diagnosis ontologies, to form a novel medical differential diagnosis recommendation model. This article also provides test cases from the implementation of the overall model, which show quite promising diagnostic recommendation results.
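The evidence-based idea described in this abstract, ranking candidate diseases by how well their ontology-defined symptoms match a patient's clinical features, can be illustrated with a minimal sketch. This is not the authors' implementation: the scoring rule, the disease names, and the symptom lists below are all illustrative assumptions.

```python
# Toy evidence-based recommender: score each candidate disease by the
# fraction of its known (ontology-defined) symptoms observed in the patient.
def recommend(disease_symptoms, patient_features):
    """Rank diseases by the fraction of their known symptoms observed."""
    scores = {}
    features = set(patient_features)
    for disease, symptoms in disease_symptoms.items():
        matched = features & set(symptoms)
        if matched:
            scores[disease] = len(matched) / len(symptoms)
    # Highest-scoring diseases are the top differential candidates.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical mini-ontology mapping diseases to symptom sets.
ontology = {
    "influenza": ["fever", "cough", "myalgia"],
    "strep_throat": ["fever", "sore_throat"],
}
ranked = recommend(ontology, ["fever", "cough"])
```

A real system would, as the abstract notes, derive such rules dynamically from standardized clinical pathways rather than hard-code them.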
Data driven propulsion system weight prediction model
NASA Astrophysics Data System (ADS)
Gerth, Richard J.
1994-10-01
The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust level, a model is required that can discriminate between engines that produce the same thrust. Above all, the model had to be rooted in data, with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level at which to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.
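The statistical approach described here, regressing engine weight on component performance parameters, can be sketched with an ordinary least-squares fit. The predictors (thrust, chamber pressure) and all numbers below are fabricated for illustration, not data from the paper.

```python
# Minimal sketch of a data-driven weight model: fit engine weight as a
# linear function of performance parameters via ordinary least squares.
import numpy as np

# Hypothetical engine data. Columns: thrust (kN), chamber pressure (bar).
X = np.array([[100.0, 70.0], [200.0, 100.0], [300.0, 120.0], [400.0, 160.0]])
w = np.array([900.0, 1700.0, 2450.0, 3300.0])   # engine weight (kg)

# Add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, w, rcond=None)

def predict_weight(thrust, pressure):
    """Predicted weight (kg) for a 'paper engine' with given parameters."""
    return coef[0] + coef[1] * thrust + coef[2] * pressure
```

The value of such a model is exactly what the abstract describes: it discriminates between designs that produce the same thrust but differ in other component parameters.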
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saha, Sankalita; Goebel, Kai
2011-01-01
Accelerated aging methodologies for electrolytic components have been designed and accelerated aging experiments have been carried out. The methodology is based on imposing electrical and/or thermal overstresses via electrical power cycling in order to mimic real-world operating behavior. Data are collected in situ and offline in order to periodically characterize the devices' electrical performance as they age. The data generated through these experiments are meant to provide capability for the validation of prognostic algorithms (both model-based and data-driven). Furthermore, the data allow validation of physics-based and empirically based degradation models for this type of capacitor. A first set of models and algorithms has been designed and tested on the data.
A Model-Driven Development Method for Management Information Systems
NASA Astrophysics Data System (ADS)
Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki
Traditionally, a Management Information System (MIS) has been developed without using formal methods. With such informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as a lack of reliability in system design specifications. In order to overcome these problems, a model theory approach was proposed. The approach is based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes in business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. Experiments have shown that the method reduces development effort by more than 30%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parsons, Taylor; Guo, Yi; Veers, Paul
Software models that use design-level input variables and physics-based engineering analysis for estimating the mass and geometrical properties of components in large-scale machinery can be very useful for analyzing design trade-offs in complex systems. This study uses DriveSE, an OpenMDAO-based drivetrain model that uses stress and deflection criteria to size drivetrain components within a geared, upwind wind turbine. Because a full lifetime fatigue load spectrum can only be defined using computationally-expensive simulations in programs such as FAST, a parameterized fatigue loads spectrum that depends on wind conditions, rotor diameter, and turbine design life has been implemented. The parameterized fatigue spectrum is only used in this paper to demonstrate the proposed fatigue analysis approach. This paper details a three-part investigation of the parameterized approach and a comparison of the DriveSE model with and without fatigue analysis on the main shaft system. It compares loads from three turbines of varying size and determines if and when fatigue governs drivetrain sizing compared to extreme load-driven design. It also investigates the model's sensitivity to shaft material parameters. The intent of this paper is to demonstrate how fatigue considerations in addition to extreme loads can be brought into a system engineering optimization.
Models and Frameworks: A Synergistic Association for Developing Component-Based Applications
Alonso, Diego; Sánchez-Ledesma, Francisco; Sánchez, Pedro; Pastor, Juan A.; Álvarez, Bárbara
2014-01-01
The use of frameworks and components has been shown to be effective in improving software productivity and quality. However, the results in terms of reuse and standardization show a dearth of portability either of designs or of component-based implementations. This paper, which is based on the model driven software development paradigm, presents an approach that separates the description of component-based applications from their possible implementations for different platforms. This separation is supported by automatic integration of the code obtained from the input models into frameworks implemented using object-oriented technology. Thus, the approach combines the benefits of modeling applications from a higher level of abstraction than objects, with the higher levels of code reuse provided by frameworks. In order to illustrate the benefits of the proposed approach, two representative case studies that use both an existing framework and an ad hoc framework, are described. Finally, our approach is compared with other alternatives in terms of the cost of software development. PMID:25147858
Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition
Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.
2015-01-01
In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601
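The mapping idea described in this abstract can be illustrated with a much-simplified sketch: assign each model component to the brain region whose signal best tracks that component's predicted activity. The actual method uses model-based fMRI with hemodynamic convolution and GLM fitting; this toy version uses plain Pearson correlation, and all module profiles and region signals below are fabricated.

```python
# Toy data-driven model-brain mapping: pick, for each model component, the
# region whose signal correlates best with the component's activity profile.
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def map_modules(module_profiles, region_signals):
    """For each module, choose the region with the highest correlation."""
    return {
        module: max(region_signals,
                    key=lambda r: pearson(profile, region_signals[r]))
        for module, profile in module_profiles.items()
    }

# Hypothetical per-scan activity: model module demand vs. region signal.
modules = {"retrieval": [0, 1, 1, 0, 1], "motor": [1, 0, 0, 1, 0]}
regions = {"prefrontal": [0.1, 0.9, 1.1, 0.2, 0.8],
           "motor_cortex": [1.0, 0.2, 0.1, 0.9, 0.3]}
mapping = map_modules(modules, regions)
```

The point of making the mapping data-driven, as the abstract argues, is to avoid researcher-based biases in choosing which region constrains which model component.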
Real-time computing platform for spiking neurons (RT-spike).
Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael
2006-07-01
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware, and its scalability and performance are evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
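The synaptic dynamics this abstract describes, a gradual injection of charge governed by a synaptic time constant, can be sketched in a few lines. This is an illustrative toy, not the authors' hardware SRM implementation; the parameter values (tau_s, g_max) and the exponential kernel are assumptions.

```python
# Toy input-driven conductance: each spike contributes an exponentially
# decaying conductance with time constant tau_s, so charge is injected
# gradually rather than instantaneously.
import math

def synaptic_current(spike_times, t, tau_s=5.0, g_max=1.0):
    """Summed exponential conductance trace at time t (ms)."""
    return sum(g_max * math.exp(-(t - ts) / tau_s)
               for ts in spike_times if ts <= t)

# The trace peaks at the spike time and then decays gradually; it is this
# continuous decay that makes purely event-driven software schemes awkward
# and motivates the time-based hardware architecture described above.
trace = [synaptic_current([10.0], t) for t in range(0, 40)]
```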
Framework for a clinical information system.
Van De Velde, R; Lansiers, R; Antonissen, G
2002-01-01
The design and implementation of a Clinical Information System architecture is presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and on the reuse of both data and business logic. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.
A Model-Driven Co-Design Framework for Fusing Control and Scheduling Viewpoints.
Sundharam, Sakthivel Manikandan; Navet, Nicolas; Altmeyer, Sebastian; Havet, Lionel
2018-02-20
Model-Driven Engineering (MDE) is widely applied in the industry to develop new software functions and integrate them into the existing run-time environment of a Cyber-Physical System (CPS). The design of a software component involves designers from various viewpoints such as control theory, software engineering, safety, etc. In practice, while a designer from one discipline focuses on the core aspects of his field (for instance, a control engineer concentrates on designing a stable controller), he neglects or considers less importantly the other engineering aspects (for instance, real-time software engineering or energy efficiency). This may cause some of the functional and non-functional requirements not to be met satisfactorily. In this work, we present a co-design framework based on timing tolerance contract to address such design gaps between control and real-time software engineering. The framework consists of three steps: controller design, verified by jitter margin analysis along with co-simulation, software design verified by a novel schedulability analysis, and the run-time verification by monitoring the execution of the models on target. This framework builds on CPAL (Cyber-Physical Action Language), an MDE design environment based on model-interpretation, which enforces a timing-realistic behavior in simulation through timing and scheduling annotations. The application of our framework is exemplified in the design of an automotive cruise control system.
NASA Technical Reports Server (NTRS)
Dehghani, Navid; Tankenson, Michael
2006-01-01
This paper details an architectural description of the Mission Data Processing and Control System (MPCS), an event-driven, multi-mission ground data processing system providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is developed based on a set of small reusable components, implemented in Java, each designed with a specific function and well-defined interfaces. An industry-standard messaging bus is used to transfer information among system components. Components generate standard messages, which are used to capture system information, as well as triggers to support the event-driven architecture of the system. Event-driven systems are highly desirable for processing high-rate telemetry (science and engineering) data and for supporting automation of many mission operations processes.
NASA Technical Reports Server (NTRS)
Dehghani, Navid; Tankenson, Michael
2006-01-01
This viewgraph presentation reviews the architectural description of the Mission Data Processing and Control System (MPCS). MPCS is an event-driven, multi-mission ground data processing system providing uplink, downlink, and data management capabilities, which will support the Mars Science Laboratory (MSL) project as its first target mission. MPCS is designed with these factors in mind: (1) it enables a plug-and-play architecture; (2) it has strong inheritance from GDS components that have been developed for other flight projects (MER, MRO, DAWN, MSAP) and are currently being used in operations and ATLO; and (3) MPCS components are Java-based, platform independent, and designed to consume and produce XML-formatted data.
Simulation Studies of the Dielectric Grating as an Accelerating and Focusing Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soong, Ken; Peralta, E.A.; Byer, R.L.
A grating-based design is a promising candidate for a laser-driven dielectric accelerator. Through simulations, we show the merits of a readily fabricated grating structure as an accelerating component. Additionally, we show that with a small design perturbation, the accelerating component can be converted into a focusing structure. The understanding of these two components is critical in the successful development of any complete accelerator. The concept of accelerating electrons with the tremendous electric fields found in lasers has been proposed for decades. However, until recently the realization of such an accelerator was not technologically feasible. Recent advances in the semiconductor industry, as well as advances in laser technology, have now made laser-driven dielectric accelerators imminent. The grating-based accelerator is one proposed design for a dielectric laser-driven accelerator. This design, which was introduced by Plettner, consists of a pair of opposing transparent binary gratings, illustrated in Fig. 1. The teeth of the gratings serve as a phase mask, ensuring a phase synchronicity between the electromagnetic field and the moving particles. The current grating accelerator design has the drive laser incident perpendicular to the substrate, which poses a laser-structure alignment complication. The next iteration of grating structure fabrication seeks to monolithically create an array of grating structures by etching the grating's vacuum channel into a fused silica wafer. With this method it is possible to have the drive laser confined to the plane of the wafer, thus ensuring alignment of the laser-and-structure, the two grating halves, and subsequent accelerator components. There has been previous work using 2-dimensional finite difference time domain (2D-FDTD) calculations to evaluate the performance of the grating accelerator structure.
However, this work approximates the grating as an infinite structure and does not accurately model a realizable structure. In this paper, we will present a 3-dimensional frequency-domain simulation of both the infinite and the finite grating accelerator structure. Additionally, we will present a new scheme for a focusing structure based on a perturbation of the accelerating structure. We will present simulations of this proposed focusing structure and quantify the quality of the focusing fields.
Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B
2013-01-01
The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point of care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method, which allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing the knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
Schematic driven silicon photonics design
NASA Astrophysics Data System (ADS)
Chrostowski, Lukas; Lu, Zeqin; Flückiger, Jonas; Pond, James; Klein, Jackson; Wang, Xu; Li, Sarah; Tai, Wei; Hsu, En Yao; Kim, Chan; Ferguson, John; Cone, Chris
2016-03-01
Electronic circuit designers commonly start their design process with a schematic, namely an abstract representation of the physical circuit. In integrated photonics, on the other hand, it is very common for the design to begin at the physical component level. In order to build large integrated photonic systems, it is crucial to design using a schematic-driven approach. This includes simulations based on schematics, schematic-driven layout, layout-versus-schematic verification, and post-layout simulations. This paper describes such a design framework implemented using Mentor Graphics and Lumerical Solutions design tools. In addition, we describe challenges in silicon photonics related to manufacturing, how these can be taken into account in simulations, and how they impact circuit performance.
A Model-Driven Approach to e-Course Management
ERIC Educational Resources Information Center
Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana
2018-01-01
This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…
Development of dry coal feeders
NASA Technical Reports Server (NTRS)
Bonin, J. H.; Cantey, D. E.; Daniel, A. D., Jr.; Meyer, J. W.
1977-01-01
The design and fabrication of equipment to feed coal into pressurized environments were investigated. Concepts were selected based on feeder system performance and economic projections. These systems include two approaches using rotating components, a gas- or steam-driven ejector, and a modified standpipe feeder concept. Results of development testing of critical components, design procedures, and performance prediction techniques are reviewed.
General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations
NASA Technical Reports Server (NTRS)
Iverson, David L.; Spirkovska, Lilly; Schwabacher, Mark
2010-01-01
Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, or model-based or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.
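The data-driven monitoring idea this abstract describes, characterizing nominal behavior from operations data and flagging departures from it, can be shown with a deliberately simple sketch. This is not IMS itself (IMS clusters nominal data vectors); here nominal behavior is reduced to per-sensor min/max envelopes, and the sensor values are invented.

```python
# Toy data-driven health monitor: learn per-sensor bounds from nominal
# training runs, then flag live vectors that fall outside the envelope.
def learn_envelope(nominal_runs):
    """Per-sensor (min, max) bounds over all nominal training vectors."""
    n = len(nominal_runs[0])
    lo = [min(v[i] for v in nominal_runs) for i in range(n)]
    hi = [max(v[i] for v in nominal_runs) for i in range(n)]
    return lo, hi

def is_anomalous(envelope, vector, margin=0.0):
    """True if any sensor reading leaves its learned nominal range."""
    lo, hi = envelope
    return any(x < l - margin or x > h + margin
               for x, l, h in zip(vector, lo, hi))

# Hypothetical nominal runs: two sensors (e.g. flow rate, temperature).
env = learn_envelope([[1.0, 50.0], [1.2, 55.0], [0.9, 52.0]])
```

The appeal, as the abstract notes, is that no model or rule base has to be written by hand; the characterization of "normal" comes entirely from archived operations data.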
Concept for a Differential Lock and Traction Control Model in Automobiles
NASA Astrophysics Data System (ADS)
Shukul, A. K.; Hansra, S. K.
2014-01-01
The automobile is a complex integration of electronic and mechanical components. One of the major components is the differential, which is limited by its shortcomings. This paper proposes a concept for a cost-effective differential lock and traction control system, for vehicles ranging from passenger cars to sport utility vehicles, employing a parallel braking mechanism that comes into action based on the relative speeds of the wheels driven by the differential. The paper highlights the use of a minimum number of components, unlike existing systems. The system was designed numerically to provide traction control and a differential lock for the world's cheapest car. The paper presents all the system parameters and component costs, making it a cost-effective system.
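The core trigger logic of such a system, braking the wheel that spins faster than its partner, can be sketched as follows. This is an illustrative guess at the concept, not the authors' design; the slip threshold and the normalization by average wheel speed are assumptions.

```python
# Toy differential-lock trigger: when the relative speed difference between
# the two driven wheels exceeds a slip threshold, brake the faster wheel.
def brake_command(left_rpm, right_rpm, slip_threshold=0.25):
    """Return which wheel to brake ('left'/'right'), or None."""
    avg = (left_rpm + right_rpm) / 2.0
    if avg == 0:
        return None                    # vehicle stationary: no action
    slip = (left_rpm - right_rpm) / avg
    if slip > slip_threshold:
        return "left"                  # left wheel spinning faster: brake it
    if slip < -slip_threshold:
        return "right"
    return None                        # wheels turning together: no action
```

Braking the spinning wheel forces the differential to transfer torque to the wheel with grip, which is the parallel-braking effect the abstract describes.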
Framework for a clinical information system.
Van de Velde, R
2000-01-01
The current status of our work towards the design and implementation of a reference architecture for a Clinical Information System is presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin client' workstations. The main characteristics are the focus on modelling and the reuse of both data and business logic, as there is a shift away from data and functional modelling towards object modelling. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.
Data-based virtual unmodeled dynamics driven multivariable nonlinear adaptive switching control.
Chai, Tianyou; Zhang, Yajun; Wang, Hong; Su, Chun-Yi; Sun, Jing
2011-12-01
For a complex industrial system, its multivariable and nonlinear nature generally makes it very difficult, if not impossible, to obtain an accurate model, especially when the model structure is unknown. This class of complex systems is difficult to control with traditional controller designs built around their operating points. This paper, however, explores the concepts of the controller-driven model and virtual unmodeled dynamics to propose a new design framework. The design consists of two controllers with distinct functions. First, using input and output data, a self-tuning controller is constructed based on a linear controller-driven model. Then the output signals of the controller-driven model are compared with the true outputs of the system to produce so-called virtual unmodeled dynamics. Based on the compensator of the virtual unmodeled dynamics, a second controller based on a nonlinear controller-driven model is proposed. These two controllers are integrated by an adaptive switching control algorithm to take advantage of their complementary features: one offers stabilization and the other provides improved performance. Conditions for the stability and convergence of the closed-loop system are analyzed. Both simulation and experimental tests on a heavily coupled nonlinear twin-tank system are carried out to confirm the effectiveness of the proposed method.
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
An information driven strategy to support multidisciplinary design
NASA Technical Reports Server (NTRS)
Rangan, Ravi M.; Fulton, Robert E.
1990-01-01
The design of complex engineering systems such as aircraft, automobiles, and computers is primarily a cooperative multidisciplinary design process involving interactions between several design agents. The common thread underlying this multidisciplinary design activity is the information exchange between the various groups and disciplines. The integrating component in such environments is the common data and the dependencies that exist between such data. This may be contrasted with classical multidisciplinary analysis problems where there is coupling between distinct design parameters. For example, such coupling may be expressed as mathematically coupled relationships between aerodynamic and structural interactions in aircraft structures, between thermal and structural interactions in nuclear plants, and between control considerations and structural interactions in flexible robots. These relationships provide analytically based frameworks leading to optimization problem formulations. However, in multidisciplinary design problems, information-based interactions become more critical. Often, the relationships between different design parameters are not amenable to analytical characterization. Under such circumstances, information-based interactions provide the best integration paradigm; that is, there is a need to model the data entities and the dependencies between design parameters originating from different design agents. The modeling of such data interactions and dependencies forms the basis for integrating the various design agents.
Gas engine heat pump cycle analysis. Volume 1: Model description and generic analysis
NASA Astrophysics Data System (ADS)
Fischer, R. D.
1986-10-01
This task prepared performance and cost information to assist in evaluating the selection of heating, ventilation, and air-conditioning (HVAC) components, values for component design variables, and system configurations and operating strategy. A steady-state computer model for performance simulation of engine-driven and electrically driven heat pumps was prepared and used effectively for parametric and seasonal performance analyses. Parametric analysis showed the effect of variables associated with the design of recuperators, brine coils, the domestic hot water heat exchanger, compressor size, engine efficiency, and insulation on exhaust and brine piping. Seasonal performance data were prepared for residential and commercial units in six cities, with system configurations closely related to existing or contemplated hardware of the five GRI engine contractors. Similar data were prepared for an advanced variable-speed electric unit for comparison purposes. The effect of domestic hot water production on operating costs was determined. Four fan-operating strategies and two brine-loop configurations were explored.
A Tool for Model-Based Generation of Scenario-driven Electric Power Load Profiles
NASA Technical Reports Server (NTRS)
Rozek, Matthew L.; Donahue, Kenneth M.; Ingham, Michel D.; Kaderka, Justin D.
2015-01-01
Power consumption during all phases of spacecraft flight is of great interest to the aerospace community. As a result, significant analysis effort is exerted to understand the rates of electrical energy generation and consumption under many operational scenarios of the system. Previously, no standard tool existed for creating and maintaining a power equipment list (PEL) of spacecraft components that consume power, and no standard tool existed for generating power load profiles based on this PEL information during mission design phases. This paper presents the Scenario Power Load Analysis Tool (SPLAT) as a model-based systems engineering tool aiming to solve those problems. SPLAT is a plugin for MagicDraw (No Magic, Inc.) that aids in creating and maintaining a PEL, and also generates a power and temporal variable constraint set, in Maple language syntax, based on specified operational scenarios. The constraint set can be solved in Maple to show electric load profiles (i.e. power consumption from loads over time). SPLAT creates these load profiles from three modeled inputs: 1) a list of system components and their respective power modes, 2) a decomposition hierarchy of the system into these components, and 3) the specification of at least one scenario, which consists of temporal constraints on component power modes. In order to demonstrate how this information is represented in a system model, a notional example of a spacecraft planetary flyby is introduced. This example is also used to explain the overall functionality of SPLAT, and how this is used to generate electric power load profiles. Lastly, a cursory review of the usage of SPLAT on the Cold Atom Laboratory project is presented to show how the tool was used in an actual space hardware design application.
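The PEL-plus-scenario idea lends itself to a compact illustration. The sketch below is a simplified stand-alone analogue, not the actual SPLAT implementation (which works inside MagicDraw and emits Maple constraints); the component names, modes, and wattages are invented.

```python
# Illustrative power equipment list (PEL): each component maps its power
# modes to a power draw in watts. All names and numbers are hypothetical.
PEL = {
    "camera": {"off": 0.0, "standby": 2.0, "imaging": 15.0},
    "radio":  {"off": 0.0, "receive": 5.0, "transmit": 40.0},
}

def load_profile(scenario):
    """scenario: list of {component: mode} dicts, one per time step.
    Returns the total power (W) consumed at each step."""
    return [sum(PEL[c][m] for c, m in step.items()) for step in scenario]

# A toy three-step "flyby" scenario in the spirit of the paper's example.
flyby = [
    {"camera": "standby", "radio": "receive"},   # cruise
    {"camera": "imaging", "radio": "receive"},   # closest approach
    {"camera": "off",     "radio": "transmit"},  # downlink
]
print(load_profile(flyby))  # [7.0, 20.0, 40.0]
```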
Network-driven design principles for neuromorphic systems.
Partzsch, Johannes; Schüffny, Rene
2015-01-01
Synaptic connectivity is typically the most resource-demanding part of neuromorphic systems. Commonly, the architecture of these systems is chosen mainly on technical considerations. As a consequence, the potential for optimization arising from the inherent constraints of connectivity models is left unused. In this article, we develop an alternative, network-driven approach to neuromorphic architecture design. We describe methods to analyse the performance of existing neuromorphic architectures in emulating certain connectivity models. Furthermore, we show step by step how to derive a neuromorphic architecture from a given connectivity model. For this, we introduce a generalized description for architectures with a synapse matrix, which takes into account shared use of circuit components for reducing total silicon area. Architectures designed with this approach are fitted to a connectivity model, essentially adapting to its connection density. They guarantee faithful reproduction of the model on chip while requiring less total silicon area. In total, our methods allow designers to implement more area-efficient neuromorphic systems and to verify the usability of the connectivity resources in these systems.
Integrating FMEA in a Model-Driven Methodology
NASA Astrophysics Data System (ADS)
Scippacercola, Fabio; Pietrantuono, Roberto; Russo, Stefano; Esper, Alexandre; Silva, Nuno
2016-08-01
Failure Mode and Effects Analysis (FMEA) is a well-known technique for evaluating the effects of potential failures of components of a system. FMEA demands engineering methods and tools able to support the time-consuming tasks of the analyst. We propose to make FMEA part of the design of a critical system by integrating it into a model-driven methodology. We show how to conduct the analysis of failure modes, propagation and effects from SysML design models by means of custom diagrams, which we name FMEA Diagrams. They offer an additional view of the system, tailored to FMEA goals. The enriched model can then be exploited to automatically generate the FMEA worksheet and to conduct qualitative and quantitative analyses. We present a case study from a real-world project.
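A generated FMEA worksheet typically ranks failure modes by a risk priority number. The sketch below uses the classical RPN = severity x occurrence x detection scheme; the column names and ratings are illustrative, not the paper's exact worksheet format.

```python
# Minimal, hypothetical FMEA worksheet generator: each failure mode
# carries severity (S), occurrence (O) and detection (D) ratings, and
# the risk priority number is RPN = S * O * D.

def fmea_worksheet(modes):
    """Return worksheet rows sorted by descending risk priority number."""
    rows = [dict(m, rpn=m["S"] * m["O"] * m["D"]) for m in modes]
    return sorted(rows, key=lambda r: r["rpn"], reverse=True)

modes = [
    {"component": "pump",  "mode": "seal leak",    "S": 7, "O": 4, "D": 3},
    {"component": "valve", "mode": "stuck closed", "S": 9, "O": 2, "D": 5},
]
worst = fmea_worksheet(modes)[0]
print(worst["mode"], worst["rpn"])  # stuck closed 90
```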
The application of domain-driven design in NMS
NASA Astrophysics Data System (ADS)
Zhang, Jinsong; Chen, Yan; Qin, Shengjun
2011-12-01
In the traditional data-model-driven design approach, the system analysis and design phases are often separated, which prevents requirement information from being expressed explicitly. The approach also tends to lead developers toward process-oriented programming, leaving code between modules or between hierarchy layers disordered, so the requirement of system scalability is hard to meet. This paper proposes a software hierarchy based on a rich domain model according to domain-driven design, named FHRDM, and then adopts the Webwork + Spring + Hibernate (WSH) framework. Domain-driven design aims to construct a domain model that meets both the demands of the field in which the software operates and the needs of software development. In this way, problems in Navigational Maritime System (NMS) development, such as large business volumes, difficult requirement elicitation, high development costs and long development cycles, can be resolved successfully.
Van Belle, Sara B; Marchal, Bruno; Dubourg, Dominique; Kegels, Guy
2010-11-30
This paper presents the development of a study design built on the principles of theory-driven evaluation. The theory-driven evaluation approach was used to evaluate an adolescent sexual and reproductive health intervention in Mali, Burkina Faso and Cameroon to improve continuity of care through the creation of networks of social and health care providers. Based on our experience and the existing literature, we developed a six-step framework for the design of theory-driven evaluations, which we applied in the ex-post evaluation of the networking component of the intervention. The protocol was drafted with the input of the intervention designer. The programme theory, the central element of theory-driven evaluation, was constructed on the basis of semi-structured interviews with designers, implementers and beneficiaries and an analysis of the intervention's logical framework. The six-step framework proved useful as it allowed for a systematic development of the protocol. We describe the challenges at each step. We found that there is little practical guidance in the existing literature, as well as a mix-up of terminology across theory-driven evaluation approaches. There is a need for empirical methodological development in order to refine the tools used in theory-driven evaluation. We conclude that ex-post evaluations of programmes can be based on such an approach if the required information on context and mechanisms is collected during the programme.
Classroom Strategies Coaching Model: Integration of Formative Assessment and Instructional Coaching
ERIC Educational Resources Information Center
Reddy, Linda A.; Dudek, Christopher M.; Lekwa, Adam
2017-01-01
This article describes the theory, key components, and empirical support for the Classroom Strategies Coaching (CSC) Model, a data-driven coaching approach that systematically integrates data from multiple observations to identify teacher practice needs and goals, design practice plans, and evaluate progress towards goals. The primary aim of the…
A systems engineering analysis of three-point and four-point wind turbine drivetrain configurations
Guo, Yi; Parsons, Tyler; Dykes, Katherine; ...
2016-08-24
This study compares the impact of drivetrain configuration on the mass and capital cost of a series of wind turbines ranging from 1.5 MW to 5.0 MW power ratings for both land-based and offshore applications. The analysis is performed with a new physics-based drivetrain analysis and sizing tool, Drive Systems Engineering (DriveSE), which is part of the Wind-Plant Integrated System Design & Engineering Model. DriveSE uses physics-based relationships to size all major drivetrain components according to given rotor loads simulated based on International Electrotechnical Commission design load cases. The model's sensitivity to input loads that contain a high degree of variability was analyzed. Aeroelastic simulations are used to calculate the rotor forces and moments imposed on the drivetrain for each turbine design. DriveSE is then used to size all of the major drivetrain components for each turbine for both three-point and four-point configurations. The simulation results quantify the trade-offs in mass and component costs for the different configurations. On average, a 16.7% decrease in total nacelle mass can be achieved when using a three-point drivetrain configuration, resulting in a 3.5% reduction in turbine capital cost. This analysis is driven by extreme loads and does not consider fatigue. Thus, the effects of configuration choices on reliability and serviceability are not captured. Furthermore, a first order estimate of the sizing, dimensioning and costing of major drivetrain components are made which can be used in larger system studies which consider trade-offs between subsystems such as the rotor, drivetrain and tower.
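Physics-based sizing of this kind works from load to dimension. As a simplified stand-in for DriveSE's actual relations (which are not reproduced here), the sketch below sizes a solid low-speed shaft from rated rotor torque using the standard strength-of-materials shear formula; the torque and allowable-stress numbers are assumptions.

```python
import math

# Size a solid shaft so the shear stress under rated rotor torque stays
# below an allowable value: tau = 16*T / (pi * d^3), solved for d.

def shaft_diameter(torque_nm, tau_allow_pa):
    """Minimum solid-shaft diameter in metres for a given torque (N*m)
    and allowable shear stress (Pa)."""
    return (16.0 * torque_nm / (math.pi * tau_allow_pa)) ** (1.0 / 3.0)

# e.g. a ~1.5 MW turbine at low rotor speed -> roughly 1e6 N*m of torque;
# 40 MPa is a hypothetical allowable stress, giving a diameter near 0.5 m.
d = shaft_diameter(1.0e6, 40.0e6)
```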
MDA-based EHR application security services.
Blobel, Bernd; Pharow, Peter
2004-01-01
Component-oriented, distributed, virtual EHR systems have to meet enhanced security and privacy requirements. In the context of advanced architectural paradigms such as component orientation, model-driven development, and knowledge-based systems, the standardised security services needed have to be specified and implemented in an integrated way following the same paradigm. This concerns the deployment of formal models, meta-languages, reference models such as the ISO RM-ODP, and development as well as implementation tools. The results of the international projects presented here proceed along these lines.
Brahms Mobile Agents: Architecture and Field Tests
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2002-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, rover/All-Terrain Vehicle (ATV), robotic assistant, other personnel in a local habitat, and a remote mission support team (with time delay). Software processes, called agents, implemented in the Brahms language, run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system (e.g., "return here later" and "bring this back to the habitat"). This combination of agents, rover, and model-based spoken dialogue interface constitutes a personal assistant. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a run-time system.
A data driven control method for structure vibration suppression
NASA Astrophysics Data System (ADS)
Xie, Yangmin; Wang, Chao; Shi, Hang; Shi, Junwei
2018-02-01
High radio-frequency space applications have motivated continuous research on vibration suppression of large space structures in both academia and industry. This paper introduces a novel data-driven control method to suppress vibrations of flexible structures and experimentally validates the suppression performance. Unlike model-based control approaches, the data-driven control method designs a controller directly from the input-output test data of the structure, without requiring parametric dynamics, and is hence free of system modeling. It utilizes the discrete frequency response obtained via spectral analysis and formulates a non-convex optimization problem to obtain optimized controller parameters with a predefined controller structure. This approach is then experimentally applied to an end-driven flexible beam-mass structure. The experimental results show that the presented method achieves competitive disturbance rejection compared to a model-based mixed-sensitivity controller under the same design criterion, but with much lower order and less design effort, demonstrating that the proposed data-driven control is an effective approach for vibration suppression of flexible structures.
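The core idea, tuning a controller directly against measured frequency-response samples, can be shown in miniature. This is an illustrative grid search over a single proportional gain, not the paper's optimization formulation; the plant samples, target sensitivity, and candidate gains are all assumptions.

```python
# Data-driven tuning sketch: given measured frequency-response samples
# G[k] of the plant (no parametric model), pick the gain g whose
# closed-loop sensitivity 1/(1 + g*G[k]) best tracks a desired
# reference sensitivity S_ref[k] in a least-squares sense.

def tune_gain(G, S_ref, gains):
    def cost(g):
        return sum(abs(1.0 / (1.0 + g * Gk) - Sk) ** 2
                   for Gk, Sk in zip(G, S_ref))
    return min(gains, key=cost)

# Toy data: two frequency points; the reference was generated with g = 2,
# so the search should recover that gain exactly.
G = [1.0 + 0j, 0.5 + 0j]
S_ref = [1.0 / (1.0 + 2.0 * Gk) for Gk in G]
best_gain = tune_gain(G, S_ref, [0.5, 1.0, 2.0, 4.0])
```

The paper optimizes a richer controller structure over a non-convex cost; the sketch keeps only the model-free, frequency-domain character of the approach.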
Sizing Power Components of an Electrically Driven Tail Cone Thruster and a Range Extender
NASA Technical Reports Server (NTRS)
Jansen, Ralph H.; Bowman, Cheryl; Jankovsky, Amy
2016-01-01
The aeronautics industry has been challenged on many fronts to increase efficiency, reduce emissions, and decrease dependency on carbon-based fuels. This paper provides an overview of the turboelectric and hybrid electric technologies being developed under NASA's Advanced Air Transportation Technology (AATT) Project and discusses how these technologies can impact vehicle design. The discussion includes an overview of key hybrid electric studies and technology investments, the approach to making informed investment decisions based on key performance parameters and mission studies, and the power system architectures for two candidate aircraft. Finally, the power components for a single-aisle turboelectric aircraft with an electrically driven tail cone thruster and for a hybrid-electric nine-passenger aircraft with a range extender are parametrically sized, and the sensitivity of these components to key parameters is presented.
CEREF: A hybrid data-driven model for forecasting annual streamflow from a socio-hydrological system
NASA Astrophysics Data System (ADS)
Zhang, Hongbo; Singh, Vijay P.; Wang, Bin; Yu, Yinghao
2016-09-01
Hydrological forecasting is complicated by flow regime alterations in a coupled socio-hydrologic system, which undergoes increasingly non-stationary, nonlinear and irregular changes that make decision support difficult for future water resources management. Currently, many hybrid data-driven models, based on the decomposition-prediction-reconstruction principle, have been developed to improve predictions of annual streamflow. However, many problems require further investigation, chief among them that the direction of the trend component decomposed from an annual streamflow series is always difficult to ascertain. In this paper, a hybrid data-driven model is proposed to address this issue, combining empirical mode decomposition (EMD), radial basis function neural networks (RBFNN), and an external-forces (EF) variable; it is called the CEREF model. The hybrid model employs EMD for decomposition and RBFNN for intrinsic mode function (IMF) forecasting, and determines future trend-component directions by regression with EF as basin water demand, representing the social component of the socio-hydrologic system. The Wuding River basin was considered for the case study, and two standard statistical measures, root mean squared error (RMSE) and mean absolute error (MAE), were used to evaluate the performance of the CEREF model and compare it with other models: autoregressive (AR), RBFNN and EMD-RBFNN. Results indicated that the CEREF model had RMSE and MAE statistics 42.8% and 7.6% lower, respectively, than the other models, and provided a superior alternative for forecasting annual runoff in the Wuding River basin. Moreover, the CEREF model can enlarge the effective intervals of streamflow forecasting compared to the EMD-RBFNN model by introducing the water demand planned by the government to improve long-term prediction accuracy.
In addition, we considered the high-frequency component, a frequent subject of concern in EMD-based forecasting; results showed that removing the high-frequency component is an effective measure to improve forecasting precision, and it is suggested for use with the CEREF model for better performance. Finally, the study concluded that the CEREF model can forecast non-stationary annual streamflow change, as a co-evolution of hydrologic and social systems, with better accuracy, and that removing the high-frequency component further improves its performance. The CEREF model is beneficial for data-driven hydrologic forecasting in complex socio-hydrologic systems and, as a simple data-driven socio-hydrologic forecasting model, deserves more attention.
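The two evaluation metrics used to compare CEREF against AR, RBFNN and EMD-RBFNN have standard definitions, shown here for concreteness; the sample values are invented.

```python
import math

def rmse(obs, sim):
    """Root mean squared error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def mae(obs, sim):
    """Mean absolute error between observed and simulated series."""
    return sum(abs(o - s) for o, s in zip(obs, sim)) / len(obs)

print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # ~1.1547
```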
Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization
Marai, G. Elisabeta
2018-01-01
Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process and the abstraction stage (and its evaluation) of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements into activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the nested model of visualization design. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help avoid a number of pitfalls that have been identified repeatedly in the visualization design literature. PMID:28866550
Analysis-Driven Design Optimization of a SMA-Based Slat-Cove Filler for Aeroacoustic Noise Reduction
NASA Technical Reports Server (NTRS)
Scholten, William; Hartl, Darren; Turner, Travis
2013-01-01
Airframe noise is a significant component of environmental noise in the vicinity of airports. The noise associated with the leading-edge slat of typical transport aircraft is a prominent source of airframe noise. Previous work suggests that a slat-cove filler (SCF) may be an effective noise treatment; hence, development and optimization of a practical slat-cove-filler structure is a priority. The objective of this work is to optimize the design of a functioning SCF that incorporates superelastic shape memory alloy (SMA) materials as flexures permitting the deformations involved in the configuration change. The goal of the optimization is to minimize the actuation force needed to retract the slat-SCF assembly while satisfying constraints on the maximum SMA stress and on the SCF deflection under static aerodynamic pressure loads, and while also satisfying the condition that the SCF self-deploy during slat extension. A finite element analysis model based on a physical bench-top model was created in Abaqus so that automated iterative analysis of the design could be performed. To achieve an optimized design, several design variables associated with the current SCF configuration are considered, such as the thicknesses of the SMA flexures and the dimensions of various components, SMA and conventional. Design-of-experiments (DOE) studies are performed to investigate the structural response to an aerodynamic pressure load and to slat retraction and deployment. The DOE results then inform the optimization process, which determines a design that minimizes actuator forces while satisfying the required constraints.
Complex modulation using tandem polarization modulators
NASA Astrophysics Data System (ADS)
Hasan, Mehedi; Hall, Trevor
2017-11-01
A novel photonic technique for implementing frequency up-conversion or complex modulation is proposed. The proposed circuit consists of a quarter-wave plate sandwiched between two polarization modulators, driven by in-phase and quadrature-phase signals, respectively. The operation of the circuit is modelled using a transmission matrix method. The theoretical prediction is then validated by simulation using an industry-standard software tool. The intrinsic conversion efficiency of the architecture is improved by 6 dB over a functionally equivalent design based on dual parallel Mach-Zehnder modulators. Non-ideal scenarios such as imperfect alignment of the optical components and power imbalances and phase errors in the electric drive signals are also analysed. Because light travels along a single physical path, the proposed design can be implemented using discrete components with greater control of relative optical path-length differences. The circuit can further be integrated in any material platform that offers electro-optic polarization modulators.
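The transmission matrix method amounts to composing 2x2 Jones matrices for the cascade. The sketch below uses one common textbook convention for the polarization modulator and the quarter-wave plate at 45 degrees; it is not necessarily the convention used in the paper, only an illustration of the matrix bookkeeping.

```python
import cmath
import math

def matmul(A, B):
    """Multiply two 2x2 complex (Jones) matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def pol_mod(phi):
    """Polarization modulator imposing a differential phase phi between
    the two polarization axes (one common convention, up to global phase)."""
    return [[cmath.exp(1j * phi / 2), 0], [0, cmath.exp(-1j * phi / 2)]]

# Quarter-wave plate with fast axis at 45 degrees (again one common
# convention; the paper's exact alignment may differ).
QWP45 = [[1 / math.sqrt(2), 1j / math.sqrt(2)],
         [1j / math.sqrt(2), 1 / math.sqrt(2)]]

def tandem(phi_i, phi_q):
    """Composite transmission matrix of the sandwich: PM(Q) * QWP * PM(I)."""
    return matmul(pol_mod(phi_q), matmul(QWP45, pol_mod(phi_i)))
```

Since every factor is unitary, the composite matrix is unitary too, i.e. the idealized circuit is lossless, consistent with the single-path argument in the abstract.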
Optimal Design of Cable-Driven Manipulators Using Particle Swarm Optimization.
Bryson, Joshua T; Jin, Xin; Agrawal, Sunil K
2016-08-01
The design of cable-driven manipulators is complicated by the unidirectional nature of the cables, which results in extra actuators and limited workspaces. Furthermore, the particular arrangement of the cables and the geometry of the robot pose have a significant effect on the cable tension required to effect a desired joint torque. For a sufficiently complex robot, the identification of a satisfactory cable architecture can be difficult and can result in multiply redundant actuators and performance limitations based on workspace size and cable tensions. This work leverages previous research into the workspace analysis of cable systems combined with stochastic optimization to develop a generalized methodology for designing optimized cable routings for a given robot and desired task. A cable-driven robot leg performing a walking-gait motion is used as a motivating example to illustrate the methodology application. The components of the methodology are described, and the process is applied to the example problem. An optimal cable routing is identified, which provides the necessary controllable workspace to perform the desired task and enables the robot to perform that task with minimal cable tensions. A robot leg is constructed according to this routing and used to validate the theoretical model and to demonstrate the effectiveness of the resulting cable architecture.
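Particle swarm optimization over routing parameters is the engine of the methodology above. The sketch below is a generic one-dimensional PSO, not the paper's formulation; the quadratic objective is a toy stand-in for a cable-tension cost whose minimizer is known.

```python
import random

# Minimal particle swarm optimization over a bounded scalar variable:
# each particle tracks its personal best, the swarm tracks a global
# best, and velocities blend inertia with pulls toward both bests.

def pso(cost, lo, hi, n=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    gbest = min(xs, key=cost)
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # keep in bounds
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=cost)
    return gbest

# Toy tension-like objective with its optimum at x = 2.
best = pso(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
```

In the paper the decision variables describe the cable routing and the cost is evaluated through workspace and tension analysis; the swarm mechanics are the same.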
NASA Astrophysics Data System (ADS)
Shen, Yan-Jun; Shen, Yanjun; Fink, Manfred; Kralisch, Sven; Brenning, Alexander
2018-01-01
Understanding the water balance, especially as it relates to the distribution of runoff components, is crucial for water resource management and for coping with the impacts of climate change. However, hydrological processes are poorly known in mountainous regions due to data scarcity and the complex dynamics of snow and glaciers. This study provides a quantitative comparison of gridded precipitation products in the Tianshan Mountains of Central Asia in order to further understand mountain hydrology and the distribution of runoff components in the glacierized Kaidu Basin. Based on a spatiotemporal comparison with the nearest weather stations, we found that the gridded precipitation products are affected by inconsistent biases and should be evaluated with caution before being used as boundary conditions in hydrological modeling. Although uncertainties remain in this data-scarce basin, the water balance and distribution of runoff components can be plausibly quantified with the distributed hydrological model (J2000) driven by field survey data and bias-corrected gridded data sets (ERA-Interim and APHRODITE). We further examined parameter sensitivity and uncertainty with respect to both simulated streamflow and the different runoff components based on an ensemble of simulations. This study demonstrates the feasibility of integrating gridded products in hydrological modeling, and the methodology can be important for model applications and design in other data-scarce mountainous regions. The model-based simulation quantified the water balance and how water resources are partitioned throughout the year in Tianshan Mountain basins, although the uncertainties present in this study impose important limitations.
Advantages of Brahms for Specifying and Implementing a Multiagent Human-Robotic Exploration System
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Kaskiris, Charis; vanHoof, Ron
2003-01-01
We have developed a model-based, distributed architecture that integrates diverse components in a system designed for lunar and planetary surface operations: an astronaut's space suit, cameras, all-terrain vehicles, a robotic assistant, crew in a local habitat, and the mission support team. Software processes ('agents'), implemented in the Brahms language, run on multiple mobile platforms. These mobile agents interpret and transform available data to help people and robotic systems coordinate their actions, making operations safer and more efficient. The Brahms-based mobile agent architecture (MAA) uses a novel combination of agent types so that the software agents may understand and facilitate communications between people and between system components. A state-of-the-art spoken dialogue interface is integrated with Brahms models, supporting a speech-driven field observation record and rover command system. An important aspect of the methodology involves first simulating the entire system in Brahms, then configuring the agents into a runtime system. Thus, Brahms provides a language, engine, and system builder's toolkit for specifying and implementing multiagent systems.
Using diagnostic experiences in experience-based innovative design
NASA Astrophysics Data System (ADS)
Prabhakar, Sattiraju; Goel, Ashok K.
1992-03-01
Designing a novel class of devices requires innovation. Often, the design knowledge for these devices does not identify or address the constraints required for their performance in the real-world operating environment, so any new design adapted from them tends to be similarly sketchy. To address this problem, we propose a case-based reasoning method called performance-driven innovation (PDI). We model the device as a dynamic process, arrive at a design by adaptation from known designs, generate failures of this design under new constraints, and then use this failure knowledge to generate the design knowledge required for the new constraints. In this paper, we discuss two aspects of PDI: the representation of PDI cases and the translation of failure knowledge into design knowledge for a constraint. Each case in PDI has two components, design knowledge and failure knowledge, both represented using a substance-behavior-function model. Failure knowledge comprises internal device failure behaviors and external environmental behaviors; the environmental behavior for a constraint, interacting with the design behaviors, results in the internal failure behavior. The failure adaptation strategy generates functions from the failure knowledge that can then be addressed using routine design methods. These ideas are illustrated with a coffee-maker example.
A minimum cost tolerance allocation method for rocket engines and robust rocket engine design
NASA Technical Reports Server (NTRS)
Gerth, Richard J.
1993-01-01
Rocket engine design follows three phases: systems design, parameter design, and tolerance design. Systems design and parameter design are most effectively conducted in a concurrent engineering (CE) environment that utilizes methods such as Quality Function Deployment and Taguchi methods. However, tolerance allocation remains an art driven by experience, handbooks, and rules of thumb, so it was desirable to develop an optimization approach to tolerancing. The case study engine was the STME gas generator cycle. The design of the major components had been completed, and the functional relationship between the component tolerances and system performance had been computed using the Generic Power Balance model. The system performance nominals (thrust, MR, and Isp) and tolerances were already specified, as was an initial set of component tolerances. The question was whether there existed an optimal combination of tolerances that would minimize cost without any degradation in system performance.
NASA Astrophysics Data System (ADS)
Rock, B. N.; Hale, S. R.; Graham, K. J.; Hayden, L.; Barber, L.; Perry, C.; Schloss, J.; Sullivan, E.; Yuan, J.; Abebe, E.; Mitchell, L.; Abrams, E.; Gagnon, M.
2008-12-01
Watershed Watch (NSF 0525433) engages early undergraduate students from two-year and four-year colleges in student-driven, full-inquiry-based instruction in the biogeosciences. The program's goals are to test whether inquiry-rich, student-driven projects sufficiently engage undeclared students (or noncommittal STEM majors) to declare a STEM major (or remain with their STEM major). A significant component of the program is an intensive two-week summer course in which undeclared freshmen research various aspects of a local watershed. Students develop their own research questions and study design, collect and analyze data, and produce a poster or an oral presentation. The course objectives, curriculum, and schedule are presented as a model for dissemination to other institutions and programs seeking to develop inquiry-rich courses designed to attract students into biogeoscience disciplines. Self-reported student feedback indicated that the most important factors explaining the high levels of student motivation and research excellence in the course are (1) working with committed, energetic, and enthusiastic faculty mentors; and (2) faculty mentors demonstrating high degrees of teamwork and coordination.
Constraint-Driven Software Design: An Escape from the Waterfall Model.
ERIC Educational Resources Information Center
de Hoog, Robert; And Others
1994-01-01
Presents the principles of a development methodology for software design based on a nonlinear, product-driven approach that integrates quality aspects. Two examples are given to show that the flexibility needed for building high quality systems leads to integrated development environments in which methodology, product, and tools are closely…
Direct methanol fuel cells: A database-driven design procedure
NASA Astrophysics Data System (ADS)
Flipsen, S. F. J.; Spitas, C.
2011-10-01
To test the feasibility of DMFC systems in the preliminary stages of the design process, the design engineer can use heuristic models that identify opportunities for DMFC systems in a specific application. In general, however, these models are too generic and have low accuracy. To improve accuracy, a second-order model is proposed in this paper. It consists of an evolutionary algorithm, written in Mathematica, which selects a component set satisfying the fuel-cell system's performance requirements, places the components in 3D space, and optimizes for volume. The results are presented as a 3D draft proposal together with a feasibility metric. To test the algorithm, the design of a DMFC system for an MP3 player is evaluated. The results show that volume and cost are issues for the feasibility of a fuel-cell power system in an MP3 player. The generated designs and the algorithm are evaluated, and recommendations are given.
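The component-selection step can be illustrated with a toy (1+1) evolutionary search, written here in Python rather than Mathematica and making no claim to match the authors' algorithm or data: a binary chromosome marks which components are included, infeasible sets (insufficient power) are heavily penalized, and volume is minimized.

```python
import random

def evolve_component_set(components, power_req, generations=500, seed=0):
    """(1+1) evolutionary search for the smallest-volume component set
    whose summed power output meets the requirement.

    components : list of (name, power_W, volume_cm3) tuples (illustrative)
    """
    rng = random.Random(seed)
    n = len(components)

    def fitness(bits):
        power = sum(c[1] for c, b in zip(components, bits) if b)
        volume = sum(c[2] for c, b in zip(components, bits) if b)
        penalty = 1e6 if power < power_req else 0.0  # infeasible sets penalized
        return volume + penalty

    parent = [1] * n  # start from the full set: feasible if any set is
    for _ in range(generations):
        # Flip each bit independently with probability 1/n
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        if fitness(child) <= fitness(parent):
            parent = child
    return [c[0] for c, b in zip(components, parent) if b], fitness(parent)
```

The real problem also places components in 3D space; this sketch covers only the set-selection part of the search.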
Use case driven approach to develop simulation model for PCS of APR1400 simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong Wook, Kim; Hong Soo, Kim; Hyeon Tae, Kang
2006-07-01
The full-scope simulator is being developed to evaluate specific design features and to support iterative design and validation in the Man-Machine Interface System (MMIS) design of the Advanced Power Reactor (APR) 1400. The simulator consists of a process model, a control logic model, and the MMI for the APR1400, as well as the Power Control System (PCS). In this paper, a use case driven approach is proposed to develop a simulation model for the PCS. In this approach, a system is considered from the point of view of its users, whose view of the system is based on interactions with it and the resultant responses. In the use case driven approach, we initially consider the system as a black box and look at its interactions with the users. From these interactions, use cases of the system are identified, and the system is then modeled using these use cases as functions; lower levels expand the functionality of each use case. Hence, starting from the topmost-level view of the system, we proceed down to the lowest level (the internal view of the system). The model thus developed is use case driven. This paper introduces the functionality of the PCS simulation model, including a requirement analysis based on use cases and the validation results from development of the PCS model. The PCS simulation model using use cases will first be used during full-scope simulator development for a nuclear power plant and will be supplied to the Shin-Kori 3 and 4 plants. Use-case-based simulation model development can be useful for the design and implementation of simulation models. (authors)
Electromagnetic Properties Analysis on Hybrid-driven System of Electromagnetic Motor
NASA Astrophysics Data System (ADS)
Zhao, Jingbo; Han, Bingyuan; Bei, Shaoyi
2018-01-01
The hybrid-driven system of permanent magnets and electromagnets used in an electromagnetic motor was analyzed. An equivalent magnetic circuit was used to establish mathematical models of the hybrid-driven system, from which expressions for the air-gap flux, air-gap magnetic flux density, and electromagnetic force were derived. Taking the air-gap magnetic flux density and electromagnetic force as the main research objects, the hybrid-driven system was investigated, and its electromagnetic properties under different working current modes were studied in a preliminary way. The results show that the hybrid-driven design can increase the air-gap magnetic flux density and electromagnetic force more effectively while guaranteeing output stability. The effectiveness and feasibility of the hybrid-driven system are verified, which provides a theoretical basis for its design.
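The equivalent-magnetic-circuit reasoning can be made concrete with a one-loop sketch. Assuming iron reluctance is negligible next to the air gap and lumping the permanent magnet into an extra magnetomotive force (all names and numbers here are hypothetical, not the paper's model):

```python
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability (H/m)

def airgap_force(n_turns, current, gap, area, coercive_mmf=0.0):
    """One-loop equivalent-magnetic-circuit estimate for a hybrid
    (permanent magnet + coil) actuator.

    The PM contribution is lumped into coercive_mmf (A-turns);
    iron reluctance is neglected relative to the air gap.
    """
    mmf = n_turns * current + coercive_mmf      # total MMF (A-turns)
    reluctance = gap / (MU0 * area)             # air-gap reluctance (1/H)
    flux = mmf / reluctance                     # air-gap flux (Wb)
    b = flux / area                             # air-gap flux density (T)
    force = b**2 * area / (2 * MU0)             # Maxwell-stress pull (N)
    return b, force
```

The sketch shows the qualitative point of the abstract: adding the permanent-magnet MMF on top of the coil MMF raises both the air-gap flux density and the force.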
Agent-Based Modeling of Cancer Stem Cell Driven Solid Tumor Growth.
Poleszczuk, Jan; Macklin, Paul; Enderling, Heiko
2016-01-01
Computational modeling of tumor growth has become an invaluable tool to simulate complex cell-cell interactions and emerging population-level dynamics. Agent-based models are commonly used to describe the behavior and interaction of individual cells in different environments. Behavioral rules can be informed and calibrated by in vitro assays, and emerging population-level dynamics may be validated with both in vitro and in vivo experiments. Here, we describe the design and implementation of a lattice-based agent-based model of cancer stem cell driven tumor growth.
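A minimal sketch of such a lattice-based agent-based model is given below; the division rules, parameters, and function name are illustrative defaults, not the calibrated model described above. Stem cells divide without limit, progenitors carry a finite division budget, and any division requires a vacant lattice neighbor (contact inhibition):

```python
import random

def simulate_tumor(steps=60, size=41, p_sym=0.1, rho=5, seed=1):
    """Minimal lattice ABM of stem-cell-driven tumor growth (illustrative).

    Grid entries: ('S', None) = stem cell, ('P', k) = progenitor with
    k divisions remaining; absent keys are empty sites. A cell divides
    only into a vacant von Neumann neighbor; stem cells divide
    symmetrically with probability p_sym, otherwise asymmetrically,
    producing a progenitor with division budget rho.
    """
    rng = random.Random(seed)
    grid = {}
    c = size // 2
    grid[(c, c)] = ('S', None)  # single seeding stem cell
    for _ in range(steps):
        for (x, y), (kind, k) in list(grid.items()):
            vacant = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if (x + dx, y + dy) not in grid
                      and 0 <= x + dx < size and 0 <= y + dy < size]
            if not vacant:
                continue  # contact-inhibited: no room to divide
            target = rng.choice(vacant)
            if kind == 'S':
                if rng.random() < p_sym:
                    grid[target] = ('S', None)   # symmetric: new stem cell
                else:
                    grid[target] = ('P', rho)    # asymmetric: new progenitor
            elif k > 0:
                # Progenitor division consumes one division from the lineage
                grid[target] = ('P', k - 1)
                grid[(x, y)] = ('P', k - 1)
    return grid
```

Behavioral rules like `p_sym` and `rho` are exactly the quantities that in vitro assays would calibrate in the workflow the abstract describes.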
Model-Driven Architecture for Agent-Based Systems
NASA Technical Reports Server (NTRS)
Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.
2004-01-01
The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, in some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes the kinds of models to be used, how those models may be prepared, and the relationships among the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. The software artifact generation is based on a metamodel; each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
Lemoine, E; Merceron, D; Sallantin, J; Nguifo, E M
1999-01-01
This paper describes a new approach to problem solving that splits a problem's component parts between software and hardware. Our main idea arises from the combination of two previously published works. The first proposed a conceptual environment for concept modelling in which the machine and the human expert interact. The second reported an algorithm based on a reconfigurable hardware system which outperforms previously published genetic database scanning hardware and algorithms. Here we show how efficient the interaction between the machine and the expert becomes when the concept modelling is based on a reconfigurable hardware system; their cooperation is thus achieved at real-time interaction speed. The designed system has been partially applied to the recognition of primate splice junction sites in genetic sequences.
Thermal Control Subsystem Design for the Avionics of a Space Station Payload
NASA Technical Reports Server (NTRS)
Moran, Matthew E.
1996-01-01
A case study of the thermal control subsystem development for a space-based payload is presented from the concept stage through preliminary design. This payload, the Space Acceleration Measurement System 2 (SAMS-2), will measure the acceleration environment at select locations within the International Space Station. Its thermal control subsystem must maintain component temperatures within an acceptable range over a 10-year life span, while restricting accessible surfaces to touch temperature limits and ensuring fail-safe conditions in the event of loss of cooling. In addition to these primary design objectives, system-level requirements and constraints are imposed on the payload, many of which are driven by multidisciplinary issues. Blending these issues into the overall system design required concurrent design sessions with the project team, iterative conceptual design layouts, thermal analysis and modeling, and hardware testing. Multiple tradeoff studies were also performed to investigate the many options which surfaced during the development cycle.
Life extending control: An interdisciplinary engineering thrust
NASA Technical Reports Server (NTRS)
Lorenzo, Carl F.; Merrill, Walter C.
1991-01-01
The concept of Life Extending Control (LEC) is introduced. Possible extensions to the cyclic damage prediction approach are presented based on the identification of a model from elementary forms. Several candidate elementary forms are presented. These extensions will result in a continuous or differential form of the damage prediction model. Two possible approaches to the LEC based on the existing cyclic damage prediction method, the measured variables LEC and the estimated variables LEC, are defined. Here, damage estimates or measurements would be used directly in the LEC. A simple hydraulic actuator driven position control system example is used to illustrate the main ideas behind LEC. Results from a simple hydraulic actuator example demonstrate that overall system performance (dynamic plus life) can be maximized by accounting for component damage in the control design.
Formalism Challenges of the Cougaar Model Driven Architecture
NASA Technical Reports Server (NTRS)
Bohner, Shawn A.; George, Boby; Gracanin, Denis; Hinchey, Michael G.
2004-01-01
The Cognitive Agent Architecture (Cougaar) is one of the most sophisticated distributed agent architectures developed today. As part of its research and evolution, Cougaar is being studied for application to large, logistics-based applications for the Department of Defense (DoD). Anticipating future complex applications of Cougaar, we are investigating the Model Driven Architecture (MDA) approach to understand how effective it would be for increasing productivity in Cougaar-based development efforts. Recognizing the sophistication of the Cougaar development environment and the limitations of transformation technologies for agents, we have systematically developed an approach that combines component assembly in the large and transformation in the small. This paper describes some of the key elements that went into the Cougaar Model Driven Architecture approach and the characteristics that drove the approach.
NASA Astrophysics Data System (ADS)
Fuchs, Erica R. H.; Bruce, E. J.; Ram, R. J.; Kirchain, Randolph E.
2006-08-01
The monolithic integration of components holds promise to increase network functionality and reduce packaging expense. Integration also drives down yield due to manufacturing complexity and the compounding of failures across devices. Consensus is lacking on the economically preferred extent of integration. Previous studies on the cost feasibility of integration have used high-level estimation methods. This study instead focuses on accurate-to-industry detail, basing a process-based cost model of device manufacture on data collected from 20 firms across the optoelectronics supply chain. The model presented allows for the definition of process organization, including testing, as well as processing conditions, operational characteristics, and level of automation at each step. This study focuses on the cost implications of integrating a 1550-nm DFB laser with an electroabsorptive modulator on an InP platform. Results show the monolithically integrated design to be more cost competitive than discrete component options regardless of production scale. Dominant cost drivers are packaging, testing, and assembly. Leveraging the technical detail underlying the model projections, component alignment, bonding, and metal-organic chemical vapor deposition (MOCVD) are identified as the processes where technical improvements are most critical to lowering costs. Such results should encourage exploration of the cost advantages of further integration and focus cost-driven technology development.
Procedures and models for estimating preconstruction costs of highway projects.
DOT National Transportation Integrated Search
2012-07-01
This study presents data-driven and component-based PE cost prediction models utilizing critical factors retrieved from ten years of historical project data obtained from the ODOT roadway division. The study used factor analysis of covariance and corr...
Pollux: Enhancing the Quality of Service of the Global Information Grid (GIG)
2009-06-01
and throughput of standard-based and/or COTS-based QoS-enabled pub/sub technologies, including DDS, JMS, Web Services, and CORBA. 2. The DDS QoS...of service pICKER (QUICKER) model-driven engineering (MDE) toolchain shown in Figure 8. QUICKER extends the Platform-Independent Component Modeling
Ambrose, Adrian Jacques H; Andaya, January M; Yamada, Seiji; Maskarinec, Gregory G
2014-08-01
In the current rapidly evolving healthcare environment of the United States, social justice programs in pre-medical and medical education are needed to cultivate socially conscious health professionals inclined to interdisciplinary collaborations. To address ongoing healthcare inequalities, medical education must help medical students to become physicians skilled not only in the biomedical management of diseases, but also in identifying and addressing social and structural determinants of the patients' daily lives. Using a longitudinal Problem-Based Learning (PBL) methodology, the medical students and faculty advisers at the University of Hawai'i John A. Burns School of Medicine (JABSOM) developed the Social Justice Curriculum Program (SJCP) to supplement the biomedical curriculum. The SJCP consists of three components: (1) active self-directed learning and didactics, (2) implementation and action, and (3) self-reflection and personal growth. The purpose of introducing a student-driven SJ curriculum is to expose the students to various components of SJ in health and medicine, and to maximize engagement by using their own inputs for content and design. It is our hope that the SJCP will serve as a logistic and research-oriented model for future student-driven SJ programs that respond to global health inequalities by cultivating skills and interest in leadership and community service.
Gang, G J; Siewerdsen, J H; Stayman, J W
2016-02-01
This work applies task-driven optimization to design CT tube current modulation and directional regularization in penalized-likelihood (PL) reconstruction. The relative performance of modulation schemes commonly adopted for filtered-backprojection (FBP) reconstruction was also evaluated for PL in comparison. We adopt a task-driven imaging framework that utilizes a patient-specific anatomical model and information about the imaging task to optimize imaging performance in terms of detectability index (d'). This framework leverages a theoretical model based on the implicit function theorem and Fourier approximations to predict the local spatial resolution and noise characteristics of PL reconstruction as a function of the imaging parameters to be optimized. Tube current modulation was parameterized as a linear combination of Gaussian basis functions, and regularization was based on the design of (directional) pairwise penalty weights for the 8 in-plane neighboring voxels. Detectability was optimized using a covariance matrix adaptation evolutionary strategy algorithm. Task-driven designs were compared to conventional tube current modulation strategies for a Gaussian detection task in an abdomen phantom. The task-driven design yielded the best performance, improving d' by ~20% over an unmodulated acquisition. Contrary to FBP, PL reconstruction using automatic exposure control and modulation based on minimum variance (in FBP) performed worse than the unmodulated case, decreasing d' by 16% and 9%, respectively. This work shows that conventional tube current modulation schemes suitable for FBP can be suboptimal for PL reconstruction. Thus, the proposed task-driven optimization provides additional opportunities for improved imaging performance and dose reduction beyond that achievable with conventional acquisition and reconstruction.
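The parameterization of tube current modulation as a linear combination of Gaussian basis functions over gantry angle can be sketched as follows. The function and parameter names are hypothetical, and the normalization to a fixed total mAs is an added assumption so that candidate profiles are compared at matched dose:

```python
import numpy as np

def modulation_profile(coeffs, centers, sigma, n_views=360, total_mas=100.0):
    """Tube-current modulation as a positive combination of Gaussian basis
    functions of gantry angle, rescaled to a fixed total exposure
    (illustrative parameterization, not the paper's implementation).
    """
    theta = np.linspace(0.0, 2 * np.pi, n_views, endpoint=False)
    profile = np.zeros(n_views)
    for c, mu in zip(coeffs, centers):
        d = np.angle(np.exp(1j * (theta - mu)))   # wrapped angular distance
        profile += c * np.exp(-d**2 / (2 * sigma**2))
    profile = np.clip(profile, 1e-6, None)        # tube current stays positive
    return profile * (total_mas / profile.sum())  # normalize total mAs
```

An optimizer such as CMA-ES would then search over `coeffs` (and possibly `centers`) to maximize a detectability objective evaluated on the resulting profile.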
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. Detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit was first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil.
Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)
Combining Domain-driven Design and Mashups for Service Development
NASA Astrophysics Data System (ADS)
Iglesias, Carlos A.; Fernández-Villamor, José Ignacio; Del Pozo, David; Garulli, Luca; García, Boni
This chapter presents the Romulus project approach to Service Development using Java-based web technologies. Romulus aims at improving productivity of service development by providing a tool-supported model to conceive Java-based web applications. This model follows a Domain Driven Design approach, which states that the primary focus of software projects should be the core domain and domain logic. Romulus proposes a tool-supported model, Roma Metaframework, that provides an abstraction layer on top of existing web frameworks and automates the application generation from the domain model. This metaframework follows an object centric approach, and complements Domain Driven Design by identifying the most common cross-cutting concerns (security, service, view, ...) of web applications. The metaframework uses annotations for enriching the domain model with these cross-cutting concerns, so-called aspects. In addition, the chapter presents the usage of mashup technology in the metaframework for service composition, using the web mashup editor MyCocktail. This approach is applied to a scenario of the Mobile Phone Service Portability case study for the development of a new service.
Driven Metadynamics: Reconstructing Equilibrium Free Energies from Driven Adaptive-Bias Simulations
2013-01-01
We present a novel free-energy calculation method that constructively integrates two distinct classes of nonequilibrium sampling techniques, namely, driven (e.g., steered molecular dynamics) and adaptive-bias (e.g., metadynamics) methods. By employing nonequilibrium work relations, we design a biasing protocol with an explicitly time- and history-dependent bias that uses on-the-fly work measurements to gradually flatten the free-energy surface. The asymptotic convergence of the method is discussed, and several relations are derived for free-energy reconstruction and error estimation. Isomerization reaction of an atomistic polyproline peptide model is used to numerically illustrate the superior efficiency and faster convergence of the method compared with its adaptive-bias and driven components in isolation. PMID:23795244
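The adaptive-bias ingredient can be caricatured deterministically: repeatedly deposit a Gaussian hill where the biased surface is currently deepest, and the wells gradually fill. This is a sketch of the metadynamics idea only; it omits the driven (steered) component, the actual dynamics, and the nonequilibrium work relations that are central to the paper:

```python
import numpy as np

def flatten_surface(free_energy, x, n_hills=200, height=0.05, width=0.3):
    """History-dependent bias deposition in the spirit of metadynamics:
    Gaussian hills are dropped at the deepest point of the biased surface,
    gradually filling the wells (a deterministic caricature; no dynamics
    or on-the-fly work measurements here).
    """
    bias = np.zeros_like(x)
    for _ in range(n_hills):
        i = np.argmin(free_energy + bias)  # deepest point of biased surface
        bias += height * np.exp(-(x - x[i])**2 / (2 * width**2))
    return bias

x = np.linspace(-2.5, 2.5, 501)
F = (x**2 - 1.0)**2          # illustrative double-well free energy
V = flatten_surface(F, x)
# After deposition, the biased surface F + V is far flatter than F in the
# well region, and -V approximates F up to an additive constant there.
```

In real metadynamics the deposition point follows the simulated trajectory rather than the global minimum, and the driven variant above additionally biases the collective variable with a time-dependent pulling protocol.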
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
Component Models for Semantic Web Languages
NASA Astrophysics Data System (ADS)
Henriksson, Jakob; Aßmann, Uwe
Intelligent applications and agents on the Semantic Web typically need to be specified with, or interact with specifications written in, many different kinds of formal languages. Such languages include ontology languages, data and metadata query languages, as well as transformation languages. As learnt from years of experience in development of complex software systems, languages need to support some form of component-based development. Components enable higher software quality, better understanding and reusability of already developed artifacts. Any component approach contains an underlying component model, a description detailing what valid components are and how components can interact. With the multitude of languages developed for the Semantic Web, what are their underlying component models? Do we need to develop one for each language, or is a more general and reusable approach achievable? We present a language-driven component model specification approach. This means that a component model can be (automatically) generated from a given base language (actually, its specification, e.g. its grammar). As a consequence, we can provide components for different languages and simplify the development of software artifacts used on the Semantic Web.
Review and study of physics driven pitting corrosion modeling in 2024-T3 aluminum alloys
NASA Astrophysics Data System (ADS)
Yu, Lingyu; Jata, Kumar V.
2015-04-01
Material degradation due to corrosion and corrosion fatigue has been recognized to significantly affect the airworthiness of civilian and military aircraft, especially for the current fleet of airplanes that have served beyond their initial design life. The ability to predict corrosion damage development in aircraft components and structures is therefore of great importance in managing timely maintenance for aging aircraft and in assisting the design of new ones. The assessment of aircraft corrosion and its influence on fatigue life relies on appropriate quantitative models that can evaluate the initiation of corrosion as well as its accumulation during the period of operation. Beyond the aircraft regime, corrosion also affects the maintenance, safety, and reliability of other systems such as nuclear power systems, steam and gas turbines, and marine structures. In the work presented in this paper, we reviewed and studied several physics-based pitting corrosion models reported in the literature. The classic work on particle-induced pitting corrosion by Wei and Harlow is reviewed in detail. Two types of modeling, a power-law-based simplified model and a microstructure-based model, are compared for the 2024-T3 alloy. Data from the literature are used as model inputs. The paper ends with conclusions and recommendations for future work.
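A power-law simplification of the kind mentioned above can be written in a few lines. Assuming a hemispherical pit growing at a constant volumetric rate (the lumped constant `k` is hypothetical, standing in for the Faradaic dissolution rate), the pit depth grows as t^(1/3):

```python
import math

def pit_depth(t, k=1e-9):
    """Hemispherical pit depth under constant volumetric growth dV/dt = k,
    a simplified power-law reading of particle-induced pitting models
    of the Wei/Harlow type (illustrative parameter values).

    V = (2/3) * pi * a**3  =>  a(t) = (3 * k * t / (2 * pi)) ** (1/3)
    """
    return (3.0 * k * t / (2.0 * math.pi)) ** (1.0 / 3.0)
```

The cube-root scaling is the signature of this model class: increasing exposure time eightfold only doubles the pit depth.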
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1990-01-01
Viewgraphs on Rubber Airplane: Constraint-based Component-Modeling for Knowledge Representation in Computer Aided Conceptual Design are presented. Topics covered include: computer aided design; object oriented programming; airfoil design; surveillance aircraft; commercial aircraft; aircraft design; and launch vehicles.
Development of a thermodynamic model for a cold cycle 3He-4He dilution refrigerator
NASA Astrophysics Data System (ADS)
Mueller, B. W.; Miller, F. K.
2016-10-01
A thermodynamic model of a 3He-4He cold cycle dilution refrigerator with no actively-driven mechanical components is developed and investigated. The refrigerator employs a reversible superfluid magnetic pump, passive check valves, a phase separation chamber, and a series of recuperative heat exchangers to continuously circulate 3He-4He and maintain a 3He concentration gradient across the mixing chamber. The model predicts cooling power and mixing chamber temperature for a range of design and operating parameters, allowing an evaluation of feasibility for potential 3He-4He cold cycle dilution refrigerator prototype designs. Model simulations for a prototype refrigerator design are presented.
Design of a component-based integrated environmental modeling framework
Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...
NASA Astrophysics Data System (ADS)
Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid
2017-05-01
Continuous monitoring for damage detection in structural assessment requires low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology that is practical for continuous damage assessment. Specifically, an algorithm based on a data-driven approach is discussed, using principal component analysis with acquired signals pre-processed by means of cross-correlation functions. A carbon steel pipe section and a laboratory tower were used as test structures to demonstrate the feasibility of the methodology for detecting abrupt changes in the structural response when damage occurs. Two types of damage case are studied: a crack and a leak, one for each structure. Experimental results show that the methodology is promising for the continuous monitoring of real structures.
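The PCA-based detection step described in this abstract is commonly implemented as a squared-prediction-error (Q statistic) test against a baseline model. The sketch below is a generic version of that idea, not the authors' exact algorithm; their cross-correlation pre-processing is assumed to have already produced the feature vectors.

```python
import numpy as np

def pca_spe_model(X, n_comp):
    """Fit a PCA baseline model from healthy-state feature vectors
    (rows of X); returns the mean and the retained principal directions."""
    mu = X.mean(axis=0)
    # SVD of centered baseline data; rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_comp].T

def spe(x, mu, P):
    """Squared prediction error (Q statistic): energy of the part of x
    the baseline PCA subspace cannot explain. Large values flag a
    change in the structural response."""
    r = (x - mu) - P @ (P.T @ (x - mu))
    return float(r @ r)
```

A damage indicator is then a simple threshold on the Q statistic, with the threshold set from the healthy-state distribution.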
An Open Source Low-Cost Automatic System for Image-Based 3d Digitization
NASA Astrophysics Data System (ADS)
Menna, F.; Nocerino, E.; Morabito, D.; Farella, E. M.; Perini, M.; Remondino, F.
2017-11-01
3D digitization of heritage artefacts, reverse engineering of industrial components, and rapid prototyping-driven design are key topics today. Indeed, millions of archaeological finds all over the world need to be surveyed in 3D, either to allow convenient investigation by researchers, because they are inaccessible to visitors and scientists, or, unfortunately, because they are seriously endangered by wars and terrorist attacks. On the other hand, in the case of industrial and design components there is often the need for deformation analyses or physical replicas starting from reality-based 3D digitisations. The paper is aligned with these needs and presents the realization of the ORION (arduinO Raspberry pI rOtating table for image-based 3D reconstructioN) prototype system, with its hardware and software components, providing critical insights about its modular design. ORION is an image-based 3D reconstruction system based on automated photogrammetric acquisition and processing. The system is being developed under a collaborative educational project between FBK Trento, the University of Trento, and internship programs with high schools in the Trentino province (Italy).
CD volume design and verification
NASA Technical Reports Server (NTRS)
Li, Y. P.; Hughes, J. S.
1993-01-01
In this paper, we describe a prototype for CD-ROM volume design and verification. This prototype allows users to create their own model of CD volumes by modifying a prototypical model. Rule-based verification of test volumes can then be performed against the volume definition. This working prototype has proven the concept of model-driven, rule-based design and verification for large quantities of data. The model defined for the CD-ROM volumes serves as both a data model and an executable specification.
Human systems immunology: hypothesis-based modeling and unbiased data-driven approaches.
Arazi, Arnon; Pendergraft, William F; Ribeiro, Ruy M; Perelson, Alan S; Hacohen, Nir
2013-10-31
Systems immunology is an emerging paradigm that aims at a more systematic and quantitative understanding of the immune system. Two major approaches have been utilized to date in this field: unbiased data-driven modeling to comprehensively identify molecular and cellular components of a system and their interactions; and hypothesis-based quantitative modeling to understand the operating principles of a system by extracting a minimal set of variables and rules underlying them. In this review, we describe applications of the two approaches to the study of viral infections and autoimmune diseases in humans, and discuss possible ways by which these two approaches can synergize when applied to human immunology. Copyright © 2012 Elsevier Ltd. All rights reserved.
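A minimal example of the hypothesis-based side described here is the target-cell-limited model of viral infection, which captures infection dynamics with just three variables. The sketch below integrates it with forward Euler; the parameter values are illustrative, not fitted to any dataset from the review.

```python
def simulate_infection(t_end=10.0, dt=0.001, beta=1e-5, delta=0.5,
                       p=10.0, c=3.0, T0=1e5, I0=0.0, V0=10.0):
    """Forward-Euler integration of the target-cell-limited model:
        dT/dt = -beta*T*V           (target cells become infected)
        dI/dt =  beta*T*V - delta*I (infected cells die at rate delta)
        dV/dt =  p*I - c*V          (virions produced and cleared)
    Returns the list of (T, I, V) states over time."""
    T, I, V = T0, I0, V0
    traj = [(T, I, V)]
    for _ in range(int(t_end / dt)):
        dT = -beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T, I, V = T + dT * dt, I + dI * dt, V + dV * dt
        traj.append((T, I, V))
    return traj
```

Fitting the handful of parameters (beta, delta, p, c) to viral-load data is what yields the "operating principles" the review contrasts with unbiased data-driven modeling.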
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Na; Wright, Alan D.; Johnson, Kathryn E.
Two independent pitch controllers (IPCs) based on the disturbance accommodating control (DAC) algorithm are designed for the three-bladed Controls Advanced Research Turbine to regulate rotor speed and to mitigate blade root flapwise bending loads at above-rated wind speed. One of the DAC-based IPCs is designed based on a transformed symmetrical-asymmetrical (TSA) turbine model, with wind disturbances modeled as a collective horizontal component and an asymmetrical linear shear component. The other DAC-based IPC is designed based on a multiblade coordinate (MBC) transformed turbine model, with a horizontal component and a vertical shear component modeled as step waveform disturbances. Both of the DAC-based IPCs are found via a regulation equation solved by Kronecker product. Actuator dynamics are considered in the design processes to compensate for actuator phase delay. The simulation study shows the effectiveness of the proposed DAC-based IPCs compared to a proportional-integral (PI) collective pitch controller (CPC). Improvement in rotor speed regulation and in once-per-revolution and twice-per-revolution load reductions has been observed in the proposed IPC designs.
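The phrase "a regulation equation solved by Kronecker product" refers to turning a linear matrix equation into an ordinary linear system via vectorization. The sketch below shows that device on a Sylvester-type equation A X + X B = C; the actual turbine matrices from the paper are not reproduced here.

```python
import numpy as np

def solve_sylvester_kron(A, B, C):
    """Solve A X + X B = C via the Kronecker identity
    vec(A X + X B) = (I (x) A + B^T (x) I) vec(X),
    where vec stacks columns (Fortran order in NumPy)."""
    n, m = C.shape
    K = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape((n, m), order="F")
```

This direct solve scales as O((nm)^3), which is acceptable for the small state dimensions typical of reduced turbine models.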
Test Driven Development of a Parameterized Ice Sheet Component
NASA Astrophysics Data System (ADS)
Clune, T.
2011-12-01
Test driven development (TDD) is a software development methodology that offers many advantages over traditional approaches, including reduced development and maintenance costs, improved reliability, and superior design quality. Although TDD is widely accepted in many software communities, its suitability for scientific software is largely undemonstrated and warrants a degree of skepticism. Indeed, numerical algorithms pose several challenges to unit testing in general, and TDD in particular. Among these challenges are the need for simple, non-redundant closed-form expressions to compare against the results obtained from the implementation, as well as realistic error estimates. The necessity for serial and parallel performance raises additional concerns for many scientific applications. In previous work I demonstrated that TDD performed well for the development of a relatively simple numerical model that simulates the growth of snowflakes, but the results were anecdotal and of limited relevance to the far more complex software components typical of climate models. This investigation has now been extended by successfully applying TDD to the implementation of a substantial portion of a new parameterized ice sheet component within a full climate model. After a brief introduction to TDD, I will present techniques that address some of the obstacles encountered with numerical algorithms. I will conclude with some quantitative and qualitative comparisons against climate components developed in a more traditional manner.
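The core TDD pattern the author describes, testing a numerical kernel against a closed-form expression with a realistic error bound, looks like the following. The trapezoid-rule kernel is a stand-in example; the ice sheet code itself is not shown in this abstract.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule: a simple numerical kernel used here to
    illustrate testing against a closed form with an O(h^2) error bound."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def test_trapezoid_against_closed_form():
    # Integral of sin over [0, pi] is exactly 2; the trapezoid error is
    # O(h^2), so the tolerance is derived from the step size, not guessed.
    n = 100
    h = math.pi / n
    assert abs(trapezoid(math.sin, 0.0, math.pi, n) - 2.0) < h ** 2
```

Deriving the tolerance from the method's known convergence order, rather than an arbitrary epsilon, is the key discipline when applying TDD to numerical algorithms.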
Shape Memory Alloy (SMA)-Based Launch Lock
NASA Technical Reports Server (NTRS)
Badescu, Mircea; Bao, Xiaoqi; Bar-Cohen, Yoseph
2014-01-01
Most NASA missions require the use of a launch lock for securing moving components during launch or securing the payload before release. A launch lock is a device used to prevent unwanted motion and secure the controlled components. Current launch locks are based on pyrotechnically, electromechanically, or NiTi-driven pin pullers, and they are mostly one-time-use mechanisms that are usually bulky and involve relatively high mass. Generally, piezoelectric actuation provides high-precision nanometer accuracy, but it relies on friction to generate displacement. During launch, the generated vibrations can release the normal force between the actuator components, allowing free motion of the shaft, which could result in damage to the actuated structures or instruments. This problem is common to other linear actuators that consist of a ball screw mechanism. The authors are exploring the development of a novel launch lock mechanism that is activated by a shape memory alloy (SMA) material ring, a rigid element, and an SMA ring holding flexure. The proposed design and analytical model will be described and discussed in this paper.
Multi-Mission Power Analysis Tool (MMPAT) Version 3
NASA Technical Reports Server (NTRS)
Wood, Eric G.; Chang, George W.; Chen, Fannie C.
2012-01-01
The Multi-Mission Power Analysis Tool (MMPAT) simulates a spacecraft power subsystem including the power source (solar array and/or radioisotope thermoelectric generator), bus-voltage control, secondary battery (lithium-ion or nickel-hydrogen), thermostatic heaters, and power-consuming equipment. It handles multiple mission types including heliocentric orbiters, planetary orbiters, and surface operations. Because it is parametrically driven and user-programmable, the need for software modifications when configuring it for a particular spacecraft can be reduced or even eliminated. It provides multiple levels of fidelity, thereby fulfilling the vast majority of a project's power simulation needs throughout the lifecycle. It can operate in a stand-alone mode with a graphical user interface, in batch mode, or as a library linked with other tools. This software can simulate all major aspects of a spacecraft power subsystem. It is parametrically driven to reduce or eliminate the need for a programmer. Added flexibility is provided through user-designed state models and table-driven parameters. MMPAT is designed to be used by a variety of users, such as power subsystem engineers for sizing power subsystem components; mission planners for adjusting mission scenarios using power profiles generated by the model; system engineers for performing system-level trade studies using the results of the model during the early design phases of a spacecraft; and operations personnel for high-fidelity modeling of the essential power aspect of the planning picture.
NASA Astrophysics Data System (ADS)
Gorzelic, P.; Schiff, S. J.; Sinha, A.
2013-04-01
Objective. To explore the use of classical feedback control methods to achieve an improved deep brain stimulation (DBS) algorithm for application to Parkinson's disease (PD). Approach. A computational model of PD dynamics was employed to develop model-based rational feedback controller design. The restoration of thalamocortical relay capabilities to patients suffering from PD is formulated as a feedback control problem with the DBS waveform serving as the control input. Two high-level control strategies are tested: one that is driven by an online estimate of thalamic reliability, and another that acts to eliminate substantial decreases in the inhibition from the globus pallidus interna (GPi) to the thalamus. Control laws inspired by traditional proportional-integral-derivative (PID) methodology are prescribed for each strategy and simulated on this computational model of the basal ganglia network. Main Results. For control based upon thalamic reliability, a strategy of frequency proportional control with proportional bias delivered the optimal control achieved for a given energy expenditure. In comparison, control based upon synaptic inhibitory output from the GPi performed very well in comparison with those of reliability-based control, with considerable further reduction in energy expenditure relative to that of open-loop DBS. The best controller performance was amplitude proportional with derivative control and integral bias, which is full PID control. We demonstrated how optimizing the three components of PID control is feasible in this setting, although the complexity of these optimization functions argues for adaptive methods in implementation. Significance. Our findings point to the potential value of model-based rational design of feedback controllers for Parkinson's disease.
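The "full PID" control law the authors converge on has the standard discrete form below. This is a generic sketch exercised on a first-order stand-in plant, not the basal ganglia network model used in the paper.

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

In the paper's setting the "measurement" would be the online reliability or GPi-inhibition estimate, and the controller output would modulate the DBS amplitude or frequency.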
Feature-based component model for design of embedded systems
NASA Astrophysics Data System (ADS)
Zha, Xuan Fang; Sriram, Ram D.
2004-11-01
An embedded system is a hybrid of hardware and software, which combines software's flexibility with hardware's real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., the Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variations of the prototype whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.
Magnetically-driven medical robots: An analytical magnetic model for endoscopic capsules design
NASA Astrophysics Data System (ADS)
Li, Jing; Barjuei, Erfan Shojaei; Ciuti, Gastone; Hao, Yang; Zhang, Peisen; Menciassi, Arianna; Huang, Qiang; Dario, Paolo
2018-04-01
Magnetic-based approaches are highly promising for providing innovative solutions in the design of medical devices for diagnostic and therapeutic procedures, such as in endoluminal districts. Due to their intrinsic magnetic properties (no current needed) and high strength-to-size ratio compared with electromagnetic solutions, permanent magnets are usually embedded in medical devices. In this paper, a set of analytical formulas is derived to model the magnetic forces and torques exerted by an arbitrary external magnetic field on a permanent magnetic source embedded in a medical robot. In particular, the authors model cylindrical permanent magnets, a general solution often embedded in magnetically driven medical devices. The analytical model can be applied to axially and diametrically magnetized, solid and annular cylindrical permanent magnets without severe computational complexity. Using a cylindrical permanent magnet as the selected solution, the model has been applied to a robotic endoscopic capsule as a pilot study in the design of magnetically driven robots.
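In the point-dipole limit, the force and torque expressions underlying such models reduce to F = ∇(m·B) and τ = m×B. The sketch below evaluates these numerically; it is a simplification for intuition, not the paper's full analytical formulas for cylindrical magnets.

```python
import numpy as np

def dipole_torque(m, B):
    """Torque on a magnetic dipole m (A*m^2) in field B (T): tau = m x B."""
    return np.cross(m, B)

def dipole_force(m, B_func, r, h=1e-6):
    """Force F = grad(m . B) at position r, via central differences on a
    user-supplied field function B_func(r) -> B vector."""
    F = np.zeros(3)
    for i in range(3):
        dr = np.zeros(3)
        dr[i] = h
        F[i] = (m @ B_func(r + dr) - m @ B_func(r - dr)) / (2.0 * h)
    return F
```

A uniform field therefore exerts pure torque and no net force, which is why field gradients, not field strength alone, are what drag a capsule along the lumen.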
Dshell++: A Component Based, Reusable Space System Simulation Framework
NASA Technical Reports Server (NTRS)
Lim, Christopher S.; Jain, Abhinandan
2009-01-01
This paper describes the multi-mission Dshell++ simulation framework for high fidelity, physics-based simulation of spacecraft, robotic manipulation and mobility systems. Dshell++ is a C++/Python library which uses modern script-driven object-oriented techniques to allow component reuse and a dynamic run-time interface for complex, high-fidelity simulation of spacecraft and robotic systems. The goal of the Dshell++ architecture is to manage the inherent complexity of physics-based simulations while supporting component model reuse across missions. The framework provides several features that support a large degree of simulation configurability and usability.
Design and Operation of a 4kW Linear Motor Driven Pulse Tube Cryocooler
NASA Astrophysics Data System (ADS)
Zia, J. H.
2004-06-01
A 4 kW electrical input linear-motor-driven pulse tube cryocooler has been successfully designed, built, and tested. The optimum operating frequency is 60 Hz with a design refrigeration of >200 W at 80 K. The design exercise involved modeling and optimization in the DeltaE software. Load matching between the cold head and linear motor was achieved by careful sizing of the transfer tube. The cryocooler makes use of a dual-orifice inertance network and a single compliance tank for phase optimization and streaming suppression in the pulse tube. The in-line cold head design is modular in structure for convenient change-out and re-assembly of various components. The regenerator consists of layers of two different grades of wire mesh. The linear motor is a clearance-seal, dual-opposed-piston design from CFIC Inc. Initial results have demonstrated the refrigeration target of 200 W by liquefying nitrogen from ambient temperature and pressure. Overall Carnot efficiencies of 13% have been achieved, and efforts to further improve efficiencies are underway. Linear motor efficiencies up to 84% have been observed. Experimental results have shown satisfactory compliance with model predictions, although the effects of streaming were not part of the model. Refrigeration loss due to streaming was minimal at the design operating conditions of 80 K.
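The quoted 13% of Carnot follows directly from the stated numbers. A quick check, assuming heat rejection at roughly 300 K ambient (the abstract does not state the warm-end temperature):

```python
def percent_of_carnot(q_cold, p_input, t_cold, t_hot=300.0):
    """Fraction of Carnot: (actual COP) / (Carnot COP), with
    COP_Carnot = Tc / (Th - Tc). t_hot = 300 K is an assumed ambient."""
    cop = q_cold / p_input            # e.g. 200 W / 4000 W = 0.05
    cop_carnot = t_cold / (t_hot - t_cold)
    return cop / cop_carnot
```

With 200 W at 80 K from 4 kW input this gives about 0.14, consistent with the reported overall efficiency.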
Multiple Damage Progression Paths in Model-Based Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Goebel, Kai Frank
2011-01-01
Model-based prognostics approaches employ domain knowledge about a system, its components, and how they fail through the use of physics-based models. Component wear is driven by several different degradation phenomena, each resulting in its own damage progression path, overlapping to contribute to the overall degradation of the component. We develop a model-based prognostics methodology using particle filters, in which the problem of characterizing multiple damage progression paths is cast as a joint state-parameter estimation problem. The estimate is represented as a probability distribution, allowing the prediction of end of life and remaining useful life within a probabilistic framework that supports uncertainty management. We also develop a novel variance control mechanism that maintains an uncertainty bound around the hidden parameters to limit the amount of estimation uncertainty and, consequently, reduce prediction uncertainty. We construct a detailed physics-based model of a centrifugal pump, to which we apply our model-based prognostics algorithms. We illustrate the operation of the prognostic solution with a number of simulation-based experiments and demonstrate the performance of the chosen approach when multiple damage mechanisms are active.
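The joint state-parameter estimation described above can be illustrated with a toy bootstrap particle filter in which both the damage level and the unknown wear rate are carried by each particle. This is a schematic analogue, not the paper's centrifugal pump model; the noise levels and the jitter term are assumptions.

```python
import numpy as np

def pf_wear(measurements, dt=1.0, n=2000, seed=0):
    """Bootstrap particle filter over (damage d, wear-rate w).
    d grows as d += w*dt plus process noise; w is a hidden parameter
    kept alive with a small jitter (a crude stand-in for the paper's
    variance-control mechanism). Returns posterior means of d and w."""
    rng = np.random.default_rng(seed)
    d = np.zeros(n)                        # damage particles
    w = rng.uniform(0.0, 1.0, n)           # wear-rate particles (flat prior)
    for z in measurements:
        w = w + rng.normal(0.0, 1e-3, n)   # parameter jitter
        d = d + w * dt + rng.normal(0.0, 0.01, n)
        wts = np.exp(-0.5 * ((z - d) / 0.05) ** 2)  # Gaussian likelihood
        wts /= wts.sum()
        idx = rng.choice(n, size=n, p=wts)  # multinomial resampling
        d, w = d[idx], w[idx]
    return d.mean(), w.mean()
```

Once w is pinned down, remaining useful life follows by propagating particles forward until d crosses a failure threshold, which yields the probabilistic prediction the abstract refers to.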
Can We Practically Bring Physics-based Modeling Into Operational Analytics Tools?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Bonvini, Marco; Piette, Mary Ann
Analytics software is increasingly used to improve and maintain operational efficiency in commercial buildings. Energy managers, owners, and operators are using a diversity of commercial offerings often referred to as Energy Information Systems, Fault Detection and Diagnostic (FDD) systems, or more broadly Energy Management and Information Systems, to cost-effectively enable savings on the order of ten to twenty percent. Most of these systems use data from meters and sensors, with rule-based and/or data-driven models to characterize system and building behavior. In contrast, physics-based modeling uses first principles and engineering models (e.g., efficiency curves) to characterize system and building behavior. Historically, these physics-based approaches have been used in the design phase of the building life cycle or in retrofit analyses. Researchers have begun exploring the benefits of integrating physics-based models with operational data analytics tools, bridging the gap between design and operations. In this paper, we detail the development and operator use of a software tool that uses hybrid data-driven and physics-based approaches to cooling plant FDD and optimization. Specifically, we describe the system architecture, models, and FDD and optimization algorithms; advantages and disadvantages with respect to purely data-driven approaches; and practical implications for scaling and replicating these techniques. Finally, we conclude with an evaluation of the future potential for such tools and future research opportunities.
Electromagnetic Interaction between the Component Coils of Multi-Plex Magnets
Nguyen, Quyen V. M.; Torrez, Lynette; Nguyen, Doan Ngoc
2017-12-04
Ultra-high field pulsed magnets are usually designed as a group of nested, concentric coils driven by separate power sources to reduce the required driving voltages and to distribute the mechanical load. Since the magnet operates in a fast transient mode, there are strong and complicated electromagnetic couplings between the component coils. The high eddy currents generated in the reinforcement shells of the component coils during the pulses also strongly affect these couplings. Therefore, understanding the electromagnetic interaction between the component coils will allow safer, more optimized design and operation of our magnets. As a result, this paper focuses on our finite element modeling and experimental results for the electromagnetic interactions between the component coils of the 100-T nondestructive magnet and the 80-T duplex magnet at our facility.
Implementing and Assessing a Flipped Classroom Model for First-Year Engineering Design
ERIC Educational Resources Information Center
Saterbak, Ann; Volz, Tracy; Wettergreen, Matthew
2016-01-01
Faculty at Rice University are creating instructional resources to support teaching first-year engineering design using a flipped classroom model. This implementation of flipped pedagogy is unusual because content-driven, lecture courses are usually targeted for flipping, not project-based design courses that already incorporate an abundance of…
NASA Technical Reports Server (NTRS)
Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel
2006-01-01
Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.
Energy modelling in sensor networks
NASA Astrophysics Data System (ADS)
Schmidt, D.; Krämer, M.; Kuhn, T.; Wehn, N.
2007-06-01
Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy efficient communication protocols. Models of the energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model driven design process. We propose a novel methodology to create models for sensor nodes based on few simple measurements. In a case study the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in a SDL runtime framework of a model driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power saving strategies.
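State-based energy models of the kind described here typically reduce to E = Σ P_state · t_state bookkeeping over the radio/CPU states, with per-state power drawn from a few bench measurements. A minimal sketch follows; the numbers below are illustrative placeholders, not the authors' MICAz measurements.

```python
# Per-state power draw in mW -- illustrative values only.
STATE_POWER_MW = {"sleep": 0.03, "idle": 9.6, "rx": 59.1, "tx": 52.2}

def energy_uj(schedule):
    """Energy in microjoules for a schedule of (state, duration_ms)
    pairs, using the simple E = sum(P_state * t_state) model
    (mW * ms = uJ)."""
    return sum(STATE_POWER_MW[state] * t_ms for state, t_ms in schedule)
```

Plugging a protocol's simulated state schedule into such a table is what lets a simulator, or an SDL-driven code generator, compare designs by energy cost before deployment.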
Rübel, Oliver; Dougherty, Max; Prabhat; Denes, Peter; Conant, David; Chang, Edward F.; Bouchard, Kristofer
2016-01-01
Neuroscience continues to experience a tremendous growth in data; in terms of the volume and variety of data, the velocity at which data is acquired, and in turn the veracity of data. These challenges are a serious impediment to sharing of data, analyses, and tools within and across labs. Here, we introduce BRAINformat, a novel data standardization framework for the design and management of scientific data formats. The BRAINformat library defines application-independent design concepts and modules that together create a general framework for standardization of scientific data. We describe the formal specification of scientific data standards, which facilitates sharing and verification of data and formats. We introduce the concept of Managed Objects, enabling semantic components of data formats to be specified as self-contained units, supporting modular and reusable design of data format components and file storage. We also introduce the novel concept of Relationship Attributes for modeling and use of semantic relationships between data objects. Based on these concepts we demonstrate the application of our framework to design and implement a standard format for electrophysiology data and show how data standardization and relationship-modeling facilitate data analysis and sharing. The format uses HDF5, enabling portable, scalable, and self-describing data storage and integration with modern high-performance computing for data-driven discovery. The BRAINformat library is open source, easy-to-use, and provides detailed user and developer documentation and is freely available at: https://bitbucket.org/oruebel/brainformat. PMID:27867355
NASA Astrophysics Data System (ADS)
Zhang, Y.; Wen, J.; Xiao, Q.; You, D.
2016-12-01
Operational algorithms for land surface BRDF/Albedo products are mainly developed from kernel-driven models, combining atmospherically corrected, multidate, multiband surface reflectance to extract BRDF parameters. The Angular and Spectral Kernel-Driven model (ASK model), which incorporates the component spectra as a priori knowledge, provides a potential way to make full use of multi-sensor data with multispectral information and accumulated observations. However, the ASK model is still not feasible for global BRDF/Albedo inversions due to the lack of sufficient field measurements of component spectra at the large scale. This research outlines a parameterization scheme for the component spectra for global-scale BRDF/Albedo inversions in the frame of the ASK model. The parameter γ(λ) can be derived from the ratio of the leaf reflectance to the soil reflectance, supported by a globally distributed soil spectral library and the ANGERS and LOPEX leaf optical properties databases. To account for the intrinsic variability in both the land cover and spectral dimensions, the mean and standard deviation of γ(λ) for 28 soil units and 4 leaf types in seven MODIS bands were calculated, with a world soil map used for global BRDF/Albedo products retrieval. Compared to retrievals from BRF datasets simulated by the PROSAIL model, the ASK model shows acceptable accuracy with this parameterization strategy, with RMSE at most 0.007 higher than inversion with the true component spectra. The results indicate that classification on the ratio helps capture the spectral characteristics in BRDF/Albedo retrieval, whereas the ratio range should be kept within 8% in each band. Ground-based measurements in the Heihe river basin were used to validate the accuracy of the improved ASK model, and the generated broadband albedo products show good agreement with in situ data, which suggests that the improvement of the component spectra in the ASK model has potential for global-scale BRDF/Albedo inversions.
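For context, kernel-driven BRDF models are linear in their parameters, R = f_iso + f_vol·K_vol + f_geo·K_geo, so inversion is an ordinary least-squares fit per pixel and band. The sketch below shows only that step; the ASK model's component-spectra terms and the kernel formulas themselves (e.g. Ross-Thick/Li-Sparse) are not implemented here, and the kernel values are assumed precomputed.

```python
import numpy as np

def invert_brdf(refl, k_vol, k_geo):
    """Least-squares inversion of the linear kernel-driven BRDF model
    R = f_iso + f_vol*K_vol + f_geo*K_geo, given per-observation
    reflectances and precomputed kernel values.
    Returns [f_iso, f_vol, f_geo]."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    f, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return f
```

At least three well-distributed angular observations are needed per band, which is why accumulating multidate, multi-sensor looks matters for global retrieval.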
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y.; Loesser, G.; Smith, M.
ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and provide structural support while allowing diagnostic access to the plasma. The design of DFWs and DSMs is driven by (1) plasma radiation and nuclear heating during normal operation and (2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory and was used for the design of ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis and three major issues driving the mechanical design of ITER DFWs are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
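The residual-generation and statistical-evaluation steps of such an FDD scheme can be sketched as follows; the fuzzy model itself is abstracted away as a prediction sequence, and the z-style threshold test is an illustrative stand-in for the paper's statistical tests:

```python
from statistics import mean

def residuals(measured, predicted):
    # residual generation: deviation of the plant output from the
    # fuzzy-model prediction; near zero in the fault-free case
    return [m - p for m, p in zip(measured, predicted)]

def fault_flag(res, noise_std, z=3.0):
    # statistical evaluation: flag a fault when the mean residual exceeds
    # z standard errors of the assumed measurement-noise level
    return abs(mean(res)) > z * noise_std / len(res) ** 0.5
```

A fault-isolation layer would then map which residuals fire to process, sensor, or actuator faults.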
NASA Technical Reports Server (NTRS)
Goodman, Michael L.; Kwan, Chiman; Ayhan, Bulent; Shang, Eric L.
2017-01-01
There are many flare forecasting models. For an excellent review and comparison of some of them see Barnes et al. (2016). All these models are successful to some degree, but there is a need for better models. We claim the most successful models explicitly or implicitly base their forecasts on various estimates of components of the photospheric current density J, based on observations of the photospheric magnetic field B. However, none of the models we are aware of compute the complete J. We seek to develop a better model based on computing the complete photospheric J. Initial results from this model are presented in this talk. We present a data-driven, near-photospheric, 3D, non-force-free magnetohydrodynamic (MHD) model that computes time series of the total J, and the associated resistive heating rate, in each pixel at the photosphere in the neutral line regions (NLRs) of 14 active regions (ARs). The model is driven by time series of B measured by the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) satellite. Spurious Doppler periods due to SDO orbital motion are filtered out of the time series of B in every AR pixel. Errors in B due to these periods can be significant.
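Filtering a known spurious period out of a time series can be done several ways; one minimal approach (a least-squares sinusoid fit and subtraction at the known period, assuming the window spans a whole number of periods, and not necessarily the filter used in this work) is:

```python
import math

def remove_spurious_period(signal, dt, period):
    # fit and subtract a sinusoid at a known spurious period (e.g. an
    # orbital period leaking into measured B); the projection below is the
    # least-squares fit when the record spans an integer number of periods
    w = 2.0 * math.pi / period
    n = len(signal)
    c = sum(s * math.cos(w * i * dt) for i, s in enumerate(signal)) * 2.0 / n
    d = sum(s * math.sin(w * i * dt) for i, s in enumerate(signal)) * 2.0 / n
    return [s - c * math.cos(w * i * dt) - d * math.sin(w * i * dt)
            for i, s in enumerate(signal)]
```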
Maneshi, Mona; Vahdat, Shahabeddin; Gotman, Jean; Grova, Christophe
2016-01-01
Independent component analysis (ICA) has been widely used to study functional magnetic resonance imaging (fMRI) connectivity. However, the application of ICA in multi-group designs is not straightforward. We have recently developed a new method named “shared and specific independent component analysis” (SSICA) to perform between-group comparisons in the ICA framework. SSICA is sensitive to extract those components which represent a significant difference in functional connectivity between groups or conditions, i.e., components that could be considered “specific” for a group or condition. Here, we investigated the performance of SSICA on realistic simulations, and task fMRI data and compared the results with one of the state-of-the-art group ICA approaches to infer between-group differences. We examined SSICA robustness with respect to the number of allowable extracted specific components and between-group orthogonality assumptions. Furthermore, we proposed a modified formulation of the back-reconstruction method to generate group-level t-statistics maps based on SSICA results. We also evaluated the consistency and specificity of the extracted specific components by SSICA. The results on realistic simulated and real fMRI data showed that SSICA outperforms the regular group ICA approach in terms of reconstruction and classification performance. We demonstrated that SSICA is a powerful data-driven approach to detect patterns of differences in functional connectivity across groups/conditions, particularly in model-free designs such as resting-state fMRI. Our findings in task fMRI show that SSICA confirms results of the general linear model (GLM) analysis and when combined with clustering analysis, it complements GLM findings by providing additional information regarding the reliability and specificity of networks. PMID:27729843
The jABC Approach to Rigorous Collaborative Development of SCM Applications
NASA Astrophysics Data System (ADS)
Hörmann, Martina; Margaria, Tiziana; Mender, Thomas; Nagel, Ralf; Steffen, Bernhard; Trinh, Hong
Our approach to the model-driven collaborative design of IKEA's P3 Delivery Management Process uses the jABC [9] for model-driven mediation and choreography to complement a RUP-based (Rational Unified Process) development process. jABC is a framework for service development based on Lightweight Process Coordination. Users (product developers and system/software designers) easily develop services and applications by composing reusable building blocks into (flow-) graph structures that can be animated, analyzed, simulated, verified, executed, and compiled. This way of handling the collaborative design of complex embedded systems has proven to be effective and adequate for the cooperation of non-programmers and non-technical people, which is the focus of this contribution, and it is now being rolled out in operative practice.
Simulation of Solar Heat Pump Dryer Directly Driven by Photovoltaic Panels
NASA Astrophysics Data System (ADS)
Houhou, H.; Yuan, W.; Wang, G.
2017-05-01
This paper investigates a new type of solar heat pump dryer directly driven by photovoltaic panels. To design this system, a mathematical model describing the whole drying process was established, including models of key components and of the heat and mass transfer phenomena in the product layer and the air. Simulations at different drying air temperatures and velocities indicate that the drying air temperature is the crucial external parameter compared to the velocity: as the drying temperature increases from 45°C to 55°C, the product moisture content decreases from 0.75 kg water/kg dry product to 0.3 kg/kg.
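A thin-layer drying balance of the form dM/dt = -k(T)(M - Me) reproduces the qualitative temperature effect described above; the rate constants below are illustrative, not fitted to this dryer:

```python
import math

def moisture(m0, m_eq, t_air_c, minutes, dt=1.0, k0=1e-3, b=0.2):
    # thin-layer drying sketch: dM/dt = -k(T) * (M - M_eq), with an assumed
    # exponential temperature dependence k = k0 * exp(b * (T - 45)).
    # k0 and b are hypothetical constants; explicit Euler integration.
    k = k0 * math.exp(b * (t_air_c - 45.0))
    m = m0
    for _ in range(int(minutes / dt)):
        m -= k * (m - m_eq) * dt
    return m
```

Raising the air temperature increases k, so the moisture content falls faster toward the equilibrium value, matching the trend reported in the abstract.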
Development of a Stochastically-driven, Forward Predictive Performance Model for PEMFCs
NASA Astrophysics Data System (ADS)
Harvey, David Benjamin Paul
A one-dimensional multi-scale coupled, transient, and mechanistic performance model for a PEMFC membrane electrode assembly has been developed. The model explicitly includes each of the 5 layers within a membrane electrode assembly and solves for the transport of charge, heat, mass, species, dissolved water, and liquid water. Key features of the model include the use of a multi-step implementation of the HOR reaction on the anode, agglomerate catalyst sub-models for both the anode and cathode catalyst layers, a unique approach that links the composition of the catalyst layer to key properties within the agglomerate model, and the implementation of a stochastic input-based approach for component material properties. The model employs a new methodology for validation using statistically varying input parameters and statistically based experimental performance data; it represents the first stochastic-input-driven unit cell performance model. The stochastic-input-driven performance model was used to identify the optimal ionomer content within the cathode catalyst layer, demonstrate the role of material variation in potentially low-performing MEA materials, explain the performance of low-Pt-loaded MEAs, and investigate the validity of transient-sweep experimental diagnostic methods.
Pirkle, Catherine M; Wu, Yan Yan; Zunzunegui, Maria-Victoria; Gómez, José Fernando
2018-01-01
Objective Conceptual models underpinning much epidemiological research on ageing acknowledge that environmental, social and biological systems interact to influence health outcomes. Recursive partitioning is a data-driven approach that allows for concurrent exploration of distinct mixtures, or clusters, of individuals that have a particular outcome. Our aim is to use recursive partitioning to examine risk clusters for metabolic syndrome (MetS) and its components, in order to identify vulnerable populations. Study design Cross-sectional analysis of baseline data from a prospective longitudinal cohort called the International Mobility in Aging Study (IMIAS). Setting IMIAS includes sites from three middle-income countries—Tirana (Albania), Natal (Brazil) and Manizales (Colombia)—and two from Canada—Kingston (Ontario) and Saint-Hyacinthe (Quebec). Participants Community-dwelling male and female adults, aged 64–75 years (n=2002). Primary and secondary outcome measures We apply recursive partitioning to investigate social and behavioural risk factors for MetS and its components. Model-based recursive partitioning (MOB) was used to cluster participants into age-adjusted risk groups based on variabilities in: study site, sex, education, living arrangements, childhood adversities, adult occupation, current employment status, income, perceived income sufficiency, smoking status and weekly minutes of physical activity. Results 43% of participants had MetS. Using MOB, the primary partitioning variable was participant sex. Among women from middle-income sites, the predicted proportion with MetS ranged from 58% to 68%. Canadian women with limited physical activity had elevated predicted proportions of MetS (49%, 95% CI 39% to 58%). Among men, MetS ranged from 26% to 41% depending on childhood social adversity and education. Clustering for MetS components differed from the syndrome and across components.
Study site was a primary partitioning variable for all components except HDL cholesterol. Sex was important for most components. Conclusion MOB is a promising technique for identifying disease risk clusters (eg, vulnerable populations) in modestly sized samples. PMID:29500203
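A single step of a recursive-partitioning analogue can be sketched as below. MOB proper selects splits by testing parameter instability in a fitted model, so this prevalence-gap split over binary variables is only a simplified stand-in, with made-up records:

```python
def best_split(records, outcome, variables):
    # choose the binary partitioning variable whose split produces the
    # largest gap in outcome prevalence between the two resulting groups
    best = None
    for var in variables:
        groups = {}
        for r in records:
            groups.setdefault(r[var], []).append(r[outcome])
        if len(groups) != 2:
            continue  # only binary splits in this sketch
        a, b = groups.values()
        gap = abs(sum(a) / len(a) - sum(b) / len(b))
        if best is None or gap > best[1]:
            best = (var, gap)
    return best
```

Applied recursively to each resulting subgroup, this yields the kind of risk-cluster tree the study reports (sex at the root, then site, activity, and social variables deeper down).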
Aycinena, Ana Corina; Jennings, Kerri-Ann; Gaffney, Ann Ogden; Koch, Pamela A; Contento, Isobel R; Gonzalez, Monica; Guidon, Ela; Karmally, Wahida; Hershman, Dawn; Greenlee, Heather
2017-02-01
We developed a theory-based dietary change curriculum for Hispanic breast cancer survivors with the goal of testing the effects of the intervention on change in dietary intake of fruits/vegetables and fat in a randomized, clinical trial. Social cognitive theory and the transtheoretical model were used as theoretical frameworks to structure curriculum components using the Nutrition Education DESIGN Procedure. Formative assessments were conducted to identify facilitators and barriers common to Hispanic women and test the degree of difficulty and appropriateness of program materials. Focus groups provided valuable insight and informed preimplementation modifications to the dietary program. The result was a systematically planned, evidence-based, culturally tailored dietary intervention for Hispanic breast cancer survivors, ¡Cocinar Para Su Salud! (Cook for Your Health!). The methodology described here may serve as a framework for the development of future dietary interventions among diverse and minority populations. Short- and long-term study results will be reported elsewhere.
Using a logical information model-driven design process in healthcare.
Cheong, Yu Chye; Bird, Linda; Tun, Nwe Ni; Brooks, Colleen
2011-01-01
A hybrid standards-based approach has been adopted in Singapore to develop a Logical Information Model (LIM) for healthcare information exchange. The Singapore LIM uses a combination of international standards, including ISO13606-1 (a reference model for electronic health record communication), ISO21090 (healthcare datatypes), SNOMED CT (healthcare terminology) and HL7 v2 (healthcare messaging). This logic-based design approach also incorporates mechanisms for achieving bi-directional semantic interoperability.
Design and Application of an Ontology for Component-Based Modeling of Water Systems
NASA Astrophysics Data System (ADS)
Elag, M.; Goodall, J. L.
2012-12-01
Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community named the Water Resources Component (WRC) ontology in order to advance the application of component-based modeling frameworks across water related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.
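The kind of component metadata such an ontology captures, and the linkage check it enables, can be sketched in plain data structures; all field names and values here are illustrative, not the WRC ontology's actual terms:

```python
# hypothetical metadata for two componentized models, loosely following the
# information a WRC-style ontology would record about each component
runoff = {
    "name": "SurfaceRunoff",
    "interface": "OpenMI",
    "discipline": "hydrology",
    "inputs": [("precipitation", "mm/day")],
    "outputs": [("streamflow", "m^3/s")],
}
nitrogen = {
    "name": "NitrogenLoss",
    "interface": "OpenMI",
    "discipline": "ecology",
    "inputs": [("streamflow", "m^3/s")],
    "outputs": [("nitrogen_load", "kg/day")],
}

def linkable(upstream, downstream):
    # components can be chained when an output quantity/unit of the upstream
    # matches an input of the downstream -- the cross-discipline semantic
    # check that shared component metadata makes possible
    return any(o in downstream["inputs"] for o in upstream["outputs"])
```

This is the semantic-heterogeneity problem in miniature: without agreed quantity and unit terms, the hydrology and ecology components could not be matched automatically.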
An ontology-driven, case-based clinical decision support model for removable partial denture design
NASA Astrophysics Data System (ADS)
Chen, Qingxiao; Wu, Ji; Li, Shusen; Lyu, Peijun; Wang, Yong; Li, Miao
2016-06-01
We present the initial work toward developing a clinical decision support model for specific design of removable partial dentures (RPDs) in dentistry. We developed an ontological paradigm to represent knowledge of a patient’s oral conditions and denture component parts. During the case-based reasoning process, a cosine similarity algorithm was applied to calculate similarity values between input patients and standard ontology cases. A group of designs from the most similar cases were output as the final results. To evaluate this model, the output designs of RPDs for 104 randomly selected patients were compared with those selected by professionals. An area under the curve of the receiver operating characteristic (AUC-ROC) was created by plotting true-positive rates against the false-positive rate at various threshold settings. The precision at position 5 of the retrieved cases was 0.67 and at the top of the curve it was 0.96, both of which are very high. The mean average precision (MAP) was 0.61 and the normalized discounted cumulative gain (NDCG) was 0.74, both of which confirmed the efficient performance of our model. All the metrics demonstrated the efficiency of our model. This methodology merits further research development to match clinical applications for designing RPDs. This paper is organized as follows. After the introduction and description of the basis for the paper, the evaluation and results are presented in Section 2. Section 3 provides a discussion of the methodology and results. Section 4 describes the details of the ontology, similarity algorithm, and application.
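The retrieval step can be sketched as cosine ranking over case feature vectors; the feature encoding and case structure below are assumptions for illustration, not the paper's ontology representation:

```python
import math

def cosine(u, v):
    # cosine similarity between an input patient's feature vector and a case
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def most_similar(patient, cases, k=5):
    # rank the stored ontology cases and return the k most similar; the RPD
    # designs attached to those cases would be offered as candidates
    return sorted(cases, key=lambda c: cosine(patient, c["features"]),
                  reverse=True)[:k]
```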
Reliability evaluation methodology for NASA applications
NASA Technical Reports Server (NTRS)
Taneja, Vidya S.
1992-01-01
Liquid rocket engine technology has been characterized by the development of complex systems containing a large number of subsystems, components, and parts. The trend to even larger and more complex systems is continuing. Liquid rocket engineers have focused mainly on performance-driven designs to increase the payload delivery of a launch vehicle for a given mission. In other words, although the failure of a single inexpensive part or component may cause the failure of the system, reliability in general has not been considered one of the system parameters like cost or performance. Until now, quantification of reliability has not been a consideration during system design and development in the liquid rocket industry. Engineers and managers have long been aware that the reliability of a system increases during development, but no serious attempts have been made to quantify it. As a result, a method to quantify reliability during design and development is needed. This includes the application of probabilistic models that utilize both engineering analysis and test data. Classical methods require the use of operating data for reliability demonstration. In contrast, the method described in this paper is based on similarity, analysis, and testing combined with Bayesian statistical analysis.
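A Bayesian treatment along these lines combines a prior (informed by similarity and engineering analysis) with test data; a minimal conjugate Beta-Binomial sketch of the update, not the paper's specific models:

```python
def beta_update(alpha, beta, successes, failures):
    # conjugate Bayesian update of a Beta(alpha, beta) prior on component
    # reliability after a block of tests with the given success/failure counts
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    # point estimate of reliability from the posterior distribution
    return alpha / (alpha + beta)
```

Repeating the update after each development test series quantifies the reliability growth that engineers previously tracked only qualitatively.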
Cylinders vs. Spheres: Biofluid Shear Thinning in Driven Nanoparticle Transport
Cribb, Jeremy A.; Meehan, Timothy D.; Shah, Sheel M.; Skinner, Kwan; Superfine, Richard
2011-01-01
Increasingly, the research community applies magnetophoresis to micro- and nanoscale particles for drug delivery applications and the nanoscale rheological characterization of complex biological materials. Of particular interest is the design and transport of these magnetic particles through entangled polymeric fluids commonly found in biological systems. We report the magnetophoretic transport of spherical and rod-shaped particles through viscoelastic, entangled solutions using lambda-phage DNA (λ-DNA) as a model system. In order to understand and predict the observed phenomena, we fully characterize three fundamental components: the magnetic field and field gradient, the shape and magnetic properties of the probe particles, and the macroscopic rheology of the solution. Particle velocities obtained in Newtonian solutions correspond to macroscale rheology, with forces calculated via Stokes Law. In λ-DNA solutions, nanorod velocities are 100 times larger than predicted by measured zero-shear viscosity. These results are consistent with particles experiencing transport through a shear thinning fluid, indicating magnetically driven transport in shear thinning may be especially effective and favor narrow diameter, high aspect ratio particles. A complete framework for designing single-particle magnetic-based delivery systems results when we combine a quantified magnetic system with qualified particles embedded in a characterized viscoelastic medium. PMID:20571853
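The Newtonian baseline invoked above follows directly from Stokes' law; a minimal helper, assuming a spherical particle in a Newtonian fluid (the parameter values in the usage below are illustrative):

```python
import math

def stokes_velocity(force, radius, viscosity):
    # terminal velocity of a sphere dragged through a Newtonian fluid by a
    # constant magnetic force, from Stokes drag F = 6*pi*eta*r*v
    return force / (6.0 * math.pi * viscosity * radius)
```

The 100-fold excess of measured nanorod velocities over this prediction (with the zero-shear viscosity inserted for eta) is what signals shear thinning around the moving particle.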
Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong
2011-12-01
In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.
EcoliWiki: a wiki-based community resource for Escherichia coli
McIntosh, Brenley K.; Renfro, Daniel P.; Knapp, Gwendowlyn S.; Lairikyengbam, Chanchala R.; Liles, Nathan M.; Niu, Lili; Supak, Amanda M.; Venkatraman, Anand; Zweifel, Adrienne E.; Siegele, Deborah A.; Hu, James C.
2012-01-01
EcoliWiki is the community annotation component of the PortEco (http://porteco.org; formerly EcoliHub) project, an online data resource that integrates information on laboratory strains of Escherichia coli, its phages, plasmids and mobile genetic elements. As one of the early adopters of the wiki approach to model organism databases, EcoliWiki was designed to not only facilitate community-driven sharing of biological knowledge about E. coli as a model organism, but also to be interoperable with other data resources. EcoliWiki content currently covers genes from five laboratory E. coli strains, 21 bacteriophage genomes, F plasmid and eight transposons. EcoliWiki integrates the Mediawiki wiki platform with other open-source software tools and in-house software development to extend how wikis can be used for model organism databases. EcoliWiki can be accessed online at http://ecoliwiki.net. PMID:22064863
View subspaces for indexing and retrieval of 3D models
NASA Astrophysics Data System (ADS)
Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel
2010-02-01
View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images and even 2D sketches. The previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features and 2D Digital Fourier Transform coefficients. These methods describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with the competitor view-based 3D shape retrieval algorithms.
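Training a one-axis view subspace can be sketched with power iteration; this is a minimal stand-in for the PCA (and, by extension, ICA/NMF) subspaces explored in the paper, with each depth image flattened to a plain list of pixel values:

```python
def leading_view_axis(views, iters=200):
    # power iteration for the first principal axis of a set of depth-image
    # vectors: center the data, then repeatedly apply the covariance
    # operator C w = X^T (X w) and renormalize
    n = len(views[0])
    mu = [sum(v[j] for v in views) / len(views) for j in range(n)]
    x = [[v[j] - mu[j] for j in range(n)] for v in views]
    w = [1.0] * n
    for _ in range(iters):
        s = [sum(r[j] * w[j] for j in range(n)) for r in x]          # X w
        w = [sum(x[i][j] * s[i] for i in range(len(x)))              # X^T (X w)
             for j in range(n)]
        norm = sum(c * c for c in w) ** 0.5
        w = [c / norm for c in w]
    return w
```

Projecting query views onto a handful of such axes yields the compact descriptors used for indexing and retrieval.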
Hydraulic model of the proposed Water Recovery and Management system for Space Station Freedom
NASA Technical Reports Server (NTRS)
Martin, Charles E.; Bacskay, Allen S.
1991-01-01
A model of the Water Recovery and Management (WRM) system utilizing SINDA '85/FLUINT to determine its hydraulic operation characteristics, and to verify the design flow and pressure drop parameters is presented. The FLUINT analysis package is employed in the model to determine the flow and pressure characteristics when each of the different loop components is operational and contributing to the overall flow pattern. The water is driven in each loop by storage tanks pressurized with cabin air, and is routed through the system to the desired destination.
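The flow/pressure balance of a series loop can be sketched with quadratic component losses dp = K·Q²; the coefficients below are illustrative, and the real WRM model resolves this with SINDA '85/FLUINT rather than a closed form:

```python
def loop_flow(delta_p, k_components):
    # series hydraulic loop: each component drop is dp_i = K_i * Q**2, so
    # the drops add and the tank pressurization dP drives
    # Q = sqrt(dP / sum(K_i))
    return (delta_p / sum(k_components)) ** 0.5

def component_drops(q, k_components):
    # pressure drop across each component at the resulting flow rate
    return [k * q ** 2 for k in k_components]
```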
Frydel, Derek; Levin, Yan
2018-01-14
In the present work, we investigate a gas-liquid transition in a two-component Gaussian core model, where particles of the same species repel and those of different species attract. Unlike a similar transition in a one-component system with particles having attractive interactions at long separations and repulsive interactions at short separations, a transition in the two-component system is not driven solely by interactions but by a specific feature of the interactions, the correlations. This leads to extremely low critical temperature, as correlations are dominant in the strong-coupling limit. By carrying out various approximations based on standard liquid-state methods, we show that a gas-liquid transition of the two-component system poses a challenging theoretical problem.
Johnson, Douglas H.; Cook, R.D.
2013-01-01
In her AAAS News & Notes piece "Can the Southwest manage its thirst?" (26 July, p. 362), K. Wren quotes Ajay Kalra, who advocates a particular method for predicting Colorado River streamflow "because it eschews complex physical climate models for a statistical data-driven modeling approach." A preference for data-driven models may be appropriate in this individual situation, but it is not so generally. Data-driven models often come with a warning against extrapolating beyond the range of the data used to develop the models. When the future is like the past, data-driven models can work well for prediction, but it is easy to over-model local or transient phenomena, often leading to predictive inaccuracy (1). Mechanistic models are built on established knowledge of the process that connects the response variables with the predictors, using information obtained outside of an extant data set. One may shy away from a mechanistic approach when the underlying process is judged to be too complicated, but good predictive models can be constructed with statistical components that account for ingredients missing in the mechanistic analysis. Models with sound mechanistic components are more generally applicable and robust than data-driven models.
2007-12-01
This work prototypes an architectural design that is generalizable, reusable, and extensible, and creates an initial set of model elements that demonstrate the approach. Finally, we build a small agent-based model using the component architecture to demonstrate the library's functionality.
Strategic Industrial Alliances in Paper Industry: XML- vs Ontology-Based Integration Platforms
ERIC Educational Resources Information Center
Naumenko, Anton; Nikitin, Sergiy; Terziyan, Vagan; Zharko, Andriy
2005-01-01
Purpose: To identify cases related to design of ICT platforms for industrial alliances, where the use of Ontology-driven architectures based on Semantic web standards is more advantageous than application of conventional modeling together with XML standards. Design/methodology/approach: A comparative analysis of the two latest and the most obvious…
Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria
NASA Astrophysics Data System (ADS)
Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong
2017-08-01
In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
EMMA: a new paradigm in configurable software
Nogiec, J. M.; Trombly-Freytag, K.
2017-11-23
EMMA is a framework designed to create a family of configurable software systems, with emphasis on extensibility and flexibility. It is based on a loosely coupled, event driven architecture. The EMMA framework has been built upon the premise of composing software systems from independent components. It opens up opportunities for reuse of components and their functionality and composing them together in many different ways. As a result, it provides the developer of test and measurement applications with a lightweight alternative to microservices, while sharing their various advantages, including composability, loose coupling, encapsulation, and reuse.
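The loosely coupled, event-driven style described here can be illustrated with a minimal publish/subscribe bus; this sketches the architectural style only and is not the EMMA API:

```python
class EventBus:
    # minimal publish/subscribe bus: components interact by publishing and
    # subscribing to named events rather than calling each other directly,
    # which keeps them independently composable and reusable
    def __init__(self):
        self._handlers = {}

    def subscribe(self, topic, handler):
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._handlers.get(topic, []):
            handler(payload)
```

A logging component, for instance, can be added or removed without touching the measurement component that emits the events.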
Network Model-Assisted Inference from Respondent-Driven Sampling Data
Gile, Krista J.; Handcock, Mark S.
2015-01-01
Summary Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328
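The standard design-based baseline that such model-assisted estimators improve upon is the Volz-Heckathorn (RDS-II) estimator, which weights each respondent inversely to their reported network degree as a proxy for inclusion probability. A minimal sketch (not the article's new estimator class):

```python
def rds_ii_estimate(values, degrees):
    """Volz-Heckathorn (RDS-II) estimator of a population mean:
    each respondent i with outcome values[i] and reported network
    degree degrees[i] gets weight 1/degree, approximating unequal
    inclusion probabilities under link-tracing sampling."""
    weights = [1.0 / d for d in degrees]
    return sum(w * y for w, y in zip(weights, values)) / sum(weights)

# High-degree respondents are down-weighted: with outcomes [1,1,0,0]
# and degrees [10,10,2,2], the estimate is pulled well below the
# naive sample mean of 0.5.
```

The model-assisted approach in the article replaces this simple 1/degree weighting with weights derived from a working network model, which is what yields the improved performance reported.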
Wind Turbine Blade CAD Models Used as Scaffolding Technique to Teach Design Engineers
ERIC Educational Resources Information Center
Irwin, John
2013-01-01
The Siemens PLM CAD software NX is commonly used for designing mechanical systems, and in complex systems such as the emerging area of wind power, the ability to have a model controlled by design parameters is a certain advantage. Formula driven expressions based on the amount of available wind in an area can drive the amount of effective surface…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno, Gilbert; Bennion, Kevin
This project will develop thermal management strategies to enable efficient and high-temperature wide-bandgap (WBG)-based power electronic systems (e.g., emerging inverter and DC-DC converter designs). The use of WBG-based devices in automotive power electronics will improve efficiency and increase driving range in electric-drive vehicles; however, the implementation of this technology is limited, in part, due to thermal issues. This project will develop system-level thermal models to determine the thermal limitations of current automotive power modules under elevated device temperature conditions. Additionally, novel cooling concepts and material selection will be evaluated to enable high-temperature silicon and WBG devices in power electronics components. WBG devices (silicon carbide [SiC], gallium nitride [GaN]) promise to increase efficiency, but will be driven as hard as possible. This creates challenges for thermal management and reliability.
Spin Seebeck effect in a metal-single-molecule-magnet-metal junction
NASA Astrophysics Data System (ADS)
Niu, Pengbin; Liu, Lixiang; Su, Xiaoqiang; Dong, Lijuan; Luo, Hong-Gang
2018-01-01
We investigate the nonlinear regime of temperature-driven spin-related currents through a single-molecule magnet (SMM) connected to two metal electrodes. Under a large-spin approximation, the SMM simplifies to a natural two-channel model with a spin-opposite configuration and Coulomb interaction. We find that in the temperature-driven case the system can generate spin-polarized currents. More interestingly, at the electron-hole symmetry point, the competition between the two channels induces a temperature-driven pure spin current. This device demonstrates that a temperature-driven SMM junction behaves differently from the usual quantum dot model, which may be useful in the future design of thermal-based molecular spintronic devices.
Effects of Interdisciplinary Education on Technology-Driven Application Design
ERIC Educational Resources Information Center
Tafa, Z.; Rakocevic, G.; Mihailovic, D.; Milutinovic, V.
2011-01-01
This paper describes the structure and the underlying rationale of a new course dedicated to capability maturity model integration (CMMI)-directed design of wireless sensor networks (WSNs)-based biomedical applications that stresses: 1) engineering-, medico-engineering-, and informatics-related issues; 2) design for general- and special-purpose…
NASA Astrophysics Data System (ADS)
Abisset-Chavanne, Emmanuelle; Duval, Jean Louis; Cueto, Elias; Chinesta, Francisco
2018-05-01
Traditionally, Simulation-Based Engineering Sciences (SBES) has relied on the use of static data inputs (model parameters, initial or boundary conditions, … obtained from adequate experiments) to perform simulations. A new paradigm in the field of Applied Sciences and Engineering has emerged in the last decade. Dynamic Data-Driven Application Systems [9, 10, 11, 12, 22] allow the linkage of simulation tools with measurement devices for real-time control of simulations and applications, entailing the ability to dynamically incorporate additional data into an executing application, and in reverse, the ability of an application to dynamically steer the measurement process. It is in that context that traditional "digital twins" are giving rise to a new generation of goal-oriented data-driven application systems, also known as "hybrid twins", embracing models based on physics and models exclusively based on data adequately collected and assimilated for filling the gap between usual model predictions and measurements. Within this framework new methodologies based on model learners, machine learning and kinetic goal-oriented design are defining a new paradigm in materials, processes and systems engineering.
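The hybrid-twin idea of "physics model plus data-driven gap correction" can be sketched in miniature. The physics model, the data, and the linear discrepancy fit below are all illustrative assumptions, not the paper's method: measurements are compared against a deliberately imperfect first-principles prediction, and the gap is learned by least squares.

```python
def physics_model(t):
    # Simplified first-principles prediction (assumed to be biased)
    return 2.0 * t

def fit_discrepancy(ts, measured, model):
    """Least-squares linear fit of the model-measurement gap; returns a
    'hybrid' predictor = physics model + learned correction. Illustrative
    only: real hybrid twins use richer learners and assimilation."""
    r = [y - model(t) for t, y in zip(ts, measured)]   # residuals
    n = len(ts)
    tbar, rbar = sum(ts) / n, sum(r) / n
    a = (sum((t - tbar) * (ri - rbar) for t, ri in zip(ts, r))
         / sum((t - tbar) ** 2 for t in ts))
    b = rbar - a * tbar
    return lambda t: model(t) + a * t + b

# Measurements follow y = 2.5 t + 1, so the physics model alone is off
hybrid = fit_discrepancy([0.0, 1.0, 2.0, 3.0],
                         [1.0, 3.5, 6.0, 8.5], physics_model)
```

The corrected predictor now extrapolates the assimilated data while retaining the physics backbone, which is the "filling the gap" role the abstract assigns to the data-based half of a hybrid twin.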
Data base architecture for instrument characteristics critical to spacecraft conceptual design
NASA Technical Reports Server (NTRS)
Rowell, Lawrence F.; Allen, Cheryl L.
1990-01-01
Spacecraft designs are driven by the payloads and mission requirements that they support. Many of the payload characteristics, such as mass, power requirements, communication requirements, moving parts, and so forth directly affect the choices for the spacecraft structural configuration and its subsystem design and component selection. The conceptual design process, which translates mission requirements into early spacecraft concepts, must be tolerant of frequent changes in the payload complement and resource requirements. A computer data base was designed and implemented for the purposes of containing the payload characteristics pertinent for spacecraft conceptual design, tracking the evolution of these payloads over time, and enabling the integration of the payload data with engineering analysis programs for improving the efficiency in producing spacecraft designs. In-house tools were used for constructing the data base and for performing the actual integration with an existing program for optimizing payload mass locations on the spacecraft.
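A payload-characteristics table of the kind described can be sketched with an in-memory relational store. The schema and values below are illustrative inventions, not the paper's actual database design:

```python
import sqlite3

# In-memory sketch of a payload-characteristics table for conceptual
# design (column names and rows are hypothetical, not the actual schema).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE payload (
    name TEXT PRIMARY KEY,
    mass_kg REAL,
    power_w REAL,
    has_moving_parts INTEGER)""")
conn.executemany("INSERT INTO payload VALUES (?, ?, ?, ?)",
                 [("spectrometer", 120.0, 300.0, 0),
                  ("scan_platform", 85.0, 150.0, 1)])

# An engineering-analysis program can then query aggregate quantities,
# e.g. total payload mass for a mass-placement optimization.
total_mass = conn.execute("SELECT SUM(mass_kg) FROM payload").fetchone()[0]
```

Keeping such characteristics in one queryable store is what lets the payload complement change frequently while downstream analysis programs simply re-query it, matching the tolerance for change the abstract calls for.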
Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation
Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee
2018-01-01
This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact, as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis functions network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self-contact and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, while the user study revealed the force feedback of the proposed simulator to be realistic. PMID:29342964
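The core fitting step of a radial basis function network can be sketched in one dimension. This is a generic exact-interpolation RBF fit with Gaussian kernels, assumed for illustration; the article's network uses a six-dimensional input and its own optimization procedure.

```python
import math

def gaussian_rbf_fit(xs, ys, eps=1.0):
    """Fit an exact-interpolation RBF network: Gaussian kernels are
    centred on the training inputs and the weights come from solving
    the dense interpolation system A w = y."""
    n = len(xs)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xs[i] - xs[j])) for j in range(n)] for i in range(n)]
    w = solve(A, list(ys))
    return lambda x: sum(wj * phi(abs(x - xj)) for wj, xj in zip(w, xs))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small systems only)
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
            b[i] -= f * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x
```

By construction the fitted function passes through every training point and interpolates smoothly between them, which is the property that lets such a network reproduce recorded force responses at unseen contact states.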
Rational design of metal nitride redox materials for solar-driven ammonia synthesis.
Michalsky, Ronald; Pfromm, Peter H; Steinfeld, Aldo
2015-06-06
Fixed nitrogen is an essential chemical building block for plant and animal protein, which makes ammonia (NH3) a central component of synthetic fertilizer for the global production of food and biofuels. A global project on artificial photosynthesis may foster the development of production technologies for renewable NH3 fertilizer, hydrogen carrier and combustion fuel. This article presents an alternative path for the production of NH3 from nitrogen, water and solar energy. The process is based on a thermochemical redox cycle driven by concentrated solar process heat at 700-1200°C that yields NH3 via the oxidation of a metal nitride with water. The metal nitride is recycled via solar-driven reduction of the oxidized redox material with nitrogen at atmospheric pressure. We employ electronic structure theory for the rational high-throughput design of novel metal nitride redox materials and to show how transition-metal doping controls the formation and consumption of nitrogen vacancies in metal nitrides. We confirm experimentally that iron doping of manganese nitride increases the concentration of nitrogen vacancies compared with no doping. The experiments are rationalized through the average energy of the dopant d-states, a descriptor for the theory-based design of advanced metal nitride redox materials to produce sustainable solar thermochemical ammonia.
ERIC Educational Resources Information Center
St.Germain, Elijah J.; Horowitz, Andrew S.; Rucco, Dominic; Rezler, Evonne M.; Lepore, Salvatore D.
2017-01-01
An organic chemistry experiment is described that is based on recent research to elucidate a novel cation-pi interaction between tetraalkylammonium cations and propargyl hydrazines. This nonbonded interaction is a key component of the mechanism of ammonium-catalyzed intramolecular cycloaddition of nitrogen to the terminal carbon of a C-C triple bond…
Davidsen, Peter K; Turan, Nil; Egginton, Stuart; Falciani, Francesco
2016-02-01
The overall aim of physiological research is to understand how living systems function in an integrative manner. Consequently, the discipline of physiology has since its infancy attempted to link multiple levels of biological organization. Increasingly this has involved mathematical and computational approaches, typically to model a small number of components spanning several levels of biological organization. With the advent of "omics" technologies, which can characterize the molecular state of a cell or tissue (intended as the level of expression and/or activity of its molecular components), the number of molecular components we can quantify has increased exponentially. Paradoxically, the unprecedented amount of experimental data has made it more difficult to derive conceptual models underlying essential mechanisms regulating mammalian physiology. We present an overview of state-of-the-art methods currently used to identify biological networks underlying genomewide responses. These are based on a data-driven approach that relies on advanced computational methods designed to "learn" biology from observational data. In this review, we illustrate an application of these computational methodologies using a case study integrating an in vivo model representing the transcriptional state of hypoxic skeletal muscle with a clinical study representing muscle wasting in chronic obstructive pulmonary disease patients. The broader application of these approaches to modeling multiple levels of biological data in the context of modern physiology is discussed. Copyright © 2016 the American Physiological Society.
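The simplest member of the network-inference family reviewed here is a co-expression network: genes become nodes, and an edge is drawn when two expression profiles correlate strongly. A toy sketch (thresholded Pearson correlation, one of many possible association measures):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coexpression_edges(expr, threshold=0.9):
    """Edges between genes whose expression profiles across samples
    correlate above the threshold; a minimal stand-in for the
    network-inference methods discussed in the review."""
    genes = sorted(expr)
    return [(g, h) for i, g in enumerate(genes) for h in genes[i + 1:]
            if abs(pearson(expr[g], expr[h])) >= threshold]
```

Real methods add multiple-testing control, partial correlations or mutual information to separate direct from indirect associations, but the node-edge abstraction is the same.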
Semantic interoperability--HL7 Version 3 compared to advanced architecture standards.
Blobel, B G M E; Engel, K; Pharow, P
2006-01-01
To meet the challenge for high quality and efficient care, highly specialized and distributed healthcare establishments have to communicate and co-operate in a semantically interoperable way. Information and communication technology must be open, flexible, scalable, knowledge-based and service-oriented as well as secure and safe. For enabling semantic interoperability, a unified process for defining and implementing the architecture, i.e. structure and functions of the cooperating systems' components, as well as the approach for knowledge representation, i.e. the used information and its interpretation, algorithms, etc. have to be defined in a harmonized way. Deploying the Generic Component Model, systems and their components, underlying concepts and applied constraints must be formally modeled, strictly separating platform-independent from platform-specific models. As HL7 Version 3 claims to represent the most successful standard for semantic interoperability, HL7 has been analyzed regarding the requirements for model-driven, service-oriented design of semantic interoperable information systems, thereby moving from a communication to an architecture paradigm. The approach is compared with advanced architectural approaches for information systems such as OMG's CORBA 3 or EHR systems such as GEHR/openEHR and CEN EN 13606 Electronic Health Record Communication. HL7 Version 3 is maturing towards an architectural approach for semantic interoperability. Despite current differences, there is a close collaboration between the teams involved guaranteeing a convergence between competing approaches.
Huang, Huiyuan; Ding, Zhongxiang; Mao, Dewang; Yuan, Jianhua; Zhu, Fangmei; Chen, Shuda; Xu, Yan; Lou, Lin; Feng, Xiaoyan; Qi, Le; Qiu, Wusi; Zhang, Han; Zang, Yu-Feng
2016-10-01
The main goal of brain tumor surgery is to maximize tumor resection while minimizing the risk of irreversible postoperative functional sequelae. Eloquent functional areas should be delineated preoperatively, particularly for patients with tumors near eloquent areas. Functional magnetic resonance imaging (fMRI) is a noninvasive technique that demonstrates great promise for presurgical planning. However, specialized data processing toolkits for presurgical planning remain lacking. Based on several functions in open-source software such as Statistical Parametric Mapping (SPM), Resting-State fMRI Data Analysis Toolkit (REST), Data Processing Assistant for Resting-State fMRI (DPARSF) and Multiple Independent Component Analysis (MICA), here, we introduce an open-source MATLAB toolbox named PreSurgMapp. This toolbox can reveal eloquent areas using comprehensive methods and various complementary fMRI modalities. For example, PreSurgMapp supports both model-based (general linear model, GLM, and seed correlation) and data-driven (independent component analysis, ICA) methods and processes both task-based and resting-state fMRI data. PreSurgMapp is designed for highly automatic and individualized functional mapping with a user-friendly graphical user interface (GUI) for time-saving pipeline processing. For example, sensorimotor and language-related components can be automatically identified without human intervention, using an effective and accurate component-identification algorithm based on a discriminability index. All the results generated can be further evaluated and compared by neuro-radiologists or neurosurgeons. This software has substantial value for clinical neuro-radiology and neuro-oncology, including application to patients with low- and high-grade brain tumors and those with epilepsy foci in the dominant language hemisphere who are planning to undergo a temporal lobectomy.
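Of the model-based methods named above, seed correlation is the easiest to sketch: a seed region's time series is correlated with every other voxel's time series to produce a connectivity map. The toy data structures below are illustrative, not PreSurgMapp's implementation:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def seed_correlation_map(seed_ts, voxel_ts):
    """Seed-based functional connectivity: correlate the seed region's
    time series with each voxel's (voxel_ts maps voxel id -> series).
    A toy illustration of the method, not the toolbox's code."""
    return {v: pearson(seed_ts, ts) for v, ts in voxel_ts.items()}
```

Voxels whose signal rises and falls with the seed score near +1 and are candidates for the same functional network; in presurgical use the seed would be placed in, say, the motor cortex to map the sensorimotor network.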
30 CFR 18.60 - Detailed inspection of components.
Code of Federal Regulations, 2010 CFR
2010-07-01
..., EVALUATION, AND APPROVAL OF MINING PRODUCTS ELECTRIC MOTOR-DRIVEN MINE EQUIPMENT AND ACCESSORIES Inspections and Tests § 18.60 Detailed inspection of components. An inspection of each electrical component shall... design and construction. (e) Examination for adequacy of electrical insulation and clearances between...
NASA Technical Reports Server (NTRS)
Penn, John M.
2013-01-01
This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA/Johnson Space Center and many other NASA facilities. It describes what was learned and the significant benefits seen, such as fast, thorough, and clear test feedback every time code is checked-in to the code repository. It also describes a system that encourages development of code that is much more flexible, maintainable, and reliable. The Trick Simulation Toolkit development environment provides a common architecture for user-defined simulations. Trick builds executable simulations using user-supplied simulation-definition files (S_define) and user supplied "model code". For each Trick-based simulation, Trick automatically provides job scheduling, checkpoint / restore, data-recording, interactive variable manipulation (variable server), and an input-processor. Also included are tools for plotting recorded data and various other supporting tools and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX. Prior to adopting this new development approach, Trick testing consisted primarily of running a few large simulations, with the hope that their complexity and scale would exercise most of Trick's code and expose any recently introduced bugs. Unsurprisingly, this approach yielded inconsistent results. It was obvious that a more systematic, thorough approach was required. After seeing examples of some Java-based projects that used the JUnit test framework, similar test frameworks for C and C++ were sought. Several were found, all clearly inspired by JUnit. Googletest, a freely available open-source testing framework, was selected as the most appropriate and capable.
The new approach was implemented while rewriting the Trick memory management component, to eliminate a fundamental design flaw. The benefits became obvious almost immediately, not just in the correctness of the individual functions and classes but also in the correctness and flexibility being added to the overall design. Creating code to be testable, and testing it as it was created, resulted not only in better working code, but also in better-organized, flexible, and readable (i.e., articulate) code. This was, in essence, the Test-Driven Development (TDD) methodology created by Kent Beck. Seeing the benefits of Test Driven Development, other Trick components were refactored to make them more testable, and tests were designed and implemented for them.
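The red-green rhythm described here (specify behaviour in a failing test, then write the simplest code that passes) can be illustrated in miniature. The allocator below is a hypothetical stand-in for the memory-management component, shown in Python's unittest style rather than the project's actual C++/Googletest code:

```python
import unittest

# Step 1 (red): the desired behaviour is written as a test *before*
# the implementation exists. Running it first fails, then drives design.
class TestAllocator(unittest.TestCase):
    def test_allocate_then_free_returns_all_bytes(self):
        a = Allocator(capacity=64)
        block = a.allocate(16)
        a.free(block)
        self.assertEqual(a.bytes_in_use, 0)

# Step 2 (green): the simplest implementation that makes the test pass.
class Allocator:
    """Toy memory-manager stand-in (hypothetical, not Trick's code)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.bytes_in_use = 0

    def allocate(self, size):
        self.bytes_in_use += size
        return size

    def free(self, block):
        self.bytes_in_use -= block
```

Running `python -m unittest` executes the specification; a continuous integration system simply repeats this on every check-in, which is exactly the fast-feedback loop the paper credits with catching regressions early.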
Sustainability Assessment Model in Product Development
NASA Astrophysics Data System (ADS)
Turan, Faiz Mohd; Johan, Kartina; Nor, Nik Hisyamudin Muhd; Omar, Badrul
2017-08-01
Faster and more efficient development of innovative and sustainable products has become the focus for manufacturing companies in order to remain competitive in today’s technologically driven world. Design concept evaluation, which concludes the conceptual design stage, is one of the most critical decision points. It relates to the final success of product development, because poor criteria assessment in design concept evaluation can rarely be compensated for at later stages. Furthermore, consumers, investors, shareholders and even competitors are basing their decisions on what to buy or invest in, from whom, and also on what companies report, and sustainability is a critical component. In this research, a new methodology of sustainability assessment in product development for Malaysian industry has been developed using an integration of green project management, a new scale of “weighting criteria” and Rough-Grey Analysis. This method will help design engineers to improve the effectiveness and objectivity of sustainable design concept evaluation, enable them to make better-informed decisions before finalising their choice and consequently create value for the company or industry. The new framework is expected to provide an alternative to existing methods.
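The weighting-criteria step common to concept-evaluation methods can be sketched as a weighted-sum score. This is a generic sketch with invented criteria names; the paper's actual method couples its new weighting scale with Rough-Grey Analysis, which is not reproduced here.

```python
def weighted_score(scores, weights):
    """Weighted-sum score for one design concept: scores and weights
    are dicts keyed by criterion (names here are hypothetical).
    Returns the weight-normalized aggregate used to rank concepts."""
    total_w = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_w

# A concept rated 4/5 on cost but 2/5 on environmental impact, where
# the evaluator weights environment three times as heavily as cost:
concept_score = weighted_score({"cost": 4, "environment": 2},
                               {"cost": 1, "environment": 3})
```

Ranking several concepts by such scores makes the trade-offs explicit; the motivation for Rough-Grey extensions is precisely that crisp scores like these hide evaluator uncertainty.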
NASA Astrophysics Data System (ADS)
Rock, B. N.; Hale, S.; Graham, K.; Hayden, L. B.
2009-12-01
Watershed Watch (NSF 0525433) engages early undergraduate students from two-year and four-year colleges in student-driven full inquiry-based instruction in the biogeosciences. Program goals for Watershed Watch are to test if inquiry-rich student-driven projects sufficiently engage undeclared students (or noncommittal STEM majors) to declare a STEM major (or remain with their STEM major). The program is a partnership between two four-year campuses - the University of New Hampshire (UNH), and Elizabeth City State University (ECSU, in North Carolina); and two two-year campuses - Great Bay Community College (GBCC, in New Hampshire) and the College of the Albemarle (COA, in North Carolina). The program focuses on two watersheds: the Merrimack River Watershed in New Hampshire and Massachusetts, and the Pasquotank River Watershed in Virginia and North Carolina. Both the terrestrial and aquatic components of both watersheds are evaluated using the student-driven projects. A significant component of this program is an intensive two-week Summer Research Institute (SRI), in which undeclared freshmen and sophomores investigate various aspects of their local watershed. Two Summer Research Institutes have been held on the UNH campus (2006 and 2008) and two on the ECSU campus (2007 and 2009). Students develop their own research questions and study design, collect and analyze data, and produce a scientific oral or poster presentation on the last day of the SRI. The course objectives, curriculum and schedule are presented as a model for dissemination for other institutions and programs seeking to develop inquiry-rich programs or courses designed to attract students into biogeoscience disciplines.
Data from self-reported student feedback indicate the most important factors explaining high levels of student motivation and research excellence in the program are: 1) working with committed, energetic, and enthusiastic faculty mentors, and 2) faculty mentors demonstrating high degrees of teamwork and coordination. The past four Summer Research Institutes have engaged over 100 entry-level undergraduate students in the process of learning science by doing it, and approximately 50% of those participating have declared majors in a wide range of science fields. A total of eight Watershed Watch students have presented findings from their SRI research projects at AGU meetings in 2007, 2008, and 2009. This presentation will highlight the lessons learned over the past four years in the Watershed Watch program.
NASA Astrophysics Data System (ADS)
Stotz, I. L.; Iaffaldano, G.; Davies, D. R.
2018-01-01
The Pacific Plate is thought to be driven mainly by slab pull, associated with subduction along the Aleutians-Japan, Marianas-Izu-Bonin, and Tonga-Kermadec trenches. This implies that viscous flow within the sub-Pacific asthenosphere is mainly generated by overlying plate motion (i.e., Couette flow) and that the associated shear stresses at the lithosphere's base are resisting such motion. Recent studies on glacial isostatic adjustment and lithosphere dynamics provide tighter constraints on the viscosity and thickness of Earth's asthenosphere and, therefore, on the amount of shear stress that asthenosphere and lithosphere mutually exchange, by virtue of Newton's third law of motion. In light of these constraints, the notion that subduction is the main driver of present-day Pacific Plate motion becomes somewhat unviable, as the pulling force that would be required by slabs exceeds the maximum available from their negative buoyancy. Here we use coupled global models of mantle and lithosphere dynamics to show that the sub-Pacific asthenosphere features a significant component of pressure-driven (i.e., Poiseuille) flow and that this has driven at least 50% of the Pacific Plate motion since, at least, 15 Ma. A corollary of our models is that a sublithospheric pressure difference as high as ±50 MPa is required across the Pacific domain.
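The distinction between Couette (plate-driven) and Poiseuille (pressure-driven) flow in an asthenospheric channel reduces to a standard velocity profile. The sketch below uses the textbook solution for steady flow between a fixed base (z = 0) and a moving plate (z = h); the numbers in the example are arbitrary, not the paper's values.

```python
def channel_velocity(z, h, u_plate, dpdx, mu):
    """Horizontal velocity at height z in a viscous channel of
    thickness h: a Couette term driven by the overlying plate moving
    at u_plate, plus a Poiseuille term driven by the lateral pressure
    gradient dpdx, for a fluid of viscosity mu."""
    couette = u_plate * z / h
    poiseuille = (dpdx / (2.0 * mu)) * (z * z - h * z)
    return couette + poiseuille
```

With dpdx = 0 the profile is linear and the channel purely resists plate motion; a pressure gradient of the right sign adds mid-channel flow in the direction of plate motion, which is how Poiseuille flow can *drive* rather than resist the plate, the effect the study quantifies for the sub-Pacific asthenosphere.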
A development framework for semantically interoperable health information systems.
Lopez, Diego M; Blobel, Bernd G M E
2009-02-01
Semantic interoperability is a basic challenge to be met for new generations of distributed, communicating and co-operating health information systems (HIS) enabling shared care and e-Health. Analysis, design, implementation and maintenance of such systems and intrinsic architectures have to follow a unified development methodology. The Generic Component Model (GCM) is used as a framework for modeling any system to evaluate and harmonize state of the art architecture development approaches and standards for health information systems as well as to derive a coherent architecture development framework for sustainable, semantically interoperable HIS and their components. The proposed methodology is based on the Rational Unified Process (RUP), taking advantage of its flexibility to be configured for integrating other architectural approaches such as Service-Oriented Architecture (SOA), Model-Driven Architecture (MDA), ISO 10746, and HL7 Development Framework (HDF). Existing architectural approaches have been analyzed, compared and finally harmonized towards an architecture development framework for advanced health information systems. Starting with the requirements for semantic interoperability derived from paradigm changes for health information systems, and supported in formal software process engineering methods, an appropriate development framework for semantically interoperable HIS has been provided. The usability of the framework has been exemplified in a public health scenario.
Towards Current Profile Control in ITER: Potential Approaches and Research Needs
NASA Astrophysics Data System (ADS)
Schuster, E.; Barton, J. E.; Wehner, W. P.
2014-10-01
Many challenging plasma control problems still need to be addressed in order for the ITER Plasma Control System (PCS) to be able to successfully achieve the ITER project goals. For instance, setting up a suitable toroidal current density profile is key for one possible advanced scenario characterized by noninductive sustainment of the plasma current and steady-state operation. The nonlinearity and high dimensionality exhibited by the plasma demand a model-based current-profile control synthesis procedure that can accommodate this complexity through embedding the known physics within the design. The development of a model capturing the dynamics of the plasma relevant for control design enables not only the design of feedback controllers for regulation or tracking but also the design of optimal feedforward controllers for a systematic model-based approach to scenario planning, the design of state estimators for a reliable real-time reconstruction of the plasma internal profiles based on limited and noisy diagnostics, and the development of a fast predictive simulation code for closed-loop performance evaluation before implementation. Progress towards control-oriented modeling of the current profile evolution and associated control design has been reported following both data-driven and first-principles-driven approaches. An overview of these two approaches will be provided, as well as a discussion on research needs associated with each one of the model applications described above. Supported by the US Department of Energy under DE-SC0001334 and DE-SC0010661.
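The feedback-regulation role described above can be illustrated with the simplest possible closed loop: a discrete PI controller tracking a reference on a first-order plant. Both the plant model and the gains are invented for illustration; actual ITER profile control uses model-based multivariable designs, not a scalar PI loop.

```python
def pi_track(ref, y0, kp=0.5, ki=0.2, steps=200):
    """Discrete PI tracking loop on an assumed first-order plant
    y[k+1] = 0.9*y[k] + 0.1*u[k]. The integral term drives the
    steady-state tracking error to zero. Illustrative only."""
    y, integ = y0, 0.0
    for _ in range(steps):
        e = ref - y          # tracking error
        integ += e           # integral of the error
        u = kp * e + ki * integ
        y = 0.9 * y + 0.1 * u
    return y
```

The same regulate-to-reference structure, generalized to profiles (vectors) and combined with feedforward trajectories and state estimators, is what the model-based synthesis procedure in the abstract is meant to deliver.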
Simulation-Driven Design Approach for Design and Optimization of Blankholder
NASA Astrophysics Data System (ADS)
Tatipala, Sravan; Suddapalli, Nikshep R.; Pilthammar, Johan; Sigvant, Mats; Johansson, Christian
2017-09-01
Reliable design of stamping dies is desired for efficient and safe production. The design of stamping dies is today based mostly on casting feasibility, although it can also be based on criteria such as fatigue, stiffness, safety, and economy. The current work presents an approach built on Simulation Driven Design, enabling Design Optimization to address this issue. A structural finite element model of a stamping die, used to produce doors for Volvo V70/S80 car models, is studied. This die had developed cracks during its usage. To understand the stress distribution in the stamping die, structural analysis of the die is conducted and critical regions with high stresses are identified. The results from the structural FE-models are compared with analytical calculations pertaining to the fatigue properties of the material. To arrive at an optimum design with increased stiffness and lifetime, topology and free-shape optimization are performed. In the optimization routine, the identified critical regions of the die are set as design variables. Other optimization variables are set to maintain manufacturability of the resultant stamping die. Thereafter a CAD model is built based on the geometrical results of the topology and free-shape optimizations. The CAD model is then subjected to structural analysis to visualize the new stress distribution, and this process is iterated until a satisfactory result is obtained. The final results show a reduction in stress levels by 70% with a more homogeneous distribution. Even though the mass of the die is increased by 17%, overall a stiffer die with a longer lifetime is obtained. Finally, by reflecting on the entire process, a coordinated approach to handle such situations efficiently is presented.
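The iterate-analyze-redesign cycle described above can be sketched with a deliberately simple one-dimensional stand-in: growing a rectangular rib section until its bending stress falls below an allowable value. The load case, dimensions, and allowable stress below are hypothetical and are not drawn from the paper's FE model.

```python
def bending_stress(moment, width, height):
    """Maximum bending stress in a rectangular section: sigma = 6*M / (b*h^2)."""
    return 6.0 * moment / (width * height ** 2)

def size_for_fatigue(moment, width, sigma_allow, h0=0.01, step=0.001):
    """Toy simulation-driven sizing loop: analyze, check the stress
    criterion, thicken the section, and repeat until it passes."""
    h = h0
    while bending_stress(moment, width, h) > sigma_allow:
        h += step
    return h

# Hypothetical load case: 2 kN*m moment, 50 mm wide rib, 200 MPa allowable.
h = size_for_fatigue(moment=2000.0, width=0.05, sigma_allow=200e6)
print(h, bending_stress(2000.0, 0.05, h))
```

In the paper this inner "analysis" step is a full FE solve and the "redesign" step is a topology/free-shape optimization, but the loop structure is the same.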
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnan, Shankar; Karri, Naveen K.; Gogna, Pawan K.
2012-03-13
Enormous military and commercial interests exist in developing quiet, lightweight, and compact thermoelectric (TE) power generation systems. This paper investigates design integration and analysis of an advanced TE power generation system implementing JP-8 fueled combustion and thermal recuperation. The design and development of a portable TE power system that uses a JP-8 combustor as a high-temperature heat source, with process flows that depend on efficient heat generation, transfer, and recovery within the system, are explored. Design optimization of the system required considering the combustion system efficiency and TE conversion efficiency simultaneously. The combustor performance and TE sub-system performance were coupled directly through exhaust temperatures, fuel and air mass flow rates, heat exchanger performance, subsequent hot-side temperatures, and cold-side cooling techniques and temperatures. Systematic investigation of this system relied on accurate thermodynamic modeling of complex, high-temperature combustion processes concomitantly with detailed thermoelectric converter thermal/mechanical modeling. To this end, this work reports on design integration of system-level process flow simulations using the commercial software CHEMCAD™ with in-house thermoelectric converter and module optimization, and heat exchanger analyses using COMSOL™ software. High-performance, high-temperature TE materials and segmented TE element designs are incorporated in coupled design analyses to achieve predicted TE subsystem-level conversion efficiencies exceeding 10%. These TE advances are integrated with a high-performance microtechnology combustion reactor based on recent advances at the Pacific Northwest National Laboratory (PNNL). Predictions from this coupled simulation established a basis for optimal selection of fuel and air flow rates, thermoelectric module design and operating conditions, and microtechnology heat-exchanger design criteria.
This paper discusses the simulation process, which leads directly to system efficiency power maps defining potentially available optimal system operating conditions and regimes. This coupled simulation approach enables pathways for the integrated use of high-performance combustor components, high-performance TE devices, and microtechnologies to produce a compact, lightweight, combustion-driven TE power system prototype that operates on common fuels.
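The reported subsystem conversion efficiencies above 10% can be sanity-checked against the standard ideal thermoelectric efficiency formula, η = (ΔT/Th)·(√(1+ZT) − 1)/(√(1+ZT) + Tc/Th). The sketch below applies this textbook estimate for illustrative temperatures and an assumed figure of merit; it is not the authors' coupled CHEMCAD/COMSOL model.

```python
import math

def te_efficiency(t_hot, t_cold, zt):
    """Ideal thermoelectric conversion efficiency: the Carnot factor
    times a material factor set by the figure of merit ZT."""
    carnot = (t_hot - t_cold) / t_hot
    m = math.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + t_cold / t_hot)

# Illustrative sweep (temperatures in kelvin; ZT = 1.2 is assumed,
# not a property of the PNNL system).
for t_hot in (600.0, 700.0, 800.0):
    print(f"Th = {t_hot:.0f} K -> efficiency = {te_efficiency(t_hot, 350.0, 1.2):.1%}")
```

With these assumed numbers, hot-side temperatures around 800 K already put the ideal efficiency above 10%, consistent with the predictions quoted in the abstract.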
Light and redox switchable molecular components for molecular electronics.
Browne, Wesley R; Feringa, Ben L
2010-01-01
The field of molecular and organic electronics has seen rapid progress in recent years, developing from concept and design to actual demonstration devices in which both single molecules and self-assembled monolayers are employed as light-responsive components. Research in this field has seen numerous unexpected challenges that have slowed progress and the initial promise of complex molecular-based computers has not yet been realised. Primarily this has been due to the realisation at an early stage that molecular-based nano-electronics brings with it the interface between the hard (semiconductor) and soft (molecular) worlds and the challenges which accompany working in such an environment. Issues such as addressability, cross-talk, molecular stability and perturbation of molecular properties (e.g., inhibition of photochemistry) have nevertheless driven development in molecular design and synthesis as well as our ability to interface molecular components with bulk metal contacts to a very high level of sophistication. Numerous groups have played key roles in progressing this field not least teams such as those led by Whitesides, Aviram, Ratner, Stoddart and Heath. In this short review we will however focus on the contributions from our own group and those of our collaborators, in employing diarylethene based molecular components.
Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A
2008-02-01
One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
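The event-driven, selective-trace idea behind ITA can be illustrated on a toy linear network: each node is re-relaxed only when scheduled, and a change larger than a tolerance wakes only that node's fan-out. This is a minimal sketch of the scheduling pattern on a linear system, not the SOR-Newton ITA algorithm from the dissertation.

```python
from collections import deque

def event_driven_relax(A, b, tol=1e-10, max_events=10000):
    """Toy event-driven Gauss-Seidel relaxation in the spirit of
    selective trace: a node is re-evaluated only when scheduled, and a
    significant change in its solution wakes only its fan-out nodes."""
    n = len(b)
    x = [0.0] * n
    # fanout[i]: nodes whose equations depend on x[i]
    fanout = [[j for j in range(n) if j != i and A[j][i] != 0.0] for i in range(n)]
    queue, queued = deque(range(n)), set(range(n))
    events = 0
    while queue and events < max_events:
        i = queue.popleft()
        queued.discard(i)
        events += 1
        new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        if abs(new - x[i]) > tol:
            x[i] = new
            for j in fanout[i]:  # selective trace: schedule fan-out only
                if j not in queued:
                    queue.append(j)
                    queued.add(j)
    return x, events

# Small diagonally dominant test network (guarantees convergence).
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]]
b = [1.0, 2.0, 3.0]
x, events = event_driven_relax(A, b)
print(x, events)
```

The payoff in a circuit simulator is temporal sparsity: quiescent subcircuits generate no events and consume no compute, which is exactly what makes the method map well onto a data-driven machine.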
An investigation of modelling and design for software service applications.
Anjum, Maria; Budgen, David
2017-01-01
Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the 'design model'. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model.
NeuroLOG: a community-driven middleware design.
Montagnat, Johan; Gaignard, Alban; Lingrand, Diane; Rojas Balderrama, Javier; Collet, Philippe; Lahire, Philippe
2008-01-01
The NeuroLOG project is designing an ambitious neurosciences middleware, building on many existing components and learning from past project experiences. It targets a focused application area and adopts a user-centric perspective to meet neuroscientists' expectations. It aims at fostering the adoption of HealthGrids in a pre-clinical community. This paper details the project's design study and the methodology proposed to achieve the integration of heterogeneous site data schemas and the definition of a site-centric policy. The NeuroLOG middleware will bridge HealthGrid and local resources to match users' desire to control their own resources, and will provide a transitional model towards HealthGrids.
Combined Electrophysiological and Behavioral Evidence for the Suppression of Salient Distractors.
Gaspelin, Nicholas; Luck, Steven J
2018-05-15
Researchers have long debated how salient-but-irrelevant features guide visual attention. Pure stimulus-driven theories claim that salient stimuli automatically capture attention irrespective of goals, whereas pure goal-driven theories propose that an individual's attentional control settings determine whether salient stimuli capture attention. However, recent studies have suggested a hybrid model in which salient stimuli attract visual attention but can be actively suppressed by top-down attentional mechanisms. Support for this hybrid model has primarily come from ERP studies demonstrating that salient stimuli, which fail to capture attention, also elicit a distractor positivity (PD) component, a putative neural index of suppression. Other support comes from a handful of behavioral studies showing that processing at the salient locations is inhibited compared with other locations. The current study was designed to link the behavioral and neural evidence by combining ERP recordings with an experimental paradigm that provides a behavioral measure of suppression. We found that, when a salient distractor item elicited the PD component, processing at the location of this distractor was suppressed below baseline levels. Furthermore, the magnitude of behavioral suppression and the magnitude of the PD component covaried across participants. These findings provide a crucial connection between the behavioral and neural measures of suppression, which opens the door to using the PD component to assess the timing and neural substrates of the behaviorally observed suppression.
Development of a Compact, Efficient Cooling Pump for Space Suit Life Support Systems
NASA Technical Reports Server (NTRS)
van Boeyen, Roger; Reeh, Jonathan; Trevino, Luis
2009-01-01
A compact, low-power electrochemically-driven fluid cooling pump is currently being developed by Lynntech, Inc. With no electric motor and minimal lightweight components, the pump is significantly lighter than conventional rotodynamic and displacement pumps. Reliability and robustness are achieved through the absence of rotating or moving components (apart from the bellows). By employing sulfonated polystyrene-based proton exchange membranes, rather than conventional Nafion membranes, a significant reduction in the actuator power consumption was demonstrated. Lynntech also demonstrated that these membranes possess the necessary mechanical strength, durability, and temperature range for long-life space operation. The preliminary design for a Phase II prototype pump compares very favorably to the fluid cooling pumps currently used in space suit primary life support systems (PLSSs). Characteristics of the electrochemically-driven pump are described, and the benefits of the technology as a replacement for electric motor pumps in mechanically pumped single-phase fluid loops are discussed.
SLS Navigation Model-Based Design Approach
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas
2018-01-01
The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. 
The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable, whereas a requirement is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design is described from the perspective of the SLS Navigation Team. The format of the models and the requirements are described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned from the implementation of the Model-based Design approach and process, from infancy through verification and certification, are discussed.
Data-Driven Model Reduction and Transfer Operator Approximation
NASA Astrophysics Data System (ADS)
Klus, Stefan; Nüske, Feliks; Koltai, Péter; Wu, Hao; Kevrekidis, Ioannis; Schütte, Christof; Noé, Frank
2018-06-01
In this review paper, we will present different data-driven dimension reduction techniques for dynamical systems that are based on transfer operator theory as well as methods to approximate transfer operators and their eigenvalues, eigenfunctions, and eigenmodes. The goal is to point out similarities and differences between methods developed independently by the dynamical systems, fluid dynamics, and molecular dynamics communities such as time-lagged independent component analysis, dynamic mode decomposition, and their respective generalizations. As a result, extensions and best practices developed for one particular method can be carried over to other related methods.
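Of the methods surveyed above, dynamic mode decomposition has a particularly compact formulation: fit a linear map A with Y ≈ AX over snapshot pairs and examine its spectrum. Below is a minimal sketch of that idea on synthetic data; practical DMD adds an SVD-based rank truncation, which is omitted here.

```python
import numpy as np

def dmd_eigs(X, Y):
    """Minimal exact-DMD sketch: fit the linear map with Y ≈ A X in the
    least-squares sense via the pseudoinverse, then take its spectrum."""
    A = Y @ np.linalg.pinv(X)
    return np.linalg.eig(A)

# Toy linear system x_{k+1} = A_true x_k: the spectrum is recovered
# directly from snapshot data.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.2], [0.0, 0.5]])
X = rng.standard_normal((2, 50))
Y = A_true @ X
eigvals, modes = dmd_eigs(X, Y)
print(np.sort(eigvals.real))
```

The recovered eigenvalues approximate eigenvalues of the underlying transfer (Koopman) operator, which is the common thread the review draws between DMD, TICA, and their generalizations.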
NASA Astrophysics Data System (ADS)
Min, Young-Hoon; Kim, Yong-Kweon
1998-09-01
A silicon-based micro mirror array is a highly efficient component for use in optical applications such as adaptive optical systems and optical correlators. Many types of micro mirrors and micro mirror arrays have been studied and proposed in order to obtain optimal performance for their respective purposes. The micro mirror array designed, fabricated and tested in this paper consists of 5 × 5 single-layer polysilicon-based, electrostatically driven actuators. The micro mirror array for optical phase modulation is made using only two masks and can be driven independently by 25 channel circuits. About 6π phase modulation is obtained with a He-Ne laser (λ = 633 nm) at a 67% fill factor. In this paper, the deflection characteristics of the actuators in the controllable range were studied. The experimental results show that the deflection characteristics are strongly dependent on the residual stress in the flexure, the initial curvature of the mirror due to the stress gradient, and electrostatic forces acting on elements other than the mirror itself. The modeling results agree well with the experimental results. It is also important to fabricate a flat mirror that is not initially curved, because a curved mirror degrades optical performance. Therefore, a new method to obtain a flat mirror by using gold metallization despite the residual stress imbalance is proposed in this paper.
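The voltage-deflection behaviour of such electrostatically driven mirrors follows the standard parallel-plate actuator model, in which stable travel is limited to one third of the gap before pull-in. The sketch below solves the force balance k·x = ε₀AV²/(2(g−x)²) by bisection; the plate size, gap, and spring constant are illustrative values, not the fabricated device's parameters.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def deflection(v, k, area, gap):
    """Stable deflection of a parallel-plate electrostatic actuator:
    bisection on the net force k*x - eps0*A*V^2 / (2*(gap - x)^2)
    over the stable branch x in [0, gap/3]."""
    def net(x):
        return k * x - EPS0 * area * v ** 2 / (2.0 * (gap - x) ** 2)
    lo, hi = 0.0, gap / 3.0
    if net(hi) < 0.0:
        return None  # beyond pull-in: no stable equilibrium
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if net(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def pull_in_voltage(k, area, gap):
    """Classic pull-in voltage: V_pi = sqrt(8*k*g^3 / (27*eps0*A))."""
    return math.sqrt(8.0 * k * gap ** 3 / (27.0 * EPS0 * area))

# Illustrative (hypothetical) element: 100 um x 100 um plate, 2 um gap.
k, area, gap = 1.0, (100e-6) ** 2, 2e-6
vpi = pull_in_voltage(k, area, gap)
print(vpi, deflection(0.5 * vpi, k, area, gap))
```

This idealized model omits exactly the effects the abstract highlights (flexure residual stress, initial mirror curvature, and cross-coupled electrostatic forces), which is why the authors' measured characteristics require a more detailed model.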
Protein-like Nanoparticles Based on Orthogonal Self-Assembly of Chimeric Peptides.
Jiang, Linhai; Xu, Dawei; Namitz, Kevin E; Cosgrove, Michael S; Lund, Reidar; Dong, He
2016-10-01
A novel two-component self-assembling chimeric peptide is designed in which two orthogonal protein folding motifs are linked side by side with a precisely defined position relative to one another. The self-assembly is driven by a combination of symmetry-controlled molecular packing, intermolecular interactions, and geometric constraint to limit the assembly into compact dodecameric protein nanoparticles.
Inter-subject phase synchronization for exploratory analysis of task-fMRI.
Bolt, Taylor; Nomi, Jason S; Vij, Shruti G; Chang, Catie; Uddin, Lucina Q
2018-08-01
Analysis of task-based fMRI data is conventionally carried out using a hypothesis-driven approach, where blood-oxygen-level dependent (BOLD) time courses are correlated with a hypothesized temporal structure. In some experimental designs, this temporal structure can be difficult to define. In other cases, experimenters may wish to take a more exploratory, data-driven approach to detecting task-driven BOLD activity. In this study, we demonstrate the efficiency and power of an inter-subject synchronization approach for exploratory analysis of task-based fMRI data. Combining the tools of instantaneous phase synchronization and independent component analysis, we characterize whole-brain task-driven responses in terms of group-wise similarity in temporal signal dynamics of brain networks. We applied this framework to fMRI data collected during performance of a simple motor task and a social cognitive task. Analyses using an inter-subject phase synchronization approach revealed a large number of brain networks that dynamically synchronized to various features of the task, often not predicted by the hypothesized temporal structure of the task. We suggest that this methodological framework, along with readily available tools in the fMRI community, provides a powerful exploratory, data-driven approach for analysis of task-driven BOLD activity.
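The instantaneous-phase-synchronization measure can be sketched in a few lines: take each subject's analytic signal, extract its phase, and compute the across-subject resultant vector length at each time point. The data below are synthetic and purely illustrative; this shows the general pattern, not the authors' full pipeline (which also involves independent component analysis).

```python
import numpy as np

def instantaneous_phase(x):
    """Instantaneous phase via the analytic signal (an FFT-based
    Hilbert transform, applied along the last axis)."""
    n = x.shape[-1]
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.angle(np.fft.ifft(np.fft.fft(x, axis=-1) * h, axis=-1))

def inter_subject_phase_sync(signals):
    """Across-subject phase agreement at each time point: the resultant
    vector length of the subjects' unit phase vectors (0 = random,
    1 = perfectly synchronized)."""
    phases = instantaneous_phase(signals)  # shape (subjects, time)
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Synthetic "subjects": a shared task-driven oscillation plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 500)
common = np.sin(2.0 * np.pi * 0.5 * t)
synced = np.array([common + 0.2 * rng.standard_normal(t.size) for _ in range(5)])
unsynced = rng.standard_normal((5, t.size))
print(inter_subject_phase_sync(synced).mean(), inter_subject_phase_sync(unsynced).mean())
```

Subjects sharing the task-driven component show high synchronization, while independent noise stays near the chance level, which is the contrast the method exploits.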
Enhancement of High-Speed Infrared Array Electronics (Center Director's Discretionary Fund)
NASA Technical Reports Server (NTRS)
Sutherland, W. T.
1996-01-01
A state-of-the-art infrared detector was to be used as the sensor in a new spectrometer-camera for astronomical observations. The sensitivity of the detector required the use of low-noise, high-speed electronics in the system design. The key component in the electronic system was the pre-amplifier that amplified the low-voltage signal coming from the detector. The system design was based on the selection of this amplifier, which was driven by the maximum noise level that would still yield the desired sensitivity for the telescope system.
NASA Astrophysics Data System (ADS)
Sulis, M.; Paniconi, C.; Marrocu, M.; Huard, D.; Chaumont, D.
2012-12-01
General circulation models (GCMs) are the primary instruments for obtaining projections of future global climate change. Outputs from GCMs, aided by dynamical and/or statistical downscaling techniques, have long been used to simulate changes in regional climate systems over wide spatiotemporal scales. Numerous studies have acknowledged the disagreements between the various GCMs and between the different downscaling methods designed to compensate for the mismatch between climate model output and the spatial scale at which hydrological models are applied. Very little is known, however, about the importance of these differences once they have been input or assimilated by a nonlinear hydrological model. This issue is investigated here at the catchment scale using a process-based model of integrated surface and subsurface hydrologic response driven by outputs from 12 members of a multimodel climate ensemble. The data set consists of daily values of precipitation and min/max temperatures obtained by combining four regional climate models and five GCMs. The regional scenarios were downscaled using a quantile scaling bias-correction technique. The hydrologic response was simulated for the 690 km² des Anglais catchment in southwestern Quebec, Canada. The results show that different hydrological components (river discharge, aquifer recharge, and soil moisture storage) respond differently to precipitation and temperature anomalies in the multimodel climate output, with greater variability for annual discharge compared to recharge and soil moisture storage. We also find that runoff generation and extreme event-driven peak hydrograph flows are highly sensitive to any uncertainty in climate data. Finally, the results show the significant impact of changing sequences of rainy days on groundwater recharge fluxes and the influence of longer dry spells in modifying soil moisture spatial variability.
NASA Astrophysics Data System (ADS)
Bellini, Anna
Customer-driven product customization and continued demand for cost and time savings have generated a renewed interest in agile manufacturing based on improvements on Rapid Prototyping (RP) technologies. The advantages of RP technologies are: (1) ability to shorten the product design and development time, (2) suitability for automation and decrease in the level of human intervention, (3) ability to build many geometrically complex shapes. A shift from "prototyping" to "manufacturing" necessitates the following improvements: (1) Flexibility in choice of materials; (2) Part integrity and built-in characteristics to meet performance requirements; (3) Dimensional stability and tolerances; (4) Improved surface finish. A project funded by ONR has been undertaken to develop an agile manufacturing technology for fabrication of ceramic and multi-component parts to meet various needs of the Navy, such as transducers. The project is based on adaptation of a layered manufacturing concept, since the program required that the new technology be developed based on a commercially available RP technology. Among the various RP technologies available today, Fused Deposition Modeling (FDM) has been identified as the focus of this research because of its potential versatility in the choice of materials and deposition configuration. This innovative approach allows for designing and implementing highly complex internal architectures into parts through deposition of different materials in a variety of configurations, in such a way that the finished product exhibits characteristics that meet the performance requirements. This implies that, in principle, one can tailor-make the assembly of materials and structures as per the specifications of an optimum design. The program objectives can be achieved only through accurate process modeling and modeling of material behavior.
Oftentimes, process modeling is based on some type of computational approach, whereas modeling of material behavior is based on extensive experimental investigations. Studies are conducted in the following categories: (1) Flow modeling during extrusion and deposition; (2) Thermal modeling; (3) Flow control during deposition; (4) Product characterization and property determination for dimensional analysis; (5) Development of a novel technology based on a mini-extrusion system. Studies in each of these stages have involved experimental as well as analytical approaches to develop a comprehensive model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Parsons, T.; King, R.
This report summarizes the theory, verification, and validation of a new sizing tool for wind turbine drivetrain components, the Drivetrain Systems Engineering (DriveSE) tool. DriveSE calculates the dimensions and mass properties of the hub, main shaft, main bearing(s), gearbox, bedplate, transformer if up-tower, and yaw system. The level of fidelity for each component varies depending on whether semiempirical parametric or physics-based models are used. The physics-based models have internal iteration schemes based on system constraints and design criteria. Every model is validated against available industry data or finite-element analysis. The verification and validation results show that the models reasonably capture primary drivers for the sizing and design of major drivetrain components.
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
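A common choice of combination function in this adaptive-design setting is the inverse-normal rule, which combines stage-wise p-values with prespecified weights. The sketch below shows that rule in isolation; the weights and p-values are illustrative, and the full mixture-based gatekeeping machinery from the paper is not reproduced here.

```python
from math import sqrt
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=sqrt(0.5)):
    """Inverse-normal combination function: stage-wise p-values are
    combined with prespecified weights (w1^2 + w2^2 = 1), which preserves
    type I error control under data-driven mid-trial adaptations such as
    a sample size increase."""
    nd = NormalDist()
    w2 = sqrt(1.0 - w1 ** 2)
    z = w1 * nd.inv_cdf(1.0 - p1) + w2 * nd.inv_cdf(1.0 - p2)
    return 1.0 - nd.cdf(z)  # combined p-value

# Illustrative stage-wise p-values with equal weights.
print(inverse_normal_combination(0.04, 0.03))
```

Because the weights are fixed in advance, the combined p-value remains valid even if the stage-two design was modified based on interim data, which is what makes the approach compatible with the flexible decisions the paper describes.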
An Emotion Aware Task Automation Architecture Based on Semantic Technologies for Smart Offices.
Muñoz, Sergio; Araque, Oscar; Sánchez-Rada, J Fernando; Iglesias, Carlos A
2018-05-10
The evolution of the Internet of Things leads to new opportunities for the contemporary notion of smart offices, where employees can benefit from automation to maximize their productivity and performance. However, although extensive research has been dedicated to analyze the impact of workers’ emotions on their job performance, there is still a lack of pervasive environments that take into account emotional behaviour. In addition, integrating new components in smart environments is not straightforward. To face these challenges, this article proposes an architecture for emotion aware automation platforms based on semantic event-driven rules to automate the adaptation of the workplace to the employee’s needs. The main contributions of this paper are: (i) the design of an emotion aware automation platform architecture for smart offices; (ii) the semantic modelling of the system; and (iii) the implementation and evaluation of the proposed architecture in a real scenario.
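The event-driven rule pattern described above (the paper implements it with semantic-web technologies; this toy sketch only mimics the event-condition-action flow, and the emotion labels and actions are invented) can be caricatured in a few lines:

```python
# Minimal event-condition-action rule table; in the paper's architecture the
# rules are semantic, but the triggering pattern is the same.
rules = [
    {"when": "stress", "then": "dim_lights"},
    {"when": "fatigue", "then": "increase_light_temperature"},
]

def handle_event(emotion, rules):
    """Return the workplace-adaptation actions triggered by a detected emotion."""
    return [r["then"] for r in rules if r["when"] == emotion]

actions = handle_event("stress", rules)
```

New behaviours are added by appending rules rather than modifying code, which is the integration benefit the architecture aims for.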
Model-Driven Design: Systematically Building Integrated Blended Learning Experiences
ERIC Educational Resources Information Center
Laster, Stephen
2010-01-01
Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…
A Knowledge-Based and Model-Driven Requirements Engineering Approach to Conceptual Satellite Design
NASA Astrophysics Data System (ADS)
Dos Santos, Walter A.; Leonor, Bruno B. F.; Stephany, Stephan
Satellite systems are becoming even more complex, making technical issues a significant cost driver. The increasing complexity of these systems makes requirements engineering activities both more important and difficult. Additionally, today's competitive pressures and other market forces drive manufacturing companies to improve the efficiency with which they design and manufacture space products and systems. This imposes a heavy burden on systems-of-systems engineering skills and particularly on requirements engineering which is an important phase in a system's life cycle. When this is poorly performed, various problems may occur, such as failures, cost overruns and delays. One solution is to underpin the preliminary conceptual satellite design with computer-based information reuse and integration to deal with the interdisciplinary nature of this problem domain. This can be attained by taking a model-driven engineering approach (MDE), in which models are the main artifacts during system development. MDE is an emergent approach that tries to address system complexity by the intense use of models. This work outlines the use of SysML (Systems Modeling Language) and a novel knowledge-based software tool, named SatBudgets, to deal with these and other challenges confronted during the conceptual phase of a university satellite system, called ITASAT, currently being developed by INPE and some Brazilian universities.
An investigation of modelling and design for software service applications
2017-01-01
Software services offer the opportunity to use a component-based approach for the design of applications. However, this needs a deeper understanding of how to develop service-based applications in a systematic manner, and of the set of properties that need to be included in the ‘design model’. We have used a realistic application to explore systematically how service-based designs can be created and described. We first identified the key properties of an SOA (service oriented architecture) and then undertook a single-case case study to explore its use in the development of a design for a large-scale application in energy engineering, modelling this with existing notations wherever possible. We evaluated the resulting design model using two walkthroughs with both domain and application experts. We were able to successfully develop a design model around the ten properties identified, and to describe it by adapting existing design notations. A component-based approach to designing such systems does appear to be feasible. However, it needs the assistance of a more integrated set of notations for describing the resulting design model. PMID:28489905
Technique for Early Reliability Prediction of Software Components Using Behaviour Models
Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad
2016-01-01
Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
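A simplified version of the graph-based reliability computation described above (a generic recursive traversal of a component dependency graph, not the paper's exact stack-based CPDG algorithm; the graph and reliability values are illustrative) can be sketched as:

```python
def system_reliability(graph, reliability, node="start"):
    """Reliability over all execution paths from `node` to terminal nodes.

    graph: node -> list of (next_node, transition_probability);
    reliability: node -> probability that the component executes correctly.
    Assumes an acyclic graph (loop entry/exit already unrolled).
    """
    r = reliability[node]
    successors = graph.get(node, [])
    if not successors:
        return r
    return r * sum(p * system_reliability(graph, reliability, n)
                   for n, p in successors)

graph = {"start": [("A", 0.6), ("B", 0.4)], "A": [("C", 1.0)], "B": [("C", 1.0)]}
rel = {"start": 0.999, "A": 0.99, "B": 0.95, "C": 0.999}
R = system_reliability(graph, rel)
```

Sensitivity analysis then follows naturally: perturbing one component's reliability and recomputing R shows which components dominate the system-level figure.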
A motor-driven syringe-type gradient maker for forming immobilized pH gradient gels.
Fawcett, J S; Sullivan, J V; Chidakel, B E; Chrambach, A
1988-05-01
A motor-driven gradient maker based on the commercial model (Jule Inc., Trumbull, CT) was designed for immobilized pH gradient gels to provide small volumes, rapid stirring and delivery, strict volume and temperature control, and air exclusion. The device was constructed and, using a convenient procedure, yields highly reproducible gradients either in solution or in polyacrylamide gels.
Design and simulation of a new bidirectional actuator for haptic systems featuring MR fluid
NASA Astrophysics Data System (ADS)
Hung, Nguyen Quoc; Tri, Diep Bao; Cuong, Vo Van; Choi, Seung-Bok
2017-04-01
In this research, a new configuration of a bidirectional actuator featuring MR fluid (BMRA) is proposed for haptic applications. The proposed BMRA consists of a driving disc, a driving housing and a driven disc. The driving disc is placed inside the driving housing, and the two are rotated counter to each other by a servo DC motor and a bevel gear system. The driven disc is also placed inside the housing, next to the driving disc. The gap between the two discs and the gaps between the discs and the housing are filled with MR fluid. On the driven disc, two mutual magnetic coils are placed. By applying currents to the two coils mutually, the torque at the output shaft, which is fixed to the driven disc, can be controlled to a positive, zero or negative value. This makes the actuator suitable for haptic applications. After a review of MR fluid and its applications, the configuration of the proposed BMRA is presented. The modeling of the actuator is then derived based on the Bingham rheological model of the MR fluid and magnetic finite element analysis (FEA). An optimal design of the actuator is then performed to minimize the mass of the BMRA. From the optimal design result, performance characteristics of the actuator are simulated, and a detailed design of a prototype actuator is conducted.
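The Bingham-model torque of one annular MR-fluid interface can be sketched with the standard disc-brake approximation (this is a textbook formula with illustrative parameter values, not the authors' FEA-coupled model): T = (2πτ_y/3)(Rₒ³ − Rᵢ³) + (πμω/2h)(Rₒ⁴ − Rᵢ⁴).

```python
import math

def mr_disc_torque(tau_y, mu, omega, r_i, r_o, gap):
    """Torque (N*m) from one annular MR-fluid face, Bingham plastic model.

    tau_y: field-dependent yield stress (Pa), mu: plastic viscosity (Pa*s),
    omega: relative angular speed (rad/s), r_i/r_o: annulus radii (m),
    gap: fluid gap (m).  T = 2*pi * integral r^2 (tau_y + mu*omega*r/gap) dr.
    """
    yield_term = 2 * math.pi * tau_y * (r_o**3 - r_i**3) / 3
    viscous_term = math.pi * mu * omega * (r_o**4 - r_i**4) / (2 * gap)
    return yield_term + viscous_term

# Illustrative values: torque with coil current on (high yield stress) vs off.
T_on = mr_disc_torque(tau_y=40e3, mu=0.1, omega=10.0, r_i=0.02, r_o=0.05, gap=0.001)
T_off = mr_disc_torque(tau_y=0.0, mu=0.1, omega=10.0, r_i=0.02, r_o=0.05, gap=0.001)
```

The controllable (yield) term dominates the small off-state viscous drag, which is what makes the mutual-coil arrangement able to swing the net output torque through zero.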
Toward Computational Cumulative Biology by Combining Models of Biological Datasets
Faisal, Ali; Peltonen, Jaakko; Georgii, Elisabeth; Rung, Johan; Kaski, Samuel
2014-01-01
A main challenge of data-driven sciences is how to make maximal use of the progressively expanding databases of experimental datasets in order to keep research cumulative. We introduce the idea of a modeling-based dataset retrieval engine designed for relating a researcher's experimental dataset to earlier work in the field. The search is (i) data-driven to enable new findings, going beyond the state of the art of keyword searches in annotations, (ii) modeling-driven, to include both biological knowledge and insights learned from data, and (iii) scalable, as it is accomplished without building one unified grand model of all data. Assuming each dataset has been modeled beforehand, by the researchers or automatically by database managers, we apply a rapidly computable and optimizable combination model to decompose a new dataset into contributions from earlier relevant models. By using the data-driven decomposition, we identify a network of interrelated datasets from a large annotated human gene expression atlas. While tissue type and disease were major driving forces for determining relevant datasets, the found relationships were richer, and the model-based search was more accurate than the keyword search; moreover, it recovered biologically meaningful relationships that are not straightforwardly visible from annotations—for instance, between cells in different developmental stages such as thymocytes and T-cells. Data-driven links and citations matched to a large extent; the data-driven links even uncovered corrections to the publication data, as two of the most linked datasets were not highly cited and turned out to have wrong publication entries in the database. PMID:25427176
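The decomposition of a new dataset into contributions from earlier models can be sketched as a least-squares mixture over precomputed model summaries (a generic stand-in for the paper's combination model; the "signatures" here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical model signatures: each row summarizes one earlier dataset's model.
signatures = rng.normal(size=(3, 50))
true_w = np.array([0.7, 0.3, 0.0])
new_dataset = true_w @ signatures  # new data expressed as a mixture of old models

# Recover the mixture weights by least squares, clipped to be non-negative;
# the weights rank earlier datasets by relevance to the new one.
w, *_ = np.linalg.lstsq(signatures.T, new_dataset, rcond=None)
w = np.clip(w, 0, None)
```

Because only the per-dataset models are combined (never one grand model of all data), the retrieval step stays cheap as the database grows, which is the scalability point made above.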
Some Modeling Tools Available for Adaptive Management of South Florida Hydrology
NASA Astrophysics Data System (ADS)
Lal, W. A.; Van Zee, R. J.
2002-05-01
The hydrology of South Florida is a result of (1) the hydrology of the natural system; (2) the hydrology of man-made design components such as structures and levees designed to alter the natural hydrology; and (3) the influence of the operations imposed on the system using those design components. Successful restoration of the South Florida ecosystem depends not only on the design of the structural components, but also on their careful operation. The current discussion focuses on a number of optimal control methods that have recently become available to optimize restoration goals in the context of modeling. Optimal operation of the system can lessen stresses on some hydrological and ecological components; careless operation, on the other hand, can lead to disastrous effects. Systems engineering and control theory have been used in the past to understand and operate simple systems such as cruise control and the thermostat, and somewhat more complex ones to auto-pilot planes. The simplest control methods, such as proportional and integral (PI) control, are already used in the South Florida Water Management Model (SFWMM) for flood control and rain-driven operations. The popular proportional-integral-derivative (PID) control is widely used in industry for operational control of complex engineering systems. Some uses of PID control are investigated in this study. Other methods that can be used for operational control include Bayesian methods, Kalman filtering, and neural network methods. A cursory evaluation of these methods is made in the discussion, along with the traditional methods used to operate complex engineering systems.
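The PID control mentioned above can be sketched in a few lines; here it drives a toy first-order storage model toward a target stage (the gains and plant dynamics are illustrative, not SFWMM values):

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One PID update; state carries (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Toy reservoir: stage rises with controlled inflow u, drains with stage.
level, target, state, dt = 0.0, 1.0, (0.0, 0.0), 0.1
for _ in range(500):
    u, state = pid_step(target - level, state, kp=2.0, ki=0.5, kd=0.1, dt=dt)
    level += dt * (u - 0.3 * level)
```

The integral term removes the steady-state offset a pure proportional controller would leave, which is why PI control already suffices for the flood-control operations cited above.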
Myneni, Sahiti; Amith, Muhammad; Geng, Yimin; Tao, Cui
2015-01-01
Adolescent and Young Adult (AYA) cancer survivors manage an array of health-related issues. Survivorship Care Plans (SCPs) have the potential to empower these young survivors by providing information regarding treatment summary, late-effects of cancer therapies, healthy lifestyle guidance, coping with work-life-health balance, and follow-up care. However, current mHealth infrastructure used to deliver SCPs has been limited in terms of flexibility, engagement, and reusability. The objective of this study is to develop an ontology-driven survivor engagement framework to facilitate rapid development of mobile apps that are targeted, extensible, and engaging. The major components include ontology models, patient engagement features, and behavioral intervention technologies. We apply the proposed framework to characterize individual building blocks ("survivor digilegos"), which form the basis for mHealth tools that address user needs across the cancer care continuum. Results indicate that the framework (a) allows identification of AYA survivorship components, (b) facilitates infusion of engagement elements, and (c) integrates behavior change constructs into the design architecture of survivorship applications. Implications for design of patient-engaging chronic disease management solutions are discussed.
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1988-01-01
The Rubber Airplane program, which combines two symbolic processing techniques with a component-based database of design knowledge, is proposed as a computer aid for conceptual design. Using object-oriented programming, programs are organized around the objects and behavior to be simulated, and using constraint propagation, declarative statements designate mathematical relationships among all the equation variables. It is found that the additional level of organizational structure resulting from the arrangement of the design information in terms of design components provides greater flexibility and convenience.
SLS Model Based Design: A Navigation Perspective
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Anzalone, Evan; Park, Thomas; Geohagan, Kevin
2018-01-01
The SLS Program has implemented a Model-based Design (MBD) and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team is responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1B design, the additional GPS Receiver hardware model is managed as a DMM at the vehicle design level. This paper describes the models, and discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the navigation components.
On Complex Networks Representation and Computation of Hydrological Quantities
NASA Astrophysics Data System (ADS)
Serafin, F.; Bancheri, M.; David, O.; Rigon, R.
2017-12-01
Water is our blue gold. Although results of discovery-based science keep warning public opinion about the looming worldwide water crisis, water is still treated as a resource not worth caring for. Could a different multi-scale perspective affect environmental decision-making more deeply? Could pairing it with a new graphical representation of process interactions sway decision-making, and consequently public opinion, more effectively? This abstract introduces a complex-networks-driven way to represent catchment eco-hydrology and the related flexible informatics to manage it. The representation is built upon mathematical category theory. A category is an algebraic structure that comprises "objects" linked by "arrows". The representation is an evolution of Petri nets called Time Continuous Petri Nets (TCPN). It aims to display (water) budget processes and catchment interactions using an explicative and self-contained symbolism. The result improves the readability of physical processes compared to current descriptions. The IT perspective hinges on the Object Modeling System (OMS) v3, a non-invasive, flexible environmental modeling framework designed to support component-based model development. The implementation of a Directed Acyclic Graph (DAG) data structure, named Net3, has recently enhanced its flexibility. Net3 represents interacting systems as complex networks: vertices match up with any sort of time-evolving quantity; edges correspond to their data (flux) interchange. It currently hosts JGrass-NewAge components, and those implementing travel-time analysis of fluxes. Further bio-physical or management-oriented components can be easily added. This talk introduces both the graphical representation and the related informatics, exercising actual applications and examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saurav, Kumar; Chandan, Vikas
District-heating-and-cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivated the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several components, such as buildings, pipes, valves and the heating source, interacting with each other. In this paper, we focus on building modelling. In particular, we present a gray-box methodology for thermal modelling of buildings. Gray-box modelling is a hybrid of data-driven and physics-based models in which the coefficients of the equations from physics-based models are learned from data. This approach allows us to capture the dynamics of the buildings more effectively than a pure data-driven approach. Additionally, it results in simpler models than pure physics-based models. We first develop the individual components of the building, such as temperature evolution and the flow controller. These individual models are then integrated into the complete gray-box model for the building. The model is validated using data collected from one of the buildings at Luleå, a city on the coast of northern Sweden.
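Gray-box building models of the kind described above are often lumped resistance-capacitance (RC) models whose coefficients are later fit to measured data; a one-node sketch (the R, C and heating values are illustrative, not the paper's identified coefficients):

```python
def simulate_room(T0, T_out, Q, R, C, dt, steps):
    """One-node RC model, C*dT/dt = (T_out - T)/R + Q, explicit Euler.

    R: thermal resistance (K/W), C: thermal capacitance (J/K), Q: heating
    power (W).  In a gray-box workflow, R and C would be learned from data.
    """
    T = T0
    history = [T]
    for _ in range(steps):
        T += dt / C * ((T_out - T) / R + Q)
        history.append(T)
    return history

# Cold day with constant heating: temperature settles at T_out + Q*R = 25 C.
traj = simulate_room(T0=20.0, T_out=-5.0, Q=3000.0, R=0.01, C=5e6, dt=60.0, steps=5000)
```

The physics fixes the model structure (one differential equation), while the data fixes its coefficients, which is exactly the hybrid the abstract describes.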
Chemiluminescence generation and detection in a capillary-driven microfluidic chip
NASA Astrophysics Data System (ADS)
Ramon, Charlotte; Temiz, Yuksel; Delamarche, Emmanuel
2017-02-01
The use of microfluidic technology represents a strong opportunity for providing sensitive, low-cost and rapid diagnosis at the point-of-care and such a technology might therefore support better, faster and more efficient diagnosis and treatment of patients at home and in healthcare settings both in developed and developing countries. In this work, we consider luminescence-based assays as an alternative to well-established fluorescence-based systems because luminescence does not require a light source or expensive optical components and is therefore a promising detection method for point-of-care applications. Here, we show a proof-of-concept of chemiluminescence (CL) generation and detection in a capillary-driven microfluidic chip for potential immunoassay applications. We employed a commercial acridan-based reaction, which is catalyzed by horseradish peroxidase (HRP). We investigated CL generation under flow conditions using a simplified immunoassay model where HRP is used instead of the complete sandwich immunocomplex. First, CL signals were generated in a capillary microfluidic chip by immobilizing HRP on a polydimethylsiloxane (PDMS) sealing layer using stencil deposition and flowing CL substrate through the hydrophilic channels. CL signals were detected using a compact (only 5×5×2.5 cm3) and custom-designed scanner, which was assembled for less than $30 and comprised a 128×1 photodiode array, a mini stepper motor, an Arduino microcontroller, and a 3D-printed housing. In addition, microfluidic chips having specific 30-μm-deep structures were fabricated and used to immobilize ensembles of 4.50 μm beads functionalized with HRP so as to generate high CL signals from capillary-driven chips.
Fotopoulos, Christos; Krystallis, Athanasios; Vassallo, Marco; Pagiaslis, Anastasios
2009-02-01
Recognising the need for a more statistically robust instrument to investigate general food selection determinants, this research validates and confirms the Food Choice Questionnaire's (FCQ) factorial design, develops ad hoc a more robust FCQ version, and tests its ability to discriminate between consumer segments in terms of the importance they assign to the FCQ motivational factors. The original FCQ appears to represent a comprehensive and reliable research instrument; however, the empirical data do not support the robustness of its 9-factor design. On the other hand, segmentation results at the subpopulation level based on the enhanced FCQ version carry an optimistic message about the FCQ's ability to predict food selection behaviour. The paper concludes that some of the basic components of the original FCQ can be used as a basis for a new general food motivation typology. The development of such a new instrument, with fewer, higher-abstraction FCQ-based dimensions and fewer items per dimension, is a step in the right direction; yet such a step should be theory-driven, and rigorous statistical testing across and within populations would be necessary.
ERIC Educational Resources Information Center
Ananyeva, Maria
2014-01-01
This article introduces the concept of a learning curriculum that places adult English as a second language (ESL) students' needs in the center and encourages the engagement of ESL learners in curriculum design. The study is based on contemporary research in the field of adult ESL program planning. It summarizes key components of a learning…
Rocket Engine Oscillation Diagnostics
NASA Technical Reports Server (NTRS)
Nesman, Tom; Turner, James E. (Technical Monitor)
2002-01-01
Rocket engine oscillating data can reveal many physical phenomena ranging from unsteady flow and acoustics to rotordynamics and structural dynamics. Because of this, engine diagnostics based on oscillation data should employ both signal analysis and physical modeling. This paper describes an approach to rocket engine oscillation diagnostics, types of problems encountered, and example problems solved. Determination of design guidelines and environments (or loads) from oscillating phenomena is required during initial stages of rocket engine design, while the additional tasks of health monitoring, incipient failure detection, and anomaly diagnostics occur during engine development and operation. Oscillations in rocket engines are typically related to flow driven acoustics, flow excited structures, or rotational forces. Additional sources of oscillatory energy are combustion and cavitation. Included in the example problems is a sampling of signal analysis tools employed in diagnostics. The rocket engine hardware includes combustion devices, valves, turbopumps, and ducts. Simple models of an oscillating fluid system or structure can be constructed to estimate pertinent dynamic parameters governing the unsteady behavior of engine systems or components. In the example problems it is shown that simple physical modeling when combined with signal analysis can be successfully employed to diagnose complex rocket engine oscillatory phenomena.
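The signal-analysis side of the diagnostics described above typically begins with a spectrum; a minimal sketch that recovers the dominant oscillation frequency from a synthetic sensor trace (the 170 Hz "flow-driven" tone and sampling rate are invented for illustration):

```python
import numpy as np

fs = 2000.0                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic trace: a 170 Hz tone buried in broadband noise.
signal = (np.sin(2 * np.pi * 170.0 * t)
          + 0.5 * np.random.default_rng(1).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
dominant = freqs[spectrum[1:].argmax() + 1]   # skip the DC bin
```

Matching such a spectral peak against simple physical models (duct acoustic modes, pump rotational orders) is the pairing of signal analysis and physical modeling the abstract advocates.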
Tarity, T David; Koch, Chelsea N; Burket, Jayme C; Wright, Timothy M; Westrich, Geoffrey H
2017-03-01
Adverse local tissue reaction formation has been suggested to occur with the Modular Dual Mobility (MDM) acetabular design. Few reports in the literature have evaluated fretting and corrosion damage between the acetabular shell and modular metal inserts in this modular system. We evaluated a series of 18 retrieved cobalt chromium MDM inserts for evidence of fretting and corrosion. We assessed the backsides of 18 MDM components for evidence of fretting and corrosion in polar and taper regions based on previously established methods. We collected and assessed 30 similarly designed modular inserts retrieved from metal-on-metal (MoM) total hip arthroplasties as a control. No specific pattern of fretting or corrosion was identified on the MDM inserts. Both fretting and corrosion were significantly greater in the MoM cohort than the MDM cohort, driven by higher fretting and corrosion scores in the engaged taper region of the MoM inserts. MoM components demonstrated more fretting and corrosion than MDM designs, specifically at the taper region, likely driven by differences in the taper engagement mechanism and geometry among the insert designs. The lack of significant fretting and corrosion observed in the MDM inserts is inconsistent with recent claims that this interface may produce clinically significant metallosis and adverse local tissue reactions.
Horvath, Monica M.; Rusincovitch, Shelley A.; Brinson, Stephanie; Shang, Howard C.; Evans, Steve; Ferranti, Jeffrey M.
2015-01-01
Purpose Data generated in the care of patients are widely used to support clinical research and quality improvement, which has hastened the development of self-service query tools. User interface design for such tools, execution of query activity, and underlying application architecture have not been widely reported, and existing tools reflect a wide heterogeneity of methods and technical frameworks. We describe the design, application architecture, and use of a self-service model for enterprise data delivery within Duke Medicine. Methods Our query platform, the Duke Enterprise Data Unified Content Explorer (DEDUCE), supports enhanced data exploration, cohort identification, and data extraction from our enterprise data warehouse (EDW) using a series of modular environments that interact with a central keystone module, Cohort Manager (CM). A data-driven application architecture is implemented through three components: an application data dictionary, the concept of “smart dimensions”, and dynamically-generated user interfaces. Results DEDUCE CM allows flexible hierarchies of EDW queries within a grid-like workspace. A cohort “join” functionality allows switching between filters based on criteria occurring within or across patient encounters. To date, 674 users have been trained and activated in DEDUCE, and logon activity shows a steady increase, with variability between months. A comparison of filter conditions and export criteria shows that these activities have different patterns of usage across subject areas. Conclusions Organizations with sophisticated EDWs may find that users benefit from development of advanced query functionality, complementary to the user interfaces and infrastructure used in other well-published models. 
Driven by its EDW context, the DEDUCE application architecture was also designed to be responsive to source data and to allow modification through alterations in metadata rather than programming, allowing an agile response to source system changes. PMID:25051403
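The "dynamically-generated user interfaces" idea above, where filter widgets are derived from an application data dictionary rather than hard-coded, can be caricatured in a few lines (field names and widget types are invented, not DEDUCE's actual metadata):

```python
# Toy application data dictionary: each entry drives one generated filter widget.
data_dictionary = [
    {"field": "encounter_date", "type": "date", "label": "Encounter date"},
    {"field": "diagnosis_code", "type": "code", "label": "Diagnosis (ICD)"},
    {"field": "age_at_visit", "type": "number", "label": "Age at visit"},
]

widget_for_type = {"date": "DateRangePicker", "code": "CodeSearchBox",
                   "number": "NumericRange"}

def generate_filters(dictionary):
    """Build widget descriptions from metadata instead of hard-coded forms."""
    return [{"label": e["label"], "widget": widget_for_type[e["type"]],
             "binds": e["field"]} for e in dictionary]

filters = generate_filters(data_dictionary)
```

Adding a new source-system field then only requires a new dictionary row, which is the metadata-over-programming agility the abstract highlights.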
Horvath, Monica M; Rusincovitch, Shelley A; Brinson, Stephanie; Shang, Howard C; Evans, Steve; Ferranti, Jeffrey M
2014-12-01
Data generated in the care of patients are widely used to support clinical research and quality improvement, which has hastened the development of self-service query tools. User interface design for such tools, execution of query activity, and underlying application architecture have not been widely reported, and existing tools reflect a wide heterogeneity of methods and technical frameworks. We describe the design, application architecture, and use of a self-service model for enterprise data delivery within Duke Medicine. Our query platform, the Duke Enterprise Data Unified Content Explorer (DEDUCE), supports enhanced data exploration, cohort identification, and data extraction from our enterprise data warehouse (EDW) using a series of modular environments that interact with a central keystone module, Cohort Manager (CM). A data-driven application architecture is implemented through three components: an application data dictionary, the concept of "smart dimensions", and dynamically-generated user interfaces. DEDUCE CM allows flexible hierarchies of EDW queries within a grid-like workspace. A cohort "join" functionality allows switching between filters based on criteria occurring within or across patient encounters. To date, 674 users have been trained and activated in DEDUCE, and logon activity shows a steady increase, with variability between months. A comparison of filter conditions and export criteria shows that these activities have different patterns of usage across subject areas. Organizations with sophisticated EDWs may find that users benefit from development of advanced query functionality, complimentary to the user interfaces and infrastructure used in other well-published models. Driven by its EDW context, the DEDUCE application architecture was also designed to be responsive to source data and to allow modification through alterations in metadata rather than programming, allowing an agile response to source system changes. 
Copyright © 2014 Elsevier Inc. All rights reserved.
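The metadata-over-programming design described above, in which query interfaces are derived from an application data dictionary rather than hard-coded, can be sketched as follows. This is a minimal illustration with hypothetical subject areas and column names, not the actual DEDUCE schema or code:

```python
# Sketch of metadata-driven query generation: filter options and SQL are
# derived from a data dictionary instead of being hard-coded, so a change in
# source data only requires editing metadata. All names are illustrative.

DATA_DICTIONARY = {
    "encounter": {
        "admit_date": {"type": "date",   "label": "Admission date"},
        "age":        {"type": "number", "label": "Age at encounter"},
        "dx_code":    {"type": "code",   "label": "Diagnosis code"},
    }
}

def build_filter_sql(subject, criteria):
    """Translate {column: (op, value)} criteria into a parameterized query."""
    fields = DATA_DICTIONARY[subject]
    clauses, params = [], []
    for col, (op, value) in criteria.items():
        if col not in fields:
            raise KeyError(f"{col!r} not defined for subject {subject!r}")
        if op not in ("=", "<", ">", "<=", ">="):
            raise ValueError(f"unsupported operator {op!r}")
        clauses.append(f"{col} {op} ?")
        params.append(value)
    sql = f"SELECT * FROM {subject} WHERE " + " AND ".join(clauses)
    return sql, params

sql, params = build_filter_sql("encounter", {"age": (">=", 65), "dx_code": ("=", "I50")})
```

Adding a column to `DATA_DICTIONARY` immediately makes it queryable without touching the query-building code, mirroring the "modification through alterations in metadata rather than programming" goal.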
Optimization of Microelectronic Devices for Sensor Applications
NASA Technical Reports Server (NTRS)
Cwik, Tom; Klimeck, Gerhard
2000-01-01
The NASA/JPL goal to reduce payload in future space missions while increasing mission capability demands miniaturization of active and passive sensors, analytical instruments and communication systems, among others. Currently, typical system requirements include the detection of particular spectral lines, associated data processing, and communication of the acquired data to other systems. Advances in lithography and deposition methods result in more advanced devices for space application, while the sub-micron resolution currently available opens a vast design space. Though an experimental exploration of this widening design space, searching for optimized performance by repeated fabrication efforts, is infeasible, it does motivate the development of reliable software design tools. These tools necessitate models based on the fundamental physics and mathematics of the device to accurately model effects such as diffraction and scattering in opto-electronic devices, or bandstructure and scattering in heterostructure devices. The software tools must have convenient turn-around times and interfaces that allow effective usage. The first issue is addressed by the application of high-performance computers and the second by the development of graphical user interfaces driven by properly developed data structures. These tools can then be integrated into an optimization environment, and with the available memory capacity and computational speed of high-performance parallel platforms, simulation of optimized components can proceed. In this paper, specific applications of the electromagnetic modeling of infrared filtering, as well as heterostructure device design, will be presented using genetic algorithm global optimization methods.
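Genetic-algorithm global optimization of the kind mentioned above can be sketched as follows. The quadratic objective is a cheap stand-in for an expensive electromagnetic or heterostructure simulation, and all hyperparameters (population size, mutation rate, number of generations) are illustrative assumptions:

```python
import random

# Toy genetic-algorithm optimizer: candidate parameter vectors in [0, 1]^dim
# evolve by truncation selection, one-point crossover, and Gaussian mutation.
# The objective below stands in for a real device simulation.

def objective(x):
    return sum((xi - 0.5) ** 2 for xi in x)  # minimum at x = (0.5, ..., 0.5)

def genetic_minimize(dim=3, pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:            # Gaussian mutation, clipped
                i = rng.randrange(dim)
                child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

best = genetic_minimize()
```

Because the elite half is always carried over, the best solution found never degrades between generations, which is the usual choice when each objective evaluation is expensive.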
An opinion-driven behavioral dynamics model for addictive behaviors
NASA Astrophysics Data System (ADS)
Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; Ambrose, Bridget K.; Brodsky, Nancy S.; Brown, Theresa J.; Husten, Corinne; Glass, Robert J.
2015-04-01
We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual's behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network, the greater the stochasticity and potential variability in outcomes. This has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.
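The two ingredients named above, continuous opinion diffusion over a directed network and a discrete, history-dependent behavioral mapping, can be sketched in a few lines. The update rule, thresholds, and the three-node network are illustrative assumptions, not the paper's parameterization:

```python
# Sketch of an opinion-driven behavioral model: continuous opinions diffuse
# over a directed network, and a discrete behavior is obtained by thresholding
# with hysteresis (starting the behavior requires a higher opinion than
# quitting it), which makes the behavior history-dependent.

def step(opinions, edges, rate=0.3):
    """One synchronous update: each node moves toward the mean of its in-neighbors."""
    new = dict(opinions)
    for node in opinions:
        sources = [u for (u, v) in edges if v == node]
        if sources:
            mean_in = sum(opinions[u] for u in sources) / len(sources)
            new[node] = opinions[node] + rate * (mean_in - opinions[node])
    return new

def update_behavior(opinion, behaving, start=0.6, stop=0.4):
    """History-dependent mapping: hysteresis between start and stop thresholds."""
    if behaving:
        return opinion > stop
    return opinion > start

# Three-node directed chain: node 0 holds a fixed strong opinion that propagates.
edges = [(0, 1), (1, 2)]
opinions = {0: 1.0, 1: 0.0, 2: 0.0}
behaving = {n: False for n in opinions}
for _ in range(30):
    opinions = step(opinions, edges)
    behaving = {n: update_behavior(opinions[n], behaving[n]) for n in opinions}
```

After 30 steps the strong opinion has propagated down the chain and all three nodes exhibit the behavior; because of the hysteresis band, opinions would have to fall below 0.4 (not merely 0.6) for the behavior to stop.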
Time Series Decomposition into Oscillation Components and Phase Estimation.
Matsuda, Takeru; Komaki, Fumiyasu
2017-02-01
Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished with this model in a manner analogous to the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and in detecting phase-reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
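The generative side of the oscillator state-space model described above can be sketched directly: a two-dimensional state rotates by a nominal frequency, noise perturbs the rotation (the "random frequency modulation"), one coordinate is observed, and the instantaneous phase is the angle of the state. This is a simulation of the model class, not the paper's estimation procedure, and all parameter values are illustrative:

```python
import math, random

# Sketch of a stochastic-oscillator state-space component: the 2-D state
# (x, y) rotates by `freq` radians per step with slight damping, and additive
# noise makes the effective frequency fluctuate. The observed series is the
# x-coordinate; the instantaneous phase is atan2(y, x).

def simulate_oscillator(n, freq=0.2, damp=0.999, noise=0.01, seed=0):
    rng = random.Random(seed)
    x, y = 1.0, 0.0
    series, phases = [], []
    for _ in range(n):
        c, s = math.cos(freq), math.sin(freq)
        x, y = (damp * (c * x - s * y) + rng.gauss(0, noise),
                damp * (s * x + c * y) + rng.gauss(0, noise))
        series.append(x)                 # observed component
        phases.append(math.atan2(y, x))  # instantaneous phase of the state
    return series, phases

series, phases = simulate_oscillator(500)
```

In the full method, a Kalman filter on exactly this state-space form recovers the hidden (x, y) state from the observed series, so the phase estimate falls out of the filtered state rather than from a Hilbert transform.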
A beamline systems model for Accelerator-Driven Transmutation Technology (ADTT) facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Todd, A.M.M.; Paulson, C.C.; Peacock, M.A.
1995-10-01
A beamline systems code that is being developed for Accelerator-Driven Transmutation Technology (ADTT) facility trade studies is described. The overall program is a joint Grumman, G.H. Gillespie Associates (GHGA) and Los Alamos National Laboratory effort. The GHGA Accelerator Systems Model (ASM) has been adopted as the framework on which this effort is based. Relevant accelerator and beam transport models from earlier Grumman systems codes are being adapted to this framework. Preliminary physics and engineering models for each ADTT beamline component have been constructed. Examples noted include a Bridge Coupled Drift Tube Linac (BCDTL) and the accelerator thermal system. A decision has been made to confine the ASM framework principally to beamline modeling, while detailed target/blanket, balance-of-plant and facility costing analysis will be performed externally. An interfacing external balance-of-plant and facility costing model, which will permit the performance of iterative facility trade studies, is under separate development. An ABC (Accelerator Based Conversion) example is used to highlight the present models and capabilities.
Yamaura, Hiroshi; Matsushita, Kojiro; Kato, Ryu; Yokoi, Hiroshi
2009-01-01
We have developed a hand rehabilitation system for patients suffering from paralysis or contracture. It consists of two components: a hand rehabilitation machine, which moves human finger joints with motors, and a data glove, which provides control of the movement of finger joints attached to the rehabilitation machine. The machine is based on the arm-structure type of hand rehabilitation machine; a motor indirectly moves a finger joint via a closed four-link mechanism. We employ a wire-driven mechanism and develop a compact design that can control all three joints (i.e., PIP, DIP and MP) of a finger and that offers a wider range of joint motion than conventional systems. Furthermore, we demonstrate the hand rehabilitation process: finger joints of the left hand, attached to the machine, are controlled by the finger joints of the right hand wearing the data glove.
Design of rapid prototype of UAV line-of-sight stabilized control system
NASA Astrophysics Data System (ADS)
Huang, Gang; Zhao, Liting; Li, Yinlong; Yu, Fei; Lin, Zhe
2018-01-01
The line-of-sight (LOS) stabilized platform is a key technology for UAVs (unmanned aerial vehicles), reducing the degradation of imaging quality caused by vibration and maneuvering of the aircraft. According to the requirements of the LOS stabilization system (a combined inertial and optical-mechanical method) and the UAV's structure, a rapid prototype is designed based on an industrial computer, using the Peripheral Component Interconnect (PCI) bus and Windows RTX to exchange information. The paper presents the control structure and the circuit system, including the inertial stabilization control circuit with gyro and voice-coil-motor driver circuit, the optical-mechanical stabilization control circuit with fast-steering-mirror (FSM) driver circuit and image-deviation acquisition system, the outer-frame rotary follower, and the information-exchange system on the PC. Test results show that the stabilization accuracy reaches 5 μrad, proving the effectiveness of the combined line-of-sight stabilization control system, and the real-time rapid prototype runs stably.
Collaborated Architecture Framework for Composition of UML 2.0 in Zachman Framework
NASA Astrophysics Data System (ADS)
Hermawan; Hastarista, Fika
2016-01-01
The Zachman Framework (ZF) is the enterprise architecture framework most widely adopted in Enterprise Information System (EIS) development. In this study, a Collaborated Architecture Framework (CAF) has been developed to combine ZF with Unified Modeling Language (UML) 2.0 modeling. The CAF provides a composition of the ZF matrix in which each cell consists of a Model Driven Architecture (MDA) built from various UML models and Software Requirement Specification (SRS) documents. This modeling approach is used to develop Enterprise Resource Planning (ERP) software. Because ERP covers a large number of applications with complex relations, an Agile Model Driven Design (AMDD) approach is used as an advanced method to transform the MDA into application-module components efficiently and accurately. Use of the CAF satisfied the needs of all stakeholders involved across the stages of the Rational Unified Process (RUP), and achieved high satisfaction with the functionality of the ERP software at PT. Iglas (Persero) Gresik.
An Open-Source Arduino-based Controller for Mechanical Rain Simulators
NASA Astrophysics Data System (ADS)
Cantilina, K. K.
2017-12-01
Many commercial rain simulators currently used in hydrology rely on inflexible and outdated controller designs. These analog controllers typically only allow a handful of discrete parameter options, and do not support internal timing functions or continuously changing parameters. A desire for finer control of rain simulation events necessitated the design and construction of a microcontroller-based controller, using widely available off-the-shelf components. A menu-driven interface allows users to fine-tune simulation parameters without the need for training or experience with microcontrollers, and the accessibility of the Arduino IDE allows users with a minimum of programming and hardware experience to modify the controller program to suit the needs of individual experiments.
Johansen, Kristoffer; Song, Jae Hee; Prentice, Paul
2018-05-01
We describe the design, construction and characterisation of a broadband passive cavitation detector, with the specific aim of detecting low frequency components of periodic shock waves, with high sensitivity. A finite element model is used to guide selection of matching and backing layers for the shock wave passive cavitation detector (swPCD), and the performance is evaluated against a commercially available device. Validation of the model, and characterisation of the swPCD is achieved through experimental detection of laser-plasma bubble collapse shock waves. The final swPCD design is 20 dB more sensitive to the subharmonic component, from acoustic cavitation driven at 220 kHz, than the comparable commercial device. This work may be significant for monitoring cavitation in medical applications, where sensitive detection is critical, and higher frequencies are more readily absorbed by tissue. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
The structural bioinformatics library: modeling in biomolecular science and beyond.
Cazals, Frédéric; Dreyfus, Tom
2017-04-01
Software in structural bioinformatics has mainly been application driven. To favor practitioners seeking off-the-shelf applications, but also developers seeking advanced building blocks to develop novel applications, we undertook the design of the Structural Bioinformatics Library (SBL, http://sbl.inria.fr), a generic C++/Python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is a modular design offering a rich and versatile framework that allows the development of novel applications requiring well-specified complex operations, without compromising robustness or performance. The SBL involves four software components (1-4 thereafter). For end-users, the SBL provides ready-to-use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, and to model macro-molecular assemblies. These applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C++ toolbox with modular design, involving core (2) algorithms, (3) biophysical models and (4) modules, the latter being especially suited to develop novel applications. The SBL comes with thorough documentation consisting of user and reference manuals, and a bugzilla platform to handle community feedback. The SBL is available from http://sbl.inria.fr. Frederic.Cazals@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Using ontologies for structuring organizational knowledge in Home Care assistance.
Valls, Aida; Gibert, Karina; Sánchez, David; Batet, Montserrat
2010-05-01
Information Technologies and Knowledge-based Systems can significantly improve the management of complex distributed health systems, where supporting multidisciplinarity is crucial and communication and synchronization between the different professionals and tasks become essential. This work proposes the use of the ontological paradigm to describe the organizational knowledge of such complex healthcare institutions as a basis to support their management. The ontology engineering process is detailed, as well as the way the ontology is kept up to date as changes occur. The paper also analyzes how such an ontology can be exploited in a real healthcare application and the role of the ontology in the customization of the system. The particular case of senior Home Care assistance is addressed, as this is a highly distributed field as well as a strategic goal in an ageing Europe. The proposed ontology design is based on a Home Care medical model defined by a European consortium of Home Care professionals, framed in the scope of the K4Care European project (FP6). Due to the complexity of the model and the knowledge gap between the textual medical model and the strict formalization of an ontology, an ontology engineering methodology (On-To-Knowledge) has been followed.
After applying the On-To-Knowledge steps, the following results were obtained: the feasibility study concluded that the ontological paradigm and the expressiveness of modern ontology languages were enough to describe the required medical knowledge; after the kick-off and refinement stages, a complete and non-ambiguous definition of the Home Care model, including its main components and interrelations, was obtained; the formalization stage expressed HC medical entities in the form of ontological classes, which are interrelated by means of hierarchies, properties and semantically rich class restrictions; the evaluation, carried out by exploiting the ontology in a knowledge-driven e-health application running in a real scenario, showed that the ontology design and its exploitation brought several benefits with regard to flexibility, adaptability and work efficiency from the end-user point of view; for the maintenance stage, two software tools are presented, aimed at addressing the incorporation and modification of healthcare units and the personalization of ontological profiles. The paper shows that the ontological paradigm and the expressiveness of modern ontology languages can be exploited not only to represent terminology in a non-ambiguous way, but also to formalize the interrelations and organizational structures involved in a real and distributed healthcare environment. This kind of ontology facilitates adaptation to changes in the healthcare organization or Care Units, supports the creation of profile-based interaction models in a transparent and seamless way, and increases the reusability and generality of the developed software components. From the exploitation of the developed ontology in a real medical scenario, we conclude that an ontology formalizing organizational interrelations is a key component for building effective distributed knowledge-driven e-health systems. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Evaluation in the Design of Complex Systems
ERIC Educational Resources Information Center
Ho, Li-An; Schwen, Thomas M.
2006-01-01
We identify literature arguing that the process of creating knowledge-based systems is often imbalanced. In most knowledge-based systems, development is technology-driven instead of requirement-driven. Therefore, we argue that designers must recognize that evaluation is a critical link in the application of requirement-driven development models…
Modeling and analysis of friction clutch at a driveline for suppressing car starting judder
NASA Astrophysics Data System (ADS)
Li, Liping; Lu, Zhaijun; Liu, Xue-Lai; Sun, Tao; Jing, Xingjian; Shangguan, Wen-Bin
2018-06-01
Car judder is a kind of back-and-forth vibration during vehicle starting caused by torsional oscillation of the driveline. This paper presents a systematic study of the dynamic response characteristics of the clutch driven disc for suppression of judder during vehicle starting. Self-excited vibration behavior of the clutch driven disc is analyzed based on a 4DOF non-linear multi-body dynamic model of the clutch engagement process that considers stick-slip characteristics and uses Karnopp friction models. Physical parameters of a clutch that determine the generation of judder behaviors are discussed, and revised designs of the clutch driven disc for suppression of judder are consequently investigated and validated with experiments on two real cars.
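The Karnopp friction model referenced above handles the stick-slip discontinuity by treating a small band of relative velocity as "stuck": inside the band, static friction balances the applied load up to a breakaway limit; outside it, kinetic friction opposes the motion. A minimal sketch, with friction limits that are illustrative rather than the paper's clutch parameters:

```python
# Sketch of the Karnopp stick-slip friction law used in driveline judder
# models. Returns the friction force acting on the sliding element.

def karnopp_friction(v_rel, f_applied, f_s=10.0, f_k=8.0, dv=1e-3):
    """Karnopp friction: dead band |v_rel| < dv is treated as sticking."""
    if abs(v_rel) < dv:                           # stick regime
        if abs(f_applied) <= f_s:
            return -f_applied                     # static friction balances the load
        return -f_s if f_applied > 0 else f_s     # breakaway at the static limit
    return -f_k if v_rel > 0 else f_k             # slip: kinetic friction opposes motion

stuck = karnopp_friction(0.0, 5.0)      # load below breakaway: element held in place
breaking = karnopp_friction(0.0, 20.0)  # load exceeds the static limit
sliding = karnopp_friction(0.5, 0.0)    # kinetic friction during slip
```

The dead band avoids the numerical chattering a pure sign(v) law would cause at zero velocity, which is why Karnopp-type laws are favored in driveline simulation.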
Cobalt: A GPU-based correlator and beamformer for LOFAR
NASA Astrophysics Data System (ADS)
Broekema, P. Chris; Mol, J. Jan David; Nijboer, R.; van Amesfoort, A. S.; Brentjens, M. A.; Loose, G. Marcel; Klijn, W. F. A.; Romein, J. W.
2018-04-01
For low-frequency radio astronomy, software correlation and beamforming on general purpose hardware is a viable alternative to custom designed hardware. LOFAR, a new-generation radio telescope centered in the Netherlands with international stations in Germany, France, Ireland, Poland, Sweden and the UK, has successfully used software real-time processors based on IBM Blue Gene technology since 2004. Since then, developments in technology have allowed us to build a system based on commercial off-the-shelf components that combines the same capabilities with lower operational cost. In this paper, we describe the design and implementation of a GPU-based correlator and beamformer with the same capabilities as the Blue Gene based systems. We focus on the design approach taken, and show the challenges faced in selecting an appropriate system. The design, implementation and verification of the software system show the value of a modern test-driven development approach. Operational experience, based on three years of operations, demonstrates that a general purpose system is a good alternative to the previous supercomputer-based system or custom-designed hardware.
Component Framework for Loosely Coupled High Performance Integrated Plasma Simulations
NASA Astrophysics Data System (ADS)
Elwasif, W. R.; Bernholdt, D. E.; Shet, A. G.; Batchelor, D. B.; Foley, S.
2010-11-01
We present the design and implementation of a component-based simulation framework for the execution of coupled time-dependent plasma modeling codes. The Integrated Plasma Simulator (IPS) provides a flexible lightweight component model that streamlines the integration of standalone codes into coupled simulations. Standalone codes are adapted to the IPS component interface specification using a thin wrapping layer implemented in the Python programming language. The framework provides services for inter-component method invocation, configuration, task and data management, asynchronous event management, simulation monitoring, and checkpoint/restart capabilities. Services are invoked, as needed, by the computational components to coordinate the execution of different aspects of coupled simulations on Massively Parallel Processing (MPP) machines. A common plasma state layer serves as the foundation for inter-component, file-based data exchange. The IPS design principles, implementation details, and execution model will be presented, along with an overview of several use cases.
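The coordination pattern described above, standalone codes wrapped behind a common component interface and driven by a framework that owns the shared state, can be sketched as follows. The class names and the toy physics are illustrative assumptions, not the actual IPS API:

```python
# Minimal sketch of a component-based coupling framework: each wrapped code
# implements a common step() interface, and the framework drives the
# components in sequence while routing a shared state between them.

class Component:
    def __init__(self, name):
        self.name = name

    def step(self, t, state):
        raise NotImplementedError

class Heating(Component):
    def step(self, t, state):
        state["temperature"] = state.get("temperature", 0.0) + 1.0

class Transport(Component):
    def step(self, t, state):
        state["flux"] = 0.1 * state.get("temperature", 0.0)

class Simulator:
    """Framework: owns the shared state dict and invokes each component per step."""
    def __init__(self, components):
        self.components = components
        self.state = {}

    def run(self, steps):
        for t in range(steps):
            for comp in self.components:
                comp.step(t, self.state)
        return self.state

final = Simulator([Heating("heat"), Transport("transport")]).run(5)
```

In the real framework the shared "plasma state" is exchanged through files rather than an in-memory dict, but the inversion of control is the same: components never call each other directly.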
A generative tool for building health applications driven by ISO 13606 archetypes.
Menárguez-Tortosa, Marcos; Martínez-Costa, Catalina; Fernández-Breis, Jesualdo Tomás
2012-10-01
The use of Electronic Healthcare Records (EHR) standards in the development of healthcare applications is crucial for achieving the semantic interoperability of clinical information. Advanced EHR standards make use of the dual model architecture, which provides a solution for clinical interoperability based on the separation of information and knowledge. However, the impact of such standards is limited by the scarce availability of tools that facilitate their usage and practical implementation. In this paper, we present an approach for the automatic generation of clinical applications for the ISO 13606 EHR standard, which is based on the dual model architecture. This generator has been designed generically, so it can be easily adapted to other dual model standards and can generate applications for multiple technological platforms. These good properties are based on the combination of standards for the representation of generic user interfaces and model-driven engineering techniques.
Regmi, Krishna
2018-01-01
Although considerable attention has been paid to the use of quantitative methods in health research, there has been limited focus on decentralisation research using a qualitative-driven mixed method design. Decentralisation is both a problematic concept and a methodological challenge; it is context-specific and often multi-dimensional, and researchers often consider using more than one method design when the phenomena under study are complex in nature. The aim of this study was to explore the effects of decentralisation on the provision of primary healthcare services. A qualitative-driven mixed method design was used, employing three methods of data collection organised into a core component and supplementary components: focus group discussions (FGDs), semi-structured interviews (SSIs) and participant observation. Four FGDs with health service practitioners, three FGDs with district stakeholders, 20 SSIs with health service users and 20 SSIs with national stakeholders were conducted sequentially. NVivo 10, a data management program, was used to code the field data, employing a content analysis method to search for underlying themes and concepts in the text material. Both positive and negative experiences related to access, quality, planning, supplies, coordination and supervision were identified. This study offers some evidence of the effects of decentralisation on health outcomes in general, and in particular fills a gap in understanding and examining healthcare through a qualitative-driven mixed methods approach. Future qualitative research offering an in-depth understanding of the problems (why decentralisation, why now and what for) would provide an important data set to benefit researchers and policy-makers in planning and implementing effective health services.
Nonlinear Tracking Control of a Conductive Supercoiled Polymer Actuator.
Luong, Tuan Anh; Cho, Kyeong Ho; Song, Min Geun; Koo, Ja Choon; Choi, Hyouk Ryeol; Moon, Hyungpil
2018-04-01
Artificial muscle actuators made from commercial nylon fishing lines have recently been introduced and shown to be a new type of actuator with high performance. However, the actuators also exhibit significant nonlinearities, which make them difficult to control, especially in precise trajectory-tracking applications. In this article, we present a nonlinear mathematical model of a conductive supercoiled polymer (SCP) actuator driven by Joule heating for model-based feedback control. Our efforts include modeling of the hysteresis behavior of the actuator. Based on the nonlinear model, we design a sliding mode controller for SCP actuator-driven manipulators. The system with the proposed control law is proven to be asymptotically stable using Lyapunov theory. The control performance of the proposed method is evaluated experimentally and compared with that of a proportional-integral-derivative (PID) controller on one-degree-of-freedom SCP actuator-driven manipulators. Experimental results show that the proposed controller outperforms the PID controller: tracking errors are nearly 10 times smaller, and the controller is more robust to external disturbances such as sensor noise and actuator modeling error.
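The sliding-mode idea above can be sketched on a generic first-order plant x' = -a·x + b·u + d, a stand-in for the actuator's dominant (e.g. thermal) dynamics rather than the paper's SCP model; gains, the reference trajectory, and the disturbance are all illustrative. A boundary layer (saturation instead of sign) is used to limit chattering:

```python
import math

# Sketch of a sliding-mode tracking controller: the control cancels the known
# plant dynamics (equivalent control) and adds a switching term -k*sat(e) that
# drives the tracking error e = x - x_ref to a thin boundary layer despite a
# bounded unknown disturbance d.

def sat(s, width=0.05):
    """Saturated switching function: sign(s) outside the boundary layer."""
    return max(-1.0, min(1.0, s / width))

def simulate(T=10.0, dt=0.001, a=1.0, b=2.0, k=5.0):
    x = 0.0
    max_err_tail = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        x_ref = math.sin(t)                    # desired trajectory
        dx_ref = math.cos(t)
        e = x - x_ref                          # sliding variable s = e
        u = (dx_ref + a * x - k * sat(e)) / b  # equivalent control + switching term
        d = 0.5 * math.sin(3.0 * t)            # bounded disturbance, unknown to controller
        x += dt * (-a * x + b * u + d)         # Euler integration of the plant
        if t > 2.0:                            # measure error after transients
            max_err_tail = max(max_err_tail, abs(e))
    return max_err_tail

err = simulate()
```

With k larger than the disturbance bound, the closed-loop error obeys ė = -k·sat(e) + d, so the error is confined to a band of roughly (disturbance bound × layer width / k); the saturation trades a small steady-state band for chatter-free control.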
Property-driven functional verification technique for high-speed vision system-on-chip processor
NASA Astrophysics Data System (ADS)
Nshunguyimfura, Victor; Yang, Jie; Liu, Liyuan; Wu, Nanjian
2017-04-01
The implementation of functional verification in a fast, reliable, and effective manner is a challenging task in a vision chip verification process. The main reason for this challenge is the stepwise nature of existing functional verification techniques. This vision chip verification complexity is also related to the fact that in most vision chip design cycles, extensive efforts are focused on optimizing chip metrics such as performance, power, and area, while design functional verification is not explicitly considered at the earlier stages, where the soundest decisions are made. In this paper, we propose a semi-automatic property-driven verification technique. The implementation of all verification components is based on design properties. We introduce a low-dimension property space between the specification space and the implementation space. The aim of this technique is to speed up the verification process for high-performance parallel processing vision chips. Our experimental results show that the proposed technique can reduce the verification effort by up to 20% for a complex vision chip design while also reducing simulation and debugging overheads.
A Design Methodology for Complex (E)-Learning. Innovative Session.
ERIC Educational Resources Information Center
Bastiaens, Theo; van Merrienboer, Jeroen; Hoogveld, Bert
Human resource development (HRD) specialists are searching for instructional design models that accommodate e-learning platforms. Van Merrienboer proposed the four-component instructional design model (4C/ID model) for competency-based education. The model's basic message is that well-designed learning environments can always be described in terms…
Corredor, Iván; Bernardos, Ana M; Iglesias, Josué; Casar, José R
2012-01-01
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have a solid understanding of specific platforms and web technologies. To ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.
NASA Technical Reports Server (NTRS)
Karns, James
1993-01-01
The objective of this study was to establish the initial quantitative reliability bounds for nuclear electric propulsion systems in a manned Mars mission required to ensure crew safety and mission success. Finding the reliability bounds involves balancing top-down (mission driven) requirements and bottom-up (technology driven) capabilities. In seeking this balance we hope to accomplish the following: (1) provide design insights into the achievability of the baseline design in terms of reliability requirements, given the existing technology base; (2) suggest alternative design approaches which might enhance reliability and crew safety; and (3) indicate what technology areas require significant research and development to achieve the reliability objectives.
ADAPT: The Agent Development and Prototyping Testbed.
Shoulson, Alexander; Marshak, Nathan; Kapadia, Mubbasir; Badler, Norman I
2014-07-01
We present ADAPT, a flexible platform for designing and authoring functional, purposeful human characters in a rich virtual environment. Our framework incorporates character animation, navigation, and behavior with modular interchangeable components to produce narrative scenes. The animation system provides locomotion, reaching, gaze tracking, gesturing, sitting, and reactions to external physical forces, and can easily be extended with more functionality due to a decoupled, modular structure. The navigation component allows characters to maneuver through a complex environment with predictive steering for dynamic obstacle avoidance. Finally, our behavior framework allows a user to fully leverage a character's animation and navigation capabilities when authoring both individual decision-making and complex interactions between actors using a centralized, event-driven model.
Tsao, Liuxing; Ma, Liang
2016-11-01
Digital human modelling enables ergonomists and designers to consider ergonomic concerns and design alternatives in a timely and cost-efficient manner in the early stages of design. However, the reliability of the simulation could be limited due to the percentile-based approach used in constructing the digital human model. To enhance the accuracy of the size and shape of the models, we proposed a framework to generate digital human models using three-dimensional (3D) anthropometric data. The 3D scan data from specific subjects' hands were segmented based on the estimated centres of rotation. The segments were then driven in forward kinematics to perform several functional postures. The constructed hand models were then verified, thereby validating the feasibility of the framework. The proposed framework helps generate accurate subject-specific digital human models, which can be utilised to guide product design and workspace arrangement. Practitioner Summary: Subject-specific digital human models can be constructed under the proposed framework based on three-dimensional (3D) anthropometry. This approach enables more reliable digital human simulation to guide product design and workspace arrangement.
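A minimal sketch of the forward-kinematics step, assuming a planar chain of rigid segments rotating about estimated centres of rotation; the segment lengths and joint angles below are invented for illustration, not measured hand data.

```python
import numpy as np

def rot2d(theta):
    """2D rotation matrix for angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def finger_chain(lengths, angles, base=(0.0, 0.0)):
    """Joint positions of a planar segment chain driven in forward kinematics."""
    pts = [np.asarray(base, dtype=float)]
    heading = 0.0
    for L, a in zip(lengths, angles):
        heading += a                       # joint angles accumulate along the chain
        pts.append(pts[-1] + rot2d(heading) @ np.array([L, 0.0]))
    return np.array(pts)

lengths = [40.0, 25.0, 18.0]               # mm, hypothetical phalanx lengths
angles = np.deg2rad([30.0, 20.0, 10.0])    # a flexed functional posture
pts = finger_chain(lengths, angles)        # (4, 2): base plus three joints
```

Driving the same segment model with different angle sets generates the functional postures used for verification.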
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panaccione, Charles; Staab, Greg; Meuleman, Erik
ION has developed a mathematically driven model for a contacting device incorporating mass transfer, heat transfer, and computational fluid dynamics. This model is based upon a parametric structure for purposes of future commercialization. The most promising design from modeling was 3D printed and tested in a bench-scale CO2 capture unit and compared to commercially available structured packing tested in the same unit.
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thimmisetty, Charanraj A.; Zhao, Wenju; Chen, Xiao
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
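The KPCA stage can be sketched as follows with an RBF kernel on a synthetic, nonlinearly correlated parameter ensemble; the LMCMC sampling stage and the elasticity model are not reproduced, and the kernel width is an assumption.

```python
import numpy as np

# Synthetic "parameter ensemble": points on a noisy circle, a nonlinearly
# correlated 2D set standing in for a high-dimensional random field.
rng = np.random.default_rng(0)
n = 200
t = rng.uniform(0, 2 * np.pi, n)
X = np.column_stack([np.cos(t), np.sin(t)]) + 0.05 * rng.standard_normal((n, 2))

gamma = 2.0                                    # RBF kernel width (assumed)
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)

# centre the kernel matrix in feature space
ones = np.ones((n, n)) / n
Kc = K - ones @ K - K @ ones + ones @ K @ ones

evals, evecs = np.linalg.eigh(Kc)
order = np.argsort(evals)[::-1]                # sort eigenpairs descending
evals, evecs = evals[order], evecs[:, order]

k = 2                                          # retained feature dimensions
alphas = evecs[:, :k] / np.sqrt(evals[:k])     # normalised expansion coefficients
Z = Kc @ alphas                                # low-dimensional KPCA features
```

In the full methodology, the MCMC chain would explore this low-dimensional feature space instead of the original parameter space.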
NASA Technical Reports Server (NTRS)
Afjeh, Abdollah A.; Reed, John A.
2003-01-01
This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.
JSTOR: The Development of a Cost-Driven, Value-Based Pricing Model.
ERIC Educational Resources Information Center
Guthrie, Kevin M.
JSTOR (Journal STORage project) began as a project of The Andrew W. Mellon Foundation designed to help libraries address growing persistent space problems. JSTOR was established as an independent not-for-profit organization with its own Board of Trustees in August 1995. This paper summarizes how JSTOR's economic model was developed, lessons…
Phanphet, Suwattanarwong; Dechjarern, Surangsee; Jomjanyong, Sermkiat
2017-05-01
The main objective of this work is to improve the standard of the existing design of knee prosthesis developed by Thailand's Prostheses Foundation of Her Royal Highness The Princess Mother. The experimental structural tests, based on the ISO 10328, of the existing design showed that a few components failed due to fatigue under normal cyclic loading below the required number of cycles. The finite element (FE) simulations of structural tests on the knee prosthesis were carried out. Fatigue life predictions of knee component materials were modeled based on the Morrow's approach. The fatigue life prediction based on the FE model result was validated with the corresponding structural test and the results agreed well. The new designs of the failed components were studied using the design of experiments approach and finite element analysis of the ISO 10328 structural test of knee prostheses under two separate loading cases. Under ultimate loading, the knee prosthesis peak von Mises stress must be less than the yield strength of the knee component's material and the total knee deflection must be less than 2.5 mm. The fatigue life prediction of all knee components must be higher than 3,000,000 cycles under normal cyclic loading. The design parameters are the thickness of the joint bars, the diameter of the lower connector and the thickness of the absorber-stopper. The optimized knee prosthesis design meeting all the requirements was recommended. An experimental ISO 10328 structural test of the fabricated knee prosthesis based on the optimized design confirmed the finite element prediction. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
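A hedged sketch of a Morrow-type strain-life estimate of the kind described; the material constants below are generic steel-like values for illustration, not the knee prosthesis data.

```python
# Morrow mean-stress-corrected strain-life relation:
#   eps_a = (sf - sigma_mean)/E * (2N)^b + ef * (2N)^c
# Constants are assumed, steel-like values.
E = 200e3               # MPa, Young's modulus
sf, b = 900.0, -0.09    # fatigue strength coefficient / exponent
ef, c = 0.4, -0.56      # fatigue ductility coefficient / exponent

def morrow_strain(N, sigma_mean=0.0):
    """Strain amplitude sustainable for N cycles under the Morrow correction."""
    return (sf - sigma_mean) / E * (2 * N) ** b + ef * (2 * N) ** c

def cycles_to_failure(eps_a, sigma_mean=0.0):
    """Bisection on log10(N) for the life at strain amplitude eps_a."""
    lo, hi = 0.0, 9.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if morrow_strain(10 ** mid, sigma_mean) > eps_a:
            lo = mid        # sustainable strain still above target: life is longer
        else:
            hi = mid
    return 10 ** (0.5 * (lo + hi))

N = cycles_to_failure(0.002)   # life at a 0.2% strain amplitude, zero mean stress
```

A tensile mean stress reduces the elastic term and hence the predicted life, which is the qualitative behaviour checked against the structural tests.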
Conceptual design and analysis of a dynamic scale model of the Space Station Freedom
NASA Technical Reports Server (NTRS)
Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.
1994-01-01
This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, conducting scale factor trade studies, and performing design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive design, performance and cost of a SSF dynamic scale model. Four scale model options were evaluated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale size is recommended based on the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower fidelity dynamic similarity scaling can be used for non-critical components.
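Scale factor trade studies of this kind rest on standard replica-scaling laws, which for a model built from the same materials can be sketched as follows; these are textbook relations, not values from the SSF study.

```python
# Replica scaling sketch: geometry scales by lam (model/full-scale), materials
# unchanged. Then mass scales as volume, stiffness scales linearly with size,
# and natural frequencies scale inversely with size.
def replica_scaling(lam):
    return {
        "length": lam,
        "mass": lam ** 3,        # density unchanged, volume goes as lam^3
        "stiffness": lam,        # EA/L and EI/L^3 both scale linearly
        "frequency": 1.0 / lam,  # sqrt(k/m) = sqrt(lam / lam^3) = 1/lam
    }

s = replica_scaling(0.25)        # the recommended 1/4-scale option
```

The 1/4-scale model thus vibrates at four times the full-scale frequencies, one of the trades against test-facility limits.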
A MISO-ARX-Based Method for Single-Trial Evoked Potential Extraction.
Yu, Nannan; Wu, Lingling; Zou, Dexuan; Chen, Ying; Lu, Hanbing
2017-01-01
In this paper, we propose a novel method for solving the single-trial evoked potential (EP) estimation problem. In this method, the single-trial EP is considered as a complex containing many components, which may originate from different functional brain sites; these components can be distinguished according to their respective latencies and amplitudes and are extracted simultaneously by multiple-input single-output autoregressive modeling with exogenous input (MISO-ARX). The extraction process is performed in three stages: first, we use a reference EP as a template and decompose it into a set of components, which serve as subtemplates for the remaining steps. Then, a dictionary is constructed with these subtemplates, and EPs are preliminarily extracted by sparse coding in order to roughly estimate the latency of each component. Finally, the single-trial measurement is parametrically modeled by MISO-ARX while characterizing spontaneous electroencephalographic activity as an autoregression model driven by white noise and with each component of the EP modeled by autoregressive-moving-average filtering of the subtemplates. Once optimized, all components of the EP can be extracted. Compared with ARX, our method has greater tracking capabilities of specific components of the EP complex as each component is modeled individually in MISO-ARX. We provide exhaustive experimental results to show the effectiveness and feasibility of our method.
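A toy sketch of the dictionary idea only: an "EP" composed of two shifted subtemplates plus AR(1) background activity, with component amplitudes recovered by least squares. The full MISO-ARX model and the latency search are omitted, and all signal parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
t = np.arange(n)

def subtemplate(latency, width=12.0):
    """Gaussian bump standing in for one EP component at a given latency."""
    return np.exp(-0.5 * ((t - latency) / width) ** 2)

D = np.column_stack([subtemplate(120), subtemplate(220)])   # dictionary
true_amp = np.array([3.0, -2.0])                            # component amplitudes

# AR(1) noise standing in for spontaneous EEG modelled as an
# autoregression driven by white noise
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.8 * noise[i - 1] + 0.1 * rng.standard_normal()

y = D @ true_amp + noise                     # single-trial "measurement"
amp_hat, *_ = np.linalg.lstsq(D, y, rcond=None)
```

Because each component is represented by its own dictionary column, the two amplitudes are estimated individually, mirroring the per-component modelling that gives MISO-ARX its tracking advantage over plain ARX.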
Structural design methodologies for ceramic-based material systems
NASA Technical Reports Server (NTRS)
Duffy, Stephen F.; Chulya, Abhisak; Gyekenyesi, John P.
1991-01-01
One of the primary pacing items for realizing the full potential of ceramic-based structural components is the development of new design methods and protocols. The focus here is on low temperature, fast-fracture analysis of monolithic, whisker-toughened, laminated, and woven ceramic composites. A number of design models and criteria are highlighted. Public domain computer algorithms, which aid engineers in predicting the fast-fracture reliability of structural components, are mentioned. Emphasis is not placed on evaluating the models, but instead is focused on the issues relevant to the current state of the art.
Hilsenrath, Peter; Eakin, Cynthia; Fischer, Katrina
2015-01-01
Health care reform is directed toward improving access and quality while containing costs. An essential part of this is improvement of pricing models to more accurately reflect the costs of providing care. Transparent prices that reflect costs are necessary to signal information to consumers and producers. This information is central in a consumer-driven marketplace. The rapid increase in high deductible insurance and other forms of cost sharing incentivizes the search for price information. The organizational ability to measure costs across a cycle of care is an integral component of creating value, and will play a greater role as reimbursements transition to episode-based care, value-based purchasing, and accountable care organization models. This article discusses use of activity-based costing (ABC) to better measure the cost of health care. It describes examples of ABC in health care organizations and discusses impediments to adoption in the United States including cultural and institutional barriers. © The Author(s) 2015.
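The ABC idea can be sketched in a few lines: overhead is assigned to a care episode through activity drivers rather than a flat rate. The activities and rates below are invented for illustration.

```python
# Toy activity-based costing sketch. Each cost pool has a rate per unit of
# its activity driver; an episode's cost is the sum over the activities
# it actually consumed. All figures are hypothetical.
activities = {
    "nursing_minutes": 1.2,     # $ per minute of nursing time
    "imaging_scans": 250.0,     # $ per scan
    "lab_tests": 40.0,          # $ per test
}

def episode_cost(usage):
    """Cost of one care episode from its measured activity consumption."""
    return sum(activities[a] * q for a, q in usage.items())

cost = episode_cost({"nursing_minutes": 90, "imaging_scans": 1, "lab_tests": 3})
```

Summing such episode costs across a full cycle of care is what enables the value measurement the article describes.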
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Siewerdsen, J; Stayman, J
Purpose: There has been increasing interest in integrating fluence field modulation (FFM) devices with diagnostic CT scanners for dose reduction purposes. Conventional FFM strategies, however, are often either based on heuristics or on the analysis of filtered-backprojection (FBP) performance. This work investigates a prospective task-driven optimization of FFM for model-based iterative reconstruction (MBIR) in order to improve imaging performance at the same total dose as conventional strategies. Methods: The task-driven optimization framework utilizes an ultra-low-dose 3D scout as a patient-specific anatomical model and a mathematical formulation of the imaging task. The MBIR method investigated is quadratically penalized-likelihood reconstruction. The FFM objective function uses detectability index, d', computed as a function of the predicted spatial resolution and noise in the image. To optimize performance throughout the object, a maxi-min objective was adopted where the minimum d' over multiple locations is maximized. To reduce the dimensionality of the problem, FFM is parameterized as a linear combination of 2D Gaussian basis functions over horizontal detector pixels and projection angles. The coefficients of these bases are found using the covariance matrix adaptation evolution strategy (CMA-ES) algorithm. The task-driven design was compared with three other strategies proposed for FBP reconstruction for a calcification cluster discrimination task in an abdomen phantom. Results: The task-driven optimization yielded FFM that was significantly different from those designed for FBP. Comparing all four strategies, the task-based design achieved the highest minimum d' with an 8–48% improvement, consistent with the maxi-min objective. In addition, d' was improved to a greater extent over a larger area within the entire phantom. Conclusion: Results from this investigation suggest the need to re-evaluate conventional FFM strategies for MBIR. The task-based optimization framework provides a promising approach that maximizes imaging performance under the same total dose constraint.
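The Gaussian-basis parameterisation and the maxi-min objective can be sketched as follows; the detectability surrogate used here is a toy stand-in, not the paper's noise and resolution predictor, and all dimensions and widths are assumptions.

```python
import numpy as np

# Fluence over (detector pixel u, projection angle a) as a nonnegative linear
# combination of 2D Gaussian basis functions.
nu, na, nb = 64, 48, 6                 # grid sizes and number of bases (assumed)
u = np.linspace(0, 1, nu)
a = np.linspace(0, 1, na)
U, A = np.meshgrid(u, a, indexing="ij")

rng = np.random.default_rng(2)
centers = rng.uniform(0, 1, (nb, 2))   # basis centres (assumed)
sigma = 0.25                           # basis width (assumed)
basis = np.stack([np.exp(-((U - cu) ** 2 + (A - ca) ** 2) / (2 * sigma ** 2))
                  for cu, ca in centers])            # (nb, nu, na)

def fluence(coeffs):
    """Fluence pattern from basis coefficients; nonnegative if coeffs are."""
    return np.tensordot(coeffs, basis, axes=1)       # (nu, na)

def maximin_dprime(coeffs, locs):
    """Maxi-min objective: worst-case detectability over candidate locations.
    Toy surrogate: local d' grows with the square root of local fluence."""
    f = fluence(coeffs)
    return min(np.sqrt(f[i, j]) for i, j in locs)

locs = [(10, 10), (32, 24), (50, 40)]  # hypothetical task locations
flat = maximin_dprime(np.ones(nb), locs)
```

An optimizer such as CMA-ES would then search the `nb` coefficients to maximize this minimum, rather than optimizing fluence pixel by pixel.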
Design and testing of the reactor-internal hydraulic control rod drive for the nuclear heating plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batheja, P.; Meier, W.J.; Rau, P.J.
A hydraulically driven control rod is being developed at Kraftwerk Union for integration in the primary system of a small nuclear district heating reactor. An elaborate test program, under way for ~3 yr, was initiated with a plexiglass rig to understand the basic principles. A design specification list was prepared, taking reactor boundary conditions and relevant German rules and regulations into account. Subsequently, an atmospheric loop for testing of components at 20 to 90°C was erected. The objectives involved optimization of individual components such as a piston/cylinder drive unit, electromagnetic valves, and an ultrasonic position indication system as well as verification of computer codes. Based on the results obtained, full-scale components were designed and fabricated for a prototype test rig, which is currently in operation. Thus far, all atmospheric tests in this rig have been completed. Investigations under reactor temperature and pressure, followed by endurance tests, are under way. All tests to date have shown a reliable functioning of the hydraulic drive, including a novel ultrasonic position indication system.
On Mixed Data and Event Driven Design for Adaptive-Critic-Based Nonlinear $H_{\\infty}$ Control.
Wang, Ding; Mu, Chaoxu; Liu, Derong; Ma, Hongwen
2018-04-01
In this paper, based on the adaptive critic learning technique, the control for a class of unknown nonlinear dynamic systems is investigated by adopting a mixed data and event driven design approach. The nonlinear control problem is formulated as a two-player zero-sum differential game and the adaptive critic method is employed to cope with the data-based optimization. The novelty lies in that the data driven learning identifier is combined with the event driven design formulation, in order to develop the adaptive critic controller, thereby accomplishing the nonlinear control. The event driven optimal control law and the time driven worst case disturbance law are approximated by constructing and tuning a critic neural network. Applying the event driven feedback control, the closed-loop system is built with stability analysis. Simulation studies are conducted to verify the theoretical results and illustrate the control performance. It is significant to observe that the present research provides a new avenue for integrating data-based control and the event-triggering mechanism into establishing advanced adaptive critic systems.
A cloud-based approach for interoperable electronic health records (EHRs).
Bahga, Arshdeep; Madisetti, Vijay K
2013-09-01
We present a cloud-based approach for the design of interoperable electronic health record (EHR) systems. Cloud computing environments provide several benefits to all the stakeholders in the healthcare ecosystem (patients, providers, payers, etc.). Lack of data interoperability standards and solutions has been a major obstacle in the exchange of healthcare data between different stakeholders. We propose an EHR system - cloud health information systems technology architecture (CHISTAR) - that achieves semantic interoperability through the use of a generic design methodology which uses a reference model that defines a general-purpose set of data structures and an archetype model that defines the clinical data attributes. CHISTAR application components are designed using the cloud component model approach that comprises loosely coupled components that communicate asynchronously. In this paper, we describe the high-level design of CHISTAR and the approaches for semantic interoperability, data integration, and security.
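The reference-model/archetype split can be sketched as a generic container validated against archetype-specific attribute sets; the names and fields below are illustrative, not CHISTAR's actual schema.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Entry:
    """Reference model: a general-purpose container with no clinical semantics."""
    archetype_id: str
    data: Dict[str, Any] = field(default_factory=dict)

# Archetype model: which clinical attributes each archetype may carry
# (hypothetical archetype and attribute names).
ARCHETYPES = {
    "blood_pressure": {"systolic_mmHg", "diastolic_mmHg", "position"},
}

def validate(entry: Entry) -> bool:
    """An entry is valid if its data keys are a subset of its archetype's."""
    allowed = ARCHETYPES.get(entry.archetype_id, set())
    return set(entry.data) <= allowed

ok = validate(Entry("blood_pressure", {"systolic_mmHg": 120, "diastolic_mmHg": 80}))
bad = validate(Entry("blood_pressure", {"pulse_bpm": 70}))
```

Keeping generic structure and clinical constraints in separate layers is what lets two systems exchange entries while agreeing only on the archetypes.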
Biomimetics and the case of the remarkable ragworms.
Hesselberg, Thomas
2007-08-01
Biomimetics is a rapidly growing field both as an academic and as an applied discipline. This paper gives a short introduction to the current status of the discipline before it describes three approaches to biomimetics: the mechanism-driven, which is based on the study of a specific mechanism; the focused organism-driven, which is based on the study of one function in a model organism; and the integrative organism-driven approach, where multiple functions of a model organism provide inspiration. The first two are established approaches and include many modern studies and the famous biomimetic discoveries of Velcro and the Lotus-Effect, whereas the last approach is not yet well recognized. The advantages of the integrative organism-driven approach are discussed using the ragworms as a case study. A morphological and locomotory study of these marine polychaetes reveals their biomimetic potential, which includes using their ability to move in slippery substrates as inspiration for novel endoscopes, using their compound setae as models for passive friction structures and using their three gaits, slow crawling, fast crawling, and swimming as well as their rapid burrowing technique to provide inspiration for the design of displacement pumps and multifunctional robots.
Prediction of gravity-driven fingering in porous media
NASA Astrophysics Data System (ADS)
Beljadid, Abdelaziz; Cueto-Felgueroso, Luis; Juanes, Ruben
2017-11-01
Gravity-driven displacement of one fluid by another in porous media is often subject to a hydrodynamic instability, whereby fluid invasion takes the form of preferential flow paths; examples include secondary oil migration in reservoir rocks, and infiltration of rainfall water in dry soil. Here, we develop a continuum model of gravity-driven two-phase flow in porous media within the phase-field framework (Cueto-Felgueroso and Juanes, 2008). We employ pore-scale physics arguments to design the free energy of the system, which notably includes a nonlinear formulation of the high-order (square-gradient) term based on equilibrium considerations in the direction orthogonal to gravity. This nonlocal term plays the role of a macroscopic surface tension, which exhibits a strong link with capillary pressure. Our theoretical analysis shows that the proposed model enforces that fluid saturations are bounded between 0 and 1 by construction, therefore overcoming a serious limitation of previous models. Our numerical simulations show that the proposed model also resolves the pinning behavior at the base of the infiltration front, and the asymmetric behavior of the fingers at material interfaces observed experimentally.
Real-time flood forecasts & risk assessment using a possibility-theory based fuzzy neural network
NASA Astrophysics Data System (ADS)
Khan, U. T.
2016-12-01
Globally, floods are one of the most devastating natural disasters, and improved flood forecasting methods are essential for better flood protection in urban areas. Given the availability of high-resolution real-time datasets for flood variables (e.g. streamflow and precipitation) in many urban areas, data-driven models have been effectively used to predict peak flow rates in rivers; however, the selection of input parameters for these types of models is often subjective. Additionally, the inherent uncertainty associated with data models, along with errors in extreme event observations, means that uncertainty quantification is essential. Addressing these concerns will enable improved flood forecasting methods and provide more accurate flood risk assessments. In this research, a new type of data-driven model, a quasi-real-time updating fuzzy neural network, is developed to predict peak flow rates in urban riverine watersheds. A possibility-to-probability transformation is first used to convert observed data into fuzzy numbers. A possibility theory based training regime is then used to construct the fuzzy parameters and the outputs. A new entropy-based optimisation criterion is used to train the network. Two existing methods to select the optimum input parameters are modified to account for fuzzy number inputs, and compared. These methods are: Entropy-Wavelet-based Artificial Neural Network (EWANN) and Combined Neural Pathway Strength Analysis (CNPSA). Finally, an automated algorithm designed to select the optimum structure of the neural network is implemented. The overall impact of each component of training this network is to replace the traditional ad hoc network configuration methods with one based on objective criteria. Ten years of data from the Bow River in Calgary, Canada (including two major floods in 2005 and 2013) are used to calibrate and test the network. The EWANN method selected lagged peak flow as a candidate input, whereas the CNPSA method selected lagged precipitation and lagged mean daily flow as candidate inputs. Model performance metrics show that the CNPSA method had higher performance (with an efficiency of 0.76). Model output was used to assess the risk of extreme peak flows for a given day using an inverse possibility-to-probability transformation.
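One ingredient, constructing a triangular possibility distribution from crisp observations, can be sketched as follows; this is a stand-in illustration, not the paper's exact possibility-to-probability transformation.

```python
import numpy as np

def triangular_fuzzy(x, lo, peak, hi):
    """Membership of x in a triangular fuzzy number with support [lo, hi]
    and full possibility at peak."""
    x = np.asarray(x, dtype=float)
    left = (x - lo) / (peak - lo)     # rising edge
    right = (hi - x) / (hi - peak)    # falling edge
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# hypothetical peak-flow observations, m^3/s
obs = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
lo, peak, hi = obs.min(), np.median(obs), obs.max()
mu = triangular_fuzzy(obs, lo, peak, hi)   # possibility of each observation
```

Fuzzifying inputs this way is what lets the network's training regime propagate observational uncertainty rather than single crisp values.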
Machine learning based cloud mask algorithm driven by radiative transfer modeling
NASA Astrophysics Data System (ADS)
Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.
2017-12-01
Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold-based cloud mask algorithms require a complicated design process and fine tuning for each sensor, and have difficulty over snow/ice-covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow-covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.
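As a hedged stand-in for the classifier, the sketch below trains a single-neuron (logistic) classifier on synthetic two-channel reflectances, mimicking training on radiative-transfer output; all feature values and class statistics are invented.

```python
import numpy as np

# Synthetic "radiative transfer" training set: two reflectance channels,
# clear (snow/ice-like) vs cloudy classes with assumed means and spread.
rng = np.random.default_rng(3)
n = 500
clear = rng.normal([0.2, 0.3], 0.05, (n, 2))
cloudy = rng.normal([0.5, 0.15], 0.05, (n, 2))
X = np.vstack([clear, cloudy])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 1 = cloudy

Xb = np.hstack([X, np.ones((2 * n, 1))])        # append bias column
w = np.zeros(3)
for _ in range(2000):                            # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-Xb @ w))            # predicted cloud probability
    w -= 0.5 * Xb.T @ (p - y) / len(y)           # cross-entropy gradient step

pred = (1.0 / (1.0 + np.exp(-Xb @ w))) > 0.5
accuracy = (pred == y.astype(bool)).mean()
```

The actual algorithm uses a multi-layer network and simulated radiances for each sensor's channels, but the train-on-simulation workflow is the same shape.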
A Customizable Language Learning Support System Using Ontology-Driven Engine
ERIC Educational Resources Information Center
Wang, Jingyun; Mendori, Takahiko; Xiong, Juan
2013-01-01
This paper proposes a framework for web-based language learning support systems designed to provide customizable pedagogical procedures based on the analysis of characteristics of both learner and course. This framework employs a course-centered ontology and a teaching method ontology as the foundation for the student model, which includes learner…
NASA Astrophysics Data System (ADS)
Yusof, Wan Zaiyana Mohd; Fadzline Muhamad Tamyez, Puteri
2018-04-01
The standard definition of innovation does little to help entrepreneurs, business people, or innovators truly grasp what it means to innovate; hence we hear that government has spent millions of ringgit on "innovation" through R&D, with little commercial value to show for it. Innovation can be defined as the exploitation or commercialization of an idea or invention to create economic or social value. Most entrepreneurs and business managers regard innovation as creating economic value, forgetting that innovation also creates value for society and the environment. The ultimate goal of an entrepreneur, inventor or researcher is to exploit innovation to create value. As changes happen in society and the economy, organizations and enterprises have to keep up, and this requires innovation. This conceptual paper studies radical design-driven innovation in the Malaysian furniture industry as a business model; the overall aim of the study is to examine radical design-driven innovation in Malaysia and how it compares with findings from Western studies. The paper familiarizes readers with innovation and describes the radical design-driven perspective adopted in its conceptual framework and design process.
A Co-modeling Method Based on Component Features for Mechatronic Devices in Aero-engines
NASA Astrophysics Data System (ADS)
Wang, Bin; Zhao, Haocen; Ye, Zhifeng
2017-08-01
Data-fused and user-friendly design of aero-engine accessories is required because of their structural complexity and stringent reliability requirements. This paper gives an overview of a typical aero-engine control system and the development process of key mechatronic devices used. Several essential aspects of modeling and simulation in the process are investigated. Considering the limitations of a single theoretic model, feature-based co-modeling methodology is suggested to satisfy the design requirements and compensate for the diversity of component sub-models for these devices. As an example, a stepper-motor-controlled Fuel Metering Unit (FMU) is modeled in view of the component physical features using two different software tools. An interface is suggested to integrate the single-discipline models into the synthesized one. Performance simulation of this device using the co-model and parameter optimization for its key components are discussed. Comparison between delivery testing and the simulation shows that the co-model for the FMU has a high accuracy and a clear superiority over a single model. Together with its compatible interface with the engine mathematical model, the feature-based co-modeling methodology is proven to be an effective technical measure in the development process of the device.
NASA Technical Reports Server (NTRS)
George-Falvy, Dez
1992-01-01
Circumferential design combines compactness and efficiency. In the remotely controlled valve, flow in a tributary duct along the circumference of the primary duct merges with flow in the primary duct. Flow in the tributary duct is regulated by a variable-throat nozzle driven by a worm gear. The design is leak-proof, and most components are easily fabricated on a lathe.
Tsui, Fu-Chiang; Espino, Jeremy U; Weng, Yan; Choudary, Arvinder; Su, Hoah-Der; Wagner, Michael M
2005-01-01
The National Retail Data Monitor (NRDM) has monitored over-the-counter (OTC) medication sales in the United States since December 2002. The NRDM collects data from over 18,600 retail stores and processes over 0.6 million sales records per day. This paper describes key architectural features that we have found necessary for a data utility component in a national biosurveillance system. These elements include event-driven architecture to provide analyses of data in near real time, multiple levels of caching to improve query response time, high availability through the use of clustered servers, scalable data storage through the use of storage area networks, and a web-service function for interoperation with affiliated systems. The methods and architectural principles are relevant to the design of any production data utility for public health surveillance: systems that collect data from multiple sources in near real time for use by analytic programs and user interfaces that have substantial requirements for time-series data aggregated in multiple dimensions.
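Two of these elements, event-driven ingestion and query caching, can be sketched together in a few lines; the record fields and class names are illustrative, not the NRDM schema.

```python
from collections import defaultdict

class SalesMonitor:
    """Toy data utility: sales events update aggregates in near real time,
    and aggregate queries are served from a cache invalidated on new data."""

    def __init__(self):
        self.by_zip_day = defaultdict(int)   # (zip, day) -> units sold
        self._cache = {}

    def on_sale(self, zip_code, day, count):
        """Event handler: fold a new sales record into the aggregates."""
        self.by_zip_day[(zip_code, day)] += count
        self._cache.clear()                   # new data invalidates cached queries

    def daily_total(self, day):
        """Cached aggregate query: total units sold on a given day."""
        if day not in self._cache:
            self._cache[day] = sum(v for (z, d), v in self.by_zip_day.items()
                                   if d == day)
        return self._cache[day]

m = SalesMonitor()
m.on_sale("15213", "2005-01-01", 12)
m.on_sale("15232", "2005-01-01", 7)
total = m.daily_total("2005-01-01")
```

In production the cache layers and clustered storage replace this single dictionary, but the event-then-invalidate flow is the same.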
Experiments on vibration-driven stick-slip locomotion: A sliding bifurcation perspective
NASA Astrophysics Data System (ADS)
Du, Zhouwei; Fang, Hongbin; Zhan, Xiong; Xu, Jian
2018-05-01
Dry friction appears at the contact interface between two surfaces and is the source of stick-slip vibrations. Instead of being a negative factor, dry friction is essential for a vibration-driven locomotion system to take effect. However, dry-friction-induced stick-slip locomotion has not been fully understood in previous research, especially in terms of experiments. In this paper, we experimentally study the stick-slip dynamics of a vibration-driven locomotion system from a sliding bifurcation perspective. To this end, we first design and build a vibration-driven locomotion prototype based on an internal piezoelectric cantilever. By utilizing the mechanical resonance, the small piezoelectric deformation is significantly amplified to drive the prototype to achieve effective locomotion. Through identifying the stick-slip characteristics in velocity histories, we could categorize the system's locomotion into four types and obtain a stick-slip categorization diagram. In each zone of the diagram the locomotion exhibits qualitatively different stick-slip dynamics. Such a categorization diagram is actually a sliding bifurcation diagram; crossing from one stick-slip zone to another corresponds to the triggering of a sliding bifurcation. In addition, a simplified single degree-of-freedom model is established, with the rationality of the simplification explained theoretically and numerically. Based on the equivalent model, a numerical stick-slip categorization is also obtained, which shows good agreement with the experiments both qualitatively and quantitatively. To the best of our knowledge, this is the first work that experimentally generates a sliding bifurcation diagram. The obtained stick-slip categorizations deepen our understanding of stick-slip dynamics in vibration-driven systems and could serve as a basis for system design and optimization.
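The simplified single degree-of-freedom picture can be sketched numerically: a body driven by an internal harmonic force against Coulomb friction, with stick and slip phases separated explicitly. All parameters are illustrative, not the prototype's.

```python
import numpy as np

m, g, mu = 0.1, 9.81, 0.3             # mass (kg), gravity, friction coefficient
A, w = 0.5, 2 * np.pi * 5.0           # internal forcing amplitude (N), freq (rad/s)
F_fric = mu * m * g                   # limiting Coulomb friction force

dt, T = 1e-4, 2.0
x, v = 0.0, 0.0
stick_time = 0.0
for i in range(int(round(T / dt))):
    F = A * np.sin(w * i * dt)        # internal harmonic drive
    if v == 0.0:
        if abs(F) <= F_fric:
            stick_time += dt          # stick: friction balances the drive
        else:
            v += (F - F_fric * np.sign(F)) / m * dt   # breakaway into slip
    else:
        v_new = v + (F - F_fric * np.sign(v)) / m * dt
        v = 0.0 if v * v_new < 0 else v_new           # zero crossing: may re-stick
    x += v * dt

stick_fraction = stick_time / T       # fraction of time spent sticking
```

Varying the forcing amplitude and frequency changes `stick_fraction` and the pattern of stick and slip phases, which is exactly what the categorization diagram maps out.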
Magnetotransport in Artificial Kagome Spin Ice
NASA Astrophysics Data System (ADS)
Chern, Gia-Wei
2017-12-01
Magnetic nanoarrays with special geometries exhibit nontrivial collective behaviors similar to those observed in spin-ice materials. Here, we present a circuit model to describe the complex magnetotransport phenomena in artificial kagome spin ice. In this picture, the system can be viewed as a resistor network driven by voltage sources that are located at vertices of the honeycomb array. The differential voltages across different terminals of these sources are related to the ice rules that govern the local magnetization ordering. The circuit model relates the transverse Hall voltage of kagome ice to the underlying spin correlations. Treating the magnetic nanoarray as metamaterials, we present a mesoscopic constitutive equation relating the Hall resistance to magnetization components of the system. We further show that the Hall signal is significantly enhanced when the kagome ice undergoes a magnetic-charge-ordering transition. Our analysis can be readily generalized to other lattice geometries, providing a quantitative method for the design of magnetoresistance devices based on artificial spin ice.
Interactive graphical system for small-angle scattering analysis of polydisperse systems
NASA Astrophysics Data System (ADS)
Konarev, P. V.; Volkov, V. V.; Svergun, D. I.
2016-09-01
A program suite for one-dimensional small-angle scattering analysis of polydisperse systems and multiple data sets is presented. The main program, POLYSAS, has a menu-driven graphical user interface calling computational modules from the ATSAS package to perform data treatment and analysis. The graphical menu interface allows one to process multiple (time-, concentration- or temperature-dependent) data sets and interactively change the parameters for the data modelling using sliders. The graphical representation of the data is done via the Winteracter-based program SASPLOT. The package is designed for the analysis of polydisperse systems and mixtures, and permits one to obtain size distributions and evaluate the volume fractions of the components using linear and non-linear fitting algorithms as well as model-independent singular value decomposition. The use of the POLYSAS package is illustrated by recent examples of its application to study concentration-dependent oligomeric states of proteins and the time kinetics of polymer micelles for anticancer drug delivery.
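The model-independent singular value decomposition step mentioned above can be sketched as follows: stacking the measured curves into a matrix and counting significant singular values estimates how many independent components the mixture series contains. The rank-threshold heuristic and test data below are assumptions, not POLYSAS internals.

```python
import numpy as np

def significant_components(curves, rel_tol=1e-6):
    """Estimate the number of independent scattering components in a
    set of 1-D curves (rows = measurements) from the singular values
    of the data matrix; rel_tol is a noise-floor heuristic."""
    s = np.linalg.svd(np.asarray(curves, dtype=float), compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))
```

For a noiseless two-component concentration series the estimate recovers exactly two components; with real data the threshold must be matched to the noise level.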
Designing Cognitively Diagnostic Assessment for Algebraic Content Knowledge and Thinking Skills
ERIC Educational Resources Information Center
Zhang, Zhidong
2018-01-01
This study explored a diagnostic assessment method that emphasized the cognitive process of algebra learning. The study utilized a design and a theory-driven model to examine content knowledge. Using the theory-driven model, the thinking skills of algebra learning were also examined. A Bayesian network model was applied to represent the theory…
NASA Astrophysics Data System (ADS)
Shao, Xinxin; Naghdy, Fazel; Du, Haiping
2017-03-01
A fault-tolerant fuzzy H∞ control design approach for active suspension of in-wheel motor driven electric vehicles in the presence of sprung mass variation, actuator faults and control input constraints is proposed. The controller is designed based on the quarter-car active suspension model with a dynamic-damping-in-wheel-motor-driven-system, in which the suspended motor is operated as a dynamic absorber. The Takagi-Sugeno (T-S) fuzzy model is used to model this suspension with possible sprung mass variation. The parallel-distributed compensation (PDC) scheme is deployed to derive a fault-tolerant fuzzy controller for the T-S fuzzy suspension model. In order to reduce the motor wear caused by the dynamic force transmitted to the in-wheel motor, the dynamic force is taken as an additional controlled output besides the traditional optimization objectives such as sprung mass acceleration, suspension deflection and actuator saturation. The H∞ performance of the proposed controller is derived as linear matrix inequalities (LMIs) comprising three equality constraints which are solved efficiently by means of MATLAB LMI Toolbox. The proposed controller is applied to an electric vehicle suspension and its effectiveness is demonstrated through computer simulation.
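The quarter-car plant underlying such controller designs can be sketched as the standard passive 2-DOF model below. The paper's dynamic-damping in-wheel-motor arrangement and the fuzzy H∞ controller itself are not reproduced; all parameter values are illustrative assumptions.

```python
import numpy as np

def quarter_car_step_response(ms=300.0, mu=40.0, ks=18000.0, cs=1200.0,
                              kt=180000.0, z0=0.02, dt=1e-4, t_end=2.0):
    """Passive 2-DOF quarter-car model (sprung mass ms, unsprung mass mu,
    suspension spring ks, damper cs, tire stiffness kt) hitting a road
    step of height z0. Returns peak sprung-mass acceleration, one of the
    ride-comfort objectives named in the abstract."""
    zs, vs, zu, vu = 0.0, 0.0, 0.0, 0.0   # sprung/unsprung position, velocity
    peak_acc = 0.0
    for _ in range(int(t_end / dt)):
        a_s = (-ks * (zs - zu) - cs * (vs - vu)) / ms
        a_u = (ks * (zs - zu) + cs * (vs - vu) - kt * (zu - z0)) / mu
        zs += vs * dt
        vs += a_s * dt
        zu += vu * dt
        vu += a_u * dt
        peak_acc = max(peak_acc, abs(a_s))
    return peak_acc
```

An active suspension adds a control force between the two masses; the controller in the paper shapes exactly the acceleration, deflection and motor-force responses this model exposes.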
NASA Astrophysics Data System (ADS)
Butler, Samuel D.; Marciniak, Michael A.
2014-09-01
Since the development of the Torrance-Sparrow bidirectional reflectance distribution function (BRDF) model in 1967, several BRDF models have been created. Previous attempts to categorize BRDF models have relied upon somewhat vague descriptors, such as empirical, semi-empirical, and experimental. Our approach is to instead categorize BRDF models based on functional form: microfacet normal distribution, geometric attenuation, directional-volumetric and Fresnel terms, and cross section conversion factor. Several popular microfacet models are compared to a standardized notation for a microfacet BRDF model. A library of microfacet model components is developed, allowing for creation of unique microfacet models driven by experimentally measured BRDFs.
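A tiny "library of components" in this spirit can be assembled from interchangeable D, G and F terms plus the cross-section conversion factor. The GGX distribution, Smith masking and Schlick Fresnel used here are common published choices, assumed for illustration rather than taken from the paper's catalogue.

```python
import numpy as np

def ggx_D(cos_h, alpha):
    """GGX/Trowbridge-Reitz microfacet normal distribution."""
    a2 = alpha * alpha
    d = cos_h * cos_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * d * d)

def schlick_F(cos_i, f0):
    """Schlick approximation to the Fresnel term."""
    return f0 + (1.0 - f0) * (1.0 - cos_i) ** 5

def smith_G1(cos_t, alpha):
    """Smith shadowing-masking factor for GGX (separable form)."""
    a2 = alpha * alpha
    return 2.0 * cos_t / (cos_t + np.sqrt(a2 + (1.0 - a2) * cos_t ** 2))

def microfacet_brdf(cos_i, cos_o, cos_h, alpha=0.3, f0=0.04):
    """Assemble a BRDF from component terms: distribution D, geometric
    attenuation G, Fresnel F, and the 1/(4 cos_i cos_o) cross-section
    conversion factor."""
    D = ggx_D(cos_h, alpha)
    G = smith_G1(cos_i, alpha) * smith_G1(cos_o, alpha)
    F = schlick_F(cos_i, f0)
    return D * G * F / (4.0 * cos_i * cos_o)
```

Swapping `ggx_D` for a Beckmann distribution, or `schlick_F` for full Fresnel equations, is exactly the kind of component substitution the categorization enables.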
Diffusion and transport in locally disordered driven lattices
NASA Astrophysics Data System (ADS)
Wulf, Thomas; Okupnik, Alexander; Schmelcher, Peter
2016-09-01
We study the effect of disorder on the particle density evolution in a classical Hamiltonian driven lattice setup. If the disorder is localized within a finite sub-domain of the lattice, strong tails emerge in the density distribution which even increase towards larger positions, yielding a highly non-Gaussian particle density evolution. As the key underlying mechanism, we identify the disorder-induced conversion between different components of the unperturbed system's mixed phase space. Based on the introduction of individual conversion rates between chaotic and regular components, a theoretical model is developed which correctly predicts the scaling of the particle density. The effect of disorder on the transport properties is also studied: transport is significantly enhanced for localized disorder, contrasting strongly with the merely weak modification of the transport for global disorder.
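The conversion-rate idea can be caricatured as a two-state population exchange between the chaotic and regular phase-space components. The rates, initial condition and explicit integration below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def component_populations(r_cr, r_rc, n_c0=1.0, n_r0=0.0,
                          dt=0.01, steps=1000):
    """Toy two-component conversion model: population flows between a
    chaotic (c) and a regular (r) phase-space component with rates
    r_cr (c -> r) and r_rc (r -> c). Returns final populations."""
    n_c, n_r = n_c0, n_r0
    for _ in range(steps):
        flow = (r_cr * n_c - r_rc * n_r) * dt  # net c -> r transfer
        n_c -= flow
        n_r += flow
    return n_c, n_r
```

The steady state satisfies detailed balance, n_c/n_r = r_rc/r_cr, so asymmetric conversion rates skew the occupation of the two components, the mechanism invoked for the non-Gaussian density tails.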
Threat driven modeling framework using petri nets for e-learning system.
Khamparia, Aditya; Pandey, Babita
2016-01-01
Vulnerabilities at various levels are the main cause of security risks in e-learning systems. This paper presents a modified threat-driven modeling framework to identify, after risk assessment, the threats that require mitigation and how to mitigate them. To model these threat mitigations, aspect-oriented stochastic Petri nets are used. The paper includes security metrics based on vulnerabilities present in e-learning systems. The Common Vulnerability Scoring System, designed to provide a normalized method for rating vulnerabilities, is used as the basis for the metric definitions and calculations. A case study is also presented which shows the need for, and feasibility of, using aspect-oriented stochastic Petri net models for threat modeling, improving the reliability, consistency and robustness of the e-learning system.
NASA Astrophysics Data System (ADS)
Gang, Grace J.; Siewerdsen, Jeffrey H.; Webster Stayman, J.
2017-06-01
Tube current modulation (TCM) is routinely adopted on diagnostic CT scanners for dose reduction. Conventional TCM strategies are generally designed for filtered-backprojection (FBP) reconstruction to satisfy simple image quality requirements based on noise. This work investigates TCM designs for model-based iterative reconstruction (MBIR) to achieve optimal imaging performance as determined by a task-based image quality metric. Additionally, regularization is an important aspect of MBIR that is jointly optimized with TCM, and includes both the regularization strength, which controls overall smoothness, and directional weights, which permit control of the isotropy/anisotropy of the local noise and resolution properties. Initial investigations focus on a known imaging task at a single location in the image volume. The framework adopts Fourier and analytical approximations for fast estimation of the local noise power spectrum (NPS) and modulation transfer function (MTF)—each carrying dependencies on TCM and regularization. For the single location optimization, the local detectability index (d‧) of the specific task was directly adopted as the objective function. A covariance matrix adaptation evolution strategy (CMA-ES) algorithm was employed to identify the optimal combination of imaging parameters. Evaluations of both conventional and task-driven approaches were performed in an abdomen phantom for a mid-frequency discrimination task in the kidney. Among the conventional strategies, the TCM pattern optimal for FBP using a minimum variance criterion yielded a worse task-based performance compared to an unmodulated strategy when applied to MBIR. Moreover, task-driven TCM designs for MBIR were found to have the opposite behavior from conventional designs for FBP, with greater fluence assigned to the less attenuating views of the abdomen and less fluence to the more attenuating lateral views.
Such TCM patterns exaggerate the intrinsic anisotropy of the MTF and NPS as a result of the data weighting in MBIR. Directional penalty design was found to reinforce the same trend. The task-driven approaches outperform conventional approaches, with the maximum improvement in d‧ of 13% given by the joint optimization of TCM and regularization. This work demonstrates that the TCM optimal for MBIR is distinct from conventional strategies proposed for FBP reconstruction and strategies optimal for FBP are suboptimal and may even reduce performance when applied to MBIR. The task-driven imaging framework offers a promising approach for optimizing acquisition and reconstruction for MBIR that can improve imaging performance and/or dose utilization beyond conventional imaging strategies.
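A task-based detectability index of the kind used as the objective here can be illustrated with the standard non-prewhitening (NPW) observer formula; the 1-D frequency grid and the choice of the NPW observer are simplifying assumptions relative to the paper's framework.

```python
import numpy as np

def detectability_npw(task_W, mtf, nps, df):
    """Non-prewhitening model-observer detectability index d', computed
    from a task function W(f), system MTF(f), and noise-power spectrum
    NPS(f) sampled on a common frequency grid with spacing df:
        d'^2 = [∫ W^2 MTF^2 df]^2 / ∫ W^2 MTF^2 NPS df"""
    num = (np.sum(task_W ** 2 * mtf ** 2) * df) ** 2
    den = np.sum(task_W ** 2 * mtf ** 2 * nps) * df
    return np.sqrt(num / den)
```

Because both MTF and NPS depend on the TCM pattern and regularization, an optimizer such as CMA-ES can maximize this scalar over those acquisition and reconstruction parameters.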
Wang, Qian; Molenaar, Peter; Harsh, Saurabh; Freeman, Kenneth; Xie, Jinyu; Gold, Carol; Rovine, Mike; Ulbrecht, Jan
2014-03-01
An essential component of any artificial pancreas is the prediction of blood glucose levels as a function of exogenous and endogenous perturbations such as insulin dose, meal intake, and physical activity and emotional tone under natural living conditions. In this article, we present a new data-driven state-space dynamic model with time-varying coefficients that are used to explicitly quantify the time-varying patient-specific effects of insulin dose and meal intake on blood glucose fluctuations. Using the 3-variate time series of glucose level, insulin dose, and meal intake of an individual type 1 diabetic subject, we apply an extended Kalman filter (EKF) to estimate time-varying coefficients of the patient-specific state-space model. We evaluate our empirical modeling using (1) the FDA-approved UVa/Padova simulator with 30 virtual patients and (2) clinical data of 5 type 1 diabetic patients under natural living conditions. Compared to a forgetting-factor-based recursive ARX model of the same order, the EKF model predictions have higher fit, and significantly better temporal gain and J index and thus are superior in early detection of upward and downward trends in glucose. The EKF-based state-space model developed in this article is particularly suitable for model-based state-feedback control designs since the Kalman filter estimates the state variable of the glucose dynamics based on the measured glucose time series. In addition, since the model parameters are estimated in real time, this model is also suitable for adaptive control. © 2014 Diabetes Technology Society.
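The core idea of tracking time-varying coefficients with a Kalman-type filter can be sketched for the linear-observation case, where the EKF reduces to a standard Kalman filter over a random-walk coefficient state. The regressor layout and noise settings are assumptions, not the paper's patient-specific model.

```python
import numpy as np

def kf_tv_regression(y, X, q=1e-4, r=1.0):
    """Track time-varying coefficients b_t in y_t = X_t . b_t + noise.
    State model: random walk b_t = b_{t-1} + w_t, Var(w) = q*I.
    Returns the filtered coefficient trajectory."""
    n, p = X.shape
    b = np.zeros(p)
    P = np.eye(p)
    coeffs = np.empty((n, p))
    for t in range(n):
        P = P + q * np.eye(p)            # predict: random-walk drift
        h = X[t]
        S = h @ P @ h + r                # innovation variance
        K = P @ h / S                    # Kalman gain
        b = b + K * (y[t] - h @ b)       # measurement update
        P = P - np.outer(K, h) @ P
        coeffs[t] = b
    return coeffs
```

Because the filter runs one measurement at a time, the same loop serves real-time parameter estimation, the property the abstract highlights for adaptive control.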
Modeling interdependencies between business and communication processes in hospitals.
Brigl, Birgit; Wendt, Thomas; Winter, Alfred
2003-01-01
The optimization and redesign of business processes in hospitals is an important challenge for hospital information management, which has to design and implement a suitable HIS architecture. Nevertheless, no tools are available that specialize in modeling information-driven business processes and their consequences for the communication between information-processing tools. Therefore, we present an approach which facilitates the representation and analysis of business processes and the resulting communication processes between application components, together with their interdependencies. This approach aims not only to visualize those processes, but also to evaluate whether there are weaknesses in the information-processing infrastructure which hinder the smooth implementation of the business processes.
Active control of complex, multicomponent self-assembly processes
NASA Astrophysics Data System (ADS)
Schulman, Rebecca
The kinetics of many complex biological self-assembly processes such as cytoskeletal assembly are precisely controlled by cells. Spatiotemporal control over rates of filament nucleation, growth and disassembly determines how self-assembly occurs and how the assembled form changes over time. These reaction rates can be manipulated by changing the concentrations of the components needed for assembly by activating or deactivating them. I will describe how we can use these principles to design driven self-assembly processes in which we assemble and disassemble multiple types of components to create micron-scale networks of semiflexible filaments assembled from DNA. The same set of primitive components can be assembled into many different structures depending on the concentrations of different components and how designed DNA-based chemical reaction networks manipulate these concentrations over time. These chemical reaction networks can in turn interpret environmental stimuli to direct complex, multistage responses. Such a system is a laboratory for understanding complex active material behaviors, such as metamorphosis, self-healing or adaptation to the environment, that are ubiquitous in biological systems but difficult to quantitatively characterize or engineer.
Machine learning and data science in soft materials engineering
NASA Astrophysics Data System (ADS)
Ferguson, Andrew L.
2018-01-01
In many branches of materials science it is now routine to generate data sets of such large size and dimensionality that conventional methods of analysis fail. Paradigms and tools from data science and machine learning can provide scalable approaches to identify and extract trends and patterns within voluminous data sets, perform guided traversals of high-dimensional phase spaces, and furnish data-driven strategies for inverse materials design. This topical review provides an accessible introduction to machine learning tools in the context of soft and biological materials by ‘de-jargonizing’ data science terminology, presenting a taxonomy of machine learning techniques, and surveying the mathematical underpinnings and software implementations of popular tools, including principal component analysis, independent component analysis, diffusion maps, support vector machines, and relative entropy. We present illustrative examples of machine learning applications in soft matter, including inverse design of self-assembling materials, nonlinear learning of protein folding landscapes, high-throughput antimicrobial peptide design, and data-driven materials design engines. We close with an outlook on the challenges and opportunities for the field.
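Of the tools surveyed, principal component analysis is the simplest to demonstrate: it reduces to a singular value decomposition of the centered data matrix. This is a generic sketch, not code from the review.

```python
import numpy as np

def pca(X, n_components=2):
    """Principal component analysis via SVD of the centered data.
    Returns the projection onto the leading components and the
    fraction of variance each explains."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Xc @ Vt[:n_components].T, explained[:n_components]
```

For soft-matter data such as simulation trajectories, the explained-variance spectrum indicates how many collective coordinates dominate, the first step in the kind of landscape learning the review describes.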
Architectural approaches for HL7-based health information systems implementation.
López, D M; Blobel, B
2010-01-01
Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology approaching the different aspects of systems architecture, such as the business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. Point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with the service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise Java Beans technology. Selecting the appropriate integration architecture is a fundamental issue of any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model - offered by HIS-DF and supported by HL7 v3 artifacts - is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.
An opinion-driven behavioral dynamics model for addictive behaviors
Moore, Thomas W.; Finley, Patrick D.; Apelberg, Benjamin J.; ...
2015-04-08
We present a model of behavioral dynamics that combines a social network-based opinion dynamics model with behavioral mapping. The behavioral component is discrete and history-dependent to represent situations in which an individual’s behavior is initially driven by opinion and later constrained by physiological or psychological conditions that serve to maintain the behavior. Additionally, individuals are modeled as nodes in a social network connected by directed edges. Parameter sweeps illustrate model behavior and the effects of individual parameters and parameter interactions on model results. Mapping a continuous opinion variable into a discrete behavioral space induces clustering on directed networks. Clusters provide targets of opportunity for influencing the network state; however, the smaller the network the greater the stochasticity and potential variability in outcomes. Furthermore, this has implications both for behaviors that are influenced by close relationships versus those influenced by societal norms and for the effectiveness of strategies for influencing those behaviors.
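The two-layer structure, continuous opinion dynamics plus a history-dependent discrete behavior map, can be sketched with simple opinion averaging and a hysteresis rule. The averaging scheme and the two thresholds are illustrative assumptions, not the published model's equations.

```python
import numpy as np

def run_opinion_behavior(A, x0, steps=50, w=0.5, on=0.6, off=0.2):
    """Opinions x diffuse over a directed network (adjacency A, rows
    normalized to weights); a discrete behavior switches on above
    threshold `on` but, mimicking dependence, only switches off below
    the lower threshold `off` (hysteresis = history dependence)."""
    x = np.asarray(x0, dtype=float).copy()
    behav = x > on
    W = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    for _ in range(steps):
        x = (1 - w) * x + w * (W @ x)              # opinion averaging
        behav = np.where(behav, x > off, x > on)   # hysteretic mapping
    return x, behav
```

Note that after opinions converge, behaviors can remain split: individuals who adopted the behavior keep it even though their opinion alone would no longer trigger adoption.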
Calibration of resistance factors needed in the LRFD design of driven piles.
DOT National Transportation Integrated Search
2009-05-01
This research project presents the calibration of resistance factors for the Load and Resistance Factor Design (LRFD) method of driven piles driven into Louisiana soils based on reliability theory. Fifty-three square Precast-Prestressed-Concrete (P...
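Such reliability-based calibrations are often reported alongside a closed-form first-order second-moment (FOSM) approximation of the resistance factor. The formula below is the commonly cited FOSM form from the pile-LRFD calibration literature, and every default parameter value is an illustrative textbook-style assumption, not a value from this report.

```python
import math

def resistance_factor_fosm(lam_R, cov_R, beta_T=2.33,
                           dl_ll=3.0, gam_D=1.25, gam_L=1.75,
                           lam_QD=1.05, cov_QD=0.1,
                           lam_QL=1.15, cov_QL=0.2):
    """FOSM approximation of the LRFD resistance factor, given the
    resistance bias lam_R and COV, target reliability index beta_T,
    dead-to-live load ratio dl_ll, load factors gam_*, and load bias
    statistics lam_Q*, cov_Q*."""
    cov_Q2 = cov_QD ** 2 + cov_QL ** 2
    num = lam_R * (gam_D * dl_ll + gam_L) * math.sqrt(
        (1 + cov_Q2) / (1 + cov_R ** 2))
    den = (lam_QD * dl_ll + lam_QL) * math.exp(
        beta_T * math.sqrt(math.log((1 + cov_R ** 2) * (1 + cov_Q2))))
    return num / den
```

A resistance bias near 1.0 with substantial scatter (COV around 0.4) yields a factor well below unity, which is the qualitative outcome such calibrations against pile load-test databases produce.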
Lanning, Maryanna E.; Yu, Wenbo; Yap, Jeremy L.; Chauhan, Jay; Chen, Lijia; Whiting, Ellis; Pidugu, Lakshmi S.; Atkinson, Tyler; Bailey, Hala; Li, Willy; Roth, Braden M.; Hynicka, Lauren; Chesko, Kirsty; Toth, Eric A.; Shapiro, Paul; MacKerell, Alexander D.; Wilder, Paul T.; Fletcher, Steven
2016-01-01
Structure-based drug design was utilized to develop novel, 1-hydroxy-2-naphthoate-based small-molecule inhibitors of Mcl-1. Ligand design was driven by exploiting a salt bridge with R263 and interactions with the p2 and p3 pockets of the protein. Significantly, target molecules were accessed in just two synthetic steps, suggesting further optimization will require minimal synthetic effort. Molecular modeling using the Site-Identification by Ligand Competitive Saturation (SILCS) approach was used to qualitatively direct ligand design as well as develop quantitative models for inhibitor binding affinity to Mcl-1 and the Bcl-2 relative Bcl-xL as well as for the specificity of binding to the two proteins. Results indicated hydrophobic interactions with the p2 pockets dominate the affinity of the most favourable binding ligand (3bl: Ki = 31 nM). Compounds were up to 20-fold selective for Mcl-1 over Bcl-xL. Selectivity of the inhibitors was driven by interactions with the deeper p2 pocket in Mcl-1 versus Bcl-xL. The SILCS-based SAR of the present compounds represents the foundation for the development of Mcl-1 specific inhibitors with the potential to treat a wide range of solid tumours and hematological cancers, including acute myeloid leukaemia. PMID:26985630
NASA Astrophysics Data System (ADS)
Hunter, Jason M.; Maier, Holger R.; Gibbs, Matthew S.; Foale, Eloise R.; Grosvenor, Naomi A.; Harders, Nathan P.; Kikuchi-Miller, Tahali C.
2018-05-01
Salinity modelling in river systems is complicated by a number of processes, including in-stream salt transport and various mechanisms of saline accession that vary dynamically as a function of water level and flow, often at different temporal scales. Traditionally, salinity models in rivers have either been process- or data-driven. The primary problem with process-based models is that in many instances, not all of the underlying processes are fully understood or able to be represented mathematically. There are also often insufficient historical data to support model development. The major limitation of data-driven models, such as artificial neural networks (ANNs) in comparison, is that they provide limited system understanding and are generally not able to be used to inform management decisions targeting specific processes, as different processes are generally modelled implicitly. In order to overcome these limitations, a generic framework for developing hybrid process and data-driven models of salinity in river systems is introduced and applied in this paper. As part of the approach, the most suitable sub-models are developed for each sub-process affecting salinity at the location of interest based on consideration of model purpose, the degree of process understanding and data availability, which are then combined to form the hybrid model. The approach is applied to a 46 km reach of the Murray River in South Australia, which is affected by high levels of salinity. In this reach, the major processes affecting salinity include in-stream salt transport, accession of saline groundwater along the length of the reach and the flushing of three waterbodies in the floodplain during overbank flows of various magnitudes. 
Based on trade-offs between the degree of process understanding and data availability, a process-driven model is developed for in-stream salt transport, an ANN model is used to model saline groundwater accession and three linear regression models are used to account for the flushing of the different floodplain storages. The resulting hybrid model performs very well on approximately 3 years of daily validation data, with a Nash-Sutcliffe efficiency (NSE) of 0.89 and a root mean squared error (RMSE) of 12.62 mg L-1 (over a range from approximately 50 to 250 mg L-1). Each component of the hybrid model results in noticeable improvements in model performance corresponding to the range of flows for which they are developed. The predictive performance of the hybrid model is significantly better than that of a benchmark process-driven model (NSE = -0.14, RMSE = 41.10 mg L-1, Gbench index = 0.90) and slightly better than that of a benchmark data-driven (ANN) model (NSE = 0.83, RMSE = 15.93 mg L-1, Gbench index = 0.36). Apart from improved predictive performance, the hybrid model also has advantages over the ANN benchmark model in terms of increased capacity for improving system understanding and greater ability to support management decisions.
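The skill metrics reported above can be computed from observed and simulated series with their standard definitions; the Gbench index is taken here as the benchmark-relative efficiency (an assumption consistent with the reported usage).

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus error variance relative to
    the variance of the observations about their mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rmse(obs, sim):
    """Root mean squared error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def gbench(obs, sim, bench):
    """Benchmark efficiency: skill of `sim` relative to a benchmark
    model's predictions rather than the observed mean."""
    obs, sim, bench = (np.asarray(a, float) for a in (obs, sim, bench))
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - bench) ** 2)
```

NSE = 1 indicates a perfect fit and NSE = 0 a model no better than the observed mean, which is why the benchmark process-driven model's NSE of -0.14 signals performance worse than simply predicting the mean salinity.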
Distillation and Air Stripping Designs for the Lunar Surface
NASA Technical Reports Server (NTRS)
Boul, Peter J.; Lange, Kevin E.; Conger, Bruce; Anderson, Molly
2009-01-01
Air stripping and distillation are two different gravity-based methods which may be applied to the purification of wastewater on a lunar base. These gravity-based solutions to water processing are robust physical separation techniques whose simplicity in design and operation gives them advantages over many other techniques. The two techniques can be used in conjunction with each other to obtain high-purity water. The components and feed compositions for modeling wastewater streams are presented in conjunction with the Aspen property system for traditional stage distillation models and air stripping models. While the individual components of each waste stream will vary naturally within certain bounds, an analog model for wastewater processing is suggested based on typical concentration ranges for these components. Target purity levels for the recycled water are determined for each individual component based on NASA's required maximum contaminant levels for potable water. Distillation processes are modeled separately and in tandem with air stripping to demonstrate the potential effectiveness and utility of these methods in recycling wastewater on the Moon. Optimum parameters such as reflux ratio, feed stage location, and processing rates are determined with respect to the power consumption of the process. Multistage distillation is evaluated for components in wastewater to determine the minimum number of stages necessary for each of 65 components in mixed humidity condensate and urine wastewater streams. Components of the wastewater streams are ranked by Henry's law constant, and the suitability of air stripping for component removal is evaluated. Scaling factors for distillation and air stripping columns are presented to account for the difference in the lunar gravitational environment.
Commercially available distillation and air stripping units which are considered suitable for Exploration Life Support are presented. The advantages to the various designs are summarized with respect to water purity levels, power consumption, and processing rates.
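The minimum-stage evaluation mentioned above is commonly done with the Fenske shortcut equation, which gives the minimum number of theoretical stages at total reflux for a binary split. This is a standard shortcut method offered as an illustration, not the Aspen stage models used in the paper.

```python
import math

def fenske_min_stages(x_dist, x_bot, alpha):
    """Fenske equation: minimum number of theoretical stages (at total
    reflux) to take the light component from bottoms mole fraction
    x_bot up to distillate mole fraction x_dist, for a constant
    relative volatility alpha."""
    ratio = (x_dist / (1.0 - x_dist)) * ((1.0 - x_bot) / x_bot)
    return math.log(ratio) / math.log(alpha)
```

For instance, a 95%/5% split at a relative volatility of 2.5 needs roughly six to seven ideal stages; components with large Henry's law constants may instead be candidates for air stripping rather than distillation.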
Shao, Wei; Liu, Mingxia; Zhang, Daoqiang
2016-01-01
The systematic study of subcellular location pattern is very important for fully characterizing the human proteome. Nowadays, with the great advances in automated microscopic imaging, accurate bioimage-based classification methods to predict protein subcellular locations are highly desired. All existing models were constructed on the independent parallel hypothesis, where the cellular component classes are positioned independently in a multi-class classification engine. The important structural information of cellular compartments is missed. To deal with this problem for developing more accurate models, we proposed a novel cell structure-driven classifier construction approach (SC-PSorter) by employing the prior biological structural information in the learning model. Specifically, the structural relationship among the cellular components is reflected by a new codeword matrix under the error correcting output coding framework. Then, we construct multiple SC-PSorter-based classifiers corresponding to the columns of the error correcting output coding codeword matrix using a multi-kernel support vector machine classification approach. Finally, we perform the classifier ensemble by combining those multiple SC-PSorter-based classifiers via majority voting. We evaluate our method on a collection of 1636 immunohistochemistry images from the Human Protein Atlas database. The experimental results show that our method achieves an overall accuracy of 89.0%, which is 6.4% higher than the state-of-the-art method. The dataset and code can be downloaded from https://github.com/shaoweinuaa/. dqzhang@nuaa.edu.cn Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
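The error-correcting output coding (ECOC) machinery underlying SC-PSorter can be sketched as Hamming-distance decoding: each class is assigned a binary codeword, one binary classifier is trained per codeword column, and a sample is assigned to the class whose codeword is nearest to the vector of classifier outputs. The codeword matrix below is a made-up example, not the structure-derived matrix of the paper.

```python
import numpy as np

def ecoc_decode(bit_predictions, codewords):
    """Decode ECOC outputs: `bit_predictions` is (n_samples, n_bits) of
    binary classifier outputs, `codewords` is (n_classes, n_bits).
    Returns the index of the minimum-Hamming-distance class per sample."""
    preds = np.asarray(bit_predictions)
    codes = np.asarray(codewords)
    dists = np.sum(preds[:, None, :] != codes[None, :, :], axis=2)
    return np.argmin(dists, axis=1)
```

Because the codewords are separated in Hamming distance, a few misfiring binary classifiers can be corrected, and encoding class relationships into the codeword matrix is exactly where prior structural knowledge enters.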
Design of smart sensing components for volcano monitoring
Xu, M.; Song, W.-Z.; Huang, R.; Peng, Y.; Shirazi, B.; LaHusen, R.; Kiely, A.; Peterson, N.; Ma, A.; Anusuya-Rangappa, L.; Miceli, M.; McBride, D.
2009-01-01
In a volcano monitoring application, various geophysical and geochemical sensors generate continuous high-fidelity data, and there is a compelling need for real-time raw data for volcano eruption prediction research. This requires the network to support network-synchronized sampling, online configurable sensing and situation awareness, which pose significant challenges for sensing component design. Ideally, resource usage should be driven by the environment and node situations, and data quality optimized under resource constraints. In this paper, we present our smart sensing component design, including hybrid time synchronization, configurable sensing, and situation awareness. Both design details and evaluation results are presented to show their efficiency. Although the presented design is for a volcano monitoring application, its design philosophy and framework can also apply to other similar applications and platforms. © 2009 Elsevier B.V.
NASA Astrophysics Data System (ADS)
Bonne, François; Alamir, Mazen; Hoa, Christine; Bonnay, Patrick; Bon-Mardion, Michel; Monteiro, Lionel
2015-12-01
In this article, we present a new Simulink library of cryogenic components (such as valve, phase separator, mixer, heat exchanger...) that can be assembled to generate model-based control schemes. Every component is described by its algebraic or differential equations and can be assembled with others to build the dynamical model of a complete refrigerator or of a subpart of it. The obtained model can be used to automatically design advanced model-based control schemes. It can also be used to design a model-based PI controller. Advanced control schemes aim to replace classical experience-based approaches usually built from many independent PI controllers. This is particularly useful where cryoplants are subjected to large pulsed thermal loads, as expected in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). The paper gives the example of the generation of the dynamical model of the 400W@1.8K refrigerator and shows how to build a constrained model predictive controller for it. Experimental results based on this scheme will be given. This work is supported by the French national research agency (ANR) through the ANR-13-SEED-0005 CRYOGREEN program.
Configurable product design considering the transition of multi-hierarchical models
NASA Astrophysics Data System (ADS)
Ren, Bin; Qiu, Lemiao; Zhang, Shuyou; Tan, Jianrong; Cheng, Jin
2013-03-01
Current research on configurable product design mainly focuses on how to convert a predefined set of components into a valid set of product structures. As the scale and complexity of configurable products increase, the interdependencies between customer demands and product structures grow as well. As a result, existing product structures fail to satisfy individual customer requirements, and product variants are needed. This paper aims to build a bridge between customer demands and product structures so that demand-driven, fast-response design becomes feasible. First, multi-hierarchical models of configurable product design are established, consisting of a customer demand model, a technical requirement model and a product structure model. Then, the transition among these three models is solved with the fuzzy analytic hierarchy process (FAHP) and a multi-level matching algorithm. Finally, the structure that best matches the customer demands is obtained by computing the Euclidean distance and similarity of candidate cases. In practice, the configuration design of a clamping unit of an injection molding machine successfully performs an optimal search for product variants with reasonable satisfaction of individual customer demands. The proposed method can automatically generate a configuration design with better alternatives for each product structure, and shorten the time needed to find the configuration of a product.
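The final matching step, selecting the product structure nearest to a weighted customer-demand vector, can be sketched as follows. The demand vector, candidate structures, and the 1/(1+d) similarity form are illustrative assumptions, not the paper's exact formulation:

```python
import math

def similarity(demand, candidate):
    """Similarity between a normalized customer-demand vector and a
    candidate product-structure vector: 1 / (1 + Euclidean distance)."""
    dist = math.sqrt(sum((d - c) ** 2 for d, c in zip(demand, candidate)))
    return 1.0 / (1.0 + dist)

def best_configuration(demand, candidates):
    """Return the (name, vector) candidate closest to the demand vector."""
    return max(candidates, key=lambda name_vec: similarity(demand, name_vec[1]))

# Hypothetical FAHP-weighted demand vector and candidate structures
demand = [0.8, 0.4, 0.6]
candidates = [("structure_A", [0.7, 0.5, 0.6]),
              ("structure_B", [0.2, 0.9, 0.1])]
print(best_configuration(demand, candidates)[0])  # structure_A
```

In the paper's workflow the demand weights would come from the FAHP step rather than being fixed by hand.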
Problem Based Learning in Design and Technology Education Supported by Hypermedia-Based Environments
ERIC Educational Resources Information Center
Page, Tom; Lehtonen, Miika
2006-01-01
Audio-visual advances in virtual reality (VR) technology have given rise to innovative new ways to teach and learn. However, so far teaching and learning processes have been technologically driven as opposed to pedagogically led. This paper identifies the development of a pedagogical model and its application for teaching, studying and learning…
Designing an optimal software intensive system acquisition: A game theoretic approach
NASA Astrophysics Data System (ADS)
Buettner, Douglas John
The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of a strong tendency by contractors to skip or severely reduce software design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality-, schedule- and cost-driven strategies demonstrates that the higher-cost, higher-effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. Game-theoretic reasoning, based on the case study evidence and Austin's agency model, is used to describe why schedule-driven engineers cut corners on inspections and unit testing. Game theory concepts are then used to argue that the source of the problem, and hence the solution to developers cutting corners on quality in schedule-driven system acquisitions, ultimately lies with the government. The game theory arguments also suggest that a multi-player dynamic Nash bargaining game provides a solution to the observed lack-of-quality game between the government (the acquirer) and "large-corporation" software developers.
A note argues that this multi-player dynamic Nash bargaining game also provides a solution to Freeman Dyson's problem of finding a way to label systems as good or bad.
A Monthly Water-Balance Model Driven By a Graphical User Interface
McCabe, Gregory J.; Markstrom, Steven L.
2007-01-01
This report describes a monthly water-balance model driven by a graphical user interface, referred to as the Thornthwaite monthly water-balance program. Computations of monthly water-balance components of the hydrologic cycle are made for a specified location. The program can be used as a research tool, an assessment tool, and a tool for classroom instruction.
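The Thornthwaite method at the core of this program computes monthly potential evapotranspiration from mean monthly air temperature alone. A minimal sketch of that calculation, omitting the day-length/latitude correction the full program applies:

```python
def thornthwaite_pet(monthly_temps_c):
    """Monthly potential evapotranspiration (mm) after Thornthwaite (1948)
    from mean monthly air temperature, without the day-length correction."""
    # Annual heat index, accumulated over months with above-freezing means
    I = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    if I == 0:
        return [0.0] * len(monthly_temps_c)
    # Empirical exponent fitted by Thornthwaite as a cubic in the heat index
    a = 6.75e-7 * I**3 - 7.71e-5 * I**2 + 1.792e-2 * I + 0.49239
    return [16.0 * (10.0 * t / I) ** a if t > 0 else 0.0
            for t in monthly_temps_c]

# Illustrative mid-latitude temperature series (deg C), January-December
temps = [-2.0, 0.5, 5.0, 10.0, 15.0, 20.0, 22.0, 21.0, 16.0, 10.0, 4.0, 0.0]
pet = thornthwaite_pet(temps)
```

The water-balance components (soil storage, actual evapotranspiration, runoff) are then accumulated month by month from PET and precipitation.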
Ball driven type MEMS SAD for artillery fuse
NASA Astrophysics Data System (ADS)
Seok, Jin Oh; Jeong, Ji-hun; Eom, Junseong; Lee, Seung S.; Lee, Chun Jae; Ryu, Sung Moon; Oh, Jong Soo
2017-01-01
The SAD (safety and arming device) is an indispensable fuse component that ensures safe and reliable performance during the use of ammunition. Because the application of electronic devices for smart munitions is increasing, miniaturization of the SAD has become one of the key issues for next-generation artillery fuses. Based on MEMS technology, various types of miniaturized SADs have been proposed and fabricated. However, none of them have been reported to have been used in actual munitions due to their lack of high impact endurance and complicated explosive train arrangements. In this research, a new MEMS SAD using a ball driven mechanism, is successfully demonstrated based on a UV LIGA (lithography, electroplating and molding) process. Unlike other MEMS SADs, both high impact endurance and simple structure were achieved by using a ball driven mechanism. The simple structural design also simplified the fabrication process and increased the processing yield. The ball driven type MEMS SAD performed successfully under the desired safe and arming conditions of a spin test and showed fine agreement with the FEM simulation result, conducted prior to its fabrication. A field test was also performed with a grenade launcher to evaluate the SAD performance in the firing environment. All 30 of the grenade samples equipped with the proposed MEMS SAD operated successfully under the high-G setback condition.
NASA Technical Reports Server (NTRS)
Cramer, Nick; Swei, Sean Shan-Min; Cheung, Kenny; Teodorescu, Mircea
2015-01-01
This paper presents the modeling and control of an aerostructure built from lattice-based cellular materials/components. The proposed aerostructure concept leverages a building-block strategy for lattice-based components, which provides great adaptability to varying flight scenarios, the needs of which are essential for in-flight wing shaping control. A decentralized structural control design is proposed that utilizes the discrete-time lumped-mass transfer matrix method (DT-LM-TMM). The objective is to develop an effective reduced order model through DT-LM-TMM that can be used to design a decentralized controller for the structural control of a wing. The proposed approach shows that, as far as the performance of the overall structural system is concerned, the reduced order model can be as effective as the full order model in designing an optimal stabilizing controller.
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.
2011-01-01
The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.
NASA Astrophysics Data System (ADS)
Thomas, R.; Prentice, I. C. C.; Graven, H. D.
2016-12-01
A simple model for gross primary production (GPP), the P-model, is used to analyse the recent increase in the amplitude of the seasonal cycle of CO2 (ASC) at high northern latitudes. Current terrestrial biosphere models and Earth System Models generally underestimate the observed increase in ASC since 1960. The increased ASC is primarily driven by an increase in net primary productivity (NPP), rather than respiration, so models are likely underestimating increases in NPP. In a recent study of process-based terrestrial biosphere models from the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP), we showed that the concept of light-use efficiency can be used to separate modelled NPP changes into structural and physiological components (Thomas et al, 2016). The structural component (leaf area) can be tested against observations of greening, while the physiological component (light-use efficiency) is an emergent model property. The analysis suggests that current models are capturing the increases in vegetation greenness, but underestimating the increases in light-use efficiency and NPP. We test this hypothesis using the P-model, which explicitly uses greenness data and includes the effects of rising CO2 and climate change. In the P-model, GPP is calculated using only a few equations, which are based on a strong empirical and theoretical framework, and vegetation is not separated into plant functional types. The model is driven by observed greenness, CO2, temperature and vapour pressure, and modelled photosynthetically active radiation at a monthly time-step. Photosynthetic assimilation is based on two key assumptions: the co-limitation hypothesis (electron transport- and Rubisco-limited photosynthetic rates are equal), and the least-cost hypothesis (optimal ci:ca ratio), and is limited by modelled soil moisture. 
We present simulated changes in GPP over the satellite period (1982-2011) in the P-model, and assess the associated changes in light-use efficiency and ASC. Our results have implications for the attribution of drivers of ecosystem change and the formulation of prognostic and diagnostic biosphere models. Thomas, R. T. et al. 2016, CO2 and greening observations indicate increasing light-use efficiency in Northern terrestrial ecosystems, Geophys Res Lett, in review.
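The light-use-efficiency decomposition underlying this analysis can be written out directly: GPP = LUE × fAPAR × PAR, so relative changes in GPP factor into a structural (greenness) part and a physiological (LUE) part. The numbers below are illustrative, not from the study:

```python
def gpp_lue(fapar, par, lue):
    """Gross primary production via the light-use-efficiency identity
    GPP = LUE * fAPAR * PAR, the decomposition used to separate the
    structural term (fAPAR) from the physiological term (LUE)."""
    return lue * fapar * par

# With PAR unchanged, GPP2/GPP1 = (fAPAR2/fAPAR1) * (LUE2/LUE1),
# so the physiological contribution is recoverable from observed
# GPP change and observed greening.
g1 = gpp_lue(0.50, 100.0, 1.8)
g2 = gpp_lue(0.55, 100.0, 2.0)
structural = 0.55 / 0.50              # greening contribution
physiological = (g2 / g1) / structural  # inferred LUE contribution
```

This is the sense in which greening observations let the model diagnose an LUE trend as the residual driver of the GPP (and hence ASC) increase.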
Solid Modeling of Crew Exploration Vehicle Structure Concepts for Mass Optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivek
2006-01-01
Parametric solid and surface models of the crew exploration vehicle (CEV) command module (CM) structure concepts are developed for rapid finite element analysis, structural sizing, and estimation of optimal structural mass. The effects of the structural configuration and critical design parameters on the stress distribution are visualized and examined to arrive at an efficient design. The CM structural components consist of the outer heat shield, inner pressurized crew cabin, ring bulkhead and spars. For this study, only the internal cabin pressure load case is considered. Component stress, deflection, margins of safety and mass are used as design goodness criteria. The design scenario is explored by changing the component thickness parameters and materials until an acceptable design is achieved. Properties of aluminum alloy, titanium alloy and an advanced composite material are considered for the stress analysis, and the results are compared as part of lessons learned and to build up a structural component sizing knowledge base for future CEV technology support. This independent structural analysis and design-scenario-based optimization process may also facilitate better CM structural definition and rapid prototyping.
Computational Model for Ethnographically Informed Systems Design
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; James, Anne; Shah, Nazaraf; Terken, Jacques
This paper presents a computational model for ethnographically informed systems design that can support complex and distributed cooperative activities. The model is based on an ethnographic framework consisting of three important dimensions (distributed coordination, awareness of work, and plans and procedures) and the BDI (Belief, Desire and Intention) model of intelligent agents. The ethnographic framework is used to conduct ethnographic analysis and to organise ethnographically driven information into the three dimensions, whereas the BDI model allows such information to be mapped onto the underlying concepts of multi-agent systems. The advantage of this model is that it is built upon an adaptation of existing, mature and well-understood techniques. Through this model, we also address the cognitive aspects of systems design.
The Need for Software Architecture Evaluation in the Acquisition of Software-Intensive Systems
2014-01-01
NASA Astrophysics Data System (ADS)
Qiu, Lemiao; Liu, Xiaojian; Zhang, Shuyou; Sun, Liangfeng
2014-05-01
Current research on configurable product disassemblability focuses on disassemblability evaluation and disassembly sequence planning; little work has been done on quantitative analysis of configurable product disassemblability. A disassemblability modeling technology for configurable products, based on a disassembly-constraint-relation weighted design structure matrix (DSM), is proposed. Major factors affecting the disassemblability of configurable products are analyzed, and the degrees of disassembly between components are obtained by calculating disassembly entropies such as joint type, joint quantity, disassembly path, disassembly accessibility and material compatibility. The disassembly-constraint-relation weighted DSM of the configurable product is constructed, and configuration modules are formed by matrix decomposition and tearing operations. The disassembly constraint relations within configuration modules are strongly coupled, while those between modules are weakly coupled, and the disassemblability configuration model is constructed on the basis of the configuration modules. Finally, taking a hydraulic forging press as an example, the decomposed weakly coupled components are used as stand-alone configuration modules, strongly coupled components are aggregated into configuration modules, and the disassembly sequence of components inside configuration modules is optimized by tearing operations. A disassemblability configuration model of the hydraulic forging press is constructed. Through this disassemblability modeling technology for product configuration design, the disassembly properties of configurable products in maintenance, recycling and reuse are optimized.
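The module-forming step, tearing weak couplings out of the weighted DSM and keeping strongly coupled groups together, can be sketched as below. The entropy weights and threshold are hypothetical placeholders for the paper's computed disassembly entropies:

```python
def configuration_modules(dsm, threshold):
    """Tear couplings below `threshold` in a symmetric weighted DSM and
    return the remaining strongly coupled connected components as modules."""
    n = len(dsm)
    seen, modules = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, module = [start], []
        while stack:                       # depth-first search over strong ties
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            module.append(i)
            stack.extend(j for j in range(n)
                         if j not in seen and dsm[i][j] >= threshold)
        modules.append(sorted(module))
    return modules

# Hypothetical disassembly-entropy weights between four components
dsm = [[0, 5, 0, 0],
       [5, 0, 1, 0],
       [0, 1, 0, 4],
       [0, 0, 4, 0]]
print(configuration_modules(dsm, threshold=3))  # [[0, 1], [2, 3]]
```

Weakly coupled components then fall out as singleton modules, matching the paper's treatment of the hydraulic forging press example.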
Corredor, Iván; Bernardos, Ana M.; Iglesias, Josué; Casar, José R.
2012-01-01
Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as Internet of Things (IoT) and Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who are applying this Internet-oriented approach need to have solid understanding about specific platforms and web technologies. In order to alleviate this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym. PMID:23012544
Avco Lycoming QCGAT program design cycle, demonstrated performance and emissions
NASA Technical Reports Server (NTRS)
Fogel, P.; Koschier, A.
1980-01-01
A high bypass ratio, twin spool turbofan engine of modular design which incorporates a front fan module driven by a modified LTS101 core engine was tested. The engine is housed in a nacelle incorporating full length fan ducting with sound treatment in both the inlet and fan discharge flow paths. Design goals of components and results of component tests are presented together with full engine test results. The rationale behind the combustor design selected for the engine is presented as well as the emissions test results. Total system (engine and nacelle) test results are included.
NASA Technical Reports Server (NTRS)
Dash, S. M.; Sinha, N.; Wolf, D. E.; York, B. J.
1986-01-01
An overview of computational models developed for the complete, design-oriented analysis of a scramjet propulsion system is provided. The modular approach taken involves the use of different PNS models to analyze the individual propulsion system components. The external compression and internal inlet flowfields are analyzed by the SCRAMP and SCRINT components discussed in Part II of this paper. The combustor is analyzed by the SCORCH code which is based upon SPLITP PNS pressure-split methodology formulated by Dash and Sinha. The nozzle is analyzed by the SCHNOZ code which is based upon SCIPVIS PNS shock-capturing methodology formulated by Dash and Wolf. The current status of these models, previous developments leading to this status, and progress towards future hybrid and 3D versions are discussed in this paper.
NASA Technical Reports Server (NTRS)
Welch, Gerard E.; Hathaway, Michael D.; Skoch, Gary J.; Snyder, Christopher A.
2012-01-01
Technical challenges of compressors for future rotorcraft engines are driven by engine-level and component-level requirements. Cycle analyses are used to highlight the engine-level challenges for 3000, 7500, and 12000 SHP-class engines, which include retention of performance and stability margin at low corrected flows, and matching compressor type, axial-flow or centrifugal, to the low corrected flows and high temperatures in the aft stages. At the component level: power-to-weight and efficiency requirements impel designs with lower inherent aerodynamic stability margin; and, optimum engine overall pressure ratios lead to small blade heights and the associated challenges of scale, particularly increased clearance-to-span ratios. The technical challenges associated with the aerodynamics of low corrected flows and stability management impel the compressor aero research and development efforts reviewed herein. These activities include development of simple models for clearance sensitivities to improve cycle calculations, full-annulus, unsteady Navier-Stokes simulations used to elucidate stall, its inception, and the physics of stall control by discrete tip-injection, development of an actuator-duct-based model for rapid simulation of nonaxisymmetric flow fields (e.g., due to inlet circumferential distortion), advanced centrifugal compressor stage development and experimentation, and application of stall control in a T700 engine.
Design, fabrication and test of a trace contaminant control system
NASA Technical Reports Server (NTRS)
1975-01-01
A trace contaminant control system was designed, fabricated, and evaluated to determine suitability of the system concept to future manned spacecraft. Two different models were considered. The load model initially required by the contract was based on the Space Station Prototype (SSP) general specifications SVSK HS4655, reflecting a change from a 9 man crew to a 6 man crew of the model developed in previous phases of this effort. Trade studies and a system preliminary design were accomplished based on this contaminant load, including computer analyses to define the optimum system configuration in terms of component arrangements, flow rates and component sizing. At the completion of the preliminary design effort a revised contaminant load model was developed for the SSP. Additional analyses were then conducted to define the impact of this new contaminant load model on the system configuration. A full scale foam-core mock-up with the appropriate SSP system interfaces was also fabricated.
A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations
Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang
2008-01-01
Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. Performances of the various FPGA design approaches are compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory economic solution as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R
2014-04-01
Componential theories assume that emotion episodes consist of emergent and dynamic response changes to relevant events in different components, such as appraisal, physiology, motivation, expression, and subjective feeling. In particular, Scherer's Component Process Model hypothesizes that subjective feeling emerges when the synchronization (or coherence) of appraisal-driven changes between emotion components has reached a critical threshold. We examined the prerequisite of this synchronization hypothesis for appraisal-driven response changes in facial expression. The appraisal process was manipulated by using feedback stimuli, presented in a gambling task. Participants' responses to the feedback were investigated in concurrently recorded brain activity related to appraisal (event-related potentials, ERP) and facial muscle activity (electromyography, EMG). Using principal component analysis, the prediction of appraisal-driven response changes in facial EMG was examined. Results support this prediction: early cognitive processes (related to the feedback-related negativity) seem to primarily affect the upper face, whereas processes that modulate P300 amplitudes tend to predominantly drive cheek region responses. Copyright © 2013 Elsevier B.V. All rights reserved.
Modelling of the mercury loss in fluorescent lamps under the influence of metal oxide coatings
NASA Astrophysics Data System (ADS)
Santos Abreu, A.; Mayer, J.; Lenk, D.; Horn, S.; Konrad, A.; Tidecks, R.
2016-11-01
The mercury transport and loss mechanisms in the metal oxide coatings of mercury low pressure discharge fluorescent lamps have been investigated. An existing model based on a ballistic process is discussed in the context of experimental mercury loss data. Two different approaches to the modeling of the mercury loss have been developed. The first one is based on mercury transition rates between the plasma, the coating, and the glass without specifying the underlying physical processes. The second one is based on a transport process driven by diffusion and a binding process of mercury reacting to mercury oxide inside the layers. Moreover, we extended the diffusion-based model to handle multi-component coatings. All approaches are applied to describe mercury loss experiments under the influence of an Al2O3 coating.
A meteorologically-driven yield reduction model for spring and winter wheat
NASA Technical Reports Server (NTRS)
Ravet, F. W.; Cremins, W. J.; Taylor, T. W.; Ashburn, P.; Smika, D.; Aaronson, A. (Principal Investigator)
1983-01-01
A yield reduction model for spring and winter wheat was developed for large-area crop condition assessment. Reductions are expressed in percentage from a base yield and are calculated on a daily basis. The algorithm contains two integral components: a two-layer soil water budget model and a crop calendar routine. Yield reductions associated with hot, dry winds (Sukhovey) and soil moisture stress are determined. Input variables include evapotranspiration, maximum temperature and precipitation; subsequently crop-stage, available water holding percentage and stress duration are evaluated. No specific base yield is required and may be selected by the user; however, it may be generally characterized as the maximum likely to be produced commercially at a location.
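A minimal sketch of the kind of two-layer daily water bookkeeping the algorithm performs. The layer capacities and percolation rule are hypothetical; the abstract does not specify the model's exact internal rules:

```python
def daily_water_budget(precip, pet, upper, lower,
                       upper_cap=50.0, lower_cap=150.0):
    """One daily step of a simple two-layer soil water budget (mm).
    Returns updated (upper, lower) storages and actual evapotranspiration."""
    upper += precip
    # Upper-layer excess percolates to the lower layer; lower excess is lost
    if upper > upper_cap:
        lower = min(lower + (upper - upper_cap), lower_cap)
        upper = upper_cap
    # Evaporative demand is met first from the upper layer, then the lower
    aet = min(pet, upper)
    upper -= aet
    from_lower = min(pet - aet, lower)
    lower -= from_lower
    aet += from_lower
    return upper, lower, aet
```

A daily stress indicator for the yield-reduction step could then be formed as the ratio AET/PET, flagging days when demand outstrips available soil water.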
Worklist handling in workflow-enabled radiological application systems
NASA Astrophysics Data System (ADS)
Wendler, Thomas; Meetz, Kirsten; Schmidt, Joachim; von Berg, Jens
2000-05-01
For the next generation of integrated information systems for health care, more emphasis has to be put on systems that, by design, support cost reduction, increased efficiency, and improved quality of service. A substantial contribution will come from the modeling, optimization, automation and enactment of processes in health care institutions. One of the perceived key success factors for the system integration of processes will be the application of workflow management, with workflow management systems as key technology components. In this paper we address workflow management in radiology. We focus on an important aspect of workflow management: the generation and handling of worklists, which automatically provide workflow participants with work items reflecting the tasks to be performed. The display of worklists and the functions associated with work items are the visible part for the end users of an information system using a workflow management approach. Appropriate worklist design and implementation will influence the user friendliness of a system and will largely determine work efficiency. Technically, current imaging department information system environments (modality-PACS-RIS installations) have taken a data-driven approach: worklists, if present at all, are generated from filtered views on application databases. In a future workflow-based approach, worklists will be generated by autonomous workflow services based on explicit process models and organizational models. This process-oriented approach will provide an integral view of entire health care processes or sub-processes. The paper describes the basic mechanisms of this approach and summarizes its benefits.
Design of the De-Orbit Sail Boom Deployment Unit
NASA Astrophysics Data System (ADS)
Meyer, Sebastian; Hillebrandt, Martin; Straubel, Marco; Huhne, Christian
2014-06-01
The design of the De-Orbit Sail boom deployment unit is strongly driven by volume constraints given by the cubesat container. Four CFRP (carbon fiber reinforced polymer) booms [4] with a double-omega cross-section and a length of 3.6 m are reeled onto one spool in the center of the unit. The deployment of the four booms is controlled by an electric motor acting on the boom spool. Because of the volume limitation imposed by the dimensions of the cubesat deployer, the deployment unit has little room for the mechanism components. With the aim of achieving a robust design, the deployment concept of the unit changed greatly during the development process. The history of the design as well as the mechanisms are described. Additionally, the results of the flight model testing are presented.
Díaz-Rodríguez, Natalia; Cadahía, Olmo León; Cuéllar, Manuel Pegalajar; Lilius, Johan; Calvo-Flores, Miguel Delgado
2014-01-01
Human activity recognition is a key task in ambient intelligence applications to achieve proper ambient assisted living. There has been remarkable progress in this domain, but some challenges still remain to obtain robust methods. Our goal in this work is to provide a system that allows the modeling and recognition of a set of complex activities in real life scenarios involving interaction with the environment. The proposed framework is a hybrid model that comprises two main modules: a low level sub-activity recognizer, based on data-driven methods, and a high-level activity recognizer, implemented with a fuzzy ontology to include the semantic interpretation of actions performed by users. The fuzzy ontology is fed by the sub-activities recognized by the low level data-driven component and provides fuzzy ontological reasoning to recognize both the activities and their influence in the environment with semantics. An additional benefit of the approach is the ability to handle vagueness and uncertainty in the knowledge-based module, which substantially outperforms the treatment of incomplete and/or imprecise data with respect to classic crisp ontologies. We validate these advantages with the public CAD-120 dataset (Cornell Activity Dataset), achieving an accuracy of 90.1% and 91.07% for low-level and high-level activities, respectively. This entails an improvement over fully data-driven or ontology-based approaches. PMID:25268914
NASA Astrophysics Data System (ADS)
Prakashan, A.; Mukunda, H. S.; Samuel, S. D.; Colaco, J. C.
1992-11-01
This paper addresses the design and development of a four-degree-of-freedom industrial manipulator, with three linear axes in the positioning mechanism and one rotary axis in the orientation mechanism. The positioning mechanism joints are driven by dc servo motors fitted with incremental shaft encoders. The rotary joint of the orientation mechanism is driven by a stepping motor. The manipulator is controlled by an IBM 386 PC/AT. Microcomputer-based interface cards have been developed for independent joint control, and PID controllers for the dc motors have been designed. Kinematic modeling, dynamic modeling, and path planning have been carried out to generate the control sequence to accomplish a given task with reference to source and destination state constraints. This project was sponsored by the Department of Science and Technology, Government of India, New Delhi, and was executed in collaboration with M/s Larsen & Toubro Ltd, Mysore, India.
JAMS - a software platform for modular hydrological modelling
NASA Astrophysics Data System (ADS)
Kralisch, Sven; Fischer, Christian
2015-04-01
Current challenges of understanding and assessing the impacts of climate and land use changes on environmental systems call for ever-increasing integration of data and process knowledge in the corresponding simulation models. Software frameworks that allow for a seamless creation of integrated models from less complex components (domain models, process simulation routines) have therefore gained increasing attention during the last decade. JAMS is an Open-Source software framework that has been designed especially to cope with the challenges of eco-hydrological modelling. This is reflected by (i) its flexible approach to representing time and space, (ii) a strong separation of process simulation components from the declarative description of more complex models using a domain-specific XML, (iii) powerful analysis and visualization functions for spatial and temporal input and output data, and (iv) parameter optimization and uncertainty analysis functions commonly used in environmental modelling. Based on JAMS, different hydrological and nutrient-transport simulation models have been implemented and successfully applied in recent years. We will present the JAMS core concepts and give an overview of models, simulation components and support tools available for the framework. Sample applications will be used to underline the advantages of component-based model designs and to show how JAMS can be used to address the challenges of integrated hydrological modelling.
NASA Technical Reports Server (NTRS)
Krebs, R. P.
1971-01-01
The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.
Image-based 3D reconstruction and virtual environmental walk-through
NASA Astrophysics Data System (ADS)
Sun, Jifeng; Fang, Lixiong; Luo, Ying
2001-09-01
We present a 3D reconstruction method which combines geometry-based modeling, image-based modeling, and rendering techniques. The first component is an interactive geometry modeling method which recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms of walking through a virtual space, then design and implement a high-performance multi-thread wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.
NASA Technical Reports Server (NTRS)
Allan, Brian G.
2000-01-01
A reduced-order modeling approach to the Navier-Stokes equations is presented for the design of a distributed optimal feedback kernel. This approach is based on a Krylov subspace method where significant modes of the flow are captured in the model. This model is then used in an optimal feedback control design where sensing and actuation are performed on the entire flow field. This control design approach yields an optimal feedback kernel which provides insight into the placement of sensors and actuators in the flow field. As an evaluation of this approach, a two-dimensional shear layer and driven cavity flow are investigated.
A Global User-Driven Model for Tile Prefetching in Web Geographical Information Systems.
Pan, Shaoming; Chong, Yanwen; Zhang, Hang; Tan, Xicheng
2017-01-01
A web geographical information system is a typical service-intensive application. Tile prefetching and cache replacement can improve cache hit ratios by proactively fetching tiles from storage and replacing the appropriate tiles in the high-speed cache buffer without waiting for a client's requests, which reduces disk latency and improves system access performance. Most popular prefetching strategies consider only the relative tile popularities to predict which tile should be prefetched, or consider only a single individual user's access behavior to determine which neighbor tiles need to be prefetched. Some studies show that comprehensively considering all users' access behaviors and all tiles' relationships in the prediction process can achieve more significant improvements. Thus, this work proposes a new global user-driven model for tile prefetching and cache replacement. First, based on all users' access behaviors, an expression method for tile correlation is designed and implemented. Then, a conditional prefetching probability can be computed based on the proposed correlation expression model. Thus, tiles to be prefetched can be found by computing and comparing the conditional prefetching probabilities over the set of uncached tiles and, similarly, replacement tiles can be found in the cache buffer according to multi-step prefetching. Finally, experiments are provided comparing the proposed model with other global user-driven models, other single user-driven models, and other client-side prefetching strategies. The results show that the proposed model can achieve a prefetching hit rate approximately 10.6% to 110.5% higher than the compared methods.
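The abstract's conditional prefetching idea can be sketched as follows. The correlation counts, function names, and the simple row normalization below are illustrative assumptions for exposition, not the paper's actual algorithm:

```python
import numpy as np

# Hypothetical global tile-correlation counts: corr[i, j] is how often
# tile j was requested in the same session window as tile i, aggregated
# over ALL users (the "global user-driven" idea).
corr = np.array([
    [0, 8, 2, 0],
    [8, 0, 5, 1],
    [2, 5, 0, 6],
    [0, 1, 6, 0],
], dtype=float)

def conditional_prefetch_prob(corr, current_tile):
    """Estimate P(next = j | current = i) from global co-access counts."""
    row = corr[current_tile]
    total = row.sum()
    return row / total if total > 0 else row

def pick_prefetch(corr, current_tile, cached, k=1):
    """Choose the k uncached tiles with the highest conditional probability."""
    probs = conditional_prefetch_prob(corr, current_tile)
    candidates = [(p, j) for j, p in enumerate(probs) if j not in cached]
    candidates.sort(reverse=True)
    return [j for _, j in candidates[:k]]

print(pick_prefetch(corr, current_tile=0, cached={0}))  # tile 1 co-occurs most with tile 0
```

A real implementation would also decay old counts and bound the candidate set, but the core prediction step is this comparison of conditional probabilities over the uncached tiles.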
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors of the spatial domain. When CT projection suffers from serious data deficiency or various noises, obtaining reconstruction images that meet the quality requirement becomes difficult and challenging. To solve this problem, this paper developed an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data in the process of iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms using only a spatial domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm, applying the learning strategy, performs better than dual-domain algorithms without a learned regularization model.
NASA Astrophysics Data System (ADS)
El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis
2018-02-01
Model MACP for HE ver. 1 is a model that describes how to measure and monitor performance in higher education. Based on a review of the research related to the model, several parts of its components merit further development, so this research has four main objectives. The first objective is to differentiate the CSF (critical success factor) components of the previous model; the second is to explore the KPIs (key performance indicators) of the previous model; the third, building on the first two, is to design a new and more detailed model; and the fourth is to design a prototype application for performance measurement in higher education based on the new model. The method used is an explorative research method, with the application designed using the prototyping method. The results of this study are, first, a more detailed new model for measurement and monitoring of performance in higher education, obtained through differentiation and exploration of Model MACP for HE ver. 1; second, a dictionary of college performance measurement compiled by re-evaluating the existing indicators; and third, the design of a prototype application for performance measurement in higher education.
ERIC Educational Resources Information Center
Sengupta-Irving, Tesha; Enyedy, Noel
2015-01-01
This article investigates why students reported liking a student-driven learning design better than a highly guided design despite equivalent gains in knowledge assessments in both conditions. We created two learning designs based on the distinction in the literature between student-driven and teacher-led approaches. One teacher assigned each of…
NASA Astrophysics Data System (ADS)
van den Dool, G.
2017-11-01
This study (van den Dool, 2017) is a proof of concept for a global predictive wildfire model, in which the temporal-spatial characteristics of wildfires are placed in a Geographical Information System (GIS), and the risk analysis is based on data-driven fuzzy logic functions. The data sources used in this model are available as global datasets, but subdivided into three pilot areas: North America (California/Nevada), Europe (Spain), and Asia (Mongolia), and are downscaled to the highest resolution (3 arc seconds). The GIS is constructed around three themes: topography, fuel availability, and climate. From the topographical data, six derived sub-themes are created and converted to a fuzzy membership based on the catchment area statistics. The fuel availability score is a composite of four data layers: land cover, wood loads, biomass, and biovolumes. As input for the climatological sub-model, reanalysed daily-averaged weather data is used, which is accumulated over a global weekly time-window (to account for the uncertainty within the climatological model) and forms the temporal component of the model. The final product is a wildfire risk score (from 0 to 1) by week, representing the average wildfire risk in an area. To compute the potential wildfire risk, the sub-models are combined using a Multi-Criteria Approach, and the model results are validated against the area under the Receiver Operating Characteristic curve.
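The fuzzy scoring and multi-criteria combination described above might look roughly like this in code. The membership breakpoints, weights, and sub-model inputs are hypothetical placeholders, not the study's calibrated values:

```python
def fuzzy_membership(x, lo, hi):
    """Linear fuzzy membership: 0 below lo, 1 above hi, linear in between."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def wildfire_risk(slope_deg, fuel_score, dryness):
    """Combine sub-model scores into a 0..1 risk with a weighted mean,
    one simple multi-criteria aggregation (weights are illustrative)."""
    topo = fuzzy_membership(slope_deg, 0.0, 30.0)  # topography sub-theme
    weights = (0.3, 0.4, 0.3)                      # topo, fuel, climate
    scores = (topo, fuel_score, dryness)
    return sum(w * s for w, s in zip(weights, scores))

risk = wildfire_risk(slope_deg=15.0, fuel_score=0.8, dryness=0.9)
print(risk)  # a single weekly risk score in [0, 1]
```

The study's actual membership functions are derived from catchment area statistics rather than fixed breakpoints; the sketch only shows the shape of the computation.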
Wu, Xiao; Shen, Jiong; Li, Yiguo; Lee, Kwang Y
2014-05-01
This paper develops a novel data-driven fuzzy modeling strategy and predictive controller for a boiler-turbine unit using fuzzy clustering and subspace identification (SID) methods. To deal with the nonlinear behavior of the boiler-turbine unit, fuzzy clustering is used to provide an appropriate division of the operating region and develop the structure of the fuzzy model. Then, by combining the input data with the corresponding fuzzy membership functions, the SID method is extended to extract the local state-space model parameters. Owing to the advantages of both methods, the resulting fuzzy model can represent the boiler-turbine unit very closely, and a fuzzy model predictive controller is designed based on this model. As an alternative approach, a direct data-driven fuzzy predictive control is also developed following the same clustering and subspace methods, where intermediate subspace matrices developed during the identification procedure are utilized directly as the predictor. Simulation results show the advantages and effectiveness of the proposed approach. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
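As a rough illustration of the fuzzy-clustering step (dividing the operating region), here is a standard fuzzy c-means membership update in NumPy. The data points, cluster centers, and fuzzifier m = 2 are illustrative; the paper's actual method couples such memberships with subspace identification of local state-space models:

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """Fuzzy c-means membership update: u[k, i] in [0, 1], each row sums to 1.
    u[k, i] = 1 / sum_j (d_ki / d_kj)^(2/(m-1)) for sample k and cluster i."""
    # distances from each sample to each cluster center
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)                  # avoid division by zero
    power = 2.0 / (m - 1.0)
    ratio = d[:, :, None] / d[:, None, :]     # ratio[k, i, j] = d_ki / d_kj
    return 1.0 / (ratio ** power).sum(axis=2)

# two toy operating points near two "operating region" centers
X = np.array([[0.1, 0.0], [1.0, 1.1]])
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
u = fcm_memberships(X, centers)
```

Each operating point then contributes to every local model in proportion to its membership, which is how the clustering blends the local state-space models into one fuzzy model.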
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur in products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and it is therefore of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the detail necessary for an applicable knowledge base that can be used by designers in both new designs and redesigns.
High-level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record these data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high-level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
NASA Astrophysics Data System (ADS)
Guilyardi, E.
2003-04-01
The European Union's PRISM infrastructure project (PRogram for Integrated earth System Modelling) aims at designing a flexible environment to easily assemble and run Earth System Models (http://prism.enes.org). Europe's widely distributed modelling expertise is both a strength and a challenge. Recognizing this, the PRISM project aims at developing an efficient shared modelling software infrastructure for climate scientists, providing them with an opportunity for greater focus on scientific issues, including the necessary scientific diversity (models and approaches). The proposed PRISM system includes 1) the use - or definition - and promotion of scientific and technical standards to increase component modularity, 2) an end-to-end software environment (coupler, user interface, diagnostics) to launch, monitor and analyze complex Earth System Models built around the existing and future community models, 3) testing and quality standards to ensure HPC performance on a variety of platforms and 4) community-wide inputs and requirements capture in all stages of system specifications and design through user/developer meetings, workshops and thematic schools. This science-driven project, led by 22 institutes* and started on 1 December 2001, benefits from a unique gathering of scientific and technical expertise. More than 30 models (both global and regional) have expressed interest to be part of the PRISM system and 6 types of components have been identified: atmosphere, atmosphere chemistry, land surface, ocean, sea ice and ocean biochemistry. Progress and overall architecture design will be presented. * MPI-Met (Coordinator), KNMI (co-coordinator), MPI-M&D, Met Office, University of Reading, IPSL, Meteo-France, CERFACS, DMI, SMHI, NERSC, ETH Zurich, INGV, MPI-BGC, PIK, ECMWF, UCL-ASTR, NEC, FECIT, SGI, SUN, CCRLE
Data-Driven Instructional Leadership
ERIC Educational Resources Information Center
Blink, Rebecca
2006-01-01
With real-world examples from actual schools, this book illustrates how to nurture a culture of continuous improvement, meet the needs of individual students, foster an environment of high expectations, and meet the requirements of NCLB. Each component of the Data-Driven Instructional Leadership (DDIS) model represents several branches of…
Mechanics of Interrill Erosion with Wind-Driven Rain (WDR)
USDA-ARS?s Scientific Manuscript database
This article provides an evaluation analysis for the performance of the interrill component of the Water Erosion Prediction Project (WEPP) model for Wind-Driven Rain (WDR) events. The interrill delivery rates (Di) were collected in the wind tunnel rainfall simulator facility of the International Cen...
Circuit-based versus full-wave modelling of active microwave circuits
NASA Astrophysics Data System (ADS)
Bukvić, Branko; Ilić, Andjelija Ž.; Ilić, Milan M.
2018-03-01
Modern full-wave computational tools enable rigorous simulations of the linear parts of complex microwave circuits within minutes, taking into account all physical electromagnetic (EM) phenomena. Non-linear components and other discrete elements of the hybrid microwave circuit are then easily added within the circuit simulator. This combined full-wave and circuit-based analysis is a must in the final stages of circuit design, although initial designs and optimisations are still faster and more comfortably done entirely in the circuit-based environment, which offers real-time solutions at the expense of accuracy. However, due to insufficient information and a general lack of specific case studies, practitioners still struggle when choosing an appropriate analysis method, or a component model, because different choices lead to different solutions, often with uncertain accuracy and unexplained discrepancies arising between simulations and measurements. Here we design a reconfigurable power amplifier, as a case study, using both a circuit-based solver and a full-wave EM solver. We compare numerical simulations with measurements on the manufactured prototypes, discussing the obtained differences, pointing out the importance of de-embedding of measured parameters and appropriate modelling of discrete components, and giving specific recipes for good modelling practices.
Distillation Designs for the Lunar Surface
NASA Technical Reports Server (NTRS)
Boul, Peter J.; Lange, Kevin E.; Conger, Bruce; Anderson, Molly
2010-01-01
Gravity-based distillation methods may be applied to the purification of wastewater on a lunar base. These solutions to water processing are robust physical separation techniques, which may be more advantageous than many other techniques for their simplicity in design and operation. The two techniques can be used in conjunction with each other to obtain high-purity water. The components and feed compositions for modeling wastewater streams are presented in conjunction with the Aspen property system for traditional stage distillation. While the individual components of each waste stream will vary naturally within certain bounds, an analog model for wastewater processing is suggested based on typical concentration ranges for these components. Target purity levels for recycled water are determined for each individual component based on NASA's required maximum contaminant levels for potable water. Optimum parameters such as reflux ratio, feed stage location, and processing rates are determined with respect to the power consumption of the process. Multistage distillation is evaluated for components in wastewater to determine the minimum number of stages necessary for each of 65 components in humidity condensate and urine wastewater mixed streams.
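For a flavor of the minimum-stage estimates mentioned above, the classical Fenske equation gives the minimum number of theoretical stages for a binary split at total reflux. The study's multicomponent Aspen analysis is far more involved; the compositions and relative volatility below are illustrative values, not results from the paper:

```python
import math

def fenske_min_stages(x_dist, x_bot, alpha):
    """Fenske equation: minimum theoretical stages at total reflux for a
    binary separation with constant relative volatility alpha.
    x_dist, x_bot: light-key mole fractions in distillate and bottoms."""
    separation = (x_dist / (1.0 - x_dist)) * ((1.0 - x_bot) / x_bot)
    return math.log(separation) / math.log(alpha)

# e.g. 99% pure distillate, 1% light key left in the bottoms, alpha = 2.5
n_min = fenske_min_stages(x_dist=0.99, x_bot=0.01, alpha=2.5)
print(n_min)  # about 10 theoretical stages
```

Repeating this per contaminant, against its required maximum level in the recycled water, is one simple way to bound the stage count before a full multicomponent simulation.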
A Multi-Mode Shock Tube for Investigation of Blast-Induced Traumatic Brain Injury
Reneer, Dexter V.; Hisel, Richard D.; Hoffman, Joshua M.; Kryscio, Richard J.; Lusk, Braden T.
2011-01-01
Blast-induced mild traumatic brain injury (bTBI) has become increasingly common in recent military conflicts. The mechanisms by which non-impact blast exposure results in bTBI are incompletely understood. Current small animal bTBI models predominantly utilize compressed air-driven membrane rupture as their blast wave source, while large animal models use chemical explosives. The pressure-time signature of each blast mode is unique, making it difficult to evaluate the contributions of the different components of the blast wave to bTBI when using a single blast source. We utilized a multi-mode shock tube, the McMillan blast device, capable of utilizing compressed air- and compressed helium-driven membrane rupture, and the explosives oxyhydrogen and cyclotrimethylenetrinitramine (RDX, the primary component of C-4 plastic explosives) as the driving source. At similar maximal blast overpressures, the positive pressure phase of compressed air-driven blasts was longer, and the positive impulse was greater, than those observed for shockwaves produced by other driving sources. Helium-driven shockwaves more closely resembled RDX blasts, but by displacing air created a hypoxic environment within the shock tube. Pressure-time traces from oxyhydrogen-driven shockwaves were very similar those produced by RDX, although they resulted in elevated carbon monoxide levels due to combustion of the polyethylene bag used to contain the gases within the shock tube prior to detonation. Rats exposed to compressed air-driven blasts had more pronounced vascular damage than those exposed to oxyhydrogen-driven blasts of the same peak overpressure, indicating that differences in blast wave characteristics other than peak overpressure may influence the extent of bTBI.
Use of this multi-mode shock tube in small animal models will enable comparison of the extent of brain injury with the pressure-time signature produced using each blast mode, facilitating evaluation of the blast wave components contributing to bTBI. PMID:21083431
Working with the HL7 metamodel in a Model Driven Engineering context.
Martínez-García, A; García-García, J A; Escalona, M J; Parra-Calderón, C L
2015-10-01
HL7 (Health Level 7) International is an organization that defines health information standards. Most HL7 domain information models have been designed in a proprietary graphic language whose domain models are based on the HL7 metamodel. Many researchers have considered using HL7 in the MDE (Model-Driven Engineering) context, but a limitation has been identified: all MDE tools support UML (Unified Modeling Language), which is a standard modeling language, but most do not support the HL7 proprietary model language. We want to support software engineers without HL7 experience, so that they can model real-world problems by defining system requirements in UML that are transparently compliant with HL7 domain models. The objective of the present research is to connect HL7 with software analysis using a generic model-based approach. This paper introduces a first approach to an HL7 MDE solution that considers the MIF (Model Interchange Format) metamodel proposed by HL7, making use of a plug-in developed in the EA (Enterprise Architect) tool. Copyright © 2015 Elsevier Inc. All rights reserved.
Retrosynthetic Reaction Prediction Using Neural Sequence-to-Sequence Models
2017-01-01
We describe a fully data-driven model that learns to perform a retrosynthetic reaction prediction task, which is treated as a sequence-to-sequence mapping problem. The end-to-end trained model has an encoder–decoder architecture consisting of two recurrent neural networks, an approach that has previously shown great success in solving other sequence-to-sequence prediction tasks such as machine translation. The model is trained on 50,000 experimental reaction examples from the United States patent literature, which span 10 broad reaction types that are commonly used by medicinal chemists. We find that our model performs comparably with a rule-based expert system baseline model, and also overcomes certain limitations associated with rule-based expert systems and with any machine learning approach that contains a rule-based expert system component. Our model provides an important first step toward solving the challenging problem of computational retrosynthetic analysis. PMID:29104927
NASA Astrophysics Data System (ADS)
Ye, Hongfei; Zheng, Yonggang; Zhang, Zhongqiang; Zhang, Hongwu; Chen, Zhen
2016-08-01
Precisely controlling the deformation of carbon nanotubes (CNTs) has practical application in the development of nanoscale functional devices, although it is a challenging task. Here, we propose a novel method to guide the deformation of CNTs through filling them with salt water and applying an electric field. With the electric field along the axial direction, the height of CNTs is enlarged by the axial electric force due to the internal ions and polar water molecules. Under an electric field with two mutually orthogonal components, the transverse electric force could further induce the bending deformation of CNTs. Based on the classical rod and beam theories, two mechanical models are constructed to verify and quantitatively describe the relationships between the tension and bending deformations of CNTs and the electric field intensity. Moreover, by means of the electric field-driven tension behavior of CNTs, we design a stretchable molecular sieve to control the flow rate of mixed gas and collect a single high-purity gas. The present work opens up new avenues in the design and fabrication of nanoscale controlling units.
NASA Astrophysics Data System (ADS)
El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali
2015-09-01
The paper develops a set membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise and a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained based on data recorded in several flight scenarios of a highly representative aircraft benchmark.
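The set-membership test described above reduces, in its simplest form, to checking whether a measurement falls inside a predicted interval inflated by a data-driven noise bound. The function name and numeric margins here are illustrative assumptions, not the benchmark's actual thresholds:

```python
def detect_abnormal(predicted, measured, model_margin, noise_bound):
    """Set-membership detection: flag a fault when the measurement falls
    outside the model's interval prediction inflated by the noise bound."""
    lo = predicted - model_margin - noise_bound
    hi = predicted + model_margin + noise_bound
    return not (lo <= measured <= hi)

# nominal surface position 5.0 deg, interval half-width 0.5 deg, noise 0.2 deg
ok_reading = detect_abnormal(5.0, 5.4, 0.5, 0.2)  # inside [4.3, 5.7]: no fault
fault = detect_abnormal(5.0, 6.0, 0.5, 0.2)       # outside the interval: fault
```

The paper's contribution is in how the interval (model-driven) and the noise bound (data-driven) are characterized; the consistency check itself is this simple inclusion test.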
Handheld ultrasound array imaging device
NASA Astrophysics Data System (ADS)
Hwang, Juin-Jet; Quistgaard, Jens
1999-06-01
A handheld ultrasound imaging device, one that weighs less than five pounds, has been developed for diagnosing trauma on the combat battlefield as well as for a variety of commercial mobile diagnostic applications. This handheld device consists of four component ASICs, each designed using state-of-the-art microelectronics technologies. These ASICs are integrated with a convex array transducer to allow high-quality imaging of soft tissues and blood flow in real time. The device is designed to be battery driven or ac powered, with built-in image storage and cineloop playback capability. Design methodologies for a handheld device are fundamentally different from those for a cart-based system. As the system architecture, signal and image processing algorithms, and image control circuitry and software in this device are designed to suit large-scale integration, the imaging performance of the device is adequate for the intended applications. To extend battery life, low-power design rules and power management circuits are incorporated in the design of each component ASIC. The performance of the prototype device is currently being evaluated for various applications such as a primary image screening tool, fetal imaging in obstetrics, and foreign object detection and wound assessment for emergency care.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
Nonlinear flow response of soft hair beds
NASA Astrophysics Data System (ADS)
Alvarado, José; Comtet, Jean; de Langre, Emmanuel; Hosoi, A. E.
2017-10-01
We are `hairy' on the inside: beds of passive fibres anchored to a surface and immersed in fluids are prevalent in many biological systems, including intestines, tongues, and blood vessels. These hairs are soft enough to deform in response to stresses from fluid flows. Yet fluid stresses are in turn affected by hair deformation, leading to a coupled elastoviscous problem that is poorly understood. Here we investigate a biomimetic model system of elastomer hair beds subject to shear-driven Stokes flows. We characterize this system with a theoretical model that accounts for the large-deformation flow response of hair beds. Hair bending results in a drag-reducing nonlinearity because the hair tip lowers towards the base, widening the gap through which fluid flows. When hairs are cantilevered at an angle subnormal to the surface, flow against the grain bends hairs away from the base, narrowing the gap. The flow response of angled hair beds is axially asymmetric and amounts to a rectification nonlinearity. We identify an elastoviscous parameter that controls nonlinear behaviour. Our study raises the hypothesis that biological hairy surfaces function to reduce fluid drag. Furthermore, angled hairs may be incorporated in the design of integrated microfluidic components, such as diodes and pumps.
Analysis of dynamic behavior of multiple-stage planetary gear train used in wind driven generator.
Wang, Jungang; Wang, Yong; Huo, Zhipu
2014-01-01
A dynamic model of a multiple-stage planetary gear train, composed of a two-stage planetary gear train and a one-stage parallel axis gear, is proposed for use in a wind driven generator to analyze the influence of revolution speed and mesh error on dynamic load sharing characteristics, based on lumped parameter theory. The dynamic equation of the model is solved using a numerical method to analyze the uniformity of load distribution in the system. It is shown that the load sharing property of the system is significantly affected by mesh error and rotational speed, and that the load sharing coefficients and their rates of change for the internal and external meshes of the system differ markedly from each other. The study provides a useful theoretical guideline for the design of the multiple-stage planetary gear train of a wind driven generator.
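One common way to quantify the load sharing property discussed above is a load sharing coefficient: the ratio of the largest planet load to the ideal equal share, where 1.0 means perfectly uniform distribution. This minimal sketch uses illustrative numbers, not outputs of the paper's lumped-parameter model:

```python
def load_sharing_coefficient(planet_loads):
    """Load sharing coefficient: largest planet load divided by the mean
    (ideal equal-share) load; 1.0 indicates perfectly uniform sharing."""
    mean_load = sum(planet_loads) / len(planet_loads)
    return max(planet_loads) / mean_load

# three planets with a mesh-error-induced imbalance (normalized loads)
lsc = load_sharing_coefficient([1.10, 0.95, 0.95])
print(lsc)  # 1.1: the most heavily loaded planet carries 10% above its share
```

In the paper's setting, the instantaneous planet loads come from integrating the dynamic equations at each rotational speed and mesh error, and this coefficient is tracked over time for each internal and external mesh.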
PMID:24511295
Acoustic Performance of Drive Rig Mufflers for Model Scale Engine Testing
NASA Technical Reports Server (NTRS)
Stephens, David B.
2013-01-01
Aircraft engine component testing at the NASA Glenn Research Center (GRC) includes acoustic testing of scale model fans and propellers in the 9- by 15-Foot Low Speed Wind Tunnel (LSWT). This testing utilizes air-driven turbines to deliver power to the article being studied. These air turbines exhaust directly downstream of the model in the wind tunnel test section and have been found to produce significant unwanted noise that reduces the quality of the acoustic measurements of the engine model being tested. This report describes an acoustic test of a muffler designed to mitigate the extraneous turbine noise. The muffler was found to provide acoustic attenuation of at least 8 dB between 700 Hz and 20 kHz, which significantly improves the quality of acoustic measurements in the facility.
Reconfigurable Model Execution in the OpenMDAO Framework
NASA Technical Reports Server (NTRS)
Hwang, John T.
2017-01-01
NASA's OpenMDAO framework facilitates constructing complex models and computing their derivatives for multidisciplinary design optimization. Decomposing a model into components that follow a prescribed interface enables OpenMDAO to assemble multidisciplinary derivatives from the component derivatives using what amounts to the adjoint method, direct method, chain rule, global sensitivity equations, or any combination thereof, using the MAUD architecture. OpenMDAO also handles the distribution of processors among the disciplines by hierarchically grouping the components, and it automates the data transfer between components that are on different processors. These features have made OpenMDAO useful for applications in aircraft design, satellite design, wind turbine design, and aircraft engine design, among others. This paper presents new algorithms for OpenMDAO that enable reconfigurable model execution. This concept refers to dynamically changing, during execution, one or more of: the variable sizes, solution algorithm, parallel load balancing, or set of variables, i.e., adding and removing components, perhaps to switch to a higher-fidelity sub-model. Any component can reconfigure at any point, even when running in parallel with other components, and the reconfiguration algorithm presented here performs the synchronized updates to all other components that are affected. A reconfigurable software framework for multidisciplinary design optimization enables new adaptive solvers, adaptive parallelization, and new applications such as gradient-based optimization with overset flow solvers and adaptive mesh refinement. Benchmarking results demonstrate the time savings for reconfiguration compared to setting up the model again from scratch, which can be significant in large-scale problems.
Additionally, the new reconfigurability feature is applied to a mission profile optimization problem for commercial aircraft where both the parametrization of the mission profile and the time discretization are adaptively refined, resulting in computational savings of roughly 10% and the elimination of oscillations in the optimized altitude profile.
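The component-wise derivative assembly described in the abstract above can be illustrated with a minimal plain-Python sketch: each component supplies its own partial derivative, and the framework composes them with the chain rule. The function names are hypothetical; this is not the OpenMDAO API, which generalizes the same idea to coupled multidisciplinary systems.

```python
# Two chained "components": y = f(x) = x**2, then z = g(y) = 3*y + 1.
# Each component knows only its own partial derivative; the total
# derivative dz/dx is assembled by the chain rule without any
# component knowing about the others.

def f(x):
    return x ** 2

def df_dx(x):
    return 2 * x

def g(y):
    return 3 * y + 1

def dg_dy(y):
    return 3.0

def total_derivative(x):
    """Assemble dz/dx = dg/dy * df/dx from the component partials."""
    y = f(x)
    return dg_dy(y) * df_dx(x)

if __name__ == "__main__":
    print(total_derivative(2.0))  # dz/dx = 3 * 2x = 12.0
```

Frameworks such as OpenMDAO automate exactly this composition (and its adjoint/direct variants) over arbitrarily large component graphs, which is what makes reconfiguring a single component cheap relative to rebuilding the whole model.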
Spinning Rocket Simulator Turntable Design
NASA Technical Reports Server (NTRS)
Miles, Robert W.
2001-01-01
Contained herein is the research and data acquired from the Turntable Design portion of the Spinning Rocket Simulator (SRS) project. The SRS Project studies and eliminates the effect of coning on thrust-propelled spacecraft. This design and construction of the turntable adds a structural support for the SRS model and two degrees of freedom. The two degrees of freedom, radial and circumferential, will help develop a simulated thrust force perpendicular to the plane of the spacecraft model while undergoing an unstable coning motion. The Turntable consists of a ten-foot linear track mounted to a sprocket and press-fit to a thrust bearing. A two-inch high column grounded by a Triangular Baseplate supports this bearing and houses the slip rings and pressurized, air-line swivel. The thrust bearing allows the entire system to rotate under the moment applied through the chain-driven sprocket producing a circumferential degree of freedom. The radial degree of freedom is given to the model through the helically threaded linear track. This track allows the Model Support and Counter Balance to simultaneously reposition according to the coning motion of the Model. Two design factors that hinder the linear track are bending and twist due to torsion. A Standard Aluminum "C" channel significantly reduces these two deflections. Safety considerations dictate the design of all the components involved in this project.
Geerts, Hugo; Hofmann-Apitius, Martin; Anastasio, Thomas J
2017-11-01
Neurodegenerative diseases such as Alzheimer's disease (AD) follow a slowly progressing dysfunctional trajectory, with a large presymptomatic component and many comorbidities. Using preclinical models and large-scale omics studies ranging from genetics to imaging, a large number of processes that might be involved in AD pathology at different stages and levels have been identified. The sheer number of putative hypotheses makes it almost impossible to estimate their contribution to the clinical outcome and to develop a comprehensive view on the pathological processes driving the clinical phenotype. Traditionally, bioinformatics approaches have provided correlations and associations between processes and phenotypes. Focusing on causality, a new breed of advanced and more quantitative modeling approaches that use formalized domain expertise offer new opportunities to integrate these different modalities and outline possible paths toward new therapeutic interventions. This article reviews three different computational approaches and their possible complementarities. Process algebras, implemented using declarative programming languages such as Maude, facilitate simulation and analysis of complicated biological processes on a comprehensive but coarse-grained level. A model-driven Integration of Data and Knowledge, based on the OpenBEL platform and using reverse causative reasoning and network jump analysis, can generate mechanistic knowledge and a new, mechanism-based taxonomy of disease. Finally, Quantitative Systems Pharmacology is based on formalized implementation of domain expertise in a more fine-grained, mechanism-driven, quantitative, and predictive humanized computer model. We propose a strategy to combine the strengths of these individual approaches for developing powerful modeling methodologies that can provide actionable knowledge for rational development of preventive and therapeutic interventions. 
Development of these computational approaches is likely to be required for further progress in understanding and treating AD. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.
Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto
2017-09-29
The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuro-Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Unlike classic UAV sensor fault detection algorithms, which are based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantage of this algorithm is that it combines real-time, model-free residual analysis of Kalman Filter (KF) estimates with ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
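The residual-analysis step underlying this kind of detector can be sketched for the simplest possible case: a scalar sensor tracked by a random-walk Kalman filter, with the paper's ANFIS decision system replaced by a plain threshold on the innovation. All parameter values below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's ANFIS system): a scalar Kalman
# filter tracks a sensor reading; the innovation (measurement minus
# prediction) is monitored, and a residual far outside its expected
# spread is flagged as a possible sensor fault.

def detect_faults(measurements, q=1e-4, r=0.1, threshold=3.0):
    x, p = measurements[0], 1.0    # state estimate and its variance
    flags = []
    for z in measurements:
        p = p + q                  # predict (random-walk model)
        s = p + r                  # innovation variance
        residual = z - x
        flags.append(abs(residual) > threshold * s ** 0.5)
        k = p / s                  # Kalman gain
        x = x + k * residual       # update
        p = (1 - k) * p
    return flags

if __name__ == "__main__":
    data = [1.0] * 20 + [9.0] + [1.0] * 5    # injected spike at index 20
    flags = detect_faults(data)
    print(flags.index(True))                 # → 20
```

A learned decision system such as ANFIS replaces the fixed 3-sigma threshold with a classifier trained online on residual features, which is what lets the paper's method adapt to faults that were never modelled in advance.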
Modeling Cable and Guide Channel Interaction in a High-Strength Cable-Driven Continuum Manipulator
Moses, Matthew S.; Murphy, Ryan J.; Kutzer, Michael D. M.; Armand, Mehran
2016-01-01
This paper presents several mechanical models of a high-strength cable-driven dexterous manipulator designed for surgical procedures. A stiffness model is presented that distinguishes between contributions from the cables and the backbone. A physics-based model incorporating cable friction is developed and its predictions are compared with experimental data. The data show that under high tension and high curvature, the shape of the manipulator deviates significantly from a circular arc. However, simple parametric models can fit the shape with good accuracy. The motivating application for this study is to develop a model so that shape can be predicted using easily measured quantities such as tension, so that real-time navigation may be performed, especially in minimally-invasive surgical procedures, while reducing the need for hazardous imaging methods such as fluoroscopy. PMID:27818607
Advanced optical manufacturing digital integrated system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Li, Wei; Tang, Dingyong
2012-10-01
The development of advanced optical manufacturing technology must keep pace with modern science and technology. To address the low efficiency, low yield, and poor repeatability and consistency encountered when manufacturing large, high-precision optical components, this paper applies a business-driven approach and the Rational Unified Process to analyze the advanced optical manufacturing process flow and the requirements of an integrated manufacturing system, and proposes its architecture and key technologies. An optical-component core and a manufacturing-process-driven design for the Advanced Optical Manufacturing Digital Integrated System were developed. Results show that the system works effectively, enabling dynamic planning of the manufacturing process and information integration, and improving the production yield of the manufactory.
Moving Beyond ERP Components: A Selective Review of Approaches to Integrate EEG and Behavior
Bridwell, David A.; Cavanagh, James F.; Collins, Anne G. E.; Nunez, Michael D.; Srinivasan, Ramesh; Stober, Sebastian; Calhoun, Vince D.
2018-01-01
Relationships between neuroimaging measures and behavior provide important clues about brain function and cognition in healthy and clinical populations. While electroencephalography (EEG) provides a portable, low cost measure of brain dynamics, it has been somewhat underrepresented in the emerging field of model-based inference. We seek to address this gap in this article by highlighting the utility of linking EEG and behavior, with an emphasis on approaches for EEG analysis that move beyond focusing on peaks or “components” derived from averaging EEG responses across trials and subjects (generating the event-related potential, ERP). First, we review methods for deriving features from EEG in order to enhance the signal within single-trials. These methods include filtering based on user-defined features (i.e., frequency decomposition, time-frequency decomposition), filtering based on data-driven properties (i.e., blind source separation, BSS), and generating more abstract representations of data (e.g., using deep learning). We then review cognitive models which extract latent variables from experimental tasks, including the drift diffusion model (DDM) and reinforcement learning (RL) approaches. Next, we discuss ways to assess associations among these measures, including statistical models, data-driven joint models and cognitive joint modeling using hierarchical Bayesian models (HBMs). We think that these methodological tools are likely to contribute to theoretical advancements, and will help inform our understandings of brain dynamics that contribute to moment-to-moment cognitive function. PMID:29632480
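Of the cognitive models mentioned above, the drift diffusion model is straightforward to sketch: evidence accumulates noisily toward one of two bounds, and the crossing time is the predicted response time. The parameter values below are illustrative only, not fitted to any EEG or behavioral data.

```python
import random

# Minimal DDM sketch: evidence x accumulates with drift v and
# Gaussian noise until it crosses +a (correct) or -a (error).
# Illustrative parameters, not fitted values.

def ddm_trial(v=0.3, a=1.0, dt=0.001, noise=1.0, rng=random):
    x, t = 0.0, 0.0
    while abs(x) < a:
        x += v * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return t, x > 0    # (response time, correct?)

if __name__ == "__main__":
    random.seed(0)
    trials = [ddm_trial() for _ in range(200)]
    accuracy = sum(correct for _, correct in trials) / len(trials)
    mean_rt = sum(t for t, _ in trials) / len(trials)
    # With positive drift, most trials hit the upper (correct) bound.
    print(round(accuracy, 2), round(mean_rt, 2))
```

In model-based EEG analysis, latent quantities like the drift rate v are estimated per trial or per condition and then regressed against single-trial EEG features, which is the linkage the review emphasizes.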
Brochhausen, Mathias; Spear, Andrew D.; Cocos, Cristian; Weiler, Gabriele; Martín, Luis; Anguita, Alberto; Stenzhorn, Holger; Daskalaki, Evangelia; Schera, Fatima; Schwarz, Ulf; Sfakianakis, Stelios; Kiefer, Stephan; Dörr, Martin; Graf, Norbert; Tsiknakis, Manolis
2017-01-01
Objective: This paper introduces the objectives, methods and results of ontology development in the EU co-funded project Advancing Clinico-genomic Trials on Cancer – Open Grid Services for Improving Medical Knowledge Discovery (ACGT). While the available data in the life sciences has recently grown both in amount and quality, its full exploitation is being hindered by the use of different underlying technologies, coding systems, category schemes and reporting methods on the part of different research groups. The goal of the ACGT project is to contribute to the resolution of these problems by developing an ontology-driven, semantic grid services infrastructure that will enable efficient execution of discovery-driven scientific workflows in the context of multi-centric, post-genomic clinical trials. The focus of the present paper is the ACGT Master Ontology (MO). Methods: ACGT project researchers undertook a systematic review of existing domain and upper-level ontologies, as well as of existing ontology design software, implementation methods, and end-user interfaces. This included the careful study of best practices, design principles and evaluation methods for ontology design, maintenance, implementation, and versioning, as well as for use on the part of domain experts and clinicians.
Results: To date, the results of the ACGT project include (i) the development of a master ontology (the ACGT-MO) based on clearly defined principles of ontology development and evaluation; (ii) the development of a technical infrastructure (the ACGT Platform) that implements the ACGT-MO utilizing independent tools, components and resources that have been developed based on open architectural standards, and which includes an application updating and evolving the ontology efficiently in response to end-user needs; and (iii) the development of an Ontology-based Trial Management Application (ObTiMA) that integrates the ACGT-MO into the design process of clinical trials in order to guarantee automatic semantic integration without the need to perform a separate mapping process. PMID:20438862
Raindrop and flow interactions for interrill erosion with wind-driven rain
USDA-ARS?s Scientific Manuscript database
Wind-driven rain (WDR) experiments were conducted to evaluate the interrill component of the Water Erosion Prediction Project (WEPP) model with a two-dimensional experimental set-up in a wind tunnel. Synchronized wind and rain simulations were applied to soil surfaces on windward and leeward slopes of 7, 15...
Situations, Interaction, Process and Affordances: An Ecological Psychology Perspective.
ERIC Educational Resources Information Center
Young, Michael F.; DePalma, Andrew; Garrett, Steven
2002-01-01
From an ecological psychology perspective, a full analysis of any learning context must acknowledge the complex nonlinear dynamics that unfold as an intentionally-driven learner interacts with a technology-based purposefully designed learning environment. A full situation model would need to incorporate constraints from the environment and also…
Ren, Jiaping; Wang, Xinjie; Manocha, Dinesh
2016-01-01
We present a biologically plausible dynamics model to simulate swarms of flying insects. Our formulation, which is based on biological conclusions and experimental observations, is designed to simulate large insect swarms of varying densities. We use a force-based model that captures different interactions between the insects and the environment and computes collision-free trajectories for each individual insect. Furthermore, we model the noise as a constructive force at the collective level and present a technique to generate noise-induced insect movements in a large swarm that are similar to those observed in real-world trajectories. We use a data-driven formulation that is based on pre-recorded insect trajectories. We also present a novel evaluation metric and a statistical validation approach that takes into account various characteristics of insect motions. In practice, the combination of a curl-noise function with our dynamics model is used to generate realistic swarm simulations and emergent behaviors. We highlight its performance for simulating large flying swarms of midges, fruit flies, locusts and moths and demonstrate many collective behaviors, including aggregation, migration, phase transition, and escape responses. PMID:27187068
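The curl-noise construction referred to above can be sketched in two dimensions: taking the curl of any smooth scalar potential yields a divergence-free velocity field, which is why the noise perturbs trajectories without creating artificial sinks that agents pile into. The simple analytic potential below is a stand-in for the Perlin-style noise typically used in practice.

```python
import math

# Sketch of the curl-noise idea: given a smooth potential psi(x, y),
# set v = (d psi/dy, -d psi/dx). The resulting 2D velocity field is
# divergence-free by construction. psi here is an analytic stand-in
# for a procedural noise function.

def psi(x, y):
    return math.sin(x) * math.cos(y)

def curl_velocity(x, y, h=1e-5):
    dpsi_dy = (psi(x, y + h) - psi(x, y - h)) / (2 * h)
    dpsi_dx = (psi(x + h, y) - psi(x - h, y)) / (2 * h)
    return dpsi_dy, -dpsi_dx

def divergence(x, y, h=1e-4):
    vx1, _ = curl_velocity(x + h, y)
    vx0, _ = curl_velocity(x - h, y)
    _, vy1 = curl_velocity(x, y + h)
    _, vy0 = curl_velocity(x, y - h)
    return (vx1 - vx0) / (2 * h) + (vy1 - vy0) / (2 * h)

if __name__ == "__main__":
    # Divergence vanishes (up to finite-difference error) everywhere.
    print(abs(divergence(0.7, -1.3)) < 1e-3)   # → True
```

In a swarm simulator, this velocity (scaled by a noise amplitude) is added to the force-based update of each insect, giving noise-induced motion without violating incompressibility of the collective flow.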
A System for Measurement of Convection Aboard Space Station
NASA Technical Reports Server (NTRS)
Bogatyrev, Gennady P.; Gorbunov, Aleksei V; Putin, Gennady F.; Ivanov, Alexander I.; Nikitin, Sergei A.; Polezhaev, Vadim I.
1996-01-01
A simple device for direct measurement of buoyancy-driven fluid flows in a low-gravity environment is proposed. A system connecting spacecraft accelerometer data with measurements of thermal convection in an enclosure and with numerical simulations is developed. This system will also permit evaluation of the low-frequency microacceleration component. The goal of the paper is to present the objectives and current results of ground-based experimental and numerical modeling of this convection detector.
Predicting the Magnetic Properties of ICMEs: A Pragmatic View
NASA Astrophysics Data System (ADS)
Riley, P.; Linker, J.; Ben-Nun, M.; Torok, T.; Ulrich, R. K.; Russell, C. T.; Lai, H.; de Koning, C. A.; Pizzo, V. J.; Liu, Y.; Hoeksema, J. T.
2017-12-01
The southward component of the interplanetary magnetic field plays a crucial role in being able to successfully predict space weather phenomena. Yet, thus far, it has proven extremely difficult to forecast with any degree of accuracy. In this presentation, we describe an empirically-based modeling framework for estimating Bz values during the passage of interplanetary coronal mass ejections (ICMEs). The model includes: (1) an empirically-based estimate of the magnetic properties of the flux rope in the low corona (including helicity and field strength); (2) an empirically-based estimate of the dynamic properties of the flux rope in the high corona (including direction, speed, and mass); and (3) a physics-based estimate of the evolution of the flux rope during its passage to 1 AU driven by the output from (1) and (2). We compare model output with observations for a selection of events to estimate the accuracy of this approach. Importantly, we pay specific attention to the uncertainties introduced by the components within the framework, separating intrinsic limitations from those that can be improved upon, either by better observations or more sophisticated modeling. Our analysis suggests that current observations/modeling are insufficient for this empirically-based framework to provide reliable and actionable prediction of the magnetic properties of ICMEs. We suggest several paths that may lead to better forecasts.
Event-driven management algorithm of an Engineering documents circulation system
NASA Astrophysics Data System (ADS)
Kuzenkov, V.; Zebzeev, A.; Gromakov, E.
2015-04-01
A development methodology for an engineering document circulation system in a design company is reviewed. Discrete event-driven automaton models are proposed for describing project-management algorithms, and the use of Petri nets for the dynamic design of projects is proposed.
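The Petri-net idea behind such event-driven document workflows can be sketched with a toy token-firing model. The place and transition names for a document-approval flow below are invented for illustration.

```python
# Minimal Petri net sketch: a transition is a pair (inputs, outputs)
# of place names. It may fire when every input place holds a token;
# firing consumes one token from each input place and produces one
# in each output place.

def enabled(marking, transition):
    inputs, _ = transition
    return all(marking.get(p, 0) > 0 for p in inputs)

def fire(marking, transition):
    inputs, outputs = transition
    m = dict(marking)
    for p in inputs:
        m[p] -= 1
    for p in outputs:
        m[p] = m.get(p, 0) + 1
    return m

if __name__ == "__main__":
    # Hypothetical document-approval flow: drafted -> under review -> approved.
    submit = (["drafted"], ["under_review"])
    approve = (["under_review"], ["approved"])
    m = {"drafted": 1}
    for t in (submit, approve):
        if enabled(m, t):
            m = fire(m, t)
    print(m)   # → {'drafted': 0, 'under_review': 0, 'approved': 1}
```

Because transitions only fire when their input places are marked, the net encodes the legal event orderings of the circulation system, which is exactly what makes Petri nets useful for the dynamic design the abstract describes.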
The design, hysteresis modeling and control of a novel SMA-fishing-line actuator
NASA Astrophysics Data System (ADS)
Xiang, Chaoqun; Yang, Hui; Sun, Zhiyong; Xue, Bangcan; Hao, Lina; Asadur Rahoman, M. D.; Davis, Steve
2017-03-01
Fishing line can be combined with shape memory alloy (SMA) to form novel artificial muscle actuators which have low cost, are lightweight and soft. They can be applied in bionic, wearable and rehabilitation robots, and can reduce system weight and cost, increase power-to-weight ratio and offer safer physical human-robot interaction. However, these actuators possess several disadvantages; for example, fishing-line-based actuators possess low strength and are complex to drive, and SMA possesses a low percentage contraction and has high hysteresis. This paper presents a novel artificial actuator (known as an SMA-fishing-line) made of fishing line and SMA twisted then coiled together, which can be driven directly by an electrical voltage. Its output force can reach 2.65 N at 7.4 V drive voltage, and the percentage contraction at 4 V drive voltage with a 3 N load is 7.53%. An antagonistic bionic joint driven by the novel SMA-fishing-line actuators is presented, and based on an extended unparallel Prandtl-Ishlinskii (EUPI) model, its hysteresis behavior is established, and the error ratio of the EUPI model is determined to be 6.3%. A Joule heat model of the SMA-fishing-line is also presented, and the maximum error of the established model is 0.510 mm. Based on this accurate hysteresis model, a composite PID controller consisting of PID and an integral inverse (I-I) compensator is proposed and its performance is compared with a traditional PID controller through simulations and experimentation. These results show that the composite PID controller possesses higher control precision than basic PID, and is feasible for implementation in an SMA-fishing-line driven antagonistic bionic joint.
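The baseline PID component of the composite controller can be sketched as a standard discrete loop. The integral-inverse hysteresis compensator from the paper is not reproduced here, and the first-order plant below is a toy stand-in for the actuator; all gains are illustrative assumptions.

```python
# Basic discrete PID step: returns the control signal and the
# updated (integral, previous-error) state. Gains are illustrative.

def pid_step(error, state, kp=2.0, ki=1.0, kd=0.1, dt=0.01):
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

if __name__ == "__main__":
    # Toy first-order plant standing in for the SMA-fishing-line
    # actuator: y' = u - y, integrated with the same dt = 0.01.
    y, state, setpoint = 0.0, (0.0, 0.0), 1.0
    for _ in range(2000):
        u, state = pid_step(setpoint - y, state)
        y += 0.01 * (u - y)
    print(abs(setpoint - y) < 0.05)   # settles near the setpoint
```

In the paper's composite scheme, a hysteresis-inverting compensator is placed in series with such a loop so the PID acts on an approximately linearized actuator, which is what yields the reported precision gain over plain PID.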
Integrated Workforce Planning Model: A Proof of Concept
NASA Technical Reports Server (NTRS)
Guruvadoo, Eranna K.
2001-01-01
Recently, the Workforce and Diversity Management Office at KSC has launched a major initiative to develop and implement a competency/skill approach to Human Resource management. As the competency/skill dictionary is being elaborated, the need for a competency-based workforce-planning model has been recognized. A proof of concept for such a model is presented using a multidimensional data model that can provide the data infrastructure necessary to drive intelligent decision support systems for workforce planning. The components of the competency-driven workforce-planning model are explained. The data model is presented, along with several schemas that would support the workforce-planning model. Some directions and recommendations for future work are given.
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, computing, and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from solving a fixed problem to dynamically creating teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational-information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.
A discrete decentralized variable structure robotic controller
NASA Technical Reports Server (NTRS)
Tumeh, Zuheir S.
1989-01-01
A decentralized trajectory controller for robotic manipulators is designed and tested using a multiprocessor architecture and a PUMA 560 robot arm. The controller is made up of a nominal model-based component and a correction component based on a variable structure suction control approach. The second control component is designed using bounds on the difference between the used and actual values of the model parameters. Since the continuous manipulator system is digitally controlled along a trajectory, a discretized equivalent model of the manipulator is used to derive the controller. The motivation for decentralized control is that the derived algorithms can be executed in parallel using a distributed, relatively inexpensive, architecture where each joint is assigned a microprocessor. Nonlinear interaction and coupling between joints is treated as a disturbance torque that is estimated and compensated for.
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
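The model-selection idea in the abstract above can be illustrated with a simpler criterion than the paper's negative log-predictive likelihood: out-of-sample one-step squared error of a nearest-neighbour predictor over delay embeddings of increasing dimension. Everything below (the AR process, the predictor, the split) is an illustrative assumption, not the paper's estimator.

```python
import random

# Build delay-coordinate vectors of dimension d, predict one step
# ahead with the nearest past neighbour, and score dimensions by
# held-out squared error (a crude stand-in for the negative
# log-predictive likelihood criterion).

def embed(series, d):
    return [(tuple(series[i - d:i]), series[i]) for i in range(d, len(series))]

def prediction_error(series, d, split=0.7):
    pairs = embed(series, d)
    cut = int(len(pairs) * split)
    train, test = pairs[:cut], pairs[cut:]
    err = 0.0
    for x, target in test:
        _, pred = min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
        err += (pred - target) ** 2
    return err / len(test)

if __name__ == "__main__":
    random.seed(1)
    # Stochastic process that depends on the second lag only, so a
    # one-dimensional embedding is predictively useless.
    s = [random.gauss(0, 1), random.gauss(0, 1)]
    for _ in range(600):
        s.append(0.9 * s[-2] + 0.2 * random.gauss(0, 1))
    errors = {d: prediction_error(s, d) for d in (1, 2, 3)}
    print(min(errors, key=errors.get))
```

The paper's criterion replaces the squared error with a nonparametric predictive likelihood, which also penalizes overconfident predictions rather than only their mean error.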
Infusing Technology Driven Design Thinking in Industrial Design Education: A Case Study
ERIC Educational Resources Information Center
Mubin, Omar; Novoa, Mauricio; Al Mahmud, Abdullah
2017-01-01
Purpose: This paper narrates a case study on design thinking-based education work in an industrial design honours program. Student projects were developed in a multi-disciplinary setting across a Computing and Engineering faculty that allowed promoting technologically and user-driven innovation strategies. Design/methodology/approach: A renewed…
Test Driven Development: Lessons from a Simple Scientific Model
NASA Astrophysics Data System (ADS)
Clune, T. L.; Kuo, K.
2010-12-01
In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.
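The test-first workflow described above can be sketched with Python's unittest framework: the tests pin down the expected behaviour of a hypothetical saturation-ratio helper for a snowflake-growth model (the function name and physics simplification are assumptions for illustration, not the authors' code).

```python
import unittest

def saturation_ratio(vapor_pressure, equilibrium_pressure):
    """Ratio driving snowflake growth; > 1 means supersaturated.

    In TDD this implementation is written only after the tests
    below have been written (and initially fail).
    """
    if equilibrium_pressure <= 0:
        raise ValueError("equilibrium pressure must be positive")
    return vapor_pressure / equilibrium_pressure

class TestSaturationRatio(unittest.TestCase):
    def test_supersaturated(self):
        self.assertGreater(saturation_ratio(1.2, 1.0), 1.0)

    def test_equilibrium(self):
        self.assertEqual(saturation_ratio(1.0, 1.0), 1.0)

    def test_rejects_nonphysical_input(self):
        with self.assertRaises(ValueError):
            saturation_ratio(1.0, 0.0)

if __name__ == "__main__":
    unittest.main(argv=["tdd_sketch"], exit=False)
```

As the abstract notes, the third test doubles as documentation of a scientific assumption (equilibrium pressure is strictly positive), illustrating how TDD can record the model's premises alongside its code.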
A Hybrid Coarse-graining Approach for Lipid Bilayers at Large Length and Time Scales
Ayton, Gary S.; Voth, Gregory A.
2009-01-01
A hybrid analytic-systematic (HAS) coarse-grained (CG) lipid model is developed and employed in a large-scale simulation of a liposome. The methodology is termed hybrid analytic-systematic because one component of the interaction between CG sites is variationally determined from the multiscale coarse-graining (MS-CG) methodology, while the remaining component utilizes an analytic potential. The systematic component models the in-plane center of mass interaction of the lipids as determined from an atomistic-level MD simulation of a bilayer. The analytic component is based on the well known Gay-Berne ellipsoid of revolution liquid crystal model, and is designed to model the highly anisotropic interactions at a highly coarse-grained level. The HAS CG approach is the first step in an “aggressive” CG methodology designed to model multi-component biological membranes at very large length and time scales. PMID:19281167
NASA Astrophysics Data System (ADS)
Wang, Hexiang; Schuster, Eugenio; Rafiq, Tariq; Kritz, Arnold; Ding, Siye
2016-10-01
Extensive research has been conducted to find high-performance operating scenarios characterized by high fusion gain, good confinement, plasma stability and possible steady-state operation. A key plasma property that is related to both the stability and performance of these advanced plasma scenarios is the safety factor profile. A key component of the EAST research program is the exploration of non-inductively driven steady-state plasmas with the recently upgraded heating and current drive capabilities that include lower hybrid current drive and neutral beam injection. Anticipating the need for tight regulation of the safety factor profile in these plasma scenarios, a first-principles-driven (FPD) control-oriented model is proposed to describe the safety factor profile evolution in EAST in response to the different actuators. The TRANSP simulation code is employed to tailor the FPD model to the EAST tokamak geometry and to convert it into a form suitable for control design. The FPD control-oriented model's prediction capabilities are demonstrated by comparing predictions with experimental data from EAST. Supported by the US DOE under DE-SC0010537, DE-FG02-92ER54141 and DE-SC0013977.
CrossTalk. The Journal of Defense Software Engineering. Volume 23, Number 6, Nov/Dec 2010
2010-11-01
Model of architectural design. It guides developers to apply effort to their software architecture commensurate with the risks faced by... Driven Model is the promotion of risk to prominence. It is possible to apply the Risk-Driven Model to essentially any software development process... succeed without any planned architecture work, while many high-risk projects would fail without it. The Risk-Driven Model walks a middle path
Physical modelling of LNG rollover in a depressurized container filled with water
NASA Astrophysics Data System (ADS)
Maksim, Dadonau; Denissenko, Petr; Hubert, Antoine; Dembele, Siaka; Wen, Jennifer
2015-11-01
Stable density stratification of multi-component Liquefied Natural Gas causes it to form distinct layers, with the upper layer having a higher fraction of the lighter components. Heat flux through the walls and base of the container results in buoyancy-driven convection accompanied by heat and mass transfer between the layers. The equilibration of densities of the top and bottom layers, normally caused by the preferential evaporation of Nitrogen, may induce an imbalance in the system and trigger a rapid mixing process, the so-called rollover. Numerical simulation of the rollover is complicated and codes require validation. Physical modelling of the phenomenon has been performed in a water-filled depressurized vessel. Reducing the gas pressure in the container to levels comparable to the hydrostatic pressure in the water column allows industrial reservoirs tens of metres tall to be modelled using a 20 cm laboratory setup. Additionally, it allows superheating of the base fluid layer to be modelled at temperatures close to room temperature. Flow visualizations and parametric studies are presented. Results are related to outcomes of numerical modelling.
Designing Educative Curriculum Materials: A Theoretically and Empirically Driven Process
ERIC Educational Resources Information Center
Davis, Elizabeth A.; Palincsar, Annemarie Sullivan; Arias, Anna Maria; Bismack, Amber Schultz; Marulis, Loren M.; Iwashyna, Stefanie K.
2014-01-01
In this article, the authors argue for a design process in the development of educative curriculum materials that is theoretically and empirically driven. Using a design-based research approach, they describe their design process for incorporating educative features intended to promote teacher learning into existing, high-quality curriculum…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, C.H.; Ready, A.B.; Rea, J.
1995-06-01
Versions of the computer program PROATES (PROcess Analysis for Thermal Energy Systems) have been used since 1979 to analyse plant performance improvement proposals relating to existing plant and also to evaluate new plant designs. Several plant modifications have been made to improve performance based on the model predictions, and the predicted performance has been realised in practice. The program was born out of a need to model the overall steady state performance of complex plant to enable proposals to change plant component items or operating strategy to be evaluated. To do this with confidence it is necessary to model the multiple thermodynamic interactions between the plant components. The modelling system is modular in concept, allowing the configuration of individual plant components to represent any particular power plant design. A library exists of physics-based modules which have been extensively validated and which provide representations of a wide range of boiler, turbine and CW system components. Changes to model data and construction are achieved via a user-friendly graphical model editing/analysis front-end, with results being presented via the computer screen or hard copy. The paper describes briefly the modelling system but concentrates mainly on its application to assess design re-optimisation, firing with different fuels and the re-powering of an existing plant.
Library Catalog Log Analysis in E-Book Patron-Driven Acquisitions (PDA): A Case Study
ERIC Educational Resources Information Center
Urbano, Cristóbal; Zhang, Yin; Downey, Kay; Klingler, Thomas
2015-01-01
Patron-Driven Acquisitions (PDA) is a new model used for e-book acquisition by academic libraries. A key component of this model is to make records of e-books available in a library catalog and let actual patron usage decide whether or not an item is purchased. However, there has been a lack of research examining the role of the library catalog as…
Chen, Jiehui; Salim, Mariam B; Matsumoto, Mitsuji
2010-01-01
Wireless Sensor Networks (WSNs) designed for mission-critical applications suffer from limited sensing capacities, particularly fast energy depletion. To address this, mobile sinks can be used to balance the energy consumption in WSNs, but the frequent location updates of the mobile sinks can lead to data collisions and rapid energy consumption for some specific sensors. This paper explores an optimal barrier coverage based sensor deployment for event driven WSNs where a dual-sink model was designed to evaluate the energy performance of not only the static sensors, but also the Static Sink (SS) and Mobile Sinks (MSs) simultaneously, based on parameters such as the sensor transmission range r and the velocity v of the mobile sink. Moreover, a MS mobility model was developed to enable the SS and MSs to collaborate effectively, while achieving spatiotemporal energy performance efficiency by using the knowledge of the cumulative density function (cdf), the Poisson process and the M/G/1 queue. The simulation results clearly demonstrated the improved energy performance of the whole network and showed that our eDSA algorithm is more efficient than the static-sink model, reducing energy consumption by approximately half. Moreover, we demonstrate that our results are robust to realistic sensing models and also validate the correctness of our results through extensive simulations.
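The abstract's delay and energy analysis leans on the M/G/1 queue; the standard tool for such an analysis is the Pollaczek-Khinchine formula for the mean waiting time. The sketch below is a generic illustration of that formula, not code from the eDSA algorithm; the function name and parameters are ours.

```python
def mg1_mean_wait(arrival_rate, mean_service, service_second_moment):
    """Pollaczek-Khinchine mean waiting time for an M/G/1 queue.

    arrival_rate: Poisson arrival rate (lambda)
    mean_service: E[S], mean service time
    service_second_moment: E[S^2], second moment of service time
    """
    rho = arrival_rate * mean_service  # server utilization
    if rho >= 1.0:
        raise ValueError("queue is unstable (rho >= 1)")
    return arrival_rate * service_second_moment / (2.0 * (1.0 - rho))
```

For exponential service with rate 1 (so E[S] = 1, E[S^2] = 2) and lambda = 0.5, this reduces to the familiar M/M/1 result of a mean wait of 1.0.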
Fundamental Technology Development for Gas-Turbine Engine Health Management
NASA Technical Reports Server (NTRS)
Mercer, Carolyn R.; Simon, Donald L.; Hunter, Gary W.; Arnold, Steven M.; Reveley, Mary S.; Anderson, Lynn M.
2007-01-01
Integrated vehicle health management technologies promise to dramatically improve the safety of commercial aircraft by reducing system and component failures as causal and contributing factors in aircraft accidents. To realize this promise, fundamental technology development is needed to produce reliable health management components. These components include diagnostic and prognostic algorithms, physics-based and data-driven lifing and failure models, sensors, and a sensor infrastructure including wireless communications, power scavenging, and electronics. In addition, system assessment methods are needed to effectively prioritize development efforts. Development work is needed throughout the vehicle, but particular challenges are presented by the hot, rotating environment of the propulsion system. This presentation describes current work in the field of health management technologies for propulsion systems for commercial aviation.
Tsui, Fu-Chiang; Espino, Jeremy U.; Weng, Yan; Choudary, Arvinder; Su, Hoah-Der; Wagner, Michael M.
2005-01-01
The National Retail Data Monitor (NRDM) has monitored over-the-counter (OTC) medication sales in the United States since December 2002. The NRDM collects data from over 18,600 retail stores and processes over 0.6 million sales records per day. This paper describes key architectural features that we have found necessary for a data utility component in a national biosurveillance system. These elements include event-driven architecture to provide analyses of data in near real time, multiple levels of caching to improve query response time, high availability through the use of clustered servers, scalable data storage through the use of storage area networks and a web-service function for interoperation with affiliated systems. The methods and architectural principles are relevant to the design of any production data utility for public health surveillance—systems that collect data from multiple sources in near real time for use by analytic programs and user interfaces that have substantial requirements for time-series data aggregated in multiple dimensions.
A Model-Driven, Science Data Product Registration Service
NASA Astrophysics Data System (ADS)
Hardman, S.; Ramirez, P.; Hughes, J. S.; Joyner, R.; Cayanan, M.; Lee, H.; Crichton, D. J.
2011-12-01
The Planetary Data System (PDS) has undertaken an effort to overhaul the PDS data architecture (including the data model, data structures, data dictionary, etc.) and to deploy an upgraded software system (including data services, distributed data catalog, etc.) that fully embraces the PDS federation as an integrated system while taking advantage of modern innovations in information technology (including networking capabilities, processing speeds, and software breakthroughs). A core component of this new system is the Registry Service that will provide functionality for tracking, auditing, locating, and maintaining artifacts within the system. These artifacts range from data files and label files to schemas, dictionary definitions for objects and elements, documents, and services. This service offers a single reference implementation of the registry capabilities detailed in the Consultative Committee for Space Data Systems (CCSDS) Registry Reference Model White Book. The CCSDS Reference Model in turn relies heavily on the Electronic Business using eXtensible Markup Language (ebXML) standards for registry services and the registry information model, managed by the OASIS consortium. Registries are pervasive components in most information systems. For example, data dictionaries, service registries, LDAP directory services, and even databases provide registry-like services. These all include an account of informational items that are used in large-scale information systems, ranging from data values such as names and codes to vocabularies, services and software components. The problem is that many of these registry-like services were designed with their own data models associated with the specific type of artifact they track. Additionally, these services each have their own specific interface for interacting with the service.
This Registry Service implements the data model specified in the ebXML Registry Information Model (RIM) specification that supports the various artifacts above as well as offering the flexibility to support customer-defined artifacts. Key features for the Registry Service include: - Model-based configuration specifying customer-defined artifact types, metadata attributes to capture for each artifact type, supported associations and classification schemes. - A REST-based external interface that is accessible via the Hypertext Transfer Protocol (HTTP). - Federation of Registry Service instances allowing associations between registered artifacts across registries as well as queries for artifacts across those same registries. A federation also enables features such as replication and synchronization if desired for a given deployment. In addition to its use as a core component of the PDS, the generic implementation of the Registry Service facilitates its applicability as a core component in any science data archive or science data system.
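To make the registry concepts concrete, here is a minimal, hypothetical sketch of an ebXML-RIM-style information model: typed artifacts with metadata, associations between registered items, and a query by artifact type. The class and method names are ours for illustration and are not the PDS Registry Service API.

```python
# Minimal sketch of a registry information model in the spirit of ebXML RIM.
# All names here are illustrative, not the actual Registry Service interface.
class Registry:
    def __init__(self):
        self.artifacts = {}      # guid -> {"type": ..., "metadata": {...}}
        self.associations = []   # (source_guid, association_type, target_guid)

    def register(self, guid, artifact_type, **metadata):
        """Track an artifact of a given type with arbitrary metadata."""
        self.artifacts[guid] = {"type": artifact_type, "metadata": metadata}

    def associate(self, source, association_type, target):
        """Record a typed association between two registered artifacts."""
        self.associations.append((source, association_type, target))

    def query(self, artifact_type):
        """Locate all artifact identifiers of the requested type."""
        return [g for g, a in self.artifacts.items()
                if a["type"] == artifact_type]
```

A federation of such registries would add cross-registry associations and queries on top of this single-instance model.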
Markstrom, Steven L.; Niswonger, Richard G.; Regan, R. Steven; Prudic, David E.; Barlow, Paul M.
2008-01-01
The need to assess the effects of variability in climate, biota, geology, and human activities on water availability and flow requires the development of models that couple two or more components of the hydrologic cycle. An integrated hydrologic model called GSFLOW (Ground-water and Surface-water FLOW) was developed to simulate coupled ground-water and surface-water resources. The new model is based on the integration of the U.S. Geological Survey Precipitation-Runoff Modeling System (PRMS) and the U.S. Geological Survey Modular Ground-Water Flow Model (MODFLOW). Additional model components were developed, and existing components were modified, to facilitate integration of the models. Methods were developed to route flow among the PRMS Hydrologic Response Units (HRUs) and between the HRUs and the MODFLOW finite-difference cells. This report describes the organization, concepts, design, and mathematical formulation of all GSFLOW model components. An important aspect of the integrated model design is its ability to conserve water mass and to provide comprehensive water budgets for a location of interest. This report includes descriptions of how water budgets are calculated for the integrated model and for individual model components. GSFLOW provides a robust modeling system for simulating flow through the hydrologic cycle, while allowing for future enhancements to incorporate other simulation techniques.
Model-Based Trade Space Exploration for Near-Earth Space Missions
NASA Technical Reports Server (NTRS)
Cohen, Ronald H.; Boncyk, Wayne; Brutocao, James; Beveridge, Iain
2005-01-01
We developed a capability for model-based trade space exploration to be used in the conceptual design of Earth-orbiting space missions. We have created a set of reusable software components to model various subsystems and aspects of space missions. Several example mission models were created to test the tools and process. This technique and toolset has demonstrated itself to be valuable for space mission architectural design.
NASA Astrophysics Data System (ADS)
Mitchell, Richard L.; Symko-Davies, Martha; Thomas, Holly P.; Witt, C. Edwin
1999-03-01
The Photovoltaic Manufacturing Technology (PVMaT) Project is a government/industry research and development (R&D) partnership between the U.S. federal government (through the U.S. Department of Energy [DOE]) and members of the U.S. PV industry. The goals of PVMaT are to help the U.S. PV industry improve module manufacturing processes and equipment; accelerate manufacturing cost reductions for PV modules, balance-of-systems components, and integrated systems; increase commercial product performance and reliability; and enhance investment opportunities for substantial scale-ups of U.S.-based PV manufacturing plant capacities. The approach for PVMaT has been to cost-share the R&D risk as industry explores new manufacturing options and ideas for improved PV modules and components, advances system and product integration, and develops new system designs. These activities will lead to overall reduced system life-cycle costs for reliable PV end-products. The 1994 PVMaT Product-Driven BOS and Systems activities, as well as Product-Driven Module Manufacturing R&D activities, are just being completed. Fourteen new subcontracts have just been awarded in the areas of PV System and Component Technology and Module Manufacturing Technology. Government funding, subcontractor cost-sharing, and a comparison of the relative efforts by PV technology throughout the PVMaT project are also discussed.
Seasonal and weekly variability of Atlantic inflow into the northern North Sea
NASA Astrophysics Data System (ADS)
Sheehan, Peter; Berx, Bee; Gallego, Alejandro; Hall, Rob; Heywood, Karen
2017-04-01
Quantifying the variability of Atlantic inflow is necessary for managing the North Sea ecosystem and for producing accurate models for forecasting, for example, oil spill trajectories. The JONSIS hydrographic section (2.23°W to 0° at 59.28°N) crosses the path of the main inflow of Atlantic water into the northwestern North Sea. 122 occupations between 1989 and 2015 are examined to determine the annual cycle of thermohaline-driven volume transport into the North Sea. Thermohaline transport is at a minimum (0.1 Sv) during winter when it is driven by a horizontal salinity gradient across a zonal bottom front; it is at a maximum (0.35 Sv) in early autumn when it is driven by a horizontal temperature gradient that develops across the same front. The amplitude of the annual cycle of temperature-driven transport (0.15 Sv) is bigger than the amplitude of the annual cycle of salinity-driven transport (0.025 Sv). The annual cycles are approximately six months out of phase. Our quantitative results are the first to be based on a long-term dataset, and we advance previous understanding by identifying a salinity-driven flow in winter. Week-to-week variability of the Atlantic inflow is examined from ten Seaglider occupations of the JONSIS section in October and November 2013. Tidal ellipses produced from glider dive-average current observations are in good agreement with ellipses produced from tide model predictions. Total transport is derived by referencing geostrophic shear to dive-average-current observations once the tidal component of the flow has been removed. Total transport through the section during the deployment (0.5-1 Sv) is bigger than the thermohaline component (0.1-0.2 Sv), suggesting non-thermohaline forcings (e.g. wind forcing) are important at that time of year. Thermohaline transport during the glider deployment is in agreement with the annual cycle derived from the long-term observations. 
The addition of the glider-derived barotropic current permits a more accurate estimate of the transport than is possible from long-term hydrographic monitoring, and enables the separation of barotropic and depth-varying components. These results refine our understanding of the variability of Atlantic inflow into the North Sea on key timescales, and of the contribution of frontal flow to shelf sea circulation.
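The thermohaline transport estimates above rest on geostrophy, where a horizontal density gradient across the bottom front sets the vertical shear of the along-front flow through the thermal-wind relation dv/dz = -(g / (rho0 f)) drho/dx. The sketch below evaluates that relation with illustrative numbers chosen by us, not values from the JONSIS observations.

```python
G = 9.81          # gravitational acceleration, m s^-2
RHO0 = 1027.0     # reference seawater density, kg m^-3

def geostrophic_shear(drho_dx, f):
    """Thermal-wind vertical shear dv/dz (s^-1) from a horizontal
    density gradient drho_dx (kg m^-4) at Coriolis parameter f (s^-1)."""
    return -G * drho_dx / (RHO0 * f)

# Illustrative numbers (not from the paper): a 0.1 kg m^-3 density change
# over 10 km at ~59 N, where f is roughly 1.25e-4 s^-1.
shear = geostrophic_shear(0.1 / 10e3, 1.25e-4)
```

Integrated over a 100 m water column, a shear of this magnitude corresponds to a velocity difference of order centimetres per second, consistent in scale with the modest thermohaline transports quoted above.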
Ma, Ye; Xie, Shengquan; Zhang, Yanxin
2016-03-01
A patient-specific electromyography (EMG)-driven neuromuscular model (PENm) is developed for the potential use of human-inspired gait rehabilitation robots. The PENm is modified based on current EMG-driven models by decreasing the calculation time and ensuring good prediction accuracy. To ensure calculation efficiency, the PENm is simplified to two EMG channels around one joint with minimal physiological parameters. In addition, a dynamic computation model is developed to achieve real-time calculation. To ensure calculation accuracy, patient-specific muscle kinematics information, such as the musculotendon lengths and the muscle moment arms during the entire gait cycle, is employed based on the patient-specific musculoskeletal model. Moreover, an improved force-length-velocity relationship is implemented to generate accurate muscle forces. Gait analysis data including kinematics, ground reaction forces, and raw EMG signals from six adolescents at three different speeds were used to evaluate the PENm. The simulation results show that the PENm has the potential to predict accurate joint moments in real time. The design of advanced human-robot interaction control strategies and human-inspired gait rehabilitation robots can benefit from the application of the human internal state provided by the PENm.
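The force-length-velocity relationship mentioned above follows the general Hill-type form, where active force is the product of activation, maximum isometric force, and normalized force-length and force-velocity factors. The sketch below uses generic textbook curve shapes with constants we chose for illustration; it is not the paper's improved PENm relationship.

```python
import math

# Generic Hill-type active muscle force sketch (not the paper's PENm):
# F = activation * F_max * f_l(normalized length) * f_v(normalized velocity).
# Curve shapes and constants below are illustrative textbook-style choices.
def active_muscle_force(activation, f_max, norm_length, norm_velocity):
    # Gaussian force-length curve, peaking at optimal fibre length (1.0).
    f_l = math.exp(-((norm_length - 1.0) ** 2) / 0.45)
    if norm_velocity <= 0.0:
        # Shortening: force falls to zero at maximum shortening velocity (-1).
        f_v = (1.0 + norm_velocity) / (1.0 - norm_velocity / 0.25)
    else:
        # Lengthening: force rises above isometric and saturates.
        f_v = 1.0 + 0.5 * norm_velocity / (1.0 + norm_velocity)
    return activation * f_max * f_l * f_v
```

At optimal length and zero velocity the factors are both 1, so the force equals activation times maximum isometric force, which is the usual sanity check for such models.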
Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos
2017-11-09
Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
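As a toy stand-in for the ridge penalty in the paper's nonlinear program, the closed-form ridge-regularized least-squares solution can be sketched as follows. The linear map X merely stands in for the (nonlinear) microkinetic model, and the data are synthetic; in the actual framework the fit is performed by gradient-based nonlinear optimization rather than a closed form.

```python
import numpy as np

# Ridge-penalized least squares in closed form:
#   theta = (X^T X + lam * I)^{-1} X^T y
# This is a linear toy illustration of an L2 penalty, not the paper's solver.
def ridge_fit(X, y, lam):
    n_params = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_params), X.T @ y)

# Synthetic data: recover known coefficients with a small penalty.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_theta = np.array([1.0, 0.0, -2.0])
y = X @ true_theta
theta = ridge_fit(X, y, lam=1e-3)
```

With noiseless data and a small lam, the estimate shrinks only slightly toward zero and stays close to the generating coefficients.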
Development of Parametric Mass and Volume Models for an Aerospace SOFC/Gas Turbine Hybrid System
NASA Technical Reports Server (NTRS)
Tornabene, Robert; Wang, Xiao-yen; Steffen, Christopher J., Jr.; Freeh, Joshua E.
2005-01-01
In aerospace power systems, mass and volume are key considerations to produce a viable design. The utilization of fuel cells is being studied for a commercial aircraft electrical power unit. Based on preliminary analyses, a SOFC/gas turbine system may be a potential solution. This paper describes the parametric mass and volume models that are used to assess an aerospace hybrid system design. The design tool utilizes input from the thermodynamic system model and produces component sizing, performance, and mass estimates. The software is designed such that the thermodynamic model is linked to the mass and volume model to provide immediate feedback during the design process. It allows for automating an optimization process that accounts for mass and volume in its figure of merit. Each component in the system is modeled with a combination of theoretical and empirical approaches. A description of the assumptions and design analyses is presented.
Lin, Chih-Tin; Meyhofer, Edgar; Kurabayashi, Katsuo
2010-01-01
Directional control of microtubule shuttles via microfabricated tracks is key to the development of controlled nanoscale mass transport by kinesin motor molecules. Here we develop and test a model to quantitatively predict the stochastic behavior of microtubule guiding when they mechanically collide with the sidewalls of lithographically patterned tracks. By taking into account appropriate probability distributions of microscopic states of the microtubule system, the model allows us to theoretically analyze the roles of collision conditions and kinesin surface densities in determining how the motion of microtubule shuttles is controlled. In addition, we experimentally observe the statistics of microtubule collision events and compare our theoretical prediction with experimental data to validate our model. The model will direct the design of future hybrid nanotechnology devices that integrate nanoscale transport systems powered by kinesin-driven molecular shuttles.
Varifocal MOEMS fiber scanner for confocal endomicroscopy.
Meinert, Tobias; Weber, Niklas; Zappe, Hans; Seifert, Andreas
2014-12-15
Based on an advanced silicon optical bench technology with integrated MOEMS (Micro-Opto-Electro-Mechanical-System) components, a piezo-driven fiber scanner for confocal microscopy has been developed. This highly miniaturized technology allows integration into an endoscope with a total outer probe diameter of 2.5 mm. The system features a hydraulically driven varifocal lens providing axial confocal scanning without any translational movement of components. The demonstrated resolutions are 1.7 μm laterally and 19 μm axially.
Buonaccorsi, G A; Rose, C J; O'Connor, J P B; Roberts, C; Watson, Y; Jackson, A; Jayson, G C; Parker, G J M
2010-01-01
Clinical trials of anti-angiogenic and vascular-disrupting agents often use biomarkers derived from DCE-MRI, typically reporting whole-tumor summary statistics and so overlooking spatial parameter variations caused by tissue heterogeneity. We present a data-driven segmentation method comprising tracer-kinetic model-driven registration for motion correction, conversion from MR signal intensity to contrast agent concentration for cross-visit normalization, iterative principal components analysis for imputation of missing data and dimensionality reduction, and statistical outlier detection using the minimum covariance determinant to obtain a robust Mahalanobis distance. After applying these techniques we cluster in the principal components space using k-means. We present results from a clinical trial of a VEGF inhibitor, using time-series data selected because of problems due to motion and outlier time series. We obtained spatially-contiguous clusters that map to regions with distinct microvascular characteristics. This methodology has the potential to uncover localized effects in trials using DCE-MRI-based biomarkers.
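A minimal numpy-only sketch of the pipeline's last stages (principal components reduction, Mahalanobis outlier flagging, and k-means clustering) follows. For brevity it substitutes a plain covariance estimate for the robust minimum covariance determinant used in the paper and a deterministic farthest-point initialization for k-means; all names and thresholds are illustrative.

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto their top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

def mahalanobis_outliers(Z, threshold):
    """Flag rows whose Mahalanobis distance exceeds `threshold`.
    (The paper uses the robust minimum covariance determinant estimate;
    a plain covariance is used here for brevity.)"""
    mu = Z.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(Z, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", Z - mu, cov_inv, Z - mu)
    return np.sqrt(d2) > threshold

def kmeans(Z, k, n_iter=100):
    """Lloyd's algorithm with deterministic farthest-point initialization."""
    centers = [Z[0]]
    for _ in range(k - 1):
        d = np.min(((Z[:, None] - np.array(centers)) ** 2).sum(-1), axis=1)
        centers.append(Z[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):
        labels = np.argmin(((Z[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([Z[labels == j].mean(axis=0) for j in range(k)])
    return labels
```

In the paper's setting the rows of X would be per-voxel contrast-agent concentration time series after registration and imputation; clustering then groups voxels with similar kinetics into spatially interpretable regions.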
[Urban ecological risk assessment: a review].
Wang, Mei-E; Chen, Wei-Ping; Peng, Chi
2014-03-01
With the development of urbanization and the degradation of the urban living environment, the ecological risks caused by urbanization have attracted increasing attention. Based on urban ecology principles and ecological risk assessment frameworks, the contents of urban ecological risk assessment were reviewed in terms of driving forces, risk sources, risk receptors, endpoints and integrated approaches for risk assessment. It was suggested that the types and degrees of urban economic and social activities were the driving forces for urban ecological risks. Ecological functional components at different levels in urban ecosystems, as well as the urban system as a whole, were the risk receptors. Assessment endpoints involved changes in urban ecological structures, processes and functional components, and in the integrity of characteristics and functions. Social-ecological models should be the major approaches for urban ecological risk assessment. Future urban ecological risk assessment studies should focus on setting definite protection targets and criteria corresponding to the assessment endpoints, establishing a multiple-parameter assessment system and developing integrative assessment approaches.
Using Movies to Analyse Gene Circuit Dynamics in Single Cells
Locke, James CW; Elowitz, Michael B
2010-01-01
Many bacterial systems rely on dynamic genetic circuits to control critical processes. A major goal of systems biology is to understand these behaviours in terms of individual genes and their interactions. However, traditional techniques based on population averages wash out critical dynamics that are either unsynchronized between cells or driven by fluctuations, or ‘noise,’ in cellular components. Recently, the combination of time-lapse microscopy, quantitative image analysis, and fluorescent protein reporters has enabled direct observation of multiple cellular components over time in individual cells. In conjunction with mathematical modelling, these techniques are now providing powerful insights into genetic circuit behaviour in diverse microbial systems.
NASA Astrophysics Data System (ADS)
Torres-Arredondo, M.-A.; Sierra-Pérez, Julián; Cabanes, Guénaël
2016-05-01
The process of measuring and analysing the data from a distributed sensor network all over a structural system in order to quantify its condition is known as structural health monitoring (SHM). For the design of a trustworthy health monitoring system, a vast amount of information regarding the inherent physical characteristics of the sources and their propagation and interaction across the structure is crucial. Moreover, any SHM system which is expected to transition to field operation must take into account the influence of environmental and operational changes, which cause modifications in the stiffness and damping of the structure and consequently modify its dynamic behaviour. On that account, special attention is paid in this paper to the development of an efficient SHM methodology in which robust signal processing and pattern recognition techniques are integrated for the correct interpretation of complex ultrasonic waves within the context of damage detection and identification. The methodology is based on an acousto-ultrasonics technique, with the discrete wavelet transform evaluated for feature extraction and selection, linear principal component analysis for data-driven modelling, and self-organising maps for a two-level clustering under the principle of local density. Finally, the methodology is demonstrated experimentally; the results show that all damage cases were detectable and identifiable.
A modular method for evaluating the performance of picture archiving and communication systems.
Sanders, W H; Kant, L A; Kudrimoti, A
1993-08-01
Modeling can be used to predict the performance of picture archiving and communication system (PACS) configurations under various load conditions at an early design stage. This is important because choices made early in the design of a system can have a significant impact on the performance of the resulting implementation. Because PACS consist of many types of components, it is important to do such evaluations in a modular manner, so that alternative configurations and designs can be easily investigated. Stochastic activity networks (SANs) and reduced base model construction methods can aid in doing this. SANs are a model type particularly suited to the evaluation of systems in which several activities may be in progress concurrently, and each activity may affect the others through the results of its completion. Together with SANs, reduced base model construction methods provide a means to build highly modular models, in which models of particular components can be easily reused. In this article, we investigate the use of SANs and reduced base model construction techniques in evaluating PACS. Construction and solution of the models is done using UltraSAN, a graphic-oriented software tool for model specification, analysis, and simulation. The method is illustrated via the evaluation of a realistically sized PACS for a typical United States hospital of 300 to 400 beds, and the derivation of system response times and component utilizations.
NASA Astrophysics Data System (ADS)
Desai, A. B.; Desai, K. P.; Naik, H. B.; Atrey, M. D.
2017-02-01
Thermoacoustic engines (TAEs) are devices which convert heat energy into useful acoustic work, whereas thermoacoustic refrigerators (TARs) convert acoustic work into a temperature gradient. These devices work without any moving components. The study presented here comprises a combined system, i.e. a thermoacoustic engine driven thermoacoustic refrigerator (TADTAR). This system has no moving components and hence is easy to fabricate, but at the same time it is very challenging to design and construct an optimized system with comparable performance. The work presented here aims to apply an optimization technique, response surface methodology (RSM), to the TADTAR. The significance of stack position and stack length for the engine stack, and of stack position and stack length for the refrigerator stack, is investigated in the current work. Results from RSM are compared for compliance with results from simulations using the Design Environment for Low-amplitude Thermoacoustic Energy conversion (DeltaEC).
Dynamic testing for shuttle design verification
NASA Technical Reports Server (NTRS)
Green, C. E.; Leadbetter, S. A.; Rheinfurth, M. H.
1972-01-01
Space shuttle design verification requires dynamic data from full scale structural component and assembly tests. Wind tunnel and other scaled model tests are also required early in the development program to support the analytical models used in design verification. Presented is a design philosophy based on mathematical modeling of the structural system strongly supported by a comprehensive test program; some of the types of required tests are outlined.
Phasor Domain Steady-State Modeling and Design of the DC–DC Modular Multilevel Converter
Yang, Heng; Qin, Jiangchao; Debnath, Suman; ...
2016-01-06
The DC-DC Modular Multilevel Converter (MMC), which originated from the AC-DC MMC, is an attractive converter topology for interconnection of medium-/high-voltage DC grids. This paper presents design considerations for the DC-DC MMC to achieve high efficiency and reduced component sizes. A steady-state mathematical model of the DC-DC MMC in the phasor domain is developed. Based on the developed model, a design approach is proposed to size the components and to select the operating frequency of the converter to satisfy a set of design constraints while achieving high efficiency. The design approach includes sizing of the arm inductor, Sub-Module (SM) capacitor, and phase filtering inductor, along with the selection of the AC operating frequency of the converter. The accuracy of the developed model and the effectiveness of the design approach are validated based on simulation studies in the PSCAD/EMTDC software environment. The analysis and developments of this paper can be used as a guideline for design of the DC-DC MMC.
Extended Twin Study of Alcohol Use in Virginia and Australia.
Verhulst, Brad; Neale, Michael C; Eaves, Lindon J; Medland, Sarah E; Heath, Andrew C; Martin, Nicholas G; Maes, Hermine H
2018-06-01
Drinking alcohol is a normal behavior in many societies, and prior studies have demonstrated it has both genetic and environmental sources of variation. Using two very large samples of twins and their first-degree relatives (Australia ≈ 20,000 individuals from 8,019 families; Virginia ≈ 23,000 from 6,042 families), we examine whether there are differences: (1) in the genetic and environmental factors that influence four interrelated drinking behaviors (quantity, frequency, age of initiation, and number of drinks in the last week), (2) between the twin-only design and the extended twin design, and (3) between the Australian and Virginia samples. We find that while drinking behaviors are interrelated, there are substantial differences in the genetic and environmental architectures across phenotypes. Specifically, drinking quantity, frequency, and number of drinks in the past week have large broad genetic variance components, and smaller but significant environmental variance components, while age of onset is driven exclusively by environmental factors. Further, the twin-only design and the extended twin design come to similar conclusions regarding broad-sense heritability and environmental transmission, but the extended twin models provide a more nuanced perspective. Finally, we find a high level of similarity between the Australian and Virginian samples, especially for the genetic factors. The observed differences, when present, tend to be at the environmental level. Implications for the extended twin model and future directions are discussed.
Signal Processing in Periodically Forced Gradient Frequency Neural Networks
Kim, Ji Chul; Large, Edward W.
2015-01-01
Oscillatory instability at the Hopf bifurcation is a dynamical phenomenon that has been suggested to characterize active non-linear processes observed in the auditory system. Networks of oscillators poised near Hopf bifurcation points and tuned to tonotopically distributed frequencies have been used as models of auditory processing at various levels, but systematic investigation of the dynamical properties of such oscillatory networks is still lacking. Here we provide a dynamical systems analysis of a canonical model for gradient frequency neural networks driven by a periodic signal. We use linear stability analysis to identify various driven behaviors of canonical oscillators for all possible ranges of model and forcing parameters. The analysis shows that canonical oscillators exhibit qualitatively different sets of driven states and transitions for different regimes of model parameters. We classify the parameter regimes into four main categories based on their distinct signal processing capabilities. This analysis will lead to deeper understanding of the diverse behaviors of neural systems under periodic forcing and can inform the design of oscillatory network models of auditory signal processing. PMID:26733858
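The driven canonical oscillator analysed above can be illustrated numerically. This is a minimal sketch using the truncated form of the canonical model, dz/dt = z(α + iω + β|z|²) + F·exp(iω_f t), integrated with forward Euler; the parameter values are illustrative and are not taken from the paper.

```python
import numpy as np

def drive_canonical_oscillator(alpha, beta, omega, forcing_amp, forcing_freq,
                               dt=1e-3, steps=50000):
    """Forward-Euler integration of one canonical oscillator,
        dz/dt = z*(alpha + 1j*omega + beta*|z|^2) + F*exp(1j*omega_f*t).
    Returns the complex state after `steps` iterations."""
    z = 0.1 + 0.0j
    for n in range(steps):
        t = n * dt
        dz = z * (alpha + 1j * omega + beta * abs(z) ** 2) \
             + forcing_amp * np.exp(1j * forcing_freq * t)
        z += dt * dz
    return z

# Below the Hopf bifurcation (alpha < 0) with resonant forcing, the
# oscillator settles into a bounded driven state rather than diverging.
z_final = drive_canonical_oscillator(alpha=-1.0, beta=-1.0,
                                     omega=2 * np.pi, forcing_amp=0.5,
                                     forcing_freq=2 * np.pi)
```

Sweeping `alpha` across zero and detuning `forcing_freq` from `omega` reproduces the kinds of qualitative transitions between driven states that the linear stability analysis classifies.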
Visual target modulation of functional connectivity networks revealed by self-organizing group ICA.
van de Ven, Vincent; Bledowski, Christoph; Prvulovic, David; Goebel, Rainer; Formisano, Elia; Di Salle, Francesco; Linden, David E J; Esposito, Fabrizio
2008-12-01
We applied a data-driven analysis based on self-organizing group independent component analysis (sogICA) to fMRI data from a three-stimulus visual oddball task. SogICA is particularly suited to the investigation of the underlying functional connectivity and does not rely on a predefined model of the experiment, which overcomes some of the limitations of hypothesis-driven analysis. Unlike most previous applications of ICA in functional imaging, our approach allows the analysis of the data at the group level, which is of particular interest in high order cognitive studies. SogICA is based on the hierarchical clustering of spatially similar independent components, derived from single subject decompositions. We identified four main clusters of components, centered on the posterior cingulate, bilateral insula, bilateral prefrontal cortex, and right posterior parietal and prefrontal cortex, consistently across all participants. Post hoc comparison of time courses revealed that insula, prefrontal cortex and right fronto-parietal components showed higher activity for targets than for distractors. Activation for distractors was higher in the posterior cingulate cortex, where deactivation was observed for targets. While our results conform to previous neuroimaging studies, they also complement conventional results by showing functional connectivity networks with unique contributions to the task that were consistent across subjects. SogICA can thus be used to probe functional networks of active cognitive tasks at the group-level and can provide additional insights to generate new hypotheses for further study. Copyright 2007 Wiley-Liss, Inc.
An Integrated Data-Driven Strategy for Safe-by-Design Nanoparticles: The FP7 MODERN Project.
Brehm, Martin; Kafka, Alexander; Bamler, Markus; Kühne, Ralph; Schüürmann, Gerrit; Sikk, Lauri; Burk, Jaanus; Burk, Peeter; Tamm, Tarmo; Tämm, Kaido; Pokhrel, Suman; Mädler, Lutz; Kahru, Anne; Aruoja, Villem; Sihtmäe, Mariliis; Scott-Fordsmand, Janeck; Sorensen, Peter B; Escorihuela, Laura; Roca, Carlos P; Fernández, Alberto; Giralt, Francesc; Rallo, Robert
2017-01-01
The development and implementation of safe-by-design strategies is key for the safe development of future generations of nanotechnology-enabled products. The safety testing of the huge variety of nanomaterials that can be synthesized is unfeasible due to time and cost constraints. Computational modeling facilitates the implementation of alternative testing strategies in a time- and cost-effective way. The development of predictive nanotoxicology models requires the use of high quality experimental data on the structure, physicochemical properties and bioactivity of nanomaterials. The FP7 Project MODERN has developed and evaluated the main components of a computational framework for the evaluation of the environmental and health impacts of nanoparticles. This chapter describes each of the elements of the framework, including aspects related to data generation, management and integration; development of nanodescriptors; establishment of nanostructure-activity relationships; identification of nanoparticle categories; and hazard ranking and risk assessment.
Mobile Agents: A Distributed Voice-Commanded Sensory and Robotic System for Surface EVA Assistance
NASA Technical Reports Server (NTRS)
Clancey, William J.; Sierhuis, Maarten; Alena, Rick; Crawford, Sekou; Dowding, John; Graham, Jeff; Kaskiris, Charis; Tyree, Kim S.; vanHoof, Ronnie
2003-01-01
A model-based, distributed architecture integrates diverse components in a system designed for lunar and planetary surface operations: spacesuit biosensors, cameras, GPS, and a robotic assistant. The system transmits data and assists communication between the extra-vehicular activity (EVA) astronauts, the crew in a local habitat, and a remote mission support team. Software processes ("agents"), implemented in a system called Brahms, run on multiple, mobile platforms, including the spacesuit backpacks, all-terrain vehicles, and robot. These "mobile agents" interpret and transform available data to help people and robotic systems coordinate their actions to make operations more safe and efficient. Different types of agents relate platforms to each other ("proxy agents"), devices to software ("comm agents"), and people to the system ("personal agents"). A state-of-the-art spoken dialogue interface enables people to communicate with their personal agents, supporting a speech-driven navigation and scheduling tool, field observation record, and rover command system. An important aspect of the engineering methodology involves first simulating the entire hardware and software system in Brahms, and then configuring the agents into a runtime system. Design of mobile agent functionality has been based on ethnographic observation of scientists working in Mars analog settings in the High Canadian Arctic on Devon Island and the southeast Utah desert. The Mobile Agents system is developed iteratively in the context of use, with people doing authentic work. This paper provides a brief introduction to the architecture and emphasizes the method of empirical requirements analysis, through which observation, modeling, design, and testing are integrated in simulated EVA operations.
Open-ocean boundary conditions from interior data: Local and remote forcing of Massachusetts Bay
Bogden, P.S.; Malanotte-Rizzoli, P.; Signell, R.
1996-01-01
Massachusetts and Cape Cod Bays form a semienclosed coastal basin that opens onto the much larger Gulf of Maine. Subtidal circulation in the bay is driven by local winds and remotely driven flows from the gulf. The local-wind forced flow is estimated with a regional shallow water model driven by wind measurements. The model uses a gravity wave radiation condition along the open-ocean boundary. Results compare reasonably well with observed currents near the coast. In some offshore regions however, modeled flows are an order of magnitude less energetic than the data. Strong flows are observed even during periods of weak local wind forcing. Poor model-data comparisons are attributable, at least in part, to open-ocean boundary conditions that neglect the effects of remote forcing. Velocity measurements from within Massachusetts Bay are used to estimate the remotely forced component of the flow. The data are combined with shallow water dynamics in an inverse-model formulation that follows the theory of Bennett and McIntosh [1982], who considered tides. We extend their analysis to consider the subtidal response to transient forcing. The inverse model adjusts the a priori open-ocean boundary condition, thereby minimizing a combined measure of model-data misfit and boundary condition adjustment. A "consistency criterion" determines the optimal trade-off between the two. The criterion is based on a measure of plausibility for the inverse solution. The "consistent" inverse solution reproduces 56% of the average squared variation in the data. The local-wind-driven flow alone accounts for half of the model skill. The other half is attributable to remotely forced flows from the Gulf of Maine. The unexplained 44% comes from measurement errors and model errors that are not accounted for in the analysis.
Application-Driven Educational Game to Assist Young Children in Learning English Vocabulary
ERIC Educational Resources Information Center
Chen, Zhi-Hong; Lee, Shu-Yu
2018-01-01
This paper describes the development of an educational game, named My-Pet-Shop, to enhance young children's learning of English vocabulary. The educational game is underpinned by an application-driven model, which consists of three components: application scenario, subject learning, and learning regulation. An empirical study is further conducted…
A Global User-Driven Model for Tile Prefetching in Web Geographical Information Systems
Pan, Shaoming; Chong, Yanwen; Zhang, Hang; Tan, Xicheng
2017-01-01
A web geographical information system is a typical service-intensive application. Tile prefetching and cache replacement can improve cache hit ratios by proactively fetching tiles from storage and replacing the appropriate tiles from the high-speed cache buffer without waiting for a client’s requests, which reduces disk latency and improves system access performance. Most popular prefetching strategies consider only the relative tile popularities to predict which tile should be prefetched or consider only a single individual user's access behavior to determine which neighbor tiles need to be prefetched. Some studies show that comprehensively considering all users’ access behaviors and all tiles’ relationships in the prediction process can achieve more significant improvements. Thus, this work proposes a new global user-driven model for tile prefetching and cache replacement. First, based on all users’ access behaviors, a type of expression method for tile correlation is designed and implemented. Then, a conditional prefetching probability can be computed based on the proposed correlation expression method. Thus, some tiles to be prefetched can be found by computing and comparing the conditional prefetching probability from the uncached tiles set and, similarly, some replacement tiles can be found in the cache buffer according to multi-step prefetching. Finally, some experiments are provided comparing the proposed model with other global user-driven models, other single user-driven models, and other client-side prefetching strategies. The results show that the proposed model achieves a prefetching hit rate approximately 10.6% to 110.5% higher than those of the compared methods. PMID:28085937
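The idea of a conditional prefetching probability derived from all users' access behaviors can be sketched as follows. The class and method names are hypothetical, and the fixed co-occurrence window is an illustrative stand-in for the paper's correlation expression method.

```python
from collections import defaultdict

class TileCorrelationModel:
    """Global user-driven sketch: count how often tile j is requested
    shortly after tile i, aggregated over all users' sessions, and
    prefetch the uncached tiles with the highest conditional probability."""
    def __init__(self):
        self.pair = defaultdict(int)    # (i, j) -> co-occurrence count
        self.single = defaultdict(int)  # i -> request count

    def observe(self, session, window=2):
        """Update counts from one user's ordered tile-request session."""
        for k, i in enumerate(session):
            self.single[i] += 1
            for j in session[k + 1 : k + 1 + window]:
                self.pair[(i, j)] += 1

    def prefetch(self, current, cached, top_k=2):
        """Rank uncached tiles by P(j | current) = count(i,j)/count(i)."""
        cands = [(self.pair[(i, j)] / self.single[i], j)
                 for (i, j) in self.pair
                 if i == current and j not in cached]
        return [j for _, j in sorted(cands, reverse=True)[:top_k]]
```

A cache-replacement policy could reuse the same counts, evicting the cached tile with the lowest conditional probability given recent requests.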
Short-term airing by natural ventilation - modeling and control strategies.
Perino, M; Heiselberg, P
2009-10-01
The need to improve the energy efficiency of buildings requires new and more efficient ventilation systems. It has been demonstrated that innovative operating concepts that make use of natural ventilation seem to be more appreciated by occupants. This kind of system frequently integrates traditional mechanical ventilation components with natural ventilation devices, such as motorized windows and louvers. Among the various ventilation strategies that are currently available, buoyancy-driven single-sided natural ventilation has proved to be very effective and can provide high air change rates for temperature and IAQ control. However, in order to promote wider application of these systems, an improvement in the knowledge of their working principles and the availability of new design and simulation tools is necessary. In this context, the paper analyses and presents the results of a research project that was aimed at developing and validating numerical models for the analysis of buoyancy-driven single-sided natural ventilation systems. Once validated, these models can be used to optimize control strategies in order to achieve satisfactory indoor comfort conditions and IAQ. Practical Implications: Numerical and experimental analyses have proved that short-term airing by intermittent ventilation is an effective measure to satisfactorily control IAQ. Different control strategies have been investigated to optimize the capabilities of the systems. The proposed zonal model has provided good performance and could be adopted as a design tool, while CFD simulations can be profitably used for detailed studies of the pollutant concentration distribution in a room and to address local discomfort problems.
Conversion of Component-Based Point Definition to VSP Model and Higher Order Meshing
NASA Technical Reports Server (NTRS)
Ordaz, Irian
2011-01-01
Vehicle Sketch Pad (VSP) has become a powerful conceptual and parametric geometry tool with numerous export capabilities for third-party analysis codes as well as robust surface meshing capabilities for computational fluid dynamics (CFD) analysis. However, a capability gap currently exists for reconstructing a fully parametric VSP model of a geometry generated by third-party software. A computer code called GEO2VSP has been developed to close this gap and to allow the integration of VSP into a closed-loop geometry design process with other third-party design tools. Furthermore, the automated CFD surface meshing capabilities of VSP are demonstrated for component-based point definition geometries in a conceptual analysis and design framework.
Evidences of trapping in tungsten and implications for plasma-facing components
NASA Astrophysics Data System (ADS)
Longhurst, G. R.; Anderl, R. A.; Holland, D. F.
Trapping effects, including significant delays in permeation saturation, abrupt changes in permeation rate associated with temperature changes, and larger than expected inventories of hydrogen isotopes in the material, were seen in implantation-driven permeation experiments using 25- and 50-micron thick tungsten foils at temperatures of 638 to 825 K. Computer models that simulate permeation transients reproduce the steady-state permeation and reemission behavior of these experiments with expected values of material parameters. However, the transient time characteristics were not successfully simulated without the assumption of traps of substantial trap energy and concentration. An analytical model based on the assumption of thermodynamic equilibrium between trapped hydrogen atoms and a comparatively low mobile atom concentration successfully accounts for the observed behavior. Using steady-state and transient permeation data from experiments at different temperatures, the effective trap binding energy may be inferred. We analyze a tungsten-coated divertor plate design representative of those proposed for ITER and ARIES and consider the implications for tritium permeation and retention if the same trapping we observed were present in that tungsten. Inventory increases of several orders of magnitude may result.
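Inferring an effective trap binding energy from permeation data at several temperatures can be sketched as an Arrhenius fit. The assumption that trapping-dominated transient time lags scale as exp(E_t/(k_B·T)) is an illustrative simplification, not the authors' exact procedure, and the lag values below are fabricated for the demonstration.

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def effective_trap_energy(temps_K, time_lags_s):
    """Least-squares Arrhenius fit: assuming the transient time lag
    scales as tau ~ tau0 * exp(E_t / (k_B * T)), a fit of ln(tau)
    against 1/T has slope E_t / k_B. Returns E_t in eV."""
    inv_T = 1.0 / np.asarray(temps_K, dtype=float)
    slope, _ = np.polyfit(inv_T, np.log(time_lags_s), 1)
    return slope * K_B_EV

# Synthetic check over the experimental temperature range (638-825 K),
# with fabricated lags corresponding to E_t = 1.4 eV.
T = np.array([638.0, 700.0, 760.0, 825.0])
tau = 1e-6 * np.exp(1.4 / (K_B_EV * T))
E_t = effective_trap_energy(T, tau)
```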
A process-based agricultural model for the irrigated agriculture sector in Alberta, Canada
NASA Astrophysics Data System (ADS)
Ammar, M. E.; Davies, E. G.
2015-12-01
Connections between land and water, irrigation, agricultural productivity and profitability, policy alternatives, and climate change and variability are complex, poorly understood, and unpredictable. Policy assessment for agriculture presents a large potential for development of broad-based simulation models that can aid assessment and quantification of policy alternatives over longer temporal scales. The Canadian irrigated agriculture sector is concentrated in Alberta, where it represents two thirds of the irrigated land-base in Canada and is the largest consumer of surface water. Despite interest in irrigation expansion, its potential in Alberta is uncertain given a constrained water supply, significant social and economic development and increasing demands for both land and water, and climate change. This paper therefore introduces a system dynamics model as a decision support tool to provide insights into irrigation expansion in Alberta, and into trade-offs and risks associated with that expansion. It is intended to be used by a wide variety of users including researchers, policy analysts and planners, and irrigation managers. A process-based cropping system approach is at the core of the model and uses a water-driven crop growth mechanism described by AquaCrop. The tool goes beyond a representation of crop phenology and cropping systems by permitting assessment and quantification of the broader, long-term consequences of agricultural policies for Alberta's irrigation sector. It also encourages collaboration and provides a degree of transparency that gives confidence in simulation results. The paper focuses on the agricultural component of the systems model, describing the process involved; soil water and nutrients balance, crop growth, and water, temperature, salinity, and nutrients stresses, and how other disciplines can be integrated to account for the effects of interactions and feedbacks in the whole system. 
In later stages, other components such as livestock production systems and agricultural production economics will be integrated into the agricultural model to complete the systems tool. It will capture feedback loops, time delays, and the nonlinearities of the system. Moreover, the model is designed for quick reconfiguration to different regions given parametrized crop data.
NASA Technical Reports Server (NTRS)
Angerer, James R.; Mccurdy, David A.; Erickson, Richard A.
1991-01-01
The purpose of this investigation was to develop a noise annoyance model, superior to those already in use, for evaluating passenger response to sounds containing tonal components which may be heard within current and future commercial aircraft. The sound spectra investigated ranged from those being experienced by passengers on board turbofan powered aircraft now in service to those cabin noise spectra passengers may experience within advanced propeller-driven aircraft of the future. A total of 240 sounds were tested in this experiment. Sixty-six of these 240 sounds were steady state, while the other 174 varied temporally due to tonal beating. Here, the entire experiment is described, but the analysis is limited to those responses elicited by the 66 steady-state sounds.
Internet MEMS design tools based on component technology
NASA Astrophysics Data System (ADS)
Brueck, Rainer; Schumer, Christian
1999-03-01
The micro electromechanical systems (MEMS) industry in Europe is characterized by small and medium sized enterprises specialized in products to solve problems in specific domains like medicine, automotive sensor technology, etc. In this field of business the technology-driven design approach known from microelectronics is not appropriate. Instead each design problem aims at its own, specific technology to be used for the solution. The variety of technologies at hand, like Si-surface, Si-bulk, LIGA, laser, and precision engineering, requires a huge set of different design tools to be available. No single SME can afford to hold licenses for all these tools. This calls for a new and flexible way of designing, implementing and distributing design software. The Internet provides a flexible manner of offering software access along with methodologies of flexible licensing, e.g. on a pay-per-use basis. New communication technologies like ADSL, TV cable or satellites as carriers promise to offer a bandwidth sufficient even for interactive tools with graphical interfaces in the near future. INTERLIDO is an experimental tool suite for process specification and layout verification for lithography-based MEMS technologies to be accessed via the Internet. The first version provides a Java implementation, including a graphical editor for process specification. Currently, a new version is being brought into operation that is based on JavaBeans component technology. JavaBeans offers the possibility to realize independent interactive design assistants, like a design rule checking assistant, a process consistency checking assistant, a technology definition assistant, a graphical editor assistant, etc., that may reside distributed over the Internet, communicating via Internet protocols. Each potential user is thus able to configure his own dedicated version of a design tool set dedicated to the requirements of the current problem to be solved.
Eyes On the Ground: Path Forward Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, Randolph; Little, Charles Q.; Peter-Stein, Natacha
A previous report assesses our progress to date on the Eyes On the Ground project and reviews lessons learned [1]. In this report, we address the implications of those lessons in defining the most productive path forward for the remainder of the project. We propose two main concepts: Interactive Diagnosis and Model-Driven Assistance. Of these, the Model-Driven Assistance concept appears the most promising. It is based on an approximate but useful model of a facility, which provides a unified representation for storing, viewing, and analyzing data that is known about the facility. This representation provides value to both inspectors and IAEA headquarters, and facilitates communication between the two. The concept further includes a lightweight, portable field tool to aid the inspector in executing a variety of inspection tasks, including capture of images and 3-d scan data. We develop a detailed description of this concept, including its system components, functionality, and example use cases. The envisioned tool would provide value by reducing inspector cognitive load, streamlining inspection tasks, and facilitating communication between the inspector and teams at IAEA headquarters. We conclude by enumerating the top implementation priorities to pursue in the remaining limited time of the project. Approved for public release; further dissemination unlimited.
Wang, Futao; Pan, Yuanfeng; Cai, Pingxiong; Guo, Tianxiang; Xiao, Huining
2017-10-01
A highly efficient and eco-friendly sugarcane cellulose-based adsorbent was prepared in an attempt to remove Pb²⁺, Cu²⁺ and Zn²⁺ from aqueous solutions. The effects of the initial concentration of heavy metal ions and of temperature on the adsorption capacity of the bioadsorbent were investigated. The adsorption isotherms showed that the adsorption of Pb²⁺, Cu²⁺ and Zn²⁺ followed the Langmuir model, and the maximum adsorption capacities were as high as 558.9, 446.2 and 363.3 mg·g⁻¹, respectively, in the single-component system. The binary-component system was better described by the competitive Langmuir isotherm model. The three-dimensional sorption surface of the binary-component system demonstrated that the presence of Pb²⁺ decreased the sorption of Cu²⁺, but the adsorption amounts of the other metal ions were not affected. The results from SEM-EDAX revealed that the adsorption of metal ions on the bioadsorbent was mainly driven by coordination, ion exchange and electrostatic association. Copyright © 2017 Elsevier Ltd. All rights reserved.
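Fitting a single-component Langmuir isotherm like the one reported above can be sketched with the standard linearized form; `fit_langmuir` is a hypothetical helper, and the synthetic data below are fabricated for the demonstration, not the paper's measurements.

```python
import numpy as np

def fit_langmuir(Ce, qe):
    """Fit the Langmuir isotherm q = qmax*KL*Ce / (1 + KL*Ce) using the
    linearized form Ce/qe = Ce/qmax + 1/(KL*qmax). Ce is the equilibrium
    concentration, qe the equilibrium uptake; returns (qmax, KL)."""
    Ce = np.asarray(Ce, dtype=float)
    qe = np.asarray(qe, dtype=float)
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qmax = 1.0 / slope           # maximum adsorption capacity
    KL = slope / intercept       # Langmuir affinity constant
    return qmax, KL

# Synthetic isotherm with known parameters to exercise the fit.
Ce = np.linspace(5.0, 200.0, 20)
qe = 558.9 * 0.02 * Ce / (1.0 + 0.02 * Ce)
qmax, KL = fit_langmuir(Ce, qe)
```

The competitive (binary-component) Langmuir model mentioned in the abstract extends the denominator to 1 + KL1·C1 + KL2·C2, and would be fitted analogously with two concentration series.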
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1990-01-01
Originally, computer programs for engineering design focused on detailed geometric design. Later, computer programs for algorithmically performing the preliminary design of specific well-defined classes of objects became commonplace. However, due to the need for extreme flexibility, it appears unlikely that conventional programming techniques will prove fruitful in developing computer aids for engineering conceptual design. The use of symbolic processing techniques, such as object-oriented programming and constraint propagation, facilitates such flexibility. Object-oriented programming allows programs to be organized around the objects and behavior to be simulated, rather than around fixed sequences of function and subroutine calls. Constraint propagation allows declarative statements to be understood as designating multi-directional mathematical relationships among all the variables of an equation, rather than as unidirectional assignments to the variable on the left-hand side of the equation, as in conventional computer programs. The research has concentrated on applying these two techniques to the development of a general-purpose computer aid for engineering conceptual design. Object-oriented programming techniques are utilized to implement a user-extensible database of design components. The mathematical relationships which model both the geometry and the physics of these components are managed via constraint propagation. In addition to this component-based hierarchy, special-purpose data structures are provided for describing component interactions and supporting state-dependent parameters. In order to investigate the utility of this approach, a number of sample design problems from the field of aerospace engineering were implemented using the prototype design tool, Rubber Airplane.
The additional level of organizational structure obtained by representing design knowledge in terms of components is observed to provide greater convenience to the program user, and to result in a database of engineering information which is easier both to maintain and to extend.
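The multi-directional behaviour of constraint propagation described above can be sketched as follows. `Cell` and `ProductConstraint` are hypothetical names for this sketch, not Rubber Airplane's actual classes.

```python
class Cell:
    """A design variable that notifies its constraints when set."""
    def __init__(self, name):
        self.name, self.value, self.constraints = name, None, []

    def set(self, v):
        self.value = v
        for c in self.constraints:
            c.propagate()

class ProductConstraint:
    """Multi-directional relation c = a * b: whichever two values are
    known determine the third, in any direction data arrives."""
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        for cell in (a, b, c):
            cell.constraints.append(self)

    def propagate(self):
        a, b, c = self.a.value, self.b.value, self.c.value
        if a is not None and b is not None and c is None:
            self.c.set(a * b)
        elif a is not None and c is not None and b is None:
            self.b.set(c / a)
        elif b is not None and c is not None and a is None:
            self.a.set(c / b)

# area = span * chord; setting any two cells infers the third.
span, chord, area = Cell("span"), Cell("chord"), Cell("area")
ProductConstraint(span, chord, area)
span.set(10.0)
area.set(30.0)   # chord is now inferred
```

This is the contrast with a conventional assignment `area = span * chord`, which can only compute left-to-right; here specifying span and area derives the chord instead.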
Domke, Grant M.; Woodall, Christopher W.; Walters, Brian F.; Smith, James E.
2013-01-01
The inventory and monitoring of coarse woody debris (CWD) carbon (C) stocks is an essential component of any comprehensive National Greenhouse Gas Inventory (NGHGI). Due to the expense and difficulty associated with conducting field inventories of CWD pools, CWD C stocks are often modeled as a function of more commonly measured stand attributes such as live tree C density. In order to assess potential benefits of adopting a field-based inventory of CWD C stocks in lieu of the current model-based approach, a national inventory of downed dead wood C across the U.S. was compared to estimates calculated from models associated with the U.S.’s NGHGI and used in the USDA Forest Service, Forest Inventory and Analysis program. The model-based population estimate of C stocks for CWD (i.e., pieces and slash piles) in the conterminous U.S. was 9 percent (145.1 Tg) greater than the field-based estimate. The relatively small absolute difference was driven by contrasting results for each CWD component. The model-based population estimate of C stocks from CWD pieces was 17 percent (230.3 Tg) greater than the field-based estimate, while the model-based estimate of C stocks from CWD slash piles was 27 percent (85.2 Tg) smaller than the field-based estimate. In general, models overestimated the C density per-unit-area from slash piles early in stand development and underestimated the C density from CWD pieces in young stands. This resulted in significant differences in CWD C stocks by region and ownership. The disparity in estimates across spatial scales illustrates the complexity in estimating CWD C in a NGHGI. Based on the results of this study, it is suggested that the U.S. adopt field-based estimates of CWD C stocks as a component of its NGHGI to both reduce the uncertainty within the inventory and improve the sensitivity to potential management and climate change events. PMID:23544112
Contextualizing Learning Scenarios According to Different Learning Management Systems
ERIC Educational Resources Information Center
Drira, R.; Laroussi, M.; Le Pallec, X.; Warin, B.
2012-01-01
In this paper, we first demonstrate that an instructional design process of Technology Enhanced Learning (TEL) systems based on a Model Driven Approach (MDA) addresses the limits of Learning Technology Standards (LTS), such as SCORM and IMS-LD. Although these standards ensure the interoperability of TEL systems across different Learning Management…
ERIC Educational Resources Information Center
King, Kathleen P.
2009-01-01
Based on the theory of transformative learning (Mezirow, 1980) and critical pedagogy (Freire, 1980), mixed-methods research (Tashakkori & Teddlie, 1998) of a hospital workers' union and training organization addressed the impact of a custom-designed, group-focused, results-driven professional development model with 130 participants. Employees…
Experimental studies of characteristic combustion-driven flows for CFD validation
NASA Technical Reports Server (NTRS)
Santoro, R. J.; Moser, M.; Anderson, W.; Pal, S.; Ryan, H.; Merkle, C. L.
1992-01-01
A series of rocket-related studies intended to develop a suitable data base for validation of Computational Fluid Dynamics (CFD) models of characteristic combustion-driven flows was undertaken at the Propulsion Engineering Research Center at Penn State. Included are studies of coaxial and impinging jet injectors as well as chamber wall heat transfer effects. The objective of these studies is to provide fundamental understanding and benchmark quality data for phenomena important to rocket combustion under well-characterized conditions. Diagnostic techniques utilized in these studies emphasize determinations of velocity, temperature, spray and droplet characteristics, and combustion zone distribution. Since laser diagnostic approaches are favored, the development of an optically accessible rocket chamber has been a high priority in the initial phase of the project. During the design phase for this chamber, the advice and input of the CFD modeling community were actively sought through presentations and written surveys. Based on this procedure, a suitable uni-element rocket chamber was fabricated and is presently under preliminary testing. Results of these tests, as well as the survey findings leading to the chamber design, were presented.
Evaluating the role of clinical pharmacists in pre-procedural anticoagulation management.
Kataruka, Akash; Renner, Elizabeth; Barnes, Geoffrey D
2018-02-01
While physicians are typically responsible for managing perioperative warfarin, clinic pharmacists may improve pre-procedural decision-making. We assessed the impact of pharmacist-driven care for chronic warfarin-treated patients undergoing outpatient right heart catheterization (RHC). A total of 200 warfarin-treated patients who underwent RHC between January 2012 and September 2015 were analyzed. Pharmacist care (n = 79) was compared to the usual care model (n = 121). The primary outcome was a composite of (1) documentation of an anticoagulation plan, (2) holding warfarin at least 5 days prior to the procedure, (3) guideline-congruent low molecular weight heparin (LMWH) bridging, and (4) correct LMWH dosing if bridging was deemed necessary. A chi-squared test was performed to assess the role of the pharmacist. A multivariable logistic regression analysis was performed on the composite endpoint, adjusted for the month of the procedure. Compared to the usual care model, pharmacist-driven care (OR 4.69, 95% CI 1.73-12.71, p = 0.002) and date of the procedure (OR 1.06/month, 95% CI 1.01-1.10, p = 0.011) were independently associated with the primary composite outcome. Of the individual outcome components, pharmacist-driven care was only associated with documentation (96.2% vs. 67.8%, OR 9.19, 95% CI 2.19-38.62, p = 0.002). The remaining components, including holding warfarin for at least 5 days, appropriate bridging, and correct LMWH dosing, were not significantly associated with pharmacist care. Pharmacist care is associated with better guideline-based anticoagulation management, but this was primarily driven by improved documentation. The impact of pharmacist-managed peri-procedural anticoagulation on clinical outcomes remains unknown.
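For readers less familiar with the statistics above, the unadjusted odds ratio and Pearson chi-squared statistic for a 2x2 table can be sketched as follows. The counts are reconstructed from the reported documentation percentages (76/79 with pharmacist care vs. 82/121 with usual care) purely for illustration; the study's OR of 9.19 is adjusted, so the unadjusted value differs.

```python
import math

# Unadjusted odds ratio and Pearson chi-squared for a 2x2 table. Counts
# are reconstructed from the reported percentages for illustration only
# (76/79 documented with pharmacist care vs. 82/121 with usual care);
# the study's reported OR of 9.19 is adjusted and therefore differs.
def two_by_two_stats(a, b, c, d):
    n = a + b + c + d
    odds_ratio = (a * d) / (b * c)
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of ln(OR)
    ci = (math.exp(math.log(odds_ratio) - 1.96 * se),
          math.exp(math.log(odds_ratio) + 1.96 * se))
    return odds_ratio, chi2, ci

unadjusted_or, chi2, ci = two_by_two_stats(76, 3, 82, 39)
```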
NASA Astrophysics Data System (ADS)
Liang, Likai; Bi, Yushen
Considering the distributed network management system's requirements for high distribution, extensibility, and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and the N-tier application software framework design concept. We also give the concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.
Failure Predictions for VHTR Core Components using a Probabilistic Continuum Damage Mechanics Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fok, Alex
2013-10-30
The proposed work addresses the key research need for the development of constitutive models and overall failure models for graphite and high temperature structural materials, with the long-term goal being to maximize the design life of the Next Generation Nuclear Plant (NGNP). To this end, the capability of a Continuum Damage Mechanics (CDM) model, which has been used successfully for modeling fracture of virgin graphite, will be extended as a predictive and design tool for the core components of the very high-temperature reactor (VHTR). Specifically, irradiation and environmental effects pertinent to the VHTR will be incorporated into the model to allow fracture of graphite and ceramic components under in-reactor conditions to be modeled explicitly using the finite element method. The model uses a combined stress-based and fracture mechanics-based failure criterion, so it can simulate both the initiation and propagation of cracks. Modern imaging techniques, such as x-ray computed tomography and digital image correlation, will be used during material testing to help define the baseline material damage parameters. Monte Carlo analysis will be performed to address inherent variations in material properties, the aim being to reduce the arbitrariness and uncertainties associated with the current statistical approach. The results can potentially contribute to the current development of American Society of Mechanical Engineers (ASME) codes for the design and construction of VHTR core components.
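The Monte Carlo step mentioned above can be sketched in a few lines: sample a scattered material strength and count how often it falls below the applied stress. The normal distribution and all parameter values here are invented for illustration, not graphite data from the project.

```python
import random

# Sketch of the Monte Carlo step: sample scattered material strength and
# count failures against a fixed applied stress. The normal distribution
# and its parameters are invented for illustration, not graphite data.
def failure_probability(mean_strength, sd, applied_stress,
                        trials=20000, seed=1):
    rng = random.Random(seed)
    failures = sum(1 for _ in range(trials)
                   if rng.gauss(mean_strength, sd) < applied_stress)
    return failures / trials

# Applied stress two standard deviations below the mean strength
p_fail = failure_probability(mean_strength=30.0, sd=3.0, applied_stress=24.0)
```

A real analysis would propagate the sampled properties through the finite element damage model rather than a closed-form comparison, but the sampling logic is the same.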
Shape optimization of three-dimensional stamped and solid automotive components
NASA Technical Reports Server (NTRS)
Botkin, M. E.; Yang, R.-J.; Bennett, J. A.
1987-01-01
The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process: modeling, mesh generation, finite element and sensitivity analysis, and optimization are stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data is needed.
Gao, Quan-Wen; Song, Hui-Feng; Xu, Ming-Huo; Liu, Chun-Ming; Chai, Jia-Ke
2013-11-01
To explore the clinical application of mandibular-driven simultaneous maxillo-mandibular distraction to correct hemifacial microsomia with rapid prototyping technology. The patient's skull resin model was manufactured with rapid prototyping technology. The osteotomy was designed on the skull resin model. According to the preoperative design, the patients underwent Le Fort I osteotomy and mandibular ramus osteotomy. The internal mandible distractor was embedded onto the osteotomy position. The occlusal titanium pin was implanted. Distraction was carried out by mandibular-driven simultaneous maxillo-mandibular distraction 5 days after operation. The distraction in five patients was completed as designed. No infection or dysosteogenesis occurred. The longest distraction distance was 28 mm, and the shortest was 16 mm. The facial asymmetry deformity was significantly improved at the end of distraction. The occlusal plane of the patients was obviously improved. Rapid prototyping technology is helpful for precisely designing the osteotomy before operation. Mandibular-driven simultaneous maxillo-mandibular distraction can correct hemifacial microsomia. It is worthy of clinical application.
Zhao, Ming; Rattanatamrong, Prapaporn; DiGiovanna, Jack; Mahmoudi, Babak; Figueiredo, Renato J; Sanchez, Justin C; Príncipe, José C; Fortes, José A B
2008-01-01
Dynamic data-driven brain-machine interfaces (DDDBMI) have great potential to advance the understanding of neural systems and improve the design of brain-inspired rehabilitative systems. This paper presents a novel cyberinfrastructure that couples in vivo neurophysiology experimentation with massive computational resources to provide seamless and efficient support of DDDBMI research. Closed-loop experiments can be conducted with in vivo data acquisition, reliable network transfer, parallel model computation, and real-time robot control. Behavioral experiments with live animals are supported with real-time guarantees. Offline studies can be performed with various configurations for extensive analysis and training. A Web-based portal is also provided to allow users to conveniently interact with the cyberinfrastructure, conducting both experimentation and analysis. New motor control models are developed based on this approach, including recursive least squares (RLS) and reinforcement learning based (RLBMI) algorithms. The results from an online RLBMI experiment show that the cyberinfrastructure can successfully support DDDBMI experiments and meet the desired real-time requirements.
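A minimal sketch of the recursive least squares (RLS) idea mentioned above, reduced to a single scalar input for clarity. This is the generic textbook RLS update, not the authors' decoder implementation.

```python
# Generic scalar recursive least squares (RLS) update, simplified to one
# input; a textbook sketch, not the authors' decoder implementation.
def rls_fit(pairs, lam=0.99, delta=100.0):
    """Estimate w in y ~ w * x from a stream of (x, y) pairs."""
    w, p = 0.0, delta                     # weight and inverse-correlation scalar
    for x, y in pairs:
        k = p * x / (lam + x * p * x)     # gain
        w += k * (y - w * x)              # correct with the prediction error
        p = (p - k * x * p) / lam         # update inverse correlation
    return w

# Recover y = 2x from noiseless samples; w converges near 2.0
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0, 4.0, 5.0)]
w = rls_fit(data)
```

The real decoder uses vector-valued neural inputs, so the scalars w, k, and p become a weight vector, gain vector, and inverse-correlation matrix, but the update structure is identical.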
Event-driven simulation in SELMON: An overview of EDSE
NASA Technical Reports Server (NTRS)
Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.
1992-01-01
EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring, is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, and synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.
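The event consumption and creation cycle described above can be sketched with a priority-queue simulation loop. This is a generic event-driven simulator, not the actual EDSE/SELMON implementation, and the tank-drain causal model is a made-up example.

```python
import heapq

# Generic event-driven simulation loop (illustrative; not the actual
# EDSE/SELMON implementation). Events are (time, seq, action) tuples;
# consuming an event may create (schedule) further events.
class Simulator:
    def __init__(self):
        self.queue = []
        self.now = 0.0
        self._seq = 0          # tie-breaker for events at the same time

    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self.queue:
            self.now, _, action = heapq.heappop(self.queue)
            action(self)

# Hypothetical causal model: a tank drains one unit per time step until
# its level reaches 7; the event reschedules itself to predict the future.
levels = []
state = {"level": 10.0}
def drain(sim):
    state["level"] -= 1.0
    levels.append((sim.now, state["level"]))
    if state["level"] > 7.0:
        sim.schedule(1.0, drain)

sim = Simulator()
sim.schedule(1.0, drain)
sim.run()   # levels -> [(1.0, 9.0), (2.0, 8.0), (3.0, 7.0)]
```

The recorded trajectory is the model's prediction of future behavior, which a monitor such as SELMON could then compare against observed data.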
Quantitative modeling of the reaction/diffusion kinetics of two-chemistry photopolymers
NASA Astrophysics Data System (ADS)
Kowalski, Benjamin Andrew
Optically driven diffusion in photopolymers is an appealing material platform for a broad range of applications, in which the recorded refractive index patterns serve either as images (e.g. data storage, display holography) or as optical elements (e.g. custom GRIN components, integrated optical devices). A quantitative understanding of the reaction/diffusion kinetics is difficult to obtain directly, but is nevertheless necessary in order to fully exploit the wide array of design freedoms in these materials. A general strategy for characterizing these kinetics is proposed, in which key processes are decoupled and independently measured. This strategy enables prediction of a material's potential refractive index change, solely on the basis of its chemical components. The degree to which a material does not reach this potential reveals the fraction of monomer that has participated in unwanted reactions, reducing spatial resolution and dynamic range. This approach is demonstrated for a model material similar to commercial media, achieving quantitative predictions of index response over three orders of exposure dose (~1 to ~10^3 mJ cm^-2) and three orders of feature size (0.35 to 500 microns). The resulting insights enable guided, rational design of new material formulations with demonstrated performance improvement.
Palomar, Esther; Chen, Xiaohong; Liu, Zhiming; Maharjan, Sabita; Bowen, Jonathan
2016-10-28
Smart city systems embrace major challenges associated with climate change, energy efficiency, mobility and future services by embedding the virtual space into a complex cyber-physical system. Those systems are constantly evolving and scaling up, involving a wide range of integration among users, devices, utilities, public services and also policies. Modelling such complex dynamic systems' architectures has always been essential for the development and application of techniques/tools to support design and deployment of integration of new components, as well as for the analysis, verification, simulation and testing to ensure trustworthiness. This article reports on the definition and implementation of a scalable component-based architecture that supports a cooperative energy demand response (DR) system coordinating energy usage between neighbouring households. The proposed architecture, called refinement of Cyber-Physical Component Systems (rCPCS), which extends the refinement calculus for component and object system (rCOS) modelling method, is implemented using Eclipse Extensible Coordination Tools (ECT), i.e., Reo coordination language. With rCPCS implementation in Reo, we specify the communication, synchronisation and co-operation amongst the heterogeneous components of the system assuring, by design scalability and the interoperability, correctness of component cooperation.
Stav, Erlend; Walderhaug, Ståle; Mikalsen, Marius; Hanke, Sten; Benc, Ivan
2013-11-01
The proper use of ICT services can support seniors in living independently longer. While such services are starting to emerge, current proprietary solutions are often expensive, covering only isolated parts of seniors' needs, and lack support for sharing information between services and between users. For developers, the challenge is that it is complex and time consuming to develop high quality, interoperable services, and new techniques are needed to simplify the development and reduce the development costs. This paper provides the complete view of the experiences gained in the MPOWER project with respect to using model-driven development (MDD) techniques for Service Oriented Architecture (SOA) system development in the Ambient Assisted Living (AAL) domain. To address this challenge, the approach of the European research project MPOWER (2006-2009) was to investigate and record the user needs, define a set of reusable software services based on these needs, and then implement pilot systems using these services. Further, a model-driven toolchain covering key development phases was developed to support software developers through this process. Evaluations were conducted both on the technical artefacts (methodology and tools), and on end user experience from using the pilot systems in trial sites. The outcome of the work on the user needs is a knowledge base recorded as a Unified Modeling Language (UML) model. This comprehensive model describes actors, use cases, and features derived from these. The model further includes the design of a set of software services, including full trace information back to the features and use cases motivating their design. Based on the model, the services were implemented for use in Service Oriented Architecture (SOA) systems, and are publicly available as open source software. The services were successfully used in the realization of two pilot applications. 
There is therefore a direct and traceable link from the user needs of the elderly, through the service design knowledge base, to the service and pilot implementations. The evaluation of the SOA approach among the developers in the project revealed that SOA is useful with respect to job performance and quality. Furthermore, they found SOA easy to use and supportive of the development of AAL applications. An important finding is that the developers clearly report that they intend to use SOA in the future, but not for all types of projects. With respect to using model-driven development in web services design and implementation, the developers reported that it was useful. However, the code generated from the models must be correct if the full potential of MDD is to be achieved. The pilots and their evaluation in the trial sites showed that the services of the platform are sufficient to create suitable systems for end users in the domain. A SOA platform with a set of reusable domain services is a suitable foundation for more rapid development and tailoring of assisted living systems covering recurring needs among elderly users. It is feasible to realize a tool-chain for model-driven development of SOA applications in the AAL domain, and such a tool-chain can be accepted and found useful by software developers.
Respiratory protective device design using control system techniques
NASA Technical Reports Server (NTRS)
Burgess, W. A.; Yankovich, D.
1972-01-01
The feasibility of a control system analysis approach to provide a design base for respiratory protective devices is considered. A system design approach requires that all functions and components of the system be mathematically identified in a model of the RPD. The mathematical notations describe the operation of the components as closely as possible. The individual component mathematical descriptions are then combined to describe the complete RPD. Finally, analysis of the mathematical notation by control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.
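The modelling approach described above, expressing each component mathematically, combining the descriptions, and checking for stable and predictable operation, can be sketched with a toy closed loop. The first-order plant and proportional gain here are invented for illustration; they are not the RPD component models from the study.

```python
# Sketch of the control-system modelling idea: each component is a simple
# mathematical element, the elements are combined, and the closed loop is
# simulated to check stable, predictable operation. The first-order plant
# and the gain values are invented, not the study's RPD component models.
def simulate_closed_loop(kp=2.0, tau=0.5, dt=0.01, steps=1000, setpoint=1.0):
    """Proportional control of a first-order plant dy/dt = (-y + u) / tau."""
    y = 0.0
    for _ in range(steps):
        u = kp * (setpoint - y)    # controller component
        y += dt * (-y + u) / tau   # plant component (explicit Euler step)
    return y

final = simulate_closed_loop()
# settles at kp / (1 + kp) * setpoint = 2/3 for these illustrative gains
```

With the component equations combined this way, stability can be checked either by simulation, as here, or analytically from the closed-loop pole, which is the kind of control-theoretic analysis the abstract proposes.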
Crash energy absorption of two-segment crash box with holes under frontal load
NASA Astrophysics Data System (ADS)
Choiron, Moch. Agus; Sudjito, Hidayati, Nafisah Arina
2016-03-01
A crash box is one of the passive safety components designed to absorb impact energy during a collision. Crash box designs have been developed in order to obtain optimum crashworthiness performance. The circular cross section was first investigated with a one-segment design, whose performance is strongly influenced by its length, making it sensitive to buckling. In this study, a two-segment crash box design with additional holes is investigated, and its deformation behavior and crash energy absorption are observed. The crash box modelling is performed by finite element analysis. The crash test components are the impactor, the crash box, and a fixed rigid base. The impactor and the fixed base are modelled as rigid, and the crash box material as bilinear isotropic hardening. A crash box length of 100 mm and a frontal crash velocity of 16 km/h are selected. The crash box material is aluminium alloy. Based on the simulation results, the configuration with 2 holes located at ¾ of the length gives the largest crash energy absorption. This is associated with the deformation pattern: this crash box model produces an axisymmetric mode, unlike the other models.
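Crash energy absorption is the area under the force-displacement curve of the crushing box, so it can be computed from simulation output with a simple trapezoidal integration. The force and displacement values below are hypothetical, not the paper's simulation results.

```python
# Energy absorbed by a crash box is the area under its force-displacement
# curve; a trapezoid-rule sketch. The data points are hypothetical, not
# the paper's finite element simulation results.
def absorbed_energy(displacement_mm, force_kN):
    """Integrate F dx; kN * mm gives joules directly."""
    energy = 0.0
    for i in range(1, len(displacement_mm)):
        dx = displacement_mm[i] - displacement_mm[i - 1]
        energy += 0.5 * (force_kN[i] + force_kN[i - 1]) * dx
    return energy   # in J, since 1 kN * 1 mm = 1 J

x = [0.0, 10.0, 20.0, 30.0]   # crush displacement, mm
f = [0.0, 60.0, 40.0, 50.0]   # crush force, kN (hypothetical)
e = absorbed_energy(x, f)     # -> 1250.0 J
```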
Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie
2009-01-01
Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
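Two of the performance metrics such a benchmarking framework typically computes, detection accuracy (recall) and mean detection latency, can be sketched as follows. The metric definitions and the sample data are illustrative, not the ADAPT framework's actual metric set.

```python
# Sketch of two common benchmarking metrics for diagnostic algorithms:
# detection accuracy (recall) and mean detection latency. Definitions and
# sample data are illustrative, not the ADAPT framework's actual metrics.
def detection_metrics(truth, alarms, latencies):
    """truth/alarms: per-run fault-present / fault-detected booleans;
    latencies: (injection_time, detection_time) pairs for true positives."""
    tp = sum(1 for t, a in zip(truth, alarms) if t and a)
    fp = sum(1 for t, a in zip(truth, alarms) if not t and a)
    fn = sum(1 for t, a in zip(truth, alarms) if t and not a)
    recall = tp / (tp + fn) if tp + fn else 0.0
    mean_latency = (sum(d - i for i, d in latencies) / len(latencies)
                    if latencies else 0.0)
    return recall, fp, mean_latency

truth = [True, True, False, True]    # faults injected in runs 1, 2, 4
alarms = [True, False, True, True]   # algorithm alarmed in runs 1, 3, 4
recall, false_alarms, latency = detection_metrics(
    truth, alarms, latencies=[(10.0, 12.5), (40.0, 41.5)])
# recall 2/3, one false alarm, mean latency 2.0 s
```

Scoring every candidate algorithm on one shared fault catalog with fixed metrics like these is what makes the comparison in the paper systematic.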
Halogen bonding (X-bonding): A biological perspective
Scholfield, Matthew R; Zanden, Crystal M Vander; Carter, Megan; Ho, P Shing
2013-01-01
The concept of the halogen bond (or X-bond) has become recognized as contributing significantly to the specificity in recognition of a large class of halogenated compounds. The interaction is most easily understood as primarily an electrostatically driven molecular interaction, where an electropositive crown, or σ-hole, serves as a Lewis acid to attract a variety of electron-rich Lewis bases, in analogous fashion to a classic hydrogen bonding (H-bond) interaction. We present here a broad overview of X-bonds from the perspective of a biologist who may not be familiar with this recently rediscovered class of interactions and, consequently, may be interested in how they can be applied as a highly directional and specific component of the molecular toolbox. This overview includes a discussion for where X-bonds are found in biomolecular structures, and how their structure–energy relationships are studied experimentally and modeled computationally. In total, our understanding of these basic concepts will allow X-bonds to be incorporated into strategies for the rational design of new halogenated inhibitors against biomolecular targets or toward molecular engineering of new biological-based materials. PMID:23225628
Study on Capturing Functional Requirements of the New Product Based on Evolution
NASA Astrophysics Data System (ADS)
Liu, Fang; Song, Liya; Bai, Zhonghang; Zhang, Peng
In order to survive in an increasingly competitive global marketplace, it is important for corporations to forecast the evolutionary direction of new products rapidly and effectively. Most products are developed based on the design of existing products, and in product design, capturing functional requirements is a key step. Function evolves continuously, driven by the evolution of needs and technologies, so the functional requirements of a new product can be forecast from the functions of an existing product. Eight laws of function evolution are put forward in this paper, and a process model for capturing the functional requirements of a new product based on function evolution is proposed. An example illustrates the design process.
General Purpose Data-Driven Monitoring for Space Operations
NASA Technical Reports Server (NTRS)
Iverson, David L.; Martin, Rodney A.; Schwabacher, Mark A.; Spirkovska, Liljana; Taylor, William McCaa; Castle, Joseph P.; Mackey, Ryan M.
2009-01-01
As modern space propulsion and exploration systems improve in capability and efficiency, their designs are becoming increasingly sophisticated and complex. Determining the health state of these systems, using traditional parameter limit checking, model-based, or rule-based methods, is becoming more difficult as the number of sensors and component interactions grow. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults or failures. The Inductive Monitoring System (IMS) is a data-driven system health monitoring software tool that has been successfully applied to several aerospace applications. IMS uses a data mining technique called clustering to analyze archived system data and characterize normal interactions between parameters. The scope of IMS based data-driven monitoring applications continues to expand with current development activities. Successful IMS deployment in the International Space Station (ISS) flight control room to monitor ISS attitude control systems has led to applications in other ISS flight control disciplines, such as thermal control. It has also generated interest in data-driven monitoring capability for Constellation, NASA's program to replace the Space Shuttle with new launch vehicles and spacecraft capable of returning astronauts to the moon, and then on to Mars. Several projects are currently underway to evaluate and mature the IMS technology and complementary tools for use in the Constellation program. These include an experiment on board the Air Force TacSat-3 satellite, and ground systems monitoring for NASA's Ares I-X and Ares I launch vehicles. 
The TacSat-3 Vehicle System Management (TVSM) project is a software experiment to integrate fault and anomaly detection algorithms and diagnosis tools with executive and adaptive planning functions contained in the flight software on-board the Air Force Research Laboratory TacSat-3 satellite. The TVSM software package will be uploaded after launch to monitor spacecraft subsystems such as power and guidance, navigation, and control (GN&C). It will analyze data in real-time to demonstrate detection of faults and unusual conditions, diagnose problems, and react to threats to spacecraft health and mission goals. The experiment will demonstrate the feasibility and effectiveness of integrated system health management (ISHM) technologies with both ground and on-board experiments.
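The IMS approach described above, characterizing nominal data as clusters and scoring new observations by distance to the nearest cluster, can be sketched as follows. This simplified version does not reproduce IMS's actual clustering algorithm or distance rules.

```python
# Sketch of IMS-style data-driven monitoring: archived nominal data is
# characterized as clusters, and new sensor vectors are scored by distance
# to the nearest cluster. Simplified; not IMS's actual clustering rules.
def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def nearest_cluster_distance(x, centroids):
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    return min(dist(x, c) for c in centroids)

# "Training": archived nominal sensor vectors grouped into two clusters
clusters = [[[1.0, 1.0], [1.2, 0.9]], [[5.0, 5.0], [5.1, 4.8]]]
centroids = [centroid(c) for c in clusters]
threshold = 1.0

score = nearest_cluster_distance([5.0, 4.9], centroids)    # near nominal
anomaly = nearest_cluster_distance([9.0, 0.0], centroids)  # far from nominal
# score < threshold (nominal); anomaly > threshold (flagged)
```

In operation the distance score is computed on real-time data, and a sustained excursion above the threshold signals an anomalous data signature.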
NASA Astrophysics Data System (ADS)
Merkord, C. L.; Liu, Y.; DeVos, M.; Wimberly, M. C.
2015-12-01
Malaria early detection and early warning systems are important tools for public health decision makers in regions where malaria transmission is seasonal and varies from year to year with fluctuations in rainfall and temperature. Here we present a new data-driven dynamic linear model based on the Kalman filter with time-varying coefficients that are used to identify malaria outbreaks as they occur (early detection) and predict the location and timing of future outbreaks (early warning). We fit linear models of malaria incidence with trend and Fourier form seasonal components using three years of weekly malaria case data from 30 districts in the Amhara Region of Ethiopia. We identified past outbreaks by comparing the modeled prediction envelopes with observed case data. Preliminary results demonstrated the potential for improved accuracy and timeliness over commonly-used methods in which thresholds are based on simpler summary statistics of historical data. Other benefits of the dynamic linear modeling approach include robustness to missing data and the ability to fit models with relatively few years of training data. To predict future outbreaks, we started with the early detection model for each district and added a regression component based on satellite-derived environmental predictor variables including precipitation data from the Tropical Rainfall Measuring Mission (TRMM) and land surface temperature (LST) and spectral indices from the Moderate Resolution Imaging Spectroradiometer (MODIS). We included lagged environmental predictors in the regression component of the model, with lags chosen based on cross-correlation of the one-step-ahead forecast errors from the first model. Our results suggest that predictions of future malaria outbreaks can be improved by incorporating lagged environmental predictors.
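The early-detection idea can be sketched with a minimal local-level dynamic linear model. This is a simplification of the authors' model, which also includes trend, Fourier-form seasonal, and time-varying regression components; the case counts and noise variances below are simulated, not Amhara Region data.

```python
import numpy as np

def kalman_local_level(y, q=1.0, r=4.0):
    """One-step-ahead forecasts for a local-level dynamic linear model.
    q: level (state) noise variance, r: observation noise variance."""
    n = len(y)
    m, P = y[0], r                  # initial level estimate and its variance
    fc_mean, fc_var = np.empty(n), np.empty(n)
    for t in range(n):
        P_pred = P + q              # predict step
        fc_mean[t], fc_var[t] = m, P_pred + r
        K = P_pred / (P_pred + r)   # Kalman gain
        m = m + K * (y[t] - m)      # update with the new observation
        P = (1 - K) * P_pred
    return fc_mean, fc_var

def detect_outbreaks(y, z=3.0, **kw):
    """Indices where cases exceed the upper prediction envelope (mean + z*sd)."""
    y = np.asarray(y, dtype=float)
    mean, var = kalman_local_level(y, **kw)
    return np.flatnonzero(y > mean + z * np.sqrt(var))

# Simulated weekly case counts with an outbreak in weeks 30-32
rng = np.random.default_rng(0)
cases = 20 + rng.normal(0, 2, 52)
cases[30:33] += 40
weeks = detect_outbreaks(cases)
```

Because the filter updates recursively, a missing week can simply be skipped (predict without update), which is one source of the robustness to missing data noted above.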
Designing Problem-Driven Instruction with Online Social Media
ERIC Educational Resources Information Center
Kyeong-Ju Seo, Kay, Ed.; Pellegrino, Debra A., Ed.; Engelhard, Chalee, Ed.
2012-01-01
Designing Problem-Driven Instruction with Online Social Media has the capacity to transform an educator's teaching style by presenting innovative ways to empower problem-based instruction with online social media. Knowing that not all instructors are comfortable in this area, this book provides clear, systematic design approaches for instructors…
Cost-driven materials selection criteria for redox flow battery electrolytes
NASA Astrophysics Data System (ADS)
Dmello, Rylan; Milshtein, Jarrod D.; Brushett, Fikile R.; Smith, Kyle C.
2016-10-01
Redox flow batteries show promise for grid-scale energy storage applications but are presently too expensive for widespread adoption. Electrolyte material costs constitute a sizeable fraction of the redox flow battery price. As such, this work develops a techno-economic model for redox flow batteries that accounts for redox-active material, salt, and solvent contributions to the electrolyte cost. Benchmark values for electrolyte constituent costs guide identification of design constraints. Nonaqueous battery design is sensitive to all electrolyte component costs, cell voltage, and area-specific resistance. Design challenges for nonaqueous batteries include minimizing salt content and dropping redox-active species concentration requirements. Aqueous battery design is sensitive to only redox-active material cost and cell voltage, due to low area-specific resistance and supporting electrolyte costs. Increasing cell voltage and decreasing redox-active material cost present major materials selection challenges for aqueous batteries. This work minimizes cost-constraining variables by mapping the battery design space with the techno-economic model, through which we highlight pathways towards low price and moderate concentration. Furthermore, the techno-economic model calculates quantitative iterations of battery designs to achieve the Department of Energy battery price target of $100 per kWh and highlights cost-cutting strategies to drive battery prices down further.
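A toy version of the electrolyte-cost calculation illustrates the structure of such a techno-economic model. This is not the paper's actual model (which also treats cell resistance and other system costs); the prices, molar masses, and concentrations below are hypothetical.

```python
F = 96485.0  # Faraday constant, C/mol

def electrolyte_cost_per_kwh(conc_active, mm_active, price_active,
                             conc_salt, mm_salt, price_salt,
                             price_solvent_per_L, cell_voltage, n_electrons=1):
    """Electrolyte-only cost in $/kWh for a symmetric two-tank flow battery.
    Concentrations in mol/L, molar masses in g/mol, material prices in $/kg."""
    # energy stored per litre of one electrolyte; both tanks double the volume
    kwh_per_L = n_electrons * F * conc_active * cell_voltage / 3.6e6 / 2
    cost_per_L = (conc_active * mm_active * price_active
                  + conc_salt * mm_salt * price_salt) / 1000.0 + price_solvent_per_L
    return cost_per_L / kwh_per_L

# Hypothetical aqueous chemistry: 1.5 M active species at $5/kg, 2 M supporting
# salt at $1/kg, water-based solvent at $0.10/L, 1.5 V cell
cost = electrolyte_cost_per_kwh(1.5, 150.0, 5.0, 2.0, 100.0, 1.0, 0.10, 1.5)
```

The formula makes the paper's qualitative point visible: doubling the cell voltage halves the electrolyte's $/kWh contribution, while salt and solvent terms matter most when their concentrations or prices are high (the nonaqueous case).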
Design and development of integral heat pipe/thermal energy storage devices
NASA Astrophysics Data System (ADS)
Mahefkey, E. T.; Richter, R.
1981-06-01
The major design and performance test subtasks in the development of small (200 to 1,000 whr) integral heat pipe/thermal energy storage devices for use with thermally driven spacecraft cryo-coolers are described. The design of the integral heat pipe/thermal energy storage device was based on a quasi steady resistance heat transfer, lumped capacitance model. Design considerations for the heat pipe and thermal storage annuli are presented. The thermomechanical stress and insulation system design for the device are reviewed. Experimental correlations are described, as are the plans for the further development of the concept.
A Data-Driven Approach to Interactive Visualization of Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jun
Driven by emerging industry standards, electric utilities and grid coordination organizations are eager to seek advanced tools to assist grid operators to perform mission-critical tasks and enable them to make quick and accurate decisions. The emerging field of visual analytics holds tremendous promise for improving the business practices in today’s electric power industry. The conducted investigation, however, has revealed that the existing commercial power grid visualization tools heavily rely on human designers, hindering users’ ability to discover. Additionally, for a large grid, it is very labor-intensive and costly to build and maintain the pre-designed visual displays. This project proposes a data-driven approach to overcome the common challenges. The proposed approach relies on developing powerful data manipulation algorithms to create visualizations based on the characteristics of empirically or mathematically derived data. The resulting visual presentations emphasize what the data is rather than how the data should be presented, thus fostering comprehension and discovery. Furthermore, the data-driven approach formulates visualizations on-the-fly. It does not require a visualization design stage, completely eliminating or significantly reducing the cost for building and maintaining visual displays. The research and development (R&D) conducted in this project is mainly divided into two phases. The first phase (Phase I & II) focuses on developing data-driven techniques for visualization of the power grid and its operation. Various data-driven visualization techniques were investigated, including pattern recognition for auto-generation of one-line diagrams, fuzzy model based rich data visualization for situational awareness, etc. The R&D conducted during the second phase (Phase IIB) focuses on enhancing the prototyped data-driven visualization tool based on the gathered requirements and use cases.
The goal is to evolve the prototyped tool developed during the first phase into a commercial grade product. We will use one of the identified application areas as an example to demonstrate how research results achieved in this project are successfully utilized to address an emerging industry need. In summary, the data-driven visualization approach developed in this project has proven to be promising for building the next-generation power grid visualization tools. Application of this approach has resulted in a state-of-the-art commercial tool currently being leveraged by more than 60 utility organizations in North America and Europe.
Pedagogical Reasoning and Action: Affordances of Practice-Based Teacher Professional Development
ERIC Educational Resources Information Center
Pella, Shannon
2015-01-01
A common theme has been consistently woven through the literature on teacher professional development: that practice-based designs and collaboration are two components of effective teacher learning models. In addition to collaboration and practice-based designs, inquiry cycles have been long recognized as catalysts for teacher professional…
NASA Astrophysics Data System (ADS)
Oskouie, M. Faraji; Ansari, R.; Rouhi, H.
2018-04-01
Eringen's nonlocal elasticity theory is extensively employed for the analysis of nanostructures because it is able to capture nanoscale effects. Previous studies have revealed that using the differential form of the strain-driven version of this theory leads to paradoxical results in some cases, such as bending analysis of cantilevers, and recourse must be made to the integral version. In this article, a novel numerical approach is developed for the bending analysis of Euler-Bernoulli nanobeams in the context of strain- and stress-driven integral nonlocal models. This numerical approach is proposed for the direct solution to bypass the difficulties related to converting the integral governing equation into a differential equation. First, the governing equation is derived based on both strain-driven and stress-driven nonlocal models by means of the principle of minimum total potential energy. Also, in each case, the governing equation is obtained in both strong and weak forms. To solve the derived equations numerically, matrix differential and integral operators are constructed based upon the finite difference technique and the trapezoidal integration rule. It is shown that the proposed numerical approach can be efficiently applied to the strain-driven nonlocal model with the aim of resolving the mentioned paradoxes. It is also able to solve the problem based on the strain-driven model without the inconsistencies in applying this model that have been reported in the literature.
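The trapezoidal integral operator at the core of such an approach can be sketched as a matrix acting on grid values. The bi-exponential kernel below is a common choice in nonlocal elasticity, but the specific kernel, grid, and parameters here are illustrative, not taken from the article.

```python
import numpy as np

def trapz_weights(x):
    """Quadrature weights w with w @ f ≈ ∫ f(ξ) dξ on a uniform grid x."""
    h = x[1] - x[0]
    w = np.full(len(x), h)
    w[0] = w[-1] = h / 2
    return w

def nonlocal_operator(x, kappa):
    """Matrix N with (N @ f)[i] ≈ ∫ K(|x_i - ξ|) f(ξ) dξ, using the
    bi-exponential kernel K(s) = exp(-s/kappa) / (2*kappa)."""
    w = trapz_weights(x)
    K = np.exp(-np.abs(x[:, None] - x[None, :]) / kappa) / (2 * kappa)
    return K * w[None, :]          # kernel values times quadrature weights

x = np.linspace(0.0, 1.0, 401)
N = nonlocal_operator(x, kappa=0.05)
out = N @ np.ones_like(x)          # kernel integrates to 1 on an infinite domain
```

Assembling the integral term as an explicit matrix is what lets the integral governing equation be solved directly, as a linear system, instead of being converted to a differential equation first.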
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of the system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently, it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
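The blackboard-and-plugins communication pattern can be sketched in a few lines, with queues standing in for CSP channels. This is a toy model for illustration only: the class, method, and plugin names are hypothetical, not Cougaar or CSP APIs.

```python
from queue import Queue

class Blackboard:
    """Toy blackboard: publishes objects to every subscribed plugin over a
    dedicated queue (standing in for a CSP channel)."""
    def __init__(self):
        self.channels = {}
    def subscribe(self, name):
        self.channels[name] = Queue()
        return self.channels[name]
    def publish(self, obj):
        for ch in self.channels.values():
            ch.put(obj)

class Plugin:
    """Toy plugin: drains its channel and records what it saw."""
    def __init__(self, name, blackboard):
        self.name = name
        self.inbox = blackboard.subscribe(name)
        self.seen = []
    def step(self):
        while not self.inbox.empty():
            self.seen.append(self.inbox.get())

bb = Blackboard()
plugins = [Plugin("planner", bb), Plugin("executor", bb)]
bb.publish({"task": "allocate"})
for p in plugins:
    p.step()
```

In the CSP formalization, each queue becomes a typed channel and each class a process, so properties such as deadlock freedom can be checked mechanically.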
Sinha, Samir K; Bessman, Edward S; Flomenbaum, Neal; Leff, Bruce
2011-06-01
We inform the future development of a new geriatric emergency management practice model. We perform a systematic review of the existing evidence for emergency department (ED)-based case management models designed to improve the health, social, and health service utilization outcomes for noninstitutionalized older patients within the context of an index ED visit. This was a systematic review of English-language articles indexed in MEDLINE and CINAHL (1966 to 2010), describing ED-based case management models for older adults. Bibliographies of the retrieved articles were reviewed to identify additional references. A systematic qualitative case study analytic approach was used to identify the core operational components and outcome measures of the described clinical interventions. The authors of the included studies were also invited to verify our interpretations of their work. The determined patterns of component adherence were then used to postulate the relative importance and effect of the presence or absence of a particular component in influencing the overall effectiveness of their respective interventions. Eighteen of 352 studies (reported in 20 articles) met study criteria. Qualitative analyses identified 28 outcome measures and 8 distinct model characteristic components that included having an evidence-based practice model, nursing clinical involvement or leadership, high-risk screening processes, focused geriatric assessments, the initiation of care and disposition planning in the ED, interprofessional and capacity-building work practices, post-ED discharge follow-up with patients, and evaluation and monitoring processes. Of the 15 positive study results, 6 had all 8 characteristic components and 9 were found to be lacking at least 1 component. Two studies with positive results lacked 2 characteristic components and none lacked more than 2 components. 
Of the 3 studies with negative results demonstrating no positive effects based on any outcome tested, one lacked 2, one lacked 3, and one lacked 4 of the 8 model components. Successful models of ED-based case management models for older adults share certain key characteristics. This study builds on the emerging literature in this area and leverages the differences in these models and their associated outcomes to support the development of an evidence-based normative and effective geriatric emergency management practice model designed to address the special care needs and thereby improve the health and health service utilization outcomes of older patients. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Amran, T. G.; Janitra Yose, Mindy
2018-03-01
As the ASEAN Economic Community (AEC) free trade area intensifies competition, it is important that Indonesia’s automotive industry be highly competitive as well. A logistics performance measurement model was designed as an evaluation tool for automotive component companies to improve their logistics performance in order to compete in the AEC. The design of the logistics performance measurement model was based on the Logistics Scorecard perspectives and divided into two stages: identifying the logistics business strategy to derive the KPIs, and constructing the model. Twenty-three KPIs were obtained. The measurement results can inform policies to improve logistics performance and competitiveness.
Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation
NASA Astrophysics Data System (ADS)
Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua
2015-09-01
Cable-driven exoskeletons use active cables to actuate the system and are worn on subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.
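Linearized error-model identification of this kind reduces to a least-squares problem of the form Δy ≈ J(q) Δθ, where Δθ collects the unknown parameter errors. The sketch below uses a generic planar two-link kinematic model with invented parameter errors, not the exoskeleton's actual kinematics.

```python
import numpy as np

def jacobian(q1, q2):
    """∂x/∂(L1, L2) for the x-coordinate of a planar two-link arm tip,
    i.e. the linearized error model in the link-length errors."""
    return np.array([np.cos(q1), np.cos(q1 + q2)])

true_dL = np.array([0.012, -0.008])           # simulated link-length errors (m)
rng = np.random.default_rng(0)
qs = rng.uniform(-1.2, 1.2, (40, 2))          # joint angles at measurement poses
J = np.array([jacobian(a, b) for a, b in qs]) # stacked linearized error model
dy = J @ true_dL + rng.normal(0.0, 1e-4, 40)  # measured tip-position errors

est_dL, *_ = np.linalg.lstsq(J, dy, rcond=None)
```

Spreading the measurement poses over the workspace keeps the stacked Jacobian well conditioned, which is what makes the parameter errors identifiable.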
Lost in translation: bridging gaps between design and evidence-based design.
Watkins, Nicholas; Keller, Amy
2008-01-01
The healthcare design community is adopting evidence-based design (EBD) at a startling rate. However, the role of research within an architectural practice is unclear. Reasons for the lack of clarity include multiple connotations of EBD, the tension between a research-driven market and market-driven research, and the competing expectations and standards of design practitioners and researchers. Research as part of EBD should be integral with the design process so that research directly contributes to building projects. Characteristics of a comprehensive programming methodology to close the gap between design and EBD are suggested.
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.
Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin
2007-11-01
This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users' and developers' mailing list, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
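The state-machine governance described above can be sketched as a transition table that rejects invalid requests. The component, state, and event names below are hypothetical stand-ins, not the actual IGSTK API (which is C++).

```python
class TrackerComponent:
    """Sketch of a state-machine-governed component: every request is checked
    against an explicit transition table, so an invalid call can never drive
    the component into an undefined state."""
    TRANSITIONS = {
        ("Idle", "Initialize"): "Initialized",
        ("Initialized", "StartTracking"): "Tracking",
        ("Tracking", "StopTracking"): "Initialized",
    }

    def __init__(self):
        self.state = "Idle"
        self.rejected = []

    def request(self, event):
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:                    # invalid: record it, stay put
            self.rejected.append((self.state, event))
        else:
            self.state = nxt
        return self.state

tracker = TrackerComponent()
tracker.request("StartTracking")   # invalid from Idle: rejected, state unchanged
tracker.request("Initialize")
tracker.request("StartTracking")
```

Because every transition is enumerated up front, the table itself becomes a reviewable safety artifact, which is the point of the approach in a safety-critical setting.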
Design of a line-VISAR interferometer system for the Sandia Z Machine
NASA Astrophysics Data System (ADS)
Galbraith, J.; Austin, K.; Baker, J.; Bettencourt, R.; Bliss, E.; Celeste, J.; Clancy, T.; Cohen, S.; Crosley, M.; Datte, P.; Fratanduono, D.; Frieders, G.; Hammer, J.; Jackson, J.; Johnson, D.; Jones, M.; Koen, D.; Lusk, J.; Martinez, A.; Massey, W.; McCarville, T.; McLean, H.; Raman, K.; Rodriguez, S.; Spencer, D.; Springer, P.; Wong, J.
2017-08-01
A joint team comprised of Lawrence Livermore National Laboratory (LLNL) and Sandia National Laboratory (SNL) personnel is designing a line-VISAR (Velocity Interferometer System for Any Reflector) for the Sandia Z Machine, Z Line-VISAR. The diagnostic utilizes interferometry to assess current delivery as a function of radius during a magnetically-driven implosion. The Z Line-VISAR system is comprised of the following: a two-leg line-VISAR interferometer, an eight-channel Gated Optical Imager (GOI), and a fifty-meter transport beampath to/from the target of interest. The Z Machine presents unique optomechanical design challenges. The machine utilizes magnetically driven pulsed power to drive a target to elevated temperatures and pressures useful for high energy density science. Shock accelerations exceeding 30g and a strong electromagnetic pulse (EMP) are generated during the shot event as the machine discharges currents of over 25 million amps. Sensitive optical components must be protected from shock loading, and electrical equipment must be adequately shielded from the EMP. The optical design must accommodate temperature and humidity fluctuations in the facility as well as airborne hydrocarbons from the pulsed power components. We will describe the engineering design and concept of operations of the Z Line-VISAR system. Focus will be on optomechanical design.
NASA Technical Reports Server (NTRS)
Briggs, Hugh C.
2008-01-01
An error budget is a commonly used tool in the design of complex aerospace systems. It represents system performance requirements in terms of allowable errors and flows these down through a hierarchical structure to lower assemblies and components. The requirements may simply be 'allocated' based upon heuristics or experience, or they may be designed through use of physics-based models. This paper presents a basis for developing an error budget for models of the system, as opposed to the system itself. The need for model error budgets arises when system models are a principal design agent, as is increasingly common for poorly testable high performance space systems.
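A minimal error-budget roll-up, assuming independent error sources combined by root-sum-square, might look like the sketch below; the hierarchy and allocation values are invented for illustration, including a leaf reserved for model uncertainty.

```python
import math

# Hypothetical budget: a top-level pointing requirement of 10 units is
# allocated down a hierarchy; "model_uncertainty" is the allocation reserved
# for errors of the system *model* itself, as discussed above.
budget = {
    "sensor_noise": 4.0,
    "structural": {"thermal_distortion": 5.0, "jitter": 3.0},
    "model_uncertainty": 6.0,
}

def rss(node):
    """Root-sum-square roll-up of a nested allocation tree
    (assumes the error sources are independent)."""
    if isinstance(node, dict):
        return math.sqrt(sum(rss(v) ** 2 for v in node.values()))
    return float(node)

total = rss(budget)   # must stay within the 10-unit requirement
```

The same roll-up applies whether the leaves are heuristic allocations or outputs of physics-based models; a model error budget simply makes model fidelity one of the allocated terms.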
An approach to the mathematical modelling of a controlled ecological life support system
NASA Technical Reports Server (NTRS)
Averner, M. M.
1981-01-01
An approach to the design of a computer based model of a closed ecological life-support system suitable for use in extraterrestrial habitats is presented. The model is based on elemental mass balance and contains representations of the metabolic activities of biological components. The model can be used as a tool in evaluating preliminary designs for closed regenerative life support systems and as a method for predicting the behavior of such systems.
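An elemental mass-balance model of this kind can be sketched as a set of conserved pools with bounded fluxes between them. The carbon pools and daily flux rates below are hypothetical, chosen only to show that total mass is conserved as the closed system evolves.

```python
def step(state, crop_uptake=0.9, crew_resp=0.5, harvest=0.45):
    """Advance the carbon pools (kg C) by one day; fluxes are capped so no
    pool goes negative, and total carbon is conserved."""
    co2, biomass, food = state
    uptake = min(crop_uptake, co2)   # photosynthesis: CO2 -> biomass
    co2 -= uptake; biomass += uptake
    h = min(harvest, biomass)        # harvest: biomass -> food
    biomass -= h; food += h
    eaten = min(crew_resp, food)     # crew metabolism: food -> CO2
    food -= eaten; co2 += eaten
    return (co2, biomass, food)

state = (5.0, 2.0, 1.0)              # initial CO2, biomass, and food pools
for _ in range(30):
    state = step(state)
```

Stepping such a model under candidate designs (crop areas, crew sizes, buffer volumes) is how preliminary closed-loop life-support configurations can be screened before hardware exists.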
NASA Astrophysics Data System (ADS)
Zimmerling, Clemens; Dörr, Dominik; Henning, Frank; Kärger, Luise
2018-05-01
Due to their high mechanical performance, continuous fibre reinforced plastics (CoFRP) become increasingly important for load bearing structures. In many cases, manufacturing CoFRPs comprises a forming process of textiles. To predict and optimise the forming behaviour of a component, numerical simulations are applied. However, for maximum part quality, both the geometry and the process parameters must match in mutual regard, which in turn requires numerous numerically expensive optimisation iterations. In both textile and metal forming, a lot of research has focused on determining optimum process parameters, whilst regarding the geometry as invariable. In this work, a meta-model based approach on component level is proposed, that provides a rapid estimation of the formability for variable geometries based on pre-sampled, physics-based draping data. Initially, a geometry recognition algorithm scans the geometry and extracts a set of doubly-curved regions with relevant geometry parameters. If the relevant parameter space is not part of an underlying data base, additional samples via Finite-Element draping simulations are drawn according to a suitable design-table for computer experiments. Time saving parallel runs of the physical simulations accelerate the data acquisition. Ultimately, a Gaussian Regression meta-model is built from the data base. The method is demonstrated on a box-shaped generic structure. The predicted results are in good agreement with physics-based draping simulations. Since evaluations of the established meta-model are numerically inexpensive, any further design exploration (e.g. robustness analysis or design optimisation) can be performed in short time. It is expected that the proposed method also offers great potential for future applications along virtual process chains: For each process step along the chain, a meta-model can be set-up to predict the impact of design variations on manufacturability and part performance. 
Thus, the method is considered to facilitate a lean and economic part and process design under consideration of manufacturing effects.
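The meta-model step above can be sketched with a small Gaussian-process regression, with an analytic function standing in for the Finite-Element draping simulations; the two geometry parameters and the "formability" score below are invented for illustration.

```python
import numpy as np

def rbf(A, B, ls=0.4):
    """Squared-exponential kernel between two sets of design points."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(X, y, Xq, jitter=1e-6):
    """Gaussian-process regression: posterior mean at the query points Xq."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    return rbf(Xq, X) @ np.linalg.solve(K, y)

def formability(p):
    """Stand-in for an FE draping simulation: a smooth score of two
    hypothetical geometry parameters (e.g. corner radius, draw depth)."""
    return np.exp(-((p[:, 0] - 0.5) ** 2 + (p[:, 1] - 0.5) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (80, 2))      # pre-sampled designs
y = formability(X)                      # "simulated" formability values
Xq = np.array([[0.5, 0.5], [0.2, 0.8]])
pred = gp_predict(X, y, Xq)             # rapid estimate for new geometries
```

Once the kernel matrix is factorized, each new geometry query costs only a matrix-vector product, which is why design exploration on the meta-model is cheap compared with re-running the draping simulation.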
TRANSFORM - TRANsient Simulation Framework of Reconfigurable Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenwood, Michael S; Cetiner, Mustafa S; Fugate, David L
Existing development tools for early stage design and scoping of energy systems are often time-consuming to use, proprietary, and do not contain the necessary functionality to model complete systems (i.e., controls, primary, and secondary systems) in a common platform. The TRANSFORM tool, based on the Modelica programming language, (1) provides a standardized, common simulation environment for early design of energy systems (i.e., power plants), (2) provides a library of baseline component modules to be assembled into full plant models using available geometry, design, and thermal-hydraulic data, (3) defines modeling conventions for interconnecting component models, and (4) establishes user interfaces and support tools to facilitate simulation development (i.e., configuration and parameterization), execution, and results display and capture.
User's guide to the Reliability Estimation System Testbed (REST)
NASA Technical Reports Server (NTRS)
Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam
1992-01-01
The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.
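The modularization idea, evaluating a reliability model per component and then combining the modules according to the system structure, can be sketched as follows. The constant-failure-rate components and the series/parallel layout are hypothetical examples, not RML syntax.

```python
import math

def exp_reliability(failure_rate, t):
    """Reliability at time t for a constant-failure-rate component."""
    return math.exp(-failure_rate * t)

def series(*rs):
    """All modules must work for the assembly to work."""
    out = 1.0
    for r in rs:
        out *= r
    return out

def parallel(*rs):
    """At least one redundant module must work."""
    q = 1.0
    for r in rs:
        q *= 1.0 - r
    return 1.0 - q

t = 1000.0  # mission time, hours
cpu = exp_reliability(1e-5, t)
bus = exp_reliability(2e-6, t)
sensors = parallel(exp_reliability(5e-5, t), exp_reliability(5e-5, t))
system = series(cpu, bus, sensors)     # roll-up of the component modules
```

Because each module is evaluated independently before combination, modules can be tested in isolation, exactly the property that makes the modular approach scale to large models (and parallelize well).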
A time domain frequency-selective multivariate Granger causality approach.
Leistritz, Lutz; Witte, Herbert
2016-08-01
The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
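The underlying predictability comparison can be sketched in the time domain: fit autoregressive models with and without a candidate driver's past and compare prediction errors. This sketch omits the paper's frequency-selective signal decomposition and uses a simulated bivariate system, not neural data.

```python
import numpy as np

def ar_error(Y, p=2):
    """In-sample one-step MSE of a least-squares (V)AR(p) model predicting
    the first channel of Y from the past of all channels."""
    n = len(Y)
    X = np.array([Y[t - p:t][::-1].ravel() for t in range(p, n)])
    target = Y[p:, 0]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.mean((target - X @ coef) ** 2)

def granger_index(y, x, p=2):
    """Granger causality x -> y: log ratio of restricted to full model error."""
    full = ar_error(np.column_stack([y, x]), p)
    restricted = ar_error(y[:, None], p)
    return np.log(restricted / full)

# Simulated system in which x drives y with one sample of delay
rng = np.random.default_rng(0)
n = 2000
x = rng.normal(0.0, 1.0, n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

gi_xy = granger_index(y, x)   # clearly positive: x helps predict y
gi_yx = granger_index(x, y)   # near zero: y does not help predict x
```

In the paper's method, the same error comparison would be repeated after cancelling a specific spectral component of x, making the resulting index frequency-selective.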
The potential of expert systems for remote sensing application
NASA Technical Reports Server (NTRS)
Mooneyhan, D. W.
1983-01-01
An overview of the status and potential of artificial intelligence-driven expert systems in the role of image data analysis is presented. An expert system is defined and its structure is summarized. Three such systems designed for image interpretation are outlined. The use of an expert system to detect changes on the earth's surface is discussed, and the components of a knowledge-based image interpretation system and their make-up are outlined. An example of how such a system should work for an area in the tropics where deforestation has occurred is presented as a sequence of situation/action decisions.
Optical components damage parameters database system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong
2012-10-01
Optical components are key elements of large-scale laser devices: a component's damage (load) capacity is directly related to the device's output capability, and that capacity depends on many factors. By digitizing the factors that affect damage into a parameters database, the system provides a scientific, data-supported basis for assessing the load capacity of optical components. Using business-process analysis and a model-driven approach, a component damage parameter information model and database system were established. Application results show that the system meets the business-process and data-management requirements of optical component damage testing; its parameters are flexible and configurable, and the system is simple and easy to use, improving the efficiency of optical component damage tests.
Learning Tasks, Peer Interaction, and Cognition Process: An Online Collaborative Design Model
ERIC Educational Resources Information Center
Du, Jianxia; Durrington, Vance A.
2013-01-01
This paper illustrates a model for Online Group Collaborative Learning. The authors based the foundation of the Online Collaborative Design Model upon Piaget's concepts of assimilation and accommodation, and Vygotsky's theory of social interaction. The four components of online collaborative learning include: individual processes, the task(s)…
Designing an Educational Game with Ten Steps to Complex Learning
ERIC Educational Resources Information Center
Enfield, Jacob
2012-01-01
Few instructional design (ID) models exist which are specific for developing educational games. Moreover, those extant ID models have not been rigorously evaluated. No ID models were found which focus on educational games with complex learning objectives. "Ten Steps to Complex Learning" (TSCL) is based on the four component instructional…
IoT-Based User-Driven Service Modeling Environment for a Smart Space Management System
Choi, Hoan-Suk; Rhee, Woo-Seop
2014-01-01
The existing Internet environment has been extended to the Internet of Things (IoT) as an emerging new paradigm. The IoT connects various physical entities. These entities have communication capability and deploy the observed information to various service areas such as building management, energy-saving systems, surveillance services, and smart homes. These services are designed and developed by professional service providers. Moreover, users' needs have become more complicated and personalized with the spread of user-participation services such as social media and blogging. Therefore, some active users want to create their own services to satisfy their needs, but the existing IoT service-creation environment is difficult for the non-technical user because it requires a programming capability to create a service. To solve this problem, we propose the IoT-based user-driven service modeling environment to provide an easy way to create IoT services. Also, the proposed environment deploys the defined service to another user. Through the personalization and customization of the defined service, the value and dissemination of the service is increased. This environment also provides the ontology-based context-information processing that produces and describes the context information for the IoT-based user-driven service. PMID:25420153
High-level PC-based laser system modeling
NASA Astrophysics Data System (ADS)
Taylor, Michael S.
1991-05-01
Since the inception of the Strategic Defense Initiative (SDI) there have been a multitude of comparison studies attempting to evaluate the effectiveness and relative sizes of complementary, and sometimes competitive, laser weapon systems. It became increasingly apparent that what the systems analyst needed was not only a fast but also a cost-effective way to perform high-level trade studies. In the present investigation, a general procedure is presented for the development of PC-based algorithmic system models for laser systems. This procedure points out the major issues that should be addressed in the design and development of such a model, including defining the problem to be modeled, defining a strategy for development, and, finally, effective use of the model once developed. Being a general procedure, it allows a systems analyst to develop a model to meet specific needs. To illustrate this method of model development, a description of the Strategic Defense Simulation - Design To (SDS-DT) model developed and used by Science Applications International Corporation (SAIC) is presented. SDS-DT is a menu-driven, fast-executing, PC-based program that can be used either to calculate performance, weight, volume, and cost values for a particular design or, alternatively, to run parametrics on particular system parameters to optimize a design.
Lightweight approach to model traceability in a CASE tool
NASA Astrophysics Data System (ADS)
Vileiniskis, Tomas; Skersys, Tomas; Pavalkis, Saulius; Butleris, Rimantas; Butkiene, Rita
2017-07-01
A term "model-driven" is not at all a new buzzword within the ranks of system development community. Nevertheless, the ever increasing complexity of model-driven approaches keeps fueling all kinds of discussions around this paradigm and pushes researchers forward to research and develop new and more effective ways to system development. With the increasing complexity, model traceability, and model management as a whole, becomes indispensable activities of model-driven system development process. The main goal of this paper is to present a conceptual design and implementation of a practical lightweight approach to model traceability in a CASE tool.
Creation of system of computer-aided design for technological objects
NASA Astrophysics Data System (ADS)
Zubkova, T. M.; Tokareva, M. A.; Sultanov, N. Z.
2018-05-01
Because of competition in the process equipment market, production must be flexible, reconfigurable for various product configurations, raw materials, and throughputs depending on current market needs. This is not possible without computer-aided design (CAD). Building a CAD system begins with planning: synthesis, analysis, evaluation, and conversion operations, as well as visualization and decision-making, can all be automated. Based on a formal description of the design procedures, the design route is constructed as an oriented graph. Decomposing the design process, represented by this formalized description, makes it possible to make an informed choice of CAD components for each task. An object-oriented approach allows the CAD system to be treated as an independent system whose properties are inherited from its components. The first step determines the range of tasks the system must perform and the set of components that implement them; the second configures the selected components. Interaction between the selected components is carried out using the CALS standards. The chosen CAD/CAE-oriented approach allows a single model to be created and stored in the subject-area database. Each integration stage is implemented as a separate functional block. The transformation of the CAD model into the internal representation is performed by a block that searches for the geometric parameters of the technological machine, in which an XML model of the construction is obtained using the feature method from image recognition theory. Configuration of the integrated components proceeds in three consecutive steps: configuring tasks, components, and interfaces. Component configuration is realized using "soft computing" with the Mamdani fuzzy inference algorithm.
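The design route described above, an oriented graph of design procedures, can be sketched with a topological ordering; the procedure names and dependencies here are illustrative placeholders, not the paper's actual route.

```python
from graphlib import TopologicalSorter

# Illustrative sketch (names are hypothetical): the design route as an
# oriented graph of design procedures, mapping each procedure to the
# procedures that must run before it, then ordered for execution.
route = {
    "analyze":   {"synthesize"},            # analysis needs a synthesized variant
    "evaluate":  {"analyze"},
    "convert":   {"evaluate"},
    "visualize": {"convert"},
    "decide":    {"visualize", "evaluate"},
}
order = list(TopologicalSorter(route).static_order())
print(order)  # a valid execution order of the design procedures
```

In a real CAD planner, each node would carry the component selected to implement that procedure.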
An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.
Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V
2014-07-01
We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counter factual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
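The start, pause, and roll-back capability described above can be sketched with a snapshot-per-step simulation loop; the compartmental SIR model and its parameters below are our own illustrative stand-in for the paper's high-resolution individual-based model.

```python
import copy

# Minimal sketch (not the paper's system) of the pause/roll-back idea:
# each simulated day is snapshotted so an analyst can rewind the epidemic
# state and apply a different intervention from that point onward.
class SteppableSIR:
    def __init__(self, s, i, r, beta=0.3, gamma=0.1):
        self.state = {"S": s, "I": i, "R": r}
        self.beta, self.gamma = beta, gamma
        self.history = [copy.deepcopy(self.state)]   # day-0 snapshot

    def step(self):
        n = sum(self.state.values())
        new_inf = self.beta * self.state["S"] * self.state["I"] / n
        new_rec = self.gamma * self.state["I"]
        self.state["S"] -= new_inf
        self.state["I"] += new_inf - new_rec
        self.state["R"] += new_rec
        self.history.append(copy.deepcopy(self.state))

    def rollback(self, day):
        # Restore the state at `day` and discard later history.
        self.history = self.history[:day + 1]
        self.state = copy.deepcopy(self.history[-1])

sim = SteppableSIR(990, 10, 0)
for _ in range(5):
    sim.step()
sim.rollback(2)      # rewind the outbreak to day 2 ...
sim.beta = 0.15      # ... and test an intervention from there
sim.step()
```

The web front-end in the paper would issue these start/rollback commands remotely through the service-oriented back-end.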
Derivation of Markov processes that violate detailed balance
NASA Astrophysics Data System (ADS)
Lee, Julian
2018-03-01
Time-reversal symmetry of the microscopic laws dictates that the equilibrium distribution of a stochastic process must obey the condition of detailed balance. However, cyclic Markov processes that do not admit equilibrium distributions with detailed balance are often used to model systems driven out of equilibrium by external agents. I show that for a Markov model without detailed balance, an extended Markov model can be constructed, which explicitly includes the degrees of freedom for the driving agent and satisfies the detailed balance condition. The original cyclic Markov model for the driven system is then recovered as an approximation at early times by summing over the degrees of freedom for the driving agent. I also show that the widely accepted expression for the entropy production in a cyclic Markov model is actually a time derivative of an entropy component in the extended model. Further, I present an analytic expression for the entropy component that is hidden in the cyclic Markov model.
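As a minimal concrete illustration of the cyclic Markov models discussed above (our own example, not the paper's), the following three-state chain has a well-defined stationary distribution yet violates detailed balance, i.e. pi_i P[i, j] != pi_j P[j, i] for some pair of states.

```python
import numpy as np

# A 3-state cyclic Markov chain with a preferred rotation direction.
# Row-stochastic convention: P[i, j] = Pr(i -> j).
P = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.6, 0.3],
              [0.3, 0.1, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Detailed balance requires the flux matrix pi_i * P[i, j] to be symmetric.
flux = pi[:, None] * P
violates = not np.allclose(flux, flux.T)
print(pi, violates)  # uniform stationary distribution, detailed balance violated
```

The asymmetry of the flux matrix is exactly the steady-state probability current around the cycle that the extended model in the paper attributes to the driving agent.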
NASA Astrophysics Data System (ADS)
Cantrell, P.; Ewing-Taylor, J.; Crippen, K. J.; Smith, K. D.; Snelson, C. M.
2004-12-01
Education professionals and seismologists under the emerging SUN (Shaking Up Nevada) program are leveraging the existing infrastructure of the real-time Nevada K-12 Seismic Network to provide a unique inquiry-based science experience for teachers. The concept and effort are driven by teacher needs and emphasize rigorous content knowledge acquisition coupled with the translation of that knowledge into an integrated seismology-based earth sciences curriculum development process. We are developing a pedagogical framework, graduate-level coursework, and materials to initiate the SUN model for teacher professional development in an effort to integrate the research benefits of real-time seismic data with science education needs in Nevada. A component of SUN is to evaluate teacher acquisition of qualified seismological and earth science information and pedagogy both in workshops and in the classroom, and to assess the impact on student achievement. SUN's mission is to positively impact earth science education practices. With the upcoming EarthScope initiative, the program is timely and will incorporate EarthScope real-time seismic data (USArray) and educational materials in graduate course materials and teacher development programs. A number of schools in Nevada are contributing real-time data from both inexpensive and high-quality seismographs that are integrated with Nevada regional seismic network operations as well as the IRIS DMC. A powerful and unique component of the Nevada technology model is that schools can receive stable, continuous live data feeds from hundreds of seismograph stations in Nevada, California, and the world (including live data from Earthworm systems and the IRIS DMC BUD - Buffer of Uniform Data). Students and teachers see their own networked seismograph station within a global context, as participants in regional and global monitoring.
The robust real-time Internet communication protocols used in the Nevada network provide local data acquisition, remote multi-channel data access, local time-series data management, and interactive multi-window waveform display and time-series analysis with centralized metadata control. Formally integrating educational seismology into the K-12 science curriculum with an overall positive impact on science education practices necessarily requires a collaborative effort between professional educators and seismologists, one driven exclusively by teacher needs.
Advanced and secure architectural EHR approaches.
Blobel, Bernd
2006-01-01
Electronic Health Records (EHRs) provided as lifelong patient records are advancing towards core applications of distributed and co-operating health information systems and health networks. To meet the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and be model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives comprise the enterprise view (business processes, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. These views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all the views mentioned have to be established to enable both application and communication security services as an integral part of the system's architecture. Besides the decomposition and simplification of systems through the different viewpoints on their components, different levels of granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both the structure and the behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles.
In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, but also the HL7 version 3 activities, are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's Clinical Document Architecture (CDA), as well as the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining models to their underlying concepts. A future-proof EHR architecture, as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks, has to follow advanced architectural paradigms.
Reliability Quantification of Advanced Stirling Convertor (ASC) Components
NASA Technical Reports Server (NTRS)
Shah, Ashwin R.; Korovaichuk, Igor; Zampino, Edward
2010-01-01
The Advanced Stirling Convertor (ASC) is intended to provide power for an unmanned planetary spacecraft and has an operational life requirement of 17 years. Over this 17-year mission, the ASC must provide power with the desired performance and efficiency and require no corrective maintenance. Reliability demonstration testing for the ASC was found to be very limited due to schedule and resource constraints. Reliability demonstration must involve the application of analysis, system- and component-level testing, and simulation models, taken collectively. Therefore, computer simulation with limited test-data verification is a viable approach to assess the reliability of ASC components. This approach is based on physics-of-failure mechanisms and involves the relationships among the design variables based on physics, mechanics, material behavior models, and the interaction of different components and their respective disciplines such as structures, materials, fluids, thermal, mechanical, electrical, etc. In addition, these models are based on the available test data and can be updated, and the analysis refined, as more data and information become available. The failure mechanisms and causes of failure are included in the analysis, especially in light of new information, in order to develop guidelines to improve design reliability and better operating controls to reduce the probability of failure. Quantified reliability assessment based on the fundamental physical behavior of components and their relationships with other components has demonstrated itself to be superior to conventional reliability approaches that utilize failure rates derived from similar equipment or simply expert judgment.
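The physics-based simulation idea above can be sketched as a Monte Carlo stress/strength analysis; the distributions and numbers below are illustrative assumptions, not ASC data.

```python
import numpy as np

# Sketch of simulation-based reliability quantification (illustrative only):
# sample physical design variables, propagate them through a simple
# stress/strength limit state, and estimate the probability of failure.
rng = np.random.default_rng(42)
n = 100_000
strength = rng.normal(500.0, 40.0, n)   # MPa, assumed material strength model
stress = rng.normal(320.0, 50.0, n)     # MPa, assumed operating load distribution
p_fail = float(np.mean(stress >= strength))
print(p_fail)  # estimated probability of failure over the mission
```

As the abstract notes, such distributions would be updated as test data accumulate, refining the failure-probability estimate.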
A Collapsar Model with Disk Wind: Implications for Supernovae Associated with Gamma-Ray Bursts
NASA Astrophysics Data System (ADS)
Hayakawa, Tomoyasu; Maeda, Keiichi
2018-02-01
We construct a simple but self-consistent collapsar model for gamma-ray bursts (GRBs) and SNe associated with GRBs (GRB-SNe). Our model includes a black hole, an accretion disk, and the envelope surrounding the central system. The evolutions of the different components are connected by the transfer of the mass and angular momentum. To address properties of the jet and the wind-driven SNe, we consider competition of the ram pressure from the infalling envelope and those from the jet and wind. The expected properties of the GRB jet and the wind-driven SN are investigated as a function of the progenitor mass and angular momentum. We find two conditions that should be satisfied if the wind-driven explosion is to explain the properties of the observed GRB-SNe: (1) the wind should be collimated at its base, and (2) it should not prevent further accretion even after the launch of the SN explosion. Under these conditions, some relations seen in the properties of the GRB-SNe could be reproduced by a sequence of different angular momentum in the progenitors. Only the model with the largest angular momentum could explain the observed (energetic) GRB-SNe, and we expect that the collapsar model can result in a wide variety of observational counterparts, mainly depending on the angular momentum of the progenitor star.
Modeling the Personal Health Ecosystem.
Blobel, Bernd; Brochhausen, Mathias; Ruotsalainen, Pekka
2018-01-01
Complex ecosystems such as pHealth combine different domains represented by a huge variety of actors (human beings, organizations, devices, applications, components) that belong to different policy domains, come from different disciplines, deploy different methodologies, terminologies, and ontologies, offer different levels of knowledge, skills, and experience, and act in different scenarios, accommodating different business cases to meet the intended business objectives. For correctly modeling such systems, a system-oriented, architecture-centric, ontology-based, policy-driven approach following established good modeling practices is indispensable. However, most of the existing standards, specifications and tools for describing, representing, implementing and managing health (information) systems reflect the advancement of information and communication technology (ICT) represented by different evolutionary levels of data modeling. The paper presents a methodology for integrating, adopting and advancing models, standards, specifications as well as implemented systems and components on the way towards the aforementioned ultimate approach, thereby meeting the challenge we face when transforming health systems towards ubiquitous, personalized, predictive, preventive, participative, and cognitive health and social care.
Projections of extreme water level events for atolls in the western Tropical Pacific
NASA Astrophysics Data System (ADS)
Merrifield, M. A.; Becker, J. M.; Ford, M.; Yao, Y.
2014-12-01
Conditions that lead to extreme water levels and coastal flooding are examined for atolls in the Republic of the Marshall Islands based on a recent field study of wave transformations over fringing reefs, tide gauge observations, and wave model hindcasts. Wave-driven water level extremes pose the largest threat to atoll shorelines, with coastal levels scaling as approximately one-third of the incident breaking wave height. The wave-driven coastal water level is partitioned into a mean setup, low frequency oscillations associated with cross-reef quasi-standing modes, and wind waves that reach the shore after undergoing high dissipation due to breaking and bottom friction. All three components depend on the water level over the reef; however, the sum of the components is independent of water level due to cancelling effects. Wave hindcasts suggest that wave-driven water level extremes capable of coastal flooding are infrequent events that require a peak wave event to coincide with mid- to high-tide conditions. Interannual and decadal variations in sea level do not change the frequency of these events appreciably. Future sea-level rise scenarios significantly increase the flooding threat associated with wave events, with a nearly exponential increase in flooding days per year as sea level exceeds 0.3 to 1.0 m above current levels.
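The scaling reported above can be illustrated with a toy calculation: coastal water level as roughly one-third of the incident breaking wave height added to the tide, with flooding only when a large-wave event coincides with high water. All series and thresholds below are synthetic assumptions, not the paper's hindcast data.

```python
import numpy as np

# Toy illustration of wave-driven flooding frequency under sea-level rise.
rng = np.random.default_rng(0)
days = 365
tide = 0.8 * np.sin(2 * np.pi * np.arange(days) / 14.3)  # m, spring-neap proxy
hb = rng.gamma(shape=2.0, scale=1.0, size=days)          # m, breaking wave height
coastal = tide + hb / 3.0                                # m above mean sea level

threshold = 2.0                                          # m, assumed flooding level
counts = []
for slr in (0.0, 0.3, 1.0):                              # sea-level rise, m
    counts.append(int(np.sum(coastal + slr > threshold)))
print(counts)  # flooding days per year grow sharply with sea-level rise
```

Even this crude sketch reproduces the qualitative result: modest sea-level rise greatly increases how often wave events exceed the flooding threshold.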
Rational design of capillary-driven flows for paper-based microfluidics.
Elizalde, Emanuel; Urteaga, Raúl; Berli, Claudio L A
2015-05-21
The design of paper-based assays that integrate passive pumping requires a precise programming of the fluid transport, which has to be encoded in the geometrical shape of the substrate. This requirement becomes critical in multiple-step processes, where fluid handling must be accurate and reproducible for each operation. The present work theoretically investigates the capillary imbibition in paper-like substrates to better understand fluid transport in terms of the macroscopic geometry of the flow domain. A fluid dynamic model was derived for homogeneous porous substrates with arbitrary cross-sectional shapes, which allows one to determine the cross-sectional profile required for a prescribed fluid velocity or mass transport rate. An extension of the model to slit microchannels is also demonstrated. Calculations were validated by experiments with prototypes fabricated in our lab. The proposed method constitutes a valuable tool for the rational design of paper-based assays.
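A lumped version of the imbibition dynamics described above can be sketched numerically; the model below is our own simplification (constants folded into a single coefficient C), not the paper's full derivation. For a homogeneous strip of width profile w(x), Darcy flow plus flux conservation gives dl/dt = C / ( w(l) * int_0^l dx / w(x) ), which recovers the classical Lucas-Washburn scaling l(t) ~ sqrt(t) for uniform width.

```python
import numpy as np

# Numerical sketch of capillary imbibition with an arbitrary width profile.
def imbibition(w, C=1.0, dt=1e-4, t_end=1.0, l0=1e-3):
    l, t = l0, 0.0
    integ = l0 / w(0.0)          # running approximation of int_0^l dx / w(x)
    hist = []
    while t < t_end:
        dl = dt * C / (w(l) * integ)   # front advance this step
        integ += dl / w(l)
        l += dl
        t += dt
        hist.append((t, l))
    return np.array(hist)

uniform = imbibition(lambda x: 1.0)
t_f, l_f = uniform[-1]
print(l_f, np.sqrt(2 * t_f))     # Washburn scaling check for a uniform strip
```

Passing a non-uniform `w` (e.g. a widening trumpet shape) changes the front dynamics, which is exactly the geometric programming of flow the paper exploits.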
NASA Astrophysics Data System (ADS)
Mozumder, Chandan K.
The objective in crashworthiness design is to generate plastically deformable, energy-absorbing structures that can satisfy a prescribed force-displacement (FD) response. The FD behavior determines the reaction force, displacement, and internal energy that the structure should withstand. However, attempts to include this requirement in structural optimization problems remain scarce. Existing commercial optimization tools use models under static loading conditions because of the complexities associated with dynamic/impact loading. Due to the complexity of a crash event and the consequent time required to numerically analyze the dynamic response of the structure, classical methods (i.e., gradient-based and direct) are not well suited to this undertaking. This work presents an approach under the framework of the hybrid cellular automaton (HCA) method to address the above challenge. The HCA method has been successfully applied to nonlinear transient topology optimization for crashworthiness design. In this work, the HCA algorithm is used to develop an efficient methodology for synthesizing shell-based sheet metal structures with an optimal material thickness distribution under a dynamic loading event using topometry optimization. The method combines the cellular automata (CA) computing paradigm with nonlinear transient finite element analysis (FEA) via LS-DYNA. A set of field variables is driven to target states by changing a convenient set of design variables (e.g., thickness). The rules operate locally on cells within a lattice, each of which knows only local conditions. The field variables associated with the cells are driven to a setpoint to obtain the desired structure. This methodology is used to design structures with controlled energy absorption and specified buckling zones.
The peak reaction force and the maximum displacement are also constrained to meet the desired safety level according to passenger safety regulations. Design for prescribed FD response by minimizing the error between the actual response and desired FD curve is implemented. With the use of HCA rules, manufacturability constraints (e.g., rolling) and structures which can be manufactured by special techniques, such as, tailor-welded blanks (TWB), have also been implemented. This methodology is applied to shock-absorbing structural components for passengers in a crashing vehicle. These results are compared to previous designs showing the benefits of the method introduced in this work.
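The local HCA rule described above can be sketched schematically; this is an illustrative proportional update (the coupling to transient FEA and the neighborhood averaging of the field are omitted), not the dissertation's actual rule.

```python
import numpy as np

# Schematic HCA local rule: each cell scales its design variable (thickness)
# so that its field variable (e.g. internal energy density) approaches a
# setpoint. Gains, bounds, and the mock field values are assumptions.
def hca_step(thickness, energy_density, setpoint, t_min=0.5, t_max=4.0, gain=0.5):
    # Proportional rule: cells above the setpoint thicken, cells below thin.
    error = (energy_density - setpoint) / setpoint
    new_t = thickness * (1.0 + gain * error)
    return np.clip(new_t, t_min, t_max)  # manufacturability bounds on thickness

t = np.full(8, 2.0)                                       # uniform start, mm
e = np.array([0.5, 0.8, 1.0, 1.6, 2.0, 1.2, 0.7, 0.4])    # mock FEA field
print(hca_step(t, e, setpoint=1.0))
```

In the full method, each update is followed by a new nonlinear transient FEA run, and the loop repeats until the field reaches the setpoint everywhere.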
This provides an overview of a novel open-source conceptual model of molecular and biochemical pathways involved in the regulation of fish reproduction. Further, it provides concrete examples of how such models can be used to design and conduct hypothesis-driven "omics" experim...
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maxi-min objective function was adopted to maximize the minimum detectability index throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements, prescribing more fluence to the more attenuating central region of the phantom. Compared with all other strategies, the task-driven FFM strategy not only improved the minimum detectability index by at least 17.8% but also yielded higher detectability over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on the computed detectability index. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose or, equivalently, to provide a similar level of performance at reduced dose.
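The maxi-min objective above can be illustrated with a toy random search over fluence-basis weights; the linear region-sensitivity matrix standing in for the reconstruction-based detectability index, and the omission of the alternation with regularization, are both simplifying assumptions.

```python
import numpy as np

# Toy maxi-min sketch: choose fluence-basis weights on a fixed budget that
# maximize the minimum "detectability" over image regions. The sensitivity
# matrix A is synthetic, not derived from a reconstruction model.
A = np.random.default_rng(1).uniform(0.1, 1.0, size=(6, 3))  # region x basis
budget = 1.0                                   # total fluence constraint

best_w, best_val = None, -np.inf
for w in np.random.default_rng(2).dirichlet(np.ones(3), size=2000) * budget:
    val = float((A @ w).min())                 # minimum detectability over regions
    if val > best_val:
        best_w, best_val = w, val
print(best_w, best_val)
```

The real algorithm replaces this random search with an alternating optimization over the FFM basis weights and the local regularization parameters.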
Design, analysis, operation, and advanced control of hybrid renewable energy systems
NASA Astrophysics Data System (ADS)
Whiteman, Zachary S.
Because using non-renewable energy systems (e.g., coal-powered co-generation power plants) to generate electricity is an unsustainable, environmentally hazardous practice, it is important to develop cost-effective and reliable renewable energy systems, such as photovoltaics (PVs), wind turbines (WTs), and fuel cells (FCs). Non-renewable energy systems, however, are currently less expensive than individual renewable energy systems (IRESs). Furthermore, IRESs based on intermittent natural resources (e.g., solar irradiance and wind) are incapable of meeting continuous energy demands. Such shortcomings can be mitigated by judiciously combining two or more complementary IRESs to form a hybrid renewable energy system (HRES). Although previous research efforts focused on the design, operation, and control of HRESs has proven useful, no prior HRES research endeavor has taken a systematic and comprehensive approach towards establishing guidelines by which HRESs should be designed, operated, and controlled. The overall goal of this dissertation, therefore, is to establish the principles governing the design, operation, and control of HRESs resulting in cost-effective and reliable energy solutions for stationary and mobile applications. To achieve this goal, we developed and demonstrated four separate HRES principles. Rational selection of HRES type: HRES components and their sizes should be rationally selected using knowledge of component costs, availability of renewable energy resources, and expected power demands of the application. HRES design: by default, the components of a HRES should be arranged in parallel for increased efficiency and reliability. However, a series HRES design may be preferred depending on the operational considerations of the HRES components. HRES control strategy selection: the choice of HRES control strategy depends on the dynamics of HRES components, their operational considerations, and the practical limitations of the HRES end-use. 
HRES data-driven control: information-rich data should be used to assist in the intelligent coordination of HRES components in meeting its operating objectives when additional computation can be afforded and significant benefits can be realized.
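The "rational selection" principle above — combining complementary sources so the hybrid covers a demand profile that neither source covers alone — can be illustrated with a minimal sketch. All component types, profiles, and numbers below are invented for illustration and are not taken from the dissertation:

```python
# Toy illustration of the rational-selection principle: two hypothetical
# complementary sources (midday-peaking PV and a constant-output fuel cell)
# jointly cover an hourly demand that neither covers alone.

def unmet_hours(supply_profiles, demand):
    """Count hours where combined supply falls short of demand."""
    return sum(
        1 for hour, need in enumerate(demand)
        if sum(p[hour] for p in supply_profiles) < need
    )

# Hypothetical 6-hour power profiles (kW).
pv = [0.0, 2.0, 4.0, 4.0, 2.0, 0.0]   # solar: zero at night
fc = [1.5] * 6                        # fuel cell: steady baseload
demand = [1.0, 1.0, 3.0, 3.0, 1.0, 1.0]

assert unmet_hours([pv], demand) > 0       # PV alone fails at night
assert unmet_hours([fc], demand) > 0       # FC alone fails at the midday peak
assert unmet_hours([pv, fc], demand) == 0  # the hybrid covers every hour
```

A real sizing study would replace this feasibility check with a cost-minimizing optimization over component sizes, per the component-cost knowledge the principle calls for.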
NASA Astrophysics Data System (ADS)
Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon
2016-03-01
In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing-induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase in metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling, especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needed. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell-based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay-based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology, which drives down overall metrology fab footprint and lithography cycle time.
Sizing Power Components of an Electrically Driven Tail Cone Thruster and a Range Extender
NASA Technical Reports Server (NTRS)
Jansen, Ralph H.; Bowman, Cheryl; Jankovsky, Amy
2016-01-01
The aeronautics industry has been challenged on many fronts to increase efficiency, reduce emissions, and decrease dependency on carbon-based fuels. The NASA Aeronautics Research Mission Directorate has identified a suite of investments to meet long-term research demands beyond the purview of commercial investment. Electrification of aviation propulsion through turboelectric or hybrid electric propulsion is one of many exciting research areas with the potential to revolutionize the aviation industry. This paper will provide an overview of the turboelectric and hybrid electric technologies being developed under NASA's Advanced Air Transportation Technology (AATT) Project, and how these technologies can impact vehicle design. An overview will be presented of vehicle system studies and the electric drive system assumptions for successful turboelectric and hybrid electric propulsion in single-aisle commercial aircraft. Key performance parameters for electric drive system technologies will be reviewed, and the technical investment made in materials, electric machines, power electronics, and integrated power systems will be discussed. Finally, power components for a single-aisle turboelectric aircraft with an electrically driven tail cone thruster and a hybrid electric nine-passenger aircraft with a range extender will be parametrically sized.
Performance Analysis of Stirling Engine-Driven Vapor Compression Heat Pump System
NASA Astrophysics Data System (ADS)
Kagawa, Noboru
Stirling engine-driven vapor compression systems have many unique advantages, including higher thermal efficiencies, preferable exhaust gas characteristics, multi-fuel usage, and low noise and vibration, which can play an important role in alleviating environmental and energy problems. This paper introduces a design method for such systems based on reliable mathematical models of the Stirling and Rankine cycles using reliable thermophysical property data for refrigerants. The model deals with a combination of a kinematic Stirling engine and a scroll compressor. Some experimental coefficients are used to formulate the model. The obtained results show the performance behavior in detail. The measured performance of the actual system coincides with the calculated results. Furthermore, the calculated results clarify the performance using alternative refrigerants for R-22.
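A common first-order figure of merit for engine-driven heat pumps (not a formula taken from this paper) is the primary energy ratio: useful thermal output per unit of fuel energy, i.e., the product of engine thermal efficiency and the vapor-compression COP. A minimal sketch, with invented component values and exhaust-heat recovery neglected:

```python
def primary_energy_ratio(engine_efficiency, compressor_cop):
    """Heating (or cooling) output per unit fuel energy for an
    engine-driven vapor-compression heat pump; exhaust-heat recovery
    is neglected in this simplified sketch."""
    return engine_efficiency * compressor_cop

# Hypothetical values: 30% Stirling engine efficiency, cycle COP of 4.
per = primary_energy_ratio(0.30, 4.0)
assert abs(per - 1.2) < 1e-9  # >1: more heat delivered than fuel energy spent
```

Recovering engine waste heat, one of the advantages the abstract cites, would add a further term to this ratio.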
A Single Chip VLSI Implementation of a QPSK/SQPSK Demodulator for a VSAT Receiver Station
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; King, Brent
1995-01-01
This thesis presents a VLSI implementation of a QPSK/SQPSK demodulator. It is designed to be employed in a VSAT earth station that utilizes the FDMA/TDM link. A single chip architecture is used to enable this chip to be easily employed in the VSAT system. This demodulator contains lowpass filters, integrate and dump units, unique word detectors, a timing recovery unit, a phase recovery unit and a down conversion unit. The design stages start with a functional representation of the system by using the C programming language. Then it progresses into a register based representation using the VHDL language. The layout components are designed based on these VHDL models and simulated. Component generators are developed for the adder, multiplier, read-only memory and serial access memory in order to shorten the design time. These sub-components are then block routed to form the main components of the system. The main components are block routed to form the final demodulator.
NASA Astrophysics Data System (ADS)
Halimah, B. Z.; Azlina, A.; Sembok, T. M.; Sufian, I.; Sharul Azman, M. N.; Azuraliza, A. B.; Zulaiha, A. O.; Nazlia, O.; Salwani, A.; Sanep, A.; Hailani, M. T.; Zaher, M. Z.; Azizah, J.; Nor Faezah, M. Y.; Choo, W. O.; Abdullah, Chew; Sopian, B.
The Holistic Islamic Banking System (HiCORE), a banking system suitable for a virtual banking environment, was created through a university-industry collaboration between Universiti Kebangsaan Malaysia (UKM) and Fuziq Software Sdn Bhd. HiCORE was modeled on a multi-tiered Simple Services-Oriented Architecture (S-SOA), using a parameter-based semantic approach. HiCORE's existence is timely, as the financial world is looking for a new approach to creating banking and financial products that are interest-free or based on Islamic Syariah principles and jurisprudence. Interest-free banking has currently caught the interest of bankers and financiers all over the world. HiCORE's parameter-based module houses the Customer Information File (CIF), Deposit, and Financing components and represents the third tier of the multi-tiered Simple SOA approach. This paper highlights the multi-tiered, parameter-driven approach to the creation of new Islamic products based on the 'dalil' (Quran), 'syarat' (rules) and 'rukun' (procedures) required by Syariah principles and jurisprudence, as reflected by the semantic ontology embedded in the parameter module of the system.
Computational electronics and electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shang, C. C.
The Computational Electronics and Electromagnetics thrust area at Lawrence Livermore National Laboratory serves as the focal point for engineering R&D activities in developing computer-based design, analysis, and theory tools. Key representative applications include design of particle accelerator cells and beamline components; engineering analysis and design of high-power components; photonics and optoelectronics circuit design; EMI susceptibility analysis; and antenna synthesis. The FY-96 technology-base effort focused code development on (1) accelerator design codes; (2) 3-D massively parallel, object-oriented time-domain EM codes; (3) material models; (4) coupling and application of engineering tools for analysis and design of high-power components; (5) 3-D spectral-domain CEM tools; and (6) enhancement of laser drilling codes. Joint efforts with the Power Conversion Technologies thrust area include development of antenna systems for compact, high-performance radar, in addition to novel, compact Marx generators. 18 refs., 25 figs., 1 tab.
Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Peissig, Peggy L; Denny, Joshua C; Kho, Abel N; Miller, Aaron; Pathak, Jyotishman
2012-01-01
The development of Electronic Health Record (EHR)-based phenotype selection algorithms is a non-trivial and highly iterative process involving domain experts and informaticians. To make it easier to port algorithms across institutions, it is desirable to represent them using an unambiguous formal specification language. For this purpose we evaluated the recently developed National Quality Forum (NQF) information model designed for EHR-based quality measures: the Quality Data Model (QDM). We selected 9 phenotyping algorithms that had been previously developed as part of the eMERGE consortium and translated them into QDM format. Our study concluded that the QDM contains several core elements that make it a promising format for EHR-driven phenotyping algorithms for clinical research. However, we also found areas in which the QDM could be usefully extended, such as representing information extracted from clinical text, and the ability to handle algorithms that do not consist of Boolean combinations of criteria.
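The "Boolean combinations of criteria" that the QDM handles well can be illustrated with a small sketch. The criteria, codes, and thresholds below are hypothetical, invented for illustration, and are not drawn from any eMERGE algorithm or the QDM specification:

```python
# Hedged sketch of Boolean-combination phenotype logic of the kind an
# EHR-driven selection algorithm encodes. All codes/thresholds are made up.

def has_phenotype(patient):
    diabetes_dx = any(code.startswith("250") for code in patient["icd9"])
    on_metformin = "metformin" in patient["medications"]
    high_glucose = any(v > 200 for v in patient["glucose_mg_dl"])
    # Hypothetical case definition: diagnosis AND (medication OR lab evidence).
    return diabetes_dx and (on_metformin or high_glucose)

case = {"icd9": ["250.00"], "medications": ["metformin"], "glucose_mg_dl": [150]}
control = {"icd9": ["401.9"], "medications": [], "glucose_mg_dl": [90]}

assert has_phenotype(case) is True
assert has_phenotype(control) is False
```

The extensions the study calls for — e.g., criteria over concepts extracted from clinical text — are exactly what such purely Boolean structures cannot express.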
Statistical and engineering methods for model enhancement
NASA Astrophysics Data System (ADS)
Chang, Chia-Jung
Models that describe the performance of a physical process are essential for quality prediction, experimental planning, process control, and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancies between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely from the data generated by the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies comprise two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring, and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named “Minimal Adjustment,” which brings the physical model closer to the data by making minimal changes to it.
This is achieved by approximating the GP model with a linear regression model and then applying simultaneous variable selection over the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. In contrast to enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymers, which quantitatively represents the nanomaterial quality presented through image data. The model parameters are estimated with the Bayesian MCMC technique to overcome the challenge of a limited amount of accessible data due to time-consuming sampling schemes. The second application statistically calibrates the engineering-driven force models of the laser-assisted micro-milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes. In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval.
To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and then applied to various applications. These research activities produced engineering-compliant models for adequate system predictions based on observational data with complex variable relationships and uncertainty, which facilitate process planning, monitoring, and real-time control.
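The core data-driven enhancement idea — keep the physics-based prediction and statistically model its residual discrepancy — can be sketched minimally. This is not the dissertation's Minimal Adjustment procedure (which approximates a GP and performs variable selection); it is a toy version using ordinary least squares on invented data:

```python
# Minimal sketch of data-driven model enhancement: fit a simple linear
# model to the residuals of a physics-based prediction and add it back
# as a correction. Physics model and observations are invented.

def physics_model(x):
    return 2.0 * x  # hypothetical first-principles prediction

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Observations contain a systematic offset the physics model misses:
# the true process here is 2*x + 0.5.
xs = [0.0, 1.0, 2.0, 3.0]
obs = [0.5, 2.5, 4.5, 6.5]

residuals = [y - physics_model(x) for x, y in zip(xs, obs)]
a, b = fit_line(xs, residuals)

def enhanced(x):
    """Physics prediction plus the statistically estimated discrepancy."""
    return physics_model(x) + a + b * x

assert abs(enhanced(4.0) - 8.5) < 1e-9  # corrected extrapolation
```

The dissertation's point is precisely that such corrections should be kept minimal: an overly flexible residual model (e.g., an unconstrained GP) can absorb systematic experimental error into a needlessly complex adjustment.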
NASA Astrophysics Data System (ADS)
Okada, S.; Sunaga, H.; Kaneko, H.; Takizawa, H.; Kawasuso, A.; Yotsumoto, K.; Tanaka, R.
1999-06-01
The Positron Factory has been planned at the Japan Atomic Energy Research Institute (JAERI). The factory is expected to produce linac-based monoenergetic positron beams with world-leading intensities of more than 10^10 e+/s, which will be applied to R&D in materials science, biotechnology, and basic physics and chemistry. In this article, results of the design studies are presented for the following essential components of the facilities: 1) conceptual design of a high-power electron linac with 100 MeV beam energy and 100 kW averaged beam power; 2) performance tests of the RF window in the high-power klystron and of the electron beam window; 3) development of a self-driven rotating electron-to-positron converter and its performance tests; 4) proposal of a multi-channel beam generation system for monoenergetic positrons, with a series of moderator assemblies based on a newly developed Monte Carlo simulation and the demonstrative experiment; 5) proposal of highly efficient moderator structures; and 6) conceptual design of a local shield to suppress the surrounding radiation and activation levels.
Live Speech Driven Head-and-Eye Motion Generators.
Le, Binh H; Ma, Xiaohan; Deng, Zhigang
2012-11-01
This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and a gradient descent optimization algorithm are employed to generate head motion from speech features; 2) a Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features; and 3) nonnegative linear regression is used to model voluntary eyelid motion, and a log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.
Research needs for developing a commodity-driven freight modeling approach.
DOT National Transportation Integrated Search
2003-01-01
It is well known that better freight forecasting models and data are needed, but the literature does not clearly indicate which components of the modeling methodology are most in need of improvement, which is a critical need in an era of limited rese...
NASA Astrophysics Data System (ADS)
Siettos, C. I.; Gear, C. W.; Kevrekidis, I. G.
2012-08-01
We show how the equation-free approach can be exploited to enable agent-based simulators to perform system-level computations such as bifurcation, stability analysis and controller design. We illustrate these tasks through an event-driven agent-based model describing the dynamic behaviour of many interacting investors in the presence of mimesis. Using short bursts of appropriately initialized runs of the detailed, agent-based simulator, we construct the coarse-grained bifurcation diagram of the (expected) density of agents and investigate the stability of its multiple solution branches. When the mimetic coupling between agents becomes strong enough, the stable stationary state loses its stability at a coarse turning point bifurcation. We also demonstrate how the framework can be used to design a wash-out dynamic controller that stabilizes open-loop unstable stationary states even under model uncertainty.
Mukumbang, Ferdinand C; Van Belle, Sara; Marchal, Bruno; van Wyk, Brian
2017-08-25
It is increasingly acknowledged that differentiated care models hold potential to manage large volumes of patients on antiretroviral therapy (ART). Various group-based models of ART service delivery aimed at decongesting local health facilities, encouraging patient retention in care, and enhancing adherence to medication have been implemented across sub-Saharan Africa. Evidence from the literature suggests that these models of ART service delivery are more effective than corresponding facility-based care and superior to individual-based models. Nevertheless, there is little understanding of how these care models work to achieve their intended outcomes. The aim of this study was to review the theories explicating how and why group-based ART models work using a realist evaluation framework. A systematic review of the literature on group-based ART support models in sub-Saharan Africa was conducted. We searched the Google Scholar and PubMed databases and supplemented these with a reference chase of the identified articles. We applied a theory-driven approach, narrative synthesis, to synthesise the data. Data were analysed using the thematic content analysis method and synthesised according to aspects of the Intervention-Context-Actor-Mechanism-Outcome heuristic, a realist evaluation theory-building tool. Twelve articles reporting primary studies on group-based models of ART service delivery were included in the review. The six studies that employed a quantitative study design failed to identify aspects of the context and mechanisms that work to trigger the outcomes of group-based models. While the four studies that applied a qualitative design and the two that used a mixed-methods design identified some of the aspects of the context and mechanisms that could trigger the outcomes of group-based ART models, these studies did not explain the relationship(s) between the theory elements and how they interact to produce the outcome(s).
Although we could distill various components of the Intervention-Context-Actor-Mechanism-Outcome analytic tool from the different studies exploring group-based programmes, we could not identify a salient programme theory based on the Intervention-Context-Actor-Mechanism-Outcome heuristic analysis. The scientific community, policy makers, and programme implementers would benefit more if explanatory findings of how, why, for whom, and in what circumstances programmes work were presented, rather than just reports on the outcomes of the interventions.
Evaluation of Anomaly Detection Capability for Ground-Based Pre-Launch Shuttle Operations. Chapter 8
NASA Technical Reports Server (NTRS)
Martin, Rodney Alexander
2010-01-01
This chapter will provide a thorough end-to-end description of the process for evaluating three different data-driven algorithms for anomaly detection, to select the best candidate for deployment as part of a suite of IVHM (Integrated Vehicle Health Management) technologies. These algorithms were deemed sufficiently mature to be considered viable candidates for deployment in support of the maiden launch of Ares I-X, the successor to the Space Shuttle for NASA's Constellation program. Data-driven algorithms are just one of three different types being deployed; the other two are a "rule-based" expert system and a "model-based" system. Within these two categories, the deployable candidates have already been selected based upon qualitative factors such as flight heritage. For the rule-based system, SHINE (Spacecraft High-speed Inference Engine) has been selected for deployment; it is a component of BEAM (Beacon-based Exception Analysis for Multimissions), a patented technology developed at NASA's JPL (Jet Propulsion Laboratory), and serves to aid in the management and identification of operational modes. For the "model-based" system, a commercially available package developed by QSI (Qualtech Systems, Inc.), TEAMS (Testability Engineering and Maintenance System), has been selected for deployment to aid in diagnosis. In the context of this particular deployment, distinctions among the use of the terms "data-driven," "rule-based," and "model-based" can be found in. Although three different categories of algorithms have been selected for deployment, our main focus in this chapter will be on the evaluation of three candidates for data-driven anomaly detection. These algorithms will be evaluated on their capability for robustly detecting incipient faults or failures in the ground-based phase of pre-launch space shuttle operations, rather than on heritage as in previous studies.
Robust detection will allow for the achievement of pre-specified minimum false alarm and/or missed detection rates in the selection of alert thresholds. All algorithms will also be optimized with respect to an aggregation of these same criteria. Our study relies upon the use of Shuttle data to act as a proxy for, and in preparation for application to, Ares I-X data, which uses a very similar hardware platform for the subsystems being targeted (TVC, the Thrust Vector Control subsystem for the SRB (Solid Rocket Booster)).
Electrically Driven Liquid Film Boiling Experiment
NASA Technical Reports Server (NTRS)
Didion, Jeffrey R.
2016-01-01
This presentation describes the science background and ground-based results that form the basis of the Electrically Driven Liquid Film Boiling Experiment, an ISS experiment manifested for 2021. Objective: characterize the effects of gravity on the interaction of electric and flow fields in the presence of phase change, specifically pertaining to: a) the effects of microgravity on the electrically generated two-phase flow; and b) the effects of microgravity on electrically driven liquid film boiling (including extreme heat fluxes). Electro-wetting of the boiling section will repel the bubbles away from the heated surface in the microgravity environment. Relevance/Impact: provides a phenomenological foundation for the development of electric-field-based two-phase thermal management systems leveraging EHD, permitting optimization of heat transfer surface-area-to-volume ratios as well as achievement of high heat transfer coefficients, thus resulting in system mass and volume savings. EHD replaces buoyancy- or flow-driven bubble removal from the heated surface. Development Approach: conduct preliminary experiments in low-gravity and ground-based facilities to refine the technique and obtain preliminary data for model development. The ISS environment is required to characterize the electro-wetting effect on nucleate boiling and CHF in the absence of gravity. The experiment will operate in the FIR (Fluids Integrated Rack) and is designed for autonomous operation.
Design challenges in nanoparticle-based platforms: Implications for targeted drug delivery systems
NASA Astrophysics Data System (ADS)
Mullen, Douglas Gurnett
Characterization and control of heterogeneous distributions of nanoparticle-ligand components are major design challenges for nanoparticle-based platforms. This dissertation begins with an examination of a poly(amidoamine) (PAMAM) dendrimer-based targeted delivery platform. A folic-acid-targeted modular platform was developed to target human epithelial cancer cells. Although active targeting was observed in vitro, active targeting was not found in vivo using a mouse tumor model. A major flaw of this platform design was that it did not provide for characterization or control of the component distribution. Motivated by the problems experienced with the modular design, the actual composition of nanoparticle-ligand distributions was examined using a model dendrimer-ligand system. High-pressure liquid chromatography (HPLC) resolved the distribution of components in samples with mean ligand/dendrimer ratios ranging from 0.4 to 13. A peak-fitting analysis enabled quantification of the component distribution. Quantified distributions were found to be significantly more heterogeneous than commonly expected, and standard analytical parameters, namely the mean ligand/nanoparticle ratio, failed to adequately represent the component heterogeneity. The distribution of components was also found to be sensitive to particle modifications that preceded the ligand conjugation. With the knowledge gained from this detailed distribution analysis, a new platform design was developed to provide a system with dramatically improved control over the number of components and with improved batch reproducibility. Using semi-preparative HPLC, individual dendrimer-ligand components were isolated. The isolated dendrimers with precise numbers of ligands were characterized by NMR and analytical HPLC. In total, nine different dendrimer-ligand components were obtained with degrees of purity ≥80%.
This system has the potential to serve as a platform to which a precise number of functional molecules can be attached and has the potential to dramatically improve platform efficacy. An additional investigation of reproducibility challenges for current dendrimer-based platform designs is also described. The mass transport quality during the partial acetylation reaction of the dendrimer was found to have a major impact on subsequent dendrimer-ligand distributions that cannot be detected by standard analytical techniques. Consequently, this reaction should be eliminated from the platform design. Finally, optimized protocols for purification and characterization of PAMAM dendrimer were detailed.
Electric-hybrid-vehicle simulation
NASA Astrophysics Data System (ADS)
Pasma, D. C.
The simulation of electric hybrid vehicles is to be performed using experimental data to model propulsion system components. The performance of an existing ac propulsion system will be used as the baseline for comparative purposes. Hybrid components to be evaluated include electrically and mechanically driven flywheels, and an elastomeric regenerative braking system.
Design of pressure-driven microfluidic networks using electric circuit analogy.
Oh, Kwang W; Lee, Kangsun; Ahn, Byungwook; Furlani, Edward P
2012-02-07
This article reviews the application of electric circuit methods for the analysis of pressure-driven microfluidic networks with an emphasis on concentration- and flow-dependent systems. The application of circuit methods to microfluidics is based on the analogous behaviour of hydraulic and electric circuits with correlations of pressure to voltage, volumetric flow rate to current, and hydraulic to electric resistance. Circuit analysis enables rapid predictions of pressure-driven laminar flow in microchannels and is very useful for designing complex microfluidic networks in advance of fabrication. This article provides a comprehensive overview of the physics of pressure-driven laminar flow, the formal analogy between electric and hydraulic circuits, applications of circuit theory to microfluidic network-based devices, recent development and applications of concentration- and flow-dependent microfluidic networks, and promising future applications. The lab-on-a-chip (LOC) and microfluidics community will gain insightful ideas and practical design strategies for developing unique microfluidic network-based devices to address a broad range of biological, chemical, pharmaceutical, and other scientific and technical challenges.
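The analogy the article reviews — pressure as voltage, volumetric flow rate as current, hydraulic resistance as electric resistance — means series and parallel composition rules carry over directly. A minimal sketch using the standard Hagen-Poiseuille resistance for a circular channel; the geometry and fluid values are illustrative only:

```python
import math

# Hydraulic-electric circuit analogy: Q = dP / R_h, with series/parallel
# composition identical to resistor networks.

def series(*resistances):
    return sum(resistances)

def parallel(*resistances):
    return 1.0 / sum(1.0 / r for r in resistances)

def circular_channel_resistance(mu, length, radius):
    """Hagen-Poiseuille hydraulic resistance of a circular channel [Pa*s/m^3]."""
    return 8.0 * mu * length / (math.pi * radius ** 4)

# Illustrative network: one inlet channel feeding two identical branches.
mu = 1.0e-3                                            # water viscosity, Pa*s
r_inlet = circular_channel_resistance(mu, 0.01, 50e-6)
r_branch = circular_channel_resistance(mu, 0.02, 50e-6)
r_total = series(r_inlet, parallel(r_branch, r_branch))

dp = 1000.0          # applied pressure drop, Pa
q = dp / r_total     # total volumetric flow rate, m^3/s
assert q > 0
# Identical parallel branches split the flow evenly, halving the resistance:
assert abs(parallel(r_branch, r_branch) - r_branch / 2) < 1e-6 * r_branch
```

This is the design workflow the review describes: predict flows and pressure drops across a candidate network by circuit analysis before committing to fabrication.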
Evaluating model accuracy for model-based reasoning
NASA Technical Reports Server (NTRS)
Chien, Steve; Roden, Joseph
1992-01-01
Described here is an approach to automatically assessing the accuracy of various components of a model. In this approach, actual data from the operation of a target system is used to drive statistical measures to evaluate the prediction accuracy of various portions of the model. We describe how these statistical measures of model accuracy can be used in model-based reasoning for monitoring and design. We then describe the application of these techniques to the monitoring and design of the water recovery system of the Environmental Control and Life Support System (ECLSS) of Space Station Freedom.
Rugel, Emily J; Henderson, Sarah B; Carpiano, Richard M; Brauer, Michael
2017-11-01
Natural spaces can provide psychological benefits to individuals, but population-level epidemiologic studies have produced conflicting results. Refining current exposure-assessment methods is necessary to advance our understanding of population health and to guide the design of health-promoting urban forms. The aim of this study was to develop a comprehensive Natural Space Index that robustly models potential exposure based on the presence, form, accessibility, and quality of multiple forms of greenspace (e.g., parks and street trees) and bluespace (e.g., oceans and lakes). The index was developed for greater Vancouver, Canada. Greenness presence was derived from remote sensing (NDVI/EVI); forms were extracted from municipal and private databases; and accessibility was based on restrictions such as private ownership. Quality appraisals were conducted for 200 randomly sampled parks using the Public Open Space Desktop Appraisal Tool (POSDAT). Integrating these measures in GIS, exposure was assessed for 60,242 postal codes using 100- to 1,600-m buffers based on hypothesized pathways to mental health. A single index was then derived using principal component analysis (PCA). Comparing NDVI with alternate approaches for assessing natural space resulted in widely divergent results, with quintile rankings shifting for 22-88% of postal codes, depending on the measure. Overall park quality was fairly low (mean of 15 on a scale of 0-45), with no significant difference seen by neighborhood-level household income. The final PCA identified three main sets of variables, with the first two components explaining 68% of the total variance. The first component was dominated by the percentages of public and private greenspace and bluespace and public greenspace within 250m, while the second component was driven by lack of access to bluespace within 1 km. Many current approaches to modeling natural space may misclassify exposures and have limited specificity. 
The Natural Space Index represents a novel approach at a regional scale with application to urban planning and policy-making. Copyright © 2017 Elsevier Inc. All rights reserved.
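The final step described above, collapsing several standardized natural-space metrics into a single index via principal component analysis, can be sketched as follows. This is a generic illustration on synthetic data; the metric names, the number of metrics, and the postal-code count are hypothetical, not the study's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical exposure metrics for 1,000 postal codes, e.g. percent
# greenspace, percent bluespace, park-quality score, distance to bluespace.
X = rng.normal(size=(1000, 4))

# Standardize each metric to z-scores, then run PCA via SVD.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # fraction of variance per component
scores = Z @ Vt.T                 # component scores per postal code

# A single composite index can be taken as the first component score.
index = scores[:, 0]
```

The first few rows of `Vt` play the role of the study's component loadings (e.g., a component dominated by greenspace percentages versus one driven by bluespace access).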
NASA Astrophysics Data System (ADS)
Schill, S.; Novak, G.; Zimmermann, K.; Bertram, T. H.
2014-12-01
The ocean serves as a major source of atmospheric aerosol particles, yet the physicochemical properties of sea spray aerosol remain poorly characterized. Understanding the transfer of organic compounds, present in the sea surface microlayer (SSML), to sea-spray particles and their resulting impact on cloud formation is important for predicting aerosol effects on climate in remote marine environments. Here, we present a series of laboratory experiments designed to probe the fractionation of select organic molecules during wave breaking. We use a representative set of organic mimics (e.g. sterols, sugars, lipids, proteins, fatty acids) to test a recent physically based model of organic enrichment in sea-spray aerosol [Burrows et al., 2014] that is based on Langmuir adsorption equilibria. Experiments were conducted in the UCSD Marine Aerosol Reference Tank (MART), permitting accurate representation of wave-breaking processes in the laboratory. We report kappa values for the resulting sea-spray aerosols and compare them to predictions made using Kappa-Köhler theory driven by a linear combination of the pure-component kappa values. Hygroscopicity determinations made using the model systems are discussed within the context of measurements of CCN activity made using natural, coastal water.
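The "linear combination of the pure-component kappa values" is the standard volume-weighted (ZSR-type) mixing rule of Kappa-Köhler theory. A minimal sketch, with illustrative component kappa values of typical literature magnitude rather than measured ones:

```python
def kappa_mixture(volume_fractions, kappas):
    """ZSR-type mixing rule of Kappa-Köhler theory:
    kappa_mix = sum_i eps_i * kappa_i,
    where eps_i are the dry-volume fractions of the components."""
    assert abs(sum(volume_fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(eps * kap for eps, kap in zip(volume_fractions, kappas))

# Illustrative two-component sea-spray particle: sea salt plus organic film.
# The kappa values (1.1 for salt, 0.1 for organics) are typical magnitudes.
k_mix = kappa_mixture([0.7, 0.3], [1.1, 0.1])  # ≈ 0.8
```

Organic enrichment lowers the salt volume fraction and hence the mixture kappa, which is how fractionation during wave breaking feeds through to CCN activity.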
Nonlinear flow response of soft hair beds
NASA Astrophysics Data System (ADS)
Alvarado, José
2017-11-01
We are hairy inside: beds of passive fibers anchored to a surface and immersed in fluids are prevalent in many biological systems, including intestines, tongues, and blood vessels. Such hairs are soft enough to deform in response to stresses from fluid flows. Fluid stresses are in turn affected by hair deformation, leading to a coupled elastoviscous problem which is poorly understood. Here we investigate a biomimetic model system of elastomer hair beds subject to shear-driven Stokes flows. We characterize this system with a theoretical model which accounts for the large-deformation flow response of hair beds. Hair bending results in a drag-reducing nonlinearity because the hair tip lowers toward the base, widening the gap through which fluid flows. When hairs are cantilevered at an angle subnormal to the surface, flow against the grain bends hairs away from the base, narrowing the gap. The flow response of angled hair beds is axially asymmetric and amounts to a rectification nonlinearity. We identify an elastoviscous parameter which controls nonlinear behavior. Our study raises the hypothesis that biological hairy surfaces function to reduce fluid drag. Furthermore, angled hairs may be incorporated in the design of integrated microfluidic components, such as diodes and pumps. J.A. acknowledges support from the U.S. Army Research Office under Grant Number W911NF-14-1-0396.
ERIC Educational Resources Information Center
Sampson, Victor; Grooms, Jonathon; Walker, Joi
2009-01-01
Argument-Driven Inquiry (ADI) is an instructional model that enables science teachers to transform a traditional laboratory activity into a short integrated instructional unit. To illustrate how the ADI instructional model works, this article describes an ADI lesson developed for a 10th-grade chemistry class. This example lesson was designed to…
Robust high-performance control for robotic manipulators
NASA Technical Reports Server (NTRS)
Seraji, Homayoun (Inventor)
1991-01-01
Model-based and performance-based control techniques are combined for an electrical robotic control system. Thus, two distinct and separate design philosophies have been merged into a single control system having a control law formulation including two distinct and separate components, each of which yields a respective signal component that is combined into a total command signal for the system. Those two separate system components include a feedforward controller and a feedback controller. The feedforward controller is model-based and contains any known part of the manipulator dynamics that can be used for on-line control to produce a nominal feedforward component of the system's control signal. The feedback controller is performance-based and consists of a simple adaptive PID controller which generates an adaptive control signal to complement the nominal feedforward signal.
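The two-component control law can be sketched for a one-degree-of-freedom arm. This is a generic illustration of combining a model-based feedforward term with a performance-based feedback term; the unit-inertia model, the fixed (rather than adaptive) PID gains, and all numeric values are simplifying assumptions, not the patent's actual design.

```python
def control_step(q, qd, q_ref, qd_ref, qdd_ref, state, dt,
                 m_hat=1.0, kp=50.0, kd=10.0, ki=5.0):
    """One step of the combined law: total command = feedforward + feedback.
    Feedforward uses the known part of the dynamics (here a 1-DOF inertia
    estimate m_hat); feedback is a simple PID on the tracking error."""
    u_ff = m_hat * qdd_ref                      # model-based nominal component
    e, ed = q_ref - q, qd_ref - qd
    state["ie"] += e * dt                       # integral of error
    u_fb = kp * e + kd * ed + ki * state["ie"]  # performance-based component
    return u_ff + u_fb

# Track a constant reference with a unit-mass double integrator.
q, qd, state = 0.0, 0.0, {"ie": 0.0}
dt = 0.001
for _ in range(5000):
    u = control_step(q, qd, q_ref=1.0, qd_ref=0.0, qdd_ref=0.0,
                     state=state, dt=dt)
    qd += u * dt    # plant: unit mass, so acceleration = u
    q += qd * dt
# q settles near the reference value 1.0
```

The structural point is that the feedback term only has to compensate what the feedforward model misses, which is what allows a simple PID to suffice.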
Physics-based Control-oriented Modeling of the Current Profile Evolution in NSTX-Upgrade
NASA Astrophysics Data System (ADS)
Ilhan, Zeki; Barton, Justin; Shi, Wenyu; Schuster, Eugenio; Gates, David; Gerhardt, Stefan; Kolemen, Egemen; Menard, Jonathan
2013-10-01
The operational goals for the NSTX-Upgrade device include non-inductive sustainment of high-β plasmas, realization of the high performance equilibrium scenarios with neutral beam heating, and achievement of longer pulse durations. Active feedback control of the current profile is proposed to enable these goals. Motivated by the coupled, nonlinear, multivariable, distributed-parameter plasma dynamics, the first step towards feedback control design is the development of a physics-based, control-oriented model for the current profile evolution in response to non-inductive current drives and heating systems. For this purpose, the nonlinear magnetic-diffusion equation is coupled with empirical models for the electron density, electron temperature, and non-inductive current drives (neutral beams). The resulting first-principles-driven, control-oriented model is tailored for NSTX-U based on the PTRANSP predictions. Main objectives and possible challenges associated with the use of the developed model for control design are discussed. This work was supported by PPPL.
Mechanical-magnetic-electric coupled behaviors for stress-driven Terfenol-D energy harvester
NASA Astrophysics Data System (ADS)
Cao, Shuying; Zheng, Jiaju; Wang, Bowen; Pan, Ruzheng; Zhao, Ran; Weng, Ling; Sun, Ying; Liu, Chengcheng
2017-05-01
The stress-driven Terfenol-D energy harvester exhibits nonlinear mechanical-magnetic-electric coupled (MMEC) behavior and eddy-current effects. To analyze and design the device, it is necessary to establish an accurate model of it. Based on the effective magnetic field expression, the constitutive equations with eddy currents and variable coefficients, and the dynamic equations, a nonlinear dynamic MMEC model for the device is established. Comparisons between measured and calculated results show that the model can describe the nonlinear coupled curves of magnetization versus stress and strain versus stress under different bias fields, and can provide reasonable data trends for the piezomagnetic coefficients, Young's modulus, and relative permeability of Terfenol-D. Moreover, the calculated power results show that the model can determine the optimal bias conditions, optimal resistance, suitable proof mass, and suitable slices for maximum energy extraction from the device over broad stress amplitudes and frequencies.
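The small-signal limit of such a magnetostrictive model is the linear piezomagnetic constitutive pair, which can be sketched as follows. The paper's model is nonlinear and includes eddy-current terms, which are omitted here; the parameter magnitudes are typical for Terfenol-D, not the paper's fitted values.

```python
def terfenol_linear_response(sigma, H, E_H=3.0e10, d33=1.2e-8, mu_sigma=5.0e-6):
    """Linearized magnetostrictive constitutive equations (small-signal sketch):
        strain = sigma / E_H      + d33 * H        (mechanical equation)
        B      = d33 * sigma      + mu_sigma * H   (magnetic equation)
    sigma: stress [Pa], H: field [A/m], E_H: Young's modulus at constant H,
    d33: piezomagnetic coefficient [m/A], mu_sigma: permeability at constant stress."""
    strain = sigma / E_H + d33 * H
    B = d33 * sigma + mu_sigma * H
    return strain, B

# Compressive prestress of 10 MPa under a 40 kA/m bias field (illustrative).
strain, B = terfenol_linear_response(sigma=-10e6, H=40e3)
```

The cross term `d33 * sigma` in the magnetic equation is the conversion path the harvester exploits: stress cycling modulates flux density, which drives the pickup coil.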
NASA Technical Reports Server (NTRS)
Hopcroft, J.
1987-01-01
The potential benefits of automation in space are significant. The science base needed to support this automation not only will help control costs and reduce lead-time in the earth-based design and construction of space stations, but also will advance the nation's capability for computer design, simulation, testing, and debugging of sophisticated objects electronically. Progress in automation will require the ability to electronically represent, reason about, and manipulate objects. Discussed here is the development of representations, languages, editors, and model-driven simulation systems to support electronic prototyping. In particular, it identifies areas where basic research is needed before further progress can be made.
Hydro-dynamic damping theory in flowing water
NASA Astrophysics Data System (ADS)
Monette, C.; Nennemann, B.; Seeley, C.; Coutu, A.; Marmont, H.
2014-03-01
Fluid-structure interaction (FSI) has a major impact on the dynamic response of the structural components of hydroelectric turbines. On mid-head to high-head Francis runners, the rotor-stator interaction (RSI) phenomenon always has to be considered carefully during the design phase to avoid operational issues later on. The RSI dynamic response amplitudes are driven by three main factors: (1) pressure forcing amplitudes, (2) excitation frequencies in relation to natural frequencies and (3) damping. The prediction of the two first factors has been largely documented in the literature. However, the prediction of fluid damping has received less attention in spite of being critical when the runner is close to resonance. Experimental damping measurements in flowing water on hydrofoils were presented previously. Those results showed that the hydro-dynamic damping increased linearly with the flow. This paper presents development and validation of a mathematical model, based on momentum exchange, to predict damping due to fluid structure interaction in flowing water. The model is implemented as an analytical procedure for simple structures, such as cantilever beams, but is also implemented in more general ways using three different approaches for more complex structures such as runner blades: a finite element procedure, a CFD modal work based approach and a CFD 1DOF approach. The mathematical model and all three implementation approaches are shown to agree well with experimental results.
NASA Astrophysics Data System (ADS)
Adams, Jordan M.; Gasparini, Nicole M.; Hobley, Daniel E. J.; Tucker, Gregory E.; Hutton, Eric W. H.; Nudurupati, Sai S.; Istanbulluoglu, Erkan
2017-04-01
Representation of flowing water in landscape evolution models (LEMs) is often simplified compared to hydrodynamic models, as LEMs make assumptions reducing physical complexity in favor of computational efficiency. The Landlab modeling framework can be used to bridge the divide between complex runoff models and more traditional LEMs, creating a new type of framework not commonly used in the geomorphology or hydrology communities. Landlab is a Python-language library that includes tools and process components that can be used to create models of Earth-surface dynamics over a range of temporal and spatial scales. The Landlab OverlandFlow component is based on a simplified inertial approximation of the shallow water equations, following the solution of de Almeida et al. (2012). This explicit two-dimensional hydrodynamic algorithm simulates a flood wave across a model domain, where water discharge and flow depth are calculated at all locations within a structured (raster) grid. Here, we illustrate how the OverlandFlow component contained within Landlab can be applied as a simplified event-based runoff model and how to couple the runoff model with an incision model operating on decadal timescales. Examples of flow routing on both real and synthetic landscapes are shown. Hydrographs from a single storm at multiple locations in the Spring Creek watershed, Colorado, USA, are illustrated, along with a map of shear stress applied on the land surface by flowing water. The OverlandFlow component can also be coupled with the Landlab DetachmentLtdErosion component to illustrate how the non-steady flow routing regime impacts incision across a watershed. The hydrograph and incision results are compared to simulations driven by steady-state runoff.
Results from the coupled runoff and incision model indicate that runoff dynamics can impact landscape relief and channel concavity, suggesting that, on landscape evolution timescales, the OverlandFlow model may lead to differences in simulated topography in comparison with traditional methods. The exploratory test cases described within demonstrate how the OverlandFlow component can be used in both hydrologic and geomorphic applications.
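The inertial shallow-water update at the core of the OverlandFlow component can be sketched in one dimension. This is a simplified illustration of a de Almeida et al. (2012)-style scheme, not the Landlab implementation or its API, and all parameter values are illustrative.

```python
import numpy as np

def inertial_step(h, q, z, dx, dt, n_mann=0.03, g=9.81, theta=0.8):
    """One explicit step of a 1-D inertial shallow-water solver:
    h = depth at cells, q = unit discharge at faces between cells,
    z = bed elevation. Friction is treated semi-implicitly."""
    eta = z + h                               # water-surface elevation
    h_face = np.maximum(h[:-1], h[1:])        # effective flow depth at faces
    slope = (eta[1:] - eta[:-1]) / dx
    # theta-weighted blend of each face discharge with its neighbors
    q_avg = theta * q + (1 - theta) * 0.5 * (np.roll(q, 1) + np.roll(q, -1))
    denom = 1.0 + g * dt * n_mann**2 * np.abs(q) / np.maximum(h_face, 1e-12) ** (7.0 / 3.0)
    q_new = (q_avg - g * h_face * dt * slope) / denom
    q_new[h_face <= 1e-12] = 0.0              # no flow across dry faces
    # Continuity: dh/dt = -dq/dx, with zero-flux domain boundaries
    dq = np.diff(np.concatenate(([0.0], q_new, [0.0])))
    return h - dt * dq / dx, q_new

# Dam-break-style initial condition on a flat bed: 100 cells of 10 m.
h = np.where(np.arange(100) < 50, 1.0, 0.1)
q = np.zeros(99)
z = np.zeros(100)
for _ in range(200):
    h, q = inertial_step(h, q, z, dx=10.0, dt=0.5)  # CFL-safe for these depths
```

The two LEM-relevant outputs are visible here: discharge `q` (hydrographs) and depth `h`, from which a bed shear stress like `rho * g * h * slope` can be computed to drive a detachment-limited incision rule.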
An Evaluation Research Model for System-Wide Textbook Selection.
ERIC Educational Resources Information Center
Talmage, Harriet; Walberg, Herbert T.
One component of an evaluation research model for system-wide selection of curriculum materials is reported: implementation of an evaluation design for obtaining data that permits professional and lay persons to base curriculum materials decisions on a "best fit" principle. The design includes teacher characteristics, learning environment…
Kim, Young-Deuk; Thu, Kyaw; Ng, Kim Choon; Amy, Gary L; Ghaffour, Noreddine
2016-09-01
In this paper, a hybrid desalination system consisting of vacuum membrane distillation (VMD) and adsorption desalination (AD) units, designated as VMD-AD cycle, is proposed. The synergetic integration of the VMD and AD is demonstrated where a useful effect of the AD cycle is channelled to boost the operation of the VMD process, namely the low vacuum environment to maintain the high pressure gradient across the microporous hydrophobic membrane. A solar-assisted multi-stage VMD-AD hybrid desalination system with temperature modulating unit is first designed, and its performance is then examined with a mathematical model of each component in the system and compared with the VMD-only system with temperature modulating and heat recovery units. The total water production and water recovery ratio of a solar-assisted 24-stage VMD-AD hybrid system are found to be about 21% and 23% higher, respectively, as compared to the VMD-only system. For the solar-assisted 24-stage VMD-AD desalination system having 150 m² of evacuated-tube collectors and 10 m³ seawater storage tanks, both annual collector efficiency and solar fraction are close to 60%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Prototyping and testing of mechanical components for the GRAVITY spectrometers
NASA Astrophysics Data System (ADS)
Wiest, Michael; Fischer, Sebastian; Thiel, Markus; Haug, Marcus; Rohloff, Ralf-Rainer; Straubmeier, Christian; Araujo-Hauck, Constanza; Yazici, Senol; Eisenhauer, Frank; Perrin, Guy; Brandner, Wolfgang; Perraut, Karine; Amorim, Antonio; Schöller, Markus; Eckart, Andreas
2010-07-01
GRAVITY is a 2nd generation VLTI instrument which operates on 6 interferometric baselines by using all 4 UTs. It will offer narrow-angle astrometry in the infrared K-band with an accuracy of 10 µas. The University of Cologne is part of the international GRAVITY consortium and responsible for the design and manufacturing of the two spectrometers. One is optimized for observing the science object, providing three different spectral resolutions and optional polarimetry; the other is optimized for fast fringe tracking at a spectral resolution of R=22 with optional polarimetry. In order to achieve the necessary image quality, the current mechanical design foresees 5 motorized functions, 2 linear motions and 3 filter wheels. Additionally the latest optical design proposal includes 20 degrees of freedom for manual adjustments distributed over the different optical elements. Both spectrometers require precise linear and rotational movements on micrometer or arcsecond scales. These movements will be realized using custom linear stages based on compliant joints. These stages will be driven by actuators based on a Phytron/Harmonic Drive combination. For dimensioning, and in order to qualify the reliability of these mechanisms, it is necessary to evaluate them on the basis of several prototypes. Due to the cryogenic environment the wheel mechanisms will be driven by Phytron stepper motors, too. A ratchet mechanism, which is currently at the beginning of its design phase, will deliver the required precision to the filter wheels. This contribution gives a first impression of how the next mechanical prototypes will look. In addition, the advantages of purchasing and integrating a distance sensor and a resolver are reported. Both are supposed to work under cryogenic conditions and should achieve high resolutions for measuring movements inside the test cryostat.
CASE/A - COMPUTER AIDED SYSTEM ENGINEERING AND ANALYSIS, ECLSS/ATCS SERIES
NASA Technical Reports Server (NTRS)
Bacskay, A.
1994-01-01
Design and analysis of Environmental Control and Life Support Systems (ECLSS) and Active Thermal Control Systems (ATCS) for spacecraft missions requires powerful software that is flexible and responsive to the demands of particular projects. CASE/A is an interactive trade study and analysis tool designed to increase productivity during all phases of systems engineering. The graphics-based command-driven package provides a user-friendly environment in which the engineer can analyze the performance and interface characteristics of an ECLS/ATC system. The package is useful during all phases of a spacecraft design program, from initial conceptual design trade studies to the actual flight, including pre-flight prediction and in-flight anomaly analysis. The CASE/A program consists of three fundamental parts: 1) the schematic management system, 2) the database management system, and 3) the simulation control and execution system. The schematic management system allows the user to graphically construct a system model by arranging icons representing system components and connecting the components with physical fluid streams. Version 4.1 contains 51 fully coded and documented default component routines. New components can be added by the user through the "blackbox" component option. The database management system supports the storage and manipulation of component data, output data, and solution control data through interactive edit screens. The simulation control and execution system initiates and controls the iterative solution process, displaying time status and any necessary diagnostic messages. In addition to these primary functions, the program provides three other important functional areas: 1) model output management, 2) system utility commands, and 3) user operations logic capacity. The model output management system provides tabular and graphical output capability. 
Complete fluid constituent mass fraction and properties data (mass flow, pressure, temperature, specific heat, density, and viscosity) is generated at user-selected output intervals and stored for reference. The Integrated Plot Utility (IPU) provides plotting capability for all data output. System utility commands are provided to enable the user to operate more efficiently in the CASE/A environment. The user is able to customize a simulation through optional operations FORTRAN logic. This user-developed code is compiled and linked with a CASE/A model and enables the user to control and timeline component operating parameters during various phases of the iterative solution process. CASE/A provides for transient tracking of the flow stream constituents and determination of their thermodynamic state throughout an ECLSS/ATCS simulation, performing heat transfer, chemical reaction, mass/energy balance, and system pressure drop analysis based on user-specified operating conditions. The program tracks each constituent through all combination and decomposition states while maintaining a mass and energy balance on the overall system. This allows rapid assessment of ECLSS designs, the impact of alternate technologies, and impacts due to changes in metabolic forcing functions, consumables usage, and system control considerations. CASE/A is written in FORTRAN 77 for the DEC VAX/VMS computer series, and requires 12Mb of disk storage and a minimum paging file quota of 20,000 pages. The program operates on the Tektronix 4014 graphics standard and VT100 text standard. The program requires a Tektronix 4014 or later graphics terminal, third party composite graphics/text terminal, or personal computer loaded with appropriate VT100/TEK 4014 emulator software. The use of composite terminals or personal computers with popular emulation software is recommended for enhanced CASE/A operations and general ease of use. 
The program is available on an unlabeled 9-track 6250 BPI DEC VAX BACKUP format magnetic tape. CASE/A development began in 1985 under contract to NASA/Marshall Space Flight Center. The latest version (4.1) was released in 1990. Tektronix and TEK 4014 are trademarks of Tektronix, Inc. VT100 is a trademark of Digital Equipment Corporation.
A hybrid PCA-CART-MARS-based prognostic approach of the remaining useful life for aircraft engines.
Sánchez Lasheras, Fernando; García Nieto, Paulino José; de Cos Juez, Francisco Javier; Mayo Bayón, Ricardo; González Suárez, Victor Manuel
2015-03-23
Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with principal component analysis (PCA), dendrograms, and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) accurately. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines.
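The two-stage structure (PCA compression of correlated sensor channels, followed by a regression stage predicting RUL) can be sketched as follows. For a self-contained illustration, the MARS/CART stage is replaced by ordinary least squares on the components, and the data are synthetic, not engine measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic engine-health data: 500 snapshots of 8 sensor channels,
# all driven by a hidden degradation level t, plus sensor noise.
t = rng.uniform(0, 1, 500)                    # normalized degradation
X = np.outer(t, rng.normal(size=8)) + 0.05 * rng.normal(size=(500, 8))
rul = 300.0 * (1.0 - t)                       # remaining useful life (cycles)

# Stage 1: PCA to compress the correlated channels.
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pcs = Z @ Vt[:2].T                            # keep two components

# Stage 2: regress RUL on the principal components (a linear stand-in
# for the MARS/CART stage of the published hybrid model).
A = np.column_stack([pcs, np.ones(len(pcs))])
coef, *_ = np.linalg.lstsq(A, rul, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - rul) ** 2))    # small relative to the 0-300 range
```

Because the prediction uses only the current snapshot's components, the sketch shares the paper's key property: no history of previous operating states is required.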
Agile IT: Thinking in User-Centric Models
NASA Astrophysics Data System (ADS)
Margaria, Tiziana; Steffen, Bernhard
We advocate a new teaching direction for modern CS curricula: extreme model-driven development (XMDD), a new development paradigm designed to continuously involve the customer/application expert throughout the whole system life cycle. Based on the `One-Thing Approach', which works by successively enriching and refining one single artifact, system development becomes in essence a user-centric orchestration of intuitive service functionality. XMDD differs radically from classical software development, which, in our opinion, is no longer adequate for the bulk of application programming, in particular when it comes to heterogeneous, cross-organizational systems that must adapt to rapidly changing market requirements. Thus there is a need for new curricula addressing this model-driven, lightweight, and cooperative development paradigm, which puts the user process at the center of development and the application expert in control of process evolution.
Multivariate Time Series Decomposition into Oscillation Components.
Matsuda, Takeru; Komaki, Fumiyasu
2017-08-01
Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop Gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.
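The building block of such decompositions is a two-dimensional rotation state-space model per oscillator, whose hidden state yields both the component waveform and its phase. A single-oscillator Kalman-filter sketch follows; the paper fits several oscillators jointly and selects their number by AIC, both of which are omitted here, and all parameter values are illustrative.

```python
import numpy as np

def oscillator_kalman(y, freq, dt, rho=0.99, q_var=0.01, r_var=0.1):
    """Extract one oscillation component and its phase from series y using
    the 2-D rotation state-space model:
        x_t = rho * R(2*pi*freq*dt) @ x_{t-1} + state noise
        y_t = x_t[0] + observation noise."""
    w = 2 * np.pi * freq * dt
    F = rho * np.array([[np.cos(w), -np.sin(w)], [np.sin(w), np.cos(w)]])
    H = np.array([[1.0, 0.0]])
    Q, R = q_var * np.eye(2), np.array([[r_var]])
    x, P, states = np.zeros(2), np.eye(2), []
    for yt in y:
        x, P = F @ x, F @ P @ F.T + Q          # predict
        S = H @ P @ H.T + R                    # innovation variance
        K = (P @ H.T) / S                      # Kalman gain (S is 1x1)
        x = x + (K * (yt - H @ x)).ravel()     # update state
        P = P - K @ H @ P                      # update covariance
        states.append(x.copy())
    states = np.asarray(states)
    phase = np.arctan2(states[:, 1], states[:, 0])  # instantaneous phase
    return states[:, 0], phase

# A noisy 2 Hz sine sampled at 100 Hz.
rng = np.random.default_rng(2)
n, dt = 1000, 0.01
y = np.sin(2 * np.pi * 2.0 * np.arange(n) * dt) + 0.3 * rng.normal(size=n)
comp, phase = oscillator_kalman(y, freq=2.0, dt=dt)
```

In the multivariate extension, each observed variable carries its own amplitude- and phase-modulated projection of the shared oscillator states, but the per-oscillator state dynamics are exactly this rotation model.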
Dynamic partial reconfiguration of logic controllers implemented in FPGAs
NASA Astrophysics Data System (ADS)
Bazydło, Grzegorz; Wiśniewski, Remigiusz
2016-09-01
Technological progress in recent years has produced digital circuits containing millions of logic gates with the capability for reprogramming and reconfiguration. On the one hand this provides unprecedented computational power, but on the other the modelled systems are becoming increasingly complex, hierarchical, and concurrent. Abstract modelling supported by Computer Aided Design tools therefore becomes a very important task. Even the higher consumption of basic electronic components seems acceptable, because chip manufacturing costs tend to fall over time. The paper presents a modelling approach for logic controllers based on the Unified Modelling Language (UML). Following the Model Driven Development approach, starting with a UML state machine model and proceeding through the construction of an intermediate Hierarchical Concurrent Finite State Machine model, a collection of Verilog files is created. The system description generated in the hardware description language can be synthesized and implemented in reconfigurable devices such as FPGAs. Modular specification of the prototyped controller permits further dynamic partial reconfiguration of the system. The idea is based on exchanging the functionality of the already implemented controller without stopping the FPGA device: a part of the logic controller (for example, a single module) is replaced by another version (called a context) while the rest of the system keeps running. The method is illustrated by a practical example, a Home Area Network system.
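The final code-generation step of such a flow can be illustrated with a toy emitter that turns an abstract state-machine model into Verilog text. This is a deliberately minimal stand-in for the paper's UML-to-HCFSM-to-Verilog toolchain; the module structure, state encoding, and input handling are illustrative assumptions.

```python
def fsm_to_verilog(name, states, transitions, reset_state):
    """Emit a minimal synchronous Verilog FSM from an abstract model.
    transitions: {(current_state, input_condition): next_state}."""
    enc = {s: i for i, s in enumerate(states)}       # binary state encoding
    width = max(1, (len(states) - 1).bit_length())
    lines = [f"module {name}(input clk, input rst, input in, "
             f"output reg [{width - 1}:0] state);",
             "  always @(posedge clk) begin",
             f"    if (rst) state <= {enc[reset_state]};",
             "    else case (state)"]
    for (src, cond), dst in transitions.items():
        lines.append(f"      {enc[src]}: if ({cond}) state <= {enc[dst]};")
    lines += ["      default: state <= state;",
              "    endcase", "  end", "endmodule"]
    return "\n".join(lines)

# A two-state toggle controller (names and conditions are illustrative).
v = fsm_to_verilog("toggle", ["IDLE", "RUN"],
                   {("IDLE", "in"): "RUN", ("RUN", "!in"): "IDLE"}, "IDLE")
```

Because each generated module is self-contained, it maps naturally onto a reconfigurable partition: swapping in a regenerated module is exactly the "context" exchange that dynamic partial reconfiguration performs.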
Model reduction by weighted Component Cost Analysis
NASA Technical Reports Server (NTRS)
Kim, Jae H.; Skelton, Robert E.
1990-01-01
Component Cost Analysis considers any given system driven by a white noise process as an interconnection of different components, and assigns a metric called 'component cost' to each component. These component costs measure the contribution of each component to a predefined quadratic cost function. A reduced-order model of the given system may be obtained by deleting those components that have the smallest component costs. The theory of Component Cost Analysis is extended to include finite-bandwidth colored noises. The results also apply when actuators have dynamics of their own. Closed-form analytical expressions of component costs are also derived for a mechanical system described by its modal data. This is very useful to compute the modal costs of very high order systems. A numerical example for the MINIMAST system is presented.
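For a stable linear system driven by white noise, component costs can be computed from the controllability Gramian. A small numerical sketch follows, using the generic state-cost formulation rather than the paper's modal-cost expressions; the example system is illustrative.

```python
import numpy as np

def component_costs(A, B, C):
    """Component costs for a stable LTI system x' = Ax + Bw (w white noise)
    with output cost V = E[y'y], y = Cx. State i's component cost is
    [W C'C]_ii, where the Gramian W solves A W + W A' + B B' = 0.
    The Lyapunov equation is solved via Kronecker products (small systems)."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A W + W A') = (kron(I, A) + kron(A, I)) vec(W), column-major vec
    L = np.kron(I, A) + np.kron(A, I)
    W = np.linalg.solve(L, -(B @ B.T).flatten(order="F")).reshape((n, n), order="F")
    costs = np.diag(W @ C.T @ C)
    return costs, np.trace(C @ W @ C.T)

# Two decoupled modes, one lightly damped: it should dominate the cost.
A = np.array([[-0.1, 0.0], [0.0, -5.0]])
B = np.array([[1.0], [1.0]])
C = np.eye(2)
costs, total = component_costs(A, B, C)
# costs ≈ [5.0, 0.1]: deleting the heavily damped state loses little cost
```

The component costs sum to the total output cost, which is what makes "delete the smallest costs" a principled truncation rule.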
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaenko, Alexander; Windus, Theresa L.; Sosonkina, Masha
2012-10-19
The design and development of scientific software components to provide an interface to the effective fragment potential (EFP) methods are reported. Multiscale modeling of physical and chemical phenomena demands the merging of software packages developed by research groups in significantly different fields. Componentization offers an efficient way to realize new high performance scientific methods by combining the best models available in different software packages without a need for package readaptation after the initial componentization is complete. The EFP method is an efficient electronic structure theory based model potential that is suitable for predictive modeling of intermolecular interactions in large molecular systems, such as liquids, proteins, atmospheric aerosols, and nanoparticles, with an accuracy that is comparable to that of correlated ab initio methods. The developed components make the EFP functionality accessible for any scientific component-aware software package. The performance of the component is demonstrated on a protein interaction model, and its accuracy is compared with results obtained with coupled cluster methods.
Harbison, K; Kelly, J; Burnell, L; Silva, J
1995-01-01
The Scenario-based Engineering Process (SEP) is a user-focused methodology for large and complex system design. This process supports new application development from requirements analysis with domain models to component selection, design and modification, implementation, integration, and archival placement. It is built upon object-oriented methodologies, domain modeling strategies, and scenario-based techniques to provide an analysis process for mapping application requirements to available components. We are using SEP in the health care applications that we are developing. The process has already achieved success in the manufacturing and military domains and is being adopted by many organizations. SEP should prove viable in any domain containing scenarios that can be decomposed into tasks.
Constraint-based component-modeling for knowledge-based design
NASA Technical Reports Server (NTRS)
Kolb, Mark A.
1992-01-01
The paper describes the application of various advanced programming techniques derived from artificial intelligence research to the development of flexible design tools for conceptual design. Special attention is given to two techniques which appear to be readily applicable to such design tools: the constraint propagation technique and the object-oriented programming. The implementation of these techniques in a prototype computer tool, Rubber Airplane, is described.
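The constraint-propagation technique can be illustrated with a toy network of product constraints that fire whenever enough of their variables become known. The variable names and the wing-sizing relations are illustrative, not taken from Rubber Airplane.

```python
class ProductConstraint:
    """A constraint a = b * c over named cells: whenever any two of the
    three variables are known, the third is deduced and propagated."""
    def __init__(self, cells, a, b, c):
        self.cells, self.names = cells, (a, b, c)
        for n in (a, b, c):
            cells.setdefault(n, None)

    def propagate(self):
        a, b, c = (self.cells[n] for n in self.names)
        if a is None and b is not None and c is not None:
            self.cells[self.names[0]] = b * c
            return True
        if b is None and a is not None and c not in (None, 0):
            self.cells[self.names[1]] = a / c
            return True
        if c is None and a is not None and b not in (None, 0):
            self.cells[self.names[2]] = a / b
            return True
        return False

def solve(cells, constraints):
    # Fixed-point iteration: keep firing constraints until quiescent.
    while any(k.propagate() for k in constraints):
        pass
    return cells

# Wing sizing: lift = q * S * CL and S = b * chord (illustrative relations).
cells = {"lift": 12000.0, "q": 1000.0, "CL": 0.6, "b": 10.0}
net = [ProductConstraint(cells, "lift", "q_S", "CL"),   # lift = (q*S) * CL
       ProductConstraint(cells, "q_S", "q", "S"),       # q*S  = q * S
       ProductConstraint(cells, "S", "b", "chord")]     # S    = b * chord
solve(cells, net)   # chord is deduced, ≈ 2.0
```

The appeal for conceptual design is directionlessness: fixing `chord` instead of `lift` would propagate the other way through the same network with no new code.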
Bar-Code System for a Microbiological Laboratory
NASA Technical Reports Server (NTRS)
Law, Jennifer; Kirschner, Larry
2007-01-01
A bar-code system has been assembled for a microbiological laboratory that must examine a large number of samples. The system includes a commercial bar-code reader, computer hardware and software components, plus custom-designed database software. The software generates a user-friendly, menu-driven interface.
An Energy-Aware Trajectory Optimization Layer for sUAS
NASA Astrophysics Data System (ADS)
Silva, William A.
The focus of this work is the implementation of an energy-aware trajectory optimization algorithm that enables small unmanned aircraft systems (sUAS) to operate in unknown, dynamic severe weather environments. The software is designed as a component of an Energy-Aware Dynamic Data Driven Application System (EA-DDDAS) for sUAS. This work addresses the challenges of integrating and executing an online trajectory optimization algorithm during mission operations in the field. Using simplified aircraft kinematics, the energy-aware algorithm enables extraction of kinetic energy from measured winds to optimize thrust use and endurance during flight. The optimization layer, based upon a nonlinear program formulation, extracts energy by exploiting strong wind velocity gradients in the wind field, a process known as dynamic soaring. The trajectory optimization layer extends the energy-aware path planner developed by Wenceslao Shaw-Cortez (Shaw-Cortez, 2013) to include additional mission configurations, simulations with a 6-DOF model, and validation of the system with flight testing in June 2015 in Lubbock, Texas. The trajectory optimization layer interfaces with several components within the EA-DDDAS to provide an sUAS with optimal flight trajectories in real-time during severe weather. As a result, execution timing, data transfer, and scalability are considered in the design of the software. Severe weather also poses a measure of unpredictability to the system with respect to communication between systems and available data resources during mission operations. A heuristic mission tree with different cost functions and constraints is implemented to provide a level of adaptability to the optimization layer. Simulations and flight experiments are performed to assess the efficacy of the trajectory optimization layer.
The results are used to assess the feasibility of flying dynamic soaring trajectories with existing controllers as well as to verify the interconnections between EA-DDDAS components. Results also demonstrate the usage of the trajectory optimization layer in conjunction with a lattice-based path planner as a method of guiding the optimization layer and stitching together subsequent trajectories.
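The wind-gradient energy extraction that the abstract calls dynamic soaring can be illustrated with a much-simplified specific-energy-rate model. Everything below is an assumption for illustration: a linear shear layer, a hypothetical shear value, and a sign convention in which climbing into an increasing headwind (heading angle psi = 0 relative to the wind) gains energy. It is not the EA-DDDAS nonlinear-program formulation.

```python
# Simplified sketch of the dynamic-soaring energy-rate term: in a linear
# wind-shear layer, climbing upwind converts the wind gradient into
# specific energy. Hypothetical model and values, not the EA-DDDAS code.
import math

G = 9.81      # gravitational acceleration, m/s^2
SHEAR = 0.5   # assumed linear wind gradient dW/dh, 1/s (hypothetical value)


def energy_rate(V, h_dot, psi, thrust_minus_drag_over_W=0.0):
    """Approximate specific-energy rate (m/s) for airspeed V (m/s),
    climb rate h_dot (m/s), and heading psi relative to the wind
    (psi = 0 means flying upwind). The second term is the
    dynamic-soaring contribution under this sign convention."""
    wind_term = (V / G) * SHEAR * h_dot * math.cos(psi)
    return thrust_minus_drag_over_W * V + wind_term


# Compare candidate headings for a climbing segment, as a trajectory
# optimizer implicitly does when exploiting the gradient.
candidates = [0.0, math.pi / 2, math.pi]  # upwind, crosswind, downwind
best = max(candidates, key=lambda psi: energy_rate(V=15.0, h_dot=2.0, psi=psi))
# under this convention, the upwind climb (psi = 0) maximizes energy rate
```

A real formulation would pose this as a constrained nonlinear program over the full trajectory, with aircraft dynamics, measured (not assumed) wind fields, and actuator limits as constraints.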