A data driven control method for structure vibration suppression
NASA Astrophysics Data System (ADS)
Xie, Yangmin; Wang, Chao; Shi, Hang; Shi, Junwei
2018-02-01
High radio-frequency space applications have motivated continuous research on vibration suppression of large space structures in both academia and industry. This paper introduces a novel data-driven control method to suppress vibrations of flexible structures and experimentally validates the suppression performance. Unlike model-based control approaches, the data-driven control method designs a controller directly from the input-output test data of the structure, without requiring parametric dynamics, and is hence free of system modeling. It utilizes the discrete frequency response obtained via spectral analysis and formulates a non-convex optimization problem to obtain optimized controller parameters for a predefined controller structure. The approach is then experimentally applied to an end-driven flexible beam-mass structure. The experimental results show that the presented method achieves disturbance rejection competitive with a model-based mixed-sensitivity controller under the same design criterion, but with a much lower controller order and less design effort, demonstrating that the proposed data-driven control is an effective approach for vibration suppression of flexible structures.
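As a rough illustration of this workflow (not the authors' implementation), a fixed-structure controller can be tuned directly from measured frequency-response samples by minimizing a cost on the closed-loop sensitivity with a derivative-free optimizer; the plant data, controller structure, and target shape below are synthetic placeholders.

    # Sketch: tune a fixed-structure (PID-like) controller directly from measured
    # frequency-response data, without a parametric plant model.
    # Hypothetical data and target shape; not the paper's actual algorithm.
    import numpy as np
    from scipy.optimize import minimize

    w = np.logspace(-1, 2, 200)                       # rad/s, test frequencies
    G = 1.0 / ((1j * w)**2 + 0.02 * (1j * w) + 4.0)   # measured FRF samples (here synthetic)

    def sensitivity(params, w, G):
        kp, ki, kd = params
        C = kp + ki / (1j * w) + kd * (1j * w)        # fixed controller structure
        return 1.0 / (1.0 + G * C)                    # closed-loop sensitivity S(jw)

    def cost(params):
        S = sensitivity(params, w, G)
        S_target = (1j * w) / (1j * w + 5.0)          # desired disturbance-rejection shape
        return np.sum(np.abs(S - S_target)**2)        # generally a non-convex cost

    res = minimize(cost, x0=[1.0, 1.0, 0.1], method="Nelder-Mead")
    print("tuned (kp, ki, kd):", res.x)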
Data-driven non-linear elasticity: constitutive manifold construction and problem discretization
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Borzacchiello, Domenico; Aguado, Jose Vicente; Abisset-Chavanne, Emmanuelle; Cueto, Elias; Ladeveze, Pierre; Chinesta, Francisco
2017-11-01
The use of constitutive equations calibrated from data has been implemented into standard numerical solvers for successfully addressing a variety of problems encountered in simulation-based engineering sciences (SBES). However, complexity is constantly increasing due to the need for increasingly detailed models as well as the use of engineered materials. Data-driven simulation constitutes a potential change of paradigm in SBES. Standard simulation in computational mechanics is based on the use of two very different types of equations. The first, of axiomatic character, is related to balance laws (momentum, mass, energy, ...), whereas the second consists of models that scientists have extracted from collected data, either natural or synthetic. Data-driven (or data-intensive) simulation consists of directly linking experimental data to computers in order to perform numerical simulations. These simulations employ the universally accepted balance laws while minimizing the need for explicit, often phenomenological, models. The main drawback of such an approach is the large amount of required data, some of which are inaccessible with today's testing facilities. Such difficulty can be circumvented in many cases, and in any case alleviated, by considering complex tests, collecting as many data as possible, and then using a data-driven inverse approach to generate the whole constitutive manifold from a few complex experimental tests, as discussed in the present work.
NASA Astrophysics Data System (ADS)
Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin
2018-02-01
Limited-angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of object size, engine/armor inspection requirements, and limited scan flexibility. Limited-angle reconstruction necessitates the use of optimization-based methods that utilize additional sparsity priors. However, most conventional methods exploit sparsity priors only in the spatial domain. When the CT projections suffer from serious data deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited-angle CT problem. The proposed method simultaneously uses a spatial- and Radon-domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data in the process of iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial- and Radon-domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm, with its learning strategy, performs better than dual-domain algorithms without a learned regularization model.
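For a sense of where the spatial TV term enters an iterative reconstruction, the toy sketch below runs gradient steps on data fidelity plus a smoothed total-variation penalty; the learned tight-frame term and the real Radon operator are omitted, and the system matrix is a random stand-in.

    # Simplified sketch of an iterative limited-angle reconstruction:
    # data-fidelity gradient descent plus (smoothed) total-variation regularization.
    # The data-driven tight-frame term of the paper is omitted; A is a toy
    # system matrix, not a real Radon transform.
    import numpy as np

    def tv_grad(x, eps=1e-6):
        """Gradient of a smoothed isotropic TV penalty for a 2-D image."""
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        return -div

    def reconstruct(A, b, shape, n_iter=200, step=1e-3, lam=0.05):
        x = np.zeros(np.prod(shape))
        for _ in range(n_iter):
            grad_fid = A.T @ (A @ x - b)                   # data-fidelity gradient
            grad_reg = tv_grad(x.reshape(shape)).ravel()   # TV gradient
            x -= step * (grad_fid + lam * grad_reg)
            x = np.clip(x, 0, None)                        # nonnegativity
        return x.reshape(shape)

    # toy usage: 16x16 phantom, random "projection" matrix standing in for limited angles
    rng = np.random.default_rng(0)
    shape = (16, 16)
    A = rng.normal(size=(150, 16 * 16))
    x_true = np.zeros(shape)
    x_true[5:11, 5:11] = 1.0
    b = A @ x_true.ravel()
    x_rec = reconstruct(A, b, shape)
    print("relative reconstruction error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))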
NASA Astrophysics Data System (ADS)
Tellman, B.; Schwarz, B.
2014-12-01
This talk describes the development of a web application to predict and communicate vulnerability to floods using publicly available data, disaster science, and cloud-based geospatial computing capabilities. The proof of concept in the Google Earth Engine API, with initial testing on case studies in New York and Uttarakhand, India, demonstrates the potential of highly parallelized cloud computing to model socio-ecological disaster vulnerability at high spatial and temporal resolution and in near real time. Cloud computing facilitates statistical modeling with variables derived from large public social and ecological data sets, including census data, nighttime lights (NTL), and WorldPop to derive social parameters, together with elevation, satellite imagery, rainfall, and observed flood data from the Dartmouth Flood Observatory to derive biophysical parameters. While more traditional, physically based hydrological models that rely on flow algorithms and numerical methods are currently unavailable in parallelized computing platforms like Google Earth Engine, there is high potential to explore "data-driven" modeling that trades physics for statistics in a parallelized environment. A data-driven approach to flood modeling with geographically weighted logistic regression has been initially tested on Hurricane Irene in southeastern New York. Comparison of model results with observed flood data reveals 97% accuracy in predicting flooded pixels. Testing on multiple storms is required to further validate this initially promising approach. A statistical social-ecological flood model that produces rapid vulnerability assessments, predicting who might require immediate evacuation and where, could serve as an early warning. This type of early warning system would be especially relevant in data-poor places lacking the computing power, high-resolution data such as LiDAR and stream gauges, or hydrologic expertise to run physically based models in real time. As the data-driven model presented relies on globally available data, the only real-time data input required would be typical data from a weather service, e.g. precipitation or a coarse-resolution flood prediction. However, model uncertainty will vary locally depending upon the resolution and frequency of observed flood and socio-economic damage impact data.
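A minimal, hypothetical illustration of the statistical core (plain logistic regression on pixel-level predictors; the geographic weighting described in the talk is not shown, and the predictors and labels below are synthetic):

    # Sketch: a data-driven flood model as pixel-wise logistic regression on
    # publicly available predictors (elevation, rainfall, distance to stream, ...).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(1)
    n = 5000
    X = np.column_stack([
        rng.normal(50, 20, n),    # elevation (m)
        rng.gamma(2, 30, n),      # storm rainfall (mm)
        rng.normal(10, 5, n),     # distance to stream (arbitrary units)
    ])
    # synthetic "observed flood" labels: low, wet, near-stream pixels flood more often
    logit = -0.05 * X[:, 0] + 0.02 * X[:, 1] - 0.1 * X[:, 2] + 1.0
    y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

    model = LogisticRegression(max_iter=1000).fit(X[:4000], y[:4000])
    pred = model.predict(X[4000:])
    print("held-out pixel accuracy:", accuracy_score(y[4000:], pred))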
Model-Driven Architecture for Agent-Based Systems
NASA Technical Reports Server (NTRS)
Gradanin, Denis; Singh, H. Lally; Bohner, Shawn A.; Hinchey, Michael G.
2004-01-01
The Model Driven Architecture (MDA) approach uses a platform-independent model to define system functionality, or requirements, using some specification language. The requirements are then translated to a platform-specific model for implementation. An agent architecture based on the human cognitive model of planning, the Cognitive Agent Architecture (Cougaar), is selected as the implementation platform. The resulting Cougaar MDA prescribes the kinds of models to be used, how those models may be prepared, and the relationships among the different kinds of models. Using the existing Cougaar architecture, the level of application composition is elevated from individual components to domain-level model specifications in order to generate software artifacts. Software artifact generation is based on a metamodel. Each component maps to a UML structured component, which is then converted into multiple artifacts: Cougaar/Java code, documentation, and test cases.
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Roychoudhury, Indranil; Balaban, Edward; Saxena, Abhinav
2010-01-01
Model-based diagnosis typically uses analytical redundancy to compare predictions from a model against observations from the system being diagnosed. However, this approach does not work very well when it is not feasible to create analytic relations describing all the observed data, e.g., for vibration data, which is usually sampled at very high rates and requires very detailed finite element models to describe its behavior. In such cases, features (in the time and frequency domains) that contain diagnostic information are extracted from the data. Since this is a computationally intensive process, it is not efficient to extract all the features all the time. In this paper we present an approach that combines the analytic model-based and feature-driven diagnosis approaches. The analytic approach is used to reduce the set of possible faults, and then features are chosen to best distinguish among the remaining faults. We describe an implementation of this approach on the Flyable Electro-mechanical Actuator (FLEA) test bed.
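A schematic of the two-stage idea (all fault names, residual rules, and feature scores below are invented for illustration, not taken from the FLEA test bed):

    # Sketch of the combined scheme: an analytic model-based step first prunes the
    # fault candidate set, then only the features that best separate the remaining
    # candidates are extracted from the high-rate (e.g., vibration) data.
    candidate_faults = {"motor_winding", "bearing_spall", "sensor_bias", "gear_tooth"}

    def model_based_prune(residuals, candidates):
        # keep only faults consistent with the analytic residuals (toy rule)
        consistent = {"bearing_spall", "gear_tooth"} if residuals["speed"] > 0.1 else {"sensor_bias"}
        return candidates & consistent

    # discriminative power of each (expensive) feature for each pair of faults,
    # e.g., estimated from training data; hard-coded here for illustration
    separability = {
        ("bearing_spall", "gear_tooth"): {"env_spectrum_kurtosis": 0.9, "rms": 0.2},
        ("motor_winding", "sensor_bias"): {"rms": 0.7, "env_spectrum_kurtosis": 0.1},
    }

    def choose_features(remaining):
        pairs = [(a, b) for a in remaining for b in remaining if a < b]
        best = {}
        for pair in pairs:
            scores = separability.get(pair, {})
            if scores:
                f = max(scores, key=scores.get)
                best[f] = max(best.get(f, 0.0), scores[f])
        return sorted(best, key=best.get, reverse=True)

    remaining = model_based_prune({"speed": 0.25}, candidate_faults)
    print("remaining faults:", remaining)
    print("features to extract:", choose_features(remaining))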
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
ERIC Educational Resources Information Center
Lawlor, John; Marshall, Kevin; Tangney, Brendan
2016-01-01
It is generally accepted that intrinsic student motivation is a critical requirement for effective learning but formal learning in school places a huge reliance on extrinsic motivation to focus the learner. This reliance on extrinsic motivation is driven by the pressure on formal schooling to "deliver to the test." The experience of the…
Model based manipulator control
NASA Technical Reports Server (NTRS)
Petrosky, Lyman J.; Oppenheim, Irving J.
1989-01-01
The feasibility of using model based control (MBC) for robotic manipulators was investigated. A double inverted pendulum system was constructed as the experimental system for a general study of dynamically stable manipulation. The original interest in dynamically stable systems was driven by the objective of high vertical reach (balancing), and the planning of inertially favorable trajectories for force and payload demands. The model-based control approach is described and the results of experimental tests are summarized. Results directly demonstrate that MBC can provide stable control at all speeds of operation and support operations requiring dynamic stability such as balancing. The application of MBC to systems with flexible links is also discussed.
NASA Astrophysics Data System (ADS)
Othman, M. F.; Kurniawan, R.; Schramm, D.; Ariffin, A. K.
2018-05-01
Modeling a cable in a multibody dynamics simulation tool such that it dynamically varies in length, mass and stiffness is a challenging task. Simulation of cable-driven parallel robots (CDPR), for instance, requires a cable model that can dynamically change in length for every desired pose of the platform. Thus, in this paper, a detailed procedure for modeling and simulation of a dynamic cable model in Dymola is proposed. The approach is also applicable to other types of Modelica simulation environments. The cable is modeled using standard mechanical elements such as mass, spring, damper and joint. The parameters of the cable model are based on the manufacturer's factsheet and experimental results. Its dynamic ability is tested by applying it to a complete planar CDPR model whose parameters are based on a prototype named CABLAR, developed at the Chair of Mechatronics, University of Duisburg-Essen. The prototype has been developed to demonstrate an application of CDPR as a goods storage and retrieval machine. The performance of the cable model during the simulation is analyzed and discussed.
Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition
Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.
2015-01-01
In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601
NASA Astrophysics Data System (ADS)
Maizir, H.; Suryanita, R.
2018-01-01
Over the past few decades, many methods have been developed to predict and evaluate the bearing capacity of driven piles. The problem of predicting and assessing the bearing capacity of a pile is complicated and not yet fully resolved; different soil tests and evaluation methods produce widely different solutions. The most important thing, however, is to determine methods that predict and evaluate the bearing capacity of the pile to the required degree of accuracy and consistency. Accurate prediction and evaluation of axial bearing capacity depend on several variables, such as the type of soil and the diameter and length of the pile. In this study, Artificial Neural Networks (ANNs) are utilized to obtain a more accurate and consistent axial bearing capacity of a driven pile. An ANN can be described as a mapping from input data to target output data. An ANN model was developed to predict and evaluate the axial bearing capacity of piles based on pile driving analyzer (PDA) test data for more than 200 selected records. The predictions obtained by the ANN model and the PDA tests were then compared. The results show that the neural network model gives reliable predictions and evaluations of the axial bearing capacity of driven piles.
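A small sketch of such a regression network (the inputs, units, and synthetic capacities below are placeholders; the actual model is trained on PDA records):

    # Sketch: an ANN mapping pile and soil descriptors to axial bearing capacity.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n = 200
    diameter = rng.uniform(0.3, 1.0, n)        # m
    length = rng.uniform(10, 40, n)            # m
    soil_index = rng.uniform(0, 1, n)          # crude soil-type descriptor
    X = np.column_stack([diameter, length, soil_index])
    # synthetic "measured" capacity (kN) with noise, standing in for PDA results
    y = 800 * diameter * length * (0.5 + soil_index) + rng.normal(0, 100, n)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10, 10),
                                       max_iter=5000, random_state=0))
    model.fit(X[:160], y[:160])
    print("R^2 on held-out piles:", model.score(X[160:], y[160:]))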
A Learning Framework for Control-Oriented Modeling of Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubio-Herrero, Javier; Chandan, Vikas; Siegel, Charles M.
Buildings consume a significant amount of energy worldwide. Several building optimization and control use cases require models of energy consumption which are control oriented, have high predictive capability, impose minimal data pre-processing requirements, and can be adapted continuously to account for changing conditions as new data become available. The data-driven modeling techniques investigated so far, while promising in the context of buildings, have been unable to simultaneously satisfy all the requirements mentioned above. In this context, deep learning techniques such as Recurrent Neural Networks (RNNs) hold promise, empowered by advanced computational capabilities and big data opportunities. In this paper, we propose a deep learning based methodology for the development of control oriented models for building energy management and test it on data from a real building. Results show that the proposed methodology outperforms other data-driven modeling techniques significantly. We perform a detailed analysis of the proposed methodology along dimensions such as topology, sensitivity, and downsampling. Lastly, we conclude by envisioning a building analytics suite empowered by the proposed deep framework that can drive several use cases related to building energy management.
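A toy version of such a control-oriented recurrent model (the architecture, window length, and data below are illustrative assumptions, not the paper's configuration):

    # Sketch: a simple RNN predicting next-hour consumption from a 24-hour window
    # of weather and schedule inputs; synthetic data stands in for building logs.
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    n_samples, window, n_features = 500, 24, 3   # [outdoor temp, solar, occupancy]
    X = rng.normal(size=(n_samples, window, n_features))
    y = X[:, -1, 0] * 2.0 + X[:, :, 2].mean(axis=1) + rng.normal(0, 0.1, n_samples)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(window, n_features)),
        tf.keras.layers.SimpleRNN(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X[:400], y[:400], epochs=10, verbose=0)
    print("validation MSE:", model.evaluate(X[400:], y[400:], verbose=0))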
Development strategies for the satellite flight software on-board Meteosat Third Generation
NASA Astrophysics Data System (ADS)
Tipaldi, Massimo; Legendre, Cedric; Koopmann, Olliver; Ferraguto, Massimo; Wenker, Ralf; D'Angelo, Gianni
2018-04-01
Nowadays, satellites are becoming increasingly software dependent. Satellite Flight Software (FSW), that is to say, the application software running on the satellite's main On-Board Computer (OBC), plays a relevant role in implementing complex space mission requirements. In this paper, we examine relevant technical approaches and programmatic strategies adopted for the development of the Meteosat Third Generation (MTG) satellite FSW. To begin with, we present its layered model-based architecture and the means for ensuring a robust and reliable interaction among the FSW components. Then, we focus on the selection of an effective software development life cycle model. In particular, by combining plan-driven and agile approaches, we can fulfill the need for preliminary SW versions, which can be used for the elicitation of complex system-level requirements as well as for the initial satellite integration and testing activities. Another important aspect is the testing activities, since very demanding quality requirements have to be fulfilled in satellite SW applications. This manuscript proposes a test automation framework, which uses an XML-based test procedure language independent of the underlying test environment. Finally, a short overview of the MTG FSW sizing and timing budgets concludes the paper.
NASA Astrophysics Data System (ADS)
Strickland, D. K.; Heckman, T. M.; Colbert, E. J. M.; Hoopes, C. G.; Weaver, K. A.
2002-12-01
We present arcsecond-resolution Chandra X-ray and ground-based optical Hα imaging of a sample of ten edge-on star-forming disk galaxies (seven starburst and three "normal" spiral galaxies), a sample which covers the full range of star-formation intensity found in disk galaxies. The X-ray observations make use of the unprecedented spatial resolution of the Chandra X-ray Observatory to robustly remove X-ray emission from point sources, and hence obtain the X-ray properties of the diffuse thermal emission alone. These data have been combined with existing, comparable-resolution, ground-based Hα imaging. We compare these empirically derived diffuse X-ray properties with various models for the generation of hot gas in the halos of star-forming galaxies: supernova feedback-based models (starburst-driven winds, galactic fountains), cosmologically motivated accretion of the IGM, and AGN-driven winds. SN feedback models best explain the observed diffuse X-ray emission. We then use the data to test basic, but fundamental, aspects of wind and fountain theories, e.g. the critical energy required for disk "break-out." DKS is supported by NASA through Chandra Postdoctoral Fellowship Award Number PF0-10012.
Swash saturation: an assessment of available models
NASA Astrophysics Data System (ADS)
Hughes, Michael G.; Baldock, Tom E.; Aagaard, Troels
2018-06-01
An extensive previously published (Hughes et al. Mar Geol 355, 88-97, 2014) field data set representing the full range of micro-tidal beach states (reflective, intermediate and dissipative) is used to investigate swash saturation. Two models that predict the behavior of saturated swash are tested: one driven by standing waves and the other driven by bores. Despite being based on entirely different premises, they predict similar trends in the limiting (saturated) swash height with respect to dependency on frequency and beach gradient. For a given frequency and beach gradient, however, the bore-driven model predicts a larger saturated swash height by a factor 2.5. Both models broadly predict the general behavior of swash saturation evident in the data, but neither model is accurate in detail. While swash saturation in the short-wave frequency band is common on some beach types, it does not always occur across all beach types. Further work is required on wave reflection/breaking and the role of wave-wave and wave-swash interactions to determine limiting swash heights on natural beaches.
The Damper Spring Unit of the Sentinel 1 Solar Array
NASA Technical Reports Server (NTRS)
Doejaaren, Frans; Ellenbroek, Marcel
2012-01-01
The Damper Spring Unit (DSU, see Figure 1) has been designed to provide the damping required to control the deployment speed of the spring-driven solar array deployment in an ARA Mk3 or FRED based Solar Array in situations where the standard application of a damper at the root hinge is not feasible. The unit consists of four major parts: a main bracket, an eddy current damper, a spring unit, and an actuation pulley which is coupled via Kevlar cables to a synchro-pulley of a hinge. The damper slows down the deployment speed and prevents deployment shocks at deployment completion. The spring unit includes 4 springs which overcome the resistances of the damper and the specific DSU control cable loop. This means it can be added to any spring-driven deployment system without major modifications of that system. Engineering models of the Sentinel-1 solar array wing have been built to identify the deployment behavior, to help determine the optimal pulley ratios of the solar array, and to finalize the DSU design. During the functional tests, the behavior proved to be very sensitive to the alignment of the DSU. This was therefore monitored carefully during the qualification program, especially prior to the TV cold testing. During TV "Cold" testing the measured retarding torque exceeded the maximum required value: 284 N-mm versus the required 247 N-mm. Although this requirement was not met, the torque balance analysis shows that the 284 N-mm can be accepted, because the spring unit can provide 1.5 times more torque than required. Some functional tests of the DSU have been performed without the eddy current damper attached; these provided input data for the ADAMS solar array wing model. Simulation of the Sentinel-1 deployment (including the DSU) in ADAMS allowed the actual wing deployment tests to be limited in both complexity and number. The DSU for the Sentinel-1 solar array was successfully qualified and the flight models are in production.
NASA Astrophysics Data System (ADS)
Rodríguez, A.; Astrain, D.; Martínez, A.; Aranguren, P.
2014-06-01
In the work discussed in this paper a thermoelectric generator was developed to harness waste heat from the exhaust gas of a boiler in a biomass power plant and thus generate electric power to operate a flowmeter installed in the chimney, to make it autonomous. The main objective was to conduct an experimental study to optimize a previous design obtained after computational work based on a simulation model for thermoelectric generators. First, several places inside and outside the chimney were considered as sites for the thermoelectricity-driven autonomous sensor. Second, the thermoelectric generator was built and tested to assess the effect of the cold-side heat exchanger on the electric power, power consumption by the flowmeter, and transmission frequency. These tests provided the best configuration for the heat exchanger, which met the transmission requirements for different working conditions. The final design is able to transmit every second and requires neither batteries nor electric wires. It is a promising application in the field of thermoelectric generation.
MATTS- A Step Towards Model Based Testing
NASA Astrophysics Data System (ADS)
Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.
2016-08-01
In this paper we describe a model-based approach to testing of on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve these problems, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time and resource consuming activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can also be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. state machines), generate abstract test cases, and then convert them to concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
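To make the pipeline concrete, the sketch below enumerates abstract test cases (event sequences with expected end states) from a small, invented state-machine model; a real MBT tool would add coverage criteria and concretize the cases against the target test environment.

    # Sketch: derive (input sequence, expected state) test cases from a state machine.
    # The on-board-software model below is a toy, not the MATTS model.
    from collections import deque

    transitions = {            # state -> {event: next_state}
        "STANDBY": {"arm": "ARMED"},
        "ARMED":   {"fire": "ACTIVE", "disarm": "STANDBY"},
        "ACTIVE":  {"stop": "STANDBY"},
    }

    def abstract_test_cases(start, max_depth=3):
        """Breadth-first enumeration of event sequences up to a given depth."""
        cases, queue = [], deque([(start, [])])
        while queue:
            state, path = queue.popleft()
            if path:
                cases.append((path, state))       # (event sequence, expected state)
            if len(path) < max_depth:
                for event, nxt in transitions[state].items():
                    queue.append((nxt, path + [event]))
        return cases

    for inputs, expected in abstract_test_cases("STANDBY"):
        print(f"apply {inputs!r:35} -> expect state {expected}")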
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the requirement of suitable robust statistical procedures in place of likelihood-based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
Revel8or: Model Driven Capacity Planning Tool Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Liming; Liu, Yan; Bui, Ngoc B.
2007-05-31
Designing complex multi-tier applications that must meet strict performance requirements is a challenging software engineering problem. Ideally, the application architect could derive accurate performance predictions early in the project life-cycle, leveraging initial application design-level models and a description of the target software and hardware platforms. To this end, we have developed a capacity planning tool suite for component-based applications, called Revel8tor. The tool adheres to the model driven development paradigm and supports benchmarking and performance prediction for J2EE, .Net and Web services platforms. The suite is composed of three different tools: MDAPerf, MDABench and DSLBench. MDAPerf allows annotation of design diagrams and derives performance analysis models. MDABench allows a customized benchmark application to be modeled in the UML 2.0 Testing Profile and automatically generates a deployable application, with measurement automatically conducted. DSLBench allows the same benchmark modeling and generation to be conducted using a simple performance engineering Domain Specific Language (DSL) in Microsoft Visual Studio. DSLBench integrates with Visual Studio and reuses its load testing infrastructure. Together, the tool suite can assist capacity planning across platforms in an automated fashion.
Balancing Plan-Driven and Agile Methods in Software Engineering Project Courses
NASA Astrophysics Data System (ADS)
Boehm, Barry; Port, Dan; Winsor Brown, A.
2002-09-01
For the past 6 years, we have been teaching a two-semester software engineering project course. The students organize into 5-person teams and develop largely web-based electronic services projects for real USC campus clients. We have been using and evolving a method called Model- Based (System) Architecting and Software Engineering (MBASE) for use in both the course and in industrial applications. The MBASE Guidelines include a lot of documents. We teach risk-driven documentation: if it is risky to document something, and not risky to leave it out (e.g., GUI screen placements), leave it out. Even so, students tend to associate more documentation with higher grades, although our grading eventually discourages this. We are always on the lookout for ways to have students learn best practices without having to produce excessive documentation. Thus, we were very interested in analyzing the various emerging agile methods. We found that agile methods and milestone plan-driven methods are part of a “how much planning is enough?” spectrum. Both agile and plan-driven methods have home grounds of project characteristics where they clearly work best, and where the other will have difficulties. Hybrid agile/plan-driven approaches are feasible, and necessary for projects having a mix of agile and plan-driven home ground characteristics. Information technology trends are going more toward the agile methods' home ground characteristics of emergent requirements and rapid change, although there is a concurrent increase in concern with dependability. As a result, we are currently experimenting with risk-driven combinations of MBASE and agile methods, such as integrating requirements, test plans, peer reviews, and pair programming into “agile quality management.”
Data-driven modeling, control and tools for cyber-physical energy systems
NASA Astrophysics Data System (ADS)
Behl, Madhur
Energy systems are experiencing a gradual but substantial change in moving away from being non-interactive and manually-controlled systems to utilizing tight integration of both cyber (computation, communications, and control) and physical representations guided by first principles based models, at all scales and levels. Furthermore, peak power reduction programs like demand response (DR) are becoming increasingly important as the volatility on the grid continues to increase due to regulation, integration of renewables and extreme weather conditions. In order to shield themselves from the risk of price volatility, end-user electricity consumers must monitor electricity prices and be flexible in the ways they choose to use electricity. This requires the use of control-oriented predictive models of an energy system's dynamics and energy consumption. Such models are needed for understanding and improving the overall energy efficiency and operating costs. However, learning dynamical models using grey/white box approaches is very cost and time prohibitive since it often requires significant financial investments in retrofitting the system with several sensors and hiring domain experts for building the model. We present the use of data-driven methods for making model capture easy and efficient for cyber-physical energy systems. We develop Model-IQ, a methodology for analysis of uncertainty propagation for building inverse modeling and controls. Given a grey-box model structure and real input data from a temporary set of sensors, Model-IQ evaluates the effect of the uncertainty propagation from sensor data to model accuracy and to closed-loop control performance. We also developed a statistical method to quantify the bias in the sensor measurement and to determine near optimal sensor placement and density for accurate data collection for model training and control. Using a real building test-bed, we show how performing an uncertainty analysis can reveal trends about inverse model accuracy and control performance, which can be used to make informed decisions about sensor requirements and data accuracy. We also present DR-Advisor, a data-driven demand response recommender system for the building's facilities manager which provides suitable control actions to meet the desired load curtailment while maintaining operations and maximizing the economic reward. We develop a model based control with regression trees algorithm (mbCRT), which allows us to perform closed-loop control for DR strategy synthesis for large commercial buildings. Our data-driven control synthesis algorithm outperforms rule-based demand response methods for a large DoE commercial reference building and leads to a significant amount of load curtailment (of 380kW) and over $45,000 in savings which is 37.9% of the summer energy bill for the building. The performance of DR-Advisor is also evaluated for 8 buildings on Penn's campus; where it achieves 92.8% to 98.9% prediction accuracy. We also compare DR-Advisor with other data driven methods and rank 2nd on ASHRAE's benchmarking data-set for energy prediction.
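As a rough sketch of the regression-tree ingredient behind a DR-Advisor-style predictor (feature names, data, and the DR scenario are hypothetical; the closed-loop mbCRT synthesis itself is not reproduced):

    # Sketch: fit a regression tree on historical weather/schedule/power data, then
    # query it for the expected load change of a proposed demand-response action.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    n = 2000
    hour = rng.integers(0, 24, n)
    outdoor_temp = rng.normal(25, 8, n)               # deg C
    occupancy = (hour >= 8) & (hour <= 18)
    setpoint = rng.uniform(22, 26, n)                 # controllable variable
    X = np.column_stack([hour, outdoor_temp, occupancy, setpoint])
    power_kw = 200 + 15 * np.maximum(outdoor_temp - setpoint, 0) \
               + 80 * occupancy + rng.normal(0, 10, n)

    tree = DecisionTreeRegressor(max_depth=8).fit(X, power_kw)

    # predicted curtailment from raising the cooling setpoint during a hot afternoon
    baseline = np.array([[15, 35, 1, 23.0]])
    relaxed = np.array([[15, 35, 1, 26.0]])
    print("predicted load reduction (kW):",
          tree.predict(baseline)[0] - tree.predict(relaxed)[0])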
A miniature cable-driven robot for crawling on the heart.
Patronik, N A; Zenati, M A; Riviere, C N
2005-01-01
This document describes the design and preliminary testing of a cable-driven robot for the purpose of traveling on the surface of the beating heart to administer therapy. This methodology obviates mechanical stabilization and lung deflation, which are typically required during minimally invasive cardiac surgery. Previous versions of the robot have been remotely actuated through push-pull wires, while visual feedback was provided by fiber optic transmission. Although these early models were able to perform locomotion in vivo on porcine hearts, the stiffness of the wire-driven transmission and fiber optic camera limited the mobility of the robots. The new prototype described in this document is actuated by two antagonistic cable pairs, and contains a color CCD camera located in the front section of the device. These modifications have resulted in superior mobility and visual feedback. The cable-driven prototype has successfully demonstrated prehension, locomotion, and tissue dye injection during in vitro testing with a poultry model.
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and carries a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the searching algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than those of the data-driven approaches. Position estimation mean squared errors were more than twice as large for the optimization-based approaches as for the data-driven ones, under both simulation and experimental conditions. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object's position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.
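A toy version of the optimization-based estimator (the forward model below is a crude surrogate, not an EIT solver, and the weighting scheme is an assumption):

    # Sketch: find the anomaly coordinates whose simulated boundary voltages best
    # match the measured ones, using a weighted error cost and a derivative-free search.
    import numpy as np
    from scipy.optimize import minimize

    electrode_angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    electrodes = np.column_stack([np.cos(electrode_angles), np.sin(electrode_angles)])

    def forward(pos):
        """Toy surrogate: voltage perturbation decays with distance to the anomaly."""
        d = np.linalg.norm(electrodes - pos, axis=1)
        return 1.0 / (0.2 + d**2)

    true_pos = np.array([0.35, -0.20])
    measured = forward(true_pos) + np.random.default_rng(0).normal(0, 0.01, 16)
    weights = 1.0 / (0.05 + measured)          # weighted error cost

    def cost(pos):
        return np.sum(weights * (forward(pos) - measured) ** 2)

    res = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
    print("estimated anomaly position:", res.x, " true:", true_pos)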
NASA Astrophysics Data System (ADS)
Barretta, Raffaele; Fabbrocino, Francesco; Luciano, Raimondo; Sciarra, Francesco Marotti de
2018-03-01
Strain-driven and stress-driven integral elasticity models are formulated for the analysis of the structural behaviour of functionally graded nano-beams. An innovative stress-driven two-phase constitutive mixture defined by a convex combination of local and nonlocal phases is presented. The analysis reveals that the Eringen strain-driven fully nonlocal model cannot be used in structural mechanics since it is ill-posed, and local-nonlocal mixtures based on the Eringen integral model only partially resolve the ill-posedness of the model. In fact, a singular behaviour of continuous nano-structures appears if the local fraction tends to vanish, so that the ill-posedness of the Eringen integral model is not eliminated. On the contrary, local-nonlocal mixtures based on the stress-driven theory are mathematically and mechanically appropriate for nanosystems. Exact solutions of inflected functionally graded nanobeams of technical interest are established by adopting the new local-nonlocal mixture stress-driven integral relation. The effectiveness of the new nonlocal approach is tested by comparing the contributed results with those corresponding to the mixture Eringen theory.
DATA-CONSTRAINED CORONAL MASS EJECTIONS IN A GLOBAL MAGNETOHYDRODYNAMICS MODEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, M.; Manchester, W. B.; Van der Holst, B.
We present a first-principles-based coronal mass ejection (CME) model suitable for both scientific and operational purposes by combining a global magnetohydrodynamics (MHD) solar wind model with a flux-rope-driven CME model. Realistic CME events are simulated self-consistently with high fidelity and forecasting capability by constraining initial flux rope parameters with observational data from GONG, SOHO/LASCO, and STEREO/COR. We automate this process so that minimum manual intervention is required in specifying the CME initial state. With the newly developed data-driven Eruptive Event Generator using the Gibson–Low configuration, we present a method to derive Gibson–Low flux rope parameters through a handful of observational quantities so that the modeled CMEs can propagate with the desired CME speeds near the Sun. A test result with CMEs launched with different Carrington rotation magnetograms is shown. Our study shows a promising result for using the first-principles-based MHD global model as a forecasting tool, which is capable of predicting the CME direction of propagation, arrival time, and ICME magnetic field at 1 au (see the companion paper by Jin et al. 2016a).
Activity-Centered Domain Characterization for Problem-Driven Scientific Visualization
Marai, G. Elisabeta
2018-01-01
Although visualization design models exist in the literature in the form of higher-level methodological frameworks, these models do not present a clear methodological prescription for the domain characterization step. This work presents a framework and end-to-end model for requirements engineering in problem-driven visualization application design. The framework and model are based on the activity-centered design paradigm, which is an enhancement of human-centered design. The proposed activity-centered approach focuses on user tasks and activities, and allows an explicit link between the requirements engineering process with the abstraction stage—and its evaluation—of existing, higher-level visualization design models. In a departure from existing visualization design models, the resulting model: assigns value to a visualization based on user activities; ranks user tasks before the user data; partitions requirements in activity-related capabilities and nonfunctional characteristics and constraints; and explicitly incorporates the user workflows into the requirements process. A further merit of this model is its explicit integration of functional specifications, a concept this work adapts from the software engineering literature, into the visualization design nested model. A quantitative evaluation using two sets of interdisciplinary projects supports the merits of the activity-centered model. The result is a practical roadmap to the domain characterization step of visualization design for problem-driven data visualization. Following this domain characterization model can help remove a number of pitfalls that have been identified multiple times in the visualization design literature. PMID:28866550
The application of domain-driven design in NMS
NASA Astrophysics Data System (ADS)
Zhang, Jinsong; Chen, Yan; Qin, Shengjun
2011-12-01
In the traditional data-model-driven design approach, the system analysis and design phases are often separated, which means that requirements information cannot be expressed explicitly. The approach also tends to lead developers toward process-oriented programming, leaving the code between modules or between layers disordered, so it is hard to meet system scalability requirements. This paper proposes a software hierarchy based on a rich domain model according to domain-driven design, named FHRDM, and then the Webwork + Spring + Hibernate (WSH) framework is determined. Domain-driven design aims to construct a domain model which not only meets the demands of the field in which the software exists but also meets the needs of software development. In this way, problems in Navigational Maritime System (NMS) development, such as a large volume of system business, difficulty of requirements elicitation, high development costs and long development cycles, can be resolved successfully.
2006-11-01
Serial #: T-018447EFJM Weight: 19,340 pounds 4. Semitrailer, flatbed, breakbulk/container transporter, 34 ton Model #: M872A1 Manufactured by Heller... WILL BE USED WHENEVER POSSIBLE WHEN NAILS ARE DRIVEN INTO JOINTS OF DUNNAGE ASSEMBLIES OR WHEN LAMINATING DUNNAGE. ADDITIONALLY, THE NAILING PATTERN FOR AN UPPER PIECE OF LAMINATED DUNNAGE WILL BE ADJUSTED AS REQUIRED SO THAT A NAIL FOR THAT PIECE WILL NOT BE DRIVEN THROUGH, ONTO, OR RIGHT
An efficient soil water balance model based on hybrid numerical and statistical methods
NASA Astrophysics Data System (ADS)
Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei
2018-04-01
Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration especially in agricultural areas. In addition, the models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement, including the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using the hybrid numerical and statistical methods (e.g., linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new models are as follows: saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strength and weakness of the new model are evaluated by using two published studies, three hypothetical examples and a real-world application. The evaluation is performed by comparing the simulation results of the new model with corresponding results presented in the published studies, obtained using HYDRUS-1D and observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended. Computational efficiency of the new model makes it particularly suitable for large-scale simulation of soil water movement, because the new model can be used with coarse discretization in space and time.
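The split-step structure described above can be illustrated with a deliberately simplified sketch (the drainage and diffusion rules here are placeholders, not the paper's governing equations; parameter values are arbitrary):

    # Minimal operator-splitting sketch: each time step applies (1) gravity-driven
    # drainage above field capacity, (2) a root-uptake sink, and (3) a diffusive
    # exchange between layers, using only the four soil parameters named above.
    import numpy as np

    Ks, theta_s, theta_fc, theta_r = 0.5, 0.45, 0.30, 0.05   # m/d and volumetric contents
    dz, dt = 0.1, 0.05                                       # layer thickness (m), step (d)

    theta = np.array([0.40, 0.35, 0.28, 0.25, 0.22])         # initial water content per layer
    et_sink = np.array([0.004, 0.003, 0.002, 0.0, 0.0])      # volumetric content loss per day (ET)

    def step(theta):
        theta = theta.copy()
        # (1) advective term: drain water above field capacity downward
        for i in range(len(theta) - 1):
            drainable = max(theta[i] - theta_fc, 0.0) * dz
            flux = min(drainable, Ks * dt)                   # limited by conductivity
            theta[i] -= flux / dz
            theta[i + 1] += flux / dz
        # (2) source/sink term: evapotranspiration, bounded by residual content
        theta = np.maximum(theta - et_sink * dt, theta_r)
        # (3) diffusive term: simple exchange proportional to content differences
        D = 0.02                                             # effective diffusivity (m^2/d)
        theta[1:-1] += D * dt / dz**2 * (theta[2:] - 2 * theta[1:-1] + theta[:-2])
        return np.minimum(theta, theta_s)

    for _ in range(50):
        theta = step(theta)
    print("water content profile after 50 steps:", np.round(theta, 3))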
NASA Technical Reports Server (NTRS)
Hopcroft, J.
1987-01-01
The potential benefits of automation in space are significant. The science base needed to support this automation not only will help control costs and reduce lead-time in the earth-based design and construction of space stations, but also will advance the nation's capability for computer design, simulation, testing, and debugging of sophisticated objects electronically. Progress in automation will require the ability to electronically represent, reason about, and manipulate objects. Discussed here is the development of representations, languages, editors, and model-driven simulation systems to support electronic prototyping. In particular, it identifies areas where basic research is needed before further progress can be made.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller-Driven, Commuter Category Airplane Certification Tests Prior to.... F Appendix F to Part 36—Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller-Driven, Commuter Category Airplane Certification Tests Prior to.... F Appendix F to Part 36—Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller-Driven, Commuter Category Airplane Certification Tests Prior to.... F Appendix F to Part 36—Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller-Driven, Commuter Category Airplane Certification Tests Prior to.... F Appendix F to Part 36—Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller-Driven, Commuter Category Airplane Certification Tests Prior to.... F Appendix F to Part 36—Flyover Noise Requirements for Propeller-Driven Small Airplane and Propeller...
Test Driven Development: Lessons from a Simple Scientific Model
NASA Astrophysics Data System (ADS)
Clune, T. L.; Kuo, K.
2010-12-01
In the commercial software industry, unit testing frameworks have emerged as a disruptive technology that has permanently altered the process by which software is developed. Unit testing frameworks significantly reduce traditional barriers, both practical and psychological, to creating and executing tests that verify software implementations. A new development paradigm, known as test driven development (TDD), has emerged from unit testing practices, in which low-level tests (i.e. unit tests) are created by developers prior to implementing new pieces of code. Although somewhat counter-intuitive, this approach actually improves developer productivity. In addition to reducing the average time for detecting software defects (bugs), the requirement to provide procedure interfaces that enable testing frequently leads to superior design decisions. Although TDD is widely accepted in many software domains, its applicability to scientific modeling still warrants reasonable skepticism. While the technique is clearly relevant for infrastructure layers of scientific models such as the Earth System Modeling Framework (ESMF), numerical and scientific components pose a number of challenges to TDD that are not often encountered in commercial software. Nonetheless, our experience leads us to believe that the technique has great potential not only for developer productivity, but also as a tool for understanding and documenting the basic scientific assumptions upon which our models are implemented. We will provide a brief introduction to test driven development and then discuss our experience in using TDD to implement a relatively simple numerical model that simulates the growth of snowflakes. Many of the lessons learned are directly applicable to larger scientific models.
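The flavor of the workflow can be shown with a unit test written before the model code; the snowflake growth function and its growth law below are invented for this sketch.

    # Illustration of test-first development: the tests pin down basic physical
    # expectations (monotonic growth under supersaturation, no growth at equilibrium)
    # before the model is implemented. The growth law is a toy assumption.
    import unittest

    def grow_snowflake(mass0, supersaturation, dt, n_steps, rate=1e-3):
        """Toy mass-growth model: dm/dt = rate * supersaturation * m**(2/3)."""
        mass = mass0
        history = [mass]
        for _ in range(n_steps):
            mass += rate * supersaturation * mass ** (2.0 / 3.0) * dt
            history.append(mass)
        return history

    class TestSnowflakeGrowth(unittest.TestCase):
        def test_mass_increases_when_supersaturated(self):
            history = grow_snowflake(mass0=1e-9, supersaturation=0.05, dt=1.0, n_steps=100)
            self.assertTrue(all(b > a for a, b in zip(history, history[1:])))

        def test_mass_constant_at_equilibrium(self):
            history = grow_snowflake(mass0=1e-9, supersaturation=0.0, dt=1.0, n_steps=100)
            self.assertAlmostEqual(history[-1], history[0])

    if __name__ == "__main__":
        unittest.main()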
Sublimator Driven Coldplate Engineering Development Unit Test Results
NASA Technical Reports Server (NTRS)
Sheth, Rubik B.; Stephan, Ryan A.; Leimkuehler, Thomas O.
2010-01-01
The Sublimator Driven Coldplate (SDC) is a unique piece of thermal control hardware that has several advantages over a traditional thermal control scheme. The principal advantage is the possible elimination of a pumped fluid loop, potentially increasing reliability and reducing complexity while saving both mass and power. Because the SDC requires a consumable feedwater, it can only be used for short mission durations. Additionally, the SDC is ideal for a vehicle with small transport distances and low heat rejection requirements. An SDC Engineering Development Unit was designed and fabricated. Performance tests were performed in a vacuum chamber to quantify and assess the performance of the SDC. The test data was then used to develop correlated thermal math models. Nonetheless, an Integrated Sublimator Driven Coldplate (ISDC) concept is being developed. The ISDC couples a coolant loop with the previously described SDC hardware. This combination allows the SDC to be used as a traditional coldplate during long mission phases and provides for dissimilar system redundancy
Data-based virtual unmodeled dynamics driven multivariable nonlinear adaptive switching control.
Chai, Tianyou; Zhang, Yajun; Wang, Hong; Su, Chun-Yi; Sun, Jing
2011-12-01
For a complex industrial system, its multivariable and nonlinear nature generally makes it very difficult, if not impossible, to obtain an accurate model, especially when the model structure is unknown. The control of this class of complex systems is difficult to handle with traditional controller designs around their operating points. This paper, however, explores the concepts of controller-driven model and virtual unmodeled dynamics to propose a new design framework. The design consists of two controllers with distinct functions. First, using input and output data, a self-tuning controller is constructed based on a linear controller-driven model. Then the output signals of the controller-driven model are compared with the true outputs of the system to produce so-called virtual unmodeled dynamics. Based on the compensator of the virtual unmodeled dynamics, a second controller based on a nonlinear controller-driven model is proposed. These two controllers are integrated by an adaptive switching control algorithm to take advantage of their complementary features: one offers a stabilizing function and the other provides improved performance. The conditions for the stability and convergence of the closed-loop system are analyzed. Both simulation and experimental tests on a heavily coupled nonlinear twin-tank system are carried out to confirm the effectiveness of the proposed method.
ODISEES: Ontology-Driven Interactive Search Environment for Earth Sciences
NASA Technical Reports Server (NTRS)
Rutherford, Matthew T.; Huffer, Elisabeth B.; Kusterer, John M.; Quam, Brandi M.
2015-01-01
This paper discusses the Ontology-driven Interactive Search Environment for Earth Sciences (ODISEES) project currently being developed to aid researchers attempting to find usable data among an overabundance of closely related data. ODISEES' ontological structure relies on a modular, adaptable concept modeling approach, which allows the domain to be modeled more or less as it is without worrying about terminology or external requirements. In the model, variables are individually assigned semantic content based on the characteristics of the measurements they represent, allowing intuitive discovery and comparison of data without requiring the user to sift through large numbers of data sets and variables to find the desired information.
Machine learning based cloud mask algorithm driven by radiative transfer modeling
NASA Astrophysics Data System (ADS)
Chen, N.; Li, W.; Tanikawa, T.; Hori, M.; Shimada, R.; Stamnes, K. H.
2017-12-01
Cloud detection is a critically important first step required to derive many satellite data products. Traditional threshold based cloud mask algorithms require a complicated design process and fine tuning for each sensor, and have difficulty over snow/ice covered areas. With the advance of computational power and machine learning techniques, we have developed a new algorithm based on a neural network classifier driven by extensive radiative transfer modeling. Statistical validation results obtained by using collocated CALIOP and MODIS data show that its performance is consistent over different ecosystems and significantly better than the MODIS Cloud Mask (MOD35 C6) during the winter seasons over mid-latitude snow covered areas. Simulations using a reduced number of satellite channels also show satisfactory results, indicating its flexibility to be configured for different sensors.
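A minimal, hypothetical rendition of the idea: a small neural-network classifier trained on labeled radiances that would, in practice, come from radiative transfer simulations (the two synthetic channels below stand in for real simulated spectra).

    # Sketch: train a cloud/clear classifier on simulated reflectances that include
    # snow-covered surfaces, then apply it to satellite pixels.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 4000
    is_cloud = rng.integers(0, 2, n)
    snow_surface = rng.integers(0, 2, n)
    # toy reflectances: a visible channel is bright for both cloud and snow, while a
    # SWIR-like channel separates them (snow darker near 1.6 um than cloud)
    vis = 0.15 + 0.5 * np.maximum(is_cloud, snow_surface) + rng.normal(0, 0.05, n)
    swir = 0.10 + 0.35 * is_cloud + 0.05 * snow_surface + rng.normal(0, 0.05, n)
    X = np.column_stack([vis, swir])

    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X[:3000], is_cloud[:3000])
    print("cloud-mask accuracy on held-out simulated pixels:",
          clf.score(X[3000:], is_cloud[3000:]))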
NASA Astrophysics Data System (ADS)
Chang, Daniel Y.; Rowe, Neil C.
2013-05-01
While conducting cutting-edge research in a specific domain, we realize that (1) requirements clarity and correctness are crucial to our success [1], (2) hardware is hard to change, so most work is in software requirements development, coding and testing [2], (3) requirements are constantly changing, so configurability, reusability, scalability, adaptability, modularity and testability are important non-functional attributes [3], (4) cross-domain knowledge is necessary for complex systems [4], and (5) if our research is successful, the results could be applied to other domains with similar problems. In this paper, we propose to use model-driven requirements engineering (MDRE) to model and guide our requirements/development, since models are easy to understand, execute, and modify. The domain for our research is Electronic Warfare (EW) real-time ultra-wide instantaneous bandwidth (IBW) signal simulation. The four proposed MDRE models are (1) Switch-and-Filter architecture, (2) multiple parallel data bit stream alignment, (3) post-ADC and pre-DAC bit re-mapping, and (4) Discrete Fourier Transform (DFT) filter bank. This research is unique since the instantaneous bandwidth we are dealing with is in the gigahertz range instead of the conventional megahertz range.
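As a small illustration of model (4), the sketch below implements a polyphase DFT filter bank that splits a wideband sample stream into uniform sub-bands; the channel count and prototype filter are arbitrary choices for this sketch, not the system's design values.

    # Sketch: polyphase DFT filter bank channelizer.
    import numpy as np

    def dft_filter_bank(x, M=8, taps_per_branch=4):
        """Split a sample stream into M uniform sub-bands (frames x channels)."""
        n_taps = M * taps_per_branch
        h = np.sinc(np.arange(n_taps) / M - taps_per_branch / 2) * np.hamming(n_taps)  # prototype LPF
        n_frames = (len(x) - n_taps) // M
        out = np.zeros((n_frames, M), dtype=complex)
        for k in range(n_frames):
            segment = x[k * M : k * M + n_taps][::-1]            # newest sample first
            branches = (segment * h).reshape(taps_per_branch, M).sum(axis=0)
            out[k] = np.fft.ifft(branches)                       # DFT across polyphase branches
        return out

    t = np.arange(4096)
    x = np.exp(2j * np.pi * 0.25 * t)                            # tone at 1/4 of the sample rate
    power = np.mean(np.abs(dft_filter_bank(x)) ** 2, axis=0)
    print("per-channel power:", np.round(power, 3))              # peaks in channel 2 (= 0.25 * 8)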
A gyrokinetic one-dimensional scrape-off layer model of an edge-localized mode heat pulse
Shi, E. L.; Hakim, A. H.; Hammett, G. W.
2015-02-03
An electrostatic gyrokinetic-based model is applied to simulate parallel plasma transport in the scrape-off layer to a divertor plate. We focus on a test problem that has been studied previously, using parameters chosen to model a heat pulse driven by an edge-localized mode in JET. Previous work has used direct particle-in-cell equations with full dynamics, or Vlasov or fluid equations with only parallel dynamics. With the use of the gyrokinetic quasineutrality equation and logical sheath boundary conditions, spatial and temporal resolution requirements are no longer set by the electron Debye length and plasma frequency, respectively. Finally, this test problem also helps illustrate some of the physics contained in the Hamiltonian form of the gyrokinetic equations and some of the numerical challenges in developing an edge gyrokinetic code.
Wang, Hongqing; Meselhe, Ehab A.; Waldon, Michael G.; Harwell, Matthew C.; Chen, Chunfang
2012-01-01
The last remaining large remnant of softwater wetlands in the US Florida Everglades lies within the Arthur R. Marshall Loxahatchee National Wildlife Refuge. However, Refuge water quality today is impacted by pumped stormwater inflows to the eutrophic and mineral-enriched 100-km canal, which circumscribes the wetland. Optimal management is a challenge and requires scientifically based predictive tools to assess and forecast the impacts of water management on Refuge water quality. In this research, we developed a compartment-based numerical model of hydrodynamics and water quality for the Refuge. Using the numerical model, we examined the dynamics in stage, water depth, discharge from hydraulic structures along the canal, and exchange flow among canal and marsh compartments. We also investigated the transport of chloride, sulfate and total phosphorus from the canal to the marsh interior driven by hydraulic gradients as well as biological removal of sulfate and total phosphorus. The model was calibrated and validated using long-term stage and water quality data (1995-2007). Statistical analysis indicates that the model is capable of capturing the spatial (from canal to interior marsh) gradients of constituents across the Refuge. Simulations demonstrate that flow from the eutrophic and mineral-enriched canal impacts chloride and sulfate in the interior marsh. In contrast, total phosphorus in the interior marsh shows low sensitivity to intrusion and dispersive transport. We conducted a rainfall-driven scenario test in which the pumped inflow concentrations of chloride, sulfate and total phosphorus were equal to rainfall concentrations (wet deposition). This test shows that pumped inflow is the dominant factor responsible for the substantially increased chloride and sulfate concentrations in the interior marsh. Therefore, the present day Refuge should not be classified as solely a rainfall-driven or ombrotrophic wetland. The model provides an effective screening tool for studying the impacts of various water management alternatives on water quality across the Refuge, and demonstrates the practicality of similarly modeling other wetland systems. As a general rule, modeling provides one component of a multi-faceted effort to provide technical support for ecosystem management decisions.
Development and validation of a turbulent-mix model for variable-density and compressible flows.
Banerjee, Arindam; Gore, Robert A; Andrews, Malcolm J
2010-10-01
The modeling of buoyancy driven turbulent flows is considered in conjunction with an advanced statistical turbulence model referred to as the BHR (Besnard-Harlow-Rauenzahn) k-S-a model. The BHR k-S-a model is focused on variable-density and compressible flows such as Rayleigh-Taylor (RT), Richtmyer-Meshkov (RM), and Kelvin-Helmholtz (KH) driven mixing. The BHR k-S-a turbulence mix model has been implemented in the RAGE hydro-code, and model constants are evaluated based on analytical self-similar solutions of the model equations. The results are then compared with a large test database available from experiments and direct numerical simulations (DNS) of RT, RM, and KH driven mixing. Furthermore, we describe research to understand how the BHR k-S-a turbulence model operates over a range of moderate to high Reynolds number buoyancy driven flows, with a goal of placing the modeling of buoyancy driven turbulent flows at the same level of development as that of single phase shear flows.
Assessments of aggregate exposure to pesticides and other surface contamination in residential environments are often driven by assumptions about dermal contacts. Accurately predicting cumulative doses from realistic skin contact scenarios requires characterization of exposure sc...
Testing Strategies for Model-Based Development
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.
2006-01-01
This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
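A minimal sketch of the conformance (model-based) testing idea described above: the executable model and the code generated from it are run over auto-generated inputs and their outputs compared for behavioral equivalence. Both functions and the input grid are hypothetical stand-ins, not the report's artifacts.

```python
import itertools

def model_thermostat(temp, setpoint):           # executable specification (hypothetical)
    return "HEAT" if temp < setpoint - 1 else "OFF"

def generated_thermostat(temp, setpoint):       # code "generated" from the model (hypothetical)
    return "HEAT" if temp < setpoint - 1 else "OFF"

# Exercise both artifacts over an auto-generated input grid and compare outputs.
failures = [(t, s) for t, s in itertools.product(range(10, 30), range(15, 25))
            if model_thermostat(t, s) != generated_thermostat(t, s)]
print("conformance failures:", failures)        # empty list = behaviorally equivalent on these inputs
```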
Network Model-Assisted Inference from Respondent-Driven Sampling Data
Gile, Krista J.; Handcock, Mark S.
2015-01-01
Respondent-Driven Sampling is a widely-used method for sampling hard-to-reach human populations by link-tracing over their social networks. Inference from such data requires specialized techniques because the sampling process is both partially beyond the control of the researcher, and partially implicitly defined. Therefore, it is not generally possible to directly compute the sampling weights for traditional design-based inference, and likelihood inference requires modeling the complex sampling process. As an alternative, we introduce a model-assisted approach, resulting in a design-based estimator leveraging a working network model. We derive a new class of estimators for population means and a corresponding bootstrap standard error estimator. We demonstrate improved performance compared to existing estimators, including adjustment for an initial convenience sample. We also apply the method and an extension to the estimation of HIV prevalence in a high-risk population. PMID:26640328
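For illustration only, a degree-weighted (RDS-II style) mean estimator with a naive bootstrap standard error; the model-assisted estimator described above is more sophisticated, and the data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.binomial(1, 0.2, size=200).astype(float)    # trait indicator per respondent (synthetic)
d = rng.integers(1, 20, size=200).astype(float)     # self-reported network degree (synthetic)

def rds_mean(y, d):
    w = 1.0 / d                                     # weight respondents inversely to degree
    return np.sum(w * y) / np.sum(w)

boot = []
for _ in range(1000):                               # naive resampling; ignores RDS dependence
    idx = rng.integers(0, len(y), len(y))
    boot.append(rds_mean(y[idx], d[idx]))
print("estimate:", rds_mean(y, d), "bootstrap SE:", np.std(boot))
```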
Sierra-de-Grado, Rosario; Pando, Valentín; Martínez-Zurimendi, Pablo; Peñalvo, Alejandro; Báscones, Esther; Moulia, Bruno
2008-06-01
Stem straightness is an important selection trait in Pinus pinaster Ait. breeding programs. Despite the stability of stem straightness rankings in provenance trials, the efficiency of breeding programs based on a quantitative index of stem straightness remains low. An alternative approach is to analyze biomechanical processes that underlie stem form. The rationale for this selection method is that genetic differences in the biomechanical processes that maintain stem straightness in young plants will continue to control stem form throughout the life of the tree. We analyzed the components contributing most to genetic differences among provenances in stem straightening processes by kinetic analysis and with a biomechanical model defining the interactions between the variables involved (Fournier's model). This framework was tested on three P. pinaster provenances differing in adult stem straightness and growth. One-year-old plants were tilted at 45 degrees, and individual stem positions and sizes were recorded weekly for 5 months. We measured the radial extension of reaction wood and the anatomical features of wood cells in serial stem cross sections. The integral effect of reaction wood on stem leaning was computed with Fournier's model. Responses driven by both primary and secondary growth were involved in the stem straightening process, but secondary-growth-driven responses accounted for most differences among provenances. Plants from the straight-stemmed provenance showed a greater capacity for stem straightening than plants from the sinuous provenances mainly because of (1) more efficient reaction wood (higher maturation strains) and (2) more pronounced secondary-growth-driven autotropic decurving. These two process-based traits are thus good candidates for early selection of stem straightness, but additional tests on a greater number of genotypes over a longer period are required.
Balakrishnan, Karthik; Goico, Brian; Arjmand, Ellis M
2015-04-01
(1) To describe the application of a detailed cost-accounting method (time-driven activity-based costing) to operating room personnel costs, avoiding the proxy use of hospital and provider charges. (2) To model potential cost efficiencies using different staffing models with the case study of outpatient adenotonsillectomy. Prospective cost analysis case study. Tertiary pediatric hospital. All otolaryngology providers and otolaryngology operating room staff at our institution. Time-driven activity-based costing demonstrated precise per-case and per-minute calculation of personnel costs. We identified several areas of unused personnel capacity in a basic staffing model. Per-case personnel costs decreased by 23.2% by allowing a surgeon to run 2 operating rooms, despite doubling all other staff. Further cost reductions up to a total of 26.4% were predicted with additional staffing rearrangements. Time-driven activity-based costing allows detailed understanding of not only personnel costs but also how personnel time is used. This in turn allows testing of alternative staffing models to decrease unused personnel capacity and increase efficiency. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
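A minimal sketch of time-driven activity-based costing as described above: per-case cost is the sum over staff roles of (cost per minute) × (minutes used on the case), and idle capacity is paid minutes not absorbed by cases. Roles, rates, and times are hypothetical.

```python
# Cost per paid minute for each role and minutes attributed to one hypothetical case
cost_per_min = {"surgeon": 12.0, "anesthesiologist": 8.0, "nurse": 1.5, "or_tech": 1.0}
minutes_per_case = {"surgeon": 30, "anesthesiologist": 45, "nurse": 60, "or_tech": 60}

case_cost = sum(cost_per_min[r] * minutes_per_case[r] for r in cost_per_min)
print(f"personnel cost per case: ${case_cost:,.2f}")

# Unused capacity: paid minutes not attributable to cases reveal idle labor that
# alternative staffing models can absorb.
paid_minutes_nurse = 480                 # one 8-hour shift
used_minutes_nurse = 6 * 60              # six 60-minute cases
print("idle nurse minutes:", paid_minutes_nurse - used_minutes_nurse)
```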
On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs
NASA Technical Reports Server (NTRS)
Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.
2004-01-01
This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.
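As a rough illustration of the first kind of test, the sketch below applies the standard mean-square stability condition for a Markov jump-linear system (spectral radius of a Kronecker-structured second-moment operator below one); the mode matrices and transition probabilities are hypothetical, and this is the textbook condition rather than the paper's machine-driven formulation.

```python
import numpy as np

A = [np.array([[0.5, 0.1], [0.0, 0.7]]),   # mode 1 dynamics (hypothetical)
     np.array([[0.9, 0.2], [0.1, 0.8]])]   # mode 2 dynamics (hypothetical)
P = np.array([[0.9, 0.1],                  # Markov transition matrix, P[i, j] = Pr(next=j | now=i)
              [0.3, 0.7]])

n, N = A[0].shape[0], len(A)
blocks = np.zeros((N * n * n, N * n * n))
for i, Ai in enumerate(A):
    blocks[i*n*n:(i+1)*n*n, i*n*n:(i+1)*n*n] = np.kron(Ai, Ai)

# Second-moment operator: the system is mean-square stable iff its spectral radius is below one.
second_moment_op = np.kron(P.T, np.eye(n * n)) @ blocks
rho = max(abs(np.linalg.eigvals(second_moment_op)))
print("spectral radius:", rho)
print("mean-square stable" if rho < 1 else "not mean-square stable")
```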
NASA Astrophysics Data System (ADS)
Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kue-Young; Jun, Seong-Chun; Choung, Sungwook; Yun, Seong-Taek; Oh, Junho; Kim, Hyun-Jun
2017-11-01
In this study, a data-driven method for predicting CO2 leaks and associated concentrations from geological CO2 sequestration is developed. Several candidate models are compared based on their reproducibility and predictive capability for CO2 concentration measurements from the Environment Impact Evaluation Test (EIT) site in Korea. Based on the data mining results, a one-dimensional solution of the advective-dispersive equation for steady flow (i.e., Ogata-Banks solution) is found to be most representative for the test data, and this model is adopted as the data model for the developed method. In the validation step, the method is applied to estimate future CO2 concentrations with the reference estimation by the Ogata-Banks solution, where a part of earlier data is used as the training dataset. From the analysis, it is found that the ensemble mean of multiple estimations based on the developed method shows high prediction accuracy relative to the reference estimation. In addition, the majority of the data to be predicted are included in the proposed quantile interval, which suggests adequate representation of the uncertainty by the developed method. Therefore, the incorporation of a reasonable physically-based data model enhances the prediction capability of the data-driven model. The proposed method is not confined to estimations of CO2 concentration and may be applied to various real-time monitoring data from subsurface sites to develop automated control, management or decision-making systems.
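A minimal sketch of the Ogata-Banks data model referenced above, assuming SciPy is available; the velocity, dispersion, and source concentration are placeholders, not values fitted at the EIT site.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, C0=1.0):
    """Concentration at distance x (m) and time t (s) for pore velocity v and dispersion D."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * C0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

# Example evaluation with placeholder parameters (not fitted site values)
print(ogata_banks(x=10.0, t=86400.0, v=1e-4, D=1e-3))
```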
Humane Society International's global campaign to end animal testing.
Seidle, Troy
2013-12-01
The Research & Toxicology Department of Humane Society International (HSI) operates a multifaceted and science-driven global programme aimed at ending the use of animals in toxicity testing and research. The key strategic objectives include: a) ending cosmetics animal testing worldwide, via the multinational Be Cruelty-Free campaign; b) achieving near-term reductions in animal testing requirements through revision of product sector regulations; and c) advancing humane science by exposing failing animal models of human disease and shifting science funding toward human biology-based research and testing tools fit for the 21st century. HSI was instrumental in ensuring the implementation of the March 2013 European sales ban for newly animal-tested cosmetics, in achieving the June 2013 cosmetics animal testing ban in India as well as major cosmetics regulatory policy shifts in China and South Korea, and in securing precedent-setting reductions in in vivo data requirements for pesticides in the EU through the revision of biocides and plant protection product regulations, among others. HSI is currently working to export these life-saving measures to more than a dozen industrial and emerging economies. 2013 FRAME.
Prediction of the interior noise levels of high-speed propeller-driven aircraft
NASA Technical Reports Server (NTRS)
Rennison, D. C.; Wilby, J. F.; Wilby, E. G.
1980-01-01
The theoretical basis for an analytical model developed to predict the interior noise levels of high-speed propeller-driven airplanes is presented. Particular emphasis is given to modeling the transmission of discrete tones through a fuselage element into a cavity, estimates for the mean and standard deviation of the acoustic power flow, the coupling between a non-homogeneous excitation and the fuselage vibration response, and the prediction of maximum interior noise levels. The model allows for convenient examination of the various roles of the excitation and fuselage structural characteristics on the fuselage vibration response and the interior noise levels, as is required for the design of model or prototype noise control validation tests.
Modular, Semantics-Based Composition of Biosimulation Models
ERIC Educational Resources Information Center
Neal, Maxwell Lewis
2010-01-01
Biosimulation models are valuable, versatile tools used for hypothesis generation and testing, codification of biological theory, education, and patient-specific modeling. Driven by recent advances in computational power and the accumulation of systems-level experimental data, modelers today are creating models with an unprecedented level of…
NASA Astrophysics Data System (ADS)
Wolfs, Vincent; Willems, Patrick
2013-10-01
Many applications in support of water management decisions require hydrodynamic models with limited calculation time, including real time control of river flooding, uncertainty and sensitivity analyses by Monte-Carlo simulations, and long term simulations in support of the statistical analysis of the model simulation results (e.g. flood frequency analysis). Several computationally efficient hydrodynamic models exist, but little attention is given to the modelling of floodplains. This paper presents a methodology that can emulate output from a full hydrodynamic model by predicting one or several levels in a floodplain, together with the flow rate between river and floodplain. The overtopping of the embankment is modelled as an overflow at a weir. Adaptive neuro fuzzy inference systems (ANFIS) are exploited to cope with the varying factors affecting the flow. Different input sets and identification methods are considered in model construction. Because of the dual use of simplified physically based equations and data-driven techniques, the ANFIS consist of very few rules with a low number of input variables. A second calculation scheme can be followed for exceptionally large floods. The obtained nominal emulation model was tested for four floodplains along the river Dender in Belgium. Results show that the obtained models are accurate with low computational cost.
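For illustration, the embankment overtopping described above can be written as a weir law Q = C·L·H^(3/2); the lumped weir coefficient, crest length, and water levels below are hypothetical.

```python
def weir_overflow(river_level, crest_level, crest_length, c_weir=1.7):
    """Broad-crested-weir style overflow in m^3/s; c_weir lumps the discharge coefficient (SI units)."""
    head = max(river_level - crest_level, 0.0)   # overtopping head (m)
    return c_weir * crest_length * head ** 1.5

# Flow from river to floodplain once the water level exceeds the embankment crest
print(weir_overflow(river_level=12.3, crest_level=12.0, crest_length=50.0))
```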
Relational machine learning for electronic health record-driven phenotyping.
Peissig, Peggy L; Santos Costa, Vitor; Caldwell, Michael D; Rottscheit, Carla; Berg, Richard L; Mendonca, Eneida A; Page, David
2014-12-01
Electronic health records (EHR) offer medical and pharmacogenomics research unprecedented opportunities to identify and classify patients at risk. EHRs are collections of highly inter-dependent records that include biological, anatomical, physiological, and behavioral observations. They comprise a patient's clinical phenome, where each patient has thousands of date-stamped records distributed across many relational tables. Development of EHR computer-based phenotyping algorithms requires time and medical insight from clinical experts, who most often can only review a small patient subset representative of the total EHR records, to identify phenotype features. In this research we evaluate whether relational machine learning (ML) using inductive logic programming (ILP) can contribute to addressing these issues as a viable approach for EHR-based phenotyping. Two relational learning ILP approaches and three well-known WEKA (Waikato Environment for Knowledge Analysis) implementations of non-relational approaches (PART, J48, and JRIP) were used to develop models for nine phenotypes. International Classification of Diseases, Ninth Revision (ICD-9) coded EHR data were used to select training cohorts for the development of each phenotypic model. Accuracy, precision, recall, F-Measure, and Area Under the Receiver Operating Characteristic (AUROC) curve statistics were measured for each phenotypic model based on independent manually verified test cohorts. A two-sided binomial distribution test (sign test) compared the five ML approaches across phenotypes for statistical significance. We developed an approach to automatically label training examples using ICD-9 diagnosis codes for the ML approaches being evaluated. Nine phenotypic models for each ML approach were evaluated, resulting in better overall model performance in AUROC using ILP when compared to PART (p=0.039), J48 (p=0.003) and JRIP (p=0.003). ILP has the potential to improve phenotyping by independently delivering clinically expert interpretable rules for phenotype definitions, or intuitive phenotypes to assist experts. Relational learning using ILP offers a viable approach to EHR-driven phenotyping. Copyright © 2014 Elsevier Inc. All rights reserved.
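A minimal sketch of the two-sided sign (binomial) test used to compare learners across the nine phenotypes, assuming a recent SciPy; the AUROC values are invented for illustration.

```python
from scipy.stats import binomtest

auroc_ilp = [0.91, 0.88, 0.84, 0.90, 0.79, 0.86, 0.93, 0.81, 0.87]   # hypothetical values
auroc_j48 = [0.85, 0.86, 0.80, 0.84, 0.80, 0.82, 0.88, 0.78, 0.83]   # hypothetical values

wins = sum(a > b for a, b in zip(auroc_ilp, auroc_j48))   # phenotypes where ILP wins
n = sum(a != b for a, b in zip(auroc_ilp, auroc_j48))     # drop ties
print(binomtest(wins, n, p=0.5, alternative="two-sided"))
```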
Model-based testing with UML applied to a roaming algorithm for bluetooth devices.
Dai, Zhen Ru; Grabowski, Jens; Neukirchen, Helmut; Pals, Holger
2004-11-01
In late 2001, the Object Management Group issued a Request for Proposal to develop a testing profile for UML 2.0. In June 2003, the work on the UML 2.0 Testing Profile was finally adopted by the OMG. Since March 2004, it has become an official standard of the OMG. The UML 2.0 Testing Profile provides support for UML based model-driven testing. This paper introduces a methodology on how to use the testing profile in order to modify and extend an existing UML design model for test issues. The application of the methodology will be explained by applying it to an existing UML Model for a Bluetooth device.
Guide for users of the National Transonic Facility
NASA Technical Reports Server (NTRS)
Fuller, D. E.; Gloss, B. B.; Nystrom, D.
1981-01-01
The National Transonic Facility (NTF) is a fan-driven, closed-circuit, continuous flow, pressurized wind tunnel. The test section is 2.5 m x 2.5 m and 7.62 m long with a slotted-wall configuration. The NTF will have a Mach number range from 0.2 to 1.2, with Reynolds numbers up to 120 × 10⁶ at Mach 1 (based on a reference length of 0.25 m). The pressure range for the facility will be from 1 to about 9 bars (1 bar = 100 kPa), and the temperature can be varied from 340 to 78 K. This report provides potential users of the NTF with the information required for preliminary planning of test programs and for preliminary layout of models and model supports which may be used in such programs.
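As a rough illustration of why cryogenic, pressurized operation raises Reynolds number (Re = ρVL/μ), the sketch below combines the ideal-gas law, the Mach relation, and a Sutherland-type viscosity law for nitrogen; the gas constants are approximate textbook values and the printed conditions are meant only to show the scaling trend, not the facility's operating envelope.

```python
import math

def reynolds(mach, p_pa, T_k, L=0.25, gamma=1.4, R=296.8):
    rho = p_pa / (R * T_k)                          # ideal-gas density of nitrogen
    V = mach * math.sqrt(gamma * R * T_k)           # flow speed from the Mach number
    mu = 1.66e-5 * (T_k / 273.15) ** 1.5 * (273.15 + 107.0) / (T_k + 107.0)  # Sutherland-type viscosity
    return rho * V * L / mu

print(f"cold, pressurized (2 bar, 100 K):  Re = {reynolds(1.0, 2e5, 100.0):.2e}")
print(f"ambient           (1 bar, 340 K):  Re = {reynolds(1.0, 1e5, 340.0):.2e}")
```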
Estimating Setup of Driven Piles into Louisiana Clayey Soils
DOT National Transportation Integrated Search
2009-11-15
Two types of mathematical models for pile setup prediction, the Skov-Denver model and the newly developed rate-based model, have been established from all the dynamic and static testing data, including restrikes of the production piles, restrikes, st...
A hybrid PCA-CART-MARS-based prognostic approach of the remaining useful life for aircraft engines.
Sánchez Lasheras, Fernando; García Nieto, Paulino José; de Cos Juez, Francisco Javier; Mayo Bayón, Ricardo; González Suárez, Victor Manuel
2015-03-23
Prognostics is an engineering discipline that predicts the future health of a system. In this research work, a data-driven approach for prognostics is proposed. Indeed, the present paper describes a data-driven hybrid model for the successful prediction of the remaining useful life of aircraft engines. The approach combines the multivariate adaptive regression splines (MARS) technique with the principal component analysis (PCA), dendrograms and classification and regression trees (CARTs). Elements extracted from sensor signals are used to train this hybrid model, representing different levels of health for aircraft engines. In this way, this hybrid algorithm is used to predict the trends of these elements. Based on this fitting, one can determine the future health state of a system and estimate its remaining useful life (RUL) with accuracy. To evaluate the proposed approach, a test was carried out using aircraft engine signals collected from physical sensors (temperature, pressure, speed, fuel flow, etc.). Simulation results show that the PCA-CART-MARS-based approach can forecast faults long before they occur and can predict the RUL. The proposed hybrid model presents as its main advantage the fact that it does not require information about the previous operation states of the input variables of the engine. The performance of this model was compared with those obtained by other benchmark models (multivariate linear regression and artificial neural networks) also applied in recent years for the modeling of remaining useful life. Therefore, the PCA-CART-MARS-based approach is very promising in the field of prognostics of the RUL for aircraft engines.
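A minimal sketch of the PCA-plus-tree portion of the chain described above, on synthetic data; the MARS stage is omitted because it needs a third-party package (e.g. pyearth), so this is not the authors' pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 12))                                              # stand-in for sensor-derived features
rul = 200 - 15 * X[:, 0] + 5 * X[:, 3] + rng.normal(scale=5, size=400)      # synthetic remaining useful life

model = make_pipeline(PCA(n_components=5), DecisionTreeRegressor(max_depth=4, random_state=0))
model.fit(X[:300], rul[:300])
print("held-out R^2:", model.score(X[300:], rul[300:]))
```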
A model-driven approach to information security compliance
NASA Astrophysics Data System (ADS)
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be holistically approached, combining assets that support corporate systems, in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems, conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain level model (computation independent model) based on information security vocabulary present in the ISO/IEC 27001 standard. Based on this model, after embedding in the model mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
A Knowledge-Based and Model-Driven Requirements Engineering Approach to Conceptual Satellite Design
NASA Astrophysics Data System (ADS)
Dos Santos, Walter A.; Leonor, Bruno B. F.; Stephany, Stephan
Satellite systems are becoming even more complex, making technical issues a significant cost driver. The increasing complexity of these systems makes requirements engineering activities both more important and difficult. Additionally, today's competitive pressures and other market forces drive manufacturing companies to improve the efficiency with which they design and manufacture space products and systems. This imposes a heavy burden on systems-of-systems engineering skills and particularly on requirements engineering which is an important phase in a system's life cycle. When this is poorly performed, various problems may occur, such as failures, cost overruns and delays. One solution is to underpin the preliminary conceptual satellite design with computer-based information reuse and integration to deal with the interdisciplinary nature of this problem domain. This can be attained by taking a model-driven engineering approach (MDE), in which models are the main artifacts during system development. MDE is an emergent approach that tries to address system complexity by the intense use of models. This work outlines the use of SysML (Systems Modeling Language) and a novel knowledge-based software tool, named SatBudgets, to deal with these and other challenges confronted during the conceptual phase of a university satellite system, called ITASAT, currently being developed by INPE and some Brazilian universities.
Linking Goal-Oriented Requirements and Model-Driven Development
NASA Astrophysics Data System (ADS)
Pastor, Oscar; Giachetti, Giovanni
In the context of Goal-Oriented Requirement Engineering (GORE) there are interesting modeling approaches for the analysis of complex scenarios that are oriented to obtain and represent the relevant requirements for the development of software products. However, the way to use these GORE models in an automated Model-Driven Development (MDD) process is not clear, and, in general terms, the translation of these models into the final software products is still manually performed. Therefore, in this chapter, we show an approach to automatically link GORE models and MDD processes, which has been elaborated by considering the experience obtained from linking the i* framework with an industrially applied MDD approach. The linking approach proposed is formulated by means of a generic process that is based on current modeling standards and technologies in order to facilitate its application for different MDD and GORE approaches. Special attention is paid to how this process generates appropriate model transformation mechanisms to automatically obtain MDD conceptual models from GORE models, and how it can be used to specify validation mechanisms to assure the correct model transformations.
A Hybrid Physics-Based Data-Driven Approach for Point-Particle Force Modeling
NASA Astrophysics Data System (ADS)
Moore, Chandler; Akiki, Georges; Balachandar, S.
2017-11-01
This study improves upon the physics-based pairwise interaction extended point-particle (PIEP) model. The PIEP model leverages a physical framework to predict fluid mediated interactions between solid particles. While the PIEP model is a powerful tool, its pairwise assumption leads to increased error in flows with high particle volume fractions. To reduce this error, a regression algorithm is used to model the differences between the current PIEP model's predictions and the results of direct numerical simulations (DNS) for an array of monodisperse solid particles subjected to various flow conditions. The resulting statistical model and the physical PIEP model are superimposed to construct a hybrid, physics-based data-driven PIEP model. It must be noted that the performance of a pure data-driven approach without the model-form provided by the physical PIEP model is substantially inferior. The hybrid model's predictive capabilities are analyzed using more DNS. In every case tested, the hybrid PIEP model's predictions are more accurate than those of the physical PIEP model. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1315138 and the U.S. DOE, NNSA, ASC Program, as a Cooperative Agreement under Contract No. DE-NA0002378.
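A minimal sketch of the hybrid construction: a regression model is fit to the residual between DNS forces and a physics-based prediction, and the hybrid force adds the learned correction to the physics term. The "physics" model, features, and data below are placeholders, not the actual PIEP formulation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
features = rng.normal(size=(1000, 4))     # e.g. local volume fraction, Reynolds number, neighbor geometry
f_dns = 1.0 + 0.8 * features[:, 0] + 0.3 * features[:, 1] ** 2 + rng.normal(scale=0.05, size=1000)
f_piep = 1.0 + 0.8 * features[:, 0]       # placeholder "physics-based" force prediction

# Learn the residual between DNS and the physics model, then superimpose.
residual_model = RandomForestRegressor(n_estimators=100, random_state=0)
residual_model.fit(features[:800], (f_dns - f_piep)[:800])
f_hybrid = f_piep[800:] + residual_model.predict(features[800:])

print("physics-only RMS error:", np.sqrt(np.mean((f_piep[800:] - f_dns[800:]) ** 2)))
print("hybrid RMS error:      ", np.sqrt(np.mean((f_hybrid - f_dns[800:]) ** 2)))
```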
When do drilling alliances add value? The alliance value model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brett, J.F.; Craig, V.B.; Wadsworth, D.B.
1996-12-31
A recent GRI report details three previously unstudied aspects of alliances: specific measurable factors that improve alliance success, how a successful alliance should be structured, and when an alliance makes economic sense. The most innovative tool to emerge from the report, the Alliance Value Model, addresses the third aspect. The theory behind the Alliance Value Model is that the long-term viability of any drilling relationship hinges on its ability to create real value and achieve stability. Based upon the report's findings, the most effective way to form such an alliance is through a detailed description and integration of the technical processes involved. This new type of process-driven alliance is characterized by a value chain which links together a common set of technical processes, mutually defined bottomline goals, and shared benefits. Building a process-driven alliance requires time and people and therefore has an associated cost. The real value generated by an alliance must exceed this start-up cost. The Alliance Value Model computes the net present value (NPV) of the cash flows for four different operating arrangements: (1) Business As Usual (conventional competitive bidding process), (2) Process-Driven Alliance (linking technical processes to accelerate production and reduce expenses), (3) Incentivized Process-Driven Alliance (linked technical processes with performance incentives to promote stability), and (4) No Drill Case (primarily used to gauge the market value of services). These arrangements test different degrees of process integration between an operator and its suppliers. They can also help determine if the alliance can add enough value to exceed startup costs and if the relationship will be stable. Each partner can test the impact of the relational structure on its own profitability. When an alliance is warranted, all participants can benefit from real value generated in a stable relationship.
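For illustration, a stripped-down net-present-value comparison in the spirit of the Alliance Value Model; all cash flows, start-up costs, and the discount rate are hypothetical.

```python
def npv(rate, cashflows):
    """Net present value of a cash-flow stream, cashflows[t] received at end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.10
arrangements = {
    "business as usual":       [0, 120, 120, 120, 120],     # $k per year, hypothetical
    "process-driven alliance": [-80, 150, 150, 150, 150],   # start-up cost, then accelerated value
    "incentivized alliance":   [-100, 160, 160, 160, 160],
}
for name, flows in arrangements.items():
    print(f"{name:25s} NPV = {npv(rate, flows):7.1f}")
```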
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oldenburg, C.M.
2011-06-01
The need for risk-driven field experiments for CO2 geologic storage processes to complement ongoing pilot-scale demonstrations is discussed. These risk-driven field experiments would be aimed at understanding the circumstances under which things can go wrong with a CO2 capture and storage (CCS) project and cause it to fail, as distinguished from accomplishing this end using demonstration and industrial scale sites. Such risk-driven tests would complement risk-assessment efforts that have already been carried out by providing opportunities to validate risk models. In addition to experimenting with high-risk scenarios, these controlled field experiments could help validate monitoring approaches to improve performance assessment and guide development of mitigation strategies.
Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J
2018-03-01
Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient approach to calculating costs than using a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
NASA Technical Reports Server (NTRS)
Karns, James
1993-01-01
The objective of this study was to establish the initial quantitative reliability bounds for nuclear electric propulsion systems in a manned Mars mission required to ensure crew safety and mission success. Finding the reliability bounds involves balancing top-down (mission driven) requirements and bottom-up (technology driven) capabilities. In seeking this balance we hope to accomplish the following: (1) provide design insights into the achievability of the baseline design in terms of reliability requirements, given the existing technology base; (2) suggest alternative design approaches which might enhance reliability and crew safety; and (3) indicate what technology areas require significant research and development to achieve the reliability objectives.
Scaling, Similarity, and the Fourth Paradigm for Hydrology
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Clark, Martyn; Samaniego, Luis; Verhoest, Niko E. C.; van Emmerik, Tim; Uijlenhoet, Remko; Achieng, Kevin; Franz, Trenton E.; Woods, Ross
2017-01-01
In this synthesis paper addressing hydrologic scaling and similarity, we posit that roadblocks in the search for universal laws of hydrology are hindered by our focus on computational simulation (the third paradigm), and assert that it is time for hydrology to embrace a fourth paradigm of data-intensive science. Advances in information-based hydrologic science, coupled with an explosion of hydrologic data and advances in parameter estimation and modelling, have laid the foundation for a data-driven framework for scrutinizing hydrological scaling and similarity hypotheses. We summarize important scaling and similarity concepts (hypotheses) that require testing, describe a mutual information framework for testing these hypotheses, describe boundary condition, state flux, and parameter data requirements across scales to support testing these hypotheses, and discuss some challenges to overcome while pursuing the fourth hydrological paradigm. We call upon the hydrologic sciences community to develop a focused effort towards adopting the fourth paradigm and apply this to outstanding challenges in scaling and similarity.
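A minimal sketch of a mutual-information screen for a scaling hypothesis, assuming scikit-learn: how much information a catchment attribute carries about a hydrologic signature across catchments. The data follow a toy scaling law and are not observations.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(4)
drainage_area = rng.lognormal(mean=5.0, sigma=1.0, size=500)                    # km^2, synthetic
mean_flow = 0.02 * drainage_area ** 0.9 * rng.lognormal(sigma=0.2, size=500)    # toy scaling law

mi = mutual_info_regression(np.log(drainage_area).reshape(-1, 1), np.log(mean_flow))
print("mutual information between log(area) and log(mean flow):", mi[0], "nats")
```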
Authoring and verification of clinical guidelines: a model driven approach.
Pérez, Beatriz; Porres, Ivan
2010-08-01
The goal of this research is to provide a framework to enable authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, (2) combined with Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications and temporal-logic statements to be checked and verified regarding these specifications, making the verification process faster and cost-effective. Particularly, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically process them to generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements, and verifies whether the guideline fulfils such properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, particularly, starting from the UML statechart representing a guideline, allows the verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined based on a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process. Copyright 2010 Elsevier Inc. All rights reserved.
Genetic Programming for Automatic Hydrological Modelling
NASA Astrophysics Data System (ADS)
Chadalawada, Jayashree; Babovic, Vladan
2017-04-01
One of the recent challenges for the hydrologic research community is the need for the development of coupled systems that involve the integration of hydrologic, atmospheric and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems, given the limited understanding of underlying processes, increasing volume of data and high levels of uncertainty. Each of the existing hydrological models varies in terms of conceptualization and process representation and is best suited to capture the environmental dynamics of a particular hydrological system. Data driven approaches can be used in the integration of alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in the implementation of an integrated modelling framework that is influenced by prior understanding and data include the choice of the technique for the induction of knowledge from data, identification of alternative structural hypotheses, definition of rules and constraints for meaningful, intelligent combination of model component hypotheses, and definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs based on a wide range of objective functions and evolves accurate and parsimonious models that capture dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using the modelling decisions inspired from the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological and flow duration curve based performance metrics. The collaboration between data driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach for conceptual hydrological modeling: 1. Motivation and theoretical development, Water Resources Research, 47(11).
Arcaroli, John; Quackenbush, Kevin; Dasari, Arvind; Powell, Rebecca; McManus, Martine; Tan, Aik-Choon; Foster, Nathan R; Picus, Joel; Wright, John; Nallapareddy, Sujatha; Erlichman, Charles; Hidalgo, Manuel; Messersmith, Wells A
2012-10-01
Src tyrosine kinases are overexpressed in pancreatic cancers, and the oral Src inhibitor saracatinib has shown antitumor activity in preclinical models of pancreas cancer. We performed a CTEP-sponsored Phase II clinical trial of saracatinib in previously treated pancreas cancer patients, with a primary endpoint of 6-month survival. A Simon MinMax two-stage phase II design was used. Saracatinib (175 mg/day) was administered orally continuously in 28-day cycles. In the unselected portion of the study, 18 patients were evaluable. Only two (11%) patients survived for at least 6 months, and three 6-month survivors were required to move to the second stage of the study as originally designed. The study was amended as a biomarker-driven trial (leucine rich repeat containing protein 19 [LRRC19] > insulin-like growth factor-binding protein 2 [IGFBP2] "top scoring pairs" polymerase chain reaction [PCR] assay, and PIK3CA mutant) based on preclinical data in a human pancreas tumor explant model. In the biomarker study, archival tumor tissue or fresh tumor biopsies were tested. Biomarker-positive patients were eligible for the study. Only one patient was PIK3CA mutant in a 3' untranslated region (UTR) portion of the gene. This patient was enrolled in the study and failed to meet the 6-month survival endpoint. As the frequency of biomarker-positive patients was very low (<3%), the study was closed. Although we were unable to conclude whether enriching for a subset of second/third line pancreatic cancer patients treated with a Src inhibitor based on a biomarker would improve 6-month survival, we demonstrate that testing pancreatic tumor samples for a biomarker-driven, multicenter study in metastatic pancreas cancer is feasible.
Developing a semantic web model for medical differential diagnosis recommendation.
Mohammed, Osama; Benlamri, Rachid
2014-10-01
In this paper we describe a novel model for differential diagnosis designed to make recommendations by utilizing semantic web technologies. The model is a response to a number of requirements, ranging from incorporating essential clinical diagnostic semantics to the integration of data mining for the process of identifying candidate diseases that best explain a set of clinical features. We introduce two major components, which we find essential to the construction of an integral differential diagnosis recommendation model: the evidence-based recommender component and the proximity-based recommender component. Both approaches are driven by disease diagnosis ontologies designed specifically to enable the process of generating diagnostic recommendations. These ontologies are the disease symptom ontology and the patient ontology. The evidence-based diagnosis process develops dynamic rules based on standardized clinical pathways. The proximity-based component employs data mining to provide clinicians with diagnosis predictions, as well as generates new diagnosis rules from provided training datasets. This article describes the integration between these two components along with the developed diagnosis ontologies to form a novel medical differential diagnosis recommendation model. This article also provides test cases from the implementation of the overall model, which shows quite promising diagnostic recommendation results.
Testing the Accuracy of Data-driven MHD Simulations of Active Region Evolution and Eruption
NASA Astrophysics Data System (ADS)
Leake, J. E.; Linton, M.; Schuck, P. W.
2017-12-01
Models for the evolution of the solar coronal magnetic field are vital for understanding solar activity, yet the best measurements of the magnetic field lie at the photosphere, necessitating the recent development of coronal models which are "data-driven" at the photosphere. Using magnetohydrodynamic simulations of active region formation and our recently created validation framework we investigate the source of errors in data-driven models that use surface measurements of the magnetic field, and derived MHD quantities, to model the coronal magnetic field. The primary sources of errors in these studies are the temporal and spatial resolution of the surface measurements. We will discuss the implications of these studies for accurately modeling the build up and release of coronal magnetic energy based on photospheric magnetic field observations.
Multi-scale finite element modeling allows the mechanics of amphibian neurulation to be elucidated
NASA Astrophysics Data System (ADS)
Chen, Xiaoguang; Brodland, G. Wayne
2008-03-01
The novel multi-scale computational approach introduced here makes possible a new means for testing hypotheses about the forces that drive specific morphogenetic movements. A 3D model based on this approach is used to investigate neurulation in the axolotl (Ambystoma mexicanum), a type of amphibian. The model is based on geometric data from 3D surface reconstructions of live embryos and from serial sections. Tissue properties are described by a system of cell-based constitutive equations, and parameters in the equations are determined from physical tests. The model includes the effects of Shroom-activated neural ridge reshaping and lamellipodium-driven convergent extension. A typical whole-embryo model consists of 10,239 elements, and to run its 100 incremental time steps requires 2 days. The model shows that a normal phenotype does not result if lamellipodium forces are uniform across the width of the neural plate; but it can result if the lamellipodium forces decrease from a maximum value at the mid-sagittal plane to zero at the plate edge. Even the seemingly simple motions of neurulation are found to contain important features that would remain hidden if they were not studied using an advanced computational model. The present model operates in a setting where data are extremely sparse and an important outcome of the study is a better understanding of the role of computational models in such environments.
NASA Astrophysics Data System (ADS)
Frew, E.; Argrow, B. M.; Houston, A. L.; Weiss, C.
2014-12-01
The energy-aware airborne dynamic, data-driven application system (EA-DDDAS) performs persistent sampling in complex atmospheric conditions by exploiting wind energy using the dynamic data-driven application system paradigm. The main challenge for future airborne sampling missions is operation with tight integration of physical and computational resources over wireless communication networks, in complex atmospheric conditions. The physical resources considered here include sensor platforms, particularly mobile Doppler radar and unmanned aircraft, the complex conditions in which they operate, and the region of interest. Autonomous operation requires distributed computational effort connected by layered wireless communication. Onboard decision-making and coordination algorithms can be enhanced by atmospheric models that assimilate input from physics-based models and wind fields derived from multiple sources. These models are generally too complex to be run onboard the aircraft, so they need to be executed in ground vehicles in the field, and connected over broadband or other wireless links back to the field. Finally, the wind field environment drives strong interaction between the computational and physical systems, both as a challenge to autonomous path planning algorithms and as a novel energy source that can be exploited to improve system range and endurance. Implementation details of a complete EA-DDDAS will be provided, along with preliminary flight test results targeting coherent boundary-layer structures.
The evolution of meaning: spatio-temporal dynamics of visual object recognition.
Clarke, Alex; Taylor, Kirsten I; Tyler, Lorraine K
2011-08-01
Research on the spatio-temporal dynamics of visual object recognition suggests a recurrent, interactive model whereby an initial feedforward sweep through the ventral stream to prefrontal cortex is followed by recurrent interactions. However, critical questions remain regarding the factors that mediate the degree of recurrent interactions necessary for meaningful object recognition. The novel prediction we test here is that recurrent interactivity is driven by increasing semantic integration demands as defined by the complexity of semantic information required by the task and driven by the stimuli. To test this prediction, we recorded magnetoencephalography data while participants named living and nonliving objects during two naming tasks. We found that the spatio-temporal dynamics of neural activity were modulated by the level of semantic integration required. Specifically, source reconstructed time courses and phase synchronization measures showed increased recurrent interactions as a function of semantic integration demands. These findings demonstrate that the cortical dynamics of object processing are modulated by the complexity of semantic information required from the visual input.
Multi-source micro-friction identification for a class of cable-driven robots with passive backbone
NASA Astrophysics Data System (ADS)
Tjahjowidodo, Tegoeh; Zhu, Ke; Dailey, Wayne; Burdet, Etienne; Campolo, Domenico
2016-12-01
This paper analyses the dynamics of cable-driven robots with a passive backbone and develops techniques for their dynamic identification, which are tested on the H-Man, a planar cabled differential transmission robot for haptic interaction. The mechanism is optimized for human-robot interaction by accounting for the cost-benefit-ratio of the system, specifically by eliminating the necessity of an external force sensor to reduce the overall cost. As a consequence, this requires an effective dynamic model for accurate force feedback applications which include friction behavior in the system. We first consider the significance of friction in both the actuator and backbone spaces. Subsequently, we study the required complexity of the stiction model for the application. Different models representing different levels of complexity are investigated, ranging from the conventional approach of Coulomb to an advanced model which includes hysteresis. The results demonstrate each model's ability to capture the dynamic behavior of the system. In general, it is concluded that there is a trade-off between model accuracy and the model cost.
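For illustration, the candidate friction models of increasing complexity mentioned above can be sketched as follows (Coulomb, Coulomb plus viscous, and a Stribeck-type curve); a hysteretic pre-sliding model would add internal state and is omitted, and all parameter values are hypothetical.

```python
import numpy as np

def coulomb(v, Fc=0.8):
    return Fc * np.sign(v)

def coulomb_viscous(v, Fc=0.8, B=0.5):
    return Fc * np.sign(v) + B * v

def stribeck(v, Fc=0.8, Fs=1.2, vs=0.05, B=0.5):
    # Stiction peak Fs decays to the Coulomb level Fc with a characteristic velocity vs
    return (Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)) * np.sign(v) + B * v

v = np.linspace(-0.5, 0.5, 5)
print(np.column_stack([v, coulomb(v), coulomb_viscous(v), stribeck(v)]))
```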
Evaluation in the Design of Complex Systems
ERIC Educational Resources Information Center
Ho, Li-An; Schwen, Thomas M.
2006-01-01
We identify literature arguing that the process of creating knowledge-based systems is often imbalanced. In most knowledge-based systems, development is technology-driven instead of requirement-driven. Therefore, we argue that designers must recognize evaluation as a critical link in the application of requirement-driven development models…
Test Driven Development of Scientific Models
NASA Technical Reports Server (NTRS)
Clune, Thomas L.
2014-01-01
Test-Driven Development (TDD), a software development process that promises many advantages for developer productivity and software reliability, has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addicting after only a few days of exposure, and find it unthinkable to return to previous practices. After a brief overview of the TDD process and my experience in applying the methodology for development activities at Goddard, I will delve more deeply into some of the challenges that are posed by numerical and scientific software as well as tools and implementation approaches that should address those challenges.
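As a concrete (if toy) illustration of the test-first cycle described above, the snippet below shows a short automated test in the style of Python's built-in unittest framework; the saturation-vapor-pressure routine and the reference value are hypothetical examples, not taken from any Goddard code base.

```python
import math
import unittest

def saturation_vapor_pressure(t_celsius):
    """Tetens approximation for saturation vapor pressure in kPa."""
    return 0.6108 * math.exp(17.27 * t_celsius / (t_celsius + 237.3))

class TestSaturationVaporPressure(unittest.TestCase):
    # In TDD this test is written first and fails until the function exists.
    def test_reference_value_at_20C(self):
        self.assertAlmostEqual(saturation_vapor_pressure(20.0), 2.34, places=2)

    def test_monotonic_in_temperature(self):
        self.assertLess(saturation_vapor_pressure(10.0),
                        saturation_vapor_pressure(30.0))

if __name__ == "__main__":
    unittest.main()
```

Frameworks such as pFUnit (for Fortran) or googletest (for C/C++) play the same role for compiled scientific codes: the point is the short red-green cycle, not the specific language.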
NASA Astrophysics Data System (ADS)
Jung, Sukgeun; Pang, Ig-Chan; Lee, Joon-ho; Lee, Kyunghwan
2016-12-01
Recent studies in the western North Pacific reported a declining standing stock biomass of anchovy (Engraulis japonicus) in the Yellow Sea and a climate-driven southward shift of anchovy catch in Korean waters. We investigated the effects of a warming ocean on the latitudinal shift of anchovy catch by developing and applying individual-based models (IBMs) based on a regional ocean circulation model and an IPCC climate change scenario. Despite the greater uncertainty, our two IBMs projected that, by the 2030s, the strengthened Tsushima warm current in the Korea Strait and the East Sea, driven by global warming, and the subsequent confinement of the relatively cold water masses within the Yellow Sea will decrease larval anchovy biomass in the Yellow Sea, but will increase it in the Korea Strait and the East Sea. The decreasing trend of anchovy biomass in the Yellow Sea was reproduced by our models, but further validation and enhancement of the models are required together with extended ichthyoplankton surveys to understand and reliably project range shifts of anchovy and the impacts such range shifts will have on the marine ecosystems and fisheries in the region.
ON HIGHLY CLUMPED MAGNETIC WIND MODELS FOR COOL EVOLVED STARS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, G. M.
2010-09-10
Recently, it has been proposed that the winds of non-pulsating and non-dusty K and M giants and supergiants may be driven by some form of magnetic pressure acting on highly clumped wind material. While many researchers believe that magnetic processes are responsible for cool evolved stellar winds, existing MHD and Alfven wave-driven wind models have magnetic fields that are essentially radial and tied to the photosphere. The clumped magnetic wind scenario is quite different in that the magnetic flux is also being carried away from the star with the wind. We test this clumped wind hypothesis by computing continuum radio fluxes from the ζ Aur semiempirical model of Baade et al., which is based on wind-scattered line profiles. The radio continuum opacity is proportional to the electron density squared, while the line scattering opacity is proportional to the gas density. This difference in proportionality provides a test for the presence of large clumping factors. We derive the radial distribution of clump factors (CFs) for ζ Aur by comparing the nonthermal pressures required to produce the semiempirical velocity distribution with the expected thermal pressures. The CFs are ≈5 throughout the sub-sonic inner wind region and then decline outward. These implied clumping factors lead to excess radio emission at 2.0 cm, while at 6.2 cm it improves agreement with the smooth unclumped model. Smaller clumping factors of ≈2 lead to better overall agreement but also increase the discrepancy at 2 cm. These results do not support the magnetic clumped wind hypothesis and instead suggest that inherent uncertainties in the underlying semiempirical model probably dominate uncertainties in predicted radio fluxes. However, new ultraviolet line and radio continuum observations are needed to test the new generations of inhomogeneous magnetohydrodynamic wind models.
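The diagnostic leverage of this test comes from the different density dependences of the two opacities. A minimal sketch of the relation, under the common simplifying assumption (mine, not necessarily the paper's) that dense clumps occupy a volume filling fraction f and the interclump medium is essentially empty:

```latex
\[
  \mathrm{CF}(r) \;\equiv\; \frac{\langle n_e^2 \rangle}{\langle n_e \rangle^2}
  \;=\; \frac{1}{f}
  \quad \text{(clumps of filling factor } f\text{, empty interclump gas)}
\]
\[
  \mathrm{EM}_{\mathrm{clumped}} \;=\; \mathrm{CF}\times\mathrm{EM}_{\mathrm{smooth}},
  \qquad
  \tau_{\mathrm{line\ scattering}} \;\propto\; \langle n \rangle
  \;\;\text{(unaffected by clumping)}.
\]
```

Because the free-free (radio) emission measure scales with density squared while the line scattering used to calibrate the semiempirical model scales linearly with density, imposing a large CF on a line-calibrated model inflates the predicted radio flux, which is the over-prediction at 2.0 cm reported above.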
Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya
2013-01-01
Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. To date, most normalization methods for LC-MS data have been model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
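A minimal sketch of one cyclic-loess pass for a pair of LC-MS runs is shown below; it assumes log-transformed intensity vectors and uses the lowess smoother from statsmodels, and is only meant to illustrate the MA-plot idea behind data-driven normalization, not to reproduce the authors' pipeline.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

def cyclic_loess_pair(x, y, frac=0.4):
    """One cyclic-loess step for two log-intensity vectors x and y.

    Fits a loess curve to the MA-plot (M = x - y versus A = (x + y)/2) and
    removes half of the fitted systematic difference from each run.
    """
    A = 0.5 * (x + y)
    M = x - y
    # Fitted systematic bias as a function of average intensity
    M_fit = lowess(M, A, frac=frac, return_sorted=False)
    return x - M_fit / 2.0, y + M_fit / 2.0

# Hypothetical example: two runs, 1000 shared features, log2 intensities,
# with an intensity-dependent offset between the runs.
rng = np.random.default_rng(0)
run1 = rng.normal(20.0, 2.0, size=1000)
run2 = run1 + 0.3 + 0.02 * (run1 - 20.0) + rng.normal(0, 0.1, size=1000)

norm1, norm2 = cyclic_loess_pair(run1, run2)
print("mean M before:", np.mean(run1 - run2), "after:", np.mean(norm1 - norm2))
```

With more than two runs, the "cyclic" part applies this pairwise step repeatedly over all run pairs until the systematic differences stabilize.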
Model-Driven Useware Engineering
NASA Astrophysics Data System (ADS)
Meixner, Gerrit; Seissler, Marc; Breiner, Kai
User-oriented hardware and software development relies on a systematic development process based on a comprehensive analysis focusing on the users' requirements and preferences. Such a development process calls for the integration of numerous disciplines, from psychology and ergonomics to computer sciences and mechanical engineering. Hence, a correspondingly interdisciplinary team must be equipped with suitable software tools to allow it to handle the complexity of a multimodal and multi-device user interface development approach. An abstract, model-based development approach seems to be adequate for handling this complexity. This approach comprises different levels of abstraction requiring adequate tool support. Thus, in this chapter, we present the current state of our model-based software tool chain. We introduce the use model as the core model of our model-based process, transformation processes, and a model-based architecture, and we present different software tools that provide support for creating and maintaining the models or performing the necessary model transformations.
NASA Astrophysics Data System (ADS)
Franz, K. J.; Bowman, A. L.; Hogue, T. S.; Kim, J.; Spies, R.
2011-12-01
In the face of a changing climate, growing populations, and increased human habitation in hydrologically risky locations, both short- and long-range planners increasingly require robust and reliable streamflow forecast information. Current operational forecasting utilizes watershed-scale, conceptual models driven by ground-based (commonly point-scale) observations of precipitation and temperature and climatological potential evapotranspiration (PET) estimates. The PET values are derived from historic pan evaporation observations and remain static from year to year. Regional, dynamic PET values are vital for improved operational forecasting. With the advent of satellite remote sensing and the adoption of a more flexible operational forecast system by the National Weather Service, incorporation of advanced data products is now more feasible than in years past. In this study, we will test a previously developed satellite-derived PET product (UCLA MODIS-PET) in the National Weather Service forecast models and compare the model results to current methods. The UCLA MODIS-PET method is based on the Priestley-Taylor formulation, is driven with MODIS satellite products, and produces a daily, 250-m PET estimate. The focus area is eight headwater basins in the upper Midwest U.S. There is a need to develop improved forecasting methods for this region that are able to account for climatic and landscape changes more readily and effectively than current methods. This region is highly flood prone yet sensitive to prolonged dry periods in late summer and early fall, and is characterized by a highly managed landscape, which has drastically altered the natural hydrologic cycle. Our goal is to improve model simulations, and thereby the initial conditions prior to the start of a forecast, through the use of PET values that better reflect actual watershed conditions. The forecast models are being tested in both distributed and lumped mode.
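For reference, the Priestley-Taylor formulation underlying the UCLA MODIS-PET product has the simple form sketched below; the coefficient values and unit conventions are standard textbook choices (my assumptions), not necessarily the exact constants used in the product.

```python
import math

def priestley_taylor_pet(t_air_c, net_radiation_mj, ground_heat_mj=0.0, alpha=1.26):
    """Daily potential ET (mm/day) from the Priestley-Taylor formulation.

    PET = alpha * Delta / (Delta + gamma) * (Rn - G) / lambda
    t_air_c          -- mean air temperature (deg C)
    net_radiation_mj -- net radiation Rn (MJ m-2 day-1)
    ground_heat_mj   -- ground heat flux G (MJ m-2 day-1), ~0 at daily scale
    """
    # Slope of the saturation vapor pressure curve (kPa per deg C), Tetens-based
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    delta = 4098.0 * es / (t_air_c + 237.3) ** 2
    gamma = 0.066   # psychrometric constant (kPa per deg C), near sea level
    lam = 2.45      # latent heat of vaporization (MJ per kg)
    return alpha * delta / (delta + gamma) * (net_radiation_mj - ground_heat_mj) / lam

# Example: a warm summer day with Rn = 15 MJ m-2 day-1
print(f"{priestley_taylor_pet(25.0, 15.0):.1f} mm/day")
```

In a satellite-driven product, the temperature and radiation inputs come from MODIS retrievals at each 250-m pixel, which is what makes the PET estimate dynamic rather than climatological.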
An earth imaging camera simulation using wide-scale construction of reflectance surfaces
NASA Astrophysics Data System (ADS)
Murthy, Kiran; Chau, Alexandra H.; Amin, Minesh B.; Robinson, M. Dirk
2013-10-01
Developing and testing advanced ground-based image processing systems for earth-observing remote sensing applications presents a unique challenge that requires advanced imagery simulation capabilities. This paper presents an earth-imaging multispectral framing camera simulation system called PayloadSim (PaySim) capable of generating terabytes of photorealistic simulated imagery. PaySim leverages previous work in 3-D scene-based image simulation, adding a novel method for automatically and efficiently constructing 3-D reflectance scenes by draping tiled orthorectified imagery over a geo-registered Digital Elevation Map (DEM). PaySim's modeling chain is presented in detail, with emphasis given to the techniques used to achieve computational efficiency. These techniques as well as cluster deployment of the simulator have enabled tuning and robust testing of image processing algorithms, and production of realistic sample data for customer-driven image product development. Examples of simulated imagery of Skybox's first imaging satellite are shown.
Design and testing of a novel multi-stroke micropositioning system with variable resolutions.
Xu, Qingsong
2014-02-01
Multi-stroke stages are demanded in micro-/nanopositioning applications which require smaller and larger motion strokes with fine and coarse resolutions, respectively. This paper presents the conceptual design of a novel multi-stroke, multi-resolution micropositioning stage driven by a single actuator for each working axis. It eliminates the issue of the interference among different drives, which resides in conventional multi-actuation stages. The stage is devised based on a fully compliant variable stiffness mechanism, which exhibits unequal stiffnesses in different strokes. Resistive strain sensors are employed to offer variable position resolutions in the different strokes. To quantify the design of the motion strokes and coarse/fine resolution ratio, analytical models are established. These models are verified through finite-element analysis simulations. A proof-of-concept prototype XY stage is designed, fabricated, and tested to demonstrate the feasibility of the presented ideas. Experimental results of static and dynamic testing validate the effectiveness of the proposed design.
High Performance Automatic Character Skinning Based on Projection Distance
NASA Astrophysics Data System (ADS)
Li, Jun; Lin, Feng; Liu, Xiuling; Wang, Hongrui
2018-03-01
Skeleton-driven deformation methods are commonly used in character deformation. The process of painting skin weights for character deformation is a long-winded task requiring manual tweaking. We present a novel method to calculate skinning weights automatically from a 3D human geometric model and its corresponding skeleton. The method first groups each mesh vertex of the 3D human model with a skeleton bone by the minimum distance from the vertex to each bone. It then calculates each vertex's weights to the adjacent bones from the distance of the vertex's projection point to the bone joints. Our method's output can be applied not only to any kind of skeleton-driven deformation, but also to motion-capture-driven (mocap-driven) deformation. Experimental results show that our method not only has strong generality and robustness, but also high performance.
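A compact sketch of the two steps described above (nearest-bone grouping, then weights from the projection-point distances to the two bone joints) is given below; the vertex and skeleton data are made up for illustration, and refinements such as smoothing weights across neighbouring bones are omitted.

```python
import numpy as np

def project_on_bone(v, j0, j1):
    """Clamped projection parameter t in [0, 1] and distance from v to segment j0-j1."""
    d = j1 - j0
    t = np.clip(np.dot(v - j0, d) / np.dot(d, d), 0.0, 1.0)
    closest = j0 + t * d
    return t, np.linalg.norm(v - closest)

def skinning_weights(vertices, bones):
    """bones: list of (joint0, joint1) pairs. Returns per-vertex (bone index, w0, w1)."""
    result = []
    for v in vertices:
        # Step 1: group the vertex with the bone of minimum distance
        projs = [project_on_bone(v, j0, j1) for j0, j1 in bones]
        b = int(np.argmin([dist for _, dist in projs]))
        t, _ = projs[b]
        # Step 2: weights to the two adjacent joints from the projection distance
        result.append((b, 1.0 - t, t))   # weights to joint0 and joint1 of bone b
    return result

# Hypothetical data: two bones along the y-axis, three vertices near them
bones = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
         (np.array([0.0, 1.0, 0.0]), np.array([0.0, 2.0, 0.0]))]
verts = [np.array([0.1, 0.2, 0.0]), np.array([0.1, 0.9, 0.0]), np.array([0.1, 1.6, 0.0])]
print(skinning_weights(verts, bones))
```

Because both steps use only distances and projections, the same weights drive either keyframed skeleton animation or mocap-driven skeletons, as the abstract notes.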
Implementation of an Ada real-time executive: A case study
NASA Technical Reports Server (NTRS)
Laird, James D.; Burton, Bruce A.; Koppes, Mary R.
1986-01-01
Current Ada language implementations and runtime environments are immature, unproven, and a key risk area for real-time embedded computer systems (ECS). A test-case environment is provided in which the concerns of the real-time ECS community are addressed. A priority-driven executive is selected to be implemented in the Ada programming language. The model selected is representative of real-time executives tailored for embedded systems used in missile, spacecraft, and avionics applications. An Ada-based design methodology is utilized, and two designs are considered. The first of these designs requires the use of vendor-supplied runtime and tasking support. An alternative high-level design is also considered for an implementation requiring no vendor-supplied runtime or tasking support. The former approach is carried through to implementation.
Developing stochastic model of thrust and flight dynamics for small UAVs
NASA Astrophysics Data System (ADS)
Tjhai, Chandra
This thesis presents a stochastic thrust model and aerodynamic model for small propeller-driven UAVs whose power plant is a small electric motor. First, a model is developed that relates the thrust generated by a small propeller driven by an electric motor to the throttle setting and commanded engine RPM. A perturbation of this model is then used to relate the uncertainty in the commanded throttle and engine RPM to the error in the predicted thrust. Such a stochastic model is indispensable in the design of state estimation and control systems for UAVs where the performance requirements of the systems are specified in stochastic terms. It is shown that thrust prediction models for small UAVs are not simple, explicit functions relating throttle input and RPM command to the thrust generated. Rather, they are non-linear, iterative procedures which depend on a geometric description of the propeller and a mathematical model of the motor. A detailed derivation of the iterative procedure is presented and the impact of errors which arise from inaccurate propeller and motor descriptions is discussed. Validation results from a series of wind tunnel tests are presented. The results show a favorable statistical agreement between the thrust uncertainty predicted by the model and the errors measured in the wind tunnel. The uncertainty model of aircraft aerodynamic coefficients, developed based on wind tunnel experiments, is discussed at the end of this thesis.
NASA Technical Reports Server (NTRS)
Griffin, Roy N., Jr.; Holzhauser, Curt A.; Weiberg, James A.
1958-01-01
An investigation was made to determine the lifting effectiveness and flow requirements of blowing over the trailing-edge flaps and ailerons on a large-scale model of a twin-engine, propeller-driven airplane having a high-aspect-ratio, thick, straight wing. With sufficient blowing jet momentum to prevent flow separation on the flap, the lift increment increased for flap deflections up to 80 deg (the maximum tested). This lift increment also increased with increasing propeller thrust coefficient. The blowing jet momentum coefficient required for attached flow on the flaps was not significantly affected by thrust coefficient, angle of attack, or blowing nozzle height.
A hydraulically driven colonoscope.
Coleman, Stuart A; Tapia-Siles, Silvia C; Pakleppa, Markus; Vorstius, Jan B; Keatch, Robert P; Tang, Benjie; Cuschieri, Alfred
2016-10-01
Conventional colonoscopy requires a high degree of operator skill and is often painful for the patient. We present a preliminary feasibility study of an alternative approach where a self-propelled colonoscope is hydraulically driven through the colon. A hydraulic colonoscope which could be controlled manually or automatically was developed and assessed in a test bed modelled on the anatomy of the human colon. A conventional colonoscope was used by an experienced colonoscopist in the same test bed for comparison. Pressures and forces on the colon were measured during the test. The hydraulic colonoscope was able to successfully advance through the test bed in a comparable time to the conventional colonoscope. The hydraulic colonoscope reduces measured loads on artificial mesenteries, but increases intraluminal pressure compared to the colonoscope. Both manual and automatically controlled modes were able to successfully advance the hydraulic colonoscope through the colon. However, the automatic controller mode required lower pressures than manual control, but took longer to reach the caecum. The hydraulic colonoscope appears to be a viable device for further development as forces and pressures observed during use are comparable to those used in current clinical practice.
Scenario driven data modelling: a method for integrating diverse sources of data and data streams
2011-01-01
Background Biology is rapidly becoming a data intensive, data-driven science. It is essential that data is represented and connected in ways that best represent its full conceptual content and allow both automated integration and data-driven decision-making. Recent advancements in distributed multi-relational directed graphs, implemented in the form of the Semantic Web, make it possible to deal with complicated heterogeneous data in new and interesting ways. Results This paper presents a new approach, scenario driven data modelling (SDDM), that integrates multi-relational directed graphs with data streams. SDDM can be applied to virtually any data integration challenge with widely divergent types of data and data streams. In this work, we explored integrating genetics data with reports from traditional media. SDDM was applied to the New Delhi metallo-beta-lactamase gene (NDM-1), an emerging global health threat. The SDDM process constructed a scenario, created an RDF multi-relational directed graph that linked diverse types of data to the Semantic Web, implemented RDF conversion tools (RDFizers) to bring content into the Semantic Web, identified data streams and analytical routines to analyse those streams, and identified user requirements and graph traversals to meet end-user requirements. Conclusions We provided an example where SDDM was applied to a complex data integration challenge. The process created a model of the emerging NDM-1 health threat, identified and filled gaps in that model, and constructed reliable software that monitored data streams based on the scenario derived multi-relational directed graph. The SDDM process significantly reduced the software requirements phase by letting the scenario and resulting multi-relational directed graph define what is possible and then set the scope of the user requirements. Approaches like SDDM will be critical to the future of data intensive, data-driven science because they automate the process of converting massive data streams into usable knowledge. PMID:22165854
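To make the "multi-relational directed graph" step concrete, the sketch below builds a tiny RDF graph linking an NDM-1-like resistance gene, a bacterial isolate, and a media report using the rdflib Python library, then runs a traversal corresponding to a simple user requirement. The namespace, URIs, and predicates are invented for illustration and are not those of the published SDDM model.

```python
from rdflib import Graph, Namespace, Literal, RDF

# Hypothetical namespace for the scenario model
EX = Namespace("http://example.org/sddm/")

g = Graph()
g.bind("ex", EX)

# Nodes: a resistance gene, a bacterial isolate, and a news report
g.add((EX.NDM1, RDF.type, EX.ResistanceGene))
g.add((EX.isolate42, RDF.type, EX.BacterialIsolate))
g.add((EX.isolate42, EX.carriesGene, EX.NDM1))
g.add((EX.report_2010_08, RDF.type, EX.MediaReport))
g.add((EX.report_2010_08, EX.mentionsIsolate, EX.isolate42))
g.add((EX.report_2010_08, EX.publishedOn, Literal("2010-08-11")))

# Graph traversal for a simple end-user requirement:
# "which reports mention isolates carrying NDM-1?"
query = """
SELECT ?report WHERE {
  ?report ex:mentionsIsolate ?isolate .
  ?isolate ex:carriesGene ex:NDM1 .
}"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.report)
```

In the SDDM workflow, RDFizers would populate such a graph automatically from incoming data streams, and the stored traversals become the monitored queries.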
Quantum random number generation for loophole-free Bell tests
NASA Astrophysics Data System (ADS)
Mitchell, Morgan; Abellan, Carlos; Amaya, Waldimar
2015-05-01
We describe the generation of quantum random numbers at multi-Gbps rates, combined with real-time randomness extraction, to give very high purity random numbers based on quantum events at most tens of ns in the past. The system satisfies the stringent requirements of quantum non-locality tests that aim to close the timing loophole. We describe the generation mechanism using spontaneous-emission-driven phase diffusion in a semiconductor laser, digitization, and extraction by parity calculation using multi-GHz logic chips. We pay special attention to experimental proof of the quality of the random numbers and analysis of the randomness extraction. In contrast to widely-used models of randomness generators in the computer science literature, we argue that randomness generation by spontaneous emission can be extracted from a single source.
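The parity-based extraction step can be illustrated in a few lines: each output bit is the XOR (parity) of a block of raw digitizer bits, which suppresses bias and classical correlations at the cost of throughput. The block length and the toy biased raw-bit source below are illustrative assumptions, not the parameters of the actual multi-GHz hardware.

```python
import numpy as np

def parity_extract(raw_bits, block_len=8):
    """Collapse each block of raw bits to a single bit by parity (XOR)."""
    n_blocks = len(raw_bits) // block_len
    blocks = np.asarray(raw_bits[:n_blocks * block_len]).reshape(n_blocks, block_len)
    return blocks.sum(axis=1) % 2

# Toy raw source with a strong bias toward 1 (p = 0.6)
rng = np.random.default_rng(1)
raw = (rng.random(80_000) < 0.6).astype(int)

extracted = parity_extract(raw, block_len=8)
print("raw bias:", raw.mean(), "-> extracted bias:", extracted.mean())
# For independent bits, the residual bias of the parity of n bits is
# |2p - 1|**n / 2, so it shrinks geometrically with the block length.
```

The hardware version performs the same parity reduction with multi-GHz logic so that the extracted bits remain tied to quantum events only tens of nanoseconds in the past.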
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panaccione, Charles; Staab, Greg; Meuleman, Erik
ION has developed a mathematically driven model for a contacting device incorporating mass transfer, heat transfer, and computational fluid dynamics. This model is based upon a parametric structure for purposes of future commercialization. The most promising design from modeling was 3D printed and tested in a bench-scale CO2 capture unit and compared to commercially available structured packing tested in the same unit.
ERIC Educational Resources Information Center
Doherty, Brooks
2016-01-01
Driven by faculty-based action research, redesigned residential and online courses, and changes to placement testing, Rasmussen College increased its developmental education pass rates by double digits while decreasing the number and percentage of students who require remedial coursework. Like many institutions of higher education, Rasmussen…
Designing an optimal software intensive system acquisition: A game theoretic approach
NASA Astrophysics Data System (ADS)
Buettner, Douglas John
The development of schedule-constrained software-intensive space systems is challenging. Case study data from national security space programs developed at the U.S. Air Force Space and Missile Systems Center (USAF SMC) provide evidence of the strong desire by contractors to skip or severely reduce software development design and early defect detection methods in these schedule-constrained environments. The research findings suggest recommendations to fully address these issues at numerous levels. However, the observations lead us to investigate modeling and theoretical methods to fundamentally understand what motivated this behavior in the first place. As a result, Madachy's inspection-based system dynamics model is modified to include unit testing and an integration test feedback loop. This Modified Madachy Model (MMM) is used as a tool to investigate the consequences of this behavior on the observed defect dynamics for two remarkably different case study software projects. Latin Hypercube sampling of the MMM with sample distributions for quality-, schedule-, and cost-driven strategies demonstrates that the higher cost and effort quality-driven strategies provide consistently better schedule performance than the schedule-driven up-front effort-reduction strategies. Game-theoretic reasoning, based on the case study evidence and Austin's agency model, is used to describe why schedule-driven engineers cut corners on inspections and unit testing. Game theory concepts are then used to argue that the source of the problem, and hence the solution to developers cutting corners on quality for schedule-driven system acquisitions, ultimately lies with the government. The game theory arguments also lead to the suggestion that the use of a multi-player dynamic Nash bargaining game provides a solution for the observed lack-of-quality game between the government (the acquirer) and "large-corporation" software developers. A closing note argues that this multi-player dynamic Nash bargaining game also provides a solution to Freeman Dyson's problem of finding a way to label systems as good or bad.
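A minimal sketch of the Latin Hypercube sampling step is shown below using SciPy's qmc module; the three strategy parameters, their ranges, and the stub model are invented placeholders standing in for the MMM inputs (e.g., inspection effort, unit-test effort, schedule pressure), not values used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical strategy parameters swept in the Modified Madachy Model:
# fraction of effort on inspections, on unit testing, and a schedule-pressure factor.
l_bounds = [0.00, 0.00, 0.8]
u_bounds = [0.25, 0.30, 1.5]

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=200)                 # 200 points in [0, 1)^3
samples = qmc.scale(unit_samples, l_bounds, u_bounds)

def run_mmm(inspection_effort, unit_test_effort, schedule_pressure):
    # Placeholder for a run of the system dynamics model; returns escaped defects.
    return 100 * (1 - inspection_effort) * (1 - unit_test_effort) * schedule_pressure

escaped = np.array([run_mmm(*row) for row in samples])
print("escaped-defect range over sampled strategies:", escaped.min(), escaped.max())
```

Latin Hypercube sampling guarantees that each parameter's range is covered evenly even with modest sample counts, which is why it is a common choice for exercising system dynamics models over strategy distributions.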
AGN outflows and feedback twenty years on
NASA Astrophysics Data System (ADS)
Harrison, C. M.; Costa, T.; Tadhunter, C. N.; Flütsch, A.; Kakkad, D.; Perna, M.; Vietri, G.
2018-03-01
It is twenty years since the seminal works by Magorrian and co-authors and by Silk and Rees, which, along with other related work, ignited an explosion of publications connecting active galactic nucleus (AGN)-driven outflows to galaxy evolution. With a surge in observations of AGN outflows, studies are attempting to test AGN feedback models directly using the outflow properties. With a focus on outflows traced by optical and CO emission lines, we discuss significant challenges that greatly complicate this task, from both an observational and theoretical perspective. We highlight the observational uncertainties involved and the assumptions required when deriving kinetic coupling efficiencies (that is, outflow kinetic power as a fraction of AGN luminosity) from typical observations. Based on recent models we demonstrate that extreme caution should be taken when comparing observationally derived kinetic coupling efficiencies to coupling efficiencies from fiducial feedback models.
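As a back-of-the-envelope illustration of the kinetic coupling efficiency discussed above, the sketch below computes the outflow kinetic power from an assumed mass outflow rate and velocity and divides by the AGN luminosity; the numbers are arbitrary examples, and the simple (1/2)·Mdot·v² estimate deliberately ignores the observational uncertainties the authors emphasise.

```python
# Kinetic coupling efficiency: outflow kinetic power / AGN bolometric luminosity
M_SUN = 1.989e33          # g
YEAR = 3.156e7            # s
KM = 1.0e5                # cm

def kinetic_coupling_efficiency(mdot_msun_yr, v_km_s, l_agn_erg_s):
    mdot = mdot_msun_yr * M_SUN / YEAR        # mass outflow rate, g/s
    v = v_km_s * KM                           # outflow velocity, cm/s
    kinetic_power = 0.5 * mdot * v**2         # erg/s
    return kinetic_power / l_agn_erg_s

# Hypothetical ionised outflow: 10 Msun/yr at 1000 km/s around a 1e45 erg/s AGN
eff = kinetic_coupling_efficiency(10.0, 1000.0, 1e45)
print(f"kinetic coupling efficiency ~ {eff:.2%}")
```

The derived percentage is highly sensitive to the assumed outflow geometry, density, and extent entering Mdot, which is one source of the observational caution urged in the abstract.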
MDA-based EHR application security services.
Blobel, Bernd; Pharow, Peter
2004-01-01
Component-oriented, distributed, virtual EHR systems have to meet enhanced security and privacy requirements. In the context of advanced architectural paradigms such as component orientation, model-driven design, and knowledge-based approaches, the required standardised security services have to be specified and implemented in an integrated way following the same paradigm. This concerns the deployment of formal models, meta-languages, reference models such as the ISO RM-ODP, and development and implementation tools. The international project results presented here proceed along these lines.
Use case driven approach to develop simulation model for PCS of APR1400 simulator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong Wook, Kim; Hong Soo, Kim; Hyeon Tae, Kang
2006-07-01
The full-scope simulator is being developed to evaluate specific design features and to support the iterative design and validation in the Man-Machine Interface System (MMIS) design of Advanced Power Reactor (APR) 1400. The simulator consists of process model, control logic model, and MMI for the APR1400 as well as the Power Control System (PCS). In this paper, a use case driven approach is proposed to develop a simulation model for PCS. In this approach, a system is considered from the point of view of its users. The user's view of the system is based on interactions with the system and the resultant responses. In the use case driven approach, we initially consider the system as a black box and look at its interactions with the users. From these interactions, use cases of the system are identified. Then the system is modeled using these use cases as functions. Lower levels expand the functionalities of each of these use cases. Hence, starting from the topmost level view of the system, we proceeded down to the lowest level (the internal view of the system). The model of the system thus developed is use case driven. This paper will introduce the functionality of the PCS simulation model, including a requirement analysis based on use cases and the validation result of the development of the PCS model. The PCS simulation model using use cases will first be used during full-scope simulator development for a nuclear power plant and will be supplied to the Shin-Kori 3 and 4 plants. The use case based simulation model development can be useful for the design and implementation of simulation models. (authors)
NASA Astrophysics Data System (ADS)
Pierson, Kyle D.; Hochhalter, Jacob D.; Spear, Ashley D.
2018-05-01
Systematic correlation analysis was performed between simulated micromechanical fields in an uncracked polycrystal and the known path of an eventual fatigue-crack surface based on experimental observation. Concurrent multiscale finite-element simulation of cyclic loading was performed using a high-fidelity representation of grain structure obtained from near-field high-energy x-ray diffraction microscopy measurements. An algorithm was developed to parameterize and systematically correlate the three-dimensional (3D) micromechanical fields from simulation with the 3D fatigue-failure surface from experiment. For comparison, correlation coefficients were also computed between the micromechanical fields and hypothetical, alternative surfaces. The correlation of the fields with hypothetical surfaces was found to be consistently weaker than that with the known crack surface, suggesting that the micromechanical fields of the cyclically loaded, uncracked microstructure might provide some degree of predictiveness for microstructurally small fatigue-crack paths, although the extent of such predictiveness remains to be tested. In general, gradients of the field variables exhibit stronger correlations with crack path than the field variables themselves. Results from the data-driven approach implemented here can be leveraged in future model development for prediction of fatigue-failure surfaces (for example, to facilitate univariate feature selection required by convolution-based models).
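The core of the correlation step can be sketched in a few lines: a micromechanical field is sampled at the material points lying on a candidate surface, encoded as a binary indicator, and correlated against the field, with the known crack surface compared to hypothetical alternatives. The arrays below are synthetic placeholders, not the actual simulation or HEDM data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins: a micromechanical field (e.g., accumulated plastic slip)
# at N material points, and binary indicators of membership in candidate surfaces.
n_points = 5000
field = rng.lognormal(mean=0.0, sigma=0.5, size=n_points)

# "Known" crack surface: biased toward high-field points (toy construction)
on_known_surface = (field + rng.normal(0, 0.5, n_points)) > np.percentile(field, 90)
# Hypothetical alternative surface: same number of points, chosen at random
on_alt_surface = np.zeros(n_points, dtype=bool)
on_alt_surface[rng.choice(n_points, on_known_surface.sum(), replace=False)] = True

def point_biserial(field, indicator):
    """Pearson correlation between a continuous field and a binary surface indicator."""
    return np.corrcoef(field, indicator.astype(float))[0, 1]

print("correlation with known crack surface:", point_biserial(field, on_known_surface))
print("correlation with hypothetical surface:", point_biserial(field, on_alt_surface))
```

The study's observation that field gradients correlate more strongly than the fields themselves corresponds to repeating such a comparison with spatial derivatives of the field as the continuous variable.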
Zhang, Huaguang; Cui, Lili; Zhang, Xin; Luo, Yanhong
2011-12-01
In this paper, a novel data-driven robust approximate optimal tracking control scheme is proposed for unknown general nonlinear systems by using the adaptive dynamic programming (ADP) method. In the design of the controller, only available input-output data is required instead of known system dynamics. A data-driven model is established by a recurrent neural network (NN) to reconstruct the unknown system dynamics using available input-output data. By adding a novel adjustable term related to the modeling error, the resultant modeling error is first guaranteed to converge to zero. Then, based on the obtained data-driven model, the ADP method is utilized to design the approximate optimal tracking controller, which consists of the steady-state controller and the optimal feedback controller. Further, a robustifying term is developed to compensate for the NN approximation errors introduced by implementing the ADP method. Based on Lyapunov approach, stability analysis of the closed-loop system is performed to show that the proposed controller guarantees the system state asymptotically tracking the desired trajectory. Additionally, the obtained control input is proven to be close to the optimal control input within a small bound. Finally, two numerical examples are used to demonstrate the effectiveness of the proposed control scheme.
Threat driven modeling framework using petri nets for e-learning system.
Khamparia, Aditya; Pandey, Babita
2016-01-01
Vulnerabilities at various levels are the main cause of security risks in e-learning systems. This paper presents a modified threat-driven modeling framework to identify, after risk assessment, the threats that require mitigation and how to mitigate them. To model those threat mitigations, aspect-oriented stochastic Petri nets are used. The paper includes security metrics based on vulnerabilities present in e-learning systems. The Common Vulnerability Scoring System, designed to provide a normalized method for rating vulnerabilities, is used as the basis for the metric definitions and calculations. A case study is also proposed, showing the need for and feasibility of using aspect-oriented stochastic Petri net models for threat modeling, which improves the reliability, consistency, and robustness of the e-learning system.
Data-driven Modeling of Metal-oxide Sensors with Dynamic Bayesian Networks
NASA Astrophysics Data System (ADS)
Gosangi, Rakesh; Gutierrez-Osuna, Ricardo
2011-09-01
We present a data-driven probabilistic framework to model the transient response of MOX sensors modulated with a sequence of voltage steps. Analytical models of MOX sensors are usually built based on the physico-chemical properties of the sensing materials. Although building these models provides an insight into the sensor behavior, they also require a thorough understanding of the underlying operating principles. Here we propose a data-driven approach to characterize the dynamical relationship between sensor inputs and outputs. Namely, we use dynamic Bayesian networks (DBNs), probabilistic models that represent temporal relations between a set of random variables. We identify a set of control variables that influence the sensor responses, create a graphical representation that captures the causal relations between these variables, and finally train the model with experimental data. We validated the approach on experimental data in terms of predictive accuracy and classification performance. Our results show that DBNs can accurately predict the dynamic response of MOX sensors, as well as capture the discriminatory information present in the sensor transients.
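As a drastically simplified stand-in for the DBN described above, the sketch below learns an input-conditioned transition distribution over discretised sensor states from a sequence of (voltage step, response level) pairs and then predicts the next-state distribution; a Markov chain with an observed control input is the simplest special case of a dynamic Bayesian network. All data and discretisation choices are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

N_STATES = 4      # discretised sensor-response levels
N_INPUTS = 3      # discrete heater-voltage steps

# Synthetic training sequence of (input u_t, state x_t); a real application would
# discretise measured MOX transients instead.
T = 5000
u = rng.integers(0, N_INPUTS, size=T)
x = np.zeros(T, dtype=int)
for t in range(1, T):
    drift = u[t] - 1                         # toy dynamics: input pushes the state up/down
    x[t] = np.clip(x[t - 1] + drift + rng.integers(-1, 2), 0, N_STATES - 1)

# Learn P(x_t | x_{t-1}, u_t) from counts (maximum likelihood with add-one smoothing)
counts = np.ones((N_INPUTS, N_STATES, N_STATES))
for t in range(1, T):
    counts[u[t], x[t - 1], x[t]] += 1
transition = counts / counts.sum(axis=2, keepdims=True)

# Predict the next-state distribution for a given previous state and input
prev_state, next_input = 2, 0
print("P(x_t | x_{t-1}=2, u_t=0) =", np.round(transition[next_input, prev_state], 3))
```

The full DBN in the paper adds further variables and temporal links, but the same idea applies: the structure encodes causal relations, and the conditional distributions are trained from experimental transients rather than from a physico-chemical model.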
Evaluation of Mid-Size Male Hybrid III Models for use in Spaceflight Occupant Protection Analysis
NASA Technical Reports Server (NTRS)
Putnam, J.; Somers, J.; Wells, J.; Newby, N.; Currie-Gregg, N.; Lawrence, C.
2016-01-01
Introduction: In an effort to improve occupant safety during dynamic phases of spaceflight, the National Aeronautics and Space Administration (NASA) has worked to develop occupant protection standards for future crewed spacecraft. One key aspect of these standards is the identification of injury mechanisms through anthropometric test devices (ATDs). Within this analysis, both physical and computational ATD evaluations are required to reasonably encompass the vast range of loading conditions any spaceflight crew may encounter. In this study, the accuracy of publicly available mid-size male HIII ATD finite element (FE) models is evaluated within applicable loading conditions against extensive sled testing performed on their physical counterparts. Methods: A series of sled tests were performed at Wright-Patterson Air Force Base (WPAFB) employing variations of magnitude, duration, and impact direction to encompass the dynamic loading range for expected spaceflight. FE simulations were developed to the specifications of the test setup and driven using measured acceleration profiles. Both fast and detailed FE models of the mid-size male HIII were run to quantify differences in their accuracy and thus assess the applicability of each within this field. Results: Preliminary results identify the dependence of model accuracy on loading direction, magnitude, and rate. Additionally, the accuracy of individual response metrics is shown to vary across each model within evaluated test conditions. Causes for model inaccuracy are identified based on the observed relationships. Discussion: Computational modeling provides an essential component to ATD injury metric evaluation used to ensure the safety of future spaceflight occupants. The assessment of current ATD models lays the groundwork for how these models can be used appropriately in the future. Identification of limitations and possible paths for improvement aid in the development of these effective analysis tools.
Evaluation of Mid-Size Male Hybrid III Models for use in Spaceflight Occupant Protection Analysis
NASA Technical Reports Server (NTRS)
Putnam, Jacob B.; Sommers, Jeffrey T.; Wells, Jessica A.; Newby, Nathaniel J.; Currie-Gregg, Nancy J.; Lawrence, Chuck
2016-01-01
In an effort to improve occupant safety during dynamic phases of spaceflight, the National Aeronautics and Space Administration (NASA) has worked to develop occupant protection standards for future crewed spacecraft. One key aspect of these standards is the identification of injury mechanisms through anthropometric test devices (ATDs). Within this analysis, both physical and computational ATD evaluations are required to reasonably encompass the vast range of loading conditions any spaceflight crew may encounter. In this study, the accuracy of publicly available mid-size male HIII ATD finite element (FE) models is evaluated within applicable loading conditions against extensive sled testing performed on their physical counterparts. Methods: A series of sled tests were performed at Wright-Patterson Air Force Base (WPAFB) employing variations of magnitude, duration, and impact direction to encompass the dynamic loading range for expected spaceflight. FE simulations were developed to the specifications of the test setup and driven using measured acceleration profiles. Both fast and detailed FE models of the mid-size male HIII were run to quantify differences in their accuracy and thus assess the applicability of each within this field. Results: Preliminary results identify the dependence of model accuracy on loading direction, magnitude, and rate. Additionally, the accuracy of individual response metrics is shown to vary across each model within evaluated test conditions. Causes for model inaccuracy are identified based on the observed relationships. Discussion: Computational modeling provides an essential component to ATD injury metric evaluation used to ensure the safety of future spaceflight occupants. The assessment of current ATD models lays the groundwork for how these models can be used appropriately in the future. Identification of limitations and possible paths for improvement aid in the development of these effective analysis tools.
NASA Technical Reports Server (NTRS)
Gwaltney, D. A.
2002-01-01
An FY 2001 Center Director's Discretionary Fund task to develop a test platform for the development, implementation, and evaluation of adaptive and other advanced control techniques for brushless DC (BLDC) motor-driven mechanisms is described. Important applications for BLDC motor-driven mechanisms are the translation of specimens in microgravity experiments and electromechanical actuation of nozzle and fuel valves in propulsion systems. Motor-driven aerocontrol surfaces are also being utilized in developmental X vehicles. The experimental test platform employs a linear translation stage that is mounted vertically and driven by a BLDC motor. Control approaches are implemented on a digital signal processor-based controller for real-time, closed-loop control of the stage carriage position. The goal of the effort is to explore the application of advanced control approaches that can enhance the performance of a motor-driven actuator over the performance obtained using linear control approaches with fixed gains. Adaptive controllers utilizing an exact model knowledge controller and a self-tuning controller are implemented and the control system performance is illustrated through the presentation of experimental results.
NASA Technical Reports Server (NTRS)
Penn, John M.
2013-01-01
This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA/Johnson Space Center and many other NASA facilities. It describes what was learned and the significant benefits seen, such as fast, thorough, and clear test feedback every time code is checked in to the code repository. It also describes a system that encourages development of code that is much more flexible, maintainable, and reliable. The Trick Simulation Toolkit development environment provides a common architecture for user-defined simulations. Trick builds executable simulations using user-supplied simulation-definition files (S_define) and user-supplied "model code". For each Trick-based simulation, Trick automatically provides job scheduling, checkpoint / restore, data-recording, interactive variable manipulation (variable server), and an input-processor. Also included are tools for plotting recorded data and various other supporting tools and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX. Prior to adopting this new development approach, Trick testing consisted primarily of running a few large simulations, with the hope that their complexity and scale would exercise most of Trick's code and expose any recently introduced bugs. Unsurprisingly, this approach yielded inconsistent results. It was obvious that a more systematic, thorough approach was required. After seeing examples of some Java-based projects that used the JUnit test framework, similar test frameworks for C and C++ were sought. Several were found, all clearly inspired by JUnit. Googletest, a freely available open-source testing framework, was selected as the most appropriate and capable. The new approach was implemented while rewriting the Trick memory management component, to eliminate a fundamental design flaw. The benefits became obvious almost immediately, not just in the correctness of the individual functions and classes but also in the correctness and flexibility being added to the overall design. Creating code to be testable, and testing it as it was created, resulted not only in better working code, but also in better-organized, flexible, and readable (i.e., articulate) code. This was, in essence, the test-driven development (TDD) methodology created by Kent Beck. Seeing the benefits of Test Driven Development, other Trick components were refactored to make them more testable, and tests were designed and implemented for them.
Test Driven Development of Scientific Models
NASA Technical Reports Server (NTRS)
Clune, Thomas L.
2012-01-01
Test-Driven Development (TDD) is a software development process that promises many advantages for developer productivity and has become widely accepted among professional software engineers. As the name suggests, TDD practitioners alternate between writing short automated tests and producing code that passes those tests. Although this overly simplified description will undoubtedly sound prohibitively burdensome to many uninitiated developers, the advent of powerful unit-testing frameworks greatly reduces the effort required to produce and routinely execute suites of tests. By testimony, many developers find TDD to be addicting after only a few days of exposure, and find it unthinkable to return to previous practices. Of course, scientific/technical software differs from other software categories in a number of important respects, but I nonetheless believe that TDD is quite applicable to the development of such software and has the potential to significantly improve programmer productivity and code quality within the scientific community. After a detailed introduction to TDD, I will present the experience within the Software Systems Support Office (SSSO) in applying the technique to various scientific applications. This discussion will emphasize the various direct and indirect benefits as well as some of the difficulties and limitations of the methodology. I will conclude with a brief description of pFUnit, a unit testing framework I co-developed to support test-driven development of parallel Fortran applications.
NASA Astrophysics Data System (ADS)
Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.
2010-12-01
The dramatic increase of the geomagnetic field data volume available due to many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, required at some point the appropriate qualitative transition in the empirical modeling tools. Classical empirical models, such as T96 and T02, used few custom-tailored modules to represent major magnetospheric current systems and simple data binning or loading-unloading inputs for their fitting with data and the subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions as well as by the advanced data-mining algorithms searching for events with the global activity parameters, such as the Sym-H index, similar to those at the time of interest, as is done in the model TS07D (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with the individual subset of the database being used to reproduce the geomagnetic field pattern at every new moment in time, requires the corresponding transition in the use of the new empirical geomagnetic field models. It becomes more similar to runs-on-request offered by the Community Coordinated Modeling Center for many first principles MHD and kinetic codes. To provide this mode of operation for the TS07D model a new web-based modeling tool has been created and tested at the JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, where the magnetospheric part of the macro-parameters involved in the data-binning process (Sym-H index and its trend parameter) are replaced by their solar wind-based analogs obtained using the Burton-McPherron-Russell approach.
Realizing the potential of pathway-based toxicity testing requires a fresh look at how we describe phenomena leading to adverse effects in vivo, how we assess them in vitro and how we extrapolate them in silico across chemicals, doses and species. We developed the ToxPlorer™ fram...
Floating rGO-based black membranes for solar driven sterilization.
Zhang, Yao; Zhao, Dengwu; Yu, Fan; Yang, Chao; Lou, Jinwei; Liu, Yanming; Chen, Yingying; Wang, Zhongyong; Tao, Peng; Shang, Wen; Wu, Jianbo; Song, Chengyi; Deng, Tao
2017-12-14
This paper presents a new steam sterilization approach that uses a solar-driven evaporation system at the water/air interface. Compared to the conventional solar autoclave, this new steam sterilization approach via interfacial evaporation requires no complex system design to bear high steam pressure. In such a system, a reduced graphene oxide/polytetrafluoroethylene composite membrane floating at the water/air interface serves as a light-to-heat conversion medium to harvest and convert incident solar light into localized heat. Such localized heat raises the temperature of the membrane substantially and helps generate steam with a temperature higher than 120 °C. A sterilization device that takes advantage of the interfacial solar-driven evaporation system was built and its successful sterilization capability was demonstrated through both chemical and biological sterilization tests. The interfacial evaporation-based solar driven sterilization approach offers a potential low cost solution to meet the need for sterilization in undeveloped areas that lack electrical power but have ample solar radiation.
Solar Energy Evolution and Diffusion Studies | Solar Research | NREL
industry-wide studies that use data-driven and evidence-based methods to identify characteristics developed models of U.S. household PV adoption. The project also conducted two market pilots to test methods
CD volume design and verification
NASA Technical Reports Server (NTRS)
Li, Y. P.; Hughes, J. S.
1993-01-01
In this paper, we describe a prototype for CD-ROM volume design and verification. This prototype allows users to create their own model of CD volumes by modifying a prototypical model. Rule-based verification of the test volumes can then be performed against the volume definition. This working prototype has proven the concept of model-driven, rule-based design and verification for large quantities of data. The model defined for the CD-ROM volumes becomes a data model as well as an executable specification.
Enhanced fuel efficiency on tractor-trailers using synthetic jet-based active flow control
NASA Astrophysics Data System (ADS)
Amitay, Michael; Menicovich, David; Gallardo, Daniele
2016-04-01
The application of piezo-electrically-driven synthetic-jet-based active flow control to reduce drag on tractor-trailers was explored experimentally in wind tunnel testing as well as full-scale road tests. Aerodynamic drag accounts for more than 50% of the usable energy at highway speeds, a problem that applies primarily to trailer trucks. Therefore, a reduction in aerodynamic drag results in large savings of fuel and reduction in CO2 emissions. The active flow control technique that is being used relies on a modular system composed of distributed, small, highly efficient actuators. These actuators, called synthetic jets, are jets that are synthesized at the edge of an orifice by a periodic motion of a piezoelectric diaphragm(s) mounted on one (or more) walls of a sealed cavity. The synthetic jet is zero net mass flux (ZNMF), but it allows momentum transfer to the flow. It is typically driven near diaphragm and/or cavity resonance, and therefore, small electric input [O(10W)] is required. Another advantage of this actuator is that no plumbing is required. The system does not require changes to the body of the truck, can be easily reconfigured to various types of vehicles, and consumes small amounts of electrical power from the existing electrical system of the truck. Preliminary wind tunnel results showed up to 18% reduction in fuel consumption, whereas road tests also showed very promising results.
NASA Technical Reports Server (NTRS)
Lawrence, C.; Somers, J. T.; Baldwin, M. A.; Wells, J. A.; Newby, N.; Currie, N. J.
2014-01-01
NASA spacecraft design requirements for occupant protection are a combination of the Brinkley criteria and injury metrics extracted from anthropomorphic test devices (ATD's). For the ATD injury metrics, the requirements specify the use of the 5th percentile female Hybrid III and the 95th percentile male Hybrid III. Furthermore, each of these ATD's is required to be fitted with an articulating pelvis and a straight spine. The articulating pelvis is necessary for the ATD to fit into spacecraft seats, while the straight spine is required as injury metrics for vertical accelerations are better defined for this configuration. The requirements specify that physical testing be performed with both ATD's to demonstrate compliance. Before compliance testing can be conducted, extensive modeling and simulation are required to determine appropriate test conditions, simulate conditions not feasible for testing, and assess design features to better ensure compliance testing is successful. While finite element (FE) models are currently available for many of the physical ATD's, there are no complete models for either the 5th percentile female or the 95th percentile male Hybrid III with a straight spine and articulating pelvis. The purpose of this work is to assess the accuracy of the existing Livermore Software Technology Corporation's FE models of the 5th and 95th percentile ATD's. To perform this assessment, a series of tests will be performed at the Wright-Patterson Air Force Research Lab using their horizontal impact accelerator sled test facility. The ATD's will be placed in the Orion seat with a modified-advanced-crew-escape-system (MACES) pressure suit and helmet, and driven with loadings similar to what is expected for the actual Orion vehicle during landing, launch abort, and chute deployment. Test data will be compared to analytical predictions, and modelling uncertainty factors will be determined for each injury metric. Additionally, the test data will be used to further improve the FE model, particularly in the areas of the ATD neck components, harness, and suit and helmet effects.
Wimmer, Lena; Bellingrath, Silja; von Stockhausen, Lisa
2016-01-01
The present paper reports a pilot study which tested cognitive effects of mindfulness practice in a theory-driven approach. Thirty-four fifth graders received either a mindfulness training which was based on the mindfulness-based stress reduction approach (experimental group), a concentration training (active control group), or no treatment (passive control group). Based on the operational definition of mindfulness by Bishop et al. (2004), effects on sustained attention, cognitive flexibility, cognitive inhibition, and data-driven as opposed to schema-based information processing were predicted. These abilities were assessed in a pre-post design by means of a vigilance test, a reversible figures test, the Wisconsin Card Sorting Test, a Stroop test, a visual search task, and a recognition task of prototypical faces. Results suggest that the mindfulness training specifically improved cognitive inhibition and data-driven information processing. PMID:27462287
Wimmer, Lena; Bellingrath, Silja; von Stockhausen, Lisa
2016-01-01
The present paper reports a pilot study which tested cognitive effects of mindfulness practice in a theory-driven approach. Thirty-four fifth graders received either a mindfulness training which was based on the mindfulness-based stress reduction approach (experimental group), a concentration training (active control group), or no treatment (passive control group). Based on the operational definition of mindfulness by Bishop et al. (2004), effects on sustained attention, cognitive flexibility, cognitive inhibition, and data-driven as opposed to schema-based information processing were predicted. These abilities were assessed in a pre-post design by means of a vigilance test, a reversible figures test, the Wisconsin Card Sorting Test, a Stroop test, a visual search task, and a recognition task of prototypical faces. Results suggest that the mindfulness training specifically improved cognitive inhibition and data-driven information processing.
Komatsoulis, George A; Warzel, Denise B; Hartel, Francis W; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; Coronado, Sherri de; Reeves, Dianne M; Hadfield, Jillaine B; Ludet, Christophe; Covitz, Peter A
2008-02-01
One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service-Oriented Architecture (SSOA) for cancer research by the National Cancer Institute's cancer Biomedical Informatics Grid (caBIG).
Komatsoulis, George A.; Warzel, Denise B.; Hartel, Frank W.; Shanbhag, Krishnakant; Chilukuri, Ram; Fragoso, Gilberto; de Coronado, Sherri; Reeves, Dianne M.; Hadfield, Jillaine B.; Ludet, Christophe; Covitz, Peter A.
2008-01-01
One of the requirements for a federated information system is interoperability, the ability of one computer system to access and use the resources of another system. This feature is particularly important in biomedical research systems, which need to coordinate a variety of disparate types of data. In order to meet this need, the National Cancer Institute Center for Bioinformatics (NCICB) has created the cancer Common Ontologic Representation Environment (caCORE), an interoperability infrastructure based on Model Driven Architecture. The caCORE infrastructure provides a mechanism to create interoperable biomedical information systems. Systems built using the caCORE paradigm address both aspects of interoperability: the ability to access data (syntactic interoperability) and understand the data once retrieved (semantic interoperability). This infrastructure consists of an integrated set of three major components: a controlled terminology service (Enterprise Vocabulary Services), a standards-based metadata repository (the cancer Data Standards Repository) and an information system with an Application Programming Interface (API) based on Domain Model Driven Architecture. This infrastructure is being leveraged to create a Semantic Service Oriented Architecture (SSOA) for cancer research by the National Cancer Institute’s cancer Biomedical Informatics Grid (caBIG™). PMID:17512259
Research on Generating Method of Embedded Software Test Document Based on Dynamic Model
NASA Astrophysics Data System (ADS)
Qu, MingCheng; Wu, XiangHu; Tao, YongChao; Liu, Ying
2018-03-01
This paper provides a dynamic model-based test document generation method for embedded software that automatically generates two documents: the test requirements specification and the configuration item test documentation. The method allows dynamic test requirements to be implemented in dynamic models, so that dynamic test requirement tracking can be generated easily. It automatically produces standardized test requirements and test documentation, addresses inconsistency and incompleteness in document-related content, and improves documentation efficiency.
Data-Driven Based Asynchronous Motor Control for Printing Servo Systems
NASA Astrophysics Data System (ADS)
Bian, Min; Guo, Qingyun
Modern digital printing equipment targets an environmentally friendly industry with high dynamic performance, high control precision, and low vibration and abrasion, so a high-performance motion control system for printing servo systems is required. A control system for the asynchronous motor based on data acquisition is proposed, and an iterative learning control (ILC) algorithm is studied. PID control is widely used in motion control, but it is sensitive to disturbances and to variations in model parameters. ILC uses past error data and the present control signals to approximate the control signal directly, so that the desired trajectory can be tracked without knowledge of the system model or structure. A motor control algorithm combining ILC and PID is constructed and simulation results are given. The results show that the data-driven control method is effective in dealing with bounded disturbances in the motion control of printing servo systems.
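To make the combined scheme concrete, the following is a minimal sketch of PID feedback augmented with an ILC feedforward term; the first-order plant, the gains, and the learning rate are illustrative assumptions, not values from the study.

```python
import numpy as np

# Minimal sketch (not the paper's implementation) of combining PID feedback
# with an iterative learning control (ILC) feedforward term for a servo axis.
# The first-order plant, the gains Kp/Ki, and the learning gain L are all
# assumptions chosen for illustration.

dt, T = 1e-3, 1.0
t = np.arange(0.0, T, dt)
ref = np.sin(2 * np.pi * t)            # desired (repetitive) trajectory
u_ff = np.zeros_like(ref)              # ILC feedforward, refined trial by trial
Kp, Ki, L = 2.0, 50.0, 0.8             # feedback gains and ILC learning gain

for trial in range(20):
    y = np.zeros_like(ref)
    err = np.zeros_like(ref)
    integ = 0.0
    for k in range(len(t) - 1):
        err[k] = ref[k] - y[k]
        integ += err[k] * dt
        u = u_ff[k] + Kp * err[k] + Ki * integ               # feedback + learned feedforward
        y[k + 1] = y[k] + dt * (-50.0 * y[k] + 200.0 * u)    # toy plant: y' = -a*y + b*u
    # ILC update: next trial's feedforward adds a scaled, time-advanced error
    u_ff[:-1] += L * err[1:]

print("RMS tracking error after learning:", np.sqrt(np.mean(err ** 2)))
```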
Basit, Mujeeb A; Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L
2018-04-13
Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test-driven development and automated regression testing promotes reliability. Test-driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a "safety net" for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and "living" design documentation. Rapid-cycle development or "agile" methods are being successfully applied to CDS development. The agile practice of automated test-driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as "executable requirements." We aimed to establish feasibility of acceptance test-driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory's expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. We used test-driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the "executable requirements" are shown prior to building the CDS alert, during build, and after successful build. Automated acceptance test-driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test-driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. ©Mujeeb A Basit, Krystal L Baldwin, Vaishnavi Kannan, Emily L Flahaven, Cassandra J Parks, Jason M Ott, Duwayne L Willett. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 13.04.2018.
NASA Astrophysics Data System (ADS)
Peck, M. A.
2016-02-01
Gaining a cause-and-effect understanding of climate-driven changes in marine fish populations at appropriate spatial scales is important for providing robust advice for ecosystem-based fisheries management. Coupling long-term, retrospective analyses and 3-d biophysical, individual-based models (IBMs) shows great potential to reveal mechanisms underlying historical changes and to project future changes in marine fishes. IBMs created for marine fish early life stages integrate organismal-level physiological responses and climate-driven changes in marine habitats (from ocean physics to lower trophic level productivity) to test and reveal processes affecting marine fish recruitment. Case studies are provided for hindcasts and future (A1 and B2 projection) simulations performed on some of the most ecologically- and commercially-important pelagic and demersal fishes in the North Sea, including European anchovy, Atlantic herring, European sprat and Atlantic cod. We discuss the utility of coupling biophysical IBMs to size-spectrum models to better project indirect (trophodynamic) pathways of climate influence on the early life stages of these and other fishes. Opportunities and challenges are discussed regarding the ability of these physiological-based tools to capture climate-driven changes in living marine resources and food web dynamics of shelf seas.
NASA Astrophysics Data System (ADS)
Bashash, Saeid; Jalili, Nader
2007-02-01
Piezoelectrically-driven nanostagers have limited performance in a variety of feedforward and feedback positioning applications because of their nonlinear hysteretic response to input voltage. The hysteresis phenomenon is well known for its complex and multi-path behavior. To realize the underlying physics of this phenomenon and to develop an efficient compensation strategy, the intelligence properties of hysteresis with the effects of non-local memories are discussed here. Through performing a set of experiments on a piezoelectrically-driven nanostager with a high resolution capacitive position sensor, it is shown that for the precise prediction of the hysteresis path, certain memory units are required to store the previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for the precise prediction of the hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the ever-present nonlinearity. Experimental results demonstrate that the controller remarkably eliminates the nonlinear effect, if memory units are sufficiently chosen for the inverse model.
EVA/ORU model architecture using RAMCOST
NASA Technical Reports Server (NTRS)
Ntuen, Celestine A.; Park, Eui H.; Wang, Y. M.; Bretoi, R.
1990-01-01
A parametrically driven simulation model is presented in order to provide a detailed insight into the effects of various input parameters in the life testing of a modular space suit. The RAMCOST model employed is a user-oriented simulation model for studying the life-cycle costs of designs under conditions of uncertainty. The results obtained from the EVA simulated model are used to assess various mission life testing parameters such as the number of joint motions per EVA cycle time, part availability, and number of inspection requirements. RAMCOST first simulates EVA completion for NASA application using a probabilistic, PERT-like network. With the mission time heuristically determined, RAMCOST then models different orbital replacement unit policies with special application to the astronaut's space suit functional designs.
NASA Astrophysics Data System (ADS)
Tu, Weichao; Cunningham, G. S.; Chen, Y.; Henderson, M. G.; Camporeale, E.; Reeves, G. D.
2013-10-01
In response to the Geospace Environment Modeling (GEM) "Global Radiation Belt Modeling Challenge," a 3D diffusion model is used to simulate the radiation belt electron dynamics during two intervals of the Combined Release and Radiation Effects Satellite (CRRES) mission, 15 August to 15 October 1990 and 1 February to 31 July 1991. The 3D diffusion model, developed as part of the Dynamic Radiation Environment Assimilation Model (DREAM) project, includes radial, pitch angle, and momentum diffusion and mixed pitch angle-momentum diffusion, which are driven by dynamic wave databases from the statistical CRRES wave data, including plasmaspheric hiss, lower-band, and upper-band chorus. By comparing the DREAM3D model outputs to the CRRES electron phase space density (PSD) data, we find that, with a data-driven boundary condition at Lmax = 5.5, the electron enhancements can generally be explained by radial diffusion, though additional local heating from chorus waves is required. Because the PSD reductions are included in the boundary condition at Lmax = 5.5, our model captures the fast electron dropouts over a large L range, producing better model performance compared to previously published results. Plasmaspheric hiss produces electron losses inside the plasmasphere, but the model still sometimes overestimates the PSD there. Test simulations using reduced radial diffusion coefficients or increased pitch angle diffusion coefficients inside the plasmasphere suggest that better wave models and more realistic radial diffusion coefficients, both inside and outside the plasmasphere, are needed to improve the model performance. Statistically, the results show that, with the data-driven outer boundary condition, including radial diffusion and plasmaspheric hiss is sufficient to model the electrons during geomagnetically quiet times, but to best capture the radiation belt variations during active times, pitch angle and momentum diffusion from chorus waves are required.
Virtual sensor models for real-time applications
NASA Astrophysics Data System (ADS)
Hirsenkorn, Nils; Hanke, Timo; Rauch, Andreas; Dehlink, Bernhard; Rasshofer, Ralph; Biebl, Erwin
2016-09-01
Increased complexity and severity of future driver assistance systems demand extensive testing and validation. As a supplement to road tests, driving simulations offer various benefits. For driver assistance functions the perception of the sensors is crucial; therefore, the sensors also have to be modeled. In this contribution, a statistical, data-driven sensor model is described. The state-space based method is capable of modeling various types of behavior. The modeling of the position estimation of an automotive radar system, including autocorrelations, is presented. To achieve real-time capability, an efficient implementation is presented.
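As an illustration of how such a statistical sensor model could be built, the sketch below fits an AR(1) autocorrelated error model to recorded position residuals and then replays it as a virtual radar sensor in simulation; the bias, noise levels, and fitting procedure are assumptions for the example rather than the authors' method.

```python
import numpy as np

# Minimal sketch (assumptions, not the authors' model) of a data-driven,
# state-space style sensor error model: the reported radar position is the
# true position plus a bias and an autocorrelated (AR(1)) noise state whose
# parameters would be identified from recorded sensor data.

rng = np.random.default_rng(0)

def fit_ar1(residuals):
    """Estimate the AR(1) coefficient and innovation std from measured residuals."""
    r = residuals - residuals.mean()
    phi = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
    innov = r[1:] - phi * r[:-1]
    return phi, innov.std()

def simulate_sensor(true_pos, bias, phi, sigma):
    """Generate synthetic sensor outputs with bias plus AR(1) correlated noise."""
    e = np.zeros_like(true_pos)
    for k in range(1, len(true_pos)):
        e[k] = phi * e[k - 1] + rng.normal(0.0, sigma)
    return true_pos + bias + e

# "Recorded" residuals from a reference run (synthesized here for the example)
true_pos = np.linspace(0.0, 100.0, 1000)
recorded = simulate_sensor(true_pos, bias=0.3, phi=0.95, sigma=0.05)
phi_hat, sigma_hat = fit_ar1(recorded - true_pos)

# The identified parameters then drive the virtual sensor in the simulation
virtual = simulate_sensor(true_pos, bias=0.3, phi=phi_hat, sigma=sigma_hat)
print(f"identified phi={phi_hat:.3f}, sigma={sigma_hat:.3f}")
```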
Model-Driven Approach for Body Area Network Application Development.
Venčkauskas, Algimantas; Štuikys, Vytautas; Jusas, Nerijus; Burbaitė, Renata
2016-05-12
This paper introduces the sensor-networked IoT model as a prototype to support the design of Body Area Network (BAN) applications for healthcare. Using the model, we analyze the synergistic effect of the functional requirements (data collection from the human body and transferring it to the top level) and non-functional requirements (trade-offs between energy-security-environmental factors, treated as Quality-of-Service (QoS)). We use feature models to represent the requirements at the earliest stage for the analysis and describe a model-driven methodology to design the possible BAN applications. Firstly, we specify the requirements as the problem domain (PD) variability model for the BAN applications. Next, we introduce the generative technology (meta-programming as the solution domain (SD)) and the mapping procedure to map the PD feature-based variability model onto the SD feature model. Finally, we create an executable meta-specification that represents the BAN functionality to describe the variability of the problem domain through transformations. The meta-specification (along with the meta-language processor) is a software generator for multiple BAN-oriented applications. We validate the methodology with experiments and a case study to generate a family of programs for the BAN sensor controllers. This makes it possible to obtain an adequate measure of QoS efficiently through interactive adjustment of the meta-parameter values and the re-generation process for the concrete BAN application.
NASA Astrophysics Data System (ADS)
Bellugi, D. G.; Tennant, C.; Larsen, L.
2016-12-01
Catchment and climate heterogeneity complicate prediction of runoff across time and space, and resulting parameter uncertainty can lead to large accumulated errors in hydrologic models, particularly in ungauged basins. Recently, data-driven modeling approaches have been shown to avoid the accumulated uncertainty associated with many physically-based models, providing an appealing alternative for hydrologic prediction. However, the effectiveness of different methods in hydrologically and geomorphically distinct catchments, and the robustness of these methods to changing climate and changing hydrologic processes remain to be tested. Here, we evaluate the use of machine learning techniques to predict daily runoff across time and space using only essential climatic forcing (e.g. precipitation, temperature, and potential evapotranspiration) time series as model input. Model training and testing was done using a high quality dataset of daily runoff and climate forcing data for 25+ years for 600+ minimally-disturbed catchments (drainage area range 5-25,000 km2, median size 336 km2) that cover a wide range of climatic and physical characteristics. Preliminary results using Support Vector Regression (SVR) suggest that in some catchments this nonlinear-based regression technique can accurately predict daily runoff, while the same approach fails in other catchments, indicating that the representation of climate inputs and/or catchment filter characteristics in the model structure need further refinement to increase performance. We bolster this analysis by using Sparse Identification of Nonlinear Dynamics (a sparse symbolic regression technique) to uncover the governing equations that describe runoff processes in catchments where SVR performed well and for ones where it performed poorly, thereby enabling inference about governing processes. This provides a robust means of examining how catchment complexity influences runoff prediction skill, and represents a contribution towards the integration of data-driven inference and physically-based models.
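A minimal sketch of the kind of data-driven runoff prediction described above is shown below, using scikit-learn's Support Vector Regression on lagged climate forcings; the synthetic catchment data, feature choices, and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import r2_score

# Minimal sketch (illustrative assumptions, not the authors' setup) of using
# Support Vector Regression to predict daily runoff from climate forcing alone.
# Lagged precipitation terms stand in for catchment storage/routing memory.

def build_features(precip, temp, pet, n_lags=7):
    """Stack current and lagged forcings into a feature matrix."""
    rows = []
    for k in range(n_lags, len(precip)):
        rows.append(np.r_[precip[k - n_lags:k + 1], temp[k], pet[k]])
    return np.asarray(rows)

# Synthetic stand-in for one catchment's daily forcing and runoff records
rng = np.random.default_rng(1)
n = 3000
precip = rng.gamma(0.4, 8.0, n)
temp = 10 + 12 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
pet = np.clip(0.3 * temp, 0, None)
runoff = np.convolve(precip, np.exp(-np.arange(10) / 3.0), mode="full")[:n] * 0.1

X = build_features(precip, temp, pet)
y = runoff[7:]
split = int(0.8 * len(y))

model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.1))
model.fit(X[:split], y[:split])
print("test R^2:", r2_score(y[split:], model.predict(X[split:])))
```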
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawler, J.S.
2001-10-29
Previous theoretical work has shown that when all loss mechanisms are neglected the constant power speed range (CPSR) of a brushless dc motor (BDCM) is infinite when the motor is driven by the dual-mode inverter control (DMIC) [1,2]. In a physical drive, losses, particularly speed-sensitive losses, will limit the CPSR to a finite value. In this paper we report the results of laboratory testing of a low-inductance, 7.5-hp BDCM driven by the DMIC. The speed rating of the test motor rotor limited the upper speed of the testing, and the results show that the CPSR of the test machine is greater than 6:1 when driven by the DMIC. Current wave shape, peak, and rms values remained controlled and within rating over the entire speed range. The laboratory measurements allowed the speed-sensitive losses to be quantified and incorporated into computer simulation models, which then accurately reproduce the results of lab testing. The simulator shows that the limiting CPSR of the test motor is 8:1. These results confirm that the DMIC is capable of driving low-inductance BDCMs over the wide CPSR that would be required in electric vehicle applications.
Measuring Experiential Avoidance: A Preliminary Test of a Working Model
ERIC Educational Resources Information Center
Hayes, Steven C.; Strosahl, Kirk; Wilson, Kelly G.; Bissett, Richard T.; Pistorello, Jacqueline; Toarmino, Dosheen; Polusny, Melissa A.; Dykstra, Thane A.; Batten, Sonja V.; Bergan, John; Stewart, Sherry H.; Zvolensky, Michael J.; Eifert, Georg H.; Bond, Frank W.; Forsyth, John P.; Karekla, Maria; Mccurry, Susan M.
2004-01-01
The present study describes the development of a short, general measure of experiential avoidance, based on a specific theoretical approach to this process. A theoretically driven iterative exploratory analysis using structural equation modeling on data from a clinical sample yielded a single factor comprising 9 items. A fully confirmatory factor…
AC motor and generator requirements for isolated WECS
NASA Technical Reports Server (NTRS)
Park, G. L.; Mccleer, P. J.; Hanson, B.; Weinberg, B.; Krauss, O.
1985-01-01
After surveying electrically driven loads used on productive farms, the investigators chose three pumps for testing at voltages and frequencies far outside the normal operating range. These loads extract and circulate water and move heat via air, and all are critical to farm productivity. The object was to determine the envelope of supply voltage and frequency over which these loads would operate stably for time intervals under 1 hour. This information is among that needed to determine the feasibility of supplying critical loads, in case of a utility outage, from a wind driven alternator whose output voltage and frequency will vary dramatically in most continental wind regimes. Other related work is surveyed. The salient features and limitations of the test configurations used and the data reduction are described. The development of simulation models suitable for a small computer is outlined. The results are primarily displayed on the voltage frequency plane with the general conclusion that the particular pump models considered will operate over the range of 50 to 90 Hz and a voltage band which starts below rated, decreases as frequency decreases, and is limited on the high side by excessive motor heating. For example, centrifugal pump operating voltage ranges as extensive as 0.4 to 1.4 appear possible. Particular problems with starting, stalling due to lack of motor torque, high speed cavitation, and likely overheating are addressed in a listing of required properties for wind driven alternators and their controllers needed for use in the isolated or stand alone configuration considered.
Share2Quit: Web-Based Peer-Driven Referrals for Smoking Cessation
2013-01-01
Background: Smoking is the number one preventable cause of death in the United States. Effective Web-assisted tobacco interventions are often underutilized and require new and innovative engagement approaches. Web-based peer-driven chain referrals successfully used outside health care have the potential for increasing the reach of Internet interventions.
Objective: The objective of our study was to describe the protocol for the development and testing of proactive Web-based chain-referral tools for increasing the access to Decide2Quit.org, a Web-assisted tobacco intervention system.
Methods: We will build and refine proactive chain-referral tools, including email and Facebook referrals. In addition, we will implement respondent-driven sampling (RDS), a controlled chain-referral sampling technique designed to remove inherent biases in chain referrals and obtain a representative sample. We will begin our chain referrals with an initial recruitment of former and current smokers as seeds (initial participants) who will be trained to refer current smokers from their social network using the developed tools. In turn, these newly referred smokers will also be provided the tools to refer other smokers from their social networks. We will model predictors of referral success using sample weights from the RDS to estimate the success of the system in the targeted population.
Results: This protocol describes the evaluation of proactive Web-based chain-referral tools, which can be used in tobacco interventions to increase the access to hard-to-reach populations, for promoting smoking cessation.
Conclusions: Share2Quit represents an innovative advancement by capitalizing on naturally occurring technology trends to recruit smokers to Web-assisted tobacco interventions. PMID:24067329
Varying execution discipline to increase performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, P.L.; Maccabe, A.B.
1993-12-22
This research investigates the relationship between execution discipline and performance. The hypothesis has two parts: 1. Different execution disciplines exhibit different performance for different computations, and 2. These differences can be effectively predicted by heuristics. A machine model is developed that can vary its execution discipline. That is, the model can execute a given program using either the control-driven, data-driven or demand-driven execution discipline. This model is referred to as a "variable-execution-discipline" machine. The instruction set for the model is the Program Dependence Web (PDW). The first part of the hypothesis will be tested by simulating the execution of the machine model on a suite of computations, based on the Livermore Fortran Kernel (LFK) Test (a.k.a. the Livermore Loops), using all three execution disciplines. Heuristics are developed to predict relative performance. These heuristics predict (a) the execution time under each discipline for one iteration of each loop and (b) the number of iterations taken by that loop; then the heuristics use those predictions to develop a prediction for the execution of the entire loop. Similar calculations are performed for branch statements. The second part of the hypothesis will be tested by comparing the results of the simulated execution with the predictions produced by the heuristics. If the hypothesis is supported, then the door is open for the development of machines that can vary execution discipline to increase performance.
Working with the HL7 metamodel in a Model Driven Engineering context.
Martínez-García, A; García-García, J A; Escalona, M J; Parra-Calderón, C L
2015-10-01
HL7 (Health Level 7) International is an organization that defines health information standards. Most HL7 domain information models have been designed according to a proprietary graphic language whose domain models are based on the HL7 metamodel. Many researchers have considered using HL7 in the MDE (Model-Driven Engineering) context. A limitation has been identified: all MDE tools support UML (Unified Modeling Language), which is a standard model language, but most do not support the HL7 proprietary model language. We want to support software engineers without HL7 experience, so that they can model real-world problems by defining system requirements in UML that are transparently compliant with HL7 domain models. The objective of the present research is to connect HL7 with software analysis using a generic model-based approach. This paper introduces a first approach to an HL7 MDE solution that considers the MIF (Model Interchange Format) metamodel proposed by HL7, making use of a plug-in developed in the EA (Enterprise Architect) tool. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Aycinena, Ana Corina; Jennings, Kerri-Ann; Gaffney, Ann Ogden; Koch, Pamela A.; Contento, Isobel R.; Gonzalez, Monica; Guidon, Ela; Karmally, Wahida; Hershman, Dawn; Greenlee, Heather
2017-01-01
We developed a theory-based dietary change curriculum for Hispanic breast cancer survivors with the goal of testing the effects of the intervention on change in dietary intake of fruits/vegetables and fat in a randomized, clinical trial. Social cognitive theory and the transtheoretical model were used as theoretical frameworks to structure…
Study on Capturing Functional Requirements of the New Product Based on Evolution
NASA Astrophysics Data System (ADS)
Liu, Fang; Song, Liya; Bai, Zhonghang; Zhang, Peng
In order to survive in an increasingly competitive global marketplace, it is important for corporations to forecast the evolutionary direction of new products rapidly and effectively. Most products are developed based on the design of existing products, and in product design, capturing functional requirements is a key step. Function evolves continuously, driven by the evolution of needs and technologies, so the functional requirements of a new product can be forecast from the functions of an existing product. Eight laws of function evolution are put forward in this paper. A process model for capturing the functional requirements of a new product based on function evolution is proposed. An example illustrates the design process.
Mission Driven Scene Understanding: Candidate Model Training and Validation
2016-09-01
This report concerns mission-driven scene understanding. One of the candidate engines that we are evaluating is a convolutional neural network (CNN) program (Theano-AlexNet) installed on a Windows 10 notebook computer. To the best of our knowledge, an implementation of the open-source, Python-based AlexNet CNN on a Windows notebook computer has not been previously reported. In this report, we present progress toward the proof-of-principle testing.
Imaging plus X: multimodal models of neurodegenerative disease.
Oxtoby, Neil P; Alexander, Daniel C
2017-08-01
This article argues that the time is approaching for data-driven disease modelling to take centre stage in the study and management of neurodegenerative disease. The snowstorm of data now available to the clinician defies qualitative evaluation; the heterogeneity of data types complicates integration through traditional statistical methods; and the large datasets becoming available remain far from the big-data sizes necessary for fully data-driven machine-learning approaches. The recent emergence of data-driven disease progression models provides a balance between imposed knowledge of disease features and patterns learned from data. The resulting models are both predictive of disease progression in individual patients and informative in terms of revealing underlying biological patterns. Largely inspired by observational models, data-driven disease progression models have emerged in the last few years as a feasible means for understanding the development of neurodegenerative diseases. These models have revealed insights into frontotemporal dementia, Huntington's disease, multiple sclerosis, Parkinson's disease and other conditions. For example, event-based models have revealed finer graded understanding of progression patterns; self-modelling regression and differential equation models have provided data-driven biomarker trajectories; spatiotemporal models have shown that brain shape changes, for example of the hippocampus, can occur before detectable neurodegeneration; and network models have provided some support for prion-like mechanistic hypotheses of disease propagation. The most mature results are in sporadic Alzheimer's disease, in large part because of the availability of the Alzheimer's disease neuroimaging initiative dataset. Results generally support the prevailing amyloid-led hypothetical model of Alzheimer's disease, while revealing finer detail and insight into disease progression. The emerging field of disease progression modelling provides a natural mechanism to integrate different kinds of information, for example from imaging, serum and cerebrospinal fluid markers and cognitive tests, to obtain new insights into progressive diseases. Such insights include fine-grained longitudinal patterns of neurodegeneration, from early stages, and the heterogeneity of these trajectories over the population. More pragmatically, such models enable finer precision in patient staging and stratification, prediction of progression rates and earlier and better identification of at-risk individuals. We argue that this will make disease progression modelling invaluable for recruitment and end-points in future clinical trials, potentially ameliorating the high failure rate in trials of, e.g., Alzheimer's disease therapies. We review the state of the art in these techniques and discuss the future steps required to translate the ideas to front-line application.
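As a toy illustration of the data-driven biomarker-trajectory idea mentioned above, the sketch below fits a sigmoid trajectory to synthetic biomarker observations and inverts it to stage a new individual; the functional form, data, and parameters are assumptions, not any specific published model.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch (an assumption-laden illustration, not any specific published
# model) of the "data-driven biomarker trajectory" idea: fit a sigmoid to
# biomarker observations against a disease-time axis, then use the fitted
# curve to stage new individuals by their biomarker value.

def sigmoid(t, lo, hi, rate, t_mid):
    return lo + (hi - lo) / (1.0 + np.exp(-rate * (t - t_mid)))

# Synthetic "observations": biomarker values at known times from disease onset
rng = np.random.default_rng(2)
t_obs = rng.uniform(-10, 10, 200)                     # years relative to onset
y_obs = sigmoid(t_obs, 1.0, 4.0, 0.6, 0.0) + rng.normal(0, 0.2, t_obs.size)

params, _ = curve_fit(sigmoid, t_obs, y_obs, p0=[1, 4, 0.5, 0])
lo, hi, rate, t_mid = params

def stage_from_biomarker(y):
    """Invert the fitted sigmoid to estimate disease time for a new patient."""
    y = np.clip(y, lo + 1e-3, hi - 1e-3)
    return t_mid - np.log((hi - lo) / (y - lo) - 1.0) / rate

print("estimated disease time for biomarker value 2.5:",
      round(float(stage_from_biomarker(2.5)), 2), "years")
```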
Consistent model driven architecture
NASA Astrophysics Data System (ADS)
Niepostyn, Stanisław J.
2015-09-01
The goal of MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in natural language, so verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Nguyen, Hai Van; Finkelstein, Eric Andrew; Mital, Shweta; Gardner, Daphne Su-Lyn
2017-11-01
Offering genetic testing for Maturity Onset Diabetes of the Young (MODY) to all young patients with type 2 diabetes has been shown to be not cost-effective. This study tests whether a novel algorithm-driven genetic testing strategy for MODY is incrementally cost-effective relative to the setting of no testing. A decision tree was constructed to estimate the costs and effectiveness of the algorithm-driven MODY testing strategy and a strategy of no genetic testing over a 30-year time horizon from a payer's perspective. The algorithm uses glutamic acid decarboxylase (GAD) antibody testing (negative antibodies), age of onset of diabetes (<45 years) and body mass index (<25 kg/m2 if diagnosed >30 years) to stratify the population of patients with diabetes into three subgroups, and testing for MODY only among the subgroup most likely to have the mutation. Singapore-specific costs and prevalence of MODY obtained from local studies and utility values sourced from the literature are used to populate the model. The algorithm-driven MODY testing strategy has an incremental cost-effectiveness ratio of US$93,663 per quality-adjusted life year relative to the no testing strategy. If the price of genetic testing falls from US$1050 to US$530 (a 50% decrease), it will become cost-effective. Our proposed algorithm-driven testing strategy for MODY is not yet cost-effective based on established benchmarks. However, as genetic testing prices continue to fall, this strategy is likely to become cost-effective in the near future. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
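A minimal sketch of the stratification logic described in the abstract is given below; the function and field names are illustrative, and the thresholds are those reported (GAD-negative, onset before 45 years, BMI below 25 kg/m2 when diagnosed after 30 years). This is not a validated clinical tool.

```python
# Minimal sketch of the screening algorithm as described above; the function
# name, argument names, and return values are illustrative assumptions.

def mody_testing_recommended(gad_antibody_positive: bool,
                             age_at_onset_years: float,
                             bmi_kg_per_m2: float) -> bool:
    """Return True if the patient falls in the subgroup referred for MODY genetic testing."""
    if gad_antibody_positive:          # GAD positivity points away from MODY
        return False
    if age_at_onset_years >= 45:       # algorithm targets onset before 45 years
        return False
    if age_at_onset_years > 30 and bmi_kg_per_m2 >= 25:
        return False                   # for onset after 30, also require BMI < 25 kg/m2
    return True

# Example: onset at 28 with negative antibodies is referred regardless of BMI
print(mody_testing_recommended(False, 28, 27.5))   # True
print(mody_testing_recommended(False, 38, 27.5))   # False
```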
Adapting Rational Unified Process (RUP) approach in designing a secure e-Tendering model
NASA Astrophysics Data System (ADS)
Mohd, Haslina; Robie, Muhammad Afdhal Muhammad; Baharom, Fauziah; Darus, Norida Muhd; Saip, Mohamed Ali; Yasin, Azman
2016-08-01
e-Tendering is the electronic processing of tender documents via the internet, allowing tenderers to publish, communicate, access, receive and submit all tender-related information and documentation online. This study aims to design an e-Tendering system using the Rational Unified Process (RUP) approach. RUP provides a disciplined approach to assigning tasks and responsibilities within the software development process. RUP has four phases that help researchers adjust the requirements of projects with different scopes, problems and sizes. RUP is characterized as a use-case-driven, architecture-centered, iterative and incremental process model. However, the scope of this study focuses only on the Inception and Elaboration phases as the steps to develop the model, and performs only three of the nine workflows (business modeling, requirements, and analysis and design). RUP has a strong focus on documents, and the activities in the Inception and Elaboration phases mainly concern the creation of diagrams and the writing of textual descriptions. The UML notation and the software program Star UML are used to support the design of e-Tendering. The e-Tendering design based on the RUP approach can contribute to e-Tendering developers and researchers in the e-Tendering domain. In addition, this study shows that RUP is one of the best system development methodologies and can be used as a research methodology in the Software Engineering domain for the secure design of any observed application. This methodology has been tested in various studies in domains such as Simulation-based Decision Support, Security Requirement Engineering, Business Modeling and Secure System Requirements. In conclusion, these studies showed that RUP is a good research methodology that can be adapted in any Software Engineering (SE) research domain that requires a few artifacts to be generated, such as use case models, misuse case models, activity diagrams, and an initial class diagram derived from a list of requirements identified earlier by the SE researchers.
Algorithms for optimization of branching gravity-driven water networks
NASA Astrophysics Data System (ADS)
Dardani, Ian; Jones, Gerard F.
2018-05-01
The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
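For readers unfamiliar with the underlying optimization problem, the sketch below enumerates discrete diameter choices for a small series line, checks a Hazen-Williams head-loss constraint against a minimum residual head, and keeps the cheapest feasible design; the pipe data, costs, and demand are invented for illustration, and the exhaustive search stands in for the backtracking and genetic algorithms studied in the paper.

```python
import itertools

# Minimal sketch (not the paper's algorithms) of discrete-diameter selection for a
# small gravity-driven line: enumerate diameter combinations for three pipes in
# series, keep designs that deliver the required residual head, pick the cheapest.
# Pipe data, costs, and the demand are illustrative assumptions.

DIAMETERS_M = [0.025, 0.032, 0.050, 0.063]                        # available sizes
COST_PER_M = {0.025: 1.2, 0.032: 1.9, 0.050: 3.4, 0.063: 5.1}     # $/m (assumed)
PIPE_LENGTHS_M = [400.0, 250.0, 300.0]
SOURCE_HEAD_M = 60.0            # elevation drop from source to delivery point
MIN_RESIDUAL_HEAD_M = 10.0
FLOW_M3_S = 0.002               # design flow through the series line
HW_C = 150                      # Hazen-Williams coefficient for smooth pipe

def hazen_williams_headloss(q, d, length, c=HW_C):
    """Head loss (m) for flow q (m^3/s) in a pipe of diameter d (m) and given length."""
    return 10.67 * length * q**1.852 / (c**1.852 * d**4.87)

best = None
for combo in itertools.product(DIAMETERS_M, repeat=len(PIPE_LENGTHS_M)):
    loss = sum(hazen_williams_headloss(FLOW_M3_S, d, L)
               for d, L in zip(combo, PIPE_LENGTHS_M))
    if SOURCE_HEAD_M - loss < MIN_RESIDUAL_HEAD_M:
        continue                                     # violates pressure requirement
    cost = sum(COST_PER_M[d] * L for d, L in zip(combo, PIPE_LENGTHS_M))
    if best is None or cost < best[0]:
        best = (cost, combo)

print("cheapest feasible diameters (m):", best[1], "cost: $%.0f" % best[0])
```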
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holden, Jacob; Wood, Eric W; Zhu, Lei
A data-driven technique for estimation of energy requirements for a proposed vehicle trip has been developed. Based on over 700,000 miles of driving data, the technique has been applied to generate a model that estimates trip energy requirements. The model uses a novel binning approach to categorize driving by road type, traffic conditions, and driving profile. The trip-level energy estimations can easily be aggregated to any higher-level transportation system network desired. The model has been tested and validated on the Austin, Texas, data set used to build this model. Ground-truth energy consumption for the data set was obtained from Future Automotive Systems Technology Simulator (FASTSim) vehicle simulation results. The energy estimation model has demonstrated 12.1 percent normalized total absolute error. The energy estimation from the model can be used to inform control strategies in routing tools, such as change in departure time, alternate routing, and alternate destinations, to reduce energy consumption. The model can also be used to determine more accurate energy consumption of regional or national transportation networks if trip origin and destinations are known. Additionally, this method allows the estimation tool to be tuned to a specific driver or vehicle type.
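The binning idea can be illustrated with a short sketch: energy rates are looked up per (road type, traffic condition, driving profile) bin and weighted by segment distance. The rates and categories below are placeholders, not values from the published model.

```python
# Minimal sketch (illustrative only) of the binning idea: estimate a trip's energy
# by summing distance-weighted energy rates looked up from bins keyed by road
# type, traffic condition, and driving profile. The rates are made-up placeholders;
# a real model would fit them from large driving datasets.

ENERGY_RATE_KWH_PER_KM = {
    ("highway", "free_flow", "calm"):        0.16,
    ("highway", "congested", "calm"):        0.20,
    ("arterial", "free_flow", "calm"):       0.18,
    ("arterial", "congested", "aggressive"): 0.26,
    ("local", "free_flow", "calm"):          0.21,
}

def estimate_trip_energy(segments):
    """segments: list of (distance_km, road_type, traffic, profile) tuples."""
    total = 0.0
    for dist_km, road, traffic, profile in segments:
        rate = ENERGY_RATE_KWH_PER_KM.get((road, traffic, profile), 0.22)  # fallback rate
        total += dist_km * rate
    return total

trip = [(12.0, "highway", "free_flow", "calm"),
        (3.5, "arterial", "congested", "aggressive"),
        (1.2, "local", "free_flow", "calm")]
print(f"estimated trip energy: {estimate_trip_energy(trip):.2f} kWh")
```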
Regenerative Blower for EVA Suit Ventilation Fan
NASA Technical Reports Server (NTRS)
Izenson, Michael G.; Chen, Weibo; Paul, Heather L.
2010-01-01
Portable life support systems in future space suits will include a ventilation subsystem driven by a dedicated fan. This ventilation fan must meet challenging requirements for pressure rise, flow rate, efficiency, size, safety, and reliability. This paper describes research and development that showed the feasibility of a regenerative blower that is uniquely suited to meet these requirements. We proved feasibility through component tests, blower tests, and design analysis. Based on the requirements for the Constellation Space Suit Element (CSSE) Portable Life Support System (PLSS) ventilation fan, we designed the critical elements of the blower. We measured the effects of key design parameters on blower performance using separate effects tests, and used the results of these tests to design a regenerative blower that will meet the ventilation fan requirements. We assembled a proof-of-concept blower and measured its performance at sub-atmospheric pressures that simulate a PLSS ventilation loop environment. Head/flow performance and maximum efficiency point data were used to specify the design and operating conditions for the ventilation fan. We identified materials for the blower that will enhance safety for operation in a lunar environment, and produced a solid model that illustrates the final design. The proof-of-concept blower produced the flow rate and pressure rise needed for the CSSE ventilation subsystem while running at 5400 rpm, consuming only 9 W of electric power using a non-optimized, commercial motor and controller and inefficient bearings. Scaling the test results to a complete design shows that a lightweight, compact, reliable, and low power regenerative blower can meet the performance requirements for future space suit life support systems.
Thermal Control Subsystem Design for the Avionics of a Space Station Payload
NASA Technical Reports Server (NTRS)
Moran, Matthew E.
1996-01-01
A case study of the thermal control subsystem development for a space based payload is presented from the concept stage through preliminary design. This payload, the Space Acceleration Measurement System 2 (SAMS-2), will measure the acceleration environment at select locations within the International Space Station. Its thermal control subsystem must maintain component temperatures within an acceptable range over a 10 year life span, while restricting accessible surfaces to touch temperature limits and ensuring fail-safe conditions in the event of loss of cooling. In addition to these primary design objectives, system level requirements and constraints are imposed on the payload, many of which are driven by multidisciplinary issues. Blending these issues into the overall system design required concurrent design sessions with the project team, iterative conceptual design layouts, thermal analysis and modeling, and hardware testing. Multiple tradeoff studies were also performed to investigate the many options which surfaced during the development cycle.
A requirement for memory retrieval during and after long-term extinction learning
Ouyang, Ming; Thomas, Steven A.
2005-01-01
Current learning theories are based on the idea that learning is driven by the difference between expectations and experience (the delta rule). In extinction, one learns that certain expectations no longer apply. Here, we test the potential validity of the delta rule by manipulating memory retrieval (and thus expectations) during extinction learning. Adrenergic signaling is critical for the time-limited retrieval (but not acquisition or consolidation) of contextual fear. Using genetic and pharmacologic approaches to manipulate adrenergic signaling, we find that long-term extinction requires memory retrieval but not conditioned responding. Identical manipulations of the adrenergic system that do not affect memory retrieval do not alter extinction. The results provide substantial support for the delta rule of learning theory. In addition, the timing over which extinction is sensitive to adrenergic manipulation suggests a model whereby memory retrieval occurs during, and several hours after, extinction learning to consolidate long-term extinction memory. PMID:15947076
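For readers unfamiliar with the delta rule, the sketch below shows a Rescorla-Wagner style update in which associative strength grows during acquisition and decays during extinction as expectations are corrected by experience; the learning rate and trial counts are arbitrary choices for the example.

```python
# Minimal sketch of the delta rule referenced above (a Rescorla-Wagner style
# update), used only to illustrate how extinction follows from learning driven
# by the gap between expectation and experience.

def delta_rule_updates(n_acquisition, n_extinction, alpha=0.3):
    v = 0.0                      # associative strength (the "expectation")
    history = []
    for trial in range(n_acquisition + n_extinction):
        outcome = 1.0 if trial < n_acquisition else 0.0   # outcome present, then omitted
        v += alpha * (outcome - v)                        # delta rule: scale the surprise
        history.append(v)
    return history

strengths = delta_rule_updates(n_acquisition=10, n_extinction=15)
print("end of acquisition:", round(strengths[9], 3),
      "| end of extinction:", round(strengths[-1], 3))
```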
Tang, Tao; Chen, Sisi; Huang, Xuanlin; Yang, Tao; Qi, Bo
2018-01-01
High-performance position control can be improved by the compensation of disturbances in a gear-driven control system. This paper presents a model-free disturbance observer (DOB) based on sensor fusion to reduce several error-related disturbances in a gear-driven gimbal. The DOB uses the rate deviation to detect disturbances for the implementation of a high-gain compensator. Compared with the angular position signal, the rate deviation between the load and the motor reveals the disturbances existing in the gear-driven gimbal more quickly. Due to the high bandwidth of the motor rate closed loop, an inverse model of the plant is not necessary to implement the DOB. Besides, this DOB requires neither complex modeling of the plant nor the use of additional sensors. Without rate sensors providing angular rate, the rate deviation is easily obtained from encoders mounted on the motor side and the load side, respectively. Extensive experiments are provided to demonstrate the benefits of the proposed algorithm. PMID:29498643
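A rough sketch of the rate-deviation idea is given below: the difference between motor-side and load-side encoder rates is passed through a high-gain, low-pass compensator to form a disturbance estimate that can be subtracted from the rate command. All gains, the filter structure, and the toy signals are assumptions and do not reproduce the authors' controller.

```python
import numpy as np

# Minimal sketch (assumptions throughout, not the authors' controller) of a
# sensor-fusion disturbance observer for a gear-driven gimbal: the disturbance
# estimate comes from the rate deviation between load-side and motor-side
# encoder-derived rates, passed through a high-gain, first-order low-pass filter.

dt = 1e-3
g_dob = 40.0                 # observer gain (rad/s), an assumed tuning value
tau = 1.0 / g_dob            # low-pass time constant tied to the observer gain

def dob_step(motor_rate, load_rate, d_hat):
    """One update of the disturbance estimate from the motor/load rate deviation."""
    deviation = motor_rate - load_rate          # nonzero when gear disturbances act
    d_hat += (dt / tau) * (g_dob * deviation - d_hat)
    return d_hat

# Toy usage: a constant disturbance slows the load relative to the motor
d_hat = 0.0
for _ in range(2000):
    motor_rate, load_rate = 1.00, 0.95          # placeholder encoder-derived rates
    d_hat = dob_step(motor_rate, load_rate, d_hat)
rate_command_correction = -d_hat                # fed back into the motor rate loop
print("estimated disturbance term:", round(d_hat, 3))
```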
Data-driven reverse engineering of signaling pathways using ensembles of dynamic models.
Henriques, David; Villaverde, Alejandro F; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R
2017-02-01
Despite significant efforts and remarkable progress, the inference of signaling networks from experimental data remains very challenging. The problem is particularly difficult when the objective is to obtain a dynamic model capable of predicting the effect of novel perturbations not considered during model training. The problem is ill-posed due to the nonlinear nature of these systems, the fact that only a fraction of the involved proteins and their post-translational modifications can be measured, and limitations on the technologies used for growing cells in vitro, perturbing them, and measuring their variations. As a consequence, there is a pervasive lack of identifiability. To overcome these issues, we present a methodology called SELDOM (enSEmbLe of Dynamic lOgic-based Models), which builds an ensemble of logic-based dynamic models, trains them to experimental data, and combines their individual simulations into an ensemble prediction. It also includes a model reduction step to prune spurious interactions and mitigate overfitting. SELDOM is a data-driven method, in the sense that it does not require any prior knowledge of the system: the interaction networks that act as scaffolds for the dynamic models are inferred from data using mutual information. We have tested SELDOM on a number of experimental and in silico signal transduction case-studies, including the recent HPN-DREAM breast cancer challenge. We found that its performance is highly competitive compared to state-of-the-art methods for the purpose of recovering network topology. More importantly, the utility of SELDOM goes beyond basic network inference (i.e. uncovering static interaction networks): it builds dynamic (based on ordinary differential equation) models, which can be used for mechanistic interpretations and reliable dynamic predictions in new experimental conditions (i.e. not used in the training). For this task, SELDOM's ensemble prediction is not only consistently better than predictions from individual models, but also often outperforms the state of the art represented by the methods used in the HPN-DREAM challenge.
Assessing Requirements Quality through Requirements Coverage
NASA Technical Reports Server (NTRS)
Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt
2008-01-01
In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, that is, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
Data-driven outbreak forecasting with a simple nonlinear growth model
Lega, Joceline; Brown, Heidi E.
2016-01-01
Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. PMID:27770752
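The spirit of the approach can be illustrated by fitting a simple logistic growth curve to cumulative case counts and reading off order-of-magnitude estimates of final size, peak timing, and duration; the sketch below uses synthetic data and is not the EpiGro implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch (not the EpiGro implementation) of the underlying idea: fit a
# simple logistic growth curve to cumulative case counts and read off rough
# estimates of final outbreak size, peak timing, and duration.

def logistic(t, final_size, rate, t_peak):
    return final_size / (1.0 + np.exp(-rate * (t - t_peak)))

# Synthetic cumulative case reports for the first 60 days of an outbreak
rng = np.random.default_rng(3)
t = np.arange(60)
true = logistic(t, 5000, 0.15, 35)
cases = np.maximum.accumulate(true + rng.normal(0, 30, t.size))

params, _ = curve_fit(logistic, t, cases,
                      p0=[1.2 * cases[-1], 0.1, 30], maxfev=10000)
final_size, rate, t_peak = params
duration = t_peak + 4.0 / rate        # crude span until growth has largely saturated

print(f"estimated final size ~{final_size:.0f} cases, "
      f"incidence peak near day {t_peak:.0f}, duration ~{duration:.0f} days")
```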
Eguizabal, Johnny; Tufaga, Michael; Scheer, Justin K; Ames, Christopher; Lotz, Jeffrey C; Buckley, Jenni M
2010-05-07
In vitro multi-axial bending testing using pure moment loading conditions has become the standard in evaluating the effects of different types of surgical intervention on spinal kinematics. Simple, cable-driven experimental set-ups have been widely adopted because they require little infrastructure. Traditionally, "fixed ring" cable-driven experimental designs have been used; however, there have been concerns with the validity of this set-up in applying pure moment loading. This study involved directly comparing the loading state induced by a traditional "fixed ring" apparatus versus a novel "sliding ring" approach. Flexion-extension bending was performed on an artificial spine model and a single cadaveric test specimen, and the applied loading conditions to the specimen were measured with an in-line multiaxial load cell. The results showed that the fixed ring system applies flexion-extension moments that are 50-60% less than the intended values. This design also imposes non-trivial anterior-posterior shear forces, and non-uniform loading conditions were induced along the length of the specimen. The results of this study indicate that fixed ring systems have the potential to deviate from a pure moment loading state and that our novel sliding ring modification corrects this error in the original test design. This suggests that the proposed sliding ring design should be used for future in vitro spine biomechanics studies involving a cable-driven pure moment apparatus. Copyright 2010 Elsevier Ltd. All rights reserved.
Human sleep and circadian rhythms: a simple model based on two coupled oscillators.
Strogatz, S H
1987-01-01
We propose a model of the human circadian system. The sleep-wake and body temperature rhythms are assumed to be driven by a pair of coupled nonlinear oscillators described by phase variables alone. The novel aspect of the model is that its equations may be solved analytically. Computer simulations are used to test the model against sleep-wake data pooled from 15 studies of subjects living for weeks in unscheduled, time-free environments. On these tests the model performs about as well as the existing models, although its mathematical structure is far simpler.
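A minimal sketch of a phase-only, two-oscillator model of this kind is shown below; the intrinsic periods, coupling strengths, and interpretation of the two phases are assumptions for illustration rather than the paper's exact equations.

```python
import numpy as np

# Minimal sketch (an illustrative guess at the structure, not the paper's exact
# equations) of two coupled phase-only oscillators: a temperature/pacemaker
# phase and a sleep-wake phase, each advancing at its own intrinsic rate and
# pulled toward the other through sinusoidal coupling.

dt = 0.01                       # hours
omega_temp = 2 * np.pi / 24.2   # intrinsic period ~24.2 h (assumed)
omega_sleep = 2 * np.pi / 24.8  # intrinsic period ~24.8 h (assumed)
k_ts, k_st = 0.02, 0.10         # asymmetric coupling strengths (assumed)

theta_t, theta_s = 0.0, 0.5
phase_gap = []
for step in range(int(30 * 24 / dt)):                 # simulate 30 days
    dtheta_t = omega_temp + k_ts * np.sin(theta_s - theta_t)
    dtheta_s = omega_sleep + k_st * np.sin(theta_t - theta_s)
    theta_t += dt * dtheta_t
    theta_s += dt * dtheta_s
    phase_gap.append(np.angle(np.exp(1j * (theta_s - theta_t))))

# If coupling is strong enough the two rhythms entrain (constant phase gap);
# otherwise the sleep-wake cycle drifts relative to temperature, as in the
# internal desynchrony seen in time-free environments.
print("final phase difference (rad):", round(phase_gap[-1], 3))
```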
NASA Astrophysics Data System (ADS)
Stotz, I. L.; Iaffaldano, G.; Davies, D. R.
2018-01-01
The Pacific Plate is thought to be driven mainly by slab pull, associated with subduction along the Aleutians-Japan, Marianas-Izu-Bonin, and Tonga-Kermadec trenches. This implies that viscous flow within the sub-Pacific asthenosphere is mainly generated by overlying plate motion (i.e., Couette flow) and that the associated shear stresses at the lithosphere's base are resisting such motion. Recent studies on glacial isostatic adjustment and lithosphere dynamics provide tighter constraints on the viscosity and thickness of Earth's asthenosphere and, therefore, on the amount of shear stress that asthenosphere and lithosphere mutually exchange, by virtue of Newton's third law of motion. In light of these constraints, the notion that subduction is the main driver of present-day Pacific Plate motion becomes somewhat unviable, as the pulling force that would be required by slabs exceeds the maximum available from their negative buoyancy. Here we use coupled global models of mantle and lithosphere dynamics to show that the sub-Pacific asthenosphere features a significant component of pressure-driven (i.e., Poiseuille) flow and that this has driven at least 50% of the Pacific Plate motion since, at least, 15 Ma. A corollary of our models is that a sublithospheric pressure difference as high as ±50 MPa is required across the Pacific domain.
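The stress budget behind this argument can be sized with a channel-flow estimate: Couette shear at the plate base is eta*U/h, while a pressure-driven (Poiseuille) component adds roughly (h/2)*dP/dx at the walls. Every number in the sketch below is an illustrative assumption, not a value from the study.

    # Order-of-magnitude basal shear stress in an asthenospheric channel.
    eta = 5e19                       # asthenosphere viscosity, Pa*s (assumed)
    h = 150e3                        # channel thickness, m (assumed)
    U = 0.10 / 3.15e7                # plate speed: ~10 cm/yr expressed in m/s
    dPdx = 50e6 / 10_000e3           # ~50 MPa pressure difference over ~10,000 km, Pa/m

    tau_couette = eta * U / h        # shear from plate-driven (Couette) flow
    tau_poiseuille = 0.5 * h * dPdx  # wall shear from pressure-driven (Poiseuille) flow
    print(f"Couette ~ {tau_couette / 1e6:.2f} MPa, Poiseuille ~ {tau_poiseuille / 1e6:.2f} MPa")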
A Decision Fusion Framework for Treatment Recommendation Systems.
Mei, Jing; Liu, Haifeng; Li, Xiang; Xie, Guotong; Yu, Yiqin
2015-01-01
Treatment recommendation is a nontrivial task--it requires not only domain knowledge from evidence-based medicine, but also data insights from descriptive, predictive and prescriptive analysis. A single treatment recommendation system is usually trained or modeled with a limited (size or quality) source. This paper proposes a decision fusion framework, combining both knowledge-driven and data-driven decision engines for treatment recommendation. End users (e.g. using the clinician workstation or mobile apps) could have a comprehensive view of various engines' opinions, as well as the final decision after fusion. For implementation, we leverage several well-known fusion algorithms, such as decision templates and meta classifiers (of logistic and SVM, etc.). Using an outcome-driven evaluation metric, we compare the fusion engine with base engines, and our experimental results show that decision fusion is a promising way towards a more valuable treatment recommendation.
LexValueSets: An Approach for Context-Driven Value Sets Extraction
Pathak, Jyotishman; Jiang, Guoqian; Dwarkanath, Sridhar O.; Buntrock, James D.; Chute, Christopher G.
2008-01-01
The ability to model, share and re-use value sets across multiple medical information systems is an important requirement. However, generating value sets semi-automatically from a terminology service is still an unresolved issue, in part due to the lack of linkage to clinical context patterns that provide the constraints in defining a concept domain and invocation of value sets extraction. Towards this goal, we develop and evaluate an approach for context-driven automatic value sets extraction based on a formal terminology model. The crux of the technique is to identify and define the context patterns from various domains of discourse and leverage them for value set extraction using two complementary ideas based on (i) local terms provided by the Subject Matter Experts (extensional) and (ii) semantic definition of the concepts in coding schemes (intensional). A prototype was implemented based on SNOMED CT rendered in the LexGrid terminology model and a preliminary evaluation is presented. PMID:18998955
Dynamically adaptive data-driven simulation of extreme hydrological flows
NASA Astrophysics Data System (ADS)
Kumar Jain, Pushkar; Mandli, Kyle; Hoteit, Ibrahim; Knio, Omar; Dawson, Clint
2018-02-01
Hydrological hazards such as storm surges, tsunamis, and rainfall-induced flooding are physically complex events that are costly in loss of human life and economic productivity. Many such disasters could be mitigated through improved emergency evacuation in real-time and through the development of resilient infrastructure based on knowledge of how systems respond to extreme events. Data-driven computational modeling is a critical technology underpinning these efforts. This investigation focuses on the novel combination of methodologies in forward simulation and data assimilation. The forward geophysical model utilizes adaptive mesh refinement (AMR), a process by which a computational mesh can adapt in time and space based on the current state of a simulation. The forward solution is combined with ensemble based data assimilation methods, whereby observations from an event are assimilated into the forward simulation to improve the veracity of the solution, or used to invert for uncertain physical parameters. The novelty in our approach is the tight two-way coupling of AMR and ensemble filtering techniques. The technology is tested using actual data from the Chile tsunami event of February 27, 2010. These advances offer the promise of significantly transforming data-driven, real-time modeling of hydrological hazards, with potentially broader applications in other science domains.
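The ensemble-based assimilation component can be illustrated with a bare stochastic ensemble Kalman filter analysis step; the state, observation operator, and noise levels below are placeholders rather than the AMR-coupled configuration used in the study.

    import numpy as np

    def enkf_update(X, y, H, R):
        """Stochastic ensemble Kalman filter analysis step.
        X: (n_state, n_ens) forecast ensemble, y: (n_obs,) observations,
        H: (n_obs, n_state) observation operator, R: (n_obs, n_obs) obs covariance."""
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)              # ensemble anomalies
        P = A @ A.T / (n_ens - 1)                          # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
        Y = y[:, None] + np.random.multivariate_normal(    # perturbed observations
            np.zeros(len(y)), R, size=n_ens).T
        return X + K @ (Y - H @ X)

    # Tiny illustration: 3-variable state, one noisy observation of the first variable.
    rng = np.random.default_rng(0)
    X = rng.normal(1.0, 0.5, size=(3, 50))
    Xa = enkf_update(X, np.array([1.4]), np.array([[1.0, 0.0, 0.0]]), np.array([[0.05]]))
    print("analysis mean:", Xa.mean(axis=1))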
High performance cellular level agent-based simulation with FLAME for the GPU.
Richmond, Paul; Walker, Dawn; Coakley, Simon; Romano, Daniela
2010-05-01
Driven by the availability of experimental data and ability to simulate a biological scale which is of immediate interest, the cellular scale is fast emerging as an ideal candidate for middle-out modelling. As with 'bottom-up' simulation approaches, cellular level simulations demand a high degree of computational power, which in large-scale simulations can only be achieved through parallel computing. The flexible large-scale agent modelling environment (FLAME) is a template driven framework for agent-based modelling (ABM) on parallel architectures ideally suited to the simulation of cellular systems. It is available for both high performance computing clusters (www.flame.ac.uk) and GPU hardware (www.flamegpu.com) and uses a formal specification technique that acts as a universal modelling format. This not only creates an abstraction from the underlying hardware architectures, but avoids the steep learning curve associated with programming them. In benchmarking tests and simulations of advanced cellular systems, FLAME GPU has reported massive improvement in performance over more traditional ABM frameworks. This allows the time spent in the development and testing stages of modelling to be drastically reduced and creates the possibility of real-time visualisation for simple visual face-validation.
Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes
Pakhotin, I P; Drozdov, A Y; Shprits, Y Y; Boynton, R J; Subbotin, D A; Balikhin, M A
2014-01-01
This study presents a fusion of data-driven and physics-driven methodologies of energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Averages with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code, to simulate energetic electron fluxes in the outer radiation belt environment. The coupled system has been tested for three extended time periods totalling several weeks of observations. The time periods involved periods of quiet, moderate, and strong geomagnetic activity and captured a range of dynamics typical of the radiation belts. The model has successfully simulated energetic electron fluxes for various magnetospheric conditions. Physical mechanisms that may be responsible for the discrepancies between the model results and observations are discussed. PMID:26167432
Equivalent circuit of radio frequency-plasma with the transformer model
NASA Astrophysics Data System (ADS)
Nishida, K.; Mochizuki, S.; Ohta, M.; Yasumoto, M.; Lettry, J.; Mattei, S.; Hatayama, A.
2014-02-01
The LINAC4 H- source is a radio frequency (RF) driven source. In the RF system, it is required to match the load impedance, which includes the H- source, to that of the final amplifier. We model the RF plasma inside the H- source as circuit elements using a transformer model so that the characteristics of the load impedance become calculable. It has been shown that the modeling based on the transformer model works well to predict the resistance and inductance of the plasma.
How schizophrenia develops: cognitive and brain mechanisms underlying onset of psychosis
Cannon, Tyrone D.
2015-01-01
Identifying cognitive and neural mechanisms involved in the development of schizophrenia requires longitudinal observation of individuals prior to onset. Here recent studies of prodromal individuals who progress to full psychosis are briefly reviewed in relation to models of schizophrenia pathophysiology. Together, this body of work suggests that disruption in brain connectivity, driven primarily by a progressive reduction in dendritic spines on cortical pyramidal neurons, may represent a key triggering mechanism. The earliest disruptions appear to be in circuits involved in referencing experiences according to time, place, and agency, which may result in a failure to recognize particular cognitions as self-generated or to constrain interpretations of the meaning of events based on prior experiences, providing the scaffolding for faulty reality testing. PMID:26493362
Pre-Launch Tasks Proposed in our Contract of December 1991
NASA Technical Reports Server (NTRS)
1998-01-01
We propose, during the pre-EOS phase to: (1) develop, with other MODIS Team Members, a means of discriminating different major biome types with NDVI and other AVHRR-based data; (2) develop a simple ecosystem process model for each of these biomes, BIOME-BGC; (3) relate the seasonal trend of weekly composite NDVI to vegetation phenology and temperature limits to develop a satellite defined growing season for vegetation; and (4) define physiologically based energy to mass conversion factors for carbon and water for each biome. Our final core at-launch product will be simplified, completely satellite driven biome specific models for net primary production. We will build these biome specific satellite driven algorithms using a family of simple ecosystem process models as calibration models, collectively called BIOME-BGC, and establish coordination with an existing network of ecological study sites in order to test and validate these products. Field datasets will then be available for both BIOME-BGC development and testing, use for algorithm developments of other MODIS Team Members, and ultimately be our first test point for MODIS land vegetation products upon launch. We will use field sites from the National Science Foundation Long-Term Ecological Research network, and develop Glacier National Park as a major site for intensive validation.
Pre-Launch Tasks Proposed in our Contract of December 1991
NASA Technical Reports Server (NTRS)
Running, Steven W.; Nemani, Ramakrishna R.; Glassy, Joseph
1997-01-01
We propose, during the pre-EOS phase to: (1) develop, with other MODIS Team Members, a means of discriminating different major biome types with NDVI and other AVHRR-based data; (2) develop a simple ecosystem process model for each of these biomes, BIOME-BGC; (3) relate the seasonal trend of weekly composite NDVI to vegetation phenology and temperature limits to develop a satellite defined growing season for vegetation; and (4) define physiologically based energy to mass conversion factors for carbon and water for each biome. Our final core at-launch product will be simplified, completely satellite driven biome specific models for net primary production. We will build these biome specific satellite driven algorithms using a family of simple ecosystem process models as calibration models, collectively called BIOME-BGC, and establish coordination with an existing network of ecological study sites in order to test and validate these products. Field datasets will then be available for both BIOME-BGC development and testing, use for algorithm developments of other MODIS Team Members, and ultimately be our first test point for MODIS land vegetation products upon launch. We will use field sites from the National Science Foundation Long-Term Ecological Research network, and develop Glacier National Park as a major site for intensive validation.
Flow-aggregated traffic-driven label mapping in label-switching networks
NASA Astrophysics Data System (ADS)
Nagami, Kenichi; Katsube, Yasuhiro; Esaki, Hiroshi; Nakamura, Osamu
1998-12-01
Label switching technology enables high-performance, flexible, layer-3 packet forwarding based on fixed-length label information mapped to the layer-3 packet stream. A Label Switching Router (LSR) forwards layer-3 packets based on the label information mapped to their layer-3 address information, as well as on the layer-3 address information itself. This paper evaluates the required number of labels under a traffic-driven label mapping policy using real backbone traffic traces. The evaluation shows that this label mapping policy requires a large number of labels. In order to reduce the required number of labels, we propose a label mapping policy which is a traffic-driven label mapping for the traffic toward the same destination network. The evaluation shows that the proposed label mapping policy requires only about one tenth as many labels compared with the traffic-driven label mapping for the host-pair packet stream, and the topology-driven label mapping for the destination network packet stream.
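The saving the authors report can be illustrated by counting distinct labels under the two policies on a packet trace: one label per host pair versus one per destination network. The trace and the /24 aggregation prefix below are assumptions for illustration.

    from ipaddress import ip_network

    # Illustrative packet trace: (src_ip, dst_ip) pairs seen on a backbone link.
    trace = [
        ("10.0.0.1", "192.0.2.5"), ("10.0.0.2", "192.0.2.9"),
        ("10.0.1.7", "192.0.2.5"), ("10.0.0.1", "198.51.100.3"),
    ]

    # Traffic-driven mapping per host-pair stream: one label per (src, dst).
    host_pair_labels = {(src, dst) for src, dst in trace}

    # Proposed policy: aggregate flows toward the same destination network (/24 assumed).
    dest_net_labels = {ip_network(dst + "/24", strict=False) for _, dst in trace}

    print(len(host_pair_labels), "host-pair labels vs",
          len(dest_net_labels), "destination-network labels")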
Model-Driven Development for scientific computing. An upgrade of the RHEEDGr program
NASA Astrophysics Data System (ADS)
Daniluk, Andrzej
2009-11-01
Model-Driven Engineering (MDE) is the software engineering discipline which considers models the most important element for software development, and for the maintenance and evolution of software, through model transformation. Model-Driven Architecture (MDA) is the approach to software development under the Model-Driven Engineering framework. This paper surveys the core MDA technology that was used to upgrade the RHEEDGR program to C++0x language standards. New version program summary. Program title: RHEEDGR-09. Catalogue identifier: ADUY_v3_0. Program summary URL:
NASA Technical Reports Server (NTRS)
Klutz, Glenn
1989-01-01
A facility was established that takes collected data and feeds it into mathematical models that generate improved data arrays by correcting for various losses and baseline drift and converting to unity scaling. These developed data arrays have headers and other identifying information affixed and are subsequently stored in a Laser Materials and Characteristics data base which is accessible to various users. The two-part data base, comprising absorption-emission spectra and tabulated data, is developed around twelve laser models. The tabulated section of the data base is divided into several parts: crystalline, optical, mechanical, and thermal properties; absorption and emission spectra information; chemical names and formulas; and miscellaneous. A menu-driven, language-free graphing program will reduce or remove the requirement that users become competent FORTRAN programmers, along with the concomitant requirement that they spend several days to a few weeks becoming conversant with the GEOGRAF library and its sequence of calls, with continual refreshers of both. The work included becoming thoroughly conversant, or at least very familiar, with GEOGRAF by GEOCOMP Corp. The development of the graphing program involved trial runs of the various callable library routines on dummy data in order to become familiar with actual implementation and sequencing. This was followed by trial runs with actual data base files and some additional data from current research that was not in the data base but currently needed graphs. After successful runs with dummy and real data using actual FORTRAN instructions, steps were undertaken to develop the menu-driven, language-free implementation of a program which would require the user only to know how to use microcomputers. The user would simply respond to items displayed on the video screen. To assist the user in arriving at the optimum values needed for a specific graph, a paper-and-pencil checklist was made available for use on the trial runs.
Thaker, Nikhil G; Orio, Peter F; Potters, Louis
Magnetic resonance imaging (MRI) simulation and planning for prostate brachytherapy (PBT) may deliver potential clinical benefits but at an unknown cost to the provider and healthcare system. Time-driven activity-based costing (TDABC) is an innovative bottom-up costing tool in healthcare that can be used to measure the actual consumption of resources required over the full cycle of care. TDABC analysis was conducted to compare patient-level costs for an MRI-based versus traditional PBT workflow. TDABC cost was only 1% higher for the MRI-based workflow, and utilization of MRI allowed for cost shifting from other imaging modalities, such as CT and ultrasound, to MRI during the PBT process. Future initiatives will be required to follow the costs of care over longer periods of time to determine if improvements in outcomes and toxicities with an MRI-based approach lead to lower resource utilization and spending over the long-term. Understanding provider costs will become important as healthcare reform transitions to value-based purchasing and other alternative payment models. Copyright © 2016 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Preliminary research of a novel center-driven robot for upper extremity rehabilitation.
Cao, Wujing; Zhang, Fei; Yu, Hongliu; Hu, Bingshan; Meng, Qiaoling
2018-01-19
Loss of upper limb function often appears after stroke. Robot-assisted systems are becoming increasingly common in upper extremity rehabilitation. Rehabilitation robots provide intensive motor therapy, which can be performed in a repetitive, accurate and controllable manner. This study aims to propose a novel center-driven robot for upper extremity rehabilitation. A new power transmission mechanism is designed to transfer power to the elbow and shoulder joints from three motors located on the base. The forward and inverse kinematics equations of the center-driven robot (CENTROBOT) are deduced separately. The theoretical values of the range of joint movements are obtained with the Denavit-Hartenberg parameters method. A prototype of the CENTROBOT is developed and tested. Elbow flexion/extension, shoulder flexion/extension and shoulder adduction/abduction can be realized by the center-driven robot. The joint angle values are in conformity with the theoretical values. By setting all motors on the base, the CENTROBOT reduces the overall size of the robot arm and the influence of motor noise, radiation and other adverse factors. It can satisfy the requirements of power and movement transmission of the robot arm.
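The forward-kinematics step mentioned above follows the standard Denavit-Hartenberg construction; the sketch below shows that computation with a placeholder DH table, not the CENTROBOT's actual link parameters.

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Standard Denavit-Hartenberg homogeneous transform for one link."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])

    def forward_kinematics(joint_angles, dh_table):
        """Chain the per-link transforms to get the end-effector pose."""
        T = np.eye(4)
        for theta, (d, a, alpha) in zip(joint_angles, dh_table):
            T = T @ dh_transform(theta, d, a, alpha)
        return T

    # Placeholder DH table (d, a, alpha) for a 3-joint shoulder/elbow chain, metres.
    dh_table = [(0.0, 0.0, np.pi / 2), (0.0, 0.30, 0.0), (0.0, 0.25, 0.0)]
    pose = forward_kinematics([0.3, 0.5, -0.4], dh_table)
    print("end-effector position:", pose[:3, 3])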
NASA Office of Aeronautics and Space Technology Summer Workshop. Volume 10: Basic research panel
NASA Technical Reports Server (NTRS)
1975-01-01
Possible research experiments using the space transportation system are identified based on user requirements. Opportunity driven research areas include quantum electronics, cryogenics system technology, superconducting devices and detectors, and photo-induced reactions. Mission driven research requirements were examined and ranked based on inputs from the user group.
Fuzzy model-based fault detection and diagnosis for a pilot heat exchanger
NASA Astrophysics Data System (ADS)
Habbi, Hacene; Kidouche, Madjid; Kinnaert, Michel; Zelmat, Mimoun
2011-04-01
This article addresses the design and real-time implementation of a fuzzy model-based fault detection and diagnosis (FDD) system for a pilot co-current heat exchanger. The design method is based on a three-step procedure which involves the identification of data-driven fuzzy rule-based models, the design of a fuzzy residual generator and the evaluation of the residuals for fault diagnosis using statistical tests. The fuzzy FDD mechanism has been implemented and validated on the real co-current heat exchanger, and has been proven to be efficient in detecting and isolating process, sensor and actuator faults.
Bouguecha, Salah T; Boubakri, Ali; Aly, Samir E; Al-Beirutty, Mohammad H; Hamdi, Mohamed M
2016-01-01
Membrane distillation (MD) is considered a relatively energy-intensive process. To overcome this drawback, it is recommended to couple the MD process with solar energy as a renewable energy source in order to provide the heat required to produce permeate flux at optimized performance. In the present work, an original solar-energy-driven direct contact membrane distillation (DCMD) pilot plant was built and tested under actual weather conditions at Jeddah, KSA, in order to model and optimize permeate flux. The dependency of permeate flux on various operating parameters such as feed temperature (46.6-63.4°C), permeate temperature (6.6-23.4°C), feed flow rate (199-451 L/h) and permeate flow rate (199-451 L/h) was studied by response surface methodology based on a central composite design approach. The analysis of variance (ANOVA) confirmed that all independent variables had a significant influence on the model (P-value <0.05). The high coefficients of determination (R(2) = 0.9644 and R(adj)(2) = 0.9261) obtained by ANOVA demonstrated good correlation between experimental and predicted values of the response. The optimized conditions, determined using a desirability function, were T(f) = 63.4°C, Tp = 6.6°C, Q(f) = 451 L/h and Q(p) = 451 L/h. Under these conditions, the maximum permeate flux of 6.122 kg/m(2).h was achieved, which was close to the predicted value of 6.398 kg/m(2).h.
A Cohesive Zone Approach for Fatigue-Driven Delamination Analysis in Composite Materials
NASA Astrophysics Data System (ADS)
Amiri-Rad, Ahmad; Mashayekhi, Mohammad
2017-08-01
A new model for prediction of fatigue-driven delamination in laminated composites is proposed using cohesive interface elements. The presented model provides a link between cohesive elements damage evolution rate and crack growth rate of Paris law. This is beneficial since no additional material parameters are required and the well-known Paris law constants are used. The link between the cohesive zone method and fracture mechanics is achieved without use of effective length which has led to more accurate results. The problem of unknown failure path in calculation of the energy release rate is solved by imposing a condition on the damage model which leads to completely vertical failure path. A global measure of energy release rate is used for the whole cohesive zone which is computationally more efficient compared to previous similar models. The performance of the proposed model is investigated by simulation of well-known delamination tests and comparison against experimental data of the literature.
NASA Astrophysics Data System (ADS)
Kirchner, James W.
2006-03-01
The science of hydrology is on the threshold of major advances, driven by new hydrologic measurements, new methods for analyzing hydrologic data, and new approaches to modeling hydrologic systems. Here I suggest several promising directions forward, including (1) designing new data networks, field observations, and field experiments, with explicit recognition of the spatial and temporal heterogeneity of hydrologic processes, (2) replacing linear, additive "black box" models with "gray box" approaches that better capture the nonlinear and non-additive character of hydrologic systems, (3) developing physically based governing equations for hydrologic behavior at the catchment or hillslope scale, recognizing that they may look different from the equations that describe the small-scale physics, (4) developing models that are minimally parameterized and therefore stand some chance of failing the tests that they are subjected to, and (5) developing ways to test models more comprehensively and incisively. I argue that scientific progress will mostly be achieved through the collision of theory and data, rather than through increasingly elaborate and parameter-rich models that may succeed as mathematical marionettes, dancing to match the calibration data even if their underlying premises are unrealistic. Thus advancing the science of hydrology will require not only developing theories that get the right answers but also testing whether they get the right answers for the right reasons.
NASA Astrophysics Data System (ADS)
Kovalskyy, V.; Henebry, G. M.
2012-01-01
Phenologies of the vegetated land surface are being used increasingly for diagnosis and prognosis of climate change consequences. Current prospective and retrospective phenological models stand far apart in their approaches to the subject. We report on an exploratory attempt to implement a phenological model based on a new event driven concept which has both diagnostic and prognostic capabilities in the same modeling framework. This Event Driven Phenological Model (EDPM) is shown to simulate land surface phenologies and phenophase transition dates in agricultural landscapes based on assimilation of weather data and land surface observations from spaceborne sensors. The model enables growing season phenologies to develop in response to changing environmental conditions and disturbance events. It also has the ability to ingest remotely sensed data to adjust its output to improve representation of the modeled variable. We describe the model and report results of initial testing of the EDPM using Level 2 flux tower records from the Ameriflux sites at Mead, Nebraska, USA, and at Bondville, Illinois, USA. Simulating the dynamics of normalized difference vegetation index based on flux tower data, the predictions by the EDPM show good agreement (RMSE < 0.08; r2 > 0.8) for maize and soybean during several growing seasons at different locations. This study presents the EDPM used in the companion paper (Kovalskyy and Henebry, 2011) in a coupling scheme to estimate daily actual evapotranspiration over multiple growing seasons.
NASA Astrophysics Data System (ADS)
Kovalskyy, V.; Henebry, G. M.
2011-05-01
Phenologies of the vegetated land surface are being used increasingly for diagnosis and prognosis of climate change consequences. Current prospective and retrospective phenological models stand far apart in their approaches to the subject. We report on an exploratory attempt to implement a phenological model based on a new event driven concept which has both diagnostic and prognostic capabilities in the same modeling framework. This Event Driven Phenological Model (EDPM) is shown to simulate land surface phenologies and phenophase transition dates in agricultural landscapes based on assimilation of weather data and land surface observations from spaceborne sensors. The model enables growing season phenologies to develop in response to changing environmental conditions and disturbance events. It also has the ability to ingest remotely sensed data to adjust its output to improve representation of the modeled variable. We describe the model and report results of initial testing of the EDPM using Level 2 flux tower records from the Ameriflux sites at Mead, Nebraska, USA, and at Bondville, Illinois, USA. Simulating the dynamics of normalized difference vegetation index based on flux tower data, the predictions by the EDPM show good agreement (RMSE < 0.08; r2>0.8) for maize and soybean during several growing seasons at different locations. This study presents the EDPM used in the companion paper (Kovalskyy and Henebry, 2011) in a coupling scheme to estimate daily actual evapotranspiration over multiple growing seasons.
Liu, Zhuofu; Wang, Lin; Luo, Zhongming; Heusch, Andrew I; Cascioli, Vincenzo; McCarthy, Peter W
2015-11-01
There is a need to develop a greater understanding of temperature at the skin-seat interface during prolonged seating from the perspectives of both industrial design (comfort/discomfort) and medical care (skin ulcer formation). Here we test the concept of predicting temperature at the seat surface and skin interface during prolonged sitting (such as required from wheelchair users). As caregivers are usually busy, such a method would give them warning ahead of a problem. This paper describes a data-driven model capable of predicting thermal changes and thus having the potential to provide an early warning (15- to 25-min ahead prediction) of an impending temperature that may increase the risk for potential skin damages for those subject to enforced sitting and who have little or no sensory feedback from this area. Initially, the oscillations of the original signal are suppressed using the reconstruction strategy of empirical mode decomposition (EMD). Consequentially, the autoregressive data-driven model can be used to predict future thermal trends based on a shorter period of acquisition, which reduces the possibility of introducing human errors and artefacts associated with longer duration "enforced" sitting by volunteers. In this study, the method had a maximum predictive error of <0.4 °C when used to predict the temperature at the seat and skin interface 15 min ahead, but required 45 min data prior to give this accuracy. Although the 45 min front loading of data appears large (in proportion to the 15 min prediction), a relative strength derives from the fact that the same algorithm could be used on the other 4 sitting datasets created by the same individual, suggesting that the period of 45 min required to train the algorithm is transferable to other data from the same individual. This approach might be developed (along with incorporation of other measures such as movement and humidity) into a system that can give caregivers prior warning to help avoid exacerbating the skin disorders of patients who suffer from low body insensitivity and disability requiring them to be immobile in seats for prolonged periods. Copyright © 2015 Tissue Viability Society. Published by Elsevier Ltd. All rights reserved.
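The prediction core described (an autoregressive model projected 15-25 min ahead) can be sketched with a plain least-squares AR fit and iterative multi-step forecasting; the EMD preprocessing is omitted here, and the model order, sampling interval, and temperature trace are assumptions rather than the study's data.

    import numpy as np

    def fit_ar(signal, order):
        """Least-squares fit of x[t] = c + a_1*x[t-1] + ... + a_order*x[t-order]."""
        X = np.column_stack([signal[order - k - 1: len(signal) - k - 1]
                             for k in range(order)])
        X = np.column_stack([np.ones(len(X)), X])
        coeffs, *_ = np.linalg.lstsq(X, signal[order:], rcond=None)
        return coeffs                            # [c, a_1, ..., a_order]

    def forecast(signal, coeffs, steps):
        """Iterative multi-step-ahead prediction from the last observed values."""
        history = list(signal[-(len(coeffs) - 1):])
        preds = []
        for _ in range(steps):
            x_next = coeffs[0] + np.dot(coeffs[1:], history[::-1])
            preds.append(x_next)
            history = history[1:] + [x_next]
        return np.array(preds)

    # Synthetic skin-seat temperature drift sampled once per minute (assumed).
    rng = np.random.default_rng(0)
    t = np.arange(60)
    temp = 30 + 4 * (1 - np.exp(-t / 25)) + rng.normal(0, 0.05, t.size)

    coeffs = fit_ar(temp[:45], order=5)             # train on the first 45 minutes
    print(forecast(temp[:45], coeffs, steps=15))    # predict the next 15 minutes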
Towal, R Blythe; Mormann, Milica; Koch, Christof
2013-10-01
Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift-diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions.
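A minimal simulation of the winning model class, in which each item's perceptual accumulator is driven by an additive saliency-value mix weighted roughly one-third to two-thirds, is sketched below; the noise level, threshold, and item scores are illustrative rather than the paper's fitted parameters.

    import numpy as np

    def race_to_fixation(saliency, value, w_sal=1/3, w_val=2/3,
                         threshold=1.0, noise=0.3, dt=0.01, rng=None):
        """Per-item drift-diffusion accumulators race to a bound; the first to
        reach the threshold determines which item is fixated/chosen."""
        rng = rng or np.random.default_rng()
        drift = w_sal * np.asarray(saliency) + w_val * np.asarray(value)
        x = np.zeros(len(drift))
        while np.all(x < threshold):
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drift))
        return int(np.argmax(x))

    # Four food items with illustrative (normalized) saliency and value scores.
    saliency = [0.9, 0.4, 0.6, 0.2]
    value = [0.3, 0.8, 0.5, 0.7]
    rng = np.random.default_rng(1)
    choices = [race_to_fixation(saliency, value, rng=rng) for _ in range(2000)]
    print([round(choices.count(i) / len(choices), 3) for i in range(4)])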
Towal, R. Blythe; Mormann, Milica; Koch, Christof
2013-01-01
Many decisions we make require visually identifying and evaluating numerous alternatives quickly. These usually vary in reward, or value, and in low-level visual properties, such as saliency. Both saliency and value influence the final decision. In particular, saliency affects fixation locations and durations, which are predictive of choices. However, it is unknown how saliency propagates to the final decision. Moreover, the relative influence of saliency and value is unclear. Here we address these questions with an integrated model that combines a perceptual decision process about where and when to look with an economic decision process about what to choose. The perceptual decision process is modeled as a drift–diffusion model (DDM) process for each alternative. Using psychophysical data from a multiple-alternative, forced-choice task, in which subjects have to pick one food item from a crowded display via eye movements, we test four models where each DDM process is driven by (i) saliency or (ii) value alone or (iii) an additive or (iv) a multiplicative combination of both. We find that models including both saliency and value weighted in a one-third to two-thirds ratio (saliency-to-value) significantly outperform models based on either quantity alone. These eye fixation patterns modulate an economic decision process, also described as a DDM process driven by value. Our combined model quantitatively explains fixation patterns and choices with similar or better accuracy than previous models, suggesting that visual saliency has a smaller, but significant, influence than value and that saliency affects choices indirectly through perceptual decisions that modulate economic decisions. PMID:24019496
Data Driven Synthesis of Three Term Digital Controllers
NASA Astrophysics Data System (ADS)
Keel, Lee H.; Mitra, Sandipan; Bhattacharyya, Shankar P.
This paper presents a method for digital PID and first order controller synthesis based on frequency domain data alone. The techniques given here first determine all stabilizing controllers from measurement data. In both PID and first order controller cases, the only information required are frequency domain data (Nyquist-Bode data) and the number of open-loop RHP poles. Specifically no identification of the plant model is required. Examples are given for illustration.
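The central idea, tuning directly on measured frequency-response samples without a plant model, can be sketched by evaluating a candidate controller against the data and reading off rough stability margins. For simplicity this sketch uses a continuous-time PID rather than the paper's digital and first-order forms, and the "measured" plant response is a synthetic stand-in.

    import numpy as np

    # "Measured" plant frequency response: a synthetic stand-in for Nyquist/Bode data.
    w = np.logspace(-2, 2, 400)
    P = 1.0 / ((1j * w + 1.0) * (0.1j * w + 1.0))

    def pid_response(w, kp, ki, kd):
        """Frequency response of a continuous-time PID controller C(jw)."""
        return kp + ki / (1j * w) + kd * (1j * w)

    def rough_margins(L):
        """Approximate gain/phase margins from sampled open-loop response L(jw)."""
        mag, phase = np.abs(L), np.unwrap(np.angle(L))
        i_gc = np.argmin(np.abs(mag - 1.0))        # nearest gain-crossover sample
        i_pc = np.argmin(np.abs(phase + np.pi))    # nearest phase-crossover sample
        return 1.0 / mag[i_pc], np.degrees(phase[i_gc] + np.pi)

    # Candidate controller evaluated directly on the frequency-domain data.
    L = pid_response(w, kp=2.0, ki=1.0, kd=0.1) * P
    gm, pm = rough_margins(L)
    print(f"gain margin ~ {gm:.2f}, phase margin ~ {pm:.1f} deg")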
Zhao, Ming; Rattanatamrong, Prapaporn; DiGiovanna, Jack; Mahmoudi, Babak; Figueiredo, Renato J; Sanchez, Justin C; Príncipe, José C; Fortes, José A B
2008-01-01
Dynamic data-driven brain-machine interfaces (DDDBMI) have great potential to advance the understanding of neural systems and improve the design of brain-inspired rehabilitative systems. This paper presents a novel cyberinfrastructure that couples in vivo neurophysiology experimentation with massive computational resources to provide seamless and efficient support of DDDBMI research. Closed-loop experiments can be conducted with in vivo data acquisition, reliable network transfer, parallel model computation, and real-time robot control. Behavioral experiments with live animals are supported with real-time guarantees. Offline studies can be performed with various configurations for extensive analysis and training. A Web-based portal is also provided to allow users to conveniently interact with the cyberinfrastructure, conducting both experimentation and analysis. New motor control models are developed based on this approach, which include recursive least squares based (RLS) and reinforcement learning based (RLBMI) algorithms. The results from an online RLBMI experiment show that the cyberinfrastructure can successfully support DDDBMI experiments and meet the desired real-time requirements.
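One of the motor-control models named, recursive least squares (RLS), admits a compact generic sketch; the input dimension, forgetting factor, and simulated neural data below are assumptions for illustration only, not the decoder used in the experiments.

    import numpy as np

    class RLSDecoder:
        """Recursive least squares: y ~ W @ x, updated one sample at a time."""
        def __init__(self, n_inputs, n_outputs, lam=0.99, delta=100.0):
            self.W = np.zeros((n_outputs, n_inputs))
            self.P = delta * np.eye(n_inputs)    # inverse input-correlation estimate
            self.lam = lam                       # forgetting factor

        def update(self, x, y):
            x = x.reshape(-1, 1)
            k = self.P @ x / (self.lam + x.T @ self.P @ x)    # gain vector
            err = y - self.W @ x.ravel()
            self.W += np.outer(err, k.ravel())
            self.P = (self.P - k @ x.T @ self.P) / self.lam
            return err

    # Illustration: decode 2-D hand velocity from 16 simulated firing-rate channels.
    rng = np.random.default_rng(0)
    W_true = rng.normal(size=(2, 16))
    decoder = RLSDecoder(16, 2)
    for _ in range(500):
        x = rng.normal(size=16)
        decoder.update(x, W_true @ x + 0.05 * rng.normal(size=2))
    print("max weight error:", np.abs(decoder.W - W_true).max())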
Event-driven simulation in SELMON: An overview of EDSE
NASA Technical Reports Server (NTRS)
Rouquette, Nicolas F.; Chien, Steve A.; Charest, Leonard, Jr.
1992-01-01
EDSE (event-driven simulation engine), a model-based event-driven simulator implemented for SELMON, a tool for sensor selection and anomaly detection in real-time monitoring, is described. The simulator is used in conjunction with a causal model to predict future behavior of the model from observed data. The behavior of the causal model is interpreted as equivalent to the behavior of the physical system being modeled. An overview of the functionality of the simulator and the model-based event-driven simulation paradigm on which it is based is provided. Included are high-level descriptions of the following key properties: event consumption and event creation, iterative simulation, synchronization and filtering of monitoring data from the physical system. Finally, how EDSE stands with respect to the relevant open issues of discrete-event and model-based simulation is discussed.
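The event consumption/creation loop described can be written around a priority queue ordered by timestamp; the toy handlers below stand in for a causal model and are not SELMON's.

    import heapq

    def run(initial_events, handlers, t_end):
        """Consume events in timestamp order; handlers may create future events."""
        queue = list(initial_events)           # items: (time, event_type, payload)
        heapq.heapify(queue)
        while queue:
            t, kind, payload = heapq.heappop(queue)
            if t > t_end:
                break
            for new_event in handlers[kind](t, payload):
                heapq.heappush(queue, new_event)

    # Toy causal model: opening a valve predicts a pressure rise two time units later.
    def on_valve_open(t, payload):
        print(f"t={t:4.1f}  valve opened")
        return [(t + 2.0, "pressure_rise", 5.0)]     # newly created future event

    def on_pressure_rise(t, delta):
        print(f"t={t:4.1f}  pressure +{delta}")
        return []

    run([(0.0, "valve_open", None)],
        {"valve_open": on_valve_open, "pressure_rise": on_pressure_rise}, t_end=10.0)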
Bridging paradigms: hybrid mechanistic-discriminative predictive models.
Doyle, Orla M; Tsaneva-Atansaova, Krasimira; Harte, James; Tiffin, Paul A; Tino, Peter; Díaz-Zuccarini, Vanessa
2013-03-01
Many disease processes are extremely complex and characterized by multiple stochastic processes interacting simultaneously. Current analytical approaches have included mechanistic models and machine learning (ML), which are often treated as orthogonal viewpoints. However, to facilitate truly personalized medicine, new perspectives may be required. This paper reviews the use of both mechanistic models and ML in healthcare as well as emerging hybrid methods, which are an exciting and promising approach for biologically based, yet data-driven advanced intelligent systems.
NASA Technical Reports Server (NTRS)
Paciotti, Gabriel; Humphries, Martin; Rottmeier, Fabrice; Blecha, Luc
2014-01-01
In the frame of ESA's Solar Orbiter scientific mission, Almatech has been selected to design, develop and test the Slit Change Mechanism of the SPICE (SPectral Imaging of the Coronal Environment) instrument. In order to guarantee the optical cleanliness level while fulfilling stringent positioning accuracy and repeatability requirements for slit positioning in the optical path of the instrument, a linear guiding system based on a double flexible blade arrangement has been selected. The four different slits to be used for the SPICE instrument resulted in a total stroke of 16.5 mm in this linear slit changer arrangement. The combination of long stroke and high precision positioning requirements has been identified as the main design challenge to be validated through breadboard model testing. This paper presents the development of SPICE's Slit Change Mechanism (SCM) and the two-step validation tests successfully performed on breadboard models of its flexible blade support system. The validation test results have demonstrated the full adequacy of the flexible blade guiding system implemented in SPICE's Slit Change Mechanism in a stand-alone configuration. Further breadboard test results, studying the influence of the compliant connection to the SCM linear actuator on an enhanced flexible guiding system design, have shown significant enhancements in the positioning accuracy and repeatability of the selected flexible guiding system. Preliminary evaluation of the linear actuator design, including detailed tolerance analyses, has shown the suitability of this satellite roller screw-based mechanism for the actuation of the tested flexible guiding system and compliant connection. The presented development and preliminary testing of the high-precision long-stroke Slit Change Mechanism for the SPICE instrument are considered fully successful, such that future tests considering the full Slit Change Mechanism can be performed, with the gained confidence, directly on a Qualification Model. The selected linear Slit Change Mechanism design concept, consisting of a flexible guiding system driven by a hermetically sealed linear drive mechanism, is considered validated for the specific application of the SPICE instrument, with great potential for other special applications where contamination and high precision positioning are dominant design drivers.
Study on the CO2 electric driven fixed swash plate type compressor for eco-friendly vehicles
NASA Astrophysics Data System (ADS)
Nam, Donglim; Kim, Kitae; Lee, Jehie; Kwon, Yunki; Lee, Geonho
2017-08-01
The purpose of this study is to experimentally evaluate and analyze the performance of an electric-driven fixed swash plate compressor using the alternative refrigerant R744. A comprehensive simulation model for an electric-driven compressor using CO2 for eco-friendly vehicles is presented. This model consists of a compression model and a dynamic model. The compression model includes valve dynamics, leakage, and heat transfer models. The dynamic model includes the frictional loss between piston ring and cylinder wall, the frictional loss between shoe and swash plate, the frictional loss of bearings, and the electric efficiency. In particular, because the efficiency of the electric parts (motor and inverter) in the compressor affects the losses of the compressor, a dynamo test was performed. We built the designed compressor and tested its performance under a variety of pressure conditions. We also compared the performance analysis results with the performance test results.
Liu, Rentao; Jiang, Jiping; Guo, Liang; Shi, Bin; Liu, Jie; Du, Zhaolin; Wang, Peng
2016-06-01
In-depth screening of emergency disposal technologies (EDT) and materials is required in the process of environmental pollution emergency disposal. However, an urgent problem that must be solved is how to quickly and accurately select the most appropriate materials for treating a pollution event from the existing spill control and clean-up materials (SCCM). To meet this need, the following objectives were addressed in this study. First, a material base and a case base for environmental pollution emergency disposal were established to build a foundation and provide material for SCCM screening. Second, a multiple case-based reasoning method with a difference-driven revision strategy (DDRS-MCBR) was applied to improve the original dual case-based reasoning system, and screening and decision-making for SCCM were performed using this model. Third, an actual environmental pollution accident from 2012 was used as a case study to verify the material base, case base, and screening model. The results demonstrated that the DDRS-MCBR method was fast, efficient, and practical. The DDRS-MCBR method changes the passive situation in which the choice of SCCM depends only on the subjective experience of the decision maker, and offers a new approach to SCCM screening.
Thomas, Russell S.
2013-01-01
Based on existing data and previous work, a series of studies is proposed as a basis toward a pragmatic early step in transforming toxicity testing. These studies were assembled into a data-driven framework that invokes successive tiers of testing with margin of exposure (MOE) as the primary metric. The first tier of the framework integrates data from high-throughput in vitro assays, in vitro-to-in vivo extrapolation (IVIVE) pharmacokinetic modeling, and exposure modeling. The in vitro assays are used to separate chemicals based on their relative selectivity in interacting with biological targets and identify the concentration at which these interactions occur. The IVIVE modeling converts in vitro concentrations into external dose for calculation of the point of departure (POD) and comparisons to human exposure estimates to yield a MOE. The second tier involves short-term in vivo studies, expanded pharmacokinetic evaluations, and refined human exposure estimates. The results from the second tier studies provide more accurate estimates of the POD and the MOE. The third tier contains the traditional animal studies currently used to assess chemical safety. In each tier, the POD for selective chemicals is based primarily on endpoints associated with a proposed mode of action, whereas the POD for nonselective chemicals is based on potential biological perturbation. Based on the MOE, a significant percentage of chemicals evaluated in the first 2 tiers could be eliminated from further testing. The framework provides a risk-based and animal-sparing approach to evaluate chemical safety, drawing broadly from previous experience but incorporating technological advances to increase efficiency. PMID:23958734
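The margin-of-exposure arithmetic at the heart of the framework is simple: MOE = point of departure / estimated human exposure, compared against a cut-off to decide whether a chemical advances to the next tier. All chemical names, doses, and the cut-off in the sketch below are invented for illustration.

    # Margin of exposure (MOE) = point of departure / estimated human exposure.
    chemicals = {
        # name: (POD from in vitro assays + IVIVE, mg/kg-day; exposure estimate, mg/kg-day)
        "chem_A": (12.0, 0.0004),
        "chem_B": (0.8, 0.02),
        "chem_C": (5.0, 0.5),
    }
    MOE_CUTOFF = 1000.0   # assumed margin below which a chemical advances to the next tier

    for name, (pod, exposure) in chemicals.items():
        moe = pod / exposure
        action = "defer further testing" if moe > MOE_CUTOFF else "advance to next tier"
        print(f"{name}: MOE = {moe:,.0f} -> {action}")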
Mao, Ningying; Lesher, Beth; Liu, Qifa; Qin, Lei; Chen, Yixi; Gao, Xin; Earnshaw, Stephanie R; McDade, Cheryl L; Charbonneau, Claudie
2016-01-01
Invasive fungal infections (IFIs) require rapid diagnosis and treatment. A decision-analytic model was used to estimate total costs and survival associated with a diagnostic-driven (DD) or an empiric treatment approach in neutropenic patients with hematological malignancies receiving chemotherapy or autologous/allogeneic stem cell transplants in Shanghai, Beijing, Chengdu, and Guangzhou, the People's Republic of China. Treatment initiation for the empiric approach occurred after clinical suspicion of an IFI; treatment initiation for the DD approach occurred after clinical suspicion and a positive IFI diagnostic test result. Model inputs were obtained from the literature; treatment patterns and resource use were based on clinical opinion. Total costs were lower for the DD versus the empiric approach in Shanghai (¥3,232 vs ¥4,331), Beijing (¥3,894 vs ¥4,864), Chengdu, (¥4,632 vs ¥5,795), and Guangzhou (¥8,489 vs ¥9,795). Antifungal administration was lower using the DD (5.7%) than empiric (9.8%) approach, with similar survival rates. Results from one-way and probabilistic sensitivity analyses were most sensitive to changes in diagnostic test sensitivity and IFI incidence; the DD approach dominated the empiric approach in 88% of scenarios. These results suggest that a DD compared to an empiric treatment approach in the People's Republic of China may be cost saving, with similar overall survival in immunocompromised patients with suspected IFIs.
NASA Astrophysics Data System (ADS)
Oskouie, M. Faraji; Ansari, R.; Rouhi, H.
2018-04-01
Eringen's nonlocal elasticity theory is extensively employed for the analysis of nanostructures because it is able to capture nanoscale effects. Previous studies have revealed that using the differential form of the strain-driven version of this theory leads to paradoxical results in some cases, such as bending analysis of cantilevers, and recourse must be made to the integral version. In this article, a novel numerical approach is developed for the bending analysis of Euler-Bernoulli nanobeams in the context of strain- and stress-driven integral nonlocal models. This numerical approach is proposed for the direct solution to bypass the difficulties related to converting the integral governing equation into a differential equation. First, the governing equation is derived based on both strain-driven and stress-driven nonlocal models by means of the minimum total potential energy. Also, in each case, the governing equation is obtained in both strong and weak forms. To solve numerically the derived equations, matrix differential and integral operators are constructed based upon the finite difference technique and trapezoidal integration rule. It is shown that the proposed numerical approach can be efficiently applied to the strain-driven nonlocal model with the aim of resolving the mentioned paradoxes. Also, it is able to solve the problem based on the strain-driven model without inconsistencies of the application of this model that are reported in the literature.
Remaining useful life assessment of lithium-ion batteries in implantable medical devices
NASA Astrophysics Data System (ADS)
Hu, Chao; Ye, Hui; Jain, Gaurav; Schmidt, Craig
2018-01-01
This paper presents a prognostic study on lithium-ion batteries in implantable medical devices, in which a hybrid data-driven/model-based method is employed for remaining useful life assessment. The method is developed on and evaluated against data from two sets of lithium-ion prismatic cells used in implantable applications exhibiting distinct fade performance: 1) eight cells from Medtronic, PLC whose rates of capacity fade appear to be stable and gradually decrease over a 10-year test duration; and 2) eight cells from Manufacturer X whose rates appear to be greater and show sharp increase after some period over a 1.8-year test duration. The hybrid method enables online prediction of remaining useful life for predictive maintenance/control. It consists of two modules: 1) a sparse Bayesian learning module (data-driven) for inferring capacity from charge-related features; and 2) a recursive Bayesian filtering module (model-based) for updating empirical capacity fade models and predicting remaining useful life. A generic particle filter is adopted to implement recursive Bayesian filtering for the cells from the first set, whose capacity fade behavior can be represented by a single fade model; a multiple model particle filter with fixed-lag smoothing is proposed for the cells from the second data set, whose capacity fade behavior switches between multiple fade models.
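The model-based module can be illustrated with a bootstrap particle filter tracking the parameters of an assumed exponential capacity-fade model; the fade-model form, noise levels, data, and end-of-life threshold below are assumptions, not the behavior of the cells in the study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed empirical fade model: capacity(k) = a * exp(-b * k), k = cycle index.
    def fade(params, k):
        return params[:, 0] * np.exp(-params[:, 1] * k)

    # Synthetic capacity measurements from a "true" cell.
    true_a, true_b, sigma_y = 1.0, 2e-3, 0.005
    cycles = np.arange(0, 300, 10)
    meas = true_a * np.exp(-true_b * cycles) + rng.normal(0, sigma_y, cycles.size)

    # Bootstrap particle filter over the fade-model parameters (a, b).
    n_p = 2000
    particles = np.column_stack([rng.normal(1.0, 0.05, n_p), rng.uniform(0, 5e-3, n_p)])
    for k, y in zip(cycles, meas):
        particles += rng.normal(0, [1e-3, 1e-5], size=particles.shape)   # random-walk evolution
        w = np.exp(-0.5 * ((y - fade(particles, k)) / sigma_y) ** 2)     # measurement likelihood
        particles = particles[rng.choice(n_p, size=n_p, p=w / w.sum())]  # resample

    # Remaining useful life: cycles until capacity falls below an assumed 80% threshold.
    a_hat, b_hat = particles.mean(axis=0)
    print(f"estimated end-of-life cycle ~ {np.log(a_hat / 0.8) / b_hat:.0f} "
          f"(true ~ {np.log(1 / 0.8) / true_b:.0f})")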
Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation
NASA Astrophysics Data System (ADS)
Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua
2015-09-01
Cable-driven exoskeletons have used active cables to actuate the system and are worn on subjects to provide motion assistance. However, this kind of wearable devices usually contains uncertain kinematic parameters. In this paper, a model-based identification method has been proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. Experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.
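The identification step regresses measured end-point errors on an identification Jacobian derived from the linearized error model; a generic least-squares sketch is below, with the Jacobians, parameter set, and measurements generated synthetically rather than taken from the exoskeleton.

    import numpy as np

    rng = np.random.default_rng(0)

    # Linearized error model: delta_x ~ J(q) @ delta_p, stacked over many poses.
    n_params, n_poses = 4, 30
    true_dp = np.array([0.010, -0.005, 0.002, 0.008])    # "true" parameter offsets (assumed)

    J_blocks, dx_blocks = [], []
    for _ in range(n_poses):
        J = rng.normal(size=(3, n_params))               # identification Jacobian at one pose
        dx = J @ true_dp + rng.normal(0, 1e-4, 3)        # measured minus nominal end-point
        J_blocks.append(J)
        dx_blocks.append(dx)

    dp_hat, *_ = np.linalg.lstsq(np.vstack(J_blocks), np.concatenate(dx_blocks), rcond=None)
    print("estimated parameter offsets:", np.round(dp_hat, 4))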
Aspects of the BPRIM Language for Risk Driven Process Engineering
NASA Astrophysics Data System (ADS)
Sienou, Amadou; Lamine, Elyes; Pingaud, Hervé; Karduck, Achim
Nowadays organizations are exposed to frequent changes in the business environment, requiring continuous alignment of business processes with business strategies. This agility requires methods promoted in enterprise engineering approaches. Risk consideration in enterprise engineering is becoming increasingly important since the business environment is becoming more competitive and unpredictable. Business processes are subject to the same quality requirements as material and human resources. Thus, process management is expected to tackle not only value creation challenges but also those related to value preservation. Our research considers risk-driven business process design as an integral part of enterprise engineering. A graphical modelling language for risk-driven business process engineering was introduced in former research. This paper extends the language and handles questions related to modelling risk in an organisational context.
Security Requirements Management in Software Product Line Engineering
NASA Astrophysics Data System (ADS)
Mellado, Daniel; Fernández-Medina, Eduardo; Piattini, Mario
Security requirements engineering is both a central task and a critical success factor in product line development due to the complexity and extensive nature of product lines. However, most of the current product line practices in requirements engineering do not adequately address security requirements engineering. Therefore, in this chapter we will propose a security requirements engineering process (SREPPLine) driven by security standards and based on a security requirements decision model along with a security variability model to manage the variability of the artefacts related to security requirements. The aim of this approach is to deal with security requirements from the early stages of the product line development in a systematic way, in order to facilitate conformance with the most relevant security standards with regard to the management of security requirements, such as ISO/IEC 27001 and ISO/IEC 15408.
Disaster Emergency Rapid Assessment Based on Remote Sensing and Background Data
NASA Astrophysics Data System (ADS)
Han, X.; Wu, J.
2018-04-01
The period from the onset of a disaster to the stabilization of conditions is an important stage of disaster development. In addition to collecting and reporting information on disaster situations, remote sensing images from satellites and drones and monitoring results from disaster-stricken areas should be obtained. Fusion of multi-source background data, such as population, geography and topography, with remote sensing monitoring information can be used in geographic information system analysis to quickly and objectively assess the disaster information. According to the characteristics of different hazards, models and methods driven by rapid-assessment mission requirements are tested and screened. Based on remote sensing images, the features of exposures are used to quickly determine disaster-affected areas and intensity levels, to extract key disaster information about affected hospitals and schools as well as cultivated land and crops, and to support decision-making after the emergency response with visual assessment results.
Machine Learning for Flood Prediction in Google Earth Engine
NASA Astrophysics Data System (ADS)
Kuhn, C.; Tellman, B.; Max, S. A.; Schwarz, B.
2015-12-01
With the increasing availability of high-resolution satellite imagery, dynamic flood mapping in near real time is becoming a reachable goal for decision-makers. This talk describes a newly developed framework for predicting biophysical flood vulnerability using public data, cloud computing and machine learning. Our objective is to define an approach to flood inundation modeling using statistical learning methods deployed in a cloud-based computing platform. Traditionally, static flood extent maps grounded in physically based hydrologic models can require hours of human expertise to construct at significant financial cost. In addition, desktop modeling software and limited local server storage can impose restraints on the size and resolution of input datasets. Data-driven, cloud-based processing holds promise for predictive watershed modeling at a wide range of spatio-temporal scales. However, these benefits come with constraints. In particular, parallel computing limits a modeler's ability to simulate the flow of water across a landscape, rendering traditional routing algorithms unusable in this platform. Our project pushes these limits by testing the performance of two machine learning algorithms, Support Vector Machine (SVM) and Random Forests, at predicting flood extent. Constructed in Google Earth Engine, the model mines a suite of publicly available satellite imagery layers to use as algorithm inputs. Results are cross-validated using MODIS-based flood maps created using the Dartmouth Flood Observatory detection algorithm. Model uncertainty highlights the difficulty of deploying unbalanced training data sets based on rare extreme events.
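The classifier comparison (SVM versus Random Forests) can be prototyped outside Earth Engine on a small tabular extract of per-pixel features; the scikit-learn sketch below uses synthetic features and labels purely to illustrate that comparison and the cross-validation step, not the GEE workflow itself.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Synthetic per-pixel features (e.g. elevation, slope, NDVI, distance to river).
    n = 3000
    X = rng.normal(size=(n, 4))
    # Synthetic "flooded" label loosely tied to low elevation and river proximity.
    y = ((-1.5 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(0, 0.5, n)) > 0).astype(int)

    for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                      ("SVM (RBF)", SVC(kernel="rbf", C=1.0, gamma="scale"))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean 5-fold CV accuracy = {scores.mean():.3f}")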
Lako, Christiaan J; Rosenau, Pauline
2009-03-01
In the Netherlands, current policy opinion emphasizes demand-driven health care. Central to this model is the view, advocated by some Dutch health policy makers, that patients should be encouraged to be aware of and make use of health quality and health outcomes information in making personal health care provider choices. The success of the new health care system in the Netherlands is premised on this being the case. After a literature review and description of the new Dutch health care system, the adequacy of this demand-driven health policy is tested. The data from a July 2005, self-administered questionnaire survey of 409 patients (response rate of 94%) as to how they choose a hospital are presented. Results indicate that most patients did not choose by actively employing available quality and outcome information. They were, rather, referred by their general practitioner. Hospital choice is highly related to the importance a patient attaches to his or her physician's opinion about a hospital. Some patients indicated that their hospital choice was affected by the reputation of the hospital, by the distance they lived from the hospital, etc. but physician's advice was, by far, the most important factor. Policy consequences are important; the assumptions underlying the demand-driven model of patient health provider choice are inadequate to explain the pattern of observed responses. An alternative, more adequate model is required, one that takes into account the patient's confidence in physician referral and advice.
ERIC Educational Resources Information Center
Henke, Karen Greenwood
2005-01-01
With the passage of "No Child Left Behind" in 2001, schools are expected to provide a standards-based curriculum for students to attain math and reading proficiency and demonstrate progress each year. "NCLB" requires more frequent student testing with publicly reported results in an effort to close the achievement gap and to inform parents,…
Sorokine, Alexandre; Schlicher, Bob G.; Ward, Richard C.; ...
2015-05-22
This paper describes an original approach to generating scenarios for the purpose of testing the algorithms used to detect special nuclear materials (SNM) that incorporates the use of ontologies. Separating the signal of SNM from the background requires sophisticated algorithms. To assist in developing such algorithms, there is a need for scenarios that capture a very wide range of variables affecting the detection process, depending on the type of detector being used. To provide such a capability, we developed an ontology-driven information system (ODIS) for generating scenarios that can be used in testing algorithms for SNM detection. The ontology-driven scenario generator (ODSG) is an ODIS based on information supplied by subject matter experts and other documentation. The details of the creation of the ontology, the development of the ontology-driven information system, and the design of the web user interface (UI) are presented along with specific examples of scenarios generated using the ODSG. We demonstrate that the paradigm behind the ODSG is capable of addressing the problem of semantic complexity at both the user and developer levels. Compared to traditional approaches, an ODIS provides benefits such as faithful representation of the users' domain conceptualization, simplified management of very large and semantically diverse datasets, and the ability to handle frequent changes to the application and the UI. Furthermore, the approach makes possible the generation of a much larger number of specific scenarios based on limited user-supplied information.
IoT-Based User-Driven Service Modeling Environment for a Smart Space Management System
Choi, Hoan-Suk; Rhee, Woo-Seop
2014-01-01
The existing Internet environment has been extended to the Internet of Things (IoT) as an emerging new paradigm. The IoT connects various physical entities. These entities have communication capability and deploy the observed information to various service areas such as building management, energy-saving systems, surveillance services, and smart homes. These services are designed and developed by professional service providers. Moreover, users' needs have become more complicated and personalized with the spread of user-participation services such as social media and blogging. Therefore, some active users want to create their own services to satisfy their needs, but the existing IoT service-creation environment is difficult for the non-technical user because it requires a programming capability to create a service. To solve this problem, we propose the IoT-based user-driven service modeling environment to provide an easy way to create IoT services. Also, the proposed environment deploys the defined service to another user. Through the personalization and customization of the defined service, the value and dissemination of the service is increased. This environment also provides the ontology-based context-information processing that produces and describes the context information for the IoT-based user-driven service. PMID:25420153
IoT-based user-driven service modeling environment for a smart space management system.
Choi, Hoan-Suk; Rhee, Woo-Seop
2014-11-20
The existing Internet environment has been extended to the Internet of Things (IoT) as an emerging new paradigm. The IoT connects various physical entities. These entities have communication capability and deploy the observed information to various service areas such as building management, energy-saving systems, surveillance services, and smart homes. These services are designed and developed by professional service providers. Moreover, users' needs have become more complicated and personalized with the spread of user-participation services such as social media and blogging. Therefore, some active users want to create their own services to satisfy their needs, but the existing IoT service-creation environment is difficult for the non-technical user because it requires a programming capability to create a service. To solve this problem, we propose the IoT-based user-driven service modeling environment to provide an easy way to create IoT services. Also, the proposed environment deploys the defined service to another user. Through the personalization and customization of the defined service, the value and dissemination of the service is increased. This environment also provides the ontology-based context-information processing that produces and describes the context information for the IoT-based user-driven service.
Tls Field Data Based Intensity Correction for Forest Environments
NASA Astrophysics Data System (ADS)
Heinzel, J.; Huber, M. O.
2016-06-01
Terrestrial laser scanning (TLS) is increasingly used for forestry applications. Besides the three dimensional point coordinates, the 'intensity' of the reflected signal plays an important role in forestry and vegetation studies. The benefit of the signal intensity is caused by the wavelength of the laser that is within the near infrared (NIR) for most scanners. The NIR is highly indicative for various vegetation characteristics. However, the intensity as recorded by most terrestrial scanners is distorted by both external and scanner specific factors. Since details about system internal alteration of the signal are often unknown to the user, model driven approaches are impractical. On the other hand, existing data driven calibration procedures require laborious acquisition of separate reference datasets or areas of homogenous reflection characteristics from the field data. In order to fill this gap, the present study introduces an approach to correct unwanted intensity variations directly from the point cloud of the field data. The focus is on the variation over range and sensor specific distortions. Instead of an absolute calibration of the values, a relative correction within the dataset is sufficient for most forestry applications. Finally, a method similar to time series detrending is presented with the only pre-condition of a relative equal distribution of forest objects and materials over range. Our test data covers 50 terrestrial scans captured with a FARO Focus 3D S120 scanner using a laser wavelength of 905 nm. Practical tests demonstrate that our correction method removes range and scanner based alterations of the intensity.
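The paper's exact detrending procedure is not reproduced in the abstract; the sketch below only illustrates the general idea of a relative, data-driven range correction: estimate the mean intensity-versus-range trend from the point cloud itself (assuming forest objects are roughly evenly distributed over range, as stated above) and divide it out. The function names, binning, and polynomial degree are assumptions.

```python
# Hedged sketch of a relative range correction for TLS intensity:
# estimate the intensity trend over range from the field data itself
# and remove it, similar to detrending a time series.
import numpy as np

def range_correct(rng_m, intensity, n_bins=50, deg=4):
    """Return intensity values normalised by the fitted range trend."""
    # Median intensity per range bin suppresses the influence of unusually
    # bright or dark targets within a bin.
    bins = np.linspace(rng_m.min(), rng_m.max(), n_bins + 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    med = np.array([np.median(intensity[(rng_m >= lo) & (rng_m < hi)])
                    for lo, hi in zip(bins[:-1], bins[1:])])
    ok = ~np.isnan(med)
    # Smooth trend of intensity vs. range (a polynomial here; a spline would also work).
    trend = np.poly1d(np.polyfit(centers[ok], med[ok], deg))
    # Relative correction: divide each return by the trend value at its range.
    return intensity / trend(rng_m)

# Hypothetical usage with a point cloud loaded elsewhere (range in column 3,
# raw intensity in column 4):
# corrected = range_correct(points[:, 3], points[:, 4])
```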
Genomics, "Discovery Science," Systems Biology, and Causal Explanation: What Really Works?
Davidson, Eric H
2015-01-01
Diverse and non-coherent sets of epistemological principles currently inform research in the general area of functional genomics. Here, from the personal point of view of a scientist with over half a century of immersion in hypothesis driven scientific discovery, I compare and deconstruct the ideological bases of prominent recent alternatives, such as "discovery science," some productions of the ENCODE project, and aspects of large data set systems biology. The outputs of these types of scientific enterprise qualitatively reflect their radical definitions of scientific knowledge, and of its logical requirements. Their properties emerge in high relief when contrasted (as an example) to a recent, system-wide, predictive analysis of a developmental regulatory apparatus that was instead based directly on hypothesis-driven experimental tests of mechanism.
A new costing model in hospital management: time-driven activity-based costing system.
Öker, Figen; Özyapıcı, Hasan
2013-01-01
Traditional cost systems cause cost distortions because they cannot meet the requirements of today's businesses. Therefore, a new and more effective cost system is needed. Consequently, time-driven activity-based costing system has emerged. The unit cost of supplying capacity and the time needed to perform an activity are the only 2 factors considered by the system. Furthermore, this system determines unused capacity by considering practical capacity. The purpose of this article is to emphasize the efficiency of the time-driven activity-based costing system and to display how it can be applied in a health care institution. A case study was conducted in a private hospital in Cyprus. Interviews and direct observations were used to collect the data. The case study revealed that the cost of unused capacity is allocated to both open and laparoscopic (closed) surgeries. Thus, by using the time-driven activity-based costing system, managers should eliminate the cost of unused capacity so as to obtain better results. Based on the results of the study, hospital management is better able to understand the costs of different surgeries. In addition, managers can easily notice the cost of unused capacity and decide how many employees to be dismissed or directed to other productive areas.
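Time-driven ABC uses only the two estimates named above, the cost per unit of time of supplying capacity and the time each activity consumes, so the calculation is easy to illustrate. All figures below are invented for illustration and are not taken from the Cyprus case study.

```python
# Hedged illustration of time-driven activity-based costing (invented numbers).
# Capacity cost rate = cost of supplying capacity / practical capacity (minutes).
monthly_operating_cost = 200_000.0          # staff, equipment, overhead
practical_capacity_min = 20_000.0           # usable minutes per month
cost_per_minute = monthly_operating_cost / practical_capacity_min

# Time consumed per case, by surgery type (assumed values).
minutes_per_case = {"open_surgery": 120, "laparoscopic_surgery": 90}
cases_per_month = {"open_surgery": 60, "laparoscopic_surgery": 80}

used_minutes = 0.0
for surgery, minutes in minutes_per_case.items():
    case_cost = minutes * cost_per_minute
    total = case_cost * cases_per_month[surgery]
    used_minutes += minutes * cases_per_month[surgery]
    print(f"{surgery}: {case_cost:.0f} per case, {total:.0f} per month")

# Unused capacity is made explicit instead of being spread over all cases.
unused_minutes = practical_capacity_min - used_minutes
print(f"cost of unused capacity: {unused_minutes * cost_per_minute:.0f}")
```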
Dynamic testing for shuttle design verification
NASA Technical Reports Server (NTRS)
Green, C. E.; Leadbetter, S. A.; Rheinfurth, M. H.
1972-01-01
Space shuttle design verification requires dynamic data from full scale structural component and assembly tests. Wind tunnel and other scaled model tests are also required early in the development program to support the analytical models used in design verification. Presented is a design philosophy based on mathematical modeling of the structural system strongly supported by a comprehensive test program; some of the types of required tests are outlined.
Rational design of capillary-driven flows for paper-based microfluidics.
Elizalde, Emanuel; Urteaga, Raúl; Berli, Claudio L A
2015-05-21
The design of paper-based assays that integrate passive pumping requires a precise programming of the fluid transport, which has to be encoded in the geometrical shape of the substrate. This requirement becomes critical in multiple-step processes, where fluid handling must be accurate and reproducible for each operation. The present work theoretically investigates the capillary imbibition in paper-like substrates to better understand fluid transport in terms of the macroscopic geometry of the flow domain. A fluid dynamic model was derived for homogeneous porous substrates with arbitrary cross-sectional shapes, which allows one to determine the cross-sectional profile required for a prescribed fluid velocity or mass transport rate. An extension of the model to slit microchannels is also demonstrated. Calculations were validated by experiments with prototypes fabricated in our lab. The proposed method constitutes a valuable tool for the rational design of paper-based assays.
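The abstract does not reproduce the model equations, but a minimal version of the underlying idea, quasi-static Darcy flow driven by a fixed capillary suction at the wetting front, with the hydraulic resistance accumulated over the already-wetted, shape-varying cross section, can be integrated numerically. The symbols, width profile, and parameter values below are assumptions, not the authors' model.

```python
# Hedged sketch: wetting-front progression in a porous strip whose width varies
# along the flow direction, using Darcy flow driven by a constant capillary
# suction at the front (quasi-static, fully saturated behind the front).
mu, kappa, phi = 1.0e-3, 1.0e-12, 0.7   # viscosity [Pa s], permeability [m^2], porosity
dP = 5.0e3                               # capillary suction at the wetting front [Pa] (assumed)
thickness = 180e-6                       # substrate thickness [m]

def width(x):
    """Cross-sectional width profile along the strip [m] (illustrative shape)."""
    return 2e-3 + 8e-3 * x

def area(x):
    return width(x) * thickness

# March the front position l(t): dl/dt = Q / (phi * A(l)),
# with Q = dP / R(l) and R(l) = (mu/kappa) * integral_0^l dx / A(x).
dt, t_end = 0.01, 60.0
l = 1e-3                                 # initial wetted length [m]
res_int = l / area(0.0)                  # accumulated integral of dx / A(x)
for _ in range(int(t_end / dt)):
    Q = dP / ((mu / kappa) * res_int)    # Darcy flow through the wetted section
    dl = Q / (phi * area(l)) * dt        # front advance from volume conservation
    res_int += dl / area(l)
    l += dl

print(f"wetting front after {t_end:.0f} s: {100 * l:.1f} cm")
```

Changing the width profile changes the front velocity history, which is the design handle the paper exploits: the cross-sectional shape is chosen so that a prescribed velocity or mass transport rate results.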
The Cascading Impacts of Technology Selection: Incorporating Ruby on Rails into ECHO
NASA Astrophysics Data System (ADS)
Pilone, D.; Cechini, M.
2010-12-01
NASA’s Earth Observing System (EOS) ClearingHOuse (ECHO) is a SOA based Earth Science Data search and order system implemented in Java with one significant exception: the web client used by 98% of our users is written in Perl. After several decades of maintenance the Perl based application had reached the end of its serviceable life and ECHO was tasked with implementing a replacement. Despite a broad investment in Java, the ECHO team conducted a survey of modern development technologies including Flex, Python/Django, JSF2/Spring and Ruby on Rails. The team ultimately chose Ruby on Rails (RoR) with Cucumber for testing due to its perceived applicability to web application development and corresponding development efficiency gains. Both positive and negative impacts on the entire ECHO team, including our stakeholders, were immediate and sometimes subtle. The technology selection caused shifts in our architecture and design, development and deployment procedures, requirement definition approach, testing approach, and, somewhat surprisingly, our project team structure and software process. This presentation discusses our experiences, including technical, process, and psychological, using RoR on a production system. During this session we will discuss: - Real impacts of introducing a dynamic language to a Java team - Real and perceived efficiency advantages - Impediments to adoption and effectiveness - Impacts of transition from Test Driven Development to Behavior Driven Development - Leveraging Cucumber to provide fully executable requirement documents - Impacts on team structure and roles
Phelps, G.A.; Halford, K.J.
2011-01-01
In Yucca Flat, on the Nevada National Security Site in southern Nevada, the migration of radionuclides from tests located in the alluvial deposits into the Paleozoic carbonate aquifer involves passage through a thick, heterogeneous section of late Tertiary and Quaternary alluvial sediments. An understanding of the lateral and vertical changes in the material properties of the alluvial sediments will aid in the further development of the hydrogeologic framework and the delineation of hydrostratigraphic units and hydraulic properties required for simulating groundwater flow in the Yucca Flat area. Previously published geologic models for the alluvial sediments within Yucca Flat are based on extensive examination and categorization of drill-hole data, combined with a simple, data-driven interpolation scheme. The U.S. Geological Survey, in collaboration with Stanford University, is researching improvements to the modeling of the alluvial section, incorporating prior knowledge of geologic structure into the interpolation method and estimating the uncertainty of the modeled hydrogeologic units.
Lin, Chih-Tin; Meyhofer, Edgar; Kurabayashi, Katsuo
2010-01-01
Directional control of microtubule shuttles via microfabricated tracks is key to the development of controlled nanoscale mass transport by kinesin motor molecules. Here we develop and test a model to quantitatively predict the stochastic behavior of microtubule guiding when they mechanically collide with the sidewalls of lithographically patterned tracks. By taking into account appropriate probability distributions of microscopic states of the microtubule system, the model allows us to theoretically analyze the roles of collision conditions and kinesin surface densities in determining how the motion of microtubule shuttles is controlled. In addition, we experimentally observe the statistics of microtubule collision events and compare our theoretical prediction with experimental data to validate our model. The model will direct the design of future hybrid nanotechnology devices that integrate nanoscale transport systems powered by kinesin-driven molecular shuttles.
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saha, Sankalita; Goebel, Kai
2011-01-01
Accelerated aging methodologies for electrolytic components have been designed and accelerated aging experiments have been carried out. The methodology is based on imposing electrical and/or thermal overstresses via electrical power cycling in order to mimic the real world operation behavior. Data are collected in-situ and offline in order to periodically characterize the devices' electrical performance as it ages. The data generated through these experiments are meant to provide capability for the validation of prognostic algorithms (both model-based and data-driven). Furthermore, the data allow validation of physics-based and empirical based degradation models for this type of capacitor. A first set of models and algorithms has been designed and tested on the data.
Energy modelling in sensor networks
NASA Astrophysics Data System (ADS)
Schmidt, D.; Krämer, M.; Kuhn, T.; Wehn, N.
2007-06-01
Wireless sensor networks are one of the key enabling technologies for the vision of ambient intelligence. Energy resources for sensor nodes are very scarce. A key challenge is the design of energy efficient communication protocols. Models of the energy consumption are needed to accurately simulate the efficiency of a protocol or application design, and can also be used for automatic energy optimizations in a model driven design process. We propose a novel methodology to create models for sensor nodes based on few simple measurements. In a case study the methodology was used to create models for MICAz nodes. The models were integrated in a simulation environment as well as in a SDL runtime framework of a model driven design process. Measurements on a test application that was created automatically from an SDL specification showed an 80% reduction in energy consumption compared to an implementation without power saving strategies.
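The abstract describes building node energy models from a few simple measurements; a common minimal form is a state-based model in which each radio/CPU state has a measured current draw and the energy of a protocol run is the time-weighted sum. All numbers below are assumed, not the MICAz values from the study.

```python
# Hedged sketch of a state-based energy model for a sensor node.
# Each operating state has a measured current draw; energy for a duty cycle
# is the supply voltage times the current-weighted time in each state.
VOLTAGE = 3.0  # supply voltage [V] (assumed)

# Assumed current draws [A] per state; a real model would use measured values.
current_a = {"sleep": 15e-6, "cpu_active": 8e-3, "rx": 19e-3, "tx": 17e-3}

def energy_joules(time_in_state_s):
    """Energy for one duty cycle given seconds spent in each state."""
    return sum(VOLTAGE * current_a[state] * t for state, t in time_in_state_s.items())

# Example: a 1-second duty cycle of a simple sense-and-send protocol.
duty_cycle = {"sleep": 0.95, "cpu_active": 0.03, "rx": 0.015, "tx": 0.005}
e = energy_joules(duty_cycle)
print(f"energy per cycle: {e*1e3:.3f} mJ  ->  average power: {e*1e3:.3f} mW")
```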
A Remote Sensing-Based Tool for Assessing Rainfall-Driven Hazards
Wright, Daniel B.; Mantilla, Ricardo; Peters-Lidard, Christa D.
2018-01-01
RainyDay is a Python-based platform that couples rainfall remote sensing data with Stochastic Storm Transposition (SST) for modeling rainfall-driven hazards such as floods and landslides. SST effectively lengthens the extreme rainfall record through temporal resampling and spatial transposition of observed storms from the surrounding region to create many extreme rainfall scenarios. Intensity-Duration-Frequency (IDF) curves are often used for hazard modeling but require long records to describe the distribution of rainfall depth and duration and do not provide information regarding rainfall space-time structure, limiting their usefulness to small scales. In contrast, RainyDay can be used for many hazard applications with 1-2 decades of data, and output rainfall scenarios incorporate detailed space-time structure from remote sensing. Thanks to global satellite coverage, RainyDay can be used in inaccessible areas and developing countries lacking ground measurements, though results are impacted by remote sensing errors. RainyDay can be useful for hazard modeling under nonstationary conditions. PMID:29657544
A Remote Sensing-Based Tool for Assessing Rainfall-Driven Hazards.
Wright, Daniel B; Mantilla, Ricardo; Peters-Lidard, Christa D
2017-04-01
RainyDay is a Python-based platform that couples rainfall remote sensing data with Stochastic Storm Transposition (SST) for modeling rainfall-driven hazards such as floods and landslides. SST effectively lengthens the extreme rainfall record through temporal resampling and spatial transposition of observed storms from the surrounding region to create many extreme rainfall scenarios. Intensity-Duration-Frequency (IDF) curves are often used for hazard modeling but require long records to describe the distribution of rainfall depth and duration and do not provide information regarding rainfall space-time structure, limiting their usefulness to small scales. In contrast, RainyDay can be used for many hazard applications with 1-2 decades of data, and output rainfall scenarios incorporate detailed space-time structure from remote sensing. Thanks to global satellite coverage, RainyDay can be used in inaccessible areas and developing countries lacking ground measurements, though results are impacted by remote sensing errors. RainyDay can be useful for hazard modeling under nonstationary conditions.
A Remote Sensing-Based Tool for Assessing Rainfall-Driven Hazards
NASA Technical Reports Server (NTRS)
Wright, Daniel B.; Mantilla, Ricardo; Peters-Lidard, Christa D.
2017-01-01
RainyDay is a Python-based platform that couples rainfall remote sensing data with Stochastic Storm Transposition (SST) for modeling rainfall-driven hazards such as floods and landslides. SST effectively lengthens the extreme rainfall record through temporal resampling and spatial transposition of observed storms from the surrounding region to create many extreme rainfall scenarios. Intensity-Duration-Frequency (IDF) curves are often used for hazard modeling but require long records to describe the distribution of rainfall depth and duration and do not provide information regarding rainfall space-time structure, limiting their usefulness to small scales. In contrast, RainyDay can be used for many hazard applications with 1-2 decades of data, and output rainfall scenarios incorporate detailed space-time structure from remote sensing. Thanks to global satellite coverage, RainyDay can be used in inaccessible areas and developing countries lacking ground measurements, though results are impacted by remote sensing errors. RainyDay can be useful for hazard modeling under nonstationary conditions.
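RainyDay's internal implementation is not reproduced here, but the core SST idea, resampling observed storms from a surrounding region, transposing them randomly in space over the watershed, and building a rainfall frequency curve from many synthetic "years", can be sketched as follows. The storm catalogue, storm counts, and domain geometry are hypothetical.

```python
# Hedged sketch of Stochastic Storm Transposition (not the RainyDay code):
# build synthetic "years" of extreme rainfall by resampling observed storms
# and shifting them randomly over the domain, then take annual maxima.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical storm catalogue from ~15 years of remote sensing data:
# each storm is a small gridded rainfall field (mm) over the transposition domain.
n_storms, ny, nx = 120, 60, 60
catalogue = rng.gamma(shape=2.0, scale=15.0, size=(n_storms, ny, nx))

storms_per_year = 8                 # mean number of qualifying storms per year (assumed)
watershed = np.s_[25:35, 25:35]     # watershed footprint inside the domain (assumed)

def synthetic_year():
    """Largest watershed-average rainfall produced by one resampled year."""
    n = rng.poisson(storms_per_year)
    best = 0.0
    for _ in range(max(n, 1)):
        storm = catalogue[rng.integers(n_storms)]
        # Transpose the storm by a random shift within the domain.
        shifted = np.roll(storm, shift=rng.integers(-20, 21, size=2), axis=(0, 1))
        best = max(best, shifted[watershed].mean())
    return best

annual_maxima = np.sort([synthetic_year() for _ in range(1000)])
# Empirical rainfall frequency curve, e.g. the ~100-year watershed-average depth.
print("~100-year depth [mm]:", np.quantile(annual_maxima, 1 - 1 / 100))
```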
Mignan, A; Broccardo, M; Wiemer, S; Giardini, D
2017-10-19
The rise in the frequency of anthropogenic earthquakes due to deep fluid injections is posing serious economic, societal, and legal challenges to many geo-energy and waste-disposal projects. Existing tools to assess such problems are still inherently heuristic and mostly based on expert elicitation (so-called clinical judgment). We propose, as a complementary approach, an adaptive traffic light system (ATLS) that is function of a statistical model of induced seismicity. It offers an actuarial judgement of the risk, which is based on a mapping between earthquake magnitude and risk. Using data from six underground reservoir stimulation experiments, mostly from Enhanced Geothermal Systems, we illustrate how such a data-driven adaptive forecasting system could guarantee a risk-based safety target. The proposed model, which includes a linear relationship between seismicity rate and flow rate, as well as a normal diffusion process for post-injection, is first confirmed to be representative of the data. Being integrable, the model yields a closed-form ATLS solution that is both transparent and robust. Although simulations verify that the safety target is consistently ensured when the ATLS is applied, the model from which simulations are generated is validated on a limited dataset, hence still requiring further tests in additional fluid injection environments.
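The paper's closed-form solution is not given in the abstract; the sketch below only illustrates the two ingredients it names, an event rate proportional to the injection flow rate during stimulation and a decaying rate after shut-in, combined with Gutenberg-Richter magnitudes to check whether an assumed magnitude threshold would be exceeded. All parameter values are invented for illustration.

```python
# Hedged sketch of a traffic-light style check for injection-induced seismicity.
# Model ingredients follow the abstract qualitatively: event rate proportional to
# flow rate during injection, decaying rate after shut-in, Gutenberg-Richter
# magnitudes. Parameter values are invented.
import numpy as np

rng = np.random.default_rng(1)

dt_h = 1.0                                            # time step [h]
flow_rate = np.r_[np.full(72, 30.0), np.zeros(48)]    # injection then shut-in [l/s]
activation = 0.15                                     # events per hour per (l/s), assumed
tau_decay_h = 12.0                                    # post-injection decay time scale, assumed
b_value, m_min = 1.0, 0.0                             # Gutenberg-Richter parameters, assumed
m_threshold = 2.0                                     # magnitude the traffic light must protect against

rate = activation * flow_rate
shut_in = int(np.argmax(flow_rate == 0.0))
# Simple exponential tail after shut-in (a stand-in for the diffusion-type decay).
rate[shut_in:] = rate[shut_in - 1] * np.exp(-np.arange(len(rate) - shut_in) * dt_h / tau_decay_h)

exceeded = False
for r in rate:
    n_events = rng.poisson(r * dt_h)
    # Gutenberg-Richter magnitudes above m_min: exponential with rate b*ln(10).
    mags = m_min + rng.exponential(scale=1.0 / (b_value * np.log(10.0)), size=n_events)
    if mags.size and mags.max() >= m_threshold:
        exceeded = True
        break

print("red light (threshold exceeded):", exceeded)
```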
Optical components damage parameters database system
NASA Astrophysics Data System (ADS)
Tao, Yizheng; Li, Xinglan; Jin, Yuquan; Xie, Dongmei; Tang, Dingyong
2012-10-01
Optical components are key elements of large-scale laser devices: their load (damage) capacity is directly related to the device's output capability, and that capacity depends on many factors. An optical component damage parameters database digitizes the various factors affecting load capacity and provides a scientific, data-supported basis for assessing the load capacity of optical components. Using business-process analysis and a model-driven approach, a component damage parameter information model and database system were established. Application of the system shows that it meets the business-process and data-management requirements of optical component damage testing, that component parameters are flexible and configurable, and that the system is simple and easy to use, improving the efficiency of optical component damage tests.
Austin, Åsa N.; Hansen, Joakim P.; Donadi, Serena; Eklöf, Johan S.
2017-01-01
Field surveys often show that high water turbidity limits cover of aquatic vegetation, while many small-scale experiments show that vegetation can reduce turbidity by decreasing water flow, stabilizing sediments, and competing with phytoplankton for nutrients. Here we bridged these two views by exploring the direction and strength of causal relationships between aquatic vegetation and turbidity across seasons (spring and late summer) and spatial scales (local and regional), using causal modeling based on data from a field survey along the central Swedish Baltic Sea coast. The two best-fitting regional-scale models both suggested that in spring, high cover of vegetation reduces water turbidity. In summer, the relationships differed between the two models; in the first model high vegetation cover reduced turbidity; while in the second model reduction of summer turbidity by high vegetation cover in spring had a positive effect on summer vegetation which suggests a positive feedback of vegetation on itself. Nitrogen load had a positive effect on turbidity in both seasons, which was comparable in strength to the effect of vegetation on turbidity. To assess whether the effect of vegetation was primarily caused by sediment stabilization or a reduction of phytoplankton, we also tested models where turbidity was replaced by phytoplankton fluorescence or sediment-driven turbidity. The best-fitting regional-scale models suggested that high sediment-driven turbidity in spring reduces vegetation cover in summer, which in turn has a negative effect on sediment-driven turbidity in summer, indicating a potential positive feedback of sediment-driven turbidity on itself. Using data at the local scale, few relationships were significant, likely due to the influence of unmeasured variables and/or spatial heterogeneity. In summary, causal modeling based on data from a large-scale field survey suggested that aquatic vegetation can reduce turbidity at regional scales, and that high vegetation cover vs. high sediment-driven turbidity may represent two self-enhancing, alternative states of shallow bay ecosystems. PMID:28854185
Austin, Åsa N; Hansen, Joakim P; Donadi, Serena; Eklöf, Johan S
2017-01-01
Field surveys often show that high water turbidity limits cover of aquatic vegetation, while many small-scale experiments show that vegetation can reduce turbidity by decreasing water flow, stabilizing sediments, and competing with phytoplankton for nutrients. Here we bridged these two views by exploring the direction and strength of causal relationships between aquatic vegetation and turbidity across seasons (spring and late summer) and spatial scales (local and regional), using causal modeling based on data from a field survey along the central Swedish Baltic Sea coast. The two best-fitting regional-scale models both suggested that in spring, high cover of vegetation reduces water turbidity. In summer, the relationships differed between the two models; in the first model high vegetation cover reduced turbidity; while in the second model reduction of summer turbidity by high vegetation cover in spring had a positive effect on summer vegetation which suggests a positive feedback of vegetation on itself. Nitrogen load had a positive effect on turbidity in both seasons, which was comparable in strength to the effect of vegetation on turbidity. To assess whether the effect of vegetation was primarily caused by sediment stabilization or a reduction of phytoplankton, we also tested models where turbidity was replaced by phytoplankton fluorescence or sediment-driven turbidity. The best-fitting regional-scale models suggested that high sediment-driven turbidity in spring reduces vegetation cover in summer, which in turn has a negative effect on sediment-driven turbidity in summer, indicating a potential positive feedback of sediment-driven turbidity on itself. Using data at the local scale, few relationships were significant, likely due to the influence of unmeasured variables and/or spatial heterogeneity. In summary, causal modeling based on data from a large-scale field survey suggested that aquatic vegetation can reduce turbidity at regional scales, and that high vegetation cover vs. high sediment-driven turbidity may represent two self-enhancing, alternative states of shallow bay ecosystems.
Electrically Driven Liquid Film Boiling Experiment
NASA Technical Reports Server (NTRS)
Didion, Jeffrey R.
2016-01-01
This presentation presents the science background and ground based results that form the basis of the Electrically Driven Liquid Film Boiling Experiment. This is an ISS experiment that is manifested for 2021. Objective: Characterize the effects of gravity on the interaction of electric and flow fields in the presence of phase change specifically pertaining to: a) The effects of microgravity on the electrically generated two-phase flow. b) The effects of microgravity on electrically driven liquid film boiling (includes extreme heat fluxes). Electro-wetting of the boiling section will repel the bubbles away from the heated surface in microgravity environment. Relevance/Impact: Provides phenomenological foundation for the development of electric field based two-phase thermal management systems leveraging EHD, permitting optimization of heat transfer surface area to volume ratios as well as achievement of high heat transfer coefficients thus resulting in system mass and volume savings. EHD replaces buoyancy or flow driven bubble removal from heated surface. Development Approach: Conduct preliminary experiments in low gravity and ground-based facilities to refine technique and obtain preliminary data for model development. ISS environment required to characterize electro-wetting effect on nucleate boiling and CHF in the absence of gravity. Will operate in the FIR - designed for autonomous operation.
NASA Technical Reports Server (NTRS)
Glotter, Michael J.; Ruane, Alex C.; Moyer, Elisabeth J.; Elliott, Joshua W.
2015-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled and observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources (reanalysis, reanalysis that is bias corrected with observed climate, and a control dataset) and compared with observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by non-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. Some issues persist for all choices of climate inputs: crop yields appear to be oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves.
Evaluating the sensitivity of agricultural model performance to different climate inputs
Glotter, Michael J.; Moyer, Elisabeth J.; Ruane, Alex C.; Elliott, Joshua W.
2017-01-01
Projections of future food production necessarily rely on models, which must themselves be validated through historical assessments comparing modeled to observed yields. Reliable historical validation requires both accurate agricultural models and accurate climate inputs. Problems with either may compromise the validation exercise. Previous studies have compared the effects of different climate inputs on agricultural projections, but either incompletely or without a ground truth of observed yields that would allow distinguishing errors due to climate inputs from those intrinsic to the crop model. This study is a systematic evaluation of the reliability of a widely-used crop model for simulating U.S. maize yields when driven by multiple observational data products. The parallelized Decision Support System for Agrotechnology Transfer (pDSSAT) is driven with climate inputs from multiple sources – reanalysis, reanalysis bias-corrected with observed climate, and a control dataset – and compared to observed historical yields. The simulations show that model output is more accurate when driven by any observation-based precipitation product than when driven by un-bias-corrected reanalysis. The simulations also suggest, in contrast to previous studies, that biased precipitation distribution is significant for yields only in arid regions. However, some issues persist for all choices of climate inputs: crop yields appear oversensitive to precipitation fluctuations but undersensitive to floods and heat waves. These results suggest that the most important issue for agricultural projections may be not climate inputs but structural limitations in the crop models themselves. PMID:29097985
An Integrated Analysis-Test Approach
NASA Technical Reports Server (NTRS)
Kaufman, Daniel
2003-01-01
This viewgraph presentation provides an overview of a project to develop a computer program which integrates data analysis and test procedures. The software application aims to propose a new perspective to traditional mechanical analysis and test procedures and to integrate pre-test and test analysis calculation methods. The program should also be usable on portable devices and allow 'quasi-real time' analysis of data sent by electronic means. Test methods reviewed during this presentation include: shaker swept sine and random tests, shaker shock mode tests, shaker base driven model survey tests and acoustic tests.
Flight-Test Evaluation of Flutter-Prediction Methods
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
2003-01-01
The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.
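Of the data-based methods listed, the damping-trend extrapolation is the simplest to illustrate: estimated modal damping at several subcritical test points is fit against dynamic pressure (or airspeed) and extrapolated to the zero-damping crossing. The values below are invented, and real practice adds flutter margins and uncertainty bounds.

```python
# Hedged sketch of the damping-extrapolation flutter prediction idea.
# Fit modal damping ratio vs. dynamic pressure from subcritical test points and
# extrapolate to the zero-damping crossing; all data here are invented.
import numpy as np

q = np.array([2.0, 3.0, 4.0, 5.0, 6.0])                 # dynamic pressure [kPa] at test points
zeta = np.array([0.055, 0.048, 0.038, 0.026, 0.012])    # estimated damping ratios

# A quadratic trend is a common compromise between bias and noise sensitivity.
coeffs = np.polyfit(q, zeta, deg=2)
roots = np.roots(coeffs)
flutter_q = min(r.real for r in roots if r.real > q.max() and abs(r.imag) < 1e-9)

print(f"extrapolated zero-damping (flutter onset) near q = {flutter_q:.1f} kPa")
```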
Increase of stagnation pressure and enthalpy in shock tunnels
NASA Technical Reports Server (NTRS)
Bogdanoff, David W.; Cambier, Jean-Luc
1992-01-01
High stagnation pressures and enthalpies are required for the testing of aerospace vehicles such as aerospace planes, aeroassist vehicles, and reentry vehicles. Among the most useful ground test facilities for performing such tests are shock tunnels. With a given driver gas condition, the enthalpy and pressure in the driven tube nozzle reservoir condition can be varied by changing the driven tube geometry and initial gas fill pressure. Reducing the driven tube diameter yields only very modest increases in reservoir pressure and enthalpy. Reducing the driven tube initial gas fill pressure can increase the reservoir enthalpy significantly, but at the cost of reduced reservoir pressure and useful test time. A new technique, the insertion of a converging section in the driven tube is found to produce substantial increases in both reservoir pressure and enthalpy. Using a one-dimensional inviscid full kinetics code, a number of different locations and shapes for the converging driven tube section were studied and the best cases found. For these best cases, for driven tube diameter reductions of factors of 2 and 3, the reservoir pressure can be increased by factors of 2.1 and 3.2, respectively and the enthalpy can be increased by factors of 1.5 and 2.1, respectively.
Data-driven outbreak forecasting with a simple nonlinear growth model.
Lega, Joceline; Brown, Heidi E
2016-12-01
Recent events have thrown the spotlight on infectious disease outbreak response. We developed a data-driven method, EpiGro, which can be applied to cumulative case reports to estimate the order of magnitude of the duration, peak and ultimate size of an ongoing outbreak. It is based on a surprisingly simple mathematical property of many epidemiological data sets, does not require knowledge or estimation of disease transmission parameters, is robust to noise and to small data sets, and runs quickly due to its mathematical simplicity. Using data from historic and ongoing epidemics, we present the model. We also provide modeling considerations that justify this approach and discuss its limitations. In the absence of other information or in conjunction with other models, EpiGro may be useful to public health responders. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
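EpiGro's exact formulation is not given in the abstract, so the sketch below uses the simplest related idea: fit a logistic growth curve to cumulative case reports and read off order-of-magnitude estimates of final size, peak timing, and duration. The case counts and initial guesses are invented.

```python
# Hedged sketch (not the EpiGro code): fit logistic growth to cumulative case
# counts and report order-of-magnitude estimates of final size and peak timing.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative cases: final size K, growth rate r, inflection (peak incidence) at t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Invented cumulative case reports for the first 30 days of an outbreak.
t = np.arange(30.0)
cases = np.array([  3,   4,   6,   9,  13,  18,  26,  36,  50,  68,
                   92, 121, 157, 199, 246, 296, 347, 396, 441, 480,
                  512, 537, 556, 570, 580, 587, 592, 595, 597, 599], dtype=float)

# Rough initial guess: final size twice the current count, inflection near the
# day of the largest daily increase.
p0 = [cases[-1] * 2, 0.3, float(np.argmax(np.diff(cases)))]
(K, r, t0), _ = curve_fit(logistic, t, cases, p0=p0, maxfev=10000)

print(f"projected final size ~ {K:.0f} cases")
print(f"peak incidence around day {t0:.0f}; duration of order {4.0 / r:.0f}-{8.0 / r:.0f} days")
```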
Agile IT: Thinking in User-Centric Models
NASA Astrophysics Data System (ADS)
Margaria, Tiziana; Steffen, Bernhard
We advocate a new teaching direction for modern CS curricula: extreme model-driven development (XMDD), a new development paradigm designed to continuously involve the customer/application expert throughout the whole systems' life cycle. Based on the `One-Thing Approach', which works by successively enriching and refining one single artifact, system development becomes in essence a user-centric orchestration of intuitive service functionality. XMDD differs radically from classical software development, which, in our opinion is no longer adequate for the bulk of application programming - in particular when it comes to heterogeneous, cross organizational systems which must adapt to rapidly changing market requirements. Thus there is a need for new curricula addressing this model-driven, lightweight, and cooperative development paradigm that puts the user process in the center of the development and the application expert in control of the process evolution.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.; Som, Sukhamony
1990-01-01
The performance modeling and enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures is examined. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called ATAMM (Algorithm To Architecture Mapping Model). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Strategies for concurrent processing of complex algorithms in data driven architectures
NASA Technical Reports Server (NTRS)
Som, Sukhamoy; Stoughton, John W.; Mielke, Roland R.
1990-01-01
Performance modeling and performance enhancement for periodic execution of large-grain, decision-free algorithms in data flow architectures are discussed. Applications include real-time implementation of control and signal processing algorithms where performance is required to be highly predictable. The mapping of algorithms onto the specified class of data flow architectures is realized by a marked graph model called algorithm to architecture mapping model (ATAMM). Performance measures and bounds are established. Algorithm transformation techniques are identified for performance enhancement and reduction of resource (computing element) requirements. A systematic design procedure is described for generating operating conditions for predictable performance both with and without resource constraints. An ATAMM simulator is used to test and validate the performance prediction by the design procedure. Experiments on a three resource testbed provide verification of the ATAMM model and the design procedure.
Data-driven Modelling for decision making under uncertainty
NASA Astrophysics Data System (ADS)
Angria S, Layla; Dwi Sari, Yunita; Zarlis, Muhammad; Tulus
2018-01-01
Decision making under uncertainty has become a widely discussed issue in operations research. Many models have been proposed, one of which is data-driven modelling (DDM). The purpose of this paper is to extract and recognize patterns in data and to find the best model for decision-making problems under uncertainty using a data-driven modelling approach with linear programming, linear and nonlinear differential equations, and a Bayesian approach. Candidate models are tested against error criteria, and the model with the smallest error is selected as the best model to use.
Nazem-Zadeh, Mohammad-Reza; Elisevich, Kost V; Schwalb, Jason M; Bagher-Ebadian, Hassan; Mahmoudi, Fariborz; Soltanian-Zadeh, Hamid
2014-12-15
Multiple modalities are used in determining laterality in mesial temporal lobe epilepsy (mTLE). It is unclear how much different imaging modalities should be weighted in decision-making. The purpose of this study is to develop response-driven multimodal multinomial models for lateralization of epileptogenicity in mTLE patients based upon imaging features in order to maximize the accuracy of noninvasive studies. The volumes, means and standard deviations of FLAIR intensity and means of normalized ictal-interictal SPECT intensity of the left and right hippocampi were extracted from preoperative images of a retrospective cohort of 45 mTLE patients with Engel class I surgical outcomes, as well as images of a cohort of 20 control, nonepileptic subjects. Using multinomial logistic function regression, the parameters of various univariate and multivariate models were estimated. Based on the Bayesian model averaging (BMA) theorem, response models were developed as compositions of independent univariate models. A BMA model composed of posterior probabilities of univariate response models of hippocampal volumes, means and standard deviations of FLAIR intensity, and means of SPECT intensity with the estimated weighting coefficients of 0.28, 0.32, 0.09, and 0.31, respectively, as well as a multivariate response model incorporating all mentioned attributes, demonstrated complete reliability by achieving a probability of detection of one with no false alarms to establish proper laterality in all mTLE patients. The proposed multinomial multivariate response-driven model provides a reliable lateralization of mesial temporal epileptogenicity including those patients who require phase II assessment. Copyright © 2014 Elsevier B.V. All rights reserved.
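The abstract gives the structure of the Bayesian model averaging composition, a weighted combination of posterior probabilities from independent univariate response models, which is easy to illustrate. Only the weighting coefficients (0.28, 0.32, 0.09, 0.31) come from the abstract; the per-model probabilities below are invented for a hypothetical patient.

```python
# Hedged illustration of the Bayesian-model-averaging composition described above:
# the lateralization posterior is a weighted sum of univariate model posteriors.
# Weights come from the abstract; the per-model probabilities are invented.
weights = {
    "hippocampal_volume": 0.28,
    "flair_mean": 0.32,
    "flair_std": 0.09,
    "spect_mean": 0.31,
}

# Hypothetical univariate posteriors P(left-lateralized | feature) for one patient.
p_left = {
    "hippocampal_volume": 0.82,
    "flair_mean": 0.74,
    "flair_std": 0.55,
    "spect_mean": 0.90,
}

bma_left = sum(weights[m] * p_left[m] for m in weights)
print(f"BMA posterior for left mTLE: {bma_left:.2f}")
print("lateralization:", "left" if bma_left > 0.5 else "right")
```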
Estimating daily forest carbon fluxes using a combination of ground and remotely sensed data
NASA Astrophysics Data System (ADS)
Chirici, Gherardo; Chiesi, Marta; Corona, Piermaria; Salvati, Riccardo; Papale, Dario; Fibbi, Luca; Sirca, Costantino; Spano, Donatella; Duce, Pierpaolo; Marras, Serena; Matteucci, Giorgio; Cescatti, Alessandro; Maselli, Fabio
2016-02-01
Several studies have demonstrated that Monteith's approach can efficiently predict forest gross primary production (GPP), while the modeling of net ecosystem production (NEP) is more critical, requiring the additional simulation of forest respiration. The NEP of different forest ecosystems in Italy is currently simulated using a remote sensing driven parametric model (modified C-Fix) and a biogeochemical model (BIOME-BGC). The outputs of the two models, which simulate forests in quasi-equilibrium conditions, are combined to estimate the carbon fluxes of actual conditions using information regarding the existing woody biomass. The estimates derived from the methodology have been tested against daily reference GPP and NEP data collected through the eddy correlation technique at five study sites in Italy. The first test concerned the theoretical validity of the simulation approach at both annual and daily time scales and was performed using optimal model drivers (i.e., collected or calibrated over the site measurements). Next, the test was repeated to assess the operational applicability of the methodology, which was driven by spatially extended data sets (i.e., data derived from existing wall-to-wall digital maps). A good estimation accuracy was generally obtained for GPP and NEP when using optimal model drivers. The use of spatially extended data sets worsens the accuracy to a varying degree, which is properly characterized. The model drivers with the most influence on the flux modeling strategy are, in increasing order of importance, forest type, soil features, meteorology, and forest woody biomass (growing stock volume).
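Monteith's light-use-efficiency approach is referenced above without being written out; in its common form daily GPP is the product of incident PAR, the fraction absorbed by the canopy, and a maximum light-use efficiency down-regulated by environmental scalars. The scalar forms and constants below are assumptions, not the modified C-Fix parameterization.

```python
# Hedged sketch of a Monteith-type light-use-efficiency GPP estimate
# (illustrative scalars; not the modified C-Fix parameterization).
def gpp_daily(par_mj_m2, fapar, t_air_c, vpd_hpa,
              eps_max=1.2,          # max light-use efficiency [gC / MJ APAR], assumed
              t_opt=20.0, t_width=12.0, vpd_crit=25.0):
    # Temperature scalar: bell-shaped response around an optimum.
    f_temp = max(0.0, 1.0 - ((t_air_c - t_opt) / t_width) ** 2)
    # Water-stress scalar: linear ramp down with vapour pressure deficit.
    f_vpd = max(0.0, 1.0 - vpd_hpa / vpd_crit)
    apar = par_mj_m2 * fapar                      # absorbed PAR [MJ m-2 day-1]
    return eps_max * f_temp * f_vpd * apar        # [gC m-2 day-1]

# Example day: 8 MJ m-2 of PAR, 70% absorbed, mild temperature, moderate VPD.
print(f"GPP = {gpp_daily(8.0, 0.7, 18.0, 10.0):.1f} gC m-2 day-1")
```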
Model-Driven Design: Systematically Building Integrated Blended Learning Experiences
ERIC Educational Resources Information Center
Laster, Stephen
2010-01-01
Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…
NASA Astrophysics Data System (ADS)
Wang, Xiaojing; Yu, Qingquan; Zhang, Xiaodong; Zhang, Yang; Zhu, Sizheng; Wang, Xiaoguang; Wu, Bin
2018-04-01
Numerical studies on the stabilization of neoclassical tearing modes (NTMs) by electron cyclotron current drive (ECCD) have been carried out based on reduced MHD equations, focusing on the amount of the required driven current for mode stabilization and the comparison with analytical results. The dependence of the minimum driven current required for NTM stabilization on some parameters, including the bootstrap current density, radial width of the driven current, radial deviation of the driven current from the resonant surface, and the island width when applying ECCD, are studied. By fitting the numerical results, simple expressions for these dependences are obtained. Analysis based on the modified Rutherford equation (MRE) has also been carried out, and the corresponding results have the same trend as numerical ones, while a quantitative difference between them exists. This difference becomes smaller when the applied radio frequency (rf) current is smaller.
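The modified Rutherford equation is referenced above without being written out; one schematic form often quoted in ECCD stabilization studies (coefficients and small-island corrections vary between papers, so this is an assumed, illustrative version rather than the form used in the study) is:

```latex
\frac{\tau_R}{r_s}\,\frac{\mathrm{d}w}{\mathrm{d}t}
  \;=\; r_s\,\Delta'(w)
  \;+\; a_{\mathrm{bs}}\,\frac{j_{\mathrm{bs}}}{j_{\parallel}}\,\frac{r_s\,w}{w^{2}+w_{d}^{2}}
  \;-\; a_{\mathrm{ec}}\,\eta_{\mathrm{ec}}\,\frac{j_{\mathrm{ec}}}{j_{\parallel}}\,\frac{r_s\,w}{w^{2}+w_{d}^{2}}
```

Here tau_R is the resistive time, w the island width, Delta' the classical tearing stability index, j_bs the bootstrap current density driving the mode, j_ec the ECCD current density replacing it, w_d a small-island cutoff, and eta_ec an efficiency factor that degrades with the radial misalignment of the driven current and with the ratio of the deposition width to the island width, the same dependencies the numerical scans above quantify.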
NASA Astrophysics Data System (ADS)
Bilke, Lars; Watanabe, Norihiro; Naumov, Dmitri; Kolditz, Olaf
2016-04-01
In general, a complex software project with high standards regarding code quality requires automated tools to help developers with repetitive and tedious tasks such as compiling on different platforms and configurations, running unit tests as well as end-to-end tests, and generating distributable binaries and documentation. This is known as continuous integration (CI). A community-driven FOSS project within the Earth Sciences benefits even more from CI, as time and resources for software development are often limited. Testing developed code on more than the developer's PC is therefore a task which is often neglected and where CI can be the solution. We developed an integrated workflow based on GitHub, Travis and Jenkins for the community project OpenGeoSys - a coupled multiphysics modeling and simulation package - allowing developers to concentrate on implementing new features in a tight feedback loop. Every interested developer/user can create a pull request containing source code modifications on the online collaboration platform GitHub. The modifications are checked (compilation, compiler warnings, memory leaks, undefined behaviors, unit tests, end-to-end tests, analyzing differences in simulation run results between changes, etc.) by the CI system, which automatically responds to the pull request or by email, reporting success or failure in detail and, if necessary, requesting improvements to the modifications. Core team developers review the modifications and merge them into the main development line once they satisfy agreed standards. We aim for efficient data structures and algorithms, self-explaining code, comprehensive documentation and high test code coverage. This workflow keeps the entry barriers for getting involved in the project low and permits an agile development process concentrating on feature additions rather than software maintenance procedures.
Aerodynamically and acoustically driven modes of vibration in a physical model of the vocal folds.
Zhang, Zhaoyan; Neubauer, Juergen; Berry, David A
2006-11-01
In a single-layered, isotropic, physical model of the vocal folds, distinct phonation types were identified based on the medial surface dynamics of the vocal fold. For acoustically driven phonation, a single, in-phase, x-10 like eigenmode captured the essential dynamics, and coupled with one of the acoustic resonances of the subglottal tract. Thus, the fundamental frequency appeared to be determined primarily by a subglottal acoustic resonance. In contrast, aerodynamically driven phonation did not naturally appear in the single-layered model, but was facilitated by the introduction of a vertical constraint. For this phonation type, fundamental frequency was relatively independent of the acoustic resonances, and two eigenmodes were required to capture the essential dynamics of the vocal fold, including an out-of-phase x-11 like eigenmode and an in-phase x-10 like eigenmode, as described in earlier theoretical work. The two eigenmodes entrained to the same frequency, and were decoupled from subglottal acoustic resonances. With this independence from the acoustic resonances, vocal fold dynamics appeared to be determined primarily by near-field, fluid-structure interactions.
Greenberg, L; Cultice, J M
1997-01-01
OBJECTIVE: The Health Resources and Services Administration's Bureau of Health Professions developed a demographic utilization-based model of physician specialty requirements to explore the consequences of a broad range of scenarios pertaining to the nation's health care delivery system on need for physicians. DATA SOURCE/STUDY SETTING: The model uses selected data primarily from the National Center for Health Statistics, the American Medical Association, and the U.S. Bureau of Census. Forecasts are national estimates. STUDY DESIGN: Current (1989) utilization rates for ambulatory and inpatient medical specialty services were obtained for the population according to age, gender, race/ethnicity, and insurance status. These rates are used to estimate specialty-specific total service utilization expressed in patient care minutes for future populations and converted to physician requirements by applying per-physician productivity estimates. DATA COLLECTION/EXTRACTION METHODS: Secondary data were analyzed and put into matrixes for use in the mainframe computer-based model. Several missing data points, e.g., for HMO-enrolled populations, were extrapolated from available data by the project's contractor. PRINCIPAL FINDINGS: The authors contend that the Bureau's demographic utilization model represents improvements over other data-driven methodologies that rely on staffing ratios and similar supply-determined bases for estimating requirements. The model's distinct utility rests in offering national-level physician specialty requirements forecasts. PMID:9018213
Dynamic Emulation Modelling (DEMo) of large physically-based environmental models
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.
2012-12-01
In environmental modelling large, spatially-distributed, physically-based models are widely adopted to describe the dynamics of physical, social and economic processes. Such an accurate process characterization comes, however, to a price: the computational requirements of these models are considerably high and prevent their use in any problem requiring hundreds or thousands of model runs to be satisfactory solved. Typical examples include optimal planning and management, data assimilation, inverse modelling and sensitivity analysis. An effective approach to overcome this limitation is to perform a top-down reduction of the physically-based model by identifying a simplified, computationally efficient emulator, constructed from and then used in place of the original model in highly resource-demanding tasks. The underlying idea is that not all the process details in the original model are equally important and relevant to the dynamics of the outputs of interest for the type of problem considered. Emulation modelling has been successfully applied in many environmental applications, however most of the literature considers non-dynamic emulators (e.g. metamodels, response surfaces and surrogate models), where the original dynamical model is reduced to a static map between input and the output of interest. In this study we focus on Dynamic Emulation Modelling (DEMo), a methodological approach that preserves the dynamic nature of the original physically-based model, with consequent advantages in a wide variety of problem areas. In particular, we propose a new data-driven DEMo approach that combines the many advantages of data-driven modelling in representing complex, non-linear relationships, but preserves the state-space representation typical of process-based models, which is both particularly effective in some applications (e.g. optimal management and data assimilation) and facilitates the ex-post physical interpretation of the emulator structure, thus enhancing the credibility of the model to stakeholders and decision-makers. Numerical results from the application of the approach to the reduction of 3D coupled hydrodynamic-ecological models in several real world case studies, including Marina Reservoir (Singapore) and Googong Reservoir (Australia), are illustrated.
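The abstract describes a dynamic emulator that keeps a state-space form rather than a static input-output map. A minimal data-driven version of that idea, identifying a low-order linear state-space update from snapshots of the physically based model, can be sketched as follows; the reduction step, dimensions, and stand-in data are assumptions, not the emulation procedure used in the case studies.

```python
# Hedged sketch of a dynamic (state-space) emulator identified from snapshots of
# a physically based model: reduce the state with PCA/POD, then fit a linear
# update z[t+1] = A z[t] + B u[t] by least squares. Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical snapshots from the original model: full state X (n_t x n_state)
# and forcing inputs U (n_t x n_inputs), e.g. meteorology for a reservoir model.
n_t, n_state, n_inputs, n_modes = 500, 2000, 3, 5
X = rng.normal(size=(n_t, n_state)).cumsum(axis=0)   # stand-in trajectory
U = rng.normal(size=(n_t, n_inputs))

# Reduce the state onto its leading principal components (POD modes).
X_mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - X_mean, full_matrices=False)
modes = Vt[:n_modes].T                      # (n_state x n_modes)
Z = (X - X_mean) @ modes                    # reduced state trajectory

# Fit Z[t+1] ~ A Z[t] + B U[t] in one least-squares problem.
regressors = np.hstack([Z[:-1], U[:-1]])
AB, *_ = np.linalg.lstsq(regressors, Z[1:], rcond=None)
A, B = AB[:n_modes].T, AB[n_modes:].T

def emulator_step(z, u):
    """One cheap emulator step in the reduced space."""
    return A @ z + B @ u

# Reconstruct an approximate full state whenever it is needed:
z_next = emulator_step(Z[0], U[0])
x_approx = X_mean + modes @ z_next
```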
ERIC Educational Resources Information Center
Poole, Dennis L.; Nelson, Joan; Carnahan, Sharon; Chepenik, Nancy G.; Tubiak, Christine
2000-01-01
Developed and field tested the Performance Accountability Quality Scale (PAQS) on 191 program performance measurement systems developed by nonprofit agencies in central Florida. Preliminary findings indicate that the PAQS provides a structure for obtaining expert opinions based on a theory-driven model about the quality of proposed measurement…
Algorithms and architecture for multiprocessor based circuit simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deutsch, J.T.
Accurate electrical simulation is critical to the design of high performance integrated circuits. Logic simulators can verify function and give first-order timing information. Switch level simulators are more effective at dealing with charge sharing than standard logic simulators, but cannot provide accurate timing information or discover DC problems. Delay estimation techniques and cell level simulation can be used in constrained design methods, but must be tuned for each application, and circuit simulation must still be used to generate the cell models. None of these methods has the guaranteed accuracy that many circuit designers desire, and none can provide detailed waveform information. Detailed electrical-level simulation can predict circuit performance if devices and parasitics are modeled accurately. However, the computational requirements of conventional circuit simulators make it impractical to simulate current large circuits. In this dissertation, the implementation of Iterated Timing Analysis (ITA), a relaxation-based technique for accurate circuit simulation, on a special-purpose multiprocessor is presented. The ITA method is an SOR-Newton, relaxation-based method which uses event-driven analysis and selective trace to exploit the temporal sparsity of the electrical network. Because event-driven selective trace techniques are employed, this algorithm lends itself to implementation on a data-driven computer.
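The event-driven, selective-trace scheduling that makes ITA suit a data-driven machine can be illustrated on a toy problem. The sketch below applies it to a small linear resistive network, re-evaluating only nodes whose neighbours changed beyond a tolerance, rather than the full SOR-Newton timing analysis of the dissertation; the circuit and values are invented.

```python
# Hedged sketch of event-driven, selective-trace relaxation on a toy resistive
# network (not the full SOR-Newton iterated timing analysis): only nodes whose
# neighbours changed by more than a tolerance are re-evaluated.
from collections import deque

# Node -> list of (neighbour, conductance), plus a conductance to ground and a
# source current per node; values are invented for illustration.
g_neighbours = {
    "a": [("b", 1.0)],
    "b": [("a", 1.0), ("c", 2.0)],
    "c": [("b", 2.0)],
}
g_ground = {"a": 1.0, "b": 0.5, "c": 1.0}
i_source = {"a": 1.0, "b": 0.0, "c": 0.0}

v = {n: 0.0 for n in g_neighbours}
tol = 1e-9
queue, queued = deque(v), set(v)          # start with every node scheduled

evaluations = 0
while queue:
    n = queue.popleft()
    queued.discard(n)
    evaluations += 1
    # Gauss-Seidel update: solve node n's KCL with neighbour voltages frozen.
    num = i_source[n] + sum(g * v[m] for m, g in g_neighbours[n])
    den = g_ground[n] + sum(g for _, g in g_neighbours[n])
    new_v = num / den
    if abs(new_v - v[n]) > tol:
        v[n] = new_v
        # Selective trace: only the fan-out of a changed node is rescheduled.
        for m, _ in g_neighbours[n]:
            if m not in queued:
                queue.append(m)
                queued.add(m)

print({k: round(val, 4) for k, val in v.items()}, "after", evaluations, "evaluations")
```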
Experimental studies of characteristic combustion-driven flows for CFD validation
NASA Technical Reports Server (NTRS)
Santoro, R. J.; Moser, M.; Anderson, W.; Pal, S.; Ryan, H.; Merkle, C. L.
1992-01-01
A series of rocket-related studies intended to develop a suitable data base for validation of Computational Fluid Dynamics (CFD) models of characteristic combustion-driven flows was undertaken at the Propulsion Engineering Research Center at Penn State. Included are studies of coaxial and impinging jet injectors as well as chamber wall heat transfer effects. The objective of these studies is to provide fundamental understanding and benchmark quality data for phenomena important to rocket combustion under well-characterized conditions. Diagnostic techniques utilized in these studies emphasize determinations of velocity, temperature, spray and droplet characteristics, and combustion zone distribution. Since laser diagnostic approaches are favored, the development of an optically accessible rocket chamber has been a high priority in the initial phase of the project. During the design phase for this chamber, the advice and input of the CFD modeling community were actively sought through presentations and written surveys. Based on this procedure, a suitable uni-element rocket chamber was fabricated and is presently under preliminary testing. Results of these tests, as well as the survey findings leading to the chamber design, were presented.
Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil
NASA Astrophysics Data System (ADS)
Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo
2013-04-01
The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997. The model has gone through upgrades over the years. In order to prepare the model for operational higher resolution forecasts, the model is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes the two Brazilian cities of Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level up to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. In addition, the region suffers from frequent floods and landslides; therefore, accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high resolution simulations in this complex area. Verification of model runs uses observations taken from the nuclear power plant and higher resolution reanalysis data. The runs were tested in a period when the flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km is a second nesting; it is driven by the Eta-15km, which in turn is driven by Era-Interim reanalyses. The series of experiments consists of replacing the surface layer stability function, adjusting cloud microphysics scheme parameters, and further increasing the vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially increased the katabatic winds, which verified better against the tower wind data. Precipitation produced by the model was excessive in the region. Increasing the vertical resolution to 60 layers caused a further increase in precipitation production. This excessive precipitation was reduced by adjusting some parameters in the cloud microphysics scheme. Precipitation is still overestimated, and further tests are necessary. The increase of horizontal resolution to 1 km required adjusting model diffusion parameters and refining divergence calculations. The limited availability of observations in the region for a thorough evaluation is a major constraint.
Using a data base management system for modelling SSME test history data
NASA Technical Reports Server (NTRS)
Abernethy, K.
1985-01-01
The usefulness of a data base management system (DBMS) for modelling historical test data for the complete series of static test firings for the Space Shuttle Main Engine (SSME) was assessed. From an analysis of user data base query requirements, it became clear that a relational DBMS that included a relationally complete query language would permit a model satisfying the query requirements. Representative models and sample queries are discussed. A list of environment-particular evaluation criteria for the desired DBMS was constructed; these criteria include requirements in the areas of user-interface complexity, program independence, flexibility, modifiability, and output capability. The evaluation process included the construction of several prototype data bases for user assessment. The systems studied, representing the three major DBMS conceptual models, were: MIRADS, a hierarchical system; DMS-1100, a CODASYL-based network system; ORACLE, a relational system; and DATATRIEVE, a relational-type system.
Camerini, Luca; Schulz, Peter Johannes
2012-07-18
The effectiveness of eHealth interventions in terms of reach and outcomes is now well documented. However, there is a need to understand not only whether eHealth interventions work, but also what kind of functions and mechanisms enhance their effectiveness. The present investigation contributes to tackling these challenges by investigating the role played by functional interactivity on patients' knowledge, empowerment, and health outcomes. To test whether health knowledge and empowerment mediate a possible relationship between the availability of interactive features on an eHealth application and individuals' health outcomes. We present an empirical, model-driven evaluation of the effects of functional interactivity implemented in an eHealth application, based on a brief theoretical review of the constructs of interactivity, health knowledge, empowerment, and health outcomes. We merged these constructs into a theoretical model of interactivity effects that we tested on an eHealth application for patients with fibromyalgia syndrome (FMS). This study used a pretest-posttest experimental design. We recruited 165 patients and randomly assigned them to three study groups, corresponding to different levels of functional interactivity. Eligibility to participate in the study required that patients (1) be fluent in Italian, (2) have access to the Internet, (3) report confidence in how to use a computer, and (4) have received a diagnosis of FMS from a doctor. We used structural equation modeling techniques to analyze changes between the pretest and the posttest results. The main finding was that functional interactivity had no impact on empowerment dimensions, nor direct observable effects on knowledge. However, knowledge positively affected health outcomes (b = -.12, P = .02), as did the empowerment dimensions of meaning (b = -.49, P < .001) and impact (b = -.25, P < .001). The theoretical model was partially confirmed, but only as far as the effects of knowledge and empowerment were concerned. The differential effect of interactive functions was by far weaker than expected. The strong impact of knowledge and empowerment on health outcomes suggests that these constructs should be targeted and enhanced by eHealth applications.
TOPEX/Poseidon precision orbit determination production and expert system
NASA Technical Reports Server (NTRS)
Putney, Barbara; Zelensky, Nikita; Klosko, Steven
1993-01-01
TOPEX/Poseidon (T/P) is a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency. The TOPEX/Poseidon Precision Orbit Determination Production System (PODPS) was developed at Goddard Space Flight Center (NASA/GSFC) to produce the absolute orbital reference required to support the fundamental ocean science goals of this satellite altimeter mission within NASA. The orbital trajectory for T/P is required to have an RMS accuracy of 13 centimeters in its radial component. This requirement is based on the effective use of the satellite altimetry for the isolation of absolute long-wavelength ocean topography important for monitoring global changes in the ocean circulation system. This orbit modeling requirement is at an unprecedented accuracy level for this type of satellite. In order to routinely produce and evaluate these orbits, GSFC has developed a production and supporting expert system. The PODPS is a menu-driven system allowing routine importation and processing of tracking data for orbit determination, and an evaluation of the quality of the orbit so produced through a progressive series of tests. Phase 1 of the expert system grades the orbit and displays test results. Later phases, currently undergoing implementation, will prescribe corrective actions when unsatisfactory results are seen. This paper describes the design and implementation of this orbit determination production system and the basis for its orbit accuracy assessment within the expert system.
Baldwin, Krystal L; Kannan, Vaishnavi; Flahaven, Emily L; Parks, Cassandra J; Ott, Jason M; Willett, Duwayne L
2018-01-01
Background Moving to electronic health records (EHRs) confers substantial benefits but risks unintended consequences. Modern EHRs consist of complex software code with extensive local configurability options, which can introduce defects. Defects in clinical decision support (CDS) tools are surprisingly common. Feasible approaches to prevent and detect defects in EHR configuration, including CDS tools, are needed. In complex software systems, use of test–driven development and automated regression testing promotes reliability. Test–driven development encourages modular, testable design and expanding regression test coverage. Automated regression test suites improve software quality, providing a “safety net” for future software modifications. Each automated acceptance test serves multiple purposes, as requirements (prior to build), acceptance testing (on completion of build), regression testing (once live), and “living” design documentation. Rapid-cycle development or “agile” methods are being successfully applied to CDS development. The agile practice of automated test–driven development is not widely adopted, perhaps because most EHR software code is vendor-developed. However, key CDS advisory configuration design decisions and rules stored in the EHR may prove amenable to automated testing as “executable requirements.” Objective We aimed to establish feasibility of acceptance test–driven development of clinical decision support advisories in a commonly used EHR, using an open source automated acceptance testing framework (FitNesse). Methods Acceptance tests were initially constructed as spreadsheet tables to facilitate clinical review. Each table specified one aspect of the CDS advisory’s expected behavior. Table contents were then imported into a test suite in FitNesse, which queried the EHR database to automate testing. Tests and corresponding CDS configuration were migrated together from the development environment to production, with tests becoming part of the production regression test suite. Results We used test–driven development to construct a new CDS tool advising Emergency Department nurses to perform a swallowing assessment prior to administering oral medication to a patient with suspected stroke. Test tables specified desired behavior for (1) applicable clinical settings, (2) triggering action, (3) rule logic, (4) user interface, and (5) system actions in response to user input. Automated test suite results for the “executable requirements” are shown prior to building the CDS alert, during build, and after successful build. Conclusions Automated acceptance test–driven development and continuous regression testing of CDS configuration in a commercial EHR proves feasible with open source software. Automated test–driven development offers one potential contribution to achieving high-reliability EHR configuration. Vetting acceptance tests with clinicians elicits their input on crucial configuration details early during initial CDS design and iteratively during rapid-cycle optimization. PMID:29653922
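The "executable requirements" pattern described above can be sketched outside FitNesse as well. The pytest table below is my illustration, not the study's configuration: the advisory logic, field names, and acceptance rows are hypothetical, but each row plays the same role as one FitNesse table row that clinicians review before the build and that later runs as a regression test.

```python
# Hedged sketch: the study used FitNesse tables queried against the EHR database; the
# same acceptance-test-driven idea is shown here with pytest for a hypothetical
# stroke swallow-screen advisory (all rule details below are illustrative).
import pytest

def advisory_fires(department, suspected_stroke, swallow_screen_done, order_route):
    """Hypothetical CDS rule: alert ED nurses before oral meds for suspected stroke."""
    return (
        department == "ED"
        and suspected_stroke
        and not swallow_screen_done
        and order_route == "oral"
    )

# Each row is one acceptance criterion, reviewable by clinicians before build.
CASES = [
    ("ED",  True,  False, "oral", True),   # applicable setting, should fire
    ("ED",  True,  True,  "oral", False),  # screen already done, suppress
    ("ED",  False, False, "oral", False),  # no stroke suspicion, suppress
    ("ICU", True,  False, "oral", False),  # outside applicable setting, suppress
    ("ED",  True,  False, "IV",   False),  # non-oral route, suppress
]

@pytest.mark.parametrize("dept,stroke,screened,route,expected", CASES)
def test_swallow_screen_advisory(dept, stroke, screened, route, expected):
    assert advisory_fires(dept, stroke, screened, route) == expected
```

Kept as part of the regression suite, such a table continues to guard the configured behavior after every subsequent optimization cycle.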
NASA Astrophysics Data System (ADS)
Crouch, Dustin L.; Huang, He (Helen)
2017-06-01
Objective. We investigated the feasibility of a novel, customizable, simplified EMG-driven musculoskeletal model for estimating coordinated hand and wrist motions during a real-time path tracing task. Approach. A two-degree-of-freedom computational musculoskeletal model was implemented for real-time EMG-driven control of a stick figure hand displayed on a computer screen. After 5-10 minutes of undirected practice, subjects were given three attempts to trace 10 straight paths, one at a time, with the fingertip of the virtual hand. Able-bodied subjects completed the task on two separate test days. Main results. Across subjects and test days, there was a significant linear relationship between log-transformed measures of accuracy and speed (Pearson’s r = 0.25, p < 0.0001). The amputee subject could coordinate movement between the wrist and MCP joints, but favored metacarpophalangeal joint motion more highly than able-bodied subjects in 8 of 10 trials. For able-bodied subjects, tracing accuracy was lower at the extremes of the model’s range of motion, though there was no apparent relationship between tracing accuracy and fingertip location for the amputee. Our result suggests that, unlike able-bodied subjects, the amputee’s motor control patterns were not accustomed to the multi-joint dynamics of the wrist and hand, possibly as a result of post-amputation cortical plasticity, disuse, or sensory deficits. Significance. To our knowledge, our study is one of very few that have demonstrated the real-time simultaneous control of multi-joint movements, especially wrist and finger movements, using an EMG-driven musculoskeletal model, which differs from the many data-driven algorithms that dominate the literature on EMG-driven prosthesis control. Real-time control was achieved with very little training and simple, quick (~15 s) calibration. Thus, our model is potentially a practical and effective control platform for multifunctional myoelectric prostheses that could restore more life-like hand function for individuals with upper limb amputation.
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
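A stripped-down sketch of the measurement principle (synthetic signals and made-up numbers, not HIEST data): combining accelerometers placed symmetrically about the centre of gravity suppresses the antisymmetric natural-vibration component, and drag follows from Newton's second law applied to the remaining rigid-body acceleration.

```python
# Simplified sketch (assumed signals, not HIEST data): estimate drag from two
# accelerometers on a wire-suspended model. Averaging signals from symmetric
# locations suppresses the antisymmetric vibration component, leaving the
# rigid-body acceleration; drag then follows from F = m * a.
import numpy as np

m_model = 450.0                              # model mass in kg (illustrative value)
fs = 100e3                                   # sampling rate, Hz
t = np.arange(0, 2e-3, 1.0 / fs)             # 2 ms test window

a_rigid = 40.0 * np.ones_like(t)             # true rigid-body deceleration, m/s^2
vib = 15.0 * np.sin(2 * np.pi * 800 * t)     # natural vibration of the suspended model

acc_front = a_rigid + vib                    # accelerometer ahead of the c.g.
acc_rear = a_rigid - vib                     # accelerometer behind the c.g., opposite phase

a_est = 0.5 * (acc_front + acc_rear)         # vibration cancels in the mean
drag = m_model * a_est.mean()
print(f"estimated drag: {drag:.1f} N")       # ~ m * 40 m/s^2 = 18 kN for these values
```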
A condition metric for Eucalyptus woodland derived from expert evaluations.
Sinclair, Steve J; Bruce, Matthew J; Griffioen, Peter; Dodd, Amanda; White, Matthew D
2018-02-01
The evaluation of ecosystem quality is important for land-management and land-use planning. Evaluation is unavoidably subjective, and robust metrics must be based on consensus and the structured use of observations. We devised a transparent and repeatable process for building and testing ecosystem metrics based on expert data. We gathered quantitative evaluation data on the quality of hypothetical grassy woodland sites from experts. We used these data to train a model (an ensemble of 30 bagged regression trees) capable of predicting the perceived quality of similar hypothetical woodlands based on a set of 13 site variables as inputs (e.g., cover of shrubs, richness of native forbs). These variables can be measured at any site and the model implemented in a spreadsheet as a metric of woodland quality. We also investigated the number of experts required to produce an opinion data set sufficient for the construction of a metric. The model produced evaluations similar to those provided by experts, as shown by assessing the model's quality scores of expert-evaluated test sites not used to train the model. We applied the metric to 13 woodland conservation reserves and asked managers of these sites to independently evaluate their quality. To assess metric performance, we compared the model's evaluation of site quality with the managers' evaluations through multidimensional scaling. The metric performed relatively well, plotting close to the center of the space defined by the evaluators. Given the method provides data-driven consensus and repeatability, which no single human evaluator can provide, we suggest it is a valuable tool for evaluating ecosystem quality in real-world contexts. We believe our approach is applicable to any ecosystem. © 2017 State of Victoria.
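A minimal sketch of the metric's structure (synthetic data, not the expert-elicited dataset from the study): an ensemble of 30 bagged regression trees is trained to map 13 site variables to an expert quality score and then used to score a new site. Variable meanings and coefficients below are illustrative assumptions.

```python
# Illustrative sketch (synthetic data, not the study's expert dataset): 30 bagged
# regression trees map 13 site variables to an expert quality score; the fitted
# ensemble then scores new sites, as the woodland condition metric does.
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(1)
n_sites, n_vars = 200, 13                      # 13 site variables, as in the study
X = rng.uniform(0, 1, size=(n_sites, n_vars))  # e.g. shrub cover, native forb richness
# Hypothetical stand-in for expert scores: a few variables dominate perceived quality
y = 60 * X[:, 0] + 30 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 5, n_sites)

# BaggingRegressor's default base learner is a regression tree
metric = BaggingRegressor(n_estimators=30, random_state=0).fit(X, y)

new_site = rng.uniform(0, 1, size=(1, n_vars))
print("predicted condition score:", round(float(metric.predict(new_site)[0]), 1))
```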
Implementing partnership-driven clinical federated electronic health record data sharing networks.
Stephens, Kari A; Anderson, Nicholas; Lin, Ching-Ping; Estiri, Hossein
2016-09-01
Building federated data sharing architectures requires supporting a range of data owners, effective and validated semantic alignment between data resources, and consistent focus on end-users. Establishing these resources requires development methodologies that support internal validation of data extraction and translation processes, sustaining meaningful partnerships, and delivering clear and measurable system utility. We describe findings from two federated data sharing case examples that detail critical factors, shared outcomes, and production environment results. Two federated data sharing pilot architectures developed to support network-based research associated with the University of Washington's Institute of Translational Health Sciences provided the basis for the findings. A spiral model for implementation and evaluation was used to structure iterations of development and support knowledge sharing between the two network development teams, which cross-collaborated to support and manage common stages. We found that using a spiral model of software development and multiple cycles of iteration was effective in achieving early network design goals. Both networks required time- and resource-intensive efforts to establish a trusted environment to create the data sharing architectures. Both networks were challenged by the need for adaptive use cases to define and test utility. An iterative cyclical model of development provided a process for developing trust with data partners and refining the design, and supported measurable success in the development of new federated data sharing architectures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Concept definition study for an extremely large aerophysics range facility
NASA Technical Reports Server (NTRS)
Swift, Hallock F.
1993-01-01
A conceptual design of a very large aeroballistic range is presented, as are its operational characteristics and procedures. The proposed model launcher is a two-stage light-gas gun, having a launch tube diameter of 254 mm, and the capability of accelerating a 14 kg launch mass to 6.1 km/sec. The gun's 91.4 cm diameter piston is driven by pressurized helium. High pressures in the central breech are contained by a multiple disk arrangement. The blast tank and sabot separation tank are described, as are methods for arresting sabot segments. The conceptual design of the range itself includes a 3.3 m diameter test or flight chamber some 330 m in length. Provisions are made for testing of free flight models and tests in which the model is confined by a track system. Methods for model deceleration and recovery are described. Provisions required for future addition of advanced model launchers such as an electromagnetic launcher or ram accelerator are addressed. Siting and safety issues are also addressed.
Adaptive Finite Element Methods for Continuum Damage Modeling
NASA Technical Reports Server (NTRS)
Min, J. B.; Tworzydlo, W. W.; Xiques, K. E.
1995-01-01
The paper presents an application of adaptive finite element methods to the modeling of low-cycle continuum damage and life prediction of high-temperature components. The major objective is to provide automated and accurate modeling of damaged zones through adaptive mesh refinement and adaptive time-stepping methods. The damage modeling methodology is implemented in the usual way by embedding damage evolution in the transient nonlinear solution of elasto-viscoplastic deformation problems. This nonlinear boundary-value problem is discretized by adaptive finite element methods. The automated h-adaptive mesh refinements are driven by error indicators, based on selected principal variables in the problem (stresses, non-elastic strains, damage, etc.). In the time domain, adaptive time-stepping is used, combined with a predictor-corrector time marching algorithm. The time step selection is controlled by the required time accuracy. In order to take into account the strong temperature dependency of material parameters, the nonlinear structural solution is coupled with thermal analyses (one-way coupling). Several test examples illustrate the importance and benefits of adaptive mesh refinements in accurate prediction of damage levels and failure time.
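The indicator-driven refinement loop can be illustrated schematically. The 1D sketch below is my simplification, not the paper's elasto-viscoplastic solver: a hypothetical "damage" field stands in for the principal variables, and elements whose indicator exceeds a tolerance are split so resolution concentrates where gradients are steep.

```python
# Schematic sketch of error-indicator-driven h-refinement (1D, illustrative only):
# elements whose indicator exceeds a tolerance are split, concentrating resolution
# where the assumed 'damage' field varies most strongly.
import numpy as np

def error_indicator(x_left, x_right):
    """Hypothetical indicator: variation of a steep damage field across the element."""
    damage = lambda x: np.tanh(50 * (x - 0.7))     # stand-in for a damaged zone near x=0.7
    return abs(damage(x_right) - damage(x_left))

elements = [(0.0, 1.0)]                             # initial coarse mesh: one element
tol = 0.05
for sweep in range(8):                              # adaptive refinement sweeps
    refined = []
    for a, b in elements:
        if error_indicator(a, b) > tol:             # refine only flagged elements
            mid = 0.5 * (a + b)
            refined += [(a, mid), (mid, b)]
        else:
            refined.append((a, b))
    if refined == elements:                         # nothing left to refine
        break
    elements = refined

print(f"{len(elements)} elements; smallest element size: "
      f"{min(b - a for a, b in elements):.4f}")
```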
Information-driven trade and price-volume relationship in artificial stock markets
NASA Astrophysics Data System (ADS)
Liu, Xinghua; Liu, Xin; Liang, Xiaobei
2015-07-01
The positive relation between stock price changes and trading volume (price-volume relationship) as a stylized fact has attracted significant interest among finance researchers and investment practitioners. However, until now, consensus has not been reached regarding the causes of the relationship based on real market data because extracting valuable variables (such as information-driven trade volume) from real data is difficult. This lack of general consensus motivates us to develop a simple agent-based computational artificial stock market where extracting the necessary variables is easy. Based on this model and its artificial data, our tests have found that the aggressive trading style of informed agents can produce a price-volume relationship. Therefore, the information spreading process is not a necessary condition for producing price-volume relationship.
Investigation of lunar base thermal control system options
NASA Technical Reports Server (NTRS)
Ewart, Michael K.
1993-01-01
Long duration human exploration missions to the Moon will require active thermal control systems which have not previously been used in space. The two technologies which are most promising for long term lunar base thermal control are heat pumps and radiator shades. Recent trade-off studies at the Johnson Space Center have focused development efforts on the most promising heat pump and radiator shade technologies. Since these technologies are in the early stages of development and many parameters used in the study are not well defined, a parametric study was done to test the sensitivity to each assumption. The primary comparison factor in these studies was the total system mass, with power requirements included in the form of a mass penalty for power. Heat pump technologies considered were thermally driven heat pumps such as metal hydride, complex compound, absorption and zeolite. Also considered were electrically driven Stirling and vapor compression heat pumps. Radiator shade concepts considered included step shaped, V-shaped and parabolic (or catenary) shades and ground covers.
Electromagnetic Properties Analysis on Hybrid-driven System of Electromagnetic Motor
NASA Astrophysics Data System (ADS)
Zhao, Jingbo; Han, Bingyuan; Bei, Shaoyi
2018-01-01
The hybrid-driven system of an electromagnetic motor, composed of permanent magnets and electromagnets, was analyzed. An equivalent magnetic circuit was used to establish mathematical models of the hybrid-driven system, from which expressions for the air-gap flux, air-gap magnetic flux density, and electromagnetic force were derived. Taking the air-gap magnetic flux density and electromagnetic force as the main research objects, the hybrid-driven system was studied, and its electromagnetic properties under different working current modes were examined in a preliminary way. The results show that the hybrid-driven design can improve the air-gap magnetic flux density and electromagnetic force more effectively while guaranteeing output stability. The effectiveness and feasibility of the hybrid-driven system are thus verified, providing a theoretical basis for its design.
Gaussian Processes for Data-Efficient Learning in Robotics and Control.
Deisenroth, Marc Peter; Fox, Dieter; Rasmussen, Carl Edward
2015-02-01
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
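The central ingredient, a probabilistic transition model learned from few interactions, can be sketched as follows. This is a hedged illustration of the idea rather than the paper's implementation (which propagates full state distributions through planning): a Gaussian process is fit to a handful of synthetic transitions and queried for the mean and uncertainty of the next state.

```python
# Hedged sketch of the core idea (not the paper's code): learn a probabilistic
# Gaussian-process transition model x[t+1] = f(x[t], u[t]) from a few interactions,
# so model uncertainty is available for planning instead of a point estimate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

def true_dynamics(x, u):                            # unknown system (synthetic stand-in)
    return 0.9 * x + 0.2 * np.sin(u)

# Collect a small batch of transitions (state x, control u) -> next state
X_train = rng.uniform(-1, 1, size=(30, 2))
y_train = true_dynamics(X_train[:, 0], X_train[:, 1]) + 0.01 * rng.standard_normal(30)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4),
    normalize_y=True,
).fit(X_train, y_train)

# Predict the next state with uncertainty; a planner would propagate this
# uncertainty over the horizon rather than trusting the mean alone.
x_next_mean, x_next_std = gp.predict(np.array([[0.5, 0.3]]), return_std=True)
print(f"predicted next state: {x_next_mean[0]:.3f} +/- {x_next_std[0]:.3f}")
```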
Requirements-Based Conformance Testing of ARINC 653 Real-Time Operating Systems
NASA Astrophysics Data System (ADS)
Maksimov, Andrey
2010-08-01
Requirements-based testing is emphasized in avionics certification documents because this strategy has been found to be the most effective at revealing errors. This paper describes a unified requirements-based approach to the creation of conformance test suites for mission-critical systems. The approach uses formal machine-readable specifications of requirements and a finite state machine model for on-the-fly generation of test sequences. The paper also presents a test system for automated test generation for ARINC 653 services built on this approach. Possible application of the presented approach to various areas of avionics embedded systems testing is discussed.
NASA Astrophysics Data System (ADS)
Cominola, A.; Spang, E. S.; Giuliani, M.; Castelletti, A.; Loge, F. J.; Lund, J. R.
2016-12-01
Demand side management strategies are key to meet future water and energy demands in urban contexts, promote water and energy efficiency in the residential sector, provide customized services and communications to consumers, and reduce utilities' costs. Smart metering technologies allow gathering high temporal and spatial resolution water and energy consumption data and support the development of data-driven models of consumers' behavior. Modelling and predicting resource consumption behavior is essential to inform demand management. Yet, analyzing big smart-metered databases requires proper data mining and modelling techniques, in order to extract useful information supporting decision makers to spot end uses towards which water and energy efficiency or conservation efforts should be prioritized. In this study, we consider the following research questions: (i) How is it possible to extract representative consumers' personalities out of big smart metered water and energy data? (ii) Are residential water and energy consumption profiles interconnected? (iii) Can we design customized water and energy demand management strategies based on the knowledge of water-energy demand profiles and other user-specific psychographic information? To address the above research questions, we contribute a data-driven approach to identify and model routines in water and energy consumers' behavior. We propose a novel customer segmentation procedure based on data-mining techniques. Our procedure consists of three steps: (i) extraction of typical water-energy consumption profiles for each household, (ii) clustering of profiles based on their similarity, and (iii) evaluation of the influence of candidate explanatory variables on the identified clusters. The approach is tested on a dataset of smart metered water and energy consumption data from over 1000 households in South California. Our methodology allows identifying heterogeneous groups of consumers from the studied sample, as well as characterizing them with respect to consumption profile features and socio-demographic information. Results show how such an improved understanding of the considered users' community allows spotting potentially interesting areas for water and energy demand management interventions.
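The three-step segmentation can be sketched with standard tools. The example below is illustrative only, using synthetic profiles rather than the South California dataset and plain k-means rather than whatever clustering the study finally adopted: build one typical 24-hour profile per household, cluster profiles by similarity, and report cluster sizes as the starting point for linking clusters to explanatory variables.

```python
# Illustrative sketch of the segmentation steps (synthetic profiles, hypothetical
# habits): build 24-hour consumption profiles per household, cluster them, and
# inspect cluster sizes as a basis for targeted demand management.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
hours = np.arange(24)

def household_profile(peak_hour):
    """Hypothetical daily profile with a consumption peak at a given hour."""
    return np.exp(-0.5 * ((hours - peak_hour) / 2.0) ** 2) + 0.05 * rng.random(24)

# Step (i): one typical profile per household (three latent habits: morning,
# evening, and night-time consumers)
profiles = np.array([household_profile(rng.choice([7, 19, 23])) for _ in range(1000)])

# Step (ii): cluster households by profile similarity
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)

# Step (iii): cluster sizes; the study then relates clusters to psychographic and
# socio-demographic variables.
print("households per cluster:", np.bincount(labels))
```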
Geerts, Hugo; Hofmann-Apitius, Martin; Anastasio, Thomas J
2017-11-01
Neurodegenerative diseases such as Alzheimer's disease (AD) follow a slowly progressing dysfunctional trajectory, with a large presymptomatic component and many comorbidities. Using preclinical models and large-scale omics studies ranging from genetics to imaging, a large number of processes that might be involved in AD pathology at different stages and levels have been identified. The sheer number of putative hypotheses makes it almost impossible to estimate their contribution to the clinical outcome and to develop a comprehensive view on the pathological processes driving the clinical phenotype. Traditionally, bioinformatics approaches have provided correlations and associations between processes and phenotypes. Focusing on causality, a new breed of advanced and more quantitative modeling approaches that use formalized domain expertise offer new opportunities to integrate these different modalities and outline possible paths toward new therapeutic interventions. This article reviews three different computational approaches and their possible complementarities. Process algebras, implemented using declarative programming languages such as Maude, facilitate simulation and analysis of complicated biological processes on a comprehensive but coarse-grained level. A model-driven Integration of Data and Knowledge, based on the OpenBEL platform and using reverse causative reasoning and network jump analysis, can generate mechanistic knowledge and a new, mechanism-based taxonomy of disease. Finally, Quantitative Systems Pharmacology is based on formalized implementation of domain expertise in a more fine-grained, mechanism-driven, quantitative, and predictive humanized computer model. We propose a strategy to combine the strengths of these individual approaches for developing powerful modeling methodologies that can provide actionable knowledge for rational development of preventive and therapeutic interventions. Development of these computational approaches is likely to be required for further progress in understanding and treating AD. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Towards Test Driven Development for Computational Science with pFUnit
NASA Technical Reports Server (NTRS)
Rilee, Michael L.; Clune, Thomas L.
2014-01-01
Developers working in Computational Science & Engineering (CSE)/High Performance Computing (HPC) must contend with constant change due to advances in computing technology and science. Test Driven Development (TDD) is a methodology that mitigates software development risks due to change at the cost of adding comprehensive and continuous testing to the development process. Testing frameworks tailored for CSE/HPC, like pFUnit, can lower the barriers to such testing, yet CSE software faces unique constraints foreign to the broader software engineering community. Effective testing of numerical software requires a comprehensive suite of oracles, i.e., use cases with known answers, as well as robust estimates for the unavoidable numerical errors associated with implementation with finite-precision arithmetic. At first glance these concerns often seem exceedingly challenging or even insurmountable for real-world scientific applications. However, we argue that this common perception is incorrect and driven by (1) a conflation between model validation and software verification and (2) the general tendency in the scientific community to develop relatively coarse-grained, large procedures that compound numerous algorithmic steps. We believe TDD can be applied routinely to numerical software if developers pursue fine-grained implementations that permit testing, neatly side-stepping concerns about needing nontrivial oracles as well as the accumulation of errors. We present an example of a successful, complex legacy CSE/HPC code whose development process shares some aspects with TDD, which we contrast with current and potential capabilities. A mix of our proposed methodology and framework support should enable everyday use of TDD by CSE-expert developers.
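pFUnit targets Fortran, but the fine-grained testing style argued for above can be sketched in any language. The pytest example below is my illustration under that assumption: a small, testable numerical unit is checked against an exact oracle and against its expected convergence rate, with tolerances that acknowledge finite-precision arithmetic.

```python
# Hedged sketch of fine-grained TDD for numerical code (shown with pytest rather than
# pFUnit): a small routine is tested against an exact oracle and its known order of
# accuracy, so no elaborate application-level oracle is needed.
import math
import pytest

def trapezoid(f, a, b, n):
    """Composite trapezoid rule - a deliberately small, testable unit."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def test_exact_for_linear_integrand():
    # Oracle: the trapezoid rule is exact for linear functions
    assert trapezoid(lambda x: 2 * x + 1, 0.0, 1.0, 4) == pytest.approx(2.0)

def test_second_order_convergence():
    # Oracle: the error should shrink ~4x when the step is halved (2nd-order method)
    exact = 2.0  # integral of sin on [0, pi]
    e1 = abs(trapezoid(math.sin, 0.0, math.pi, 16) - exact)
    e2 = abs(trapezoid(math.sin, 0.0, math.pi, 32) - exact)
    assert e1 / e2 == pytest.approx(4.0, rel=0.05)
```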
Model-Driven Configuration of SELinux Policies
NASA Astrophysics Data System (ADS)
Agreiter, Berthold; Breu, Ruth
The need for access control in computer systems is inherent. However, the complexity of configuring such systems is constantly increasing, which affects the overall security of a system negatively. We think that it is important to define security requirements on a non-technical level while taking the application domain into account, in order to have a clear and separate view of security configuration (i.e. one unblurred by technical details). On the other hand, security functionality has to be tightly integrated with the system and its development process in order to provide comprehensive means of enforcement. In this paper, we propose a systematic approach based on model-driven security configuration to leverage existing operating system security mechanisms (SELinux) for realising access control. We use UML models and develop a UML profile to satisfy these needs. Our goal is to exploit a comprehensive protection mechanism while rendering its security policy manageable by a domain specialist.
Deep Space Test Bed for Radiation Studies
NASA Technical Reports Server (NTRS)
Adams, James H.; Christl, Mark; Watts, John; Kuznetsov, Eugene; Lin, Zi-Wei
2006-01-01
A key factor affecting the technical feasibility and cost of missions to Mars or the Moon is the need to protect the crew from ionizing radiation in space. Some analyses indicate that large amounts of spacecraft shielding may be necessary for crew safety. The shielding requirements are driven by the need to protect the crew from Galactic cosmic rays (GCR). Recent research activities aimed at enabling manned exploration have included shielding materials studies. A major goal of this research is to develop accurate radiation transport codes to calculate the shielding effectiveness of materials and to develop effective shielding strategies for spacecraft design. Validation of these models and calculations must be addressed in a relevant radiation environment to assure their technical readiness and accuracy. Test data obtained in the deep space radiation environment can provide definitive benchmarks and yield uncertainty estimates of the radiation transport codes. The two approaches presently used for code validation are ground based testing at particle accelerators and flight tests in high-inclination low-earth orbits provided by the shuttle, free-flyer platforms, or polar-orbiting satellites. These approaches have limitations in addressing all the radiation-shielding issues of deep space missions in both technical and practical areas. An approach based on long duration high altitude polar balloon flights provides exposure to the galactic cosmic ray composition and spectra encountered in deep space at a lower cost and with easier and more frequent access than afforded with spaceflight opportunities. This approach also results in shorter development times than spaceflight experiments, which is important for addressing changing program goals and requirements.
NASA Astrophysics Data System (ADS)
Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi
2017-11-01
Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. For some problems there can be benefits to using a Lagrangian framework, while for others an Eulerian framework might have advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.
Model-driven approach to data collection and reporting for quality improvement
Curcin, Vasa; Woodcock, Thomas; Poots, Alan J.; Majeed, Azeem; Bell, Derek
2014-01-01
Continuous data collection and analysis have been shown to be essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding, thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate the required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC), for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, a Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. PMID:24874182
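A minimal sketch of the SPC reporting such tools generate (synthetic numbers and a hypothetical measure, not WISH code): an individuals (XmR) chart whose control limits come from the average moving range, with out-of-limit points flagged as possible special-cause variation.

```python
# Minimal sketch of an individuals (XmR) control chart, the SPC building block behind
# the generated reports (synthetic data, hypothetical measure; not WISH code).
import numpy as np

rng = np.random.default_rng(4)
weekly_rate = rng.normal(70, 5, 30)          # e.g. % of patients receiving a care bundle
weekly_rate[25:] += 25                       # a step change after an intervention

centre = weekly_rate.mean()
avg_moving_range = np.abs(np.diff(weekly_rate)).mean()
ucl = centre + 2.66 * avg_moving_range       # standard XmR constant: 2.66 = 3 / d2
lcl = centre - 2.66 * avg_moving_range

signals = np.where((weekly_rate > ucl) | (weekly_rate < lcl))[0]
print(f"centre {centre:.1f}, limits [{lcl:.1f}, {ucl:.1f}], "
      f"flagged weeks: {signals.tolist()}")
```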
Hydrodynamics with strength: scaling-invariant solutions for elastic-plastic cavity expansion models
NASA Astrophysics Data System (ADS)
Albright, Jason; Ramsey, Scott; Baty, Roy
2017-11-01
Spherical cavity expansion (SCE) models are used to describe idealized detonation and high-velocity impact in a variety of materials. The common theme in SCE models is the presence of a pressure-driven cavity or void within a domain composed of plastic and elastic response sub-regions. In past work, the yield criterion characterizing material strength in the plastic sub-region is usually taken for granted and assumed to take a known functional form restricted to certain classes of materials, e.g. ductile metals or brittle geologic materials. Our objective is to systematically determine a general functional form for the yield criterion under the additional requirement that the SCE admits a similarity solution. Solutions determined under this additional requirement have immediate implications toward the development of new compressible flow algorithm verification test problems. However, more importantly, these results also provide novel insight into modeling the yield criteria from the perspective of hydrodynamic scaling.
The Power of Neuroimaging Biomarkers for Screening Frontotemporal Dementia
McMillan, Corey T.; Avants, Brian B.; Cook, Philip; Ungar, Lyle; Trojanowski, John Q.; Grossman, Murray
2014-01-01
Frontotemporal dementia (FTD) is a clinically and pathologically heterogeneous neurodegenerative disease that can result from either frontotemporal lobar degeneration (FTLD) or Alzheimer’s disease (AD) pathology. It is critical to establish statistically powerful biomarkers that can achieve substantial cost-savings and increase feasibility of clinical trials. We assessed three broad categories of neuroimaging methods to screen underlying FTLD and AD pathology in a clinical FTD series: global measures (e.g., ventricular volume), anatomical volumes of interest (VOIs) (e.g., hippocampus) using a standard atlas, and data-driven VOIs using Eigenanatomy. We evaluated clinical FTD patients (N=93) with cerebrospinal fluid, gray matter (GM) MRI, and diffusion tensor imaging (DTI) to assess whether they had underlying FTLD or AD pathology. Linear regression was performed to identify the optimal VOIs for each method in a training dataset and then we evaluated classification sensitivity and specificity in an independent test cohort. Power was evaluated by calculating minimum sample sizes (mSS) required in the test classification analyses for each model. The data-driven VOI analysis using a multimodal combination of GM MRI and DTI achieved the greatest classification accuracy (89% SENSITIVE; 89% SPECIFIC) and required a lower minimum sample size (N=26) relative to anatomical VOI and global measures. We conclude that a data-driven VOI approach employing Eigenanatomy provides more accurate classification, benefits from increased statistical power in unseen datasets, and therefore provides a robust method for screening underlying pathology in FTD patients for entry into clinical trials. PMID:24687814
Organization-based Model-driven Development of High-assurance Multiagent Systems
2009-02-27
Report on the project "Organization-based Model-driven Development of High-assurance Multiagent Systems", performed by Dr. Scott A. DeLoach and Dr. Robby at Kansas State University. Related publications include "A Capabilities Based Model for Artificial Organizations", Journal of Autonomous Agents and Multiagent Systems, volume 16, no. 1, February 2008, and Matson, E. T. (2007), "A capabilities based theory of artificial organizations", Journal of Autonomous Agents and Multiagent Systems.
Modeling of damage driven fracture failure of fiber post-restored teeth.
Xu, Binting; Wang, Yining; Li, Qing
2015-09-01
Mechanical failure of biomaterials, which can be initiated by either violent force or progressive stress fatigue, is a serious issue. Great efforts have been made to improve the mechanical performance of dental restorations. Virtual simulation is a promising approach for biomechanical investigations, which offers significant efficiency advantages over traditional in vivo/in vitro studies. Over the past few decades, a number of virtual studies have been conducted to investigate the biomechanical issues concerning dental biomaterials, but only with limited incorporation of brittle failure phenomena. Motivated by the contradictory findings between several finite element analyses and common clinical observations on the fracture resistance of post-restored teeth, this study aimed to provide an approach using numerical simulations for investigating the fracture failure process through a non-linear fracture mechanics model. The ability of this approach to predict fracture initiation and propagation in a complex biomechanical status based on the intrinsic material properties was investigated. Results of the virtual simulations matched the findings of experimental tests, in terms of the ultimate fracture failure strengths and predictive areas under risk of clinical failure. This study revealed that the failure of dental post-restored restorations is a typical damage-driven continuum-to-discrete process. This approach is anticipated to have ramifications not only for modeling fracture events, but also for the design and optimization of the mechanical properties of biomaterials for specific clinically determined requirements. Copyright © 2015 Elsevier Ltd. All rights reserved.
A Hypothesis-Driven Approach to Site Investigation
NASA Astrophysics Data System (ADS)
Nowak, W.
2008-12-01
Variability of subsurface formations and the scarcity of data lead to the notion of aquifer parameters as geostatistical random variables. Given an information need and limited resources for field campaigns, site investigation is often put into the context of optimal design. In optimal design, the types, numbers and positions of samples are optimized under case-specific objectives to meet the information needs. Past studies feature optimal data worth (balancing maximum financial profit in an engineering task versus the cost of additional sampling), or aim at a minimum prediction uncertainty of stochastic models for a prescribed investigation budget. Recent studies also account for other sources of uncertainty outside the hydrogeological range, such as uncertain toxicity, ingestion and behavioral parameters of the affected population when predicting the human health risk from groundwater contaminations. The current study looks at optimal site investigation from a new angle. Answering a yes/no question under uncertainty directly requires recasting the original question as a hypothesis test. Otherwise, false confidence in the resulting answer would be conveyed. A straightforward example is whether a recent contaminant spill will cause contaminant concentrations in excess of a legal limit at a nearby drinking water well. This question can only be answered down to a specified chance of error, i.e., based on the significance level used in hypothesis tests. Optimal design is placed into the hypothesis-driven context by using the chance of providing a false yes/no answer as the new criterion to be minimized. Different configurations apply for one-sided and two-sided hypothesis tests. If a false answer entails financial liability, the hypothesis-driven context can be re-cast in the context of data worth. The remaining difference is that failure is a hard constraint in the data worth context versus a monetary punishment term in the hypothesis-driven context. The basic principle is discussed and illustrated with the case of a hypothetical contaminant spill and the exceedance of critical contaminant levels at a downstream location. A tempting and important side question is whether site investigation could be tweaked towards a yes or no answer in maliciously biased campaigns by unfair formulation of the optimization objective.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-10
... Society of Automotive Engineers (SAE) Recommended Practice J918b, Passenger Car Tire Performance Requirements and Test Procedures (January 1967), Section 3.1. As part of the strength test, a plunger is driven into a...
A zebrafish model of chordoma initiated by notochord-driven expression of HRASV12
Burger, Alexa; Vasilyev, Aleksandr; Tomar, Ritu; Selig, Martin K.; Nielsen, G. Petur; Peterson, Randall T.; Drummond, Iain A.; Haber, Daniel A.
2014-01-01
Chordoma is a malignant tumor thought to arise from remnants of the embryonic notochord, with its origin in the bones of the axial skeleton. Surgical resection is the standard treatment, usually in combination with radiation therapy, but neither chemotherapeutic nor targeted therapeutic approaches have demonstrated success. No animal model and only few chordoma cell lines are available for preclinical drug testing, and, although no druggable genetic drivers have been identified, activation of EGFR and downstream AKT-PI3K pathways have been described. Here, we report a zebrafish model of chordoma, based on stable transgene-driven expression of HRASV12 in notochord cells during development. Extensive intra-notochordal tumor formation is evident within days of transgene expression, ultimately leading to larval death. The zebrafish tumors share characteristics of human chordoma as demonstrated by immunohistochemistry and electron microscopy. The mTORC1 inhibitor rapamycin, which has some demonstrated activity in a chordoma cell line, delays the onset of tumor formation in our zebrafish model, and improves survival of tumor-bearing fish. Consequently, the HRASV12-driven zebrafish model of chordoma could enable high-throughput screening of potential therapeutic agents for the treatment of this refractory cancer. PMID:24311731
Test Operations Procedure (TOP) 10-2-400 Open End Compressed Gas Driven Shock Tube
This Test Operations Procedure describes testing with an open-end compressed gas-driven shock tube. Procedures are provided for instrumentation, test item positioning, estimation of key test parameters, operation of the shock tube, data collection, and reporting. The procedures in this document are based on the use of helium gas and Mylar film diaphragms.
ERIC Educational Resources Information Center
Tough, David T.
2009-01-01
The purpose of this online study was to create a ranking of essential core competencies and technologies required by AET (audio engineering technology) programs 10 years in the future. The study was designed to facilitate curriculum development and improvement in the rapidly expanding number of small to medium sized audio engineering technology…
NASA Astrophysics Data System (ADS)
Vesey, Roger; Cuneo, M. E.; Hanson Porter, D. L., Jr.; Mehlhorn, T. A.; Ruggles, L. E.; Simpson, W. W.; Hammer, J. H.; Landen, O.
2000-10-01
Capsule radiation symmetry is a crucial issue in the design of the z-pinch driven hohlraum approach to high-yield inertial confinement fusion [1]. Capsule symmetry may be influenced by power imbalance of the two z-pinch x-ray sources, and by hohlraum effects (geometry, time-dependent albedo, wall motion). We have conducted two-dimensional radiation-hydrodynamics calculations to estimate the symmetry sensitivity of the 220 eV beryllium ablator capsule that nominally yields 400 MJ in this concept. These estimates then determine the symmetry requirements to be met by the hohlraum design (for even Legendre modes) and by the top-bottom pinch imbalance and mistiming (for odd Legendre modes). We have used a combination of 2- and 3-D radiosity ("viewfactor"), and 2-D radiation-hydrodynamics calculations to identify hohlraum geometries that meet these symmetry requirements for high-yield, and are testing these models against ongoing Z foam ball symmetry experiments. 1. J. H. Hammer et al., Phys. Plas. 6, 2129 (1999).
Tang, Jessica A; Scheer, Justin K; Ames, Christopher P; Buckley, Jenni M
2012-02-23
Pure moment testing has become a standard protocol for in vitro assessment of the effect of surgical techniques or devices on the bending rigidity of the spine. Of the methods used for pure moment testing, cable-driven set-ups are popular due to their low requirements and simple design. Fixed loading rings are traditionally used in conjunction with these cable-driven systems. However, the accuracy and validity of the loading conditions applied with fixed ring designs have raised some concern, and discrepancies have been found between intended and prescribed loading conditions for flexion-extension. This study extends this prior work to include lateral bending and axial torsion, and compares this fixed ring design with a novel "3D floating ring" design. A complete battery of multi-axial bending tests was conducted with both rings in multiple different configurations using an artificial lumbar spine. Applied moments were monitored and recorded by a multi-axial load cell at the base of the specimen. Results indicate that the fixed ring design deviates as much as 77% from intended moments and induces non-trivial shear forces (up to 18 N) when loaded to a non-destructive maximum of 4.5 Nm. The novel 3D floating ring design largely corrects the inherent errors in the fixed ring design by allowing additional directions of unconstrained motion and producing uniform loading conditions along the length of the specimen. In light of the results, it is suggested that the 3D floating ring set-up be used for future pure moment spine biomechanics applications using a cable-driven apparatus. Copyright © 2012 Elsevier Ltd. All rights reserved.
A real-time spiking cerebellum model for learning robot control.
Carrillo, Richard R; Ros, Eduardo; Boucheny, Christian; Coenen, Olivier J-M D
2008-01-01
We describe a neural network model of the cerebellum based on integrate-and-fire spiking neurons with conductance-based synapses. The neuron characteristics are derived from our earlier detailed models of the different cerebellar neurons. We tested the cerebellum model in a real-time control application with a robotic platform. Delays were introduced in the different sensorimotor pathways according to the biological system. The main plasticity in the cerebellar model is a spike-timing dependent plasticity (STDP) at the parallel fiber to Purkinje cell connections. This STDP is driven by the inferior olive (IO) activity, which encodes an error signal using a novel probabilistic low frequency model. We demonstrate the cerebellar model in a robot control system using a target-reaching task. We test whether the system learns to reach different target positions in a non-destructive way, therefore abstracting a general dynamics model. To test the system's ability to self-adapt to different dynamical situations, we present results obtained after changing the dynamics of the robotic platform significantly (its friction and load). The experimental results show that the cerebellar-based system is able to adapt dynamically to different contexts.
Dynamic information processing states revealed through neurocognitive models of object semantics
Clarke, Alex
2015-01-01
Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632
A Reward-Based Behavioral Platform to Measure Neural Activity during Head-Fixed Behavior.
Micallef, Andrew H; Takahashi, Naoya; Larkum, Matthew E; Palmer, Lucy M
2017-01-01
Understanding the neural computations that contribute to behavior requires recording from neurons while an animal is behaving. This is not an easy task as most subcellular recording techniques require absolute head stability. The Go/No-Go sensory task is a powerful decision-driven task that enables an animal to report a binary decision during head-fixation. Here we discuss how to set up an Ardunio and Python based platform system to control a Go/No-Go sensory behavior paradigm. Using an Arduino micro-controller and Python-based custom written program, a reward can be delivered to the animal depending on the decision reported. We discuss the various components required to build the behavioral apparatus that can control and report such a sensory stimulus paradigm. This system enables the end user to control the behavioral testing in real-time and therefore it provides a strong custom-made platform for probing the neural basis of behavior.
NASA Astrophysics Data System (ADS)
Zhang, Feng-Liang; Ni, Yan-Chun; Au, Siu-Kui; Lam, Heung-Fai
2016-03-01
The identification of modal properties from field testing of civil engineering structures is becoming economically viable, thanks to the advent of modern sensor and data acquisition technology. Its demand is driven by innovative structural designs and increased performance requirements of dynamic-prone structures that call for a close cross-checking or monitoring of their dynamic properties and responses. Existing instrumentation capabilities and modal identification techniques allow structures to be tested under free vibration, forced vibration (known input) or ambient vibration (unknown broadband loading). These tests can be considered complementary rather than competing as they are based on different modeling assumptions in the identification model and have different implications on costs and benefits. Uncertainty arises naturally in the dynamic testing of structures due to measurement noise, sensor alignment error, modeling error, etc. This is especially relevant in field vibration tests because the test condition in the field environment can hardly be controlled. In this work, a Bayesian statistical approach is developed for modal identification using the free vibration response of structures. A frequency domain formulation is proposed that makes statistical inference based on the Fast Fourier Transform (FFT) of the data in a selected frequency band. This significantly simplifies the identification model because only the modes dominating the frequency band need to be included. It also legitimately ignores the information in the excluded frequency bands that are either irrelevant or difficult to model, thereby significantly reducing modeling error risk. The posterior probability density function (PDF) of the modal parameters is derived rigorously from modeling assumptions and Bayesian probability logic. Computational difficulties associated with calculating the posterior statistics, including the most probable value (MPV) and the posterior covariance matrix, are addressed. Fast computational algorithms for determining the MPV are proposed so that the method can be practically implemented. In the companion paper (Part II), analytical formulae are derived for the posterior covariance matrix so that it can be evaluated without resorting to finite difference method. The proposed method is verified using synthetic data. It is also applied to modal identification of full-scale field structures.
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Model-based thermal system design optimization for the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-10-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Feedforward hysteresis compensation in trajectory control of piezoelectrically-driven nanostagers
NASA Astrophysics Data System (ADS)
Bashash, Saeid; Jalili, Nader
2006-03-01
Complex structural nonlinearities of piezoelectric materials drastically degrade their performance in variety of micro- and nano-positioning applications. From the precision positioning and control perspective, the multi-path time-history dependent hysteresis phenomenon is the most concerned nonlinearity in piezoelectric actuators to be analyzed. To realize the underlying physics of this phenomenon and to develop an efficient compensation strategy, the intelligent properties of hysteresis with the effects of non-local memories are discussed. Through performing a set of experiments on a piezoelectrically-driven nanostager with high resolution capacitive position sensor, it is shown that for the precise prediction of hysteresis path, certain memory units are required to store the previous hysteresis trajectory data. Based on the experimental observations, a constitutive memory-based mathematical modeling framework is developed and trained for the precise prediction of hysteresis path for arbitrarily assigned input profiles. Using the inverse hysteresis model, a feedforward control strategy is then developed and implemented on the nanostager to compensate for the system everpresent nonlinearity. Experimental results demonstrate that the controller remarkably eliminates the nonlinear effect if memory units are sufficiently chosen for the inverse model.
Transforming Collaborative Process Models into Interface Process Models by Applying an MDA Approach
NASA Astrophysics Data System (ADS)
Lazarte, Ivanna M.; Chiotti, Omar; Villarreal, Pablo D.
Collaborative business models among enterprises require defining collaborative business processes. Enterprises implement B2B collaborations to execute these processes. In B2B collaborations the integration and interoperability of processes and systems of the enterprises are required to support the execution of collaborative processes. From a collaborative process model, which describes the global view of the enterprise interactions, each enterprise must define the interface process that represents the role it performs in the collaborative process in order to implement the process in a Business Process Management System. Hence, in this work we propose a method for the automatic generation of the interface process model of each enterprise from a collaborative process model. This method is based on a Model-Driven Architecture to transform collaborative process models into interface process models. By applying this method, interface processes are guaranteed to be interoperable and defined according to a collaborative process.
Wind-driven rain and its implications for natural hazard management
NASA Astrophysics Data System (ADS)
Marzen, Miriam; Iserloh, Thomas; de Lima, João L. M. P.; Fister, Wolfgang; Ries, Johannes B.
2017-04-01
Prediction and risk assessment of hydrological extremes are great challenges. Following climate predictions, frequent and violent rainstorms will become a new hazard to several regions in the medium term. Particularly agricultural soils will be severely threatened due to the combined action of heavy rainfall and accompanying winds on bare soil surfaces. Basing on the general underestimation of the effect of wind on rain erosion, conventional soil erosion measurements and modeling approaches lack related information to adequately calculate its impact. The presented experimental-empirical approach shows the powerful impact of wind on the erosive potential of rain. The tested soils had properties that characterise three different environments 1. Silty loam of semi-arid Mediterranean dryfarming and fallow, 2. clayey loam of humid agricultural sites and 3. cohesionless sandy substrates as found at coasts, dune fields and drift-sand areas. Erosion was found to increase by a factor of 1.3 to 7.1, depending on site characteristics. Complementary tests with a laboratory procedure were used to quantify explicitly the effect of wind on raindrop erosion as well as the influence of substrate, surface structure and slope on particle displacement. These tests confirmed the impact of wind-driven rain on total erosion rates to be of great importance when compared to all other tested factors. To successfully adapt soil erosion models to near-future challenges of climate change induced rain storms, wind-driven rain is supposed to be introduced into the hazard management agenda.
Alaska/Yukon Geoid Improvement by a Data-Driven Stokes's Kernel Modification Approach
NASA Astrophysics Data System (ADS)
Li, Xiaopeng; Roman, Daniel R.
2015-04-01
Geoid modeling over Alaska of USA and Yukon Canada being a trans-national issue faces a great challenge primarily due to the inhomogeneous surface gravity data (Saleh et al, 2013) and the dynamic geology (Freymueller et al, 2008) as well as its complex geological rheology. Previous study (Roman and Li 2014) used updated satellite models (Bruinsma et al 2013) and newly acquired aerogravity data from the GRAV-D project (Smith 2007) to capture the gravity field changes in the targeting areas primarily in the middle-to-long wavelength. In CONUS, the geoid model was largely improved. However, the precision of the resulted geoid model in Alaska was still in the decimeter level, 19cm at the 32 tide bench marks and 24cm on the 202 GPS/Leveling bench marks that gives a total of 23.8cm at all of these calibrated surface control points, where the datum bias was removed. Conventional kernel modification methods in this area (Li and Wang 2011) had limited effects on improving the precision of the geoid models. To compensate the geoid miss fits, a new Stokes's kernel modification method based on a data-driven technique is presented in this study. First, the method was tested on simulated data sets (Fig. 1), where the geoid errors have been reduced by 2 orders of magnitude (Fig 2). For the real data sets, some iteration steps are required to overcome the rank deficiency problem caused by the limited control data that are irregularly distributed in the target area. For instance, after 3 iterations, the standard deviation dropped about 2.7cm (Fig 3). Modification at other critical degrees can further minimize the geoid model miss fits caused either by the gravity error or the remaining datum error in the control points.
NASA Astrophysics Data System (ADS)
Deo, Ravinesh C.; Şahin, Mehmet
2015-02-01
The prediction of future drought is an effective mitigation tool for assessing adverse consequences of drought events on vital water resources, agriculture, ecosystems and hydrology. Data-driven model predictions using machine learning algorithms are promising tenets for these purposes as they require less developmental time, minimal inputs and are relatively less complex than the dynamic or physical model. This paper authenticates a computationally simple, fast and efficient non-linear algorithm known as extreme learning machine (ELM) for the prediction of Effective Drought Index (EDI) in eastern Australia using input data trained from 1957-2008 and the monthly EDI predicted over the period 2009-2011. The predictive variables for the ELM model were the rainfall and mean, minimum and maximum air temperatures, supplemented by the large-scale climate mode indices of interest as regression covariates, namely the Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and the Indian Ocean Dipole moment. To demonstrate the effectiveness of the proposed data-driven model a performance comparison in terms of the prediction capabilities and learning speeds was conducted between the proposed ELM algorithm and the conventional artificial neural network (ANN) algorithm trained with Levenberg-Marquardt back propagation. The prediction metrics certified an excellent performance of the ELM over the ANN model for the overall test sites, thus yielding Mean Absolute Errors, Root-Mean Square Errors, Coefficients of Determination and Willmott's Indices of Agreement of 0.277, 0.008, 0.892 and 0.93 (for ELM) and 0.602, 0.172, 0.578 and 0.92 (for ANN) models. Moreover, the ELM model was executed with learning speed 32 times faster and training speed 6.1 times faster than the ANN model. An improvement in the prediction capability of the drought duration and severity by the ELM model was achieved. Based on these results we aver that out of the two machine learning algorithms tested, the ELM was the more expeditious tool for prediction of drought and its related properties.
Agile Model Driven Development of Electronic Health Record-Based Specialty Population Registries.
Kannan, Vaishnavi; Fish, Jason C; Willett, DuWayne L
2016-02-01
The transformation of the American healthcare payment system from fee-for-service to value-based care increasingly makes it valuable to develop patient registries for specialized populations, to better assess healthcare quality and costs. Recent widespread adoption of Electronic Health Records (EHRs) in the U.S. now makes possible construction of EHR-based specialty registry data collection tools and reports, previously unfeasible using manual chart abstraction. But the complexities of specialty registry EHR tools and measures, along with the variety of stakeholders involved, can result in misunderstood requirements and frequent product change requests, as users first experience the tools in their actual clinical workflows. Such requirements churn could easily stall progress in specialty registry rollout. Modeling a system's requirements and solution design can be a powerful way to remove ambiguities, facilitate shared understanding, and help evolve a design to meet newly-discovered needs. "Agile Modeling" retains these values while avoiding excessive unused up-front modeling in favor of iterative incremental modeling. Using Agile Modeling principles and practices, in calendar year 2015 one institution developed 58 EHR-based specialty registries, with 111 new data collection tools, supporting 134 clinical process and outcome measures, and enrolling over 16,000 patients. The subset of UML and non-UML models found most consistently useful in designing, building, and iteratively evolving EHR-based specialty registries included User Stories, Domain Models, Use Case Diagrams, Decision Trees, Graphical User Interface Storyboards, Use Case text descriptions, and Solution Class Diagrams.
Advances in Optimizing Weather Driven Electric Power Systems.
NASA Astrophysics Data System (ADS)
Clack, C.; MacDonald, A. E.; Alexander, A.; Dunbar, A. D.; Xie, Y.; Wilczak, J. M.
2014-12-01
The importance of weather-driven renewable energies for the United States (and global) energy portfolio is growing. The main perceived problems with weather-driven renewable energies are their intermittent nature, low power density, and high costs. The National Energy with Weather System Simulator (NEWS) is a mathematical optimization tool that allows the construction of weather-driven energy sources that will work in harmony with the needs of the system. For example, it will match the electric load, reduce variability, decrease costs, and abate carbon emissions. One important test run included existing US carbon-free power sources, natural gas power when needed, and a High Voltage Direct Current power transmission network. This study shows that the costs and carbon emissions from an optimally designed national system decrease with geographic size. It shows that with achievable estimates of wind and solar generation costs, that the US could decrease its carbon emissions by up to 80% by the early 2030s, without an increase in electric costs. The key requirement would be a 48 state network of HVDC transmission, creating a national market for electricity not possible in the current AC grid. These results were found without the need for storage. Further, we tested the effect of changing natural gas fuel prices on the optimal configuration of the national electric power system. Another test that was carried out was an extension to global regions. The extension study shows that the same properties found in the US study extend to the most populous regions of the planet. The extra test is a simplified version of the US study, and is where much more research can be carried out. We compare our results to other model results.
Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information
NASA Astrophysics Data System (ADS)
Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam
2016-10-01
In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information theoretic quantity, is a general metric to infer causal connectivity between time-series and is not restricted to a particular class of models unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then proposed a model-based and a data-driven SOZ identification algorithm to identify SOZ from the causal connectivity inferred using model-based and data-driven DI estimators respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by neurologist, the current clinical gold standard. The causal connectivity analysis presented here is the first step towards developing novel non-surgical treatments for epilepsy.
Model implementation for dynamic computation of system cost
NASA Astrophysics Data System (ADS)
Levri, J.; Vaccari, D.
The Advanced Life Support (ALS) Program metric is the ratio of the equivalent system mass (ESM) of a mission based on International Space Station (ISS) technology to the ESM of that same mission based on ALS technology. ESM is a mission cost analog that converts the volume, power, cooling and crewtime requirements of a mission into mass units to compute an estimate of the life support system emplacement cost. Traditionally, ESM has been computed statically, using nominal values for system sizing. However, computation of ESM with static, nominal sizing estimates cannot capture the peak sizing requirements driven by system dynamics. In this paper, a dynamic model for a near-term Mars mission is described. The model is implemented in Matlab/Simulink' for the purpose of dynamically computing ESM. This paper provides a general overview of the crew, food, biomass, waste, water and air blocks in the Simulink' model. Dynamic simulations of the life support system track mass flow, volume and crewtime needs, as well as power and cooling requirement profiles. The mission's ESM is computed, based upon simulation responses. Ultimately, computed ESM values for various system architectures will feed into an optimization search (non-derivative) algorithm to predict parameter combinations that result in reduced objective function values.
Comparative study of disinfectants for use in low-cost gravity driven household water purifiers.
Patil, Rajshree A; Kausley, Shankar B; Balkunde, Pradeep L; Malhotra, Chetan P
2013-09-01
Point-of-use (POU) gravity-driven household water purifiers have been proven to be a simple, low-cost and effective intervention for reducing the impact of waterborne diseases in developing countries. The goal of this study was to compare commonly used water disinfectants for their feasibility of adoption in low-cost POU water purifiers. The potency of each candidate disinfectant was evaluated by conducting a batch disinfection study for estimating the concentration of disinfectant needed to inactivate a given concentration of the bacterial strain Escherichia coli ATCC 11229. Based on the concentration of disinfectant required, the size, weight and cost of a model purifier employing that disinfectant were estimated. Model purifiers based on different disinfectants were compared and disinfectants which resulted in the most safe, compact and inexpensive purifiers were identified. Purifiers based on bromine, tincture iodine, calcium hypochlorite and sodium dichloroisocyanurate were found to be most efficient, cost effective and compact with replacement parts costing US$3.60-6.00 for every 3,000 L of water purified and are thus expected to present the most attractive value proposition to end users.
A Novel Online Data-Driven Algorithm for Detecting UAV Navigation Sensor Faults.
Sun, Rui; Cheng, Qi; Wang, Guanyu; Ochieng, Washington Yotto
2017-09-29
The use of Unmanned Aerial Vehicles (UAVs) has increased significantly in recent years. On-board integrated navigation sensors are a key component of UAVs' flight control systems and are essential for flight safety. In order to ensure flight safety, timely and effective navigation sensor fault detection capability is required. In this paper, a novel data-driven Adaptive Neuron Fuzzy Inference System (ANFIS)-based approach is presented for the detection of on-board navigation sensor faults in UAVs. Contrary to the classic UAV sensor fault detection algorithms, based on predefined or modelled faults, the proposed algorithm combines an online data training mechanism with the ANFIS-based decision system. The main advantages of this algorithm are that it allows real-time model-free residual analysis from Kalman Filter (KF) estimates and the ANFIS to build a reliable fault detection system. In addition, it allows fast and accurate detection of faults, which makes it suitable for real-time applications. Experimental results have demonstrated the effectiveness of the proposed fault detection method in terms of accuracy and misdetection rate.
Testing homogeneity in Weibull-regression models.
Bolfarine, Heleno; Valença, Dione M
2005-10-01
In survival studies with families or geographical units it may be of interest testing whether such groups are homogeneous for given explanatory variables. In this paper we consider score type tests for group homogeneity based on a mixing model in which the group effect is modelled as a random variable. As opposed to hazard-based frailty models, this model presents survival times that conditioned on the random effect, has an accelerated failure time representation. The test statistics requires only estimation of the conventional regression model without the random effect and does not require specifying the distribution of the random effect. The tests are derived for a Weibull regression model and in the uncensored situation, a closed form is obtained for the test statistic. A simulation study is used for comparing the power of the tests. The proposed tests are applied to real data sets with censored data.
Interior noise control prediction study for high-speed propeller-driven aircraft
NASA Technical Reports Server (NTRS)
Rennison, D. C.; Wilby, J. F.; Marsh, A. H.; Wilby, E. G.
1979-01-01
An analytical model was developed to predict the noise levels inside propeller-driven aircraft during cruise at M = 0.8. The model was applied to three study aircraft with fuselages of different size (wide body, narrow body and small diameter) in order to determine the noise reductions required to achieve the goal of an A-weighted sound level which does not exceed 80 dB. The model was then used to determine noise control methods which could achieve the required noise reductions. Two classes of noise control treatments were investigated: add-on treatments which can be added to existing structures, and advanced concepts which would require changes to the fuselage primary structure. Only one treatment, a double wall with limp panel, provided the required noise reductions. Weight penalties associated with the treatment were estimated for the three study aircraft.
A meteorologically-driven yield reduction model for spring and winter wheat
NASA Technical Reports Server (NTRS)
Ravet, F. W.; Cremins, W. J.; Taylor, T. W.; Ashburn, P.; Smika, D.; Aaronson, A. (Principal Investigator)
1983-01-01
A yield reduction model for spring and winter wheat was developed for large-area crop condition assessment. Reductions are expressed in percentage from a base yield and are calculated on a daily basis. The algorithm contains two integral components: a two-layer soil water budget model and a crop calendar routine. Yield reductions associated with hot, dry winds (Sukhovey) and soil moisture stress are determined. Input variables include evapotranspiration, maximum temperature and precipitation; subsequently crop-stage, available water holding percentage and stress duration are evaluated. No specific base yield is required and may be selected by the user; however, it may be generally characterized as the maximum likely to be produced commercially at a location.
Big-Data-Driven Stem Cell Science and Tissue Engineering: Vision and Unique Opportunities.
Del Sol, Antonio; Thiesen, Hans J; Imitola, Jaime; Carazo Salas, Rafael E
2017-02-02
Achieving the promises of stem cell science to generate precise disease models and designer cell samples for personalized therapeutics will require harnessing pheno-genotypic cell-level data quantitatively and predictively in the lab and clinic. Those requirements could be met by developing a Big-Data-driven stem cell science strategy and community. Copyright © 2017 Elsevier Inc. All rights reserved.
Developing a Data Driven Process-Based Model for Remote Sensing of Ecosystem Production
NASA Astrophysics Data System (ADS)
Elmasri, B.; Rahman, A. F.
2010-12-01
Estimating ecosystem carbon fluxes at various spatial and temporal scales is essential for quantifying the global carbon cycle. Numerous models have been developed for this purpose using several environmental variables as well as vegetation indices derived from remotely sensed data. Here we present a data driven modeling approach for gross primary production (GPP) that is based on a process based model BIOME-BGC. The proposed model was run using available remote sensing data and it does not depend on look-up tables. Furthermore, this approach combines the merits of both empirical and process models, and empirical models were used to estimate certain input variables such as light use efficiency (LUE). This was achieved by using remotely sensed data to the mathematical equations that represent biophysical photosynthesis processes in the BIOME-BGC model. Moreover, a new spectral index for estimating maximum photosynthetic activity, maximum photosynthetic rate index (MPRI), is also developed and presented here. This new index is based on the ratio between the near infrared and the green bands (ρ858.5/ρ555). The model was tested and validated against MODIS GPP product and flux measurements from two eddy covariance flux towers located at Morgan Monroe State Forest (MMSF) in Indiana and Harvard Forest in Massachusetts. Satellite data acquired by the Advanced Microwave Scanning Radiometer (AMSR-E) and MODIS were used. The data driven model showed a strong correlation between the predicted and measured GPP at the two eddy covariance flux towers sites. This methodology produced better predictions of GPP than did the MODIS GPP product. Moreover, the proportion of error in the predicted GPP for MMSF and Harvard forest was dominated by unsystematic errors suggesting that the results are unbiased. The analysis indicated that maintenance respiration is one of the main factors that dominate the overall model outcome errors and improvement in maintenance respiration estimation will result in improved GPP predictions. Although there might be a room for improvements in our model outcomes through improved parameterization, our results suggest that such a methodology for running BIOME-BGC model based entirely on routinely available data can produce good predictions of GPP.
Using connectome-based predictive modeling to predict individual behavior from brain connectivity
Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R Todd
2017-01-01
Neuroimaging is a fast developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently or better than most of the existing approaches in brain-behavior prediction. However, because CPM focuses on linear modeling and a purely data-driven driven approach, neuroscientists with limited or no experience in machine learning or optimization would find it easy to implement the protocols. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
Devil is in the details: Using logic models to investigate program process.
Peyton, David J; Scicchitano, Michael
2017-12-01
Theory-based logic models are commonly developed as part of requirements for grant funding. As a tool to communicate complex social programs, theory based logic models are an effective visual communication. However, after initial development, theory based logic models are often abandoned and remain in their initial form despite changes in the program process. This paper examines the potential benefits of committing time and resources to revising the initial theory driven logic model and developing detailed logic models that describe key activities to accurately reflect the program and assist in effective program management. The authors use a funded special education teacher preparation program to exemplify the utility of drill down logic models. The paper concludes with lessons learned from the iterative revision process and suggests how the process can lead to more flexible and calibrated program management. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Data-Driven Approach to Interactive Visualization of Power Grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Jun
Driven by emerging industry standards, electric utilities and grid coordination organizations are eager to seek advanced tools to assist grid operators to perform mission-critical tasks and enable them to make quick and accurate decisions. The emerging field of visual analytics holds tremendous promise for improving the business practices in today’s electric power industry. The conducted investigation, however, has revealed that the existing commercial power grid visualization tools heavily rely on human designers, hindering user’s ability to discover. Additionally, for a large grid, it is very labor-intensive and costly to build and maintain the pre-designed visual displays. This project proposes amore » data-driven approach to overcome the common challenges. The proposed approach relies on developing powerful data manipulation algorithms to create visualizations based on the characteristics of empirically or mathematically derived data. The resulting visual presentations emphasize what the data is rather than how the data should be presented, thus fostering comprehension and discovery. Furthermore, the data-driven approach formulates visualizations on-the-fly. It does not require a visualization design stage, completely eliminating or significantly reducing the cost for building and maintaining visual displays. The research and development (R&D) conducted in this project is mainly divided into two phases. The first phase (Phase I & II) focuses on developing data driven techniques for visualization of power grid and its operation. Various data-driven visualization techniques were investigated, including pattern recognition for auto-generation of one-line diagrams, fuzzy model based rich data visualization for situational awareness, etc. The R&D conducted during the second phase (Phase IIB) focuses on enhancing the prototyped data driven visualization tool based on the gathered requirements and use cases. The goal is to evolve the prototyped tool developed during the first phase into a commercial grade product. We will use one of the identified application areas as an example to demonstrate how research results achieved in this project are successfully utilized to address an emerging industry need. In summary, the data-driven visualization approach developed in this project has proven to be promising for building the next-generation power grid visualization tools. Application of this approach has resulted in a state-of-the-art commercial tool currently being leveraged by more than 60 utility organizations in North America and Europe .« less
NASA Astrophysics Data System (ADS)
Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi
2015-11-01
Large-scale regional evacuation is an important part of national security emergency response plan. Large commercial shopping area, as the typical service system, its emergency evacuation is one of the hot research topics. A systematic methodology based on Cellular Automata with the Dynamic Floor Field and event driven model has been proposed, and the methodology has been examined within context of a case study involving the evacuation within a commercial shopping mall. Pedestrians walking is based on Cellular Automata and event driven model. In this paper, the event driven model is adopted to simulate the pedestrian movement patterns, the simulation process is divided into normal situation and emergency evacuation. The model is composed of four layers: environment layer, customer layer, clerk layer and trajectory layer. For the simulation of movement route of pedestrians, the model takes into account purchase intention of customers and density of pedestrians. Based on evacuation model of Cellular Automata with Dynamic Floor Field and event driven model, we can reflect behavior characteristics of customers and clerks at the situations of normal and emergency evacuation. The distribution of individual evacuation time as a function of initial positions and the dynamics of the evacuation process is studied. Our results indicate that the evacuation model using the combination of Cellular Automata with Dynamic Floor Field and event driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping mall.
Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.
2013-02-01
Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and, (iii) allows to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies (Marina catchment (Singapore) and Canning River (Western Australia)) representing two different morphoclimatic contexts comparatively with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparatively well to the best of the benchmarks (i.e. M5) in both the watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variable provided can be given a physically meaningful interpretation.
Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling
NASA Astrophysics Data System (ADS)
Galelli, S.; Castelletti, A.
2013-07-01
Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and, (iii) allows to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparatively well to the best of the benchmarks (i.e. M5) in both the watersheds, while outperforming the other approaches in terms of computational requirement when adopted on large datasets. In addition, the ranking of the input variable provided can be given a physically meaningful interpretation.
Cost Modeling and Design of Field-Reversed Configuration Fusion Power Plants
NASA Astrophysics Data System (ADS)
Kirtley, David; Slough, John; Helion Team
2017-10-01
The Inductively Driven Liner (IDL) fusion concept uses the magnetically driven implosion of thin (0.5-1 mm) Aluminum hoops to magnetically compress a merged Field-Reversed Configuration (FRC) plasma to fusion conditions. Both the driver and the target have been studied experimentally and theoretically by researchers at Helion Energy, MSNW, and the University of Washington, demonstrating compression fields greater than 100 T and suitable fusion targets. In the presented study, a notional power plant facility using this approach will be described. In addition, a full cost study based on the LLNL Z-IFE and HYLIFE-II studies, the ARIES Tokamak concept, and RAND power plant studies will be described. Finally, the expected capital costs, development requirements, and LCOE for 50 and 500 MW power plants will be given. This analysis includes core FRC plant scaling, metallic liner recycling, radiation shielding, operations, and facilities capital requirements.
ENABLING SMART MANUFACTURING TECHNOLOGIES FOR DECISION-MAKING SUPPORT
Helu, Moneer; Libes, Don; Lubell, Joshua; Lyons, Kevin; Morris, KC
2017-01-01
Smart manufacturing combines advanced manufacturing capabilities and digital technologies throughout the product lifecycle. These technologies can provide decision-making support to manufacturers through improved monitoring, analysis, modeling, and simulation that generate more and better intelligence about manufacturing systems. However, challenges and barriers have impeded the adoption of smart manufacturing technologies. To begin to address this need, this paper defines requirements for data-driven decision making in manufacturing based on a generalized description of decision making. Using these requirements, we then focus on identifying key barriers that prevent the development and use of data-driven decision making in industry as well as examples of technologies and standards that have the potential to overcome these barriers. The goal of this research is to promote a common understanding among the manufacturing community that can enable standardization efforts and innovation needed to continue adoption and use of smart manufacturing technologies. PMID:28649678
On the formation of Friedlander waves in a compressed-gas-driven shock tube
Tasissa, Abiy F.; Hautefeuille, Martin; Fitek, John H.; Radovitzky, Raúl A.
2016-01-01
Compressed-gas-driven shock tubes have become popular as a laboratory-scale replacement for field blast tests. The well-known initial structure of the Riemann problem eventually evolves into a shock structure thought to resemble a Friedlander wave, although this remains to be demonstrated theoretically. In this paper, we develop a semi-analytical model to predict the key characteristics of pseudo blast waves forming in a shock tube: location where the wave first forms, peak over-pressure, decay time and impulse. The approach is based on combining the solutions of the two different types of wave interactions that arise in the shock tube after the family of rarefaction waves in the Riemann solution interacts with the closed end of the tube. The results of the analytical model are verified against numerical simulations obtained with a finite volume method. The model furnishes a rational approach to relate shock tube parameters to desired blast wave characteristics, and thus constitutes a useful tool for the design of shock tubes for blast testing. PMID:27118888
Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency
NASA Astrophysics Data System (ADS)
Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu
2018-03-01
Various computational approaches from rule-based to model-based methods exist to place Sub-Resolution Assist Features (SRAF) in order to increase process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between time of development, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by such related grid dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural network generated SRAF guidance map is then used to place SRAF on full-chip. This is different from our existing full-chip MB-SRAF approach which utilizes a SRAF guidance map (SGM) of mask sensitivity to improve the contrast of optical image at the target pattern edges. In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable, and maintain consistency of SRAF placement . We describe the current status of this machine learning assisted SRAF technique and demonstrate its application to full chip mask synthesis and discuss how it can extend the computational lithography roadmap.
NASA Astrophysics Data System (ADS)
Sherwood, R.; Mutz, D.; Estlin, T.; Chien, S.; Backes, P.; Norris, J.; Tran, D.; Cooper, B.; Rabideau, G.; Mishkin, A.; Maxwell, S.
2001-07-01
This article discusses a proof-of-concept prototype for ground-based automatic generation of validated rover command sequences from high-level science and engineering activities. This prototype is based on ASPEN, the Automated Scheduling and Planning Environment. This artificial intelligence (AI)-based planning and scheduling system will automatically generate a command sequence that will execute within resource constraints and satisfy flight rules. An automated planning and scheduling system encodes rover design knowledge and uses search and reasoning techniques to automatically generate low-level command sequences while respecting rover operability constraints, science and engineering preferences, environmental predictions, and also adhering to hard temporal constraints. This prototype planning system has been field-tested using the Rocky 7 rover at JPL and will be field-tested on more complex rovers to prove its effectiveness before transferring the technology to flight operations for an upcoming NASA mission. Enabling goal-driven commanding of planetary rovers greatly reduces the requirements for highly skilled rover engineering personnel. This in turn greatly reduces mission operations costs. In addition, goal-driven commanding permits a faster response to changes in rover state (e.g., faults) or science discoveries by removing the time-consuming manual sequence validation process, allowing rapid "what-if" analyses, and thus reducing overall cycle times.
Pendergrass, Sarah A; Verma, Shefali S; Holzinger, Emily R; Moore, Carrie B; Wallace, John; Dudek, Scott M; Huggins, Wayne; Kitchner, Terrie; Waudby, Carol; Berg, Richard; McCarty, Catherine A; Ritchie, Marylyn D
2013-01-01
Investigating the association between biobank derived genomic data and the information of linked electronic health records (EHRs) is an emerging area of research for dissecting the architecture of complex human traits, where cases and controls for study are defined through the use of electronic phenotyping algorithms deployed in large EHR systems. For our study, 2580 cataract cases and 1367 controls were identified within the Marshfield Personalized Medicine Research Project (PMRP) Biobank and linked EHR, which is a member of the NHGRI-funded electronic Medical Records and Genomics (eMERGE) Network. Our goal was to explore potential gene-gene and gene-environment interactions within these data for 529,431 single nucleotide polymorphisms (SNPs) with minor allele frequency > 1%, in order to explore higher level associations with cataract risk beyond investigations of single SNP-phenotype associations. To build our SNP-SNP interaction models we utilized a prior-knowledge driven filtering method called Biofilter to minimize the multiple testing burden of exploring the vast array of interaction models possible from our extensive number of SNPs. Using the Biofilter, we developed 57,376 prior-knowledge directed SNP-SNP models to test for association with cataract status. We selected models that required 6 sources of external domain knowledge. We identified 5 statistically significant models with an interaction term with p-value < 0.05, as well as an overall model with p-value < 0.05 associated with cataract status. We also conducted gene-environment interaction analyses for all GWAS SNPs and a set of environmental factors from the PhenX Toolkit: smoking, UV exposure, and alcohol use; these environmental factors have been previously associated with the formation of cataracts. We found a total of 288 models that exhibit an interaction term with a p-value ≤ 1×10(-4) associated with cataract status. Our results show these approaches enable advanced searches for epistasis and gene-environment interactions beyond GWAS, and that the EHR based approach provides an additional source of data for seeking these advanced explanatory models of the etiology of complex disease/outcome such as cataracts.
Piezoelectric-based actuators for improved tractor-trailer performance (Conference Presentation)
NASA Astrophysics Data System (ADS)
Menicovich, David; Amitay, Michael; Gallardo, Daniele
2017-04-01
The application of piezo-electrically-driven synthetic-jet-based active flow control to reduce drag on tractor-trailers and to improve thermal mixing in refrigerated trailers was explored on full-scale tests. The active flow control technique that is being used relies on a modular system comprised of distributed, small, highly efficient actuators. These actuators, called synthetic jets, are jets that are synthesized at the edge of an orifice by a periodic motion of a piezoelectric diaphragm(s) mounted on one (or more) walls of a sealed cavity. The synthetic jet is zero net mass flux (ZNMF), but it allows momentum transfer to flow. It is typically driven near diaphragm and/or cavity resonance, and therefore, small electric input [O(10W)] is required. Another advantage of this actuator is that no plumbing is required. The system doesn't require changes to the body of the truck, can be easily reconfigured to various types of vehicles, and consumes small amounts of electrical power from the existing electrical system of the truck. The actuators are operated in a closed feedback loop based on inputs received from the tractor's electronic control unit, various system components and environmental sensors. The data are collected and processed on-board and transmitted to a cloud-based data management platform for further big data analytics and diagnostics. The system functions as a smart connected product through the interchange of data between the physical truck-mounted system and its cloud platform.
NASA Astrophysics Data System (ADS)
Asaumi, Hiroyoshi; Fujimoto, Hiroshi
Ball screw driven stages are used for industrial equipments such as machine tools and semiconductor equipments. Fast and precise positioning is necessary to enhance productivity and microfabrication technology of the system. The rolling friction of the ball screw driven stage deteriorate the positioning performance. Therefore, the control system based on the friction model is necessary. In this paper, we propose variable natural length spring model (VNLS model) as the friction model. VNLS model is simple and easy to implement as friction controller. Next, we propose multi variable natural length spring model (MVNLS model) as the friction model. MVNLS model can represent friction characteristic of the stage precisely. Moreover, the control system based on MVNLS model and disturbance observer is proposed. Finally, the simulation results and experimental results show the advantages of the proposed method.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
NASA Astrophysics Data System (ADS)
Kim, Min-Suk; Won, Hwa-Yeon; Jeong, Jong-Mun; Böcker, Paul; Vergaij-Huizer, Lydia; Kupers, Michiel; Jovanović, Milenko; Sochal, Inez; Ryan, Kevin; Sun, Kyu-Tae; Lim, Young-Wan; Byun, Jin-Moo; Kim, Gwang-Gon; Suh, Jung-Joon
2016-03-01
In order to optimize yield in DRAM semiconductor manufacturing for 2x nodes and beyond, the (processing induced) overlay fingerprint towards the edge of the wafer needs to be reduced. Traditionally, this is achieved by acquiring denser overlay metrology at the edge of the wafer, to feed field-by-field corrections. Although field-by-field corrections can be effective in reducing localized overlay errors, the requirement for dense metrology to determine the corrections can become a limiting factor due to a significant increase of metrology time and cost. In this study, a more cost-effective solution has been found in extending the regular correction model with an edge-specific component. This new overlay correction model can be driven by an optimized, sparser sampling especially at the wafer edge area, and also allows for a reduction of noise propagation. Lithography correction potential has been maximized, with significantly less metrology needs. Evaluations have been performed, demonstrating the benefit of edge models in terms of on-product overlay performance, as well as cell based overlay performance based on metrology-to-cell matching improvements. Performance can be increased compared to POR modeling and sampling, which can contribute to (overlay based) yield improvement. Based on advanced modeling including edge components, metrology requirements have been optimized, enabling integrated metrology which drives down overall metrology fab footprint and lithography cycle time.
NASA Astrophysics Data System (ADS)
Börner, Michael; Manfletti, Chiara; Kroupa, Gerhard; Oschwald, Michael
2017-09-01
In search of reliable and light-weight ignition systems for re-ignitable upper stage engines, a laser ignition system was adapted and tested on an experimental combustion chamber for propellant injection into low combustion chamber pressures of 50-80 mbar. The injector head pattern consisted of five coaxial injector elements. Both laser-ablation-driven ignition and laser-plasma-driven ignition were tested for the propellant combination liquid oxygen and gaseous hydrogen. The 122 test runs demonstrated the reliability of the ignition system for different ignition configurations and negligible degradation due to testing. For the laser-plasma-driven scheme, the minimum laser pulse energies needed for 100% ignition probability were found to decrease when increasing the distance of the ignition location from the injector faceplate, with a minimum of 2.6 mJ. For laser-ablation-driven ignition, the minimum pulse energy was found to be independent of the ablation material tested and was about 1.7 mJ. The ignition process was characterized using both high-speed Schlieren and OH* emission diagnostics. Based on these findings and on the increased fiber-based pulse transport capabilities recently published, new ignition system configurations for space propulsion systems relying on fiber-based pulse delivery are formulated. If the laser ignition system delivers enough pulse energy, the laser-plasma-driven configuration represents the more versatile configuration. If the laser ignition pulse power is limited, the application of laser-ablation-driven ignition is an option to realize ignition, but implies restrictions concerning the location of ignition.
Agile Model Driven Development of Electronic Health Record-Based Specialty Population Registries
Kannan, Vaishnavi; Fish, Jason C.; Willett, DuWayne L.
2018-01-01
The transformation of the American healthcare payment system from fee-for-service to value-based care increasingly makes it valuable to develop patient registries for specialized populations, to better assess healthcare quality and costs. Recent widespread adoption of Electronic Health Records (EHRs) in the U.S. now makes possible construction of EHR-based specialty registry data collection tools and reports, previously unfeasible using manual chart abstraction. But the complexities of specialty registry EHR tools and measures, along with the variety of stakeholders involved, can result in misunderstood requirements and frequent product change requests, as users first experience the tools in their actual clinical workflows. Such requirements churn could easily stall progress in specialty registry rollout. Modeling a system’s requirements and solution design can be a powerful way to remove ambiguities, facilitate shared understanding, and help evolve a design to meet newly-discovered needs. “Agile Modeling” retains these values while avoiding excessive unused up-front modeling in favor of iterative incremental modeling. Using Agile Modeling principles and practices, in calendar year 2015 one institution developed 58 EHR-based specialty registries, with 111 new data collection tools, supporting 134 clinical process and outcome measures, and enrolling over 16,000 patients. The subset of UML and non-UML models found most consistently useful in designing, building, and iteratively evolving EHR-based specialty registries included User Stories, Domain Models, Use Case Diagrams, Decision Trees, Graphical User Interface Storyboards, Use Case text descriptions, and Solution Class Diagrams. PMID:29750222
Acoustic flight test of the Piper Lance
DOT National Transportation Integrated Search
1986-12-01
Research is being conducted to refine current noise regulation of propeller-driven small airplanes. Studies are examining the prospect of a substituting a takeoff procedure of equal stringency for the level flyover certification test presently requir...
The Nett Warrior System: A Case Study for the Acquisition of Soldier Systems
2011-12-15
The evolution of wearable computers continued as an open system-bus wearable design was established. The success of NW will depend on the program's ability to incorporate soldier-driven design requirements, commercial technology, and thorough system testing.
Shlizerman, Eli; Riffell, Jeffrey A.; Kutz, J. Nathan
2014-01-01
The antennal lobe (AL), the olfactory processing center in insects, is able to process stimuli into distinct neural activity patterns, called olfactory neural codes. To model their dynamics we perform multichannel recordings from the projection neurons in the AL driven by different odorants. We then derive a dynamic neuronal network from the electrophysiological data. The network consists of lateral-inhibitory neurons and excitatory neurons (modeled as firing-rate units), and is capable of producing unique olfactory neural codes for the tested odorants. To construct the network, we (1) design a projection, an odor space, for the neural recordings from the AL, which discriminates between distinct odorant trajectories; (2) characterize scent recognition, i.e., decision-making based on olfactory signals; and (3) infer the wiring of the neural circuit, the connectome of the AL. We show that the constructed model is consistent with biological observations, such as contrast enhancement and robustness to noise. The study suggests a data-driven approach to answer a key biological question in identifying how lateral inhibitory neurons can be wired to excitatory neurons to permit robust activity patterns. PMID:25165442
XMI2USE: A Tool for Transforming XMI to USE Specifications
NASA Astrophysics Data System (ADS)
Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.
The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.
Zhou, Ping; Guo, Dongwei; Wang, Hong; Chai, Tianyou
2017-09-29
Optimal operation of an industrial blast furnace (BF) ironmaking process largely depends on a reliable measurement of molten iron quality (MIQ) indices, which are not feasible using the conventional sensors. This paper proposes a novel data-driven robust modeling method for the online estimation and control of MIQ indices. First, a nonlinear autoregressive exogenous (NARX) model is constructed for the MIQ indices to completely capture the nonlinear dynamics of the BF process. Then, considering that the standard least-squares support vector regression (LS-SVR) cannot directly cope with the multioutput problem, a multitask transfer learning is proposed to design a novel multioutput LS-SVR (M-LS-SVR) for the learning of the NARX model. Furthermore, a novel M-estimator is proposed to reduce the interference of outliers and improve the robustness of the M-LS-SVR model. Since the weights of different outlier data are properly given by the weight function, their corresponding contributions on modeling can properly be distinguished, thus a robust modeling result can be achieved. Finally, a novel multiobjective evaluation index on the modeling performance is developed by comprehensively considering the root-mean-square error of modeling and the correlation coefficient on trend fitting, based on which the nondominated sorting genetic algorithm II is used to globally optimize the model parameters. Both experiments using industrial data and industrial applications illustrate that the proposed method can eliminate the adverse effect caused by the fluctuation of data in BF process efficiently. This indicates its stronger robustness and higher accuracy. Moreover, control testing shows that the developed model can be well applied to realize data-driven control of the BF process.
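As a rough sketch of the modeling pipeline described above (not the authors' implementation), the following builds NARX regressors from input-output data and fits them with a multi-output kernel ridge regressor as a stand-in for the M-LS-SVR; the lag orders, kernel, and toy data are assumptions, and the M-estimator weighting and NSGA-II hyperparameter search are omitted.

```python
import numpy as np

def narx_regressors(u, y, nu=2, ny=2):
    """Build NARX input matrix from scalar input u and multi-output y (T x m).
    Each row is [y[t-ny..t-1], u[t-nu..t-1]] and predicts y[t]. Lag orders are
    illustrative assumptions."""
    X, Y = [], []
    for t in range(max(nu, ny), len(u)):
        X.append(np.concatenate([y[t - ny:t].ravel(), u[t - nu:t].ravel()]))
        Y.append(y[t])
    return np.array(X), np.array(Y)

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, Y, lam=1e-3, gamma=0.5):
    """Kernel ridge regression used as a stand-in for the multi-output LS-SVR."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), Y)

# toy data: one manipulated input, two 'quality indices'
rng = np.random.default_rng(0)
u = rng.normal(size=300)
y = np.zeros((300, 2))
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + np.array([0.5, -0.3]) * u[t - 1] + 0.01 * rng.normal(size=2)

X, Y = narx_regressors(u, y)
alpha = fit_kernel_ridge(X, Y)
Y_hat = rbf_kernel(X, X) @ alpha      # in-sample prediction of the quality indices
```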
NASA Technical Reports Server (NTRS)
Eluszkiewicz, Janusz; Nehrkorn, Thomas; Wofsy, Steven C.; Matross, Daniel; Gerbig, Christoph; Lin, John C.; Freitas, Saulo; Longo, Marcos; Andrews, Arlyn E.; Peters, Wouter
2007-01-01
This paper evaluates simulations of atmospheric CO2 measured in 2004 at continental surface and airborne receptors, intended to test the capability to use data with high temporal and spatial resolution for analyses of carbon sources and sinks at regional and continental scales. The simulations were performed using the Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by the Weather Research and Forecasting (WRF) model, and linked to surface fluxes from the satellite-driven Vegetation Photosynthesis and Respiration Model (VPRM). The simulations provide detailed representations of hourly CO2 tower data and reproduce the shapes of airborne vertical profiles with high fidelity. WRF meteorology gives superior model performance compared with standard meteorological products, and the impact of including WRF convective mass fluxes in the STILT trajectory calculations is significant in individual cases. Important biases in the simulation are associated with the nighttime CO2 build-up and subsequent morning transition to convective conditions, and with errors in the advected lateral boundary condition. Comparison of STILT simulations driven by the WRF model against those driven by the Brazilian variant of the Regional Atmospheric Modeling System (BRAMS) shows that model-to-model differences are smaller than those between an individual transport model and observations, pointing to systematic errors in the simulated transport. Future developments in the WRF model's data assimilation capabilities, basic research into the fundamental aspects of trajectory calculations, and intercomparison studies involving other transport models are possible avenues for reducing these errors. Overall, the STILT/WRF/VPRM framework offers a powerful tool for continental and regional scale carbon flux estimates.
NASA Astrophysics Data System (ADS)
Lowman, L.; Barros, A. P.
2016-12-01
Representation of plant photosynthesis in modeling studies requires phenologic indicators to scale carbon assimilation by plants. These indicators are typically the fraction of photosynthetically active radiation (FPAR) and leaf area index (LAI) which represent plant responses to light and water availability, as well as temperature constraints. In this study, a prognostic phenology model based on the growing season index is adapted to determine the phenologic indicators of LAI and FPAR at the sub-daily scale based on meteorological and soil conditions. Specifically, we directly model vegetation green-up and die-off responses to temperature, vapor pressure deficit, soil water potential, and incoming solar radiation. The indices are based on the properties of individual plant functional types, driven by observational data and prior modeling applications. First, we describe and test the sensitivity of the carbon uptake response to predicted phenology for different vegetation types. Second, the prognostic phenology model is incorporated into a land-surface hydrology model, the Duke Coupled Hydrology Model with Prognostic Vegetation (DCHM-PV), to demonstrate the impact of dynamic phenology on modeled carbon assimilation rates and hydrologic feedbacks. Preliminary results show reduced carbon uptake rates when incorporating a prognostic phenology model that match well against the eddy-covariance flux tower observations. Additionally, grassland vegetation shows the most variability in LAI and FPAR tied to meteorological and soil conditions. These results highlight the need to incorporate vegetation-specific responses to water limitation in order to accurately estimate the terrestrial carbon storage component of the global carbon budget.
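A minimal sketch of a growing-season-index style phenology indicator is shown below: each meteorological or soil driver is mapped to a 0-1 ramp and their product scales LAI between dormant and peak values. The threshold values and the LAI range are illustrative assumptions rather than the calibrated DCHM-PV parameters.

```python
import numpy as np

def ramp(x, lo, hi):
    """Linear ramp from 0 (at lo) to 1 (at hi), clipped outside the interval."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

def growing_season_index(tmin, vpd, swp, rad):
    """Sub-daily growing-season-index style phenology indicator.

    tmin : minimum air temperature (deg C)
    vpd  : vapour pressure deficit (Pa); higher VPD suppresses greenness
    swp  : soil water potential (MPa, negative = drier)
    rad  : incoming shortwave radiation (W m-2)

    Thresholds below are illustrative, not the authors' calibrated values.
    """
    f_t   = ramp(tmin, -2.0, 5.0)
    f_vpd = 1.0 - ramp(vpd, 900.0, 4100.0)
    f_swp = ramp(swp, -2.0, -0.5)
    f_rad = ramp(rad, 50.0, 250.0)
    return f_t * f_vpd * f_swp * f_rad

# LAI (and similarly FPAR) can then be scaled between dormant and peak values
gsi = growing_season_index(tmin=8.0, vpd=1200.0, swp=-0.8, rad=300.0)
lai = 0.5 + (4.5 - 0.5) * gsi   # illustrative grassland LAI range
```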
Tjiam, Irene M; Schout, Barbara M A; Hendrikx, Ad J M; Scherpbier, Albert J J M; Witjes, J Alfred; van Merriënboer, Jeroen J G
2012-01-01
Most studies of simulator-based surgical skills training have focused on the acquisition of psychomotor skills, but surgical procedures are complex tasks requiring both psychomotor and cognitive skills. As skills training is modelled on expert performance consisting partly of unconscious automatic processes that experts are not always able to explicate, simulator developers should collaborate with educational experts and physicians in developing efficient and effective training programmes. This article presents an approach to designing simulator-based skill training comprising cognitive task analysis integrated with instructional design according to the four-component instructional design (4C/ID) model. This theory-driven approach is illustrated by a description of how it was used in the development of simulator-based training for the nephrostomy procedure.
Helsens, Kenny; Colaert, Niklaas; Barsnes, Harald; Muth, Thilo; Flikka, Kristian; Staes, An; Timmerman, Evy; Wortelkamp, Steffi; Sickmann, Albert; Vandekerckhove, Joël; Gevaert, Kris; Martens, Lennart
2010-03-01
MS-based proteomics produces large amounts of mass spectra that require processing, identification and possibly quantification before interpretation can be undertaken. High-throughput studies require automation of these various steps, and management of the data in association with the results obtained. We here present ms_lims (http://genesis.UGent.be/ms_lims), a freely available, open-source system based on a central database to automate data management and processing in MS-driven proteomics analyses.
Shim, Hongseok; Kim, Ji Hyun; Kim, Chan Yeong; Hwang, Sohyun; Kim, Hyojin; Yang, Sunmo; Lee, Ji Eun; Lee, Insuk
2016-11-16
Whole exome sequencing (WES) accelerates disease gene discovery using rare genetic variants, but further statistical and functional evidence is required to avoid false-discovery. To complement variant-driven disease gene discovery, here we present function-driven disease gene discovery in zebrafish (Danio rerio), a promising human disease model owing to its high anatomical and genomic similarity to humans. To facilitate zebrafish-based function-driven disease gene discovery, we developed a genome-scale co-functional network of zebrafish genes, DanioNet (www.inetbio.org/danionet), which was constructed by Bayesian integration of genomics big data. Rigorous statistical assessment confirmed the high prediction capacity of DanioNet for a wide variety of human diseases. To demonstrate the feasibility of the function-driven disease gene discovery using DanioNet, we predicted genes for ciliopathies and performed experimental validation for eight candidate genes. We also validated the existence of heterozygous rare variants in the candidate genes of individuals with ciliopathies yet not in controls derived from the UK10K consortium, suggesting that these variants are potentially involved in enhancing the risk of ciliopathies. These results showed that an integrated genomics big data for a model animal of diseases can expand our opportunity for harnessing WES data in disease gene discovery. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Histone acetyltransferase activity of MOF is required for MLL-AF9 leukemogenesis
Valerio, Daria G.; Xu, Haiming; Chen, Chun-Wei; Hoshii, Takayuki; Eisold, Meghan E.; Delaney, Christopher; Cusan, Monica; Deshpande, Aniruddha J.; Huang, Chun-Hao; Lujambio, Amaia; Zheng, George; Zuber, Johannes; Pandita, Tej K.; Lowe, Scott W.; Armstrong, Scott A.
2017-01-01
Chromatin-based mechanisms offer therapeutic targets in acute myeloid leukemia (AML) that are of great current interest. In this study, we conducted an RNAi-based screen to identify druggable chromatin regulator-based targets in leukemias marked by oncogenic rearrangements of the MLL gene. In this manner, we discovered the H4K16 histone acetyltransferase (HAT) MOF to be important for leukemia cell growth. Conditional deletion of Mof in a mouse model of MLL-AF9-driven leukemogenesis reduced tumor burden and prolonged host survival. RNA sequencing showed an expected downregulation of genes within DNA damage repair pathways that are controlled by MOF, as correlated with a significant increase in γH2AX nuclear foci in Mof-deficient MLL-AF9 tumor cells. In parallel, Mof loss also impaired global H4K16 acetylation in the tumor cell genome. Rescue experiments with catalytically inactive mutants of MOF showed that its enzymatic activity was required to maintain cancer pathogenicity. In support of the role of MOF in sustaining H4K16 acetylation, a small molecule inhibitor of the HAT component MYST blocked the growth of both murine and human MLL-AF9 leukemia cell lines. Furthermore, Mof inactivation suppressed leukemia development in a NUP98-HOXA9-driven AML model. Taken together, our results establish that the HAT activity of MOF is required to sustain MLL-AF9 leukemia and may be important for multiple AML subtypes. Blocking this activity is sufficient to stimulate DNA damage, offering a rationale to pursue MOF inhibitors as a targeted approach to treat MLL-rearranged leukemias. PMID:28202522
NASA Technical Reports Server (NTRS)
Valdez, Thomas I.; Billings, Keith J.; Kisor, Adam; Bennett, William R.; Jakupca, Ian J.; Burke, Kenneth; Hoberecht, Mark A.
2012-01-01
Regenerative fuel cells provide a pathway to energy storage systems that are game changers for NASA missions. The fuel cell/electrolysis MEA performance requirements of 0.92 V/1.44 V at 200 mA/cm2 can be met. Fuel cell MEAs have been incorporated into advanced NFT stacks, and electrolyzer stack development is in progress. Fuel cell MEA performance is a strong function of membrane selection, and membrane selection will be driven by durability requirements. Electrolyzer MEA performance is catalyst driven; catalyst selection will likewise be driven by durability requirements. Round-trip efficiency, based on cell performance, is approximately 65%.
A Data System for a Rapid Evaluation Class of Subscale Aerial Vehicle
NASA Technical Reports Server (NTRS)
Hogge, Edward F.; Quach, Cuong C.; Vazquez, Sixto L.; Hill, Boyd L.
2011-01-01
A low-cost, rapid-evaluation test aircraft is used to develop and test airframe damage diagnosis algorithms at Langley Research Center as part of NASA's Aviation Safety Program. The remotely operated subscale aircraft is instrumented with sensors to monitor structural response during flight. Data is collected for good and compromised airframe configurations to develop data-driven models for diagnosing airframe state. This paper describes the data acquisition system (DAS) of the rapid-evaluation test aircraft. A PC/104 form factor DAS was developed to allow use of MATLAB/Simulink simulation code in Langley's existing subscale aircraft flight test infrastructure. The small scale of the test aircraft permitted laboratory testing of the actual flight article under controlled conditions. The low cost and modularity of the DAS permitted adaptation to various flight experiment requirements.
A Window Into Clinical Next-Generation Sequencing-Based Oncology Testing Practices.
Nagarajan, Rakesh; Bartley, Angela N; Bridge, Julia A; Jennings, Lawrence J; Kamel-Reid, Suzanne; Kim, Annette; Lazar, Alexander J; Lindeman, Neal I; Moncur, Joel; Rai, Alex J; Routbort, Mark J; Vasalos, Patricia; Merker, Jason D
2017-12-01
- Detection of acquired variants in cancer is a paradigm of precision medicine, yet little has been reported about clinical laboratory practices across a broad range of laboratories. - To use College of American Pathologists proficiency testing survey results to report on the results from surveys on next-generation sequencing-based oncology testing practices. - College of American Pathologists proficiency testing survey results from more than 250 laboratories currently performing molecular oncology testing were used to determine laboratory trends in next-generation sequencing-based oncology testing. - These presented data provide key information about the number of laboratories that currently offer or are planning to offer next-generation sequencing-based oncology testing. Furthermore, we present data from 60 laboratories performing next-generation sequencing-based oncology testing regarding specimen requirements and assay characteristics. The findings indicate that most laboratories are performing tumor-only targeted sequencing to detect single-nucleotide variants and small insertions and deletions, using desktop sequencers and predesigned commercial kits. Despite these trends, a diversity of approaches to testing exists. - This information should be useful to further inform a variety of topics, including national discussions involving clinical laboratory quality systems, regulation and oversight of next-generation sequencing-based oncology testing, and precision oncology efforts in a data-driven manner.
MBSE-Driven Visualization of Requirements Allocation and Traceability
NASA Technical Reports Server (NTRS)
Jackson, Maddalena; Wilkerson, Marcus
2016-01-01
In a Model Based Systems Engineering (MBSE) infusion effort, there is usually a concerted effort to define the information architecture, ontologies, and patterns that drive the construction and architecture of MBSE models, but less attention is given to the logical follow-on of that effort: how to practically leverage the resulting semantic richness of a well-formed populated model to enable systems engineers to work more effectively, as MBSE promises. While ontologies and patterns are absolutely necessary, an MBSE effort must also design and provide practical demonstration of value (through human-understandable representations of model data that address stakeholder concerns) or it will not succeed. This paper will discuss opportunities that exist for visualization in making the richness of a well-formed model accessible to stakeholders, specifically stakeholders who rely on the model for their day-to-day work. This paper will discuss the value added by MBSE-driven visualizations in the context of a small case study of interactive visualizations created and used on NASA's proposed Europa Mission. The case study visualizations were created for the purpose of understanding and exploring targeted aspects of requirements flow, allocation, and comparing the structure of that flow-down to a conceptual project decomposition. The work presented in this paper is an example of a product that leverages the richness and formalisms of our knowledge representation while also responding to the quality attributes systems engineers (SEs) care about.
CEREF: A hybrid data-driven model for forecasting annual streamflow from a socio-hydrological system
NASA Astrophysics Data System (ADS)
Zhang, Hongbo; Singh, Vijay P.; Wang, Bin; Yu, Yinghao
2016-09-01
Hydrological forecasting is complicated by flow regime alterations in a coupled socio-hydrologic system, encountering increasingly non-stationary, nonlinear and irregular changes, which make decision support difficult for future water resources management. Currently, many hybrid data-driven models, based on the decomposition-prediction-reconstruction principle, have been developed to improve the ability to make predictions of annual streamflow. However, several problems require further investigation, chief among which is that the direction of the trend component decomposed from an annual streamflow series is always difficult to ascertain. In this paper, a hybrid data-driven model was proposed to address this issue, which combined empirical mode decomposition (EMD), radial basis function neural networks (RBFNN), and an external forces (EF) variable; it is also called the CEREF model. The hybrid model employed EMD for decomposition and RBFNN for intrinsic mode function (IMF) forecasting, and determined the future trend component direction by regression on EF, with basin water demand representing the social component in the socio-hydrologic system. The Wuding River basin was considered for the case study, and two standard statistical measures, root mean squared error (RMSE) and mean absolute error (MAE), were used to evaluate the performance of the CEREF model and compare it with other models: the autoregressive (AR), RBFNN and EMD-RBFNN. Results indicated that the CEREF model had lower RMSE and MAE statistics, by 42.8% and 7.6%, respectively, than the other models, and provided a superior alternative for forecasting annual runoff in the Wuding River basin. Moreover, the CEREF model can enlarge the effective intervals of streamflow forecasting compared to the EMD-RBFNN model by introducing the water demand planned by the government department to improve long-term prediction accuracy. In addition, we considered the high-frequency component, a frequent subject of concern in EMD-based forecasting, and results showed that removing the high-frequency component is an effective measure to improve forecasting precision and is suggested for use with the CEREF model for better performance. Finally, the study concluded that the CEREF model can be used to forecast non-stationary annual streamflow change as a co-evolution of hydrologic and social systems with better accuracy, and that the modification of removing the high-frequency component can further improve its performance. It should be noted that the CEREF model is beneficial for data-driven hydrologic forecasting in complex socio-hydrologic systems and, as a simple data-driven socio-hydrologic forecasting model, deserves more attention.
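The decomposition-prediction-reconstruction idea behind the CEREF model can be sketched as follows: a simple moving-average split stands in for EMD, an RBF network forecasts the fluctuation component, and the trend component is regressed on a water-demand series acting as the external-force variable. All hyperparameters and the toy series are assumptions for illustration only.

```python
import numpy as np

def simple_decompose(x, window=5):
    """Stand-in for EMD: split the series into a smooth 'trend' and a
    'fluctuation' component. A real CEREF implementation would use empirical
    mode decomposition to obtain several IMFs plus a residue."""
    pad = np.pad(x, (window // 2, window // 2), mode="edge")
    trend = np.convolve(pad, np.ones(window) / window, mode="valid")
    return x - trend, trend

def rbf_forecast(series, lags=3, centers=10, width=1.0, ridge=1e-3):
    """One-step-ahead RBF-network forecaster (illustrative hyperparameters)."""
    X = np.array([series[t - lags:t] for t in range(lags, len(series))])
    y = series[lags:]
    C = X[np.linspace(0, len(X) - 1, centers, dtype=int)]
    phi = lambda A: np.exp(-((A[:, None, :] - C[None]) ** 2).sum(-1) / (2 * width**2))
    W = np.linalg.solve(phi(X).T @ phi(X) + ridge * np.eye(centers), phi(X).T @ y)
    return phi(series[-lags:][None]) @ W      # forecast for the next step

rng = np.random.default_rng(1)
flow = 10 + 0.02 * np.arange(60) + rng.normal(0, 0.5, 60)   # toy annual runoff
demand = 5 + 0.03 * np.arange(60)                           # external-force proxy
fluct, trend = simple_decompose(flow)

# fluctuation: forecast with the RBF network; trend: regress on water demand (EF)
next_fluct = rbf_forecast(fluct)
a, b = np.polyfit(demand, trend, 1)
next_trend = a * (demand[-1] + 0.03) + b
forecast = float(next_fluct + next_trend)                   # reconstruction step
```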
76 FR 3604 - Information Collection; Qualified Products List for Engine Driven Pumps
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... levels. 2. Reliability and endurance requirements. These requirements include a 100-hour endurance test... evaluated to meet specific requirements related to safety, effectiveness, efficiency, and reliability of the... of the collection of information, including the validity of the methodology and assumptions used; (3...
NASA Astrophysics Data System (ADS)
Shi, Y.; Davis, K. J.; Eissenstat, D. M.; Kaye, J. P.; Duffy, C.; Yu, X.; He, Y.
2014-12-01
Belowground carbon processes are affected by soil moisture and soil temperature, but current biogeochemical models are 1-D and cannot resolve topographically driven hill-slope soil moisture patterns, and cannot simulate the nonlinear effects of soil moisture on carbon processes. Coupling spatially-distributed physically-based hydrologic models with biogeochemical models may yield significant improvements in the representation of topographic influence on belowground C processes. We will couple the Flux-PIHM model to the Biome-BGC (BBGC) model. Flux-PIHM is a coupled physically-based land surface hydrologic model, which incorporates a land-surface scheme into the Penn State Integrated Hydrologic Model (PIHM). The land surface scheme is adapted from the Noah land surface model. Because PIHM is capable of simulating lateral water flow and deep groundwater, Flux-PIHM is able to represent the link between groundwater and the surface energy balance, as well as the land surface heterogeneities caused by topography. The coupled Flux-PIHM-BBGC model will be tested at the Susquehanna/Shale Hills critical zone observatory (SSHCZO). The abundant observations, including eddy covariance fluxes, soil moisture, groundwater level, sap flux, stream discharge, litterfall, leaf area index, above-ground carbon stock, and soil carbon efflux, make SSHCZO an ideal test bed for the coupled model. In the coupled model, each Flux-PIHM model grid cell will be coupled to a BBGC cell. Flux-PIHM will provide BBGC with soil moisture and soil temperature information, while BBGC provides Flux-PIHM with leaf area index. Preliminary results show that when Biome-BGC is driven by the PIHM-simulated soil moisture pattern, the simulated soil carbon is clearly impacted by topography.
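The coupling strategy described above can be sketched as a simple exchange loop in which placeholder functions stand in for the Flux-PIHM and Biome-BGC computations: the hydrologic step returns soil moisture and temperature given the current LAI, and the biogeochemical step returns carbon uptake and an updated LAI. All numbers and functional forms are illustrative assumptions.

```python
def hydrology_step(state, lai):
    """Placeholder for one Flux-PIHM grid-cell step: returns soil moisture and
    soil temperature given the current LAI (higher LAI -> more transpiration)."""
    sm = max(0.05, state["soil_moisture"] + 0.01 - 0.004 * lai)
    st = 12.0 + 0.5 * lai                      # illustrative energy-balance proxy
    return {"soil_moisture": sm, "soil_temp": st}

def biogeochem_step(soil_moisture, soil_temp, lai):
    """Placeholder for one Biome-BGC step: updates carbon uptake and LAI from
    the hydrologic forcing supplied by the land-surface model."""
    assim = 10.0 * soil_moisture * max(0.0, 1.0 - abs(soil_temp - 20.0) / 20.0)
    lai = min(6.0, lai + 0.01 * assim - 0.02)  # growth minus turnover
    return assim, lai

state, lai = {"soil_moisture": 0.30, "soil_temp": 12.0}, 2.0
for day in range(365):
    state = hydrology_step(state, lai)                       # PIHM -> soil moisture/temp
    assim, lai = biogeochem_step(state["soil_moisture"],
                                 state["soil_temp"], lai)    # BBGC -> LAI fed back
```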
NASA Astrophysics Data System (ADS)
Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.
2018-03-01
This paper presents a data-driven computational model for simulating unsteady turbulent flows, where sparse measurement data is available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.
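The adaptation loop can be illustrated with a toy stand-in for the CFD solve: a closure coefficient is repeatedly adjusted to reduce the misfit between simulated and "measured" velocities at 24 sparse radial locations. The finite-difference update below is a simplified surrogate for the retrospective cost adaptation algorithm, and the solver and parameter values are assumptions.

```python
import numpy as np

def toy_rans_solve(beta_star, r):
    """Placeholder for a RANS k-omega solve: returns mean axial velocity at
    radial locations r, with an artificial dependence on the closure
    coefficient beta_star. A real implementation would call the CFD solver."""
    return (1.0 - r**2) * (1.0 + 0.3 * (0.09 - beta_star) / 0.09)

r_meas = np.linspace(0.05, 0.95, 24)                 # 24 sparse measurement points
u_meas = 1.0 - r_meas**2                             # "measured" velocity profile

beta_star, step, eps = 0.05, 0.002, 1e-4             # deliberately wrong initial value
for it in range(50):
    # retrospective-cost-style update, approximated here by a finite-difference
    # gradient of the measurement misfit with respect to the coefficient
    err = toy_rans_solve(beta_star, r_meas) - u_meas
    err_p = toy_rans_solve(beta_star + eps, r_meas) - u_meas
    grad = (np.sum(err_p**2) - np.sum(err**2)) / eps
    beta_star -= step * grad
print(round(beta_star, 3))                            # converges toward 0.09
```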
Framework for a clinical information system.
Van De Velde, R; Lansiers, R; Antonissen, G
2002-01-01
The design and implementation of Clinical Information System architecture is presented. This architecture has been developed and implemented based on components following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology featuring centralised and departmental clinical information systems as the back-end store for all clinical data are used. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and reuse of both data and business logic. Scalability as well as adaptability to constantly changing requirements via component driven computing are the main reasons for that approach.
Coordination control of flexible manufacturing systems
NASA Astrophysics Data System (ADS)
Menon, Satheesh R.
One of the first attempts was made to develop a model-driven system for the coordination control of Flexible Manufacturing Systems (FMS). The structure and activities of the FMS are modeled using a colored Petri net based system. This approach has the advantage of being able to model the concurrency inherent in the system. It provides a method for encoding the system state, the state transitions, and the feasible transitions at any given state. Further structural analysis (for detecting conflicting actions, deadlocks which might occur during operation, etc.) can be performed. The problem of implementing and testing the behavior of existing dynamic scheduling approaches in simulations of realistic situations is also addressed. A simulation architecture was proposed and performance evaluation was carried out for establishing the correctness of the model, the stability of the system from structural (deadlocks) and temporal (boundedness of backlogs) points of view, and for the collection of statistics for performance measures such as machine and robot utilizations, average wait times, and idle times of resources. A real-time implementation architecture for the coordination controller was also developed and implemented in a software-simulated environment. Given the current technology of FMS control, the model-driven colored Petri net based approach promises to provide a very flexible control environment.
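A minimal token-game sketch of the Petri-net view of coordination control is given below: places hold tokens for parts and machines, and a transition fires only when its input places are marked, which is how the controller enumerates feasible actions at a given state. The tiny net and its names are illustrative, not the colored Petri net used in the work.

```python
# Minimal token-game sketch of a (coloured) Petri-net coordination state:
# places hold typed tokens, and a transition fires when its input places
# hold the required tokens. The net below is illustrative only.
state = {
    "part_waiting": [{"part": "P1"}],
    "machine_idle": [{"machine": "M1"}],
    "machine_busy": [],
}

def can_fire_load(state):
    """The 'load' transition is enabled when a part and an idle machine exist."""
    return bool(state["part_waiting"]) and bool(state["machine_idle"])

def fire_load(state):
    """Transition 'load part onto machine': consumes one token from each input
    place and produces a combined token in 'machine_busy'."""
    part = state["part_waiting"].pop(0)
    machine = state["machine_idle"].pop(0)
    state["machine_busy"].append({**part, **machine})

if can_fire_load(state):          # feasible transitions define the controller's choices
    fire_load(state)
print(state["machine_busy"])      # [{'part': 'P1', 'machine': 'M1'}]
```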
Model-driven approach to data collection and reporting for quality improvement.
Curcin, Vasa; Woodcock, Thomas; Poots, Alan J; Majeed, Azeem; Bell, Derek
2014-12-01
Continuous data collection and analysis have been shown to be essential to achieving improvement in healthcare. However, the data required for local improvement initiatives are often not readily available from hospital Electronic Health Record (EHR) systems or not routinely collected. Furthermore, improvement teams are often restricted in time and funding, thus requiring inexpensive and rapid tools to support their work. Hence, the informatics challenge in healthcare local improvement initiatives consists of providing a mechanism for rapid modelling of the local domain by non-informatics experts, including performance metric definitions, and grounded in established improvement techniques. We investigate the feasibility of a model-driven software approach to address this challenge, whereby an improvement model designed by a team is used to automatically generate the required electronic data collection instruments and reporting tools. To that goal, we have designed a generic Improvement Data Model (IDM) to capture the data items and quality measures relevant to the project, and constructed Web Improvement Support in Healthcare (WISH), a prototype tool that takes user-generated IDM models and creates a data schema, data collection web interfaces, and a set of live reports, based on Statistical Process Control (SPC), for use by improvement teams. The software has been successfully used in over 50 improvement projects, with more than 700 users. We present in detail the experiences of one of those initiatives, the Chronic Obstructive Pulmonary Disease project in Northwest London hospitals. The specific challenges of improvement in healthcare are analysed and the benefits and limitations of the approach are discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
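The model-driven idea, in which a small improvement data model generates both the collection schema and SPC-based reporting, can be sketched as follows; the field names, the measure, and the individuals-chart limits are illustrative assumptions and not the actual IDM or WISH internals.

```python
import statistics

# A toy "improvement data model": data items plus one derived quality measure.
idm = {
    "data_items": [
        {"name": "admission_date", "type": "date"},
        {"name": "copd_bundle_given", "type": "bool"},
    ],
    "measures": [
        {"name": "bundle_compliance", "numerator": "copd_bundle_given"},
    ],
}

def generate_sql(model, table="improvement_records"):
    """Generate a simple collection-table schema from the model."""
    cols = ", ".join(f"{d['name']} {'DATE' if d['type'] == 'date' else 'BOOLEAN'}"
                     for d in model["data_items"])
    return f"CREATE TABLE {table} (id INTEGER PRIMARY KEY, {cols});"

def spc_limits(weekly_rates):
    """Individuals-chart limits (mean +/- 2.66 * mean moving range) for a live report."""
    mr = [abs(a - b) for a, b in zip(weekly_rates[1:], weekly_rates)]
    centre = statistics.mean(weekly_rates)
    width = 2.66 * statistics.mean(mr)
    return centre - width, centre, centre + width

print(generate_sql(idm))
print(spc_limits([0.62, 0.70, 0.66, 0.74, 0.71, 0.78]))
```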
Optimal policy for value-based decision-making.
Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre
2016-08-18
For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
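A minimal simulation of a drift diffusion decision with a collapsing boundary, the ingredient the abstract identifies as necessary for optimal value-based choices, might look like this; the drift, noise level, and boundary decay are illustrative assumptions.

```python
import numpy as np

def ddm_choice(drift, bound_fn, dt=0.001, noise=1.0, t_max=3.0, rng=None):
    """Simulate one drift-diffusion decision with a time-dependent (collapsing)
    boundary. Returns (choice, reaction_time). Parameter values are illustrative."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
        b = bound_fn(t)
        if x >= b:
            return 1, t
        if x <= -b:
            return 0, t
    return int(x > 0), t_max

# collapsing boundary: starts at 1.0 and decays over time, as the optimal
# value-based policy described in the abstract requires
collapse = lambda t: 1.0 * np.exp(-t / 1.5)
rng = np.random.default_rng(0)
results = [ddm_choice(drift=0.4, bound_fn=collapse, rng=rng) for _ in range(500)]
accuracy = sum(c for c, _ in results) / len(results)
mean_rt = sum(t for _, t in results) / len(results)
```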
Yang, Xiaowei; Nie, Kun
2008-03-15
Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.
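The three-step procedure can be sketched in simplified form: transform each subject's trajectory with the FFT, compare groups coefficient-by-coefficient in the transformed domain, and combine the standardized statistics with an adaptive-Neyman-style ordered partial-sum statistic. The toy data, the number of retained coefficients, and the two-group contrast (a special case of the linear model) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t_pts = 40, 64
group = np.repeat([0, 1], n // 2)                     # e.g. control vs treatment
t = np.linspace(0, 1, t_pts)
curves = (np.sin(2 * np.pi * t) + 0.3 * group[:, None] * np.sin(4 * np.pi * t)
          + 0.5 * rng.normal(size=(n, t_pts)))        # toy repeated measures

# Step 1: Fourier transform each subject's trajectory, keep leading coefficients
K = 10
coeffs = np.fft.rfft(curves, axis=1)[:, :K]
feats = np.column_stack([coeffs.real, coeffs.imag])   # real-valued features

# Step 2: fit a linear model per transformed coordinate (here: group mean difference)
diff = feats[group == 1].mean(0) - feats[group == 0].mean(0)
se = np.sqrt(feats[group == 1].var(0, ddof=1) / (n // 2)
             + feats[group == 0].var(0, ddof=1) / (n // 2))
z = diff / se

# Step 3: adaptive-Neyman-style statistic: maximise the ordered partial sums of (z_j^2 - 1)
order = np.argsort(-np.abs(z))
partial = np.cumsum(z[order] ** 2 - 1) / np.sqrt(2 * np.arange(1, len(z) + 1))
T_AN = partial.max()                                  # compare to a simulated null
```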
NASA Astrophysics Data System (ADS)
Carton, Andrew; Driver, Cormac; Jackson, Andrew; Clarke, Siobhán
Theme/UML is an existing approach to aspect-oriented modelling that supports the modularisation and composition of concerns, including crosscutting ones, in design. To date, its lack of integration with model-driven engineering (MDE) techniques has limited its benefits across the development lifecycle. Here, we describe our work on facilitating the use of Theme/UML as part of an MDE process. We have developed a transformation tool that adopts model-driven architecture (MDA) standards. It defines a concern composition mechanism, implemented as a model transformation, to support the enhanced modularisation features of Theme/UML. We evaluate our approach by applying it to the development of mobile, context-aware applications, an application area characterised by many non-functional requirements that manifest themselves as crosscutting concerns.
Systematic characterization of degas-driven flow for poly(dimethylsiloxane) microfluidic devices
Liang, David Y.; Tentori, Augusto M.; Dimov, Ivan K.; ...
2011-01-01
Degas-driven flow is a novel phenomenon used to propel fluids in poly(dimethylsiloxane) (PDMS)-based microfluidic devices without requiring any external power. This method takes advantage of the inherently high porosity and air solubility of PDMS by removing air molecules from the bulk PDMS before initiating the flow. The dynamics of degas-driven flow are dependent on the channel and device geometries and are highly sensitive to temporal parameters. These dependencies have not been fully characterized, hindering broad use of degas-driven flow as a microfluidic pumping mechanism. Here, we characterize, for the first time, the effect of various parameters on the dynamics of degas-driven flow, including channel geometry, PDMS thickness, PDMS exposure area, vacuum degassing time, and idle time at atmospheric pressure before loading. We investigate the effect of these parameters on flow velocity as well as channel fill time for the degas-driven flow process. Using our devices, we achieved reproducible flow with a standard deviation of less than 8% for flow velocity, as well as maximum flow rates of up to 3 nL/s and mean flow rates of approximately 1-1.5 nL/s. Parameters such as channel surface area and PDMS chip exposure area were found to have negligible impact on degas-driven flow dynamics, whereas channel cross-sectional area, degas time, PDMS thickness, and idle time were found to have a larger impact. In addition, we develop a physical model that can predict mean flow velocities within 6% of experimental values and can be used as a tool for future design of PDMS-based microfluidic devices that utilize degas-driven flow.
Sensorimotor Learning Biases Choice Behavior: A Learning Neural Field Model for Decision Making
Schöner, Gregor; Gail, Alexander
2012-01-01
According to a prominent view of sensorimotor processing in primates, selection and specification of possible actions are not sequential operations. Rather, a decision for an action emerges from competition between different movement plans, which are specified and selected in parallel. For action choices which are based on ambiguous sensory input, the frontoparietal sensorimotor areas are considered part of the common underlying neural substrate for selection and specification of action. These areas have been shown capable of encoding alternative spatial motor goals in parallel during movement planning, and show signatures of competitive value-based selection among these goals. Since the same network is also involved in learning sensorimotor associations, competitive action selection (decision making) should not only be driven by the sensory evidence and expected reward in favor of either action, but also by the subject's learning history of different sensorimotor associations. Previous computational models of competitive neural decision making used predefined associations between sensory input and corresponding motor output. Such hard-wiring does not allow modeling of how decisions are influenced by sensorimotor learning or by changing reward contingencies. We present a dynamic neural field model which learns arbitrary sensorimotor associations with a reward-driven Hebbian learning algorithm. We show that the model accurately simulates the dynamics of action selection with different reward contingencies, as observed in monkey cortical recordings, and that it correctly predicted the pattern of choice errors in a control experiment. With our adaptive model we demonstrate how network plasticity, which is required for association learning and adaptation to new reward contingencies, can influence choice behavior. The field model provides an integrated and dynamic account for the operations of sensorimotor integration, working memory and action selection required for decision making in ambiguous choice situations. PMID:23166483
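A stripped-down version of reward-gated Hebbian learning of arbitrary cue-to-action associations, with a soft competition between motor goals, is sketched below; it is not the dynamic neural field model itself, and the learning rate, softmax temperature, and normalisation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_cues, n_actions = 4, 4
W = 0.01 * rng.random((n_actions, n_cues))     # cue -> motor-goal association weights
eta = 0.2                                      # learning rate (illustrative)

def choose(cue_vec, W, beta=5.0, rng=rng):
    """Soft competition between motor goals driven by the learned associations."""
    drive = W @ cue_vec
    p = np.exp(beta * drive) / np.exp(beta * drive).sum()
    return rng.choice(len(p), p=p)

mapping = np.array([2, 0, 3, 1])               # the 'correct' arbitrary rule to learn
for trial in range(2000):
    cue = rng.integers(n_cues)
    x = np.eye(n_cues)[cue]
    a = choose(x, W)
    reward = 1.0 if a == mapping[cue] else 0.0
    y = np.eye(n_actions)[a]
    W += eta * reward * np.outer(y, x)         # reward-gated Hebbian update
    W[:, cue] /= max(1.0, W[:, cue].sum())     # simple normalisation keeps weights bounded

learned = W.argmax(axis=0)                     # should converge to `mapping`
```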
A roadmap for improving healthcare service quality.
Kennedy, Denise M; Caselli, Richard J; Berry, Leonard L
2011-01-01
A data-driven, comprehensive model for improving service and creating long-term value was developed and implemented at Mayo Clinic Arizona (MCA). Healthcare organizations can use this model to prepare for value-based purchasing, a payment system in which quality and patient experience measures will influence reimbursement. Surviving and thriving in such a system will require a comprehensive approach to sustaining excellent service performance from physicians and allied health staff (e.g., nurses, technicians, nonclinical staff). The seven prongs in MCA's service quality improvement model are (1) multiple data sources to drive improvement, (2) accountability for service quality, (3) service consultation and improvement tools, (4) service values and behaviors, (5) education and training, (6) ongoing monitoring and control, and (7) recognition and reward. The model was fully implemented and tested in five departments in which patient perception of provider-specific service attributes and/or overall quality of care were below the 90th percentile for patient satisfaction in the vendor's database. Extent of the implementation was at the discretion of department leadership. Perception data rating various service attributes were collected from randomly selected patients and monitored over a 24-month period. The largest increases in patient perception of excellence over the pilot period were realized when all seven prongs of the model were implemented as a comprehensive improvement approach. The results of this pilot may help other healthcare organizations prepare for value-based purchasing.
Improved design for driven piles based on a pile load test program in Illinois : phase 2.
DOT National Transportation Integrated Search
2014-09-01
A dynamic load test program consisting of 38 sites and 111 piles with restrikes was conducted throughout Illinois : to improve the Illinois Department of Transportation design of driven piling. Pile types included steel H-piles and : closed-ended pip...
Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.
Kamesh, Reddi; Rani, K Yamuna
2016-09-01
A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application for optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from input to output domain. The fuzzy model is employed to formulate an optimal control problem for single rate as well as multi-rate systems. Simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach is capable of capturing the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately, and the results of operating trajectory optimization using the proposed model are found to be comparable to the results obtained using the exact first principles model, and are also found to be comparable to or better than parameterized data-driven artificial neural network model based optimization results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
A novel model for simulating the racing effect in capillary-driven underfill process in flip chip
NASA Astrophysics Data System (ADS)
Zhu, Wenhui; Wang, Kanglun; Wang, Yan
2018-04-01
Underfill is typically applied in flip chips to increase the reliability of electronic packages. In this paper, the evolution of the melt-front shape of the capillary-driven underfill flow is studied through 3D numerical analysis. Two different models, the prevailing surface force model and the capillary model based on the wetted wall boundary condition, are introduced to test their applicability, where the level set method is used to track the interface of the two-phase flow. The comparison between the simulation results and experimental data indicates that the surface force model produces a better prediction of the melt-front shape, especially in the central area of the flip chip. Nevertheless, the two above models cannot properly simulate the racing effect phenomenon that appears during underfill encapsulation. A novel 'dynamic pressure boundary condition' method is proposed based on the validated surface force model. Utilizing this approach, the racing effect phenomenon is simulated with high precision. In addition, a linear relationship is derived from this model between the flow front location at the edge of the flip chip and the filling time. Using the proposed approach, the impact of the underfill-dispensing length on the melt-front shape is also studied.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction.
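A simplified greedy variant of mutual-information-based input selection is sketched below: at each step the candidate with the largest mutual information with the currently unexplained part of the output is added. The histogram MI estimator and the linear residual step are stand-ins for the kernel-density and regression machinery of the full PMI algorithm, and the toy signals are assumptions.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of mutual information between two 1-D variables."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def greedy_pmi_selection(X, y, k=2):
    """Greedy input selection: at each step pick the candidate with the largest
    mutual information with the part of y not explained (linearly) by the
    already-selected inputs. A simplified stand-in for the full PMI algorithm."""
    selected, remaining = [], list(range(X.shape[1]))
    resid = y.copy()
    for _ in range(k):
        best = max(remaining, key=lambda j: mutual_info(X[:, j], resid))
        selected.append(best)
        remaining.remove(best)
        A = X[:, selected]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
    return selected

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 6))                       # candidate sensor signals
y = 2.0 * X[:, 0] + np.sin(X[:, 3]) + 0.1 * rng.normal(size=500)
print(greedy_pmi_selection(X, y, k=2))              # expected to pick columns 0 and 3
```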
Laser-driven injector of electrons for IOTA
NASA Astrophysics Data System (ADS)
Romanov, Aleksandr
2017-03-01
Fermilab is developing the Integrable Optics Test Accelerator (IOTA) ring for experiments on nonlinear integrable optics. The machine will operate with either electron beams of 150 MeV or proton beams of 2.5 MeV. The stability of integrable optics depends critically on the precision of the magnetic lattice, which demands the use of beam-based lattice measurements for optics correction. In the proton mode, the low-energy proton beam does not represent a good probe for this application; hence we consider the use of a low-intensity reverse-injected electron beam of matched momentum (70 MeV). Such an injector could be implemented with the use of laser-driven acceleration techniques. This report presents the considerations for a laser-plasma injector for IOTA and discusses the requirements determined by the ring design.
Modeling Quasi-Static and Fatigue-Driven Delamination Migration
NASA Technical Reports Server (NTRS)
De Carvalho, N. V.; Ratcliffe, J. G.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Tay, T. E.
2014-01-01
An approach was proposed and assessed for the high-fidelity modeling of progressive damage and failure in composite materials. It combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. Delamination, matrix cracking, and migration were captured using failure and migration criteria based on fracture mechanics. Quasi-static and fatigue loading were modeled within the same overall framework. The proposed methodology was illustrated by simulating the delamination migration test, showing good agreement with the available experimental data.
Access Control for Cooperation Systems Based on Group Situation
NASA Astrophysics Data System (ADS)
Kim, Minsoo; Joshi, James B. D.; Kim, Minkoo
Cooperation systems characterize many emerging environments such as ubiquitous and pervasive systems. Agent based cooperation systems have been proposed in the literature to address challenges of such emerging application environments. A key aspect of such agent based cooperation system is the group situation that changes dynamically and governs the requirements of the cooperation. While individual agent context is important, the overall cooperation behavior is more driven by the group context because of relationships and interactions between agents. Dynamic access control based on group situation is a crucial challenge in such cooperation systems. In this paper we propose a dynamic role based access control model for cooperation systems based on group situation. The model emphasizes capability based agent to role mapping and group situation based permission assignment to allow capturing dynamic access policies that evolve continuously.
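The two ingredients highlighted above, capability-based agent-to-role mapping and group-situation-based permission assignment, can be sketched as follows; the roles, situations, and permissions are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    capabilities: set

# Illustrative role requirements and situation-dependent permission grants.
ROLE_REQUIREMENTS = {
    "coordinator": {"plan", "communicate"},
    "responder":   {"navigate", "sense"},
}
SITUATION_PERMISSIONS = {
    "normal":    {"coordinator": {"read_status"},
                  "responder":   {"read_status"}},
    "emergency": {"coordinator": {"read_status", "reassign_tasks"},
                  "responder":   {"read_status", "override_route"}},
}

def assign_roles(agents):
    """Map each agent to every role whose required capabilities it covers."""
    return {a.name: [r for r, req in ROLE_REQUIREMENTS.items()
                     if req <= a.capabilities] for a in agents}

def permitted(agent_roles, situation, action):
    """An action is allowed if any of the agent's roles grants it in this situation."""
    grants = SITUATION_PERMISSIONS[situation]
    return any(action in grants.get(r, set()) for r in agent_roles)

agents = [Agent("a1", {"plan", "communicate"}), Agent("a2", {"navigate", "sense"})]
roles = assign_roles(agents)
print(permitted(roles["a1"], "normal", "reassign_tasks"))     # False
print(permitted(roles["a1"], "emergency", "reassign_tasks"))  # True
```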
Testing collapse models by a thermometer
NASA Astrophysics Data System (ADS)
Bahrami, M.
2018-05-01
Collapse models postulate that space is filled with a collapse noise field, inducing quantum Brownian motions, which are dominant during the measurement, thus causing collapse of the wave function. An important manifestation of the collapse noise field, if any, is thermal energy generation, thus disturbing the temperature profile of a system. The experimental investigation of a collapse-driven heating effect has provided, so far, the most promising test of collapse models against standard quantum theory. In this paper, we calculate the collapse-driven heat generation for a three-dimensional multi-atomic Bravais lattice by solving stochastic Heisenberg equations. We perform our calculation for the mass-proportional continuous spontaneous localization collapse model with nonwhite noise. We obtain the temperature distribution of a sphere under stationary-state and insulated surface conditions. However, the exact quantification of the collapse-driven heat-generation effect highly depends on the actual value of cutoff in the collapse noise spectrum.
A zebrafish model of chordoma initiated by notochord-driven expression of HRASV12.
Burger, Alexa; Vasilyev, Aleksandr; Tomar, Ritu; Selig, Martin K; Nielsen, G Petur; Peterson, Randall T; Drummond, Iain A; Haber, Daniel A
2014-07-01
Chordoma is a malignant tumor thought to arise from remnants of the embryonic notochord, with its origin in the bones of the axial skeleton. Surgical resection is the standard treatment, usually in combination with radiation therapy, but neither chemotherapeutic nor targeted therapeutic approaches have demonstrated success. No animal model and only a few chordoma cell lines are available for preclinical drug testing, and, although no druggable genetic drivers have been identified, activation of EGFR and downstream AKT-PI3K pathways has been described. Here, we report a zebrafish model of chordoma, based on stable transgene-driven expression of HRASV12 in notochord cells during development. Extensive intra-notochordal tumor formation is evident within days of transgene expression, ultimately leading to larval death. The zebrafish tumors share characteristics of human chordoma as demonstrated by immunohistochemistry and electron microscopy. The mTORC1 inhibitor rapamycin, which has some demonstrated activity in a chordoma cell line, delays the onset of tumor formation in our zebrafish model, and improves survival of tumor-bearing fish. Consequently, the HRASV12-driven zebrafish model of chordoma could enable high-throughput screening of potential therapeutic agents for the treatment of this refractory cancer. © 2014. Published by The Company of Biologists Ltd.
NASA Technical Reports Server (NTRS)
Bakhtiari-Nejad, Maryam; Nguyen, Nhan T.; Krishnakumar, Kalmanje Srinvas
2009-01-01
This paper presents the application of the Bounded Linear Stability Analysis (BLSA) method for metrics-driven adaptive control. The bounded linear stability analysis method is used for analyzing the stability of adaptive control models without linearizing the adaptive laws. Metrics-driven adaptive control introduces the notion that adaptation should be driven by stability metrics to achieve robustness. By applying the bounded linear stability analysis method, the adaptive gain is adjusted during adaptation in order to meet certain phase margin requirements. The metrics-driven adaptive control approach is evaluated for a linear model of a damaged twin-engine generic transport aircraft. The analysis shows that the system with the adjusted adaptive gain becomes more robust to unmodeled dynamics or time delay.
Agent-Based Modeling of Cancer Stem Cell Driven Solid Tumor Growth.
Poleszczuk, Jan; Macklin, Paul; Enderling, Heiko
2016-01-01
Computational modeling of tumor growth has become an invaluable tool to simulate complex cell-cell interactions and emerging population-level dynamics. Agent-based models are commonly used to describe the behavior and interaction of individual cells in different environments. Behavioral rules can be informed and calibrated by in vitro assays, and emerging population-level dynamics may be validated with both in vitro and in vivo experiments. Here, we describe the design and implementation of a lattice-based agent-based model of cancer stem cell driven tumor growth.
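A minimal sketch of such a lattice-based agent-based model is shown below, with stem cells that divide without limit and occasionally self-renew symmetrically, and non-stem cells that carry a finite proliferation capacity. All rates and capacities are illustrative placeholders, not the calibrated values from the chapter.

```python
# Minimal 2D lattice sketch of cancer-stem-cell-driven growth
# (illustrative parameters; not the published, calibrated model).
import random

N = 101                       # lattice size
P_SYMMETRIC = 0.1             # chance a stem division yields two stem cells
RHO_MAX = 10                  # proliferation capacity of non-stem cells
P_DIVIDE = 0.3                # division attempt probability per step
NEIGH = [(-1, 0), (1, 0), (0, -1), (0, 1)]

grid = {(N // 2, N // 2): ('S', None)}   # (x, y) -> ('S', None) or ('C', capacity)

def free_neighbors(pos):
    return [(pos[0] + dx, pos[1] + dy) for dx, dy in NEIGH
            if (pos[0] + dx, pos[1] + dy) not in grid
            and 0 <= pos[0] + dx < N and 0 <= pos[1] + dy < N]

for step in range(200):
    for pos, (kind, rho) in list(grid.items()):
        spots = free_neighbors(pos)
        if not spots or random.random() > P_DIVIDE:
            continue
        target = random.choice(spots)
        if kind == 'S':
            if random.random() < P_SYMMETRIC:
                grid[target] = ('S', None)           # symmetric self-renewal
            else:
                grid[target] = ('C', RHO_MAX)        # asymmetric division
        else:
            if rho <= 1:
                del grid[pos]                        # capacity exhausted -> cell death
                continue
            grid[pos] = ('C', rho - 1)
            grid[target] = ('C', rho - 1)

print("tumor size after 200 steps:", len(grid))
```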
Stress granule formation via ATP depletion-triggered phase separation
NASA Astrophysics Data System (ADS)
Wurtz, Jean David; Lee, Chiu Fan
2018-04-01
Stress granules (SG) are droplets of proteins and RNA that form in the cell cytoplasm during stress conditions. We consider minimal models of stress granule formation based on the mechanism of phase separation regulated by ATP-driven chemical reactions. Motivated by experimental observations, we identify a minimal model of SG formation triggered by ATP depletion. Our analysis indicates that ATP is continuously hydrolysed to deter SG formation under normal conditions, and we provide specific predictions that can be tested experimentally.
A rule-based system for real-time analysis of control systems
NASA Astrophysics Data System (ADS)
Larson, Richard R.; Millard, D. Edward
1992-10-01
An approach to automate the real-time analysis of flight critical health monitoring and system status is being developed and evaluated at the NASA Dryden Flight Research Facility. A software package was developed in-house and installed as part of the extended aircraft interrogation and display system. This design features a knowledge-base structure in the form of rules to formulate the interpretation and decision logic of real-time data. This technique has been applied to ground verification and validation testing and to flight-test monitoring, where quick, real-time, safety-of-flight decisions can be very critical. In many cases post-processing and manual analysis of flight system data are not required. The processing of real-time data for analysis is described, along with the output format, which features a message stack display. The development, construction, and testing of the rule-driven knowledge base, along with an application using the X-31A flight test program, are presented.
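The rule-driven structure can be pictured with a small sketch in which each rule maps a predicate over the latest telemetry frame to a message pushed onto a stack, loosely mirroring the message stack display. The parameter names, limits and rule texts below are invented for illustration; they are not the X-31A knowledge base.

```python
# Illustrative rule-driven monitoring loop with a message stack.
RULES = [
    ("HYD PRESS LOW",  lambda d: d["hyd_pressure_psi"] < 2800),
    ("EGT OVER LIMIT", lambda d: d["egt_degC"] > 650),
    ("AOA NEAR LIMIT", lambda d: d["alpha_deg"] > 28),
]

message_stack = []

def evaluate(frame: dict) -> None:
    """Apply every rule to the current real-time data frame."""
    for text, predicate in RULES:
        if predicate(frame) and (not message_stack or message_stack[-1] != text):
            message_stack.append(text)     # newest message on top of the stack

# one simulated telemetry frame
evaluate({"hyd_pressure_psi": 2650, "egt_degC": 610, "alpha_deg": 29.5})
print(message_stack)                       # ['HYD PRESS LOW', 'AOA NEAR LIMIT']
```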
View subspaces for indexing and retrieval of 3D models
NASA Astrophysics Data System (ADS)
Dutagaci, Helin; Godil, Afzal; Sankur, Bülent; Yemez, Yücel
2010-02-01
View-based indexing schemes for 3D object retrieval are gaining popularity since they provide good retrieval results. These schemes are coherent with the theory that humans recognize objects based on their 2D appearances. The view-based techniques also allow users to search with various queries such as binary images, range images and even 2D sketches. Previous view-based techniques use classical 2D shape descriptors such as Fourier invariants, Zernike moments, Scale Invariant Feature Transform-based local features and 2D Digital Fourier Transform coefficients. These methods describe each object independently of the others. In this work, we explore data-driven subspace models, such as Principal Component Analysis, Independent Component Analysis and Nonnegative Matrix Factorization, to describe the shape information of the views. We treat the depth images obtained from various points of the view sphere as 2D intensity images and train a subspace to extract the inherent structure of the views within a database. We also show the benefit of categorizing shapes according to their eigenvalue spread. Both the shape categorization and data-driven feature set conjectures are tested on the PSB database and compared with competing view-based 3D shape retrieval algorithms.
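The data-driven subspace idea can be sketched with PCA: learn a basis from the depth-image views and describe any view, including a query, by its coefficients in that basis. Random arrays stand in for the PSB depth images, and the component count and distance metric are illustrative choices.

```python
# Sketch of the PCA view-subspace descriptor and a nearest-neighbour retrieval step.
import numpy as np
from sklearn.decomposition import PCA

n_views, h, w = 400, 64, 64
views = np.random.rand(n_views, h * w)        # flattened depth images (placeholder data)

pca = PCA(n_components=32)
coeffs = pca.fit_transform(views)             # per-view shape descriptors

def describe(view_image: np.ndarray) -> np.ndarray:
    """Project a single depth image into the learned view subspace."""
    return pca.transform(view_image.reshape(1, -1))[0]

query = np.random.rand(h, w)
q = describe(query)
# retrieval: rank database views by Euclidean distance in subspace coordinates
ranking = np.argsort(np.linalg.norm(coeffs - q, axis=1))
print("top-5 nearest views:", ranking[:5])
```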
Prediction of body lipid change in pregnancy and lactation.
Friggens, N C; Ingvartsen, K L; Emmans, G C
2004-04-01
A simple method to predict the genetically driven pattern of body lipid change through pregnancy and lactation in dairy cattle is proposed. The rationale and evidence for genetically driven body lipid change have their basis in evolutionary considerations and in the homeorhetic changes in lipid metabolism through the reproductive cycle. The inputs required to predict body lipid change are body lipid mass at calving (kg) and the date of conception (days in milk). Body lipid mass can be derived from body condition score and live weight. A key assumption is that there is a linear rate of change of the rate of body lipid change (dL/dt) between calving and a genetically determined time in lactation (T') at which a particular level of body lipid (L') is sought. A second assumption is that there is a linear rate of change of the rate of body lipid change (dL/dt) between T' and the next calving. The resulting model was evaluated using 2 sets of data. The first was from Holstein cows with 3 different levels of body fatness at calving. The second was from Jersey cows in first, second, and third parity. The model was found to reproduce the observed patterns of change in body lipid reserves through lactation in both data sets. The average error of prediction was low, less than the variation normally associated with the recording of condition score, and was similar for the 2 data sets. When the model was applied using the initially suggested parameter values derived from the literature the average error of prediction was 0.185 units of condition score (+/- 0.086 SD). After minor adjustments to the parameter values, the average error of prediction was 0.118 units of condition score (+/- 0.070 SD). The assumptions on which the model is based were sufficient to predict the changes in body lipid of both Holstein and Jersey cows under different nutritional conditions and parities. Thus, the model presented here shows that it is possible to predict genetically driven curves of body lipid change through lactation in a simple way that requires few parameters and inputs that can be derived in practice. It is expected that prediction of the cow's energy requirements can be substantially improved, particularly in early lactation, by incorporating a genetically driven body energy mobilization.
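One simplified numeric reading of the two piecewise-linear assumptions is sketched below: dL/dt varies linearly from calving until T', where body lipid reaches L', and then varies linearly again until the next calving. The parameter values and the choice that dL/dt passes through zero at T' are illustrative assumptions, not the fitted values or exact formulation of the paper.

```python
# Sketch of a piecewise model for body lipid L(t) under the stated assumptions.
L0, L_target = 120.0, 90.0         # kg body lipid at calving and at T' (illustrative)
T_prime, T_calving = 100.0, 365.0  # days in milk

# Segment 1: dL/dt varies linearly from r0 at calving to 0 at T'.
# Integrating gives L(T') = L0 + r0*T'/2, so r0 is chosen to hit L_target.
r0 = 2.0 * (L_target - L0) / T_prime

# Segment 2: dL/dt varies linearly from 0 at T' so that body lipid
# returns to L0 by the next calving.
r1 = 2.0 * (L0 - L_target) / (T_calving - T_prime)

def lipid(t):
    """Body lipid mass (kg) at t days in milk under the piecewise assumptions."""
    if t <= T_prime:
        return L0 + r0 * t - 0.5 * r0 * t ** 2 / T_prime
    s = t - T_prime
    return L_target + 0.5 * r1 * s ** 2 / (T_calving - T_prime)

for d in [0, 50, 100, 200, 365]:
    print(f"day {d:3d}: {lipid(d):6.1f} kg body lipid")
```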
A Finite Element Model for Mixed Porohyperelasticity with Transport, Swelling, and Growth.
Armstrong, Michelle Hine; Buganza Tepole, Adrián; Kuhl, Ellen; Simon, Bruce R; Vande Geest, Jonathan P
2016-01-01
The purpose of this manuscript is to establish a unified theory of porohyperelasticity with transport and growth and to demonstrate the capability of this theory using a finite element model developed in MATLAB. We combine the theories of volumetric growth and mixed porohyperelasticity with transport and swelling (MPHETS) to derive a new method that models growth of biological soft tissues. The conservation equations and constitutive equations are developed for both solid-only growth and solid/fluid growth. An axisymmetric finite element framework is introduced for the new theory of growing MPHETS (GMPHETS). To illustrate the capabilities of this model, several example finite element test problems are considered using model geometry and material parameters based on experimental data from a porcine coronary artery. Multiple growth laws are considered, including time-driven, concentration-driven, and stress-driven growth. Time-driven growth is compared against an exact analytical solution to validate the model. For concentration-dependent growth, changing the diffusivity (representing a change in drug) fundamentally changes growth behavior. We further demonstrate that for stress-dependent, solid-only growth of an artery, growth of an MPHETS model results in a more uniform hoop stress than growth in a hyperelastic model for the same amount of growth time using the same growth law. This may have implications in the context of developing residual stresses in soft tissues under intraluminal pressure. To our knowledge, this manuscript provides the first full description of an MPHETS model with growth. The developed computational framework can be used in concert with novel in-vitro and in-vivo experimental approaches to identify the governing growth laws for various soft tissues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roxas, R. M.; Monterola, C.; Carreon-Monterola, S. L.
2010-07-28
We probe the effect of seating arrangement, group composition and group-based competition on students' performance in Physics using a teaching technique adopted from Mazur's peer instruction method. Ninety eight lectures, involving 2339 students, were conducted across nine learning institutions from February 2006 to June 2009. All the lectures were interspersed with student interaction opportunities (SIO), in which students work in groups to discuss and answer concept tests. Two individual assessments were administered before and after the SIO. The ratio of the post-assessment score to the pre-assessment score and the Hake factor were calculated to establish the improvement in student performance. Using actual assessment results and neural network (NN) modeling, an optimal seating arrangement for a class was determined based on student seating location. The NN model also provided a quantifiable method for sectioning students. Lastly, the study revealed that competition-driven interactions increase within-group cooperation and lead to higher improvement on the students' performance.
Alternative Test Methods for Developmental Neurotoxicity: A ...
Exposure to environmental contaminants is well documented to adversely impact the development of the nervous system. However, the time, animal and resource intensive EPA and OECD testing guideline methods for developmental neurotoxicity (DNT) are not a viable solution to characterizing potential chemical hazards for the thousands of untested chemicals currently in commerce. Thus, research efforts over the past decade have endeavored to develop cost-effective alternative DNT testing methods. These efforts have begun to generate data that can inform regulatory decisions. Yet there are major challenges to both the acceptance and use of this data. Major scientific challenges for DNT include development of new methods and models that are “fit for purpose”, development of a decision-use framework, and regulatory acceptance of the methods. It is critical to understand that use of data from these methods will be driven mainly by the regulatory problems being addressed. Some problems may be addressed with limited datasets, while others may require data for large numbers of chemicals, or require the development and use of new biological and computational models. For example mechanistic information derived from in vitro DNT assays can be used to inform weight of evidence (WoE) or integrated approaches to testing and assessment (IATA) approaches for chemical-specific assessments. Alternatively, in vitro data can be used to prioritize (for further testing) the thousands
Data driven propulsion system weight prediction model
NASA Astrophysics Data System (ADS)
Gerth, Richard J.
1994-10-01
The objective of the research was to develop a method to predict the weight of paper engines, i.e., engines that are in the early stages of development. The impetus for the project was the Single Stage To Orbit (SSTO) project, where engineers need to evaluate alternative engine designs. Since the SSTO is a performance-driven project, the performance models for alternative designs were well understood. The next tradeoff is weight. Since it is known that engine weight varies with thrust level, a model is required that allows discrimination between engines that produce the same thrust. Above all, the model had to be rooted in data, with assumptions that could be justified based on the data. The general approach was to collect data on as many existing engines as possible and build a statistical model of engine weight as a function of various component performance parameters. This was considered a reasonable level at which to begin the project because the data would be readily available, and it would be at the level of most paper engines, prior to detailed component design.
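The statistical approach can be sketched as a regression of engine weight on component performance parameters across a set of existing engines. A log-log linear fit is a common choice for such weight-scaling relations; the synthetic data, predictor names and exponents below are placeholders, not the engines or regression used in the study.

```python
# Sketch: least-squares fit of log(weight) on log(thrust) and log(chamber pressure).
import numpy as np

rng = np.random.default_rng(1)
n = 40
thrust_kN   = rng.uniform(200, 2300, n)
chamber_MPa = rng.uniform(5, 25, n)
weight_kg   = 8.0 * thrust_kN ** 0.9 * chamber_MPa ** -0.2 * rng.lognormal(0, 0.05, n)

# Design matrix for log(weight) = b0 + b1*log(thrust) + b2*log(Pc)
X = np.column_stack([np.ones(n), np.log(thrust_kN), np.log(chamber_MPa)])
beta, *_ = np.linalg.lstsq(X, np.log(weight_kg), rcond=None)

def predict_weight(thrust, pc):
    return float(np.exp(beta @ [1.0, np.log(thrust), np.log(pc)]))

print("coefficients:", np.round(beta, 3))
print("predicted weight for 1500 kN, 15 MPa engine: %.0f kg" % predict_weight(1500, 15))
```

A model of this form makes it possible to compare two engines that deliver the same thrust but differ in component parameters, which is the discrimination the abstract calls for.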
NASA Astrophysics Data System (ADS)
Revunova, Svetlana; Vlasenko, Vyacheslav; Bukreev, Anatoly
2017-10-01
The article proposes models of innovative activity development driven by the formation of “points of innovation-driven growth”. The models are based on an analysis of the current state and dynamics of innovative development of construction enterprises in the transport sector and take into account a number of essential organizational and economic changes in management. The authors substantiate implementing such development models as an organizational innovation that has a communication genesis. The use of the communication approach to the formation of “points of innovation-driven growth” allowed the authors to apply the mathematical tools of graph theory in order to activate the innovative activity of the transport industry in the region. As a result, the authors propose models that allow constructing an optimal mechanism for the formation of “points of innovation-driven growth”.
Data-Driven Instructional Leadership
ERIC Educational Resources Information Center
Blink, Rebecca
2006-01-01
With real-world examples from actual schools, this book illustrates how to nurture a culture of continuous improvement, meet the needs of individual students, foster an environment of high expectations, and meet the requirements of NCLB. Each component of the Data-Driven Instructional Leadership (DDIS) model represents several branches of…
An Infrastructure for UML-Based Code Generation Tools
NASA Astrophysics Data System (ADS)
Wehrmeister, Marco A.; Freitas, Edison P.; Pereira, Carlos E.
The use of Model-Driven Engineering (MDE) techniques in the domain of distributed embedded real-time systems is gaining importance in order to cope with the increasing design complexity of such systems. This paper discusses an infrastructure created to build GenERTiCA, a flexible tool that supports an MDE approach, which uses aspect-oriented concepts to handle non-functional requirements from the embedded and real-time systems domain. GenERTiCA generates source code from UML models, and also performs weaving of aspects, which have been specified within the UML model. Additionally, this paper discusses the Distributed Embedded Real-Time Compact Specification (DERCS), a PIM created to support UML-based code generation tools. Some heuristics to transform UML models into DERCS, which have been implemented in GenERTiCA, are also discussed.
Is lorazepam-induced amnesia specific to the type of memory or to the task used to assess it?
File, S E; Sharma, R; Shaffer, J
1992-01-01
Retrieval tasks can be classified along a continuum from conceptually driven (relying on the encoded meaning of the material) to data driven (relying on the perceptual record and surface features of the material). Since most explicit memory tests are conceptually driven and most implicit memory tests are data driven there has been considerable confounding of the memory system being assessed and the processing required by the retrieval task. The purpose of the present experiment was to investigate the effects of lorazepam on explicit memory, using both types of retrieval task. Lorazepam (2.5 mg) or matched placebo was administered to healthy volunteers and changes in subjective mood ratings and in performance in tests of memory were measured. Lorazepam made subjects significantly more drowsy, feeble, clumsy, muzzy, lethargic and mentally slow. Lorazepam significantly impaired recognition memory for slides, impaired the number of words remembered when the retrieval was cued by the first two letters and reduced the number of pictures remembered when retention was cued with picture fragments. Thus episodic memory was impaired whether the task used was conceptually driven (as in slide recognition) or data driven, as in the other two tasks. Analyses of covariance indicated that the memory impairments were independent of increased sedation, as assessed by self-ratings. In contrast to the deficits in episodic memory, there were no lorazepam-induced impairments in tests of semantic memory, whether this was measured in the conceptually driven task of category generation or in the data-driven task of wordstem completion.
Multi-faceted informatics system for digitising and streamlining the reablement care model.
Bond, Raymond R; Mulvenna, Maurice D; Finlay, Dewar D; Martin, Suzanne
2015-08-01
Reablement is a new paradigm for increasing independence in the home amongst the ageing population, and it remains a challenge to design an optimal electronic system to streamline and integrate reablement into current healthcare infrastructure. Furthermore, given that reablement requires collaboration with a range of organisations (including national healthcare institutions and community/voluntary service providers), such a system needs to be co-created with all stakeholders involved. Thus, the purpose of this study is (1) to bring together stakeholder groups to elicit a comprehensive set of requirements for a digital reablement system, (2) to utilise emerging technologies to implement a system and a data model based on the requirements gathered and (3) to involve user groups in a usability assessment of the system. In this study we employed a mixed qualitative approach that included a series of stakeholder-involved activities. Collectively, 73 subjects were recruited to participate in an ideation event, a quasi-hackathon and a usability study. The study unveiled stakeholder-led requirements, which resulted in a novel cloud-based system that was created using emerging web technologies. The system is driven by a unique data model and includes interactive features that are necessary for streamlining the reablement care model. In summary, this system allows community-based interventions (or services) to be prescribed to occupants whilst also monitoring the occupant's progress of independent living. Copyright © 2015 Elsevier Inc. All rights reserved.
A data driven model for dune morphodynamics
NASA Astrophysics Data System (ADS)
Palmsten, M.; Brodie, K.; Spore, N.
2016-12-01
Dune morphology results from a number of competing feedbacks between wave, Aeolian, and biologic processes. Only now are conceptual and numerical models for dunes beginning to incorporate all aspects of the processes driving morphodynamics. Drawing on a 35-year record of observations of dune morphology and forcing conditions at the Army Corps of Engineers Field Research Facility (FRF) at Duck, NC, USA, we hypothesize that local dune morphology results from the competition between dune growth during dry windy periods and erosion during storms. We test our hypothesis by developing a data-driven model using a Bayesian network to hindcast dune-crest elevation change, dune position change, and shoreline position change. Model inputs include a description of dune morphology from dune-crest elevation, dune-base elevation, dune width, and beach width. Wave forcing and the effect of moisture are parameterized in terms of the maximum total water level and the period that waves impact the dunes, along with precipitation. Aeolian forcing is parameterized in terms of maximum wind speed, direction and the period that wind exceeds a critical value for sediment transport. We test the sensitivity of our model to forcing parameters and hindcast the 35-year record of dune morphodynamics at the FRF. We also discuss the role of vegetation on dune morphologic differences observed at the FRF.
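A stripped-down version of the data-driven idea is sketched below: discretize forcing and response into bins and estimate the conditional probability of dune-crest change from joint counts. A full Bayesian network would factor several such conditional tables over all the morphology and forcing nodes; the two forcing variables and the synthetic data here are purely illustrative.

```python
# Toy conditional-probability table P(dune-crest change | wave and wind forcing bins).
import numpy as np

rng = np.random.default_rng(2)
n = 2000
twl_exceed_hr = rng.exponential(5.0, n)                 # hours waves reach the dune toe
wind_hr       = rng.exponential(10.0, n)                # hours wind exceeds threshold
dz            = 0.02 * wind_hr - 0.05 * twl_exceed_hr + rng.normal(0, 0.2, n)

def bin3(x):   # low / medium / high terciles
    return np.digitize(x, np.quantile(x, [1 / 3, 2 / 3]))

f_twl, f_wind = bin3(twl_exceed_hr), bin3(wind_hr)
r = np.digitize(dz, [-0.1, 0.1])                        # erosion / no change / growth

# Conditional probability table P(dz bin | TWL bin, wind bin) from counts
cpt = np.zeros((3, 3, 3))
for i, j, k in zip(f_twl, f_wind, r):
    cpt[i, j, k] += 1
cpt /= cpt.sum(axis=2, keepdims=True) + 1e-12

print("P(erosion | high TWL, low wind) = %.2f" % cpt[2, 0, 0])
```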
Analysis of xRAGE and flag high explosive burn models with PBX 9404 cylinder tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrier, Danielle; Andersen, Kyle Richard
High explosives are energetic materials that release their chemical energy in a short interval of time. They are able to generate extreme heat and pressure by a shock-driven chemical decomposition reaction, which makes them valuable tools that must be understood. This study investigated the accuracy and performance of two Los Alamos National Laboratory hydrodynamic codes, which are used to determine the behavior of explosives within a variety of systems: xRAGE, which utilizes an Eulerian mesh, and FLAG, which utilizes a Lagrangian mesh. Various programmed and reactive burn models within both codes were tested using a copper cylinder expansion test. The test was based on a recent experimental setup which contained the plastic bonded explosive PBX 9404. Detonation velocity versus time curves for this explosive were obtained using Photon Doppler Velocimetry (PDV). The modeled results from each of the burn models tested were then compared to one another and to the experimental results.
A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture
NASA Technical Reports Server (NTRS)
Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.
2005-01-01
Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.
NASA Technical Reports Server (NTRS)
Ammer, R. C.; Kutney, J. T.
1977-01-01
A static scale model test program was conducted in the static test area of the NASA-Langley 9.14- by 18.29 m(30- by 60-ft) Full-Scale Wind Tunnel Facility to develop an over-the-wing (OTW) nozzle and reverser configuration for the Quiet Clean Short-Haul Experimental Engine (QCSEE). Three nozzles and one basic reverser configuration were tested over the QCSEE takeoff and approach power nozzle pressure ratio range between 1.1 and 1.3. The models were scaled to 8.53% of QCSEE engine size and tested behind two 13.97-cm (5.5-in.) diameter tip-turbine-driven fan simulators coupled in tandem. An OTW nozzle and reverser configuration was identified which satisfies the QCSEE experimental engine requirements in terms of nozzle cycle area variation capability and reverse thrust level, and provides good jet flow spreading over a wing upper surface for achievement of high propulsive lift performance.
Innovation of IT metasystems by means of event-driven paradigm using QDMS
NASA Astrophysics Data System (ADS)
Nedic, Vladimir; Despotovic, Danijela; Cvetanovic, Slobodan; Despotovic, Milan; Eric, Milan
2016-10-01
Globalisation of the world economy brings new and more complex demands to business systems. In order to respond to these trends, business systems apply new paradigms that inevitably reflect on management metasystems - quality assurance (QA) - as well as on information technology (IT) metasystems. Small and medium enterprises (in particular in the food industry) cannot access external resources to the extent needed to keep up with these trends adequately. That raises the question of how to enhance the synergetic effect of the interaction between existing QA and IT metasystems in order to overcome the resource gap and achieve the set goals with internal resources. The focus of this article is to propose a methodology for utilising the potential of a quality assurance document management system (QDMS) as a prototypical platform for initiating, developing, testing and improving new functionalities that are required by IT as support for business system management. In that way QDMS plays the role of a catalyst that not only accelerates but could also enhance the selectivity of the reactions of the QA and IT metasystems and direct them towards finding new functionalities based on the event-driven paradigm. The article tries to show the process of modelling, development and implementation of a possible approach to this problem through a conceptual survey and a practical solution in the food industry.
Optimizing nursing care by integrating theory-driven evidence-based practice.
Pipe, Teri Britt
2007-01-01
An emerging challenge for nursing leadership is how to convey the importance of both evidence-based practice (EBP) and theory-driven care in ensuring patient safety and optimizing outcomes. This article describes a specific example of a leadership strategy based on Rosswurm and Larrabee's model for change to EBP, which was effective in aligning the processes of EBP and theory-driven care.
A survey of the three-dimensional high Reynolds number transonic wind tunnel
NASA Technical Reports Server (NTRS)
Takashima, K.; Sawada, H.; Aoki, T.
1982-01-01
The facilities for aerodynamic testing of airplane models at transonic speeds and high Reynolds numbers are surveyed. The need for high Reynolds number testing is reviewed, using some experimental results. Some approaches to high Reynolds number testing such as the cryogenic wind tunnel, the induction driven wind tunnel, the Ludwieg tube, the Evans clean tunnel and the hydraulic driven wind tunnel are described. The level of development of high Reynolds number testing facilities in Japan is discussed.
NASA Astrophysics Data System (ADS)
Vandermeulen, J.; Nasseri, S. A.; Van de Wiele, B.; Durin, G.; Van Waeyenberge, B.; Dupré, L.
2018-03-01
Lagrangian-based collective coordinate models for magnetic domain wall (DW) motion rely on an ansatz for the DW profile and a Lagrangian approach to describe the DW motion in terms of a set of time-dependent collective coordinates: the DW position, the DW magnetization angle, the DW width and the DW tilting angle. Another approach was recently used to derive similar equations of motion by averaging the Landau-Lifshitz-Gilbert equation without any ansatz, and identifying the relevant collective coordinates afterwards. In this paper, we use an updated version of the semi-analytical equations to compare the Lagrangian-based collective coordinate models with micromagnetic simulations for field- and STT-driven (spin-transfer torque-driven) DW motion in Pt/CoFe/MgO and Pt/Co/AlOx nanostrips. Through this comparison, we assess the accuracy of the different models, and provide insight into the deviations of the models from simulations. It is found that the lack of terms related to DW asymmetry in the Lagrangian-based collective coordinate models significantly contributes to the discrepancy between the predictions of the most accurate Lagrangian-based model and the micromagnetic simulations in the field-driven case. This is in contrast to the STT-driven case where the DW remains symmetric.
Dopamine selectively remediates 'model-based' reward learning: a computational approach.
Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna
2016-02-01
Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
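The logic of the two-step task can be illustrated with a minimal hybrid learner: a model-free agent updates first-stage values directly from reward, a model-based agent recomputes them from the known transition structure and second-stage values, and a weight w mixes the two. The parameters, reward probabilities and transition matrix below are illustrative, not the authors' fitted model.

```python
# Minimal sketch of a hybrid model-free / model-based learner on a two-step task.
import numpy as np

rng = np.random.default_rng(3)
alpha, w = 0.3, 0.5                      # learning rate, model-based weight
T = np.array([[0.7, 0.3],                # P(second-stage state | first-stage action)
              [0.3, 0.7]])
q_mf = np.zeros(2)                       # model-free first-stage action values
q2   = np.zeros(2)                       # second-stage state values
p_reward = np.array([0.8, 0.2])          # reward probability in each second-stage state

for trial in range(1000):
    q_mb = T @ q2                        # model-based first-stage values
    q = w * q_mb + (1 - w) * q_mf        # hybrid valuation
    a = int(rng.random() < 1 / (1 + np.exp(-(q[1] - q[0]))))   # softmax over 2 actions
    s2 = int(rng.random() < T[a, 1])     # sample second-stage state
    reward = float(rng.random() < p_reward[s2])
    q2[s2]  += alpha * (reward - q2[s2])            # second-stage TD update
    q_mf[a] += alpha * (reward - q_mf[a])           # model-free first-stage update

print("hybrid first-stage values:", np.round(w * (T @ q2) + (1 - w) * q_mf, 2))
```

Fitting the weight w to trial-by-trial choices is what allows model-based and model-free contributions to be dissociated in patients ON versus OFF medication.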
An analytical study of hybrid ejector/internal combustion engine-driven heat pumps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, R.W.
1988-01-01
Because ejectors can combine high reliability with low maintenance cost in a package requiring little capital investment, they may provide attractive heat pumping capability in situations where the importance of their inefficiencies is minimized. One such concept, a hybrid system in which an ejector driven by engine reject heat is used to increase the performance of an internal combustion engine-driven heat pump, was analyzed by modifying an existing ejector heat pump model and combining it with generic compressor and internal combustion engine models. Under the model assumptions for nominal cooling mode conditions, the results showed that hybrid systems could provide substantial performance augmentation, up to a 17% increase in system coefficient of performance for a parallel arrangement of an enhanced ejector with the engine-driven compressor. 4 refs., 4 figs., 4 tabs.
Fast ion beta limit measurements by collimated neutron detection in MST plasmas
NASA Astrophysics Data System (ADS)
Capecchi, William; Anderson, Jay; Bonofiglo, Phillip; Kim, Jungha; Sears, Stephanie
2015-11-01
Fast ion orbits in the reversed field pinch (RFP) are well ordered and classically confined despite magnetic field stochasticity generated by multiple tearing modes. Classical TRANSP modeling of a 1MW tangentially injected hydrogen neutral beam in MST deuterium plasmas predicts a core-localized fast ion density that can be up to 25% of the electron density and a fast ion beta of many times the local thermal beta. However, neutral particle analysis of an NBI-driven mode (presumably driven by a fast ion pressure gradient) shows mode-induced transport of core-localized fast ions and a saturated fast ion density. The TRANSP modeling is presumed valid until the onset of the beam-driven mode and gives an initial estimate of the volume-averaged fast ion beta of 1-2% (local core value up to 10%). A collimated neutron detector for fusion product profile measurements will be used to determine the spatial distribution of fast ions, allowing for a first measurement of the critical fast-ion pressure gradient required for mode destabilization. Testing/calibration data and initial fast-ion profiles will be presented. Characterization of both the local and global fast ion beta will be done for deuterium beam injection into deuterium plasmas for comparison to TRANSP predictions. Work supported by US DOE.
Gao, Junyuan; Sun, Xiurong; Moore, Leon C.; White, Thomas W.; Brink, Peter R.
2011-01-01
We recently modeled fluid flow through gap junction channels coupling the pigmented and nonpigmented layers of the ciliary body. The model suggested the channels could transport the secretion of aqueous humor, but flow would be driven by hydrostatic pressure rather than osmosis. The pressure required to drive fluid through a single layer of gap junctions might be just a few mmHg and difficult to measure. In the lens, however, there is a circulation of Na+ that may be coupled to intracellular fluid flow. Based on this hypothesis, the fluid would cross hundreds of layers of gap junctions, and this might require a large hydrostatic gradient. Therefore, we measured hydrostatic pressure as a function of distance from the center of the lens using an intracellular microelectrode-based pressure-sensing system. In wild-type mouse lenses, intracellular pressure varied from ∼330 mmHg at the center to zero at the surface. We have several knockout/knock-in mouse models with differing levels of expression of gap junction channels coupling lens fiber cells. Intracellular hydrostatic pressure in lenses from these mouse models varied inversely with the number of channels. When the lens’ circulation of Na+ was either blocked or reduced, intracellular hydrostatic pressure in central fiber cells was either eliminated or reduced proportionally. These data are consistent with our hypotheses: fluid circulates through the lens; the intracellular leg of fluid circulation is through gap junction channels and is driven by hydrostatic pressure; and the fluid flow is generated by membrane transport of sodium. PMID:21624945
Simulations of material mixing in laser-driven reshock experiments
NASA Astrophysics Data System (ADS)
Haines, Brian M.; Grinstein, Fernando F.; Welser-Sherrill, Leslie; Fincke, James R.
2013-02-01
We perform simulations of a laser-driven reshock experiment [Welser-Sherrill et al., High Energy Density Phys. (unpublished)] in the strong-shock high energy-density regime to better understand material mixing driven by the Richtmyer-Meshkov instability. Validation of the simulations is based on direct comparison of simulation and radiographic data. Simulations are also compared with published direct numerical simulation and the theory of homogeneous isotropic turbulence. Despite the fact that the flow is neither homogeneous, isotropic nor fully turbulent, there are local regions in which the flow demonstrates characteristics of homogeneous isotropic turbulence. We identify and isolate these regions by the presence of high levels of turbulent kinetic energy (TKE) and vorticity. After reshock, our analysis shows characteristics consistent with those of incompressible isotropic turbulence. Self-similarity and effective Reynolds number assessments suggest that the results are reasonably converged at the finest resolution. Our results show that in shock-driven transitional flows, turbulent features such as self-similarity and isotropy only fully develop once de-correlation, characteristic vorticity distributions, and integrated TKE, have decayed significantly. Finally, we use three-dimensional simulation results to test the performance of two-dimensional Reynolds-averaged Navier-Stokes simulations. In this context, we also test a presumed probability density function turbulent mixing model extensively used in combustion applications.
NASA Astrophysics Data System (ADS)
Gallagher, John A.
2016-04-01
The desired operating range of ferroelectric materials with compositions near the morphotropic phase boundary is limited by field induced phase transformations. In [001]C cut and poled relaxor ferroelectric single crystals the mechanically driven ferroelectric rhombohedral to ferroelectric orthorhombic phase transformation is hindered by antagonistic electrical loading. Instability around the phase transformation makes the current experimental technique for characterization of the large field behavior very time consuming. Characterization requires specialized equipment and involves an extensive set of measurements under combined electrical, mechanical, and thermal loads. In this work a mechanism-based model is combined with a more limited set of experiments to obtain the same results. The model utilizes a work-energy criterion that calculates the mechanical work required to induce the transformation and the required electrical work that is removed to reverse the transformation. This is done by defining energy barriers to the transformation. The results of the combined experiment and modeling approach are compared to the fully experimental approach and error is discussed. The model shows excellent predictive capability and is used to substantially reduce the total number of experiments required for characterization. This decreases the time and resources required for characterization of new compositions.
A Reward-Based Behavioral Platform to Measure Neural Activity during Head-Fixed Behavior
Micallef, Andrew H.; Takahashi, Naoya; Larkum, Matthew E.; Palmer, Lucy M.
2017-01-01
Understanding the neural computations that contribute to behavior requires recording from neurons while an animal is behaving. This is not an easy task, as most subcellular recording techniques require absolute head stability. The Go/No-Go sensory task is a powerful decision-driven task that enables an animal to report a binary decision during head-fixation. Here we discuss how to set up an Arduino- and Python-based platform to control a Go/No-Go sensory behavior paradigm. Using an Arduino micro-controller and a custom-written Python program, a reward can be delivered to the animal depending on the decision reported. We discuss the various components required to build the behavioral apparatus that can control and report such a sensory stimulus paradigm. This system enables the end user to control the behavioral testing in real-time, and therefore it provides a strong custom-made platform for probing the neural basis of behavior. PMID:28620282
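A sketch of what the Python-side control loop might look like is given below, using pyserial to exchange bytes with the micro-controller. The port name, baud rate and single-byte command protocol are assumptions for illustration; the actual protocol is defined by the user's Arduino firmware, and the script requires the hardware to be connected.

```python
# Sketch of a Go/No-Go trial loop over a serial link (protocol is hypothetical).
import random
import time

import serial   # pyserial

arduino = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=0.5)

def run_trial(stimulus_is_go: bool) -> None:
    arduino.write(b"S")                       # ask the firmware to present the stimulus
    licked = arduino.read(1) == b"L"          # firmware reports a lick within the window
    if stimulus_is_go and licked:             # correct Go response -> deliver reward
        arduino.write(b"R")
    time.sleep(random.uniform(2.0, 4.0))      # variable inter-trial interval

for _ in range(10):
    run_trial(stimulus_is_go=random.random() < 0.5)
```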
Elevations of water-worn features on Mars: Implications for circulation of groundwater
Carr, M.H.
2002-01-01
Central to the model of the evolution of the martian hydrosphere by Clifford and Parker [2001] is a permanent freezing of the planet at the end of the Noachian and recharge of the global groundwater system by basal melting of ice-rich polar deposits. Acquisition of MOLA data by Mars Global Surveyor provides a means of testing the model, since discharge of water onto the surface, after development of the cryosphere, is driven by the hydrostatic head created by the difference in elevation between the base of the polar-layered terrain and the discharge site. The new data show that, while most post- Noachian water-worn features are at a lower elevation than the base of the polar-layered terrains, as required by the model, there are exceptions. Prominent among these are possible lacustrine deposits within the canyons, tributaries to the canyons, and valleys on several volcanoes. These high-standing features can be reconciled with the model if volcanic melting of ice within the cryosphere is invoked as a source for water at high elevations. An alternative is that high pressures may have developed below the cryosphere as a result of water being trapped beneath the growing cryosphere and the impermeable basement. Yet another alternative is that, since the end of the Noachian, the groundwater system has been recharged by precipitation during occasional warm periods.
Hybrid-mode read-in integrated circuit for infrared scene projectors
NASA Astrophysics Data System (ADS)
Cho, Min Ji; Shin, Uisub; Lee, Hee Chul
2017-05-01
The infrared scene projector (IRSP) is a tool for evaluating infrared sensors by producing infrared images. Because sensor testing with IRSPs is safer than field testing, the usefulness of IRSPs is widely recognized at present. The important performance characteristics of IRSPs are the thermal resolution and the thermal dynamic range. However, due to an existing trade-off between these requirements, it is often difficult to find a workable balance between them. The conventional read-in integrated circuit (RIIC) can be classified into two types: voltage-mode and current-mode types. An IR emitter driven by a voltage-mode RIIC offers a fine thermal resolution. On the other hand, an emitter driven by the current-mode RIIC has the advantage of a wide thermal dynamic range. In order to provide various scenes, i.e., from highresolution scenes to high-temperature scenes, both of the aforementioned advantages are required. In this paper, a hybridmode RIIC which is selectively operated in two modes is proposed. The mode-selective characteristic of the proposed RIIC allows users to generate high-fidelity scenes regardless of the scene content. A prototype of the hybrid-mode RIIC was fabricated using a 0.18-μm 1-poly 6-metal CMOS process. The thermal range and the thermal resolution of the IR emitter driven by the proposed circuit were calculated based on measured data. The estimated thermal dynamic range of the current mode was from 261K to 790K, and the estimated thermal resolution of the voltage mode at 300K was 23 mK with a 12-bit gray-scale resolution.
A simple model for farmland nitrogen loss to surface runoff with raindrop driven process
NASA Astrophysics Data System (ADS)
Tong, J.; Li, J.
2016-12-01
It has been widely recognized that surface runoff from agricultural fields is an important source of non-point source pollution (NPSP). Moreover, as the agricultural country with the largest nitrogen fertilizer production, import and consumption in the world, our nation should pay greater attention to the over-application and inefficient use of nitrogen (N) fertilizer, which may cause severe pollution in both surface water and groundwater. To elucidate the transfer mechanism between the soil solution and surface runoff, many laboratory tests have been conducted and related models established. However, few of them were carried out at the field scale, since some variables are hard to control and uncontrollable natural factors, including rainfall intensity, temperature, wind speed and soil spatial heterogeneity, may affect the field experimental results. Despite that, field tests can better reflect the mechanism of soil chemical loss to surface runoff than laboratory experiments, which tend to oversimplify the environmental conditions. Therefore, a physically based nitrogen transport model was developed and tested with so-called semi-field experiments (i.e., artificial rainfall instead of natural rainfall was applied in the test). Our model integrates both the raindrop-driven process and the diffusion effect, along with simplified nitrogen chain reactions. The established model was solved numerically through the modified Hydrus-1d source code, and the model simulations closely agree with the experimental data. Furthermore, our model indicates that the depth of the exchange layer and the raindrop-induced water transfer rate are two important parameters, and they have different impacts on the simulation results. The study results can provide references for preventing and controlling agricultural NPSP.
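A simplified two-compartment sketch of the exchange-layer idea is given below: nitrogen held in a thin surface exchange layer is transferred to runoff at a rate set by the raindrop-induced water transfer rate, while infiltration leaches it downward. The layer depth, transfer rates and time step are illustrative assumptions, not the calibrated Hydrus-1d implementation, and the nitrogen chain reactions are omitted.

```python
# Toy exchange-layer mass balance for nitrogen loss to surface runoff.
import numpy as np

de    = 0.01      # exchange layer depth (m)
er    = 2e-6      # raindrop-induced water transfer rate to runoff (m/s)
inf   = 5e-6      # infiltration rate into deeper soil (m/s)
theta = 0.4       # volumetric water content of the exchange layer (-)
c     = 50.0      # N concentration in exchange-layer solution (g/m^3, i.e. mg/L)

dt, t_end = 1.0, 3600.0                       # time step and duration (s)
runoff_load = 0.0

for _ in np.arange(0.0, t_end, dt):
    flux_to_runoff = er * c                   # g per m^2 per s carried by runoff
    flux_downward  = inf * c                  # g per m^2 per s leached downward
    c += -(flux_to_runoff + flux_downward) * dt / (de * theta)
    runoff_load += flux_to_runoff * dt

print("N concentration left in exchange layer: %.1f g/m^3" % c)
print("cumulative N load to runoff: %.2f g/m^2" % runoff_load)
```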
The transferability of safety-driven access management models for application to other sites.
DOT National Transportation Integrated Search
2001-01-01
Several research studies have produced mathematical models that predict the safety impacts of selected access management techniques. Since new models require substantial resources to construct, this study evaluated five existing models with regard to...
NASA Astrophysics Data System (ADS)
Chen, Zhan-Ming; Chen, G. Q.
2013-07-01
This study presents a network simulation of the global embodied energy flows in 2007 based on a multi-region input-output model. The world economy is portrayed as a 6384-node network and the energy interactions between any two nodes are calculated and analyzed. According to the results, about 70% of the world's direct energy input is invested in resource, heavy manufacture, and transportation sectors, which provide only 30% of the embodied energy to satisfy final demand. By contrast, non-transportation services sectors contribute 24% of the world's demand-driven energy requirement with only 6% of the direct energy input. Commodity trade is shown to be an important alternative to fuel trade in redistributing energy, as international commodity flows embody 1.74E+20 J of energy, up to 89% of the magnitude of the traded fuels. China is the largest embodied energy exporter with a net export of 3.26E+19 J, in contrast to the United States as the largest importer with a net import of 2.50E+19 J. The recent economic fluctuations following the financial crisis accelerate the relative expansion of energy requirements by developing countries; as a consequence, China will take over from the United States as the world's top demand-driven energy consumer in 2022 and India will become the third largest in 2015.
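The embodied-energy accounting behind such a study can be illustrated with a toy input-output example: direct energy intensities are propagated through the Leontief inverse to give embodied intensities, and demand-driven energy use follows from final demand. The three-sector numbers are invented; the study itself uses a 6384-node multi-region table.

```python
# Toy embodied-energy accounting with a 3-sector input-output model.
import numpy as np

A = np.array([[0.10, 0.20, 0.05],    # technical coefficients (inputs per unit output)
              [0.15, 0.10, 0.10],
              [0.05, 0.05, 0.02]])
f = np.array([8.0, 2.0, 0.5])        # direct energy input per unit output (J per $)
y = np.array([100.0, 300.0, 600.0])  # final demand ($)

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
eps = f @ L                          # embodied energy intensity of each sector's output

x = L @ y                            # total output required to meet final demand
direct_energy = f @ x                # energy input by producing sectors
demand_driven = eps @ y              # energy embodied in final demand

print("embodied intensities (J/$):", np.round(eps, 2))
print("direct vs demand-driven totals (J): %.0f vs %.0f" % (direct_energy, demand_driven))
```

The two totals coincide by construction, which is the balance that lets the same energy input be reattributed from producing sectors (resource, heavy manufacture, transport) to the final-demand side (services, households).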
Histone Acetyltransferase Activity of MOF Is Required for MLL-AF9 Leukemogenesis.
Valerio, Daria G; Xu, Haiming; Chen, Chun-Wei; Hoshii, Takayuki; Eisold, Meghan E; Delaney, Christopher; Cusan, Monica; Deshpande, Aniruddha J; Huang, Chun-Hao; Lujambio, Amaia; Zheng, YuJun George; Zuber, Johannes; Pandita, Tej K; Lowe, Scott W; Armstrong, Scott A
2017-04-01
Chromatin-based mechanisms offer therapeutic targets in acute myeloid leukemia (AML) that are of great current interest. In this study, we conducted an RNAi-based screen to identify druggable chromatin regulator-based targets in leukemias marked by oncogenic rearrangements of the MLL gene. In this manner, we discovered the H4K16 histone acetyltransferase (HAT) MOF to be important for leukemia cell growth. Conditional deletion of Mof in a mouse model of MLL-AF9-driven leukemogenesis reduced tumor burden and prolonged host survival. RNA sequencing showed an expected downregulation of genes within DNA damage repair pathways that are controlled by MOF, as correlated with a significant increase in γH2AX nuclear foci in Mof-deficient MLL-AF9 tumor cells. In parallel, Mof loss also impaired global H4K16 acetylation in the tumor cell genome. Rescue experiments with catalytically inactive mutants of MOF showed that its enzymatic activity was required to maintain cancer pathogenicity. In support of the role of MOF in sustaining H4K16 acetylation, a small-molecule inhibitor of the HAT component MYST blocked the growth of both murine and human MLL-AF9 leukemia cell lines. Furthermore, Mof inactivation suppressed leukemia development in an NUP98-HOXA9-driven AML model. Taken together, our results establish that the HAT activity of MOF is required to sustain MLL-AF9 leukemia and may be important for multiple AML subtypes. Blocking this activity is sufficient to stimulate DNA damage, offering a rationale to pursue MOF inhibitors as a targeted approach to treat MLL-rearranged leukemias. Cancer Res; 77(7); 1753-62. ©2017 AACR. ©2017 American Association for Cancer Research.
A Systems Engineering Process Supporting the Development of Operational Requirements Driven Federations
Tolk, Andreas; Litwin, Thomas G.
2008-12-01
Predicting Kenya Short Rains Using the Indian Ocean SST
NASA Astrophysics Data System (ADS)
Peng, X.; Albertson, J. D.; Steinschneider, S.
2017-12-01
The rainfall over Eastern Africa is characterized by a typical bimodal monsoon system. The literature has shown that the monsoon system is closely connected with the large-scale atmospheric motion, which is believed to be driven by sea surface temperature anomalies (SSTA). Therefore, we may make use of the predictability of SSTA in estimating the future East African monsoon. In this study, we predict the Kenya short rains (October, November and December rainfall) based on the Indian Ocean SSTA. The Least Absolute Shrinkage and Selection Operator (LASSO) regression is used to avoid over-fitting issues. Models for different lead times are trained using a 28-year training set (1979-2006) and are tested using a 10-year test set (2007-2016). Satisfying prediction skills are achieved at relatively long lead times (i.e., 8 and 10 months) in terms of correlation coefficient and sign accuracy. Unlike some of the previous work, the prediction models are obtained by a data-driven method. Limited predictors are selected for each model and can be used to understand the underlying physical connection. Still, further investigation is needed since the sampling variability issue cannot be excluded due to the limited sample size.
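The LASSO workflow can be sketched as below, with synthetic stand-ins for the gridded Indian Ocean SSTA predictors at one lead time and the October-December rainfall index; the train/test split mirrors the 28-year / 10-year design, while the grid size, regularization strength and signal are illustrative assumptions.

```python
# Sketch: LASSO regression of a seasonal rainfall index on SSTA predictors.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_years, n_cells = 38, 200
ssta = rng.normal(size=(n_years, n_cells))                 # SSTA grid cells at one lead
rain = 0.8 * ssta[:, 10] - 0.6 * ssta[:, 55] + 0.3 * rng.normal(size=n_years)

X_train, X_test = ssta[:28], ssta[28:]
y_train, y_test = rain[:28], rain[28:]

scaler = StandardScaler().fit(X_train)
model = Lasso(alpha=0.1).fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
corr = np.corrcoef(pred, y_test)[0, 1]
sign_acc = np.mean(np.sign(pred) == np.sign(y_test))
print("non-zero predictors:", int(np.sum(model.coef_ != 0)))
print("test correlation %.2f, sign accuracy %.2f" % (corr, sign_acc))
```

The sparsity of the fitted coefficients is what leaves only a handful of ocean regions as predictors, which is how the limited predictor sets discussed above arise.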
NASA Astrophysics Data System (ADS)
Halimah, B. Z.; Azlina, A.; Sembok, T. M.; Sufian, I.; Sharul Azman, M. N.; Azuraliza, A. B.; Zulaiha, A. O.; Nazlia, O.; Salwani, A.; Sanep, A.; Hailani, M. T.; Zaher, M. Z.; Azizah, J.; Nor Faezah, M. Y.; Choo, W. O.; Abdullah, Chew; Sopian, B.
The Holistic Islamic Banking System (HiCORE), a banking system suitable for a virtual banking environment, was created through a university-industry collaboration between Universiti Kebangsaan Malaysia (UKM) and Fuziq Software Sdn Bhd. HiCORE was modeled on a multi-tiered Simple Services-Oriented Architecture (S-SOA) using a parameter-based semantic approach. HiCORE's existence is timely, as the financial world is looking for a new approach to creating banking and financial products that are interest free or based on Islamic Syariah principles and jurisprudence. Interest-free banking has currently caught the interest of bankers and financiers all over the world. HiCORE's parameter-based module houses the Customer Information File (CIF), Deposit and Financing components, and represents the third tier of the multi-tiered Simple SOA approach. This paper highlights the multi-tiered, parameter-driven approach to the creation of new Islamic products based on the 'dalil' (Quran), 'syarat' (rules) and 'rukun' (procedures) required by Syariah principles and jurisprudence, reflected by the semantic ontology embedded in the parameter module of the system.
Implementation and evaluation of a monthly water balance model over the US on an 800 m grid
Hostetler, Steven W.; Alder, Jay R.
2016-01-01
We simulate the 1950–2010 water balance for the conterminous U.S. (CONUS) with a monthly water balance model (MWBM) using the 800 m Parameter-elevation Regression on Independent Slopes Model (PRISM) data set as model input. We employed observed snow and streamflow data sets to guide modification of the snow and potential evapotranspiration components in the default model and to evaluate model performance. Based on various metrics and sensitivity tests, the modified model yields reasonably good simulations of seasonal snowpack in the West (range of bias of ±50 mm at 68% of 713 SNOTEL sites), the gradients and magnitudes of actual evapotranspiration, and runoff (median correlation of 0.83 and median Nash-Sutcliffe efficiency of 0.6 between simulated and observed annual time series at 1427 USGS gage sites). The model generally performs well along the Pacific Coast, the high elevations of the Basin and Range and over the Midwest and East, but not as well over the dry areas of the Southwest and upper Plains regions due, in part, to the apportioning of direct versus delayed runoff. Sensitivity testing and application of the MWBM to simulate the future water balance at four National Parks when driven by 30 climate models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) demonstrate that the model is useful for evaluating first-order, climate-driven hydrologic change on monthly and annual time scales.
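For reference, the bias and Nash-Sutcliffe efficiency metrics quoted above can be computed as in the short sketch below; the numbers are illustrative, not values from the study.

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean of the observations."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def bias(sim, obs):
    """Mean error, in the same units as the series (e.g. mm)."""
    return float(np.mean(np.asarray(sim, float) - np.asarray(obs, float)))

obs = np.array([120.0, 95.0, 160.0, 80.0, 140.0])   # illustrative annual runoff, mm
sim = np.array([110.0, 100.0, 150.0, 90.0, 150.0])
print(nash_sutcliffe(sim, obs), bias(sim, obs))
```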
Direct methanol fuel cells: A database-driven design procedure
NASA Astrophysics Data System (ADS)
Flipsen, S. F. J.; Spitas, C.
2011-10-01
To test the feasibility of DMFC systems in the preliminary stages of the design process, the design engineer can make use of heuristic models that identify the suitability of a DMFC system for a specific application. In general, these models are too generic and have low accuracy. To improve the accuracy, a second-order model is proposed in this paper. The second-order model consists of an evolutionary algorithm written in Mathematica, which selects a component set satisfying the fuel-cell system's performance requirements, places the components in 3D space, and optimizes for volume. The results are presented as a 3D draft proposal together with a feasibility metric. To test the algorithm, the design of a DMFC system for an MP3 player is evaluated. The results show that volume and cost are an issue for the feasibility of a fuel-cell power system in an MP3 player. The generated designs and the algorithm are evaluated and recommendations are given.
Effects of MHD instabilities on neutral beam current drive
NASA Astrophysics Data System (ADS)
Podestà, M.; Gorelenkova, M.; Darrow, D. S.; Fredrickson, E. D.; Gerhardt, S. P.; White, R. B.
2015-05-01
Neutral beam injection (NBI) is one of the primary tools foreseen for heating, current drive (CD) and q-profile control in future fusion reactors such as ITER and a Fusion Nuclear Science Facility. However, fast ions from NBI may also provide the drive for energetic particle-driven instabilities (e.g. Alfvénic modes (AEs)), which in turn redistribute fast ions in both space and energy, thus hampering the control capabilities and overall efficiency of NB-driven current. Based on experiments on the NSTX tokamak (M. Ono et al 2000 Nucl. Fusion 40 557), the effects of AEs and other low-frequency magneto-hydrodynamic instabilities on NB-CD efficiency are investigated. A new fast ion transport model, which accounts for particle transport in phase space as required for resonant AE perturbations, is utilized to obtain consistent simulations of NB-CD through the tokamak transport code TRANSP. It is found that instabilities do indeed reduce the NB-driven current density over most of the plasma radius by up to ∼50%. Moreover, the details of the current profile evolution are sensitive to the specific model used to mimic the interaction between NB ions and instabilities. Implications for fast ion transport modeling in integrated tokamak simulations are briefly discussed.
Effects of MHD instabilities on neutral beam current drive
Podestà, M.; Gorelenkova, M.; Darrow, D. S.; ...
2015-04-17
One of the primary tools foreseen for heating, current drive (CD) and q-profile control in future fusion reactors such as ITER and a Fusion Nuclear Science Facility is neutral beam injection (NBI). However, fast ions from NBI may also provide the drive for energetic particle-driven instabilities (e.g. Alfvénic modes (AEs)), which in turn redistribute fast ions in both space and energy, thus hampering the control capabilities and overall efficiency of NB-driven current. Based on experiments on the NSTX tokamak (M. Ono et al 2000 Nucl. Fusion 40 557), the effects of AEs and other low-frequency magneto-hydrodynamic instabilities on NB-CD efficiency are investigated. A new fast ion transport model, which accounts for particle transport in phase space as required for resonant AE perturbations, is utilized to obtain consistent simulations of NB-CD through the tokamak transport code TRANSP. It is found that instabilities do indeed reduce the NB-driven current density over most of the plasma radius by up to ~50%. Moreover, the details of the current profile evolution are sensitive to the specific model used to mimic the interaction between NB ions and instabilities. Finally, implications for fast ion transport modeling in integrated tokamak simulations are briefly discussed.
Kamesh, Reddi; Rani, Kalipatnapu Yamuna
2017-12-01
In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed, incorporating the extended Kalman filter (EKF) control concept and using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules: online parameter estimation based on past measurements, and control estimation over the control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor, posed as a challenge problem, has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at the supervisory level in a cascade control configuration with a proportional-integral controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach incorporates all aspects of MPC, including a move suppression factor for control effort minimization and constraint-handling capability, including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, and constraint-handling ability compared with the ANN-MPC with PI approach for two products in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.
Network operating system focus technology
NASA Technical Reports Server (NTRS)
1985-01-01
An activity structured to provide specific design requirements and specifications for the Space Station Data Management System (DMS) Network Operating System (NOS) is outlined. Examples are given of the types of supporting studies and implementation tasks presently underway to realize a DMS test bed capability to develop hands-on understanding of NOS requirements as driven by actual subsystem test beds participating in the overall Johnson Space Center test bed program. Classical operating system elements and principal NOS functions are listed.
Real-time computing platform for spiking neurons (RT-spike).
Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael
2006-07-01
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
The NTeQ ISD Model: A Tech-Driven Model for Digital Natives (DNs)
ERIC Educational Resources Information Center
Williams, C.; Anekwe, J. U.
2017-01-01
The Integrating Technology for Enquiry (NTeQ) instructional systems design (ISD) model is believed to be a technology-driven model. The authors x-rayed the ten-step model to reaffirm the ICT knowledge demanded of the learner and the educator; hence, computer-based activities at various stages of the model are core elements. The model is also conscious of…
NASA Astrophysics Data System (ADS)
Piotrowski, Adam P.; Napiorkowski, Jaroslaw J.
2018-06-01
A number of physical or data-driven models have been proposed to evaluate stream water temperatures based on hydrological and meteorological observations. However, physical models require a large amount of information that is frequently unavailable, while data-based models ignore the physical processes. Recently the air2stream model has been proposed as an intermediate alternative that is based on physical heat budget processes, but is simplified enough that it may be applied like data-driven models. However, the price for simplicity is the need to calibrate eight parameters that, although they have some physical meaning, cannot be measured or evaluated a priori. As a result, the applicability and performance of the air2stream model for a particular stream rely on the efficiency of the calibration method. The original air2stream model uses an inefficient 20-year-old approach called Particle Swarm Optimization with inertia weight. This study aims at finding an effective and robust calibration method for the air2stream model. Twelve different optimization algorithms are examined on six different streams from the northern USA (states of Washington, Oregon and New York), Poland and Switzerland, located in high-mountain, hilly and lowland areas. It is found that the performance of the air2stream model depends significantly on the calibration method. Two algorithms lead to the best results for each considered stream. The air2stream model, calibrated with the chosen optimization methods, performs favorably against classical stream water temperature models. The MATLAB code of the air2stream model and the chosen calibration procedure (CoBiDE) are available as Supplementary Material on the Journal of Hydrology web page.
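The study's twelve calibration algorithms are not reproduced here, but the general calibration setup, minimizing a water temperature error over bounded parameters with a stock global optimizer, can be sketched as follows; the toy recursion stands in for the air2stream equations and is an assumption for illustration only.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative observations: daily air and water temperature (deg C).
rng = np.random.default_rng(1)
t_air = 10 + 8 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 1, 365)
t_water_obs = 5 + 0.6 * t_air + rng.normal(0, 0.5, 365)

def simulate(params, t_air, t0=8.0):
    """Toy air-to-stream temperature recursion with three lumped parameters.
    This is NOT the air2stream model, just a stand-in of similar shape."""
    a1, a2, a3 = params
    t_w = np.empty_like(t_air)
    t_prev = t0
    for i, ta in enumerate(t_air):
        t_prev = t_prev + a1 + a2 * ta - a3 * t_prev   # daily update
        t_w[i] = t_prev
    return t_w

def rmse(params):
    return np.sqrt(np.mean((simulate(params, t_air) - t_water_obs) ** 2))

bounds = [(-10, 10), (0, 2), (0.01, 1)]                # parameter search ranges
result = differential_evolution(rmse, bounds, seed=0, maxiter=200)
print(result.x, result.fun)
```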
Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality
NASA Astrophysics Data System (ADS)
Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.
2017-12-01
Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of the main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems, since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we have developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
40 CFR 93.159 - Procedures for conformity determinations of general Federal actions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... based on the applicable air quality models, data bases, and other requirements specified in the most... applicable air quality models, data bases, and other requirements specified in the most recent version of the... data are available, such as actual stack test data from stationary sources which are part of the...
Omics strategies for revealing Yersinia pestis virulence
Yang, Ruifu; Du, Zongmin; Han, Yanping; Zhou, Lei; Song, Yajun; Zhou, Dongsheng; Cui, Yujun
2012-01-01
Omics has remarkably changed the way we investigate and understand life. Omics differs from traditional hypothesis-driven research because it is a discovery-driven approach. Massive datasets produced from omics-based studies require experts from different fields to reveal the salient features behind these data. In this review, we summarize omics-driven studies to reveal the virulence features of Yersinia pestis through genomics, transcriptomics, proteomics, interactomics, etc. These studies serve as foundations for further hypothesis-driven research and help us gain insight into Y. pestis pathogenesis. PMID:23248778
A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.
Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai
2014-08-01
This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and facial expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if the inputs are of low quality and have large variations in viewpoint and facial expression.
Wang, Shutao; Raju, Balasundar I; Leyvi, Evgeniy; Weinstein, David A; Seip, Ralf
2011-09-01
Glycogen storage disease type Ia (GSDIa) is caused by an inherited defect in the glucose-6-phosphatase gene. The recent advent of targeted ultrasound-mediated delivery (USMD) of plasmid DNA (pDNA) to the liver in conjunction with microbubbles may provide an alternative treatment option. This study focuses on determining the acoustically accessible liver volume in GSDIa patients using transducer models of various geometries with an image-based geometry-driven approach. Results show that transducers with longer focal lengths and smaller apertures (up to an f/number of 2) are able to access larger liver volumes in GSDIa patients while still being capable of delivering the required ultrasound dose in situ (2.5 MPa peak negative pressure at the focus). With sufficiently large acoustic windows and the ability to use glucose to easily assess efficacy, GSD appears to be a good model for testing USMD as proof of principle as a potential therapy for liver applications in general. Copyright © 2011 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Simplified subsurface modelling: data assimilation and violated model assumptions
NASA Astrophysics Data System (ADS)
Erdal, Daniel; Lange, Natascha; Neuweiler, Insa
2017-04-01
Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or if they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D model and the unsaturated zone as a few sparse 1D columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. a shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hillslope or pumping wells, creating lateral fluxes in the unsaturated zone, or strongly heterogeneous structures creating unaccounted flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data-driven (e.g. grey-box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter-driven models will be shown and the resulting benefits and drawbacks will be discussed.
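The abstract does not spell out the assimilation equations; a hedged sketch of a stochastic Ensemble Kalman filter analysis step of the kind used to update simplified groundwater states from point observations is given below, with an assumed linear observation operator and a synthetic ensemble.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_members) forecast states (e.g. groundwater heads).
    obs: (n_obs,) measured values; obs_operator: (n_obs, n_state) linear H.
    """
    n_state, n_members = ensemble.shape
    H = obs_operator
    A = ensemble - ensemble.mean(axis=1, keepdims=True)     # anomalies
    HA = H @ A
    P_yy = HA @ HA.T / (n_members - 1) + obs_err_std**2 * np.eye(len(obs))
    P_xy = A @ HA.T / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                           # Kalman gain
    # Perturb observations once per member (stochastic EnKF variant).
    obs_pert = obs[:, None] + rng.normal(0, obs_err_std, (len(obs), n_members))
    return ensemble + K @ (obs_pert - H @ ensemble)

rng = np.random.default_rng(2)
ens = rng.normal(10.0, 1.0, size=(50, 40))                   # 50 head cells, 40 members
H = np.zeros((2, 50)); H[0, 5] = 1.0; H[1, 30] = 1.0         # two observation wells
obs = np.array([10.8, 9.4])
updated = enkf_update(ens, obs, H, obs_err_std=0.2, rng=rng)
print(updated.shape)
```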
Code of Federal Regulations, 2014 CFR
2014-01-01
... aircraft noise when the wind speed is in excess of 5 knots (9 km/hr). Sec. G36.107 Noise Measurement... OF TRANSPORTATION AIRCRAFT NOISE STANDARDS: AIRCRAFT TYPE AND AIRWORTHINESS CERTIFICATION Pt. 36, App....201 Corrections to Test Results. G36.203 Validity of results. part d—noise limits G36.301 Aircraft...
NASA Astrophysics Data System (ADS)
Forkel, Matthias; Dorigo, Wouter; Lasslop, Gitta; Teubner, Irene; Chuvieco, Emilio; Thonicke, Kirsten
2017-12-01
Vegetation fires affect human infrastructure, ecosystems, global vegetation distribution, and atmospheric composition. However, the climatic, environmental, and socioeconomic factors that control global fire activity in vegetation are only poorly understood, and they are represented with varying complexity and formulation in global process-oriented vegetation-fire models. Data-driven model approaches such as machine learning algorithms have successfully been used to identify and better understand the factors controlling fire activity. However, such machine learning models cannot be easily adapted or even implemented within process-oriented global vegetation-fire models. To overcome this gap between machine learning-based approaches and process-oriented global fire models, we introduce a new flexible data-driven fire modelling approach here (Satellite Observations to predict FIre Activity, SOFIA approach version 1). SOFIA models can use several predictor variables and functional relationships to estimate burned area, and they can be easily adapted within more complex process-oriented vegetation-fire models. We created an ensemble of SOFIA models to test the importance of several predictor variables. SOFIA models achieve the highest performance in predicting burned area if they account for a direct restriction of fire activity under wet conditions and if they include a land cover-dependent restriction or allowance of fire activity by vegetation density and biomass. The use of vegetation optical depth data from microwave satellite observations, a proxy for vegetation biomass and water content, reaches higher model performance than commonly used vegetation variables from optical sensors. We further analyse spatial patterns of the sensitivity between anthropogenic, climate, and vegetation predictor variables and burned area. We finally discuss how multiple observational datasets on climate, hydrological, vegetation, and socioeconomic variables, together with data-driven modelling and model-data integration approaches, can guide the future development of global process-oriented vegetation-fire models.
Six Sigma Quality Management System and Design of Risk-based Statistical Quality Control.
Westgard, James O; Westgard, Sten A
2017-03-01
Six sigma concepts provide a quality management system (QMS) with many useful tools for managing quality in medical laboratories. This Six Sigma QMS is driven by the quality required for the intended use of a test. The most useful form for this quality requirement is the allowable total error. Calculation of a sigma-metric provides the best predictor of risk for an analytical examination process, as well as a design parameter for selecting the statistical quality control (SQC) procedure necessary to detect medically important errors. Simple point estimates of sigma at medical decision concentrations are sufficient for laboratory applications. Copyright © 2016 Elsevier Inc. All rights reserved.
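For reference, the sigma-metric mentioned here is conventionally computed from the allowable total error (TEa), the observed bias, and the imprecision (CV) at the medical decision concentration; a minimal sketch with illustrative numbers:

```python
def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma-metric = (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative example: TEa 10%, bias 1.5%, CV 2% at the decision level.
print(sigma_metric(10.0, 1.5, 2.0))   # -> 4.25 sigma
```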
Computational modeling of brain tumors: discrete, continuum or hybrid?
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Deisboeck, Thomas S.
2008-04-01
In spite of all efforts, patients diagnosed with highly malignant brain tumors (gliomas), continue to face a grim prognosis. Achieving significant therapeutic advances will also require a more detailed quantitative understanding of the dynamic interactions among tumor cells, and between these cells and their biological microenvironment. Data-driven computational brain tumor models have the potential to provide experimental tumor biologists with such quantitative and cost-efficient tools to generate and test hypotheses on tumor progression, and to infer fundamental operating principles governing bidirectional signal propagation in multicellular cancer systems. This review highlights the modeling objectives of and challenges with developing such in silico brain tumor models by outlining two distinct computational approaches: discrete and continuum, each with representative examples. Future directions of this integrative computational neuro-oncology field, such as hybrid multiscale multiresolution modeling are discussed.
Reanalysis of Clause Boundaries in Japanese as a Constraint-Driven Process.
ERIC Educational Resources Information Center
Miyamoto, Edson T.
2003-01-01
Reports on two experiments that focus on clause boundaries in Japanese that suggest that minimal change restriction is unnecessary to characterize reanalysis. Proposes that the data and previous observations are more naturally explained by a constraint-driven model in which revisions are performed only when required by parsing constraints.…
40 CFR 600.010-86 - Vehicle test requirements and minimum data requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... base level, and (iii) For additional model types established under § 600.207(a)(2), data from each... data requirements. 600.010-86 Section 600.010-86 Protection of Environment ENVIRONMENTAL PROTECTION... Provisions § 600.010-86 Vehicle test requirements and minimum data requirements. (a) For each certification...
Stable solutions of inflation driven by vector fields
NASA Astrophysics Data System (ADS)
Emami, Razieh; Mukohyama, Shinji; Namba, Ryo; Zhang, Ying-li
2017-03-01
Many models of inflation driven by vector fields alone have been known to be plagued by pathological behaviors, namely ghost and/or gradient instabilities. In this work, we seek a new class of vector-driven inflationary models that evade all of the mentioned instabilities. We build our analysis on the Generalized Proca Theory with an extension to three vector fields to realize isotropic expansion. We obtain the conditions required for quasi de-Sitter solutions to be an attractor analogous to the standard slow-roll one and those for their stability at the level of linearized perturbations. Identifying the remedy to the existing unstable models, we provide a simple example and explicitly show its stability. This significantly broadens our knowledge on vector inflationary scenarios, reviving potential phenomenological interests for this class of models.
NASA Astrophysics Data System (ADS)
Launois, Thomas; Ogée, Jérôme; Commane, Roisin; Wehr, Rchard; Meredith, Laura; Munger, Bill; Nelson, David; Saleska, Scott; Wofsy, Steve; Zahniser, Mark; Wingate, Lisa
2016-04-01
The exchange of CO2 between the terrestrial biosphere and the atmosphere is driven by photosynthetic uptake and respiratory loss, two fluxes currently estimated with considerable uncertainty at large scales. Model predictions indicate that these biosphere fluxes will be modified in the future as CO2 concentrations and temperatures increase; however, it is still unclear to what extent. To address this challenge, there is a need for better constraints on land surface model parameterisations. Additional atmospheric tracers of large-scale CO2 fluxes have been identified as potential candidates for this task. In particular, carbonyl sulphide (OCS) has been proposed as a complementary tracer of gross photosynthesis over land, since OCS uptake by plants is dominated by carbonic anhydrase (CA) activity, an enzyme abundant in leaves that catalyses CO2 hydration during photosynthesis. However, although the OCS mass budget at the ecosystem scale is dominated by the flux of OCS into leaves, some OCS is also exchanged between the atmosphere and the soil, and this component of the budget requires constraining. In this study, we adapted the process-based isotope-enabled model MuSICA (Multi-layer Simulator of the Interactions between a vegetation Canopy and the Atmosphere) to include the transport, reaction, diffusion and production of OCS within a forested ecosystem. This model was combined with 3 years (2011-2013) of in situ measurements of OCS atmospheric concentration profiles and fluxes at the Harvard Forest (Massachusetts, USA) to test hypotheses on the mechanisms responsible for CA-driven uptake by leaves and soils as well as possible OCS emissions during litter decomposition. Model simulations over the three years captured well the impact of diurnally and seasonally varying environmental conditions on the net ecosystem OCS flux. A sensitivity analysis on soil CA activity and soil OCS emission rates was also performed to quantify their impact on the vertical profiles of OCS inside the canopy and the net OCS exchange with the atmosphere.
Resonance measurement of nonlocal spin torque in a three-terminal magnetic device.
Xue, Lin; Wang, Chen; Cui, Yong-Tao; Liu, Luqiao; Swander, A; Sun, J Z; Buhrman, R A; Ralph, D C
2012-04-06
A pure spin current generated within a nonlocal spin valve can exert a spin-transfer torque on a nanomagnet. This nonlocal torque enables new design schemes for magnetic memory devices that do not require the application of large voltages across tunnel barriers that can suffer electrical breakdown. Here we report a quantitative measurement of this nonlocal spin torque using spin-torque-driven ferromagnetic resonance. Our measurement agrees well with the prediction of an effective circuit model for spin transport. Based on this model, we suggest strategies for optimizing the strength of nonlocal torque. © 2012 American Physical Society
Integrating geo web services for a user driven exploratory analysis
NASA Astrophysics Data System (ADS)
Moncrieff, Simon; Turdukulov, Ulanbek; Gulland, Elizabeth-Kate
2016-04-01
In data exploration, several online data sources may need to be dynamically aggregated or summarised over a spatial region, time interval, or set of attributes. With respect to thematic data, web services are mainly used to present results, leading to a supplier-driven service model that limits exploration of the data. In this paper we propose a user-need-driven service model based on geo web processing services. The aim of the framework is to provide a method for scalable and interactive access to various geographic data sources on the web. The architecture combines a data query, a processing technique and a visualisation methodology to rapidly integrate and visually summarise properties of a dataset. We illustrate the environment on a health-related use case that derives the Age Standardised Rate, a dynamic index that needs integration of existing interoperable web services of demographic data in conjunction with standalone non-spatial secure database servers used in health research. Although the example is specific to the health field, the architecture and the proposed approach are relevant and applicable to other fields that require integration and visualisation of geo datasets from various web services, and thus, we believe, the approach is generic.
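As background for the use case, a directly age-standardised rate is the weighted average of age-specific rates using a standard population; a minimal sketch of that computation (all numbers made up) follows.

```python
import numpy as np

def age_standardised_rate(cases, person_years, std_population):
    """Direct standardisation: weight age-specific rates by a standard population."""
    rates = np.asarray(cases, float) / np.asarray(person_years, float)
    weights = np.asarray(std_population, float)
    return np.sum(rates * weights) / np.sum(weights)

# Illustrative three age bands: observed cases, person-years, standard population.
cases = [12, 40, 95]
person_years = [50_000, 60_000, 30_000]
std_population = [40_000, 35_000, 25_000]
print(age_standardised_rate(cases, person_years, std_population) * 100_000)  # per 100,000
```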
NASA Astrophysics Data System (ADS)
Lu, Xiaojun; Liu, Changli; Chen, Lei
2018-04-01
In this paper, a redundant Piezo-driven stage with a 3RRR compliant mechanism is introduced, and we propose a master-slave control with trajectory planning (MSCTP) strategy together with a Bouc-Wen model to improve its micro-motion tracking performance. The advantage of the proposed controller is that its implementation only requires a simple control strategy, without the complexity of modeling, to avoid the master PEA's tracking error. The dynamic model of the slave PEA system with Bouc-Wen hysteresis is established and identified via a particle swarm optimization (PSO) approach. The Piezo-driven stage is implemented to track a prescribed circle with operating periods T = 1 s and 2 s. The simulation results show that the MSCTP strategy with the Bouc-Wen model reduces the trajectory tracking errors to within the accuracy of our available measurement.
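The Bouc-Wen hysteresis state used for the slave PEA can be sketched in discrete time as below; the parameter values are illustrative placeholders rather than the identified ones, and in the paper they are obtained by PSO against measured responses.

```python
import numpy as np

def bouc_wen_hysteresis(u, dt, alpha, beta, gamma, n=1.0):
    """Discrete Bouc-Wen hysteresis state h driven by input displacement u(t):
       dh/dt = alpha*du/dt - beta*|du/dt|*|h|^(n-1)*h - gamma*(du/dt)*|h|^n
    """
    h = np.zeros_like(u)
    for k in range(1, len(u)):
        du = (u[k] - u[k - 1]) / dt
        dh = (alpha * du
              - beta * abs(du) * abs(h[k - 1]) ** (n - 1) * h[k - 1]
              - gamma * du * abs(h[k - 1]) ** n)
        h[k] = h[k - 1] + dt * dh
    return h

t = np.linspace(0, 2.0, 2001)                   # two 1 s periods
u = 5e-6 * np.sin(2 * np.pi * t)                # commanded displacement, m (illustrative)
h = bouc_wen_hysteresis(u, dt=t[1] - t[0], alpha=0.8, beta=2.0e5, gamma=1.0e5)
output = u - h                                   # hysteretic output (illustrative)
print(output[:5])
```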
High-resolution, regional-scale crop yield simulations for the Southwestern United States
NASA Astrophysics Data System (ADS)
Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.
2012-12-01
Over the past few decades, there have been many process-based crop models developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of gridded high-resolution meteorological and soil datasets required as inputs for these data-intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some variability between simulated RCM-driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and maximum temperature), beyond which the yields were negatively affected. These results are now being used for further regional-scale yield analysis as the aforementioned framework is adaptable to multiple geographic regions and crop types.
A Model-Driven Approach to e-Course Management
ERIC Educational Resources Information Center
Savic, Goran; Segedinac, Milan; Milenkovic, Dušica; Hrin, Tamara; Segedinac, Mirjana
2018-01-01
This paper presents research on using a model-driven approach to the development and management of electronic courses. We propose a course management system which stores a course model represented as distinct machine-readable components containing domain knowledge of different course aspects. Based on this formally defined platform-independent…
Optimizing methods for linking cinematic features to fMRI data.
Kauttonen, Janne; Hlushchuk, Yevhen; Tikka, Pia
2015-04-15
One of the challenges of naturalistic neurosciences using movie-viewing experiments is how to interpret observed brain activations in relation to the multiplicity of time-locked stimulus features. As previous studies have shown less inter-subject synchronization across viewers of random video footage than of story-driven films, new methods need to be developed for the analysis of less story-driven contents. To optimize the linkage between our fMRI data, collected during viewing of a deliberately non-narrative silent film 'At Land' by Maya Deren (1944), and its annotated content, we combined the method of elastic-net regularization with model-driven linear regression and the well-established data-driven independent component analysis (ICA) and inter-subject correlation (ISC) methods. In the linear regression analysis, both IC and region-of-interest (ROI) time series were fitted with the time series of a total of 36 binary-valued and one real-valued tactile annotation of film features. Elastic-net regularization and cross-validation were applied in the ordinary least-squares linear regression in order to avoid over-fitting due to the multicollinearity of regressors; the results were compared against both partial least-squares (PLS) regression and un-regularized full-model regression. A non-parametric permutation testing scheme was applied to evaluate the statistical significance of the regression. We found statistically significant correlation between the annotation model and 9 out of 40 ICs. The regression analysis was also repeated for a large set of cubic ROIs covering the grey matter. Both IC- and ROI-based regression analyses revealed activations in parietal and occipital regions, with additional smaller clusters in the frontal lobe. Furthermore, we found elastic-net based regression more sensitive than PLS and un-regularized regression, since it detected a larger number of significant ICs and ROIs. Along with the ISC ranking methods, our regression analysis proved to be a feasible method for ordering the ICs based on their functional relevance to the annotated cinematic features. The novelty of our method, in comparison to hypothesis-driven manual pre-selection of individual regressors biased by choice, is in applying a data-driven approach to all content features simultaneously. We found the combination of regularized regression and ICA especially useful when analyzing fMRI data obtained using a non-narrative movie stimulus with a large set of complex and correlated features. Copyright © 2015. Published by Elsevier Inc.
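A hedged sketch of the core step, fitting one component time series against a matrix of binary annotation regressors with cross-validated elastic-net regularization and a small permutation test, might look like this; the random arrays are placeholders for the study's fMRI and annotation data.

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(3)
n_vols, n_features = 600, 37                      # fMRI volumes x annotation regressors
X = rng.integers(0, 2, size=(n_vols, n_features)).astype(float)
y = X @ rng.normal(0, 0.2, n_features) + rng.normal(0, 1.0, n_vols)  # one IC time series

model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5).fit(X, y)
r2_obs = model.score(X, y)                        # in-sample fit of the annotation model

# Crude permutation test of the fit quality (only 100 permutations, for speed).
n_perm, count = 100, 0
for _ in range(n_perm):
    y_perm = rng.permutation(y)
    if ElasticNetCV(l1_ratio=[0.5], cv=5).fit(X, y_perm).score(X, y_perm) >= r2_obs:
        count += 1
p_value = (count + 1) / (n_perm + 1)
print(r2_obs, p_value)
```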
Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus
2010-01-01
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
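The key idea, detecting a threshold crossing in the recent past instead of predicting future crossings, can be illustrated with a simple time-driven leaky integrate-and-fire sketch in which the spike time within the last step is recovered by linear interpolation; the constants are arbitrary and not taken from the paper.

```python
import numpy as np

def simulate_lif(input_current, dt=1e-4, tau=0.02, r_m=1e7,
                 v_rest=-0.07, v_thresh=-0.05, v_reset=-0.07):
    """Time-driven LIF: advance on a fixed grid, then check whether the threshold
    was crossed during the step just taken and interpolate the spike time."""
    v = v_rest
    spikes = []
    for k, i_ext in enumerate(input_current):
        v_prev = v
        v = v + dt * (-(v - v_rest) + r_m * i_ext) / tau    # forward-Euler step
        if v_prev < v_thresh <= v:
            # Retrospective detection: linear interpolation within the step.
            frac = (v_thresh - v_prev) / (v - v_prev)
            spikes.append((k + frac) * dt)
            v = v_reset
    return spikes

t = np.arange(0, 0.5, 1e-4)
current = 2.5e-9 * (t > 0.1)                                # step current at 100 ms
print(simulate_lif(current)[:5])
```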
High Temperature Degradation Mechanisms in Polymer Matrix Composites
NASA Technical Reports Server (NTRS)
Cunningham, Ronan A.
1996-01-01
Polymer matrix composites are increasingly used in demanding structural applications in which they may be exposed to harsh environments. The durability of such materials is a major concern, potentially limiting both the integrity of the structures and their useful lifetimes. The goal of the current investigation is to develop a mechanism-based model of the chemical degradation which occurs, such that given the external chemical environment and temperatures throughout the laminate, laminate geometry, and ply and/or constituent material properties, we can calculate the concentration of diffusing substances and extent of chemical degradation as functions of time and position throughout the laminate. This objective is met through the development and use of analytical models, coupled to an analysis-driven experimental program which offers both quantitative and qualitative information on the degradation mechanism. Preliminary analyses using a coupled diffusion/reaction model are used to gain insight into the physics of the degradation mechanisms and to identify crucial material parameters. An experimental program is defined based on the results of the preliminary analysis which allows the determination of the necessary material coefficients. Thermogravimetric analyses are carried out in nitrogen, air, and oxygen to provide quantitative information on thermal and oxidative reactions. Powdered samples are used to eliminate diffusion effects. Tests in both inert and oxidative environments allow the separation of thermal and oxidative contributions to specimen mass loss. The concentration dependency of the oxidative reactions is determined from the tests in pure oxygen. Short term isothermal tests at different temperatures are carried out on neat resin and unidirectional macroscopic specimens to identify diffusion effects. Mass loss, specimen shrinkage, the formation of degraded surface layers and surface cracking are recorded as functions of exposure time. Geometry effects in the neat resin, and anisotropic diffusion effects in the composites, are identified through the use of specimens with different aspect ratios. The data are used with the model to determine reaction coefficients and effective diffusion coefficients. The empirical and analytical correlations confirm the preliminary model results which suggest that mass loss at lower temperatures is dominated by oxidative reactions and that these reactions are limited by diffusion of oxygen from the surface. The mechanism-based model is able to successfully capture the basic physics of the degradation phenomena under a wide range of test conditions. The analysis-based test design is successful in separating out oxidative, thermal, and diffusion effects to allow the determination of material coefficients. This success confirms the basic picture of the process; however, a more complete understanding of some aspects of the physics is required before truly predictive capability can be achieved.
Ocean Renewable Energy Research at U. New Hampshire
NASA Astrophysics Data System (ADS)
Wosnik, M.; Baldwin, K.; White, C.; Carter, M.; Gress, D.; Swift, R.; Tsukrov, I.; Kraft, G.; Celikkol, B.
2008-11-01
The University of New Hampshire (UNH) is strategically positioned to develop and evaluate wave and tidal energy extraction technologies, with much of the required test site infrastructure in place already. Laboratory facilities (wave/tow tanks, flumes, water tunnels) are used to test concept validation models (scale 1:25--100) and design models (scale 1:10--30). The UNH Open Ocean Aquaculture (OOA) site located 1.6 km south of the Isles of Shoals (10 km off shore) and the General Sullivan Bridge testing facility in the Great Bay Estuary are used to test process models (scale 1:3--15) and prototype/demonstration models (scale 1:1-- 4) of wave energy and tidal energy extraction devices, respectively. Both test sites are easily accessible and in close proximity of UNH, with off-the-shelf availability. The Great Bay Estuary system is one of the most energetic tidally driven estuaries on the East Coast of the U.S. The current at the General Sullivan bridge test facility reliably exceeds four knots over part of the tidal cycle. The OOA site is a ten year old, well established offshore test facility, and is continually serviced by a dedicated research vessel and operations/diving crew. In addition to an overview of the physical resources, results of recent field testing of half- and full-scale hydrokinetic turbines, and an analysis of recent acoustic Doppler surveys of the tidal estuary will be presented.
Note: Ultrasonic gas flowmeter based on optimized time-of-flight algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, X. F.; Tang, Z. A.
2011-04-15
A new digital signal processor based single-path ultrasonic gas flowmeter is designed, constructed, and experimentally tested. To achieve high-accuracy measurements, an optimized ultrasound drive method incorporating amplitude modulation and phase modulation of the transmit-receive technique is used to excite the transmitter. Based on regularities among the received envelope zero-crossings, different signal-to-noise ratio conditions of the received signal are discriminated, and the appropriate time-of-flight algorithm is applied to compute the flow rate. Experimental results from the dry calibration indicate that the designed flowmeter prototype can meet the zero-flow verification test requirements of the American Gas Association Report No. 9. Furthermore, the results derived from the flow calibration prove that the proposed flowmeter prototype can measure flow rate accurately in practical experiments, and the nominal accuracies after FWME adjustment are below 0.8% throughout the calibration range.
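The transit-time principle behind the flow-rate calculation can be sketched directly: with upstream and downstream times of flight along a path of length L inclined at an angle to the pipe axis, the path-average velocity follows from their difference. The geometry and timings below are illustrative, not the prototype's.

```python
import math

def transit_time_velocity(t_up, t_down, path_length, angle_deg):
    """Path-average gas velocity from upstream/downstream times of flight:
       v = L / (2*cos(theta)) * (t_up - t_down) / (t_up * t_down)
    """
    theta = math.radians(angle_deg)
    return path_length / (2.0 * math.cos(theta)) * (t_up - t_down) / (t_up * t_down)

# Illustrative single-path geometry: L = 0.15 m, 45 degrees, times in seconds.
v = transit_time_velocity(t_up=442.5e-6, t_down=438.1e-6, path_length=0.15, angle_deg=45.0)
print(v)   # axial velocity, m/s
```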
Study on Amortization Time and Rationality in Real Estate Investment
NASA Astrophysics Data System (ADS)
Li, Yancang; Zhou, Shujing; Suo, Juanjuan
Amortization timing and its rationality have been discussed extensively in real estate investment research. When the price of real estate is driven by Geometric Brownian Motion (GBM), whether mortgagors should amortize in advance becomes a key issue in amortization timing research. This paper presents a new method to solve the problem using optimal stopping time theory and option pricing models. We discuss the option value in the amortization decision based on this model. A simulation method is used to test the approach.
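The driving assumption, real estate prices following Geometric Brownian Motion, can be made concrete with a short Monte Carlo sketch; the drift, volatility and horizon are placeholders, and the early-amortization stopping rule itself, which the paper derives, is not decided by this snippet.

```python
import numpy as np

def simulate_gbm_paths(p0, mu, sigma, years, steps_per_year, n_paths, seed=0):
    """Geometric Brownian Motion: dP = mu*P dt + sigma*P dW (exact log-space discretization)."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / steps_per_year
    n_steps = int(years * steps_per_year)
    z = rng.standard_normal((n_paths, n_steps))
    log_increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return p0 * np.exp(np.cumsum(log_increments, axis=1))

# Illustrative parameters: normalized initial price, 5% drift, 15% volatility, 10 years.
paths = simulate_gbm_paths(p0=1.0, mu=0.05, sigma=0.15, years=10,
                           steps_per_year=12, n_paths=10_000)
print(paths[:, -1].mean())   # should be close to exp(0.05 * 10)
```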
Exploration and practice for engineering innovative talents training based on project-driven
NASA Astrophysics Data System (ADS)
Xu, Yishen; Lv, Qingsong; Ye, Yan; Wu, Maocheng; Gu, Jihua
2017-08-01
As one of the "excellent engineer education program" of the Ministry of Education and one of the characteristic majors of Jiangsu Province, the major of optoelectronic information science and engineering in Soochow University has a long history and distinctive features. In recent years, aiming to the talents training objective of "broad foundation, practiceoriented, to be creative", education and teaching reforms have been carried out, which emphasize basis of theoretical teaching, carrier of practical training, promotion of projects and discussion, and development of second class. By optimizing the teaching contents and course system of the theoretical courses, the engineering innovative talents training mode based on the project-driven has been implemented with playing a practical training carrier role and overall managing the second class teaching for cultivating students' innovative spirit and practical ability. Meanwhile, the evaluation mechanism of the students' comprehensive performance mainly based on "scores of theory test" is being gradually changed, and the activities such as scientific research, discipline competitions and social practices are playing an increasing important role in the students' comprehensive assessment. The produced achievements show that the proposed training model based on project-driven could stimulate the students' enthusiasm and initiative to participate in research activities and promote the training of students' ability of engineering practice and consciousness of innovation.
Kinetic Approaches to Shear-Driven Magnetic Reconnection for Multi-Scale Modeling of CME Initiation
NASA Astrophysics Data System (ADS)
Black, C.; Antiochos, S. K.; DeVore, C.; Germaschewski, K.; Karpen, J. T.
2013-12-01
In the standard model for coronal mass ejections (CME) and/or solar flares, the free energy for the event resides in the strongly sheared magnetic field of a filament channel. The pre-eruption force balance, consisting of an upward force due to the magnetic pressure of the sheared field balanced by a downward tension due to overlying un-sheared field, is widely believed to be disrupted by magnetic reconnection. Therefore, understanding initiation of solar explosive phenomena requires a true multi-scale model of reconnection onset driven by the buildup of magnetic shear. While the application of magnetic-field shear is a trivial matter in MHD simulations, it is a significant challenge in a PIC code. The driver must be implemented in a self-consistent manner and with boundary conditions that avoid the generation of waves that destroy the applied shear. In this work, we describe drivers for 2.5D, aperiodic, PIC systems and discuss the implementation of driver-consistent boundary conditions that allow a net electric current to flow through the walls. Preliminary tests of these boundaries with a MHD equilibrium are shown. This work was supported, in part, by the NASA Living With a Star TR&T Program.
NASA Technical Reports Server (NTRS)
Pena, Joaquin; Hinchey, Michael G.; Sterritt, Roy; Ruiz-Cortes, Antonio; Resinas, Manuel
2006-01-01
Autonomic Computing (AC), self-management based on high level guidance from humans, is increasingly gaining momentum as the way forward in designing reliable systems that hide complexity and conquer IT management costs. Effectively, AC may be viewed as Policy-Based Self-Management. The Model Driven Architecture (MDA) approach focuses on building models that can be transformed into code in an automatic manner. In this paper, we look at ways to implement Policy-Based Self-Management by means of models that can be converted to code using transformations that follow the MDA philosophy. We propose a set of UML-based models to specify autonomic and autonomous features along with the necessary procedures, based on modification and composition of models, to deploy a policy as an executing system.
Sutherland, Clare A M; Liu, Xizi; Zhang, Lingshan; Chu, Yingtung; Oldmeadow, Julian A; Young, Andrew W
2018-04-01
People form first impressions from facial appearance rapidly, and these impressions can have considerable social and economic consequences. Three dimensions can explain Western perceivers' impressions of Caucasian faces: approachability, youthful-attractiveness, and dominance. Impressions along these dimensions are theorized to be based on adaptive cues to threat detection or sexual selection, making it likely that they are universal. We tested whether the same dimensions of facial impressions emerge across culture by building data-driven models of first impressions of Asian and Caucasian faces derived from Chinese and British perceivers' unconstrained judgments. We then cross-validated the dimensions with computer-generated average images. We found strong evidence for common approachability and youthful-attractiveness dimensions across perceiver and face race, with some evidence of a third dimension akin to capability. The models explained ~75% of the variance in facial impressions. In general, the findings demonstrate substantial cross-cultural agreement in facial impressions, especially on the most salient dimensions.
Tracking Control and System Development for Laser-Driven Micro-Vehicles
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Hoshino, Kentaro; Hara, Shinji; Shiokata, Daisuke; Yabe, Takashi
The purpose of this paper is to design a control system for an integrated laser propulsion/tracking system to achieve continuous motion and control of laser-driven micro-vehicles. Laser propulsion is significant in achieving miniature and light micro-vehicles. A laser-driven micro-airplane has been studied using a paper airplane and YAG laser, resulting in successful gliding of the airplane. High-performance laser tracking control is required to achieve continuous flight. This paper presents a control design strategy based on the generalized Kalman-Yakubovic-Popov lemma to achieve this requirement. Experiments have been carried out to evaluate the performance of the integrated laser propulsion/tracking system.
Goel, Purva; Bapat, Sanket; Vyas, Renu; Tambe, Amruta; Tambe, Sanjeev S
2015-11-13
The development of quantitative structure-retention relationships (QSRR) aims at constructing an appropriate linear/nonlinear model for the prediction of the retention behavior (such as the Kovats retention index) of a solute on a chromatographic column. Commonly, multi-linear regression and artificial neural networks are used in QSRR development in gas chromatography (GC). In this study, an artificial intelligence based data-driven modeling formalism, namely genetic programming (GP), has been introduced for the development of quantitative structure based models predicting Kovats retention indices (KRI). The novelty of the GP formalism is that, given an example dataset, it searches and optimizes both the form (structure) and the parameters of an appropriate linear/nonlinear data-fitting model. Thus, it is not necessary to pre-specify the form of the data-fitting model in GP-based modeling. These models are also less complex, simple to understand, and easy to deploy. The effectiveness of GP in constructing QSRRs has been demonstrated by developing models predicting KRIs of light hydrocarbons (case study I) and adamantane derivatives (case study II). In each case study, two-, three- and four-descriptor models have been developed using the KRI data available in the literature. The results of these studies clearly indicate that the GP-based models possess an excellent KRI prediction accuracy and generalization capability. Specifically, the best performing four-descriptor models in both case studies have yielded high (>0.9) values of the coefficient of determination (R(2)) and low values of root mean squared error (RMSE) and mean absolute percent error (MAPE) for training, test and validation set data. The characteristic feature of this study is that it introduces a practical and effective GP-based method for developing QSRRs in gas chromatography that can be gainfully utilized for developing other types of data-driven models in chromatography science. Copyright © 2015 Elsevier B.V. All rights reserved.
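As a hedged illustration of the GP workflow (not the authors' implementation), the openly available gplearn package provides a scikit-learn-style symbolic regressor; assuming that interface, fitting a descriptor-to-KRI model and reporting the paper's metrics might look like this, with synthetic descriptors standing in for the real data.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor   # assumed third-party GP implementation
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
# Illustrative stand-in: 4 molecular descriptors per compound and its Kovats index.
X = rng.normal(size=(120, 4))
y = 800 + 150 * X[:, 0] + 60 * X[:, 1] * X[:, 2] + rng.normal(0, 10, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

gp = SymbolicRegressor(population_size=1000, generations=20,
                       function_set=('add', 'sub', 'mul', 'div'),
                       parsimony_coefficient=0.001, random_state=0)
gp.fit(X_tr, y_tr)
pred = gp.predict(X_te)

rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
mape = float(np.mean(np.abs((pred - y_te) / y_te)) * 100)
r2 = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(r2, rmse, mape)   # the evolved expression is held by the fitted estimator
```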
Amin, Waqas; Singh, Harpreet; Pople, Andre K.; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V.; Becich, Michael J.
2010-01-01
Context: Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Design: Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. Result: The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. Conclusion: These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems. PMID:20922029
Amin, Waqas; Singh, Harpreet; Pople, Andre K; Winters, Sharon; Dhir, Rajiv; Parwani, Anil V; Becich, Michael J
2010-08-10
Tissue banking informatics deals with standardized annotation, collection and storage of biospecimens that can further be shared by researchers. Over the last decade, the Department of Biomedical Informatics (DBMI) at the University of Pittsburgh has developed various tissue banking informatics tools to expedite translational medicine research. In this review, we describe the technical approach and capabilities of these models. Clinical annotation of biospecimens requires data retrieval from various clinical information systems and the de-identification of the data by an honest broker. Based upon these requirements, DBMI, with its collaborators, has developed both Oracle-based organ-specific data marts and a more generic, model-driven architecture for biorepositories. The organ-specific models are developed utilizing Oracle 9.2.0.1 server tools and software applications and the model-driven architecture is implemented in a J2EE framework. The organ-specific biorepositories implemented by DBMI include the Cooperative Prostate Cancer Tissue Resource (http://www.cpctr.info/), Pennsylvania Cancer Alliance Bioinformatics Consortium (http://pcabc.upmc.edu/main.cfm), EDRN Colorectal and Pancreatic Neoplasm Database (http://edrn.nci.nih.gov/) and Specialized Programs of Research Excellence (SPORE) Head and Neck Neoplasm Database (http://spores.nci.nih.gov/current/hn/index.htm). The model-based architecture is represented by the National Mesothelioma Virtual Bank (http://mesotissue.org/). These biorepositories provide thousands of well annotated biospecimens for the researchers that are searchable through query interfaces available via the Internet. These systems, developed and supported by our institute, serve to form a common platform for cancer research to accelerate progress in clinical and translational research. In addition, they provide a tangible infrastructure and resource for exposing research resources and biospecimen services in collaboration with the clinical anatomic pathology laboratory information system (APLIS) and the cancer registry information systems.
A Tool for Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis; Erickson, John
2005-01-01
Absent a general method for mathematically sound, automated transformation of customer requirements into a formal model of the desired system, developers must resort to either manual application of formal methods or to system testing (either manual or automated). While formal methods have afforded numerous successes, they present serious issues, e.g., costs to gear up to apply them (time, expensive staff), and scalability and reproducibility when standards in the field are not settled. The testing path cannot be walked to the ultimate goal, because exhaustive testing is infeasible for all but trivial systems. So system verification remains problematic. System or requirements validation is similarly problematic. The alternatives available today depend on either having a formal model or pursuing enough testing to enable the customer to be certain that system behavior meets requirements. The testing alternative for non-trivial systems always leaves some system behaviors unconfirmed and therefore is not the answer. To ensure that a formal model is equivalent to the customer's requirements necessitates that the customer somehow fully understands the formal model, which is not realistic. The predominant view that provably correct system development depends on having a formal model of the system leads to a desire for a mathematically sound method to automate the transformation of customer requirements into a formal model. Such a method, an augmentation of requirements-based programming, is briefly described in this paper, together with a prototype tool that supports it. The method and tool enable both requirements validation and system verification for the class of systems whose behavior can be described as scenarios. An application of the tool to a prototype automated ground control system for a NASA mission is presented.
Development of a lumbar EMG-based coactivation index for the assessment of complex dynamic tasks.
Le, Peter; Aurand, Alexander; Walter, Benjamin A; Best, Thomas M; Khan, Safdar N; Mendel, Ehud; Marras, William S
2018-03-01
The objective of this study was to develop and test an EMG-based coactivation index and compare it to a coactivation index defined by a biologically assisted lumbar spine model to differentiate between tasks. The purpose was to provide a universal approach to assess coactivation of a multi-muscle system when a computational model is not accessible. The EMG-based index utilised anthropometrically defined muscle characteristics driven by torso kinematics and EMG. Muscles were classified as agonists/antagonists based upon 'simulated' moments of the muscles relative to the total 'simulated' moment. Different tasks were used to test the range of the index, including lifting, pushing and Valsalva. Results showed that the EMG-based index was comparable to the index defined by a biologically assisted model (r² = 0.78). Overall, the EMG-based index provides a universal, usable method to assess the neuromuscular effort associated with coactivation for complex dynamic tasks when the benefit of a biomechanical model is not available. Practitioner Summary: A universal coactivation index for the lumbar spine was developed to assess complex dynamic tasks. This method was validated relative to a model-based index for use when a high-end computational model is not available. Its simplicity allows for fewer inputs and usability for assessment of task ergonomics and rehabilitation.
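A minimal sketch of the agonist/antagonist logic described above, assuming normalized EMG, per-muscle gains and moment arms as stand-ins for the anthropometrically defined muscle characteristics. The numbers are hypothetical, and the formula is one plausible reading of such an index rather than the study's exact definition.

```python
# Illustrative sketch of an EMG-based coactivation index. Muscle moments are
# "simulated" from normalized EMG, a gain and an anthropometric moment arm;
# muscles whose moment opposes the net moment are treated as antagonists.
# All numbers are hypothetical and not taken from the study.
import numpy as np

def coactivation_index(emg, gains, moment_arms):
    """emg: (n_samples, n_muscles) normalized activations in [0, 1]."""
    moments = emg * gains * moment_arms          # per-muscle "simulated" moments
    net = moments.sum(axis=1, keepdims=True)     # total "simulated" moment
    antagonist = np.where(np.sign(moments) != np.sign(net), np.abs(moments), 0.0)
    total = np.abs(moments).sum(axis=1)
    # fraction of total muscular effort spent opposing the net moment
    return antagonist.sum(axis=1) / np.maximum(total, 1e-9)

emg = np.random.default_rng(1).uniform(0, 1, size=(500, 6))
gains = np.array([40, 35, 30, 45, 38, 32.0])                      # N per unit activation (hypothetical)
moment_arms = np.array([0.05, 0.06, 0.04, -0.05, -0.06, -0.04])   # m, sign = flexor/extensor
print("mean coactivation index: %.2f" % coactivation_index(emg, gains, moment_arms).mean())
```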
Prototyping and testing of mechanical components for the GRAVITY spectrometers
NASA Astrophysics Data System (ADS)
Wiest, Michael; Fischer, Sebastian; Thiel, Markus; Haug, Marcus; Rohloff, Ralf-Rainer; Straubmeier, Christian; Araujo-Hauck, Constanza; Yazici, Senol; Eisenhauer, Frank; Perrin, Guy; Brandner, Wolfgang; Perraut, Karine; Amorim, Antonio; Schöller, Markus; Eckart, Andreas
2010-07-01
GRAVITY is a 2nd generation VLTI instrument which operates on 6 interferometric baselines by using all 4 UTs. It will offer narrow-angle astrometry in the infrared K-band with an accuracy of 10 μas. The University of Cologne is part of the international GRAVITY consortium and responsible for the design and manufacturing of the two spectrometers. One is optimized for observing the science object, providing three different spectral resolutions and optional polarimetry; the other is optimized for fast fringe tracking at a spectral resolution of R=22 with optional polarimetry. In order to achieve the necessary image quality, the current mechanical design foresees 5 motorized functions, 2 linear motions and 3 filter wheels. Additionally, the latest optical design proposal includes 20 degrees of freedom for manual adjustments distributed over the different optical elements. Both spectrometers require precise linear and rotational movements on micrometer or arcsecond scales. These movements will be realized using custom linear stages based on compliant joints. These stages will be driven by actuators based on a Phytron/Harmonic Drive combination. For dimensioning, and in order to qualify the reliability of these mechanisms, it is necessary to evaluate the mechanisms on the basis of several prototypes. Due to the cryogenic environment, the wheel mechanisms will be driven by Phytron stepper motors, too. A ratchet mechanism, which is currently at the beginning of its design phase, will deliver the required precision to the filter wheels. This contribution gives a first impression of how the next mechanical prototypes will look. In addition, the advantages of purchasing and integrating a distance sensor and a resolver are reported. Both are expected to work under cryogenic conditions and should achieve high resolution for measuring movements inside the test cryostat.
NASA Technical Reports Server (NTRS)
Pohner, John A.; Dempsey, Brian P.; Herold, Leroy M.
1990-01-01
Space Station elements and advanced military spacecraft will require rejection of tens of kilowatts of waste heat. Large space radiators and two-phase heat transport loops will be required. To minimize radiator size and weight, it is critical to minimize the temperature drop between the heat source and sink. Under an Air Force contract, a unique, high-performance heat exchanger is developed for coupling the radiator to the transport loop. Since fluid flow through the heat exchanger is driven by capillary forces which are easily dominated by gravity forces in ground testing, it is necessary to perform microgravity thermal testing to verify the design. This contract consists of an experiment definition phase leading to a preliminary design and cost estimate for a shuttle-based flight experiment of this heat exchanger design. This program will utilize modified hardware from a ground test program for the heat exchanger.
Data Driven Model Development for the Supersonic Semispan Transport (S(sup 4)T)
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.
2011-01-01
We investigate two common approaches to model development for robust control synthesis in the aerospace community: reduced-order aeroservoelastic modelling based on structural finite-element and computational-fluid-dynamics-based aerodynamic models, and a data-driven system identification procedure. It is shown, via analysis of experimental SuperSonic SemiSpan Transport (S4T) wind-tunnel data using a system identification approach, that it is possible to estimate a model at a fixed Mach number which is parsimonious and robust across varying dynamic pressures.
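The data-driven identification route can be illustrated with a least-squares ARX fit to input-output records. This is only a generic sketch of parsimonious model estimation, not the estimator or the S4T data used in the report.

```python
# A minimal sketch of data-driven system identification: fitting a low-order
# ARX model by least squares to input-output records. This only illustrates
# the general procedure; it is not the estimator used for the S4T data.
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Estimate y[k] = sum a_i*y[k-i] + sum b_j*u[k-j] by least squares."""
    n = max(na, nb)
    rows = [np.concatenate((y[k-na:k][::-1], u[k-nb:k][::-1]))
            for k in range(n, len(y))]
    Phi, Y = np.asarray(rows), y[n:]
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return theta[:na], theta[na:]

# simulated single-input, single-output record standing in for wind-tunnel data
rng = np.random.default_rng(0)
u = rng.normal(size=2000)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.2 * u[k-2]
y += 0.01 * rng.normal(size=len(u))

a, b = fit_arx(u, y)
print("a =", np.round(a, 3), " b =", np.round(b, 3))   # expect roughly [1.5, -0.7], [0.5, 0.2]
```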
2016-07-27
... and reliability, is a common requirement for aircraft, rockets, and hypersonic vehicles. The Aerospace Fuels Quality Test and Model Development (AFQTMoDev) project ... was initiated to mature fuel quality assurance practices for rocket grade kerosene, thereby ensuring operational readiness of conventional and ...
The Nuremberg Code subverts human health and safety by requiring animal modeling
2012-01-01
Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234
The Nuremberg Code subverts human health and safety by requiring animal modeling.
Greek, Ray; Pippus, Annalea; Hansen, Lawrence A
2012-07-08
The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented.
Interactive Schematic Integration Within the Propellant System Modeling Environment
NASA Technical Reports Server (NTRS)
Coote, David; Ryan, Harry; Burton, Kenneth; McKinney, Lee; Woodman, Don
2012-01-01
Task requirements for rocket propulsion test preparations of the test stand facilities drive the need to model the test facility propellant systems prior to constructing physical modifications. The Propellant System Modeling Environment (PSME) is an initiative designed to enable increased efficiency and expanded capabilities to a broader base of NASA engineers in the use of modeling and simulation (M&S) technologies for rocket propulsion test and launch mission requirements. PSME will enable a wider scope of users to utilize M&S of propulsion test and launch facilities for predictive and post-analysis functionality by offering a clean, easy-to-use, high-performance application environment.
An Integrated Data-Driven Strategy for Safe-by-Design Nanoparticles: The FP7 MODERN Project.
Brehm, Martin; Kafka, Alexander; Bamler, Markus; Kühne, Ralph; Schüürmann, Gerrit; Sikk, Lauri; Burk, Jaanus; Burk, Peeter; Tamm, Tarmo; Tämm, Kaido; Pokhrel, Suman; Mädler, Lutz; Kahru, Anne; Aruoja, Villem; Sihtmäe, Mariliis; Scott-Fordsmand, Janeck; Sorensen, Peter B; Escorihuela, Laura; Roca, Carlos P; Fernández, Alberto; Giralt, Francesc; Rallo, Robert
2017-01-01
The development and implementation of safe-by-design strategies is key for the safe development of future generations of nanotechnology-enabled products. The safety testing of the huge variety of nanomaterials that can be synthesized is infeasible due to time and cost constraints. Computational modeling facilitates the implementation of alternative testing strategies in a time- and cost-effective way. The development of predictive nanotoxicology models requires the use of high-quality experimental data on the structure, physicochemical properties and bioactivity of nanomaterials. The FP7 Project MODERN has developed and evaluated the main components of a computational framework for the evaluation of the environmental and health impacts of nanoparticles. This chapter describes each of the elements of the framework, including aspects related to data generation, management and integration; development of nanodescriptors; establishment of nanostructure-activity relationships; identification of nanoparticle categories; hazard ranking and risk assessment.
Energy-driven scheduling algorithm for nanosatellite energy harvesting maximization
NASA Astrophysics Data System (ADS)
Slongo, L. K.; Martínez, S. V.; Eiterer, B. V. B.; Pereira, T. G.; Bezerra, E. A.; Paiva, K. V.
2018-06-01
The number of tasks that a satellite may execute in orbit is strongly related to the amount of energy its Electrical Power System (EPS) is able to harvest and to store. The manner in which the stored energy is distributed within the satellite also has a great impact on the CubeSat's overall efficiency. Most CubeSat EPS designs do not prioritize energy constraints in their formulation. In contrast, this work proposes an innovative energy-driven scheduling algorithm based on an energy harvesting maximization policy. The energy harvesting circuit is mathematically modeled and the solar panel I-V curves are presented for different temperature and irradiance levels. Considering the models and simulations, the scheduling algorithm is designed to keep the solar panels working close to their maximum power point by triggering tasks in an appropriate manner. Task execution affects the battery voltage, which is coupled to the solar panels through a protection circuit. A software-based Perturb and Observe strategy is used to define the tasks to be triggered. The scheduling algorithm is tested in FloripaSat, which is a 1U CubeSat. A test apparatus is proposed to emulate solar irradiance variation, considering the satellite movement around the Earth. Tests have been conducted to show that the scheduling algorithm improves the CubeSat energy harvesting capability by 4.48% in a three-orbit experiment and up to 8.46% in a single orbit cycle in comparison with the CubeSat operating without the scheduling algorithm.
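A minimal sketch of the Perturb and Observe idea referenced above: nudge the panel operating point (here a voltage, standing in for the load changes caused by triggering tasks) and keep the perturbation direction whenever output power rises. The I-V curve and step size are placeholders, not the FloripaSat panel model.

```python
# A minimal Perturb-and-Observe sketch: the controller nudges the panel
# operating voltage and keeps the perturbation direction whenever output
# power increases, so the operating point settles near the maximum power
# point. The I-V model below is a simple placeholder.
def panel_current(v, i_sc=0.5, v_oc=5.0):
    return max(i_sc * (1.0 - (v / v_oc) ** 8), 0.0)   # crude I-V curve

def perturb_and_observe(v0=2.0, step=0.05, iters=60):
    v, p_prev, direction = v0, 0.0, +1
    for _ in range(iters):
        p = v * panel_current(v)
        if p < p_prev:            # power dropped: reverse perturbation direction
            direction = -direction
        p_prev = p
        v += direction * step     # "perturb": shift the operating point
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print("operating point settles near V = %.2f V, P = %.2f W" % (v_mpp, p_mpp))
```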
A Study on a Microwave-Driven Smart Material Actuator
NASA Technical Reports Server (NTRS)
Choi, Sang H.; Chu, Sang-Hyon; Kwak, M.; Cutler, A. D.
2001-01-01
NASA's Next Generation Space Telescope (NGST) has a large deployable, fragmented optical surface (greater than or = 2 8 m in diameter) that requires autonomous correction of deployment misalignments and thermal effects. Its stringent resolution requirement imposes a great challenge for optical correction. The threshold value for optical correction is dictated by lambda/20 (30 nm for NGST optics). Control of an adaptive optics array consisting of a large number of optical elements and smart material actuators is so complex that power distribution for activation and control of actuators must be done by other than hard-wired circuitry. The concept of microwave-driven smart actuators is envisioned as the best option to alleviate the complexity associated with hard-wiring. A microwave-driven actuator was studied to realize such a concept for future applications. Piezoelectric material was used as the actuator, which exhibits dimensional change under a high electric field. The actuators were coupled with microwave rectennas and tested to characterize the coupling of the electromagnetic wave. In experiments, a 3x3 rectenna patch array generated more than 50 volts, which is the threshold voltage for a 30-nm displacement of a single piezoelectric material. Overall, the test results indicate that the microwave-driven actuator concept can be adopted for NGST applications.
Data-Driven Residential Load Modeling and Validation in GridLAB-D
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gotseff, Peter; Lundstrom, Blake
Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snapshots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls and, hence, impacts on the distribution system over a given time period. Unfortunately, the high-time-resolution DER source and load data required for model inputs are often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a statistically similar load to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition, as well as realistic occupancy schedules. House model calibration and validation were performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.
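The four evaluation metrics listed above could be computed along the following lines. The series here are synthetic placeholders for GridLAB-D output and measured transformer load, and the metric definitions are one plausible reading rather than the exact ones used in the work.

```python
# Hedged sketch of the four validation metrics named above, comparing a
# modeled and a measured 1-second load series. Series here are synthetic;
# in practice they would come from GridLAB-D output and the instrumented
# transformer, respectively.
import numpy as np
from scipy.signal import welch

def compare_loads(measured, modeled, dt=1.0):
    day = int(86400 / dt)
    daily_energy = lambda p: p.reshape(-1, day).sum(axis=1) * dt / 3.6e6  # kWh per day
    f_m, psd_m = welch(measured, fs=1.0 / dt, nperseg=4096)
    f_s, psd_s = welch(modeled, fs=1.0 / dt, nperseg=4096)
    return {
        "daily energy (kWh), measured vs modeled":
            (daily_energy(measured).mean(), daily_energy(modeled).mean()),
        "power mean/std (W), measured": (measured.mean(), measured.std()),
        "power mean/std (W), modeled": (modeled.mean(), modeled.std()),
        "PSD correlation": np.corrcoef(np.log(psd_m + 1e-12), np.log(psd_s + 1e-12))[0, 1],
        "load-shape correlation": np.corrcoef(measured[:day], modeled[:day])[0, 1],
    }

rng = np.random.default_rng(2)
t = np.arange(2 * 86400)
measured = 2000 + 800 * np.sin(2 * np.pi * t / 86400) + 150 * rng.normal(size=t.size)
modeled = 2000 + 780 * np.sin(2 * np.pi * t / 86400) + 160 * rng.normal(size=t.size)
for name, value in compare_loads(measured, modeled).items():
    print(name, ":", value)
```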
A methodology for adaptable and robust ecosystem services assessment.
Villa, Ferdinando; Bagstad, Kenneth J; Voigt, Brian; Johnson, Gary W; Portela, Rosimeiry; Honzák, Miroslav; Batker, David
2014-01-01
Ecosystem Services (ES) are an established conceptual framework for attributing value to the benefits that nature provides to humans. As the promise of robust ES-driven management is put to the test, shortcomings in our ability to accurately measure, map, and value ES have surfaced. On the research side, mainstream methods for ES assessment still fall short of addressing the complex, multi-scale biophysical and socioeconomic dynamics inherent in ES provision, flow, and use. On the practitioner side, application of methods remains onerous due to data and model parameterization requirements. Further, it is increasingly clear that the dominant "one model fits all" paradigm is often ill-suited to address the diversity of real-world management situations that exist across the broad spectrum of coupled human-natural systems. This article introduces an integrated ES modeling methodology, named ARIES (ARtificial Intelligence for Ecosystem Services), which aims to introduce improvements on these fronts. To improve conceptual detail and representation of ES dynamics, it adopts a uniform conceptualization of ES that gives equal emphasis to their production, flow and use by society, while keeping model complexity low enough to enable rapid and inexpensive assessment in many contexts and for multiple services. To improve fit to diverse application contexts, the methodology is assisted by model integration technologies that allow assembly of customized models from a growing model base. By using computer learning and reasoning, model structure may be specialized for each application context without requiring costly expertise. In this article we discuss the founding principles of ARIES--both its innovative aspects for ES science and as an example of a new strategy to support more accurate decision making in diverse application contexts.
A methodology for adaptable and robust ecosystem services assessment
Villa, Ferdinando; Bagstad, Kenneth J.; Voigt, Brian; Johnson, Gary W.; Portela, Rosimeiry; Honzák, Miroslav; Batker, David
2014-01-01
Ecosystem Services (ES) are an established conceptual framework for attributing value to the benefits that nature provides to humans. As the promise of robust ES-driven management is put to the test, shortcomings in our ability to accurately measure, map, and value ES have surfaced. On the research side, mainstream methods for ES assessment still fall short of addressing the complex, multi-scale biophysical and socioeconomic dynamics inherent in ES provision, flow, and use. On the practitioner side, application of methods remains onerous due to data and model parameterization requirements. Further, it is increasingly clear that the dominant “one model fits all” paradigm is often ill-suited to address the diversity of real-world management situations that exist across the broad spectrum of coupled human-natural systems. This article introduces an integrated ES modeling methodology, named ARIES (ARtificial Intelligence for Ecosystem Services), which aims to introduce improvements on these fronts. To improve conceptual detail and representation of ES dynamics, it adopts a uniform conceptualization of ES that gives equal emphasis to their production, flow and use by society, while keeping model complexity low enough to enable rapid and inexpensive assessment in many contexts and for multiple services. To improve fit to diverse application contexts, the methodology is assisted by model integration technologies that allow assembly of customized models from a growing model base. By using computer learning and reasoning, model structure may be specialized for each application context without requiring costly expertise. In this article we discuss the founding principles of ARIES - both its innovative aspects for ES science and as an example of a new strategy to support more accurate decision making in diverse application contexts.
A Methodology for Adaptable and Robust Ecosystem Services Assessment
Villa, Ferdinando; Bagstad, Kenneth J.; Voigt, Brian; Johnson, Gary W.; Portela, Rosimeiry; Honzák, Miroslav; Batker, David
2014-01-01
Ecosystem Services (ES) are an established conceptual framework for attributing value to the benefits that nature provides to humans. As the promise of robust ES-driven management is put to the test, shortcomings in our ability to accurately measure, map, and value ES have surfaced. On the research side, mainstream methods for ES assessment still fall short of addressing the complex, multi-scale biophysical and socioeconomic dynamics inherent in ES provision, flow, and use. On the practitioner side, application of methods remains onerous due to data and model parameterization requirements. Further, it is increasingly clear that the dominant “one model fits all” paradigm is often ill-suited to address the diversity of real-world management situations that exist across the broad spectrum of coupled human-natural systems. This article introduces an integrated ES modeling methodology, named ARIES (ARtificial Intelligence for Ecosystem Services), which aims to introduce improvements on these fronts. To improve conceptual detail and representation of ES dynamics, it adopts a uniform conceptualization of ES that gives equal emphasis to their production, flow and use by society, while keeping model complexity low enough to enable rapid and inexpensive assessment in many contexts and for multiple services. To improve fit to diverse application contexts, the methodology is assisted by model integration technologies that allow assembly of customized models from a growing model base. By using computer learning and reasoning, model structure may be specialized for each application context without requiring costly expertise. In this article we discuss the founding principles of ARIES - both its innovative aspects for ES science and as an example of a new strategy to support more accurate decision making in diverse application contexts. PMID:24625496
NASA Astrophysics Data System (ADS)
McPhee, J.; William, Y. W.
2005-12-01
This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.
Clinical data interoperability based on archetype transformation.
Costa, Catalina Martínez; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
2011-10-01
The semantic interoperability between health information systems is a major challenge to improve the quality of clinical practice and patient safety. In recent years many projects have faced this problem and provided solutions based on specific standards and technologies in order to satisfy the needs of a particular scenario. Most of such solutions cannot be easily adapted to new scenarios, thus more global solutions are needed. In this work, we have focused on the semantic interoperability of electronic healthcare records standards based on the dual model architecture and we have developed a solution that has been applied to ISO 13606 and openEHR. The technological infrastructure combines reference models, archetypes and ontologies, with the support of Model-driven Engineering techniques. For this purpose, the interoperability infrastructure developed in previous work by our group has been reused and extended to cover the requirements of data transformation. Copyright © 2011 Elsevier Inc. All rights reserved.
Janse, A; Worm-Smeitink, M; Bleijenberg, G; Donders, R; Knoop, H
2018-02-01
Face-to-face cognitive-behavioural therapy (CBT) leads to a reduction of fatigue in chronic fatigue syndrome (CFS). Aims: To test the efficacy of internet-based CBT (iCBT) for adults with CFS. A total of 240 patients with CFS were randomised to iCBT with protocol-driven therapist feedback, iCBT with therapist feedback on demand, or a waiting list. Primary outcome was fatigue severity assessed with the Checklist Individual Strength (Netherlands Trial Register: NTR4013). Compared with a waiting list, intention-to-treat (ITT) analysis showed a significant reduction of fatigue for both iCBT conditions (protocol-driven feedback: B = -8.3, 97.5% CI -12.7 to -3.9, P < 0.0001; feedback on demand: B = -7.2, 97.5% CI -11.3 to -3.1, P < 0.0001). No significant differences were found between the two iCBT conditions on any of the outcome measures (P = 0.3-0.9). An exploratory analysis revealed that feedback-on-demand iCBT required less therapist time (mean 4 h 37 min) than iCBT with protocol-driven feedback (mean 6 h 9 min, P < 0.001), and also less than face-to-face CBT as reported in the literature. Both iCBT conditions are efficacious and time efficient. Declaration of interest: None.
Thakore, Bhoomi K; Naffziger-Hirsch, Michelle E; Richardson, Jennifer L; Williams, Simon N; McGee, Richard
2014-08-02
Approaches to training biomedical scientists have created a talented research community. However, they have failed to create a professional workforce that includes many racial and ethnic minorities and women in proportion to their representation in the population or in PhD training. This is particularly true at the faculty level. Explanations for the absence of diversity in faculty ranks can be found in social science theories that reveal processes by which individuals develop identities, experiences, and skills required to be seen as legitimate within the profession. Using the social science theories of Communities of Practice, Social Cognitive Career Theory, identity formation, and cultural capital, we have developed and are testing a novel coaching-based model to address some of the limitations of previous diversity approaches. This coaching intervention (The Academy for Future Science Faculty) includes annual in-person meetings of students and trained faculty Career Coaches, along with ongoing virtual coaching, group meetings and communication. The model is being tested as a randomized controlled trial with two cohorts of biomedical PhD students from across the U.S., one recruited at the start of their PhDs and one nearing completion. Stratification into the experimental and control groups, and to coaching groups within the experimental arms, achieved equal numbers of students by race, ethnicity and gender to the extent possible. A fundamental design element of the Academy is to teach and make visible the social science principles which highly influence scientific advancement, as well as acknowledging the extra challenges faced by underrepresented groups working to be seen as legitimate within the scientific communities. The strategy being tested is based upon a novel application of the well-established principles of deploying highly skilled coaches, selected and trained for their ability to develop talents of others. This coaching model is intended to be a complement, rather than a substitute, for traditional mentoring in biomedical research training, and is being tested as such.
NASA Astrophysics Data System (ADS)
Cescatti, A.; Duveiller, G.; Hooker, J.
2017-12-01
Changing vegetation cover not only affects the atmospheric concentration of greenhouse gases but also alters the radiative and non-radiative properties of the surface. The result of competing biophysical processes on Earth's surface energy balance varies spatially and seasonally, and can lead to warming or cooling depending on the specific vegetation change and on the background climate. To date, these effects are not accounted for in land-based climate policies because of the complexity of the phenomena, contrasting model predictions and the lack of global data-driven assessments. To overcome the limitations of available observation-based diagnostics and of the ongoing model inter-comparison, here we present a new benchmarking dataset derived from satellite remote sensing. This global dataset provides the potential changes induced by multiple vegetation transitions on the single terms of the surface energy balance. We used this dataset for two major goals: 1) to quantify the impact of actual vegetation changes that occurred during the decade 2000-2010, showing the overwhelming role of tropical deforestation in warming the surface by reducing evapotranspiration despite the concurrent brightening of the Earth; and 2) to benchmark a series of ESMs against data-driven metrics of the land cover change impacts on the various terms of the surface energy budget and on the surface temperature. We anticipate that the dataset could also be used to evaluate future scenarios of land cover change and to develop the monitoring, reporting and verification guidelines required for the implementation of mitigation plans that account for biophysical land processes.
Modeling an Ice-rich Lobate Debris Apron in Deuteronilus Mensae
NASA Astrophysics Data System (ADS)
Fastook, J. L.; Head, J. W.; Madeleine, J.-B.; Forget, F.; Marchant, D.
2010-03-01
Models help interpret observed glacial deposits and test formation scenarios. We examine a lobate debris apron recently proven to contain pure water ice. Two hypotheses are tested: alcove-only and collapse from a larger ice sheet driven by a GCM.
An ensemble boosting model for predicting transfer to the pediatric intensive care unit.
Rubin, Jonathan; Potes, Cristhian; Xu-Wilson, Minnan; Dong, Junzi; Rahman, Asif; Nguyen, Hiep; Moromisato, David
2018-04-01
Early deterioration indicators have the potential to alert hospital care staff in advance of adverse events, such as patients requiring an increased level of care, or the need for rapid response teams to be called. Our work focuses on the problem of predicting the transfer of pediatric patients from the general ward of a hospital to the pediatric intensive care unit (PICU). Our aim was the development of a data-driven pediatric early deterioration indicator for use by clinicians, with the purpose of predicting encounters where transfer from the general ward to the PICU is likely. Using data collected over 5.5 years from the electronic health records of two medical facilities, we develop machine learning classifiers based on adaptive boosting and gradient tree boosting. We further combine these learned classifiers into an ensemble model and compare its performance to a modified pediatric early warning score (PEWS) baseline that relies on expert-defined guidelines. To gauge model generalizability, we perform an inter-facility evaluation where we train our algorithm on data from one facility and perform evaluation on a hidden test dataset from a separate facility. We show that improvements are witnessed over the modified PEWS baseline in accuracy (0.77 vs. 0.69), sensitivity (0.80 vs. 0.68), specificity (0.74 vs. 0.70) and AUROC (0.85 vs. 0.73). Data-driven, machine learning algorithms can improve PICU transfer prediction accuracy compared to expertly defined systems, such as a modified PEWS, but care must be taken in the training of such approaches to avoid inadvertently introducing bias into the outcomes of these systems. Copyright © 2018 Elsevier B.V. All rights reserved.
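The ensemble idea (averaging adaptive boosting and gradient tree boosting probabilities and scoring by AUROC) might look like the following scikit-learn sketch. The features and labels are synthetic, not the EHR data of the study.

```python
# Sketch of the ensemble idea: average the predicted probabilities of an
# AdaBoost and a gradient-boosting classifier and score with AUROC.
# Features and labels are synthetic; the real study used EHR-derived data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

ada = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
gbt = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

p_ens = 0.5 * (ada.predict_proba(X_te)[:, 1] + gbt.predict_proba(X_te)[:, 1])
for name, p in [("AdaBoost", ada.predict_proba(X_te)[:, 1]),
                ("GradBoost", gbt.predict_proba(X_te)[:, 1]),
                ("Ensemble", p_ens)]:
    print("%-9s AUROC = %.3f" % (name, roc_auc_score(y_te, p)))
```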
Assessing Argumentative Representation with Bayesian Network Models in Debatable Social Issues
ERIC Educational Resources Information Center
Zhang, Zhidong; Lu, Jingyan
2014-01-01
This study seeks to obtain argumentation models, which represent argumentative processes and an assessment structure in secondary school debatable issues in the social sciences. The argumentation model was developed based on mixed methods, a combination of both theory-driven and data-driven methods. The coding system provided a combing point by…
Modelling climate change and malaria transmission.
Parham, Paul E; Michael, Edwin
2010-01-01
The impact of climate change on human health has received increasing attention in recent years, with potential impacts due to vector-borne diseases only now beginning to be understood. As the most severe vector-borne disease, with one million deaths globally in 2006, malaria is thought most likely to be affected by changes in climate variables due to the sensitivity of its transmission dynamics to environmental conditions. While considerable research has been carried out using statistical models to better assess the relationship between changes in environmental variables and malaria incidence, less progress has been made on developing process-based climate-driven mathematical models with greater explanatory power. Here, we develop a simple model of malaria transmission linked to climate which permits useful insights into the sensitivity of disease transmission to changes in rainfall and temperature variables. Both the impact of changes in the mean values of these key external variables and, importantly, temporal variation in these values are explored. We show that the development and analysis of such dynamic climate-driven transmission models will be crucial to understanding the rate at which P. falciparum and P. vivax may either infect, expand into or go extinct in populations as local environmental conditions change. Malaria becomes endemic in a population when the basic reproduction number R0 is greater than unity, and we identify an optimum climate-driven transmission window for the disease, thus providing a useful indicator for determining how transmission risk may change as climate changes. Overall, our results indicate that considerable work is required to better understand ways in which global malaria incidence and distribution may alter with climate change. In particular, we show that the roles of seasonality, stochasticity and variability in environmental variables, as well as ultimately anthropogenic effects, require further study. The work presented here offers a theoretical framework upon which this future research may be developed.
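To make the notion of a climate-driven transmission window concrete, the sketch below evaluates a Ross-Macdonald-style R0 over a temperature range using simplified parameter fits. These functional forms and constants are illustrative placeholders, not the model developed in the paper.

```python
# Illustrative Ross-Macdonald-style R0 as a function of temperature, to show
# how a "transmission window" emerges. The parameter functions below are
# simplified placeholders, not those fitted in the paper.
import numpy as np

def r0(T, m=20.0, b=0.5, c=0.5, r=0.05):
    a  = 0.000203 * T * (T - 11.7) * np.sqrt(np.maximum(42.3 - T, 0))      # biting rate
    mu = -np.log(np.maximum(0.522 - 0.000828 * T**2 + 0.0367 * T, 1e-6))   # mosquito mortality
    n  = 111.0 / np.maximum(T - 16.0, 1e-6)        # extrinsic incubation period (days)
    return (m * a**2 * b * c * np.exp(-mu * n)) / (r * mu)

temps = np.arange(16.0, 40.0, 1.0)
vals = r0(temps)
window = temps[vals > 1.0]
print("R0 > 1 between about %.0f and %.0f deg C; peak near %.0f deg C"
      % (window.min(), window.max(), temps[vals.argmax()]))
```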
NASA Astrophysics Data System (ADS)
Hashim, Z.; Fukusaki, E.
2016-06-01
The increased demand for clean, sustainable and renewable energy resources has driven the development of various microbial systems to produce biofuels. One such system is the ethanol-producing yeast. Although yeast produces ethanol naturally using its native pathways, the production yield is low and requires improvement for commercial biofuel production. Moreover, ethanol is toxic to yeast, and thus ethanol tolerance should be improved to further enhance ethanol production. In this study, we employed a metabolomics-based strategy using 30 single-gene-deleted yeast strains to construct multivariate models for ethanol tolerance and to screen metabolites that relate to ethanol sensitivity/tolerance. The information obtained from this study can be used as an input for strain improvement via metabolic engineering.
Gaziano, Thomas A; Young, Cynthia R; Fitzmaurice, Garrett; Atwood, Sidney; Gaziano, J Michael
2008-01-01
Background: Around 80% of all cardiovascular deaths occur in developing countries. Assessment of those patients at high risk is an important strategy for prevention. Since developing countries have limited resources for prevention strategies that require laboratory testing, we assessed if a risk prediction method that did not require any laboratory tests could be as accurate as one requiring laboratory information. Methods: The National Health and Nutrition Examination Survey (NHANES) was a prospective cohort study of 14 407 US participants aged 25–74 years at the time they were first examined (between 1971 and 1975). Our follow-up study population included participants with complete information on these surveys who did not report a history of cardiovascular disease (myocardial infarction, heart failure, stroke, angina) or cancer, yielding an analysis dataset of N=6186. We compared how well either method could predict first-time fatal and non-fatal cardiovascular disease events in this cohort. For the laboratory-based model, which required blood testing, we used standard risk factors to assess risk of cardiovascular disease: age, systolic blood pressure, smoking status, total cholesterol, reported diabetes status, and current treatment for hypertension. For the non-laboratory-based model, we substituted body-mass index for cholesterol. Findings: In the cohort of 6186, there were 1529 first-time cardiovascular events and 578 (38%) deaths due to cardiovascular disease over 21 years. In women, the laboratory-based model was useful for predicting events, with a c statistic of 0·829. The c statistic of the non-laboratory-based model was 0·831. In men, the results were similar (0·784 for the laboratory-based model and 0·783 for the non-laboratory-based model). Results were similar between the laboratory-based and non-laboratory-based models in both men and women when restricted to fatal events only. Interpretation: A method that uses non-laboratory-based risk factors predicted cardiovascular events as accurately as one that relied on laboratory-based values. This approach could simplify risk assessment in situations where laboratory testing is inconvenient or unavailable. PMID:18342687
Bal-Price, Anna; Crofton, Kevin M; Leist, Marcel; Allen, Sandra; Arand, Michael; Buetler, Timo; Delrue, Nathalie; FitzGerald, Rex E; Hartung, Thomas; Heinonen, Tuula; Hogberg, Helena; Bennekou, Susanne Hougaard; Lichtensteiger, Walter; Oggier, Daniela; Paparella, Martin; Axelstad, Marta; Piersma, Aldert; Rached, Eva; Schilter, Benoît; Schmuck, Gabriele; Stoppini, Luc; Tongiorgi, Enrico; Tiramani, Manuela; Monnet-Tschudi, Florianne; Wilks, Martin F; Ylikomi, Timo; Fritsche, Ellen
2015-02-01
A major problem in developmental neurotoxicity (DNT) risk assessment is the lack of toxicological hazard information for most compounds. Therefore, new approaches are being considered to provide adequate experimental data that allow regulatory decisions. This process requires a matching of regulatory needs on the one hand and the opportunities provided by new test systems and methods on the other hand. Alignment of academically and industrially driven assay development with regulatory needs in the field of DNT is a core mission of the International STakeholder NETwork (ISTNET) in DNT testing. The first meeting of ISTNET was held in Zurich on 23-24 January 2014 in order to explore the concept of adverse outcome pathway (AOP) to practical DNT testing. AOPs were considered promising tools to promote test systems development according to regulatory needs. Moreover, the AOP concept was identified as an important guiding principle to assemble predictive integrated testing strategies (ITSs) for DNT. The recommendations on a road map towards AOP-based DNT testing is considered a stepwise approach, operating initially with incomplete AOPs for compound grouping, and focussing on key events of neurodevelopment. Next steps to be considered in follow-up activities are the use of case studies to further apply the AOP concept in regulatory DNT testing, making use of AOP intersections (common key events) for economic development of screening assays, and addressing the transition from qualitative descriptions to quantitative network modelling.
NASA Technical Reports Server (NTRS)
Cross, E. J., Jr.
1976-01-01
A procedure is developed for deriving the level flight drag and propulsive efficiency of propeller-driven aircraft. This is a method in which the overall drag of the aircraft is expressed in terms of the measured increment of power required to overcome a corresponding known increment of drag. The aircraft is flown in unaccelerated, straight and level flight, and thus includes the effects of the propeller drag and slipstream. Propeller efficiency and airplane drag are computed on the basis of data obtained during flight test and do not rely on the analytical calculations of inadequate theory.
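A compact, hedged statement of the incremental-drag relation implied above, assuming the propulsive efficiency is essentially unchanged by the small known drag increment ΔD added at constant speed V:

```latex
% Level, unaccelerated flight: required shaft power P = D V / \eta.
% Adding a known drag increment \Delta D at the same speed requires \Delta P more power:
\[
  \Delta P \approx \frac{\Delta D \, V}{\eta}
  \quad\Longrightarrow\quad
  \eta \approx \frac{\Delta D \, V}{\Delta P},
  \qquad
  D \approx \frac{\eta \, P}{V}.
\]
```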
Melozzi, Francesca; Woodman, Marmaduke M; Jirsa, Viktor K; Bernard, Christophe
2017-01-01
Connectome-based modeling of large-scale brain network dynamics enables causal in silico interrogation of the brain's structure-function relationship, necessitating the close integration of diverse neuroinformatics fields. Here we extend the open-source simulation software The Virtual Brain (TVB) to whole mouse brain network modeling based on individual diffusion magnetic resonance imaging (dMRI)-based or tracer-based detailed mouse connectomes. We provide practical examples on how to use The Virtual Mouse Brain (TVMB) to simulate brain activity, such as seizure propagation and the switching behavior of the resting state dynamics in health and disease. TVMB enables theoretically driven experimental planning and ways to test predictions in the numerous strains of mice available to study brain function in normal and pathological conditions.
Event-driven simulations of nonlinear integrate-and-fire neurons.
Tonnelier, Arnaud; Belmabrouk, Hana; Martinez, Dominique
2007-12-01
Event-driven strategies have been used to simulate spiking neural networks exactly. Previous work is limited to linear integrate-and-fire neurons. In this note, we extend event-driven schemes to a class of nonlinear integrate-and-fire models. Results are presented for the quadratic integrate-and-fire model with instantaneous or exponential synaptic currents. Extensions to conductance-based currents and exponential integrate-and-fire neurons are discussed.
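The exact event-driven scheme extends to the quadratic model because the inter-event dynamics have a closed form. The sketch below handles a single quadratic integrate-and-fire neuron with instantaneous synaptic pulses; the parameters and input times are illustrative, and network-level bookkeeping (one queue entry per neuron) is omitted.

```python
# Sketch of an event-driven update for a quadratic integrate-and-fire neuron
# with instantaneous synaptic pulses, dv/dt = v^2 + I (I > 0). Between events
# the membrane potential and the next firing time are known in closed form,
# so the simulation jumps from event to event with no fixed time step.
import heapq, math

I, V_RESET = 1.0, -2.0            # constant input current and reset potential (illustrative)
sqI = math.sqrt(I)

def v_after(v0, dt):              # exact state advance by dt (no spike inside the interval)
    return sqI * math.tan(sqI * dt + math.atan(v0 / sqI))

def time_to_spike(v0):            # time until v escapes to +infinity under dv/dt = v^2 + I
    return (math.pi / 2.0 - math.atan(v0 / sqI)) / sqI

def simulate(input_events, t_end=20.0):
    """input_events: iterable of (time, weight) instantaneous synaptic pulses."""
    events = list(input_events)
    heapq.heapify(events)
    t, v, spikes = 0.0, V_RESET, []
    while True:
        t_input = events[0][0] if events else float("inf")
        t_spike = t + time_to_spike(v)
        if min(t_spike, t_input) >= t_end:
            break
        if t_spike <= t_input:                 # neuron fires before the next input arrives
            spikes.append(t_spike)
            t, v = t_spike, V_RESET
        else:                                  # jump exactly to the input time, add the pulse
            v = v_after(v, t_input - t)
            _, w = heapq.heappop(events)
            t, v = t_input, v + w
    return spikes

print(simulate([(1.0, 0.8), (2.5, 0.8), (4.0, 1.5)]))
```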
Minică, Camelia C.; Genovese, Giulio; Hultman, Christina M.; Pool, René; Vink, Jacqueline M.; Neale, Michael C.; Dolan, Conor V.; Neale, Benjamin M.
2017-01-01
Sequence-based association studies are at a critical inflexion point with the increasing availability of exome-sequencing data. A popular test of association is the sequence kernel association test (SKAT). Weights are embedded within SKAT to reflect the hypothesized contribution of the variants to the trait variance. Because the true weights are generally unknown, and so are subject to misspecification, we examined the efficiency of a data-driven weighting scheme. We propose the use of a set of theoretically defensible weighting schemes, of which, we assume, the one that gives the largest test statistic is likely to capture best the allele frequency-functional effect relationship. We show that the use of alternative weights obviates the need to impose arbitrary frequency thresholds in sequence data association analyses. As both the score test and the likelihood ratio test (LRT) may be used in this context, and may differ in power, we characterize the behavior of both tests. We found that the two tests have equal power if the set of weights resembled the correct ones. However, if the weights are badly specified, the LRT shows superior power (due to its robustness to misspecification). With this data-driven weighting procedure the LRT detected significant signal in genes located in regions already confirmed as associated with schizophrenia – the PRRC2A (P=1.020E-06) and the VARS2 (P=2.383E-06) – in the Swedish schizophrenia case-control cohort of 11,040 individuals with exome-sequencing data. The score test is currently preferred for its computational efficiency and power. Indeed, assuming correct specification, in some circumstances the score test is the most powerful. However, LRT has the advantageous properties of being generally more robust and more powerful under weight misspecification. This is an important result given that, arguably, misspecified models are likely to be the rule rather than the exception in weighting-based approaches. PMID:28238293
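The data-driven weighting scheme can be pictured as evaluating a SKAT-style variance-component statistic under several defensible Beta(MAF; a, b) weightings and keeping the one that maximizes the statistic. The sketch below uses simulated genotypes and an intercept-only null model, and it omits the mixture-of-chi-squares p-value computation (and the LRT), so it only illustrates the selection step.

```python
# Sketch of the data-driven weighting idea: evaluate a SKAT-style score
# statistic Q = r' G W^2 G' r under several Beta(MAF; a, b) weighting schemes
# and keep the scheme giving the largest statistic. Genotypes and phenotypes
# are simulated, not the Swedish cohort data.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(0)
n, m = 2000, 30                                   # subjects, rare variants in one gene
maf = rng.uniform(0.001, 0.05, size=m)
G = rng.binomial(2, maf, size=(n, m)).astype(float)
y = G[:, :5] @ rng.normal(0.4, 0.1, 5) + rng.normal(size=n)   # a few causal variants

def skat_q(y, G, w):
    resid = y - y.mean()                          # null model: intercept only
    return float(resid @ (G * w**2) @ G.T @ resid)

schemes = {"beta(1,25)": (1, 25), "beta(0.5,0.5)": (0.5, 0.5), "flat": (1, 1)}
stats = {name: skat_q(y, G, beta.pdf(maf, a, b)) for name, (a, b) in schemes.items()}
best = max(stats, key=stats.get)
print(stats, "-> selected weighting:", best)
```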
NASA Astrophysics Data System (ADS)
Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.
2015-10-01
Reliable river flow forecasts play a key role in flood risk mitigation. Among different approaches to river flow forecasting, data-driven approaches have become increasingly popular in recent years due to their minimum information requirements and ability to simulate nonlinear and non-stationary characteristics of hydrological processes. In this study, attempts are made to apply four different types of data-driven approaches, namely traditional artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS), wavelet neural networks (WNN), and hybrid ANFIS with multi-resolution analysis using wavelets (WNF). The developed models were applied for real-time flood forecasting at the Casino station on the Richmond River, Australia, which is highly prone to flooding. Hourly rainfall and runoff data were used to drive the models, which were used for forecasting with 1, 6, 12, 24, 36 and 48 h lead-times. The performance of the models was further improved by adding upstream river flow data (Wiangaree station) as another effective input. All models perform satisfactorily up to a 12 h lead-time. However, the hybrid wavelet-based models significantly outperform the ANFIS and ANN models in longer lead-time forecasting. The results confirm the robustness of the proposed structure of the hybrid models for real-time runoff forecasting in the study area.
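A sketch of the hybrid wavelet-ANN construction: each input window is decomposed into multi-resolution components with a discrete wavelet transform, and the components feed a neural network that predicts flow at the chosen lead-time. PyWavelets and scikit-learn are assumed here, and the series is synthetic rather than the Richmond River data.

```python
# Sketch of a hybrid wavelet-ANN forecaster: the flow series is decomposed
# with a discrete wavelet transform and the multi-resolution components
# become inputs to a neural network predicting flow several hours ahead.
# Data are synthetic; the study used hourly rainfall and runoff records.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
t = np.arange(5000)
flow = 50 + 20 * np.sin(2 * np.pi * t / 240) + 5 * rng.normal(size=t.size)  # synthetic hourly flow

lead, lags = 12, 48                                  # forecast 12 h ahead from 48 h of history
def make_features(series):
    X, y = [], []
    for k in range(lags, len(series) - lead):
        window = series[k - lags:k]
        coeffs = pywt.wavedec(window, "db4", level=2)  # approximation + detail components
        X.append(np.concatenate(coeffs))
        y.append(series[k + lead])
    return np.asarray(X), np.asarray(y)

X, y = make_features(flow)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 at %d h lead time: %.3f" % (lead, model.score(X_te, y_te)))
```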
Prospects of second generation artificial intelligence tools in calibration of chemical sensors.
Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Ramam, Veluri Anantha; Rao, Gollapalli Nageswara; Rao, Vaddadi Venkata Panakala
2005-05-01
Multivariate data-driven calibration models with neural networks (NNs) are developed for binary (Cu++ and Ca++) and quaternary (K+, Ca++, NO3- and Cl-) ion-selective electrode (ISE) data. The response profiles of ISEs with concentration are non-linear and sub-Nernstian. This task represents function approximation of multivariate, multi-response, correlated, non-linear data with unknown noise structure, i.e. multi-component calibration/prediction in chemometric parlance. Radial basis function (RBF) and Fuzzy-ARTMAP-NN models implemented in the software packages TRAJAN and Professional II are employed for the calibration. The optimum NN models reported are based on residuals in concentration space. Being a data-driven information technology, NNs do not require a model, a prior or posterior distribution of the data, or a noise structure. Missing information, spikes or newer trends in different concentration ranges can be modeled through novelty detection. Two simulated data sets generated from mathematical functions are modeled as a function of the number of data points and network parameters such as the number of neurons and nearest neighbors. The success of RBF and Fuzzy-ARTMAP-NNs in developing adequate calibration models for experimental data and function approximation models for more complex simulated data sets establishes AI2 (artificial intelligence, 2nd generation) as a promising technology for quantitation.
Model-Based GUI Testing Using Uppaal at Novo Nordisk
NASA Astrophysics Data System (ADS)
Hjort, Ulrik H.; Illum, Jacob; Larsen, Kim G.; Petersen, Michael A.; Skou, Arne
This paper details a collaboration between Aalborg University and Novo Nordisk in developing an automatic model-based test generation tool for system testing of the graphical user interface of a medical device on an embedded platform. The tool takes as input a UML state machine model and generates a test suite satisfying some testing criterion, such as edge or state coverage, and converts the individual test cases into a scripting language that can be automatically executed against the target. The tool has significantly reduced the time required for test construction and generation, and has reduced the number of test scripts while increasing coverage.
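The test-generation principle (derive a suite covering every model transition, then emit each test as executable steps) can be illustrated with a toy state machine and a breadth-first search for uncovered edges. The model and event names below are hypothetical, and this is not the Uppaal-based tool itself.

```python
# Toy illustration of the principle: given a state-machine model of a GUI,
# derive a test suite achieving edge coverage and emit each test as a list
# of script steps. The model below is hypothetical.
from collections import deque

transitions = {                     # state -> [(event, next_state)]
    "Home": [("press_menu", "Menu"), ("press_info", "Info")],
    "Menu": [("select_dose", "Dose"), ("back", "Home")],
    "Dose": [("confirm", "Home"), ("back", "Menu")],
    "Info": [("back", "Home")],
}

def edge_coverage_suite(start="Home"):
    uncovered = {(s, e, t) for s, lst in transitions.items() for e, t in lst}
    suite = []
    while uncovered:
        # breadth-first search from the start state for the nearest uncovered edge
        queue, seen, target = deque([(start, [])]), {start}, None
        while queue and target is None:
            s, path = queue.popleft()
            for e, t in transitions[s]:
                if (s, e, t) in uncovered:
                    target = path + [(s, e, t)]
                elif t not in seen:
                    seen.add(t)
                    queue.append((t, path + [(s, e, t)]))
        if target is None:          # remaining uncovered edges are unreachable
            break
        for edge in target:         # every edge on the chosen path is now covered
            uncovered.discard(edge)
        suite.append([e for _, e, _ in target])
    return suite

for i, test in enumerate(edge_coverage_suite(), 1):
    print("test %d:" % i, " -> ".join(test))
```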
Space can substitute for time in predicting climate-change effects on biodiversity
Blois, Jessica L.; Williams, John W.; Fitzpatrick, Matthew C.; Jackson, Stephen T.; Ferrier, Simon
2013-01-01
“Space-for-time” substitution is widely used in biodiversity modeling to infer past or future trajectories of ecological systems from contemporary spatial patterns. However, the foundational assumption—that drivers of spatial gradients of species composition also drive temporal changes in diversity—rarely is tested. Here, we empirically test the space-for-time assumption by constructing orthogonal datasets of compositional turnover of plant taxa and climatic dissimilarity through time and across space from Late Quaternary pollen records in eastern North America, then modeling climate-driven compositional turnover. Predictions relying on space-for-time substitution were ∼72% as accurate as “time-for-time” predictions. However, space-for-time substitution performed poorly during the Holocene when temporal variation in climate was small relative to spatial variation and required subsampling to match the extent of spatial and temporal climatic gradients. Despite this caution, our results generally support the judicious use of space-for-time substitution in modeling community responses to climate change.
Space can substitute for time in predicting climate-change effects on biodiversity.
Blois, Jessica L; Williams, John W; Fitzpatrick, Matthew C; Jackson, Stephen T; Ferrier, Simon
2013-06-04
"Space-for-time" substitution is widely used in biodiversity modeling to infer past or future trajectories of ecological systems from contemporary spatial patterns. However, the foundational assumption--that drivers of spatial gradients of species composition also drive temporal changes in diversity--rarely is tested. Here, we empirically test the space-for-time assumption by constructing orthogonal datasets of compositional turnover of plant taxa and climatic dissimilarity through time and across space from Late Quaternary pollen records in eastern North America, then modeling climate-driven compositional turnover. Predictions relying on space-for-time substitution were ∼72% as accurate as "time-for-time" predictions. However, space-for-time substitution performed poorly during the Holocene when temporal variation in climate was small relative to spatial variation and required subsampling to match the extent of spatial and temporal climatic gradients. Despite this caution, our results generally support the judicious use of space-for-time substitution in modeling community responses to climate change.
Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B
2013-01-01
The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point of care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method that allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing the knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
Fault Diagnosis in HVAC Chillers
NASA Technical Reports Server (NTRS)
Choi, Kihoon; Namuru, Setu M.; Azam, Mohammad S.; Luo, Jianhui; Pattipati, Krishna R.; Patterson-Hine, Ann
2005-01-01
Modern buildings are being equipped with increasingly sophisticated power and control systems with substantial capabilities for monitoring and controlling the amenities. Operational problems associated with heating, ventilation, and air-conditioning (HVAC) systems plague many commercial buildings, often as a result of degraded equipment, failed sensors, improper installation, poor maintenance, and improperly implemented controls. Most existing HVAC fault-diagnostic schemes are based on analytical models and knowledge bases. These schemes are adequate for generic systems. However, real-world systems differ significantly from the generic ones and necessitate modifications of the models and/or customization of the standard knowledge bases, which can be labor intensive. Data-driven techniques for fault detection and isolation (FDI) have a close relationship with pattern recognition, wherein one seeks to categorize the input-output data into normal or faulty classes. Owing to its simplicity and adaptability, a data-driven FDI approach can be customized without in-depth knowledge of the HVAC system. It enables building system operators to improve energy efficiency and maintain the desired comfort level at a reduced cost. In this article, we consider a data-driven approach for FDI of chillers in HVAC systems. To diagnose the faults of interest in the chiller, we employ multiway dynamic principal component analysis (MPCA), multiway partial least squares (MPLS), and support vector machines (SVMs). The simulation of a chiller under various fault conditions is conducted using a standard chiller simulator from the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). We validated our FDI scheme using experimental data obtained from different types of chiller faults.
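A stripped-down sketch of such a data-driven FDI pipeline is shown below, using ordinary PCA with a squared-prediction-error (Q) statistic for detection and an SVM for isolation. The synthetic data and thresholds are placeholders, and the multiway (MPCA/MPLS) formulations used in the article are not reproduced.

# Illustrative PCA + SVM fault detection/isolation sketch with synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical sensor matrix: rows = samples, columns = chiller sensors.
normal = rng.normal(0, 1, size=(500, 8))
faulty = rng.normal(0.8, 1.3, size=(100, 8))   # stand-in for one fault condition
labels = np.r_[np.zeros(100), np.ones(100)]     # 0 = normal, 1 = fault

# Detection: PCA trained on normal data; flag samples with large SPE (Q statistic).
pca = PCA(n_components=3).fit(normal)

def spe(X):
    resid = X - pca.inverse_transform(pca.transform(X))
    return (resid ** 2).sum(axis=1)

threshold = np.percentile(spe(normal), 99)
print("fraction of fault samples flagged:", (spe(faulty) > threshold).mean())

# Isolation: SVM trained to separate normal data from a known fault class.
X = np.vstack([normal[:100], faulty])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))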
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction and in non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. The detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm in which the tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit was first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil. Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forchheim, Germany)
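The alternating optimization described in the Methods can be pictured with the skeleton below; the detectability surrogate, dose model, and step sizes are invented stand-ins, since the real objective comes from a cascaded-systems model of cone-beam CT rather than anything reproducible here.

# Skeleton of an alternating optimization: closed-form tube-current step,
# gradient step on a kernel parameter, maximizing a toy detectability surrogate.
import numpy as np

n_views = 360
attenuation = 1.0 + 0.5 * np.cos(np.linspace(0, 2 * np.pi, n_views))  # toy patient

def detectability(tube_current, kernel_width):
    # Toy surrogate for d': a sharper kernel amplifies noise, a smoother one
    # blurs the task; placeholder for the cascaded-systems model in the paper.
    noise = (attenuation / np.maximum(tube_current, 1e-6)).mean()
    return 1.0 / (noise / kernel_width + 0.5 * kernel_width)

tube_current = np.ones(n_views)      # start from an unmodulated protocol
kernel_width = 1.0
total_dose = float(n_views)          # fixed dose budget

for _ in range(100):
    # "Analytic" current update: allocate dose proportionally to attenuation,
    # mimicking a closed-form step, then renormalize to the dose budget.
    tube_current = attenuation / attenuation.sum() * total_dose
    # Gradient step on the kernel parameter (finite-difference gradient).
    eps = 1e-4
    g = (detectability(tube_current, kernel_width + eps)
         - detectability(tube_current, kernel_width - eps)) / (2 * eps)
    kernel_width = max(kernel_width + 0.2 * g, 0.05)

print("kernel width at convergence:", round(kernel_width, 3))
print("d' surrogate:", round(detectability(tube_current, kernel_width), 3))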
Treiger, Teresa M; Fink-Samnick, Ellen
2013-01-01
The purpose of this second article of a 3-article series is to clarify the competencies for a new paradigm of case management built upon a value-driven foundation that is applicable to all health care sectors where case management is practiced. In moving forward, the one fact that rings true is that there will be constant change in our industry. As the health care terrain shifts and new influences continually surface, there will be consequences for case management practice. These impacts require nimble clinical professionals in possession of recognized and firmly established competencies. They must be agile to frame (and reframe) their professional practice to facilitate the best possible outcomes for their patients. Case managers can choose to be Gumby™ or Pokey™. This is exactly the time to define a competency-based case management model, highlighting one sufficiently fluid to fit into any setting of care. The practice of case management transcends the vast array of representative professional disciplines and educational levels. A majority of current models are driven by business priorities rather than the competencies critical to successful practice and quality patient outcomes. This results in a fragmented professional case management identity. Although there is an inherent value in what each discipline brings to the table, this advanced model unifies behind case management's unique, strengths-based identity instead of continuing to align within traditional divisions (e.g., discipline, work setting, population served). This model fosters case management's expanding career advancement opportunities, including a reflective clinical ladder.
Beyond Solar Fuels: Renewable Energy-Driven Chemistry.
Lanzafame, Paola; Abate, Salvatore; Ampelli, Claudio; Genovese, Chiara; Passalacqua, Rosalba; Centi, Gabriele; Perathoner, Siglinda
2017-11-23
The future feasibility of decarbonized industrial chemical production based on the substitution of fossil feedstocks (FFs) with renewable energy (RE) sources is discussed. Indeed, the use of FFs as an energy source has the greatest impact on the greenhouse gas emissions of chemical production. This future scenario is indicated as "solar-driven" or "RE-driven" chemistry. Its possible implementation requires going beyond the concept of solar fuels, in particular addressing two key aspects: i) the use of RE-driven processes for the production of base raw materials, such as olefins, methanol, and ammonia, and ii) the development of novel RE-driven routes that simultaneously realize process and energy intensification, particularly in the direction of a significant reduction of the number of process steps. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
UTOPIAN: user-driven topic modeling based on interactive nonnegative matrix factorization.
Choo, Jaegul; Lee, Changhyun; Reddy, Chandan K; Park, Haesun
2013-12-01
Topic modeling has been widely used for analyzing text document collections. Recently, there have been significant advancements in various topic modeling techniques, particularly in the form of probabilistic graphical modeling. State-of-the-art techniques such as Latent Dirichlet Allocation (LDA) have been successfully applied in visual text analytics. However, most of the widely used methods based on probabilistic modeling have drawbacks in terms of consistency across multiple runs and empirical convergence. Furthermore, due to the complexity of the formulation and the algorithm, LDA cannot easily incorporate various types of user feedback. To tackle this problem, we propose a reliable and flexible visual analytics system for topic modeling called UTOPIAN (User-driven Topic modeling based on Interactive Nonnegative Matrix Factorization). Centered around its semi-supervised formulation, UTOPIAN enables users to interact with the topic modeling method and steer the result in a user-driven manner. We demonstrate the capability of UTOPIAN via several usage scenarios with real-world document corpora such as the InfoVis/VAST paper data set and product review data sets.
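A baseline (non-interactive) NMF topic-modeling sketch is shown below; UTOPIAN's semi-supervised, user-steerable formulation builds on such a factorization but adds reference matrices for user feedback, which this minimal example does not include. The tiny corpus is invented.

# Baseline NMF topic modeling on a toy corpus (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "interactive visualization of topic models for text analytics",
    "steering topic results through user feedback and visualization",
    "nonnegative matrix factorization for document clustering",
    "matrix factorization converges consistently across runs",
    "probabilistic graphical models and latent dirichlet allocation",
    "lda topic inference with gibbs sampling in graphical models",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                  # documents x terms
nmf = NMF(n_components=3, init="nndsvd", random_state=0)
W = nmf.fit_transform(X)                       # document-topic weights
H = nmf.components_                            # topic-term weights

terms = tfidf.get_feature_names_out()
for k, topic in enumerate(H):
    top = topic.argsort()[-4:][::-1]
    print(f"topic {k}:", ", ".join(terms[i] for i in top))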
Toward Computational Cumulative Biology by Combining Models of Biological Datasets
Faisal, Ali; Peltonen, Jaakko; Georgii, Elisabeth; Rung, Johan; Kaski, Samuel
2014-01-01
A main challenge of data-driven sciences is how to make maximal use of the progressively expanding databases of experimental datasets in order to keep research cumulative. We introduce the idea of a modeling-based dataset retrieval engine designed for relating a researcher's experimental dataset to earlier work in the field. The search is (i) data-driven to enable new findings, going beyond the state of the art of keyword searches in annotations, (ii) modeling-driven, to include both biological knowledge and insights learned from data, and (iii) scalable, as it is accomplished without building one unified grand model of all data. Assuming each dataset has been modeled beforehand, by the researchers or automatically by database managers, we apply a rapidly computable and optimizable combination model to decompose a new dataset into contributions from earlier relevant models. By using the data-driven decomposition, we identify a network of interrelated datasets from a large annotated human gene expression atlas. While tissue type and disease were major driving forces for determining relevant datasets, the found relationships were richer, and the model-based search was more accurate than the keyword search; moreover, it recovered biologically meaningful relationships that are not straightforwardly visible from annotations—for instance, between cells in different developmental stages such as thymocytes and T-cells. Data-driven links and citations matched to a large extent; the data-driven links even uncovered corrections to the publication data, as two of the most linked datasets were not highly cited and turned out to have wrong publication entries in the database. PMID:25427176
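The decomposition idea can be illustrated with a non-negative least-squares fit of a new dataset's summary vector against signatures of previously modeled datasets; the signatures below are random stand-ins for the per-dataset models assumed by the retrieval engine.

# Sketch: decompose a new dataset into non-negative contributions from earlier
# dataset "models" and rank the most related earlier datasets.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)

n_genes, n_models = 1000, 20
signatures = rng.normal(size=(n_genes, n_models))   # stand-ins for earlier models
true_weights = np.zeros(n_models)
true_weights[[2, 7]] = [0.7, 0.3]                    # new data mixes models 2 and 7
new_dataset = signatures @ true_weights + rng.normal(0, 0.05, n_genes)

weights, _ = nnls(signatures, new_dataset)
ranked = np.argsort(weights)[::-1][:3]
print("most related earlier datasets:", ranked, weights[ranked].round(2))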
Unsteady Computational Tests of a Non-Equilibrium Turbulence Model
NASA Astrophysics Data System (ADS)
Jirasek, Adam; Hamlington, Peter; Lofthouse, Andrew; USAFA Collaboration; CU Boulder Collaboration
2017-11-01
A non-equilibrium turbulence model is assessed on simulations of three practically relevant unsteady test cases: oscillating channel flow, transonic flow around an oscillating airfoil, and transonic flow around the Benchmark Super-Critical Wing. The first case is related to piston-driven flows while the remaining cases are relevant to unsteady aerodynamics at high angles of attack and transonic speeds. Non-equilibrium turbulence effects arise in each of these cases in the form of a lag between the mean strain rate and Reynolds stresses, resulting in reduced kinetic energy production compared to classical equilibrium turbulence models that are based on the gradient transport (or Boussinesq) hypothesis. As a result of the improved representation of unsteady flow effects, the non-equilibrium model provides substantially better agreement with available experimental data than do classical equilibrium turbulence models. This suggests that the non-equilibrium model may be ideally suited for simulations of modern high-speed, high angle of attack aerodynamics problems.
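Schematically, such a non-equilibrium lag can be pictured as a relaxation of the Reynolds-stress anisotropy toward its equilibrium (Boussinesq) value; the generic form below only illustrates the effect and is not the specific model assessed here:

\[ \frac{D a_{ij}}{D t} = -\frac{1}{\tau}\left( a_{ij} - a_{ij}^{\mathrm{eq}} \right), \qquad a_{ij}^{\mathrm{eq}} = -2\,\frac{\nu_T}{k}\,\bar{S}_{ij}, \]

so that when the mean strain rate \bar{S}_{ij} varies on time scales comparable to the relaxation time \tau, the anisotropy a_{ij} (and hence the turbulence production P = -k\, a_{ij} \bar{S}_{ij}) lags its equilibrium value, reducing kinetic-energy production relative to a Boussinesq closure.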
Novel Hybrid Scheduling Technique for Sensor Nodes with Mixed Criticality Tasks.
Micea, Mihai-Victor; Stangaciu, Cristina-Sorina; Stangaciu, Valentin; Curiac, Daniel-Ioan
2017-06-26
Sensor networks are increasingly becoming a key technology for complex control applications. Their potential use in safety- and time-critical domains has raised the need for task scheduling mechanisms specially adapted to sensor node specific requirements, often materialized as the predictable, jitter-less execution of tasks characterized by different criticality levels. This paper offers an efficient scheduling solution, named Hybrid Hard Real-Time Scheduling (H²RTS), which combines a static, clock driven method with a dynamic, event driven scheduling technique, in order to provide high execution predictability, while keeping a high node Central Processing Unit (CPU) utilization factor. From the detailed, integrated schedulability analysis of H²RTS, a set of sufficiency tests are introduced and demonstrated based on the processor demand and linear upper bound metrics. The performance and correct behavior of the proposed hybrid scheduling technique have been extensively evaluated and validated both on a simulator and on a sensor mote equipped with an ARM7 microcontroller.
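For orientation, a generic processor-demand (demand-bound) sufficiency check of the kind referenced above is sketched below; the task set is hypothetical and the test shown is the standard EDF-style demand-bound criterion, not the H²RTS-specific conditions derived in the paper.

# Generic processor-demand sufficiency check for a periodic task set.
from math import floor

# Hypothetical task set: (worst-case execution time C, period T, relative deadline D)
tasks = [(2, 10, 10), (3, 15, 12), (5, 40, 40)]

def demand(t, tasks):
    """Total execution demand of jobs released and due within [0, t]."""
    return sum(max(0, floor((t - D) / T) + 1) * C for C, T, D in tasks)

def schedulable(tasks, horizon=200):
    # Check the demand bound at every absolute deadline up to the horizon.
    checkpoints = sorted({D + k * T for C, T, D in tasks
                          for k in range(int(horizon // T) + 1)
                          if D + k * T <= horizon})
    return all(demand(t, tasks) <= t for t in checkpoints)

print("utilization:", round(sum(C / T for C, T, D in tasks), 3))
print("demand-bound test passed:", schedulable(tasks))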
Design of rapid prototype of UAV line-of-sight stabilized control system
NASA Astrophysics Data System (ADS)
Huang, Gang; Zhao, Liting; Li, Yinlong; Yu, Fei; Lin, Zhe
2018-01-01
The line-of-sight (LOS) stable platform is the most important technology of the UAV (unmanned aerial vehicle), as it can reduce the effect on imaging quality of vibration and maneuvering of the aircraft. According to the requirements of the LOS stabilization system (a combined inertial and optical-mechanical method) and the UAV's structure, a rapid prototype is designed based on an industrial computer, using the Peripheral Component Interconnect (PCI) bus and Windows RTX to exchange information. The paper presents the control structure and circuit system, including the inertial stability control circuit with gyro and voice-coil motor drive circuit, the optical-mechanical stability control circuit with fast-steering-mirror (FSM) drive circuit and image-deviation acquisition system, the outer-frame rotary follower, and the information-exchange system on the PC. Test results show that the stability accuracy reaches 5 μrad, proving the effectiveness of the combined line-of-sight stabilization control system, and that the real-time rapid prototype runs stably.
Hybrid quantum-classical modeling of quantum dot devices
NASA Astrophysics Data System (ADS)
Kantner, Markus; Mittnenzweig, Markus; Koprucki, Thomas
2017-11-01
The design of electrically driven quantum dot devices for quantum optical applications asks for modeling approaches combining classical device physics with quantum mechanics. We connect the well-established fields of semiclassical semiconductor transport theory and the theory of open quantum systems to meet this requirement. By coupling the van Roosbroeck system with a quantum master equation in Lindblad form, we introduce a new hybrid quantum-classical modeling approach, which provides a comprehensive description of quantum dot devices on multiple scales: it enables the calculation of quantum optical figures of merit and the spatially resolved simulation of the current flow in realistic semiconductor device geometries in a unified way. We construct the interface between both theories in such a way that the resulting hybrid system obeys the fundamental axioms of (non)equilibrium thermodynamics. We show that our approach guarantees the conservation of charge, consistency with thermodynamic equilibrium and the second law of thermodynamics. The feasibility of the approach is demonstrated by numerical simulations of an electrically driven single-photon source based on a single quantum dot in the stationary and transient operation regime.
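For orientation, the quantum-dot degrees of freedom in such a hybrid model evolve under a quantum master equation in Lindblad form, shown here only in its generic shape (the specific Hamiltonian, dissipators, and coupling to the van Roosbroeck transport equations are those defined in the paper and are not reproduced):

\[ \frac{\mathrm{d}\rho}{\mathrm{d}t} = -\frac{\mathrm{i}}{\hbar}\,[H,\rho] + \sum_k \gamma_k \left( L_k \rho L_k^{\dagger} - \tfrac{1}{2}\left\{ L_k^{\dagger} L_k, \rho \right\} \right), \]

with \rho the density matrix of the dot subsystem, H its Hamiltonian, and L_k jump operators describing processes such as carrier capture, escape, and photon emission; the coupling to the classical transport equations enters through rates and operators that depend on the semiclassically computed carrier densities.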
Federated ontology-based queries over cancer data
2012-01-01
Background: Personalised medicine provides patients with treatments that are specific to their genetic profiles. It requires efficient data sharing of disparate data types across a variety of scientific disciplines, such as molecular biology, pathology, radiology and clinical practice. Personalised medicine aims to offer the safest and most effective therapeutic strategy based on the gene variations of each subject. In particular, this is valid in oncology, where knowledge about genetic mutations has already led to new therapies. Current molecular biology techniques (microarrays, proteomics, epigenetic technology and improved DNA sequencing technology) enable better characterisation of cancer tumours. The vast amounts of data, however, coupled with the use of different terms - or semantic heterogeneity - in each discipline make the retrieval and integration of information difficult.
Results: Existing software infrastructures for data-sharing in the cancer domain, such as caGrid, support access to distributed information. caGrid follows a service-oriented model-driven architecture. Each data source in caGrid is associated with metadata at increasing levels of abstraction, including syntactic, structural, reference and domain metadata. The domain metadata consists of ontology-based annotations associated with the structural information of each data source. However, caGrid's current querying functionality is given at the structural metadata level, without capitalising on the ontology-based annotations. This paper presents the design of and theoretical foundations for distributed ontology-based queries over cancer research data. Concept-based queries are reformulated to the target query language, where join conditions between multiple data sources are found by exploiting the semantic annotations. The system has been implemented, as a proof of concept, over the caGrid infrastructure. The approach is applicable to other model-driven architectures. A graphical user interface has been developed, supporting ontology-based queries over caGrid data sources. An extensive evaluation of the query reformulation technique is included.
Conclusions: To support personalised medicine in oncology, it is crucial to retrieve and integrate molecular, pathology, radiology and clinical data in an efficient manner. The semantic heterogeneity of the data makes this a challenging task. Ontologies provide a formal framework to support querying and integration. This paper provides an ontology-based solution for querying distributed databases over service-oriented, model-driven infrastructures. PMID:22373043
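A hypothetical, much-simplified illustration of concept-based query reformulation is sketched below: ontology annotations map concepts to columns, and join conditions are derived from columns in different sources that share an annotation. All names are invented, and the real system targets caGrid's query language and metadata rather than SQL.

# Toy concept-based query reformulation using shared semantic annotations.
annotations = {
    # (source, table, column) -> ontology concept
    ("pathology_db", "specimen", "patient_id"): "Patient",
    ("genomics_db",  "mutation", "patient_id"): "Patient",
    ("genomics_db",  "mutation", "gene"):       "GeneSymbol",
    ("pathology_db", "specimen", "diagnosis"):  "Diagnosis",
}

def reformulate(select_concepts, filter_concept, filter_value):
    cols = {concept: key for key, concept in annotations.items()}
    tables = sorted({(s, t) for (s, t, _), _ in annotations.items()})
    # Join condition: columns in different sources annotated with the same concept.
    joins = [(a, b) for a, ca in annotations.items()
                    for b, cb in annotations.items()
                    if ca == cb and a[0] < b[0]]
    select = ", ".join(f"{cols[c][1]}.{cols[c][2]}" for c in select_concepts)
    join_sql = " AND ".join(f"{a[1]}.{a[2]} = {b[1]}.{b[2]}" for a, b in joins)
    where = f"{cols[filter_concept][1]}.{cols[filter_concept][2]} = '{filter_value}'"
    return (f"SELECT {select} FROM " + ", ".join(t for _, t in tables)
            + f" WHERE {join_sql} AND {where}")

print(reformulate(["GeneSymbol"], "Diagnosis", "glioblastoma"))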
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.