Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model
2017-03-01
set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction...We propose two optimization models. The first, the Trade Sanction Inoperability Input-output Model (TS-IIM), selects the sector or set of sectors that...Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection. Unpublished doctoral dissertation
Updated Model of the Solar Energetic Proton Environment in Space
NASA Astrophysics Data System (ADS)
Jiggens, Piers; Heynderickx, Daniel; Sandberg, Ingmar; Truscott, Pete; Raukunen, Osku; Vainio, Rami
2018-05-01
The Solar Accumulated and Peak Proton and Heavy Ion Radiation Environment (SAPPHIRE) model provides environment specification outputs for all aspects of the Solar Energetic Particle (SEP) environment. The model is based upon a thoroughly cleaned and carefully processed data set. Herein the evolution of the solar proton model is discussed with comparisons to other models and data. This paper discusses the construction of the underlying data set, the modelling methodology, optimisation of fitted flux distributions and extrapolation of model outputs to cover a range of proton energies from 0.1 MeV to 1 GeV. The model provides outputs in terms of mission cumulative fluence, maximum event fluence and peak flux for both solar maximum and solar minimum periods. A new method for describing maximum event fluence and peak flux outputs in terms of 1-in-x-year SPEs is also described. SAPPHIRE proton model outputs are compared with previous models including CREME96, ESP-PSYCHIC and the JPL model. Low energy outputs are compared to SEP data from ACE/EPAM whilst high energy outputs are compared to a new model based on GLEs detected by Neutron Monitors (NMs).
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
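As an aside for readers implementing such controllers, the membership-function building blocks named in this abstract are easy to sketch in code. The following is a minimal illustration, not the paper's derivation; the function names, parameter ranges, and the example firing-strength computation are our own.

```python
import numpy as np

def gamma_mf(x, a, b):
    # Gamma-type membership: 0 below a, rises linearly to 1 at b, then stays 1
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def l_mf(x, a, b):
    # L-type membership: mirror image of the Gamma-type function
    return 1.0 - gamma_mf(x, a, b)

def triangular_mf(x, a, m, b):
    # Triangular membership with peak at m and support [a, b]
    return np.maximum(np.minimum((x - a) / (m - a), (b - x) / (b - m)), 0.0)

def center_of_sums(centroids, areas):
    # Center-of-sums defuzzification: area-weighted average of the
    # centroids of the scaled or clipped output fuzzy sets
    return np.dot(centroids, areas) / np.sum(areas)

# Rule firing strength via the algebraic-product t-norm on two inputs
e, de = 0.3, -0.1  # normalized error and change of error (illustrative values)
w = gamma_mf(e, -1.0, 1.0) * gamma_mf(de, -1.0, 1.0)
```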
Regionalisation of statistical model outputs creating gridded data sets for Germany
NASA Astrophysics Data System (ADS)
Höpp, Simona Andrea; Rauthe, Monika; Deutschländer, Thomas
2016-04-01
The goal of the German research program ReKliEs-De (regional climate projection ensembles for Germany, http://reklies.hlug.de) is to distribute robust information about the range and the extremes of future climate for Germany and its neighbouring river catchment areas. This joint research project is supported by the German Federal Ministry of Education and Research (BMBF) and was initiated by the German Federal States. The project results are meant to support the development of adaptation strategies to mitigate the impacts of future climate change. The aim of our part of the project is to adapt and transfer the regionalisation methods of the gridded hydrological data set (HYRAS) from daily station data to the station-based statistical regional climate model output of WETTREG (regionalisation method based on weather patterns). The WETTREG model output covers the period of 1951 to 2100 with a daily temporal resolution. For this, we generate a gridded data set of the WETTREG output for precipitation, air temperature and relative humidity with a spatial resolution of 12.5 km x 12.5 km, which is common for regional climate models. Thus, this regionalisation allows comparing statistical to dynamical climate model outputs. The HYRAS data set was developed by the German Meteorological Service within the German research program KLIWAS (www.kliwas.de) and consists of daily gridded data for Germany and its neighbouring river catchment areas. It has a spatial resolution of 5 km x 5 km for the entire domain for the hydro-meteorological elements precipitation, air temperature and relative humidity and covers the period of 1951 to 2006. After conservative remapping, the HYRAS data set is also suitable for the validation of climate models. The presentation consists of two parts covering the current state of the adaptation of the HYRAS regionalisation methods to the statistical regional climate model WETTREG: First, an overview of the HYRAS data set and the regionalisation methods for precipitation (REGNIE method based on a combination of multiple linear regression with 5 predictors and inverse distance weighting), air temperature and relative humidity (optimal interpolation) will be given. Finally, results of the regionalisation of WETTREG model output will be shown.
Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo
2017-01-01
The study of simplified, ad-hoc constructed model systems can help to elucidate if quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom-up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, such a process is affected by different, usually non-modelled, unpredictability sources, like cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model, including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test set circuits showed unexpected logic behaviour. Both NBM and BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit outputs with higher accuracy than the NBM, which is unable to capture the experimental output exhibited by some of the circuits even qualitatively. Finally, resource usage parameters, estimated via the BM, guided the successful construction of new corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved by modelling resource limitation, but further efforts are needed to improve the accuracy of models for biological engineering.
Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles
Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.
2009-01-01
The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials—treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective input is included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382
Structural identifiability analysis of a cardiovascular system model.
Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas
2016-05-01
The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not identifiable from every output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set only contains a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made, assuming known cardiac valve resistances. This assumption is common because of the poor practical identifiability of these four parameters. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis.
Method and system for monitoring and displaying engine performance parameters
NASA Technical Reports Server (NTRS)
Abbott, Terence S. (Inventor); Person, Jr., Lee H. (Inventor)
1991-01-01
The invention is a method and system for monitoring and directly displaying the actual thrust produced by a jet aircraft engine under determined operating conditions and the available thrust and predicted (commanded) thrust of a functional model of an ideal engine under the same determined operating conditions. A first set of actual value output signals, representative of a plurality of actual performance parameters of the engine under the determined operating conditions, is generated and compared with a second set of predicted value output signals, representative of the predicted values of corresponding performance parameters of a functional model of the engine under the determined operating conditions, to produce a third set of difference value output signals within a range of normal, caution, or warning limit values. A thrust indicator provides a display when any one of the actual value output signals is in the warning range, while shaping function means shape each of the difference output signals as it approaches the limits of the normal, caution, and warning ranges.
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and of the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or, when its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would be within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
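The optimization formulations in the paper are more elaborate; the toy sketch below conveys only the basic interval-predictor idea (fit a center model, then take the smallest constant spread that covers every observation). All data and names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 0.3 * rng.standard_normal(50)   # synthetic observations

# Center model by least squares, then the smallest constant spread
# that makes the interval contain every observation
coeffs = np.polyfit(x, y, deg=1)
center = np.polyval(coeffs, x)
spread = np.max(np.abs(y - center))
lower, upper = center - spread, center + spread
assert np.all((y >= lower) & (y <= upper))    # every point falls in the interval
```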
NASA Astrophysics Data System (ADS)
Oza, Nikunj
2012-03-01
A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. A set of training examples—examples with known output values—is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate’s measurements. The generalization performance of a learned model (how closely the target outputs and the model’s predicted outputs agree for patterns that have not been presented to the learning algorithm) would provide an indication of how well the model has learned the desired mapping. More formally, a classification learning algorithm L takes a training set T as its input. The training set consists of |T| examples or instances. It is assumed that there is a probability distribution D from which all training examples are drawn independently—that is, all the training examples are independently and identically distributed (i.i.d.). The ith training example is of the form (x_i, y_i), where x_i is a vector of values of several features and y_i represents the class to be predicted. In the sunspot classification example given above, each training example would represent one sunspot’s classification (y_i) and the corresponding set of measurements (x_i). The output of a supervised learning algorithm is a model h that approximates the unknown mapping from the inputs to the outputs. In our example, h would map from the sunspot measurements to the type of sunspot. We may have a test set S—a set of examples not used in training that we use to test how well the model h predicts the outputs on new examples. Just as with the examples in T, the examples in S are assumed to be independent and identically distributed (i.i.d.) draws from the distribution D. We measure the error of h on the test set as the proportion of test cases that h misclassifies: (1/|S|) Σ_{(x,y) ∈ S} I(h(x) ≠ y), where I(v) is the indicator function—it returns 1 if v is true and 0 otherwise. In our sunspot classification example, we would identify additional examples of sunspots that were not used in generating the model, and use these to determine how accurate the model is—the fraction of the test samples that the model classifies correctly. An example of a classification model is the decision tree shown in Figure 23.1. We will discuss the decision tree learning algorithm in more detail later—for now, we assume that, given a training set with examples of sunspots, this decision tree is derived. This can be used to classify previously unseen examples of sunspots.
For example, if a new sunspot’s inputs indicate that its "Group Length" is in the range 10-15, then the decision tree would classify the sunspot as being of type “E,” whereas if the "Group Length" is "NULL," the "Magnetic Type" is "bipolar," and the "Penumbra" is "rudimentary," then it would be classified as type "C." In this chapter, we will add to the above description of classification problems. We will discuss decision trees and several other classification models. In particular, we will discuss the learning algorithms that generate these classification models, how to use them to classify new examples, and the strengths and weaknesses of these models. We will end with pointers to further reading on classification methods applied to astronomy data.
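The test-error expression quoted in this chapter description is directly executable. Below is a small self-contained rendering in Python; the toy test set and the stand-in classifier h are our own, not from the chapter.

```python
def test_error(h, test_set):
    """Proportion of test examples (x, y) that the model h misclassifies:
    (1/|S|) * sum over (x, y) in S of I(h(x) != y)."""
    return sum(1 for x, y in test_set if h(x) != y) / len(test_set)

# Illustrative use with a toy rule standing in for a learned decision tree
S = [((12.0,), "E"), ((0.0,), "C"), ((14.0,), "E")]
h = lambda x: "E" if x[0] >= 10 else "C"
print(test_error(h, S))  # 0.0 on this toy test set
```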
Application of Artificial Neural Network to Optical Fluid Analyzer
NASA Astrophysics Data System (ADS)
Kimura, Makoto; Nishida, Katsuhiko
1994-04-01
A three-layer artificial neural network has been applied to the presentation of optical fluid analyzer (OFA) raw data, and the accuracy of oil fraction determination has been significantly improved compared to previous approaches. To apply the artificial neural network approach to solving a problem, the first step is training to determine the appropriate weight set for calculating the target values. This involves using a series of data sets (each comprising a set of input values and an associated set of output values that the artificial neural network is required to determine) to tune artificial neural network weighting parameters so that the output of the neural network to the given set of input values is as close as possible to the required output. The physical model used to generate the series of learning data sets was the effective flow stream model, developed for OFA data presentation. The effectiveness of the training was verified by reprocessing the same input data as were used to determine the weighting parameters and then by comparing the results of the artificial neural network to the expected output values. The standard deviation of the expected and obtained values was approximately 10% (two sigma).
Fligor, Brian J; Cox, L Clarke
2004-12-01
To measure the sound levels generated by the headphones of commercially available portable compact disc players and provide hearing healthcare providers with safety guidelines based on a theoretical noise dose model. Using a Knowles Electronics Manikin for Acoustical Research and a personal computer, output levels across volume control settings were recorded from headphones driven by a standard signal (white noise) and compared with output levels from music samples of eight different genres. Many commercially available models from different manufacturers were investigated. Several different styles of headphones (insert, supra-aural, vertical, and circumaural) were used to determine if style of headphone influenced output level. Free-field equivalent sound pressure levels measured at maximum volume control setting ranged from 91 dBA to 121 dBA. Output levels varied across manufacturers and style of headphone, although generally the smaller the headphone, the higher the sound level for a given volume control setting. Specifically, in one manufacturer, insert earphones increased output level 7-9 dB, relative to the output from stock headphones included in the purchase of the CD player. In a few headphone-CD player combinations, peak sound pressure levels exceeded 130 dB SPL. Based on measured sound pressure levels across systems and the noise dose model recommended by National Institute for Occupational Safety and Health for protecting the occupational worker, a maximum permissible noise dose would typically be reached within 1 hr of listening with the volume control set to 70% of maximum gain using supra-aural headphones. Using headphones that resulted in boosting the output level (e.g., insert earphones used in this study) would significantly decrease the maximum safe volume control setting; this effect was unpredictable from one manufacturer to another. In the interest of providing a straightforward recommendation that should protect the hearing of the majority of consumers, reasonable guidelines would include a recommendation to limit headphone use to 1 hr or less per day if using supra-aural style headphones at a gain control setting of 60% of maximum.
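For reference, the NIOSH criterion the authors build on uses an 85 dBA, 8-hour reference exposure with a 3-dB exchange rate. The sketch below computes permissible listening time and dose from that standard formula; actual headphone output levels still have to be measured as in the study.

```python
def niosh_permissible_minutes(level_dba, criterion=85.0, exchange=3.0):
    # Permissible exposure time under the NIOSH REL: 8 h (480 min) at 85 dBA,
    # halved for every 3 dB above the criterion level
    return 480.0 / 2.0 ** ((level_dba - criterion) / exchange)

def noise_dose(level_dba, minutes):
    # Dose as a percentage; 100% is the maximum recommended daily dose
    return 100.0 * minutes / niosh_permissible_minutes(level_dba)

print(niosh_permissible_minutes(94.0))  # 60.0 min permitted at 94 dBA
print(noise_dose(91.0, 60.0))           # one hour at 91 dBA -> 50% dose
```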
Perl-speaks-NONMEM (PsN)--a Perl module for NONMEM related programming.
Lindbom, Lars; Ribbing, Jakob; Jonsson, E Niclas
2004-08-01
The NONMEM program is the most widely used nonlinear regression software in population pharmacokinetic/pharmacodynamic (PK/PD) analyses. In this article we describe a programming library, Perl-speaks-NONMEM (PsN), intended for programmers who aim to use the computational capability of NONMEM in external applications. The library is object oriented and written in the programming language Perl. The classes of the library are built around NONMEM's data, model and output files. The specification of the NONMEM model is easily set or changed through the model and data file classes, while the output from a model fit is accessed through the output file class. The classes have methods that help the programmer perform common repetitive tasks, e.g. summarising the output from a NONMEM run, setting the initial estimates of a model based on a previous run or truncating values over a certain threshold in the data file. PsN creates a basis for the development of high-level software using NONMEM as the regression tool.
NASA Technical Reports Server (NTRS)
Oza, Nikunj C.
2011-01-01
A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples---examples with known output values---is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.
Assessment of the Uniqueness of Wind Tunnel Strain-Gage Balance Load Predictions
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2016-01-01
A new test was developed to assess the uniqueness of wind tunnel strain-gage balance load predictions that are obtained from regression models of calibration data. The test helps balance users to gain confidence in load predictions of non-traditional balance designs. It also makes it possible to better evaluate load predictions of traditional balances that are not used as originally intended. The test works for both the Iterative and Non-Iterative Methods that are used in the aerospace testing community for the prediction of balance loads. It is based on the hypothesis that the total number of independently applied balance load components must always match the total number of independently measured bridge outputs or bridge output combinations. This hypothesis is supported by a control volume analysis of the inputs and outputs of a strain-gage balance. It is concluded from the control volume analysis that the loads and bridge outputs of a balance calibration data set must separately be tested for linear independence because it cannot always be guaranteed that a linearly independent load component set will result in linearly independent bridge output measurements. Simple linear math models for the loads and bridge outputs in combination with the variance inflation factor are used to test for linear independence. A highly unique and reversible mapping between the applied load component set and the measured bridge output set is guaranteed to exist if the maximum variance inflation factor of both sets is less than the literature recommended threshold of five. Data from the calibration of a six-component force balance is used to illustrate the application of the new test to real-world data.
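The variance inflation factor computation behind this test is straightforward to reproduce. Below is a numpy sketch of the standard VIF definition; the final combined check is shown as a comment because the load and bridge-output arrays are study data we do not have.

```python
import numpy as np

def max_vif(X):
    """Largest variance inflation factor across the columns of X.
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (plus an intercept)."""
    n, m = X.shape
    vifs = []
    for j in range(m):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - resid.var() / X[:, j].var()
        vifs.append(1.0 / (1.0 - r2))
    return max(vifs)

# The uniqueness test passes if both the applied-load set and the measured
# bridge-output set stay below the literature-recommended threshold of five:
# unique_mapping = max_vif(loads) < 5.0 and max_vif(outputs) < 5.0
```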
Using Weather Data and Climate Model Output in Economic Analyses of Climate Change
DOE Office of Scientific and Technical Information (OSTI.GOV)
Auffhammer, M.; Hsiang, S. M.; Schlenker, W.
2013-06-28
Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.
Emulating RRTMG Radiation with Deep Neural Networks for the Accelerated Model for Climate and Energy
NASA Astrophysics Data System (ADS)
Pal, A.; Norman, M. R.
2017-12-01
The RRTMG radiation scheme in the Accelerated Model for Climate and Energy Multi-scale Model Framework (ACME-MMF) is a bottleneck and consumes approximately 50% of the computational time. Simulating a case with the RRTMG radiation scheme in ACME-MMF at high throughput and high resolution will therefore require a speed-up of this calculation while retaining physical fidelity. In this study, RRTMG radiation is emulated with Deep Neural Networks (DNNs). The first step towards this goal is to run a case with ACME-MMF and generate input data sets for the DNNs. A principal component analysis of these input data sets is carried out. Artificial data sets are created from the previous data sets to cover a wider space. These artificial data sets are used in a standalone RRTMG radiation scheme to generate outputs in a cost-effective manner. These input-output pairs are used to train DNNs of multiple architectures (DNN 1). Another DNN (DNN 2) is trained using the inputs to predict the error. A reverse emulation is trained to map the output to input. An error-controlled code is developed with the two DNNs (1 and 2) and will determine when/if the original parameterization needs to be used.
Fiori, Simone
2007-01-01
Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets do not correspond in size or in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641
Gaussian functional regression for output prediction: Model assimilation and experimental design
NASA Astrophysics Data System (ADS)
Nguyen, N. C.; Peraire, J.
2016-03-01
In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
Optimized System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Longman, Richard W.
1999-01-01
In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it also minimizes the output error of an observer, which has the property that, as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations. Examples show that the methods developed here eliminate the bias, often observed with other system identification methods, of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
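The equation-error/output-error distinction the abstract draws can be seen in a toy single-input/single-output example. The system, noise level, and optimizer below are illustrative only; they are not the OKID or SQP machinery of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical SISO system y[k] = a*y[k-1] + b*u[k-1], observed with noise
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]
y += 0.05 * rng.standard_normal(200)          # measurement noise biases equation error

# Equation error: one-step least-squares regression on measured outputs
A = np.column_stack([y[:-1], u[:-1]])
theta_eq, *_ = np.linalg.lstsq(A, y[1:], rcond=None)

# Output error: simulate the model from its own past outputs and
# minimize the residual between measured and simulated outputs
def output_error(theta):
    a, b = theta
    ysim = np.zeros_like(y)
    for k in range(1, len(y)):
        ysim[k] = a * ysim[k - 1] + b * u[k - 1]
    return np.sum((y - ysim) ** 2)

theta_oe = minimize(output_error, theta_eq, method="BFGS").x
print(theta_eq, theta_oe)  # equation-error estimate is biased toward lower |a|
```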
NASA Technical Reports Server (NTRS)
Chang, H.
1976-01-01
A computer program using Lemke, Salkin and Spielberg's Set Covering Algorithm (SCA) to optimize a traffic model problem in the Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE) was documented. SCA forms a submodule of SAMPLE and provides for input and output, subroutines, and an interactive feature for performing the optimization and arranging the results in a readily understandable form for output.
User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs
Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.
2008-01-01
This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows; DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file; MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.
Real-time quality monitoring in debutanizer column with regression tree and ANFIS
NASA Astrophysics Data System (ADS)
Siddharth, Kumar; Pathak, Amey; Pani, Ajaya Kumar
2018-05-01
A debutanizer column is an integral part of any petroleum refinery. Online composition monitoring of debutanizer column outlet streams is highly desirable in order to maximize the production of liquefied petroleum gas. In this article, data-driven models for the debutanizer column are developed for real-time composition monitoring. The dataset used has seven process variables as inputs, and the output is the butane concentration in the debutanizer column bottom product. The input-output dataset is divided equally into a training (calibration) set and a validation (testing) set. The training set data were used to develop fuzzy inference, adaptive neuro-fuzzy (ANFIS) and regression tree models for the debutanizer column. The accuracy of the developed models was evaluated by simulation of the models with the validation dataset. It is observed that the ANFIS model has better estimation accuracy than the other models developed in this work and many data-driven models proposed so far in the literature for the debutanizer column.
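A minimal soft-sensor baseline along the lines described, assuming scikit-learn and placeholder data in place of the real plant measurements (which are not reproduced here):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# X: seven process variables (rows = samples); y: bottoms butane concentration.
# Placeholder synthetic data stands in for the historical plant dataset.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 7))
y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + 0.05 * rng.standard_normal(1000)

# Equal split into calibration (training) and testing (validation) halves
X_train, X_test = X[:500], X[500:]
y_train, y_test = y[:500], y[500:]

tree = DecisionTreeRegressor(max_depth=6).fit(X_train, y_train)
rmse = np.sqrt(np.mean((tree.predict(X_test) - y_test) ** 2))
print(f"validation RMSE: {rmse:.3f}")
```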
Validation of individual and aggregate global flood hazard models for two major floods in Africa.
NASA Astrophysics Data System (ADS)
Trigg, M.; Bernhofen, M.; Whyman, C.
2017-12-01
A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent outputs from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, as well as that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions, that provide a standard set of tests across different climates and hydraulic conditions.
Grid Integrated Distributed PV (GridPV) Version 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reno, Matthew J.; Coogan, Kyle
2014-12-01
This manual provides the documentation of the MATLAB toolbox of functions for using OpenDSS to simulate the impact of solar energy on the distribution system. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions. Each function in the toolbox is documented with the function use syntax, full description, function input list, function output list, example use, and example output.
Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.
Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J
2012-09-01
Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples.
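To make the idea concrete, here is a small sympy sketch of deriving an input-output equation by Gröbner-basis elimination for a hypothetical two-compartment model, treating derivatives as fresh polynomial symbols. Numeric rate constants keep the coefficient domain simple; this is our illustration, not the authors' algorithm or their derivative bound.

```python
from sympy import symbols, groebner

# States and derivatives as plain polynomial symbols; y = x1 is the observed output
x2, x2d, y, yd, ydd, u, ud = symbols('x2 x2d y yd ydd u ud')
k1, k2 = 2, 3  # hypothetical numeric rate constants

f1 = yd + k1*y - k2*x2 - u       # x1' = -k1*x1 + k2*x2 + u, with x1 = y
f2 = x2d - k1*y + k2*x2          # x2' =  k1*x1 - k2*x2
f3 = ydd + k1*yd - k2*x2d - ud   # time derivative of f1

# Lex order with the unobserved state (and its derivative) ranked highest
# eliminates x2 and x2d, leaving the input-output polynomial
G = groebner([f1, f2, f3], x2, x2d, ydd, yd, y, ud, u, order='lex')
io = [g for g in G.exprs if not g.has(x2) and not g.has(x2d)]
print(io)  # expect ydd + (k1 + k2)*yd - k2*u - ud, i.e. ydd + 5*yd - 3*u - ud
```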
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, Arelia T.; Cannon, Alex J.
2016-04-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis data sets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical data sets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
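The two families of tests named above (temporal sequencing and distributional equality) can be sketched with SciPy as follows; the observed and simulated series are placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp, spearmanr

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 50.0, size=30)              # e.g. observed 7-day peak flows
sim = obs * 1.1 + rng.normal(0.0, 10.0, size=30) # downscaling-driven model values

# Distributional properties: two-sample Kolmogorov-Smirnov test
ks_stat, ks_p = ks_2samp(obs, sim)

# Temporal sequencing: rank correlation of the paired annual series
rho, rho_p = spearmanr(obs, sim)

print(f"KS p={ks_p:.3f} (distributions), Spearman rho={rho:.2f} (timing)")
```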
Control vocabulary software designed for CMIP6
NASA Astrophysics Data System (ADS)
Nadeau, D.; Taylor, K. E.; Williams, D. N.; Ames, S.
2016-12-01
The Coupled Model Intercomparison Project Phase 6 (CMIP6) coordinates a number of intercomparison activities and includes many more experiments than its predecessor, CMIP5. In order to organize and facilitate use of the complex collection of expected CMIP6 model output, a standard set of descriptive information has been defined, which must be stored along with the data. This standard information enables automated machine interpretation of the contents of all model output files. The standard metadata is stored in compliance with the Climate and Forecast (CF) standard, which ensures that it can be interpreted and visualized by many standard software packages. Additional attributes (not standardized by CF) are required by CMIP6 to enhance identification of models and experiments, and to provide additional information critical for interpreting the model results. To ensure that CMIP6 data complies with the standards, a Python program called "PrePARE" (Pre-Publication Attribute Reviewer for the ESGF) has been developed to check the model output prior to its publication and release for analysis. If, for example, a required attribute is missing or incorrect (e.g., not included in the reference CMIP6 controlled vocabularies), then PrePARE will prevent publication. In some circumstances, missing attributes can be created or incorrect attributes can be replaced automatically by PrePARE, and the program will warn users about the changes that have been made. PrePARE provides a final check on model output, ensuring baseline conformity across the output from all CMIP6 models, which will facilitate analysis by climate scientists. PrePARE is flexible and can be easily modified for use by similar projects that have a well-defined set of metadata and controlled vocabularies.
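PrePARE itself is considerably more extensive, but the core check reduces to comparing file attributes against a controlled vocabulary. A minimal sketch with the netCDF4 package, using an invented toy vocabulary:

```python
from netCDF4 import Dataset

# Toy controlled vocabulary: required global attributes and their allowed
# values (None means any value is accepted). Entries are illustrative only.
CV = {
    "mip_era": {"CMIP6"},
    "experiment_id": {"historical", "piControl", "ssp585"},
    "source_id": None,
}

def check_file(path):
    errors = []
    with Dataset(path) as ds:
        attrs = {name: getattr(ds, name) for name in ds.ncattrs()}
    for name, allowed in CV.items():
        if name not in attrs:
            errors.append(f"missing required attribute: {name}")
        elif allowed is not None and attrs[name] not in allowed:
            errors.append(f"{name}={attrs[name]!r} not in controlled vocabulary")
    return errors  # publication would be blocked if this list is non-empty
```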
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
A Generalized Mixture Framework for Multi-label Classification
Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos
2015-01-01
We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
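Written out, the two decompositions at the heart of this framework are the classifier-chain factorization and its mixture; the gating notation π_k and the chain-specific orderings σ_k below are our reading of the abstract, not the paper's exact notation.

```latex
% Classifier-chain factorization of the joint class posterior (exact chain rule):
P(Y_1,\dots,Y_d \mid X) = \prod_{i=1}^{d} P\!\left(Y_i \mid X,\, Y_1,\dots,Y_{i-1}\right)

% Mixtures-of-experts ensemble over K such chains, gated by \pi_k(X):
P(Y_1,\dots,Y_d \mid X) = \sum_{k=1}^{K} \pi_k(X)
  \prod_{i=1}^{d} P_k\!\left(Y_{\sigma_k(i)} \mid X,\, Y_{\sigma_k(1)},\dots,Y_{\sigma_k(i-1)}\right),
\qquad \sum_{k=1}^{K} \pi_k(X) = 1
```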
Use of Regional Climate Model Output for Hydrologic Simulations
NASA Astrophysics Data System (ADS)
Hay, L. E.; Clark, M. P.; Wilby, R. L.; Gutowski, W. J.; Leavesley, G. H.; Pan, Z.; Arritt, R. W.; Takle, E. S.
2001-12-01
Daily precipitation and maximum and minimum temperature time series from a Regional Climate Model (RegCM2) were used as input to a distributed hydrologic model for a rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily data sets of precipitation and maximum and minimum temperature were developed from measured data. These datasets included precipitation and temperature data for all stations that are located within the area of the RegCM2 model output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and station data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and station-based simulations of runoff show little skill on a daily basis (Nash-Sutcliffe (NS) values ranging from 0.05 to 0.37 for RegCM2 and from -0.08 to 0.65 for station). When the precipitation and temperature biases are corrected in the RegCM2 output and station data sets (Bias-RegCM2 and Bias-station, respectively) the accuracy of the daily runoff simulations improve dramatically for the snowmelt-dominated basins. In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09) whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that the resolution of the RegCM2 output is appropriate for basin-scale modeling, but RegCM2 model output does not contain the day-to-day variability needed for basin-scale modeling in rainfall-dominated basins. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
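For reference, the Nash-Sutcliffe efficiency used to score these runoff simulations has the standard definition below (1 is a perfect fit; 0 means no more skill than predicting the observed mean):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```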
NASA Astrophysics Data System (ADS)
Périllat, Raphaël; Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2015-04-01
In a previous study, the sensitivity of a long-distance model was analyzed on the Fukushima Daiichi disaster case with the Morris screening method. It showed that a few variables, such as the horizontal diffusion coefficient or cloud thickness, have a weak influence on most of the chosen outputs. The purpose of the present study is to apply a similar methodology to the IRSN's operational short-distance atmospheric dispersion model, called pX. Atmospheric dispersion models are very useful in case of accidental releases of pollutant to minimize the population exposure during the accident and to obtain an accurate assessment of short- and long-term environmental and health impacts. Long-range models are mostly used for consequence assessment while short-range models are more adapted to the early phases of the crisis and are used to make prognoses. The Morris screening method was used to estimate the sensitivity of a set of outputs and to rank the inputs by their influence. The input ranking is highly dependent on the considered output, but a few variables seem to have a weak influence on most of them. This first step revealed that interactions and non-linearity are much more pronounced in the short-range model than in the long-range one. Afterward, the Sobol' method was used to obtain more quantitative results on the same set of outputs. Using this method was possible for the short-range model because it is far less computationally demanding than the long-range model. The study also compares two parameterizations, the Doury and Pasquill models, to contrast their behavior. The Doury model seems to excessively inflate the influence of some inputs compared to the Pasquill model, such as the altitude of emission and the air stability, which do not have the same role in the two models. The outputs of the long-range model were dominated by only a few inputs. On the contrary, in this study the influence is shared more evenly between the inputs.
NASA Astrophysics Data System (ADS)
Bring, Arvid; Asokan, Shilpa M.; Jaramillo, Fernando; Jarsjö, Jerker; Levi, Lea; Pietroń, Jan; Prieto, Carmen; Rogberg, Peter; Destouni, Georgia
2015-06-01
The multimodel ensemble of the Coupled Model Intercomparison Project, Phase 5 (CMIP5) synthesizes the latest research in global climate modeling. The freshwater system on land, particularly runoff, has so far been of relatively low priority in global climate models, despite the societal and ecosystem importance of freshwater changes, and the science and policy needs for such model output on drainage basin scales. Here we investigate the implications of CMIP5 multimodel ensemble output data for the freshwater system across a set of drainage basins in the Northern Hemisphere. Results of individual models vary widely, with even ensemble mean results differing greatly from observations and implying unrealistic long-term systematic changes in water storage and level within entire basins. The CMIP5 projections of basin-scale freshwater fluxes differ considerably more from observations and among models for the warm temperate study basins than for the Arctic and cold temperate study basins. In general, the results call for concerted research efforts and model developments for improving the understanding and modeling of the freshwater system and its change drivers. Specifically, more attention to basin-scale water flux analyses should be a priority for climate model development, and an important focus for relevant model-based advice for adaptation to climate change.
A user-friendly model for spray drying to aid pharmaceutical product development.
Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L J; Frijlink, Henderik W
2013-01-01
The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to produce glassy sugars by spray drying, an often used excipient in formulations of biopharmaceuticals. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach.
Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite
NASA Astrophysics Data System (ADS)
Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi
2018-05-01
The LAPAN-IPB satellite is a microsatellite-class spacecraft with an experimental remote sensing mission. The satellite carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of the images is an important factor for object classification in remote sensing. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to obtain its spectral response and per-pixel radiometric response characteristics. Pre-flight test data acquired with a variety of imager settings were used to examine the correlation between the radiance input and the digital number of the image output. This input-output correlation is described by a radiance conversion model incorporating the imager settings and radiometric characteristics. The modelling process, from the hardware level to the normalized radiance formula, is presented and discussed in this paper.
A Risk Stratification Model for Lung Cancer Based on Gene Coexpression Network and Deep Learning
2018-01-01
Risk stratification models for lung cancer based on gene expression profiles are of great interest. Instead of previous models based on individual prognostic genes, we aimed to develop a novel system-level risk stratification model for lung adenocarcinoma based on gene coexpression networks. Using multiple microarray data sets, gene coexpression network analysis was performed to identify survival-related networks. A deep learning based risk stratification model was constructed with representative genes of these networks. The model was validated in two test sets. Survival analysis was performed using the output of the model to evaluate whether it could predict patients' survival independent of clinicopathological variables. Five networks were significantly associated with patients' survival. Considering prognostic significance and representativeness, genes of the two survival-related networks were selected as input to the model. The output of the model was significantly associated with patients' survival in the two test sets and the training set (p < 0.00001, p < 0.0001 and p = 0.02 for the training set and test sets 1 and 2, resp.). In multivariate analyses, the model was associated with patients' prognosis independent of other clinicopathological features. Our study presents a new perspective on incorporating gene coexpression networks into the gene expression signature and on the clinical application of deep learning in genomic data science for prognosis prediction. PMID:29581968
A new dynamical downscaling approach with GCM bias corrections and spectral nudging
NASA Astrophysics Data System (ADS)
Xu, Zhongfeng; Yang, Zong-Liang
2015-04-01
To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31 year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against the North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to other GCM-driven RCM downscaling approaches in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.
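The mean-and-variance adjustment step described above can be sketched in a few lines; the code below is a minimal interpretation of that one step (it does not implement the spectral nudging), with synthetic series standing in for the GCM and reanalysis temperatures.

    import numpy as np

    def bias_correct(gcm_series, reanalysis_series):
        """Adjust a GCM series so its climatological mean and variance
        match a reanalysis: a minimal sketch of the NDD bias correction."""
        g_mean, g_std = gcm_series.mean(), gcm_series.std()
        r_mean, r_std = reanalysis_series.mean(), reanalysis_series.std()
        # Remove the GCM climatology, rescale the variance, restore the
        # reanalysis mean.
        return r_mean + (gcm_series - g_mean) * (r_std / g_std)

    rng = np.random.default_rng(0)
    gcm = 285.0 + 4.0 * rng.standard_normal(365)   # biased-warm GCM temps (K)
    rea = 282.0 + 3.0 * rng.standard_normal(365)   # reanalysis temps (K)
    corrected = bias_correct(gcm, rea)
    print(corrected.mean(), corrected.std())       # ~282 K, ~3 K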
Hay, L.E.; Clark, M.P.
2003-01-01
This paper examines hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station-timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown if bias corrections to model output will be valid in a future climate. Future work is warranted to identify the causes for (and removal of) systematic biases in DDS simulations, and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.
A 3,000-year quantitative drought record derived from XRF element data from a south Texas playa
NASA Astrophysics Data System (ADS)
Livsey, D. N.; Simms, A.; Hangsterfer, A.; Nisbet, R.; DeWitt, R.
2013-12-01
Recent droughts throughout the central United States highlight the need for a better understanding of the past frequency and severity of drought occurrence. Current records of past drought for the south Texas coast are derived from tree-ring data that span approximately the last 900 years before present (BP). In this study we utilize a supervised learning routine to create a transfer function between X-Ray Fluorescence (XRF) derived elemental data from Laguna Salada, Texas core LS10-02 and a locally derived tree-ring drought record. From this transfer function the 900 BP tree-ring drought record was extended to 3,000 BP. The supervised learning routine was trained on the first 100 years of XRF element data and tree-ring drought data to create the transfer function and training data set output. The model was then projected from the XRF elemental data for the next 800 years to create a deployed data set output and to test the transfer function parameters. The coefficients of determination between the model output and observed values are 0.77 and 0.70 for the 100-year training data set and the 900-year deployed data set, respectively. Given the relatively high coefficients of determination for both the training data set and the deployed data set, we interpret that the model parameters are fairly robust and that a high-resolution drought record can be derived from the XRF element data. These results indicate that XRF element data can be used as a quantitative tool to reconstruct past drought records.
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, facilitates the interpretation of results, and provides information that allows exploration of uncertainty at the process level and of how that uncertainty might affect model output. We present an example using the vegetation model BIOME-BGC.
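A minimal sketch of process-level sensitivity analysis, using a toy two-process ecosystem model rather than BIOME-BGC: each whole process is scaled by a perturbation factor and the relative change in model output is recorded.

    import numpy as np

    # Toy process-based model: two "processes" feeding one output.
    def photosynthesis(par, scale=1.0):
        return scale * 0.05 * par

    def respiration(temp, scale=1.0):
        return scale * 0.02 * np.exp(0.07 * temp)

    def model_output(par, temp, scales=(1.0, 1.0)):
        return photosynthesis(par, scales[0]) - respiration(temp, scales[1])

    par, temp = 400.0, 15.0
    base = model_output(par, temp)

    # Process-level sensitivity: perturb each whole process by +10% and
    # record the relative change in model output.
    for i, name in enumerate(["photosynthesis", "respiration"]):
        scales = [1.0, 1.0]
        scales[i] = 1.1
        delta = (model_output(par, temp, scales) - base) / base
        print(f"{name}: {100 * delta:+.2f}% output change for +10% process change")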
NASA Astrophysics Data System (ADS)
Hakkarinen, C.; Brown, D.; Callahan, J.; Hankin, S.; de Koningh, M.; Middleton-Link, D.; Wigley, T.
2001-05-01
A Web-based access system to climate model output data sets for intercomparison and analysis has been produced, using the NOAA-PMEL developed Live Access Server software as the host server and Ferret as the data serving and visualization engine. Called ARCAS ("ACACIA Regional Climate-data Access System"), and publicly accessible at http://dataserver.ucar.edu/arcas, the site currently serves climate model outputs from runs of the NCAR Climate System Model for the 21st century, for Business as Usual and Stabilization of Greenhouse Gas Emission scenarios. Users can select, download, and graphically display single variables or comparisons of two variables from either or both of the CSM model runs, averaged at monthly, seasonal, or annual time resolutions. The time length of the averaging period, and the geographical domain for download and display, are fully selectable by the user. A variety of arithmetic operations on the data variables can be computed "on-the-fly", as defined by the user. Expansions of the user-selectable analysis options, and of access to other DODS-compatible ("Distributed Oceanographic Data System"-compatible) data sets residing at locations other than the NCAR hardware server on which ARCAS operates, are planned for this year. These expansions are designed to allow users quick and easy-to-operate web-based access to the largest possible selection of climate model output data sets available throughout the world.
Statistical Emulation of Climate Model Projections Based on Precomputed GCM Runs*
Castruccio, Stefano; McInerney, David J.; Stein, Michael L.; ...
2014-02-24
The authors describe a new approach for emulating the output of a fully coupled climate model under arbitrary forcing scenarios that is based on a small set of precomputed runs from the model. Temperature and precipitation are expressed as simple functions of the past trajectory of atmospheric CO2 concentrations, and a statistical model is fit using a limited set of training runs. The approach is demonstrated to be a useful and computationally efficient alternative to pattern scaling and captures the nonlinear evolution of spatial patterns of climate anomalies inherent in transient climates. The approach does as well as pattern scaling in all circumstances and substantially better in many; it is not computationally demanding; and, once the statistical model is fit, it produces emulated climate output effectively instantaneously. In conclusion, it may therefore find wide application in climate impacts assessments and other policy analyses requiring rapid climate projections.
Integrated Mode Choice, Small Aircraft Demand, and Airport Operations Model User's Guide
NASA Technical Reports Server (NTRS)
Yackovetsky, Robert E. (Technical Monitor); Dollyhigh, Samuel M.
2004-01-01
A mode choice model that generates on-demand air travel forecasts at a set of GA airports based on changes in economic characteristics, vehicle performance characteristics such as speed and cost, and demographic trends has been integrated with a model to generate itinerant aircraft operations by airplane category at a set of 3227 airports. Numerous intermediate outputs can be generated, such as the number of additional trips diverted from automobiles and scheduled air service by the improved performance and cost of on-demand air vehicles. The total number of transported passenger miles that are diverted is also available. From these results the number of new aircraft needed to service the increased demand can be calculated. Output from the models discussed is in a format suitable for generating the origin and destination traffic flow between the 3227 airports based on solutions to a gravity model.
Atmospheric model development in support of SEASAT. Volume 2: Analysis models
NASA Technical Reports Server (NTRS)
Langland, R. A.
1977-01-01
As part of the SEASAT program of NASA, two sets of analysis programs were developed for the Jet Propulsion Laboratory. One set of programs produces 63 x 63 horizontal mesh analyses on a polar stereographic grid. The other set produces 187 x 187 third-mesh analyses. The parameters analyzed include sea surface temperature, sea level pressure, and twelve levels of upper air temperature, height, and wind analyses. The analysis output is used to initialize the primitive equation forecast models.
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the output power of photovoltaic systems, which exhibits nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. A corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models
NASA Astrophysics Data System (ADS)
Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana
2014-05-01
Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As the questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results from running the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and the dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7 % and values of GOF above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles, and the results were compared to those for the source materials. Preliminary findings from this experiment demonstrate the sensitivity of numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
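The kind of mixing model used here can be sketched as a constrained least-squares un-mixing problem; the example below recovers source proportions for a synthetic three-source mixture (tracer concentrations are illustrative, not the study's XRF measurements).

    import numpy as np
    from scipy.optimize import minimize

    # Rows: tracer properties (e.g., Sr, Rb, Fe, Ti); columns: source soils.
    A = np.array([[120.0, 80.0, 60.0],
                  [ 15.0, 40.0, 25.0],
                  [  3.2,  1.1,  2.5],
                  [  0.8,  0.5,  1.9]])
    true_p = np.array([0.5, 0.3, 0.2])
    b = A @ true_p                       # geochemistry of the laboratory mixture

    def misfit(p):                       # sum-of-squares misfit to minimize
        return np.sum((A @ p - b) ** 2)

    # Proportions must be non-negative and sum to one.
    res = minimize(misfit, x0=np.full(3, 1 / 3), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 3,
                   constraints=[{"type": "eq",
                                 "fun": lambda p: p.sum() - 1.0}])
    print(res.x)                         # recovered proportions, ~[0.5, 0.3, 0.2]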
He, Dan; Kuhn, David; Parida, Laxmi
2016-06-15
Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait prediction is usually formulated as a linear regression model. In many cases, for the same set of samples and markers, multiple traits are observed. Some of these traits might be correlated with each other. Therefore, modeling all the multiple traits together may improve the prediction accuracy. In this work, we view the multitrait prediction problem from a machine learning angle: as either a multitask learning problem or a multiple-output regression problem, depending on whether different traits share the same genotype matrix or not. We then adapted multitask learning algorithms and multiple-output regression algorithms to solve the multitrait prediction problem. We proposed a few strategies to improve the least-squares error of the prediction from these algorithms. Our experiments show that modeling multiple traits together could improve the prediction accuracy for correlated traits. The programs we used are either public or available directly from the referred authors, such as the MALSAR (http://www.public.asu.edu/~jye02/Software/MALSAR/) package. The Avocado data set has not been published yet and is available upon request. dhe@us.ibm.com. © The Author 2016. Published by Oxford University Press.
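A minimal sketch of the multitask formulation, assuming synthetic genotypes and two correlated traits; scikit-learn's MultiTaskLasso (which selects a shared marker support across traits) stands in here for the MALSAR algorithms used in the paper.

    import numpy as np
    from sklearn.linear_model import Lasso, MultiTaskLasso

    rng = np.random.default_rng(1)
    X = rng.integers(0, 3, size=(200, 50)).astype(float)  # 200 samples x 50 SNPs
    W = np.zeros((50, 2))
    W[:5, 0] = rng.standard_normal(5)                     # 5 causal markers...
    W[:5, 1] = W[:5, 0] + 0.1 * rng.standard_normal(5)    # ...shared by 2 traits
    Y = X @ W + 0.5 * rng.standard_normal((200, 2))

    # Per-trait Lasso (ignores trait correlation) vs. MultiTaskLasso, which
    # selects one shared set of markers for all traits (joint L2,1 penalty).
    single = [Lasso(alpha=0.1).fit(X[:150], Y[:150, t]) for t in range(2)]
    multi = MultiTaskLasso(alpha=0.1).fit(X[:150], Y[:150])

    mse_single = np.mean([(m.predict(X[150:]) - Y[150:, t]) ** 2
                          for t, m in enumerate(single)])
    mse_multi = np.mean((multi.predict(X[150:]) - Y[150:]) ** 2)
    print(f"per-trait Lasso MSE: {mse_single:.3f}  multitask MSE: {mse_multi:.3f}")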
Community Coordinated Modeling Center Support of Science Needs for Integrated Data Environment
NASA Technical Reports Server (NTRS)
Kuznetsova, M. M.; Hesse, M.; Rastatter, L.; Maddox, M.
2007-01-01
Space science models are an essential component of an integrated data environment. They are indispensable tools for facilitating effective use of a wide variety of distributed scientific sources and for placing multi-point local measurements into a global context. The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. The majority of models residing at the CCMC are comprehensive, computationally intensive, physics-based models. To allow the models to be driven by data relevant to particular events, the CCMC developed an online data file generation tool that automatically downloads data from data providers and transforms them into the required format. The CCMC provides a tailored web-based visualization interface for the model output, as well as the capability to download simulation output in a portable standard format with comprehensive metadata, together with a user-friendly model output analysis library of routines that can be called from any language that supports calling C. The CCMC is developing data interpolation tools that enable model output to be presented in the same format as observations. The CCMC invites community comments and suggestions to better address science needs for the integrated data environment.
A User-Friendly Model for Spray Drying to Aid Pharmaceutical Product Development
Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L. J.; Frijlink, Henderik W.
2013-01-01
The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to produce glassy sugars by spray drying, an often used excipient in formulations of biopharmaceuticals. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach. PMID:24040240
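A minimal sketch of the kind of energy balance such a spreadsheet model rests on, assuming adiabatic operation (the published model also handles heat loss, humidity, and solids content); constants and flows below are illustrative, not Büchi B-290 specifications.

    # Minimal adiabatic energy balance for a lab-scale spray dryer: the heat
    # released by cooling the drying air evaporates the water in the feed.
    CP_AIR = 1.006e3   # J/(kg K), specific heat of dry air
    DH_VAP = 2.26e6    # J/kg, latent heat of water evaporation

    def outlet_temperature(t_in_c, air_flow_kg_s, feed_flow_kg_s,
                           solids_frac=0.0):
        """Predicted outlet temperature (deg C) from the energy balance."""
        water_flow = feed_flow_kg_s * (1.0 - solids_frac)
        dt = water_flow * DH_VAP / (air_flow_kg_s * CP_AIR)
        return t_in_c - dt

    print(outlet_temperature(t_in_c=150.0, air_flow_kg_s=0.01,
                             feed_flow_kg_s=0.0001))   # ~127 deg C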
NASA Astrophysics Data System (ADS)
Hu, Xiaoxiang; Wu, Ligang; Hu, Changhua; Wang, Zhaoqiang; Gao, Huijun
2014-08-01
By utilising the Takagi-Sugeno (T-S) fuzzy set approach, this paper addresses robust H∞ dynamic output feedback control for the non-linear longitudinal model of flexible air-breathing hypersonic vehicles (FAHVs). The flight control of FAHVs is highly challenging due to their unique dynamic characteristics, the intricate couplings between the engine and flight dynamics, and external disturbance. Because of the dynamics' enormous complexity, currently only the longitudinal dynamics models of FAHVs have been used for controller design. In this work, the T-S fuzzy modelling technique is utilised to approximate the non-linear dynamics of FAHVs, and a fuzzy model is developed for the output tracking problem of FAHVs. The fuzzy model contains parameter uncertainties and disturbance, which allows it to approximate the non-linear dynamics of FAHVs more exactly. The flexible modes of FAHVs are difficult to measure because of the complex dynamics and the strong couplings, thus a full-order dynamic output feedback controller is designed for the fuzzy model. A robust H∞ controller is designed for the obtained closed-loop system. By utilising the Lyapunov functional approach, sufficient solvability conditions for such controllers are established in terms of linear matrix inequalities. Finally, the effectiveness of the proposed T-S fuzzy dynamic output feedback control method is demonstrated by numerical simulations.
Software Validation via Model Animation
NASA Technical Reports Server (NTRS)
Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.
2015-01-01
This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.
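The core comparison step can be sketched directly; the functions below are hypothetical stand-ins for a PVSio-evaluated formal model and its floating-point implementation, compared over random test cases up to a tolerance.

    import random

    def trajectory_model(x0, v, t):        # formal kinematic specification
        return x0 + v * t

    def trajectory_impl(x0, v, t):         # implementation under test
        # Algebraically equal to the model, but not bit-identical in
        # floating point, so the animation check needs a tolerance.
        return (x0 / 2.0 + (v * t) / 2.0) * 2.0

    def animate_and_compare(cases, tol=1e-9):
        """Return the test cases where model and implementation disagree
        by more than the tolerance."""
        return [(x0, v, t) for (x0, v, t) in cases
                if abs(trajectory_model(x0, v, t)
                       - trajectory_impl(x0, v, t)) > tol]

    random.seed(0)
    cases = [(random.uniform(-1e4, 1e4), random.uniform(-300.0, 300.0),
              random.uniform(0.0, 3600.0)) for _ in range(10_000)]
    print("disagreements beyond tolerance:", len(animate_and_compare(cases)))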
Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo
2017-09-01
To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema on CT imaging. The model is intended to generate a set of digital phantoms of low-attenuation areas (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (relative area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes) between the simulated LAA images and those computed directly on the model output (considered as the reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated compared to those calculated on the model output, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for the assessment of the accuracy of indexes for the radiologic quantitation of emphysema.
A quantum causal discovery algorithm
NASA Astrophysics Data System (ADS)
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Improving the Taiwan Military’s Disaster Relief Response to Typhoons
2015-06-01
circulation, are mostly westbound. When they reach the vicinity of Taiwan or the Philippines, which are always at the edge of the Pacific subtropical high... files from the POM base case model, one set for each design point. To automate the process of running all the GAMS files, a Windows batch file (BAT)... is used to call on GAMS to solve each version of the model. The BAT file creates a new directory for each run to hold output, and one of the outputs
Directional Slack-Based Measure for the Inverse Data Envelopment Analysis
Abu Bakar, Mohd Rizam; Lee, Lai Soon; Jaafar, Azmi B.; Heydar, Maryam
2014-01-01
This research introduces a novel technique based on the directional slack-based measure for inverse data envelopment analysis (DEA). The inverse directional slack-based measure model is formulated within a new production possibility set in which the output (input) quantities of an efficient decision-making unit (DMU) are modified: the efficient DMU is omitted from the present production possibility set and substituted by a copy of itself whose input and output quantities have been modified. The efficiency scores of all DMUs are retained in this approach; the efficiency score can also improve. The proposed approach is investigated with reference to a resource allocation problem, in which any increases (declines) of certain outputs associated with the efficient decision-making unit can be considered simultaneously. The significance of the presented model is illustrated through numerical examples. PMID:24883350
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
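A minimal sketch of the lookup idea, assuming a toy archive of precomputed scenarios: each offshore observation is matched to its nearest scenario and the corresponding precomputed inshore value is emitted (the published method additionally derives confidence limits from scenario variability).

    import numpy as np

    # Archive of precomputed nearshore runs keyed by offshore conditions.
    # Columns: offshore height (m), period (s), direction (deg); values
    # are illustrative, not the Gulf of Mexico model archive.
    scenario_params = np.array([[1.0,  6.0,  90.0], [2.0,  8.0,  90.0],
                                [3.0, 10.0, 135.0], [4.0, 12.0, 180.0]])
    scenario_inshore_hs = np.array([0.7, 1.4, 2.0, 2.6])  # inshore Hs (m)

    def inshore_series(offshore_series, scale=np.array([1.0, 1.0, 0.02])):
        """Build an inshore time series by nearest-scenario lookup; the
        scale vector balances the units of the three parameters."""
        out = []
        for obs in offshore_series:
            d = np.linalg.norm((scenario_params - obs) * scale, axis=1)
            out.append(scenario_inshore_hs[np.argmin(d)])
        return np.array(out)

    offshore = np.array([[1.2, 6.5, 95.0], [3.8, 11.5, 170.0],
                         [2.2, 8.2, 100.0]])
    print(inshore_series(offshore))    # [0.7, 2.6, 1.4]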
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Bader, Jon B.
2009-01-01
Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.
Analysis of Sting Balance Calibration Data Using Optimized Regression Models
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Bader, Jon B.
2010-01-01
Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
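The two recommended regression models can be reproduced in a few lines of least-squares fitting; the gage responses below are synthetic, generated to follow the model forms the search algorithm identified (intercept plus normal force for the difference; intercept, pitching moment, and its square for the sum).

    import numpy as np

    rng = np.random.default_rng(2)
    N  = rng.uniform(-100.0, 100.0, 200)   # applied normal force
    PM = rng.uniform(-50.0, 50.0, 200)     # applied pitching moment

    # Synthetic gage responses consistent with the recommended models.
    diff = 0.5 + 0.03 * N + 0.001 * rng.standard_normal(200)
    summ = 1.2 + 0.02 * PM + 4e-5 * PM**2 + 0.001 * rng.standard_normal(200)

    # Least-squares fits of the two optimized regression models.
    X_diff = np.column_stack([np.ones_like(N), N])
    X_sum  = np.column_stack([np.ones_like(PM), PM, PM**2])
    c_diff, *_ = np.linalg.lstsq(X_diff, diff, rcond=None)
    c_sum,  *_ = np.linalg.lstsq(X_sum,  summ, rcond=None)
    print(c_diff)   # ~[0.5, 0.03]
    print(c_sum)    # ~[1.2, 0.02, 4e-5]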
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing the variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations of model outputs from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and a more objective procedure of model calibration should be included in further analyses.
Johnson, Earl E
2017-11-01
To determine safe output sound pressure levels (SPL) for sound amplification devices to preserve hearing sensitivity after usage. A mathematical model consisting of the Modified Power Law (MPL) (Humes & Jesteadt, 1991) combined with equations for predicting temporary threshold shift (TTS) and subsequent permanent threshold shift (PTS) (Macrae, 1994b) was used to determine safe output SPL. The study involves no new human subject measurements of loudness tolerance or threshold shifts. PTS was determined by the MPL model for 234 audiograms and the SPL output recommended by four different validated prescription recommendations for hearing aids. PTS can, on rare occasion, occur as a result of SPL delivered by hearing aids at modern day prescription recommendations. The trading relationship of safe output SPL, decibel hearing level (dB HL) threshold, and PTS was captured with algebraic expressions. Better hearing thresholds lowered the safe output SPL and higher thresholds raised the safe output SPL. Safe output SPL can consider the magnitude of unaided hearing loss. For devices not set to prescriptive levels, limiting the output SPL below the safe levels identified should protect against threshold worsening as a result of long-term usage.
Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J
2011-09-01
When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Cressie, N.; Teixeira, J.
2010-12-01
Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments to allow quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate posterior probabilities that its members best represent the physical system each seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
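A minimal sketch of the proposed paradigm, assuming a normal approximation to each model's sampling distribution of the chosen summary statistic and a uniform prior over models; all numbers are synthetic.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)

    # Summary statistic (e.g., a regional-mean trend) computed from many
    # realizations of each model's output; values are synthetic.
    model_stats = {
        "model_A": rng.normal(0.20, 0.05, 1000),
        "model_B": rng.normal(0.35, 0.08, 1000),
    }
    observed_stat = 0.24   # the same statistic computed from satellite data

    # Likelihood of the observed statistic under each model's distribution,
    # then Bayes' rule with a uniform prior over models.
    likelihoods = {m: norm.pdf(observed_stat, s.mean(), s.std(ddof=1))
                   for m, s in model_stats.items()}
    total = sum(likelihoods.values())
    posterior = {m: l / total for m, l in likelihoods.items()}
    print(posterior)       # model_A favored for this statistic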
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Cohen, R. J.; Mark, R. G.
2002-01-01
Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
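The graphical intersection itself is easy to sketch numerically; the curve shapes and parameter values below are illustrative, not those of the paper's pulsatile computational model.

    import numpy as np
    from scipy.optimize import brentq

    # Saturating Frank-Starling cardiac output curve and a linear venous
    # return curve with mean systemic pressure pms and venous resistance rvr.
    def cardiac_output(rap, co_max=10.0, k=2.0):      # L/min vs RAP in mmHg
        return co_max * np.exp(rap / k) / (np.exp(rap / k) + 1.0)

    def venous_return(rap, pms=7.0, rvr=1.4):         # L/min vs RAP in mmHg
        return (pms - rap) / rvr

    # Graphical analysis: the operating point is where the curves intersect.
    rap_star = brentq(lambda r: cardiac_output(r) - venous_return(r),
                      -2.0, 7.0)
    print(f"RAP = {rap_star:.2f} mmHg, CO = {cardiac_output(rap_star):.2f} L/min")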
A network-based analysis of CMIP5 "historical" experiments
NASA Astrophysics Data System (ADS)
Bracco, A.; Foudalis, I.; Dovrolis, C.
2012-12-01
In computer science, "complex network analysis" refers to a set of metrics, modeling tools and algorithms commonly used in the study of complex nonlinear dynamical systems. Its main premise is that the underlying topology or network structure of a system has a strong impact on its dynamics and evolution. By allowing investigation of local and non-local statistical interactions, network analysis provides a powerful, but only marginally explored, framework to validate climate models and investigate teleconnections, assessing their strength, range, and impacts on the climate system. In this work we propose a new, fast, robust and scalable methodology to examine, quantify, and visualize climate sensitivity, while constraining general circulation model (GCM) outputs with observations. The goal of our novel approach is to uncover relations in the climate system that are not (or not fully) captured by more traditional methodologies used in climate science and often adopted from nonlinear dynamical systems analysis, and to explain known climate phenomena in terms of the network structure or its metrics. Our methodology is based on a solid theoretical framework and employs mathematical and statistical tools that have been exploited only tentatively in climate research so far. Suitably adapted to the climate problem, these tools can assist in visualizing the trade-offs in representing global links and teleconnections among different data sets. Here we present the methodology and compare network properties for different reanalysis data sets and a suite of CMIP5 coupled GCM outputs. With an extensive model intercomparison in terms of the climate network that each model leads to, we quantify how each model reproduces major teleconnections, rank model performances, and identify common or specific errors in comparing model outputs and observations.
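A minimal sketch of constructing such a climate network, assuming correlation above a threshold defines an edge (networkx is assumed available); the data, threshold, and the implanted teleconnection are synthetic.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(4)
    n_nodes, n_time = 30, 240                  # grid points x monthly steps
    data = rng.standard_normal((n_nodes, n_time))
    data[10] = 0.7 * data[5] + 0.3 * data[10]  # implant one "teleconnection"

    # Climate network: nodes are grid points, edges join pairs whose time
    # series correlate above a threshold (threshold is illustrative).
    corr = np.corrcoef(data)
    G = nx.Graph()
    G.add_nodes_from(range(n_nodes))
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if abs(corr[i, j]) > 0.5:
                G.add_edge(i, j, weight=corr[i, j])

    print(G.number_of_edges(), "edges; degree of node 5:", G.degree[5])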
NASA Astrophysics Data System (ADS)
Taissariyeva, K.; Issembergenov, N.; Dzhobalaeva, G.; Usembaeva, S.
2016-09-01
This paper considers a 6 kW multilevel transistor inverter, supplied by 12 storage batteries, for converting solar battery energy into electric power. At the output of the multilevel transistor inverter it is possible to obtain a voltage close to a sinusoidal form. The main objective of this inverter is the conversion of solar energy into electric power at industrial frequency. An analysis of the harmonic content of the obtained output voltage curves has been carried out. The paper sets forth the developed scheme of the multilevel transistor inverter (DC-to-AC converter), which allows obtaining an output voltage close to sinusoidal form as well as regulating the output voltage level. Results of computer modeling and experimental studies are presented.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
ERIC Educational Resources Information Center
Bres, E.; And Others
Part II of the study describes a model for associating one portion of total library costs with use; the model's output is a set of charges that can be used to recover (justify) these usage-based costs. Algebraic formulations constructed and combined to form the model are a set of service charges that may vary by user type and recover out-of-pocket…
Integrated Model Reduction and Control of Aircraft with Flexible Wings
NASA Technical Reports Server (NTRS)
Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.
2013-01-01
This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite dimensional state-space form. Given a set of desired output covariances, a model reduction process is performed by using the weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, which is designed based on the reduced-order model, is developed by utilizing the output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated on the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.
Modeling health impact of global health programs implemented by Population Services International
2013-01-01
Background Global health implementing organizations benefit most from health impact estimation models that isolate the individual effects of distributed products and services - a feature not typically found in intervention impact models, but which allow comparisons across interventions and intervention settings. Population Services International (PSI), a social marketing organization, has developed a set of impact models covering seven health program areas, which translate product/service distribution data into impact estimates. Each model's primary output is the number of disability-adjusted life-years (DALYs) averted by an intervention within a specific country and population context. This paper aims to describe the structure and inputs for two types of DALYs averted models, considering the benefits and limitations of this methodology. Methods PSI employs two modeling approaches for estimating health impact: a macro approach for most interventions and a micro approach for HIV, tuberculosis (TB), and behavior change communication (BCC) interventions. Within each intervention country context, the macro approach determines the coverage that one product/service unit provides a population in person-years, whereas the micro approach estimates an individual's risk of infection with and without the product/service unit. The models use these estimations to generate per unit DALYs averted coefficients for each intervention. When multiplied by program output data, these coefficients predict the total number of DALYs averted by an intervention in a country. Results Model outputs are presented by country for two examples: Water Chlorination DALYs Averted Model, a macro model, and the HIV Condom DALYs Averted Model for heterosexual transmission, a micro model. Health impact estimates measured in DALYs averted for PSI interventions on a global level are also presented. Conclusions The DALYs averted models offer implementing organizations practical measurement solutions for understanding an intervention's contribution to improving health. These models calculate health impact estimates that reflect the scale and diversity of program operations and intervention settings, and that enable comparisons across health areas and countries. Challenges remain in accounting for intervention synergies, attributing impact to a single organization, and sourcing and updating model inputs. Nevertheless, these models demonstrate how DALYs averted can be viably used by the global health community as a metric for predicting intervention impact using standard program output data. PMID:23902668
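The macro approach reduces to a per-unit coefficient multiplied by program output data; the sketch below shows that arithmetic with placeholder inputs (none of the values are actual PSI model parameters).

    # Minimal sketch of the macro approach's arithmetic: one distributed
    # unit provides some person-years of coverage, each of which averts a
    # fraction of a baseline disease burden. All values are placeholders.
    def dalys_averted_per_unit(person_years_per_unit, dalys_per_person_year,
                               effectiveness, usage_rate):
        """Per-unit DALYs-averted coefficient for one country/intervention."""
        return (person_years_per_unit * dalys_per_person_year
                * effectiveness * usage_rate)

    coef = dalys_averted_per_unit(
        person_years_per_unit=0.05,   # coverage one unit provides
        dalys_per_person_year=0.02,   # burden per unprotected person-year
        effectiveness=0.3,            # fraction of burden removed when used
        usage_rate=0.7)               # fraction of units actually used

    units_distributed = 1_000_000     # program output data for one year
    print(f"total DALYs averted: {coef * units_distributed:,.0f}")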
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
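A minimal sketch of the idea, assuming a random filter bank in place of the steerable filter space used in the paper: ICA finds linear combinations of filter responses whose marginals are maximally independent, so that the product over marginals best approximates the joint density.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(5)

    # Stand-in for multi-scale, multi-orientation filter responses:
    # project 500 texture-like 8x8 patches onto 16 random filters.
    patches = rng.laplace(size=(500, 64))     # sparse, texture-like signals
    filters = rng.standard_normal((64, 16))
    responses = patches @ filters

    # ICA rotates the filter-output space toward independent marginals.
    ica = FastICA(n_components=8, random_state=0)
    independent_responses = ica.fit_transform(responses)
    print(independent_responses.shape)        # (500, 8)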
Interregional migration in an extended input-output model.
Madden, M; Trigg, A B
1990-01-01
"This article develops a two-region version of an extended input-output model that disaggregates consumption among employed, unemployed, and inmigrant households, and which explicitly models the influx into a region of migrants to take up a proportion of any jobs created in the regional economy. The model is empirically tested using real data for the Scotland (UK) regions of Strathclyde and Rest-of-Scotland. Sets of interregional economic, demographic, demo-economic, and econo-demographic multipliers are developed and discussed, and the effects of a range of economic and demographic impacts are modeled. The circumstances under which Hawkins-Simon conditions for non-negativity are breached are identified, and the limits of the model discussed." excerpt
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass
NASA Astrophysics Data System (ADS)
Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.
2018-04-01
Systems of anaerobic digestion should be used for the processing of organic waste. Managing the process of anaerobic recycling of organic waste requires reliable prediction of biogas production. Development of a mathematical model of the organic waste digestion process allows the rate of biogas output to be determined for the two-stage anaerobic digestion process, taking the first stage into account. Verification of Konto's model, based on the studied anaerobic processing of organic waste, is implemented. The dependencies of biogas output and its rate on time are established and may be used to predict the course of anaerobic processing of organic waste.
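A minimal sketch of a two-stage first-order kinetic model of cumulative biogas output, with a hydrolysis/acidogenesis stage feeding the methanogenic stage in series; the model form, rate constants, and yield are illustrative assumptions, not the calibrated values of the verified model.

    import numpy as np

    B0 = 450.0   # ultimate biogas yield, m^3 per tonne volatile solids
    k1 = 0.25    # first-stage rate constant, 1/day
    k2 = 0.10    # methanogenic-stage rate constant, 1/day

    def biogas(t):
        """Cumulative output of two first-order stages in series."""
        return B0 * (1.0 - (k1 * np.exp(-k2 * t)
                            - k2 * np.exp(-k1 * t)) / (k1 - k2))

    t = np.arange(0.0, 41.0, 10.0)
    rate = np.gradient(biogas(t), t)   # biogas production rate, m^3/day
    print(np.round(biogas(t), 1))
    print(np.round(rate, 2))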
User's guide to the western spruce budworm modeling system
Nicholas L. Crookston; J. J. Colbert; Paul W. Thomas; Katharine A. Sheehan; William P. Kemp
1990-01-01
The Budworm Modeling System is a set of four computer programs: The Budworm Dynamics Model, the Prognosis-Budworm Dynamics Model, the Prognosis-Budworm Damage Model, and the Parallel Processing-Budworm Dynamics Model. Input to the first three programs and the output produced are described in this guide. A guide to the fourth program will be published separately....
A model for plant lighting system selection.
Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W
2002-01-01
A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
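A minimal sketch of the additive multi-attribute utility calculation, with hypothetical lighting systems, attribute scores, and weights (the actual model derives these from expert input and performance simulations).

    # Per-attribute utilities (0-1) combined with importance weights;
    # names, scores, and weights below are illustrative only.
    weights = {"energy_cost": 0.4, "light_uniformity": 0.3,
               "capital_cost": 0.2, "maintenance": 0.1}

    systems = {
        "HPS":         {"energy_cost": 0.55, "light_uniformity": 0.70,
                        "capital_cost": 0.80, "maintenance": 0.60},
        "fluorescent": {"energy_cost": 0.40, "light_uniformity": 0.85,
                        "capital_cost": 0.70, "maintenance": 0.50},
        "LED":         {"energy_cost": 0.90, "light_uniformity": 0.80,
                        "capital_cost": 0.30, "maintenance": 0.90},
    }

    def utility(scores):               # additive multi-attribute utility
        return sum(weights[a] * scores[a] for a in weights)

    best = max(systems, key=lambda s: utility(systems[s]))
    for name, scores in systems.items():
        print(f"{name}: {utility(scores):.3f}")
    print("selected:", best)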
Optimal inverse functions created via population-based optimization.
Jennings, Alan L; Ordóñez, Raúl
2014-06-01
Finding optimal inputs for a multiple-input, single-output system is taxing for a system operator. Population-based optimization is used to create sets of functions that produce a locally optimal input based on a desired output. An operator or higher level planner could use one of the functions in real time. For the optimization, each agent in the population uses the cost and output gradients to take steps lowering the cost while maintaining their current output. When an agent reaches an optimal input for its current output, additional agents are generated in the output gradient directions. The new agents then settle to the local optima for the new output values. The set of associated optimal points forms an inverse function, via spline interpolation, from a desired output to an optimal input. In this manner, multiple locally optimal functions can be created. These functions are naturally clustered in input and output spaces allowing for a continuous inverse function. The operator selects the best cluster over the anticipated range of desired outputs and adjusts the set point (desired output) while maintaining optimality. This reduces the demand from controlling multiple inputs, to controlling a single set point with no loss in performance. Results are demonstrated on a sample set of functions and on a robot control problem.
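A minimal sketch of building one such inverse function, assuming a toy two-input plant with a quadratic cost: locally optimal inputs are computed along a grid of output set points and splined into a function from desired output to optimal input.

    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import minimize

    # Plant output is x0 + x1; the cost to minimize is x0^2 + 2*x1^2.
    def optimal_input(y):
        """Cost-optimal input that produces output y (an agent's settled
        position for that output level)."""
        res = minimize(lambda x: x[0] ** 2 + 2.0 * x[1] ** 2,
                       x0=[y / 2, y / 2],
                       constraints=[{"type": "eq",
                                     "fun": lambda x: x[0] + x[1] - y}])
        return res.x

    levels = np.linspace(0.0, 10.0, 11)          # sampled output set points
    inputs = np.array([optimal_input(y) for y in levels])
    inverse = CubicSpline(levels, inputs)        # desired output -> input

    x = inverse(4.3)                             # real-time operator lookup
    print(x, "->", x.sum())                      # output matches set point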
User's Guide for Monthly Vector Wind Profile Model
NASA Technical Reports Server (NTRS)
Adelfang, S. I.
1999-01-01
The background, theoretical concepts, and methodology for construction of vector wind profiles based on a statistical model are presented. The derived monthly vector wind profiles are to be applied by the launch vehicle design community for establishing realistic estimates of critical vehicle design parameter dispersions related to wind profile dispersions. During initial studies a number of months are used to establish the model profiles that produce the largest monthly dispersions of ascent vehicle aerodynamic load indicators. The largest monthly dispersions for wind, which occur during the winter high-wind months, are used for establishing the design reference dispersions for the aerodynamic load indicators. This document includes a description of the computational process for the vector wind model including specification of input data, parameter settings, and output data formats. Sample output data listings are provided to aid the user in the verification of test output.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both types of models represent a failure-space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection, including loops, in the graph. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single-point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top-down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860), available from COSMIC, are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc.
DEC, DECstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
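The recursive cut-set expansion the abstract describes, combining the cut sets of each child node at a gate and pruning non-minimal sets, can be illustrated with a short sketch. This is a generic fault-tree example with an invented tree, not the CUTSETS code itself.

```python
# A minimal sketch of top-down cut-set expansion, assuming a fault tree given as a
# dict: gate -> ("AND"|"OR", [children]); names not in the dict are basic events.
tree = {
    "TOP": ("OR",  ["G1", "E3"]),
    "G1":  ("AND", ["E1", "G2"]),
    "G2":  ("OR",  ["E2", "E3"]),
}

def cut_sets(node):
    """Recursively combine child cut sets: OR unions them, AND takes cross-products."""
    if node not in tree:                      # basic event
        return [{node}]
    kind, children = tree[node]
    child_sets = [cut_sets(c) for c in children]
    if kind == "OR":
        sets = [s for cs in child_sets for s in cs]
    else:                                     # AND: every combination of child cut sets
        sets = [set()]
        for cs in child_sets:
            sets = [a | b for a in sets for b in cs]
    # keep only minimal sets: drop any set that strictly contains another
    return [s for s in sets if not any(t < s for t in sets)]

print(cut_sets("TOP"))   # e.g. [{'E1', 'E2'}, {'E3'}]
```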
Space-Time Fusion Under Error in Computer Model Output: An Application to Modeling Air Quality
In the last two decades a considerable amount of research effort has been devoted to modeling air quality with public health objectives. These objectives include regulatory activities such as setting standards along with assessing the relationship between exposure to air pollutan...
Coelho, Antonio Augusto Rodrigues
2016-01-01
This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO system and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723
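A one-dimensional sketch may help make the kernel idea concrete: overlapping membership functions weight the rule consequents, so the choice of membership shape fixes the interpolation behavior. The knots, consequent values and triangular kernel below are illustrative assumptions, not FLHI itself.

```python
import numpy as np

# Minimal 1-D illustration of membership functions acting as interpolation kernels
# (FLHI generalizes this to N-dimensional hypercubes; all values here are invented).
knots = np.array([0.0, 1.0, 2.0, 3.0])     # rule centers on the unitized axis
values = np.array([0.0, 0.8, 0.9, 0.3])    # Takagi-Sugeno consequents at each knot

def triangular(x, c, width=1.0):
    """Triangular membership centered at c; overlapping neighbors sum to 1."""
    return np.maximum(0.0, 1.0 - np.abs(x - c) / width)

def flhi_1d(x):
    # Weighted average of consequents; with triangular kernels this reduces to
    # piecewise-linear interpolation, while other kernels give cubic, spline, etc.
    w = triangular(x, knots)
    return np.sum(w * values) / np.sum(w)

print(flhi_1d(1.5))   # halfway between 0.8 and 0.9 -> 0.85
```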
A phenomenological model of muscle fatigue and the power-endurance relationship.
James, A; Green, S
2012-11-01
The relationship between power output and the time that it can be sustained during exercise (i.e., endurance) at high intensities is curvilinear. Although fatigue is implicit in this relationship, there is little evidence pertaining to it. To address this, we developed a phenomenological model that predicts the temporal response of muscle power during submaximal and maximal exercise and which was based on the type, contractile properties (e.g., fatiguability), and recruitment of motor units (MUs) during exercise. The model was first used to predict power outputs during all-out exercise when fatigue is clearly manifest and for several distributions of MU type. The model was then used to predict times that different submaximal power outputs could be sustained for several MU distributions, from which several power-endurance curves were obtained. The model was simultaneously fitted to two sets of human data pertaining to all-out exercise (power-time profile) and submaximal exercise (power-endurance relationship), yielding a high goodness of fit (R^2 = 0.96-0.97). This suggested that this simple model provides an accurate description of human power output during submaximal and maximal exercise and that fatigue-related processes inherent in it account for the curvilinearity of the power-endurance relationship.
Quantum algorithm for linear regression
NASA Astrophysics Data System (ADS)
Wang, Guoming
2017-07-01
We present a quantum algorithm for fitting a linear regression model to a given data set using the least-squares approach. Unlike previous algorithms, which yield a quantum state encoding the optimal parameters, our algorithm outputs these numbers in classical form. So by running it once, one completely determines the fitted model and can then use it to make predictions on new data at little cost. Moreover, our algorithm works in the standard oracle model, and can handle data sets with nonsparse design matrices. It runs in time poly(log₂(N), d, κ, 1/ε), where N is the size of the data set, d is the number of adjustable parameters, κ is the condition number of the design matrix, and ε is the desired precision in the output. We also show that the polynomial dependence on d and κ is necessary. Thus, our algorithm cannot be significantly improved. Furthermore, we also give a quantum algorithm that estimates the quality of the least-squares fit (without computing its parameters explicitly). This algorithm runs faster than the one for finding the fit, and can be used to check whether the given data set qualifies for linear regression in the first place.
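For contrast with the quantum routine, the same fit in the classical setting is a plain least-squares solve; the sketch below uses synthetic data and also reads off the condition number κ that enters the quantum running time.

```python
import numpy as np

# Classical baseline for the same task: a least-squares fit returning the parameters
# in classical form, as the quantum algorithm above also does (data are synthetic).
rng = np.random.default_rng(0)
N, d = 1000, 3                      # data set size and number of adjustable parameters
X = rng.normal(size=(N, d))         # design matrix (need not be sparse)
beta_true = np.array([1.5, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=N)

beta, residuals, rank, sing = np.linalg.lstsq(X, y, rcond=None)
kappa = sing[0] / sing[-1]          # condition number of the design matrix (the kappa above)
print(beta, kappa)
```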
Hostetler, S.W.; Giorgi, F.
1993-01-01
In this paper we investigate the feasibility of coupling regional climate models (RCMs) with landscape-scale hydrologic models (LSHMs) for studies of the effects of climate on hydrologic systems. The RCM used is the National Center for Atmospheric Research/Pennsylvania State University mesoscale model (MM4). Output from two year-round simulations (1983 and 1988) over the western United States is used to drive a lake model for Pyramid Lake in Nevada and a streamflow model for Steamboat Creek in Oregon. Comparisons with observed data indicate that MM4 is able to produce meteorologic data sets that can be used to drive hydrologic models. Results from the lake model simulations indicate that the use of MM4 output produces reasonably good predictions of surface temperature and evaporation. Results from the streamflow simulations indicate that the use of MM4 output results in good simulations of the seasonal cycle of streamflow, but deficiencies in simulated wintertime precipitation resulted in underestimates of streamflow and soil moisture. Further work with climate (multiyear) simulations is necessary to achieve a complete analysis, but the results from this study indicate that coupling of LSHMs and RCMs may be a useful approach for evaluating the effects of climate change on hydrologic systems.
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Perrone, J. A.
1997-01-01
We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2-D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely-sampled example neuron in the Duffy-Wurtz study are well fit by a 2-D Gaussian (sigma approx. 35 deg, r approx. 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparent sigmoidal tuning using a restricted range of stimuli (+/-40 deg). 2) Spiral tuning and invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for approx. 10 deg shifts in stimulus placement. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.
Optimization of diode-pumped doubly QML laser with neodymium-doped vanadate crystals at 1.34 μm
NASA Astrophysics Data System (ADS)
Zhang, Gang; Jiao, Zhiyong
2018-05-01
We present a theoretical model for a diode-pumped, 1.34 μm V3+:YAG laser that is equipped with an acousto-optic modulator. The model includes the loss introduced by the acousto-optic modulator combined with the physical properties of the laser resonator, the neodymium-doped vanadate crystals and the output coupler. The parameters are adjusted within a reasonable range to optimize the pulse output characteristics. A typical Q-switched and mode-locked Nd:Lu0.15Y0.85VO4 laser at 1.34 μm with an acousto-optic modulator and V3+:YAG is set up, and the experimental output characteristics are consistent with the theoretical simulation results.
An Intelligent Decision Support System for Workforce Forecast
2011-01-01
An auto-regressive, integrated, moving-average (ARIMA) model was used to forecast the demand for construction skills in Hong Kong. The report surveys forecasting techniques including decision trees, ARIMA, rule-based forecasting, segmentation forecasting, regression analysis, simulation modeling, input-output models, LP and NLP, and Markovian models, noting that rule-based methods suit cases where results are needed as a set of easily interpretable rules.
Western Wind Data Set | Grid Modernization | NREL
replicates the stochastic nature of wind power plant output. NREL modeled hysteresis around wind turbine cut-out: where wind speeds are often near cut-out (~25 m/s), raw SCORE output does not replicate turbine hysteresis (e.g., of the Vestas V90); the hysteresis-corrected SCORE output is an attempt to account for wind turbine hysteresis at cut-out.
Díaz, José; Acosta, Jesús; González, Rafael; Cota, Juan; Sifuentes, Ernesto; Nebot, Àngela
2018-02-01
The control of the central nervous system (CNS) over the cardiovascular system (CS) has been modeled using different techniques, such as fuzzy inductive reasoning, genetic fuzzy systems, neural networks, and nonlinear autoregressive techniques; the results obtained so far have been significant, but not solid enough to describe the control response of the CNS over the CS. In this research, support vector machines (SVMs) are used to predict the response of a branch of the CNS, specifically, the one that controls an important part of the cardiovascular system. To do this, five models are developed to emulate the output response of five controllers for the same input signal, the carotid sinus blood pressure (CSBP). These controllers regulate parameters such as heart rate, myocardial contractility, peripheral and coronary resistance, and venous tone. The models are trained using a known set of input-output responses for each controller; there is also a set of six input-output signals for testing each proposed model. The input signals are processed using an all-pass filter, and the accuracy of the control models is evaluated using the percentage value of the normalized mean square error (MSE). Experimental results reveal that SVM models achieve a better estimation of the dynamical behavior of the CNS control than other modeling systems. The best case is the peripheral resistance controller, with an MSE of 1.20e-4%, while the worst case is the heart rate controller, with an MSE of 1.80e-3%. These novel models show great reliability in fitting the output response of the CNS and can be used as an input to hemodynamic system models in order to predict the behavior of the heart and blood vessels in response to blood pressure variations. Copyright © 2017 Elsevier Ltd. All rights reserved.
A computer program for simulating geohydrologic systems in three dimensions
Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.
1980-01-01
This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two-dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of simulation, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volume of printout for modelers, which may be critical when working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records. Appendixes provide instructions to compile the program, definitions and cross-references for program variables, a summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)
Fuzzy rule based estimation of agricultural diffuse pollution concentration in streams.
Singh, Raj Mohan
2008-04-01
Outflow from agricultural fields carries diffuse pollutants such as nutrients, pesticides and herbicides, and transports them into nearby streams. This is a matter of serious concern for water managers and environmental researchers. The application of chemicals to agricultural fields, and the transport of these chemicals into streams, are uncertain processes, which complicates reliable stream quality prediction. The chemical characteristics of the applied chemical and the percentage of area under chemical application are some of the main inputs that determine pollutant concentration in streams, and each of these inputs and outputs may contain measurement errors. A fuzzy rule-based model built on fuzzy sets is well suited to addressing uncertainties in the inputs by incorporating overlapping membership functions for each input, even in situations of limited data availability. In this study, the ability of fuzzy sets to address uncertainty in the input-output relationship is utilized to estimate the concentration of a herbicide, atrazine, in a stream. Data from the White River basin, part of the Mississippi river system, are used to develop the fuzzy rule-based models. The performance of the developed methodology is found to be encouraging.
Mentoring for junior medical faculty: Existing models and suggestions for low-resource settings.
Menon, Vikas; Muraleedharan, Aparna; Bhat, Ballambhattu Vishnu
2016-02-01
Globally, there is increasing recognition of the positive benefits and impact of mentoring on faculty retention rates, career satisfaction and scholarly output. However, emphasis on the research and practice of mentoring is comparatively meagre in low- and middle-income countries. In this commentary, we critically examine two existing models of mentorship for medical faculty and offer a few suggestions for an integrated hybrid model that can be adapted for use in low-resource settings. Copyright © 2016 Elsevier B.V. All rights reserved.
Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran
2016-10-18
A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to the current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set by designating the one or more sets of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
Ławryńczuk, Maciej
2017-03-01
This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
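The core loop, linearising the nonlinear state-space model at the current operating point and solving a small quadratic problem for the control move, can be sketched generically. The toy dynamics, one-step horizon and absence of constraints below are simplifying assumptions, not the boiler-turbine model of the paper.

```python
import numpy as np

# A minimal sketch of on-line successive linearisation, assuming a generic nonlinear
# state-update model f(x, u); the boiler-turbine dynamics are not reproduced here.
def f(x, u):
    return np.array([x[0] + 0.1 * u[0] - 0.02 * x[0] ** 2,
                     x[1] + 0.1 * u[1] + 0.01 * x[0]])

def linearise(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians A = df/dx, B = df/du at the current operating point."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    fx = f(x0, u0)
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - fx) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - fx) / eps
    return A, B, fx

# One-step-ahead MPC with the linearised model: minimise ||x1 - x_sp||^2 + r||du||^2,
# which for the prediction x1 = fx + B du is a least-squares problem in du.
x0, u0 = np.array([1.0, 0.5]), np.array([0.2, 0.1])
x_sp = np.array([1.2, 0.6])                     # set point on the two output variables
A, B, fx = linearise(f, x0, u0)
r = 0.01
H = np.vstack([B, np.sqrt(r) * np.eye(2)])
g = np.concatenate([x_sp - fx, np.zeros(2)])
du = np.linalg.lstsq(H, g, rcond=None)[0]       # the quadratic problem, unconstrained here
print(u0 + du)
```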
Ragonnet, Romain; Trauer, James M; Denholm, Justin T; Marais, Ben J; McBryde, Emma S
2017-05-30
Multidrug-resistant and rifampicin-resistant tuberculosis (MDR/RR-TB) represent an important challenge for global tuberculosis (TB) control. The high rates of MDR/RR-TB observed among re-treatment cases can arise from diverse pathways: de novo amplification during initial treatment, inappropriate treatment of undiagnosed MDR/RR-TB, relapse despite appropriate treatment, or reinfection with MDR/RR-TB. Mathematical modelling allows quantification of the contribution made by these pathways in different settings. This information provides valuable insights for TB policy-makers, allowing better contextualised solutions. However, mathematical modelling outputs need to consider local data and be easily accessible to decision makers in order to improve their usefulness. We present a user-friendly web-based modelling interface, which can be used by people without technical knowledge. Users can input their own parameter values and produce estimates for their specific setting. This innovative tool provides easy access to mathematical modelling outputs that are highly relevant to national TB control programs. In future, the same approach could be applied to a variety of modelling applications, enhancing local decision making.
NASA Astrophysics Data System (ADS)
Neri, Mattia; Toth, Elena
2017-04-01
The study presents the implementation of different regionalisation approaches for the transfer of model parameters from similar and/or neighbouring gauged basins to an ungauged catchment; in particular, it uses a semi-distributed, continuously-simulating conceptual rainfall-runoff model for simulating daily streamflows. The case study refers to a set of Apennine catchments (in the Emilia-Romagna region, Italy) that, given their spatial proximity, are assumed to belong to the same hydrologically homogeneous region and are used, alternately, as donor and regionalised basins. The model is a semi-distributed version of the HBV model (TUWien model) in which the catchment is divided into zones of different altitude that contribute separately to the total outlet flow. The model includes a snow module, whose application in the Apennine area has so far been very limited, even though snow accumulation and melting phenomena do play an important role in the study basins. Two methods, both widely applied in the recent literature, are used for regionalising the model: i) "parameters averaging", where each parameter is obtained as a weighted mean of the parameters obtained, through calibration, on the donor catchments; ii) "output averaging", where the model is run over the ungauged basin using the entire set of parameters of each donor basin and the simulated outputs are then averaged. In the first approach the parameters are regionalised independently from each other; in the second, the correlation among the parameters is maintained. Since the model is a semi-distributed one, where each elevation zone contributes separately, the study proposes to test also a modified version of the second approach ("output averaging"), where each zone is considered as an autonomous entity whose parameters are transposed to the ungauged sub-basin corresponding to the same elevation zone. The study also explores the choice of the weights to be used for averaging the parameters (in the "parameters averaging" approach) or for averaging the simulated streamflow (in the "output averaging" approach): in particular, weights are estimated as a function of the similarity/distance of the ungauged basin/zone to the donors, on the basis of a set of geo-morphological catchment descriptors. The predictive accuracy of the different regionalisation methods is finally assessed by jack-knife cross-validation against the observed daily runoff for all the study catchments.
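The two averaging strategies can be contrasted in a few lines; run_model below is a made-up placeholder for the semi-distributed HBV/TUWien model, and the donor parameters and weights are invented.

```python
import numpy as np

# Sketch contrasting the two regionalisation strategies; run_model stands in for
# the rainfall-runoff model and is a deliberately nonlinear placeholder.
def run_model(params, forcing):
    return params[0] * forcing ** params[1]   # placeholder streamflow response

donor_params = [np.array([0.6, 1.1]), np.array([0.8, 0.9]), np.array([0.7, 1.0])]
weights = np.array([0.5, 0.3, 0.2])   # e.g. from similarity of catchment descriptors
forcing = np.linspace(0.1, 10.0, 5)   # daily forcing series for the ungauged basin

# i) parameters averaging: weighted mean of each parameter, then a single model run
p_avg = sum(w * p for w, p in zip(weights, donor_params))
q_param_avg = run_model(p_avg, forcing)

# ii) output averaging: one run per donor parameter set (parameter correlation kept),
#     then a weighted mean of the simulated streamflows
q_output_avg = sum(w * run_model(p, forcing) for w, p in zip(weights, donor_params))

print(q_param_avg)
print(q_output_avg)   # differs from (i) because the model is nonlinear in its parameters
```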
A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA
NASA Astrophysics Data System (ADS)
Khodabakhshi, Mohammad
2009-08-01
This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved under the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: - minimising the population exposure during the accident; - complementing field measurements for the assessment of short and long term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which derives IRSN's operational long distance atmospheric dispersion model ldX. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: - high dimensional inputs; - correlated inputs or inputs with complex structures; - high dimensional output; - multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as horizontal diffusion coefficient or clouds thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rates observations. This original approach is of particular interest since observations could be used later to calibrate the input variables probability distributions. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on emission peaks time matching was elaborated in order to complement classical statistical scores which were dominated by deposit dose rates and almost insensitive to lower atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
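A stripped-down, one-at-a-time variant of the Morris elementary-effects idea looks like the sketch below; the toy function g stands in for the dispersion model, which is exactly what screening methods are designed to avoid running thousands of times.

```python
import numpy as np

# Bare-bones elementary-effects screening for a generic model g (not full Morris
# trajectories); all ranges are assumed to be scaled to the unit cube.
def g(x):                      # toy stand-in for the dispersion model output
    return x[0] + 2.0 * x[1] ** 2 + 0.01 * x[2]

rng = np.random.default_rng(1)
k, r, delta = 3, 20, 0.1       # number of inputs, repetitions, perturbation step

effects = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=k)   # random base point in the unit cube
    y0 = g(x)
    for i in range(k):                          # perturb one input at a time
        xp = x.copy(); xp[i] += delta
        effects[t, i] = (g(xp) - y0) / delta    # elementary effect of input i

mu_star = np.abs(effects).mean(axis=0)   # mean |EE|: overall influence -> ranking
sigma = effects.std(axis=0)              # EE spread: nonlinearity/interactions
print(mu_star, sigma)                    # x2 dominates; x3 could be discarded
```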
Reconfigurable data path processor
NASA Technical Reports Server (NTRS)
Donohoe, Gregory (Inventor)
2005-01-01
A reconfigurable data path processor comprises a plurality of independent processing elements, each advantageously comprising an identical architecture. Each processing element comprises a plurality of data processing means for generating a potential output. Each processor is also capable of through-putting an input as a potential output with little or no processing. Each processing element comprises a conditional multiplexer having a first conditional multiplexer input, a second conditional multiplexer input and a conditional multiplexer output. A first potential output value is transmitted to the first conditional multiplexer input, and a second potential output value is transmitted to the second conditional multiplexer input. The conditional multiplexer couples either the first conditional multiplexer input or the second conditional multiplexer input to the conditional multiplexer output, according to an output control command. The output control command is generated by processing a set of arithmetic status bits through a logical mask. The conditional multiplexer output is coupled to a first processing element output. A first set of arithmetic status bits is generated according to the processing of the first processable value, and a second set of arithmetic status bits may be generated from a second processing operation. An arithmetic-status-bit multiplexer selects the desired set of arithmetic status bits from among the first and second sets. The conditional multiplexer evaluates the selected arithmetic status bits according to a logical mask defining an algorithm for evaluating the arithmetic status bits.
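The output-control logic, arithmetic status bits processed through a logical mask to steer the conditional multiplexer, can be mimicked in a few lines; the bit layout and mask semantics below are assumptions for illustration, not the patented design.

```python
# A small sketch of the output-control logic described above: arithmetic status bits
# are processed through a logical mask to produce the multiplexer select command.
ZERO, NEG, CARRY, OVF = 0b0001, 0b0010, 0b0100, 0b1000   # assumed status-bit positions

def output_control(status_bits: int, mask: int) -> int:
    """Return 1 (select second input) if any masked status bit is set, else 0."""
    return 1 if (status_bits & mask) else 0

def conditional_mux(in1, in2, status_bits, mask):
    return in2 if output_control(status_bits, mask) else in1

# Example: route the second potential output whenever the result was zero or negative.
status = ZERO | CARRY
print(conditional_mux(42, -1, status, mask=ZERO | NEG))   # -> -1 (ZERO bit is set)
```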
Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1997-01-01
A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set based, geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body, eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis. Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.
NASA Astrophysics Data System (ADS)
Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.
2015-03-01
Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
Ollendorf, Daniel A; Pearson, Steven D
2014-01-01
Economic modeling has rarely been considered to be an essential component of healthcare policy-making in the USA, due to a lack of transparency in model design and assumptions, as well as political interests that equate examination of cost with unfair rationing. The Institute for Clinical and Economic Review has been involved in several efforts to bring economic modeling into public discussion of the comparative value of healthcare interventions, efforts that have evolved over time to suit the needs of multiple public forums. In this article, we review these initiatives and present a template that attempts to 'unpack' model output and present the major drivers of outcomes and cost. We conclude with a series of recommendations for effective presentation of economic models to US policy-makers.
Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).
Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S
2017-01-01
This paper describes Gaussian process regression (GPR) models presented in predictive model markup language (PMML). PMML is an extensible-markup-language (XML) -based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distribution for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and predicting a target output with uncertainty quantification. GPR is being employed to various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid employment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.
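The two new features the abstract highlights, predictive estimates with confidence bounds, are easy to demonstrate with an off-the-shelf GPR implementation; the sketch below uses scikit-learn and synthetic data in place of the manufacturing data set.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Minimal GPR example of predictive estimates with uncertainty quantification,
# using synthetic data in place of the manufacturing set described above.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(30, 1))
y = np.sin(X).ravel() + 0.1 * rng.normal(size=30)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

X_new = np.linspace(0.0, 5.0, 9).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)       # predictive mean and std
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% confidence bounds
print(np.c_[mean, lower, upper])
```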
The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV
NASA Astrophysics Data System (ADS)
Ho, Y.; Weber, J.
2017-12-01
WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point. However, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient, compared to a standard point type display, for rendering a large number of points. One remaining problem is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF compliant netCDF point data format for the community.
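A hedged sketch of the data-access side of such an experiment is shown below; the netCDF variable names are assumptions about the WRF Hydro file layout, and the thinning step is one simple way to keep a 2.7-million-point cloud responsive.

```python
from netCDF4 import Dataset
import numpy as np

# Read per-forecast-point values; variable names ("latitude", "longitude",
# "streamflow") are assumed here and may differ in actual WRF Hydro files.
with Dataset("wrf_hydro_output.nc") as nc:
    lat = np.asarray(nc.variables["latitude"][:])
    lon = np.asarray(nc.variables["longitude"][:])
    flow = np.asarray(nc.variables["streamflow"][:])

# Thin the ~2.7 million points before display, e.g. keep every 10th point,
# so a point-cloud renderer colored by streamflow stays responsive.
keep = slice(None, None, 10)
points = np.column_stack([lon[keep], lat[keep], flow[keep]])
print(points.shape)
```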
El Haimar, Amine; Santos, Joost R
2014-03-01
Influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are sectors that are related to hospitals and health-care providers. On the other hand, most of the sectors that are critically ranked in terms of economic loss are sectors with significant total production outputs in the NCR such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
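The two metrics are simple to compute side by side, which also shows why they rank sectors differently; the sector outputs below are invented, not the NCR data.

```python
import numpy as np

# The two impact metrics described above, computed for illustrative sectors
# (numbers are made up; the article itself uses 2009 H1N1 data for the NCR).
as_planned = np.array([120.0, 85.0, 60.0])   # planned sector outputs ($M)
actual = np.array([96.0, 80.0, 45.0])        # degraded outputs during the pandemic

inoperability = (as_planned - actual) / as_planned    # fractional gap from planned output
economic_loss = as_planned - actual                   # monetary value of the gap

print(inoperability)   # sector 3 ranks worst by inoperability (0.25)
print(economic_loss)   # but sector 1 ranks worst by economic loss ($24M)
```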
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne
2016-04-01
Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
Tempo: A Toolkit for the Timed Input/Output Automata Formalism
2008-01-30
Tempo is a toolkit for the Timed Input/Output Automata formalism, supporting specification of distributed systems, simulation, checking of assertions after every simulation step, and generation of distributed code from specifications. The Tempo simulator puts the modeler in charge of resolving nondeterminism in the specified transitions.
Towards Standardization in Terminal Ballistics Testing: Velocity Representation
1976-01-01
The report examines relationships implicit in sets of striking and residual velocity (v_s, v_r) data, including the behavior of the derivative dv_r/dv_s, and includes samples of plotter output (v_r versus v_s and normalized velocity ratios). A form is proposed as being sufficiently simple and versatile to usefully and realistically model such data.
NASA Astrophysics Data System (ADS)
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2017-04-01
This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q iteration algorithm, uses two neural networks, one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion, open-loop stable, multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.
Systems and methods for predicting materials properties
Ceder, Gerbrand; Fischer, Chris; Tibbetts, Kevin; Morgan, Dane; Curtarolo, Stefano
2007-11-06
Systems and methods for predicting features of materials of interest. Reference data are analyzed to deduce relationships between the input data sets and output data sets. Reference data includes measured values and/or computed values. The deduced relationships can be specified as equations, correspondences, and/or algorithmic processes that produce appropriate output data when suitable input data is used. In some instances, the output data set is a subset of the input data set, and computational results may be refined by optionally iterating the computational procedure. To deduce features of a new material of interest, a computed or measured input property of the material is provided to an equation, correspondence, or algorithmic procedure previously deduced, and an output is obtained. In some instances, the output is iteratively refined. In some instances, new features deduced for the material of interest are added to a database of input and output data for known materials.
Orbiter global positioning system design and Ku-band problem investigations, exhibit B, revision 1
NASA Technical Reports Server (NTRS)
Lindsey, W. C.
1983-01-01
The hardware, the software, and the interface between them were investigated for a low-dynamics, nonhostile-environment, low-cost GPS receiver (GPS Z set). The set is basically a three-dimensional geodetic and waypoint navigator with GPS time, ground speed, and ground track as possible outputs in addition to the usual GPS receiver set outputs. Each functional module comprising the GPS set is described, enumerating its functional inputs and outputs, leading to the interface between the hardware and software of the set.
Obscuration Code with Space Station Applications (Manual)
1985-12-01
For details of the DCL-style command parsing used, readers are referred to the VMS documentation concerning the Command Definition Utility (CDU). Examples show the operation of the SET OUTPUT command: output files can be opened and closed using the SET OUTPUT /ECHOING, /PRINTABLE and /PLOTTABLE commands, with input echoed to a listing file.
Mid-Piacensian mean annual sea surface temperature: an analysis for data-model comparisons
Dowsett, Harry J.; Robinson, Marci M.; Foley, Kevin M.; Stoll, Danielle K.
2010-01-01
Numerical models of the global climate system are the primary tools used to understand and project climate disruptions in the form of future global warming. The Pliocene has been identified as the closest, albeit imperfect, analog to climate conditions expected for the end of this century, making an independent data set of Pliocene conditions necessary for ground truthing model results. Because most climate model output is produced in the form of mean annual conditions, we present a derivative of the USGS PRISM3 Global Climate Reconstruction which integrates multiple proxies of sea surface temperature (SST) into single surface temperature anomalies. We analyze temperature estimates from faunal and floral assemblage data, Mg/Ca values and alkenone unsaturation indices to arrive at a single mean annual SST anomaly (Pliocene minus modern) best describing each PRISM site, understanding that multiple proxies should not necessarily show concordance. The power of the multiple proxy approach lies within its diversity, as no two proxies measure the same environmental variable. This data set can be used to verify climate model output, to serve as a starting point for model inter-comparisons, and for quantifying uncertainty in Pliocene model prediction in perturbed physics ensembles.
SHERMAN, a shape-based thermophysical model. I. Model description and validation
NASA Astrophysics Data System (ADS)
Magri, Christopher; Howell, Ellen S.; Vervack, Ronald J.; Nolan, Michael C.; Fernández, Yanga R.; Marshall, Sean E.; Crowell, Jenna L.
2018-03-01
SHERMAN, a new thermophysical modeling package designed for analyzing near-infrared spectra of asteroids and other solid bodies, is presented. The model's features, the methods it uses to solve for surface and subsurface temperatures, and the synthetic data it outputs are described. A set of validation tests demonstrates that SHERMAN produces accurate output in a variety of special cases for which correct results can be derived from theory. These cases include a family of solutions to the heat equation for which thermal inertia can have any value and thermophysical properties can vary with depth and with temperature. An appendix describes a new approximation method for estimating surface temperatures within spherical-section craters, more suitable for modeling infrared beaming at short wavelengths than the standard method.
Simulation of medical Q-switch flash-pumped Er:YAG laser
NASA Astrophysics Data System (ADS)
Wang, Yan-lin; Huang, Chuyun; Yao, Yucheng; Zou, Xiaolin
2011-01-01
The Er:YAG laser wavelength, 2940 nm, is strongly absorbed by water; the absorption coefficient is as high as 13,000 cm^-1. Because of this strong water absorption, the erbium laser achieves a shallow penetration depth and causes less injury to surrounding tissue in most soft and hard tissues. At the same time, the interaction between 2940 nm radiation and water-saturated biological tissue is equivalent to instantaneous heating within a limited volume, resulting in micro-explosions that remove tissue. Different parameters can be set to cut enamel, dentin, caries and soft tissue. For the development and optimization of a laser system, laser modeling is a practical way to predict the influence of the various parameters on laser performance. To address the low output power of erbium lasers, the performance of a flash-pumped Er:YAG laser was simulated to obtain the optical output in theory. A rate-equation model was derived and used to predict the change of population densities in the various manifolds, and Q-switched laser output was simulated for different design parameters. Results showed that the Er:YAG laser can achieve a maximum average output power of 9.8 W under the given parameters. The model can be used to find potential laser systems that meet application requirements.
NASA Astrophysics Data System (ADS)
Han, Dandan; Bai, Jian; Lu, Qianbo; Lou, Shuqi; Jiao, Xufen; Yang, Guoguang
2016-08-01
An accelerometer exhibits temperature drift attributable to temperature variation, which adversely influences its output performance. In this paper, a quantitative analysis of the temperature effect and the temperature compensation of a MOEMS accelerometer, which is composed of a grating interferometric cavity and a micromachined sensing chip, are presented. A finite-element-method (FEM) approach is applied to simulate the deformation of the sensing chip of the MOEMS accelerometer at temperatures from -20°C to 70°C. The deformation results in variation of the distance between the grating and the sensing chip, ultimately modulating the output intensities. A static temperature model is set up to describe the temperature characteristics of the accelerometer from the simulation results, and temperature compensation based on this model is put forward, which can improve the output performance of the accelerometer. The model permits estimation of the temperature effect for this type of accelerometer containing a micromachined sensing chip. Comparison of the output intensities with and without temperature compensation indicates that the compensation improves the stability of the output intensities of the MOEMS accelerometer based on a grating interferometric cavity.
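A static compensation model of this kind is often just a low-order polynomial fitted to calibration data and subtracted from the raw output; the sketch below uses invented calibration values, not the paper's FEM results.

```python
import numpy as np

# Sketch of a static temperature-compensation model: fit the output drift against
# temperature over -20..70 degC and subtract it (calibration values are invented).
T = np.linspace(-20.0, 70.0, 10)                    # calibration temperatures (degC)
drift = 0.004 * (T - 25.0) + 2e-5 * (T - 25.0)**2   # measured zero-g output drift (a.u.)

coeffs = np.polyfit(T, drift, deg=2)                # static temperature model

def compensate(output, temperature):
    """Remove the modeled thermal drift from the raw interferometric output."""
    return output - np.polyval(coeffs, temperature)

print(compensate(1.000, 60.0))   # raw reading with the modeled 60 degC drift removed
```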
Wrapping Python around MODFLOW/MT3DMS based groundwater models
NASA Astrophysics Data System (ADS)
Post, V.
2008-12-01
Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially-distributed model parameters. The model output consists of a variety of data such as heads, fluxes and concentrations. Typically all files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS based models. The library is freely available and has an open structure so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially-distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently, allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data which is not constrained by the limitations of third-party products.
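In the spirit of the library described here, a minimal sketch of reading a binary head array and contouring it with matplotlib follows; read_heads is a hypothetical routine with an assumed file layout (real MODFLOW head files carry record headers), not the library's actual API.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical reader for a simple binary head dump; the flat float32 layout is an
# assumption for illustration and does not match MODFLOW's record structure exactly.
def read_heads(path, nlay, nrow, ncol, dtype=np.float32):
    """Read the last time step of a flat binary head array (layout assumed)."""
    data = np.fromfile(path, dtype=dtype)
    return data[-nlay * nrow * ncol:].reshape(nlay, nrow, ncol)

heads = read_heads("model.hds", nlay=1, nrow=50, ncol=50)

# Contour the top layer, as one might before superimposing flow vectors.
plt.contourf(heads[0])
plt.colorbar(label="head (m)")
plt.savefig("heads.png")
```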
User's Guide to the Stand Prognosis Model
William R. Wykoff; Nicholas L. Crookston; Albert R. Stage
1982-01-01
The Stand Prognosis Model is a computer program that projects the development of forest stands in the Northern Rocky Mountains. Thinning options allow for simulation of a variety of management strategies. Input consists of a stand inventory, including sample tree records, and a set of option selection instructions. Output includes data normally found in stand, stock,...
User Manual for SAHM package for VisTrails
Talbert, C.B.; Talbert, M.K.
2012-01-01
The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps and modeling options incorporated in the construction of a species distribution model. The four main advantages to using the combined VisTrails: SAHM package for species distribution modeling are: 1. formalization and tractable recording of the entire modeling process; 2. easier collaboration through a common modeling framework; 3. a user-friendly graphical interface to manage file input, model runs, and output; 4. extensibility to incorporate future and additional modeling routines and tools. This user manual provides detailed information on each module within the SAHM package: their inputs, outputs, common connections, optional arguments, and default settings. This information can also be accessed for individual modules by right-clicking on the documentation button for any module in VisTrails, or by right-clicking on any input or output for a module and selecting 'view documentation'. This user manual is intended to accompany the user guide, which provides detailed instructions on how to install the SAHM package within VisTrails and then presents information on the use of the package.
Uncertainty importance analysis using parametric moment ratio functions.
Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen
2014-02-01
This article presents a new importance analysis framework, called the parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed; the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction of the model output mean and variance by operating on the variances of model inputs. Unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed for implementing the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced to illustrate the engineering significance of the proposed importance analysis technique and to verify the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance.
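The variance ratio idea can be illustrated with a brute-force Monte Carlo sketch; unlike the single-sample-set unbiased estimators derived in the paper, this naive version simply re-evaluates a toy model (the model `g` and its input distributions are assumptions):

```python
import numpy as np

def g(x):
    """Toy nonlinear model: Y = g(X1, X2)."""
    return x[:, 0] ** 2 + np.sin(3 * x[:, 1])

def variance_ratio(theta, i, n=100_000, base_var=(1.0, 1.0), seed=0):
    """V[Y] when Var(X_i) is rescaled to theta, divided by the nominal V[Y]."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, np.sqrt(base_var), size=(n, 2))
    v_nominal = g(x).var()
    x_mod = x.copy()
    x_mod[:, i] *= np.sqrt(theta / base_var[i])  # rescale one input's variance
    return g(x_mod).var() / v_nominal

for theta in (0.25, 0.5, 1.0):
    print(theta, variance_ratio(theta, i=0))   # how much output variance survives
```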
Inversion-based propofol dosing for intravenous induction of hypnosis
NASA Astrophysics Data System (ADS)
Padula, F.; Ionescu, C.; Latronico, N.; Paltenghi, M.; Visioli, A.; Vivacqua, G.
2016-10-01
In this paper we propose an inversion-based methodology for the computation of a feedforward action for the propofol intravenous administration during the induction of hypnosis in general anesthesia. In particular, the typical initial bolus is substituted with a command signal that is obtained by predefining a desired output and by applying an input-output inversion procedure. The robustness of the method has been tested by considering a set of patients with different model parameters, which is representative of a large population.
Real Time Trajectory Planning for Groups of Unmanned Vehicles
2005-08-22
positions of the robots, and a third set determines the local relations of the neighboring robots. They have used a nonholonomic kinematic model for the...Note that the vehicles are not physically coupled in any way. A feedback control law for control inputs F2 and T2 must be determined to control...The goal is finding a control law for the two inputs [F2, T2] to stabilize the outputs. Therefore, the input-output equations
Multisite Evaluation of a Data Quality Tool for Patient-Level Clinical Data Sets
Huser, Vojtech; DeFalco, Frank J.; Schuemie, Martijn; Ryan, Patrick B.; Shang, Ning; Velez, Mark; Park, Rae Woong; Boyce, Richard D.; Duke, Jon; Khare, Ritu; Utidjian, Levon; Bailey, Charles
2016-01-01
Introduction: Data quality and fitness for analysis are crucial if outputs of analyses of electronic health record data or administrative claims data are to be trusted by the public and the research community. Methods: We describe a data quality analysis tool (called Achilles Heel) developed by the Observational Health Data Sciences and Informatics Collaborative (OHDSI) and compare outputs from this tool as it was applied to 24 large healthcare datasets across seven different organizations. Results: We highlight 12 data quality rules that identified issues in at least 10 of the 24 datasets and provide a full set of 71 rules identified in at least one dataset. Achilles Heel is freely available software that provides a useful starter set of data quality rules with the ability to add additional rules. We also present results of a structured email-based interview of all participating sites that collected qualitative comments about the value of Achilles Heel for data quality evaluation. Discussion: Our analysis represents the first comparison of outputs from a data quality tool that implements a fixed (but extensible) set of data quality rules. Thanks to a common data model, we were able to quickly compare multiple datasets originating from several countries in America, Europe and Asia.
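A minimal sketch of the fixed-but-extensible rule idea follows; the rule names, thresholds, and columns here are illustrative inventions, not the actual Achilles Heel rules:

```python
import pandas as pd

# Hypothetical rules in the spirit of Achilles Heel: each rule returns the
# offending rows, and new rules are added simply by extending the dict.
RULES = {
    "birth_year_in_future": lambda df: df[df["year_of_birth"] > 2016],
    "death_before_birth":   lambda df: df[df["death_year"] < df["year_of_birth"]],
    "implausible_age":      lambda df: df[2016 - df["year_of_birth"] > 120],
}

def run_rules(df, rules=RULES):
    """Return {rule_name: violation_count} for a patient-level dataset."""
    return {name: len(check(df)) for name, check in rules.items()}

demo = pd.DataFrame({"year_of_birth": [1950, 2030, 1890],
                     "death_year":    [2010, 2040, 1880]})
print(run_rules(demo))
```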
NASA Astrophysics Data System (ADS)
Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.
2009-04-01
Climate model output has been applied in several studies on glacier mass balance calculation. To date, computation of mass balance has mostly been performed at the native resolution of the climate model output, or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps and the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution, while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. Summer melt measured at stakes on several glaciers is well reproduced by the model; observed accumulation, on the other hand, is either over- or underestimated. It is shown that these shifts are systematic and correlated with regional biases in the meteorological input fields. We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of future studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartolac, S; Letourneau, D; University of Toronto, Toronto, Ontario
Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better quality in delivery than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Runtime plots of daily (6 MV) output data, acquired using in-house ion-chamber-based devices over three years for fifteen linear accelerators of varying make and model, were reviewed. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to correspond to early April and September, respectively. Approximately 48% of output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output, rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2–3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g. linear trends) that may be better described as true out-of-control states (i.e. non-stochastic deviations from otherwise expected behaviour) and could indicate service requirements. Results also pointed to an optimal setpoint for accelerators such that machine output is maintained within set tolerances and interventions are required less frequently.
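One simple way to separate a ~1% annual oscillation from a residual linear drift, as described above, is a least-squares fit of a sinusoid plus trend. The sketch below is a generic illustration on synthetic data, not the authors' analysis:

```python
import numpy as np

def fit_seasonal(day, output):
    """Fit output = a*sin(wt) + b*cos(wt) + c*day + d by least squares,
    separating the seasonal oscillation from any linear drift."""
    w = 2 * np.pi / 365.25
    A = np.column_stack([np.sin(w * day), np.cos(w * day), day, np.ones_like(day)])
    coef, *_ = np.linalg.lstsq(A, output, rcond=None)
    amplitude = np.hypot(coef[0], coef[1])
    phase = np.arctan2(coef[1], coef[0])
    return amplitude, phase, coef[2]     # seasonal amplitude, phase, drift/day

days = np.arange(0, 3 * 365)
synthetic = 100 * (1 + 0.01 * np.sin(2 * np.pi * days / 365.25)) \
            + np.random.normal(0, 0.2, days.size)
print(fit_seasonal(days, synthetic))     # amplitude ~1 (i.e. ~1% of 100)
```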
Integrating high dimensional bi-directional parsing models for gene mention tagging.
Hsu, Chun-Nan; Chang, Yu-Ming; Kuo, Cheng-Ju; Lin, Yu-Shi; Huang, Han-Shen; Chung, I-Fang
2008-07-01
Tagging gene and gene product mentions in scientific text is an important initial step of literature mining. In this article, we describe in detail our gene mention tagger that participated in the BioCreative 2 challenge and analyze what contributes to its good performance. Our tagger is based on the conditional random fields (CRF) model, the most prevalent method for the gene mention tagging task in BioCreative 2. Our tagger is interesting because it accomplished the highest F-scores among CRF-based methods and second overall. Moreover, we obtained our results mostly by applying open source packages, making it easy to duplicate our results. We first describe in detail how we developed our CRF-based tagger. We designed a very high dimensional feature set that includes most of the information that may be relevant. We trained bi-directional CRF models with the same set of features, one applying forward parsing and the other backward, and integrated the two models based on the output scores and dictionary filtering. One of the most prominent factors that contributes to the good performance of our tagger is the integration of the additional backward parsing model. However, from the definition of CRF, it appears that a CRF model is symmetric and bi-directional parsing models should produce the same results. We show that, due to different feature settings, a CRF model can be asymmetric, and the feature setting for our tagger in BioCreative 2 not only produces different results but also gives backward parsing models a slight but consistent advantage over forward parsing models. To fully explore the potential of integrating bi-directional parsing models, we applied different asymmetric feature settings to generate many bi-directional parsing models and integrated them based on the output scores. Experimental results show that this integrated model can achieve an even higher F-score based solely on the training corpus for gene mention tagging. Data sets, programs and an on-line service of our gene mention tagger can be accessed at http://aiia.iis.sinica.edu.tw/biocreative2.htm.
Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi
2012-10-01
We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
DairyWise, a whole-farm dairy model.
Schils, R L M; de Haan, M H A; Hemmer, J G A; van den Pol-van Dasselaar, A; de Boer, J A; Evers, A G; Holshof, G; van Middelkoop, J C; Zom, R L G
2007-11-01
A whole-farm dairy model was developed and evaluated. The DairyWise model is an empirical model that simulates technical, environmental, and financial processes on a dairy farm. The central component is the FeedSupply model, which balances the herd requirements, as generated by the DairyHerd model, and the supply of homegrown feeds, as generated by the crop models for grassland and corn silage. The output of the FeedSupply model was used as input for several technical, environmental, and economic submodels. The submodels simulated a range of farm aspects such as nitrogen and phosphorus cycling, nitrate leaching, ammonia emissions, greenhouse gas emissions, energy use, and a financial farm budget. The final output was a farm plan describing all material and nutrient flows and the consequences for the environment and economy. Evaluation of DairyWise was performed with 2 data sets consisting of 29 dairy farms. The evaluation showed that DairyWise was able to simulate gross margin, concentrate intake, nitrogen surplus, nitrate concentration in ground water, and crop yields. The variance accounted for ranged from 37 to 84%, and the mean differences between modeled and observed values varied from -5 to +3% per set of farms. We conclude that DairyWise is a powerful tool for integrated scenario development and evaluation for scientists, policy makers, extension workers, teachers and farmers.
Adaptive model reduction for continuous systems via recursive rational interpolation
NASA Technical Reports Server (NTRS)
Lilly, John H.
1994-01-01
A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
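The MDFT concept can be illustrated with the recursive sliding-DFT update, which tracks a single frequency bin of a length-N moving window in O(1) per sample; the sketch below shows that idea under standard assumptions and is not the paper's implementation:

```python
import numpy as np

class SlidingDFT:
    """Recursive single-bin DFT over a length-N moving window: each new
    sample updates bin k in O(1) instead of recomputing an O(N) sum.
    (Long runs accumulate rounding drift; a periodic reset would fix that.)"""
    def __init__(self, N, k):
        self.N, self.twiddle = N, np.exp(2j * np.pi * k / N)
        self.buf = np.zeros(N)        # circular buffer of the last N samples
        self.idx, self.X = 0, 0.0 + 0.0j

    def update(self, x_new):
        x_old = self.buf[self.idx]
        self.buf[self.idx] = x_new
        self.idx = (self.idx + 1) % self.N
        self.X = (self.X - x_old + x_new) * self.twiddle
        return self.X                 # DFT bin k of the current window

sdft = SlidingDFT(N=64, k=4)
t = np.arange(256)
for x in np.sin(2 * np.pi * 4 * t / 64):   # tone exactly on bin 4
    X = sdft.update(x)
print(abs(X))                               # ~N/2 = 32 for a unit sine
```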
NASA Technical Reports Server (NTRS)
Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.
1998-01-01
The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.
Hebisz, Rafal; Borkowski, Jacek; Zatoń, Marek
2016-01-01
The aim of this study was to determine differences in glycolytic metabolite concentrations and work output in response to an all-out interval training session in 23 cyclists with at least 2 years of interval training experience (E) and those inexperienced (IE) in this form of training. The intervention involved subsequent sets of maximal intensity exercise on a cycle ergometer. Each set comprised four 30 s repetitions interspersed with 90 s recovery periods; sets were repeated when blood pH returned to 7.3. Measurements of post-exercise hydrogen (H+) and lactate ion (LA-) concentrations and work output were taken. The experienced cyclists performed significantly more sets of maximal efforts than the inexperienced athletes (5.8 ± 1.2 vs. 4.3 ± 0.9 sets, respectively). Work output decreased in each subsequent set in the IE group and only in the last set in the E group. Distribution of power output changed only in the E group; power decreased in the initial repetitions of each set, only to increase in the final repetitions. H+ concentration decreased in the third, penultimate, and last sets in the E group and in each subsequent set in the IE group. LA- decreased in the last set in both groups. In conclusion, the experienced cyclists were able to repeatedly induce elevated levels of lactic acidosis. Power output distribution changed with decreased acid–base imbalance. In this way, this group could compensate for a decreased anaerobic metabolism. The above factors allowed cyclists experienced in interval training to perform more sets of maximal exercise without a decrease in power output compared with inexperienced cyclists.
Robust DEA under discrete uncertain data: a case study of Iranian electricity distribution companies
NASA Astrophysics Data System (ADS)
Hafezalkotob, Ashkan; Haji-Sami, Elham; Omrani, Hashem
2015-06-01
Crisp input and output data are fundamentally indispensable in traditional data envelopment analysis (DEA). However, real-world problems often deal with imprecise or ambiguous data. In this paper, we propose a novel robust data envelopment analysis (RDEA) model to investigate the efficiencies of decision-making units (DMUs) when there are discrete uncertain input and output data. The method is based upon the discrete robust optimization approach proposed by Mulvey et al. (1995), which utilizes probable scenarios to capture the effect of ambiguous data in the case study. Our primary concern in this research is evaluating electricity distribution companies under uncertainty about input/output data. To illustrate the ability of the proposed model, a numerical example of 38 Iranian electricity distribution companies is investigated. There is a large amount of ambiguous data about these companies, as some electricity distribution companies may not report clear and accurate statistics to the government, so a robust approach is needed to deal with this uncertainty. The results reveal that the RDEA model is suitable and reliable for target setting based on decision makers' (DMs') preferences when there are uncertain input/output data.
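A rough sketch of the scenario idea: evaluate each DMU with a standard input-oriented CCR DEA linear program under every discrete data scenario, then aggregate. This is a simplified stand-in for the RDEA formulation, and the random scenario data are assumptions:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR DEA efficiency of DMU `o`.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Variables: [theta, lambda]."""
    n = X.shape[0]
    c = np.zeros(n + 1); c[0] = 1.0                      # minimize theta
    A_in = np.hstack([-X[o][:, None], X.T])              # X'l - theta*x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T]) # -Y'l <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(X.shape[1]), -Y[o]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Evaluate every DMU under each discrete scenario and average the scores.
rng = np.random.default_rng(1)
scenarios = [(rng.uniform(1, 5, (6, 2)), rng.uniform(1, 5, (6, 1)))
             for _ in range(3)]
eff = np.mean([[ccr_efficiency(X, Y, o) for o in range(6)]
               for X, Y in scenarios], axis=0)
print(eff.round(3))
```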
NASA Astrophysics Data System (ADS)
Korbacz, A.; Brzeziński, A.; Thomas, M.
2008-04-01
We use new estimates of the global atmospheric and oceanic angular momenta (AAM, OAM) to study the influence on LOD/UT1. The AAM series was calculated from the output fields of the atmospheric general circulation model ERA-40 reanalysis. The OAM series is an outcome of a global ocean model OMCT simulation driven by global fields of the atmospheric parameters from the ERA-40 reanalysis. The excitation data cover the period between 1963 and 2001. Our calculations concern atmospheric and oceanic effects in LOD/UT1 over periods between 20 days and decades. Results are compared to those derived from the alternative AAM/OAM data sets.
Multilayer perceptron, fuzzy sets, and classification
NASA Technical Reports Server (NTRS)
Pal, Sankar K.; Mitra, Sushmita
1992-01-01
A fuzzy neural network model based on the multilayer perceptron, using the back-propagation algorithm, and capable of fuzzy classification of patterns is described. The input vector consists of membership values to linguistic properties while the output vector is defined in terms of fuzzy class membership values. This allows efficient modeling of fuzzy or uncertain patterns with appropriate weights being assigned to the backpropagated errors depending upon the membership values at the corresponding outputs. During training, the learning rate is gradually decreased in discrete steps until the network converges to a minimum error solution. The effectiveness of the algorithm is demonstrated on a speech recognition problem. The results are compared with those of the conventional MLP, the Bayes classifier, and the other related models.
NASA Astrophysics Data System (ADS)
Wang, Xianxun; Mei, Yadong
2017-04-01
Coordinated operation of hydro, wind, and photovoltaic power is a means of mitigating the conflict between power generation and the output fluctuation of new energy sources, and of overcoming the bottleneck of new energy development. Research on the coordination mechanism has been hampered by deficiencies in characterizing output fluctuation, depicting grid constraints, and handling power curtailment. In this paper, a multi-objective, multi-hierarchy model of coordinated hydro-wind-photovoltaic operation is built, with the objectives of maximizing power generation and minimizing output fluctuation, subject to the topology of the power grid and balanced disposal of curtailed power. In the case study, separate and coordinated operation are compared from the perspectives of power generation, power curtailment and output fluctuation in order to examine the coordination mechanism. Compared with running each source alone, coordinated operation of hydro-wind-photovoltaic gains compensation benefits: peak-alternation operation significantly reduces power curtailment and maximizes resource utilization through the compensating regulation of hydropower. The Pareto frontier of power generation and output fluctuation, obtained through multi-objective optimization, clarifies the mutual influence between these two objectives. Under coordinated operation, output fluctuation can be markedly reduced at the cost of a slight decline in power generation, while power curtailment also drops sharply compared with separate operation.
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Marrocu, Marino; Pusceddu, Gabriella; Langousis, Andreas; Mascaro, Giuseppe; Caroletti, Giulio
2013-04-01
Within the activities of the EU FP7 CLIMB project (www.climb-fp7.eu), we developed downscaling procedures to reliably assess climate forcing at hydrologically relevant scales, and applied them to six representative hydrological basins located in the Mediterranean region: Riu Mannu and Noce in Italy, Chiba in Tunisia, Kocaeli in Turkey, Thau in France, and Gaza in Palestine. As a first step towards this aim, we used daily precipitation and temperature data from the gridded E-OBS project (www.ecad.eu/dailydata), as reference fields, to rank 14 Regional Climate Model (RCM) outputs from the ENSEMBLES project (http://ensembles-eu.metoffice.com). The four best performing model outputs were selected, with the additional constraint of maintaining 2 outputs obtained from running different RCMs driven by the same GCM, and 2 runs from the same RCM driven by different GCMs. For these four RCM-GCM model combinations, a set of downscaling techniques were developed and applied, for the period 1951-2100, to variables used in hydrological modeling (i.e. precipitation; mean, maximum and minimum daily temperatures; direct solar radiation, relative humidity, magnitude and direction of surface winds). The quality of the final products is discussed, together with the results obtained after applying a bias reduction procedure to daily temperature and precipitation fields.
NASA Technical Reports Server (NTRS)
Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)
2001-01-01
Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization-state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model
NASA Astrophysics Data System (ADS)
Kuznetsov, A. V.; Makaryants, G. M.
2018-01-01
Many studies have addressed gas turbine engine identification via dynamic neural network models, where the aim is to minimize the error between the model and the real object during the identification process. Questions about how the training data set is constructed, however, are usually neglected. This article presents a study of the influence of training data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine whose input signal is the fuel consumption and whose output signal is the engine rotor rotation frequency. Four types of input signal were used to create training and testing data sets for the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created, one from each type of training data set, and each network was tested against all four types of test data set. The resulting 16 transition processes were compared with the corresponding solutions of the thermodynamic model, and the error ranges for each test data set were compared across the networks. The error ranges were found to be small, indicating that the influence of data set type on identification accuracy is low.
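The four excitation types can be mocked up as follows; the exact waveforms used in the study are not specified above, so these shapes are assumptions that merely illustrate the step/fast/slow/mixed distinction:

```python
import numpy as np

def make_excitation(kind, n=1000, seed=0):
    """Fuel-command test signals of the four kinds compared in the study
    (names reused; waveform details are illustrative assumptions)."""
    rng, t = np.random.default_rng(seed), np.linspace(0, 1, n)
    if kind == "step":
        return np.where(t < 0.5, 0.2, 0.8)
    if kind == "fast":                         # rapid random staircase
        return np.repeat(rng.uniform(0.2, 0.8, n // 20), 20)
    if kind == "slow":                         # slow ramp sweep
        return 0.2 + 0.6 * t
    if kind == "mixed":
        return 0.5 * (make_excitation("fast", n, seed) + make_excitation("slow", n))
    raise ValueError(kind)

signals = {k: make_excitation(k) for k in ("step", "fast", "slow", "mixed")}
```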
Independent component analysis decomposition of hospital emergency department throughput measures
NASA Astrophysics Data System (ADS)
He, Qiang; Chu, Henry
2016-05-01
We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as median times patients spent before they were admitted as an inpatient, before they were sent home, before they were seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming a set of performance measures collected at a site to a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of the conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
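A sketch of the decomposition using scikit-learn follows; the latent-source construction is synthetic, mimicking only the shape of the study's 3,086-hospital by 5-measure matrix:

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

# Synthetic stand-in: 3 latent "drivers" mixed into 5 throughput measures.
rng = np.random.default_rng(0)
sources = rng.laplace(size=(3086, 3))
mixing = rng.normal(size=(3, 5))
measures = sources @ mixing + 0.1 * rng.normal(size=(3086, 5))

ica = FastICA(n_components=3, random_state=0)
s_ica = ica.fit_transform(measures)        # independent components per hospital
s_pca = PCA(n_components=3).fit_transform(measures)   # conventional comparison
print(s_ica.shape, s_pca.shape)            # (3086, 3) each
```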
The N-BOD2 user's and programmer's manual
NASA Technical Reports Server (NTRS)
Frisch, H. P.
1978-01-01
A general purpose digital computer program was developed and designed to aid in the analysis of spacecraft attitude dynamics. The program provides the analyst with the capability of automatically deriving and numerically solving the equations of motion of any system that can be modeled as a topological tree of coupled rigid bodies, flexible bodies, point masses, and symmetrical momentum wheels. Two modes of output are available. The composite system equations of motion may be outputted on a line printer in a symbolic form that may be easily translated into common vector-dyadic notation, or the composite system equations of motion may be solved numerically and any desirable set of system state variables outputted as a function of time.
Updated Eastern Interconnect Wind Power Output and Forecasts for ERGIS: July 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pennock, K.
AWS Truepower, LLC (AWST) was retained by the National Renewable Energy Laboratory (NREL) to update wind resource, plant output, and wind power forecasts originally produced by the Eastern Wind Integration and Transmission Study (EWITS). The new data set was to incorporate AWST's updated 200-m wind speed map, additional tall towers that were not included in the original study, and new turbine power curves. Additionally, a primary objective of this new study was to employ new data synthesis techniques developed for the PJM Renewable Integration Study (PRIS) to eliminate diurnal discontinuities resulting from the assimilation of observations into mesoscale model runs. The updated data set covers the same geographic area, 10-minute time resolution, and 2004-2006 study period for the same onshore and offshore (Great Lakes and Atlantic coast) sites as the original EWITS data set.
Performance optimization and validation of ADM1 simulations under anaerobic thermophilic conditions.
Atallah, Nabil M; El-Fadel, Mutasem; Ghanimeh, Sophia; Saikaly, Pascal; Abou-Najm, Majdi
2014-12-01
In this study, two experimental sets of data, each involving two thermophilic anaerobic digesters treating food waste, were simulated using the Anaerobic Digestion Model No. 1 (ADM1). A sensitivity analysis was conducted, using both data sets of one digester, for parameter optimization based on five measured performance indicators (methane generation, pH, acetate, total COD, and ammonia) as well as an equally weighted combination of the five. The simulation results revealed that while optimization with respect to methane alone, a commonly adopted approach, succeeded in simulating the methane experimental results, it predicted other intermediary outputs less accurately. The multi-objective optimization, on the other hand, provided better overall results than methane-only optimization, even though it did not fully capture the intermediary outputs. The results of the parameter optimization were validated by their independent application to the data sets of the second digester.
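A generic sketch of such a multi-indicator calibration loop is shown below; the `simulate` stand-in, the indicator names, and the normalized-RMSE weighting are assumptions, and ADM1 itself is not implemented here:

```python
import numpy as np
from scipy.optimize import minimize

def calibrate(simulate, observed, weights, x0):
    """Weighted multi-indicator calibration: `simulate(params)` returns a
    dict of indicator time series matching the keys of `observed`."""
    keys = list(observed)

    def loss(params):
        sim = simulate(params)
        errs = [np.sqrt(np.mean((sim[k] - observed[k]) ** 2)) /
                (np.std(observed[k]) + 1e-12) for k in keys]
        return float(np.dot(weights, errs))

    return minimize(loss, x0, method="Nelder-Mead")

# Toy demo: a one-parameter stand-in model scaling two indicators.
obs = {"ch4": np.ones(10), "ph": 7 * np.ones(10)}
sim = lambda p: {"ch4": p[0] * np.ones(10), "ph": 7 * p[0] * np.ones(10)}
print(calibrate(sim, obs, weights=[0.5, 0.5], x0=[0.5]).x)   # -> ~[1.0]
```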
NASA Astrophysics Data System (ADS)
Sulis, M.; Paniconi, C.; Marrocu, M.; Huard, D.; Chaumont, D.
2012-12-01
General circulation models (GCMs) are the primary instruments for obtaining projections of future global climate change. Outputs from GCMs, aided by dynamical and/or statistical downscaling techniques, have long been used to simulate changes in regional climate systems over wide spatiotemporal scales. Numerous studies have acknowledged the disagreements between the various GCMs and between the different downscaling methods designed to compensate for the mismatch between climate model output and the spatial scale at which hydrological models are applied. Very little is known, however, about the importance of these differences once they have been input or assimilated by a nonlinear hydrological model. This issue is investigated here at the catchment scale using a process-based model of integrated surface and subsurface hydrologic response driven by outputs from 12 members of a multimodel climate ensemble. The data set consists of daily values of precipitation and min/max temperatures obtained by combining four regional climate models and five GCMs. The regional scenarios were downscaled using a quantile scaling bias-correction technique. The hydrologic response was simulated for the 690 km2 des Anglais catchment in southwestern Quebec, Canada. The results show that different hydrological components (river discharge, aquifer recharge, and soil moisture storage) respond differently to precipitation and temperature anomalies in the multimodel climate output, with greater variability for annual discharge compared to recharge and soil moisture storage. We also find that runoff generation and extreme event-driven peak hydrograph flows are highly sensitive to any uncertainty in climate data. Finally, the results show the significant impact of changing sequences of rainy days on groundwater recharge fluxes and the influence of longer dry spells in modifying soil moisture spatial variability.
ERIC Educational Resources Information Center
Zigic, Sasha; Lemckert, Charles J.
2007-01-01
The following paper presents a computer-based learning strategy to assist in introducing and teaching water quality modelling to undergraduate civil engineering students. As part of the learning strategy, an interactive computer-based instructional (CBI) aid was specifically developed to assist students to set up, run and analyse the output from a…
USDA-ARS?s Scientific Manuscript database
The capacity of US agriculture to increase the output of specific foods to accommodate increased demand is not well documented. This research uses geospatial modeling to examine the capacity of the US agricultural land base to increase the per capita availability of an example set of nutrient-dense ...
ERIC Educational Resources Information Center
Omar, Zoharah; Ahmad, Aminah
2014-01-01
Following the classic systems model of inputs, processes, and outputs, this study examined the influence of three input factors, team climate, work overload, and team leadership, on research project team effectiveness as measured by publication productivity, team member satisfaction, and job frustration. This study also examined the mediating…
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
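The max-min margin estimation can be sketched generically as follows; the margin convention g(y) >= 0 and the toy exponential model are assumptions for illustration, not the paper's F-16 application:

```python
import numpy as np
from scipy.optimize import minimize

def max_min_margin(predict, requirements, p0):
    """Pick parameters maximizing the smallest requirement-compliance margin.
    Each requirement is a function g with g(y) >= 0 when prediction y complies;
    the g-values themselves serve as margins (illustrative convention)."""
    def neg_smallest_margin(p):
        y = predict(p)
        return -min(g(y) for g in requirements)
    return minimize(neg_smallest_margin, p0, method="Nelder-Mead")

# Toy example: fit y = a*exp(-b*t) so errors to the data stay within +/-0.1.
t = np.linspace(0, 1, 20)
data = 2.0 * np.exp(-1.3 * t)
reqs = [lambda y: 0.1 - np.max(np.abs(y - data))]
res = max_min_margin(lambda p: p[0] * np.exp(-p[1] * t), reqs, p0=[1.0, 1.0])
print(res.x, -res.fun)    # parameters and the achieved worst-case margin
```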
Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M
2015-11-01
This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.
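The decomposition/recomposition step can be sketched with ordinary least squares. This is a simplification: the paper's primitives are learned by model-free ILC, whereas the primitive pairs below are made up for illustration:

```python
import numpy as np

def primitive_tracking(U, Y, y_desired):
    """Recompose a reference input from learned primitives.
    U, Y: (n_samples, n_primitives) reference-input / output primitive pairs.
    The desired trajectory is projected onto the output primitives and the
    same coefficients recompose the reference input."""
    c, *_ = np.linalg.lstsq(Y, y_desired, rcond=None)
    return U @ c, c

# Toy primitives: three input/output pairs "learned" beforehand (illustrative).
t = np.linspace(0, 1, 100)
Y = np.column_stack([t, t**2, np.sin(np.pi * t)])    # output primitives
U = 2.0 * Y + 0.1                                     # matching input primitives
u_ref, coeffs = primitive_tracking(U, Y, y_desired=0.5*t + 0.3*np.sin(np.pi*t))
print(coeffs.round(3))     # ~[0.5, 0, 0.3]
```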
Simulation of a master-slave event set processor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Comfort, J.C.
1984-03-01
Event set manipulation may consume a considerable amount of the computation time spent in performing a discrete-event simulation. One way of minimizing this time is to allow event set processing to proceed in parallel with the remainder of the simulation computation. The paper describes a multiprocessor simulation computer, in which all non-event-set processing is performed by the principal processor (called the host). Event set processing is coordinated by a front-end processor (the master) and actually performed by several other functionally identical processors (the slaves). A trace-driven simulation program modeling this system was constructed, and was run with trace output taken from two different simulation programs. Output from this simulation suggests that a significant reduction in run time may be realized by this approach. Sensitivity analysis was performed on the significant parameters of the system (number of slave processors, relative processor speeds, and interprocessor communication times). A comparison between actual and simulated run times for a one-processor system was used to assist in the validation of the simulation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, R; Gordon, I; Ghebremedhin, A
2014-06-01
Purpose: To determine the proton output factors for an SRS cone set using standardized apertures and varied range compensators (bolus blanks); specifically, to determine the best method for modeling the bolus gap factor (BGF) and eliminate the need for patient-specific calibrations. Methods: A Standard Imaging A-16 chamber was placed in a Plastic Water phantom to measure the change in dose/MU with different treatment combinations for a proton SRS cone, using standardized apertures and range compensators. Measurements were made with all apertures in the SRS cone set, with four different range compensator thicknesses and five different air gaps between the end of the SRS cone and the surface of the phantom. The chamber was located at isocenter and maintained at a constant depth at the center of modulation for all measurements. Each aperture was placed in the cone to measure the change in MU needed to maintain constant dose at the chamber as the air gap was increased with different thicknesses of bolus. Results: The dose/MU varied significantly with decreasing aperture size, increasing bolus thickness, or increasing air gap. The measured data were fitted with the lowest-order polynomials that accurately described the data, to create a model for determining the change in output for any potential combination of devices used to treat a patient. For a given standardized aperture, the BGF could be described by its constituent factors: the bolus thickness factor (BTF) and the nozzle extension factor (NEF). Conclusion: The methods used to model the dose at the calibration point could be used to accurately predict the change in output for SRS proton beams due to the BGF, eliminating the need for patient-specific calibrations. This method for modeling SRS treatments could also be applied to model other treatments using passively scattered proton beams.
A Weight of Evidence Framework for Environmental Assessments: Inferring Quantities
Environmental assessments require the generation of quantitative parameters such as degradation rates and assessment products may be quantities such as criterion values or magnitudes of effects. When multiple data sets or outputs of multiple models are available, it may be appro...
Finite Control Set Model Predictive Control for Multiple Distributed Generators Microgrids
NASA Astrophysics Data System (ADS)
Babqi, Abdulrahman Jamal
This dissertation proposes two control strategies for AC microgrids that consist of multiple distributed generators (DGs). The control strategies are valid for both grid-connected and islanded modes of operation. In general, a microgrid can operate as a stand-alone system (i.e., islanded mode) or while it is connected to the utility grid (i.e., grid-connected mode). To enhance the performance of a microgrid, a sophisticated control scheme should be employed. The control strategies of microgrids can be divided into primary and secondary controls. The primary control regulates the output active and reactive powers of each DG in grid-connected mode as well as the output voltage and frequency of each DG in islanded mode. The secondary control is responsible for regulating the microgrid voltage and frequency in the islanded mode. Moreover, it provides power sharing schemes among the DGs. In other words, the secondary control specifies the set points (i.e., reference values) for the primary controllers. In this dissertation, Finite Control Set Model Predictive Control (FCS-MPC) was proposed for controlling microgrids. FCS-MPC was used as the primary controller to regulate the output power of each DG (in the grid-connected mode) or the voltage of the point of DG coupling (in the islanded mode of operation). In the grid-connected mode, Direct Power Model Predictive Control (DPMPC) was implemented to manage the power flow between each DG and the utility grid. In the islanded mode, Voltage Model Predictive Control (VMPC), as the primary control, and droop control, as the secondary control, were employed to control the output voltage of each DG and the system frequency. The controller was equipped with a supplementary current limiting technique in order to limit the output current of each DG during abnormal incidents. The control approach also enabled smooth transition between the two modes. The performance of the control strategy was investigated and verified using the PSCAD/EMTDC software platform. This dissertation also proposes a control and power sharing strategy for small-scale microgrids in both grid-connected and islanded modes based on centralized FCS-MPC. In grid-connected mode, the controller was capable of managing the output power of each DG and enabling flexible power regulation between the microgrid and the utility grid. In islanded mode, the controller regulated the microgrid voltage and frequency, and provided a precise power sharing scheme among the DGs. In addition, the power sharing can be adjusted flexibly by changing the sharing ratio. The proposed control also enabled plug-and-play operation. Moreover, a smooth transition between the two modes of operation was achieved without any disturbance in the system. Case studies were carried out in order to validate the proposed control strategy with the PSCAD/EMTDC software package.
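The core FCS-MPC loop is easy to sketch for a two-level three-phase converter feeding an RL load: enumerate the finite set of switching vectors, predict one step ahead, and keep the cheapest. All parameter values below are assumptions for illustration; this is not the dissertation's controller:

```python
import numpy as np
from itertools import product

R, L, Ts, Vdc = 0.5, 5e-3, 1e-4, 400.0
SWITCH_STATES = [np.array(s) for s in product([0, 1], repeat=3)]  # 8 vectors

def phase_voltages(s):
    """Converter phase voltages for switching vector s (balanced load)."""
    return Vdc * (s - s.mean())

def predict(i_now, v_grid, s):
    """Forward-Euler one-step prediction of the phase currents."""
    return i_now + Ts / L * (phase_voltages(s) - R * i_now - v_grid)

def fcs_mpc_step(i_now, i_ref, v_grid):
    """Try every admissible switching vector; apply the one minimizing
    the squared current-tracking error."""
    costs = [np.sum((predict(i_now, v_grid, s) - i_ref) ** 2)
             for s in SWITCH_STATES]
    return SWITCH_STATES[int(np.argmin(costs))]

s_opt = fcs_mpc_step(i_now=np.zeros(3),
                     i_ref=np.array([10.0, -5.0, -5.0]),
                     v_grid=np.zeros(3))
print(s_opt)
```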
NASA Astrophysics Data System (ADS)
Shin, Henry; Suresh, Nina L.; Zev Rymer, William; Hu, Xiaogang
2018-02-01
Objective. Chronic muscle weakness impacts the majority of individuals after a stroke. The origins of this hemiparesis are multifaceted, and altered spinal control of the motor unit (MU) pool can lead to muscle weakness. However, the relative contributions of MU recruitment and discharge organization are not well understood. In this study, we sought to examine these different effects by utilizing a MU simulation with variations set to mimic the changes of MU control in stroke. Approach. Using a well-established model of the MU pool, this study quantified the changes in force output caused by changes in MU recruitment range and recruitment order, as well as MU firing rate organization at the population level. We additionally expanded the original model to include a fatigue component, which variably decreased the output force with increasing length of contraction. Differences in the force output at both the peak and fatigued time points across different excitation levels were quantified and compared across different sets of MU parameters. Main results. Across the different simulation parameters, we found that the main driving factor of the reduced force output was the compressed range of MU recruitment. Recruitment compression caused a decrease in total force across all excitation levels. Additionally, compression of the range of MU firing rates also decreased the force output, mainly at the higher excitation levels. Lastly, changes to the recruitment order of MUs appeared to have minimal impact on the force output. Significance. We found that altered control of MUs alone, as simulated in this study, can lead to a substantial reduction in muscle force generation in stroke survivors. These findings may provide valuable insight for both clinicians and researchers in prescribing and developing different types of therapies for the rehabilitation and restoration of lost strength after stroke.
Tropical Cyclone Information System
NASA Technical Reports Server (NTRS)
Li, P. Peggy; Knosp, Brian W.; Vu, Quoc A.; Yi, Chao; Hristova-Veleva, Svetla M.
2009-01-01
The JPL Tropical Cyclone Information System (TCIS) is a Web portal (http://tropicalcyclone.jpl.nasa.gov) that provides researchers with an extensive set of observed hurricane parameters together with large-scale and convection-resolving model outputs. It provides a comprehensive set of high-resolution satellite, airborne, and in-situ observations in both image and data formats. Large-scale datasets depict the surrounding environmental parameters such as SST (Sea Surface Temperature) and aerosol loading. Model outputs and analysis tools are provided to evaluate model performance and compare observations from different platforms. The system pertains to the thermodynamic and microphysical structure of the storm, the air-sea interaction processes, and the larger-scale environment as depicted by ocean heat content and the aerosol loading of the environment. Currently, the TCIS is populated with satellite observations of all tropical cyclones observed globally during 2005. There is a plan to extend the database both forward in time to the present as well as backward to 1998. The portal is powered by a MySQL database and an Apache/Tomcat Web server on a Linux system. The interactive graphic user interface is provided by Google Map.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broderick, Robert; Quiroz, Jimmy; Grijalva, Santiago
2014-07-15
Matlab Toolbox for simulating the impact of solar energy on the distribution grid. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving GridPV Toolbox information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulations functions are included to show potential uses of the toolbox functions.
The ACP Special Issue is being organized to draw together analysis of a set of cooperative modeling experiments (referred to as HTAP2). The purpose of this technical note is to provide a common description of the experimental design and set up for HTAP2 that can be referred to b...
NASA Technical Reports Server (NTRS)
Lucas, Michael J.; Marcolini, Michael A.
1997-01-01
The Rotorcraft Noise Model (RNM) is an aircraft noise impact modeling computer program being developed for NASA-Langley Research Center which calculates sound levels at receiver positions either on a uniform grid or at specific defined locations. The basic computational model calculates a variety of metrics. Acoustic properties of the noise source are defined by two sets of sound pressure hemispheres, each hemisphere being centered on a noise source of the aircraft. One set of sound hemispheres provides the broadband data in the form of one-third octave band sound levels. The other set of sound hemispheres provides narrowband data in the form of pure-tone sound pressure levels and phase. Noise contours on the ground are output graphically or in tabular format, and are suitable for inclusion in Environmental Impact Statements or Environmental Assessments.
Thermospheric dynamics - A system theory approach
NASA Technical Reports Server (NTRS)
Codrescu, M.; Forbes, J. M.; Roble, R. G.
1990-01-01
A system theory approach to thermospheric modeling is developed, based upon a linearization method which is capable of preserving nonlinear features of a dynamical system. The method is tested using a large, nonlinear, time-varying system, namely the thermospheric general circulation model (TGCM) of the National Center for Atmospheric Research. In the linearized version an equivalent system, defined for one of the desired TGCM output variables, is characterized by a set of response functions that is constructed from corresponding quasi-steady state and unit sample response functions. The linearized version of the system runs on a personal computer and produces an approximation of the desired TGCM output field height profile at a given geographic location.
General model and control of an n rotor helicopter
NASA Astrophysics Data System (ADS)
Sidea, A. G.; Yding Brogaard, R.; Andersen, N. A.; Ravn, O.
2014-12-01
The purpose of this study was to create a dynamic, nonlinear mathematical model of a multirotor that would be valid for different numbers of rotors. Furthermore, a set of Single Input Single Output (SISO) controllers were implemented for attitude control. Both model and controllers were tested experimentally on a quadcopter. Using the combined model and controllers, simple system simulation and control is possible, by replacing the physical values for the individual systems.
Processing Device for High-Speed Execution of an Xrisc Computer Program
NASA Technical Reports Server (NTRS)
Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)
2016-01-01
A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and controls execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provides the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values are loaded into the register and the set of output values are unloaded from the register in parallel with processing of the current calculation set.
Observations in the Computer Room: L2 Output and Learner Behaviour
ERIC Educational Resources Information Center
Leahy, Christine
2004-01-01
This article draws on second language theory, particularly output theory as defined by Swain (1995), in order to conceptualise observations made in a computer-assisted language learning setting. It investigates second language output and learner behaviour within an electronic role-play setting, based on a subject-specific problem solving task and…
Negative autoregulation matches production and demand in synthetic transcriptional networks.
Franco, Elisa; Giordano, Giulia; Forsberg, Per-Ola; Murray, Richard M
2014-08-15
We propose a negative feedback architecture that regulates activity of artificial genes, or "genelets", to meet their output downstream demand, achieving robustness with respect to uncertain open-loop output production rates. In particular, we consider the case where the outputs of two genelets interact to form a single assembled product. We show with analysis and experiments that negative autoregulation matches the production and demand of the outputs: the magnitude of the regulatory signal is proportional to the "error" between the circuit output concentration and its actual demand. This two-device system is experimentally implemented using in vitro transcriptional networks, where reactions are systematically designed by optimizing nucleic acid sequences with publicly available software packages. We build a predictive ordinary differential equation (ODE) model that captures the dynamics of the system and can be used to numerically assess the scalability of this architecture to larger sets of interconnected genes. Finally, with numerical simulations we contrast our negative autoregulation scheme with a cross-activation architecture, which is less scalable and results in slower response times.
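A coarse ODE sketch of the production-demand matching idea follows; the rate constants are made up and the structure is a simplification of the paper's predictive model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two genelet outputs r1, r2 assemble into a product p; the free (unassembled)
# excess of each output feeds back to repress its own genelet, so production
# tracks demand even when open-loop rates k1, k2 are mismatched.
def rhs(t, s, k1=2.0, k2=1.0, ka=5.0, kd=0.5, krep=4.0):
    r1, r2, p = s
    assembly = ka * r1 * r2
    act1 = 1.0 / (1.0 + krep * r1)   # more free output -> less production
    act2 = 1.0 / (1.0 + krep * r2)
    return [k1 * act1 - assembly - kd * r1,
            k2 * act2 - assembly - kd * r2,
            assembly]

sol = solve_ivp(rhs, (0, 50), [0.0, 0.0, 0.0])
print(sol.y[:2, -1])   # free outputs settle at matched, low levels
```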
Independent validation of Swarm Level 2 magnetic field products and `Quick Look' for Level 1b data
NASA Astrophysics Data System (ADS)
Beggan, Ciarán D.; Macmillan, Susan; Hamilton, Brian; Thomson, Alan W. P.
2013-11-01
Magnetic field models are produced on behalf of the European Space Agency (ESA) by an independent scientific consortium known as the Swarm Satellite Constellation Application and Research Facility (SCARF), through the Level 2 Processor (L2PS). The consortium primarily produces magnetic field models for the core, lithosphere, ionosphere and magnetosphere. Typically, for each magnetic product, two magnetic field models are produced in separate chains using complementary data selection and processing techniques. Hence, the magnetic field models from the complementary processing chains will be similar but not identical. The final step in the overall L2PS therefore involves inspection and validation of the magnetic field models against each other and against data from (semi-) independent sources (e.g. ground observatories). We describe the validation steps for each magnetic field product and the comparison against independent datasets, and we show examples of the output of the validation. In addition, the L2PS also produces a daily set of `Quick Look' output graphics and statistics to monitor the overall quality of Level 1b data issued by ESA. We describe the outputs of the `Quick Look' chain.
Fuzzy Neuron: Method and Hardware Realization
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.
2014-01-01
This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
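A hedged sketch of the estimator idea for a single input and a single output: triangular fuzzy memberships span the input range, each membership carries a linear local model, and an LMS rule adapts the coefficients online. The structure and parameters are illustrative, not the NASA hardware realization.

```python
# Illustrative fuzzy-neuron estimator: triangular memberships over the input
# space, a linear combiner per membership, and online LMS coefficient updates.
import numpy as np

centers = np.linspace(-1, 1, 5)            # membership centers across input space
width = centers[1] - centers[0]
w = np.zeros((len(centers), 2))            # per-membership linear model [bias, slope]
eta = 0.1                                  # learning rate

def memberships(x):
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)  # triangular shapes
    return mu / mu.sum()

def predict(x):
    mu = memberships(x)
    locals_ = w @ np.array([1.0, x])       # each membership's local linear output
    return mu @ locals_, mu

rng = np.random.default_rng(0)
for _ in range(5000):                      # "background" online training
    x = rng.uniform(-1, 1)
    target = np.sin(2.5 * x)               # stand-in for the observed system
    yhat, mu = predict(x)
    w += eta * (target - yhat) * np.outer(mu, [1.0, x])  # LMS update

print("f(0.3) approx:", predict(0.3)[0], " true:", np.sin(0.75))
```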
John D. Armstrong; Keith H. Nislow
2012-01-01
Modelling approaches for relating discharge to the biology of Atlantic salmon, Salmo salar L., and brown trout, Salmo trutta L., growing in rivers are reviewed. Process-based and empirical models are set within a common framework of input of water flow and output of characteristics of fish, such as growth and survival, which relate directly to population dynamics. A...
NASA Astrophysics Data System (ADS)
Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.
2016-12-01
There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings, and produce large quantities of output, that must be further analyzed and quality controlled for scientific papers and submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that a major difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". "FRE-Curator" is the name of a database-centric paradigm used at NOAA GFDL to ingest information about the model runs into an RDBMS (Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses the Live Access Server, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations all add complexity and challenges to providing an on-demand visualization experience for our GFDL users.
[Decomposition model of energy-related carbon emissions in tertiary industry for China].
Lu, Yuan-Qing; Shi, Jun
2012-07-01
Tertiary industry has developed rapidly in recent years, and it is important to identify the factors that influence its energy-related carbon emissions. A decomposition model of energy-related carbon emissions for China is set up by adopting the logarithmic mean Divisia index (LMDI) method, based on the identity of carbon emissions. The model is used to analyze the influence of energy structure, energy efficiency, tertiary industry structure and economic output on energy-related carbon emissions in China from 2000 to 2009. Results show that the contribution rates of economic output and energy structure to energy-related carbon emissions increase year by year, whereas energy efficiency and tertiary industry structure both restrain energy-related carbon emissions, although this restraining effect is weakening.
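For readers unfamiliar with the LMDI technique named above, the sketch below shows an additive decomposition under the simplified identity C = Q × (E/Q) × (C/E); the paper's identity additionally separates energy and industry structure, and all numbers here are hypothetical.

```python
# Additive LMDI decomposition under C = Q * (E/Q) * (C/E): economic output Q,
# energy intensity E/Q, carbon factor C/E. The factor effects sum exactly to
# the observed emissions change CT - C0. Numbers are hypothetical.
import math

def logmean(a, b):
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

Q0, E0, C0 = 100.0, 50.0, 40.0   # base year: output, energy use, emissions
QT, ET, CT = 160.0, 65.0, 48.0   # target year

L = logmean(CT, C0)
effects = {
    "economic output":  L * math.log(QT / Q0),
    "energy intensity": L * math.log((ET / QT) / (E0 / Q0)),
    "carbon factor":    L * math.log((CT / ET) / (C0 / E0)),
}
print(effects, "sum =", sum(effects.values()), "observed change =", CT - C0)
```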
Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.
Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D
2014-05-01
In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available. Copyright © 2013. Published by Elsevier Ireland Ltd.
A global sensitivity analysis approach for morphogenesis models.
Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G
2015-11-21
Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
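The workflow below sketches a variance-based (Sobol') global sensitivity analysis of the kind described, with a cheap toy function standing in for the expensive cellular Potts model. The SALib package and the parameter names are assumptions for illustration only.

```python
# Sobol' global sensitivity sketch: Saltelli sampling, a stand-in model, and
# first-order vs. total-order indices (the latter capture interactions).
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["adhesion", "chemotaxis", "elasticity"],   # illustrative names
    "bounds": [[0, 1], [0, 1], [0, 1]],
}
X = saltelli.sample(problem, 1024)                       # sampling design
Y = X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 0.1 * X[:, 2]    # stand-in model output
Si = sobol.analyze(problem, Y)
print("first-order:", Si["S1"])    # effects of single parameters
print("total-order:", Si["ST"])    # including parameter interactions
```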
Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models
NASA Astrophysics Data System (ADS)
Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.
2007-01-01
Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention has been, and generally still is, paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and analysis of error distributions.
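As an illustration of the clustering step, the following is a minimal self-organising map written in plain NumPy; the paper's 2-stage SOM, the MDS validation and the uncertainty metrics are not reproduced, and the data are synthetic stand-ins for multivariate model output.

```python
# Minimal SOM: a grid of codebook vectors is pulled toward samples, with a
# neighbourhood that shrinks over time, clustering multivariate model output.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 4))          # stand-in for multivariate model output
grid = np.array([(i, j) for i in range(6) for j in range(6)])  # 6x6 map
W = rng.normal(size=(36, 4))              # codebook (weight) vectors

for t in range(3000):
    x = data[rng.integers(len(data))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best-matching unit
    dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)     # map-space distances
    sigma = 3.0 * np.exp(-t / 1000)                   # shrinking neighbourhood
    h = np.exp(-dist2 / (2 * sigma ** 2))
    lr = 0.5 * np.exp(-t / 1500)                      # decaying learning rate
    W += lr * h[:, None] * (x - W)                    # pull neighbourhood toward x

clusters = np.argmin(((data[:, None, :] - W[None]) ** 2).sum(-1), axis=1)
print(np.bincount(clusters, minlength=36))            # samples per map unit
```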
CPAP Devices for Emergency Prehospital Use: A Bench Study.
Brusasco, Claudia; Corradi, Francesco; De Ferrari, Alessandra; Ball, Lorenzo; Kacmarek, Robert M; Pelosi, Paolo
2015-12-01
CPAP is frequently used in prehospital and emergency settings. An air-flow output minimum of 60 L/min and a constant positive pressure are 2 important features for a successful CPAP device. Unlike hospital CPAP devices, which require electricity, CPAP devices for ambulance use need only an oxygen source to function. The aim of the study was to evaluate and compare on a bench model the performance of 3 orofacial mask devices (Ventumask, EasyVent, and Boussignac CPAP system) and 2 helmets (Ventukit and EVE Coulisse) used to apply CPAP in the prehospital setting. A static test evaluated air-flow output, positive pressure applied, and FIO2 delivered by each device. A dynamic test assessed airway pressure stability during simulated ventilation. Efficiency of devices was compared based on oxygen flow needed to generate a minimum air flow of 60 L/min at each CPAP setting. The EasyVent and EVE Coulisse devices delivered significantly higher mean air-flow outputs compared with the Ventumask and Ventukit under all CPAP conditions tested. The Boussignac CPAP system never reached an air-flow output of 60 L/min. The EasyVent had significantly lower pressure excursion than the Ventumask at all CPAP levels, and the EVE Coulisse had lower pressure excursion than the Ventukit at 5, 15, and 20 cm H2O, whereas at 10 cm H2O, no significant difference was observed between the 2 devices. Estimated oxygen consumption was lower for the EasyVent and EVE Coulisse compared with the Ventumask and Ventukit. Air-flow output, pressure applied, FIO2 delivered, device oxygen consumption, and ability to maintain air flow at 60 L/min differed significantly among the CPAP devices tested. Only the EasyVent and EVE Coulisse achieved the required minimum level of air-flow output needed to ensure an effective therapy under all CPAP conditions. Copyright © 2015 by Daedalus Enterprises.
Robustness, evolvability, and the logic of genetic regulation.
Payne, Joshua L; Moore, Jason H; Wagner, Andreas
2014-01-01
In gene regulatory circuits, the expression of individual genes is commonly modulated by a set of regulating gene products, which bind to a gene's cis-regulatory region. This region encodes an input-output function, referred to as signal-integration logic, that maps a specific combination of regulatory signals (inputs) to a particular expression state (output) of a gene. The space of all possible signal-integration functions is vast and the mapping from input to output is many-to-one: For the same set of inputs, many functions (genotypes) yield the same expression output (phenotype). Here, we exhaustively enumerate the set of signal-integration functions that yield identical gene expression patterns within a computational model of gene regulatory circuits. Our goal is to characterize the relationship between robustness and evolvability in the signal-integration space of regulatory circuits, and to understand how these properties vary between the genotypic and phenotypic scales. Among other results, we find that the distributions of genotypic robustness are skewed, so that the majority of signal-integration functions are robust to perturbation. We show that the connected set of genotypes that make up a given phenotype are constrained to specific regions of the space of all possible signal-integration functions, but that as the distance between genotypes increases, so does their capacity for unique innovations. In addition, we find that robust phenotypes are (i) evolvable, (ii) easily identified by random mutation, and (iii) mutationally biased toward other robust phenotypes. We explore the implications of these latter observations for mutation-based evolution by conducting random walks between randomly chosen source and target phenotypes. We demonstrate that the time required to identify the target phenotype is independent of the properties of the source phenotype.
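A toy enumeration in the spirit of the study (far smaller than its circuit model) illustrates the many-to-one genotype-phenotype map and the robustness calculation. The choice of "physiologically relevant" inputs below is hypothetical.

```python
# Toy signal-integration space: a genotype is the full truth table of a 3-input
# Boolean function; the phenotype is its output on the 4 input combinations
# assumed to occur physiologically. Robustness = fraction of single-bit
# genotype mutations that preserve the phenotype.
from itertools import product

n_inputs = 3
inputs = list(product([0, 1], repeat=n_inputs))
seen = inputs[:4]                          # hypothetical relevant input patterns

def phenotype(genotype):                   # genotype: tuple of 8 output bits
    return tuple(genotype[inputs.index(s)] for s in seen)

def robustness(genotype):
    hits = 0
    for i in range(len(genotype)):
        mutant = list(genotype)
        mutant[i] ^= 1                     # flip one truth-table entry
        hits += phenotype(tuple(mutant)) == phenotype(genotype)
    return hits / len(genotype)

by_pheno = {}
for g in product([0, 1], repeat=2 ** n_inputs):    # all 256 functions
    by_pheno.setdefault(phenotype(g), []).append(robustness(g))
for ph, rs in sorted(by_pheno.items()):
    print(ph, "genotypes:", len(rs), "mean robustness:", sum(rs) / len(rs))
```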
Research on laser detonation pulse circuit with low-power based on super capacitor
NASA Astrophysics Data System (ADS)
Wang, Hao-yu; Hong, Jin; He, Aifeng; Jing, Bo; Cao, Chun-qiang; Ma, Yue; Chu, En-yi; Hu, Ya-dong
2018-03-01
To meet weapon-system demands for miniaturized, low-power laser initiating devices, a low-power pulsed laser detonation circuit based on a super capacitor was investigated. A dynamic model of the laser output was established in terms of the super capacitor's storage capacity, discharge voltage and programmable output pulse width. The output performance of the super capacitor under different energy storage capacities and discharge voltages was obtained by simulation. An experimental test system was set up, the laser diode of the low-power pulsed laser detonation circuit was tested, and the laser output waveforms of the laser diode at different energy storage capacities and discharge voltages were collected. Experiments show that the super capacitor-based detonation circuit discharges with high efficiency and good transient performance and meets low power consumption requirements, providing a reference for the engineering practice of miniaturized, low-power laser detonation systems.
The life-cycle research productivity of mathematicians and scientists.
Diamond, A M
1986-07-01
Declining research productivity with age is implied by economic models of life-cycle human capital investment but is denied by some recent empirical studies. The purpose of the present study is to provide new evidence on whether a scientist's output generally declines with advancing age. A longitudinal data set has been compiled for scientists and mathematicians at six major departments, including data on age, salaries, annual citations (stock of human capital), citations to current output (flow of human capital), and quantity of current output measured both in number of articles and in number of pages. Analysis of the data indicates that salaries peak from the early to mid-60s, whereas annual citations appear to peak from age 39 to 89 for different departments with a mean age of 59 for the 6 departments. The quantity and quality of current research output appear to decline continuously with age.
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a high number of times (> 1000), which may become impracticable when the landslide model has a high computation time cost (> several hours); 2. Landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model by a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity analysis of the surface horizontal displacements to the slip surface properties during the pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long running simulations. In particular, I identify the parameters which trigger the occurrence of a turning point marking a shift between a regime of low values of landslide displacements and one of high values.
Zhong, Zhixiong; Zhu, Yanzheng; Ahn, Choon Ki
2018-07-01
In this paper, we address the problem of reachable set estimation for continuous-time Takagi-Sugeno (T-S) fuzzy systems subject to unknown output delays. Based on the reachable set concept, a new controller design method is also discussed for such systems. An effective method is developed to attenuate the negative impact from the unknown output delays, which can degrade the performance/stability of systems. First, an augmented fuzzy observer is proposed to enable synchronous estimation of the system state and the disturbance term arising from the unknown output delays, which ensures that the reachable set of the estimation error is limited via the intersection operation of ellipsoids. Then, a compensation technique is employed to eliminate the influence on the system performance stemming from the unknown output delays. Finally, the effectiveness and correctness of the obtained theories are verified by the tracking control of autonomous underwater vehicles. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2013-12-01
Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
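A minimal sketch of what such a standardized interface looks like in practice, using a toy "bucket" model: control functions make the model fully controllable by a caller, and description functions make it self-describing. The real BMI specifies many more methods, and the variable names below merely imitate the flavor of the CSDMS Standard Names.

```python
# Toy BMI-style component: control functions (initialize/update/finalize) plus
# self-description queries a framework could use to mediate couplings.
class BucketModelBMI:
    def initialize(self, config=None):
        self.time, self.dt = 0.0, 1.0
        self.storage = 0.0                       # state variable [m]
        self.precip = 0.0                        # input variable [m/s]

    def update(self):                            # advance state one time step
        outflow = 0.1 * self.storage
        self.storage += self.dt * (self.precip - outflow)
        self.time += self.dt

    def finalize(self):
        self.storage = None

    # description functions (names imitate CSDMS-style standard names)
    def get_input_var_names(self):
        return ("atmosphere_water__precipitation_volume_flux",)

    def get_output_var_names(self):
        return ("bucket_water__depth",)

    def get_value(self, name):
        return {"bucket_water__depth": self.storage}[name]

    def set_value(self, name, value):
        if name == self.get_input_var_names()[0]:
            self.precip = value

m = BucketModelBMI()
m.initialize()
m.set_value(m.get_input_var_names()[0], 1e-3)    # a caller drives the model
for _ in range(10):
    m.update()
print(m.time, m.get_value("bucket_water__depth"))
```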
Artificial neural networks modelling the prednisolone nanoprecipitation in microfluidic reactors.
Ali, Hany S M; Blagden, Nicholas; York, Peter; Amani, Amir; Brook, Toni
2009-06-28
This study employs artificial neural networks (ANNs) to create a model to identify relationships between variables affecting drug nanoprecipitation using microfluidic reactors. The input variables examined were saturation levels of prednisolone, solvent and antisolvent flow rates, microreactor inlet angles and internal diameters, while particle size was the single output. ANN software was used to analyse a set of data obtained by random selection of the variables. The developed model was then assessed using a separate set of validation data and provided good agreement with the observed results. The antisolvent flow rate was found to have the dominant role on determining final particle size.
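A sketch of the described workflow using scikit-learn in place of the dedicated ANN software used in the study; the five inputs follow the abstract, but the training data below are synthetic stand-ins, not the paper's measurements.

```python
# Five formulation/process inputs -> one output (particle size), with a held-out
# validation set, mirroring the train/validate workflow described above.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# columns: saturation, solvent flow, antisolvent flow, inlet angle, inner diameter
X = rng.uniform(0, 1, size=(200, 5))
size = 300 - 150 * X[:, 2] + 80 * X[:, 0] * X[:, 1] + rng.normal(0, 10, 200)

X_tr, X_val, y_tr, y_val = train_test_split(X, size, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("validation R^2:", ann.score(X_val, y_val))   # agreement on unseen data
```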
Li, Dong-Juan; Li, Da-Peng
2017-09-14
In this paper, an adaptive output feedback control scheme is designed for uncertain nonlinear discrete-time systems. The considered systems are a class of multi-input multi-output nonaffine nonlinear systems, and they are in the nested lower triangular form. Furthermore, the unknown dead-zone inputs are nonlinearly embedded into the systems. These properties of the systems make it very challenging to construct a stable controller. By introducing a new diffeomorphism coordinate transformation, the controlled system is first transformed into a state-output model. By introducing a group of new variables, an input-output model is finally obtained. Based on the transformed model, the implicit function theorem is used to determine the existence of the ideal controllers and the approximators are employed to approximate the ideal controllers. By using the mean value theorem, the nonaffine functions of systems can become an affine structure but nonaffine terms still exist. The adaptation auxiliary terms are skillfully designed to cancel the effect of the dead-zone input. Based on the Lyapunov difference theorem, the boundedness of all the signals in the closed-loop system can be ensured and the tracking errors are kept in a bounded compact set. The effectiveness of the proposed technique is checked by a simulation study.
González-Domínguez, Elisa; Armengol, Josep; Rossi, Vittorio
2014-01-01
A mechanistic, dynamic model was developed to predict infection of loquat fruit by conidia of Fusicladium eriobotryae, the causal agent of loquat scab. The model simulates scab infection periods and their severity through the sub-processes of spore dispersal, infection, and latency (i.e., the state variables); change from one state to the following one depends on environmental conditions and on processes described by mathematical equations. Equations were developed using published data on F. eriobotryae mycelium growth, conidial germination, infection, and conidial dispersion pattern. The model was then validated by comparing model output with three independent data sets. The model accurately predicts the occurrence and severity of infection periods as well as the progress of loquat scab incidence on fruit (with concordance correlation coefficients >0.95). Model output agreed with expert assessment of the disease severity in seven loquat-growing seasons. Use of the model for scheduling fungicide applications in loquat orchards may help optimise scab management and reduce fungicide applications. PMID:25233340
2007-01-01
focus on identifying growth by income and housing costs. These, and other models are focused on the city itself and deal with growth over the course...2. This model employs a set of econometric models to project future population, household, and employment. The landscape is gridded into one... model in LEAM (LEAMecon) forecasts changes in output, employment and income over time based on changes in the market, technology, productivity and
Inverse analysis of turbidites by machine learning
NASA Astrophysics Data System (ADS)
Naruse, H.; Nakao, K.
2017-12-01
This study aims to propose a method to estimate paleo-hydraulic conditions of turbidity currents from ancient turbidites by using a machine-learning technique. In this method, numerical simulation was repeated under various initial conditions, which produces a data set of characteristic features of turbidites. Then, this data set of turbidites is used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to input nodes of the NN, and output nodes are expected to provide estimates of the initial conditions of the turbidity current. The optimization of weight coefficients of the NN is then conducted to reduce the root-mean-square difference between the true conditions and the output values of the NN. The empirical relationship between numerical results and the initial conditions is explored in this method, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce a NN that estimates paleo-hydraulic conditions from data of ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction of density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on the uniform slope. Grain-size distribution was discretized into 3 classes. Numerical simulation was repeated 1000 times, and thus 1000 beds of turbidites were used as the training data for a NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. As a result of this test, the initial conditions of validation data were successfully reconstructed by the NN. The estimated values show very small deviation from the true parameters. Compared to previous inverse modeling of turbidity currents, our methodology is superior especially in computational efficiency. Also, our methodology has advantages in extensibility and applicability to various sediment transport processes such as pyroclastic flows or debris flows.
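The overall inversion workflow can be sketched as follows, with a cheap analytic stand-in replacing the 1D shallow-water forward model: simulate many initial conditions, train a network mapping deposit characteristics back to those conditions, then evaluate on independent runs. Sizes and the stand-in physics are illustrative only.

```python
# Inverse modeling by supervised learning on forward-model runs. A toy decay
# profile stands in for the turbidity-current simulation.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

def forward(cond):                       # stand-in for the forward model
    c0, u0 = cond                        # initial concentration and velocity
    x = np.linspace(0, 1, 50)
    return c0 * np.exp(-x / (0.2 + u0)) + rng.normal(0, 0.01, x.size)  # "deposit"

train_cond = rng.uniform([0.1, 0.5], [1.0, 3.0], size=(1000, 2))
train_beds = np.array([forward(c) for c in train_cond])

nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
nn.fit(train_beds, train_cond)           # deposit features -> initial conditions

test_cond = rng.uniform([0.1, 0.5], [1.0, 3.0], size=(200, 2))
test_beds = np.array([forward(c) for c in test_cond])
err = np.abs(nn.predict(test_beds) - test_cond).mean(axis=0)
print("mean absolute error (c0, u0):", err)   # validation on independent runs
```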
NASA Astrophysics Data System (ADS)
Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.
2015-12-01
Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the World's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet, only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically model flood wave propagation due to a lack of either detailed channel and floodplain geometry or the absence of hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of errors and improvements and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model, that has its own simpler 1D routing scheme (DynRout) which has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) using a FM 2D flexible mesh forced with PCR output and (2) as in (1) but discriminating between 1D channels and 2D floodplains, and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory. The present research constitutes a first step into a globally applicable approach to fully couple hydrologic with hydrodynamic computations while discriminating between 1D-channels and 2D-floodplains. Such a fully-fledged set-up would be able to provide higher-order flood hazard information, e.g. time to flooding and flood duration, ultimately leading to improved flood risk assessment and management at the large scale.
Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center
NASA Technical Reports Server (NTRS)
Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre
2009-01-01
The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which the interested scientist can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them to the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in interpretation of simulation results. In the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general purpose runs with modeled conditions that are used for education and research. Archiving results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.
Comprehensive Assessment of Models and Events based on Library tools (CAMEL)
NASA Astrophysics Data System (ADS)
Rastaetter, L.; Boblitt, J. M.; DeZeeuw, D.; Mays, M. L.; Kuznetsova, M. M.; Wiegand, C.
2017-12-01
At the Community Coordinated Modeling Center (CCMC), the assessment of modeling skill using a library of model-data comparison metrics is taken to the next level by fully integrating the ability to request a series of runs with the same model parameters for a list of events. The CAMEL framework initiates and runs a series of selected, pre-defined simulation settings for participating models (e.g., WSA-ENLIL, SWMF-SC+IH for the heliosphere, SWMF-GM, OpenGGCM, LFM, GUMICS for the magnetosphere) and performs post-processing using existing tools for a host of different output parameters. The framework compares the resulting time series data with respective observational data and computes a suite of metrics such as Prediction Efficiency, Root Mean Square Error, Probability of Detection, Probability of False Detection, Heidke Skill Score for each model-data pair. The system then plots scores by event and aggregated over all events for all participating models and run settings. We are building on past experiences with model-data comparisons of magnetosphere and ionosphere model outputs in GEM2008, GEM-CEDAR CETI2010 and Operational Space Weather Model challenges (2010-2013). We can apply the framework also to solar-heliosphere as well as radiation belt models. The CAMEL framework takes advantage of model simulations described with Space Physics Archive Search and Extract (SPASE) metadata and a database backend design developed for a next-generation Run-on-Request system at the CCMC.
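For reference, standard definitions of several of the listed metrics, computed here for one model-observation time-series pair and an event threshold (the threshold and the synthetic series are illustrative):

```python
# Prediction Efficiency, RMSE, and contingency-table scores (POD, POFD, HSS)
# for a model/observation pair, using a threshold to define "events".
import numpy as np

def skill(model, obs, threshold):
    pe = 1 - np.sum((model - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    m, o = model >= threshold, obs >= threshold
    a = np.sum(m & o)       # hits
    b = np.sum(m & ~o)      # false alarms
    c = np.sum(~m & o)      # misses
    d = np.sum(~m & ~o)     # correct negatives
    pod = a / (a + c)
    pofd = b / (b + d)
    hss = 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return {"PE": pe, "RMSE": rmse, "POD": pod, "POFD": pofd, "HSS": hss}

rng = np.random.default_rng(3)
obs = rng.normal(size=500)
model = obs + rng.normal(0, 0.5, 500)    # imperfect forecast of the observations
print(skill(model, obs, threshold=1.0))
```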
Making Sense of Complexity with FRE, a Scientific Workflow System for Climate Modeling (Invited)
NASA Astrophysics Data System (ADS)
Langenhorst, A. R.; Balaji, V.; Yakovlev, A.
2010-12-01
A workflow is a description of a sequence of activities that is both precise and comprehensive. Capturing the workflow of climate experiments provides a record which can be queried or compared, and allows reproducibility of the experiments - sometimes even to the bit level of the model output. This reproducibility helps to verify the integrity of the output data, and enables easy perturbation experiments. GFDL's Flexible Modeling System Runtime Environment (FRE) is a production-level software project which defines and implements building blocks of the workflow as command line tools. The scientific, numerical and technical input needed to complete the workflow of an experiment is recorded in an experiment description file in XML format. Several key features add convenience and automation to the FRE workflow: ● Experiment inheritance makes it possible to define a new experiment with only a reference to the parent experiment and the parameters to override. ● Testing is a basic element of the FRE workflow: experiments define short test runs which are verified before the main experiment is run, and a set of standard experiments are verified with new code releases. ● FRE is flexible enough to support everything from short runs with mere megabytes of data to high-resolution experiments that run on thousands of processors for months, producing terabytes of output data. Experiments run in segments of model time; after each segment, the state is saved and the model can be checkpointed at that level. Segment length is defined by the user, but the number of segments per system job is calculated to fit optimally in the batch scheduler requirements. FRE provides job control across multiple segments, and tools to monitor and alter the state of long-running experiments. ● Experiments are entered into a Curator Database, which stores query-able metadata about the experiment and the experiment's output. ● FRE includes a set of standardized post-processing functions as well as the ability to incorporate user-level functions. FRE post-processing can take us all the way to the preparation of graphical output for a scientific audience, and publication of data on a public portal. ● Recent FRE development includes incorporating a distributed workflow to support remote computing.
NASA Astrophysics Data System (ADS)
Rivera, Diego; Rivas, Yessica; Godoy, Alex
2015-02-01
Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as Equifinality. We assessed the outputs from a lumped conceptual hydrological model to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the Equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m3s-1 after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms against the GLUE methodology, such as the lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.
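A hedged sketch of the GLUE steps described: Monte Carlo parameter sampling, a likelihood score per parameter set, a behavioural threshold, and bounds over the behavioural simulations. A toy two-parameter model stands in for the lumped conceptual model, and for brevity the bounds below are unweighted percentiles rather than likelihood-weighted quantiles.

```python
# GLUE-style analysis: sample parameter sets, score each against observations
# (Nash-Sutcliffe efficiency), keep "behavioural" sets, form uncertainty bounds.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(100)
q_obs = 30 + 10 * np.sin(t / 8.0) + rng.normal(0, 1, t.size)   # "observed" runoff

def model(a, b):                         # toy stand-in for the watershed model
    return a + b * np.sin(t / 8.0)

params = rng.uniform([10, 0], [50, 20], size=(5000, 2))
sims = np.array([model(a, b) for a, b in params])
nse = 1 - ((sims - q_obs) ** 2).sum(axis=1) / ((q_obs - q_obs.mean()) ** 2).sum()

behavioural = nse > 0.7                  # many sets may pass: equifinality
lo, hi = (np.percentile(sims[behavioural], p, axis=0) for p in (5, 95))
print(behavioural.sum(), "behavioural sets; mean bound width:", (hi - lo).mean())
```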
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-04-01
Model based analysis methods are relatively new approaches for processing the output data of radiation detectors in nuclear medicine imaging and spectroscopy. A class of such methods requires fast algorithms for fitting pulse models to experimental data. In order to apply integral-equation based methods for processing the preamplifier output pulses, this article proposes a fast and simple method for estimating the parameters of the well-known bi-exponential pulse model by solving an integral equation. The proposed method needs samples from only three points of the recorded pulse as well as its first and second order integrals. After optimizing the sampling points, the estimation results were calculated and compared with two traditional integration-based methods. Different noise levels (signal-to-noise ratios from 10 to 3000) were simulated for testing the functionality of the proposed method, then it was applied to a set of experimental pulses. Finally, the effect of quantization noise was assessed by studying different sampling rates. Promising results by the proposed method endorse it for future real-time applications.
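The bi-exponential pulse model referred to above can be written v(t) = A(e^(-t/τd) − e^(-t/τr)). The paper's three-sample integral-equation estimator is not reproduced here; as a baseline for comparison, the sketch below fits the model by ordinary least squares on a synthetic noisy pulse.

```python
# Baseline least-squares fit of the bi-exponential pulse model on noisy data.
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, A, tau_d, tau_r):          # decay constant tau_d, rise constant tau_r
    return A * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 200)             # arbitrary time units
truth = (1.0, 3.0, 0.3)
v = pulse(t, *truth) + rng.normal(0, 0.01, t.size)   # noisy recorded pulse

popt, _ = curve_fit(pulse, t, v, p0=(0.5, 2.0, 0.5))
print("true:", truth, "estimated:", popt)
```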
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2018-02-01
This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of the Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure an initial stabilizing controller to be learned from few input-output data and it can be next used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural networks controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi input-multi output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Verber, C. M.; Kenan, R. P.; Hartman, N. F.; Chapman, C. M.
1980-01-01
A laboratory model of a 16 channel integrated optical data preprocessor was fabricated and tested in response to a need for a device to evaluate the outputs of a set of remote sensors. It does this by accepting the outputs of these sensors, in parallel, as the components of a multidimensional vector descriptive of the data and comparing this vector to one or more reference vectors which are used to classify the data set. The comparison is performed by taking the difference between the signal and reference vectors. The preprocessor is wholly integrated upon the surface of a LiNbO3 single crystal with the exceptions of the source and the detector. He-Ne laser light is coupled in and out of the waveguide by prism couplers. The integrated optical circuit consists of a titanium infused waveguide pattern, electrode structures and grating beam splitters. The waveguide and electrode patterns, by virtue of their complexity, make the vector subtraction device the most complex integrated optical structure fabricated to date.
Scheduling Algorithm for Mission Planning and Logistics Evaluation (SAMPLE). Volume 1: User's guide
NASA Technical Reports Server (NTRS)
Dupnick, E.; Wiggins, D.
1980-01-01
An interactive computer program for automatically generating traffic models for the Space Transportation System (STS) is presented. Information concerning run stream construction, input data, and output data is provided. The flow of the interactive data stream is described. Error messages are specified, along with suggestions for remedial action. In addition, formats and parameter definitions for the payload data set (payload model), feasible combination file, and traffic model are documented.
Method for analyzing the chemical composition of liquid effluent from a direct contact condenser
Bharathan, Desikan; Parent, Yves; Hassani, A. Vahab
2001-01-01
A computational modeling method for predicting the chemical, physical, and thermodynamic performance of a condenser using calculations based on equations of physics for heat, momentum and mass transfer and equations of equilibrium thermodynamics to determine steady state profiles of parameters throughout the condenser. The method includes providing a set of input values relating to a condenser including liquid loading, vapor loading, and geometric characteristics of the contact medium in the condenser. The geometric and packing characteristics of the contact medium include the dimensions and orientation of a channel in the contact medium. The method further includes simulating performance of the condenser using the set of input values to determine a related set of output values such as outlet liquid temperature, outlet flow rates, pressures, and the concentration(s) of one or more dissolved noncondensable gas species in the outlet liquid. The method may also include iteratively performing the above computation steps using a plurality of sets of input values and then determining whether each of the resulting output values and performance profiles satisfies acceptance criteria.
Probabilistic Evaluation of Competing Climate Models
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Chatterjee, S.; Heyman, M.; Cressie, N.
2017-12-01
A standard paradigm for assessing the quality of climate model simulations is to compare what these models produce for past and present time periods, to observations of the past and present. Many of these comparisons are based on simple summary statistics called metrics. Here, we propose an alternative: evaluation of competing climate models through probabilities derived from tests of the hypothesis that climate-model-simulated and observed time sequences share common climate-scale signals. The probabilities are based on the behavior of summary statistics of climate model output and observational data, over ensembles of pseudo-realizations. These are obtained by partitioning the original time sequences into signal and noise components, and using a parametric bootstrap to create pseudo-realizations of the noise sequences. The statistics we choose come from working in the space of decorrelated and dimension-reduced wavelet coefficients. We compare monthly sequences of CMIP5 model output of average global near-surface temperature anomalies to similar sequences obtained from the well-known HadCRUT4 data set, as an illustration.
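The ingredients of the approach can be sketched as follows, assuming the PyWavelets package and using synthetic series in place of CMIP5 output and HadCRUT4 data: a wavelet decomposition separates a coarse "climate-scale" component, and a parametric bootstrap of the noise yields the null distribution of a comparison statistic.

```python
# Wavelet separation of signal and noise, plus a parametric bootstrap to judge
# whether model and observed sequences share a common climate-scale signal.
import numpy as np
import pywt

rng = np.random.default_rng(5)
t = np.arange(512)
signal = 0.002 * t + 0.3 * np.sin(t / 60.0)        # shared climate-scale part
obs = signal + rng.normal(0, 0.2, t.size)
model = signal + rng.normal(0, 0.2, t.size)

def coarse(x, level=4):
    coeffs = pywt.wavedec(x, "db4", level=level)
    return coeffs[0]                               # approximation = "signal"

stat = np.mean((coarse(obs) - coarse(model)) ** 2) # observed test statistic

null = []                                          # bootstrap under common signal
for _ in range(500):
    o = signal + rng.normal(0, 0.2, t.size)
    m = signal + rng.normal(0, 0.2, t.size)
    null.append(np.mean((coarse(o) - coarse(m)) ** 2))
print("p-value:", np.mean(np.array(null) >= stat))
```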
Conditional random fields for pattern recognition applied to structured data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burr, Tom; Skurikhin, Alexei
In order to predict labels from an output domain, Y, pattern recognition is used to gather measurements from an input domain, X. Image analysis is one setting where one might want to infer whether a pixel patch contains an object that is “manmade” (such as a building) or “natural” (such as a tree). Suppose the label for a pixel patch is “manmade”; the label for a nearby pixel patch is then more likely to be “manmade”, so there is structure in the output domain that can be exploited to improve pattern recognition performance. Modeling P(X) is difficult because features between parts of the model are often correlated. Thus, conditional random fields (CRFs) model structured data using the conditional distribution P(Y|X = x), without specifying a model for P(X), and are well suited for applications with dependent features. Our paper has two parts. First, we overview CRFs and their application to pattern recognition in structured problems. Our primary examples are image analysis applications in which there is dependence among samples (pixel patches) in the output domain. Second, we identify research topics and present numerical examples.
Hybrid neural network and fuzzy logic approaches for rendezvous and capture in space
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Castellano, Timothy
1991-01-01
The nonlinear behavior of many practical systems and unavailability of quantitative data regarding the input-output relations makes the analytical modeling of these systems very difficult. On the other hand, approximate reasoning-based controllers which do not require analytical models have demonstrated a number of successful applications such as the subway system in the city of Sendai. These applications have mainly concentrated on emulating the performance of a skilled human operator in the form of linguistic rules. However, the process of learning and tuning the control rules to achieve the desired performance remains a difficult task. Fuzzy Logic Control is based on fuzzy set theory. A fuzzy set is an extension of a crisp set. Crisp sets only allow full membership or no membership at all, whereas fuzzy sets allow partial membership. In other words, an element may partially belong to a set.
Eberts, Sandra M.
2011-01-01
A study of the Transport of Anthropogenic and Natural Contaminants to public-supply wells (TANC study) was begun in 2001 as part of the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program. The study was designed to shed light on factors that affect the vulnerability of groundwater and, more specifically, water from public-supply wells to contamination to provide a context for the NAWQA Program's earlier finding of mixtures of contaminants at low concentrations in groundwater near the water table in urban areas across the Nation. The TANC study has included investigations at both the regional (tens to thousands of square kilometers) and local (generally less than 25 square kilometers) scales. At the regional scale, the approach to investigation involves refining conceptual models of groundwater flow in hydrologically distinct settings and then constructing or updating a groundwater-flow model with particle tracking for each setting to help quantify regional water budgets, public-supply well contributing areas (areas contributing recharge to wells and zones of contribution for wells), and traveltimes from recharge areas to selected wells. A great deal of information about each contributing area is captured from the model output, including values for 170 variables that describe physical and (or) geochemical characteristics of the contributing areas. The information is subsequently stored in a relational database. Retrospective water-quality data from monitoring, domestic, and many of the public-supply wells, as well as data from newly collected samples at selected public-supply wells, also are stored in the database and are used with the model output to help discern the more important factors affecting vulnerability in many, if not most, settings. The study began with investigations in seven regional areas, and it benefits from being conducted as part of the NAWQA Program, in which consistent methods are used so that meaningful comparisons can be made. The hydrogeologic settings and regional-scale groundwater-flow models from the initial seven regional areas are documented in Chapter A of this U.S. Geological Survey Professional Paper. Also documented in Chapter A are the methods used to collect and compile the water-quality data, determine contributing areas of the public-supply wells, and characterize the oxidation-reduction (redox) conditions in each setting. A data dictionary for the database that was designed to enable joint storage and access to water-quality data and groundwater-flow model particle-tracking output is included as Appendix 1 of Chapter A. This chapter, Chapter B, documents modifications to the study methods and presents descriptions of two regional areas that were added to the TANC study in 2004.
NASA Astrophysics Data System (ADS)
Ushio, Toshimitsu; Takai, Shigemasa
Supervisory control is a general framework for logical control of discrete event systems. A supervisor assigns a set of control-disabled controllable events based on observed events so that the controlled discrete event system generates specified languages. In conventional supervisory control, it is assumed that observed events are determined deterministically by internal events. However, this assumption does not hold in discrete event systems with sensor errors or in mobile systems, where each observed event depends not only on an internal event but also on the state just before the occurrence of the internal event. In this paper, we model such a discrete event system by a Mealy automaton with a nondeterministic output function. We introduce two kinds of supervisors: one assigns each control action based on a permissive policy, the other based on an anti-permissive one. We show necessary and sufficient conditions for the existence of each supervisor. Moreover, we discuss the relationship between the supervisors in the case that the output function is deterministic.
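A toy illustration of the plant model in question: a Mealy automaton whose observed output for an internal event depends nondeterministically on the state just before the event, as with a faulty sensor. States, events and output sets below are hypothetical.

```python
# Mealy automaton with a nondeterministic output function:
# delta: (state, event) -> next state; lam: (state, event) -> set of outputs.
delta = {("s0", "a"): "s1", ("s1", "b"): "s0"}
lam = {("s0", "a"): {"x"}, ("s1", "b"): {"y", "x"}}  # 'b' at s1 may look like 'x'

def observations(state, events):
    """All observation sequences a supervisor might see for one internal run."""
    if not events:
        return [()]
    e, rest = events[0], events[1:]
    nxt = delta[(state, e)]
    return [(o,) + tail for o in lam[(state, e)] for tail in observations(nxt, rest)]

# One internal string can yield several distinct observed strings:
print(observations("s0", ["a", "b", "a"]))
```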
NASA Technical Reports Server (NTRS)
Enison, R. L.
1971-01-01
A computer program called Character String Scanner (CSS), is presented. It is designed to search a data set for any specified group of characters and then to flag this group. The output of the CSS program is a listing of the data set being searched with the specified group of characters being flagged by asterisks. Therefore, one may readily identify specific keywords, groups of keywords or specified lines of code internal to a computer program, in a program output, or in any other specific data set. Possible applications of this program include the automatic scan of an output data set for pertinent keyword data, the editing of a program to change the appearance of a certain word or group of words, and the conversion of a set of code to a different set of code.
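A small functional analogue of the scanner (the original program's exact output format may differ; the flagging-by-asterisks behaviour follows the description above):

```python
# List each line of a data set, marking occurrences of the specified character
# group with asterisks on a line beneath it.
def scan(lines, target):
    for line in lines:
        print(line)
        marker = [" "] * len(line)
        start = line.find(target)
        while start != -1:
            marker[start:start + len(target)] = "*" * len(target)
            start = line.find(target, start + 1)
        if "*" in marker:
            print("".join(marker))

scan(["CALL INIT(X)", "X = X + DX", "CALL OUTPUT(X)"], "CALL")
```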
Numerical modeling of rapidly varying flows using HEC-RAS and WSPG models.
Rao, Prasada; Hromadka, Theodore V
2016-01-01
The performance of two popular hydraulic models (HEC-RAS and WSPG) in modeling a hydraulic jump in an open channel is investigated. The numerical solutions are compared with a new experimental data set obtained for varying channel bottom slopes and flow rates. Both models satisfactorily predict the flow depths and the location of the jump. The results indicate that the output of the numerical models is sensitive to the value of the chosen roughness coefficient. For this application, the WSPG model is easier to implement, requiring only a few input variables.
Continuous-Time Bilinear System Identification
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan
2003-01-01
The objective of this paper is to describe a new method for identification of a continuous-time multi-input and multi-output bilinear system. The approach is to make judicious use of the linear-model properties of the bilinear system when subjected to a constant input. Two steps are required in the identification process. The first step is to use a set of pulse responses resulting from a constant input of one sample period to identify the state matrix, the output matrix, and the direct transmission matrix. The second step is to use another set of pulse responses with the same constant input over multiple sample periods to identify the input matrix and the coefficient matrices associated with the coupling terms between the state and the inputs. Numerical examples are given to illustrate the concept and the computational algorithm for the identification method.
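The linear-model property the method exploits, namely that a bilinear system driven by a constant input behaves as a linear system, can be shown with a small numerical sketch. The matrices and input level below are invented; this is not the paper's identification algorithm, only the property underlying its two steps:

```python
import numpy as np
from scipy.linalg import expm

# Single-input bilinear system dx/dt = A x + N x u + B u.
# Under a constant input u = c, the dynamics are linear with
# effective state matrix (A + c*N).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
N = np.array([[0.0, 0.0], [0.3, 0.1]])
B = np.array([[0.0], [1.0]])
c, dt = 0.8, 0.05

Aeff = A + c * N               # effective linear state matrix
Phi = expm(Aeff * dt)          # zero-order-hold transition matrix
x = np.array([[1.0], [0.0]])
for k in range(5):
    # exact discrete-time constant-input response
    x = Phi @ x + np.linalg.solve(Aeff, (Phi - np.eye(2)) @ B) * c
    print(k, x.ravel())
```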
NASA Astrophysics Data System (ADS)
Jatnieks, Janis; De Lucia, Marco; Sips, Mike; Dransch, Doris
2015-04-01
Many geoscience applications can benefit from testing many combinations of input parameters for geochemical simulation models. It is, however, a challenge to screen the input and output data from the model to identify the significant relationships between input parameters and output variables. For addressing this problem we propose a Visual Analytics approach that has been developed in an ongoing collaboration between computer science and geoscience researchers. Our Visual Analytics approach uses visualization methods of hierarchical horizontal axis, multi-factor stacked bar charts and interactive semi-automated filtering for input and output data together with automatic sensitivity analysis. This guides users towards significant relationships. We implement our approach as an interactive data exploration tool. It is designed with flexibility in mind, so that a diverse set of tasks such as inverse modeling, sensitivity analysis and model parameter refinement can be supported. Here we demonstrate the capabilities of our approach by two examples for gas storage applications. For the first example our Visual Analytics approach enabled the analyst to observe how the element concentrations change around previously established baselines in response to thousands of different combinations of mineral phases. This supported combinatorial inverse modeling for interpreting observations about the chemical composition of the formation fluids at the Ketzin pilot site for CO2 storage. The results indicate that, within the experimental error range, the formation fluid cannot be considered in local thermodynamic equilibrium with the mineral assemblage of the reservoir rock. This is a valuable insight from the predictive geochemical modeling for the Ketzin site. For the second example our approach supports sensitivity analysis for a reaction involving the reductive dissolution of pyrite with formation of pyrrhotite in the presence of gaseous hydrogen. We determine that this reaction is thermodynamically favorable under a broad range of conditions, including low temperatures and the absence of microbial catalysts. Our approach has potential for use in other applications that involve exploration of relationships in geochemical simulation model data.
Mastin, Larry G.; Randall, Michael J.; Schwaiger, Hans F.; Denlinger, Roger P.
2013-01-01
Ash3d is a three-dimensional Eulerian atmospheric model for tephra transport, dispersal, and deposition, written by the authors to study and forecast hazards of volcanic ash clouds and tephra fall. In this report, we explain how to set up simulations using both a web interface and an ASCII input file, and how to view and interpret model output. We also summarize the architecture of the model and some of its properties.
Assembly line performance and modeling
NASA Astrophysics Data System (ADS)
Rane, Arun B.; Sunnapwar, Vivek K.
2017-09-01
The automobile sector forms the backbone of the manufacturing sector. The vehicle assembly line is an important section of an automobile plant, where repetitive tasks are performed one after another at different workstations. In this work, a methodology is proposed to reduce cycle time and the time lost due to important factors such as equipment failure, shortage of inventory, absenteeism, set-up, material handling, rejection, and fatigue, in order to improve output within given cost constraints. Various relationships between these factors, the corresponding costs, and output are established through a scientific approach. The methodology is validated in three different vehicle assembly plants and may help practitioners optimize assembly lines using lean techniques.
Modeling the Afferent Dynamics of the Baroreflex Control System
Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.
2013-01-01
In this study we develop a modeling framework for predicting baroreceptor (BR) firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve endings, and modulation of the action potential frequency. The three subsystems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, takes blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model takes circumferential strain as an input and predicts receptor deformation as an output. Finally, the neural model takes receptor deformation as an input and predicts the BR firing rate as an output. Our results show that the nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the arterial wall. This was observed when testing the models in multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, which gives rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework, in combination with sensitivity analysis and parameter estimation, can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
A technology mapping based on graph of excitations and outputs for finite state machines
NASA Astrophysics Data System (ADS)
Kania, Dariusz; Kulisz, Józef
2017-11-01
A new, efficient technology mapping method for FSMs, dedicated to PAL-based PLDs, is proposed. The essence of the method is a search for the minimal set of PAL-based logic blocks that covers the set of multiple-output implicants describing the transition and output functions of an FSM. The method is based on a new graph concept: the Graph of Excitations and Outputs. The proposed algorithm was tested on FSM benchmarks, and the results were compared with the classical technology mapping of FSMs.
Su, Jingjun; Du, Xinzhong; Li, Xuyong
2018-05-16
Uncertainty analysis is an important prerequisite for model application. However, existing phosphorus (P) loss indexes or indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of the parameters and modeling outputs of a non-point source (NPS) P indicator constructed in R, and also examined the influence of the subjective choices of likelihood formulation and acceptability threshold in GLUE on the model outputs. The results indicated the following. (1) The parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) appeared better at accentuating high-likelihood simulations than the exponential function (L2). (3) A combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths while assuring the goodness of fit of the whole set of model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold to balance high modeling efficiency against high bracketing efficiency. The results of this study could provide (1) an option for conducting NPS modeling on a single computing platform, (2) important references for parameter setting in NPS model development in similar regions, (3) useful suggestions for applying the GLUE method in studies with different emphases, and (4) important insights into watershed P management in similar regions.
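For readers unfamiliar with GLUE, a generic sketch of the procedure (behavioral selection above an acceptability threshold, likelihood weighting, percentile uncertainty bands) might look like the following. The toy linear model and all numbers are placeholders, not the paper's R-based P indicator:

```python
import numpy as np

# Generic GLUE sketch: sample parameter sets, score each run with a
# Nash-Sutcliffe likelihood, keep "behavioral" sets above an
# acceptability threshold, and form uncertainty bands from the
# retained simulations. The toy model y = a*x + b is a stand-in.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
obs = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

n_samples, threshold = 5000, 0.55                   # acceptability threshold
a = rng.uniform(0.0, 4.0, n_samples)
b = rng.uniform(-2.0, 4.0, n_samples)
sims = a[:, None] * x + b[:, None]

sse = ((sims - obs) ** 2).sum(axis=1)
nse = 1.0 - sse / ((obs - obs.mean()) ** 2).sum()   # Nash-Sutcliffe efficiency
behavioral = nse > threshold
w = nse[behavioral] / nse[behavioral].sum()         # likelihood weights

mean_pred = (w[:, None] * sims[behavioral]).sum(axis=0)  # weighted prediction
lo, hi = np.percentile(sims[behavioral], [5, 95], axis=0)
print("behavioral sets:", int(behavioral.sum()))
print("mean 5-95% band width:", float((hi - lo).mean()))
```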
Program Merges SAR Data on Terrain and Vegetation Heights
NASA Technical Reports Server (NTRS)
Siqueira, Paul; Hensley, Scott; Rodriguez, Ernesto; Simard, Marc
2007-01-01
X/P Merge is a computer program that estimates ground-surface elevations and vegetation heights from multiple sets of data acquired by the GeoSAR instrument [a terrain-mapping synthetic-aperture radar (SAR) system that operates in the X and P bands]. X/P Merge software combines data from X- and P-band digital elevation models, SAR backscatter magnitudes, and interferometric correlation magnitudes into a simplified set of output topographical maps of ground-surface elevation and tree height.
Reveal, A General Reverse Engineering Algorithm for Inference of Genetic Network Architectures
NASA Technical Reports Server (NTRS)
Liang, Shoudan; Fuhrman, Stefanie; Somogyi, Roland
1998-01-01
Given the imminent availability of gene expression mappings covering whole genomes during development, health and disease, we seek computational methods to maximize functional inference from such large data sets. Is it possible, in principle, to completely infer a complex regulatory network architecture from input/output patterns of its variables? We investigated this possibility using binary models of genetic networks. Trajectories, or state transition tables of Boolean nets, resemble time series of gene expression. By systematically analyzing the mutual information between input states and output states, one is able to infer the sets of input elements controlling each element or gene in the network. This process is unequivocal and exact for complete state transition tables. We implemented this REVerse Engineering ALgorithm (REVEAL) in a C program, and found the problem to be tractable within the conditions tested so far. For n = 50 (elements) and k = 3 (inputs per element), the analysis of incomplete state transition tables (100 state transition pairs out of a possible 10(exp 15)) reliably produced the original rule and wiring sets. While this study is limited to synchronous Boolean networks, the algorithm is generalizable to include multi-state models, essentially allowing direct application to realistic biological data sets. The ability to adequately solve the inverse problem may enable in-depth analysis of complex dynamic systems in biology and other fields.
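The core of REVEAL, comparing the mutual information M(X;Y) between candidate input sets and an output element against the output entropy H(Y), can be sketched compactly. The following is an illustrative reimplementation of that criterion in Python, not the authors' C program:

```python
import numpy as np
from itertools import combinations

def entropy(cols):
    # Shannon entropy (bits) of the joint distribution of binary columns
    _, counts = np.unique(cols, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def infer_inputs(states_t, states_t1, target, k_max=3):
    # Find the smallest set of elements whose time-t values fully
    # determine element `target` at time t+1, i.e. M(X;Y) = H(Y).
    y = states_t1[:, [target]]
    h_y = entropy(y)
    n = states_t.shape[1]
    for k in range(1, k_max + 1):
        for inputs in combinations(range(n), k):
            x = states_t[:, list(inputs)]
            mi = entropy(x) + h_y - entropy(np.hstack([x, y]))
            if np.isclose(mi, h_y):
                return inputs
    return None

# Toy network: element 2 at time t+1 is the XOR of elements 0 and 1 at t
rng = np.random.default_rng(0)
s_t = rng.integers(0, 2, size=(100, 3))
s_t1 = s_t.copy()
s_t1[:, 2] = s_t[:, 0] ^ s_t[:, 1]
print(infer_inputs(s_t, s_t1, target=2))   # -> (0, 1)
```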
Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future
NASA Technical Reports Server (NTRS)
Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha
2008-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF) proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
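As a rough illustration of why an information-theoretic criterion can outperform the linear cross-correlation function for lag selection, consider a toy system with a purely quadratic (hence uncorrelated but dependent) input-output relationship. The histogram-based mutual information estimator below is a generic stand-in, not the paper's XEF/JCE implementation:

```python
import numpy as np

def mutual_info(x, y, bins=16):
    # Histogram estimate of mutual information (nats)
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()

# Toy nonlinear dynamic: y_t depends on u_{t-3} squared, so the linear
# XCF misses the lag while the MI criterion finds it.
rng = np.random.default_rng(0)
N = 5000
u = rng.normal(size=N)
y = np.zeros(N)
y[3:] = u[:-3] ** 2 + 0.1 * rng.normal(size=N - 3)

for lag in range(1, 6):
    ul, yl = u[:-lag], y[lag:]
    xcf = np.corrcoef(ul, yl)[0, 1]
    print(f"lag {lag}: |XCF| = {abs(xcf):.3f}, MI = {mutual_info(ul, yl):.3f}")
```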
Ruys, Andrew J.
2018-01-01
Electrospun fibres have gained broad interest in biomedical applications, including tissue engineering scaffolds, due to their potential in mimicking extracellular matrix and producing structures favourable for cell and tissue growth. The development of scaffolds often involves multivariate production parameters and multiple output characteristics to define product quality. In this study on electrospinning of polycaprolactone (PCL), response surface methodology (RSM) was applied to investigate the determining parameters and find optimal settings to achieve the desired properties of fibrous scaffold for acetabular labrum implant. The results showed that solution concentration influenced fibre diameter, while elastic modulus was determined by solution concentration, flow rate, temperature, collector rotation speed, and interaction between concentration and temperature. Relationships between these variables and outputs were modelled, followed by an optimization procedure. Using the optimized setting (solution concentration of 10% w/v, flow rate of 4.5 mL/h, temperature of 45 °C, and collector rotation speed of 1500 RPM), a target elastic modulus of 25 MPa could be achieved at a minimum possible fibre diameter (1.39 ± 0.20 µm). This work demonstrated that multivariate factors of production parameters and multiple responses can be investigated, modelled, and optimized using RSM. PMID:29562614
Efficiency measurement and the operationalization of hospital production.
Magnussen, J
1996-01-01
OBJECTIVE. To discuss the usefulness of efficiency measures as instruments of monitoring and resource allocation by analyzing their invariance to changes in the operationalization of hospital production. STUDY SETTING. Norwegian hospitals over the three-year period 1989-1991. STUDY DESIGN. Efficiency is measured using Data Envelopment Analysis (DEA). The distribution of efficiency and the ranking of hospitals is compared across models using various distribution-free tests. DATA COLLECTION. Input and output data are collected by the Norwegian Central Bureau of Statistics. PRINCIPAL FINDINGS. The distribution of efficiency is found to be unaffected by changes in the specification of hospital output. Both the ranking of hospitals and the scale properties of the technology, however, are found to depend on the choice of output specification. CONCLUSION. Extreme care should be taken before resource allocation is based on DEA-type efficiency measures alone. Both the identification of efficient and inefficient hospitals and the cardinal measure of inefficiency will depend on the specification of output. Since the scale properties of the technology also vary with the specification of output, the search for an optimal hospital size may be futile. PMID:8617607
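For reference, the input-oriented CCR formulation of DEA that typically underlies such hospital studies can be solved as one small linear program per unit: find the smallest theta such that a nonnegative combination of peers produces at least the unit's outputs using at most theta times its inputs. The sketch below uses invented data for four units with two inputs and one output; it is not the specification used in the study:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA sketch with illustrative data.
X = np.array([[5.0, 3.0], [8.0, 1.0], [7.0, 4.0], [4.0, 6.0]])  # inputs
Y = np.array([[2.0], [3.0], [2.0], [1.0]])                       # outputs
n, m = X.shape
s = Y.shape[1]

for j0 in range(n):
    # variables z = [theta, lambda_1..lambda_n]; minimize theta
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j x_j <= theta * x_0   (input constraints)
    A_in = np.hstack([-X[j0][:, None], X.T])
    # sum_j lambda_j y_j >= y_0           (output constraints)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(0, None)] * (1 + n))
    print(f"unit {j0}: efficiency = {res.x[0]:.3f}")   # 1.0 means efficient
```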
Cascade Back-Propagation Learning in Neural Networks
NASA Technical Reports Server (NTRS)
Duong, Tuan A.
2003-01-01
The cascade back-propagation (CBP) algorithm is the basis of a conceptual design for accelerating learning in artificial neural networks. The neural networks would be implemented as analog very-large-scale integrated (VLSI) circuits, and circuits to implement the CBP algorithm would be fabricated on the same VLSI circuit chips with the neural networks. Heretofore, artificial neural networks have learned slowly because it has been necessary to train them via software, for lack of a good on-chip learning technique. The CBP algorithm is an on-chip technique that provides for continuous learning in real time. Artificial neural networks are trained by example: A network is presented with training inputs for which the correct outputs are known, and the algorithm strives to adjust the weights of synaptic connections in the network to make the actual outputs approach the correct outputs. The input data are generally divided into three parts. Two of the parts, called the "training" and "cross-validation" sets, respectively, must be such that the corresponding input/output pairs are known. During training, the cross-validation set enables verification of the status of the input-to-output transformation learned by the network to avoid over-learning. The third part of the data, termed the "test" set, consists of the inputs that are required to be transformed into outputs; this set may or may not include the training set and/or the cross-validation set. Proposed neural-network circuitry for on-chip learning would be divided into two distinct networks: one for training and one for validation. Both networks would share the same synaptic weights.
Simulation of process identification and controller tuning for flow control system
NASA Astrophysics Data System (ADS)
Chew, I. M.; Wong, F.; Bono, A.; Wong, K. I.
2017-06-01
The PID controller is undeniably the most popular method used in controlling various industrial processes. The ability to tune its three elements has allowed the controller to deal with the specific needs of industrial processes. This paper discusses the three control actions and the improvement of controller robustness through combinations of these control actions in various forms. A plant model is simulated using the Process Control Simulator in order to evaluate controller performance. First, the open-loop response of the plant is studied by applying a step input to the plant and collecting the output data. An FOPDT model of the physical plant is then formed using both Matlab-Simulink and the PRC method. Next, the controller settings are calculated to find the values of Kc and τi that give satisfactory control in the closed-loop system. The performance of the closed-loop system is then analyzed through setpoint tracking and disturbance rejection tests. To optimize the overall performance of the physical system, a refined tuning (or detuning) of the PID controller is further conducted to ensure a consistent closed-loop response to setpoint changes and disturbances. As a result, PB = 100 (%) and τi = 2.0 (s) are preferred for setpoint tracking, while PB = 100 (%) and τi = 2.5 (s) are selected for rejecting the imposed disturbance. In a nutshell, the choice of tuning values likewise depends on the required control objective for the stability performance of the overall physical model.
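A minimal numerical sketch of the workflow described above (FOPDT plant, PI controller, setpoint-tracking check) is given below. The process parameters and the Ziegler-Nichols open-loop rules stand in for the paper's PRC-based tuning and are purely illustrative:

```python
import numpy as np

# Closed-loop simulation of a PI controller on a first-order-plus-
# dead-time (FOPDT) process. Kc and tau_i come from the classic
# Ziegler-Nichols open-loop rules; all numbers are illustrative.
Kp, tau, theta = 2.0, 10.0, 2.0        # FOPDT process parameters
Kc = 0.9 * tau / (Kp * theta)          # ZN PI gain
tau_i = theta / 0.3                    # ZN PI reset time

dt, T = 0.05, 60.0
n = int(T / dt)
delay = int(theta / dt)
y = np.zeros(n)
u_hist = np.zeros(n + delay)           # controller output, shifted by the dead time
integral, sp = 0.0, 1.0                # unit setpoint step

for k in range(1, n):
    e = sp - y[k - 1]
    integral += e * dt
    u_hist[k + delay] = Kc * (e + integral / tau_i)
    # first-order process driven by the time-delayed control signal
    y[k] = y[k - 1] + dt * (-y[k - 1] + Kp * u_hist[k]) / tau

print("final value:", round(y[-1], 3), "overshoot:", round(y.max() - sp, 3))
```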
Deriving Tools from Real-time Runs: A New CCMC Support for SEC and AFWA
NASA Technical Reports Server (NTRS)
Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha
2008-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.
Optimizing Force Deployment and Force Structure for the Rapid Deployment Force
1984-03-01
[OCR fragments of the report's front matter: "Analysis ... 97", "Experimental Design ... 99", "IX. Use of a Flexible Response Surface ... 102".] ... a ... programming methodology, where the required system ... is input and the model optimizes the number, type, cargo ... "to obtain new computer outputs" (Ref 38:23). The methodology can be used with any decision model, linear or nonlinear.
NASA Technical Reports Server (NTRS)
Mcnider, Richard T.; Christy, John R.; Cox, Gregory N.
1993-01-01
In order to better understand the dynamics of the global atmosphere, a data set of precision temperature measurements was developed using the NASA-built Microwave Sounding Unit. Modeling research was carried out to validate global model outputs using various satellite data. Idealized flows in a rotating annulus were studied and applied to the general circulation of the atmosphere. Dynamic stratospheric ozone fluctuations were investigated. An extensive bibliography and several reprints are appended.
Interactive Correlation Analysis and Visualization of Climate Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Kwan-Liu
The relationship between our ability to analyze and extract insights from visualizations of climate model output and the capability of the available resources to produce those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, the number of different simulations performed with a climate model, or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, new approaches to visualization and analysis of climate data are needed if we are to gain all the insights available in the ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models, such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was carried out jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.
Automatic image equalization and contrast enhancement using Gaussian mixture modeling.
Celik, Turgay; Tjahjadi, Tardi
2012-01-01
In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast-equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To account for the hypothesis that homogeneous regions in the image correspond to homogeneous sections (i.e., sets of Gaussian components) of the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces enhanced images that are better than or comparable to those of several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
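A compressed sketch of the core idea, fitting a Gaussian mixture to the gray levels, cutting the dynamic range where the dominant weighted component changes, and remapping each interval through a cumulative distribution, could look as follows. The use of scikit-learn, the empirical (rather than model-based) CDF, and the synthetic image are all simplifications of the paper's method:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Fit a 3-component GMM to the gray levels of a synthetic image.
rng = np.random.default_rng(0)
img = np.clip(rng.normal(100.0, 25.0, (64, 64)), 0, 255)
gray = img.reshape(-1)
gmm = GaussianMixture(n_components=3, random_state=0).fit(gray[:, None])

means = gmm.means_.ravel()
stds = np.sqrt(gmm.covariances_.ravel())
order = np.argsort(means)

# Partition [0, 255] where the dominant (weighted) component changes.
grid = np.linspace(0.0, 255.0, 1024)
pdf = gmm.weights_[order] * norm.pdf(grid[:, None], means[order], stds[order])
dominant = pdf.argmax(axis=1)
cuts = grid[np.flatnonzero(np.diff(dominant)) + 1]
edges = np.r_[0.0, cuts, 255.0]

# Remap each input interval onto its output interval via the
# interval's empirical CDF (a simplification of the paper's mapping).
out = np.empty_like(gray)
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (gray >= lo) & (gray <= hi)
    if mask.any():
        vals = gray[mask]
        ranks = np.argsort(np.argsort(vals))        # empirical CDF ranks
        out[mask] = lo + (hi - lo) * ranks / max(vals.size - 1, 1)
print("intervals:", len(edges) - 1, "output range:", out.min(), out.max())
```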
Can We Use Single-Column Models for Understanding the Boundary Layer Cloud-Climate Feedback?
NASA Astrophysics Data System (ADS)
Dal Gesso, S.; Neggers, R. A. J.
2018-02-01
This study explores how to drive Single-Column Models (SCMs) with existing data sets of General Circulation Model (GCM) outputs, with the aim of studying the boundary layer cloud response to climate change in the marine subtropical trade wind regime. The EC-EARTH SCM is driven with the large-scale tendencies and boundary conditions as derived from two different data sets, consisting of high-frequency outputs of GCM simulations. SCM simulations are performed near Barbados Cloud Observatory in the dry season (January-April), when fair-weather cumulus is the dominant low-cloud regime. This climate regime is characterized by a near equilibrium in the free troposphere between the long-wave radiative cooling and the large-scale advection of warm air. In the SCM, this equilibrium is ensured by scaling the monthly mean dynamical tendency of temperature and humidity such that it balances that of the model physics in the free troposphere. In this setup, the high-frequency variability in the forcing is maintained, and the boundary layer physics acts freely. This technique yields representative cloud amount and structure in the SCM for the current climate. Furthermore, the cloud response to a sea surface warming of 4 K as produced by the SCM is consistent with that of the forcing GCM.
Posada, David
2006-01-01
ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102
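The information-criterion arithmetic behind such a server is straightforward to reproduce. The sketch below computes AIC, BIC, and Akaike weights from hypothetical likelihood scores and illustrative parameter counts (substitution parameters only); all numbers are invented:

```python
import numpy as np

# Candidate models: (maximized log-likelihood, number of free parameters)
models = {"JC": (-3520.4, 0), "HKY": (-3310.2, 4), "GTR+G": (-3295.8, 9)}
n_sites = 1200  # alignment length used as the BIC sample size

rows = []
for name, (lnL, k) in models.items():
    aic = -2.0 * lnL + 2.0 * k                 # Akaike information criterion
    bic = -2.0 * lnL + k * np.log(n_sites)     # Bayesian information criterion
    rows.append((name, aic, bic))

aics = np.array([r[1] for r in rows])
delta = aics - aics.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()  # Akaike weights
for (name, aic, bic), w in zip(rows, weights):
    print(f"{name:>6}: AIC={aic:9.1f}  BIC={bic:9.1f}  w={w:.3f}")
```

The Akaike weights are one way to express the model selection uncertainty the abstract mentions, and can serve as model-averaging weights.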
A data mining framework for time series estimation.
Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin
2010-04-01
Time series estimation techniques are usually employed in biomedical research to derive variables that are less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define the target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when the TTS and the RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from the RTS to the dissimilarity between the true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address the biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features.
NASA Technical Reports Server (NTRS)
Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor)
2017-01-01
A current source logic gate with depletion-mode field effect transistors ("FETs") and resistors may include a current source, a current steering switch input stage, and a resistor divider level shifting output stage. The current source may include a transistor and a current source resistor. The current steering switch input stage may include a transistor to steer current and set the output stage bias point depending on the input logic signal state. The resistor divider level shifting output stage may include a first resistor and a second resistor to set the output stage bias point and produce valid output logic signal states. The transistor of the current steering switch input stage may function as a switch to provide at least two operating points.
Mathematical Model of Naive T Cell Division and Survival IL-7 Thresholds.
Reynolds, Joseph; Coles, Mark; Lythe, Grant; Molina-París, Carmen
2013-01-01
We develop a mathematical model of the peripheral naive T cell population to study the change in human naive T cell numbers from birth to adulthood, incorporating thymic output and the availability of interleukin-7 (IL-7). The model is formulated as three ordinary differential equations: two describe T cell numbers, in a resting state and progressing through the cell cycle. The third is introduced to describe changes in IL-7 availability. Thymic output is a decreasing function of time, representative of the thymic atrophy observed in aging humans. Each T cell is assumed to possess two interleukin-7 receptor (IL-7R) signaling thresholds: a survival threshold and a second, higher, proliferation threshold. If the IL-7R signaling strength is below its survival threshold, a cell may undergo apoptosis. When the signaling strength is above the survival threshold, but below the proliferation threshold, the cell survives but does not divide. Signaling strength above the proliferation threshold enables entry into cell cycle. Assuming that individual cell thresholds are log-normally distributed, we derive population-average rates for apoptosis and entry into cell cycle. We have analyzed the adiabatic change in homeostasis as thymic output decreases. With a parameter set representative of a healthy individual, the model predicts a unique equilibrium number of T cells. In a parameter range representative of persistent viral or bacterial infection, where naive T cell cycle progression is impaired, a decrease in thymic output may result in the collapse of the naive T cell repertoire.
Autonomous frequency domain identification: Theory and experiment
NASA Technical Reports Server (NTRS)
Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.
1989-01-01
The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate ĥ = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty Δ_m = p − p̂ is then estimated by the cross-spectral estimate Δ̂ = P_ue/P_uu, where e = y − ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve-fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate Δ̂ of the additive uncertainty Δ_m are subsequently available to be used for optimization of robust controller performance and stability.
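The estimation chain described above maps naturally onto standard spectral-density routines. The following sketch, with an invented second-order plant and a deliberately reduced-order model standing in for the curve fit, illustrates forming ĥ = P_uy/P_uu and the additive-uncertainty estimate Δ̂ = P_ue/P_uu with SciPy:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 100.0, 2 ** 14
u = rng.normal(size=n)                       # stochastic input

b, a = signal.butter(2, 0.1)                 # "true" plant p (illustrative)
y = signal.lfilter(b, a, u)                  # plant output

f, Puu = signal.welch(u, fs=fs, nperseg=1024)
_, Puy = signal.csd(u, y, fs=fs, nperseg=1024)
h = Puy / Puu                                # nonparametric estimate of p

b2, a2 = signal.butter(1, 0.1)               # reduced-order parametric model
y_hat = signal.lfilter(b2, a2, u)            # computed model output
e = y - y_hat                                # output error
_, Pue = signal.csd(u, e, fs=fs, nperseg=1024)
delta = Pue / Puu                            # additive-uncertainty estimate
print("peak |delta|/|h| ratio:", float(np.abs(delta).max() / np.abs(h).max()))
```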
NASA Astrophysics Data System (ADS)
Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.
2016-12-01
The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess the uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each of the forecasted variables, were produced. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify high flood risk zones.
Atmospheric Science Data Center
2015-07-21
L2 Lite Standard Products: The TES Lite products are intended to simplify TES data usage, including data/model and data/data comparisons. This product can be used for science analysis ... PGE corrected a date range issue in the originally delivered standard output. An updated set of TES L2 Lite standard products was ...
Statistical basis and outputs of stable isotope mixing models: Comment on Fry (2013)
A recent article by Fry (2013; Mar Ecol Prog Ser 472:1−13) reviewed approaches to solving underdetermined stable isotope mixing systems, and presented a new graphical approach and set of summary statistics for the analysis of such systems. In his review, Fry (2013) mis-characteri...
USDA-ARS?s Scientific Manuscript database
Given a set of biallelic molecular markers, such as SNPs, with genotype values encoded numerically on a collection of plant, animal or human samples, the goal of genetic trait prediction is to predict the quantitative trait values by simultaneously modeling all marker effects. Genetic trait predicti...
NASA Technical Reports Server (NTRS)
Redwine, W. J.
1979-01-01
A timeline containing altitude, control surface deflection rates and angles, hinge moment loads, thrust vector control gimbal rates, and main throttle settings is used to derive the model. The timeline is constructed from the output of one or more trajectory simulation programs.
Surveillance system and method having parameter estimation and operating mode partitioning
NASA Technical Reports Server (NTRS)
Bickford, Randall L. (Inventor)
2003-01-01
A system and method for monitoring an apparatus or process asset including partitioning an unpartitioned training data set into a plurality of training data subsets each having an operating mode associated thereto; creating a process model comprised of a plurality of process submodels each trained as a function of at least one of the training data subsets; acquiring a current set of observed signal data values from the asset; determining an operating mode of the asset for the current set of observed signal data values; selecting a process submodel from the process model as a function of the determined operating mode of the asset; calculating a current set of estimated signal data values from the selected process submodel for the determined operating mode; and outputting the calculated current set of estimated signal data values for providing asset surveillance and/or control.
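A toy rendition of the claimed workflow, partitioning training data by operating mode, fitting one submodel per mode, then selecting the submodel matching the current mode to produce estimated signal values, might look like this. The per-mode means and simple residual thresholds are stand-ins for whatever process submodels an actual surveillance system would use:

```python
import numpy as np

# Mode-partitioned surveillance sketch with invented two-signal data.
rng = np.random.default_rng(0)
train = {
    "startup": rng.normal([10.0, 200.0], [1.0, 5.0], size=(500, 2)),
    "steady":  rng.normal([50.0, 450.0], [2.0, 8.0], size=(500, 2)),
}
# One simple submodel per operating mode: per-signal mean and spread.
submodels = {mode: (d.mean(axis=0), d.std(axis=0)) for mode, d in train.items()}

def estimate(observation, mode):
    mu, sigma = submodels[mode]             # select submodel by operating mode
    residual = (observation - mu) / sigma   # standardized residual
    alert = bool(np.any(np.abs(residual) > 4.0))
    return mu, alert                        # estimated signal values + alarm flag

obs = np.array([49.0, 520.0])
print(estimate(obs, "steady"))
```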
GROSS- GAMMA RAY OBSERVATORY ATTITUDE DYNAMICS SIMULATOR
NASA Technical Reports Server (NTRS)
Garrick, J.
1994-01-01
The Gamma Ray Observatory (GRO) spacecraft will constitute a major advance in gamma ray astronomy by offering the first opportunity for comprehensive observations in the range of 0.1 to 30,000 megaelectronvolts (MeV). The Gamma Ray Observatory Attitude Dynamics Simulator, GROSS, is designed to simulate this mission. The GRO Dynamics Simulator consists of three separate programs: the Standalone Profile Program; the Simulator Program, which contains the Simulation Control Input/Output (SCIO) Subsystem, the Truth Model (TM) Subsystem, and the Onboard Computer (OBC) Subsystem; and the Postprocessor Program. The Standalone Profile Program models the environment of the spacecraft and generates a profile data set for use by the simulator. This data set contains items such as individual external torques; GRO spacecraft, Tracking and Data Relay Satellite (TDRS), and solar and lunar ephemerides; and star data. The Standalone Profile Program is run before a simulation. The SCIO subsystem is the executive driver for the simulator. It accepts user input, initializes parameters, controls simulation, and generates output data files and simulation status display. The TM subsystem models the spacecraft dynamics, sensors, and actuators. It accepts ephemerides, star data, and environmental torques from the Standalone Profile Program. With these and actuator commands from the OBC subsystem, the TM subsystem propagates the current state of the spacecraft and generates sensor data for use by the OBC and SCIO subsystems. The OBC subsystem uses sensor data from the TM subsystem, a Kalman filter (for attitude determination), and control laws to compute actuator commands to the TM subsystem. The OBC subsystem also provides output data to the SCIO subsystem for output to the analysts. The Postprocessor Program is run after simulation is completed. It generates printer and CRT plots and tabular reports of the simulated data at the direction of the user. GROSS is written in FORTRAN 77 and ASSEMBLER and has been implemented on a VAX 11/780 under VMS 4.5. It has a virtual memory requirement of 255k. GROSS was developed in 1986.
NASA Astrophysics Data System (ADS)
Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit
2017-07-01
In this paper, the conventional relay feedback test is modified for the modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time delay. An ideal relay and the unknown system are connected through a negative feedback loop to bring about a sustained oscillatory output around a non-zero setpoint. The obtained limit cycle information is then substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped, and critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included to validate the proposed identification scheme through computer simulations. Subsequently, comparisons between the estimated model and the true system are drawn using the integral absolute error criterion and frequency response plots. Finally, the output responses obtained through simulations are verified experimentally on a real-time liquid level control system using a Yokogawa CENTUM CS3000 distributed control system setup.
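For context, the classical describing-function reading of a relay experiment (Astrom-Hagglund), which the paper's exact limit-cycle equations refine, reduces to a few lines. The relay amplitude, measured limit-cycle amplitude, and period below are invented:

```python
import numpy as np

# Describing-function reading of a relay limit cycle: from the relay
# amplitude h and the measured output amplitude a and period Pu,
# estimate the ultimate gain and a Ziegler-Nichols PI setting.
h = 1.0            # relay output amplitude
a = 0.4            # measured limit-cycle amplitude at the plant output
Pu = 12.0          # measured oscillation period (s)

Ku = 4.0 * h / (np.pi * a)     # ultimate gain from the describing function
Kc = 0.45 * Ku                 # Ziegler-Nichols PI gain
tau_i = Pu / 1.2               # Ziegler-Nichols PI reset time
print(f"Ku={Ku:.2f}, Kc={Kc:.2f}, tau_i={tau_i:.2f} s")
```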
NASA Astrophysics Data System (ADS)
María Palomares, Ana; Navarro, Jorge; Grifoll, Manel; Pallares, Elena; Espino, Manuel
2016-04-01
This work shows the main results of the HAREAMAR project (comprising the HAREMAR, ENE2012-38772-C02-01, and DARDO, ENE2012-38772-C02-02, projects) concerning local wind, wave and current simulation at St. Jordi Bay (NW Mediterranean Sea). Offshore wind energy has become one of the main topics in wind energy research. Although there are quite a few highly reliable models for wind simulation and prediction at onshore sites, wind prediction needs further investigation for adaptation to offshore emplacements, taking the atmosphere-ocean interaction into account. The main problem in these ocean areas is the lack of wind data, which neither allows the energy potential and wind behaviour of a particular place to be characterized, nor the forecasting models to be validated. The main objective of this work is to reduce the local prediction errors, in order to make the meteo-oceanographic hindcasts and forecasts more reliable. The COAWST (Coupled Ocean-Atmosphere-Wave-Sediment Transport; Warner et al., 2010) model system has been implemented in the region, considering a set of downscaling nested meshes to obtain high-resolution outputs. The adaptation to this particular area, combining the different wind, wave and ocean model domains, has been far from simple, because the grid domains for the three models differ significantly. This work shows the main results of the COAWST model implementation in this area, including both monthly runs and another set of tests in different atmospheric situations, chosen for their particular interest. The time period considered for the validation is the whole year 2012. A comparative study between the WRF, SWAN and ROMS model outputs (without coupling), the COAWST model outputs, and measurements from a buoy moored in the region was performed for this year. References: Warner, J.C., Armstrong, B., He, R., and Zambon, J.B., 2010, Development of a Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system: Ocean Modelling, 35 (3), 230-244.
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors decide on treatment from the standpoint of mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented to clinical application. The whole system encompasses three parts: a preprocessing module, a finite element mechanical analysis module, and a postprocessing module. The preprocessing module includes parametric modeling of the bone, parametric modeling of the fracture face, parametric modeling of the fixation screws and fixation positions, and the input and transmission of model parameters. The finite element mechanical analysis module includes grid division, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing. The postprocessing module includes the extraction and display of batch processing results, image generation for batch processing, operation of the optimization program, and display of the optimization results. The system implements the whole workflow, from the input of fracture parameters to the output of the optimal fixation plan according to the specific patient's real fracture parameters and the optimization rules, which demonstrates the effectiveness of the system. Meanwhile, the system has a friendly interface and simple operation, and its functionality can be improved quickly by modifying individual modules.
A high-resolution regional reanalysis for the European CORDEX region
NASA Astrophysics Data System (ADS)
Bollmeyer, Christoph; Keller, Jan; Ohlwein, Christian; Wahl, Sabrina
2015-04-01
Within the Hans-Ertel-Centre for Weather Research (HErZ), the climate monitoring branch concentrates its efforts on the assessment and analysis of regional climate in Germany and Europe. In joint cooperation with DWD (German Weather Service), a high-resolution reanalysis system based on the COSMO model has been developed. Reanalyses are gaining more and more importance as a source of meteorological information for many purposes and applications. Several global reanalysis projects (e.g., ERA, MERRA, CFSR, JRA) produce and verify these data sets to provide time series that are as long as possible combined with high data quality. With spatial resolutions of only 50-70 km and 3-hourly temporal output, however, they are not suitable for small-scale problems (e.g., regional climate assessment, mesoscale NWP verification, input for subsequent models such as river runoff simulations, renewable energy applications). The implementation of regional reanalyses based on a limited-area model along with a data assimilation scheme makes it possible to generate reanalysis data sets with high spatio-temporal resolution. The work presented here focuses on two regional reanalyses for Europe and Germany. The European reanalysis COSMO-REA6 matches the CORDEX EURO-11 specifications, albeit at a higher spatial resolution, i.e., 0.055° (6 km) instead of 0.11° (12 km). Nested into COSMO-REA6 is COSMO-REA2, a convective-scale reanalysis with 2 km resolution for Germany. COSMO-REA6 comprises the assimilation of observational data using the existing nudging scheme of COSMO, complemented by a special soil moisture analysis and boundary conditions given by ERA-Interim data. COSMO-REA2 also uses the nudging scheme, complemented by latent heat nudging of radar information. The reanalysis data sets currently cover 17 years (1997-2013) for COSMO-REA6 and 4 years (2010-2013) for COSMO-REA2, with a very large set of output variables and a high temporal output rate of hourly 3D fields and quarter-hourly 2D fields. The evaluation of the reanalyses is done using independent observations of the most important meteorological parameters, with special emphasis on precipitation and high-impact weather situations.
Hamdy, M; Hamdan, I
2015-07-01
In this paper, a robust H∞ fuzzy output feedback controller is designed for a class of affine nonlinear systems with disturbance via a Takagi-Sugeno (T-S) fuzzy bilinear model. The parallel distributed compensation (PDC) technique is utilized to design the fuzzy controller. The stability conditions of the overall closed-loop T-S fuzzy bilinear model are formulated in terms of a Lyapunov function via linear matrix inequalities (LMIs). The control law is robustified in the H∞ sense to attenuate external disturbance. Moreover, the desired controller gains can be obtained by solving a set of LMIs. A continuous stirred tank reactor (CSTR), which is a benchmark problem in nonlinear process control, is discussed in detail to verify the effectiveness of the proposed approach with a comparative study.
The Effect of Fertility Reduction on Economic Growth*
Ashraf, Quamrul H.; Weil, David N.; Wilde, Joshua
2014-01-01
We assess quantitatively the effect of exogenous reductions in fertility on output per capita. Our simulation model allows for effects that run through schooling, the size and age structure of the population, capital accumulation, parental time input into child-rearing, and crowding of fixed natural resources. The model is parameterized using a combination of microeconomic estimates, data on demographics and natural resource income in developing countries, and standard components of quantitative macroeconomic theory. We apply the model to examine the effect of a change in fertility from the UN medium-variant to the UN low-variant projection, using Nigerian vital rates as a baseline. For a base case set of parameters, we find that such a change would raise output per capita by 5.6 percent at a horizon of 20 years, and by 11.9 percent at a horizon of 50 years. PMID:25525283
Low-complexity piecewise-affine virtual sensors: theory and design
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Poggi, Tomaso; Oliveri, Alberto; Pascucci, Carlo Alberto; Bemporad, Alberto; Storace, Marco
2014-03-01
This paper is focused on the theoretical development and the hardware implementation of low-complexity piecewise-affine direct virtual sensors for the estimation of unmeasured variables of interest of nonlinear systems. The direct virtual sensor is designed directly from measured inputs and outputs of the system and does not require a dynamical model. The proposed approach allows one to design estimators which mitigate the effect of the so-called 'curse of dimensionality' of simplicial piecewise-affine functions, and can be therefore applied to relatively high-order systems, enjoying convergence and optimality properties. An automatic toolchain is also presented to generate the VHDL code describing the digital circuit implementing the virtual sensor, starting from the set of measured input and output data. The proposed methodology is applied to generate an FPGA implementation of the virtual sensor for the estimation of vehicle lateral velocity, using a hardware-in-the-loop setting.
[Technical efficiency in primary care for patients with diabetes].
Salinas-Martínez, Ana María; Amaya-Alemán, María Agustina; Arteaga-García, Julio César; Núñez-Rocha, Georgina Mayela; Garza-Elizondo, María Eugenia
2009-01-01
The aims were to quantify the technical efficiency of diabetes care in family practice settings, characterize the provision of services and health results, and identify potential sources of variation. We used data envelopment analysis with inputs and outputs for diabetes care from 47 family units within a social security agency in Nuevo Leon, together with Tobit regression models. Seven units were technically efficient in providing services and nine in achieving health goals; only two achieved both outcomes. Metropolitan location and the total number of consultations favored efficiency in the provision of services regardless of patient attributes, while the age of the doctor favored efficiency in health results. Performance varied within and among family units: some were efficient at providing services while others were efficient at accomplishing health goals, and the sources of variation also differed. It is necessary to include both outputs in studies of the efficiency of diabetes care in family practice settings.
An economic evaluation of home management of malaria in Uganda: an interactive Markov model.
Lubell, Yoel; Mills, Anne J; Whitty, Christopher J M; Staedke, Sarah G
2010-08-27
Home management of malaria (HMM), promoting presumptive treatment of febrile children in the community, is advocated to improve prompt appropriate treatment of malaria in Africa. The cost-effectiveness of HMM is likely to vary widely in different settings and with the antimalarial drugs used. However, no data on the cost-effectiveness of HMM programmes are available. A Markov model was constructed to estimate the cost-effectiveness of HMM as compared to conventional care for febrile illnesses in children without HMM. The model was populated with data from Uganda, but is designed to be interactive, allowing the user to adjust certain parameters, including the antimalarials distributed. The model calculates the cost per disability adjusted life year averted and presents the incremental cost-effectiveness ratio compared to a threshold value. Model output is stratified by level of malaria transmission and the probability that a child would receive appropriate care from a health facility, to indicate the circumstances in which HMM is likely to be cost-effective. The model output suggests that the cost-effectiveness of HMM varies with malaria transmission, the probability of appropriate care, and the drug distributed. Where transmission is high and the probability of appropriate care is limited, HMM is likely to be cost-effective from a provider perspective. Even with the most effective antimalarials, HMM remains an attractive intervention only in areas of high malaria transmission and in medium transmission areas with a lower probability of appropriate care. HMM is generally not cost-effective in low transmission areas, regardless of which antimalarial is distributed. Considering the analysis from the societal perspective decreases the attractiveness of HMM. Syndromic HMM for children with fever may be a useful strategy for higher transmission settings with limited health care and diagnosis, but is not appropriate for all settings. HMM may need to be tailored to specific settings, accounting for local malaria transmission intensity and availability of health services.
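A skeletal Markov cohort calculation of the kind underlying such an analysis, with two strategies, per-cycle costs and DALY accrual, and an incremental cost-effectiveness ratio (ICER) at the end, is sketched below. All states, transition probabilities, costs, and DALY weights are made-up placeholders, not the Ugandan parameter set used in the paper:

```python
import numpy as np

# Toy Markov cohort model comparing conventional care with HMM.
states = ["well", "ill", "dead"]

def run(P, cost_per_cycle, daly_weight, cycles=52):
    dist = np.array([1.0, 0.0, 0.0])         # cohort starts well
    cost = dalys = 0.0
    for _ in range(cycles):
        dist = dist @ P                      # one weekly transition
        cost += dist[1] * cost_per_cycle     # cost accrues while ill
        # crude DALY proxy: illness disability plus a mortality term
        dalys += dist[1] * daly_weight + dist[2] / cycles
    return cost, dalys

P_std = np.array([[0.97, 0.03, 0.00],
                  [0.20, 0.78, 0.02],
                  [0.00, 0.00, 1.00]])
P_hmm = np.array([[0.97, 0.03, 0.00],
                  [0.35, 0.64, 0.01],        # faster recovery, lower mortality
                  [0.00, 0.00, 1.00]])

c0, d0 = run(P_std, cost_per_cycle=2.0, daly_weight=0.01)
c1, d1 = run(P_hmm, cost_per_cycle=3.5, daly_weight=0.01)
icer = (c1 - c0) / (d0 - d1)                 # cost per DALY averted
print(f"ICER = {icer:.1f} (currency units per DALY averted)")
```

Comparing the ICER against a willingness-to-pay threshold, and repeating the calculation across transmission levels and probabilities of appropriate care, reproduces the kind of stratified output the abstract describes.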
Development of accelerated Raman and fluorescent Monte Carlo method
NASA Astrophysics Data System (ADS)
Dumont, Alexander P.; Patil, Chetan
2018-02-01
Monte Carlo (MC) modeling of photon propagation in turbid media is an essential tool for understanding optical interactions between light and tissue. Insight gathered from the outputs of MC models assists in mapping between detected optical signals and bulk tissue optical properties and, as such, has proven useful for inverse calculations of tissue composition and for optimizing the design of optical probes. MC models of Raman scattering have previously been implemented without consideration of background autofluorescence, despite its presence in raw measurements. Modeling both Raman and fluorescence profiles at high spectral resolution requires a significant increase in computation, but is more appropriate for investigating issues such as detection limits. We present a new Raman-fluorescence MC model developed atop an existing GPU-parallelized MC framework that can run more than 300 times faster than CPU methods. This robust acceleration allows the efficient production of both Raman and fluorescence outputs from the MC model. In addition, the model can handle arbitrary sample morphologies and excitation and collection geometries to more closely mimic experimental settings. We will present the model framework and initial results.
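The photon-weight random walk at the heart of such MC codes can be sketched briefly; the GPU version essentially parallelizes this loop over photons. The sketch below omits Raman and fluorescence emission, assumes isotropic scattering in a semi-infinite homogeneous medium, and uses illustrative coefficients:

```python
import numpy as np

# Minimal photon-weight Monte Carlo: absorb-and-scatter random walk.
rng = np.random.default_rng(0)
mu_a, mu_s = 0.1, 10.0          # absorption/scattering coefficients (1/mm)
mu_t = mu_a + mu_s
n_photons, absorbed = 2000, 0.0

for _ in range(n_photons):
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])   # launched into the medium (+z)
    weight = 1.0
    while weight > 1e-3:
        step = -np.log(rng.random()) / mu_t  # sampled free path length
        pos = pos + step * direction
        if pos[2] < 0.0:                     # escaped back through the surface
            break
        absorbed += weight * (mu_a / mu_t)   # deposit the absorbed fraction
        weight *= mu_s / mu_t
        # isotropic scattering: new direction uniform on the sphere
        cos_t = 2.0 * rng.random() - 1.0
        phi = 2.0 * np.pi * rng.random()
        sin_t = np.sqrt(1.0 - cos_t ** 2)
        direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])

print("fraction of launched energy absorbed:", absorbed / n_photons)
```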
Modeling of a multileaf collimator
NASA Astrophysics Data System (ADS)
Kim, Siyong
A comprehensive physics model of a multileaf collimator (MLC) field for treatment planning was developed. Specifically, an MLC user interface module that includes a geometric optimization tool and a general method of in-air output factor calculation were developed. An automatic tool for optimization of MLC conformation is needed to realize the potential benefits of MLC. It is also necessary that a radiation therapy treatment planning (RTTP) system is capable of modeling the MLC completely. An MLC geometric optimization and user interface module was developed. The planning time has been reduced significantly by incorporating the MLC module into the main RTTP system, the Radiation Oncology Computer System (ROCS). The dosimetric parameter that has the most profound effect on the accuracy of the dose delivered with an MLC is the change in the in-air output factor that occurs with field shaping. It has been reported that the conventional method of calculating an in-air output factor cannot be used accurately for MLC-shaped fields. Therefore, it is necessary to develop algorithms that allow accurate calculation of the in-air output factor. A generalized solution for in-air output factor calculation was developed. Three major contributors of scatter to the in-air output (the flattening filter, the wedge, and the tertiary collimator) were considered separately. By virtue of a field mapping method, in which a source-plane field determined by the detector's eye view is mapped into a detector-plane field, no dosimetric data acquisition other than the standard data set for a range of square fields is required for the calculation of head scatter. Comparisons of in-air output factors between calculated and measured values show good agreement for both open and wedge fields. For rectangular fields, a simple equivalent square formula was derived based on the configuration of a linear accelerator treatment head. This method predicts in-air output to within 1% accuracy. A two-effective-source algorithm was developed to account for the effect of source-to-detector distance on in-air output for wedge fields. Two effective sources, one for head scatter and the other for wedge scatter, were treated independently. Calculated in-air output factors differed from measurements by less than 1%. This approach offers the best comprehensive accuracy in radiation delivery with field shapes defined using the MLC. This generalized model works equally well with fields shaped by any type of tertiary collimator and has the necessary framework to extend its application to intensity-modulated radiation therapy.
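The thesis derives its own equivalent square formula from the treatment head configuration, which is not reproduced in the abstract; for orientation, the classic area-to-perimeter (Sterling) rule that such formulas refine is a one-liner:

```python
def sterling_equivalent_square(a_cm, b_cm):
    """Classic area-to-perimeter rule: side of the square field with roughly the
    same scatter behavior as an a x b rectangular field, s = 2ab/(a+b).
    This is the textbook approximation, not the head-specific formula derived
    in the thesis."""
    return 2.0 * a_cm * b_cm / (a_cm + b_cm)

print(sterling_equivalent_square(5.0, 20.0))  # -> 8.0 cm
```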
NASA Astrophysics Data System (ADS)
Terando, A. J.; Grade, S.; Bowden, J.; Henareh Khalyani, A.; Wootten, A.; Misra, V.; Collazo, J.; Gould, W. A.; Boyles, R.
2016-12-01
Sub-tropical island nations may be particularly vulnerable to anthropogenic climate change because of predicted changes in the hydrologic cycle that would lead to significant drying in the future. However, decision makers in these regions have seen their adaptation planning efforts frustrated by the lack of island-resolving climate model information. Recently, two investigations have used statistical and dynamical downscaling techniques to develop climate change projections for the U.S. Caribbean region (Puerto Rico and U.S. Virgin Islands). We compare the results from these two studies with respect to three commonly downscaled CMIP5 global climate models (GCMs). The GCMs were dynamically downscaled at a convective-permitting scale using two different regional climate models. The statistical downscaling approach was conducted at locations with long-term climate observations and then further post-processed using climatologically aided interpolation (yielding two sets of projections). Overall, both approaches face unique challenges. The statistical approach suffers from a lack of observations necessary to constrain the model, particularly at the land-ocean boundary and in complex terrain. The dynamically downscaled model output has a systematic dry bias over the island despite ample availability of moisture in the atmospheric column. Notwithstanding these differences, both approaches are consistent in projecting a drier climate that is driven by the strong global-scale anthropogenic forcing.
NASA Technical Reports Server (NTRS)
Barbero, P.; Chin, J.
1973-01-01
The theoretical derivation of a set of equations applicable to modeling the dynamic characteristics of aeroelastically scaled models flown on the two-cable mount system in a 16 ft transonic dynamics tunnel is discussed. The computer program provided for the analysis is also described. The program calculates model trim conditions as well as 3-DOF longitudinal and lateral/directional dynamic conditions for various flying-cable and snubber-cable configurations. Sample input and output are included.
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; Molthan, Andrew; Zavodsky, Bradley T.; Case, Jonathan L.; LaFontaine, Frank J.; Srikishen, Jayanthi
2010-01-01
The NASA Short-term Prediction Research and Transition Center (SPoRT)'s new "Weather in a Box" resources will provide weather research and forecast modeling capabilities for real-time application. Model output will provide additional forecast guidance and support research into the impacts of new NASA satellite data sets and software capabilities. By combining several research tools and satellite products, SPoRT can generate model guidance that is strongly influenced by unique NASA contributions.
Parser for Sabin-to-Mahoney Transition Model of Quasispecies Replication
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ecale Zhou, Carol
2016-01-03
This code is a data parser for preparing output from the Qspp agent-based stochastic simulation model for plotting in Excel. The code is specific to a set of simulations that were run for the purpose of preparing data for a publication. It is necessary to make this code open-source in order to publish the model code (Qspp), which has already been released. There is a necessity of assuring that results from using Qspp for a publication
Modelling the Krebs cycle and oxidative phosphorylation.
Korla, Kalyani; Mitra, Chanchal K
2014-01-01
The Krebs cycle and oxidative phosphorylation are the two most important sets of reactions in a eukaryotic cell that meet the major part of the total energy demands of a cell. In this paper, we present a computer simulation of the coupled reactions using open source tools for simulation. We also show that it is possible to model the Krebs cycle with a simple black box with a few inputs and outputs. However, the kinetics of the internal processes has been modelled using numerical tools. We also show that the Krebs cycle and oxidative phosphorylation together can be combined in a similar fashion - a black box with a few inputs and outputs. The Octave script is flexible and customisable for any chosen set-up for this model. In several cases, we had no explicit idea of the underlying reaction mechanism and the rate determining steps involved, and we have used the stoichiometric equations that can be easily changed as and when more detailed information is obtained. The script includes the feedback regulation of the various enzymes of the Krebs cycle. For the electron transport chain, the pH gradient across the membrane is an essential regulator of the kinetics and this has been modelled empirically but fully consistent with experimental results. The initial conditions can be very easily changed and the simulation is potentially very useful in a number of cases of clinical importance.
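In the spirit of the black-box description above, a minimal sketch of a lumped, feedback-inhibited Krebs "box" as an ODE system (in Python rather than the paper's Octave, with illustrative stoichiometry and rate constants that are not the paper's values) might look like this:

```python
from scipy.integrate import solve_ivp

# A lumped "black box" sketch in the spirit of the paper: the whole cycle is
# one Michaelis-Menten reaction with product (NADH) feedback inhibition. The
# stoichiometry and constants are illustrative, not the paper's Octave values.
VMAX, KM, KI = 1.0, 0.5, 0.2    # lumped rate constants (arbitrary units)

def rhs(t, y):
    acetyl_coa, nadh = y
    rate = VMAX * acetyl_coa / (KM + acetyl_coa) * KI / (KI + nadh)
    return [-rate, 4.0 * rate]  # ~4 reduced-carrier equivalents per turn

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0])
print(sol.y[:, -1])             # final acetyl-CoA and NADH-equivalent levels
```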
Theoretical analyses of an injection-locked diode-pumped rubidium vapor laser.
Cai, He; Gao, Chunqing; Liu, Xiaoxu; Wang, Shunyan; Yu, Hang; Rong, Kepeng; An, Guofei; Han, Juhong; Zhang, Wei; Wang, Hongyuan; Wang, You
2018-04-02
Diode-pumped alkali lasers (DPALs) have drawn much attention since they were proposed in 2001. The narrow-linewidth DPAL can potentially be applied in the fields of coherent communication, laser radar, and atomic spectroscopy. In this study, we propose a novel protocol to narrow the linewidth of one kind of DPAL, the diode-pumped rubidium vapor laser (DPRVL), by use of an injection-locking technique. A kinetic model is first set up for an injection-locked DPRVL with the end-pumped configuration. The laser tunable duration is also analyzed for a continuous-wave (CW) injection-locked DPRVL system. Then, the influences of the pump power, the power of the master laser, and the reflectance of the output coupler on the output performance are theoretically analyzed. The study should be useful for the design of a narrow-linewidth DPAL with relatively high output.
NASA Astrophysics Data System (ADS)
Guadagnini, A.; Riva, M.; Dell'Oca, A.
2017-12-01
We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
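A minimal sketch of the idea, assuming a toy model and an unnormalized index: for each parameter, conditional moments of the output are compared with the unconditional moment by binning a plain Monte Carlo sample. The exact index definitions in the paper may differ.

```python
import numpy as np
from scipy import stats

# Toy sketch: bin a Monte Carlo sample on each parameter and compare the
# conditional moments of the output with the unconditional moment. The index
# below (mean absolute deviation of conditional moments, unnormalized) and the
# test function are illustrative choices, not the paper's exact definitions.
rng = np.random.default_rng(1)
N, n_bins = 20000, 20
X = rng.uniform(size=(N, 3))                                  # uncertain inputs
Y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * X[:, 2]

def moment_index(x, y, moment):
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.digitize(x, edges[1:-1])             # bin labels 0..n_bins-1
    cond = np.array([moment(y[idx == b]) for b in range(n_bins)])
    return np.mean(np.abs(cond - moment(y)))

for name, m in [("mean", np.mean), ("variance", np.var),
                ("skewness", stats.skew), ("kurtosis", stats.kurtosis)]:
    print(name, [round(moment_index(X[:, i], Y, m), 3) for i in range(3)])
```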
Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1981-01-01
A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accurate prediction of soft error susceptibility from SETs therefore requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading are simulated to extract the parameters for the dual double-exponential current sources. The resulting parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
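A sketch of the waveform construction described above; the peak currents and time constants are illustrative placeholders, whereas in the paper they are extracted per logic cell from circuit simulation.

```python
import numpy as np

# Sketch of the dual double-exponential SET current model: two double-
# exponential sources in parallel, so their currents add at the struck node.
# Peak currents and time constants are illustrative placeholders.
def double_exp(t, i_peak, tau_rise, tau_fall):
    return i_peak * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

def set_current(t):
    prompt = double_exp(t, 1.2e-3, 5e-12, 50e-12)     # fast component
    delayed = double_exp(t, 0.4e-3, 20e-12, 200e-12)  # slower component
    return prompt + delayed

t = np.linspace(0.0, 1e-9, 1000)
i = set_current(t)
charge = np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t))  # trapezoidal integral
print(f"peak {i.max() * 1e3:.2f} mA, collected charge {charge * 1e15:.1f} fC")
```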
Optimization of CW Fiber Lasers With Strong Nonlinear Cavity Dynamics
NASA Astrophysics Data System (ADS)
Shtyrina, O. V.; Efremov, S. A.; Yarutkina, I. A.; Skidin, A. S.; Fedoruk, M. P.
2018-04-01
In the present work, the equation for the saturated gain is derived from one-level gain equations describing the energy evolution inside the laser cavity. It is shown how to derive the parameters of the mathematical model from experimental results. The numerically estimated energy and spectrum of the signal are in good agreement with the experiment. The optimization of the output energy is also performed for a given set of model parameters.
Modeling the Pineapple Express phenomenon via Multivariate Extreme Value Theory
NASA Astrophysics Data System (ADS)
Weller, G.; Cooley, D. S.
2011-12-01
The pineapple express (PE) phenomenon is responsible for producing extreme winter precipitation events in the coastal and mountainous regions of the western United States. Because the PE phenomenon is also associated with warm temperatures, the heavy precipitation and associated snowmelt can cause destructive flooding. In order to study impacts, it is important that regional climate models from NARCCAP are able to reproduce extreme precipitation events produced by PE. We define a daily precipitation quantity which captures the spatial extent and intensity of precipitation events produced by the PE phenomenon. We then use statistical extreme value theory to model the tail dependence of this quantity as seen in an observational data set and each of the six NARCCAP regional models driven by NCEP reanalysis. We find that most NCEP-driven NARCCAP models do exhibit tail dependence between daily model output and observations. Furthermore, we find that not all extreme precipitation events are pineapple express events, as identified by Dettinger et al. (2011). The synoptic-scale atmospheric processes that drive extreme precipitation events produced by PE have only recently begun to be examined. Much of the current work has focused on pattern recognition, rather than quantitative analysis. We use daily mean sea-level pressure (MSLP) fields from NCEP to develop a "pineapple express index" for extreme precipitation, which exhibits tail dependence with our observed precipitation quantity for pineapple express events. We build a statistical model that connects daily precipitation output from the WRFG model, daily MSLP fields from NCEP, and daily observed precipitation in the western US. Finally, we use this model to simulate future observed precipitation based on WRFG output driven by the CCSM model, and our pineapple express index derived from future CCSM output. Our aim is to use this model to develop a better understanding of the frequency and intensity of extreme precipitation events produced by PE under climate change.
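As an illustration of the tail-dependence diagnostic used in such comparisons, the sketch below estimates the empirical conditional exceedance probability chi(u) for two series at a high quantile; the data are synthetic placeholders for model output and observations.

```python
import numpy as np

# Empirical tail-dependence sketch: chi(u) = P(X > its u-quantile | Y > its
# u-quantile) for two daily series. Data here are synthetic placeholders for
# the observed and modeled precipitation quantities discussed above.
rng = np.random.default_rng(2)
z = rng.gumbel(size=5000)                  # shared heavy-tailed driver
obs = z + 0.3 * rng.normal(size=5000)      # "observed" precipitation quantity
mod = z + 0.3 * rng.normal(size=5000)      # "model" precipitation quantity

def chi_hat(x, y, u=0.95):
    qx, qy = np.quantile(x, u), np.quantile(y, u)
    both = np.mean((x > qx) & (y > qy))    # joint exceedance probability
    return both / (1.0 - u)                # ~0: independence, ~1: dependence

print(round(chi_hat(obs, mod, 0.95), 3))
```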
Baker, Daniel G; Newton, Robert U
2007-11-01
Athletes experienced in maximal-power and power-endurance training performed 1 set of 2 common power training exercises in an effort to determine the effects of moderately high repetitions upon power output levels throughout the set. Twenty-four and 15 athletes, respectively, performed a set of 10 repetitions in both the bench throw (BT P60) and jump squat exercise (JS P60) with a resistance of 60 kg. For both exercises, power output was highest on either the second (JS P60) or the third repetition (BT P60) and was then maintained until the fifth repetition. Significant declines in power output occurred from the sixth repetition onwards until the 10th repetition (11.2% for BT P60 and 5% for JS P60 by the 10th repetition). These findings suggest that athletes attempting to increase maximal power limit their repetitions to 2 to 5 when using resistances of 35 to 45% 1RM in these exercises.
Modal Parameter Identification of a Flexible Arm System
NASA Technical Reports Server (NTRS)
Barrington, Jason; Lew, Jiann-Shiun; Korbieh, Edward; Wade, Montanez; Tantaris, Richard
1998-01-01
In this paper an experiment is designed for the modal parameter identification of a flexible arm system. The experiment uses a function generator to provide the input signal and an oscilloscope to record input and output response data. For each vibrational mode, many sets of sine-wave inputs with frequencies close to the natural frequency of the arm system are used to excite the vibration of that mode. A least-squares technique is then used to analyze the experimental input/output data to obtain the identified parameters for the mode. The identified results are compared with the analytical model obtained by applying finite element analysis.
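The abstract does not spell out the least-squares step; a minimal sketch of one common formulation, fitting a sinusoid of known excitation frequency to measured response data, is given below with synthetic signal values.

```python
import numpy as np

# Fit y(t) = a*sin(wt) + b*cos(wt) + c by linear least squares and recover
# amplitude and phase. The excitation frequency and signal are assumed
# illustrative stand-ins for the oscilloscope data described above.
rng = np.random.default_rng(3)
f = 2.5                                    # excitation frequency (Hz), assumed
t = np.linspace(0.0, 4.0, 2000)
y = 1.8 * np.sin(2 * np.pi * f * t + 0.6) + 0.05 * rng.normal(size=t.size)

A = np.column_stack([np.sin(2 * np.pi * f * t),
                     np.cos(2 * np.pi * f * t),
                     np.ones_like(t)])
(a, b, c), *_ = np.linalg.lstsq(A, y, rcond=None)
amplitude, phase = np.hypot(a, b), np.arctan2(b, a)
print(f"amplitude ~ {amplitude:.3f}, phase ~ {phase:.3f} rad")
```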
Asafu-Adjei, Josephine; Betensky, Rebecca A.; Palevsky, Paul M.; Waikar, Sushrut S.
2016-01-01
Background and objectives Intensive RRT may have adverse effects that account for the absence of benefit observed in randomized trials of more intensive versus less intensive RRT. We wished to determine the association of more intensive RRT with changes in urine output as a marker of worsening residual renal function in critically ill patients with severe AKI. Design, setting, participants, & measurements The Acute Renal Failure Trial Network Study (n=1124) was a multicenter trial that randomized critically ill patients requiring initiation of RRT to more intensive (hemodialysis or sustained low-efficiency dialysis six times per week or continuous venovenous hemodiafiltration at 35 ml/kg per hour) versus less intensive (hemodialysis or sustained low-efficiency dialysis three times per week or continuous venovenous hemodiafiltration at 20 ml/kg per hour) RRT. Mixed linear regression models were fit to estimate the association of RRT intensity with change in daily urine output in survivors through day 7 (n=871); Cox regression models were fit to determine the association of RRT intensity with time to ≥50% decline in urine output in all patients through day 28. Results Mean age of participants was 60±15 years, 72% were men, and 30% were diabetic. In unadjusted models, among patients who survived ≥7 days, mean urine output was, on average, 31.7 ml/d higher (95% confidence interval, 8.2 to 55.2 ml/d) for the less intensive group compared with the more intensive group (P=0.01). More intensive RRT was associated with 29% greater unadjusted risk of decline in urine output of ≥50% (hazard ratio, 1.29; 95% confidence interval, 1.10 to 1.51). Conclusions More intensive versus less intensive RRT is associated with a greater reduction in urine output during the first 7 days of therapy and a greater risk of developing a decline in urine output of ≥50% in critically ill patients with severe AKI. PMID:27449661
Emulation of simulations of atmospheric dispersion at Fukushima for Sobol' sensitivity analysis
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2015-04-01
Polyphemus/Polair3D, from which IRSN's operational model ldX derives, was used to simulate the atmospheric dispersion of radionuclides at the Japan scale after the Fukushima disaster. A previous study with the screening method of Morris had shown that (i) the sensitivities depend strongly on the considered output; (ii) only a few of the inputs are non-influential on all considered outputs; and (iii) most influential inputs either have non-linear effects or are interacting. These preliminary results called for a more detailed sensitivity analysis, especially regarding the characterization of interactions. The method of Sobol' allows for a precise evaluation of interactions but requires large simulation samples. Gaussian process emulators for each considered output were built in order to relieve this computational burden. Globally aggregated outputs proved easy to emulate with high accuracy, and the associated Sobol' indices are in broad agreement with previous results obtained with the Morris method. More localized outputs, such as temporal averages of gamma dose rates at measurement stations, resulted in poorer emulator performance: test simulations could not be satisfactorily reproduced by some emulators. These outputs are of special interest because they can be compared to available observations, for instance for calibration purposes. A thorough inspection of prediction residuals hinted that the model response to wind perturbations often behaved in very distinct regimes relative to some thresholds. Complementing the initial sample with wind perturbations set to the extreme values allowed for a sensible improvement of some of the emulators, while others remained too unreliable to be used in a sensitivity analysis. Adaptive sampling or regime-wise emulation could be tried to circumvent this issue. Sobol' indices for local outputs revealed interesting patterns, mostly dominated by the winds, with very high interactions. The emulators will be useful for subsequent studies. Indeed, our goal is to characterize the model output uncertainty, but too little information is available about input uncertainties. Hence, calibration of the input distributions against observations with a Bayesian approach seems necessary. This would probably involve methods such as MCMC, which would be intractable without emulators.
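A compact sketch of the emulate-then-analyze workflow, using scikit-learn for the Gaussian process and SALib for the Sobol' indices; the three-input toy function and its bounds are placeholders for the dispersion model's uncertain inputs.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from SALib.sample import saltelli
from SALib.analyze import sobol

# Train a GP emulator on a modest number of "expensive" runs, then compute
# Sobol' indices from a large Saltelli sample evaluated on the cheap emulator.
# The toy 3-input function and bounds are placeholders for the dispersion model.
problem = {"num_vars": 3,
           "names": ["wind_pert", "emission", "deposition"],
           "bounds": [[-1.0, 1.0], [0.5, 2.0], [0.1, 1.0]]}

def expensive_model(x):                    # placeholder for one model run
    return np.sin(3 * x[0]) * x[1] + np.log(x[2])

rng = np.random.default_rng(4)
X_train = rng.uniform([-1.0, 0.5, 0.1], [1.0, 2.0, 1.0], size=(200, 3))
y_train = np.array([expensive_model(x) for x in X_train])
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X_train, y_train)

X_big = saltelli.sample(problem, 1024)     # 8192 points, cheap on the emulator
Si = sobol.analyze(problem, gp.predict(X_big))
print(dict(zip(problem["names"], np.round(Si["S1"], 2))))  # first-order indices
```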
NASA Astrophysics Data System (ADS)
Sanchez, P.; Hinojosa, J.; Ruiz, R.
2005-06-01
Recently, neuromodeling methods for microwave devices have been developed. These methods are suitable for generating models of novel devices, and they allow fast and accurate simulations and optimizations. However, developing model libraries with these methods is a formidable task, since it requires massive input-output data provided by an electromagnetic simulator or by measurements, together with repeated artificial neural network (ANN) training. This paper presents a strategy that reduces the cost of library development while retaining the advantages of the neuromodeling methods: high accuracy, a large range of geometrical and material parameters, and reduced CPU time. The library models are developed from a set of base prior knowledge input (PKI) models, which capture the characteristics common to all the models in the library, and high-level ANNs which produce the library model outputs from the base PKI models. The technique is illustrated for a microwave multiconductor tunable phase shifter using anisotropic substrates. Closed-form relationships have been developed and are presented in this paper. The results show good agreement with the expected ones.
Leveraging the UML Metamodel: Expressing ORM Semantics Using a UML Profile
DOE Office of Scientific and Technical Information (OSTI.GOV)
CUYLER,DAVID S.
2000-11-01
Object Role Modeling (ORM) techniques produce a detailed domain model from the perspective of the business owner/customer. The typical process begins with a set of simple sentences reflecting facts about the business. The output of the process is a single model representing primarily the persistent information needs of the business. This type of model contains little, if any, reference to a targeted computerized implementation: it is a model of business entities, not of software classes. Through well-defined procedures, an ORM model can be transformed into a high-quality object or relational schema.
John Hof; Curtis Flather; Tony Baltic; Stephen Davies
1999-01-01
The 1999 forest and rangeland condition indicator model is a set of independent econometric production functions for environmental outputs (measured with condition indicators) at the national scale. This report documents the development of the database and the statistical estimation required by this particular production structure with emphasis on two special...
Wootton, Richard
2013-01-14
Background A simple, generalizable method for measuring research output would be useful in attempts to build research capacity, and in other contexts. Methods A simple indicator of individual research output was developed, based on grant income, publications and numbers of PhD students supervised. The feasibility and utility of the indicator was examined by using it to calculate research output from two similarly-sized research groups in different countries. The same indicator can be used to assess the balance in the research “portfolio” of an individual researcher. Results Research output scores of 41 staff in Research Department A had a wide range, from zero to 8; the distribution of these scores was highly skewed. Only about 20% of the researchers had well-balanced research outputs, with approximately equal contributions from grants, papers and supervision. Over a five-year period, Department A's total research output rose, while the number of research staff decreased slightly, in other words research productivity (output per head) rose. Total research output from Research Department B, of approximately the same size as A, was similar, but slightly higher than Department A. Conclusions The proposed indicator is feasible. The output score is dimensionless and can be used for comparisons within and between countries. Modeling can be used to explore the effect on research output of changing the size and composition of a research department. A sensitivity analysis shows that small increases in individual productivity result in relatively greater increases in overall departmental research output. The indicator appears to be potentially useful for capacity building, once the initial step of research priority setting has been completed. PMID:23317431
LMI Based Robust Blood Glucose Regulation in Type-1 Diabetes Patient with Daily Multi-meal Ingestion
NASA Astrophysics Data System (ADS)
Mandal, S.; Bhattacharjee, A.; Sutradhar, A.
2014-04-01
This paper illustrates the design of a robust output feedback H∞ controller for the nonlinear glucose-insulin (GI) process in a type-1 diabetes patient, to deliver insulin through an intravenous infusion device. The H∞ design specifications have been realized using the concept of the linear matrix inequality (LMI), and the LMI approach has been used to quadratically stabilize the GI process via an output feedback H∞ controller. The controller has been designed on the basis of a full 19th-order linearized state-space model generated from the modified Sorensen's nonlinear model of the GI process. The resulting controller has been tested with the nonlinear patient model (the modified Sorensen's model) in the presence of patient parameter variations and other uncertainty conditions. The performance of the controller was assessed in terms of its ability to track the normoglycemic set point of 81 mg/dl with a typical multi-meal disturbance throughout a day, yielding robust performance and noise rejection.
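For readers unfamiliar with the LMI machinery, the sketch below poses the basic quadratic (Lyapunov) stability LMI in CVXPY for a toy two-state system; the paper's 19th-order model and full H∞ synthesis LMIs are larger but follow the same pattern.

```python
import numpy as np
import cvxpy as cp

# Quadratic stability LMI sketch: find P = P' > 0 with A'P + PA < 0 for a toy
# 2-state linearized model. The A matrix is an illustrative placeholder, not
# the paper's 19th-order GI model.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                 # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]  # Lyapunov inequality
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status, np.round(P.value, 3))
```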
Validation of the thermal challenge problem using Bayesian Belief Networks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McFarland, John; Swiler, Laura Painton
The thermal challenge problem has been developed at Sandia National Laboratories as a testbed for demonstrating various types of validation approaches and prediction methods. This report discusses one particular methodology to assess the validity of a computational model given experimental data. This methodology is based on Bayesian Belief Networks (BBNs) and can incorporate uncertainty in experimental measurements, in physical quantities, and in the model itself. The approach uses the prior and posterior distributions of model output to compute a validation metric based on Bayesian hypothesis testing (a Bayes' factor). This report discusses various aspects of the BBN, specifically in the context of the thermal challenge problem. A BBN is developed for a given set of experimental data in a particular experimental configuration. The development of the BBN and the method for "solving" the BBN to develop the posterior distribution of model output through Markov chain Monte Carlo sampling are discussed in detail. The use of the BBN to compute a Bayes' factor is demonstrated.
NASA Astrophysics Data System (ADS)
Dillon, Chris
Built upon remote sensing and GIS littoral zone characterization methodologies of the past decade, a series of loosely coupled models aimed to test, compare and synthesize multi-beam SONAR (MBES), Airborne LiDAR Bathymetry (ALB), and satellite-based optical data sets in the Gulf of St. Lawrence, Canada, eco-region. Bathymetry and relative intensity metrics for the MBES and ALB data sets were run through a quantitative and qualitative comparison, which included outputs from the Benthic Terrain Modeller (BTM) tool. Substrate classification based on the relative intensities of the respective data sets and textural indices generated using grey level co-occurrence matrices (GLCM) were investigated. A spatial modelling framework built in ArcGIS(TM) for the derivation of bathymetric data sets from optical satellite imagery was also tested for proof of concept and validation. Where possible, efficiencies and semi-automation for repeatable testing were achieved using ArcGIS(TM) ModelBuilder. The findings from this study could assist future decision makers in the fields of coastal management and hydrographic studies. Keywords: Seafloor terrain characterization, Benthic Terrain Modeller (BTM), Multi-beam SONAR, Airborne LiDAR Bathymetry, Satellite Derived Bathymetry, ArcGIS(TM) ModelBuilder, Textural analysis, Substrate classification.
Joint Labeling Of Multiple Regions of Interest (Rois) By Enhanced Auto Context Models.
Kim, Minjeong; Wu, Guorong; Guo, Yanrong; Shen, Dinggang
2015-04-01
Accurate segmentation of a set of regions of interest (ROIs) in brain images is a key step in many neuroscience studies. Due to the complexity of image patterns, many learning-based segmentation methods have been proposed, including the auto context model (ACM), which can capture high-level contextual information for guiding segmentation. However, since the current ACM can only handle one ROI at a time, neighboring ROIs have to be labeled separately with different ACMs that are trained independently, without communicating with each other. To address this, we enhance the current single-ROI learning ACM to a multi-ROI learning ACM for joint labeling of multiple neighboring ROIs (called eACM). First, we extend the current independently trained single-ROI ACMs to a set of jointly trained cross-ROI ACMs, by simultaneously training ACMs for all spatially connected ROIs and letting them share their respective intermediate outputs for coordinated labeling of each image point. The context features in each ACM can then capture cross-ROI dependence information from the outputs of the ACMs designed for neighboring ROIs. Second, we upgrade the output labeling map of each ACM with a multi-scale representation, so that both local and global context information can be used effectively to increase robustness in characterizing the geometric relationships among neighboring ROIs. Third, we integrate the ACM into a multi-atlas segmentation paradigm to encompass the high variation among subjects. Experiments on the LONI LPBA40 dataset show much better performance by our eACM compared to the conventional ACM.
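A minimal sketch of the auto-context loop that ACM-style methods build on: each iteration retrains a classifier on appearance features augmented with the previous iteration's probability map. Features and labels here are synthetic, and the cross-ROI output sharing of eACM is only noted in comments.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Auto-context sketch: retrain a classifier whose features are the image
# features plus the previous iteration's probability map (the "context").
# In eACM, ACMs for neighboring ROIs would also exchange these maps.
rng = np.random.default_rng(5)
n = 5000
appearance = rng.normal(size=(n, 10))          # per-voxel image features (toy)
labels = (appearance[:, 0] + 0.5 * appearance[:, 1] > 0).astype(int)

context = np.full((n, 1), 0.5)                 # uninformative initial map
for it in range(3):                            # auto-context iterations
    feats = np.hstack([appearance, context])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(feats, labels)
    context = clf.predict_proba(feats)[:, [1]] # feed posterior map back in
    print(f"iter {it}: train accuracy = {clf.score(feats, labels):.3f}")
```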
Development of a Numerical Model for High-Temperature Shape Memory Alloys
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan A.; Melcher, Kevin J.; Noebe, Ronald D.; Gaydosh, Darrell J.
2006-01-01
A thermomechanical hysteresis model for a high-temperature shape memory alloy (HTSMA) actuator material is presented. The model is capable of predicting strain output of a tensile-loaded HTSMA when excited by arbitrary temperature-stress inputs for the purpose of actuator and controls design. Common quasi-static generalized Preisach hysteresis models available in the literature require large sets of experimental data for model identification at a particular operating point, and substantially more data for multiple operating points. The novel algorithm introduced here proposes an alternate approach to Preisach methods that is better suited for research-stage alloys, such as recently-developed HTSMAs, for which a complete database is not yet available. A detailed description of the minor loop hysteresis model is presented in this paper, as well as a methodology for determination of model parameters. The model is then qualitatively evaluated with respect to well-established Preisach properties and against a set of low-temperature cycled loading data using a modified form of the one-dimensional Brinson constitutive equation. The computationally efficient algorithm demonstrates adherence to Preisach properties and excellent agreement to the validation data set.
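As background for the hysteresis formulation, a minimal scalar Preisach sketch is shown below: the output is a weighted superposition of two-state relays (hysterons), each switching up at a threshold alpha and down at beta < alpha. The weights and thresholds are generic placeholders; the paper identifies minor-loop behavior for an HTSMA rather than this discretization.

```python
import numpy as np

# Scalar Preisach sketch: output = weighted sum of relays with thresholds
# beta < alpha. Weights and thresholds are illustrative, not identified from
# HTSMA data as in the paper.
rng = np.random.default_rng(6)
n_relays = 500
alpha = rng.uniform(0.0, 1.0, n_relays)
beta = alpha - rng.uniform(0.05, 0.5, n_relays)   # ensure beta < alpha
weight = np.full(n_relays, 1.0 / n_relays)
state = -np.ones(n_relays)                        # all relays start "down"

def preisach_step(u):
    state[u >= alpha] = 1.0                       # switch relays up
    state[u <= beta] = -1.0                       # switch relays down
    return np.dot(weight, state)

inputs = np.concatenate([np.linspace(0, 1, 50), np.linspace(1, 0.3, 35),
                         np.linspace(0.3, 0.8, 25)])  # a minor-loop excursion
outputs = [preisach_step(u) for u in inputs]
print(round(outputs[-1], 3))
```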
Acute Radiation Risk and BRYNTRN Organ Dose Projection Graphical User Interface
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Hu, Shaowen; Nounu, Hateni N.; Kim, Myung-Hee
2011-01-01
The integration of human space applications risk projection models of organ dose and acute radiation risk has been a key problem. NASA has developed an organ dose projection model using the BRYNTRN and SUM DOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). The codes BRYNTRN and SUM DOSE are a baryon transport code and an output data processing code, respectively. The risk projection models of organ doses and ARR take the output from BRYNTRN as an input to their calculations. With a graphical user interface (GUI) to handle input and output for BRYNTRN, the response models can be connected easily and correctly to BRYNTRN. A GUI for the ARR and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of the input and output manipulations required for operations of the ARRBOD modules. The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the mission operations directorate (MOD), and space biophysics researchers. BRYNTRN code operation requires extensive input preparation, and the purpose of the GUI development for ARRBOD is to provide seamless integration of input and output manipulations for the operations of the projection modules (BRYNTRN, SLMDOSE, and the ARR probabilistic response model) in assessing the acute risk and the organ doses of significant Solar Particle Events (SPEs). The assessment of astronauts' radiation risk from SPEs supports mission design and operational planning to manage radiation risks in future space missions. The ARRBOD GUI can identify proper shielding solutions using the gender-specific organ dose assessments in order to avoid ARR symptoms and to stay within the current NASA short-term dose limits. The quantified evaluation of ARR severities based on any given shielding configuration and a specified EVA or other mission scenario can be made to guide alternative solutions for attaining objectives set by mission planners. The ARRBOD GUI estimates the whole-body effective dose, organ doses, and acute radiation sickness symptoms for astronauts, from which operational strategies and capabilities can be developed for the protection of astronauts from SPEs in the planning of future lunar surface scenarios, exploration of near-Earth objects, and missions to Mars.
US EPA 2012 Air Quality Fused Surface for the Conterminous U.S. Map Service
This web service contains a polygon layer that depicts fused air quality predictions for 2012 for census tracts in the conterminous United States. Fused air quality predictions (for ozone and PM2.5) are modeled using a Bayesian space-time downscaling fusion model approach, described in a series of three published journal papers: 1) Berrocal, V., Gelfand, A. E. and Holland, D. M. (2012). Space-time fusion under error in computer model output: an application to modeling air quality. Biometrics 68, 837-848; 2) Berrocal, V., Gelfand, A. E. and Holland, D. M. (2010). A bivariate space-time downscaler under space and time misalignment. The Annals of Applied Statistics 4, 1942-1975; and 3) Berrocal, V., Gelfand, A. E., and Holland, D. M. (2010). A spatio-temporal downscaler for output from numerical models. J. of Agricultural, Biological, and Environmental Statistics 15, 176-197. The approach is used to provide daily predictive PM2.5 (daily average) and O3 (daily 8-hr maximum) surfaces for 2012; summer (O3) and annual (PM2.5) means were calculated and published. The downscaling fusion model uses both air quality monitoring data from the National Air Monitoring Stations/State and Local Air Monitoring Stations (NAMS/SLAMS) and numerical output from the Models-3/Community Multiscale Air Quality (CMAQ) model. Currently, predictions at the US census tract centroid locations within the 12 km CMAQ domain are archived. Predictions at the CMAQ grid cell centroids, or any desired set of locations co
COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior
NASA Technical Reports Server (NTRS)
Smialek, James L.; Auping, Judith V.
2002-01-01
COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows-based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters,
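A much-reduced sketch of the bookkeeping such a cyclic oxidation model performs, assuming parabolic growth during the hot dwell and a constant spall fraction on cooldown; all constants are illustrative, and COSP's actual growth laws and spalling geometries are richer.

```python
import numpy as np

# Toy cyclic-oxidation loop: parabolic oxide growth each hot dwell, then a
# fixed fraction of the retained oxide spalls on cooldown. Constants are
# illustrative placeholders, not COSP defaults.
kp = 0.01        # parabolic rate constant (mg^2 cm^-4 h^-1), assumed
q = 0.02         # spall fraction per cycle, assumed
dt = 1.0         # hot-dwell duration per cycle (h)
STOICH = 0.3     # weight fraction of oxygen in the oxide, assumed

oxide, oxygen_gained, spalled, history = 0.0, 0.0, 0.0, []
for cycle in range(500):
    growth = np.sqrt(oxide**2 + kp * dt) - oxide  # parabolic growth increment
    oxide += growth
    oxygen_gained += growth * STOICH              # only oxygen adds weight
    loss = q * oxide                              # oxide lost on cooldown
    oxide -= loss
    spalled += loss
    history.append(oxygen_gained - spalled)       # net specimen weight change

w = np.array(history)
print(f"max gain {w.max():.3f} at cycle {w.argmax()}, final {w[-1]:.3f} mg/cm^2")
```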
NASA Astrophysics Data System (ADS)
Silversides, Katherine L.; Melkumyan, Arman
2017-03-01
Machine learning techniques such as Gaussian Processes can be used to identify stratigraphically important features in geophysical logs. The marker shales in the banded iron formation hosted iron ore deposits of the Hamersley Ranges, Western Australia, form distinctive signatures in the natural gamma logs. The identification of these marker shales is important for stratigraphic identification of unit boundaries for the geological modelling of the deposit. Machine learning techniques each have unique properties that will impact the results. For Gaussian Processes (GPs), the output values are inclined towards the mean value, particularly when there is not sufficient information in the library. The impact that these inclinations have on the classification can vary depending on the parameter values selected by the user. Therefore, when applying machine learning techniques, care must be taken to fit the technique to the problem correctly. This study focuses on optimising the settings and choices for training a GP system to identify a specific marker shale. We show that the final results converge even when different, but equally valid, starting libraries are used for the training. To analyse the impact on feature identification, GP models were trained so that the output was inclined towards a positive, neutral or negative value. For this type of classification, the best results were obtained when the pull was towards a negative output. We also show that the GP output can be adjusted by using a standard deviation coefficient that changes the balance between certainty and accuracy in the results.
Global and regional ecosystem modeling: comparison of model outputs and field measurements
NASA Astrophysics Data System (ADS)
Olson, R. J.; Hibbard, K.
2003-04-01
The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements, but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as the models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S. Department of Energy under contract DE-AC05-00OR22725
NASA Technical Reports Server (NTRS)
Elshorbany, Yasin F.; Duncan, Bryan N.; Strode, Sarah A.; Wang, James S.; Kouatchou, Jules
2016-01-01
We present the Efficient CH4-CO-OH (ECCOH) chemistry module that allows for the simulation of the methane, carbon monoxide, and hydroxyl radical (CH4-CO-OH) system within a chemistry climate model, carbon cycle model, or Earth system model. The computational efficiency of the module allows many multi-decadal sensitivity simulations of the CH4-CO-OH system, which primarily determines the global atmospheric oxidizing capacity. This capability is important for capturing the nonlinear feedbacks of the CH4-CO-OH system and understanding the perturbations to methane, CO, and OH, and the concomitant impacts on climate. We implemented the ECCOH chemistry module in the NASA GEOS-5 atmospheric global circulation model (AGCM), performed multiple sensitivity simulations of the CH4-CO-OH system over 2 decades, and evaluated the model output with surface and satellite data sets of methane and CO. The favorable comparison of output from the ECCOH chemistry module (as configured in the GEOS-5 AGCM) with observations demonstrates the fidelity of the module for use in scientific research.
Nonlinear system identification of smart structures under high impact loads
NASA Astrophysics Data System (ADS)
Sarp Arsava, Kemal; Kim, Yeesock; El-Korchi, Tahar; Park, Hyo Seon
2013-05-01
The main purpose of this paper is to develop numerical models for the prediction and analysis of the highly nonlinear behavior of integrated structure control systems subjected to high impact loading. A time-delayed adaptive neuro-fuzzy inference system (TANFIS) is proposed for modeling of the complex nonlinear behavior of smart structures equipped with magnetorheological (MR) dampers under high impact forces. Experimental studies are performed to generate sets of input and output data for training and validation of the TANFIS models. The high impact load and current signals are used as the input disturbance and control signals while the displacement and acceleration responses from the structure-MR damper system are used as the output signals. The benchmark adaptive neuro-fuzzy inference system (ANFIS) is used as a baseline. Comparisons of the trained TANFIS models with experimental results demonstrate that the TANFIS modeling framework is an effective way to capture nonlinear behavior of integrated structure-MR damper systems under high impact loading. In addition, the performance of the TANFIS model is much better than that of ANFIS in both the training and the validation processes.
NASA Astrophysics Data System (ADS)
Dolan, B.; Rutledge, S. A.; Barnum, J. I.; Matsui, T.; Tao, W. K.; Iguchi, T.
2017-12-01
POLarimetric Radar Retrieval and Instrument Simulator (POLARRIS) is a framework that has been developed to simulate radar observations from cloud resolving model (CRM) output and subject model data and observations to the same retrievals, analysis and visualization. This framework not only enables validation of bulk microphysical model simulated properties, but also offers an opportunity to study the uncertainties associated with retrievals such as hydrometeor classification (HID). For the CSU HID, membership beta functions (MBFs) are built using a set of simulations with realistic microphysical assumptions about axis ratio, density, canting angles, and size distributions for each of ten hydrometeor species. These assumptions are tested using POLARRIS to understand their influence on the resulting simulated polarimetric data and final HID classification. Several of these parameters (density, size distributions) are set by the model microphysics, and therefore the specific assumptions of axis ratio and canting angle are carefully studied. Through these sensitivity studies, we hope to be able to provide uncertainties in retrieved polarimetric variables and HID as applied to CRM output. HID retrievals assign a classification to each point by determining the highest score, thereby identifying the dominant hydrometeor type within a volume. However, in nature, there is rarely just a single hydrometeor type at a particular point. Models allow for mixing ratios of different hydrometeors within a grid point. We use the mixing ratios from CRM output in concert with the HID scores and classifications to understand how the HID algorithm can provide information about mixtures within a volume, as well as to calculate a confidence in the classifications. We leverage the POLARRIS framework to additionally probe radar wavelength differences, toward the possibility of a multi-wavelength HID which could utilize the strengths of different wavelengths to improve HID classifications. With these uncertainties and algorithm improvements, cases of convection are studied in continental (Oklahoma) and maritime (Darwin, Australia) regimes. Observations from C-band polarimetric data in both locations are compared to CRM simulations from NU-WRF using the POLARRIS framework.
2016-01-01
Background As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. Objective To attain a set of guidelines on the use of machine learning predictive models within clinical settings to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. Methods A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians were interviewed, using an iterative process in accordance with the Delphi method. Results The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. Conclusions A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. PMID:27986644
Bayesian Processor of Output for Probabilistic Quantitative Precipitation Forecasting
NASA Astrophysics Data System (ADS)
Krzysztofowicz, R.; Maranzano, C. J.
2006-05-01
The Bayesian Processor of Output (BPO) is a new, theoretically-based technique for probabilistic forecasting of weather variates. It processes output from a numerical weather prediction (NWP) model and optimally fuses it with climatic data in order to quantify uncertainty about a predictand. The BPO is being tested by producing Probabilistic Quantitative Precipitation Forecasts (PQPFs) for a set of climatically diverse stations in the contiguous U.S. For each station, the PQPFs are produced for the same 6-h, 12-h, and 24-h periods up to 84-h ahead for which operational forecasts are produced by the AVN-MOS (Model Output Statistics technique applied to output fields from the Global Spectral Model run under the code name AVN). The inputs into the BPO are estimated as follows. The prior distribution is estimated from a (relatively long) climatic sample of the predictand; this sample is retrieved from the archives of the National Climatic Data Center. The family of the likelihood functions is estimated from a (relatively short) joint sample of the predictor vector and the predictand; this sample is retrieved from the same archive that the Meteorological Development Laboratory of the National Weather Service utilized to develop the AVN-MOS system. This talk gives a tutorial introduction to the principles and procedures behind the BPO, and highlights some results from the testing: a numerical example of the estimation of the BPO, and a comparative verification of the BPO forecasts and the MOS forecasts. It concludes with a list of demonstrated attributes of the BPO (vis-à-vis the MOS): more parsimonious definitions of predictors, more efficient extraction of predictive information, better representation of the distribution function of the predictand, and equal or better performance (in terms of calibration and informativeness).
Humphries, Mark D; Gurney, Kevin
2012-07-01
Deep brain stimulation (DBS) is a remarkably successful treatment for the motor symptoms of Parkinson's disease. High-frequency stimulation of the subthalamic nucleus (STN) within the basal ganglia is a main clinical target, but the physiological mechanisms of therapeutic STN DBS at the cellular and network level are unclear. We set out to begin to address the hypothesis that a mixture of responses in the basal ganglia output nuclei, combining regularized firing and inhibition, is a key contributor to the effectiveness of STN DBS. We used our computational model of the complete basal ganglia circuit to show how such a mixture of responses in basal ganglia output naturally arises from the network effects of STN DBS. We replicated the diversification of responses recorded in a primate STN DBS study to show that the model's predicted mixture of responses is consistent with therapeutic STN DBS. We then showed how this 'mixture of response' perspective suggests new ideas for DBS mechanisms: first, that the therapeutic frequency of STN DBS is above 100 Hz because the diversification of responses exhibits a step change above this frequency; and second, that optogenetic models of direct STN stimulation during DBS have proven therapeutically ineffective because they do not replicate the mixture of basal ganglia output responses evoked by electrical DBS.
Robust Combining of Disparate Classifiers Through Order Statistics
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in the performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public domain data sets corroborate these findings.
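A minimal sketch of order-statistic combining: per-class scores from several classifiers are combined with the median, the maximum, or a trimmed mean before taking the arg-max. The scores are synthetic stand-ins for real classifier outputs.

```python
import numpy as np

# Order-statistic combining sketch: stack per-class scores from several
# classifiers and combine with the median / max / trimmed mean before arg-max.
rng = np.random.default_rng(7)
n_classifiers, n_samples, n_classes = 7, 5, 3
scores = rng.dirichlet(np.ones(n_classes), size=(n_classifiers, n_samples))

med = np.median(scores, axis=0)                            # median combiner
mx = np.max(scores, axis=0)                                # max combiner
trimmed = np.mean(np.sort(scores, axis=0)[1:-1], axis=0)   # trim combiner

for name, s in [("median", med), ("max", mx), ("trimmed", trimmed)]:
    print(name, s.argmax(axis=1))                          # combined decisions
```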
Capturing planar shapes by approximating their outlines
NASA Astrophysics Data System (ADS)
Sarfraz, M.; Riyazuddin, M.; Baig, M. H.
2006-05-01
A non-deterministic evolutionary approach for approximating the outlines of planar shapes has been developed. Non-uniform Rational B-splines (NURBS) are utilized as the underlying curve approximation scheme, and the Simulated Annealing heuristic is used as the evolutionary methodology. In addition to independent studies of the optimization of the weight and knot parameters of the NURBS, a separate scheme has also been developed for the simultaneous optimization of weights and knots. The optimized NURBS models have been fitted over the contour data of the planar shapes for the final, automatic output. The output results are visually pleasing with respect to the threshold provided by the user. A web-based system has also been developed for effective, worldwide utilization. The objective of this system is to let users anywhere visualize the output through the internet, giving them the freedom to set the various desired input parameters of the algorithm.
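A skeleton of the simulated annealing weight optimization, with a toy rational (quadratic Bezier) curve standing in for the full NURBS machinery; the basis, control ordinates, target data, and cooling schedule are all illustrative assumptions.

```python
import numpy as np

# Simulated annealing skeleton for curve-weight optimization. A rational
# quadratic Bezier stands in for the NURBS; target data, control ordinates
# and cooling schedule are illustrative assumptions.
rng = np.random.default_rng(8)
target = np.sin(np.linspace(0, np.pi, 50))    # stand-in for contour data
P = np.array([0.0, 1.6, 0.0])                 # control ordinates, assumed

def fit_error(w, t=np.linspace(0, 1, 50)):
    basis = np.stack([(1 - t)**2, 2 * t * (1 - t), t**2])  # Bernstein basis
    curve = (w[:, None] * P[:, None] * basis).sum(0) / (w[:, None] * basis).sum(0)
    return float(np.sum((curve - target) ** 2))

w = np.ones(3)                                # current weights
e = fit_error(w)
T = 1.0
for step in range(2000):                      # annealing loop
    w_new = np.clip(w + rng.normal(scale=0.05, size=3), 0.1, 10.0)
    e_new = fit_error(w_new)
    if e_new < e or rng.random() < np.exp((e - e_new) / T):  # Metropolis rule
        w, e = w_new, e_new
    T *= 0.998                                # geometric cooling schedule
print(np.round(w, 3), round(e, 4))
```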
Toward a Geoscientific Semantic Web Based on How Geoscientists Talk Across Disciplines
NASA Astrophysics Data System (ADS)
Peckham, S. D.
2015-12-01
Are there terms and scientific concepts from math and science that almost all geoscientists understand? Is there a limited set of terms, patterns and language elements that geoscientists use for efficient, unambiguous communication that could be used to describe the variables that they measure, store in data sets and use as model inputs and outputs? In this talk it will be argued that the answer to both questions is "yes" by drawing attention to many such patterns and then showing how they have been used to create a rich set of naming conventions for variables called the CSDMS Standard Names. Variables, which store numerical quantities associated with specific objects, are the fundamental currency of science. They are the items that are measured and saved in data sets, which may then be read into models. They are the inputs and outputs of models and the items exchanged between coupled models. They also star in the equations that summarize our scientific knowledge. Carefully constructed, unambiguous and unique labels for commonly used variables therefore provide an attractive mechanism for automatic semantic mediation when variables are to be shared between heterogeneous resources. They provide a means to automatically check for semantic equivalence so that variables can be safely shared in resource compositions. A good set of standardized variable names can serve as the hub in a hub-and-spoke solution to semantic mediation, where the "internal vocabularies" of geoscience resources (i.e. data sets and models) are mapped to and from the hub to facilitate interoperability and data sharing. When built from patterns and terms that most geoscientists are already familiar with, these standardized variable names are then "readable" by both humans and machines. Despite the importance of variables in scientific work, most of the ontological work in the geosciences is focused at a higher level that supports finding resources (e.g. data sets) rather than describing the contents of those resources. The CSDMS Standard Names have matured continuously since they were first introduced over three years ago. Many recent extensions and applications of them (e.g. different science domains, different projects, new rules, ontological work) as well as their compatibility with the International System of Quantities (ISO 80000) will be discussed.
Evaluation of Data Used for Modelling the Stratosphere of Saturn
NASA Astrophysics Data System (ADS)
Armstrong, Eleanor Sophie; Irwin, Patrick G. J.; Moses, Julianne I.
2015-11-01
Planetary atmospheres are modeled through the use of a photochemical and kinetic reaction scheme constructed from experimentally and theoretically determined rate coefficients, photoabsorption cross sections and branching ratios for the molecules described within them. The KINETICS architecture has previously been developed to model planetary atmospheres and is applied here to Saturn's stratosphere. We consider the pathways that comprise the reaction scheme of a current model, and update the reaction scheme according to the findings of a literature investigation. We evaluate contemporary photochemical literature, studying recent data sets of cross sections and branching ratios for a number of hydrocarbons used in the photochemical scheme of Model C of KINETICS. In particular, new photodissociation branching ratios for CH4, C2H2, C2H4, C3H3, C3H5 and C4H2, and new cross-section data for C2H2, C2H4, C2H6, C3H3, C4H2, C6H2 and C8H2, are evaluated. By evaluating the techniques used and data sets obtained, a new reaction scheme selection was drawn up. These data are then used within the preferred reaction scheme of the thesis and applied to the KINETICS atmospheric model to produce a model of the stratosphere of Saturn in a steady state. A total output of the preferred reaction scheme is presented, and the data are compared both with the previous reaction scheme and with data from the Cassini spacecraft in orbit around Saturn. One of the key findings of this work is that there is significant change in the model's output as a result of temperature-dependent data determination. Although only shown within the changes to the photochemical portion of the preferred reaction scheme, it is suggested that an equally important temperature dependence will be exhibited in the kinetic section of the reaction scheme. The photochemical model output is shown to be highly dependent on the preferred reaction scheme used within it by this thesis. The importance of correct and temperature-appropriate photochemical and kinetic data for the atmosphere under examination is emphasised as a consequence.
Parameter extraction with neural networks
NASA Astrophysics Data System (ADS)
Cazzanti, Luca; Khan, Mumit; Cerrina, Franco
1998-06-01
In semiconductor processing, the modeling of the process is becoming more and more important. While the ultimate goal is that of developing a set of tools for designing a complete process (Technology CAD), it is also necessary to have modules to simulate the various technologies and, in particular, to optimize specific steps. This need is particularly acute in lithography, where the continuous decrease in CD forces the technologies to operate near their limits. In the development of a 'model' for a physical process, we face several levels of challenges. First, it is necessary to develop a 'physical model,' i.e. a rational description of the process itself on the basis of known physical laws. Second, we need an 'algorithmic model' to represent in a virtual environment the behavior of the 'physical model.' After a 'complete' model has been developed and verified, it becomes possible to do performance analysis. In many cases the input parameters are poorly known or not accessible directly to experiment. It would be extremely useful to obtain the values of these 'hidden' parameters from experimental results by comparing model to data. This problem is particularly severe because the complexity and costs associated with semiconductor processing make a simple 'trial-and-error' approach infeasible and cost-inefficient. Even when computer models of the process already exist, obtaining data through simulations may be time consuming. Neural networks (NN) are powerful computational tools to predict the behavior of a system from an existing data set. They are able to adaptively 'learn' input/output mappings and to act as universal function approximators. In this paper we use artificial neural networks to build a mapping from the input parameters of the process to output parameters which are indicative of the performance of the process. Once the NN has been 'trained,' it is also possible to observe the process 'in reverse,' and to extract the values of the inputs which yield outputs with desired characteristics. Using this method, we can extract optimum values for the parameters and determine the process latitude very quickly.
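As a rough illustration of this idea, the sketch below trains a small network on simulated input/output pairs and then runs it "in reverse" by searching for inputs whose predicted outputs match a measured target. The toy process function and all parameter names are assumptions, not the paper's actual lithography model; scikit-learn is assumed available:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(500, 2))   # hidden process parameters (toy)
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # simulated process response (assumption)

    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
    net.fit(X, y)                              # learn the forward input/output mapping

    target = 0.8                               # measured output to explain
    g = np.linspace(0.0, 1.0, 200)
    grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    best = grid[np.argmin(np.abs(net.predict(grid) - target))]
    print("extracted parameters:", best)       # inputs whose prediction matches the target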
A depictive neural model for the representation of motion verbs.
Rao, Sunil; Aleksander, Igor
2011-11-01
In this paper, we present a depictive neural model for the representation of motion verb semantics in neural models of visual awareness. The problem of modelling motion verb representation is shown to be one of function application, mapping a set of given input variables defining the moving object and the path of motion to a defined outcome in the motion recognition context. The particular function-applicative implementation and consequent recognition model design presented are seen as arising from a noun-adjective recognition model enabling the recognition of colour adjectives as applied to a set of shapes representing objects to be recognised. The presence of such a function application scheme and a separately implemented position identification and path labelling scheme are accordingly shown to be the primitives required to enable the design and construction of a composite depictive motion verb recognition scheme. Extensions to the presented design to enable the representation of transitive verbs are also discussed.
Building Simulation Modelers are we big-data ready?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanyal, Jibonananda; New, Joshua Ryan
Recent advances in computing and sensor technologies have pushed the amount of data we collect or generate to limits previously unheard of. Sub-minute resolution data from dozens of channels is becoming increasingly common and is expected to increase with the prevalence of non-intrusive load monitoring. Experts are running larger building simulation experiments and are faced with an increasingly complex data set to analyze and derive meaningful insight. This paper focuses on the data management challenges that building modeling experts may face in data collected from a large array of sensors, or generated from running a large number of building energy/performance simulations. The paper highlights the technical difficulties that were encountered and overcome in order to run 3.5 million EnergyPlus simulations on supercomputers and generating over 200 TBs of simulation output. This extreme case involved development of technologies and insights that will be beneficial to modelers in the immediate future. The paper discusses different database technologies (including relational databases, columnar storage, and schema-less Hadoop) in order to contrast the advantages and disadvantages of employing each for storage of EnergyPlus output. Scalability, analysis requirements, and the adaptability of these database technologies are discussed. Additionally, unique attributes of EnergyPlus output are highlighted which make data-entry non-trivial for multiple simulations. Practical experience regarding cost-effective strategies for big-data storage is provided. The paper also discusses network performance issues when transferring large amounts of data across a network to different computing devices. Practical issues involving lag, bandwidth, and methods for synchronizing or transferring logical portions of the data are presented. A cornerstone of big-data is its use for analytics; data is useless unless information can be meaningfully derived from it. In addition to technical aspects of managing big data, the paper details design of experiments in anticipation of large volumes of data. The cost of re-reading output into an analysis program is elaborated and analysis techniques that perform analysis in-situ with the simulations as they are run are discussed. The paper concludes with an example and elaboration of the tipping point where it becomes more expensive to store the output than re-running a set of simulations.
NASA Technical Reports Server (NTRS)
Koenig, D. G.; Falarski, M. D.
1979-01-01
Tests were made in the Ames 40- by 80-foot wind tunnel to determine the forward speed effects on wing-mounted thrust augmentors. The large-scale model was powered by the compressor output of J-85 driven Viper compressors. The flap settings used were 15 deg and 30 deg with 0 deg, 15 deg, and 30 deg aileron settings. The maximum duct pressure and wind tunnel dynamic pressure were 66 cmHg (26 in Hg) and 1190 N/sq m (25 lb/sq ft), respectively. All tests were made at zero sideslip. Test results are presented without analysis.
Computational and Statistical Models: A Comparison for Policy Modeling of Childhood Obesity
NASA Astrophysics Data System (ADS)
Mabry, Patricia L.; Hammond, Ross; Ip, Edward Hak-Sing; Huang, Terry T.-K.
As systems science methodologies have begun to emerge as a set of innovative approaches to address complex problems in behavioral, social science, and public health research, some apparent conflicts with traditional statistical methodologies for public health have arisen. Computational modeling is an approach set in context that integrates diverse sources of data to test the plausibility of working hypotheses and to elicit novel ones. Statistical models are reductionist approaches geared towards testing the null hypothesis. While these two approaches may seem contrary to each other, we propose that they are in fact complementary and can be used jointly to advance solutions to complex problems. Outputs from statistical models can be fed into computational models, and outputs from computational models can lead to further empirical data collection and statistical models. Together, this presents an iterative process that refines the models and contributes to a greater understanding of the problem and its potential solutions. The purpose of this panel is to foster communication and understanding between statistical and computational modelers. Our goal is to shed light on the differences between the approaches and convey what kinds of research inquiries each one is best for addressing and how they can serve complementary (and synergistic) roles in the research process, to mutual benefit. For each approach the panel will cover the relevant "assumptions" and how the differences in what is assumed can foster misunderstandings. The interpretations of the results from each approach will be compared and contrasted and the limitations for each approach will be delineated. We will use illustrative examples from CompMod, the Comparative Modeling Network for Childhood Obesity Policy. The panel will also incorporate interactive discussions with the audience on the issues raised here.
UMAP Modules-Units 71, 72, 73, 74, 75, 81-83, 234.
ERIC Educational Resources Information Center
Horelick, Brindell; And Others
The first four units cover aspects of medical applications of calculus: 71-Measuring Cardiac Output; 72-Prescribing Safe and Effective Dosage; 73-Epidemics; and 74-Tracer Methods in Permeability. All units include a set of exercises and answers to at least some of the problems. Unit 72 also contains a model exam and answers to this exam. The fifth…
One joule per Q-switched pulse diode-pumped laser
NASA Technical Reports Server (NTRS)
Holder, Lonnie E.; Kennedy, Chandler; Long, Larry; Dube, George
1992-01-01
Q-switched 1-J output has been achieved from diode-pumped zig-zag Nd:YAG slabs in an oscillator-amplifier configuration. The oscillator operated in a single transverse and longitudinal mode. This laser set records for Q-switched energy per pulse and for average power from a diode-pumped laser. The laser was constructed in a rugged configuration suitable for routine laboratory use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2011-01-01
The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarizes the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 steady state and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the primary contributors to the output data uncertainties. It was found that the uncertainties in the decay heat, pebble bed and reflector thermal conductivities were responsible for the bulk of the propagated uncertainty in the DLOFC maximum fuel temperature. It was also determined that the two standard deviation (2σ) uncertainty on the maximum fuel temperature was between ±58 °C (3.6%) and ±76 °C (4.7%) on a mean value of 1604 °C. These values depended mostly on the selection of the distribution types, and not on the number of model calculations above the required Wilks criterion (a (95%, 95%) statement would usually require 93 model runs).
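For reference, the "93 model runs" figure follows from Wilks' formula for distribution-free tolerance limits. This short sketch (the standard formulas, not code from the report) finds the smallest number of runs supporting one- and two-sided (95%, 95%) statements:

    def wilks_n(coverage=0.95, confidence=0.95, two_sided=True):
        """Smallest sample size n such that the extreme sample values bound the
        chosen coverage of the output distribution at the chosen confidence."""
        n = 2 if two_sided else 1
        while True:
            if two_sided:
                conf = 1 - coverage**n - n * (1 - coverage) * coverage**(n - 1)
            else:
                conf = 1 - coverage**n
            if conf >= confidence:
                return n
            n += 1

    print(wilks_n(two_sided=False))  # 59 runs for a one-sided (95%, 95%) statement
    print(wilks_n())                 # 93 runs for the two-sided statement cited above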
Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation
NASA Astrophysics Data System (ADS)
Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi
2016-09-01
We propose a statistical modeling method for wind power output for very short-term prediction. The method uses a nonlinear model with a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is used for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
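A rough Python sketch of the cascade idea on synthetic data, assuming an AR(1) linear part and empirical quantile mapping for the distribution-matching part; both are simplifications of the paper's model, and the data series is invented:

    import numpy as np

    rng = np.random.default_rng(2)
    power = np.abs(np.sin(np.linspace(0, 50, 2000))) + 0.1 * rng.random(2000)

    # Linear dynamic part: Gaussianize the series, then fit AR(1) by least squares.
    ranks = np.argsort(np.argsort(power))
    z = np.sort(rng.normal(size=power.size))[ranks]
    phi = np.polyfit(z[:-1], z[1:], 1)[0]

    # Nonlinear static part: map Gaussian forecasts back through the empirical
    # quantiles of the observed power (output distribution matching).
    def to_power(z_hat):
        q = np.clip(np.searchsorted(np.sort(z), z_hat) / z.size, 0.0, 1.0)
        return np.quantile(power, q)

    z_next = phi * z[-1]            # one-step-ahead prediction in Gaussian space
    print("predicted power:", to_power(z_next))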
NASA Astrophysics Data System (ADS)
Dhingra, Sunil; Bhushan, Gian; Dubey, Kashyap Kumar
2014-03-01
The present work studies and identifies the different variables that affect the output parameters of a single-cylinder direct injection compression ignition (CI) engine running on jatropha biodiesel. Response surface methodology based on a central composite design (CCD) is used to design the experiments. Mathematical models are developed for combustion parameters (brake specific fuel consumption (BSFC) and peak cylinder pressure (Pmax)), the performance parameter brake thermal efficiency (BTE) and emission parameters (CO, NOx, unburnt HC and smoke) using regression techniques. These regression equations are further utilized for simultaneous optimization of the combustion (BSFC, Pmax), performance (BTE) and emission (CO, NOx, HC, smoke) parameters. As the objective is to maximize BTE and minimize BSFC, Pmax, CO, NOx, HC and smoke, a multiobjective optimization problem is formulated. The non-dominated sorting genetic algorithm II (NSGA-II) is used to predict the Pareto-optimal sets of solutions. Experiments are performed at suitable optimal solutions to predict the combustion, performance and emission parameters and to check the adequacy of the proposed model. The Pareto-optimal sets of solutions can be used as guidelines for end users to select an optimal combination of engine output and emission parameters depending upon their own requirements.
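The core of NSGA-II's selection step is non-dominated sorting. A minimal sketch of extracting the first Pareto front, assuming all objectives have been put in "minimize" form (e.g. BSFC, -BTE, NOx); the data are random placeholders, not the engine measurements:

    import numpy as np

    def pareto_mask(F):
        """F: (n_points, n_objectives), all objectives to be minimized.
        Returns a boolean mask marking the non-dominated rows."""
        keep = np.ones(len(F), dtype=bool)
        for i in range(len(F)):
            # i is dominated if some point is <= in every objective and < in one
            dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            keep[i] = not dominates_i.any()
        return keep

    rng = np.random.default_rng(3)
    F = rng.random((50, 3))          # e.g. columns: BSFC, -BTE, NOx (all minimized)
    print(F[pareto_mask(F)])         # the Pareto-optimal set of solutions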
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
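A small sketch of how such a percent-contribution test might be computed; the coefficient values and the full-scale output of 9000 counts are invented for illustration and are not taken from the paper's calibration data:

    import numpy as np

    def significant_terms(coeffs, terms_at_capacity, output_at_capacity, threshold=0.05):
        """Keep a regression term if its contribution at load capacity exceeds
        `threshold` percent of the gage output at capacity."""
        contribution = 100.0 * np.abs(coeffs * terms_at_capacity) / abs(output_at_capacity)
        return contribution > threshold

    # three hypothetical terms (linear, quadratic, constant) of one gage model
    print(significant_terms(np.array([2.0e-3, 5.0e-7, 8.0e-2]),
                            np.array([2500.0, 2500.0**2, 1.0]),
                            output_at_capacity=9000.0))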
NASA Astrophysics Data System (ADS)
Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.
2018-02-01
The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
Phenomenological model of maize starches expansion by extrusion
NASA Astrophysics Data System (ADS)
Kristiawan, M.; Della Valle, G.; Kansou, K.; Ndiaye, A.; Vergnes, B.
2016-10-01
During extrusion of starchy products, the molten material is forced through a die so that the sudden pressure drop causes part of the water to vaporize, giving an expanded, cellular structure. The objective of this work was to elaborate a phenomenological model of expansion and couple it with the Ludovic® mechanistic model of the twin screw extrusion process. From experimental results that cover a wide range of thermomechanical conditions, a concept map of influence relationships between input and output variables was built. It took into account the phenomena of bubble nucleation, growth, coalescence, shrinkage and setting in a viscoelastic medium. The input variables were the moisture content MC, melt temperature T, specific mechanical energy SME, shear viscosity η at the die exit, computed by Ludovic®, and the melt storage modulus E′ (at T > Tg). The outputs of the model were the macrostructure (volumetric expansion index VEI, anisotropy) and cellular structure (fineness F) of the solid foams. A general model was then established, VEI = α(η/η0)^n, in which α and n depend on T, MC, SME and E′, and the link between anisotropy and fineness was established.
Matsubara, Takashi; Torikai, Hiroyuki
2016-04-01
Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models have been proposed; these are special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits. This paper presents a novel type of ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic the input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array (FPGA) implementations confirm that the presented network requires lower computational resources.
A parametric LQ approach to multiobjective control system design
NASA Technical Reports Server (NTRS)
Kyr, Douglas E.; Buchner, Marc
1988-01-01
The synthesis of a constant-parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple objective decision making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.
To publish or not to publish? On the aggregation and drivers of research performance
De Witte, Kristof
2010-01-01
This paper presents a methodology to aggregate multidimensional research output. Using a tailored version of the non-parametric Data Envelopment Analysis model, we account for the large heterogeneity in research output and the individual researcher preferences by endogenously weighting the various output dimensions. The approach offers three important advantages compared to the traditional approaches: (1) flexibility in the aggregation of different research outputs into an overall evaluation score; (2) a reduction of the impact of measurement errors and atypical observations; and (3) a correction for the influences of a wide variety of factors outside the evaluated researcher's control. As a result, research evaluations are more effective representations of actual research performance. The methodology is illustrated on a data set of all faculty members at a large polytechnic university in Belgium. The sample includes questionnaire items on the motivation and perception of the researcher. This allows us to explore whether motivation and background characteristics (such as age, gender, retention, etc.) of the researchers explain variations in measured research performance. PMID:21057573
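A toy sketch of the underlying DEA idea, in which each researcher receives the output weights most favourable to them. This is the classic CCR model with a single unit input, solved as a linear program; it is not the paper's tailored specification, and the output figures are invented:

    import numpy as np
    from scipy.optimize import linprog

    Y = np.array([[5.0, 2.0],      # outputs per researcher, e.g. papers, projects
                  [3.0, 4.0],
                  [1.0, 1.0]])
    x = np.ones((3, 1))            # single unit input per researcher

    def efficiency(k):
        n_u, n_v = Y.shape[1], x.shape[1]
        c = np.concatenate([-Y[k], np.zeros(n_v)])           # maximize u.y_k
        A_ub = np.hstack([Y, -x])                            # u.y_j - v.x_j <= 0 for all j
        A_eq = np.concatenate([np.zeros(n_u), x[k]])[None]   # v.x_k = 1 (normalization)
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(Y)), A_eq=A_eq, b_eq=[1.0])
        return -res.fun

    print([round(efficiency(k), 3) for k in range(3)])       # 1.0 marks efficient units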
NASA Astrophysics Data System (ADS)
Chegwidden, O.; Nijssen, B.; Mao, Y.; Rupp, D. E.
2016-12-01
The Columbia River Basin (CRB) in the United States' Pacific Northwest (PNW) is highly regulated for hydropower generation, flood control, fish survival, irrigation and navigation. Historically it has had a hydrologic regime characterized by winter precipitation in the form of snow, followed by a spring peak in streamflow from snowmelt. Anthropogenic climate change is expected to significantly alter this regime, causing changes to streamflow timing and volume. While numerous hydrologic studies have been conducted across the CRB, the impact of methodological choices in hydrologic modeling has not been as heavily investigated. To better understand their impact on the spread in modeled projections of hydrological change, we ran simulations involving permutations of a variety of methodological choices. We used outputs from ten global climate models (GCMs) and two representative concentration pathways from the Intergovernmental Panel on Climate Change's Fifth Assessment Report. After downscaling the GCM output using three different techniques, we forced the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS), both implemented at 1/16th degree (about 5 km) resolution, for the period 1950-2099. For the VIC model, we used three independently derived parameter sets. We will show results from the range of simulations, both in the form of basin-wide spatial analyses of hydrologic variables and through analyses of changes in streamflow at selected sites throughout the CRB. We will then discuss the differences in sensitivities to climate change seen among the projections, paying particular attention to differences in projections from the hydrologic models and different parameter sets.
Martin, A.D.
1986-05-09
Method and apparatus are provided for generating an output pulse following a trigger pulse at a time delay interval preset with a resolution which is high relative to a low resolution available from supplied clock pulses. A first lumped constant delay (LCD) provides a first output signal at predetermined interpolation intervals corresponding to the desired high resolution time interval. Latching circuits latch the high resolution data to form a first synchronizing data set. A selected time interval has been preset to internal counters and corrected for circuit propagation delay times having the same order of magnitude as the desired high resolution. Internal system clock pulses count down the counters to generate an internal pulse delayed by an interval which is functionally related to the preset time interval. A second LCD corrects the internal signal with the high resolution time delay. A second internal pulse is then applied to a third LCD to generate a second set of synchronizing data which is complementary with the first set of synchronizing data for presentation to logic circuits. The logic circuits further delay the internal output signal with the internal pulses. The final delayed output signal thereafter enables the output pulse generator to produce the desired output pulse at the preset time delay interval following input of the trigger pulse.
NASA Astrophysics Data System (ADS)
Harris, Andrew; Latutrie, Benjamin; Andredakis, Ioannis; De Groeve, Tom; Langlois, Eric; van Wyk de Vries, Benjamin; Del Negro, Ciro; Favalli, Massimiliano; Fujita, Eisuke; Kelfoun, Karim; Rongo, Rocco
2016-04-01
RED-SEED stands for Risk Evaluation, Detection and Simulation during Effusive Eruption Disasters, and combines stakeholders from the remote sensing, modeling and response communities with experience in tracking volcanic effusive events. It is an informal working group that has evolved around the philosophy of combining global scientific resources, in the realm of physical volcanology, remote sensing and modeling, to better define and limit uncertainty. The group first met during a three-day workshop held in Clermont-Ferrand (France) between 28 and 30 May 2013. The main recommendation of the workshop in terms of modeling was that there is a pressing need for "real-time input of reliable Time-Averaged Discharge Rate (TADR) data with regular up-dates of Digital Elevation Models (DEMs) if modeling is to be effective; the DEMs can be provided by the radar/photogrammetry community." We thus set up a test to explore (i) which model source terms are needed, (ii) how they can be provided and updated, and (iii) how models can be run and applied in an ensemble approach. The test used two hypothetical effusive events in the Chaîne des Puys (Auvergne, France), for which a prototype Geographical Information System (GIS) was set up to allow loss assessment during an effusive crisis. This system drew on all immediately available data for population, land use, communications, utility and building type. After defining lava flow model source terms (vent location, effusion rate, lava chemistry, temperature, crystallinity and vesicularity), five operational lava flow emplacement models were run (DOWNFLOW, FLOWGO, LAVASIM, MAGFLOW and VOLCFLOW) to produce a projection for likelihood of impact for all pixels within the area covered by the GIS, based on agreement between models. The test thus aimed not to assess the model output itself, but to examine overlapping output. Next, inundation maps and damage reports for impacted zones were produced. The exercise identified several shortcomings of the modeling systems, but indicates that generation of a global response system for effusive crises that uses rapid-response model projections for lava inundation driven by real-time satellite hot spot detection - and open access data sets - is within the current capabilities of the community.
Sainz de Murieta, Iñaki; Rodríguez-Patón, Alfonso
2012-08-01
Despite the many designs of devices operating via DNA strand displacement, surprisingly none is explicitly devoted to the implementation of logical deductions. The present article introduces a new model of biosensor device that uses nucleic acid strands to encode simple rules such as "IF DNA_strand(1) is present THEN disease(A)" or "IF DNA_strand(1) AND DNA_strand(2) are present THEN disease(B)". Taking advantage of the strand displacement operation, our model makes these simple rules interact with input signals (either DNA or any type of RNA) to generate an output signal (in the form of nucleotide strands). This output signal represents a diagnosis, which can either be measured using FRET techniques, cascaded as the input of another logical deduction with different rules, or even be a drug that is administered in response to a set of symptoms. The encoding introduces an implicit error cancellation mechanism, which increases the system scalability, enabling longer inference cascades with a bounded and controllable signal-noise relation. It also allows the same rule to be used in forward inference or backward inference, providing the option of validly outputting negated propositions (e.g. "diagnosis A excluded"). The models presented in this paper can be used to implement smart logical DNA devices that perform genetic diagnosis in vitro.
SU-E-T-223: High-Energy Photon Standard Dosimetry Data: A Quality Assurance Tool.
Lowenstein, J; Kry, S; Molineu, A; Alvarez, P; Aguirre, J; Summers, P; Followill, D
2012-06-01
We describe the Radiological Physics Center's (RPC) extensive standard dosimetry data set determined from on-site audit measurements. Measurements were made during on-site audits of institutions participating in NCI-funded cooperative clinical trials over 44 years, using a 0.6 cc cylindrical ionization chamber placed within the RPC's water tank. Measurements were made on Varian, Siemens, and Elekta/Philips accelerators for 11 different energies from 68 models of accelerators. We have measured percent depth dose, output factors, and off-axis factors for 123 different accelerator model/energy combinations for which we have 5 or more sets of measurements. The RPC analyzed these data and determined the 'standard data' for each model/energy combination. The RPC defines 'standard data' as the mean value of 5 or more sets of dosimetry data or agreement with published depth dose data (within 2%). The analysis of these standard data indicates that for modern accelerator models, the dosimetry data for a particular model/energy are within ±2%. The RPC has always found that accelerators of the same make/model/energy combination have the same dosimetric properties in terms of depth dose, field size dependence and off-axis factors. Because of this consistency, the RPC can assign standard data for percent depth dose, average output factors and off-axis factors for a given combination of energy and accelerator make and model. The RPC standard data can be used as a redundant quality assurance tool to assist medical physicists in gaining confidence in their clinical data to within 2%. The next step is for the RPC to provide a way for institutions to submit data to the RPC to determine if their data agree with the standard data as a redundant check. This work was supported by PHS grant CA10953 awarded by NCI, DHHS.
NASA Astrophysics Data System (ADS)
Maksimov, German A.; Radchenko, Aleksei V.
2006-05-01
Acoustical stimulation (AS) of the oil production rate from a well is a promising technology for the oil industry, but the physical mechanisms of acoustical action are not clearly understood owing to the complex character of the phenomena. In practice, the role of these mechanisms appears only indirectly, in the form of additional oil output. The validity of any physical model therefore has to be examined with account taken of the mechanism of acoustic action itself, as well as of the preceding and subsequent stages involving fluid filtration into the well. An advanced model of the physical processes taking place during acoustical stimulation is considered in the framework of a heating mechanism of acoustical action, but for a two-component fluid in a porous permeable medium. The pore fluid is treated as consisting of light and heavy hydrocarbon phases in thermodynamic equilibrium. Filtration or acoustical stimulation can shift the equilibrium between the phases, so the heavy phase can be precipitated on pore walls or dissolved. A set of acoustical, heat and filtration problems was solved numerically to describe the oil output from a well (the final result of acoustical action), which can be compared with experimental data. It is shown that the suggested numerical model reproduces the basic features of fluid filtration in a well before, during and after acoustical stimulation.
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
Advanced fast 3D DSA model development and calibration for design technology co-optimization
NASA Astrophysics Data System (ADS)
Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing
2017-04-01
Direct Optimization (DO) of a 3D DSA model is a better approach to a DTCO study, in terms of accuracy and speed, than a Cahn-Hilliard equation solver. DO's shorter run time (10X to 100X faster) and linear scaling make it scalable to the area required for a DTCO study. However, the lack of temporal data output, as opposed to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns. The calibration pattern's design is extremely important when temporal data is absent in order to obtain robust model parameters. A model calibrated to a hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using nontemporal data. Preliminary model prediction using programmed defects on chemo-epitaxy shows encouraging results and agrees qualitatively well with theoretical predictions from strong segregation theory.
Two Unipolar Terminal-Attractor-Based Associative Memories
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang; Wu, Chwan-Hwa
1995-01-01
Two unipolar mathematical models of an electronic neural network functioning as a terminal-attractor-based associative memory (TABAM) have been developed. The models comprise sets of equations describing interactions between time-varying inputs and outputs of the neural-network memory, regarded as a dynamical system. They simplify the design and operation of an optoelectronic processor implementing a TABAM that performs associative recall of images. The TABAM concept is described in "Optoelectronic Terminal-Attractor-Based Associative Memory" (NPO-18790). Experimental optoelectronic apparatus that performed associative recall of binary images is described in "Optoelectronic Inner-Product Neural Associative Memory" (NPO-18491).
NASA Astrophysics Data System (ADS)
Jiao, Peng; Yang, Er; Ni, Yong Xin
2018-06-01
The overland flow resistance on a grassland slope of 20° was studied using simulated rainfall experiments. A model of the overland flow resistance coefficient was established based on a BP neural network. The input variables of the model were rainfall intensity, flow velocity, water depth, and roughness of the slope surface, and the output variable was the overland flow resistance coefficient. The model was optimized by a genetic algorithm. The results show that the model can be used to calculate the overland flow resistance coefficient with high simulation accuracy. The average prediction error of the optimized model on the test set is 8.02%, and the maximum prediction error is 18.34%.
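A rough sketch of the network part of this approach, with synthetic data standing in for the rainfall experiments and the genetic-algorithm hyperparameter search omitted for brevity; scikit-learn's MLPRegressor plays the role of the BP network, and the toy resistance law is an assumption:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)
    X = rng.uniform(size=(300, 4))      # rain intensity, velocity, depth, roughness
    f = 0.05 + 0.5 * X[:, 3] / (X[:, 1] + 0.1) + 0.2 * X[:, 2]   # toy resistance law

    X_tr, X_te, f_tr, f_te = train_test_split(X, f, random_state=0)
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                       random_state=0).fit(X_tr, f_tr)
    err = 100.0 * np.mean(np.abs(net.predict(X_te) - f_te) / f_te)
    print(f"mean relative prediction error on the test set: {err:.2f}%")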
On the decoding process in ternary error-correcting output codes.
Escalera, Sergio; Pujol, Oriol; Radeva, Petia
2010-01-01
A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
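A minimal sketch of ternary ECOC decoding in which positions coded 0 ("do not care") are excluded from the distance, illustrating the kind of bias correction the paper formalizes; the coding matrix and classifier outputs are invented, and the paper's actual decoding measures differ in detail:

    import numpy as np

    M = np.array([[ 1, -1,  0],    # coding matrix: rows = classes,
                  [-1,  1,  1],    # columns = binary classifiers,
                  [ 0,  1, -1]])   # 0 = "do not care" for that classifier

    def decode(outputs):
        """outputs: real-valued responses of the binary classifiers."""
        care = M != 0
        mismatches = (M != np.sign(outputs)) & care
        dist = mismatches.sum(axis=1) / care.sum(axis=1)   # normalized per class
        return int(np.argmin(dist))

    print(decode(np.array([0.9, -0.4, 0.7])))   # -> index of the predicted class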
Du, Mingxuan; Fouché, Olivier; Zavattero, Elodie; Ma, Qiang; Delestre, Olivier; Gourbesville, Philippe
2018-02-22
Integrated hydrodynamic modelling is an efficient approach for making semi-quantitative scenarios reliable enough for groundwater management, provided that the numerical simulations are from a validated model. The model set-up, however, involves many inputs due to the complexity of both the hydrological system and the land use. The case study of a Mediterranean alluvial unconfined aquifer in the lower Var valley (Southern France) is useful to test a method to estimate lacking data on water abstraction by small farms in an urban context. With this estimation of the undocumented pumping volumes, and after calibration of the exchange parameters of the stream-aquifer system with the help of a river model, the groundwater flow model shows a high goodness of fit with the measured potentiometric levels. The consistency between simulated results and the real behaviour of the system, with regard to the observed effects of lowering weirs and previously published hydrochemistry data, confirms the reliability of the groundwater flow model. On the other hand, the accuracy of the transport model output may be influenced by many parameters, many of which are not derived from field measurements. In this case study, for which river-aquifer feeding is the main control, the partition coefficient between direct recharge and runoff does not show a significant effect on the transport model output, and therefore uncertainty in hydrological terms such as evapotranspiration and runoff is not a first-rank issue for pollution propagation. The simulation of pollution scenarios with the model returns expected pessimistic outputs with regard to hazard management. The model is now ready to be used in a decision support system by the local water supply managers.
Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics
NASA Astrophysics Data System (ADS)
Lazarus, S. M.; Holman, B. P.; Splitt, M. E.
2017-12-01
A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid utilizing flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal and 28 east-central Florida stations, the method is applied to one year of 24 h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both the deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented, a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
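A compact sketch of Gaussian EMOS for a single wind component on synthetic data: the predictive mean is affine in the ensemble mean and the predictive variance affine in the ensemble variance, fit here by maximum likelihood (the paper's exact estimation details may differ, e.g. minimum-CRPS fitting):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    ens = rng.normal(5.0, 2.0, size=(1000, 20))                # ensemble: (cases, members)
    obs = 0.8 * ens.mean(axis=1) + rng.normal(0.0, 1.2, 1000)  # synthetic observations

    m, v = ens.mean(axis=1), ens.var(axis=1)

    def nll(p):
        a, b, c, d = p
        sigma = np.sqrt(np.maximum(c + d * v, 1e-6))    # variance affine in ensemble variance
        return -norm.logpdf(obs, loc=a + b * m, scale=sigma).sum()

    a, b, c, d = minimize(nll, x0=[0.0, 1.0, 1.0, 0.1], method="Nelder-Mead").x
    print("EMOS coefficients:", a, b, c, d)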
Reproducible, Component-based Modeling with TopoFlow, A Spatial Hydrologic Modeling Toolkit
Peckham, Scott D.; Stoica, Maria; Jafarov, Elchin; ...
2017-04-26
Modern geoscientists have online access to an abundance of different data sets and models, but these resources differ from each other in myriad ways and this heterogeneity works against interoperability as well as reproducibility. The purpose of this paper is to illustrate the main issues and some best practices for addressing the challenge of reproducible science in the context of a relatively simple hydrologic modeling study for a small Arctic watershed near Fairbanks, Alaska. This study requires several different types of input data in addition to several, coupled model components. All data sets, model components and processing scripts (e.g. for preparation of data and figures, and for analysis of model output) are fully documented and made available online at persistent URLs. Similarly, all source code for the models and scripts is open-source, version controlled and made available online via GitHub. Each model component has a Basic Model Interface (BMI) to simplify coupling and its own HTML help page that includes a list of all equations and variables used. The set of all model components (TopoFlow) has also been made available as a Python package for easy installation. Three different graphical user interfaces for setting up TopoFlow runs are described, including one that allows model components to run and be coupled as web services.
NASA Astrophysics Data System (ADS)
Vasić, M.; Radojević, Z.
2017-08-01
One of the main disadvantages of the recently reported method for setting up the drying regime based on the theory of moisture migration during drying lies in the fact that it requires a large number of isothermal experiments, each run with different drying air parameters. The main goal of this paper was to find a way to reduce the number of isothermal experiments without affecting the quality of the previously proposed calculation method. The first task was to define the lower and upper input levels, as well as the output, of the “black box” used in Box-Wilkinson’s orthogonal multi-factorial experimental design. Three inputs (drying air temperature, humidity and velocity) were used within the experimental design. The output parameter of the model is the time interval between any two chosen characteristic points on the Deff - t curve. The second task was to calculate the output parameter for each planned experiment. The final output of the model is an equation which can predict the time interval between any two chosen characteristic points as a function of the drying air parameters. This equation is valid for any value of the drying air parameters within the area bounded by the lower and upper limiting values.
Converting the Active Digital Controller for Use in Two Tests
NASA Technical Reports Server (NTRS)
Wright, Robert G.
1995-01-01
The Active Digital Controller is a system used to control the various functions of wind tunnel models. It has the capability of digitizing and saving up to sixty-four channels of analog data. It can output up to sixteen channels of analog command signals. In addition to its use as a general controller, it can run up to two distinct control laws. All of this is done at a regulated speed of two hundred hertz. The Active Digital Controller (ADC) was modified for use in the Actively Controlled Response of Buffet Affected Tails (ACROBAT) tests and for side-wall pressure data acquisition. The changes included general maintenance and updating of the controller as well as setting up special modes of operation. The ACROBAT tests required that two sets of output signals be available. The pressure data acquisition needed a sampling rate of four hundred hertz, twice the standard ADC rate. These modifications were carried out and the ADC was used during the ACROBAT wind tunnel entry.
User interface for ground-water modeling: Arcview extension
Tsou, Ming‐shu; Whittemore, Donald O.
2001-01-01
Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.
Working Characteristics of Variable Intake Valve in Compressed Air Engine
Yu, Qihui; Shi, Yan; Cai, Maolin
2014-01-01
A new camless compressed air engine is proposed, which can make the compressed air energy reasonably distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using the software MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to design the compressed air engine. Results show that, firstly, the simulation results have good consistency with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can inform the design of the camless valve of a compressed air engine. PMID:25379536
Economic impacts of a transition to higher oil prices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tessmer, Jr, R. G.; Carhart, S. C.; Marcuse, W.
1978-06-01
Economic impacts of sharply higher oil and gas prices in the eighties are estimated using a combination of optimization and input-output models. A 1985 Base Case is compared with a High Case in which crude oil and crude natural gas are, respectively, 2.1 and 1.4 times as expensive as in the Base Case. Impacts examined include delivered energy prices and demands, resource consumption, emission levels and costs, aggregate and compositional changes in gross national product, balance of payments, output, employment, and sectoral prices. Methodology is developed for linking models in both quantity and price space for energy-service-specific fuel demands. A set of energy demand elasticities is derived which is consistent between alternative 1985 cases and between the 1985 cases and an historical year (1967). A framework and methodology are also presented for allocating portions of the DOE Conservation budget according to broad policy objectives and allocation rules.
Non-blocking crossbar permutation engine with constant routing latency
NASA Technical Reports Server (NTRS)
Monacos, Steve P. (Inventor)
1994-01-01
The invention is embodied in an N x N crossbar for routing packets from a set of N input ports to a set of N output ports, each packet having a header identifying one of the output ports as its destination. The crossbar includes a plurality of individual links which carry individual packets, each link having a link input end and a link output end, and a plurality of switches. Each of the switches has at least top and bottom switch inputs connected to a corresponding pair of the link output ends and top and bottom switch outputs connected to a corresponding pair of link input ends, so that each switch is connected to four different links. Each of the switches has an exchange state, which routes packets from the top and bottom switch inputs to the bottom and top switch outputs, respectively, and a bypass state, which routes packets from the top and bottom switch inputs to the top and bottom switch outputs, respectively. A plurality of individual controller devices governing the respective switches sense, from the header of a packet at each switch input, the identity of the packet's destination output port, and select one of the exchange and bypass states in accordance with that identity and with the location of the corresponding switch relative to the destination output port.
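As a toy illustration of the per-switch decision described above, the sketch below models one 2x2 element choosing between its exchange and bypass states from the destination headers at its inputs. The position test used here is a hypothetical simplification; the patent derives the actual rule from the switch's location relative to the destination port.

    # Minimal sketch (hypothetical routing rule): upper_row/lower_row are
    # the output-port rows the element's two outputs lead toward; a header
    # of None means no packet is present on that input.
    def switch_state(top_dest, bottom_dest, upper_row, lower_row):
        # Exchange if the top packet must move down or the bottom packet
        # must move up; otherwise pass both straight through (bypass).
        top_wants_down = top_dest is not None and top_dest >= lower_row
        bottom_wants_up = bottom_dest is not None and bottom_dest <= upper_row
        return "exchange" if (top_wants_down or bottom_wants_up) else "bypass"

    # a packet bound for output port 6 arriving on the top input of a
    # switch whose outputs lead toward rows 2 (top) and 3 (bottom):
    print(switch_state(6, None, 2, 3))   # -> "exchange"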
Impacts of uncertainties in European gridded precipitation observations on regional climate analysis
Prein, Andreas F; Gobiet, Andreas
2017-01-01
Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments.
General Circulation Model Output for Forest Climate Change Research and Applications
Ellen J. Cooter; Brian K. Eder; Sharon K. LeDuc; Lawrence Truppi
1993-01-01
This report reviews technical aspects of and summarizes output from four climate models. Recommendations concerning the use of these outputs in forest impact assessments are made.
High resolution digital delay timer
Martin, Albert D.
1988-01-01
Method and apparatus are provided for generating an output pulse following a trigger pulse at a time delay interval preset with a resolution which is high relative to a low resolution available from supplied clock pulses. A first lumped constant delay (20) provides a first output signal (24) at predetermined interpolation intervals corresponding to the desired high resolution time interval. Latching circuits (26, 28) latch the high resolution data (24) to form a first synchronizing data set (60). A selected time interval has been preset to internal counters (142, 146, 154) and corrected for circuit propagation delay times having the same order of magnitude as the desired high resolution. Internal system clock pulses (32, 34) count down the counters to generate an internal pulse delayed by an interval which is functionally related to the preset time interval. A second LCD (184) corrects the internal signal with the high resolution time delay. A second internal pulse is then applied to a third LCD (74) to generate a second set of synchronizing data (76) which is complementary with the first set of synchronizing data (60) for presentation to logic circuits (64). The logic circuits (64) further delay the internal output signal (72) to obtain a proper phase relationship of an output signal (80) with the internal pulses (32, 34). The final delayed output signal (80) thereafter enables the output pulse generator (82) to produce the desired output pulse (84) at the preset time delay interval following input of the trigger pulse (10, 12).
Adaptive Data-based Predictive Control for Short Take-off and Landing (STOL) Aircraft
NASA Technical Reports Server (NTRS)
Barlow, Jonathan Spencer; Acosta, Diana Michelle; Phan, Minh Q.
2010-01-01
Data-based Predictive Control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. The characteristics of adaptive data-based predictive control are particularly appropriate for the control of nonlinear and time-varying systems, such as Short Take-off and Landing (STOL) aircraft. STOL is a capability of interest to NASA because conceptual Cruise Efficient Short Take-off and Landing (CESTOL) transport aircraft offer the ability to reduce congestion in the terminal area by utilizing existing shorter runways at airports, as well as to lower community noise by flying steep approach and climb-out patterns that reduce the noise footprint of the aircraft. In this study, adaptive data-based predictive control is implemented as an integrated flight-propulsion controller for the outer-loop control of a CESTOL-type aircraft. Results show that the controller successfully tracks velocity while attempting to maintain a constant flight path angle, using longitudinal command, thrust and flap setting as the control inputs.
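A minimal sketch of deriving a predictor purely from input-output data, in the spirit described above; an ARX least-squares predictor is used as an illustrative stand-in for the paper's method, and the model orders and toy plant are assumptions.

    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        # One-step predictor y[k] ~ a.y[k-1..k-na] + b.u[k-1..k-nb],
        # identified from recorded data alone by least squares.
        m = max(na, nb)
        Phi = np.array([np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]]
                        for k in range(m, len(y))])
        theta, *_ = np.linalg.lstsq(Phi, y[m:], rcond=None)
        return theta[:na], theta[na:]

    def predict(a, b, y_past, u_past, u_plan):
        # Iterate the identified predictor over a planned input sequence.
        yh, uh = list(y_past), list(u_past)
        out = []
        for u_k in u_plan:
            out.append(np.dot(a, yh[-len(a):][::-1]) +
                       np.dot(b, uh[-len(b):][::-1]))
            yh.append(out[-1]); uh.append(u_k)
        return np.array(out)

    # toy plant: y[k] = 0.9 y[k-1] + 0.5 u[k-1]
    rng = np.random.default_rng(0)
    u = rng.standard_normal(500); y = np.zeros(500)
    for k in range(1, 500):
        y[k] = 0.9*y[k-1] + 0.5*u[k-1]
    a, b = fit_arx(u, y)   # recovers ~[0.9, 0] and ~[0.5, 0]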
Inverse modeling of geochemical and mechanical compaction in sedimentary basins
NASA Astrophysics Data System (ADS)
Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto
2015-04-01
We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanical compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analysis focuses on the calibration of model parameters using field cases from the literature. The quality of parameter estimates is then analyzed as a function of number, type and location of data.
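The surrogate-plus-sensitivity workflow can be miniaturized as follows; note the assumptions: an ordinary polynomial least-squares surrogate stands in for a true orthogonal-basis PCE, first-order Sobol indices are estimated by brute-force Monte Carlo on the cheap surrogate, and the two-parameter toy function stands in for the compaction simulator.

    import numpy as np
    rng = np.random.default_rng(1)

    def f(x):
        # hypothetical forward model standing in for the compaction simulator
        return x[:, 0]**2 + 0.5*x[:, 0]*x[:, 1] + 0.1*x[:, 1]

    def basis(x):
        # total-degree-2 polynomial basis in the two uncertain parameters
        return np.c_[np.ones(len(x)), x[:, 0], x[:, 1],
                     x[:, 0]**2, x[:, 0]*x[:, 1], x[:, 1]**2]

    # 1) fit the surrogate by least squares on 200 forward-model runs
    X = rng.uniform(-1, 1, (200, 2))
    coef, *_ = np.linalg.lstsq(basis(X), f(X), rcond=None)
    surrogate = lambda x: basis(x) @ coef

    # 2) first-order Sobol indices S_i = Var(E[f|x_i]) / Var(f),
    #    estimated by nested Monte Carlo on the cheap surrogate
    var_total = surrogate(rng.uniform(-1, 1, (4000, 2))).var()
    for i in range(2):
        cond_means = []
        for xi in rng.uniform(-1, 1, 200):
            samp = rng.uniform(-1, 1, (200, 2))
            samp[:, i] = xi
            cond_means.append(surrogate(samp).mean())
        print(f"S_{i+1} ~ {np.var(cond_means)/var_total:.2f}")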
An effective drift correction for dynamical downscaling of decadal global climate predictions
NASA Astrophysics Data System (ADS)
Paeth, Heiko; Li, Jingmin; Pollinger, Felix; Müller, Wolfgang A.; Pohlmann, Holger; Feldmann, Hendrik; Panitz, Hans-Jürgen
2018-04-01
Initialized decadal climate predictions with coupled climate models are often marked by substantial climate drifts that emanate from a mismatch between the climatology of the coupled model system and the data set used for initialization. While such drifts may be easily removed from the prediction system when analyzing individual variables, a major problem prevails for multivariate issues and, especially, when the output of the global prediction system shall be used for dynamical downscaling. In this study, we present a statistical approach to remove climate drifts in a multivariate context and demonstrate the effect of this drift correction on regional climate model simulations over the Euro-Atlantic sector. The statistical approach is based on an empirical orthogonal function (EOF) analysis adapted to a very large data matrix. The climate drift emerges as a dramatic cooling trend in North Atlantic sea surface temperatures (SSTs) and is captured by the leading EOF of the multivariate output from the global prediction system, accounting for 7.7% of total variability. The SST cooling pattern also imposes drifts in various atmospheric variables and levels. The removal of the first EOF effectuates the drift correction while retaining other components of intra-annual, inter-annual and decadal variability. In the regional climate model, the multivariate drift correction of the input data removes the cooling trends in most western European land regions and systematically reduces the discrepancy between the output of the regional climate model and observational data. In contrast, removing the drift only in the SST field from the global model has hardly any positive effect on the regional climate model.
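The drift-removal step itself is compact; below is a sketch assuming the multivariate fields have already been standardized and stacked into one (time x state) matrix, as a multivariate EOF analysis requires.

    import numpy as np

    def remove_leading_eof(X, n_remove=1):
        # X: (time, space*variables), standardized. EOFs via SVD; the
        # leading mode(s) carrying the drift are reconstructed and removed
        # while all other components of variability are retained.
        mean = X.mean(axis=0)
        A = X - mean
        U, s, Vt = np.linalg.svd(A, full_matrices=False)   # EOFs = rows of Vt
        explained = s**2 / np.sum(s**2)
        drift = (U[:, :n_remove] * s[:n_remove]) @ Vt[:n_remove]
        return A - drift + mean, explained[:n_remove]

    # usage: X_corr, frac = remove_leading_eof(X)
    # in the study's case the leading mode captured about 7.7% of variability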
Tools and Techniques for Basin-Scale Climate Change Assessment
NASA Astrophysics Data System (ADS)
Zagona, E.; Rajagopalan, B.; Oakley, W.; Wilson, N.; Weinstein, P.; Verdin, A.; Jerla, C.; Prairie, J. R.
2012-12-01
The Department of Interior's WaterSMART Program seeks to secure and stretch water supplies to benefit future generations and identify adaptive measures to address climate change. Under WaterSMART, Basin Studies are comprehensive water studies to explore options for meeting projected imbalances in water supply and demand in specific basins. Such studies could be most beneficial with application of recent scientific advances in climate projections, stochastic simulation, operational modeling and robust decision-making, as well as computational techniques to organize and analyze many alternatives. A new integrated set of tools and techniques to facilitate these studies includes the following components: Future supply scenarios are produced by the Hydrology Simulator, which uses non-parametric K-nearest neighbor resampling techniques to generate ensembles of hydrologic traces based on historical data, optionally conditioned on long paleo reconstructed data using various Markov chain techniques. Resampling can also be conditioned on climate change projections from, e.g., downscaled GCM projections to capture increased variability; spatial and temporal disaggregation is also provided. The simulations produced are ensembles of hydrologic inputs to the RiverWare operations/infrastructure decision modeling software. Alternative demand scenarios can be produced with the Demand Input Tool (DIT), an Excel-based tool that allows modifying future demands by groups such as states; sectors, e.g., agriculture, municipal, energy; and hydrologic basins. The demands can be scaled at future dates or changes ramped over specified time periods. Resulting data is imported directly into the decision model. Different model files can represent infrastructure alternatives, and different Policy Sets represent alternative operating policies, including options for noticing when conditions point to unacceptable vulnerabilities, which trigger dynamically executing changes in operations or other options. The over-arching Study Manager provides a graphical tool to create combinations of future supply scenarios, demand scenarios, infrastructure and operating policy alternatives; each scenario is executed as an ensemble of RiverWare runs, driven by the hydrologic supply. The Study Manager sets up and manages multiple executions on multi-core hardware. The sizeable outputs are typically direct model outputs or post-processed indicators of performance based on model outputs. Post-processing statistical analysis of the outputs is possible using the Graphical Policy Analysis Tool or other statistical packages. Several Basin Studies undertaken have used RiverWare to evaluate future scenarios. The Colorado River Basin Study, the most complex and extensive to date, has taken advantage of these tools and techniques to generate supply scenarios, produce alternative demand scenarios and to set up and execute the many combinations of supplies, demands, policies, and infrastructure alternatives. The tools and techniques will be described with example applications.
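A stripped-down version of the K-nearest-neighbor resampling idea behind the Hydrology Simulator is sketched below; the 1/rank weighting, the choice of k, and the annual-flow framing are common choices in the KNN bootstrap literature, not necessarily the tool's exact settings.

    import numpy as np
    rng = np.random.default_rng(2)

    def knn_trace(hist, length, k=5):
        # Each new value is the historical successor of one of the k years
        # whose value lies closest to the current simulated value, chosen
        # with 1/rank weights so nearer neighbours are picked more often.
        w = 1.0/np.arange(1, k + 1); w /= w.sum()
        x = [float(rng.choice(hist))]
        for _ in range(length - 1):
            d = np.abs(hist[:-1] - x[-1])      # candidates with a successor
            nbrs = np.argsort(d)[:k]           # nearest-first, aligned with w
            j = nbrs[rng.choice(k, p=w)]
            x.append(float(hist[j + 1]))       # take that year's successor
        return np.array(x)

    hist = 100.0 + 10.0*rng.standard_normal(60)   # stand-in annual flows
    trace = knn_trace(hist, length=100)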
SAI (Systems Applications, Incorporated) Urban Airshed Model. Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schere, K.L.
1985-06-01
This magnetic tape contains the FORTRAN source code, sample input data, and sample output data for the SAI Urban Airshed Model (UAM). The UAM is a 3-dimensional gridded air-quality simulation model that is well suited for predicting the spatial and temporal distribution of photochemical pollutant concentrations in an urban area. The model is based on the equations of conservation of mass for a set of reactive pollutants in a turbulent-flow field. To solve these equations, the UAM uses numerical techniques set in a 3-D finite-difference grid array of cells, each about 1 to 10 kilometers wide and 10 to several hundred meters deep. As output, the model provides the calculated pollutant concentrations in each cell as a function of time. The chemical species of prime interest included in the UAM simulations are O3, NO, NO/sub 2/ and several organic compounds and classes of compounds. The UAM system contains at its core the Airshed Simulation Program that accesses input data consisting of 10 to 14 files, depending on the program options chosen. Each file is created by a separate data-preparation program. There are 17 programs in the entire UAM system. The services of a qualified dispersion meteorologist, a chemist, and a computer programmer will be necessary to implement and apply the UAM and to interpret the results. Software Description: The program is written in the FORTRAN programming language for implementation on a UNIVAC 1110 computer under the UNIVAC 1100 operating system level 38R5A. Memory requirement is 80K.
ImSET: Impact of Sector Energy Technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roop, Joseph M.; Scott, Michael J.; Schultz, Robert W.
2005-07-19
This version of the Impact of Sector Energy Technologies (ImSET) model represents the ''next generation'' of the previously developed Visual Basic model (ImBUILD 2.0) that was developed in 2003 to estimate the macroeconomic impacts of energy-efficient technology in buildings. More specifically, a special-purpose version of the 1997 benchmark national Input-Output (I-O) model was designed specifically to estimate the national employment and income effects of the deployment of Office of Energy Efficiency and Renewable Energy (EERE) -developed energy-saving technologies. In comparison with the previous versions of the model, this version allows for more complete and automated analysis of the essential features of energy efficiency investments in buildings, industry, transportation, and the electric power sectors. This version also incorporates improvements in the treatment of operations and maintenance costs, and improves the treatment of financing of investment options. ImSET is also easier to use than extant macroeconomic simulation models and incorporates information developed by each of the EERE offices as part of the requirements of the Government Performance and Results Act.
Land-use change may exacerbate climate change impacts on water resources in the Ganges basin
NASA Astrophysics Data System (ADS)
Tsarouchi, Gina; Buytaert, Wouter
2018-02-01
Quantifying how land-use change and climate change affect water resources is a challenge in hydrological science. This work aims to quantify how future projections of land-use and climate change might affect the hydrological response of the Upper Ganges river basin in northern India, which experiences monsoon flooding almost every year. Three different sets of modelling experiments were run using the Joint UK Land Environment Simulator (JULES) land surface model (LSM) and covering the period 2000-2035: in the first set, only climate change is taken into account, and JULES was driven by the CMIP5 (Coupled Model Intercomparison Project Phase 5) outputs of 21 models, under two representative concentration pathways (RCP4.5 and RCP8.5), whilst land use was held fixed at the year 2010. In the second set, only land-use change is taken into account, and JULES was driven by a time series of 15 future land-use pathways, based on Landsat satellite imagery and the Markov chain simulation, whilst the meteorological boundary conditions were held fixed at years 2000-2005. In the third set, both climate change and land-use change were taken into consideration, as the CMIP5 model outputs were used in conjunction with the 15 future land-use pathways to force JULES. Variations in hydrological variables (stream flow, evapotranspiration and soil moisture) are calculated during the simulation period. Significant changes in the near-future (years 2030-2035) hydrologic fluxes arise under future land-cover and climate change scenarios pointing towards a severe increase in high extremes of flow: the multi-model mean of the 95th percentile of streamflow (Q5) is projected to increase by 63 % under the combined land-use and climate change high emissions scenario (RCP8.5). The changes in all examined hydrological components are greater in the combined land-use and climate change experiment. Results are further presented in a water resources context, aiming to address potential implications of climate change and land-use change from a water demand perspective. We conclude that future water demands in the Upper Ganges region for winter months may not be met.
Programming Many-Core Systems with GRAMPS
2010-08-01
[Figure/table-of-contents residue; recoverable captions: 3.2 "Hypothetical GRAMPS cookie dough application", 3.3 "Using a queue set".] The GRAMPS programming model is illustrated by a hypothetical cookie dough application: dough preparation is broken into individual stages corresponding to logical steps in the recipe, and each stage sends its output downstream while taking as input the ingredients and/or batter from its prior stage.
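A toy rendering of the staged-pipeline idea recovered above, using plain Python queues; this only illustrates stages passing data downstream and is not the GRAMPS API.

    import queue

    # Stages communicate through queues; each consumes the output of the
    # stage before it, as in the cookie-dough example.
    ingredients, batter, cookies = queue.Queue(), queue.Queue(), queue.Queue()
    for item in ("flour", "sugar", "chips"):
        ingredients.put(item)                      # input stage
    while not ingredients.empty():
        batter.put("mixed " + ingredients.get())   # Mix stage
    while not batter.empty():
        cookies.put("scooped " + batter.get())     # Scoop stage
    print(cookies.qsize(), "items reached the final stage")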
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.
NASA Astrophysics Data System (ADS)
Pitts, K.; Nasiri, S. L.; Smith, N.
2013-12-01
Global climate models have improved considerably over the years, yet clouds still represent a large factor of uncertainty for these models. Comparisons of model-simulated cloud variables with equivalent satellite cloud products are the best way to start diagnosing the differences between model output and observations. Gridded (level 3) cloud products from many different satellites and instruments are required for a full analysis, but these products are created by different science teams using different algorithms and filtering criteria to create similar, but not directly comparable, cloud products. This study makes use of a recently developed uniform space-time gridding algorithm to create a new set of gridded cloud products from each satellite instrument's level 2 data of interest which are each filtered using the same criteria, allowing for a more direct comparison between satellite products. The filtering is done via several variables such as cloud top pressure/height, thermodynamic phase, optical properties, satellite viewing angle, and sun zenith angle. The filtering criteria are determined based on the variable being analyzed and the science question at hand. Each comparison of different variables may require different filtering strategies as no single approach is appropriate for all problems. Beyond inter-satellite data comparison, these new sets of uniformly gridded satellite products can also be used for comparison with model-simulated cloud variables. Of particular interest to this study are the differences in the vertical distributions of ice and liquid water content between the satellite retrievals and model simulations, especially in the mid-troposphere where there are mixed-phase clouds to consider. This presentation will demonstrate the proof of concept through comparisons of cloud water path from Aqua MODIS retrievals and NASA GISS-E2-[R/H] model simulations archived in the CMIP5 data portal.
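As a concrete, deliberately simplified sketch of the uniform gridding step, the function below averages level-2 pixel values that pass a shared filter mask into cells of a regular latitude-longitude grid; the actual algorithm, the filtering variables, and the time dimension are richer than this.

    import numpy as np

    def grid_l2(lat, lon, value, keep, res=1.0):
        # Average filtered level-2 pixels into res-degree cells. `keep` is
        # the shared boolean filter (phase, viewing angle, sun zenith, ...).
        nlat, nlon = int(180/res), int(360/res)
        i = np.clip(((lat[keep] + 90)/res).astype(int), 0, nlat - 1)
        j = np.clip(((lon[keep] + 180)/res).astype(int), 0, nlon - 1)
        s = np.zeros((nlat, nlon)); n = np.zeros((nlat, nlon))
        np.add.at(s, (i, j), value[keep])   # accumulate sums per cell
        np.add.at(n, (i, j), 1.0)           # and pixel counts
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(n > 0, s/n, np.nan)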
NASA Astrophysics Data System (ADS)
Razak, Jeefferie Abd; Ahmad, Sahrim Haji; Ratnam, Chantara Thevy; Mahamood, Mazlin Aida; Yaakub, Juliana; Mohamad, Noraiham
2014-09-01
A fractional 2^5 two-level factorial design of experiments (DOE) was applied to systematically prepare the NR/EPDM blend using a Haake internal mixer set-up. A process model of rubber blend preparation was developed that correlates the mixer process input parameters with the output response of blend compatibility. Model analysis of variance (ANOVA) and model fitting through curve evaluation yielded an R2 of 99.60% for the proposed parametric combination of A = 30/70 NR/EPDM blend ratio, B = 70°C mixing temperature, C = 70 rpm rotor speed, D = 5 minutes mixing period, and E = 1.30 phr EPDM-g-MAH compatibilizer addition, with an overall desirability of 0.966. Model validation with a small deviation of +2.09% confirmed the repeatability of the mixing strategy, with a valid maximum tensile strength output representing the blend miscibility. A theoretical calculation of NR/EPDM blend compatibility is also included and compared. In short, this study provides a brief insight into the utilization of DOE for experimental simplification and parameter inter-correlation studies, especially when dealing with multiple variables during elastomeric rubber blend preparation.
Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software
NASA Astrophysics Data System (ADS)
Gowda, P. H.; Moorhead, J.; Brauer, D. K.
2017-12-01
Evapotranspiration (ET) is a major component of the hydrologic cycle. ET data are used for a variety of water management and research purposes such as irrigation scheduling, water and crop modeling, streamflow, and water availability assessments. Remote sensing products have been widely used to create spatially representative ET data sets that provide important information from field to regional scales. As UAV capabilities increase, remote sensing use is likely to increase as well. For that purpose, scientists at the USDA-ARS research laboratory in Bushland, TX developed the Bushland Evapotranspiration and Agricultural Remote Sensing System (BEARS) software. BEARS is a Java-based application that allows users to process remote sensing data to generate ET outputs using predefined models, or to enter custom equations and models. The capability to define new equations and build new models extends the applicability of the BEARS software beyond ET mapping to any remote sensing application. The software also includes an image viewing tool that allows users to visualize outputs and to draw an area of interest using various shapes. The software is freely available from the USDA-ARS Conservation and Production Research Laboratory website.
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since the adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control methods. As an example, we present simulation results for a wing damaged generic transport aircraft with several existing adaptive controllers.
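To give the flavor of such model-independent, input-output-based metrics, the sketch below scores a recorded aircraft response against the closed-loop reference model response for the same maneuver; the peak/RMS form and the normalization by the reference range are illustrative assumptions, not the paper's definitions.

    import numpy as np

    def tracking_metrics(y_sys, y_ref):
        # peak and RMS deviation of the aircraft response from the reference
        # model response, normalized by the reference signal's range
        err = np.asarray(y_sys) - np.asarray(y_ref)
        span = np.ptp(y_ref) + 1e-12
        return np.abs(err).max()/span, np.sqrt(np.mean(err**2))/span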
Fast Query-Optimized Kernel-Machine Classification
NASA Technical Reports Server (NTRS)
Mazzoni, Dominic; DeCoste, Dennis
2004-01-01
A recently developed algorithm performs kernel-machine classification via incremental approximate nearest support vectors. The algorithm implements support-vector machines (SVMs) at speeds 10 to 100 times those attainable by use of conventional SVM algorithms. The algorithm offers potential benefits for classification of images, recognition of speech, recognition of handwriting, and diverse other applications in which there are requirements to discern patterns in large sets of data. SVMs constitute a subset of kernel machines (KMs), which have become popular as models for machine learning and, more specifically, for automated classification of input data on the basis of labeled training data. While similar in many ways to k-nearest-neighbors (k-NN) models and artificial neural networks (ANNs), SVMs tend to be more accurate. Using representations that scale only linearly in the numbers of training examples, while exploring nonlinear (kernelized) feature spaces that are exponentially larger than the original input dimensionality, KMs elegantly and practically overcome the classic curse of dimensionality. However, the price that one must pay for the power of KMs is that query-time complexity scales linearly with the number of training examples, making KMs often orders of magnitude more computationally expensive than are ANNs, decision trees, and other popular machine learning alternatives. The present algorithm treats an SVM classifier as a special form of a k-NN. The algorithm is based partly on an empirical observation that one can often achieve the same classification as that of an exact KM by using only small fraction of the nearest support vectors (SVs) of a query. The exact KM output is a weighted sum over the kernel values between the query and the SVs. In this algorithm, the KM output is approximated with a k-NN classifier, the output of which is a weighted sum only over the kernel values involving k selected SVs. Before query time, there are gathered statistics about how misleading the output of the k-NN model can be, relative to the outputs of the exact KM for a representative set of examples, for each possible k from 1 to the total number of SVs. From these statistics, there are derived upper and lower thresholds for each step k. These thresholds identify output levels for which the particular variant of the k-NN model already leans so strongly positively or negatively that a reversal in sign is unlikely, given the weaker SV neighbors still remaining. At query time, the partial output of each query is incrementally updated, stopping as soon as it exceeds the predetermined statistical thresholds of the current step. For an easy query, stopping can occur as early as step k = 1. For more difficult queries, stopping might not occur until nearly all SVs are touched. A key empirical observation is that this approach can tolerate very approximate nearest-neighbor orderings. In experiments, SVs and queries were projected to a subspace comprising the top few principal- component dimensions and neighbor orderings were computed in that subspace. This approach ensured that the overhead of the nearest-neighbor computations was insignificant, relative to that of the exact KM computation.
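The query-time loop described above is compact enough to sketch directly. Here the support vectors are assumed already ordered approximately nearest-first for the query, and lo_thr/hi_thr are the per-step thresholds gathered beforehand from training-set statistics; all names are illustrative.

    def approx_km_output(kernel_vals, sv_weights, lo_thr, hi_thr):
        # kernel_vals[k]: kernel(query, sv_k); sv_weights[k]: its signed SVM
        # coefficient. The partial sum is updated incrementally, stopping as
        # soon as it crosses a precomputed threshold for the current step.
        partial = 0.0
        for k, (kv, w) in enumerate(zip(kernel_vals, sv_weights)):
            partial += w * kv
            if partial >= hi_thr[k]:    # so positive a sign flip is unlikely
                return +1, k + 1        # class, number of SVs touched
            if partial <= lo_thr[k]:
                return -1, k + 1
        return (1 if partial >= 0 else -1), len(sv_weights)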
Rosello, Alicia; Horner, Carolyne; Hopkins, Susan; Hayward, Andrew C; Deeny, Sarah R
2017-02-01
OBJECTIVES (1) To systematically search for all dynamic mathematical models of infectious disease transmission in long-term care facilities (LTCFs); (2) to critically evaluate models of interventions against antimicrobial resistance (AMR) in this setting; and (3) to develop a checklist for hospital epidemiologists and policy makers by which to distinguish good quality models of AMR in LTCFs. METHODS The CINAHL, EMBASE, Global Health, MEDLINE, and Scopus databases were systematically searched for studies of dynamic mathematical models set in LTCFs. Models of interventions targeting methicillin-resistant Staphylococcus aureus in LTCFs were critically assessed. Using this analysis, we developed a checklist for good quality mathematical models of AMR in LTCFs. RESULTS AND DISCUSSION Overall, 18 papers described mathematical models that characterized the spread of infectious diseases in LTCFs, but no models of AMR in gram-negative bacteria in this setting were described. Future models of AMR in LTCFs require a more robust methodology (ie, formal model fitting to data and validation), greater transparency regarding model assumptions, setting-specific data, realistic and current setting-specific parameters, and inclusion of movement dynamics between LTCFs and hospitals. CONCLUSIONS Mathematical models of AMR in gram-negative bacteria in the LTCF setting, where these bacteria are increasingly becoming prevalent, are needed to help guide infection prevention and control. Improvements are required to develop outputs of sufficient quality to help guide interventions and policy in the future. We suggest a checklist of criteria to be used as a practical guide to determine whether a model is robust enough to test policy. Infect Control Hosp Epidemiol 2017;38:216-225.
Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)
Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.
2015-01-01
These datasets comprise the model output from phase 1 of the FACE-MDS. These include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Included as part of this dataset are modelling and output protocols. The model datasets are formatted according to the output protocols. Phase 1 datasets are reproduced here for posterity and reproducibility although the model output for the experimental period have been somewhat superseded by the Phase 2 datasets.
A daily, 1 km resolution data set of downscaled Greenland ice sheet surface mass balance (1958-2015)
NASA Astrophysics Data System (ADS)
Noël, Brice; van de Berg, Willem Jan; Machguth, Horst; Lhermitte, Stef; Howat, Ian; Fettweis, Xavier; van den Broeke, Michiel R.
2016-10-01
This study presents a data set of daily, 1 km resolution Greenland ice sheet (GrIS) surface mass balance (SMB) covering the period 1958-2015. Applying corrections for elevation, bare ice albedo and accumulation bias, the high-resolution product is statistically downscaled from the native daily output of the polar regional climate model RACMO2.3 at 11 km. The data set includes all individual SMB components projected to a down-sampled version of the Greenland Ice Mapping Project (GIMP) digital elevation model and ice mask. The 1 km mask better resolves narrow ablation zones, valley glaciers, fjords and disconnected ice caps. Relative to the 11 km product, the more detailed representation of isolated glaciated areas leads to increased precipitation over the southeastern GrIS. In addition, the downscaled product shows a significant increase in runoff owing to better resolved low-lying marginal glaciated regions. The combined corrections for elevation and bare ice albedo markedly improve model agreement with a newly compiled data set of ablation measurements.
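A sketch of the elevation-correction ingredient, under stated assumptions: a purely local, linear SMB-elevation relation estimated from neighbouring coarse cells; the real downscaling also corrects bare ice albedo and accumulation biases.

    import numpy as np

    def local_gradient(smb_block, h_block):
        # least-squares dSMB/dh over a block of neighbouring 11 km cells
        A = np.c_[h_block.ravel(), np.ones(h_block.size)]
        coef, *_ = np.linalg.lstsq(A, smb_block.ravel(), rcond=None)
        return coef[0]

    def downscale_cell(smb_coarse, h_coarse, h_fine, grad):
        # project the coarse value onto the 1 km DEM elevation anomaly
        return smb_coarse + grad*(h_fine - h_coarse)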
Flatness-based model inverse for feed-forward braking control
NASA Astrophysics Data System (ADS)
de Vries, Edwin; Fehn, Achim; Rixen, Daniel
2010-12-01
For modern cars an increasing number of driver assistance systems have been developed. Some of these systems interfere/assist with the braking of a car. Here, a brake actuation algorithm for each individual wheel that can respond to both driver inputs and artificial vehicle deceleration set points is developed. The algorithm consists of a feed-forward control that ensures, within the modelled system plant, the optimal behaviour of the vehicle. For the quarter-car model with LuGre-tyre behavioural model, an inverse model can be derived using v_x as the 'flat output', that is, the input for the inverse model. A number of time derivatives of the flat output are required to calculate the model input, brake torque. Polynomial trajectory planning provides the needed time derivatives of the deceleration request. The transition time of the planning can be adjusted to meet actuator constraints. It is shown that the output of the trajectory planning would ripple and introduce a time delay when a gradual continuous increase of deceleration is requested by the driver. Derivative filters are then considered: the Bessel filter provides the best symmetry in its step response. A filter of same order and with negative real-poles is also used, exhibiting no overshoot nor ringing. For these reasons, the 'real-poles' filter would be preferred over the Bessel filter. The half-car model can be used to predict the change in normal load on the front and rear axle due to the pitching of the vehicle. The anticipated dynamic variation of the wheel load can be included in the inverse model, even though it is based on a quarter-car. Brake force distribution proportional to normal load is established. It provides more natural and simpler equations than a fixed force ratio strategy.
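The trajectory-planning ingredient can be sketched with a standard quintic blend, an illustrative choice rather than the paper's exact polynomial: it supplies the smooth time derivatives of the deceleration request that the flatness-based inverse model needs, and the transition time T can be stretched until the implied rates respect actuator constraints.

    import numpy as np

    def quintic_ramp(a0, a1, T, t):
        # Smooth blend from a0 to a1 over [0, T] with zero 1st and 2nd
        # derivatives at both ends; returns value, 1st and 2nd derivatives.
        s = np.clip(t/T, 0.0, 1.0)
        p = 10*s**3 - 15*s**4 + 6*s**5
        dp = (30*s**2 - 60*s**3 + 30*s**4)/T
        ddp = (60*s - 180*s**2 + 120*s**3)/T**2
        d = a1 - a0
        return a0 + d*p, d*dp, d*ddp

    # deceleration request ramped from 0 to -6 m/s^2 over 0.4 s
    t = np.linspace(0, 0.5, 6)
    a, da, dda = quintic_ramp(0.0, -6.0, 0.4, t)

The largest rate of this blend, 1.875|a1 - a0|/T, occurs at mid-transition, which gives a direct handle for choosing T against actuator limits.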
NASA Astrophysics Data System (ADS)
Bisht, K.; Dodamani, S. S.
2016-12-01
Modelling of land surface temperature (LST) is essential for short- and long-term environmental studies and for management of the Earth's resources. The objective of this research is to estimate and model LST. For this purpose, Landsat 7 ETM+ images from 2007 to 2012 were used to retrieve LST and were processed in MATLAB using Mamdani fuzzy inference systems (MFIS), with pre-monsoon and post-monsoon LST included in the fuzzy model. Mangalore City in Karnataka state, India, was chosen for this research work. The fuzzy model inputs are the retrieved pre-monsoon and post-monsoon temperatures, and LST was chosen as the output. To develop a fuzzy model for LST, seven fuzzy subsets, nineteen rules, and one output are considered for the estimation of weekly mean air temperature; the subsets are very low (VL), low (L), medium low (ML), medium (M), medium high (MH), high (H), and very high (VH). Estimated LST was obtained from the TVX (surface temperature vegetation index) approach and the empirical method. The study showed that fuzzy model M4/7-19-1 (model 4, 7 fuzzy sets, 19 rules, 1 output) developed over Mangalore City provided more accurate outcomes than the other models (M1, M2, M3, M5). The results were evaluated using statistical measures: the correlation coefficient (R) and root mean squared error (RMSE) between estimated and measured values were 0.966 and 1.607 K for pre-monsoon LST, and 0.963 and 1.623 K for post-monsoon LST, respectively.
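A toy Mamdani inference step in the MFIS spirit, with triangular membership functions, min firing strengths, max aggregation, and centroid defuzzification; the two rules and all membership parameters below are hypothetical, not the paper's nineteen-rule base.

    import numpy as np

    def trimf(x, a, b, c):
        # triangular membership function on universe x
        return np.maximum(np.minimum((x - a)/(b - a + 1e-12),
                                     (c - x)/(c - b + 1e-12)), 0.0)

    def mamdani(pre, post, rules, y):
        # min firing strength, min implication, max aggregation, centroid
        agg = np.zeros_like(y)
        for p_pre, p_post, p_out in rules:
            w = min(trimf(np.array([pre]), *p_pre)[0],
                    trimf(np.array([post]), *p_post)[0])
            agg = np.maximum(agg, np.minimum(w, trimf(y, *p_out)))
        return np.sum(y*agg)/(np.sum(agg) + 1e-12)

    y = np.linspace(280, 320, 401)        # output universe (K), hypothetical
    rules = [((290, 295, 300), (295, 300, 305), (295, 300, 305)),
             ((295, 300, 305), (300, 305, 310), (300, 305, 310))]
    print(mamdani(296.0, 301.0, rules, y))   # defuzzified LST estimate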
Cosmic logic: a computational model
NASA Astrophysics Data System (ADS)
Vanchurin, Vitaly
2016-02-01
We initiate a formal study of logical inferences in context of the measure problem in cosmology or what we call cosmic logic. We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape and CM machines take CO's Turing number (also known as description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO's Turing number as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and, thus, should be used as building blocks in constructing CM machines. We prove the non-computability of a CS machine which discriminates between two classes of CO machines: mortal that halts in finite time and immortal that runs forever. In context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities on the set of all CO machines using cut-off prescriptions. The cut-off measures can still be used if the set is reduced to include only machines which halt after a finite and predetermined number of steps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosler, Peter
Stride Search provides a flexible tool for detecting storms or other extreme climate events in high-resolution climate data sets saved on uniform latitude-longitude grids in standard NetCDF format. Users provide the software a quantitative description of a meteorological event they are interested in; the software searches a data set for locations in space and time that meet the user's description. In its first stage, Stride Search performs a spatial search of the data set at each timestep by dividing a search domain into circular sectors of constant geodesic radius. Data from a NetCDF file is read into memory for each circular search sector. If the data meet or exceed a set of storm identification criteria (defined by the user), a storm is recorded to a linked list. Finally, the linked list is examined, duplicate detections of the same storm are removed, and the results are written to an output file. The first stage's output file is read by a second program that builds storm tracks. Additional identification criteria may be applied at this stage to further classify storms. Storm tracks are the software's ultimate output, and routines are provided for formatting that output for various external software libraries for plotting and tabulating data.
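The first-stage logic can be sketched as follows, under simplifying assumptions: a single 2-D field with one "maximum exceeds threshold" criterion, in-memory arrays instead of NetCDF reads, and a plain list instead of the linked list.

    import numpy as np

    def haversine_km(lat, lon, clat, clon, R=6371.0):
        # great-circle distance from each grid point to a sector centre
        p1, p2 = np.radians(lat), np.radians(clat)
        a = (np.sin(np.radians(clat - lat)/2)**2
             + np.cos(p1)*np.cos(p2)*np.sin(np.radians(clon - lon)/2)**2)
        return 2*R*np.arcsin(np.sqrt(a))

    def stride_search(field, lats, lons, radius_km=500.0, threshold=35.0,
                      stride_deg=4.0):
        # Visit sector centres spaced so circles of constant geodesic radius
        # overlap; record any sector whose maximum meets the criterion.
        hits = []
        for clat in np.arange(-90 + stride_deg/2, 90, stride_deg):
            # longitudinal stride widens toward the poles
            dlon = np.degrees(radius_km /
                              (6371.0*max(np.cos(np.radians(clat)), 1e-3)))
            for clon in np.arange(-180, 180, dlon):
                sector = field[haversine_km(lats, lons, clat, clon) <= radius_km]
                if sector.size and sector.max() >= threshold:
                    hits.append((float(clat), float(clon), float(sector.max())))
        return hits   # duplicate detections of one storm are merged later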
Orzol, Leonard L.
1997-01-01
MODTOOLS uses the particle data calculated by MODPATH to construct several types of GIS output. MODTOOLS uses particle information recorded by MODPATH such as the row, column, or layer of the model grid, to generate a set of characteristics associated with each particle. The user can choose from the set of characteristics associated with each particle and use the capabilities of the GIS to selectively trace the movement of water discharging from specific cells in the model grid. MODTOOLS allows the hydrogeologist to utilize the capabilities of the GIS to graphically combine the results of the particle-tracking analysis, which facilitates the analysis and understanding of complex ground-water flow systems.
Modeling static and dynamic human cardiovascular responses to exercise.
Stremel, R W; Bernauer, E M; Harter, L W; Schultz, R A; Walters, R F
1975-08-01
A human performance model has been developed and described [9] which portrays the human circulatory, thermoregulatory and energy-exchange systems as an intercoupled set. In this model, steady state or static relationships are used to describe oxygen consumption and blood flow. For example, heart rate (HTRT) is calculated as a function of the oxygen and the thermoregulatory requirements of each body compartment, using the steady state work values of cardiac output (CO, sum of all compartment blood flows) and stroke volume (SV, assumed maximal after 40% maximal oxygen consumption): HTRT=CO/SV. The steady state model has proven to be an acceptable first approximation, but the inclusion of transient characteristics is essential in describing the overall systems' adjustment to exercise stress. In the present study, the dynamic transient characteristics of heart rate, stroke volume and cardiac output were obtained from experiments utilizing step and sinusoidal forcing of work. The gain and phase relationships reveal a probable first-order system with a six-minute time constant, and are utilized to model the transient characteristics of these parameters. This approach leads to a more complex model but a more accurate representation of the physiology involved. The instrumentation and programming essential to these experiments are described.
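The reported transient is easy to reproduce: a first-order lag toward the steady-state value HTRT = CO/SV with a six-minute time constant, discretized here with forward Euler (units of minutes and beats per minute; the step-demand numbers are illustrative).

    import numpy as np

    def heart_rate_response(htrt_ss, htrt0, minutes, tau=6.0, dt=0.01):
        # first-order lag toward HTRT_ss = CO/SV
        t = np.arange(0.0, minutes, dt)
        hr = np.empty_like(t); hr[0] = htrt0
        for k in range(1, len(t)):
            hr[k] = hr[k-1] + dt*(htrt_ss - hr[k-1])/tau
        return t, hr

    # step in work demanding 120 bpm, starting from 70 bpm at rest;
    # after one time constant (6 min) ~63% of the step is complete
    t, hr = heart_rate_response(120.0, 70.0, minutes=30)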
Dynamic Fuzzy Model Development for a Drum-type Boiler-turbine Plant Through GK Clustering
NASA Astrophysics Data System (ADS)
Habbi, Ahcène; Zelmat, Mimoun
2008-10-01
This paper discusses a TS fuzzy model identification method for an industrial drum-type boiler plant using the GK fuzzy clustering approach. The fuzzy model is constructed from a set of input-output data that covers a wide operating range of the physical plant. The reference data is generated using a complex first-principle-based mathematical model that describes the key dynamical properties of the boiler-turbine dynamics. The proposed fuzzy model is derived by means of the fuzzy clustering method, with particular attention to structure flexibility and model interpretability issues. This may provide the basis for a new way to design model-based control and diagnosis mechanisms for this complex nonlinear plant.
A robot conditioned reflex system modeled after the cerebellum.
NASA Technical Reports Server (NTRS)
Albus, J. S.
1972-01-01
A theory of cerebellar function is reduced to computer software for the control of a mechanical manipulator. This reduction is achieved by considering the cerebellum, along with the higher-level brain centers which control it, as a type of finite-state machine with input entering the cerebellum via mossy fibers from the periphery and output from the cerebellum occurring via Purkinje cells. It is hypothesized that the cerebellum learns by an error-correction system similar to Perceptron training algorithms. An electromechanical model of the cerebellum is then developed for the control of a mechanical arm. The problem of modeling the granular layer, which selects the set of parallel fibers that are active at any instant of time, is considered, and a relevance matrix is constructed to model the relative degree of influence which mossy fibers from the various joints have on the sets of granule cells unique to each joint.
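The hypothesized learning scheme can be sketched as a Perceptron-style delta rule acting on a sparse binary vector of active parallel fibers; the coarse-coded encoder below is a hypothetical stand-in for the granular layer's fiber selection, and the sine target is just a toy motor signal.

    import numpy as np
    rng = np.random.default_rng(3)

    def encode(s, n_tilings=8, n_bins=32, lo=0.0, hi=np.pi):
        # Coarse coding: several offset one-hot tilings of the scalar input,
        # standing in for the granular layer's sparse parallel-fiber pattern.
        f = np.zeros(n_tilings*n_bins)
        for t in range(n_tilings):
            z = (s - lo)/(hi - lo) + t/(n_tilings*n_bins)
            f[t*n_bins + min(int(z*n_bins), n_bins - 1)] = 1.0
        return f

    w = np.zeros(8*32)                     # "Purkinje" weights
    for _ in range(5000):                  # error-correction (delta rule)
        s = rng.uniform(0, np.pi)
        x = encode(s)
        w += 0.1*(np.sin(s) - w @ x)*x/x.sum()
    print(w @ encode(1.0), np.sin(1.0))    # close after training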
NASA Astrophysics Data System (ADS)
Woolfrey, John R.; Avery, Mitchell A.; Doweyko, Arthur M.
1998-03-01
Two three-dimensional quantitative structure-activity relationship (3D-QSAR) methods, comparative molecular field analysis (CoMFA) and hypothetical active site lattice (HASL), were compared with respect to the analysis of a training set of 154 artemisinin analogues. Five models were created, including a complete HASL and two trimmed versions, as well as two CoMFA models (leave-one-out standard CoMFA and the guided-region selection protocol). Similar r2 and q2 values were obtained by each method, although some striking differences existed between CoMFA contour maps and the HASL output. Each of the four predictive models exhibited a similar ability to predict the activity of a test set of 23 artemisinin analogues, although some differences were noted as to which compounds were described well by either model.
Signal analysis of accelerometry data using gravity-based modeling
NASA Astrophysics Data System (ADS)
Davey, Neil P.; James, Daniel A.; Anderson, Megan E.
2004-03-01
Triaxial accelerometers have been used to measure human movement parameters in swimming. Interpretation of data is difficult due to interference sources, including interaction of external bodies. In this investigation the authors developed a model to simulate the physical movement of the lower back. Theoretical accelerometry outputs were derived, thus giving an ideal, or noiseless, dataset. An experimental data collection apparatus was developed by adapting a system to the aquatic environment for investigation of swimming. Model data was compared against recorded data and showed strong correlation. Comparison of recorded and modeled data can be used to identify changes in body movement; this is especially useful when cyclic patterns are present in the activity. Strong correlation between the data sets allowed development of signal-processing algorithms for swimming stroke analysis, developed first on the pure noiseless data set and then applied to performance data. Video analysis was also used to validate study results and has shown potential to provide acceptable results.
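One simple way to compare modeled and recorded signals with cyclic content is sketched below; normalized cross-correlation is an assumption here, not necessarily the authors' algorithm.

    import numpy as np

    def best_match(model, recorded):
        # Normalized cross-correlation: the lag of the peak aligns the
        # modeled and recorded accelerations, and the peak value scores
        # their agreement (1.0 = identical up to scale and offset).
        m = (model - model.mean())/model.std()
        r = (recorded - recorded.mean())/recorded.std()
        c = np.correlate(r, m, mode="full")/len(m)
        lag = int(c.argmax()) - (len(m) - 1)
        return lag, float(c.max())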
A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments
NASA Astrophysics Data System (ADS)
Quigley, Patricia Allison
Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by Earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data comprise the longwave and shortwave radiative components of the system and are algorithmically derived from satellite and atmospheric assimilation products and acquired atmospheric data. They are stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant, or increasing all input parameters by equal random percentages, in effect changing input values for every cell for every three-hour period and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for evaluating the shortwave algorithm to identify both the input parameters and the parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128. A modified version of the algorithm was made available for testing such that global calculations of the algorithm were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were from each of four atmospherically distinct regions: the Amazon Rainforest, Sahara Desert, Indian Ocean, and Mt. Everest. The same design was used for all of the regions. Least squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that Cosine Solar Zenith Angle was the strongest influence on the output data in all four regions. The interaction of Cosine Solar Zenith Angle and Cloud Fraction had the strongest influence on the output data in the Amazon, Sahara Desert and Mt. Everest regions, while the interaction of Cloud Fraction and Cloudy Shortwave Radiance most significantly affected output data in the Indian Ocean region. Second-order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.
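The regression-plus-Monte-Carlo pipeline can be sketched compactly; the three-factor toy below stands in for the 14 atmospheric properties, and a random design replaces the 128-trial D-optimal design.

    import numpy as np
    rng = np.random.default_rng(5)

    def second_order_design_matrix(X):
        # columns: intercept, main effects, and all squares/interactions
        n, p = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(p)]
        cols += [X[:, i]*X[:, j] for i in range(p) for j in range(i, p)]
        return np.column_stack(cols)

    # stand-in for the designed trials and the algorithm's responses
    X = rng.uniform(-1, 1, (128, 3))
    y = 2 + X[:, 0] - 0.5*X[:, 0]*X[:, 1] + rng.normal(0, 0.05, 128)
    beta, *_ = np.linalg.lstsq(second_order_design_matrix(X), y, rcond=None)

    # Monte Carlo the cheap fitted model to propagate input variability
    Xmc = rng.uniform(-1, 1, (100_000, 3))
    ymc = second_order_design_matrix(Xmc) @ beta
    print(ymc.mean(), ymc.std())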
Microcomputer-based classification of environmental data in municipal areas
NASA Astrophysics Data System (ADS)
Thiergärtner, H.
1995-10-01
Multivariate data-processing methods used in mineral resource identification can be used to classify urban regions. Using elements of expert systems, geographical information systems, and known classification and prognosis systems, it is possible to outline a single model consisting of resistant and temporary parts of a knowledge base, including graphical input and output handling, and of resistant and temporary elements of a bank of methods and algorithms. Whereas decision rules created by experts are stored directly in expert systems, powerful classification rules in the form of resistant but latent (implicit) decision algorithms may be implemented in the suggested model. The latent functions are transformed into temporary explicit decision rules by learning processes that depend on the actual task(s), parameter set(s), pixel selection(s), and expert control(s). This takes place in both supervised and unsupervised classification of multivariately described pixel sets representing municipal subareas. The model is outlined briefly and illustrated by results obtained in a target area covering part of the city of Berlin (Germany).
Device-independent tests of quantum channels
NASA Astrophysics Data System (ADS)
Dall'Arno, Michele; Brandsen, Sarah; Buscemi, Francesco
2017-03-01
We develop a device-independent framework for testing quantum channels. That is, we falsify a hypothesis about a quantum channel based only on an observed set of input-output correlations. Formally, the problem consists of characterizing the set of input-output correlations compatible with any arbitrary given quantum channel. For binary (i.e. two input symbols, two output symbols) correlations, we show that extremal correlations are always achieved by orthogonal encodings and measurements, irrespective of whether or not the channel preserves commutativity. We further provide a full, closed-form characterization of the sets of binary correlations in the case of: (i) any dihedrally covariant qubit channel (such as any Pauli and amplitude-damping channels) and (ii) any universally-covariant commutativity-preserving channel in an arbitrary dimension (such as any erasure, depolarizing, universal cloning and universal transposition channels).
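A minimal numerical illustration of the binary setting: the input-output correlations p(y|x) generated by a qubit amplitude-damping channel under an orthogonal encoding and a computational-basis measurement, the kind of extremal strategy the abstract says suffices for binary correlations.

    import numpy as np

    def amplitude_damping(rho, g):
        # Kraus representation of the qubit amplitude-damping channel
        K0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - g)]])
        K1 = np.array([[0.0, np.sqrt(g)], [0.0, 0.0]])
        return K0 @ rho @ K0.T + K1 @ rho @ K1.T

    for x in (0, 1):
        rho = np.zeros((2, 2)); rho[x, x] = 1.0   # orthogonal encoding |x><x|
        out = amplitude_damping(rho, g=0.3)
        p = [out[y, y] for y in (0, 1)]           # computational-basis readout
        print(f"p(y|x={x}) = {np.round(p, 3)}")
    # -> p(y|x=0) = [1. 0.],  p(y|x=1) = [0.3 0.7]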
MEGA16 - Computer program for analysis and extrapolation of stress-rupture data
NASA Technical Reports Server (NTRS)
Ensign, C. R.
1981-01-01
The computerized form of the minimum commitment method of interpolating and extrapolating stress versus time-to-failure data, MEGA16, is described. Examples are given of its many plots and tabular outputs for a typical set of data. The program assumes a specific model equation and then provides a family of predicted isothermals for any set of data with at least 12 stress-rupture results from three different temperatures spread over reasonable stress and time ranges. It is written in FORTRAN IV using IBM plotting subroutines, and it runs on an IBM 370 time-sharing system.
Programming a Detector Emulator on NI's FlexRIO Platform
NASA Astrophysics Data System (ADS)
Gervais, Michelle; Crawford, Christopher; Sprow, Aaron; Nab Collaboration
2017-09-01
Recently, digital detector emulators have been on the rise as a means to test data acquisition systems and analysis toolkits against a well-understood data set. National Instruments' PXIe-7962R FPGA module and Active Technologies' AT-1212 DAC module provide a customizable platform for analog output. Using a graphical programming language, we have developed a system capable of producing two time-correlated channels of analog output which sample unique amplitude spectra to mimic nuclear physics experiments. This system will be used to model the Nab experiment, in which a prompt beta-decay electron is followed by a slow proton according to a defined time distribution. We will present the results of our work and discuss further development potential. This work was supported by the DOE under Contract DE-SC0008107.
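The core task described above -- emitting two time-correlated channels whose amplitudes sample defined spectra -- can be sketched at the event-generation level as follows; the event rate, spectra, and mean electron-proton delay are invented placeholders, not Nab parameters.

import numpy as np

rng = np.random.default_rng(1)
n_events = 10_000

# Channel 1: prompt electrons with Poisson arrivals and a stand-in amplitude spectrum
t_electron = np.cumsum(rng.exponential(1e-3, n_events))     # ~1 kHz mean rate [s]
a_electron = rng.triangular(0.0, 0.3, 1.0, n_events)        # invented beta-like spectrum

# Channel 2: protons delayed by an assumed exponential time distribution
t_proton = t_electron + rng.exponential(20e-6, n_events)    # ~20 us mean delay
a_proton = rng.normal(0.10, 0.02, n_events)                 # invented proton amplitude line

# Each channel is a (time, amplitude) event list ready to be rendered by a DAC
channel1 = np.column_stack([t_electron, a_electron])
channel2 = np.column_stack([t_proton, a_proton])
print(channel1[:3], channel2[:3], sep="\n")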
NASA Astrophysics Data System (ADS)
Syryamkim, V. I.; Kuznetsov, D. N.; Kuznetsova, A. S.
2018-05-01
Image recognition is an information process implemented by some information converter (an intelligent information channel, or recognition system) having an input and an output. The input of the system is fed with information about the characteristics of the objects being presented. The output of the system reports which classes (generalized images) the recognized objects are assigned to. When creating and operating an automated pattern recognition system, a number of problems must be solved; different authors formulate these tasks, and the set itself, differently, since they depend to a certain extent on the specific mathematical model on which a given recognition system is based. These tasks include formalizing the domain, forming a training sample, training the recognition system, and reducing the dimensionality of the feature space.
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Schubert, Siegfried; Rood, Richard; Park, Chung-Kyu; Wu, Chung-Yu; Kondratyeva, Yelena; Molod, Andrea; Takacs, Lawrence; Seablom, Michael; Higgins, Wayne
1995-01-01
The Data Assimilation Office (DAO) at Goddard Space Flight Center has produced a multiyear global assimilated data set with version 1 of the Goddard Earth Observing System Data Assimilation System (GEOS-1 DAS). One of the main goals of this project, in addition to benchmarking the GEOS-1 system, was to produce a research quality data set suitable for the study of short-term climate variability. The output, which is global and gridded, includes all prognostic fields and a large number of diagnostic quantities such as precipitation, latent heating, and surface fluxes. Output is provided four times daily with selected quantities available eight times per day. Information about the observations input to the GEOS-1 DAS is provided in terms of maps of spatial coverage, bar graphs of data counts, and tables of all time periods with significant data gaps. The purpose of this document is to serve as a users' guide to NASA's first multiyear assimilated data set and to provide an early look at the quality of the output. Documentation is provided on all the data archives, including sample read programs and methods of data access. Extensive comparisons are made with the corresponding operational European Center for Medium-Range Weather Forecasts analyses, as well as various in situ and satellite observations. This document is also intended to alert users of the data about potential limitations of assimilated data, in general, and the GEOS-1 data, in particular. Results are presented for the period March 1985-February 1990.
Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models
Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.
2011-01-01
We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) a simulation study and (b) a lower-trophic-level ocean ecosystem model. The approach we develop relies on the ability to predict the right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than on the second-order (covariance) structure.
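A minimal sketch of this emulator idea -- decompose an ensemble of computer-model outputs and statistically model the right-singular-vector coefficients as a function of the input parameters -- is given below; the toy mechanistic model and the linear (first-order) regression form are illustrative assumptions, not the paper's ocean-ecosystem application.

import numpy as np

rng = np.random.default_rng(2)

# Toy "mechanistic model": an output curve on 50 grid points driven by 2 parameters
grid = np.linspace(0, 1, 50)
def model(theta):
    return theta[0] * np.sin(2 * np.pi * grid) + theta[1] * grid ** 2

# Experimental set of parameters and the corresponding computer-model output
thetas = rng.uniform(0, 1, size=(30, 2))
Y = np.array([model(th) for th in thetas])              # 30 runs x 50 outputs

# Decompose the output ensemble; keep the leading right singular vectors
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
k = 2
coeffs = Y @ Vt[:k].T                                   # projection of each run

# First-order statistical model: singular-vector coefficients ~ parameters
X = np.column_stack([np.ones(len(thetas)), thetas])
beta, *_ = np.linalg.lstsq(X, coeffs, rcond=None)

# Emulate an unseen parameter setting and compare against the true model
theta_new = np.array([0.4, 0.7])
y_emulated = np.concatenate([[1.0], theta_new]) @ beta @ Vt[:k]
print("max abs emulator error:", np.abs(y_emulated - model(theta_new)).max())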
A two-stage DEA approach for environmental efficiency measurement.
Song, Malin; Wang, Shuhong; Liu, Wei
2014-05-01
The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, when measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out systematic research on the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage successfully solves the "dependence" problem of outputs, namely that we cannot increase the desirable outputs without producing any undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of decision making units.
Extending simulation modeling to activity-based costing for clinical procedures.
Glick, N D; Blackmore, C C; Zelman, W N
2000-04-01
A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.
Neural network based system for equipment surveillance
Vilim, Richard B.; Gross, Kenneth C.; Wegerich, Stephan W.
1998-01-01
A method and system for performing surveillance of transient signals of an industrial device to ascertain its operating state. The method and system involve reading training data into memory and determining neural network weighting values until the network outputs are close to the target outputs. If the target outputs are inadequate, wavelet parameters are determined to yield neural network outputs close to the desired set of target outputs; signals characteristic of an industrial process are then provided, and the neural network output is compared to the industrial process signals to evaluate the operating state of the process.
Advanced approach to the analysis of a series of in-situ nuclear forward scattering experiments
NASA Astrophysics Data System (ADS)
Vrba, Vlastimil; Procházka, Vít; Smrčka, David; Miglierini, Marcel
2017-03-01
This study introduces a sequential fitting procedure as a specific approach to nuclear forward scattering (NFS) data evaluation. The principles and usage of this advanced evaluation method are described in detail, and its utilization is demonstrated on NFS in-situ investigations of fast processes. Such experiments frequently consist of hundreds of time spectra which need to be evaluated. The introduced procedure allows the analysis of these experiments and significantly decreases the time needed for the data evaluation. The key contributions of the study are the sequential use of the output fitting parameters of a previous data set as the input parameters for the next data set, and the model-suitability crosscheck option of applying the procedure in both ascending and descending directions through the data sets. The described fitting methodology is beneficial for checking model validity and the reliability of the obtained results.
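The seed-forward loop with an ascending/descending crosscheck might look like the following sketch, in which a single-exponential decay stands in for a real NFS time-spectrum model; the drift, noise level, and starting values are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def spectrum_model(t, amplitude, decay):        # placeholder for a real NFS model
    return amplitude * np.exp(-t / decay)

def fit_sequentially(spectra, t, p0):
    """Fit a list of spectra, seeding each fit with the previous result."""
    params, current = [], np.asarray(p0, dtype=float)
    for counts in spectra:
        current, _ = curve_fit(spectrum_model, t, counts, p0=current)
        params.append(current)
    return np.array(params)

# Synthetic demo: a slowly drifting decay constant across 100 time spectra
rng = np.random.default_rng(3)
t = np.linspace(0.1, 10, 200)
spectra = [spectrum_model(t, 5.0, d) * rng.normal(1, 0.02, t.size)
           for d in np.linspace(2.0, 3.0, 100)]

ascending = fit_sequentially(spectra, t, p0=[4.0, 1.5])
descending = fit_sequentially(spectra[::-1], t, p0=[4.0, 3.5])[::-1]
# Model-suitability crosscheck: both directions should agree to within the noise
print("max decay-constant discrepancy:",
      np.abs(ascending[:, 1] - descending[:, 1]).max())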
The Mpi-M Aerosol Climatology (MAC)
NASA Astrophysics Data System (ADS)
Kinne, S.
2014-12-01
Monthly gridded global data-sets for aerosol optical properties (AOD, SSA and g) and for aerosol microphysical properties (CCN and IN) offer a (less complex) alternate path to include aerosol radiative effects and aerosol impacts on cloud microphysics in global simulations. By merging AERONET sun-/sky-photometer data onto background maps provided by AeroCom phase 1 modeling output, the MPI-M Aerosol Climatology (MAC) version 1 was developed and applied in IPCC simulations with ECHAM and as an ancillary data-set in satellite-based global data-sets. An updated version 2 of this climatology will be presented, now applying central values from the more recent AeroCom phase 2 modeling and utilizing the better global coverage of trusted sun-photometer data, including statistics from the Marine Aerosol Network (MAN). Applications include spatial distributions of estimates for aerosol direct and aerosol indirect radiative effects.
A methodology for long range prediction of air transportation
NASA Technical Reports Server (NTRS)
Ayati, M. B.; English, J. M.
1980-01-01
The paper describes a methodology for long-range projection of aircraft fuel requirements. A new treatment of the social and economic factors shaping the future aviation industry is presented to provide an estimate of predicted fuel usage; it includes air traffic forecasts and lead times for producing new engines and aircraft types. An air transportation model is then developed in terms of an abstracted set of variables which represent the entire aircraft industry on a macroscale. The model was evaluated by testing its required output variables against historical data from past decades.
NASA Technical Reports Server (NTRS)
Myers, Jerry G.; Young, M.; Goodenow, Debra A.; Keenan, A.; Walton, M.; Boley, L.
2015-01-01
Model and simulation (MS) credibility is defined as the quality to elicit belief or trust in MS results. NASA-STD-7009 [1] delineates eight components (Verification, Validation, Input Pedigree, Results Uncertainty, Results Robustness, Use History, MS Management, People Qualifications) that address quantifying model credibility, and provides guidance to model developers, analysts, and end users for assessing MS credibility. Of the eight characteristics, input pedigree, or the quality of the data used to develop model input parameters, governing functions, or initial conditions, can vary significantly. These data quality differences have varying consequences across the range of MS applications. NASA-STD-7009 requires that the lowest input data quality be used to represent the entire set of input data when scoring the input pedigree credibility of the model. This requirement provides a conservative assessment of model inputs and maximizes communication of the potential level of risk in using model outputs. Unfortunately, in practice, this may result in an overly pessimistic communication of the MS output, undermining the credibility of simulation predictions to decision makers. This presentation proposes an alternative assessment mechanism, utilizing results parameter robustness, also known as model input sensitivity, to improve the credibility scoring process for specific simulations.
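The contrast between the standard's minimum-score rule and the proposed sensitivity-based alternative can be stated in a few lines; the scores and sensitivities below are invented, and the weighted average is only one plausible reading of the proposal rather than an established formula.

import numpy as np

pedigree = np.array([4, 3, 1, 4])                 # hypothetical input data quality scores (0-4)
sensitivity = np.array([0.50, 0.30, 0.05, 0.15])  # normalized result sensitivities

# NASA-STD-7009 rule: the whole input set is scored by its weakest member
print("minimum-rule score:", pedigree.min())                 # 1 (pessimistic)

# Sketch of the alternative: weight each score by how strongly it drives the results
weighted = np.sum(pedigree * sensitivity) / sensitivity.sum()
print(f"sensitivity-weighted score: {weighted:.2f}")         # 3.55 here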
PROcess Based Diagnostics PROBE
NASA Technical Reports Server (NTRS)
Clune, T.; Schmidt, G.; Kuo, K.; Bauer, M.; Oloso, H.
2013-01-01
Many of the aspects of the climate system that are of the greatest interest (e.g., the sensitivity of the system to external forcings) are emergent properties that arise via the complex interplay between disparate processes. This is also true for climate models: most diagnostics are not a function of an isolated portion of source code, but rather are affected by multiple components and procedures. Thus any model-observation mismatch is hard to attribute to any specific piece of code or imperfection in a specific model assumption. An alternative approach is to identify diagnostics that are more closely tied to specific processes -- implying that if a mismatch is found, it should be much easier to identify and address specific algorithmic choices that will improve the simulation. However, this approach requires looking at model output and observational data in a more sophisticated way than the more traditional production of monthly or annual mean quantities. The data must instead be filtered in time and space for examples of the specific process being targeted. We are developing a data analysis environment called PROcess-Based Explorer (PROBE) that seeks to enable efficient and systematic computation of process-based diagnostics on very large sets of data. In this environment, investigators can define arbitrarily complex filters and then seamlessly perform computations in parallel on the filtered output from their model. The same analysis can be performed on additional related data sets (e.g., reanalyses), thereby enabling routine comparisons between model and observational data. PROBE also incorporates workflow technology to automatically update computed diagnostics for subsequent executions of a model. In this presentation, we will discuss the design and current status of PROBE as well as share results from some preliminary use cases.
Flatness-based control in successive loops for stabilization of heart's electrical activity
NASA Astrophysics Data System (ADS)
Rigatos, Gerasimos; Melkikh, Alexey
2016-12-01
The article proposes a new flatness-based control method implemented in successive loops which allows for stabilization of the heart's electrical activity. The heart's pacemaking function is modeled as a set of coupled oscillators which can potentially exhibit chaotic behavior. It is shown that this model satisfies differential flatness properties. Next, the control and stabilization of this model is performed with the use of flatness-based control implemented in cascading loops. By applying a per-row decomposition of the state-space model of the coupled oscillators, a set of nonlinear differential equations is obtained. Differential flatness properties are shown to hold for the subsystems associated with each one of the aforementioned differential equations, and a local flatness-based controller is designed for each subsystem. For the i-th subsystem, state variable xi is chosen to be the flat output and state variable xi+1 is taken to be a virtual control input. The value of the virtual control input which eliminates the output tracking error for the i-th subsystem then becomes the reference setpoint for the (i+1)-th subsystem. In this manner the control of the entire state-space model is performed by successive flatness-based control loops. Upon arriving at the n-th row of the state-space model, one computes the control input that can actually be exerted on the biosystem. This real control input of the coupled oscillators' system contains, recursively, all virtual control inputs associated with the previous n - 1 rows of the state-space model. This control approach achieves asymptotic elimination of the chaotic oscillation effects and stabilization of the heart's pulsation rhythm. The stability of the proposed control scheme is proven with the use of Lyapunov analysis.
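The successive-loops construction is easiest to see on the simplest chain, a double integrator x1' = x2, x2' = u: the outer loop converts the flat-output tracking error into a setpoint for x2, and the inner loop computes the real control input. The sketch below is a generic illustration of the cascading idea, not the paper's coupled-oscillator model; the gains and the neglected setpoint derivative are simplifying assumptions.

k1, k2, dt = 4.0, 8.0, 1e-3
x1, x2 = 0.0, 0.0
x1_ref = 1.0                                # desired flat-output setpoint

for _ in range(5000):                       # 5 s of simulated time
    # Outer loop: virtual control = setpoint for x2 that removes the x1 error
    x2_ref = -k1 * (x1 - x1_ref)            # x1_ref' = 0 for a constant setpoint
    # Inner loop: real control input drives x2 to its virtual setpoint
    u = -k2 * (x2 - x2_ref)                 # derivative of x2_ref neglected here
    x1 += dt * x2                           # integrate x1' = x2
    x2 += dt * u                            # integrate x2' = u

print(f"x1 -> {x1:.4f} (target {x1_ref})")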
NASA Astrophysics Data System (ADS)
Shafii, Mahyar; Tolson, Bryan; Shawn Matott, L.
2015-04-01
GLUE is one of the most commonly used informal methodologies for uncertainty estimation in hydrological modelling. Despite the ease of use of GLUE, it involves a number of subjective decisions, such as the strategy for identifying the behavioural solutions. This study evaluates the impact of behavioural solution identification strategies in GLUE on the quality of model output uncertainty. Moreover, two new strategies are developed to objectively identify behavioural solutions. The first strategy considers Pareto-based ranking of parameter sets, while the second ranks the parameter sets according to an aggregated criterion. The proposed strategies, as well as the traditional strategies in the literature, are evaluated with respect to reliability (coverage of observations by the envelope of model outcomes) and sharpness (width of the envelope of model outcomes) in different numerical experiments. These experiments include multi-criteria calibration and uncertainty estimation of three rainfall-runoff models with different numbers of parameters. To demonstrate the importance of the behavioural solution identification strategy more fully, GLUE is also compared with two other informal multi-criteria calibration and uncertainty estimation methods (Pareto optimization and DDS-AU). The results show that the model output uncertainty varies with the behavioural solution identification strategy, and furthermore, a robust GLUE implementation would require considering multiple behavioural solution identification strategies and choosing the one that generates the desired balance between sharpness and reliability. The proposed objective strategies prove to be the best options in most of the case studies investigated in this research. Implementing such an approach for a high-dimensional calibration problem enables GLUE to generate robust results in comparison with Pareto optimization and DDS-AU.
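To make the role of the identification strategy concrete, the sketch below runs GLUE on a one-parameter toy model and compares a subjective likelihood threshold with a simple ranking rule, reporting reliability (coverage) and sharpness (envelope width); the ranking rule here is a single-criterion stand-in for the paper's Pareto-based and aggregated-criterion strategies.

import numpy as np

rng = np.random.default_rng(4)

# Toy one-parameter rainfall-runoff "model": the parameter scales a fixed unit response
time = np.arange(100)
unit = np.exp(-time / 15.0)
observed = 2.0 * unit + rng.normal(0, 0.05, time.size)

# GLUE-style Monte Carlo sampling of the parameter
params = rng.uniform(0.5, 4.0, 5000)
sims = params[:, None] * unit[None, :]
rmse = np.sqrt(((sims - observed) ** 2).mean(axis=1))

def evaluate(selected, label):
    lo, hi = np.percentile(sims[selected], [5, 95], axis=0)     # 5-95% envelope
    reliability = np.mean((observed >= lo) & (observed <= hi))  # coverage of obs
    sharpness = (hi - lo).mean()                                # envelope width
    print(f"{label}: n={selected.sum()}, reliability={reliability:.2f}, "
          f"sharpness={sharpness:.3f}")

evaluate(rmse < 0.10, "subjective threshold")                   # traditional strategy
evaluate(rmse <= np.percentile(rmse, 5), "ranking (best 5%)")   # objective ranking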
A fuzzy logic approach to modeling a vehicle crash test
NASA Astrophysics Data System (ADS)
Pawlus, Witold; Karimi, Hamid Reza; Robbersmyr, Kjell G.
2013-03-01
This paper presents an application of a fuzzy approach to vehicle crash modeling. A typical vehicle-to-pole collision is described, and the kinematics of a car involved in this type of crash event are thoroughly characterized. The basics of fuzzy set theory and modeling principles based on the fuzzy logic approach are presented. In particular, attention is paid to explaining the methodology for creating a fuzzy model of a vehicle collision. Furthermore, the simulation results are presented and compared to the original vehicle's kinematics. It is concluded which factors influence the accuracy of the fuzzy model's output and how they can be adjusted to improve the model's fidelity.
Ranking influential spreaders is an ill-defined problem
NASA Astrophysics Data System (ADS)
Gu, Jain; Lee, Sungmin; Saramäki, Jari; Holme, Petter
2017-06-01
Finding influential spreaders of information and disease in networks is an important theoretical problem, and one of considerable recent interest. It has been almost exclusively formulated as a node-ranking problem: methods for identifying influential spreaders output a ranking of the nodes. In this work, we show that such a greedy heuristic does not necessarily work: the set of most influential nodes depends on the number of nodes in the set. Therefore, the set of n most important nodes to vaccinate need not have any node in common with the set of n + 1 most important nodes. We propose a method for quantifying the extent and impact of this phenomenon, and by this method we show that it is a common phenomenon in both empirical and model networks.
Performance Analysis of the Automotive TEG with Respect to the Geometry of the Modules
NASA Astrophysics Data System (ADS)
Yu, C. G.; Zheng, S. J.; Deng, Y. D.; Su, C. Q.; Wang, Y. P.
2017-05-01
Recently there has been increasing interest in applying thermoelectric technology to recover waste heat from automotive exhaust gas. Given the limited space in the vehicle, it is worthwhile to improve TEG (thermoelectric generator) performance by optimizing the module geometry. This paper analyzes the performance of bismuth telluride modules against two criteria (power density and power output per unit area) and investigates the relationship between performance and module geometry. A geometry factor is defined for the thermoelectric element to describe the module geometry, and a mathematical model is set up to study the effects of the module geometry on performance. It was found that the optimal geometry factors for maximum output power, power density, and power output per unit area differ, and that the value of the optimal geometry factor is affected by the volume of the thermoelectric material and the thermal input. The results can serve as a basis for optimizing the performance of thermoelectric modules.
Introduction of Virtualization Technology to Multi-Process Model Checking
NASA Technical Reports Server (NTRS)
Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu
2009-01-01
Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.
NASA Technical Reports Server (NTRS)
Gibson, S. G.
1983-01-01
A system of computer programs was developed to model general three dimensional surfaces. Surfaces are modeled as sets of parametric bicubic patches. There are also capabilities to transform coordinates, to compute mesh/surface intersection normals, and to format input data for a transonic potential flow analysis. A graphical display of surface models and intersection normals is available. There are additional capabilities to regulate point spacing on input curves and to compute surface/surface intersection curves. Input and output data formats are described; detailed suggestions are given for user input. Instructions for execution are given, and examples are shown.
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap, and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method did, and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration of maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. The difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
Data-driven Modeling of Metal-oxide Sensors with Dynamic Bayesian Networks
NASA Astrophysics Data System (ADS)
Gosangi, Rakesh; Gutierrez-Osuna, Ricardo
2011-09-01
We present a data-driven probabilistic framework to model the transient response of MOX sensors modulated with a sequence of voltage steps. Analytical models of MOX sensors are usually built based on the physico-chemical properties of the sensing materials. Although building these models provides an insight into the sensor behavior, they also require a thorough understanding of the underlying operating principles. Here we propose a data-driven approach to characterize the dynamical relationship between sensor inputs and outputs. Namely, we use dynamic Bayesian networks (DBNs), probabilistic models that represent temporal relations between a set of random variables. We identify a set of control variables that influence the sensor responses, create a graphical representation that captures the causal relations between these variables, and finally train the model with experimental data. We validated the approach on experimental data in terms of predictive accuracy and classification performance. Our results show that DBNs can accurately predict the dynamic response of MOX sensors, as well as capture the discriminatory information present in the sensor transients.
A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine
Sen-Bhattacharya, Basabdatta; Serrano-Gotarredona, Teresa; Balassa, Lorinc; Bhattacharya, Akash; Stokes, Alan B.; Rowley, Andrew; Sugiarto, Indar; Furber, Steve
2017-01-01
We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)—brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz) implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. Scalability of the framework is demonstrated by a multi-node architecture consisting of three “nodes,” where each node is the “basic building block” LGN model. This 420 neuron model is tested with synthetic periodic stimulus at 10 Hz to all the nodes. The model output is the average of the outputs from all nodes, and conforms to the above-mentioned predictions of each node. Power consumption for model simulation on SpiNNaker is ≪1 W.
Regression with Small Data Sets: A Case Study using Code Surrogates in Additive Manufacturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamath, C.; Fan, Y. J.
There has been an increasing interest in recent years in the mining of massive data sets whose sizes are measured in terabytes. While it is easy to collect such large data sets in some application domains, there are others where collecting even a single data point can be very expensive, so the resulting data sets have only tens or hundreds of samples. For example, when complex computer simulations are used to understand a scientific phenomenon, we want to run the simulation for many different values of the input parameters and analyze the resulting output. The data set relating the simulation inputs and outputs is typically quite small, especially when each run of the simulation is expensive. However, regression techniques can still be used on such data sets to build an inexpensive "surrogate" that could provide an approximate output for a given set of inputs. A good surrogate can be very useful in sensitivity analysis, uncertainty analysis, and in designing experiments. In this paper, we compare different regression techniques to determine how well they predict melt-pool characteristics in the problem domain of additive manufacturing. Our analysis indicates that some of the commonly used regression methods do perform quite well even on small data sets.
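A sketch of such a comparison on a small surrogate data set, using leave-one-out cross-validation (affordable when there are only tens of runs), might look as follows; the synthetic inputs and outputs stand in for the paper's melt-pool simulation data, and the three regressors are common choices rather than the paper's exact set.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(5)

# Stand-in for expensive simulation data: 25 runs, 3 inputs, 1 output
X = rng.uniform(0, 1, size=(25, 3))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=25)

models = {"linear": LinearRegression(),
          "gaussian process": GaussianProcessRegressor(),
          "random forest": RandomForestRegressor(n_estimators=200, random_state=0)}

for name, model in models.items():
    # Leave-one-out CV is affordable, and appropriate, for tiny data sets
    scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_absolute_error")
    print(f"{name}: LOO mean absolute error = {-scores.mean():.3f}")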
The Critical Infrastructure Portfolio Selection Model
2008-06-13
equal to zero (0.0). In terms of what a DMU might represent in reality, consider a restaurant owner who owns a set of restaurant franchises. That...that is described within this thesis, the DMUs are CI reconstruction projects. Just like the aforementioned restaurant franchise owner, a leader...owner is justifiably interested in knowing which restaurants turn a profit or provide quality service, both necessary benefits (outputs), based on the
Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.
Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D
2014-01-01
Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.
Brainstorming transformative solutions - Sustainable Puerto ...
This narrative scenario depicts one of many possible futures for the island of Puerto Rico in which the goals of energy and food supply resilience have been met. Set in the year 2080, the narrative describes a series of hypothetical (but possible) events, a set of proactive governance actions and policies, and citizen responses to those events and interventions. It is based on expert opinion and extrapolation of recent trends in energy markets, technology, and policy development, as well as recent events in Puerto Rico, and describes a scenario in which Puerto Rico has achieved recently stated goals for energy and food systems resilience and sustainability. The narrative was developed as part of a futures exercise and will be used, along with the outputs of a recent stakeholder and expert workshop, to inform modeling efforts underway by a coalition of researchers and local stakeholders -- an NSF-funded project entitled Urban Resilience to Extreme Events. The document will be posted on the Urban Resilience to Extreme Events Research Network's blog.
A Nested Nearshore Nutrient Model (N³M) for ...
Nearshore conditions drive phenomena like harmful algal blooms (HABs), and the nearshore and coastal margin are the parts of the Great Lakes most used by humans. To assess conditions, optimize monitoring, and evaluate management options, a model of nearshore nutrient transport and algal dynamics is being developed. The model targets a "regional" spatial scale, similar to the Great Lakes Aquatic Habitat Framework's sub-basins, which divide the nearshore into 30 regions. Model runs span 365 days (a whole-season temporal scale), reporting at 3-hour intervals. N³M uses output from existing hydrodynamic models and simple transport kinetics. The nutrient transport component of this model is largely complete and is being tested with various hydrodynamic data sets. The first test case covers a 200 km² area between two major tributaries to Lake Michigan, the Grand and the Muskegon. N³M currently simulates phosphorus and chloride, selected for their distinct in-lake transport dynamics; nitrogen will be added. Initial results for 2003, 2010, and 2015 show encouraging correlations with field measurements. Initially implemented in MATLAB, the model is currently implemented in Python and leverages multi-processor computation. The 4D in-browser visualizer Cesium is used to view model output, time-varying satellite imagery, and field observations.
Estimates of emergency operating capacity in U.S. manufacturing industries: 1994--2005
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belzer, D.B.
1997-02-01
To develop integrated policies for mobilization preparedness, planners require estimates and projections of available productive capacity during national emergency conditions. This report develops projections of national emergency operating capacity (EOC) for 458 US manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level. These measures are intended for use in planning models that are designed to predict the demands for detailed industry sectors that would occur under conditions such as a military mobilization or a major national disaster. This report is part of an ongoing series of studies prepared by the Pacific Northwest National Laboratory to support mobilization planning studies of the Federal Emergency Management Agency/US Department of Defense (FEMA/DOD). Earlier sets of EOC estimates were developed in 1985 and 1991. This study presents estimates of EOC through 2005. As in the 1991 study, projections of capacity were based upon extrapolations of equipment capital stocks. The methodology uses time series regression models based on industry data to obtain a response function of industry capital stock to levels of industrial output. The distributed lag coefficients of these response functions are then used with projected outputs to extrapolate the 1994 level of EOC. Projections of industrial outputs were taken from the intermediate-term forecast of the US economy prepared by INFORUM (Interindustry Forecasting Model, University of Maryland) in the spring of 1996.
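The distributed-lag methodology can be sketched as follows on synthetic data: regress capital stock on current and lagged output levels, then apply the estimated lag coefficients to projected outputs. The lag structure, noise levels, and growth rates below are invented for illustration, not the report's estimates.

import numpy as np

rng = np.random.default_rng(6)

# Synthetic stand-ins for one industry: 30 years of output and equipment capital stock
years, n_lags = 30, 3
output = 100 + np.cumsum(rng.normal(1.0, 3.0, years))
true_lags = np.array([0.5, 0.3, 0.2])                   # assumed response function
stock = np.convolve(output, true_lags)[:years] + rng.normal(0, 1.0, years)

# Estimate the distributed-lag response of capital stock to output levels
A = np.array([output[t - n_lags + 1: t + 1][::-1] for t in range(n_lags - 1, years)])
coef, *_ = np.linalg.lstsq(A, stock[n_lags - 1:], rcond=None)
print("estimated lag coefficients:", np.round(coef, 2))  # ~ [0.5, 0.3, 0.2]

# Extrapolate capacity from projected outputs (an INFORUM-style forecast stand-in)
projected = output[-1] * 1.02 ** np.arange(1, 4)
recent = np.concatenate([output[-(n_lags - 1):], projected])
future_stock = [coef @ recent[i: i + n_lags][::-1] for i in range(projected.size)]
print("projected capital stock:", np.round(future_stock, 1))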
On the use of through-fall exclusion experiments to filter model hypotheses.
NASA Astrophysics Data System (ADS)
Fisher, R.
2015-12-01
One key threat to the continued existence of large tropical forest carbon reservoirs is the increasing severity of drought across Amazonian forests, observed in climate model predictions, in recent extreme drought events, and in the more chronic lengthening of the dry season of southeastern Amazonia. Model comprehension of these systems is in its infancy, particularly with regard to the sensitivity of model output to the representation of hydraulic strategies in tropical forest systems. Here we use data from the ongoing, 14-year-old Caxiuana through-fall exclusion experiment in eastern Brazil to filter a set of representations of the costs and benefits of alternative hydraulic strategies. In representations where there is a high resource cost to hydraulic resilience, the trait-filtering CLM4.5(ED) model selects vegetation types that are sensitive to drought. Conversely, where drought tolerance is inexpensive, a more robust ecosystem emerges from the vegetation dynamics prediction. Thus, there is an impact of trait trade-off relationships on rainforest drought tolerance. It is possible to constrain the more realistic scenarios using outputs from the drought experiments. Better prediction would likely result from a more comprehensive understanding of the costs and benefits of alternative plant strategies.
Physics Based Modeling and Prognostics of Electrolytic Capacitors
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai
2012-01-01
This paper proposes a first-principles-based modeling and prognostics approach for electrolytic capacitors. Electrolytic capacitors have become critical components in electronics systems in aeronautics and other domains. Degradations and faults in the DC-DC converter unit propagate to the GPS and navigation subsystems and affect the overall solution. Capacitors and MOSFETs are the two major components which cause degradations and failures in DC-DC converters. This type of capacitor is known for its low reliability and frequent breakdown in critical systems like power supplies of avionics equipment and electrical drivers of electromechanical actuators of control surfaces. Some of the more prevalent fault effects, such as a ripple voltage surge at the power supply output, can cause glitches in the GPS position and velocity output, and this, in turn, if not corrected, will propagate and distort the navigation solution. In this work, we study the effects of accelerated aging due to thermal stress on different sets of capacitors under different conditions. Our focus is on deriving first-principles degradation models for thermal stress conditions. Data collected from simultaneous experiments are used to validate the derived models. Our overall goal is to derive accurate models of capacitor degradation and use them to predict performance changes in DC-DC converters.
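One simple first-principles-style form consistent with this approach models capacitance loss as electrolyte evaporation at an Arrhenius-accelerated rate; the sketch below fits such a model to synthetic accelerated-aging data and extrapolates an end-of-life estimate. The linear-loss form, activation energy, temperatures, and threshold are illustrative assumptions, not the paper's validated model.

import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5                                  # Boltzmann constant [eV/K]
EA, T_STRESS, T_USE = 0.55, 378.0, 298.0        # assumed activation energy [eV], temps [K]

def cap_loss(t_hours, c0, rate):
    """Assumed degradation form: linear capacitance loss from electrolyte evaporation."""
    return c0 * (1.0 - rate * t_hours)

# Synthetic accelerated-aging measurements at the stress temperature (105 C)
rng = np.random.default_rng(7)
t = np.linspace(0, 3000, 25)
c_meas = cap_loss(t, 2200e-6, 9.3e-5) * rng.normal(1, 0.002, t.size)

popt, _ = curve_fit(cap_loss, t, c_meas, p0=[2000e-6, 5e-5])
c0_fit, rate_fit = popt

# Arrhenius scaling of the fitted rate from stress to use temperature
rate_use = rate_fit * np.exp(-(EA / K_B) * (1.0 / T_USE - 1.0 / T_STRESS))
print(f"fitted rate at 105 C: {rate_fit:.2e} /h")
print(f"predicted time to 20% capacitance loss at 25 C: {0.20 / rate_use:.0f} h")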
High-resolution Doppler model of the human gait
NASA Astrophysics Data System (ADS)
Geisheimer, Jonathan L.; Greneker, Eugene F., III; Marshall, William S.
2002-07-01
A high-resolution Doppler model of the walking human was developed for analyzing the continuous wave (CW) radar gait signature. Data for twenty subjects were collected simultaneously using an infrared motion capture system along with a two-channel 10.525 GHz CW radar. The motion capture system recorded three-dimensional coordinates of infrared markers placed on the body. These body marker coordinates were used as inputs to create the theoretical Doppler output using a model constructed in MATLAB. The outputs of the model are the simulated Doppler signals due to each of the major limbs and the thorax. An estimated radar cross section for each part of the body was assigned using the Lund & Browder chart of estimated body surface area. The resultant Doppler model was then compared with the actual recorded Doppler gait signature in the frequency domain using the spectrogram. Comparison of the two sets of data has revealed several identifiable biomechanical features in the radar gait signature due to leg and body motion. The results of the research show that a wealth of information can be unlocked from the radar gait signature, which may be useful in security and biometric applications.
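The modeling chain described above -- superpose phase histories of individual body parts, weight each by an assumed radar cross section, and compare in the frequency domain via the spectrogram -- can be sketched as follows; the sinusoidal limb motions and RCS weights are crude placeholders for the motion-capture-driven MATLAB model.

import numpy as np
from scipy.signal import spectrogram

fs, f0, c = 2000.0, 10.525e9, 3e8           # sample rate [Hz], carrier [Hz], light speed
t = np.arange(0, 4, 1 / fs)
gait_hz = 1.0                               # assumed stride frequency

# (RCS weight, oscillation amplitude [m], frequency [Hz]) -- invented per body part
parts = {"torso": (0.9, 0.02, gait_hz),
         "leg":   (0.3, 0.30, 2 * gait_hz),
         "arm":   (0.1, 0.25, gait_hz)}

signal = np.zeros_like(t, dtype=complex)
walk = 1.2 * t                              # 1.2 m/s whole-body translation
for weight, amp, f in parts.values():
    r = walk + amp * np.sin(2 * np.pi * f * t)               # radial range of scatterer
    signal += weight * np.exp(1j * 4 * np.pi * f0 * r / c)   # CW radar phase history

freqs, times, sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192,
                                return_onesided=False)
print("dominant Doppler bin [Hz]:", freqs[np.argmax(sxx.sum(axis=1))])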
NASA Astrophysics Data System (ADS)
Vannametee, E.; Karssenberg, D.; Hendriks, M. R.; de Jong, S. M.; Bierkens, M. F. P.
2010-05-01
We propose a modelling framework for distributed hydrological modelling of 10³-10⁵ km² catchments by discretizing the catchment into geomorphologic units. Each of these units is modelled using a lumped model representative of the processes in the unit. Here, we focus on the development and parameterization of this lumped model as a component of our framework. The development of the lumped model requires rainfall-runoff data for an extensive set of geomorphological units. Because such large observational data sets do not exist, we create artificial data. With a high-resolution, physically based rainfall-runoff model, we create artificial rainfall events and resulting hydrographs for an extensive set of different geomorphological units. This data set is used to identify the lumped model of geomorphologic units. The advantage of this approach is that it results in a lumped model with a physical basis, with representative parameters that can be derived from point-scale measurable physical parameters. The approach starts with the development of the high-resolution rainfall-runoff model that generates an artificial discharge dataset from rainfall inputs as a surrogate for a real-world dataset. The model is run for approximately 10⁵ scenarios that describe different characteristics of rainfall, properties of the geomorphologic units (i.e. slope gradient, unit length and regolith properties), antecedent moisture conditions and flow patterns. For each scenario run, the results of the high-resolution model (i.e. runoff and state variables) at selected simulation time steps are stored in a database. The second step is to develop the lumped model of a geomorphological unit. This forward model consists of a set of simple equations that calculate Hortonian runoff and state variables of the geomorphologic unit over time. The lumped model contains only three parameters: a ponding factor, a linear reservoir parameter, and a lag time. The model is capable of giving an appropriate representation of the transient rainfall-runoff relations that exist in the artificial data set generated with the high-resolution model. The third step is to find the values of the empirical parameters in the lumped forward model using the artificial dataset. For each scenario of the high-resolution model run, a set of lumped model parameters is determined with a fitting method using the corresponding time series of state variables and outputs retrieved from the database. Thus, the parameters in the lumped model can be estimated by using the artificial data set. The fourth step is to develop an approach to assign lumped model parameters based upon the properties of the geomorphological unit. This is done by finding relationships between the measurable physical properties of geomorphologic units (i.e. slope gradient, unit length, and regolith properties) and the lumped forward model parameters using multiple regression techniques. In this way, a set of lumped forward model parameters can be estimated as a function of the morphology and physical properties of the geomorphologic units. The lumped forward model can then be applied to different geomorphologic units. Finally, the performance of the lumped forward model is evaluated; the outputs of the lumped forward model are compared with the results of the high-resolution model.
Our results show that the lumped forward model gives the best estimates of total discharge volumes and peak discharges when rain intensities are not significantly larger than the infiltration capacities of the units and when the units are small with a flat gradient. Hydrograph shapes are fairly well reproduced for most cases except for flat and elongated units with large runoff volumes. The results of this study provide a first step towards developing low-dimensional models for large ungauged basins.
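A sketch of the third step -- fitting the three lumped parameters (ponding factor, linear-reservoir constant, lag time) to a hydrograph from the artificial data set -- is given below, with a synthetic hydrograph standing in for the high-resolution model output; the specific forms of the ponding and reservoir equations are illustrative assumptions.

import numpy as np
from scipy.optimize import curve_fit

dt = 0.1                                       # time step [h]
t = np.arange(0, 24, dt)
rain = np.where(t < 2.0, 10.0, 0.0)            # 2-hour rainfall block [mm/h]

def lumped(t, ponding, k, lag):
    """Three-parameter lumped unit: ponding factor, linear reservoir, lag time."""
    effective = ponding * rain                 # fraction of rain becoming runoff
    q = np.zeros_like(t)
    for i in range(1, t.size):                 # linear reservoir: dQ/dt = (R - Q)/k
        q[i] = q[i-1] + dt * (effective[i-1] - q[i-1]) / k
    return np.interp(t - lag, t, q, left=0.0)  # pure time lag

# Stand-in for one scenario of the artificial high-resolution data set
rng = np.random.default_rng(8)
q_obs = lumped(t, 0.6, 3.0, 1.5) + rng.normal(0, 0.05, t.size)

popt, _ = curve_fit(lumped, t, q_obs, p0=[0.5, 2.0, 1.0])
print("fitted (ponding factor, reservoir k, lag):", np.round(popt, 2))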
2012-10-10
functions) that you define as important outputs of the model. Think of the Monte Carlo simulation approach as picking golf balls out of a large...the large model as a very large basket, wherein many baby baskets reside. Each baby basket has its own set of colored golf balls that are bouncing...around. Sometimes these baby baskets are linked with each other (if there is a correlation between the variables), forcing the golf balls to bounce in
Data free inference with processed data products
Chowdhary, K.; Najm, H. N.
2014-07-12
Here, we consider the context of probabilistic inference of model parameters given error bars or confidence intervals on model output values, when the data is unavailable. We introduce a class of algorithms in a Bayesian framework, relying on maximum entropy arguments and approximate Bayesian computation methods, to generate consistent data with the given summary statistics. Once we obtain consistent data sets, we pool the respective posteriors, to arrive at a single, averaged density on the parameters. This approach allows us to perform accurate forward uncertainty propagation consistent with the reported statistics.
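A minimal sketch of this pipeline -- generate data sets consistent with the reported summary statistics (a Gaussian being the maximum-entropy choice for a given mean and variance), compute a posterior from each, and pool -- is given below; the flat prior, Gaussian likelihood, and grid posterior are simplifying assumptions rather than the paper's full approximate Bayesian computation machinery.

import numpy as np

rng = np.random.default_rng(9)

# Only summary statistics are reported: sample mean, standard error, sample size
reported_mean, reported_se, n = 1.30, 0.10, 20
sigma = reported_se * np.sqrt(n)             # implied per-observation spread

def consistent_dataset():
    """Gaussian draw (max-entropy for fixed mean/variance), rescaled to match exactly."""
    x = rng.normal(size=n)
    x = (x - x.mean()) / x.std(ddof=1)
    return reported_mean + sigma * x

mu_grid = np.linspace(0.8, 1.8, 400)
dmu = mu_grid[1] - mu_grid[0]

def posterior(data):
    """Grid posterior for the mean under a flat prior and Gaussian likelihood."""
    loglik = -0.5 * (((data[:, None] - mu_grid) / sigma) ** 2).sum(axis=0)
    p = np.exp(loglik - loglik.max())
    return p / (p.sum() * dmu)

# Pool the per-data-set posteriors into a single averaged density
pooled = np.mean([posterior(consistent_dataset()) for _ in range(200)], axis=0)
pooled /= pooled.sum() * dmu
print("pooled posterior mean:", (mu_grid * pooled * dmu).sum())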
Programmable electronic synthesized capacitance
NASA Technical Reports Server (NTRS)
Kleinberg, Leonard L. (Inventor)
1987-01-01
A predetermined and variable synthesized capacitance which may be incorporated into the resonant portion of an electronic oscillator for the purpose of tuning the oscillator comprises a programmable operational amplifier circuit. The operational amplifier circuit has its output connected to its inverting input, in a follower configuration, by a network which is low impedance at the operational frequency of the circuit. The output of the operational amplifier is also connected to the noninverting input by a capacitor. The noninverting input appears as a synthesized capacitance which may be varied with a variation in gain-bandwidth product of the operational amplifier circuit. The gain-bandwidth product may, in turn, be varied with a variation in input set current with a digital to analog converter whose output is varied with a command word. The output impedance of the circuit may also be varied by the output set current. This circuit may provide very small ranges in oscillator frequency with relatively large control voltages unaffected by noise.
Automatic paper sliceform design from 3D solid models.
Le-Nguyen, Tuong-Vu; Low, Kok-Lim; Ruiz, Conrado; Le, Sang N
2013-11-01
A paper sliceform or lattice-style pop-up is a form of papercraft that uses two sets of parallel paper patches slotted together to make a foldable structure. The structure can be folded flat, as well as fully opened (popped up) to make the two sets of patches orthogonal to each other. Automatic design of paper sliceforms is still not supported by existing computational models and remains a challenge. We propose novel geometric formulations of valid paper sliceform designs that consider the stability, flat-foldability and physical realizability of the designs. Based on a set of sufficient construction conditions, we also present an automatic algorithm for generating valid sliceform designs that closely depict the given 3D solid models. By approximating the input models using a set of generalized cylinders, our method significantly reduces the search space for stable and flat-foldable sliceforms. To ensure the physical realizability of the designs, the algorithm automatically generates slots or slits on the patches such that no two cycles embedded in two different patches interlock each other. This guarantees local pairwise assemblability between patches, which is empirically shown to lead to global assemblability. Our method has been demonstrated on a number of example models, and the output designs have been successfully made into real paper sliceforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luh, G.C.
1994-01-01
This thesis presents the application of advanced modeling techniques to construct nonlinear forward and inverse models of internal combustion engines for the detection and isolation of incipient faults. The NARMAX (Nonlinear Auto-Regressive Moving Average modeling with eXogenous inputs) technique of system identification proposed by Leontaritis and Billings was used to derive a nonlinear model of an internal combustion engine over operating conditions corresponding to the I/M240 cycle. The I/M240 cycle is a standard proposed by the United States Environmental Protection Agency to measure tailpipe emissions in inspection and maintenance programs, and consists of a driving schedule developed for the purpose of testing compliance with federal vehicle emission standards for carbon monoxide, unburned hydrocarbons, and nitrogen oxides. The experimental work for model identification and validation was performed on a 3.0 liter V6 engine installed in an engine test cell at the Center for Automotive Research at The Ohio State University. In this thesis, different types of model structures were proposed to obtain multi-input multi-output (MIMO) nonlinear NARX models. A modification of the algorithm proposed by He and Asada was used to estimate the robust orders of the derived MIMO nonlinear models. A methodology for the analysis of inverse NARX models was developed. Two methods were proposed to derive the inverse NARX model: (1) inversion of the forward NARX model; and (2) direct identification of the inverse model from the output-input data set. Invertibility, the minimum-phase characteristic of zero dynamics, and stability analysis of the NARX forward model are also discussed. Stability in the sense of Lyapunov is investigated to check the stability of the identified forward and inverse models. This application of the inverse problem leads to the estimation of unknown inputs and to actuator fault diagnosis.
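The forward-identification step can be illustrated with a toy polynomial NARX model estimated by ordinary least squares on lagged inputs, outputs, and a bilinear cross-term; the system and regressor set below are invented for illustration and are far simpler than the thesis's engine models.

import numpy as np

rng = np.random.default_rng(10)

# Toy SISO nonlinear system to identify (not the engine model from the thesis)
n = 2000
u = rng.uniform(-1, 1, n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = (0.7 * y[t-1] - 0.2 * y[t-2] + 0.5 * u[t-1]
            + 0.3 * u[t-1] * y[t-1] + 0.01 * rng.normal())

# Polynomial NARX regressors: lagged outputs, lagged input, one bilinear term
Phi = np.array([[y[t-1], y[t-2], u[t-1], u[t-1] * y[t-1]] for t in range(2, n)])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated parameters:", np.round(theta, 3))   # ~ [0.7, -0.2, 0.5, 0.3]
# An inverse NARX model is identified analogously by regressing u on lagged y and u.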
Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael
2016-12-16
As more and more researchers turn to big data for new opportunities in biomedical discovery, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. The objective was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. These guidelines are intended to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community.
Wilkes, Donald F.; Purvis, James W.; Miller, A. Keith
1997-01-01
An infinitely variable transmission is capable of operating between a maximum speed in one direction and a minimum speed in the opposite direction, including a zero output angular velocity, while being supplied with energy at a constant angular velocity. Input energy is divided between a first power path carrying an orbital set of elements and a second path that includes a variable speed adjustment mechanism. The second power path also connects with the orbital set of elements in such a way as to vary their rate of angular rotation. Power from the first and second paths is then combined by the orbital element set and delivered to an output element. The transmission can be designed to operate over a preselected ratio of forward to reverse output speeds.
Zhang, Z. Fred; White, Signe K.; Bonneville, Alain; ...
2014-12-31
Numerical simulations have been used for estimating CO2 injectivity, CO2 plume extent, pressure distribution, and Area of Review (AoR), and for the design of CO2 injection operations and the monitoring network for the FutureGen project. The simulation results are affected by uncertainties associated with numerous input parameters, the conceptual model, initial and boundary conditions, and factors related to injection operations. Furthermore, the uncertainties in the simulation results also vary in space and time. The key need is to identify those uncertainties that critically impact the simulation results and to quantify their impacts. We introduce an approach to determine the local sensitivity coefficient (LSC), defined as the response of the output in percent, to rank the importance of model inputs on outputs. The uncertainty of an input with higher sensitivity has a larger impact on the output. The LSC is scalable by the error of an input parameter. The composite sensitivity of an output to a subset of inputs can be calculated by summing the individual LSC values. We propose this local sensitivity coefficient method and apply it to the FutureGen 2.0 site in Morgan County, Illinois, USA, to investigate the sensitivity of input parameters and initial conditions. The conceptual model for the site consists of 31 layers, each of which has a unique set of input parameters. The sensitivity of 11 parameters for each layer and 7 inputs as initial conditions is then investigated. For CO2 injectivity and plume size, about half of the uncertainty is due to only 4 or 5 of the 348 inputs, and three quarters of the uncertainty is due to about 15 of the inputs. The initial conditions and the properties of the injection layer and its neighbour layers contribute most of the sensitivity. Overall, the simulation outputs are very sensitive to only a small fraction of the inputs. However, the parameters that are important for controlling CO2 injectivity are not the same as those controlling the plume size. The three most sensitive inputs for injectivity were the horizontal permeability of Mt Simon 11 (the injection layer), the initial fracture-pressure gradient, and the residual aqueous saturation of Mt Simon 11, while those for the plume area were the initial salt concentration, the initial pressure, and the initial fracture-pressure gradient. The advantages of requiring only a single set of simulation results, scalability to the proper parameter errors, and easy calculation of composite sensitivities make this approach very cost-effective for estimating AoR uncertainty and guiding cost-effective site characterization, injection well design, and monitoring network design for CO2 storage projects.
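A one-at-a-time finite-difference version of a local sensitivity coefficient can be sketched as follows. The paper's LSC is defined for space- and time-varying outputs and is scalable by the parameter error, so treat this scalar-output version, and the toy model inside it, as an illustration only.

```python
import numpy as np

def local_sensitivity(model, x0, rel_step=0.01):
    """One-at-a-time sensitivity: percent change of the output per percent
    perturbation of each input, evaluated at the base case x0."""
    y0 = model(x0)
    lsc = np.empty(len(x0))
    for i in range(len(x0)):
        x = x0.copy()
        x[i] *= 1.0 + rel_step
        lsc[i] = ((model(x) - y0) / y0) / rel_step   # dimensionless
    return lsc

# toy stand-in for an injectivity simulator: permeability, pressure
# gradient and residual saturation as inputs (values are placeholders)
model = lambda x: x[0] ** 0.8 * x[1] / (1.0 + x[2])
lsc = local_sensitivity(model, np.array([5e-14, 1.2e4, 0.3]))
composite = np.abs(lsc).sum()   # composite sensitivity of a subset of inputs
```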
Jaton, Florian
2017-01-01
This article documents the practical efforts of a group of scientists designing an image-processing algorithm for saliency detection. By following the actors of this computer science project, the article shows that the problems often considered to be the starting points of computational models are in fact provisional results of time-consuming, collective and highly material processes that engage habits, desires, skills and values. In the project being studied, problematization processes lead to the constitution of referential databases called ‘ground truths’ that enable both the effective shaping of algorithms and the evaluation of their performances. Working as important common touchstones for research communities in image processing, the ground truths are inherited from prior problematization processes and may be imparted to subsequent ones. The ethnographic results of this study suggest two complementary analytical perspectives on algorithms: (1) an ‘axiomatic’ perspective that understands algorithms as sets of instructions designed to solve given problems computationally in the best possible way, and (2) a ‘problem-oriented’ perspective that understands algorithms as sets of instructions designed to computationally retrieve outputs designed and designated during specific problematization processes. If the axiomatic perspective on algorithms puts the emphasis on the numerical transformations of inputs into outputs, the problem-oriented perspective puts the emphasis on the definition of both inputs and outputs.
Blindness and Selective Mutism: One Student's Response to Voice-Output Devices
ERIC Educational Resources Information Center
Holley, Mary; Johnson, Ashli; Herzberg, Tina
2014-01-01
This case study was designed to measure the response of one student with blindness and selective mutism to the intervention of voice-output devices across two years and two different teachers in two instructional settings. Before the introduction of the voice output devices, the student did not choose to communicate using spoken language or…
Lertsatitthanakorn, C
2007-05-01
The use of biomass cook stoves is widespread in the domestic sector of developing countries, but the stoves are not efficient. To advance the versatility of the cook stove, we investigated the feasibility of adding a commercial thermoelectric (TE) module made of bismuth-telluride based materials to the stove's side wall, thereby creating a thermoelectric generator system that utilizes a proportion of the stove's waste heat. The system, a biomass cook stove thermoelectric generator (BITE), consists of a commercial TE module (Taihuaxing model TEP1-1264-3.4), a metal sheet wall which acts as one side of the stove's structure and serves as the hot side of the TE module, and a rectangular fin heat sink at the cold side of the TE module. An experimental set-up was built to evaluate the conversion efficiency at various temperature ranges. The experiments revealed that the electrical power output and the conversion efficiency depended on the temperature difference between the cold and hot sides of the TE module. At a temperature difference of approximately 150 degrees C, the unit achieved a power output of 2.4 W at a conversion efficiency of 3.2%, enough to drive a low power incandescent light bulb or a small portable radio. A theoretical model approximated the power output at low temperature ranges. An economic analysis indicated that the payback period tends to be very short when compared with the cost of the same power supplied by batteries. Therefore, the generator design formulated here could be used in the domestic sector. The system is not intended to compete with primary power sources but serves adequately as an emergency or backup source of power.
Radial/axial power divider/combiner
NASA Technical Reports Server (NTRS)
Vaddiparty, Yerriah P. (Inventor)
1987-01-01
An electromagnetic power divider/combiner comprises N radial outputs (31) having equal powers and preferably equal phases, and a single axial output (20). A divider structure (1) and a preferably identical combiner structure (2) are broadside coupled across a dielectric substrate (30) containing on one side the network of N radial outputs (31) and on its other side a set of N equispaced stubs (42) which are capacitively coupled through the dielectric substrate (30) to the N radial outputs (31). The divider structure (1) and the combiner structure (2) each comprise a dielectric disk (12, 22, respectively) on which is mounted a set of N radial impedance transformers (14, 24, respectively). Gross axial coupling is determined by the thickness of the dielectric layer (30). Rotating the disks (12, 22) with respect to each other effectuates fine adjustment in the degree of axial coupling.
Modeling the expenditure and reconstitution of work capacity above critical power.
Skiba, Philip Friere; Chidnok, Weerapong; Vanhatalo, Anni; Jones, Andrew M
2012-08-01
The critical power (CP) model includes two constants: the CP and the W' [P = (W' / t) + CP]. The W' is the finite work capacity available above CP. Power output above CP results in depletion of the W'; complete depletion of the W' results in exhaustion. Monitoring the W' may be valuable to athletes during training and competition. Our purpose was to develop a function describing the dynamic state of the W' during intermittent exercise. After determination of V̇O2max, CP, and W', seven subjects completed four separate exercise tests on a cycle ergometer on different days. Each protocol comprised a set of intervals: 60 s at a severe power output, followed by 30 s of recovery at a lower prescribed power output. The intervals were repeated until exhaustion. These data were entered into a continuous equation predicting the balance of W' remaining, assuming exponential reconstitution of the W'. The time constant was varied by an iterative process until the remaining modeled W' = 0 at the point of exhaustion. The time constants of W' recharge were negatively correlated with the difference between sub-CP recovery power and CP. The relationship was best fit by an exponential (r = 0.77). The model-predicted W' balance correlated with the temporal course of the rise in V̇O2 (r = 0.82-0.96). The model accurately predicted exhaustion of the W' in a competitive cyclist during a road race. We have developed a function to track the dynamic state of the W' during intermittent exercise. This may have important implications for the planning and real-time monitoring of athletic performance.
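In continuous form, the balance described here is W'bal(t) = W' - integral from 0 to t of max(P(u) - CP, 0) * exp(-(t - u)/tau) du. A minimal sketch follows, using the exponential tau fit in the form commonly quoted for this model, tau = 546 * exp(-0.01 * D_CP) + 316 with D_CP the gap between mean sub-CP recovery power and CP; treat those constants as assumptions of the sketch rather than values taken from this abstract.

```python
import numpy as np

def w_prime_balance(power, cp, w_prime, dt=1.0):
    """Continuous W' balance: work done above CP decays exponentially
    during recovery, with tau tied to how far recovery sits below CP."""
    below = power < cp
    d_cp = cp - power[below].mean() if below.any() else 0.0
    tau = 546.0 * np.exp(-0.01 * d_cp) + 316.0          # assumed fit constants
    t = np.arange(len(power)) * dt
    expended = np.maximum(power - cp, 0.0) * dt          # joules spent per step
    # an expenditure at time u has decayed by exp(-(t - u)/tau) at time t
    decay = np.tril(np.exp(-(t[:, None] - t[None, :]) / tau))
    return w_prime - decay @ expended

# 60 s severe intervals at 350 W with 30 s recoveries at 200 W (CP = 250 W)
power = np.tile(np.r_[np.full(60, 350.0), np.full(30, 200.0)], 10)
balance = w_prime_balance(power, cp=250.0, w_prime=20000.0)
# exhaustion is predicted where the balance crosses zero
```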
NASA Astrophysics Data System (ADS)
Alpert, J. C.; Rutledge, G.; Wang, J.; Freeman, P.; Kang, C. Y.
2009-05-01
The NOAA Operational Modeling Archive Distribution System (NOMADS) is now delivering high availability services as part of NOAA's official real time data dissemination at its Web Operations Center (WOC). The WOC is a web service used by all organizational units in NOAA and acts as a data repository where public information can be posted to a secure and scalable content server. A goal is to foster collaborations among the research and education communities, value added retailers, and public access for science and development efforts aimed at advancing modeling and GEO-related tasks. The services used to access the operational model data output are the Open-source Project for a Network Data Access Protocol (OPeNDAP), implemented with the Grid Analysis and Display System (GrADS) Data Server (GDS), and applications for slicing, dicing and area sub-setting the large matrix of real time model data holdings. This approach ensures an efficient use of computer resources because users transmit/receive only the data necessary for their tasks, including metadata. Data sets served in this way with a high availability server offer vast possibilities for the creation of new products for value added retailers and the scientific community. New applications to access data and observations for verification of gridded model output, and progress toward integration with access to conventional and non-conventional observations, will be discussed. We will demonstrate how users can use NOMADS services to repackage area subsets, either as repackaged GRIB2 files or as values selected by ensemble component, (forecast) time, vertical level, global horizontal location, and variable: virtually a 6-dimensional analysis service across the internet.
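The access pattern described, transmitting only the requested hyperslab rather than the full model output, looks roughly like the following with a generic OPeNDAP client. The URL and the variable name are placeholders, not a real NOMADS address; actual endpoint paths and GRIB2-derived variable names differ.

```python
import xarray as xr

# hypothetical GDS endpoint and variable name; this shows only the pattern
url = "https://nomads.example.gov/dods/gfs/gfs_latest"

ds = xr.open_dataset(url)               # OPeNDAP: only metadata is read here
# slice by forecast time, vertical level and an area subset; the server
# transmits just this hyperslab, not the full model output
subset = (ds["tmpprs"]
          .isel(time=0)
          .sel(lev=500.0, lat=slice(30.0, 60.0), lon=slice(230.0, 300.0)))
subset.to_netcdf("t500_subset.nc")      # repackage the subset locally
```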
Xanthos – A Global Hydrologic Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xinya; Vernon, Chris R.; Hejazi, Mohamad I.
2017-09-11
Xanthos is an open-source hydrologic model, written in Python, designed to quantify and analyse global water availability. Xanthos simulates historical and future global water availability on a monthly time step at a spatial resolution of 0.5 geographic degrees. Xanthos was designed to be extensible and used by scientists that study global water supply and work with the Global Change Assessment Model (GCAM). Xanthos uses a user-defined configuration file to specify model inputs, outputs and parameters. Xanthos has been tested using actual global data sets and the model is able to provide historical observations and future estimates of renewable freshwater resources in the form of total runoff.
CORDEX Coordinated Output for Regional Evaluation
NASA Astrophysics Data System (ADS)
Gutowski, William; Giorgi, Filippo; Lake, Irene
2017-04-01
The Science Advisory Team for the Coordinated Regional Downscaling Experiment (CORDEX) has developed a baseline framework of specified regions, resolutions and simulation periods intended to provide a foundation for ongoing regional CORDEX activities: the CORDEX Coordinated Output for Regional Evaluation, or CORDEX-CORE. CORDEX-CORE was conceived in part to be responsive to IPCC needs for coordinated simulations that could provide regional climate downscaling (RCD) that yields fine-scale climate information beyond that resolved by GCMs. For each CORDEX region, a matrix of GCM-RCD experiments is designed based on the need to cover as much as possible the different dimensions of the uncertainty space (e.g., different emissions and land-use scenarios, GCMs, RCD models and techniques). An appropriate set of driving GCMs can allow a program of simulations that efficiently addresses key scientific issues within CORDEX, while facilitating comparison and transfer of results and lessons learned across different regions. The CORDEX-CORE program seeks to provide, as much as possible, homogeneity across domains, so it is envisioned that a standard set of regional climate models (RCMs) and empirical statistical downscaling (ESD) methods will downscale a standard set of GCMs over all or at least most CORDEX domains for a minimum set of scenarios (high and low end). The focus is on historical climate simulations for the 20th century and projections for the 21st century, implying that data would be needed minimally for the period 1950-2100 (but ideally 1900-2100). This foundational ensemble can be regionally enriched with further contributions (additional GCM-RCD pairs) by individual groups over their selected domains of interest. The RCM resolution for these core experiments will be in the range of 10-20 km, a resolution that has been shown to provide substantial added value for a variety of climate variables and that represents a significant step forward for the CORDEX program. This presentation describes the vision and structure of CORDEX-CORE while also soliciting discussion on plans for implementing the program.
A Web-based tool for UV irradiance data: predictions for European and Southeast Asian sites.
Kift, Richard; Webb, Ann R; Page, John; Rimmer, John; Janjai, Serm
2006-01-01
There are a range of UV models available, but one needs significant pre-existing knowledge and experience in order to be able to use them. In this article a comparatively simple Web-based model developed for the SoDa (Integration and Exploitation of Networked Solar Radiation Databases for Environment Monitoring) project is presented. This is a clear-sky model with modifications for cloud effects. To determine whether the model produces realistic UV data, the output is compared with one-year sets of hourly measurements at sites in the United Kingdom and Thailand. The accuracy of the output depends on the input, but reasonable results were obtained with the use of the default database inputs, and these improved when pyranometer data instead of modeled data provided the global radiation input needed to estimate the UV. The average modeled values of UV for the UK site were found to be within 10% of measurements. For the tropical sites in Thailand the average modeled values were within 11-20% of measurements for the four sites with the use of the default SoDa database values. These results improved when pyranometer data and TOMS ozone data from 2002 replaced the standard SoDa database values, reducing the error range for all four sites to less than 15%.
NASA Astrophysics Data System (ADS)
Dafonte, C.; Fustes, D.; Manteiga, M.; Garabato, D.; Álvarez, M. A.; Ulla, A.; Allende Prieto, C.
2016-10-01
Aims: We present an innovative artificial neural network (ANN) architecture, called a Generative ANN (GANN), that computes the forward model; that is, it learns the function that relates the unknown outputs (stellar atmospheric parameters, in this case) to the given inputs (spectra). Such a model can be integrated in a Bayesian framework to estimate the posterior distribution of the outputs. Methods: The architecture of the GANN follows the same scheme as a normal ANN, but with the inputs and outputs inverted. We train the network with the set of atmospheric parameters (Teff, log g, [Fe/H] and [α/Fe]), obtaining the stellar spectra for such inputs. The residuals between the spectra in the grid and the estimated spectra are minimized using a validation dataset to keep solutions as general as possible. Results: The performance of both conventional ANNs and GANNs in estimating the stellar parameters as a function of star brightness is presented and compared for different Galactic populations. GANNs provide significantly improved parameterizations for early and intermediate spectral types with rich and intermediate metallicities. The behaviour of both algorithms is very similar for our sample of late-type stars, with residuals in the derivation of [Fe/H] and [α/Fe] below 0.1 dex for stars with Gaia magnitude Grvs < 12, which accounts for on the order of four million stars to be observed by the Radial Velocity Spectrograph of the Gaia satellite. Conclusions: Uncertainty estimation of computed astrophysical parameters is crucial for the validation of the parameterization itself and for the subsequent exploitation by the astronomical community. GANNs produce not only the parameters for a given spectrum, but also a goodness-of-fit between the observed spectrum and the one predicted for a given set of parameters. Moreover, they allow us to obtain the full posterior distribution over the astrophysical parameter space once a noise model is assumed. This can be used for novelty detection and quality assessment.
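The Bayesian use of a generative model can be sketched independently of the network details: given a forward function from parameters to spectrum and a Gaussian noise model, the posterior over a parameter grid is proportional to the likelihood (times a flat prior here). The two-parameter toy forward model below stands in for the trained GANN; everything in it is illustrative.

```python
import numpy as np

def forward_model(teff, logg):
    """Stand-in for the trained generative network: maps two normalized
    atmospheric parameters to a toy 64-pixel 'spectrum'."""
    wave = np.linspace(0.0, 1.0, 64)
    return np.exp(-((wave - teff) ** 2) / 0.01) + logg * wave

rng = np.random.default_rng(1)
sigma = 0.05                                     # assumed Gaussian noise level
observed = forward_model(0.4, 0.7) + rng.normal(0.0, sigma, 64)

# posterior over a parameter grid with a flat prior:
# log p(theta | spectrum) = -0.5 * chi^2 + const
teff_grid = np.linspace(0.0, 1.0, 101)
logg_grid = np.linspace(0.0, 1.0, 101)
log_post = np.array([[-0.5 * np.sum((observed - forward_model(t, g)) ** 2) / sigma ** 2
                      for g in logg_grid] for t in teff_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

i, j = np.unravel_index(post.argmax(), post.shape)
teff_hat, logg_hat = teff_grid[i], logg_grid[j]  # MAP estimate
# post itself doubles as the goodness-of-fit / quality-assessment object
```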
The selection of construction sub-contractors using the fuzzy sets theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krzemiński, Michał
The paper presents an algorithm for the selection of sub-contractors. The author's main area of interest is scheduling flow models. The ranking task aims at an execution time that is as short as possible; brigade downtime should also be as small as possible. These targets are subject to significant obsolescence. The criteria for the selection of sub-contractors will therefore not be time and cost; it is assumed that all such criteria are met by the sub-contractors. The decision should be made with regard to factors that are difficult to measure, for whose assessment the fuzzy sets theory is a perfect fit. The paper presents a set of evaluation criteria, part of the knowledge base and a description of the output variable.
Earth-Science Data Co-Locating Tool
NASA Technical Reports Server (NTRS)
Lee, Seungwon; Pan, Lei; Block, Gary L.
2012-01-01
This software is used to co-locate Earth-science satellite data and climate-model analysis outputs in space and time. This enables the direct comparison of any set of data with different spatial and temporal resolutions. It is written in three separate modules that are clearly separated by their functionality and interfaces with other modules. This enables fast development of support for any new data set. In this updated version of the tool, several new front ends were developed for new products. This software finds co-locatable data pairs for given sets of data products and creates new data products that share the same spatial and temporal coordinates. This facilitates the direct comparison between two heterogeneous datasets and the comprehensive and synergistic use of the datasets.
System and Method for Modeling the Flow Performance Features of an Object
NASA Technical Reports Server (NTRS)
Jorgensen, Charles (Inventor); Ross, James (Inventor)
1997-01-01
The method and apparatus include a neural network for generating a model of an object in a wind tunnel from performance data on the object. The network is trained from test input signals (e.g., leading edge flap position, trailing edge flap position, angle of attack, and other geometric configurations, and power settings) and test output signals (e.g., lift, drag, pitching moment, or other performance features). In one embodiment, the neural network training method employs a modified Levenberg-Marquardt optimization technique. The model can be generated in 'real time' as wind tunnel testing proceeds. Once trained, the model is used to estimate performance features associated with the aircraft given geometric configuration and/or power setting inputs. The invention can also be applied in other similar static flow modeling applications in aerodynamics, hydrodynamics, fluid dynamics, and other such disciplines; for example, the static testing of cars, sails, foils, propellers, keels, rudders, turbines, fins, and the like, in a wind tunnel, water trough, or other flowing medium.
The estimation of soil water fluxes using lysimeter data
NASA Astrophysics Data System (ADS)
Wegehenkel, M.
2009-04-01
The validation of soil water balance models regarding soil water fluxes in the field is still a problem. It requires time series of measured model outputs. In our study, a soil water balance model was validated using lysimeter time series of measured model outputs. The soil water balance model used in our study was the Hydrus-1D model. This model was tested by comparing simulated with measured daily rates of actual evapotranspiration, soil water storage, groundwater recharge and capillary rise. These rates were obtained from twelve weighable lysimeters with three different soils and two different lower boundary conditions for the time period from January 1, 1996 to December 31, 1998. In that period, grass vegetation was grown on all lysimeters. These lysimeters are located in Berlin, Germany. One potential source of error in lysimeter experiments is preferential flow caused by an artificial channeling of water due to the occurrence of air space between the soil monolith and the inside wall of the lysimeters. To analyse such sources of error, Hydrus-1D was applied with different modelling procedures. The first procedure consists of a general uncalibrated application of Hydrus-1D. The second one includes a calibration of soil hydraulic parameters via inverse modelling of different percolation events with Hydrus-1D. In the third procedure, the model DUALP_1D was applied with the optimized hydraulic parameter set to test the hypothesis of the existence of preferential flow paths in the lysimeters. The results of the different modelling procedures indicated that, in addition to a precise determination of the soil water retention functions, vegetation parameters such as rooting depth should also be taken into account. Without such information, the rooting depth is a calibration parameter. However, in some cases, the uncalibrated application of both models also led to an acceptable fit between measured and simulated model outputs.
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision, and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig was assembled in which water circulated at different constant rates, with ports to insert catheters into a flow chamber. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. The coefficient of variation (CV) was calculated for each set and averaged. Between-catheter-systems (systematic) variability was derived by plotting calibration lines for sets of catheters. The slopes were used to estimate the systematic component. The performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). The CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3%, and ± 13.0% for triplicate readings. Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate readings, which is similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
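The quoted figures combine as a standard error budget: 95% limits are roughly 1.96 times the CV, independent random and systematic components add in quadrature, and averaging n readings shrinks only the random component by the square root of n. A short check of the abstract's own numbers, assuming exactly this error model:

```python
import math

cv_random = (5.4 + 4.8) / 2    # average between-readings CV, percent
cv_system = (5.8 + 6.0) / 2    # average between-catheter-systems CV, percent

random_95 = 1.96 * cv_random   # ~= 10.0 (95% confidence limits)
system_95 = 1.96 * cv_system   # ~= 11.6

# independent components add in quadrature
single = math.hypot(random_95, system_95)                      # ~= 15.3
# averaging n readings shrinks only the random component by sqrt(n)
triplicate = math.hypot(random_95 / math.sqrt(3), system_95)   # ~= 13.0
```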
NASA Astrophysics Data System (ADS)
Laiti, L.; Mallucci, S.; Piccolroaz, S.; Bellin, A.; Zardi, D.; Fiori, A.; Nikulin, G.; Majone, B.
2018-03-01
Assessing the accuracy of gridded climate data sets is highly relevant to climate change impact studies, since evaluation, bias correction, and statistical downscaling of climate models commonly use these products as reference. Among all impact studies, those addressing hydrological fluxes are the most affected by the errors and biases plaguing these data. This paper introduces a framework, coined the Hydrological Coherence Test (HyCoT), for assessing the coherence of gridded data sets with hydrological observations. HyCoT provides a framework for excluding meteorological forcing data sets not complying with observations, as a function of the particular goal at hand. The proposed methodology allows falsifying the hypothesis that a given data set is coherent with hydrological observations on the basis of the performance of hydrological modeling, measured by a metric selected by the modeler. HyCoT is demonstrated in the Adige catchment (southeastern Alps, Italy) for streamflow analysis, using a distributed hydrological model. The comparison covers the period 1989-2008 and includes five gridded daily meteorological data sets: E-OBS, MSWEP, MESAN, APGD, and ADIGE. The analysis highlights that APGD and ADIGE, the data sets with the highest effective resolution, display similar spatiotemporal precipitation patterns and produce the largest hydrological efficiency indices. Lower performances are observed for E-OBS, MESAN, and MSWEP, especially in small catchments. HyCoT reveals deficiencies in the representation of spatiotemporal patterns of gridded climate data sets, which cannot be corrected by simply rescaling the meteorological forcing fields, as is often done in bias correction of climate model outputs. We recommend this framework for assessing the hydrological coherence of gridded data sets to be used in large-scale hydroclimatic studies.
Emulation for probabilistic weather forecasting
NASA Astrophysics Data System (ADS)
Cornford, Dan; Barillec, Remi
2010-05-01
Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the ‘ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
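A minimal emulator workflow of the kind described can be sketched with scikit-learn's Gaussian process regressor standing in for the authors' implementation; the simulator, training design, and input distribution below are toys, and dimension reduction of the output is omitted.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def simulator(x):
    """Stand-in for an expensive model run with two input parameters."""
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(2)
X_train = rng.uniform(0.0, 1.0, size=(40, 2))   # the 'ensemble' design
y_train = simulator(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF([0.2, 0.2]),
                              normalize_y=True)
gp.fit(X_train, y_train)

# probabilistic forecast: Monte Carlo over the input distribution,
# propagated through the cheap emulator rather than the simulator
X_mc = rng.normal(0.5, 0.1, size=(10000, 2))
mean, std = gp.predict(X_mc, return_std=True)
# std carries the extra uncertainty introduced by emulation itself
```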
NASA Astrophysics Data System (ADS)
Ghader, Fatemeh; Aljoumani, Basem; Tröger, Uwe
2017-04-01
Groundwater is the main resource of fresh water. In Iran, the quality and quantity of groundwater are affected significantly by rapid population growth and unsustainable water management in the agricultural and industrial sectors. In the Maharlu-Bakhtegan and Tashk salt lakes basin, the overexploitation of groundwater for irrigation purposes has caused salt water intrusion from the lakes into the area's aquifers; moreover, the basin, located in the south of Iran in a semiarid climate, faces a significant decline in rainfall. All these factors cause the degradation of groundwater quality. For this study, the geographical coordinates of 406 observation wells will be defined as inputs and groundwater electrical conductivities (EC) will be set as the output. Ordinary kriging (OK) and artificial neural networks (ANN) will be investigated for modeling groundwater salinity. Eighty percent of the data will be randomly selected to train and develop the mentioned models and twenty percent of the data will be used for testing and validation. Finally, the outputs of the models will be compared with the corresponding measured values in the observation wells.
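A sketch of the planned comparison, using scikit-learn for the ANN and the pykrige package as one possible ordinary-kriging implementation; the coordinates and EC values below are synthetic placeholders for the 406 observation wells, not the study's data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# synthetic placeholders for the 406 wells: coordinates and measured EC
rng = np.random.default_rng(3)
x, y = rng.uniform(0, 50, 406), rng.uniform(0, 50, 406)
ec = 1000.0 + 20.0 * x - 15.0 * y + rng.normal(0.0, 50.0, 406)

train, test = train_test_split(np.arange(406), test_size=0.2, random_state=0)

# ordinary kriging of EC over the well coordinates
ok = OrdinaryKriging(x[train], y[train], ec[train], variogram_model="spherical")
ec_ok, var_ok = ok.execute("points", x[test], y[test])

# ANN with the same inputs (coordinates) and output (EC)
ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
ann.fit(np.column_stack([x[train], y[train]]), ec[train])
ec_ann = ann.predict(np.column_stack([x[test], y[test]]))
# compare ec_ok and ec_ann against ec[test], e.g. by RMSE
```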
Supervision of dynamic systems: Monitoring, decision-making and control
NASA Technical Reports Server (NTRS)
White, T. N.
1982-01-01
Effects of task variables on the performance of the human supervisor are discussed by means of modelling techniques. The task variables considered are: the dynamics of the system, the task to be performed, the environmental disturbances, and the observation noise. A relationship between task variables and parameters of a supervisory model is assumed. The model consists of three parts: (1) the observer part is thought to be a full order optimal observer, (2) the decision-making part is stated as a set of decision rules, and (3) the controller part is given by a control law. The observer part generates, on the basis of the system output and the control actions, an estimate of the state of the system and its associated variance. The outputs of the observer part are then used by the decision-making part to determine the instants in time of the observation actions on the one hand and the control actions on the other. The controller part makes use of the estimated state to derive the amplitude(s) of the control action(s).
Phototransduction early steps model based on Beer-Lambert optical law.
Salido, Ezequiel M; Servalli, Leonardo N; Gomez, Juan Carlos; Verrastro, Claudio
2017-02-01
The amount of available rhodopsin in the photoreceptor outer segment and its change over time are not considered in classic models of phototransduction. Thus, those models do not take into account the absorptance variation of the outer segment under different brightness conditions. The relationship between the light absorbed by a medium and its absorptance is well described by the Beer-Lambert law. The newly proposed model implements the absorptance variation phenomenon in a set of equations that admit photons per second as input and yield active rhodopsins per second as output. This study compares the classic model of phototransduction developed by Forti et al. (1989) to the new model by using different light stimuli to measure active rhodopsin and photocurrent. The results show a linear relationship between light stimulus and active rhodopsin in the Forti model and an exponential saturation in the new model. Further, photocurrent values show that the new model behaves equivalently to the experimental and theoretical data published by Forti for dark-adapted rods, but fits significantly better under light-adapted conditions. The new model successfully introduces an optical law of physics into the standard model of phototransduction, adding a new processing layer that had not been mathematically implemented before. In addition, it describes the physiological concept of saturation and delivers outputs in concordance with input magnitudes.
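The core mechanism, absorptance tied to the remaining rhodopsin pool through the Beer-Lambert law, can be sketched with a simple pool-depletion integrator. All constants here are illustrative, not the paper's fitted values, and the regeneration term is a plain first-order relaxation assumed for the sketch.

```python
import numpy as np

def active_rhodopsin_rate(photon_flux, t_end=5.0, dt=1e-3,
                          r_total=1.0e8, eps=1.0e-8, tau_regen=50.0):
    """Photons/s in, activated rhodopsins/s out. The absorbed fraction
    follows Beer-Lambert, 1 - 10**(-eps * R(t)), so depletion of the
    available pool R(t) makes the output saturate with intensity."""
    r, rates = r_total, []
    for _ in np.arange(0.0, t_end, dt):
        absorptance = 1.0 - 10.0 ** (-eps * r)         # Beer-Lambert law
        rate = photon_flux * absorptance               # activations per second
        r += (-rate + (r_total - r) / tau_regen) * dt  # depletion + regeneration
        r = max(r, 0.0)
        rates.append(rate)
    return np.array(rates)

# the steady-state output grows sub-linearly and saturates with flux
for flux in (1e6, 1e8, 1e10):
    print(flux, active_rhodopsin_rate(flux)[-1])
```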
A methodology for long-range prediction of air transportation
NASA Technical Reports Server (NTRS)
Ayati, M. B.; English, J. M.
1980-01-01
A framework and methodology for the long term projection of demand for aviation fuels is presented. The approach taken includes two basic components. The first is a new technique for establishing the socio-economic environment within which the future aviation industry is embedded. The concept utilized is the definition of an overall societal objective for the very long run future. Within a framework so defined, a set of scenarios by which the future will unfold is then written. These scenarios provide the determinants of air transport industry operations and accordingly provide an assessment of future fuel requirements. The second component is the modeling of the industry in terms of an abstracted set of variables that represent overall industry performance on a macro scale. The model was validated by testing the desired output variables from the model against historical data over the past decades.
Interactive access and management for four-dimensional environmental data sets using McIDAS
NASA Technical Reports Server (NTRS)
Hibbard, William L.; Tripoli, Gregory J.
1995-01-01
This grant has fundamentally changed the way that meteorologists look at the output of their atmospheric models, through the development and wide distribution of the Vis5D system. The Vis5D system is also gaining acceptance among oceanographers and atmospheric chemists. Vis5D gives these scientists an interactive three-dimensional movie of their very large data sets that they can use to understand physical mechanisms and to trace problems to their sources. This grant has also helped to define the future direction of scientific visualization through the development of the VisAD system and its lattice data model. The VisAD system can be used to interactively steer and visualize scientific computations. A key element of this capability is the flexibility of the system's data model to adapt to a wide variety of scientific data, including the integration of several forms of scientific metadata.
Beamforming synthesis of binaural responses from computer simulations of acoustic spaces.
Poletti, Mark A; Svensson, U Peter
2008-07-01
Auditorium designs can be evaluated prior to construction by numerical modeling of the design. High-accuracy numerical modeling produces the sound pressure on a rectangular grid, and subjective assessment of the design requires auralization of the sampled sound field at a desired listener position. This paper investigates the production of binaural outputs from the sound pressure at a selected number of grid points by using a least squares beamforming approach. Low-frequency axisymmetric emulations are derived by assuming a solid sphere model of the head, and a spherical array of 640 microphones is used to emulate ten measured head-related transfer function (HRTF) data sets from the CIPIC database for half the audio bandwidth. The spherical array can produce high-accuracy band-limited emulation of any human subject's measured HRTFs for a fixed listener position by using individual sets of beamforming impulse responses.
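The least-squares step can be illustrated in isolation: given the array's response to a set of incident directions and the target HRTF values for those directions, the beamforming weights are a least-squares solution. The matrices below are random stand-ins for the solid-sphere head model and measured HRTFs, so only the structure of the computation is meaningful.

```python
import numpy as np

rng = np.random.default_rng(4)
n_mics, n_dirs = 640, 128   # spherical grid points, incident directions

# A[m, d]: pressure at grid point m for a unit plane wave from direction d;
# in the paper this comes from the solid-sphere head model, random here
A = rng.normal(size=(n_mics, n_dirs)) + 1j * rng.normal(size=(n_mics, n_dirs))

# target: the measured HRTF value for each direction at one frequency bin
h_target = rng.normal(size=n_dirs) + 1j * rng.normal(size=n_dirs)

# the array response toward direction d with weights w is (A.T @ w)[d];
# least squares picks w so this matches the HRTF (minimum-norm solution,
# since 640 weights for 128 constraints is underdetermined)
w, *_ = np.linalg.lstsq(A.T, h_target, rcond=None)

# binaural output for this bin: weighted sum of the grid-point pressures
p_grid = A @ (rng.normal(size=n_dirs) + 1j * rng.normal(size=n_dirs))
ear_signal = w @ p_grid
```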
NASA Technical Reports Server (NTRS)
Markley, F. Landis; Cheng, Yang; Crassidis, John L.; Oshman, Yaakov
2007-01-01
Many applications require an algorithm that averages quaternions in an optimal manner. For example, when combining the quaternion outputs of multiple star trackers having this output capability, it is desirable to properly average the quaternions without recomputing the attitude from the raw star tracker data. Other applications requiring some sort of optimal quaternion averaging include particle filtering and multiple-model adaptive estimation, where weighted quaternions are used to determine the quaternion estimate. For spacecraft attitude estimation applications, prior work derives an optimal averaging scheme to compute the average of a set of weighted attitude matrices using the singular value decomposition method and, focusing on a 4-dimensional quaternion Gaussian distribution on the unit hypersphere, provides an approach to computing the average quaternion by minimizing a quaternion cost function that is equivalent to the attitude matrix cost function. Motivated by and extending those results, this Note derives an algorithm that determines an optimal average quaternion from a set of scalar- or matrix-weighted quaternions. Furthermore, a sufficient condition for the uniqueness of the average quaternion, and the equivalence of the minimization problem stated herein to maximum likelihood estimation, are shown.
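In the scalar-weighted case, the optimal average has a particularly compact form: it is the eigenvector associated with the largest eigenvalue of the weighted outer-product matrix M = sum_i w_i q_i q_i^T, which restates the attitude-matrix cost function in quaternion terms. A sketch, assuming unit quaternions with the scalar part last:

```python
import numpy as np

def average_quaternions(quats, weights=None):
    """Weighted quaternion average: dominant eigenvector of
    M = sum_i w_i * q_i q_i^T. Because q q^T = (-q)(-q)^T, the q vs -q
    sign ambiguity of attitude quaternions is handled automatically."""
    quats = np.asarray(quats, dtype=float)
    if weights is None:
        weights = np.ones(len(quats))
    M = sum(w * np.outer(q, q) for w, q in zip(weights, quats))
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    return eigvecs[:, -1]                  # unit eigenvector, largest eigenvalue

# two star-tracker quaternions (scalar part last), one with flipped sign
q1 = np.array([0.0, 0.0, np.sin(0.05), np.cos(0.05)])
q2 = -np.array([0.0, 0.0, np.sin(0.06), np.cos(0.06)])
q_avg = average_quaternions([q1, q2])
```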
Nochomovitz, Yigal D; Li, Hao
2006-03-14
Deciphering the design principles for regulatory networks is fundamental to an understanding of biological systems. We have explored the mapping from the space of network topologies to the space of dynamical phenotypes for small networks. Using exhaustive enumeration of a simple model of three- and four-node networks, we demonstrate that certain dynamical phenotypes can be generated by an atypically broad spectrum of network topologies. Such dynamical outputs are highly designable, much like certain protein structures can be designed by an unusually broad spectrum of sequences. The network topologies that encode a highly designable dynamical phenotype possess two classes of connections: a fully conserved core of dedicated connections that encodes the stable dynamical phenotype and a partially conserved set of variable connections that controls the transient dynamical flow. By comparing the topologies and dynamics of the three- and four-node network ensembles, we observe a large number of instances of the phenomenon of "mutational buffering," whereby addition of a fourth node suppresses phenotypic variation amongst a set of three-node networks.
PID Tuning Using Extremum Seeking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Killingsworth, N; Krstic, M
2005-11-15
Although proportional-integral-derivative (PID) controllers are widely used in the process industry, their effectiveness is often limited due to poor tuning. Manual tuning of PID controllers, which requires optimization of three parameters, is a time-consuming task. To remedy this difficulty, much effort has been invested in developing systematic tuning methods. Many of these methods rely on knowledge of the plant model or require special experiments to identify a suitable plant model. Reviews of these methods are given in [1] and the survey paper [2]. However, in many situations a plant model is not known, and it is not desirable to open the process loop for system identification. Thus a method for tuning PID parameters within a closed-loop setting is advantageous. In relay feedback tuning [3]-[5], the feedback controller is temporarily replaced by a relay. Relay feedback causes most systems to oscillate, thus determining one point on the Nyquist diagram. Based on the location of this point, PID parameters can be chosen to give the closed-loop system a desired phase and gain margin. An alternative tuning method, which does not require either a modification of the system or a system model, is unfalsified control [6], [7]. This method uses input-output data to determine whether a set of PID parameters meets performance specifications. An adaptive algorithm is used to update the PID controller based on whether or not the controller falsifies a given criterion. The method requires a finite set of candidate PID controllers that must be initially specified [6]. Unfalsified control for an infinite set of PID controllers has been developed in [7]; this approach requires a carefully chosen input signal [8]. Yet another model-free PID tuning method that does not require opening of the loop is iterative feedback tuning (IFT). IFT iteratively optimizes the controller parameters with respect to a cost function derived from the output signal of the closed-loop system, see [9]. This method is based on the performance of the closed-loop system during a step response experiment [10], [11]. In this article we present a method for optimizing the step response of a closed-loop system consisting of a PID controller and an unknown plant with a discrete version of extremum seeking (ES). Specifically, ES is used to minimize a cost function similar to that used in [10], [11], which quantifies the performance of the PID controller. ES, a non-model-based method, iteratively modifies the arguments (in this application the PID parameters) of a cost function so that the output of the cost function reaches a local minimum or local maximum. In the next section we apply ES to PID controller tuning. We illustrate this technique through simulations comparing the effectiveness of ES to other PID tuning methods. Next, we address the importance of the choice of cost function and consider the effect of controller saturation. Furthermore, we discuss the choice of ES tuning parameters. Finally, we offer some conclusions.
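A minimal discrete extremum-seeking loop of the kind described: each iteration runs one closed-loop step-response experiment, evaluates an integrated-squared-error cost, and updates the PID gains by sinusoidal probing and demodulation. The plant, probe frequencies, gains, and washout filter below are illustrative choices, not the article's exact scheme.

```python
import numpy as np

def step_response_cost(theta, plant):
    """Run one closed-loop unit-step experiment with PID gains theta and
    return an integrated-squared-error cost (dt = 0.01 s, 2 s horizon)."""
    kp, ki, kd = theta
    y, e_prev, integ, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(200):
        e = 1.0 - y
        integ += e * 0.01
        u = kp * e + ki * integ + kd * (e - e_prev) / 0.01
        y = plant(y, u)
        cost += e * e * 0.01
        e_prev = e
    return cost

# first-order plant tau*y' = -y + u, discretized; treated as unknown by the tuner
plant = lambda y, u: y + 0.01 * (-y + u) / 0.5

theta = np.array([1.0, 0.5, 0.0])      # initial PID gains [kp, ki, kd]
a, gamma = 0.05, 0.4                   # probe amplitude and adaptation gain
omega = np.array([1.0, 1.3, 1.7])      # distinct probe frequencies per parameter
J_avg = 0.0
for k in range(300):
    probe = a * np.sin(omega * k)
    J = step_response_cost(theta + probe, plant)
    J_avg += 0.1 * (J - J_avg)         # slow average acts as a washout filter
    # demodulation: the component of J in phase with each probe
    # approximates the gradient of the cost in that parameter
    theta = np.clip(theta - gamma * (J - J_avg) * np.sin(omega * k), 0.0, 10.0)
```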
Surface tension and contact angles: Molecular origins and associated microstructure
NASA Technical Reports Server (NTRS)
Davis, H. T.
1982-01-01
Gradient theory converts the molecular theory of inhomogeneous fluids into nonlinear boundary value problems for the density and stress distributions in fluid interfaces, contact line regions, nuclei and microdroplets, and other fluid microstructures. The relationship between the basic patterns of fluid phase behavior and the occurrence and stability of fluid microstructures is clearly established by the theory. All the inputs of the theory have molecular expressions which are computable from simple models. On another level, the theory becomes a phenomenological framework in which the equation of state of the homogeneous fluid and sets of influence parameters of inhomogeneous fluids are the inputs, and the structures, stresses, tensions and contact angles of menisci are the outputs. These outputs, which find applications in the science and technology of drops and bubbles, are discussed.
Robot trajectory tracking with self-tuning predicted control
NASA Technical Reports Server (NTRS)
Cui, Xianzhong; Shin, Kang G.
1988-01-01
A controller that combines self-tuning prediction and control is proposed for robot trajectory tracking. The controller has two feedback loops: one is used to minimize the prediction error, and the other is designed to make the system output track the set point input. Because the velocity and position along the desired trajectory are given and the future output of the system is predictable, a feedforward loop can be designed for robot trajectory tracking with self-tuning predicted control (STPC). Parameters are estimated online to account for the model uncertainty and the time-varying property of the system. The authors describe the principle of STPC, analyze the system performance, and discuss the simplification of the robot dynamic equations. To demonstrate its utility and power, the controller is simulated for a Stanford arm.
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
NASA Technical Reports Server (NTRS)
Ellis, R. C.; Fink, R. A.; Moore, E. A.
1987-01-01
The Common Drive Unit (CDU) is a high reliability rotary actuator with many versatile applications in mechanism designs. The CDU incorporates a set of redundant motor-brake assemblies driving a single output shaft through a differential. Tachometers provide speed information in the AC version. Operation of both motors, as compared to the operation of one motor, will yield the same output torque at twice the output speed.
Li, Mingjie; Zhou, Ping; Wang, Hong; ...
2017-09-19
As one of the most important units in the papermaking industry, the high consistency (HC) refining system is confronted with challenges such as improving pulp quality, saving energy, and reducing emissions in its operation processes. In this correspondence, an optimal operation of the HC refining system is presented using nonlinear multiobjective model predictive control strategies that aim at a set-point tracking objective for pulp quality, an economic objective, and a specific energy (SE) consumption objective, respectively. First, a set of input and output data at different times is employed to construct the subprocess models of the state process model for the HC refining system; the Wiener-type model is then obtained by combining the mechanism model of Canadian Standard Freeness with the state process model, whose structures are determined using the Akaike information criterion. Second, a multiobjective optimization strategy that simultaneously optimizes both the set-point tracking objective for pulp quality and SE consumption is proposed, using the NSGA-II approach to obtain the Pareto optimal set. Furthermore, targeting the set-point tracking objective for pulp quality, the economic objective, and the SE consumption objective, the sequential quadratic programming method is utilized to produce the optimal predictive controllers. The simulation results demonstrate that the proposed methods give the HC refining system better set-point tracking performance for pulp quality when these predictive controllers are employed. In addition, the optimal predictive controllers oriented to the comprehensive economic objective and the SE consumption objective are shown to significantly reduce energy consumption.
The structure of a market containing boundedly rational firms
NASA Astrophysics Data System (ADS)
Ibrahim, Adyda; Zura, Nerda; Saaban, Azizan
2017-11-01
The structure of a market is determined by the number of active firms in it. Over time, this number is affected by the exit of existing firms, called incumbents, and the entry of new firms, called entrants. In this paper, we consider a market governed by the Cobb-Douglas utility function, such that the demand function is isoelastic. Each firm is assumed to produce a single homogeneous product under a constant unit cost. Furthermore, firms are assumed to be boundedly rational in adjusting their outputs at each period. A firm is considered to exit the market if its output is negative. The market is assumed to have zero barrier to entry; therefore, an exiting firm can re-enter the market if its output becomes positive again, and new firms can enter the market easily. Based on these assumptions and rules, a mathematical model was developed and numerical simulations were run using Matlab. Setting certain values for the parameters in the model, initial numerical simulations showed that, in the long run, the number of firms that manage to survive in the market varies between zero and 30. This initial result is consistent with the idea that a zero barrier to entry may produce a perfectly competitive market.
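One way to realize these rules numerically is gradient-based output adjustment under isoelastic demand p = 1/Q, with exit on negative output and cheap re-entry. The cost draws, adjustment speed, and re-entry rule in this sketch are assumptions, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(5)
n, alpha = 30, 0.5                      # potential firms, adjustment speed
cost = rng.uniform(0.1, 1.0, n)         # constant unit costs (placeholders)
q = rng.uniform(0.01, 0.5, n)           # initial outputs

for _ in range(5000):
    Q = q.sum()
    # isoelastic demand p = 1/Q, so profit_i = q_i/Q - c_i * q_i and
    # marginal profit is (Q - q_i)/Q**2 - c_i
    marginal = (Q - q) / Q ** 2 - cost
    q = q + alpha * q * marginal        # boundedly rational gradient adjustment
    exited = q < 0.0                    # negative output: the firm exits ...
    q[exited] = 0.001                   # ... but zero barriers allow re-entry

active = int(np.sum(q > 0.01))          # long-run number of surviving firms
```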
Measured radiofrequency exposure during various mobile-phone use scenarios.
Kelsh, Michael A; Shum, Mona; Sheppard, Asher R; McNeely, Mark; Kuster, Niels; Lau, Edmund; Weidling, Ryan; Fordyce, Tiffani; Kühn, Sven; Sulser, Christof
2011-01-01
Epidemiologic studies of mobile phone users have relied on self-reporting or billing records to assess exposure. Herein, we report quantitative measurements of mobile-phone power output as a function of phone technology, environmental terrain, and handset design. Radiofrequency (RF) output data were collected using software-modified phones that recorded power control settings, coupled with a mobile system that recorded and analyzed RF fields measured in a phantom head placed in a vehicle. Data collected from three distinct routes (urban, suburban, and rural) were summarized as averages of peak levels and overall averages of RF power output, and were analyzed using analysis of variance methods. Technology was the strongest predictor of RF power output. The older analog technology produced the highest RF levels, whereas CDMA had the lowest, with GSM and TDMA showing similar intermediate levels. We observed generally higher RF power output in rural areas. There was good correlation between average power control settings in the software-modified phones and power measurements in the phantoms. Our findings suggest that phone technology and, to a lesser extent, degree of urbanization are the two strongest influences on RF power output. Software-modified phones should be useful for improving epidemiologic exposure assessment.
Reproducibility and Transparency in Ocean-Climate Modeling
NASA Astrophysics Data System (ADS)
Hannah, N.; Adcroft, A.; Hallberg, R.; Griffies, S. M.
2015-12-01
Reproducibility is a cornerstone of the scientific method. Within geophysical modeling and simulation, achieving reproducibility can be difficult, especially given the complexity of numerical codes, enormous and disparate data sets, and the variety of supercomputing technology. We have made progress on this problem in the context of a large project - the development of new ocean and sea ice models, MOM6 and SIS2. Here we present useful techniques and experience. We use version control not only for code but for the entire experiment working directory, including configuration (run-time parameters, component versions), input data and checksums on experiment output. This allows us to document when the solutions to experiments change, whether due to code updates or changes in input data. To avoid distributing large input datasets we provide the tools for generating these from the sources, rather than providing raw input data. Bugs can be a source of non-determinism and hence irreproducibility, e.g. reading from or branching on uninitialized memory. To expose these we routinely run system tests, using a memory debugger, multiple compilers and different machines. Additional confidence in the code comes from specialised tests, for example automated dimensional analysis and domain transformations. This has entailed adopting a code style where we deliberately restrict what a compiler can do when re-arranging mathematical expressions. In the spirit of open science, all development is in the public domain. This leads to a positive feedback, where increased transparency and reproducibility make using the model easier for external collaborators, who in turn provide valuable contributions. To facilitate users installing and running the model we provide (version controlled) digital notebooks that illustrate and record analysis of output. This has the dual role of providing a gross, platform-independent testing capability and a means to document model output and analysis.
Characterizing the output settings of dental curing lights.
Harlow, J E; Sullivan, B; Shortall, A C; Labrie, D; Price, R B
2016-01-01
For improved inter-study reproducibility and ultimately improved patient care, researchers and dentists need to know what electromagnetic radiation (light) is emitted from the light-curing unit (LCU) they are using and what is received by the resin. This information cannot be obtained from a dental radiometer, even though many studies have used one. The light outputs from six LCUs (two QTH and four broad-spectrum LED units) were collected in real time using an integrating sphere connected to a fiberoptic spectrometer during different light exposures. It was found that the spectral emissions were unique to each LCU, and there was no standardization in what was emitted on the various ramp (soft-start) settings. Relative to the normal-use setting, using the ramp setting reduced the radiant energy (J) delivered from each LCU. For one of the four broad-spectrum LED LCUs, the spectral emissions in the violet range did not increase when the overall radiant power output was increased. In addition, this broad-spectrum LED LCU emitted no light from the violet LED chip for the first 5 s and only emitted violet light when the ramp phase finished. A single irradiance value derived from a dental radiometer or from a laboratory-grade power meter cannot adequately describe the output from the LCU. Manufacturers should provide more information about the light output from their LCUs. Ideally, future assessments and research publications that include resin photopolymerization should report the spectral radiant power delivered from the LCU throughout the entire exposure cycle.
Computational modeling of cardiovascular response to orthostatic stress
NASA Technical Reports Server (NTRS)
Heldt, Thomas; Shim, Eun B.; Kamm, Roger D.; Mark, Roger G.
2002-01-01
The objective of this study is to develop a model of the cardiovascular system capable of simulating the short-term (< or = 5 min) transient and steady-state hemodynamic responses to head-up tilt and lower body negative pressure. The model consists of a closed-loop lumped-parameter representation of the circulation connected to set-point models of the arterial and cardiopulmonary baroreflexes. Model parameters are largely based on literature values. Model verification was performed by comparing the simulation output under baseline conditions and at different levels of orthostatic stress to sets of population-averaged hemodynamic data reported in the literature. On the basis of experimental evidence, we adjusted some model parameters to simulate experimental data. Orthostatic stress simulations are not statistically different from experimental data (two-sided test of significance with Bonferroni adjustment for multiple comparisons). Transient response characteristics of heart rate to tilt also compare well with reported data. A case study is presented on how the model is intended to be used in the future to investigate the effects of post-spaceflight orthostatic intolerance.
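A minimal sketch of the lumped-parameter idea underlying such circulation models, using a single two-element windkessel compartment; parameter values are illustrative, and the paper's model couples many such compartments to baroreflex set-point controllers, which are omitted here.

```python
# Sketch: one lumped-parameter (2-element windkessel) compartment,
# C * dP/dt = Q_in - P/R, the building block of closed-loop circulation
# models like the one described above. Values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

R, C = 1.0, 1.5            # resistance (mmHg.s/ml), compliance (ml/mmHg)

def q_in(t):               # crude pulsatile inflow, period 1 s
    return 90.0 * max(np.sin(2 * np.pi * t), 0.0)

def dpdt(t, p):
    return [(q_in(t) - p[0] / R) / C]

sol = solve_ivp(dpdt, (0, 10), [80.0], max_step=0.01)
print(f"late-time pressure range: {sol.y[0][-100:].min():.1f}"
      f" to {sol.y[0][-100:].max():.1f} mmHg")
```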
Chemical function based pharmacophore generation of endothelin-A selective receptor antagonists.
Funk, Oliver F; Kettmann, Viktor; Drimal, Jan; Langer, Thierry
2004-05-20
Both quantitative and qualitative chemical-function-based pharmacophore models of endothelin-A (ET(A)) selective receptor antagonists were generated using the two algorithms HypoGen and HipHop, respectively, which are implemented in the Catalyst molecular modeling software. The input for HypoGen is a training set of 18 ET(A) antagonists exhibiting IC(50) values ranging between 0.19 nM and 67 microM. The best output hypothesis consists of five features: two hydrophobic (HY), one ring aromatic (RA), one hydrogen bond acceptor (HBA), and one negative ionizable (NI) function. The highest-scoring HipHop model consists of six features: three hydrophobic (HY), one ring aromatic (RA), one hydrogen bond acceptor (HBA), and one negative ionizable (NI). It is the result of an input of three highly active, selective, and structurally diverse ET(A) antagonists. The predictive power of the quantitative model was confirmed using a test set of 30 compounds whose activity values span 6 orders of magnitude. The two pharmacophores were tested according to their ability to extract known endothelin antagonists from the 3D molecular structure database of Derwent's World Drug Index; the majority of the selective ET(A)-antagonist entries were detected by the two hypotheses. Furthermore, the pharmacophores were used to screen the Maybridge database. Six compounds were chosen from the output hit lists for in vitro testing of their ability to displace endothelin-1 from its receptor. Two of these are new potential lead compounds because they are structurally novel and exhibit satisfactory activity in the binding assay.
Community Intercomparison Suite (CIS) v1.4.0: a tool for intercomparing models and observations
NASA Astrophysics Data System (ADS)
Watson-Parris, Duncan; Schutgens, Nick; Cook, Nicholas; Kipling, Zak; Kershaw, Philip; Gryspeerdt, Edward; Lawrence, Bryan; Stier, Philip
2016-09-01
The Community Intercomparison Suite (CIS) is an easy-to-use command-line tool which has been developed to allow the straightforward intercomparison of remote sensing, in situ and model data. While there are a number of tools available for working with climate model data, the large diversity of sources (and formats) of remote sensing and in situ measurements necessitated a novel software solution. Developed by a professional software company, CIS supports a large number of gridded and ungridded data sources "out-of-the-box", including climate model output in NetCDF or the UK Met Office pp file format, CloudSat, CALIOP (Cloud-Aerosol Lidar with Orthogonal Polarization), MODIS (MODerate resolution Imaging Spectroradiometer), Cloud and Aerosol CCI (Climate Change Initiative) level 2 satellite data and a number of in situ aircraft and ground station data sets. The open-source architecture also supports user-defined plugins to allow many other sources to be easily added. Many of the key operations required when comparing heterogeneous data sets are provided by CIS, including subsetting, aggregating, collocating and plotting the data. Output data are written to CF-compliant NetCDF files to ensure interoperability with other tools and systems. The latest documentation, including a user manual and installation instructions, can be found on our website (http://cistools.net). Here, we describe the need which this tool fulfils, followed by descriptions of its main functionality (as at version 1.4.0) and plugin architecture which make it unique in the field.
Modulating Wnt Signaling Pathway to Enhance Allograft Integration in Orthopedic Trauma Treatment
2013-10-01
presented below. Quantitative output provides an extensive set of data but we have chosen to present the most relevant parameters that are reflected in...multiple parameters. Most samples have been mechanically tested and data extracted for multiple parameters. Histological evaluation of subset of...Sumner, D. R. Saline Irrigation Does Not Affect Bone Formation or Fixation Strength of Hydroxyapatite/Tricalcium Phosphate-Coated Implants in a Rat Model
Importance of biometrics to addressing vulnerabilities of the U.S. infrastructure
NASA Astrophysics Data System (ADS)
Arndt, Craig M.; Hall, Nathaniel A.
2004-08-01
Human identification technologies are important threat countermeasures in minimizing select infrastructure vulnerabilities. Properly targeted countermeasures should be selected and integrated into an overall security solution based on disciplined analysis and modeling. Available data on infrastructure value, threat intelligence, and system vulnerabilities are carefully organized, analyzed and modeled. Prior to design and deployment of an effective countermeasure, the proper role and appropriateness of technology in addressing the overall set of vulnerabilities is established. Deployment of biometrics systems, as with other countermeasures, introduces potentially heightened vulnerabilities into the system. Heightened vulnerabilities may arise from both the newly introduced system complexities and an unfocused understanding of the set of vulnerabilities impacted by the new countermeasure. The countermeasure's own inherent vulnerabilities and those introduced by the system's integration with the existing system are analyzed and modeled to determine the overall vulnerability impact. The United States infrastructure is composed of government and private assets. Infrastructure assets are valued by their potential impact on several components: human physical safety, physical/information replacement/repair cost, potential contribution to future loss (criticality in weapons production), direct productivity output, national macro-economic output/productivity, and information integrity. These components must be considered in determining the overall impact of an infrastructure security breach. Cost/benefit analysis is then incorporated in the security technology deployment decision process. Overall security risks based on system vulnerabilities and threat intelligence determine areas of potential benefit. Biometric countermeasures are often considered when additional security at intended points of entry would minimize vulnerabilities.
Three models intercomparison for Quantitative Precipitation Forecast over Calabria
NASA Astrophysics Data System (ADS)
Federico, S.; Avolio, E.; Bellecci, C.; Colacino, M.; Lavagnini, A.; Accadia, C.; Mariani, S.; Casaioli, M.
2004-11-01
In the framework of the National Project “Sviluppo di distretti industriali per le Osservazioni della Terra” (Development of Industrial Districts for Earth Observations), funded by MIUR (Ministero dell'Università e della Ricerca Scientifica - the Italian Ministry of the University and Scientific Research), two operational mesoscale models were set up for Calabria, the southernmost tip of the Italian peninsula. The models are RAMS (Regional Atmospheric Modeling System) and MM5 (Mesoscale Modeling 5), which are run every day at Crati scrl to produce weather forecasts over Calabria (http://www.crati.it). This paper reports a model intercomparison for Quantitative Precipitation Forecasts evaluated over a 20-month period from 1 October 2000 to 31 May 2002. In addition to the RAMS and MM5 outputs, QBOLAM rainfall fields are available for the selected period and are included in the comparison. This model runs operationally at “Agenzia per la Protezione dell'Ambiente e per i Servizi Tecnici”. Forecasts are verified by comparing model outputs with raingauge data recorded by the regional meteorological network, which has 75 raingauges. Large-scale forcing is the same for all models considered, so differences are due to physical/numerical parameterizations and horizontal resolutions. The QPFs show differences between models. The largest differences are in the bias score (BIA) compared with the other scores considered. Performance decreases with increasing forecast time for RAMS and MM5, whilst QBOLAM scores better for the second-day forecast.
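For readers unfamiliar with the categorical verification scores mentioned above, a small sketch of how the frequency bias (BIA) and the equitable threat score are computed from a rain/no-rain contingency table; the counts are invented for illustration and are not the paper's results.

```python
# Sketch: frequency bias (BIA) and equitable threat score (ETS) from a 2x2
# contingency table: hits a, false alarms b, misses c, correct negatives d.
def bia(a, b, c, d):
    return (a + b) / (a + c)          # >1 over-forecasting, <1 under-forecasting

def ets(a, b, c, d):
    n = a + b + c + d
    a_rand = (a + b) * (a + c) / n    # hits expected by chance
    return (a - a_rand) / (a + b + c - a_rand)

a, b, c, d = 120, 60, 40, 780         # illustrative counts
print(f"BIA = {bia(a, b, c, d):.2f}, ETS = {ets(a, b, c, d):.2f}")
```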
The NATA code; theory and analysis. Volume 2: User's manual
NASA Technical Reports Server (NTRS)
Bade, W. L.; Yos, J. M.
1975-01-01
The NATA code is a computer program for calculating quasi-one-dimensional gas flow in axisymmetric nozzles and rectangular channels, primarily to describe conditions in electric arc-heated wind tunnels. The program provides solutions based on frozen chemistry, chemical equilibrium, and nonequilibrium flow with finite reaction rates. The shear and heat flux on the nozzle wall are calculated, and boundary layer displacement effects on the inviscid flow are taken into account. The program contains compiled-in thermochemical, chemical kinetic and transport cross section data for high-temperature air, CO2-N2-Ar mixtures, helium, and argon. It calculates stagnation conditions on axisymmetric or two-dimensional models and conditions on the flat surface of a blunt wedge. Included in the report are: definitions of the inputs and outputs; precoded data on gas models, reactions, thermodynamic and transport properties of species, and nozzle geometries; explanations of diagnostic outputs and code abort conditions; test problems; and a user's manual for an auxiliary program (NOZFIT) used to set up analytical curve fits to nozzle profiles.
Intensification of upwelling along Oman coast in a warming scenario
NASA Astrophysics Data System (ADS)
Praveen, V.; Ajayamohan, R. S.; Valsala, V.; Sandeep, S.
2016-07-01
The oceanic impact of the poleward shift in the monsoon low-level jet (MLLJ) is examined using the Regional Ocean Modeling System (ROMS). Two sets of downscaling experiments were conducted using ROMS with boundary and initial conditions from six CMIP5 models. While outputs from the historical run (1981-2000) act as forcing for the first, the second uses RCP8.5 (2080-2099). By comparing the outputs, it is found that the Oman coast will experience an increase in upwelling in tune with the MLLJ shift. Consistent with the changes in upwelling and zonal Ekman transport, temperature, salinity, and productivity show significant changes near the Oman coast. The changes in the MLLJ cause the coastal wind to angle against the Oman coast in such a fashion that the net upwelling increases in the next century, and so does the marine productivity. This study contrasts with the general view of weakening upwelling along the Arabian coasts due to the weakening of monsoon winds.
Upwelling changes along the Arabian coast in a warming scenario
NASA Astrophysics Data System (ADS)
Praveen, V.; Ravindran, A. M.; Valsala, V.; Sandeep, S.
2016-12-01
The oceanic impact of the poleward shift in the Monsoon Low-Level Jet (MLLJ) is examined using a regional ocean model (ROMS). Two sets of downscaling experiments were conducted using ROMS with boundary and initial conditions from six CMIP5 models. While outputs from the historical run (1981-2000) act as forcing for the first, the second uses RCP8.5 (2080-2099). By comparing the outputs, it is found that the Oman coast will experience an increase in upwelling in tune with the MLLJ shift. Consistent with the changes in upwelling and zonal Ekman transport, temperature, salinity and productivity show significant changes near the Oman coast. The changes in the MLLJ cause the coastal wind to angle against the Oman coast in such a fashion that the net upwelling increases in the next century, and so does the marine productivity. This study contrasts with the general view of weakening upwelling along the Arabian coasts due to the weakening of monsoon winds. The above findings have major implications for the livelihood and economy of the region.
NASA Astrophysics Data System (ADS)
Yurumezoglu, Kemal; Karabey, Burak; Yigit Koyunkaya, Melike
2017-03-01
Full shadows, partial shadows and multilayer shadows are explained based on the phenomenon of the linear dispersion of light. This paper focuses on progressing the understanding of shadows from physical and mathematical perspectives. A significant relationship between light and color pigments is demonstrated with the help of the concept of sets. This integration of physical and mathematical reasoning not only provides an operational approach to the concept of shadows, it also yields a model that can be used in science, technology, engineering and mathematics (STEM) curricula by providing a concrete, physical example for the abstract concept of the empty set.
Delay correlation analysis and representation for VITAL-compliant VHDL models
Rich, Marvin J.; Misra, Ashutosh
2004-11-09
A method and system unbind a rise/fall tuple of a VHDL generic variable and create rise time and fall time generics of each generic variable that are independent of each other. Then, according to a predetermined correlation policy, the method and system collect delay values in a VHDL standard delay file, sort the delay values, remove duplicate delay values, group the delay values into correlation sets, and output an analysis file. The correlation policy may include collecting all generic variables in a VHDL standard delay file, selecting each generic variable, and performing reductions on the set of delay values associated with each selected generic variable.
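The patent describes its correlation policy only at a high level; the sketch below shows the generic collect, sort, deduplicate, and group pipeline on a list of delay values, with a made-up 5% grouping tolerance standing in for the policy.

```python
# Sketch: collect delay values (e.g. parsed from a VHDL standard delay file),
# sort them, remove duplicates, and group near-equal values into correlation
# sets. The relative tolerance is an invented stand-in for the real policy.
def correlation_sets(delays, rel_tol=0.05):
    values = sorted(set(delays))
    groups, current = [], [values[0]]
    for v in values[1:]:
        if v <= current[-1] * (1 + rel_tol):
            current.append(v)        # close enough: same correlation set
        else:
            groups.append(current)   # gap: start a new correlation set
            current = [v]
    groups.append(current)
    return groups

print(correlation_sets([0.12, 0.12, 0.125, 0.30, 0.31, 0.55]))
```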
Methods and circuitry for reconfigurable SEU/SET tolerance
NASA Technical Reports Server (NTRS)
Shuler, Jr., Robert L. (Inventor)
2010-01-01
A device is disclosed in one embodiment that has multiple identical sets of programmable functional elements, programmable routing resources, and majority voters that correct errors. The voters accept a mode input for a redundancy mode and a split mode. In the redundancy mode, the programmable functional elements are identical and are programmed identically so the voters produce an output corresponding to the majority of inputs that agree. In a split mode, each voter selects a particular programmable functional element output as the output of the voter. Therefore, in the split mode, the programmable functional elements can perform different functions, operate independently, and/or be connected together to process different parts of the same problem.
Architecture for on-die interconnect
Khare, Surhud; More, Ankit; Somasekhar, Dinesh; Dunning, David S.
2016-03-15
In an embodiment, an apparatus includes: a plurality of islands configured on a semiconductor die, each of the plurality of islands having a plurality of cores; and a plurality of network switches configured on the semiconductor die and each associated with one of the plurality of islands, where each network switch includes a plurality of output ports, a first set of the output ports are each to couple to the associated network switch of an island via a point-to-point interconnect and a second set of the output ports are each to couple to the associated network switches of a plurality of islands via a point-to-multipoint interconnect. Other embodiments are described and claimed.
Sensor Drift Compensation Algorithm based on PDF Distance Minimization
NASA Astrophysics Data System (ADS)
Kim, Namyong; Byun, Hyung-Gi; Persaud, Krishna C.; Huh, Jeung-Soo
2009-05-01
In this paper, a new unsupervised classification algorithm is introduced to compensate for sensor drift effects in an odor sensing system based on a conducting polymer sensor array. The proposed method continues updating the adaptive Radial Basis Function Network (RBFN) weights in the testing phase by minimizing the Euclidean distance between two probability density functions (PDFs): one of a set of training-phase output data and another of a set of testing-phase output data. The outputs in the testing phase obtained with the fixed weights of the RBFN are significantly dispersed and shifted from each target value, due mostly to the sensor drift effect. In the experimental results, the output data produced by the proposed method are observed to concentrate close to their own target values again. This indicates that the proposed method can be effectively applied to an improved odor sensing system equipped with the capability of sensor drift compensation.
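A simplified sketch of this idea follows, under stated assumptions: output weights are first trained by least squares, then adapted in the testing phase by descending the squared distance between kernel density estimates of training-phase and testing-phase outputs. The paper's exact network, kernel, and update rule may differ; the data, widths, and learning rate here are illustrative.

```python
# Sketch: adapt RBFN output weights at test time to shrink the distance
# between the training-output PDF and the (drifted) test-output PDF.
import numpy as np
from scipy.stats import gaussian_kde

def rbf(X, centers, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 3))
X_tr = rng.normal(size=(200, 3))
X_te = X_tr + 0.4                        # drifted sensor responses
y_tr = X_tr.sum(axis=1)                  # training-phase target outputs
Phi_te = rbf(X_te, centers)
w = np.linalg.lstsq(rbf(X_tr, centers), y_tr, rcond=None)[0]  # training

grid = np.linspace(-6, 6, 200)
p_train = gaussian_kde(y_tr)(grid)       # reference output PDF

def cost(w):
    p_test = gaussian_kde(Phi_te @ w)(grid)
    return ((p_train - p_test) ** 2).sum()

for _ in range(30):                      # finite-difference gradient descent
    grad = np.array([(cost(w + e) - cost(w - e)) / 2e-4
                     for e in 1e-4 * np.eye(len(w))])
    w -= 0.05 * grad
print("PDF distance after adaptation:", round(cost(w), 4))
```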
Alternatives for jet engine control
NASA Technical Reports Server (NTRS)
Sain, M. K.
1979-01-01
The research is classified in two categories: (1) the use of modern multivariable frequency-domain methods for control of engine models in the neighborhood of a set-point, and (2) the use of nonlinear modelling and optimization techniques for control of engine models over a more extensive part of the flight envelope. Progress in the first category included the extension of CARDIAD (Complex Acceptability Region for Diagonal Dominance) methods, developed with the help of the grant, to the case of engine models with four inputs and four outputs. A suitable bounding procedure for the dominance function was determined. Progress in the second category had its principal focus on automatic nonlinear model generation. Simulations of models produced satisfactory results when compared with the NASA DYNGEN digital engine deck.
Simulation technique for modeling flow on floodplains and in coastal wetlands
Schaffranek, Raymond W.; Baltzer, Robert A.
1988-01-01
The system design is premised on a proven, areal two-dimensional, finite-difference flow/transport model which is supported by an operational set of computer programs for input data management and model output interpretation. The purposes of the project are (1) to demonstrate the utility of the model for providing useful highway design information, (2) to develop guidelines and procedures for using the simulation system for evaluation, analysis, and optimal design of highway crossings of floodplain and coastal wetland areas, and (3) to identify improvements which can be effected in the simulation system to better serve the needs of highway design engineers. Two case study model implementations, being conducted to demonstrate the simulation system and modeling procedure, are presented and discussed briefly.
Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN
NASA Astrophysics Data System (ADS)
Peter, Josephine; Doloi, B.; Bhattacharyya, B.
2011-01-01
The present research paper deals with artificial neural network (ANN) and response surface methodology (RSM) based mathematical modeling, together with an optimization analysis of marking characteristics on alumina ceramic. The experiments were planned and carried out based on Design of Experiments (DOE). The paper also analyses the influence of the major laser-marking process parameters, and the optimal combination of laser-marking process parameter settings has been obtained. The output of the RSM optimization is validated through experimentation and the ANN predictive model. A good agreement is observed between the results based on the ANN predictive model and the actual experimental observations.
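The RSM step fits a second-order polynomial response surface to the DOE data; a minimal sketch follows, with invented coded factor levels and responses standing in for the actual laser-marking measurements.

```python
# Sketch: fit a second-order (RSM-style) response surface for one output in
# two coded process parameters x1, x2. Data are illustrative stand-ins.
import numpy as np

x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1])
x2 = np.array([-1, 1, -1, 1, 0, -1, 1, 0, 0])
y = np.array([0.61, 0.70, 0.66, 0.74, 0.81, 0.72, 0.79, 0.69, 0.73])

# Design matrix: intercept, linear, interaction, and quadratic terms.
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("RSM coefficients:", np.round(beta, 3))
```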
A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.
Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon
2007-02-01
Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
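The computational core of any such model is the Leontief total-requirements relation, x = (I - A)^(-1) y; a tiny sketch follows with an invented 3-sector direct-requirements matrix (the paper's matrix is 13 by 13 and augmented with physical material rows).

```python
# Sketch of the input-output core: given direct-requirements matrix A, the
# total output needed to satisfy final demand y is x = (I - A)^(-1) y,
# capturing direct plus all indirect supply-chain requirements.
import numpy as np

A = np.array([[0.10, 0.04, 0.02],      # requirements per unit of output
              [0.05, 0.12, 0.08],
              [0.02, 0.06, 0.05]])
y = np.array([100.0, 50.0, 25.0])      # final demand (mixed units in general)

x = np.linalg.solve(np.eye(3) - A, y)  # Leontief total requirements
print("total output by sector:", np.round(x, 1))
```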
NASA Astrophysics Data System (ADS)
Rehman, Naveed ur; Siddiqui, Mubashir Ali
2018-05-01
This work theoretically and experimentally investigated the performance of an arrayed solar flat-plate thermoelectric generator (ASFTEG). An analytical model, based on energy balances, was established for determining load voltage, power output and overall efficiency of ASFTEGs. An array consists of TEG devices (or modules) connected electrically in series and operating in closed-circuit mode with a load. The model takes into account the distinct temperature difference across each module, which is a major feature of this model. Parasitic losses have also been included in the model for realistic results. With the given set of simulation parameters, an ASFTEG consisting of four commercially available Bi2Te3 modules had a predicted load voltage of 200 mV and generated 3546 μW of electric power output. Predictions from the model were in good agreement with field experimental outcomes from a prototype ASFTEG, which was developed for validation purposes. Later, the model was simulated to maximize the performance of the ASFTEG by adjusting the thermal and electrical design of the system. Optimum values of design parameters were evaluated and discussed in detail. Beyond the current limitations associated with improvements in thermoelectric materials, this study will eventually lead to the successful development of portable roof-top renewable TEGs.
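The series-array energy balance admits a compact sketch using the standard thermoelectric relations; the module parameters below are illustrative values, not the paper's fitted ones, and parasitic losses are omitted.

```python
# Sketch: each module in the series string sees its own temperature
# difference; open-circuit voltages add, and the load receives
# P = I^2 * R_L with I = sum(alpha * dT_i) / (R_int + R_L).
alpha = 0.05                 # module Seebeck coefficient (V/K), illustrative
r_mod = 3.0                  # module internal resistance (ohm), illustrative
dT = [4.0, 3.5, 3.2, 3.0]    # distinct temperature differences (K)

v_oc = alpha * sum(dT)       # open-circuit voltage of the series string
r_int = r_mod * len(dT)
for r_load in (6.0, 12.0, 24.0):          # maximum power near R_L = R_int
    i = v_oc / (r_int + r_load)
    print(f"R_L={r_load:5.1f} ohm: V_L={i * r_load * 1e3:6.1f} mV, "
          f"P={i**2 * r_load * 1e6:8.1f} uW")
```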
An experimental approach to identify dynamical models of transcriptional regulation in living cells
NASA Astrophysics Data System (ADS)
Fiore, G.; Menolascina, F.; di Bernardo, M.; di Bernardo, D.
2013-06-01
We describe an innovative experimental approach, and a proof of principle investigation, for the application of System Identification techniques to derive quantitative dynamical models of transcriptional regulation in living cells. Specifically, we constructed an experimental platform for System Identification based on a microfluidic device, a time-lapse microscope, and a set of automated syringes all controlled by a computer. The platform allows delivering a time-varying concentration of any molecule of interest to the cells trapped in the microfluidics device (input) and real-time monitoring of a fluorescent reporter protein (output) at a high sampling rate. We tested this platform on the GAL1 promoter in the yeast Saccharomyces cerevisiae driving expression of a green fluorescent protein (Gfp) fused to the GAL1 gene. We demonstrated that the System Identification platform enables accurate measurements of the input (sugars concentrations in the medium) and output (Gfp fluorescence intensity) signals, thus making it possible to apply System Identification techniques to obtain a quantitative dynamical model of the promoter. We explored and compared linear and nonlinear model structures in order to select the most appropriate to derive a quantitative model of the promoter dynamics. Our platform can be used to quickly obtain quantitative models of eukaryotic promoters, currently a complex and time-consuming process.
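As a sketch of the System Identification step, the snippet below fits a first-order linear ARX model by least squares to synthetic input-output series standing in for the sugar input and Gfp output; the paper also explores nonlinear model structures, which are not shown.

```python
# Sketch: fit y[t] = a*y[t-1] + b*u[t-1] by least squares from sampled
# input u and output y. The synthetic data stand in for platform measurements.
import numpy as np

rng = np.random.default_rng(1)
T = 300
u = (rng.uniform(size=T) > 0.5).astype(float)   # on/off input profile
y = np.zeros(T)
for t in range(1, T):                           # "true" system plus noise
    y[t] = 0.9 * y[t - 1] + 0.3 * u[t - 1] + 0.02 * rng.normal()

X = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(f"estimated a={a_hat:.3f} (true 0.9), b={b_hat:.3f} (true 0.3)")
```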
Experimental design for dynamics identification of cellular processes.
Dinh, Vu; Rundell, Ann E; Buzzard, Gregery T
2014-03-01
We address the problem of using nonlinear models to design experiments to characterize the dynamics of cellular processes by using the approach of the Maximally Informative Next Experiment (MINE), which was introduced in W. Dong et al. (PLoS ONE 3(8):e3105, 2008) and independently in M.M. Donahue et al. (IET Syst. Biol. 4:249-262, 2010). In this approach, existing data is used to define a probability distribution on the parameters; the next measurement point is the one that yields the largest model output variance with this distribution. Building upon this approach, we introduce the Expected Dynamics Estimator (EDE), which is the expected value using this distribution of the output as a function of time. We prove the consistency of this estimator (uniform convergence to true dynamics) even when the chosen experiments cluster in a finite set of points. We extend this proof of consistency to various practical assumptions on noisy data and moderate levels of model mismatch. Through the derivation and proof, we develop a relaxed version of MINE that is more computationally tractable and robust than the original formulation. The results are illustrated with numerical examples on two nonlinear ordinary differential equation models of biomolecular and cellular processes.
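A minimal Monte Carlo sketch of MINE and the EDE, assuming an illustrative exponential-decay model in place of the paper's biomolecular systems: draw parameters from the current distribution, take the ensemble mean output as the Expected Dynamics Estimator, and pick as the next measurement the time of maximum output variance.

```python
# Sketch: MINE selects the time of maximum model-output variance under the
# current parameter distribution; the EDE is the ensemble-mean trajectory.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 101)
thetas = rng.normal(0.5, 0.15, size=500)      # current parameter belief
sims = np.exp(-np.outer(thetas, t))           # model output per sample

ede = sims.mean(axis=0)                       # Expected Dynamics Estimator
idx = sims.var(axis=0).argmax()               # MINE: max-variance time point
print(f"next measurement at t = {t[idx]:.1f}, EDE there = {ede[idx]:.3f}")
```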
Comparative Performance and Model Agreement of Three Common Photovoltaic Array Configurations.
Boyd, Matthew T
2018-02-01
Three grid-connected monocrystalline silicon arrays on the National Institute of Standards and Technology (NIST) campus in Gaithersburg, MD have been instrumented and monitored for 1 yr, with only minimal gaps in the data sets. These arrays range from 73 kW to 271 kW, and all use the same module, but have different tilts, orientations, and configurations. One array is installed facing east and west over a parking lot, one in an open field, and one on a flat roof. Various measured relationships and calculated standard metrics have been used to compare the relative performance of these arrays in their different configurations. Comprehensive performance models have also been created in the modeling software pvsyst for each array, and its predictions using measured on-site weather data are compared to the arrays' measured outputs. The comparisons show that all three arrays typically have monthly performance ratios (PRs) above 0.75, but differ significantly in their relative output, strongly correlating to their operating temperature and to a lesser extent their orientation. The model predictions are within 5% of the monthly delivered energy values except during the winter months, when there was intermittent snow on the arrays, and during maintenance and other outages.
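A short sketch of the performance-ratio metric used to compare the arrays; the numbers are invented for illustration and do not reproduce the NIST data or the pvsyst models.

```python
# Sketch: monthly performance ratio, PR = final yield / reference yield,
# i.e. (AC energy / rated DC power) / (plane-of-array insolation / 1 kW/m2).
def performance_ratio(e_ac_kwh, p_rated_kw, h_poa_kwh_m2, g_ref_kw_m2=1.0):
    final_yield = e_ac_kwh / p_rated_kw        # kWh per kW installed
    reference_yield = h_poa_kwh_m2 / g_ref_kw_m2
    return final_yield / reference_yield

print(f"PR = {performance_ratio(30500, 271, 140):.2f}")  # illustrative month
```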
On the Directional Dependence and Null Space Freedom in Uncertainty Bound Identification
NASA Technical Reports Server (NTRS)
Lim, K. B.; Giesy, D. P.
1997-01-01
In previous work, the determination of uncertainty models via minimum norm model validation is based on a single set of input and output measurement data. Since uncertainty bounds at each frequency are directionally dependent for multivariable systems, this will lead to optimistic uncertainty levels. In addition, the design freedom in the uncertainty model has not been utilized to further reduce uncertainty levels. The above issues are addressed by formulating a min-max problem. An analytical solution to the min-max problem is given to within a generalized eigenvalue problem, thus avoiding a direct numerical approach. This result will lead to less conservative and more realistic uncertainty models for use in robust control.
Sun, Jiangming; Carlsson, Lars; Ahlberg, Ernst; Norinder, Ulf; Engkvist, Ola; Chen, Hongming
2017-07-24
Conformal prediction has been proposed as a more rigorous way to define prediction confidence compared to other application domain concepts that have earlier been used for QSAR modeling. One main advantage of such a method is that it provides a prediction region potentially with multiple predicted labels, which contrasts to the single valued (regression) or single label (classification) output predictions by standard QSAR modeling algorithms. Standard conformal prediction might not be suitable for imbalanced data sets. Therefore, Mondrian cross-conformal prediction (MCCP) which combines the Mondrian inductive conformal prediction with cross-fold calibration sets has been introduced. In this study, the MCCP method was applied to 18 publicly available data sets that have various imbalance levels varying from 1:10 to 1:1000 (ratio of active/inactive compounds). Our results show that MCCP in general performed well on bioactivity data sets with various imbalance levels. More importantly, the method not only provides confidence of prediction and prediction regions compared to standard machine learning methods but also produces valid predictions for the minority class. In addition, a compound similarity based nonconformity measure was investigated. Our results demonstrate that although it gives valid predictions, its efficiency is much worse than that of model dependent metrics.
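The class-conditional (Mondrian) p-value computation is easy to sketch; the calibration scores and significance level below are made up, and the nonconformity measure is left abstract.

```python
# Sketch of Mondrian conformal prediction: p-values are computed against
# calibration scores of the same class only, so validity holds per class
# even under heavy imbalance. The region may hold zero, one, or both labels.
import numpy as np

def mondrian_p_value(test_score, calib_scores):
    """Fraction of same-class calibration scores at least as nonconforming."""
    calib_scores = np.asarray(calib_scores)
    return (np.sum(calib_scores >= test_score) + 1) / (len(calib_scores) + 1)

calib = {0: [0.10, 0.20, 0.15, 0.30, 0.25], 1: [0.50, 0.60]}  # per-class scores
test_nc = {0: 0.22, 1: 0.55}   # test compound's nonconformity under each label
eps = 0.2                      # significance level
region = [label for label in (0, 1)
          if mondrian_p_value(test_nc[label], calib[label]) > eps]
print("prediction region:", region)
```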
Setting conservation management thresholds using a novel participatory modeling approach.
Addison, P F E; de Bie, K; Rumpff, L
2015-10-01
We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
Automatic Visual Tracking and Social Behaviour Analysis with Multiple Mice
Giancardo, Luca; Sona, Diego; Huang, Huiping; Sannino, Sara; Managò, Francesca; Scheggia, Diego; Papaleo, Francesco; Murino, Vittorio
2013-01-01
Social interactions are made of complex behavioural actions that might be found in all mammalians, including humans and rodents. Recently, mouse models are increasingly being used in preclinical research to understand the biological basis of social-related pathologies or abnormalities. However, reliable and flexible automatic systems able to precisely quantify social behavioural interactions of multiple mice are still missing. Here, we present a system built on two components: a module able to accurately track the position of multiple interacting mice from videos, regardless of their fur colour or light settings, and a module that automatically characterises social and non-social behaviours. The behavioural analysis is obtained by deriving a new set of specialised spatio-temporal features from the tracker output. These features are further employed by a learning-by-example classifier, which predicts for each frame and for each mouse in the cage one of the behaviours learnt from the examples given by the experimenters. The system is validated on an extensive set of experimental trials involving multiple mice in an open arena. In a first evaluation we compare the classifier output with the independent evaluation of two human graders, obtaining comparable results. Then, we show the applicability of our technique to multiple mice settings, using up to four interacting mice. The system is also compared with a solution recently proposed in the literature that, similarly to us, addresses the problem with a learning-by-examples approach. Finally, we further validated our automatic system to differentiate between C57B/6J (a commonly used reference inbred strain) and BTBR T+tf/J (a mouse model for autism spectrum disorders). Overall, these data demonstrate the validity and effectiveness of this new machine learning system in the detection of social and non-social behaviours in multiple (>2) interacting mice, and its versatility to deal with different experimental settings and scenarios. PMID:24066146
Frequency content of sea surface height variability from internal gravity waves to mesoscale eddies
NASA Astrophysics Data System (ADS)
Savage, Anna C.; Arbic, Brian K.; Richman, James G.; Shriver, Jay F.; Alford, Matthew H.; Buijsman, Maarten C.; Thomas Farrar, J.; Sharma, Hari; Voet, Gunnar; Wallcraft, Alan J.; Zamudio, Luis
2017-03-01
High horizontal-resolution (1/12.5° and 1/25°) 41-layer global simulations of the HYbrid Coordinate Ocean Model (HYCOM), forced by both atmospheric fields and the astronomical tidal potential, are used to construct global maps of sea surface height (SSH) variability. The HYCOM output is separated into steric and nonsteric and into subtidal, diurnal, semidiurnal, and supertidal frequency bands. The model SSH output is compared to two data sets that offer some geographical coverage and that also cover a wide range of frequencies—a set of 351 tide gauges that measure full SSH and a set of 14 in situ vertical profilers from which steric SSH can be calculated. Three of the global maps are of interest in planning for the upcoming Surface Water and Ocean Topography (SWOT) two-dimensional swath altimeter mission: (1) maps of the total and (2) nonstationary internal tidal signal (the latter calculated after removing the stationary internal tidal signal via harmonic analysis), with an average variance of 1.05 and 0.43 cm2, respectively, for the semidiurnal band, and (3) a map of the steric supertidal contributions, which are dominated by the internal gravity wave continuum, with an average variance of 0.15 cm2. Stationary internal tides (which are predictable), nonstationary internal tides (which will be harder to predict), and nontidal internal gravity waves (which will be very difficult to predict) may all be important sources of high-frequency "noise" that could mask lower frequency phenomena in SSH measurements made by the SWOT mission.
Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.
Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H
2018-01-01
To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate the potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for the respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of different models for researchers' future use. An empirical and a simulated multilevel dataset, with complex and simple structures at the within or between level, were used to illustrate the usability and effectiveness of the iMCFA procedure for analyzing complex survey data. The analytic results of iMCFA using Muthen's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.
Task scheduling in dataflow computer architectures
NASA Technical Reports Server (NTRS)
Katsinis, Constantine
1994-01-01
Dataflow computers provide a platform for the solution of a large class of computational problems, which includes digital signal processing and image processing. Many typical applications are represented by a set of tasks which can be repetitively executed in parallel as specified by an associated dataflow graph. Research in this area aims to model these architectures, develop scheduling procedures, and predict the transient and steady state performance. Researchers at NASA have created a model and developed associated software tools which are capable of analyzing a dataflow graph and predicting its runtime performance under various resource and timing constraints. These models and tools were extended and used in this work. Experiments using these tools revealed certain properties of such graphs that require further study. Specifically, the transient behavior at the beginning of the execution of a graph can have a significant effect on the steady state performance. Transformation and retiming of the application algorithm and its initial conditions can produce a different transient behavior and consequently different steady state performance. The effect of such transformations on the resource requirements or under resource constraints requires extensive study. Task scheduling to obtain maximum performance (based on user-defined criteria), or to satisfy a set of resource constraints, can also be significantly affected by a transformation of the application algorithm. Since task scheduling is performed by heuristic algorithms, further research is needed to determine if new scheduling heuristics can be developed that can exploit such transformations. This work has provided the initial development for further long-term research efforts. A simulation tool was completed to provide insight into the transient and steady state execution of a dataflow graph. A set of scheduling algorithms was completed which can operate in conjunction with the modeling and performance tools previously developed. Initial studies on the performance of these algorithms were done to examine the effects of application algorithm transformations as measured by such quantities as number of processors, time between outputs, time between input and output, communication time, and memory size.
NASA Astrophysics Data System (ADS)
Peckham, Scott
2016-04-01
Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.
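As a rough illustration of the standardized-interface idea, here is a stripped-down BMI-style model in Python. The real BMI specifies a larger, precisely defined set of functions and signatures, so treat this as a sketch of the control-and-description pattern rather than the actual specification; the toy diffusion model and variable name are illustrative.

```python
# Sketch: a BMI-style interface gives a framework control functions
# (initialize/update/finalize) plus self-description of variables, so it can
# drive the model without knowing its internals.
import numpy as np

class HeatModelBMI:
    def initialize(self, n=50, alpha=0.25):
        self._t, self._alpha = 0.0, alpha
        self._temp = np.zeros(n)
        self._temp[n // 2] = 1.0            # initial hot spot

    def update(self):                       # advance one explicit time step
        lap = (np.roll(self._temp, 1) - 2 * self._temp
               + np.roll(self._temp, -1))
        self._temp += self._alpha * lap
        self._t += 1.0

    def finalize(self):
        self._temp = None

    def get_output_var_names(self):
        return ("plate_surface__temperature",)   # CSDMS-style standard name

    def get_value(self, name):
        assert name == "plate_surface__temperature"
        return self._temp.copy()

m = HeatModelBMI()
m.initialize()
for _ in range(10):
    m.update()
print(m.get_value(m.get_output_var_names()[0]).max())
m.finalize()
```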
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
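A sketch of how such a criterion can be evaluated in practice: for each candidate embedding dimension, form delay vectors, estimate the predictive density of the next value as a ratio of joint to marginal kernel density estimates, and score held-out points by negative log-predictive likelihood. The paper's nonparametric estimator may differ; the AR(1) data and KDE choice here are illustrative.

```python
# Sketch: pick the embedding dimension p that minimizes the held-out
# negative log-predictive likelihood (NLPL) of x[t+1] given the last p values.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
x = np.zeros(1200)
for t in range(1, 1200):                 # noisy AR(1): true order is 1
    x[t] = 0.8 * x[t - 1] + 0.5 * rng.normal()

def nlpl(x, p, n_train=800):
    past = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    nxt = x[p:]
    both = np.column_stack([past, nxt])
    joint = gaussian_kde(both[:n_train].T)       # KDE of (past, next)
    marg = gaussian_kde(past[:n_train].T)        # KDE of past alone
    cond = joint(both[n_train:].T) / marg(past[n_train:].T)
    return -np.mean(np.log(cond + 1e-300))

for p in (1, 2, 3):
    print(f"p={p}: NLPL = {nlpl(x, p):.3f}")
```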
NASA Astrophysics Data System (ADS)
Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin
2017-06-01
Based on the input and output data of sandstone reservoirs in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. The results show that using the SBM-Undesirable model to evaluate efficiency avoids the defects caused by the radial and angular assumptions of the traditional DEA model and improves the accuracy of the efficiency evaluation. By analyzing the projections of the oil blocks, we find that each block suffers from input redundancy, deficiency of desirable output, and the negative external effects of undesirable output, and that production efficiency differs considerably across blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output and increase the expected output.
Marken, Richard S; Horth, Brittany
2011-06-01
Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
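The closed-loop explanation is easy to reproduce in simulation; a minimal sketch follows, with gain and disturbance chosen arbitrarily rather than taken from the experiment. The output nearly cancels the disturbance, so the cursor (the input) carries almost no trace of the output that is causing it.

```python
# Sketch: closed-loop tracking. Output o opposes disturbance d; the cursor
# c = o + d is the only observable cause of o, yet corr(c, o) stays near
# zero while corr(d, o) is near -1, matching the paper's surprising result.
import numpy as np

rng = np.random.default_rng(4)
n, k = 5000, 0.1
d = np.cumsum(rng.normal(size=n)) * 0.05        # smooth random disturbance
o = np.zeros(n)
c = np.zeros(n)
for t in range(1, n):
    c[t] = o[t - 1] + d[t]                      # cursor = output + disturbance
    o[t] = o[t - 1] - k * c[t]                  # control: drive cursor to zero

print(f"corr(input c, output o)       = {np.corrcoef(c, o)[0, 1]: .2f}")
print(f"corr(disturbance d, output o) = {np.corrcoef(d, o)[0, 1]: .2f}")
```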
Robust non-rigid registration algorithm based on local affine registration
NASA Astrophysics Data System (ADS)
Wu, Liyang; Xiong, Lei; Du, Shaoyi; Bi, Duyan; Fang, Ting; Liu, Kun; Wu, Dongpeng
2018-04-01
To address the low precision and slow convergence of traditional point-set non-rigid registration algorithms on data with complex local deformations, this paper proposes a robust non-rigid registration algorithm based on local affine registration. The algorithm uses a hierarchical iterative method to complete the point-set non-rigid registration from coarse to fine. In each iteration, the sub data point sets and sub model point sets are divided and the shape control points of each sub point set are updated. Then the control-point-guided affine ICP algorithm is used to solve the local affine transformation between the corresponding sub point sets. Next, the local affine transformation obtained in the previous step is used to update the sub data point sets and their shape control point sets. When the algorithm reaches the maximum iteration layer K, the loop ends and the updated sub data point sets are output. Experimental results demonstrate that the accuracy and convergence of our algorithm are greatly improved compared with traditional point-set non-rigid registration algorithms.
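A minimal sketch of the affine ICP building block used inside each iteration, on toy 2-D point sets; the paper's hierarchical subdivision and shape-control-point guidance are omitted here.

```python
# Sketch: one affine ICP iteration: match each data point to its nearest
# model point, then solve least squares for the affine map (A, t) that
# minimizes ||A p + t - q||^2 over the matches, and apply it.
import numpy as np
from scipy.spatial import cKDTree

def affine_icp_step(data, model):
    _, idx = cKDTree(model).query(data)              # nearest-neighbour matches
    q = model[idx]
    P = np.hstack([data, np.ones((len(data), 1))])   # homogeneous coordinates
    M, *_ = np.linalg.lstsq(P, q, rcond=None)        # stacked [A^T; t^T]
    return data @ M[:2] + M[2]                       # transformed data points

model = np.random.default_rng(5).uniform(size=(100, 2))
data = model @ np.array([[1.1, 0.1], [-0.05, 0.9]]) + 0.2   # affinely warped
for _ in range(10):
    data = affine_icp_step(data, model)
print("mean nearest-neighbour distance:", cKDTree(model).query(data)[0].mean())
```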
Fabrication and Characterization of All-Fibre Optical Hybrids
NASA Astrophysics Data System (ADS)
Madore, Wendy Julie
In this thesis, we present the fabrication and characterization of optical hybrids made of all-fibre 3 × 3 and 4 × 4 couplers. The three-fibre components are made with a triangular cross section, while the four-fibre components are made with a square cross section. All of these couplers have to exhibit equipartition of output amplitudes and specific relative phases of the output signals to be referred to as optical hybrids. These two types of couplers are first modelled to determine the appropriate set of experimental parameters to make hybrids out of them. The prototypes are made in standard telecommunication fibres and then characterized to quantify the performance in transmission and in phase. The objective of this work is first to model the behaviour and physical properties of 3 × 3 and 4 × 4 couplers to make sure they can meet the requirements of optical hybrids with an appropriate set of fabrication parameters. The next step is to make prototypes of these 3 × 3 and 4 × 4 couplers and test their behaviour to check how they fulfill the requirements of optical hybrids. The experimental set-up selected is based on the fusion-tapering technique to make optical fibre components. The heat source is a micro-torch fuelled with a gas mix including propane and oxygen. This type of set-up gives the required freedom to adjust experimental parameters to suit both 3 × 3 and 4 × 4 couplers. The versatility of the set-up is also an advantage towards a repeatable and stable process to fuse and taper the different structures. The fabricated triangular-shape couplers have a total transmission of 85 % (-0.7 dB); the crossing is typically located around 1550 nm, with a transmission of around 33 % (-4 dB) per branch. In addition, the relative phases between the output signals are 120±9°. The fabricated square-shape couplers have a total transmission of 89 % (-0.5 dB), with a crossing around 1550 nm and a transmission around 25 % (-6 dB) per branch. The relative phases between the output signals are 90±3°. As standard telecommunication fibres are used to make the couplers, the prototypes are compatible with all standard fibre-based set-ups and benches. The properties of optical hybrids are very interesting in coherent detection, where an unambiguous phase measurement is desired. For instance, some standard telecommunication systems use phase-shift keying (PSK), which means information is encoded in the phase of the electromagnetic wave. An all-optical decoding of the signal is possible using optical hybrids. Another application is in biomedical imaging with techniques such as optical coherence tomography (OCT) or, more generally, profilometry systems. In state-of-the-art techniques, a conventional interferometer combined with Fourier analysis gives only the absolute value of the phase. Therefore, the achievable imaging depth in the sample is decreased by a factor of 2. Using optical hybrids would allow that unambiguous phase measurement, giving the sign and value of the phase at the same time.
Linking the Weather Generator with Regional Climate Model
NASA Astrophysics Data System (ADS)
Dubrovsky, Martin; Farda, Ales; Skalak, Petr; Huth, Radan
2013-04-01
One of the downscaling approaches, which transforms the raw outputs from climate models (GCMs or RCMs) into data with a more realistic structure, is based on linking a stochastic weather generator with the climate model output. The present contribution, in which the parametric daily surface weather generator (WG) M&Rfi is linked to the RCM output, follows two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate Regional Climate Model at 25 km resolution. The WG parameters are derived from the RCM-simulated surface weather series and compared to those derived from the weather series observed at 125 Czech meteorological stations. The set of WG parameters will include statistics of the surface temperature and precipitation series (including the probability of wet-day occurrence). (2) Presenting a methodology for linking the WG with RCM output. This methodology, which is based on merging information from observations and the RCM, may be interpreted as a downscaling procedure whose product is a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations. In this procedure, the WG is calibrated with RCM-simulated multivariate weather series in the first step, and the grid-specific WG parameters are then de-biased by spatially interpolated correction factors based on a comparison of WG parameters calibrated with gridded RCM weather series and spatially scarcer observations. The quality of the weather series produced by the resultant gridded WG will be assessed in terms of selected climatic characteristics (focusing on characteristics related to variability and extremes of surface temperature and precipitation). Acknowledgements: The present experiment is made within the frame of the projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).
Capital in the twenty-first century: a critique.
Soskice, David
2014-12-01
I set out and explain Piketty's model of the dynamics of capitalism based on two equations and the r > g inequality (his central contradiction of capitalism). I then take issue with Piketty's analysis of the rebuilding of inequality from the 1970s to the present on three grounds. First, his model is based on the (neo-classical) assumption that companies are essentially passive actors who invest the amount savers choose to accumulate at equilibrium output, leading to the counterintuitive result that companies respond to the secular fall in growth (and hence in their product markets) from the 1970s on by increasing their investment relative to output; this does indeed imply increased inequality on Piketty's β measure, the ratio of capital to output. I suggest a more realistic model in which businesses determine investment growth based on their expectations of output growth, with monetary policy bringing savings into line with business-determined investment; the implication of this model is that β does not change at all. And in fact, as other recent empirical work which I reference has noted, β has not changed significantly over these recent decades. Hence Piketty's central analysis of the growth of contemporary inequality requires rethinking. Second, despite many references to the need for political economic analysis, Piketty's analysis of the growth of inequality in the period from the 1970s to the present is almost devoid of it, his explanatory framework being purely mathematical. I sketch what a political economic framework might look like for a period when politics was central to inequality. Third, inequality in fact rose on a variety of dimensions apart from β (including poverty, to which Piketty makes virtually no reference in this period), but it is unclear what might explain why inequality rose in these other dimensions.
Dose uniformity analysis among ten 16-slice same-model CT scanners.
Erdi, Yusuf Emre
2012-01-01
With the introduction of multislice scanners, computed tomographic (CT) dose optimization has become important. The patient-absorbed dose may differ among scanners even when they are of the same type and model. To investigate this variation, we designed the study to analyze the dose outputs of 10 same-model CT scanners using 3 clinical protocols. Ten GE Lightspeed (GE Healthcare, Waukesha, Wis) 16-slice scanners located at the main campus and various satellite locations of our institution were included in this study. All dose measurements were performed using poly(methyl methacrylate) (PMMA) head (diameter, 16 cm) and body (diameter, 32 cm) phantoms manufactured by Radcal (Radcal Corp, Monrovia, Calif), using a 9095 multipurpose analyzer with a 10 × 9-3CT ion chamber, both from the same manufacturer. The ion chamber is inserted at the peripheral and central axis locations, and the volume CT dose index (CTDIvol) is calculated as a weighted average of the doses at those locations. Three clinical protocol settings for adult head, high-resolution chest, and adult abdomen were used for dose measurements. We observed up to 9.4% CTDIvol variation for the adult head protocol, the largest variation among the protocols. However, the head protocol uses higher milliampere-second values than the other 2 protocols. Most of the measured values were less than the system-stored CTDIvol values. It is important to note that a reduction in dose output from tubes as they age is expected, in addition to the intrinsic radiation output fluctuations of the same scanner. Although same-model CT scanners were used in this study, it is possible to see CTDIvol variation in standard patient scanning protocols of head, chest, and abdomen. The compound effect of the dose variation may be larger with higher milliampere and multiphase, multilocation CT scans.
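For reference, the weighted average referred to above is conventionally defined as follows (the standard IEC convention; the paper's exact computation may differ in detail):

```latex
\mathrm{CTDI}_w = \tfrac{1}{3}\,\mathrm{CTDI}_{100,\mathrm{center}}
               + \tfrac{2}{3}\,\mathrm{CTDI}_{100,\mathrm{periphery}},
\qquad
\mathrm{CTDI}_{\mathrm{vol}} = \frac{\mathrm{CTDI}_w}{\mathrm{pitch}}
```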
Satellite Remote Sensing is Key to Water Cycle Integrator
NASA Astrophysics Data System (ADS)
Koike, T.
2016-12-01
To promote effective multi-sectoral, interdisciplinary collaboration based on coordinated and integrated efforts, the Global Earth Observation System of Systems (GEOSS) is now developing a "GEOSS Water Cycle Integrator (WCI)", which integrates "Earth observations", "modeling", "data and information", "management systems" and "education systems". GEOSS/WCI sets up "work benches" by which partners can share data, information and applications in an interoperable way, exchange knowledge and experiences, deepen mutual understanding and work together effectively to ultimately respond to issues of both mitigation and adaptation. (A work bench is a virtual geographical or phenomenological space where experts and managers collaborate to use information to address a problem within that space.) GEOSS/WCI enhances the coordination of efforts to strengthen individual, institutional and infrastructure capacities, especially for effective interdisciplinary coordination and integration. GEOSS/WCI archives various satellite data to provide hydrological information such as cloud cover, rainfall, soil moisture, and land-surface snow. These satellite products were validated using in-situ land observations. Water cycle models can be developed by coupling in-situ and satellite data. River flows and other hydrological parameters can be simulated and validated against in-situ data. Model outputs from weather-prediction, seasonal-prediction, and climate-prediction models are archived; some of these outputs are archived online, while others, e.g. climate-prediction model outputs, are archived offline. After the models are evaluated and their biases corrected, the outputs can be used as inputs to the hydrological models for predicting hydrological parameters. Additionally, we have already developed a data-assimilation system combining satellite data and the models, which improves our capability to predict hydrological phenomena. The WCI can thus provide better predictions of hydrological parameters for integrated water resources management (IWRM), assess the impact of climate change and calculate adaptation needs.
Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.
Ashford, Paul; Moss, David S; Alex, Alexander; Yeap, Siew K; Povia, Alice; Nobeli, Irene; Williams, Mark A
2012-03-14
Protein structures provide a valuable resource for rational drug design. For a protein with no known ligand, computational tools can predict surface pockets that are of suitable size and shape to accommodate a complementary small-molecule drug. However, pocket prediction against single static structures may miss features of pockets that arise from proteins' dynamic behaviour. In particular, ligand-binding conformations can be observed as transiently populated states of the apo protein, so it is possible to gain insight into ligand-bound forms by considering conformational variation in apo proteins. This variation can be explored by considering sets of related structures: computationally generated conformers, solution NMR ensembles, multiple crystal structures, homologues or homology models. It is non-trivial to compare pockets, either from different programs or across sets of structures. For a single structure, difficulties arise in defining a particular pocket's boundaries. For a set of conformationally distinct structures, the challenge is how to make reasonable comparisons between them given that a perfect structural alignment is not possible. We have developed a computational method, Provar, that provides a consistent representation of predicted binding pockets across sets of related protein structures. The outputs are probabilities that each atom or residue of the protein borders a predicted pocket. These probabilities can be readily visualised on a protein using existing molecular graphics software. We show how Provar simplifies comparison of the outputs of different pocket prediction algorithms, of pockets across multiple simulated conformations and between homologous structures. We demonstrate the benefits of using multiple structures for protein-ligand and protein-protein interface analysis on a set of complexes and consider three case studies in detail: i) analysis of a kinase superfamily highlights the conserved occurrence of surface pockets at the active and regulatory sites; ii) a simulated ensemble of unliganded Bcl2 structures reveals extensions of a known ligand-binding pocket not apparent in the apo crystal structure; iii) visualisations of interleukin-2 and its homologues highlight conserved pockets at the known receptor interfaces and regions whose conformation is known to change on inhibitor binding. Through post-processing of the output of a variety of pocket prediction software, Provar provides a flexible approach to the analysis and visualization of the persistence or variability of pockets in sets of related protein structures.
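The core of the Provar-style output described above, a per-atom (or per-residue) probability of bordering a predicted pocket, can be sketched in a few lines. The arrays and the residue mapping below are synthetic stand-ins under the assumption of a 1:1 atom correspondence across structures, not Provar's actual code:

```python
import numpy as np

# Hypothetical inputs: for each of n_structures conformers of the same
# protein, a boolean vector saying whether each of n_atoms atoms borders
# a pocket predicted by some pocket-detection program.
rng = np.random.default_rng(1)
n_structures, n_atoms = 50, 300
borders_pocket = rng.random((n_structures, n_atoms)) < 0.15

# Per-atom probability of bordering a predicted pocket across the ensemble
# (atoms are assumed to correspond 1:1 across structures, e.g. via a
# sequence/structure alignment).
atom_prob = borders_pocket.mean(axis=0)

# Map to residues by taking the max over each residue's atoms
residue_of_atom = np.repeat(np.arange(60), 5)  # toy 60-residue mapping
residue_prob = np.array([atom_prob[residue_of_atom == r].max()
                         for r in range(60)])
```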
A process-based standard for the Solar Energetic Particle Event Environment
NASA Astrophysics Data System (ADS)
Gabriel, Stephen
For 10 years or more, there has been a lack of consensus on what the ISO standard model for the Solar Energetic Particle Event (SEPE) environment should be. Despite many technical discussions between the world experts in this field, it has been impossible to agree on which of the several models available should be selected as the standard. Most of these discussions at the ISO WG4 meetings, conferences, etc. have centred on the differences in modelling approach between the MSU model and the several remaining models from elsewhere worldwide (mainly the USA and Europe). The topic is considered timely given the inclusion of a session on reference data sets at the Space Weather Workshop in Boulder in April 2014. The original idea of a 'process-based' standard was conceived by Dr Kent Tobiska as a way of getting round the problems associated with the presence of different models, which not only could have quite distinct modelling approaches but could also be based on different data sets. In essence, a process-based standard overcomes these issues by allowing there to be more than one model rather than a single standard model; however, any such model has to be completely transparent, in that the data set and the modelling techniques used have to be clearly and unambiguously defined and subject to peer review. If the model meets all of these requirements, then it should be acceptable as a standard model. So how does this process-based approach resolve the differences between the existing modelling approaches for the SEPE environment and remove the impasse? In a sense, it does not remove all of the differences but only some of them; most importantly, however, it allows something which so far has been impossible without ambiguities and disagreement: a comparison of the results of the various models. To date, one of the problems (if not the major one) in comparing the results of the various SEPE statistical models has been caused by two things: 1) the data set and 2) the definition of an event. Because unravelling the dependencies of the outputs of different statistical models on these two parameters is extremely difficult if not impossible, comparison of the results from the different models is currently also extremely difficult and can lead to controversies, especially over which model is the correct one. Hence, when it comes to using these models for engineering purposes, to calculate, for example, the radiation dose for a particular mission, the user, who is in all likelihood not an expert in this field, could be given two (or even more) very different environments and find it impossible to know how to select one (or even how to compare them). What is proposed, then, is a process-based standard which, in common with nearly all of the current models, is composed of three elements: a standard data set, a standard event definition and a resulting standard event list. The standard event list is the output of this standard and can then be used with any of the existing (or indeed future) models that are based on events. This standard event list is completely traceable and transparent and represents a reference event list for the whole community. When coupled with a statistical model, the results, when compared, will depend only on the statistical model and not on the data set or event definition.
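To make concrete why the event definition drives everything downstream, here is a toy sketch of extracting an event list from a proton flux time series given a threshold and a dwell-time rule. The threshold, gap and flux series are invented; a real standard would fix these choices precisely, and changing them changes the resulting event list:

```python
import numpy as np

def extract_events(t, flux, threshold, min_gap):
    """Toy event definition: an event starts when the flux rises above
    `threshold` and ends once it has stayed below it for at least
    `min_gap` samples; shorter dips are merged into one event."""
    above = flux > threshold
    events, start, below_run = [], None, 0
    for i, a in enumerate(above):
        if a and start is None:
            start, below_run = i, 0
        elif start is not None:
            below_run = 0 if a else below_run + 1
            if below_run >= min_gap:
                events.append((t[start], t[i - min_gap]))
                start = None
    if start is not None:                      # event still open at series end
        events.append((t[start], t[-1]))
    return events

rng = np.random.default_rng(2)
t = np.arange(1000)                            # hours (toy)
flux = np.exp(rng.normal(0.0, 1.0, 1000))      # toy proton flux series
print(extract_events(t, flux, threshold=10.0, min_gap=24))
```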
Port-O-Sim Object Simulation Application
NASA Technical Reports Server (NTRS)
Lanzi, Raymond J.
2009-01-01
Port-O-Sim is a software application that supports engineering modeling and simulation of launch-range systems and subsystems, as well as the vehicles that operate on them. It is flexible, distributed, object-oriented, and real-time. A scripting language is used to configure an array of simulation objects and link them together. The script is contained in a text file but is executed and controlled through a graphical user interface. A set of modules is defined, each with input variables, output variables, and settings. These engineering models can be linked to each other or run standalone, and their settings can be modified during execution. Since 2001, this application has been used for pre-mission failure-mode training for many Range Safety scenarios. It contains range-asset link analysis, develops look-angle data, supports sky-screen site selection, drives GPS (Global Positioning System) and IMU (Inertial Measurement Unit) simulators, and can support conceptual design efforts for multiple flight programs with its capacity for rapid six-degree-of-freedom model development. Because various object types are assembled into one application, it is applicable across a wide variety of launch-range problem domains.
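The module-and-linking pattern described above can be suggested with a toy sketch: modules with inputs, outputs and settings, wired so one module's outputs feed another's inputs each frame. This is purely illustrative of the pattern, not Port-O-Sim's actual scripting language or object model:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Module:
    """Toy analogue of a simulation module: a step() function maps the
    frame's inputs plus the module's settings to its outputs."""
    step_fn: Callable[[Dict[str, float], Dict[str, float]], Dict[str, float]]
    settings: Dict[str, float] = field(default_factory=dict)
    outputs: Dict[str, float] = field(default_factory=dict)

# Invented example: a vehicle module feeding a tracker module
vehicle = Module(lambda ins, s: {"alt_km": ins.get("t", 0.0) * s["climb"]},
                 settings={"climb": 0.5})
tracker = Module(lambda ins, s: {"look_deg": 45.0 - ins["alt_km"]})

for frame in range(3):                         # minimal frame loop
    vehicle.outputs = vehicle.step_fn({"t": float(frame)}, vehicle.settings)
    tracker.outputs = tracker.step_fn(vehicle.outputs, tracker.settings)
    print(frame, vehicle.outputs, tracker.outputs)
```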
Software cost/resource modeling: Deep space network software cost estimation model
NASA Technical Reports Server (NTRS)
Tausworthe, R. J.
1980-01-01
A parametric software cost estimation model prepared for JPL Deep Space Network (DSN) data systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models, such as those of the General Research Corporation, Doty Associates, IBM (Walston-Felix), Rome Air Development Center, the University of Maryland, and Rayleigh-Norden-Putnam. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit JPL software life-cycle statistics. The estimation model output scales a standard DSN work breakdown structure skeleton, which is then input to a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.
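Models of this family are typically power laws in code size with environment multipliers. A generic sketch follows; the Walston-Felix-style constants are shown purely as placeholders, since the actual DSN model calibrates its parameters to JPL statistics through the roughly 50 prompted questions:

```python
# Generic parametric cost model: nominal effort from size, adjusted by
# multipliers derived from answers to environment/difficulty questions.
# Constants a, b and the example multipliers are illustrative only.
def effort_person_months(ksloc, multipliers, a=5.2, b=0.91):
    effort = a * ksloc ** b          # nominal effort from size alone
    for m in multipliers:            # one multiplier per calibrated factor
        effort *= m
    return effort

# Hypothetical 30 KSLOC task with three environment adjustments
print(effort_person_months(30.0, [1.15, 0.90, 1.05]))
```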
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasari, Paul K. R.; Shazeeb, Mohammed Salman; Könik, Arda
Purpose: Binning list-mode acquisitions as a function of a surrogate signal related to respiration has been employed to reduce the impact of respiratory motion on image quality in cardiac emission tomography (SPECT and PET). Inherent in amplitude binning is the assumption that there is a monotonic relationship between the amplitude of the surrogate signal and respiratory motion of the heart. This assumption is not valid in the presence of hysteresis, when heart motion exhibits a different relationship with the surrogate during inspiration and expiration. The purpose of this study was to investigate the novel approach of using the Bouc–Wen (BW) model to provide a signal accounting for hysteresis when binning list-mode data, with the goal of thereby improving motion correction. The study is based on the authors' previous observations that hysteresis between chest and abdomen markers was indicative of hysteresis between abdomen markers and the internal motion of the heart. Methods: In 19 healthy volunteers, the authors determined the internal motion of the heart and diaphragm in the superior–inferior direction during free breathing using MRI navigators. A visual tracking system (VTS) synchronized with MRI acquisition tracked the anterior–posterior motions of external markers placed on the chest and abdomen. These data were employed to develop and test the Bouc–Wen model by inputting the VTS-derived chest and abdomen motions into it and using the resulting output signals as surrogates for cardiac motion. The data of the volunteers were divided into training and testing sets. The training set was used to obtain initial values for the model parameters for all of the volunteers in the set, and for set members based on whether or not they were classified as exhibiting hysteresis using a metric derived from the markers. These initial parameters were then employed with the testing set to estimate output signals. Pearson's linear correlation coefficient between the abdomen, chest, average-of-chest-and-abdomen, and Bouc–Wen-derived signals versus the true internal motion of the heart from MRI was used to judge each signal's match to the heart motion. Results: The results show that the Bouc–Wen-model-generated signals demonstrated strong correlation with the heart motion. This correlation was slightly larger on average than that of the external surrogate signals derived from the abdomen marker and the average of the abdomen and chest markers, but was not statistically significantly different from them. Conclusions: The results suggest that the proposed model has the potential to be a unified framework for modeling hysteresis in respiratory motion in cardiac perfusion studies and beyond.
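The Bouc–Wen equation itself is a standard first-order hysteresis model. A minimal sketch of generating a hysteresis-aware surrogate from a marker trace follows; the parameter values, the Euler integration and the blending of input and hysteretic state are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

def bouc_wen_signal(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0, alpha=0.5):
    """Integrate the Bouc-Wen hysteresis equation
        dz/dt = A*dx/dt - beta*|dx/dt|*|z|**(n-1)*z - gamma*(dx/dt)*|z|**n
    and return a surrogate blending the input with the hysteretic state,
    alpha*x + (1 - alpha)*z. In the study the parameters were fitted on
    a training set of volunteers; the values here are placeholders."""
    dx = np.gradient(x, dt)
    z = np.zeros_like(x)
    for i in range(1, len(x)):
        dz = (A * dx[i - 1]
              - beta * abs(dx[i - 1]) * abs(z[i - 1]) ** (n - 1) * z[i - 1]
              - gamma * dx[i - 1] * abs(z[i - 1]) ** n)
        z[i] = z[i - 1] + dz * dt              # forward-Euler step
    return alpha * x + (1 - alpha) * z

# Toy abdomen-marker trace: 0.25 Hz breathing with slow drift
t = np.arange(0.0, 60.0, 0.05)
abdomen = np.sin(2 * np.pi * 0.25 * t) + 0.1 * t / 60.0
surrogate = bouc_wen_signal(abdomen, dt=0.05)
```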
NASA Astrophysics Data System (ADS)
Kirchner-Bossi, Nicolas; Porté-Agel, Fernando
2017-04-01
Wind turbine wakes can significantly degrade the performance of turbines further downstream in a wind farm, thus seriously limiting the overall wind farm power output. This makes the layout design of a wind farm crucial to the overall performance of the project. Combining an accurate description of the wake interactions with a computationally affordable layout optimization strategy is therefore key to addressing the problem. This work presents a novel soft-computing approach to optimizing the wind farm layout by minimizing the overall wake effects that the installed turbines exert on one another. An evolutionary algorithm with an elitist sub-optimization crossover routine and an unconstrained (continuous) turbine-positioning setup is developed and tested on an 80-turbine offshore wind farm in the North Sea off Denmark (Horns Rev I). Within every generation of the evolution, the wind power output (cost function) is computed through a recently developed and validated analytical wake model with a Gaussian velocity-deficit profile [1], which has been shown to outperform the traditionally employed wake models in different LES simulations and wind tunnel experiments. Two schemes with slightly different perimeter constraint conditions (full or partial) are tested. Compared to the baseline gridded layout, results show a wind power output increase of between 5.5% and 7.7%. In addition, the electric cable length at the facility is reduced by up to 21%. [1] Bastankhah, Majid, and Fernando Porté-Agel. "A new analytical model for wind-turbine wakes." Renewable Energy 70 (2014): 116-123.
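For reference, a minimal implementation of the cited Gaussian wake model [1] is sketched below. The wake-growth rate kstar and the epsilon closure are common choices from the literature, not values taken from this study, and the turbine parameters are invented:

```python
import numpy as np

def gaussian_deficit(x, y, z, d0=80.0, zh=70.0, ct=0.8, kstar=0.035):
    """Normalized velocity deficit DeltaU/U_inf at downwind distance x,
    crosswind offset y and height z, for the Gaussian wake model of
    Bastankhah & Porte-Agel (2014). d0: rotor diameter, zh: hub height,
    ct: thrust coefficient, kstar: wake growth rate (all toy values)."""
    beta = 0.5 * (1.0 + np.sqrt(1.0 - ct)) / np.sqrt(1.0 - ct)
    eps = 0.2 * np.sqrt(beta)                  # a common near-wake closure
    sigma = kstar * x + eps * d0               # wake width at distance x
    c = 1.0 - np.sqrt(1.0 - ct / (8.0 * (sigma / d0) ** 2))
    return c * np.exp(-(y ** 2 + (z - zh) ** 2) / (2.0 * sigma ** 2))

# Deficit five rotor diameters downwind, on the hub-height centreline
print(gaussian_deficit(x=5 * 80.0, y=0.0, z=70.0))
```

Summing such deficits over all upstream turbines (e.g. with a quadratic superposition rule) gives the farm-wide power loss that the evolutionary algorithm minimizes.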
Greenhouse gas footprinting for small businesses--the use of input-output data.
Berners-Lee, M; Howard, D C; Moss, J; Kaivanto, K; Scott, W A
2011-02-01
To mitigate anthropogenic climate change, greenhouse gas (GHG) emissions must be reduced; their major source is man's use of energy. A key way to manage emissions is for the energy consumer to understand their impact and the consequences of changing their activities. This paper addresses the challenge of delivering relevant, practical and reliable greenhouse gas 'footprint' information for small and medium-sized businesses. The tool we describe is capable of ascribing parts of the total footprint to specific actions to which the business can relate and is sensitive enough to reflect the consequences of change. It provides a comprehensive description of all emissions for each business and sets them in the context of local, national and global statistics. It includes the GHG costs of all goods and services irrespective of their origin and without double counting. We describe the development and use of the tool, which draws upon both national input-output data and process-based life cycle analysis techniques; a hybrid model. The use of national data sets the output in context and makes the results consistent with national and global targets, while the life cycle techniques provide a means of reflecting the dynamics of actions. The model is described in some detail along with a rationale and a short discussion of validity. As the tool is designed for small commercial users, we have taken care to combine rigour with practicality; parameterising from readily available client data whilst being clear about uncertainties. As an additional incentive, we also report on the potential costs or savings of switching activities. For users to benefit from the tool, they need to understand the output and know how much confidence they should place in the results. We not only describe an application of non-parametric statistics to generate confidence intervals, but also offer users the option of, and guidance on, adjusting figures to examine the sensitivity of the model to its components. It is important that the user does not see the model as a calculator that will generate one truth, but as a method of gaining insight and informing management decisions. We describe its application in tourism businesses in North West England as a demonstrator for the service sector remote from simple primary production, with brief case studies. We discuss its success compared to traditional approaches and outline further development work. Crown Copyright © 2010. Published by Elsevier B.V. All rights reserved.
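The input-output side of such a hybrid tool rests on the standard Leontief calculation: total output x = (I - A)^(-1) y for a final demand y, with an emissions-intensity vector e giving the embodied GHG as e·x. A minimal sketch with invented 3-sector numbers:

```python
import numpy as np

# Invented 3-sector economy: A holds inter-industry input coefficients,
# e the GHG intensity (kgCO2e per unit output) of each sector, and y
# the business's purchases expressed as final demand by sector.
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.10, 0.05],
              [0.05, 0.10, 0.08]])
e = np.array([0.9, 0.3, 0.1])
y = np.array([5.0, 2.0, 1.0])

x = np.linalg.solve(np.eye(3) - A, y)   # total output: x = (I - A)^-1 y
footprint = e @ x                       # embodied GHG of the purchases
print(footprint)
```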
Towards decision support for waiting lists: an operations management view.
Vissers, J M; Van Der Bij, J D; Kusters, R J
2001-06-01
This paper considers the phenomenon of waiting lists in a healthcare setting characterised by limits on national expenditure, and explores the potential of an operations management perspective. A reference framework for waiting list management is described, distinguishing different levels of planning in healthcare--national, regional, hospital and process--that each contribute to the existence of waiting lists through managerial decision making. In addition, different underlying mechanisms in demand and supply are distinguished, which together explain the development of waiting lists. It is our contention that within this framework a series of situation-specific models should be designed to support communication and decision making. This is illustrated by the modelling of the demand for cataract treatment in a regional setting in the south-eastern part of the Netherlands. An input-output model was developed to support decisions regarding waiting lists. The model projects the demand for treatment at a regional level and makes it possible to evaluate waiting-list impacts for different scenarios to meet this demand.
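A toy version of such an input-output waiting-list projection, with invented numbers, shows the scenario logic: demand flows onto the list, treatment capacity flows off it, and the list grows or shrinks with the balance:

```python
# Toy waiting-list projection; all figures are invented for illustration
# and are not taken from the cataract case study.
def project_waiting_list(start, demand_per_month, capacity_per_month,
                         months=24):
    waiting = [start]
    for _ in range(months):
        waiting.append(max(0, waiting[-1]
                           + demand_per_month - capacity_per_month))
    return waiting

baseline = project_waiting_list(800, demand_per_month=120,
                                capacity_per_month=110)
scenario = project_waiting_list(800, demand_per_month=120,
                                capacity_per_month=130)
print(baseline[-1], scenario[-1])   # list size after 24 months, each scenario
```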
Aerial Surveys of the Beaufort Sea Seasonal Ice Zone in 2012-2014
NASA Astrophysics Data System (ADS)
Dewey, S.; Morison, J.; Andersen, R.; Zhang, J.
2014-12-01
Seasonal Ice Zone Reconnaissance Surveys (SIZRS) of the Beaufort Sea aboard U.S. Coast Guard Arctic Domain Awareness flights were made monthly from May 2012 to October 2012, June 2013 to August 2013, and June 2014 to October 2014. In 2012 sea ice extent reached a record minimum and the SIZRS sampling ranged from complete ice cover to open water; in addition to its large spatial coverage, the SIZRS program extends temporal coverage of the seasonal ice zone (SIZ) beyond the traditional season for ship-based observations, and provides a good set of measurements for model validation and climatological comparison. The SIZ, where ice melts and reforms annually, encompasses the marginal ice zone (MIZ). Thus SIZRS tracks interannual MIZ conditions, providing a regional context for smaller-scale MIZ processes. Observations with Air eXpendable CTDs (AXCTDs) reveal two near-surface warm layers: a locally formed seasonal surface mixed layer and a layer of Pacific origin at 50-60 m. Temperatures in the latter differ from the freezing point by up to 2°C more than in climatologies. To distinguish the vertical processes of mixed-layer formation from Pacific advection, vertical heat and salt fluxes are quantified using a 1-D Price-Weller-Pinkel (PWP) model adapted for ice-covered seas. This PWP model simulates mixing processes in the top 100 m of the ocean. Surface forcing fluxes are taken from the Marginal Ice Zone Modeling and Assimilation System (MIZMAS). Comparison of SIZRS observations with PWP output shows that the ocean behaves one-dimensionally above the Pacific layer of the Beaufort Gyre. Despite agreement with the MIZMAS-forced PWP, SIZRS observations remain fresher down to 100 m than outputs from MIZMAS and ECCO2. The shapes of the seasonal cycles in SIZRS salinity and temperature agree with MIZMAS and ECCO2 model outputs despite differences in the values of each. However, the seasonal change of surface albedo is not resolved at high enough resolution to accurately drive the PWP model. Using ice albedo observations to scale shortwave radiation and salt fluxes improves agreement between observations and PWP outputs. Sensitivity analyses suggest that these are the two surface parameters with the greatest impact on PWP output and that better knowledge of their seasonal changes—as well as better characterization of horizontal Pacific inflow—is imperative for future modeling.
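One building block of a PWP-type 1-D model is the removal of static instability by mixing downward from the surface until the density profile is stable. A toy sketch of that single step is given below (the full model also handles wind-driven mixing, bulk and gradient Richardson number criteria, and the surface fluxes; the grid and profile here are invented):

```python
import numpy as np

def convective_adjustment(rho):
    """Mix the water column from the surface down wherever the profile
    is statically unstable (denser water overlying lighter water),
    re-checking from the top after each mixing event."""
    rho = rho.copy()
    k = 1
    while k < len(rho):
        if rho[k - 1] > rho[k]:               # statically unstable pair
            rho[:k + 1] = rho[:k + 1].mean()  # homogenize down to level k
            k = 1                             # re-check from the surface
        else:
            k += 1
    return rho

rho0 = np.array([1024.2, 1024.0, 1024.5, 1025.0, 1025.4])  # kg m-3, toy
print(convective_adjustment(rho0))
```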
NASA Technical Reports Server (NTRS)
1973-01-01
The retrieval command subsystem reference manual for the NASA Aerospace Safety Information System (NASIS) is presented. The output-oriented classification of retrieval commands provides the user with the ability to review a set of data items for verification or inspection at a typewriter or CRT terminal and to print a set of data on a remote printer. Predefined and user-definable data formatting are available for both output media.
Piezoelectric MEMS switch to activate event-driven wireless sensor nodes
NASA Astrophysics Data System (ADS)
Nogami, H.; Kobayashi, T.; Okada, H.; Makimoto, N.; Maeda, R.; Itoh, T.
2013-09-01
We have developed piezoelectric microelectromechanical systems (MEMS) switches and applied them to ultra-low-power wireless sensor nodes to monitor the health condition of chickens. The piezoelectric switches have ‘S’-shaped piezoelectric cantilevers with a proof mass. Since the resonant frequency of the piezoelectric switches is around 24 Hz, we have utilized their superharmonic resonance to detect chicken movements as low as 5-15 Hz. When the vibration frequency is 4, 6 or 12 Hz, the piezoelectric switches vibrating at 0.5 m s-2 generate 3-5 mV output voltages with superharmonic resonance. To detect such small piezoelectric output voltages, we employ comparator circuits that can be driven at low voltages and whose threshold voltage (Vth) can be set from 1 to 31 mV in 1 mV increments. When we set Vth at 4 mV, vibrations below 15 Hz with amplitudes above 0.3 m s-2 produce output voltages from the piezoelectric MEMS switches that turn on the comparator circuits. Similarly, with Vth set at 5 mV, vibrations above 0.4 m s-2 turn on the comparator circuits, and with Vth set at 10 mV, only vibrations above 0.5 m s-2 do so. These results suggest that we can select small or fast chicken movements by utilizing piezoelectric MEMS switches with comparator circuits.
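The selection logic can be suggested with a toy sketch; the mapping from vibration amplitude to switch output voltage below is invented for illustration, loosely following the figures quoted in the abstract:

```python
# Toy sketch of event-driven thresholding: the comparator's programmable
# threshold Vth (1-31 mV in 1 mV steps) decides which switch output
# voltages wake the node. The amplitude-to-voltage mapping is invented.
amplitude_to_mv = {0.3: 4.5, 0.4: 5.5, 0.5: 10.5}  # m s-2 -> mV (toy)

for vth in (4, 5, 10):
    wakes = sorted(a for a, mv in amplitude_to_mv.items() if mv > vth)
    print(f"Vth = {vth} mV wakes the node for amplitudes {wakes} m s-2")
```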
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2015-12-01
Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found to be the most suitable for handling data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
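For reference, the directional distance function underlying the approach can be written as follows (the standard Chung-Färe-Grosskopf form; the paper's enhanced model adds climatic factors and interval data on top of this):

```latex
\vec{D}(x, y, b; g) = \max\left\{\beta \;:\; \bigl(y + \beta g_y,\; b - \beta g_b\bigr) \in P(x)\right\},
\qquad g = (g_y, -g_b),
```

where x are inputs, y desirable outputs, b undesirable outputs, P(x) the production technology and g the direction in which desirable outputs are expanded and undesirable outputs contracted simultaneously.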
NASA Astrophysics Data System (ADS)
Gasparini, N. M.; Hobley, D. E. J.; Tucker, G. E.; Istanbulluoglu, E.; Adams, J. M.; Nudurupati, S. S.; Hutton, E. W. H.
2014-12-01
Computational models are important tools that can be used to quantitatively understand the evolution of real landscapes. Commonalities exist among most landscape evolution models, although they are also idiosyncratic, in that they are coded in different languages, require different input values, and are designed to tackle a unique set of questions. These differences can make applying a landscape evolution model challenging, especially for novice programmers. In this study, we compare and contrast two landscape evolution models that are designed to tackle similar questions, but whose designs are quite different. The first model, CHILD, is over a decade old and is relatively well-tested, well-developed and well-used. It is coded in C++, operates on an irregular grid, and was designed with function rather than user experience in mind. In contrast, the second model, Landlab, is relatively new and was designed to be accessible to a wide range of scientists, including those who have not previously used or developed a numerical model. Landlab is coded in Python, a relatively easy language for the non-proficient programmer, and has the ability to model landscapes described on both regular and irregular grids. We present landscape simulations from both modeling platforms. Our goal is to illustrate best practices for implementing a new process module in a landscape evolution model, and therefore the simulations are applicable regardless of the modeling platform. We contrast differences and highlight similarities between the use of the two models, including setting up the model and input file for different evolutionary scenarios, computational time, and model output. Whenever possible, we compare model output with analytical solutions and illustrate the effects, or lack thereof, of a uniform vs. non-uniform grid. Our simulations focus on implementing a single process, including detachment-limited or transport-limited fluvial bedrock incision and linear or non-linear diffusion of material on hillslopes. We also illustrate the steps necessary to couple processes together, for example, detachment-limited fluvial bedrock incision with linear diffusion on hillslopes. Trade-offs exist between the two modeling platforms, and these are primarily in speed and ease-of-use.
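To give a flavour of the Landlab side, a minimal coupled run (detachment-limited incision plus linear hillslope diffusion, as in the coupling example mentioned above) might look like the sketch below. Component names and signatures follow recent Landlab releases and are version-dependent, so they should be checked against the current documentation:

```python
import numpy as np
from landlab import RasterModelGrid
from landlab.components import FastscapeEroder, FlowAccumulator, LinearDiffuser

# 50 x 50 regular grid at 100 m spacing, with small initial roughness
grid = RasterModelGrid((50, 50), xy_spacing=100.0)
z = grid.add_zeros("topographic__elevation", at="node")
z += np.random.default_rng(0).random(z.size)

fa = FlowAccumulator(grid)                          # flow routing / drainage area
sp = FastscapeEroder(grid, K_sp=1e-5)               # detachment-limited incision
ld = LinearDiffuser(grid, linear_diffusivity=0.01)  # hillslope diffusion

dt, uplift = 1000.0, 0.001                          # yr, m/yr (toy values)
for _ in range(500):
    z[grid.core_nodes] += uplift * dt               # uniform rock uplift
    fa.run_one_step()
    sp.run_one_step(dt)
    ld.run_one_step(dt)
```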
Winkler, David A; Le, Tu C
2017-01-01
Neural networks have generated valuable Quantitative Structure-Activity/Property Relationships (QSAR/QSPR) models for a wide variety of small molecules and materials properties. They have grown in sophistication and many of their initial problems have been overcome by modern mathematical techniques. QSAR studies have almost always used so-called "shallow" neural networks in which there is a single hidden layer between the input and output layers. Recently, a new and potentially paradigm-shifting type of neural network based on Deep Learning has appeared. Deep learning methods have generated impressive improvements in image and voice recognition, and are now being applied to QSAR and QSPR modelling. This paper describes the differences in approach between deep and shallow neural networks, compares their abilities to predict the properties of test sets for 15 large drug data sets (the kaggle set), discusses the results in terms of the Universal Approximation theorem for neural networks, and describes how deep neural networks (DNNs) may ameliorate or remove troublesome "activity cliffs" in QSAR data sets. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
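The shallow-versus-deep contrast is easy to reproduce on synthetic data. The sketch below uses scikit-learn as a stand-in; the paper's DNN architectures and the kaggle data sets are not reproduced here, and the layer sizes are chosen arbitrarily:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for a QSAR task: molecular descriptors -> activity
X, y = make_regression(n_samples=2000, n_features=100, noise=0.5,
                       random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

shallow = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000,
                       random_state=0)            # one hidden layer
deep = MLPRegressor(hidden_layer_sizes=(100, 100, 100), max_iter=2000,
                    random_state=0)               # three hidden layers

for name, net in [("shallow", shallow), ("deep", deep)]:
    net.fit(Xtr, ytr)
    print(name, "R^2 on held-out set:", round(net.score(Xte, yte), 3))
```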
Model-free adaptive control of advanced power plants
Cheng, George Shu-Xing; Mulkey, Steven L.; Wang, Qiang
2015-08-18
A novel 3-Input-3-Output (3×3) Model-Free Adaptive (MFA) controller with a set of artificial neural networks as part of the controller is introduced. A 3×3 MFA control system using the inventive 3×3 MFA controller is described to control key process variables including Power, Steam Throttle Pressure, and Steam Temperature of boiler-turbine-generator (BTG) units in conventional and advanced power plants. Those advanced power plants may comprise Once-Through Supercritical (OTSC) Boilers, Circulating Fluidized-Bed (CFB) Boilers, and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.
Algorithms for output feedback, multiple-model, and decentralized control problems
NASA Technical Reports Server (NTRS)
Halyo, N.; Broussard, J. R.
1984-01-01
The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.
Efficacy of a new intraaortic propeller pump vs the intraaortic balloon pump: an animal study.
Dekker, André; Reesink, Koen; van der Veen, Erik; Van Ommen, Vincent; Geskes, Gijs; Soemers, Cecile; Maessen, Jos
2003-06-01
To compare the efficacy of a new intraaortic propeller pump (PP) to provide hemodynamic support to the intraaortic balloon pump (IABP) in an acute mitral regurgitation (MR) animal model. A new intraaortic PP (Reitan catheter pump; Jomed; Helsingborg, Sweden) recently has been introduced. The pump's aim is a reduction in afterload via a deployable propeller that is placed in the high descending aorta and can be set at rotational speeds of
Downscaling climate model output for water resources impacts assessment (Invited)
NASA Astrophysics Data System (ADS)
Maurer, E. P.; Pierce, D. W.; Cayan, D. R.
2013-12-01
Water agencies in the U.S. and around the globe are beginning to wrap climate change projections into their planning procedures, recognizing that ongoing human-induced changes to hydrology can affect water management in significant ways. Future hydrology changes are derived using global climate model (GCM) projections, though their output is at a spatial scale that is too coarse to meet the needs of those concerned with local and regional impacts. Those investigating local impacts have employed a range of techniques for downscaling, the process of translating GCM output to a more locally-relevant spatial scale. Recent projects have produced libraries of publicly-available downscaled climate projections, enabling managers, researchers and others to focus on impacts studies, drawing from a shared pool of fine-scale climate data. Besides the obvious advantage to data users, who no longer need to develop expertise in downscaling prior to examining impacts, the use of the downscaled data by hundreds of people has allowed a crowdsourcing approach to examining the data. The wide variety of applications employed by different users has revealed characteristics not discovered during the initial data set production. This has led to a deeper look at the downscaling methods, including the assumptions and effect of bias correction of GCM output. Here new findings are presented related to the assumption of stationarity in the relationships between large- and fine-scale climate, as well as the impact of quantile mapping bias correction on precipitation trends. The validity of these assumptions can influence the interpretations of impacts studies using data derived using these standard statistical methods and help point the way to improved methods.
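A minimal sketch of the quantile mapping step discussed here, using empirical quantiles and invented gamma-distributed "precipitation", illustrates the mechanism by which the correction can also reshape trends:

```python
import numpy as np

def quantile_map(gcm_hist, obs, gcm_future, n_q=100):
    """Empirical quantile mapping: find each future GCM value's quantile
    in the historical GCM distribution, then read off the observed value
    at that quantile. Values outside the historical range are clamped
    to the extreme quantiles. A minimal sketch of the standard method."""
    q = np.linspace(0.0, 1.0, n_q)
    gcm_q = np.quantile(gcm_hist, q)
    obs_q = np.quantile(obs, q)
    pos = np.interp(gcm_future, gcm_q, q)
    return np.interp(pos, q, obs_q)

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)          # "observed" daily precipitation
gcm_hist = rng.gamma(2.5, 2.0, 5000)     # biased historical GCM precip
gcm_future = rng.gamma(2.5, 2.4, 5000)   # future GCM precip (wetter)
corrected = quantile_map(gcm_hist, obs, gcm_future)
print(gcm_future.mean(), corrected.mean())  # mapping rescales the change
```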
Numerical solution of the exact cavity equations of motion for an unstable optical resonator.
Bowers, M S; Moody, S E
1990-09-20
We solve numerically, we believe for the first time, the exact cavity equations of motion for a realistic unstable resonator with a simple gain saturation model. The cavity equations of motion, first formulated by Siegman ["Exact Cavity Equations for Lasers with Large Output Coupling," Appl. Phys. Lett. 36, 412-414 (1980)], and which we term the dynamic coupled modes (DCM) method of solution, solve for the full 3-D time dependent electric field inside the optical cavity by expanding the field in terms of the actual diffractive transverse eigenmodes of the bare (gain free) cavity with time varying coefficients. The spatially varying gain serves to couple the bare cavity transverse modes and to scatter power from mode to mode. We show that the DCM method numerically converges with respect to the number of eigenmodes in the basis set. The intracavity intensity in the numerical example shown reaches a steady state, and this steady state distribution is compared with that computed from the traditional Fox and Li approach using a fast Fourier transform propagation algorithm. The output wavefronts from both methods are quite similar, and the computed output powers agree to within 10%. The usefulness and advantages of using this method for predicting the output of a laser, especially pulsed lasers used for coherent detection, are discussed.
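Schematically, the DCM expansion has the following form (notation simplified; Siegman's formulation includes the exact round-trip operators and normalizations):

```latex
E(\mathbf{r}, t) = \sum_n c_n(t)\, u_n(\mathbf{r}),
\qquad
\frac{dc_n}{dt} \;\propto\; \gamma_n\, c_n(t)
  + \sum_m c_m(t) \int g(\mathbf{r}, t)\, u_n^{*}(\mathbf{r})\, u_m(\mathbf{r})\, d\mathbf{r},
```

with u_n the bare-cavity transverse eigenmodes, γ_n their round-trip eigenvalues, and g the spatially varying saturated gain whose overlap integrals couple the modes and scatter power between them.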
Lee, Soxi; Hartstein, Neil D; Jeffs, Andrew
2015-06-01
Although the aquaculture of spiny lobsters has been expanding since the 1970s, very little is known about the potential environmental impacts on water quality of this activity. This study quantified the production of dissolved inorganic nitrogen (DIN) from Australasian red spiny lobsters, Jasus edwardsii, in the laboratory, and these data were then used in a numerical model to predict the dispersal pattern of DIN from a hypothetical commercial spiny lobster farm for a coastal site where such a farm would typically be located. Modelling scenarios were set up with combinations of two different stocking densities (3 and 5 kg m(-3)), two different diets (mussels and moist artificial diet) and three different feed conversion ratios (FCR = 3, 5 and 28). DIN excretion rate from unfed lobsters in the laboratory on average was 1.10 ± 0.12 μg N g(-1) h(-1) while feeding lobsters on mussels and artificial diet increased DIN excretion significantly by around eightfold and twofold, respectively. Ammonia was consistently the dominant contributor to measured DIN output from lobsters. Modelling results indicated that the mean elevated DIN from a hypothetical farm where the lobsters were fed with mussels ranged from 7 up to 20 μg N L(-1) with increasing stocking density and FCR and was 30-150 % higher than the mean elevated DIN resulting from lobsters fed with artificial diet. Overall, the results indicated that DIN output from the hypothetical spiny lobster sea-cage farming is unlikely to be problematic using the FCR, stocking density, and the number of cages modelled at the coastal site in this study. Furthermore, feeding lobsters with artificial diet can help maintain a lower DIN output than seafood, such as mussels or trash fish.