Sample records for model output

  1. Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation

    NASA Astrophysics Data System (ADS)

    Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi

    2016-09-01

We propose a statistical modeling method of wind power output for very short-term prediction. The modeling method uses a nonlinear model with a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step-ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
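The cascade idea above can be sketched numerically: an AR(1) process stands in for the linear dynamic part, and an empirical quantile map stands in for the static distribution-matching part. All numbers (AR order and coefficient, the gamma-distributed stand-in for wind power) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear dynamic part: an AR(1) process driven by Gaussian white noise
# (order and coefficient are illustrative, not the paper's fitted model).
phi, n = 0.9, 5000
e = rng.normal(0.0, 1.0, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

# Nonlinear static part: quantile matching shapes the model output's
# distribution to match the (here synthetic) wind-power distribution.
wind = rng.gamma(2.0, 1.5, n)                 # stand-in for observed wind power
ranks = x.argsort().argsort() / (n - 1)       # empirical CDF values of the linear output
y = np.quantile(wind, ranks)                  # model output, distribution-matched

# One-step-ahead prediction: propagate the AR part, then apply the static map.
x_next = phi * x[-1]
y_next = float(np.quantile(wind, (x < x_next).mean()))
```

The static map makes the model output's empirical distribution coincide with the target distribution while the temporal dependence still comes from the AR part.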

  2. Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)

    DOE Data Explorer

    Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.

    2015-01-01

These datasets comprise the model output from phase 1 of the FACE-MDS. They include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Included as part of this dataset are modelling and output protocols. The model datasets are formatted according to the output protocols. Phase 1 datasets are reproduced here for posterity and reproducibility, although the model output for the experimental period has been somewhat superseded by the Phase 2 datasets.

  3. Updated Model of the Solar Energetic Proton Environment in Space

    NASA Astrophysics Data System (ADS)

    Jiggens, Piers; Heynderickx, Daniel; Sandberg, Ingmar; Truscott, Pete; Raukunen, Osku; Vainio, Rami

    2018-05-01

    The Solar Accumulated and Peak Proton and Heavy Ion Radiation Environment (SAPPHIRE) model provides environment specification outputs for all aspects of the Solar Energetic Particle (SEP) environment. The model is based upon a thoroughly cleaned and carefully processed data set. Herein the evolution of the solar proton model is discussed with comparisons to other models and data. This paper discusses the construction of the underlying data set, the modelling methodology, optimisation of fitted flux distributions and extrapolation of model outputs to cover a range of proton energies from 0.1 MeV to 1 GeV. The model provides outputs in terms of mission cumulative fluence, maximum event fluence and peak flux for both solar maximum and solar minimum periods. A new method for describing maximum event fluence and peak flux outputs in terms of 1-in-x-year SPEs is also described. SAPPHIRE proton model outputs are compared with previous models including CREME96, ESP-PSYCHIC and the JPL model. Low energy outputs are compared to SEP data from ACE/EPAM whilst high energy outputs are compared to a new model based on GLEs detected by Neutron Monitors (NMs).
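The record's notion of a 1-in-x-year SPE can be illustrated with a small sketch: given a catalogue of event peak fluxes over a known observation window, the 1-in-x-year level is the flux whose empirical annual exceedance rate is 1/x. The catalogue below is synthetic and the log-log interpolation is an assumption; SAPPHIRE fits parametric flux distributions to its cleaned data set rather than interpolating raw counts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical SPE peak-flux catalogue (arbitrary units) over an assumed
# 20-year observation window (synthetic lognormal draws).
T_years = 20.0
peaks = np.sort(rng.lognormal(2.0, 1.0, 120))[::-1]     # descending peak fluxes
rate = np.arange(1, peaks.size + 1) / T_years           # annual exceedance rate per level

def one_in_x_year_flux(x_years):
    """Peak flux exceeded on average once every x_years (log-log interpolation)."""
    return float(np.exp(np.interp(np.log(1.0 / x_years), np.log(rate), np.log(peaks))))
```

A rarer event (larger x) maps to a higher flux level, which is the monotonicity the 1-in-x-year description relies on.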

  4. General Circulation Model Output for Forest Climate Change Research and Applications

    Treesearch

    Ellen J. Cooter; Brian K. Eder; Sharon K. LeDuc; Lawrence Truppi

    1993-01-01

    This report reviews technical aspects of and summarizes output from four climate models. Recommendations concerning the use of these outputs in forest impact assessments are made. This report reviews technical aspects of and summarizes output from four climate models. Recommendations concerning the use of these outputs in forest impact assessments are made.

  5. A two-stage DEA approach for environmental efficiency measurement.

    PubMed

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

The slacks-based measure (SBM) model based on constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, when measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage successfully solves the "dependence" problem of outputs, that is, that we cannot increase the desirable outputs without producing any undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of decision making units.
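As a rough illustration of DEA with undesirable outputs, the sketch below scores a toy set of decision making units with an input-oriented CCR model, treating the undesirable output as an extra input. This is a common simplification, not the slacks-based or two-stage network model of the paper, and all data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 5 DMUs, 1 input, 1 desirable output, 1 undesirable output.
X = np.array([[2.0, 3.0, 6.0, 9.0, 5.0]])        # inputs
Yg = np.array([[1.0, 3.0, 5.0, 7.0, 4.0]])       # desirable outputs
Yb = np.array([[1.0, 2.0, 4.0, 8.0, 2.0]])       # undesirable outputs

Xall = np.vstack([X, Yb])                        # undesirable outputs join the input side
n = X.shape[1]

def ccr_efficiency(o):
    """Input-oriented CCR envelopment LP for DMU o: min theta over [theta, lambda]."""
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(Xall.shape[0]):               # sum_j lam_j x_ij <= theta * x_io
        A_ub.append(np.r_[-Xall[i, o], Xall[i]]); b_ub.append(0.0)
    for r in range(Yg.shape[0]):                 # sum_j lam_j y_rj >= y_ro
        A_ub.append(np.r_[0.0, -Yg[r]]); b_ub.append(-Yg[r, o])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1))
    return float(res.x[0])

scores = [ccr_efficiency(o) for o in range(n)]
```

Efficient units score 1; units dominated by some convex-cone combination of peers score below 1.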

  6. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.
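The core computation behind any input-output model, mixed-unit or not, is the Leontief total-requirements calculation: solve (I − A)x = d for the total output x needed to meet final demand d across the whole supply chain. The sketch below uses an invented 3-sector matrix in which one row carries physical units, mimicking the mixed-unit idea; it is not the 13-sector BEA matrix of the paper.

```python
import numpy as np

# Invented 3-sector direct-requirements matrix A; column j gives the inputs
# needed per unit output of sector j. Rows 1-2 are in dollars, row 3 in kg,
# mimicking the mixed-unit idea (units need only be consistent within a row).
A = np.array([[0.10, 0.20, 0.05],
              [0.05, 0.10, 0.10],
              [0.02, 0.04, 0.01]])
demand = np.array([100.0, 50.0, 10.0])    # final demand ($, $, kg)

# Leontief total requirements: direct plus indirect output across the supply chain.
total_output = np.linalg.solve(np.eye(3) - A, demand)
```

Because indirect requirements ripple through the matrix, total output exceeds final demand in every sector.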

  7. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
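A minimal numerical sketch of the multi-fidelity idea: train a Gaussian-process model on the discrepancy between a cheap low-fidelity function and a few expensive high-fidelity evaluations, so the posterior mean corrects the low-fidelity output and the posterior variance characterizes prediction uncertainty. The two analytic "models" and the RBF kernel settings are assumptions for illustration, not the paper's reduced-basis construction.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel with unit prior variance (assumed settings)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# Stand-in models of one input on [0, 1]; the high-fidelity one is "expensive".
lo_model = lambda x: np.sin(2 * np.pi * x)
hi_model = lambda x: np.sin(2 * np.pi * x) + 0.3 * x

# GP on the discrepancy at a few high-fidelity training inputs.
Xt = np.array([0.0, 0.3, 0.6, 1.0])
yt = hi_model(Xt) - lo_model(Xt)
K = rbf(Xt, Xt) + 1e-8 * np.eye(Xt.size)
alpha = np.linalg.solve(K, yt)

def predict(xs):
    """Posterior mean and variance of the high-fidelity output at inputs xs."""
    Ks = rbf(xs, Xt)
    mean = lo_model(xs) + Ks @ alpha
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, np.maximum(var, 0.0)

mean, var = predict(np.linspace(0.0, 1.0, 11))
```

The variance collapses at the training inputs and grows away from them, which is the uncertainty-quantifying behaviour the paper exploits when selecting training inputs greedily.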

  8. Pandemic recovery analysis using the dynamic inoperability input-output model.

    PubMed

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model(1,2) and the dynamic inoperability input-output model (DIIM).(3) These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
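One commonly cited discrete-time form of the DIIM propagates sector inoperability q through an interdependency matrix A* with a diagonal resilience matrix K, via q(t+1) = q(t) + K[A*q(t) + c*(t) − q(t)]. The sketch below simulates recovery from an initial shock to one sector; all matrix values are illustrative, not calibrated to any economy.

```python
import numpy as np

# Illustrative 3-sector interdependency matrix A* and resilience matrix K
# (values invented, not calibrated data).
Astar = np.array([[0.0, 0.2, 0.1],
                  [0.3, 0.0, 0.2],
                  [0.1, 0.1, 0.0]])
K = np.diag([0.4, 0.3, 0.5])              # sector recovery (resilience) rates

q = np.array([0.5, 0.0, 0.0])             # initial inoperability: sector 1 hit
cstar = np.zeros(3)                       # no sustained demand perturbation

history = [q.copy()]
for _ in range(60):
    q = q + K @ (Astar @ q + cstar - q)   # DIIM recursion
    history.append(q.copy())
history = np.array(history)
```

The shock ripples into the initially unaffected sectors through A* and then decays toward full recovery, the qualitative behaviour the DIIM is designed to capture.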

  9. [Ecological management model of agriculture-pasture ecotone based on the theory of energy and material flow--a case study in Houshan dryland area of Inner Mongolia].

    PubMed

    Fan, Jinlong; Pan, Zhihua; Zhao, Ju; Zheng, Dawei; Tuo, Debao; Zhao, Peiyi

    2004-04-01

The degradation of the ecological environment in the agriculture-pasture ecotone of northern China has received increasing attention. Based on many years of research and guided by the theory of energy and material flow, this paper puts forward an ecological management model, with a hill as the basic cell, tailored to the natural, social, and economic characteristics of the Houshan dryland farming area inside the northern agriculture-pasture ecotone. The inputs and outputs of three models, i.e., the traditional along-slope-tillage model, the artificial grassland model, and the ecological management model, were observed and recorded in detail in 1999. Energy and material flow analysis based on field tests showed that, compared with the traditional model, the ecological management model could increase solar use efficiency by 8.3%, energy output by 8.7%, energy conversion efficiency by 19.4%, N output by 26.5%, N conversion efficiency by 57.1%, P output by 12.1%, P conversion efficiency by 45.0%, and water use efficiency by 17.7%. Among the models, the artificial grassland model had the lowest solar use efficiency, energy output, and energy conversion efficiency, while the ecological management model had the highest outputs and benefits and performed best economically, increasing economic benefits by 16.1% compared with the traditional model.

  10. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
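The emulation-plus-dimension-reduction recipe can be sketched in a few lines: project training output fields onto leading principal components, fit a cheap regression from inputs to component scores, and reconstruct predicted fields from predicted scores. The sketch substitutes a linear least-squares fit for the Gaussian-process emulators used in the study, and the "climate model" is a synthetic analytic field.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic training set: 40 runs of a "model" mapping a scalar forcing x to a
# 100-point output field (stand-in for PLASIM fields driven by GENIE-1 forcing).
x = rng.uniform(0.0, 1.0, 40)
grid = np.linspace(0.0, 1.0, 100)
fields = np.sin(2 * np.pi * grid)[None, :] * x[:, None] + 0.1 * x[:, None] ** 2

# Dimensionality reduction: keep the leading principal components of the fields.
mean_field = fields.mean(axis=0)
_, _, Vt = np.linalg.svd(fields - mean_field, full_matrices=False)
k = 2
scores = (fields - mean_field) @ Vt[:k].T

# Emulator: regression from input to PC scores (a linear fit stands in for the
# Gaussian-process emulators used in the study).
design = np.c_[np.ones_like(x), x]
coef, *_ = np.linalg.lstsq(design, scores, rcond=None)

def emulate(x_new):
    s = np.c_[np.ones_like(x_new), x_new] @ coef
    return mean_field + s @ Vt[:k]

pred = emulate(np.array([0.5]))[0]
```

Emulating a handful of scores instead of the full field is what makes the approach cheap even when the output is high-dimensional.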

  11. A Spectral Method for Spatial Downscaling

    PubMed Central

    Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.

    2014-01-01

Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037
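The scale-separation idea can be illustrated by splitting both model output and observations into large- and small-scale parts with a Fourier cutoff and comparing them per scale. The synthetic "model" below is accurate at large scales but noisy at small ones, reproducing the CMAQ-like behaviour the abstract describes; the hard spectral cutoff is a simplification of the paper's spectral method.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.arange(n) / n

# Synthetic truth and a "model" that captures the large-scale signal but is
# noisy at small scales (illustrative stand-ins for ozone obs and CMAQ output).
truth = np.sin(2 * np.pi * x) + 0.2 * np.sin(40 * np.pi * x)
model = 0.8 * np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=n)

def split(y, cutoff=5):
    """Separate a periodic 1-D field into large-/small-scale parts (hard Fourier cutoff)."""
    f = np.fft.rfft(y)
    f[cutoff:] = 0.0
    large = np.fft.irfft(f, n)
    return large, y - large

def corr(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

obs_L, obs_S = split(truth)
mod_L, mod_S = split(model)

corr_large = corr(obs_L, mod_L)   # high: model captures the large-scale trend
corr_small = corr(obs_S, mod_S)   # low: model uninformative at small scales
```

Calibrating the two scales separately then lets the model inform prediction only where it actually correlates with the observations.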

  12. Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Andrew W; Leung, Lai R; Sridhar, V

Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
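The bias-correction step of BCSD is typically a quantile mapping: each model value is replaced by the observed value at the same quantile of the model's historical distribution, which also transfers a simulated future shift onto the observed climatology. The sketch below uses synthetic gamma-distributed "precipitation"; the empirical-quantile implementation is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily precipitation: observations vs a biased GCM historical run,
# plus a future run whose distribution is ~20% wetter (all values invented).
obs = rng.gamma(0.8, 8.0, 3000)
gcm_hist = rng.gamma(1.5, 3.0, 3000)
gcm_fut = rng.gamma(1.5, 3.6, 3000)

def quantile_map(x, model_ref, obs_ref):
    """Replace each model value by the observed value at the same quantile
    of the historical model distribution (the bias-correction step of BCSD)."""
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

corrected_hist = quantile_map(gcm_hist, gcm_hist, obs)
corrected_fut = quantile_map(gcm_fut, gcm_hist, obs)
```

The corrected historical run reproduces the observed climatology, while the future run's wet shift survives the mapping as higher quantiles of the observed distribution.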

  13. H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.

    PubMed

    Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua

    2014-10-01

This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. H∞ control performance for the closed-loop system including the standard neural network model, the reference model, and state feedback controller is analyzed using the Lyapunov-Krasovskii stability theorem and linear matrix inequality (LMI) approach. The H∞ controller, whose parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.

  14. Model reference adaptive control of flexible robots in the presence of sudden load changes

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory

    1991-01-01

Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics for a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward augmented output follows the reference model output, thus resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady state model following error, and thus encourage further use of MRAC for more complex flexible robotic systems.

  15. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

This paper develops techniques for constructing empirical predictor models based on observations. By contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
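A minimal IPM can be computed as a linear program: find two bounding lines of minimal average spread such that every observation lies between them. The sketch below does exactly that for noisy 1-D synthetic data; it omits the paper's outlier elimination and reliability bounds.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
x = rng.uniform(0.0, 1.0, 30)
y = 2.0 * x + 1.0 + rng.uniform(-0.3, 0.3, 30)     # synthetic observations

# Variables [a_lo, b_lo, a_hi, b_hi] define bounding lines a + b*x; minimise the
# average interval width subject to every observation lying inside the interval.
c = np.array([-1.0, -x.mean(), 1.0, x.mean()])
ones, zeros = np.ones_like(x), np.zeros_like(x)
A_ub = np.vstack([np.c_[ones, x, zeros, zeros],    # lower line <= y
                  np.c_[zeros, zeros, -ones, -x]]) # y <= upper line
b_ub = np.r_[y, -y]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)
a_lo, b_lo, a_hi, b_hi = res.x
lower, upper = a_lo + b_lo * x, a_hi + b_hi * x
```

Every observation falls inside the predicted interval by construction, and the optimal spread reflects the noise band in the data rather than a single point estimate.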

  16. Algorithms for output feedback, multiple-model, and decentralized control problems

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.

  17. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, S; Ahmad, S; Chen, Y

    2016-06-15

Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This model is a correction-based model that multiplies correction factors (D/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors accounted for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, outputs at over 1,000 data points were taken at the time of the system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering all 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P−M]/M × 100%). Results: A GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03±0.98% (mean±SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be clinically used for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to substantial output change caused by irregular block shape.
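The correction-based form of the model is just a product of interpolated factors. The sketch below mirrors that structure with 1-D table lookups and an inverse-square term; every table value is invented for illustration and is not Mevion S250 commissioning data.

```python
import numpy as np

# Hypothetical commissioning tables (illustrative numbers only).
mod_widths = np.array([2.0, 5.0, 10.0, 16.0])          # modulation width M (cm)
sobpf_tab = np.array([1.00, 0.97, 0.93, 0.88])         # SOBP factor vs M
fs_sizes = np.array([5.0, 10.0, 15.0, 25.0])           # field size (cm)
fsf_tab = np.array([0.96, 1.00, 1.01, 1.02])           # field-size factor
gantry = np.array([0.0, 90.0, 180.0, 270.0])           # gantry angle (deg)
gacf_tab = np.array([1.000, 0.985, 1.005, 0.990])      # gantry-angle factor

def output_cgy_per_mu(rof, rsf, ocr, M, fs, angle, ssd, ssd_ref=190.0):
    """Output as a product of correction factors, mirroring the model's
    ROF x SOBPF x RSF x OCR x FSF x ISF x GACF structure."""
    sobpf = np.interp(M, mod_widths, sobpf_tab)        # 1-D interpolation
    fsf = np.interp(fs, fs_sizes, fsf_tab)
    gacf = np.interp(angle, gantry, gacf_tab)
    isf = (ssd_ref / ssd) ** 2                         # inverse-square factor
    return rof * sobpf * rsf * ocr * fsf * isf * gacf

d_per_mu = output_cgy_per_mu(rof=1.0, rsf=0.98, ocr=1.0,
                             M=8.0, fs=12.0, angle=45.0, ssd=190.0)
```

In the paper's model, RSF and OCR come from 2-D interpolation of commissioning tables; scalars are used here purely to keep the sketch short.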

  18. When causality does not imply correlation: more spadework at the foundations of scientific psychology.

    PubMed

    Marken, Richard S; Horth, Brittany

    2011-06-01

    Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
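The closed-loop argument is easy to reproduce in simulation: a controller that integrates the cursor position to cancel a slowly varying disturbance yields a near-perfect (negative) correlation between disturbance and mouse output, yet almost no correlation between cursor input and mouse output. Gains and disturbance bandwidth below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
# Slowly varying disturbance: moving average of white noise (arbitrary bandwidth).
d = np.convolve(rng.normal(size=n + 200), np.ones(200) / 200, mode='valid')[:n]

k = 0.5                                    # controller gain (arbitrary)
m = np.zeros(n)                            # mouse position (motor output)
c = np.zeros(n)                            # cursor position (sensory input)
for t in range(1, n):
    c[t - 1] = d[t - 1] + m[t - 1]         # cursor = disturbance + mouse effect
    m[t] = m[t - 1] - k * c[t - 1]         # move mouse to cancel the cursor error
c[-1] = d[-1] + m[-1]

r_input_output = np.corrcoef(c, m)[0, 1]   # low despite input "causing" output
r_disturb_output = np.corrcoef(d, m)[0, 1] # near -1: output mirrors the disturbance
```

Good control keeps the cursor error small and nearly uncorrelated with the mouse, which is exactly the "causality without correlation" pattern the paper reports.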

  19. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprised of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current, and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
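The propagation scheme described, sampling each model's empirical residual distribution and pushing the draws through the model chain, can be sketched with a toy three-step PV chain. The chain coefficients and residual distributions below are invented; the report derives them from measured data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy chain: GHI -> plane-of-array irradiance -> effective irradiance -> DC power.
# Residual distributions are synthetic stand-ins for the report's empirical ones.
poa_resid = rng.normal(0.0, 20.0, 500)    # W/m^2 residuals of the POA model
eff_resid = rng.normal(0.0, 0.01, 500)    # unitless residuals
pdc_resid = rng.normal(0.0, 3.0, 500)     # W residuals of the power model

def pv_chain(ghi, sample=False):
    """Run the model chain, optionally adding a resampled residual at each step."""
    poa = 1.1 * ghi + (rng.choice(poa_resid) if sample else 0.0)
    eff = poa / 1000.0 + (rng.choice(eff_resid) if sample else 0.0)
    return 250.0 * eff + (rng.choice(pdc_resid) if sample else 0.0)

nominal = pv_chain(800.0)
draws = np.array([pv_chain(800.0, sample=True) for _ in range(2000)])
rel_uncertainty = draws.std() / nominal   # uncertainty propagated through the chain
```

The spread of the draws is the empirical output distribution; comparing spreads with individual residual terms switched on or off is one way to do the sensitivity attribution the report describes.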

  20. Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.

    PubMed

    Fintelman, D M; Sterling, M; Hemida, H; Li, F-X

    2014-06-03

The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces the drag, it limits the physiological functioning of the cyclist. Therefore the aims of this study were to predict the optimal TT cycling position as a function of cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models, the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable in sprinting or in variable conditions (wind, undulating course, etc.). It is suggested that despite some limitations, the models give valuable information about improving cycling performance by optimising the TT position.

  1. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2015-12-01

Agricultural production typically yields two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, the interval data approach has been found to be the most suitable for handling data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model is used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.

  2. RESULTS FROM KINEROS STREAM CHANNEL ELEMENTS MODEL OUTPUT THROUGH AGWA DIFFERENCING 1973 AND 1997 NALC LANDCOVER DATA

    EPA Science Inventory

    Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.

  3. Predicting the synaptic information efficacy in cortical layer 5 pyramidal neurons using a minimal integrate-and-fire model.

    PubMed

    London, Michael; Larkum, Matthew E; Häusser, Michael

    2008-11-01

Synaptic information efficacy (SIE) is a statistical measure to quantify the efficacy of a synapse. It measures how much information is gained, on average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current. This indicates that in this context, some spikes may be more important than others. Lastly, we demonstrate another aspect in which mutual information could be beneficial in evaluating the quality of a model: measuring the mutual information between the model's output and the neuron's output. The SIE could thus be a useful tool for assessing the quality of single-neuron models in preserving the input-output relationship, a property that becomes crucial when we start connecting these reduced models to construct complex realistic neuronal networks.

  4. Predicting High-Power Performance in Professional Cyclists.

    PubMed

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
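    The power-duration relationship described above amounts to an exponential decay from sprint peak power toward maximal aerobic power as effort duration grows. A minimal sketch (the rider values and decay constant below are hypothetical, not taken from the study):

```python
from math import exp

def apr_power(t, map_w, peak_w, k):
    """Predicted maximal mean power (W) for an all-out effort of duration t (s):
    power decays exponentially from sprint peak power toward maximal aerobic
    power (MAP), per the anaerobic power reserve idea."""
    return map_w + (peak_w - map_w) * exp(-k * t)

# Hypothetical rider: MAP 430 W, sprint peak 1300 W, decay constant 0.025 1/s
for t in (5, 30, 120, 300):
    print(t, round(apr_power(t, 430, 1300, 0.025)))
```

    At t = 0 the model returns sprint peak power, and for long durations it asymptotes to MAP, which is the qualitative behavior the abstract describes.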

  5. Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model

    DTIC Science & Technology

    2017-03-01

    set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction...We propose two optimization models. The first, the Trade Sanction Inoperability Input-output Model (TS-IIM), selects the sector or set of sectors that...Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection . Unpublished doctoral dissertation

  6. Advances in a distributed approach for ocean model data interoperability

    USGS Publications Warehouse

    Signell, Richard P.; Snowden, Derrick P.

    2014-01-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.

  7. Catchment virtual observatory for sharing flow and transport models outputs: using residence time distribution to compare contrasting catchments

    NASA Astrophysics Data System (ADS)

    Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald

    2017-04-01

    The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, the residence time distribution is a general output that can be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and the robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using its post-processing methods to implement new approaches to analyzing their data. Field scientists interested in contributing data can invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison

  8. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.

  9. Balancing the stochastic description of uncertainties as a function of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.

    2016-12-01

    Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first-order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties can be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and on the probabilistic error description. The latter can include output uncertainty only, if the model is computationally expensive, or, with simpler models, it can separately account for different sources of errors, such as those in the inputs and in the structure of the model.
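    The first-order autoregressive error description mentioned above can be sketched in a few lines: each output error is a damped copy of the previous one plus fresh noise, which keeps adjacent errors correlated. The innovation values below are illustrative; the full Bayesian inference of the coefficient and noise scale is beyond this sketch.

```python
def ar1_errors(innovations, phi):
    """First-order autoregressive errors: e_t = phi * e_{t-1} + w_t.
    Adjacent output errors stay correlated, mimicking the 'downstream'
    effect of inaccurate inputs and model structure on a hydrograph."""
    errors = []
    prev = 0.0
    for w in innovations:
        prev = phi * prev + w
        errors.append(prev)
    return errors

# White-noise innovations (illustrative values) filtered into correlated errors
w = [1.0, 0.0, 0.0, -1.0, 0.0]
print(ar1_errors(w, 0.5))  # [1.0, 0.5, 0.25, -0.875, -0.4375]
```

    Note how a single shock decays geometrically over several time steps instead of vanishing immediately, which is what makes the resulting prediction intervals realistic for autocorrelated residuals.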

  10. Alpha1 LASSO data bundles Lamont, OK

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID: 0000-0001-8828-528X)

    2016-08-03

    A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  11. Heart Performance Determination by Visualization in Larval Fishes: Influence of Alternative Models for Heart Shape and Volume

    PubMed Central

    Perrichon, Prescilla; Grosell, Martin; Burggren, Warren W.

    2017-01-01

    Understanding cardiac function in developing larval fishes is crucial for assessing their physiological condition and overall health. Cardiac output measurements in transparent fish larvae and other vertebrates have long been made by analyzing videos of the beating heart and modeling this structure using a conventional simple prolate spheroid shape model. However, the larval fish heart changes shape during early development and subsequent maturation, yet no consideration has been made of the effect of different heart geometries on cardiac output estimation. The present study assessed the validity of three different heart models (the “standard” prolate spheroid model as well as a cylinder and a cone tip + cylinder model) applied to digital images of complete cardiac cycles in larval mahi-mahi and red drum. The inherent error of each model was determined to allow for more precise calculation of stroke volume and cardiac output. The conventional prolate spheroid and cone tip + cylinder models yielded significantly different stroke volume values at 56 hpf in red drum and from 56 to 104 hpf in mahi. End-diastolic and stroke volumes modeled by just a simple cylinder shape were 30–50% higher compared to the conventional prolate spheroid. However, when these values of stroke volume were multiplied by heart rate to calculate cardiac output, no significant differences between models emerged because of considerable variability in heart rate. Essentially, the conventional prolate spheroid shape model provides the simplest measurement with the lowest variability of stroke volume and cardiac output. However, assessment of heart function—especially if stroke volume is the focus of the study—should consider larval heart shape, with different models being applied on a species-by-species and developmental stage-by-stage basis for best estimation of cardiac output. PMID:28725199
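    The geometric difference between the shape models is easy to make concrete: for the same length L and diameter D, a prolate spheroid has volume πLD²/6 while a cylinder has πLD²/4, i.e. exactly 50% more, consistent with the upper end of the inflation reported above. The dimensions and heart rate below are hypothetical, purely for illustration of the stroke-volume and cardiac-output arithmetic.

```python
from math import pi

def prolate_spheroid_volume(length, diameter):
    # V = (4/3) * pi * (L/2) * (D/2)**2 = pi * L * D^2 / 6
    return pi * length * diameter ** 2 / 6

def cylinder_volume(length, diameter):
    # V = pi * (D/2)**2 * L = pi * L * D^2 / 4
    return pi * length * diameter ** 2 / 4

# Hypothetical larval-heart dimensions (mm) at end-diastole and end-systole
edv = prolate_spheroid_volume(0.30, 0.20)   # end-diastolic volume
esv = prolate_spheroid_volume(0.24, 0.15)   # end-systolic volume
stroke_volume = edv - esv
heart_rate = 180                            # beats per minute (hypothetical)
cardiac_output = stroke_volume * heart_rate # mm^3 per minute

# Identical dimensions modeled as a cylinder give exactly 1.5x the spheroid volume
ratio = cylinder_volume(0.30, 0.20) / prolate_spheroid_volume(0.30, 0.20)
print(round(ratio, 3))  # 1.5
```

    Because both end-diastolic and end-systolic volumes scale by the same factor, the choice of shape model rescales stroke volume but leaves relative comparisons across treatments intact, which is why heart-rate variability dominates the cardiac-output comparison in the study.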

  12. Evaluation of input output efficiency of oil field considering undesirable output —A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on the input and output data of a sandstone reservoir in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. Results show that the SBM-Undesirable model avoids the defects caused by the radial and angular assumptions of the traditional DEA model and improves the accuracy of the efficiency evaluation. By analyzing the projections of the oil blocks, we find that each block suffers the negative external effects of input redundancy, deficiency of desirable output, and undesirable output, and that there are large differences in production efficiency among the blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output and increase the desirable output.

  13. Approximate Optimal Control as a Model for Motor Learning

    ERIC Educational Resources Information Center

    Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.

    2005-01-01

    Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…

  14. Use of Advanced Meteorological Model Output for Coastal Ocean Modeling in Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Khangaonkar, Tarang; Wang, Taiping

    2011-06-01

    It is a great challenge to specify meteorological forcing in estuarine and coastal circulation modeling using observed data because of the lack of complete datasets. As a result of this limitation, water temperature is often not simulated in estuarine and coastal modeling, with the assumption that density-induced currents are generally dominated by salinity gradients. However, in many situations, temperature gradients can be sufficiently large to influence the baroclinic motion. In this paper, we present an approach to simulate water temperature using outputs from advanced meteorological models. This modeling approach was applied to simulate annual variations of water temperatures of Puget Sound, a fjordal estuary in the Pacific Northwest of the USA. Meteorological parameters from North American Regional Reanalysis (NARR) model outputs were evaluated through comparisons with observed data at real-time meteorological stations. Model results demonstrated that NARR outputs can be used to drive coastal ocean models for realistic simulations of long-term water-temperature distributions in Puget Sound. Model results indicated that the net flux from NARR can be further improved with additional information from real-time observations.

  15. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com

    Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable way to account for data uncertainty, as it is much simpler to model and needs less information regarding the data's distribution and membership function. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.

  16. User assessment of smoke-dispersion models for wildland biomass burning.

    Treesearch

    Steve Breyfogle; Sue A. Ferguson

    1996-01-01

    Several smoke-dispersion models, which currently are available for modeling smoke from biomass burns, were evaluated for ease of use, availability of input data, and output data format. The input and output components of all models are listed, and differences in model physics are discussed. Each model was installed and run on a personal computer with a simple-case...

  17. Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters

    NASA Technical Reports Server (NTRS)

    Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.

    1989-01-01

    The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.

  19. A spectral method for spatial downscaling

    EPA Pesticide Factsheets

    Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this paper, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing ch
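    The scale-separation idea can be imitated with an elementary decomposition: split both model output and observations into a smooth large-scale part and a small-scale residual, then compare them scale by scale. A moving average stands in for the paper's spectral decomposition, and the transect values are invented:

```python
def split_scales(xs, window):
    """Separate a spatial series into a large-scale component (moving average)
    and a small-scale residual. A crude stand-in for a spectral decomposition."""
    half = window // 2
    large = [sum(xs[max(0, i - half):i + half + 1]) /
             len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]
    small = [x - l for x, l in zip(xs, large)]
    return large, small

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Toy transect: the model tracks the broad trend but not the fine detail
obs =   [10, 12, 15, 14, 18, 21, 19, 23]
model = [11, 11, 14, 16, 17, 20, 21, 22]
lo_o, hi_o = split_scales(obs, 3)
lo_m, hi_m = split_scales(model, 3)
print(round(correlation(lo_o, lo_m), 2), round(correlation(hi_o, hi_m), 2))
```

    The large-scale components correlate strongly while the residuals need not, mirroring the CMAQ finding above; a calibration scheme would then trust the model only at the scales where the correlation is high.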

  20. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...

  1. Community Coordinated Modeling Center Support of Science Needs for Integrated Data Environment

    NASA Technical Reports Server (NTRS)

    Kuznetsova, M. M.; Hesse, M.; Rastatter, L.; Maddox, M.

    2007-01-01

    Space science models are an essential component of an integrated data environment. They are indispensable tools for making effective use of a wide variety of distributed scientific sources and for placing multi-point local measurements into global context. The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. The majority of models residing at the CCMC are comprehensive, computationally intensive, physics-based models. To allow the models to be driven by data relevant to particular events, the CCMC developed an online data file generation tool that automatically downloads data from data providers and transforms them into the required format. The CCMC provides a tailored web-based visualization interface for the model output, as well as the capability to download simulation output in a portable standard format with comprehensive metadata and a user-friendly model-output analysis library of routines that can be called from any language that supports calling C. The CCMC is developing data interpolation tools that enable model output to be presented in the same format as observations. The CCMC invites community comments and suggestions to better address science needs for the integrated data environment.

  2. Available pressure amplitude of linear compressor based on phasor triangle model

    NASA Astrophysics Data System (ADS)

    Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.

    2017-12-01

    The linear compressor for cryocoolers possesses the advantages of long-life operation, high efficiency, low vibration and compact structure. It is significant to study the match mechanism between the compressor and the cold finger, which determines the working efficiency of the cryocooler. However, the output characteristics of a linear compressor are complicated, since they are affected by many interacting parameters. The existing matching methods are simplified and mainly focus on the compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on an analysis of the forces on the piston is proposed. It can be used to predict not only the output acoustic power and the efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with the measurement results of the experiment. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides an intuitive understanding of the match mechanism with a faster computational process. The model can also explain the experimentally observed proportional relationship between the output pressure amplitude and the piston displacement. By further model analysis, this phenomenon is confirmed to be an expression of the unmatched design of the compressor. The phasor triangle model may provide an alternative method for compressor design and matching with the cold finger.

  3. Obs4MIPS: Satellite Observations for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2017-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to do evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available and being developed for the CMIP6 experiments.

  4. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

    PubMed

    Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

    2017-01-01

    To predict the output power of a photovoltaic system subject to nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed at different scales using EMD into a series of components, including intrinsic mode components IMFn and a trend component Res. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system are obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
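    The decompose-predict-reconstruct pipeline described above can be sketched end to end. The stand-ins are deliberate and loud: a moving-average trend split instead of true EMD, and a last-value persistence forecast instead of an ABC-tuned SVM per component; only the pipeline shape carries over from the abstract.

```python
def moving_average(xs, k):
    """Windowed average used here as a stand-in trend component."""
    half = k // 2
    return [sum(xs[max(0, i - half):i + half + 1]) /
            len(xs[max(0, i - half):i + half + 1]) for i in range(len(xs))]

def decompose(xs, k=3):
    """Split a series into a smooth trend plus residual.
    NOTE: a crude stand-in for EMD; real EMD yields several IMFs."""
    trend = moving_average(xs, k)
    residual = [x - t for x, t in zip(xs, trend)]
    return trend, residual

def persistence_forecast(component):
    """Stand-in per-component predictor (the paper uses an ABC-tuned SVM)."""
    return component[-1]

# 15-minute PV output samples (illustrative values, arbitrary units)
power = [0.0, 0.4, 1.1, 2.0, 2.6, 3.1, 3.3]
trend, residual = decompose(power)
# The decomposition is exact: components sum back to the original series
assert all(abs(t + r - x) < 1e-12 for t, r, x in zip(trend, residual, power))
# Forecast each component separately, then reconstruct the total prediction
prediction = persistence_forecast(trend) + persistence_forecast(residual)
print(round(prediction, 3))
```

    The key property mirrored here is that the components partition the signal exactly, so per-component predictions can simply be summed to reconstruct the final forecast.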

  5. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction

    PubMed Central

    2017-01-01

    To predict the output power of a photovoltaic system subject to nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed at different scales using EMD into a series of components, including intrinsic mode components IMFn and a trend component Res. The corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system are obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803

  6. Input-output model for MACCS nuclear accident impacts estimation¹

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    Since the original economic model for MACCS was developed, better-quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
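    Input-Output impact estimation of the kind described rests on the Leontief relation x = (I - A)^(-1) f: the total sector output x needed to satisfy final demand f, given technical coefficients A. A toy two-sector sketch follows; the coefficients and demand values are invented, not REAcct or MACCS data.

```python
def leontief_output(A, f):
    """Solve x = A x + f, i.e. x = (I - A)^(-1) f, for a small economy
    via Gauss-Jordan elimination on the augmented matrix (I - A | f)."""
    n = len(A)
    M = [[(1.0 if i == j else 0.0) - A[i][j] for j in range(n)] + [f[i]]
         for i in range(n)]
    for col in range(n):
        pivot = M[col][col]
        M[col] = [v / pivot for v in M[col]]          # normalize pivot row
        for row in range(n):
            if row != col:                            # eliminate other rows
                factor = M[row][col]
                M[row] = [a - factor * b for a, b in zip(M[row], M[col])]
    return [M[i][n] for i in range(n)]

# Hypothetical technical coefficients: A[i][j] = input of sector i per unit
# of sector j's output
A = [[0.2, 0.3],
     [0.1, 0.4]]
baseline  = leontief_output(A, [100.0, 50.0])
disrupted = leontief_output(A, [100.0, 25.0])  # accident halves sector-2 demand
loss = [b - d for b, d in zip(baseline, disrupted)]
print([round(v, 2) for v in loss])  # [16.67, 44.44]
```

    Note the interdependency effect the abstract alludes to: cutting only sector 2's final demand also reduces sector 1's required output, because sector 2 buys sector 1's goods as inputs.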

  7. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.

  8. Large-Signal Klystron Simulations Using KLSC

    NASA Astrophysics Data System (ADS)

    Carlsten, B. E.; Ferguson, P.

    1997-05-01

    We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.

  9. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and a lot of attention. Rice farming is one such production process, typically producing two types of outputs: economically desirable and environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable because the effects of uncertain data are not considered. In this case, the interval data approach has been found suitable for handling data uncertainty, as it is much simpler to model and needs less information about the underlying data distribution and membership function. The proposed model uses an enhanced DEA formulation based on the DDF approach that incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs and desirable outputs. Two separate slack-based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah, and the results were compared to those obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It is also found that the average efficiency value of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. This confirms the hypothesis, since farmers in the optimistic scenario operate in the best production situation, while those in the pessimistic scenario operate in the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.
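    For readers unfamiliar with DEA, the deterministic baseline that interval models extend can be sketched in a few lines. The toy below computes input-oriented CCR efficiencies by linear programming; it omits the slack-based measure, undesirable outputs and interval data of the paper's model, and all farm data are invented.

```python
# Sketch: input-oriented CCR DEA efficiency via linear programming.
# min theta  s.t.  X @ lam <= theta * x_k,  Y @ lam >= y_k,  lam >= 0
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])  # inputs (farm x input)
Y = np.array([[1.0], [1.0], [1.0]])                  # desirable outputs

def ccr_efficiency(k):
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]            # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[k], X.T]               # sum_j lam_j x_ij - theta x_ik <= 0
    A_out = np.c_[np.zeros(s), -Y.T]       # -sum_j lam_j y_rj <= -y_rk
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

effs = [round(ccr_efficiency(k), 3) for k in range(3)]
print(effs)   # farms 1 and 2 lie on the frontier; farm 3 does not
```

The interval variant of the paper would solve such a program twice per farm, once with optimistic and once with pessimistic data bounds.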

  10. Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model

    NASA Astrophysics Data System (ADS)

    Fu, Li Fang; Meng, Jun; Liu, Ying

    2015-12-01

    Performance evaluation of a supply chain (SC) is a vital topic in SC management and an inherently complex problem, with multilayered internal linkages among the activities of multiple entities. Recently, various Network Data Envelopment Analysis (NDEA) models, which open the “black box” of conventional DEA, have been developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models which cannot take into consideration nonproportional changes of inputs and outputs simultaneously. This paper extends the slack-based measure (SBM) model to a nonradial, nonoriented network model, named U-NSBM, that accounts for undesirable outputs in the SC. A numerical example is presented to demonstrate the applicability of the model in quantifying efficiency and ranking supply chain performance. By comparison with the CCR and U-SBM models, it is shown that the proposed model has higher distinguishing ability and gives feasible solutions in the presence of undesirable outputs. Meanwhile, it provides more insight for decision makers about the sources of inefficiency, as well as guidance to improve SC performance.

  11. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  12. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE PAGES

    Wood, William Monford

    2018-02-07

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.

  13. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    NASA Astrophysics Data System (ADS)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. Resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.

  14. Identifiability Results for Several Classes of Linear Compartment Models.

    PubMed

    Meshkat, Nicolette; Sullivant, Seth; Eisenberg, Marisa

    2015-08-01

    Identifiability concerns finding which unknown parameters of a model can be estimated, uniquely or otherwise, from given input-output data. If some subset of the parameters of a model cannot be determined given input-output data, then we say the model is unidentifiable. In this work, we study linear compartment models, which are a class of biological models commonly used in pharmacokinetics, physiology, and ecology. In past work, we used commutative algebra and graph theory to identify a class of linear compartment models that we call identifiable cycle models, which are unidentifiable but have the simplest possible identifiable functions (so-called monomial cycles). Here we show how to modify identifiable cycle models by adding inputs, adding outputs, or removing leaks, in such a way that we obtain an identifiable model. We also prove a constructive result on how to combine identifiable models, each corresponding to strongly connected graphs, into a larger identifiable model. We apply these theoretical results to several real-world biological models from physiology, cell biology, and ecology.
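    The notion of identifiable parameter combinations can be made concrete with a symbolic computation. The sketch below derives the input-output transfer function of a generic two-compartment model (not one of the paper's identifiable cycle models) and prints its coefficients: four rate parameters collapse into three independent coefficient combinations, so the model is unidentifiable.

```python
# Sketch: input-output coefficients of a two-compartment linear model.
# x1' = -(k01 + k21) x1 + k12 x2 + u,  x2' = k21 x1 - (k02 + k12) x2,  y = x1
import sympy as sp

s, k01, k12, k21, k02 = sp.symbols('s k01 k12 k21 k02', positive=True)
A = sp.Matrix([[-(k01 + k21), k12],
               [k21, -(k02 + k12)]])
B = sp.Matrix([1, 0])    # input enters compartment 1
C = sp.Matrix([[1, 0]])  # compartment 1 is observed

# Transfer function y/u = C (sI - A)^{-1} B
H = sp.simplify((C * (s * sp.eye(2) - A).inv() * B)[0])
num, den = sp.fraction(sp.together(H))

# The polynomial coefficients are the identifiable functions of parameters.
print(sp.Poly(num, s).all_coeffs())
print(sp.Poly(den, s).all_coeffs())
```

Only k02 + k12, k01 + k21 and the product term appear, so no individual rate can be recovered; adding an input, adding an output, or removing a leak, as in the paper, changes this coefficient map.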

  15. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One problem with this approach is that it treats the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique can be challenging because complex process-based models are generally characterized by a large parameter space. To overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Among the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results, and it provides information that allows exploration of uncertainty at the process level and of how that uncertainty might affect model output. We present an example using the vegetation model BIOME-BGC.
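    The process-level idea can be illustrated with a toy model: tag each process with a multiplier, perturb one multiplier at a time, and record the relative change in output. The process names, equations and +10% perturbation below are all invented for illustration.

```python
# Sketch: sensitivity analysis at the process level rather than the
# parameter level. The three "processes" form a toy ecosystem model.
def model(scale):
    photosynthesis = scale['photo'] * 10.0   # carbon uptake process
    respiration = scale['resp'] * 4.0        # carbon loss process
    allocation = scale['alloc'] * 0.5        # fraction allocated to growth
    return (photosynthesis - respiration) * allocation  # net growth

base = {'photo': 1.0, 'resp': 1.0, 'alloc': 1.0}
y0 = model(base)

sensitivity = {}
for proc in base:
    perturbed = dict(base, **{proc: 1.1})    # +10% on one process at a time
    sensitivity[proc] = (model(perturbed) - y0) / y0

print({k: round(v, 3) for k, v in sensitivity.items()})
```

However many parameters each process contains internally, the search space here has only as many dimensions as there are processes.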

  16. The output voltage model and experiment of magnetostrictive displacement sensor based on Wiedemann effect

    NASA Astrophysics Data System (ADS)

    Wang, Bowen; Li, Yuanyuan; Xie, Xinliang; Huang, Wenmei; Weng, Ling; Zhang, Changgeng

    2018-05-01

    Based on the Wiedemann effect and the inverse magnetostrictive effect, an output voltage model of a magnetostrictive displacement sensor has been established. The output voltage of the magnetostrictive displacement sensor is calculated in different magnetic fields, and the calculated result is found to be in agreement with experiment. The theoretical and experimental results show that the output voltage of the displacement sensor is linearly related to the magnetostrictive difference, (λl-λt), of the waveguide wires. The measured output voltages for the Fe-Ga and Fe-Ni wire sensors are 51.5 mV and 36.5 mV, respectively, and the output voltage of the Fe-Ga wire sensor is clearly higher than that of the Fe-Ni wire sensor under the same magnetic field. The model can be used to predict the output voltage of the sensor and to guide the optimization of the sensor design.

  17. Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs

    DTIC Science & Technology

    2014-06-01

    comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and study the graphs using standard graph-theoretical measurements such as vertex and edge count and average vertex degree

  18. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    NASA Astrophysics Data System (ADS)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

    Multi-label sentiment classification of customer reviews is a challenging practical task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and output model based on a bi-directional recurrent neural network, which considers both the semantic and lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then incorporated via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels, as features, with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves breakthrough improvements on a customer reviews dataset.

  19. Design of vaccination and fumigation on Host-Vector Model by input-output linearization method

    NASA Astrophysics Data System (ADS)

    Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning

    2017-03-01

    Here, we analyze the Host-Vector Model and propose a design of vaccination and fumigation to control the infectious population using feedback control, specifically the input-output linearization method. The host population is divided into three compartments: susceptible, infectious and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as inputs and the infectious population as the output. The objective of the design is to stabilize the system so that the output asymptotically tends to zero. We also present examples to illustrate the design of the model.

  20. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

    Big scientific data include not only large repositories of data from scientific platforms such as satellites and ground observation, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through their metadata, but larger collections of runs can also now be studied, with statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. The methodology for this analysis, as well as case studies, will be presented.

  1. Ensemble modelling and structured decision-making to support Emergency Disease Management.

    PubMed

    Webb, Colleen T; Ferrari, Matthew; Lindström, Tom; Carpenter, Tim; Dürr, Salome; Garner, Graeme; Jewell, Chris; Stevenson, Mark; Ward, Michael P; Werkman, Marleen; Backer, Jantien; Tildesley, Michael

    2017-03-01

    Epidemiological models in animal health are commonly used as decision-support tools to understand the impact of various control actions on infection spread in susceptible populations. Different models contain different assumptions and parameterizations, and policy decisions might be improved by considering outputs from multiple models. However, a transparent decision-support framework to integrate outputs from multiple models is nascent in epidemiology. Ensemble modelling and structured decision-making integrate the outputs of multiple models, compare policy actions and support policy decision-making. We briefly review the epidemiological application of ensemble modelling and structured decision-making and illustrate the potential of these methods using foot and mouth disease (FMD) models. In case study one, we apply structured decision-making to compare five possible control actions across three FMD models and show which control actions and outbreak costs are robustly supported and which are impacted by model uncertainty. In case study two, we develop a methodology for weighting the outputs of different models and show how different weighting schemes may impact the choice of control action. Using these case studies, we broadly illustrate the potential of ensemble modelling and structured decision-making in epidemiology to provide better information for decision-making and outline necessary development of these methods for their further application. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
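    The weighting idea of case study two can be sketched generically: given each model's predicted outbreak cost per control action, a weight vector turns the ensemble into one expected cost per action. All numbers and weights below are invented (not FMD results) and are chosen so that the two weighting schemes pick different actions.

```python
# Sketch: ensemble weighting of multiple models' outputs to rank
# control actions. Costs and weights are illustrative only.
import numpy as np

# Predicted outbreak cost for each (model, control action).
costs = np.array([[60.0, 95.0, 150.0],    # model A
                  [120.0, 85.0, 140.0],   # model B
                  [130.0, 95.0, 120.0]])  # model C
actions = ['cull', 'vaccinate', 'do nothing']

choice = {}
for name, w in [('equal', np.ones(3) / 3),
                ('skill-based', np.array([0.7, 0.2, 0.1]))]:
    expected = w @ costs                   # weighted ensemble cost per action
    choice[name] = actions[int(np.argmin(expected))]
    print(name, dict(zip(actions, np.round(expected, 1))))

print(choice)
```

Here equal weights favour vaccination while weights that trust model A favour culling, which is exactly why the paper examines how weighting schemes impact the choice of control action.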

  2. Dynamic output feedback control of a flexible air-breathing hypersonic vehicle via T-S fuzzy approach

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoxiang; Wu, Ligang; Hu, Changhua; Wang, Zhaoqiang; Gao, Huijun

    2014-08-01

    By utilising the Takagi-Sugeno (T-S) fuzzy set approach, this paper addresses robust H∞ dynamic output feedback control for the non-linear longitudinal model of flexible air-breathing hypersonic vehicles (FAHVs). The flight control of FAHVs is highly challenging due to their unique dynamic characteristics, the intricate couplings between the engine and flight dynamics, and external disturbance. Because of the enormous complexity of the dynamics, currently only longitudinal dynamics models of FAHVs have been used for controller design. In this work, the T-S fuzzy modelling technique is utilised to approximate the non-linear dynamics of FAHVs, and a fuzzy model is developed for the output tracking problem. The fuzzy model contains parameter uncertainties and disturbance, which allows it to approximate the non-linear dynamics of FAHVs more accurately. The flexible modes of FAHVs are difficult to measure because of the complex dynamics and the strong couplings, so a full-order dynamic output feedback controller is designed for the fuzzy model. A robust H∞ controller is designed for the resulting closed-loop system. By utilising the Lyapunov functional approach, sufficient solvability conditions for such controllers are established in terms of linear matrix inequalities. Finally, the effectiveness of the proposed T-S fuzzy dynamic output feedback control method is demonstrated by numerical simulations.

  3. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    NASA Technical Reports Server (NTRS)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8deg spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
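    A common form of the bias correction step is empirical quantile mapping: each future model value is passed through the historical model CDF and replaced by the observed quantile at the same probability. The sketch below uses synthetic data and a generic mapping, not the SDBC implementation.

```python
# Sketch: empirical quantile-mapping bias correction of daily model
# output against observations. Data and biases are synthetic.
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 3.0, size=5000)         # observed daily precipitation
model_hist = obs * 1.4 + 1.0                 # biased model, historical period
model_fut = rng.gamma(2.0, 3.0, size=1000) * 1.4 + 2.0   # future run

def quantile_map(x, model_ref, obs_ref):
    # Map value -> probability via the model's empirical CDF, then
    # probability -> value via the observed quantile function.
    q = np.searchsorted(np.sort(model_ref), x) / len(model_ref)
    return np.quantile(obs_ref, np.clip(q, 0.0, 1.0))

corrected = quantile_map(model_fut, model_hist, obs)
print(round(model_fut.mean() - obs.mean(), 2),   # raw bias
      round(corrected.mean() - obs.mean(), 2))   # residual after correction
```

The corrected future run retains the model's climate-change signal but inherits the observed distribution, which is why the paper finds correction matters for both GCM and RCM outputs.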

  4. Wind tunnel measurements of the power output variability and unsteady loading in a micro wind farm model

    NASA Astrophysics Data System (ADS)

    Bossuyt, Juliaan; Howland, Michael; Meneveau, Charles; Meyers, Johan

    2015-11-01

    To optimize wind farm layouts for maximum power output and wind turbine lifetime, mean power output measurements in wind tunnel studies are not sufficient. Instead, detailed temporal information about the power output and unsteady loading of every single wind turbine in the wind farm is needed. A very small porous disc model with a realistic thrust coefficient of 0.75-0.85 was designed. The model is instrumented with a strain gage, allowing measurements of the thrust force, incoming velocity and power output with a frequency response up to the natural frequency of the model; this is shown by reproducing the -5/3 spectrum of the incoming flow. Thanks to its small size and compact instrumentation, the model allows wind tunnel studies of large wind turbine arrays with detailed temporal information from every wind turbine. Translating to field conditions with a length-scale ratio of 1:3,000, the frequencies studied from the data reach from 10^-4 Hz up to about 6x10^-2 Hz. The model's capabilities are demonstrated with a large wind farm measurement consisting of close to 100 instrumented models. A high correlation is found between the power outputs of streamwise-aligned wind turbines, which is in good agreement with results from prior LES simulations. Work supported by ERC (ActiveWindFarms, grant no. 306471) and by NSF (grants CBET-113380 and IIA-1243482, the WINDINSPIRE project).

  5. System level analysis and control of manufacturing process variation

    DOEpatents

    Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.

    2005-05-31

    A computer-implemented method is implemented for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine unknown coefficients for use in the response models that characterize the relationship between the signal factors, noise factors, control factors, and the corresponding output response having mean and variance values that are related to the signal factors, noise factors, and control factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems and values of signal factors and control factors are found to optimize the output of the manufacturing system to meet a specified criterion.
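    The variance-propagation step can be sketched as a Monte Carlo exercise: draw the fitted coefficients of each subsystem's response model from their estimated distributions, feed subsystem 1's output into subsystem 2, and read off the mean and variance of the system output. All means and variances below are illustrative, not from the patent.

```python
# Sketch: Monte Carlo propagation of fitted-coefficient uncertainty
# through two coupled subsystem response models.
import numpy as np

rng = np.random.default_rng(42)
n = 10000

# Subsystem 1: y1 = a * signal + b * noise, with fitted coefficients a, b
# drawn from their estimated (mean, sd) to represent fitting uncertainty.
a = rng.normal(2.0, 0.1, n)
b = rng.normal(0.5, 0.05, n)
signal = 3.0                                  # signal factor (controlled)
noise = rng.normal(0.0, 1.0, n)               # noise factor (uncontrolled)
y1 = a * signal + b * noise

# Subsystem 2 takes subsystem 1's output as its input: y2 = c * y1 + d.
c = rng.normal(1.5, 0.2, n)
d = rng.normal(-1.0, 0.1, n)
y2 = c * y1 + d

print(round(y2.mean(), 2), round(y2.std(), 2))  # system-level mean, spread
```

Optimizing the signal and control factors then amounts to repeating this propagation over candidate settings and selecting the one whose output distribution best meets the specified criterion.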

  6. Real-time implementation of biofidelic SA1 model for tactile feedback.

    PubMed

    Russell, A F; Armiger, R S; Vogelstein, R J; Bensmaia, S J; Etienne-Cummings, R

    2009-01-01

    In order for the functionality of an upper-limb prosthesis to approach that of a real limb, it must be able to convey sensory feedback to the limb user accurately and intuitively. This paper presents results of the real-time implementation of a 'biofidelic' model that describes mechanotransduction in Slowly Adapting Type 1 (SA1) afferent fibers. The model accurately predicts the timing of action potentials for arbitrary force or displacement stimuli, and its output can be used as stimulation times for peripheral nerve stimulation by a neuroprosthetic device. The model's performance was verified by comparing the predicted action potential (or spike) outputs against measured spike outputs for different vibratory stimuli. Furthermore, experiments were conducted to show that, like real SA1 fibers, the model's spike rate varies according to input pressure and that a periodic 'tapping' stimulus evokes periodic spike outputs.

  7. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    PubMed

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentages are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that the application of the integrated prediction and optimization models will increase biogas production and biogas quality, and contribute to the quantity of electricity produced at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
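    The optimization half of such a pipeline can be sketched with a minimal particle swarm. The "predicted biogas" surrogate below is a made-up stand-in for the trained multi-layer perceptron, and the variable names, bounds and PSO constants are all invented.

```python
# Sketch: particle swarm optimisation of a stand-in prediction surrogate
# over two hypothetical input variables (temperature, pH).
import numpy as np

def surrogate(x):
    # Stand-in for an MLP prediction; peak output at temp=35, pH=7.2.
    temp, ph = x[..., 0], x[..., 1]
    return -((temp - 35.0) ** 2) / 50.0 - (ph - 7.2) ** 2 + 100.0

rng = np.random.default_rng(3)
lo, hi = np.array([20.0, 5.0]), np.array([45.0, 9.0])
pos = rng.uniform(lo, hi, (30, 2))            # 30 particles
vel = np.zeros_like(pos)
pbest, pval = pos.copy(), surrogate(pos)
gbest = pbest[np.argmax(pval)]

for _ in range(200):
    r1, r2 = rng.random((30, 2)), rng.random((30, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    val = surrogate(pos)
    improved = val > pval
    pbest[improved], pval[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmax(pval)]

print(np.round(gbest, 1))   # near the surrogate's optimum
```

In the integrated scheme, the swarm queries the trained network instead of this closed-form surrogate, so the optimal input settings respect whatever the network learned from facility data.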

  8. Comparison of individual-based model output to data using a model of walleye pollock early life history in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Hinckley, Sarah; Parada, Carolina; Horne, John K.; Mazur, Michael; Woillez, Mathieu

    2016-10-01

    Biophysical individual-based models (IBMs) have been used to study aspects of the early life history of marine fishes such as recruitment, connectivity of spawning and nursery areas, and marine reserve design. However, there is no consistent approach to validating the spatial outputs of these models. In this study, we hope to rectify this gap. We document additions to an existing individual-based biophysical model for Alaska walleye pollock (Gadus chalcogrammus), some simulations made with this model, and methods that were used to describe and compare spatial output of the model versus field data derived from ichthyoplankton surveys in the Gulf of Alaska. We used visual methods (e.g. distributional centroids with directional ellipses), several indices (such as a Normalized Difference Index (NDI) and an Overlap Coefficient (OC)), and several statistical methods: the Syrjala method, the Getis-Ord Gi* statistic, and a geostatistical method for comparing spatial indices. We assess the utility of these different methods in analyzing spatial output and comparing model output to data, and give recommendations for their appropriate use. Visual methods are useful for initial comparisons of model and data distributions. Metrics such as the NDI and OC give useful measures of co-location and overlap, but care must be taken in discretizing the fields into bins. The Getis-Ord Gi* statistic is useful for determining the patchiness of the fields. The Syrjala method is an easily implemented statistical measure of the difference between the fields, but does not give information on the details of the distributions. Finally, the geostatistical comparison of spatial indices gives good information on the details of the distributions and whether they differ significantly between the model and the data. We conclude that each technique gives quite different information about the model-data distribution comparison, and that some are easy to apply while others are more complex. We also give recommendations for a multistep process to validate spatial output from IBMs.
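    Two of the simpler metrics are easy to state in code. The sketch below computes an overlap coefficient on normalised densities and a per-cell normalised difference index on a tiny grid; these are common generic definitions and may differ in detail from the formulas used in the study.

```python
# Sketch: overlap coefficient and normalised difference index for
# gridded model vs survey fields. Densities are invented.
import numpy as np

model = np.array([[0.0, 2.0], [4.0, 4.0]])   # modelled larval density
data = np.array([[1.0, 1.0], [4.0, 2.0]])    # survey-derived density

# Overlap coefficient: sum of cell-wise minima of the two normalised
# distributions; 1.0 means identical spatial distributions.
p, q = model / model.sum(), data / data.sum()
overlap = np.minimum(p, q).sum()

# Normalised difference index per cell, in [-1, 1]; sign shows which
# field dominates in that cell.
ndi = (model - data) / (model + data)

print(round(overlap, 3))
print(np.round(ndi, 2))
```

As the abstract notes, both metrics depend on how the fields are binned, so the grid resolution should be chosen before comparing values across simulations.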

  9. Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles

    PubMed Central

    Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.

    2009-01-01

    The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials—treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective input is included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382

  10. Quantitative Simulations of MST Visual Receptive Field Properties Using a Template Model of Heading Estimation

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, J. A.

    1997-01-01

    We previously developed a template model of primate visual self-motion processing that proposes a specific set of projections from MT-like local motion sensors onto output units to estimate heading and relative depth from optic flow. At the time, we showed that the model output units have emergent properties similar to those of MSTd neurons, although there was little physiological evidence to test the model more directly. We have now systematically examined the properties of the model using stimulus paradigms used by others in recent single-unit studies of MST: 1) 2-D bell-shaped heading tuning. Most MSTd neurons and model output units show bell-shaped heading tuning. Furthermore, we found that most model output units and the finely-sampled example neuron in the Duffy-Wurtz study are well fit by a 2-D Gaussian (sigma approx. 35deg, r approx. 0.9). The bandwidth of model and real units can explain why Lappe et al. found apparent sigmoidal tuning using a restricted range of stimuli (+/-40deg). 2) Spiral Tuning and Invariance. Graziano et al. found that many MST neurons appear tuned to a specific combination of rotation and expansion (spiral flow) and that this tuning changes little for approx. 10deg shifts in stimulus placement. Simulations of model output units under the same conditions quantitatively replicate this result. We conclude that a template architecture may underlie MT inputs to MST.

  11. A new open-loop fiber optic gyro error compensation method based on angular velocity error modeling.

    PubMed

    Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing

    2015-02-27

    With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when dynamic angular velocities are large. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we investigated previously, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.42%, respectively, so the performance of this method is better than that of the direct modeling of gyro angular velocity.
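    The (u, T) → Δω mapping described above can be sketched with a hand-rolled RBF network: Gaussian hidden units with a linear readout fitted by least squares. All data, centre counts and widths below are invented for illustration; the paper's trained network and real gyro recordings are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic training data standing in for recorded gyro signals: inputs
    # are output voltage u and temperature T, target is the angular-velocity
    # error. The error surface and all parameters below are invented.
    n = 300
    u = rng.uniform(-1.0, 1.0, n)
    temp = rng.uniform(10.0, 40.0, n)
    X = np.column_stack([u, (temp - 25.0) / 15.0])   # normalised features
    d_omega = 0.4 * X[:, 0] ** 2 - 0.2 * X[:, 0] * X[:, 1] + 0.1 * X[:, 1]

    # RBF network: Gaussian hidden units at sampled centres, linear readout.
    centers = X[rng.choice(n, 40, replace=False)]
    sigma = 0.5

    def hidden(Z):
        d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    w, *_ = np.linalg.lstsq(hidden(X), d_omega, rcond=None)

    def compensate(u_val, temp_val):
        """Predicted angular-velocity error for a new (u, T) reading."""
        z = np.array([[u_val, (temp_val - 25.0) / 15.0]])
        return float(hidden(z) @ w)

    rmse = np.sqrt(np.mean((hidden(X) @ w - d_omega) ** 2))
    ```

    The predicted error from `compensate` would then be subtracted from the gyro output to correct it.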

  12. A New Open-Loop Fiber Optic Gyro Error Compensation Method Based on Angular Velocity Error Modeling

    PubMed Central

    Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing

    2015-01-01

    With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when dynamic angular velocities are large. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we investigated previously, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.2%, respectively, so the performance of this method is better than that of the direct modeling of gyro angular velocity. PMID:25734642

  13. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Clifford W.; Martin, Curtis E.

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
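    The propagation scheme described above, sampling each model's empirical residual distribution and pushing the samples through the model chain, can be sketched with a toy two-model chain. The functional forms and residual spreads below are invented; the actual PV model sequence is far richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy two-model chain standing in for the irradiance-to-power sequence.
    # Functional forms and residual spreads are invented for illustration.
    def model_poa(ghi):            # step 1: plane-of-array irradiance (W/m^2)
        return 1.1 * ghi

    def model_power(poa):          # step 2: AC power (W)
        return 0.18 * poa

    # Empirical residuals for each model (in practice, from validation data).
    resid_poa = rng.normal(0.0, 8.0, size=500)
    resid_power = rng.normal(0.0, 2.0, size=500)

    ghi = 700.0                    # one measured input
    n_samples = 10_000

    # Propagate: at each step, add a residual resampled from that model's
    # empirical residual distribution.
    poa = model_poa(ghi) + rng.choice(resid_poa, n_samples)
    power = model_power(poa) + rng.choice(resid_power, n_samples)

    point = model_power(model_poa(ghi))   # deterministic prediction
    spread = power.std()                  # uncertainty in the final output
    ```

    Comparing the contribution of each stage's residuals to `spread` is the essence of the sensitivity part of such an analysis.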

  14. User Guide and Documentation for Five MODFLOW Ground-Water Modeling Utility Programs

    USGS Publications Warehouse

    Banta, Edward R.; Paschke, Suzanne S.; Litke, David W.

    2008-01-01

    This report documents five utility programs designed for use in conjunction with ground-water flow models developed with the U.S. Geological Survey's MODFLOW ground-water modeling program. One program extracts calculated flow values from one model for use as input to another model. The other four programs extract model input or output arrays from one model and make them available in a form that can be used to generate an ArcGIS raster data set. The resulting raster data sets may be useful for visual display of the data or for further geographic data processing. The utility program GRID2GRIDFLOW reads a MODFLOW binary output file of cell-by-cell flow terms for one (source) model grid and converts the flow values to input flow values for a different (target) model grid. The spatial and temporal discretization of the two models may differ. The four other utilities extract selected 2-dimensional data arrays in MODFLOW input and output files and write them to text files that can be imported into an ArcGIS geographic information system raster format. These four utilities require that the model cells be square and aligned with the projected coordinate system in which the model grid is defined. The four raster-conversion utilities are:
    * CBC2RASTER, which extracts selected stress-package flow data from a MODFLOW binary output file of cell-by-cell flows;
    * DIS2RASTER, which extracts cell-elevation data from a MODFLOW Discretization file;
    * MFBIN2RASTER, which extracts array data from a MODFLOW binary output file of head or drawdown; and
    * MULT2RASTER, which extracts array data from a MODFLOW Multiplier file.
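    As a rough illustration of the kind of text output such raster-conversion utilities produce, the sketch below writes a 2-D array in the ESRI ASCII grid format, which ArcGIS imports as a raster. This is a generic sketch, not the USGS utilities' actual code; the example head values are invented.

    ```python
    import numpy as np

    def array_to_ascii_raster(arr, xll, yll, cellsize, nodata=-9999.0):
        """Render a 2-D model array in the ESRI ASCII grid format
        (assumes square, axis-aligned cells, as the utilities require)."""
        nrows, ncols = arr.shape
        header = (f"ncols {ncols}\n"
                  f"nrows {nrows}\n"
                  f"xllcorner {xll}\n"
                  f"yllcorner {yll}\n"
                  f"cellsize {cellsize}\n"
                  f"NODATA_value {nodata}\n")
        body = "\n".join(" ".join(f"{v:g}" for v in row) for row in arr)
        return header + body + "\n"

    heads = np.array([[10.2, 10.1], [10.0, 9.8]])   # e.g. a small head array
    text = array_to_ascii_raster(heads, xll=0.0, yll=0.0, cellsize=100.0)
    ```

    Writing `text` to a `.asc` file yields something a GIS can ingest directly as a raster layer.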

  15. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USGS Publications Warehouse

    Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.

    2011-01-01

    The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.

  16. Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs.

    PubMed

    Vitolo, Claudia; Di Giuseppe, Francesca; D'Andrea, Mirko

    2018-01-01

    The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package.

  17. Generative electronic background music system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazurowski, Lukasz

    In this short paper (an extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) has been located between other related approaches within the musical algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by further described properties. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models - a host-model. The general mechanism is presented, including an example of the synthesized output compositions.

  18. Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs

    PubMed Central

    Di Giuseppe, Francesca; D’Andrea, Mirko

    2018-01-01

    The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package. PMID:29293536

  19. Light extraction in planar light-emitting diode with nonuniform current injection: model and simulation.

    PubMed

    Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei

    2014-07-20

    We develop an analytical and numerical model for simulating light extraction through the planar output interface of light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of the injected current is a peculiar feature of LEDs in which the top metal electrode is patterned as a mesh in order to enhance the output power of light extracted through the top surface. Basic features of the model are: a bi-plane computation domain with related areas of numerical-grid (NG) cells in the two planes; representation of the light-generating layer by an ensemble of point light sources; numerical "collection" of light photons from the area limited by an acceptance circle; and adjustment of NG-cell areas in the computation procedure by an angle-tuned aperture function. The developed model and procedure are used to simulate spatial distributions of the output optical power as well as the total output power at different mesh pitches. The proposed model and simulation strategy can be very efficient in evaluating the output optical performance of LEDs with periodic or symmetric electrode configurations.
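    A core ingredient of such light-extraction models is the escape cone set by total internal reflection at the planar interface. The sketch below checks a Monte-Carlo photon count against the analytic single-pass escape fraction for an isotropic point source; the refractive indices are illustrative, and Fresnel losses inside the cone are ignored.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n1, n2 = 3.5, 1.0               # illustrative: semiconductor into air
    theta_c = np.arcsin(n2 / n1)    # critical angle for total internal reflection

    # Isotropic point emitters: sample the cosine of the polar angle uniformly
    # and count photons inside the upward escape cone (the acceptance cone).
    n_photons = 200_000
    cos_theta = rng.uniform(-1.0, 1.0, n_photons)
    escaped = cos_theta > np.cos(theta_c)

    mc_fraction = escaped.mean()
    analytic = 0.5 * (1.0 - np.cos(theta_c))   # single-pass escape fraction
    ```

    For a high-index emitter only a few percent of isotropically emitted photons fall inside the escape cone, which is why electrode patterning and photon recycling matter so much for LED output power.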

  20. SCOUT: A Fast Monte-Carlo Modeling Tool of Scintillation Camera Output

    PubMed Central

    Hunter, William C. J.; Barrett, Harrison H.; Lewellen, Thomas K.; Miyaoka, Robert S.; Muzi, John P.; Li, Xiaoli; McDougald, Wendy; MacDonald, Lawrence R.

    2011-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:22072297

  1. A Model of Medical Countermeasures for Organophosphates

    DTIC Science & Technology

    2015-10-01

    [No abstract in this record; the excerpt contains only report front matter: table-of-contents entries ("Animal Data"; "6.2.1. Verifying AChE Activity") and figure captions (Figure 4-3, "Model Output for AChE Activity and Free/Stimulated Receptor Fraction with No OP Exposure"; Figure 6-1, "Sarin Model Output Compared to Individual AChE Activity in Acute Phase Following Tokyo Sarin Attack").]

  2. Similarity Assessment of Land Surface Model Outputs in the North American Land Data Assimilation System

    NASA Astrophysics Data System (ADS)

    Kumar, Sujay V.; Wang, Shugong; Mocko, David M.; Peters-Lidard, Christa D.; Xia, Youlong

    2017-11-01

    Multimodel ensembles are often used to produce ensemble mean estimates that tend to have increased simulation skill over any individual model output. If multimodel outputs are too similar, an individual LSM would add little additional information to the multimodel ensemble, whereas if the models are too dissimilar, it may be indicative of systematic errors in their formulations or configurations. The article presents a formal similarity assessment of the North American Land Data Assimilation System (NLDAS) multimodel ensemble outputs to assess their utility to the ensemble, using a confirmatory factor analysis. Outputs from four NLDAS Phase 2 models currently running in operations at NOAA/NCEP and four new/upgraded models that are under consideration for the next phase of NLDAS are employed in this study. The results show that the runoff estimates from the LSMs were most dissimilar whereas the models showed greater similarity for root zone soil moisture, snow water equivalent, and terrestrial water storage. Generally, the NLDAS operational models showed weaker association with the common factor of the ensemble and the newer versions of the LSMs showed stronger association with the common factor, with the model similarity increasing at longer time scales. Trade-offs between the similarity metrics and accuracy measures indicated that the NLDAS operational models demonstrate a larger span in the similarity-accuracy space compared to the new LSMs. The results of the article indicate that simultaneous consideration of model similarity and accuracy at the relevant time scales is necessary in the development of multimodel ensembles.

  3. A Bayesian Approach to Evaluating Consistency between Climate Model Output and Observations

    NASA Astrophysics Data System (ADS)

    Braverman, A. J.; Cressie, N.; Teixeira, J.

    2010-12-01

    Like other scientific and engineering problems that involve physical modeling of complex systems, climate models can be evaluated and diagnosed by comparing their output to observations of similar quantities. Though the global remote sensing data record is relatively short by climate research standards, these data offer opportunities to evaluate model predictions in new ways. For example, remote sensing data are spatially and temporally dense enough to provide distributional information that goes beyond simple moments to allow quantification of temporal and spatial dependence structures. In this talk, we propose a new method for exploiting these rich data sets using a Bayesian paradigm. For a collection of climate models, we calculate posterior probabilities that its members best represent the physical system each seeks to reproduce. The posterior probability is based on the likelihood that a chosen summary statistic, computed from observations, would be obtained when the model's output is considered as a realization from a stochastic process. By exploring how posterior probabilities change with different statistics, we may paint a more quantitative and complete picture of the strengths and weaknesses of the models relative to the observations. We demonstrate our method using model output from the CMIP archive, and observations from NASA's Atmospheric Infrared Sounder.
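    The posterior computation described above can be sketched for two toy "models" and a variance summary statistic. Everything below (the models, the statistic, the normal approximation to its sampling distribution) is an invented stand-in for the paper's climate-model setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Two toy "climate models"; the summary statistic is the variance of a
    # simulated field. All choices here are illustrative.
    def simulate(bias, scale, n_real=2000, n_points=50):
        return bias + scale * rng.standard_normal((n_real, n_points))

    def stat(fields):
        return fields.var(axis=1)

    model_stats = {
        "model_A": stat(simulate(0.0, 1.0)),
        "model_B": stat(simulate(0.0, 2.0)),
    }

    obs_stat = 1.05   # statistic computed from observations (near model_A)

    def likelihood(samples, x):
        # Normal approximation to the model's sampling distribution of
        # the summary statistic.
        mu, sd = samples.mean(), samples.std()
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    prior = 0.5       # equal prior over the two models
    post = {m: prior * likelihood(s, obs_stat) for m, s in model_stats.items()}
    z = sum(post.values())
    post = {m: p / z for m, p in post.items()}
    ```

    Repeating the calculation with different summary statistics (e.g. tail quantiles or lag correlations instead of the variance) is what lets the comparison probe dependence structure rather than just moments.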

  4. Reducing the uncertainty in the fidelity of seismic imaging results

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Zou, Z.

    2017-12-01

    A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect in uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position error and artifact. Key challenges to achieving the fidelity of seismic imaging include: (1) difficulty in telling signal from artifact and noise; (2) limitations in signal-to-noise ratio and seismic illumination; and (3) the multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they are in opposite directions of mapping between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach of seismic imaging: using velocity model building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called imaging space, where the output reflection images of the subsurface are formed based on an imaging condition. 
A good example is the reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.

  5. Multi input single output model predictive control of non-linear bio-polymerization process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugasamy, Senthil Kumar; Ahmad, Z.

    This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly(ε-caprolactone) production. In this research, a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (Mn) and the polymer polydispersity index. The state space model for MISO was created using the System Identification Toolbox of Matlab™. This state space model is used in the MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently control it. The result shows that MPC is able to track the reference trajectory and give optimum movement of the manipulated variable.

  6. Impact of device level faults in a digital avionic processor

    NASA Technical Reports Server (NTRS)

    Suk, Ho Kim

    1989-01-01

    This study describes an experimental analysis of the impact of gate- and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model, as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins are concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.

  7. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer from any of the previously mentioned shortcomings. 
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
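    A minimal integrate-and-fire simulation makes the ISI statistics discussed above concrete. The sketch below drives a leaky integrate-and-fire neuron with Poisson input and computes the coefficient of variation (CV) of the output interspike intervals; as the abstract notes, such a renewal-output model cannot exhibit long-range dependence, so only the CV is examined here. All parameters are illustrative, not fitted to cortical data.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Leaky integrate-and-fire neuron with Poisson input spikes.
    dt, tau = 0.1, 10.0            # time step and membrane time constant (ms)
    v_th, v_reset = 1.0, 0.0       # threshold and reset (arbitrary units)
    rate, weight = 0.8, 0.12       # input spikes per ms, synaptic weight
    steps = 200_000                # 20 s of simulated time

    n_in = rng.poisson(rate * dt, size=steps)   # Poisson input counts per step

    v = 0.0
    spike_times = []
    for t in range(steps):
        v += (-v / tau) * dt + weight * n_in[t]  # leak plus synaptic drive
        if v >= v_th:
            spike_times.append(t * dt)
            v = v_reset

    isi = np.diff(spike_times)
    cv = isi.std() / isi.mean()    # coefficient of variation of the ISIs
    ```

    Because the state is fully reset after each spike, successive ISIs are independent: whatever CV this model achieves, its output remains a renewal process.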

  8. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, has the most influence on a range of model outputs. 
These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.
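    A first-order Sobol' analysis of the kind applied above can be sketched with a hand-rolled Saltelli-style estimator on a toy model with one influential and one nearly inert factor (standing in, loosely, for e.g. boundary conditions versus DEM choice). The model and sample sizes are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy model with one influential and one nearly inert input factor.
    def model(x):
        return x[:, 0] + 0.1 * x[:, 1]

    k, n = 2, 4096
    A = rng.random((n, k))          # two independent sample matrices
    B = rng.random((n, k))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()

    # Saltelli-style estimator of the first-order Sobol' indices.
    S1 = []
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]         # replace column i of A with column i of B
        S1.append(np.mean(yB * (model(ABi) - yA)) / var)
    ```

    For the real study each model evaluation is a full LISFLOOD-FP run, so the cost of the Saltelli design (n × (k + 2) runs for first- and total-order indices) is the practical constraint on such an analysis.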

  9. Use of output from high-resolution atmospheric models in landscape-scale hydrologic models: An assessment

    USGS Publications Warehouse

    Hostetler, S.W.; Giorgi, F.

    1993-01-01

    In this paper we investigate the feasibility of coupling regional climate models (RCMs) with landscape-scale hydrologic models (LSHMs) for studies of the effects of climate on hydrologic systems. The RCM used is the National Center for Atmospheric Research/Pennsylvania State University mesoscale model (MM4). Output from two year-round simulations (1983 and 1988) over the western United States is used to drive a lake model for Pyramid Lake in Nevada and a streamflow model for Steamboat Creek in Oregon. Comparisons with observed data indicate that MM4 is able to produce meteorologic data sets that can be used to drive hydrologic models. Results from the lake model simulations indicate that the use of MM4 output produces reasonably good predictions of surface temperature and evaporation. Results from the streamflow simulations indicate that the use of MM4 output results in good simulations of the seasonal cycle of streamflow, but deficiencies in simulated wintertime precipitation resulted in underestimates of streamflow and soil moisture. Further work with climate (multiyear) simulations is necessary to achieve a complete analysis, but the results from this study indicate that coupling of LSHMs and RCMs may be a useful approach for evaluating the effects of climate change on hydrologic systems.

  10. Structural identifiability analysis of a cardiovascular system model.

    PubMed

    Pironet, Antoine; Dauby, Pierre C; Chase, J Geoffrey; Docherty, Paul D; Revie, James A; Desaive, Thomas

    2016-05-01

The six-chamber cardiovascular system model of Burkhoff and Tyberg has been used in several theoretical and experimental studies. However, this cardiovascular system model (and others derived from it) is not identifiable from every output set. In this work, two such cases of structural non-identifiability are first presented. These cases occur when the model output set contains only a single type of information (pressure or volume). A specific output set is thus chosen, mixing pressure and volume information and containing only a limited number of clinically available measurements. Then, by manipulating the model equations involving these outputs, it is demonstrated that the six-chamber cardiovascular system model is structurally globally identifiable. A further simplification is made by assuming the cardiac valve resistances to be known. This assumption is usual because of the poor practical identifiability of these four parameters. Under this hypothesis, the six-chamber cardiovascular system model is structurally identifiable from an even smaller dataset. As a consequence, parameter values computed from limited but well-chosen datasets are theoretically unique. This means that the parameter identification procedure can safely be performed on the model from such a well-chosen dataset. Thus, the model may be considered suitable for use in diagnosis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

Land Surface Models (LSMs) use a multitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven able to identify the relevant informative parameters. This reduces the number of parameters, and therefore model evaluations, for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (˜ 100).
Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially for all three model outputs, to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
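The Elementary Effects (Morris) screening used above can be illustrated with a minimal sketch. The toy response function and all numerical values below are hypothetical illustrations, not NOAH-MP; the sketch only shows how one-at-a-time perturbations rank parameters by their mean absolute effect:

```python
import numpy as np

def elementary_effects(model, n_params, n_trajectories=20, delta=0.5, seed=0):
    """One-at-a-time (Morris) screening: mean absolute elementary effect per parameter."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)  # random base point in [0, 1 - delta]
        y0 = model(x)
        for i in range(n_params):
            x_step = x.copy()
            x_step[i] += delta                  # perturb one parameter at a time
            effects[t, i] = (model(x_step) - y0) / delta
    return np.abs(effects).mean(axis=0)         # mu*: the usual screening statistic

# Toy response: only the first two "parameters" matter, the third is nearly inert
mu_star = elementary_effects(lambda p: 3.0 * p[0] + p[1] ** 2 + 0.001 * p[2], 3)
```

Parameters whose mu* is near zero (here the third one) would be dropped before the far more expensive Sobol analysis.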

  12. Two graphical user interfaces for managing and analyzing MODFLOW groundwater-model scenarios

    USGS Publications Warehouse

    Banta, Edward R.

    2014-01-01

    Scenario Manager and Scenario Analyzer are graphical user interfaces that facilitate the use of calibrated, MODFLOW-based groundwater models for investigating possible responses to proposed stresses on a groundwater system. Scenario Manager allows a user, starting with a calibrated model, to design and run model scenarios by adding or modifying stresses simulated by the model. Scenario Analyzer facilitates the process of extracting data from model output and preparing such display elements as maps, charts, and tables. Both programs are designed for users who are familiar with the science on which groundwater modeling is based but who may not have a groundwater modeler’s expertise in building and calibrating a groundwater model from start to finish. With Scenario Manager, the user can manipulate model input to simulate withdrawal or injection wells, time-variant specified hydraulic heads, recharge, and such surface-water features as rivers and canals. Input for stresses to be simulated comes from user-provided geographic information system files and time-series data files. A Scenario Manager project can contain multiple scenarios and is self-documenting. Scenario Analyzer can be used to analyze output from any MODFLOW-based model; it is not limited to use with scenarios generated by Scenario Manager. Model-simulated values of hydraulic head, drawdown, solute concentration, and cell-by-cell flow rates can be presented in display elements. Map data can be represented as lines of equal value (contours) or as a gradated color fill. Charts and tables display time-series data obtained from output generated by a transient-state model run or from user-provided text files of time-series data. A display element can be based entirely on output of a single model run, or, to facilitate comparison of results of multiple scenarios, an element can be based on output from multiple model runs. 
Scenario Analyzer can export display elements and supporting metadata as a Portable Document Format file.

  13. Sensitivity analysis of a short distance atmospheric dispersion model applied to the Fukushima disaster

    NASA Astrophysics Data System (ADS)

Périllat, Raphaël; Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2015-04-01

In a previous study, the sensitivity of a long-distance model was analyzed on the Fukushima Daiichi disaster case with the Morris screening method. It showed that a few variables, such as the horizontal diffusion coefficient or cloud thickness, have a weak influence on most of the chosen outputs. The purpose of the present study is to apply a similar methodology to the IRSN's operational short-distance atmospheric dispersion model, called pX. Atmospheric dispersion models are very useful in the case of accidental releases of pollutants, both to minimize population exposure during the accident and to obtain an accurate assessment of short- and long-term environmental and sanitary impacts. Long-range models are mostly used for consequence assessment, while short-range models are more adapted to the early phases of the crisis and are used to make prognoses. The Morris screening method was used to estimate the sensitivity of a set of outputs and to rank the inputs by their influence. The input ranking is highly dependent on the considered output, but a few variables seem to have a weak influence on most of them. This first step revealed that interactions and non-linearity are much more pronounced in the short-range model than in the long-range one. Afterward, the Sobol method was used to obtain more quantitative results on the same set of outputs. Using this method was possible for the short-range model because it is far less computationally demanding than the long-range model. The study also confronts two parameterizations, Doury's and Pasquill's models, to contrast their behavior. Doury's model seems to excessively inflate the influence of some inputs compared to Pasquill's, such as the altitude of emission and the air stability, which do not play the same role in the two models. The outputs of the long-range model were dominated by only a few inputs. On the contrary, in this study the influence is shared more evenly between the inputs.

  14. Measurements and Modeling of Total Solar Irradiance in X-class Solar Flares

    NASA Technical Reports Server (NTRS)

    Moore, Christopher S.; Chamberlin, Phillip Clyde; Hock, Rachel

    2014-01-01

    The Total Irradiance Monitor (TIM) from NASA's SOlar Radiation and Climate Experiment can detect changes in the total solar irradiance (TSI) to a precision of 2 ppm, allowing observations of variations due to the largest X-class solar flares for the first time. Presented here is a robust algorithm for determining the radiative output in the TIM TSI measurements, in both the impulsive and gradual phases, for the four solar flares presented in Woods et al., as well as an additional flare measured on 2006 December 6. The radiative outputs for both phases of these five flares are then compared to the vacuum ultraviolet (VUV) irradiance output from the Flare Irradiance Spectral Model (FISM) in order to derive an empirical relationship between the FISM VUV model and the TIM TSI data output to estimate the TSI radiative output for eight other X-class flares. This model provides the basis for the bolometric energy estimates for the solar flares analyzed in the Emslie et al. study.

  15. Quantitative Decision Support Requires Quantitative User Guidance

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2009-12-01

Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here? Or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of communicating the uses of climate model output to users and policy makers, as well as to other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to output from current state-of-the-art general circulation models. Second, a critical evaluation is given of the language often employed in communicating climate model output: language which accurately states that models are “better”, have “improved” and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative and misleading, both for the current climate and as a function of the state of each climate simulation. 
Third, a general approach for evaluating the relevance of quantitative climate model output for a given problem is presented. Based on climate science, meteorology, and the details of the question at hand, this approach identifies necessary (never sufficient) conditions required for the rational use of climate model output in quantitative decision support tools. Inasmuch as climate forecasting is a problem of extrapolation, there will always be harsh limits on our ability to establish where a model is fit for purpose; this does not, however, prevent us from identifying model noise as such, and thereby avoiding some cases of the misapplication and over-interpretation of model output. It is suggested that failure to clearly communicate the limits of today’s climate models in providing quantitative, decision-relevant climate information to today’s users of climate information would risk the credibility of tomorrow’s climate science, and of science-based policy more generally.

  16. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet modellers often still intend to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated from inaccurate rainfall inputs? In other words, how important is the rainfall uncertainty for the model output relative to the importance of the model parameters? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and of the model parameters on the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, ...), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of this variability can also differ between hydrological models with different spatial and temporal scales. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM).
The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
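The idea of treating a rainfall multiplier as just another parameter in a Sobol' analysis can be sketched as follows. The response function below is a deliberately simple, hypothetical stand-in (a linear model dominated by the rainfall multiplier), not SWAT or NAM; the estimator is the standard pick-freeze Monte Carlo form for the first-order index:

```python
import numpy as np

def first_order_sobol(model, n_params, idx, n=4000, seed=1):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' index
    S_idx = Var(E[Y | X_idx]) / Var(Y), using two independent sample matrices."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_params))
    B = rng.uniform(size=(n, n_params))
    AB = A.copy()
    AB[:, idx] = B[:, idx]                      # replace column idx of A with that of B
    yA, yB, yAB = model(A), model(B), model(AB)
    return float(np.mean(yB * (yAB - yA)) / np.var(np.concatenate([yA, yB])))

# Toy response: the "rainfall multiplier" (column 0) dominates an ordinary
# model "parameter" (column 1)
toy = lambda X: 4.0 * X[:, 0] + X[:, 1]
s_rain = first_order_sobol(toy, 2, 0)           # index of the rainfall multiplier
s_param = first_order_sobol(toy, 2, 1)          # index of the ordinary parameter
```

For this linear toy model the analytic indices are 16/17 and 1/17, so the estimates show the multiplier dominating, which is the kind of comparison the study performs between rainfall uncertainty and regular parameters.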

  17. Analysis and model on space-time characteristics of wind power output based on the measured wind speed data

    NASA Astrophysics Data System (ADS)

    Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan

    2018-02-01

Most existing studies on wind power output focus on the fluctuation of wind farms, while the spatial self-complementarity of wind power output time series has been ignored. Therefore, the existing probability models cannot reflect the features of power systems incorporating wind farms. This paper analyzes the spatial self-complementarity of wind power and proposes a probability model that reflects the temporal characteristics of wind power on seasonal and diurnal timescales, based on sufficient measured data and an improved clustering method. This model can provide an important reference for the simulation of power systems incorporating wind farms.

  18. Explicit least squares system parameter identification for exact differential input/output models

    NASA Technical Reports Server (NTRS)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.
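The integration idea can be sketched for the simplest case, a first-order model dy/dt = a·y + b·u. Integrating both sides gives y(t) − y(0) = a·∫y dt + b·∫u dt, which is linear in (a, b) and avoids differentiating the data. All signals and values below are synthetic illustrations, not the paper's chemical kinetics or Lorenz examples:

```python
import numpy as np

# Identify a and b in dy/dt = a*y + b*u without differentiating the data.
t = np.linspace(0.0, 5.0, 500)
u = np.sin(t)                                   # known input signal
a_true, b_true = -1.5, 2.0

y = np.zeros_like(t)                            # synthetic "measured" output (forward Euler)
for k in range(1, t.size):
    dt = t[k] - t[k - 1]
    y[k] = y[k - 1] + dt * (a_true * y[k - 1] + b_true * u[k - 1])

# Cumulative trapezoidal integrals of y and u form the regressor matrix
Iy = np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))])
Iu = np.concatenate([[0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))])
Phi = np.column_stack([Iy, Iu])

# Explicit (one-shot) least squares: Phi @ [a, b] ~= y - y(0)
a_hat, b_hat = np.linalg.lstsq(Phi, y - y[0], rcond=None)[0]
```

The estimates recover a and b to within the discretization error of the synthetic data, illustrating why an exactly integrable equation error permits explicit rather than iterative estimation.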

  19. Life and reliability models for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Knorr, R. J.; Coy, J. J.

    1982-01-01

Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission the input and output shafts are coaxial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load versus life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1 reduction transmissions with 1500 rpm input speed to illustrate their use.
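A minimal sketch of the series-system reliability idea described above: each component survives a given life with a two-parameter Weibull probability, and the transmission survives only if every component does, so reliabilities multiply. All component parameters below are hypothetical illustrations, not the paper's capacities:

```python
import math

def weibull_reliability(life, eta, beta):
    """Two-parameter Weibull reliability R(L) = exp(-(L/eta)^beta)."""
    return math.exp(-((life / eta) ** beta))

def system_reliability(life, components):
    """Series-system reliability: the transmission survives a given life only if
    every component survives it, so the component reliabilities multiply."""
    r = 1.0
    for eta, beta in components:
        r *= weibull_reliability(life, eta, beta)
    return r

# Hypothetical (characteristic life in hours, Weibull slope) pairs for four
# transmission components -- illustrative values only
components = [(9000.0, 1.2), (12000.0, 1.1), (12000.0, 1.1), (15000.0, 1.3)]
r_sys = system_reliability(3000.0, components)  # system reliability at 3000 h
```

The system reliability is necessarily lower than that of any single component, which is why adding a fourth planet changes the system life-reliability trade-off studied in the paper.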

  20. Evaluation of statistically downscaled GCM output as input for hydrological and stream temperature simulation in the Apalachicola–Chattahoochee–Flint River Basin (1961–99)

    USGS Publications Warehouse

    Hay, Lauren E.; LaFontaine, Jacob H.; Markstrom, Steven

    2014-01-01

The accuracy of statistically downscaled general circulation model (GCM) simulations of daily surface climate for historical conditions (1961–99) and the implications when they are used to drive hydrologic and stream temperature models were assessed for the Apalachicola–Chattahoochee–Flint River basin (ACFB). The ACFB is a 50 000 km2 basin located in the southeastern United States. Three GCMs were statistically downscaled, using an asynchronous regional regression model (ARRM), to ⅛° grids of daily precipitation and minimum and maximum air temperature. These ARRM-based climate datasets were used as input to the Precipitation-Runoff Modeling System (PRMS), a deterministic, distributed-parameter, physical-process watershed model used to simulate and evaluate the effects of various combinations of climate and land use on watershed response. The ACFB was divided into 258 hydrologic response units (HRUs) in which the components of flow (groundwater, subsurface, and surface) are computed in response to climate, land surface, and subsurface characteristics of the basin. Daily simulations of flow components from PRMS were used with the climate to simulate in-stream water temperatures using the Stream Network Temperature (SNTemp) model, a mechanistic, one-dimensional heat transport model for branched stream networks. The climate, hydrology, and stream temperature for historical conditions were evaluated by comparing model outputs produced from historical climate forcings developed from gridded station data (GSD) versus those produced from the three statistically downscaled GCMs using the ARRM methodology. The PRMS and SNTemp models were forced with the GSD and the outputs produced were treated as “truth.” This allowed for a spatial comparison by HRU of the GSD-based output with ARRM-based output. 
Distributional similarities between GSD- and ARRM-based model outputs were compared using the two-sample Kolmogorov–Smirnov (KS) test in combination with descriptive metrics such as the mean and variance and an evaluation of rare and sustained events. In general, precipitation and streamflow quantities were negatively biased in the downscaled GCM outputs, and results indicate that the downscaled GCM simulations consistently underestimate the largest precipitation events relative to the GSD. The KS test results indicate that ARRM-based air temperatures are similar to the GSD at the daily time step for the majority of the ACFB, with perhaps subweekly averaging needed for stream temperature. Depending on the GCM and spatial location, ARRM-based precipitation and streamflow require averaging of up to 30 days to become similar to the GSD-based output. Evaluation of the model skill for historical conditions suggests some guidelines for use of future projections; while it seems correct to place greater confidence in evaluation metrics which perform well historically, this does not necessarily mean those metrics will accurately reflect model outputs for future climatic conditions. Results from this study indicate no “best” overall model, but the breadth of the analysis can give product users an indication of the applicability of the results to their particular problem. Since results for historical conditions indicate that model outputs can have significant biases associated with them, examining the range in future projections in terms of change relative to historical conditions for each individual GCM may be more appropriate.
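The distributional comparison step can be sketched with a hand-rolled two-sample KS statistic. The synthetic "GSD" and "ARRM" samples below are illustrative stand-ins for the study's gridded data, showing how a biased downscaled series produces a larger KS distance:

```python
import numpy as np

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: maximum vertical distance
    between the two empirical CDFs, evaluated at every sample point."""
    x, y = np.sort(x), np.sort(y)
    grid = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, grid, side="right") / x.size
    cdf_y = np.searchsorted(y, grid, side="right") / y.size
    return float(np.abs(cdf_x - cdf_y).max())

rng = np.random.default_rng(0)
gsd = rng.normal(15.0, 3.0, 1000)               # stand-in for GSD-based daily values
arrm_ok = rng.normal(15.1, 3.0, 1000)           # downscaled output with a small bias
arrm_biased = rng.normal(12.0, 3.0, 1000)       # downscaled output with a large bias
d_ok = ks_two_sample(gsd, arrm_ok)
d_biased = ks_two_sample(gsd, arrm_biased)
```

In practice one would convert the statistic to a p-value (e.g. with `scipy.stats.ks_2samp`), but the raw distance already ranks the biased series as less similar to the reference.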

  1. RCHILD - an R-package for flexible use of the landscape evolution model CHILD

    NASA Astrophysics Data System (ADS)

    Dietze, Michael

    2014-05-01

Landscape evolution models provide powerful approaches to numerically assess earth surface processes, quantify rates of landscape change, infer sediment transfer rates, estimate sediment budgets, investigate the consequences of changes in external drivers on a geomorphic system, provide spatio-temporal interpolations between known landscape states, and test conceptual hypotheses. CHILD (Channel-Hillslope Integrated Landscape Development Model) is one of the most widely used models of landscape change, particularly in the context of interacting tectonic and geomorphologic processes. Running CHILD from the command line and working with the model output can be a rather awkward task (static model control via a text input file, only numeric output in text files). The package RCHILD is a collection of functions for the free statistical software R that help to use CHILD in a flexible, dynamic and user-friendly way. The included functions allow creating maps, real-time scenes, animations and further thematic plots from model output. The model input files can be modified dynamically and, hence, (feedback-related) changes in external factors can be implemented iteratively. Output files can be written to common formats that can be readily imported into standard GIS software. This contribution presents the basic functionality of the model CHILD as visualised and modified by the package. A rough overview of the available functions is given. Application examples illustrate the great potential of numerical modelling of geomorphologic processes.

  2. Re-using biological devices: a model-aided analysis of interconnected transcriptional cascades designed from the bottom-up.

    PubMed

    Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo

    2017-01-01

The study of simplified, ad-hoc constructed model systems can help to elucidate whether quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, such a process is affected by different, usually non-modelled, sources of unpredictability, like cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model, including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and to test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test set circuits showed unexpected logic behaviour. Both the NBM and the BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit output with higher accuracy than the NBM, which is unable to capture the experimental output exhibited by some of the circuits even qualitatively. Finally, resource usage parameters, estimated via the BM, guided the successful construction of new, corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved by considering resource limitation modelling, but further efforts are needed to improve the accuracy of models for biological engineering.
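A minimal sketch of the kind of Hill-function cascade the no-burden model describes: each stage is a steady-state activation or repression Hill curve, and stages compose by feeding one stage's output into the next. All parameter values below are hypothetical, and the cell-burden term of the BM is deliberately omitted:

```python
def hill_activation(inducer, v_max, k, n):
    """Hill activation: steady-state output rises with the inducer level."""
    return v_max * inducer ** n / (k ** n + inducer ** n)

def hill_repression(repressor, v_max, k, n):
    """Hill repression: steady-state output falls as the repressor accumulates."""
    return v_max / (1.0 + (repressor / k) ** n)

def cascade(inducer):
    """Two-stage transcriptional cascade: the inducer drives expression of a
    repressor, which in turn shuts down a downstream reporter.
    All parameter values are hypothetical."""
    repressor = hill_activation(inducer, v_max=100.0, k=10.0, n=2.0)
    return hill_repression(repressor, v_max=50.0, k=20.0, n=2.0)
```

With no inducer the reporter is fully on, and a high inducer level nearly silences it; fitting each stage's parameters on "training" circuits and predicting composed "test" circuits is the bottom-up workflow the abstract evaluates.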

  3. Software Validation via Model Animation

    NASA Technical Reports Server (NTRS)

    Dutle, Aaron M.; Munoz, Cesar A.; Narkawicz, Anthony J.; Butler, Ricky W.

    2015-01-01

    This paper explores a new approach to validating software implementations that have been produced from formally-verified algorithms. Although visual inspection gives some confidence that the implementations faithfully reflect the formal models, it does not provide complete assurance that the software is correct. The proposed approach, which is based on animation of formal specifications, compares the outputs computed by the software implementations on a given suite of input values to the outputs computed by the formal models on the same inputs, and determines if they are equal up to a given tolerance. The approach is illustrated on a prototype air traffic management system that computes simple kinematic trajectories for aircraft. Proofs for the mathematical models of the system's algorithms are carried out in the Prototype Verification System (PVS). The animation tool PVSio is used to evaluate the formal models on a set of randomly generated test cases. Output values computed by PVSio are compared against output values computed by the actual software. This comparison improves the assurance that the translation from formal models to code is faithful and that, for example, floating point errors do not greatly affect correctness and safety properties.

  4. Calculating distributed glacier mass balance for the Swiss Alps from RCM output: Development and testing of downscaling and validation methods

    NASA Astrophysics Data System (ADS)

    Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.

    2009-04-01

Climate model output has been applied in several studies of glacier mass balance calculation. To date, computation of mass balance has mostly been performed at the native resolution of the climate model output, or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps over the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution, while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. On the one hand, summer melt measured at stakes on several glaciers is well reproduced by the model; on the other hand, observed accumulation is either over- or underestimated. It is shown that these shifts are systematic and correlated with regional biases in the meteorological input fields. 
We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of future studies.

  5. Study of Regional Downscaled Climate and Air Quality in the United States

    NASA Astrophysics Data System (ADS)

    Gao, Y.; Fu, J. S.; Drake, J.; Lamarque, J.; Lam, Y.; Huang, K.

    2011-12-01

Due to increasing anthropogenic greenhouse gas emissions, global and regional climate patterns have changed significantly. Climate change has exerted a strong impact on ecosystems, air quality and human life. The global Community Earth System Model (CESM v1.0) was used to predict future climate and chemistry under projected emission scenarios. Two new emission scenarios, Representative Concentration Pathway (RCP) 4.5 and RCP 8.5, were used in this study for climate and chemistry simulations. The projected global mean temperature will increase by 1.2 and 1.7 degrees Celsius in the 2050s for the RCP 4.5 and RCP 8.5 scenarios, respectively. In order to take advantage of detailed local topography and land use data, and to study the local climate impact on air quality, we downscaled the CESM outputs to a 4 km by 4 km eastern US domain using the Weather Research and Forecasting (WRF) Model and the Community Multi-scale Air Quality modeling system (CMAQ). Evaluations between regional and global model outputs, and between regional model outputs and observational data, were conducted to verify the downscaling methodology. Future climate change and its air quality impact were also examined at this 4 km by 4 km high resolution.

  6. Perl-speaks-NONMEM (PsN)--a Perl module for NONMEM related programming.

    PubMed

    Lindbom, Lars; Ribbing, Jakob; Jonsson, E Niclas

    2004-08-01

    The NONMEM program is the most widely used nonlinear regression software in population pharmacokinetic/pharmacodynamic (PK/PD) analyses. In this article we describe a programming library, Perl-speaks-NONMEM (PsN), intended for programmers who aim to use the computational capability of NONMEM in external applications. The library is object oriented and written in the programming language Perl. The classes of the library are built around NONMEM's data, model and output files. The specification of the NONMEM model is easily set or changed through the model and data file classes, while the output from a model fit is accessed through the output file class. The classes have methods that help the programmer perform common repetitive tasks, e.g. summarising the output from a NONMEM run, setting the initial estimates of a model based on a previous run or truncating values over a certain threshold in the data file. PsN creates a basis for the development of high-level software using NONMEM as the regression tool.

  7. Using quantum theory to simplify input-output processes

    NASA Astrophysics Data System (ADS)

    Thompson, Jayne; Garner, Andrew J. P.; Vedral, Vlatko; Gu, Mile

    2017-02-01

    All natural things process and transform information. They receive environmental information as input, and transform it into appropriate output responses. Much of science is dedicated to building models of such systems: algorithmic abstractions of their input-output behavior that allow us to simulate how such systems can behave in the future, conditioned on what has transpired in the past. Here, we show that classical models cannot avoid inefficiency: they must store past information that is unnecessary for correct future simulation. We construct quantum models that mitigate this waste, whenever it is physically possible to do so. This suggests that the complexity of general input-output processes depends fundamentally on what sort of information theory we use to describe them.

  8. Enhancement and identification of dust events in the south-west region of Iran using satellite observations

    NASA Astrophysics Data System (ADS)

    Taghavi, F.; Owlad, E.; Ackerman, S. A.

    2017-03-01

    South-west Asia, including the Middle East, is one of the regions most prone to dust storm events, and in recent years there has been an increase in the occurrence of these environmental and meteorological phenomena. Remote sensing serves as a practical method to detect and characterise these events. In this study, two dust enhancement algorithms were used to investigate the behaviour of dust events using satellite data, compare the results with numerical model output and other satellite products, and finally validate them against in-situ measurements. The results show that the thermal infrared algorithm enhances dust more accurately. The aerosol optical depth from MODIS and the output of the Dust Regional Atmospheric Model (DREAM8b) are used for comparing the results. Ground-based observations from synoptic stations and sun photometers are used for validating the satellite products. To find the transport direction, the locations of the dust sources and the synoptic situations during these events, model outputs (HYSPLIT and NCEP/NCAR) are presented. Comparing the results with synoptic maps and the model outputs showed that the enhancement algorithms are more reliable than other MODIS products or model outputs for enhancing dust.

  9. Classification

    NASA Technical Reports Server (NTRS)

    Oza, Nikunj C.

    2011-01-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. In supervised learning, a set of training examples---examples with known output values---is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate's measurements. This chapter discusses methods to perform machine learning, with examples involving astronomy.
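
    The supervised workflow described above (train on labeled examples, then predict unseen inputs) can be illustrated with a minimal nearest-centroid classifier. This is a toy stand-in for a real learning algorithm; the feature vectors and class labels below are invented for the example.

```python
def train_nearest_centroid(examples):
    """Build a model from (features, label) pairs: one centroid per class."""
    sums, counts = {}, {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def classify(model, features):
    """Predict the class whose centroid is closest in Euclidean distance."""
    def dist2(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(model, key=lambda label: dist2(model[label]))
```

In the sunspot example, each feature vector would hold the measurements of one candidate sunspot and each label its type.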

  10. Control theory-based regulation of hippocampal CA1 nonlinear dynamics.

    PubMed

    Hsiao, Min-Chi; Song, Dong; Berger, Theodore W

    2008-01-01

    We are developing a biomimetic electronic neural prosthesis to replace regions of the hippocampal brain area that have been damaged by disease or insult. Our previous study has shown that the VLSI implementation of a CA3 nonlinear dynamic model can functionally replace the CA3 subregion of the hippocampal slice. As a result, the propagation of temporal patterns of activity from DG-->VLSI-->CA1 reproduces the activity observed experimentally in the biological DG-->CA3-->CA1 circuit. In this project, we incorporate an open-loop controller to optimize the output (CA1) response. Specifically, we seek to optimize the stimulation signal to CA1 using a predictive dentate gyrus (DG)-CA1 nonlinear model (i.e., DG-CA1 trajectory model) and a CA1 input-output model (i.e., CA1 plant model), such that the ultimate CA1 response (i.e., desired output) can be first predicted by the DG-CA1 trajectory model and then transformed to the desired stimulation through the inverse CA1 plant model. Lastly, the desired CA1 output is evoked by the estimated optimal stimulation. This study is the first stage of formulating an integrated modeling-control strategy for the hippocampal neural prosthetic system.
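
    The two-stage logic above (trajectory model predicts the desired CA1 output, inverse plant model converts it into a stimulation) can be sketched with scalar linear models. The actual models in the study are nonlinear dynamic models; the gains and offsets below are purely hypothetical.

```python
def desired_output(trajectory_model, dg_input):
    """Predict the target CA1 response from the DG input (toy linear trajectory model)."""
    gain, offset = trajectory_model
    return gain * dg_input + offset

def optimal_stimulation(plant_model, y_desired):
    """Invert a linear CA1 plant model y = a*u + b to recover the stimulation u."""
    a, b = plant_model
    return (y_desired - b) / a

# Hypothetical gains/offsets, for illustration only.
trajectory = (2.0, 0.5)
plant = (4.0, 0.1)
y_target = desired_output(trajectory, 1.0)  # target CA1 response
u = optimal_stimulation(plant, y_target)    # stimulation that should evoke it
```

Applying the plant model to `u` recovers `y_target`, which is exactly the consistency the open-loop controller relies on.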

  11. Hydrological responses to dynamically and statistically downscaled climate model output

    USGS Publications Warehouse

    Wilby, R.L.; Hay, L.E.; Gutowski, W.J.; Arritt, R.W.; Takle, E.S.; Pan, Z.; Leavesley, G.H.; Clark, M.P.

    2000-01-01

    Daily rainfall and surface temperature series were simulated for the Animas River basin, Colorado using dynamically and statistically downscaled output from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis. A distributed hydrological model was then applied to the downscaled data. Relative to raw NCEP output, downscaled climate variables provided more realistic simulations of basin scale hydrology. However, the results highlight the sensitivity of modeled processes to the choice of downscaling technique, and point to the need for caution when interpreting future hydrological scenarios.

  12. Scale and modeling issues in water resources planning

    USGS Publications Warehouse

    Lins, H.F.; Wolock, D.M.; McCabe, G.J.

    1997-01-01

    Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models: the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.

  13. Integrated Model Reduction and Control of Aircraft with Flexible Wings

    NASA Technical Reports Server (NTRS)

    Swei, Sean Shan-Min; Zhu, Guoming G.; Nguyen, Nhan T.

    2013-01-01

    This paper presents an integrated approach to the modeling and control of aircraft with flexible wings. The coupled aircraft rigid body dynamics with a high-order elastic wing model can be represented in a finite-dimensional state-space form. Given a set of desired output covariances, a model reduction process is performed by using the weighted Modal Cost Analysis (MCA). A dynamic output feedback controller, designed based on the reduced-order model, is developed by utilizing the output covariance constraint (OCC) algorithm, and the resulting OCC design weighting matrix is used for the next iteration of the weighted cost analysis. This controller is then validated against the full-order evaluation model to ensure that the aircraft's handling qualities are met and the fluttering motion of the wings is suppressed. An iterative algorithm is developed in the CONDUIT environment to realize the integration of model reduction and controller design. The proposed integrated approach is applied to the NASA Generic Transport Model (GTM) for demonstration.

  14. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
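
    The iterative temperature calculation above can be sketched as a fixed-point iteration: efficiency is assumed to fall linearly with cell temperature, and temperature to rise with the flux that is not converted to electricity. All coefficients below are illustrative assumptions, not values from the paper.

```python
def cell_operating_point(intensity, t_amb=300.0, eta_ref=0.15, beta=0.004,
                         t_ref=298.0, k=0.025, tol=1e-6, max_iter=100):
    """Iterate cell temperature and efficiency to a self-consistent operating
    point. `intensity` is the concentrated flux on the cell (W/m^2); the
    linear efficiency model and heating coefficient `k` are hypothetical."""
    t = t_amb
    for _ in range(max_iter):
        # Efficiency falls linearly with temperature above the reference.
        eta = eta_ref * (1.0 - beta * (t - t_ref))
        # Cell heats in proportion to the flux not converted to power.
        t_new = t_amb + k * intensity * (1.0 - eta)
        if abs(t_new - t) < tol:
            break
        t = t_new
    power = eta * intensity  # cell power output per unit area
    return t, eta, power
```

Summing `power` over the cells of a circuit (e.g. via the single diode model) would then give the circuit output described in the abstract.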

  15. A Generalized Mixture Framework for Multi-label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
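
    The chain-rule decomposition of the class posterior used by classifier chains can be written down directly: P(Y1, ..., Yd | X) is the product of one conditional per output. The toy conditional probabilities below are invented; a real model would learn them from data.

```python
def chain_joint_posterior(conditionals, x, y):
    """Chain-rule factorization P(y1..yd | x) = prod_j P(yj | x, y1..y(j-1)).
    `conditionals[j]` maps (x, previous_labels) -> P(yj = 1 | x, previous)."""
    prob = 1.0
    for j, yj in enumerate(y):
        p1 = conditionals[j](x, y[:j])
        prob *= p1 if yj == 1 else (1.0 - p1)
    return prob

# Two toy binary labels: the second depends on the first (invented numbers).
conditionals = [
    lambda x, prev: 0.7,                            # P(y1 = 1 | x)
    lambda x, prev: 0.9 if prev[0] == 1 else 0.2,   # P(y2 = 1 | x, y1)
]
```

A mixtures-of-experts model would average several such chains, each with its own ordering and weights.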

  16. Theoretic aspects of the identification of the parameters in the optimal control model

    NASA Technical Reports Server (NTRS)

    Vanwijk, R. A.; Kok, J. J.

    1977-01-01

    The identification of the parameters of the optimal control model from input-output data of the human operator is considered. Accepting the basic structure of the model as a cascade of a full-order observer and a feedback law, and suppressing the inherent optimality of the human controller, the parameters to be identified are the feedback matrix, the observer gain matrix, and the intensity matrices of the observation noise and the motor noise. The identification of the parameters is a statistical problem, because the system and output are corrupted by noise, and therefore the solution must be based on the statistics (probability density function) of the input and output data of the human operator. However, based on the statistics of the input-output data of the human operator, no distinction can be made between the observation and the motor noise, which shows that the model suffers from overparameterization.

  17. Climate impacts on palm oil yields in the Nigerian Niger Delta

    NASA Astrophysics Data System (ADS)

    Okoro, Stanley U.; Schickhoff, Udo; Boehner, Juergen; Schneider, Uwe A.; Huth, Neil

    2016-04-01

    Palm oil production has increased in recent decades and is estimated to increase further. The optimal role of palm oil production, however, is controversial because of resource conflicts with alternative land uses. Local conditions and climate change affect resource competition and the desirability of palm oil production. Crop yield simulations driven by different climate model outputs under different climate scenarios can therefore be an important tool for quantifying the uncertainty among climate model outputs. Previous studies on this region have focused mostly on single experimental fields, have not considered variations in agro-ecological zones, climatic conditions, varieties and management practices, have in most cases not extended to the various IPCC climate scenarios, and were mostly based on a single climate model output. Furthermore, the uncertainty of the climate-impact model has rarely been investigated in this region. To this end we use the biophysical simulation model APSIM (Agricultural Production Systems Simulator) to simulate the regional climate impact on oil palm yield over the Nigerian Niger Delta. We also examine whether an ensemble of crop yield model outputs reduces the uncertainty more than an ensemble of climate model outputs. The results could serve as a baseline for policy makers in this region in understanding the interaction between the region's energy crop production potential, its food security, and other negative feedbacks that could be associated with bioenergy from oil palm. Keywords: Climate Change, Climate impacts, Land use and Crop yields.

  18. Wrapping Python around MODFLOW/MT3DMS based groundwater models

    NASA Astrophysics Data System (ADS)

    Post, V.

    2008-12-01

    Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially-distributed model parameters. The model output consists of a variety of data, such as heads, fluxes and concentrations. Typically all files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS based models. The library is freely available and has an open structure so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially-distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently, allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data, which is not constrained by limitations of third-party products.
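
    A helper routine of the kind described, for example writing and re-reading a spatially distributed parameter such as hydraulic conductivity, might look as follows. The whitespace-delimited layout here is a deliberate simplification, not the exact MODFLOW file format, and the function names are hypothetical.

```python
import os
import tempfile

def write_array(path, array):
    """Write a 2-D parameter array (e.g. hydraulic conductivity) as
    whitespace-delimited ASCII, one model row per line (simplified layout,
    not the real MODFLOW array format)."""
    with open(path, "w") as f:
        for row in array:
            f.write(" ".join("%12.4e" % v for v in row) + "\n")

def read_array(path):
    """Read the array back into a list of lists of floats."""
    with open(path) as f:
        return [[float(v) for v in line.split()] for line in f if line.strip()]

# Round-trip demo in a temporary directory.
path = os.path.join(tempfile.mkdtemp(), "hk.txt")
write_array(path, [[1.5, 2.0], [0.25, 10.0]])
hk = read_array(path)
```

Real binary MODFLOW output (heads, budgets) has record headers and would need `struct`-based parsing rather than plain text splitting.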

  19. Evaluation of simulated ocean carbon in the CMIP5 earth system models

    NASA Astrophysics Data System (ADS)

    Orr, James; Brockmann, Patrick; Seferian, Roland; Servonnat, Jérôme; Bopp, Laurent

    2013-04-01

    We maintain a centralized model output archive containing output from the previous generation of Earth System Models (ESMs), the 7 models used in the IPCC AR4 assessment. Output is in a common format located on a centralized server and is publicly available through a web interface. Through the same interface, LSCE/IPSL has also made available output from the Coupled Model Intercomparison Project (CMIP5), the foundation for the ongoing IPCC AR5 assessment. The latter includes ocean biogeochemical fields from more than 13 ESMs. Modeling partners across 3 EU projects refer to the combined AR4-AR5 archive and comparison as OCMIP5, building on previous phases of OCMIP (Ocean Carbon Cycle Intercomparison Project) and making a clear link to IPCC AR5 (CMIP5). While now focusing on assessing the latest generation of results (AR5, CMIP5), this effort is also able to put them in context (AR4). For model comparison and evaluation, we have also stored computed derived variables (e.g., those needed to assess ocean acidification) and key fields regridded to a common 1°x1° grid, thus complementing the standard CMIP5 archive. The combined AR4-AR5 output (OCMIP5) has been used to compute standard quantitative metrics, both global and regional, and those have been synthesized with summary diagrams. In addition, for key biogeochemical fields we have deconvolved spatiotemporal components of the mean square error in order to constrain which models go wrong where. Here we will detail results from these evaluations, which have exploited gridded climatological data. The archive, interface, and centralized evaluation provide a solid technical foundation, upon which collaboration and communication are being broadened in the ocean biogeochemical modeling community. Ultimately we aim to encourage wider use of the OCMIP5 archive.
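
    One standard way to deconvolve components of the mean square error (one plausible reading of the approach mentioned above, not necessarily the authors' exact method) is to split it into a bias term and a centered pattern term:

```python
def mse_decomposition(model, obs):
    """Split mean square error into a bias term and a centered (pattern) term:
    MSE = (mean(m) - mean(o))**2 + mean(((m - mean(m)) - (o - mean(o)))**2)."""
    n = len(model)
    mm = sum(model) / n
    mo = sum(obs) / n
    bias2 = (mm - mo) ** 2
    centered = sum(((m - mm) - (o - mo)) ** 2 for m, o in zip(model, obs)) / n
    return bias2, centered
```

Applied per region or per season, the two terms show whether a model is wrong because of a mean offset or because of a misplaced spatial/temporal pattern.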

  20. UFO - The Universal FEYNRULES Output

    NASA Astrophysics Data System (ADS)

    Degrande, Céline; Duhr, Claude; Fuks, Benjamin; Grellscheid, David; Mattelaer, Olivier; Reiter, Thomas

    2012-06-01

    We present a new model format for automatized matrix-element generators, the so-called Universal FEYNRULES Output (UFO). The format is universal in the sense that it features compatibility with more than one single generator and is designed to be flexible, modular and agnostic of any assumption such as the number of particles or the color and Lorentz structures appearing in the interaction vertices. Unlike other model formats where text files need to be parsed, the information on the model is encoded into a PYTHON module that can easily be linked to other computer codes. We then describe an interface for the MATHEMATICA package FEYNRULES that allows for an automatic output of models in the UFO format.

  1. Alpha 2 LASSO Data Bundles

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi

    2015-08-31

    The Alpha 2 release is the second release from the LASSO Pilot Phase and builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES inputs include model configuration information and forcing data. LES outputs include profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consist of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  2. Modeling and simulation research on electromagnetic and energy-recycled damper based on Adams

    NASA Astrophysics Data System (ADS)

    Zhou, C. F.; Zhang, K.; Zhang, Pengfei

    2018-05-01

    In order to study the voltage and power output characteristics of the electromagnetic and energy-recycled damper, which consists of a gear, a rack and a generator, an Adams model of the damper and a Simulink model of the generator are established, and a co-simulation is carried out with these two models. Output indexes such as the gear speed and generator power are obtained from the simulation, and the results demonstrate that the peak voltage of the damper is 25 V and the maximum output power is 8 W. This research provides a basis for the prototype development of the electromagnetic and energy-recycled damper with gear and rack.

  3. Dynamic Simulation of Human Gait Model With Predictive Capability.

    PubMed

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
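
    The MPC loop described above (predict with an internal model, compare the prediction to the reference, and choose the input with minimal predicted error) can be sketched for a scalar system. The internal model and the candidate input grid below are hypothetical, far simpler than the nine-DOF gait model.

```python
def mpc_step(model, state, reference, candidates):
    """One model-predictive step: use the internal model to predict the output
    for each candidate input and pick the input minimizing predicted error."""
    best_u, best_err = None, float("inf")
    for u in candidates:
        predicted = model(state, u)
        err = (predicted - reference) ** 2
        if err < best_err:
            best_u, best_err = u, err
    return best_u

# Toy internal model: next output = 0.9*state + 0.5*input (hypothetical dynamics).
model = lambda x, u: 0.9 * x + 0.5 * u
u = mpc_step(model, state=1.0, reference=2.0, candidates=[i * 0.1 for i in range(-20, 21)])
```

A real MPC would optimize over a multi-step horizon with a gradient-based solver instead of this brute-force one-step search, but the predict-compare-minimize structure is the same.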

  4. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy.

    PubMed

    Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A

    2016-11-23

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.

  5. Team performance in the Italian NHS: the role of reflexivity.

    PubMed

    Urbini, Flavio; Callea, Antonino; Chirumbolo, Antonio; Talamo, Alessandra; Ingusci, Emanuela; Ciavolino, Enrico

    2018-04-09

    Purpose: The purpose of this paper is twofold: first, to investigate the goodness of the input-process-output (IPO) model in order to evaluate work team performance within the Italian National Health Care System (NHS); and second, to test the mediating role of reflexivity as an overarching process factor between input and output. Design/methodology/approach: The Italian version of the Aston Team Performance Inventory was administered to 351 employees working in teams in the Italian NHS. Mediation analyses with latent variables were performed via structural equation modeling (SEM); the significance of total, direct, and indirect effects was tested via bootstrapping. Findings: Underpinned by the IPO framework, the results of SEM supported the mediational hypotheses. First, the application of the IPO model in the Italian NHS showed adequate fit indices, showing that the process mediates the relationship between input and output factors. Second, reflexivity mediated the relationship between input and output, influencing some aspects of team performance. Practical implications: The results provide useful information for HRM policies, improving process dimensions of the IPO model via the mediating role of reflexivity as a key factor in team performance. Originality/value: This study is one of a limited number of studies that have applied the IPO model in the Italian NHS. Moreover, no study has yet examined the role of reflexivity as a mediator between input and output factors in the IPO model.

  6. Identification procedure for epistemic uncertainties using inverse fuzzy arithmetic

    NASA Astrophysics Data System (ADS)

    Haag, T.; Herrmann, J.; Hanss, M.

    2010-10-01

    For the mathematical representation of systems with epistemic uncertainties, arising, for example, from simplifications in the modeling procedure, models with fuzzy-valued parameters prove to be a suitable and promising approach. In practice, however, the determination of these parameters turns out to be a non-trivial problem. The identification procedure to appropriately update these parameters on the basis of a reference output (measurement or output of an advanced model) requires the solution of an inverse problem. Against this background, an inverse method for the computation of the fuzzy-valued parameters of a model with epistemic uncertainties is presented. This method stands out due to the fact that it only uses feedforward simulations of the model, based on the transformation method of fuzzy arithmetic, along with the reference output. An inversion of the system equations is not necessary. The advancement of the method presented in this paper consists of the identification of multiple input parameters based on a single reference output or measurement. An optimization is used to solve the resulting underdetermined problems by minimizing the uncertainty of the identified parameters. Regions where the identification procedure is reliable are determined by the computation of a feasibility criterion which is also based on the output data of the transformation method only. For a frequency response function of a mechanical system, this criterion allows a restriction of the identification process to some special range of frequency where its solution can be guaranteed. Finally, the practicability of the method is demonstrated by covering the measured output of a fluid-filled piping system by the corresponding uncertain FE model in a conservative way.

  7. Use of Regional Climate Model Output for Hydrologic Simulations

    NASA Astrophysics Data System (ADS)

    Hay, L. E.; Clark, M. P.; Wilby, R. L.; Gutowski, W. J.; Leavesley, G. H.; Pan, Z.; Arritt, R. W.; Takle, E. S.

    2001-12-01

    Daily precipitation and maximum and minimum temperature time series from a Regional Climate Model (RegCM2) were used as input to a distributed hydrologic model for a rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data. These datasets included precipitation and temperature data for all stations located within the area of the RegCM2 model output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and station data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and station-based simulations of runoff show little skill on a daily basis (Nash-Sutcliffe (NS) values ranging from 0.05 to 0.37 for RegCM2 and from -0.08 to 0.65 for station). When the precipitation and temperature biases are corrected in the RegCM2 output and station datasets (Bias-RegCM2 and Bias-station, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins. In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09), whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that the resolution of the RegCM2 output is appropriate for basin-scale modeling, but RegCM2 model output does not contain the day-to-day variability needed for basin-scale modeling in rainfall-dominated basins. Future work is warranted to identify the causes of systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
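
    The Nash-Sutcliffe efficiency used to score these runoff simulations has a simple closed form, sketched here with invented data: 1 is a perfect fit, 0 means the simulation is no better than the observed mean, and negative values are worse than the mean.

```python
def nash_sutcliffe(simulated, observed):
    """Nash-Sutcliffe efficiency: 1 - SSE / (sum of squared deviations of the
    observations from their mean)."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((s - o) ** 2 for s, o in zip(simulated, observed))
    ss_obs = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_obs
```

This is why NS values near or below zero, as reported for the uncorrected runs, indicate essentially no daily skill.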

  8. Peak expiratory flow profiles delivered by pump systems. Limitations due to wave action.

    PubMed

    Miller, M R; Jones, B; Xu, Y; Pedersen, O F; Quanjer, P H

    2000-06-01

    Pump systems are currently used to test the performance of both spirometers and peak expiratory flow (PEF) meters, but for certain flow profiles the input signal (i.e., requested profile) and the output profile can differ. We developed a mathematical model of wave action within a pump and compared the recorded flow profiles with both the input profiles and the output predicted by the model. Three American Thoracic Society (ATS) flow profiles and four artificial flow-versus-time profiles were delivered by a pump, first to a pneumotachograph (PT) on its own, then to the PT with a 32-cm upstream extension tube (which would favor wave action), and lastly with the PT in series with and immediately downstream of a mini-Wright peak flow meter. With the PT on its own, recorded flow for the seven profiles was 2.4 +/- 1.9% (mean +/- SD) higher than the pump's input flow, and similarly was 2.3 +/- 2.3% higher than the pump's output flow as predicted by the model. With the extension tube in place, the recorded flow was 6.6 +/- 6.4% higher than the input flow (range: 0.1 to 18.4%), but was only 1.2 +/- 2.5% higher than the output flow predicted by the model (range: -0.8 to 5.2%). With the mini-Wright meter in series, the flow recorded by the PT was on average 6.1 +/- 9.1% below the input flow (range: -23.8 to 2.5%), but was only 0.6 +/- 3.3% above the pump's output flow predicted by the model (range: -5.5 to 3.9%). The mini-Wright meter's reading (corrected for its nonlinearity) was on average 1.3 +/- 3.6% below the model's predicted output flow (range: -9.0 to 1.5%). The mini-Wright meter would be deemed outside ATS limits for accuracy for three of the seven profiles when compared with the pump's input PEF, but this would be true for only one profile when compared with the pump's output PEF as predicted by the model. Our study shows that the output flow from pump systems can differ from the input waveform depending on the operating configuration. This effect can be predicted with reasonable accuracy using a model based on nonsteady flow analysis that takes account of pressure wave reflections within pump systems.

  9. Measurement of unsteady loading and power output variability in a micro wind farm model in a wind tunnel

    NASA Astrophysics Data System (ADS)

    Bossuyt, Juliaan; Howland, Michael F.; Meneveau, Charles; Meyers, Johan

    2017-01-01

    Unsteady loading and spatiotemporal characteristics of power output are measured in a wind tunnel experiment of a microscale wind farm model with 100 porous disk models. The model wind farm is placed in a scaled turbulent boundary layer, and six different layouts, varied from aligned to staggered, are considered. The measurements are done by making use of a specially designed small-scale porous disk model, instrumented with strain gages. The frequency response of the measurements goes up to the natural frequency of the model, which corresponds to a reduced frequency of 0.6 when normalized by the diameter and the mean hub height velocity. The equivalent range of timescales, scaled to field-scale values, is 15 s and longer. The accuracy and limitations of the acquisition technique are documented and verified with hot-wire measurements. The spatiotemporal measurement capabilities of the experimental setup are used to study the cross-correlation in the power output of various porous disk models of wind turbines. A significant correlation is confirmed between streamwise aligned models, while staggered models show an anti-correlation.
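    The cross-correlation analysis described above can be sketched with synthetic data. The signal model below (a delayed, attenuated copy standing in for wake advection between streamwise-aligned turbines, plus noise) is an illustrative assumption, not the experiment's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
lag_true = 25                         # assumed advection delay, in samples
upstream = rng.standard_normal(n)
# Downstream power: delayed, attenuated copy of the upstream signal + noise.
downstream = 0.6 * np.roll(upstream, lag_true) + 0.8 * rng.standard_normal(n)

def xcorr(a, b, max_lag):
    """Normalized cross-correlation for lags 0..max_lag (b lagging a)."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return np.array([np.mean(a[:len(a) - k] * b[k:]) for k in range(max_lag + 1)])

rho = xcorr(upstream, downstream, 60)
peak_lag = int(np.argmax(rho))        # recovers the imposed delay
```

    The lag at which the cross-correlation peaks recovers the imposed delay, which is how a significant correlation between streamwise-aligned models would show up in practice.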

  10. Bayesian Model Averaging of Artificial Intelligence Models for Hydraulic Conductivity Estimation

    NASA Astrophysics Data System (ADS)

    Nadiri, A.; Chitsazan, N.; Tsai, F. T.; Asghari Moghaddam, A.

    2012-12-01

    This research presents a Bayesian artificial intelligence model averaging (BAIMA) method that incorporates multiple artificial intelligence (AI) models to estimate hydraulic conductivity and evaluate estimation uncertainties. Uncertainty in the AI model outputs stems from error in model input as well as non-uniqueness in selecting different AI methods. Using a single AI model tends to bias the estimation and underestimate uncertainty. BAIMA employs the Bayesian model averaging (BMA) technique to address the issue of relying on a single AI model for estimation. BAIMA estimates hydraulic conductivity by averaging the outputs of AI models according to their model weights. In this study, the model weights were determined using the Bayesian information criterion (BIC), which follows the parsimony principle. BAIMA calculates the within-model variances to account for uncertainty propagation from input data to AI model output. Between-model variances are evaluated to account for uncertainty due to model non-uniqueness. We employed Takagi-Sugeno fuzzy logic (TS-FL), artificial neural network (ANN) and neurofuzzy (NF) models to estimate hydraulic conductivity for the Tasuj plain aquifer, Iran. BAIMA combined the three AI models and produced a better fit than the individual models. While NF was expected to be the best AI model owing to its utilization of both TS-FL and ANN models, the NF model is nearly discarded by the parsimony principle. The TS-FL model and the ANN model showed equal importance although their hydraulic conductivity estimates were quite different. This resulted in significant between-model variances that are normally ignored when using a single AI model.
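    A rough sketch of the BIC-based weighting step: weights proportional to exp(-ΔBIC/2), then a weighted mean and a between-model variance. The BIC scores and conductivity estimates below are hypothetical, chosen only to illustrate how a much larger BIC nearly discards a model:

```python
import numpy as np

def bma_weights(bic):
    """Model weights from BIC values: w_i proportional to exp(-0.5 * dBIC_i)."""
    bic = np.asarray(bic, float)
    delta = bic - bic.min()          # dBIC relative to the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical BIC scores for three AI models (e.g. TS-FL, ANN, NF):
w = bma_weights([102.4, 102.9, 115.0])
# The third model's much larger BIC gives it a near-zero weight.

# BMA point estimate and between-model variance at one location
# (hydraulic-conductivity estimates below are invented):
est = np.array([3.1, 2.4, 2.8])
mean = np.sum(w * est)
between_var = np.sum(w * (est - mean) ** 2)
```

    When the remaining models disagree while carrying comparable weights, as TS-FL and ANN do in the study, the between-model variance term becomes significant.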

  11. "One-Stop Shopping" for Ocean Remote-Sensing and Model Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook

    2006-01-01

    OurOcean Portal 2.0 (http://ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an ocean model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as the MySQL database, Java web server (Apache/Tomcat), Live Access Server (LAS), interactive graphics with a Java applet on the client side and MATLAB/GMT on the server side, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served as pre-generated plots or in their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D ocean model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be viewed as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, Binary, etc. The interactive visualization is provided by the graphics software Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an ocean model with data assimilation on a remote computer.
Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server for

  12. Modifications of the U.S. Geological Survey modular, finite-difference, ground-water flow model to read and write geographic information system files

    USGS Publications Warehouse

    Orzol, Leonard L.; McGrath, Timothy S.

    1992-01-01

    This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulating, and transferring of data. GIS programs are commonly used to facilitate preparation of the model input data and analyze model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. Modifications to MODFLOW and the Streamflow-Routing package were minimized. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion of the operation of MODFLOWARC using a sample problem.

  13. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations

    PubMed Central

    Fernandez, Fernando R.; Malerba, Paola; White, John A.

    2015-01-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances. PMID:25909971

  14. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations.

    PubMed

    Fernandez, Fernando R; Malerba, Paola; White, John A

    2015-04-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances.
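    The exponential leaky integrate-and-fire framework referenced above can be sketched in a few lines. All parameter values here are assumed for illustration; the demo shows the baseline phenomenon the paper interrogates, namely that voltage fluctuations can elicit spikes that a subthreshold mean input alone does not (with the degree of such modulation depending on the shallowness of the exponential term):

```python
import numpy as np

def simulate_eif(I, delta_T=2.0, dt=0.1, tau=20.0, EL=-70.0, VT=-50.0,
                 V_reset=-70.0, V_spike=-30.0):
    """Euler simulation of an exponential integrate-and-fire neuron:
    tau * dV/dt = -(V - EL) + delta_T * exp((V - VT) / delta_T) + I(t).
    Returns the spike count; delta_T sets the sharpness of the
    spike-initiation nonlinearity."""
    V = EL
    spikes = 0
    for i_t in I:
        V += (-(V - EL) + delta_T * np.exp((V - VT) / delta_T) + i_t) * dt / tau
        if V >= V_spike:   # spike detected: count it and reset
            spikes += 1
            V = V_reset
    return spikes

rng = np.random.default_rng(1)
n = 50000                      # 5 s at dt = 0.1 ms
mean_drive = 16.0              # below the rheobase (VT - EL) - delta_T = 18
fluct = 80.0 * rng.standard_normal(n)   # stand-in for synaptic fluctuations

spikes_quiet = simulate_eif(mean_drive + np.zeros(n))
spikes_fluct = simulate_eif(mean_drive + fluct)
# Fluctuations elicit spikes that the subthreshold mean drive alone does not.
```

    Repeating this comparison while varying `delta_T` is one way to probe how a shallow exponential term limits fluctuation-based modulation of the input-output response.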

  15. Climate Model Ensemble Methodology: Rationale and Challenges

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.; Myrvold, W.

    2012-12-01

    A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system, and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4), modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models, and to ascribe unequal weights to models according to their performance. The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We will consider a simpler, well-understood case of taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We will also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models.
This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.
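    The measurement analogy can be made concrete. For the unweighted mean of n measurements with error covariance matrix C, Var(mean) = (1/n²) Σᵢⱼ Cᵢⱼ, so positive error correlations inflate the uncertainty relative to the independent case; the illustrative numbers below assume equal variances and equicorrelated errors:

```python
import numpy as np

def variance_of_mean(cov):
    """Variance of the unweighted mean given the error covariance matrix."""
    cov = np.asarray(cov, float)
    n = cov.shape[0]
    return cov.sum() / n**2

n, sigma2, rho = 10, 1.0, 0.5
cov_indep = sigma2 * np.eye(n)
cov_equicorr = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

v_indep = variance_of_mean(cov_indep)       # sigma^2 / n = 0.10
v_corr = variance_of_mean(cov_equicorr)     # sigma^2/n * (1 + (n-1)*rho) = 0.55
```

    With ρ = 0.5 the ensemble-mean variance is more than five times the independent-errors value, which is the quantitative sense in which model dependence must be compensated for.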

  16. Human Activity Recognition by Combining a Small Number of Classifiers.

    PubMed

    Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin

    2016-09-01

    We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the output of the sensors. The models are based on a soft-output combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifier combination models against a single classifier that employs all the signals from the sensors. Our models consistently exhibit a reduction of the error rate and an increase in robustness against sensor failures. Our models also outperform other classifier combination models that do not consider soft outputs and a Markovian structure of the human activities.
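    The two ingredients named above, soft-output combination across sensors and a first-order Markov chain over activities, can be sketched with a naive product rule and HMM forward filtering. The paper's actual Bayesian models are richer; all sensor posteriors and transition probabilities below are hypothetical:

```python
import numpy as np

def combine_soft(probs):
    """Fuse per-sensor class posteriors with a naive product rule."""
    p = np.prod(np.asarray(probs, float), axis=0)
    return p / p.sum()

def forward_filter(emissions, A, pi):
    """First-order Markov (HMM forward) filtering over per-frame posteriors."""
    alpha = pi * emissions[0]
    alpha = alpha / alpha.sum()
    out = [alpha]
    for e in emissions[1:]:
        alpha = e * (alpha @ A)
        alpha = alpha / alpha.sum()
        out.append(alpha)
    return np.array(out)

# Two hypothetical sensors reporting soft outputs over three activities:
fused = combine_soft([[0.6, 0.3, 0.1],
                      [0.5, 0.2, 0.3]])
# A "sticky" transition matrix encodes that activities change slowly:
A = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
pi = np.full(3, 1.0 / 3.0)
post = forward_filter(np.tile(fused, (5, 1)), A, pi)
pred = int(post[-1].argmax())   # most probable current activity
```

    Because the combination operates on soft outputs, a failed sensor can simply be dropped from the product, which is one route to the robustness against sensor failures noted above.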

  17. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  18. Model Based Predictive Control of Multivariable Hammerstein Processes with Fuzzy Logic Hypercube Interpolated Models

    PubMed Central

    Coelho, Antonio Augusto Rodrigues

    2016-01-01

    This paper introduces the Fuzzy Logic Hypercube Interpolator (FLHI) and demonstrates applications in control of multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) processes with Hammerstein nonlinearities. FLHI consists of a Takagi-Sugeno fuzzy inference system where membership functions act as kernel functions of an interpolator. Conjunction of membership functions in a unitary hypercube space enables multivariable interpolation in N dimensions. Membership functions act as interpolation kernels, such that the choice of membership functions determines interpolation characteristics, allowing FLHI to behave as a nearest-neighbor, linear, cubic, spline or Lanczos interpolator, to name a few. The proposed interpolator is presented as a solution to the modeling problem of static nonlinearities since it is capable of modeling both a function and its inverse function. Three study cases from the literature are presented: a single-input single-output (SISO) system, a MISO and a MIMO system. Good results are obtained regarding performance metrics such as set-point tracking, control variation and robustness. Results demonstrate applicability of the proposed method in modeling Hammerstein nonlinearities and their inverse functions for implementation of an output compensator with Model Based Predictive Control (MBPC), in particular Dynamic Matrix Control (DMC). PMID:27657723

  19. A phenomenological model of muscle fatigue and the power-endurance relationship.

    PubMed

    James, A; Green, S

    2012-11-01

    The relationship between power output and the time that it can be sustained during exercise (i.e., endurance) at high intensities is curvilinear. Although fatigue is implicit in this relationship, there is little evidence pertaining to it. To address this, we developed a phenomenological model that predicts the temporal response of muscle power during submaximal and maximal exercise and which was based on the type, contractile properties (e.g., fatiguability), and recruitment of motor units (MUs) during exercise. The model was first used to predict power outputs during all-out exercise when fatigue is clearly manifest and for several distributions of MU type. The model was then used to predict times that different submaximal power outputs could be sustained for several MU distributions, from which several power-endurance curves were obtained. The model was simultaneously fitted to two sets of human data pertaining to all-out exercise (power-time profile) and submaximal exercise (power-endurance relationship), yielding a high goodness of fit (R(2) = 0.96-0.97). This suggested that this simple model provides an accurate description of human power output during submaximal and maximal exercise and that fatigue-related processes inherent in it account for the curvilinearity of the power-endurance relationship.
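    The curvilinear power-endurance relationship is often summarized by the classic two-parameter critical-power hyperbola, t = W′/(P − CP). This is a deliberately simpler stand-in for the paper's motor-unit-based phenomenological model, with made-up trial data chosen to lie exactly on the curve (CP = 250 W, W′ = 30 kJ):

```python
import numpy as np

def endurance_time(P, CP, W_prime):
    """Hyperbolic power-endurance model: time to exhaustion at power P."""
    return W_prime / (P - CP)

# Invented trial data generated from CP = 250 W, W' = 30 kJ:
P = np.array([300.0, 350.0, 400.0, 450.0])   # sustained power (W)
t = np.array([600.0, 300.0, 200.0, 150.0])   # endurance time (s)

# Linearize as P = CP + W' * (1/t) and fit by ordinary least squares:
X = np.column_stack([np.ones_like(t), 1.0 / t])
(cp_hat, w_hat), *_ = np.linalg.lstsq(X, P, rcond=None)
```

    The fit recovers the generating parameters exactly here; with real trial data the residuals from such a fit are one way to judge whether a fatigue model accounts for the curvilinearity.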

  20. Data-Based Predictive Control with Multirate Prediction Step

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.
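    The idea of deriving a predictive model directly from input-output data can be sketched as a least-squares ARX identification. This is a deliberately simplified, noise-free stand-in for the paper's multirate receding-horizon formulation, and the system coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
u = rng.standard_normal(n)           # input sequence
y = np.zeros(n)
for k in range(1, n):                # invented first-order system
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1]

# Least-squares fit of a one-step-ahead predictor from input-output data
# only -- no explicit model of the system is needed:
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
y_pred = Phi @ theta                 # one-step-ahead predictions
```

    Stacking such one-step predictors over a horizon gives the multi-step prediction that an MPC-style cost function acts on; the multirate variant described above samples parts of that horizon more frequently than others.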

  1. Coral Mortality and Bleaching Output

    EPA Science Inventory

    COMBO is a spreadsheet-based model for the use of managers, conservationists, and biologists for projecting the effects of climate change on coral reefs at local-to-regional scales. The COMBO (Coral Mortality and Bleaching Output) model calculates the impacts to coral reefs from...

  2. Principal Dynamic Mode Analysis of the Hodgkin–Huxley Equations

    PubMed Central

    Eikenberry, Steffen E.; Marmarelis, Vasilis Z.

    2015-01-01

    We develop an autoregressive model framework based on the concept of Principal Dynamic Modes (PDMs) for the process of action potential (AP) generation in the excitable neuronal membrane described by the Hodgkin–Huxley (H–H) equations. The model's exogenous input is injected current, and whenever the membrane potential output exceeds a specified threshold, it is fed back as a second input. The PDMs are estimated from the previously developed Nonlinear Autoregressive Volterra (NARV) model, and represent an efficient functional basis for Volterra kernel expansion. The PDM-based model admits a modular representation, consisting of the forward and feedback PDM bases as linear filterbanks for the exogenous and autoregressive inputs, respectively, whose outputs are then fed to a static nonlinearity composed of polynomials operating on the PDM outputs and cross-terms of pair-products of PDM outputs. A two-step procedure for model reduction is performed: first, influential subsets of the forward and feedback PDM bases are identified and selected as the reduced PDM bases. Second, the terms of the static nonlinearity are pruned. The first step reduces model complexity from a total of 65 coefficients to 27, while the second further reduces the model coefficients to only eight. It is demonstrated that the performance cost of model reduction in terms of out-of-sample prediction accuracy is minimal. Unlike the full model, the eight coefficient pruned model can be easily visualized to reveal the essential system components, and thus the data-derived PDM model can yield insight into the underlying system structure and function. PMID:25630480

  3. Evaluation of Data-Driven Models for Predicting Solar Photovoltaics Power Output

    DOE PAGES

    Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas

    2017-09-10

    This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier, in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
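    The CV-RMSE metric quoted above is straightforward to compute; a minimal sketch with invented hourly values:

```python
import numpy as np

def cv_rmse(measured, predicted):
    """Coefficient of variation of the RMSE, as a percent of the mean."""
    measured = np.asarray(measured, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / measured.mean()

# Invented hourly PV power values (kW):
measured = [10.0, 20.0, 30.0, 40.0]
predicted = [11.0, 19.0, 31.0, 39.0]
err = cv_rmse(measured, predicted)   # RMSE = 1 kW on a 25 kW mean -> 4.0
```

    Normalizing the RMSE by the mean makes the 3.2-8.6% range above comparable across PV systems of different sizes.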

  4. Uncertainty Quantification of Turbulence Model Closure Coefficients for Transonic Wall-Bounded Flows

    NASA Technical Reports Server (NTRS)

    Schaefer, John; West, Thomas; Hosder, Serhat; Rumsey, Christopher; Carlson, Jan-Renee; Kleb, William

    2015-01-01

    The goal of this work was to quantify the uncertainty and sensitivity of commonly used turbulence models in Reynolds-Averaged Navier-Stokes codes due to uncertainty in the values of closure coefficients for transonic, wall-bounded flows and to rank the contribution of each coefficient to uncertainty in various output flow quantities of interest. Specifically, uncertainty quantification of turbulence model closure coefficients was performed for transonic flow over an axisymmetric bump at zero degrees angle of attack and the RAE 2822 transonic airfoil at a lift coefficient of 0.744. Three turbulence models were considered: the Spalart-Allmaras Model, the Wilcox (2006) k-ω Model, and the Menter Shear-Stress Transport Model. The FUN3D code developed by NASA Langley Research Center was used as the flow solver. The uncertainty quantification analysis employed stochastic expansions based on non-intrusive polynomial chaos as an efficient means of uncertainty propagation. Several integrated and point quantities are considered as uncertain outputs for both CFD problems. All closure coefficients were treated as epistemic uncertain variables represented with intervals. Sobol indices were used to rank the relative contributions of each closure coefficient to the total uncertainty in the output quantities of interest. This study identified a number of closure coefficients for each turbulence model for which more information will reduce the amount of uncertainty in the output significantly for transonic, wall-bounded flows.
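    First-order Sobol indices of the kind used for the ranking can be estimated by a Monte Carlo pick-and-freeze scheme. The sketch below uses an additive toy function with known indices (0.8 and 0.2) in place of the CFD outputs, and plain Monte Carlo in place of the polynomial-chaos surrogate:

```python
import numpy as np

def sobol_first_order(f, d, n=100000, seed=0):
    """Pick-and-freeze Monte Carlo estimate of first-order Sobol indices
    for a function f of d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    yA = f(A)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]              # freeze input i, resample the rest
        S[i] = np.cov(yA, f(ABi))[0, 1] / var
    return S

# Additive toy function: Var(Y) = (1 + 0.25)/12, so S1 = 0.8 and S2 = 0.2.
f = lambda x: x[:, 0] + 0.5 * x[:, 1]
S = sobol_first_order(f, 2)
```

    Each S[i] is the fraction of output variance attributable to input i alone, which is exactly the quantity used to rank closure-coefficient contributions.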

  5. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238
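    A minimal sketch of evaluating a Boolean-Volterra-style model on a binary spike train, with an invented term set; modulo-2 addition is implemented here as XOR and the second-order interaction as AND:

```python
import numpy as np

def boolean_volterra(x, first_order, second_order):
    """Evaluate a Boolean-Volterra-style model on a binary spike train.
    first_order: lags present as linear terms; second_order: lag pairs
    combined with AND; terms are summed modulo 2 (XOR)."""
    x = np.asarray(x, dtype=int)
    max_lag = max(list(first_order) + [l for pair in second_order for l in pair])
    y = np.zeros(len(x), dtype=int)
    for t in range(max_lag, len(x)):
        acc = 0
        for l in first_order:
            acc ^= x[t - l]
        for l1, l2 in second_order:
            acc ^= x[t - l1] & x[t - l2]
        y[t] = acc
    return y

# Invented model: y[t] = x[t-1] XOR (x[t-2] AND x[t-3])
x = [0, 1, 1, 0, 1, 0, 0, 1]
y = boolean_volterra(x, first_order=[1], second_order=[(2, 3)])
```

    Because the coefficients are binary (a term is either present or absent), estimating such a model from data reduces to a combinatorial search over candidate term sets rather than a continuous regression.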

  6. Development of model reference adaptive control theory for electric power plant control applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mabius, L.E.

    1982-09-15

    The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., a model-following control law), a Model Reference Adaptive Control (MRAC) algorithm and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.

  7. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  8. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
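    The residual-based detection step can be sketched simply: compare measured outputs with the model's estimated outputs and flag residuals that exceed a threshold. The trace, the injected bias, and the threshold below are illustrative, and the full method additionally selects among linear models and filters the estimates optimally:

```python
import numpy as np

def detect_failure(measured, estimated, threshold):
    """Flag samples whose output error residual exceeds the threshold."""
    residual = np.asarray(measured, float) - np.asarray(estimated, float)
    return np.abs(residual) > threshold

# The estimator tracks the output well until a hypothetical bias fault:
estimated = np.zeros(10)
measured = np.zeros(10)
measured[5:] += 2.0                  # injected sensor bias at sample 5
alarms = detect_failure(measured, estimated, threshold=1.0)
first_alarm = int(np.argmax(alarms))
```

    The threshold trades false alarms against missed detections; in practice it would be set from the expected noise level of the residuals under no-fault conditions.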

  9. Toward Scientific Numerical Modeling

    NASA Technical Reports Server (NTRS)

    Kleb, Bil

    2007-01-01

    Ultimately, scientific numerical models need quantified output uncertainties so that modeling can evolve to better match reality. Documenting model input uncertainties and verifying that numerical models are translated into code correctly, however, are necessary first steps toward that goal. Without known input parameter uncertainties, model sensitivities are all one can determine, and without code verification, output uncertainties are simply not reliable. To address these two shortcomings, two proposals are offered: (1) an unobtrusive mechanism to document input parameter uncertainties in situ and (2) an adaptation of the Scientific Method to numerical model development and deployment. Because these two steps require changes in the computational simulation community to bear fruit, they are presented in terms of the Beckhard-Harris-Gleicher change model.

  10. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.

    PubMed

    Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.

  11. MPS Solidification Model. Volume 2: Operating guide and software documentation for the unsteady model

    NASA Technical Reports Server (NTRS)

    Maples, A. L.

    1981-01-01

    The operation of solidification Model 2 is described and documentation of the software associated with the model is provided. Model 2 calculates the macrosegregation in a rectangular ingot of a binary alloy as a result of unsteady horizontal axisymmetric bidirectional solidification. The solidification program allows interactive modification of calculation parameters as well as selection of graphical and tabular output. In batch mode, parameter values are input in card image form and output consists of printed tables of solidification functions. The operational aspects of Model 2 that differ substantially from Model 1 are described. The global flow diagrams and data structures of Model 2 are included. The primary program documentation is the code itself.

  12. Control vocabulary software designed for CMIP6

    NASA Astrophysics Data System (ADS)

    Nadeau, D.; Taylor, K. E.; Williams, D. N.; Ames, S.

    2016-12-01

    The Coupled Model Intercomparison Project Phase 6 (CMIP6) coordinates a number of intercomparison activities and includes many more experiments than its predecessor, CMIP5. In order to organize and facilitate use of the complex collection of expected CMIP6 model output, a standard set of descriptive information has been defined, which must be stored along with the data. This standard information enables automated machine interpretation of the contents of all model output files. The standard metadata is stored in compliance with the Climate and Forecast (CF) standard, which ensures that it can be interpreted and visualized by many standard software packages. Additional attributes (not standardized by CF) are required by CMIP6 to enhance identification of models and experiments, and to provide additional information critical for interpreting the model results. To ensure that CMIP6 data complies with the standards, a Python program called "PrePARE" (Pre-Publication Attribute Reviewer for the ESGF) has been developed to check the model output prior to its publication and release for analysis. If, for example, a required attribute is missing or incorrect (e.g., not included in the reference CMIP6 controlled vocabularies), then PrePARE will prevent publication. In some circumstances, missing attributes can be created or incorrect attributes can be replaced automatically by PrePARE, and the program will warn users about the changes that have been made. PrePARE provides a final check on model output, assuring a baseline conformity across the output from all CMIP6 models that will facilitate analysis by climate scientists. PrePARE is flexible and can be easily modified for use by similar projects that have a well-defined set of metadata and controlled vocabularies.
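
    The attribute-checking idea behind such a tool can be sketched as follows; the vocabulary, attribute names, and function are illustrative stand-ins, not the actual PrePARE code or the real CMIP6 controlled vocabularies.

```python
# Hypothetical controlled vocabulary; the real CMIP6 CVs are far larger.
CONTROLLED_VOCAB = {
    "activity_id": {"CMIP", "ScenarioMIP"},
    "frequency": {"mon", "day", "6hr"},
}
REQUIRED = ("activity_id", "frequency", "source_id")

def check_attributes(attrs):
    """Return a list of problems; an empty list means the file may be published."""
    problems = []
    for key in REQUIRED:
        if key not in attrs:
            problems.append(f"missing required attribute: {key}")
        elif key in CONTROLLED_VOCAB and attrs[key] not in CONTROLLED_VOCAB[key]:
            problems.append(f"{key}={attrs[key]!r} not in controlled vocabulary")
    return problems

ok = check_attributes({"activity_id": "CMIP", "frequency": "mon",
                       "source_id": "ModelX"})
bad = check_attributes({"activity_id": "CMIP", "frequency": "hourly"})
```

A compliant file yields no problems, while the second example is rejected for a missing attribute and an out-of-vocabulary value, mirroring the publish/no-publish gate the abstract describes.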

  13. Description and expected performance of flight-model, 12-gigahertz, output stage tube for the communications technology satellite

    NASA Technical Reports Server (NTRS)

    Chomos, G. J.; Curren, A. N.

    1976-01-01

    The flight model output stage tube for the Communications Technology Satellite is described. The output stage tube is a 12-GHz, 200-W, coupled-cavity traveling wave tube. The tube has a multistage depressed collector for efficiency enhancement. Collector cooling is accomplished by direct radiation to space. Expected RF performance and factors affecting on-orbit performance and life are discussed.

  14. Acceleration and Performance Modeling Workshop, Washington, DC, 14-17 May 79,

    DTIC Science & Technology

    1979-12-01

    disturbance of the muscular systems, perhaps changes in spindle fiber output, and changes in the perceived weight of the muscle because of the acceleration...at this point either. The output models which are determining performance are essentially tied to muscular systems, through manual control (hand and...feet), and through speech, another muscular output. In normal activities the pilot, who senses changes in the visual system, the acceleration vector

  15. The SLH framework for modeling quantum input-output networks

    DOE PAGES

    Combes, Joshua; Kerckhoff, Joseph; Sarovar, Mohan

    2017-09-04

    Many emerging quantum technologies demand precise engineering and control over networks consisting of quantum mechanical degrees of freedom connected by propagating electromagnetic fields, or quantum input-output networks. Here we review recent progress in theory and experiment related to such quantum input-output networks, with a focus on the SLH framework, a powerful modeling framework for networked quantum systems that is naturally endowed with properties such as modularity and hierarchy. We begin by explaining the physical approximations required to represent any individual node of a network, e.g. atoms in a cavity or a mechanical oscillator, and its coupling to quantum fields by an operator triple (S, L, H). Then we explain how these nodes can be composed into a network with arbitrary connectivity, including coherent feedback channels, using algebraic rules, and how to derive the dynamics of network components and output fields. The second part of the review discusses several extensions to the basic SLH framework that expand its modeling capabilities, and the prospects for modeling integrated implementations of quantum input-output networks. In addition to summarizing major results and recent literature, we discuss the potential applications and limitations of the SLH framework and quantum input-output networks, with the intention of providing context to a reader unfamiliar with the field.
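
    The algebraic composition rules mentioned above can be sketched numerically for the series (cascade) connection, where S = S2·S1, L = L2 + S2·L1, and H = H1 + H2 + Im{L2†S2L1}. The example truncates a single-channel cavity to a small Fock space; the cavity parameters are arbitrary.

```python
import numpy as np

N = 5                                        # truncated Fock-space dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
I = np.eye(N)

def series(G2, G1):
    """Single-channel SLH series product G2 ◁ G1 (output of G1 feeds G2)."""
    S1, L1, H1 = G1
    S2, L2, H2 = G2
    S = S2 @ S1
    L = L2 + S2 @ L1
    # Hamiltonian correction picked up by the field connection: Im{L2† S2 L1}.
    H = H1 + H2 + (L2.conj().T @ S2 @ L1 - L1.conj().T @ S2.conj().T @ L2) / 2j
    return S, L, H

kappa, delta = 2.0, 1.0
cavity = (I, np.sqrt(kappa) * a, delta * a.conj().T @ a)   # one detuned cavity
S, L, H = series(cavity, cavity)             # two identical cavities cascaded
```

For trivial scattering (S = I) the cascaded coupling operator reduces to L1 + L2 and the composite Hamiltonian remains Hermitian, which is exactly the modularity the framework is prized for.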

  17. Validation of individual and aggregate global flood hazard models for two major floods in Africa.

    NASA Astrophysics Data System (ADS)

    Trigg, M.; Bernhofen, M.; Whyman, C.

    2017-12-01

    A recent intercomparison of global flood hazard models undertaken by the Global Flood Partnership shows that there is an urgent requirement to undertake more validation of the models against flood observations. As part of the intercomparison, the aggregated model dataset resulting from the project was provided as open access data. We compare the individual and aggregated flood extent output from the six global models and test these against two major floods on the African continent within the last decade, namely severe flooding on the Niger River in Nigeria in 2012, and on the Zambezi River in Mozambique in 2007. We test whether aggregating different numbers and combinations of models increases model fit to the observations compared with the individual model outputs. We present results that illustrate some of the challenges of comparing imperfect models with imperfect observations, and also that of defining the probability of a real event in order to test standard model output probabilities. Finally, we propose a collective set of open access validation flood events, with associated observational data and descriptions, that provide a standard set of tests across different climates and hydraulic conditions.
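
    A toy sketch of the extent-validation idea: score each binary flood map against observations with the critical success index (CSI), then test whether a simple "at least k models" aggregation beats the best single model. The grids and model outputs here are invented; on real data aggregation need not always win.

```python
def csi(model, observed):
    """Critical success index between binary flood-extent grids (flat 0/1 lists)."""
    hits   = sum(1 for m, o in zip(model, observed) if m and o)
    misses = sum(1 for m, o in zip(model, observed) if not m and o)
    false  = sum(1 for m, o in zip(model, observed) if m and not o)
    denom = hits + misses + false
    return hits / denom if denom else 1.0

def aggregate(models, k):
    """Aggregate ensemble: a cell floods if at least k of the models flood it."""
    return [1 if sum(cells) >= k else 0 for cells in zip(*models)]

observed = [1, 1, 1, 0, 0, 0, 1, 0]
model_a  = [1, 1, 0, 0, 1, 0, 1, 0]
model_b  = [1, 0, 1, 0, 0, 1, 1, 0]

best_single = max(csi(m, observed) for m in (model_a, model_b))
agg = aggregate([model_a, model_b], k=1)     # union of the two extents
```

On this toy example the union of the two imperfect extents scores higher than either model alone, illustrating the kind of aggregation-versus-individual comparison the study performs.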

  18. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.

  19. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data envelopment analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems solved under the two-model approach introduced in the first of the above-mentioned references to two problems, which is certainly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.

  20. Metamodels for Ozone: Comparison of Three Estimation Techniques

    EPA Science Inventory

    A metamodel for ozone is a mathematical relationship between the inputs and outputs of an air quality modeling experiment, permitting calculation of outputs for scenarios of interest without having to run the model again. In this study we compare three metamodel estimation techniques...

  1. Earth System Model Development and Analysis using FRE-Curator and Live Access Servers: On-demand analysis of climate model output with data provenance.

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, A.; Balaji, V.; Schweitzer, R.; Nikonov, S.; O'Brien, K.; Vahlenkamp, H.; Burger, E. F.

    2016-12-01

    There are distinct phases in the development cycle of an Earth system model. During the model development phase, scientists make changes to code and parameters and require rapid access to results for evaluation. During the production phase, scientists may make an ensemble of runs with different settings and produce large quantities of output that must be further analyzed and quality controlled for scientific papers and for submission to international projects such as the Climate Model Intercomparison Project (CMIP). During this phase, provenance is a key concern: being able to track back from outputs to inputs. We will discuss one of the paths taken at GFDL in delivering tools across this lifecycle, offering on-demand analysis of data by integrating the use of GFDL's in-house FRE-Curator, Unidata's THREDDS, and NOAA PMEL's Live Access Servers (LAS). Experience over this lifecycle suggests that the major difficulty in developing analysis capabilities lies only partially in the scientific content; much of the effort is devoted to answering the questions "where is the data?" and "how do I get to it?". "FRE-Curator" is the name of a database-centric paradigm used at NOAA GFDL to ingest information about the model runs into an RDBMS (Curator database). The components of FRE-Curator are integrated into the Flexible Runtime Environment workflow and can be invoked during climate model simulation. The front end to FRE-Curator, known as the Model Development Database Interface (MDBI), provides in-house web-based access to GFDL experiments: metadata, analysis output and more. In order to provide on-demand visualization, MDBI uses Live Access Servers, a highly configurable web server designed to provide flexible access to geo-referenced scientific data that makes use of OPeNDAP. Model output saved in GFDL's tape archive, the size of the database and experiments, and continuous model development initiatives with more dynamic configurations all add complexity and challenges to providing an on-demand visualization experience for our GFDL users.

  2. Combination of Alternative Models by Mutual Data Assimilation: Supermodeling With A Suite of Primitive Equation Models

    NASA Astrophysics Data System (ADS)

    Duane, G. S.; Selten, F.

    2016-12-01

    Different models of climate and weather commonly give projections/predictions that differ widely in their details. While averaging of model outputs almost always improves results, nonlinearity implies that further improvement can be obtained from model interaction in run time, as has already been demonstrated with toy systems of ODEs and idealized quasigeostrophic models. In the supermodeling scheme, models effectively assimilate data from one another and partially synchronize with one another. Spread among models is manifest as a spread in possible inter-model connection coefficients, so that the models effectively "agree to disagree". Here, we construct a supermodel formed from variants of the SPEEDO model, a primitive-equation atmospheric model (SPEEDY) coupled to ocean and land. A suite of atmospheric models, coupled to the same ocean and land, is chosen to represent typical differences among climate models by varying model parameters. Connections are introduced between all pairs of corresponding independent variables at synoptic-scale intervals. Strengths of the inter-atmospheric connections can be considered to represent inverse inter-model observation error. Connection strengths are adapted based on an established procedure that extends the dynamical equations of a pair of synchronizing systems to synchronize parameters as well. The procedure is applied to synchronize the suite of SPEEDO models with another SPEEDO model regarded as "truth", adapting the inter-model connections along the way. The supermodel with trained connections gives marginally lower error in all fields than any weighted combination of the separate model outputs when used in "weather-prediction mode", i.e. with constant nudging to truth. Stronger results are obtained if a supermodel is used to predict the formation of coherent structures or their frequency. Partially synchronized SPEEDO models give a better representation of the blocked-zonal index cycle than does a weighted average of the constituent model outputs. We have thus shown that supermodeling and the synchronization-based procedure for adapting inter-model connections give results superior to output averaging not only in highly nonlinear toy systems, but also with the weaker nonlinearities that occur in climate models.
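
    The nudging mechanism underlying supermodeling can be illustrated with a toy pair of imperfect Lorenz-63 models (a standard stand-in, not SPEEDO): connecting corresponding variables with a modest coupling coefficient makes the models partially synchronize, collapsing the inter-model spread. Parameters and coupling strength below are illustrative.

```python
def lorenz_step(state, r, dt=0.005, sigma=10.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (r * x - y - x * z),
            z + dt * (x * y - b * z))

def run_pair(coupling, steps=4000):
    """Advance two imperfect models (different r), each nudged toward the
    other, and return the mean inter-model distance."""
    s1, s2 = (1.0, 1.0, 1.0), (1.1, 0.9, 1.2)
    spread = 0.0
    for _ in range(steps):
        n1 = lorenz_step(s1, r=26.0)
        n2 = lorenz_step(s2, r=30.0)
        # Inter-model connections: each model assimilates the other's state.
        s1 = tuple(a + coupling * (b - a) for a, b in zip(n1, n2))
        s2 = tuple(b + coupling * (a - b) for a, b in zip(n1, n2))
        spread += sum((a - b) ** 2 for a, b in zip(s1, s2)) ** 0.5
    return spread / steps

free_spread = run_pair(coupling=0.0)      # independent models drift apart
coupled_spread = run_pair(coupling=0.1)   # partial synchronization
```

With the connections switched off the two chaotic models diverge to attractor-scale distances; with a modest coupling they "agree to disagree" far less, which is the basic effect the SPEEDO supermodel exploits.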

  3. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.
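
    For a cheap stand-in response function, the variance-based (Sobol) flavour of global sensitivity analysis can be sketched with a pick-freeze Monte Carlo estimator. This is a generic illustration of first-order indices, not the CPM workflow of the paper; the response function is invented.

```python
import random
random.seed(2)

def response(p):
    # Hypothetical scalar output standing in for a morphogenesis measure.
    return p[0] + 0.2 * p[1] + p[0] * p[1]

def first_order_sobol(f, dim, n=20000):
    """Saltelli-style pick-freeze estimate of first-order Sobol indices."""
    A = [[random.random() for _ in range(dim)] for _ in range(n)]
    B = [[random.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(p) for p in A]
    yB = [f(p) for p in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # Re-evaluate with column i taken from B and all other inputs from A.
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s = sum(yb * (yabi - ya)
                for yb, yabi, ya in zip(yB, yABi, yA)) / n
        indices.append(s / var)
    return indices

s1, s2 = first_order_sobol(response, dim=2)
```

The estimator recovers the dominance of the first input; the gap between the sum of first-order indices and 1 is the interaction contribution the abstract highlights.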

  4. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention is generally paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment, we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project, which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets, and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and for analysis of error distributions.

  5. Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction

    DTIC Science & Technology

    2016-02-25

    Approximation of Quantum Stochastic Differential Equations for Input-Output Model Reduction We have completed a short program of theoretical research...on dimensional reduction and approximation of models based on quantum stochastic differential equations. Our primary results lie in the area of...2211 quantum probability, quantum stochastic differential equations REPORT DOCUMENTATION PAGE 11. SPONSOR/MONITOR’S REPORT NUMBER(S) 10. SPONSOR

  6. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection-based, and hierarchical approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods, the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks of the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
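
    A data-driven surrogate in its simplest form: sample the expensive model over a design of experiments, then fit a cheap emulator to the input-output pairs. Here a quadratic polynomial recovers an (artificially) quadratic stand-in "groundwater model" almost exactly; real models and surrogates are of course messier.

```python
import numpy as np

def expensive_model(x):
    """Stand-in for a slow groundwater model run (hypothetical response)."""
    return 3.0 * x ** 2 + 2.0 * x + 1.0

xs = np.linspace(0.0, 1.0, 21)                 # design of experiments
ys = np.array([expensive_model(x) for x in xs])

coeffs = np.polyfit(xs, ys, deg=2)             # data-driven quadratic emulator
surrogate = np.poly1d(coeffs)

# Check the emulator against the original model on a denser grid.
max_err = max(abs(surrogate(x) - expensive_model(x))
              for x in np.linspace(0.0, 1.0, 101))
```

Once fitted, the surrogate can be evaluated thousands of times for calibration or uncertainty analysis at a negligible fraction of the original model's runtime.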

  7. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and of future climate change. To reduce uncertainties and improve understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing the variability among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations of model outputs from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet site history, analysis of model structural differences, and a more objective model calibration procedure should be included in further analyses.
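
    The calibration step can be sketched generically: choose the parameter value that minimizes the misfit between simulated and observed fluxes. The toy light-use-efficiency model and the "eddy-flux" numbers below are invented for illustration, not taken from the study.

```python
import math

obs_gpp = [1.0, 2.2, 3.1, 2.0, 1.1]     # hypothetical eddy-flux GPP observations
par     = [1.0, 2.0, 3.0, 2.0, 1.0]     # photosynthetically active radiation

def model_gpp(light_use_eff):
    """Toy light-use-efficiency model: GPP = epsilon * PAR."""
    return [light_use_eff * p for p in par]

def rmse(sim, obs):
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

default_eps = 0.5                        # 'default model settings'
candidates = [i / 100.0 for i in range(1, 201)]
calibrated_eps = min(candidates, key=lambda e: rmse(model_gpp(e), obs_gpp))

default_err = rmse(model_gpp(default_eps), obs_gpp)
calibrated_err = rmse(model_gpp(calibrated_eps), obs_gpp)
```

As in the study, the default setting deviates strongly from the observations while the flux-calibrated parameter brings the simulation close to them; real calibrations optimize many parameters against GPP, RE and NEP jointly.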

  8. User's manual for the Simulated Life Analysis of Vehicle Elements (SLAVE) model

    NASA Technical Reports Server (NTRS)

    Paul, D. D., Jr.

    1972-01-01

    The simulated life analysis of vehicle elements model was designed to perform statistical simulation studies for any constant loss rate. The outputs of the model consist of the total number of stages required, stages successfully completing their lifetime, and average stage flight life. This report contains a complete description of the model. Users' instructions and interpretation of input and output data are presented such that a user with little or no prior programming knowledge can successfully implement the program.
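
    The kind of statistical simulation SLAVE performs can be sketched as a small Monte Carlo: stages fly until lost (at a constant per-flight loss rate) or until reaching a design life, and the simulation tallies the stages required and the stages completing their lifetime. All parameters below are illustrative, not SLAVE's.

```python
import random
random.seed(4)

def simulate(flights_required, loss_rate, design_life=10, trials=4000):
    """Monte Carlo fleet simulation under a constant per-flight loss rate."""
    stages_used = completed = 0
    for _ in range(trials):
        flights = 0
        while flights < flights_required:
            stages_used += 1
            life = 0
            # A stage flies until it is lost, retires at its design life,
            # or the program's total flight requirement is met.
            while life < design_life:
                if random.random() < loss_rate:
                    break                      # stage lost on this flight
                flights += 1
                life += 1
                if flights >= flights_required:
                    break
            if life == design_life:
                completed += 1                 # stage completed its lifetime
    return stages_used / trials, completed / trials

avg_stages, avg_completed = simulate(flights_required=100, loss_rate=0.05)
```

With a zero loss rate the result is deterministic (flights_required / design_life stages, all completed); a nonzero loss rate inflates the stage count and reduces the completed fraction, which is the trade the model quantifies.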

  9. Decentralized model reference adaptive control of large flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Fu-Ming; Fong, I-Kong; Lin, Yu-Hwan

    1988-01-01

    A decentralized model reference adaptive control (DMRAC) method is developed for large flexible structures (LFS). The development follows that of a centralized model reference adaptive control for LFS that have been shown to be feasible. The proposed method is illustrated using a simply supported beam with collocated actuators and sensors. Results show that the DMRAC can achieve either output regulation or output tracking with adequate convergence, provided the reference model inputs and their time derivatives are integrable, bounded, and approach zero as t approaches infinity.
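
    The model-reference adaptive idea can be sketched for a single first-order subsystem (a hypothetical toy, not the LFS beam model): adaptive gains are driven by the output error between plant and reference model, and tracking improves as the gains adapt. Plant and reference parameters, the adaptation gain, and the square-wave reference are all illustrative assumptions.

```python
def mrac_run(steps=40000, dt=0.001, gamma=5.0):
    """Direct MRAC: adapt controller gains so an unknown first-order plant
    tracks a reference model (Lyapunov-rule adaptation, sign of b known)."""
    a, b = 1.0, 2.0           # unknown plant:   dy/dt  = -a*y   + b*u
    am, bm = 4.0, 4.0         # reference model: dym/dt = -am*ym + bm*r
    y = ym = 0.0
    th_r = th_y = 0.0         # adaptive feedforward and feedback gains
    abs_e = []
    for k in range(steps):
        r = 1.0 if (k * dt) % 2.0 < 1.0 else -1.0   # square-wave reference
        u = th_r * r + th_y * y
        e = y - ym                                  # output tracking error
        th_r -= gamma * e * r * dt                  # adaptation laws
        th_y -= gamma * e * y * dt
        y += dt * (-a * y + b * u)                  # Euler integration
        ym += dt * (-am * ym + bm * r)
        abs_e.append(abs(e))
    q = steps // 4
    return sum(abs_e[:q]) / q, sum(abs_e[-q:]) / q

early_err, late_err = mrac_run()
```

The average tracking error over the last quarter of the run is well below that of the first quarter, reflecting the convergent output tracking the abstract reports; the square-wave reference (bounded, persistently exciting) plays the role of the admissible reference-model inputs.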

  10. A Methodology for Model Comparison Using the Theater Simulation of Airbase Resources and All Mobile Tactical Air Force Models

    DTIC Science & Technology

    1992-09-01

    ease with which a model is employed, may depend on several factors, among them the users’ past experience in modeling, preferences for menu driven...partially on our knowledge of important logistics factors, partially on the past work of Diener (12), and partially on the assumption that comparison of...flexibility in output report selection. The minimum output was used in each instance 74 to conserve computer storage and to minimize the consumption of paper

  11. Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model

    NASA Astrophysics Data System (ADS)

    Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.

    2017-10-01

    The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm at lunar phase angles of 6.6° and 16.9°, are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between ±80° and libration angles from 7° to 51°. The uncertainties in the empirically corrected output from the ROLO Model are approximately 1 % from 440 nm to 865 nm and increase to almost 3 % at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO Model.

  12. A Hippocampal Cognitive Prosthesis: Multi-Input, Multi-Output Nonlinear Modeling and VLSI Implementation

    PubMed Central

    Berger, Theodore W.; Song, Dong; Chan, Rosa H. M.; Marmarelis, Vasilis Z.; LaCoss, Jeff; Wills, Jack; Hampson, Robert E.; Deadwyler, Sam A.; Granacki, John J.

    2012-01-01

This paper describes the development of a cognitive prosthesis designed to restore the ability to form new long-term memories typically lost after damage to the hippocampus. The animal model used is delayed nonmatch-to-sample (DNMS) behavior in the rat, and the “core” of the prosthesis is a biomimetic multi-input/multi-output (MIMO) nonlinear model that provides the capability for predicting the spatio-temporal spike train output of hippocampus (CA1) based on spatio-temporal spike train inputs recorded presynaptically to CA1 (e.g., CA3). We demonstrate the capability of the MIMO model for highly accurate predictions of CA1-coded memories that can be made on a single-trial basis and in real time. When hippocampal CA1 function is blocked and long-term memory formation is lost, successful DNMS behavior also is abolished. However, when MIMO model predictions are used to reinstate CA1 memory-related activity by driving spatio-temporal electrical stimulation of hippocampal output to mimic the patterns of activity observed in control conditions, successful DNMS behavior is restored. We also outline a very-large-scale integration design for a hardware implementation of a 16-input, 16-output MIMO model, along with the spike sorting, amplification, and other functions necessary for a total system that, when coupled with electrode arrays recording extracellularly from populations of hippocampal neurons, can serve as a cognitive prosthesis in behaving animals. PMID:22438335

  13. Hepatic function imaging using dynamic Gd-EOB-DTPA enhanced MRI and pharmacokinetic modeling.

    PubMed

    Ning, Jia; Yang, Zhiying; Xie, Sheng; Sun, Yongliang; Yuan, Chun; Chen, Huijun

    2017-10-01

To determine whether pharmacokinetic modeling parameters with different output assumptions of dynamic contrast-enhanced MRI (DCE-MRI) using Gd-EOB-DTPA correlate with serum-based liver function tests, and to compare the goodness of fit of the different output assumptions. A 6-min DCE-MRI protocol was performed in 38 patients. Four dual-input two-compartment models with different output assumptions and a published one-compartment model were used to calculate hepatic function parameters. The Akaike information criterion fitting error was used to evaluate the goodness of fit. Imaging-based hepatic function parameters were compared with blood chemistry using correlation with multiple-comparison correction. The dual-input two-compartment model assuming that venous flow equals arterial flow plus portal venous flow, with no bile duct output, best described the liver tissue enhancement, with low fitting error and high correlation with blood chemistry. The relative uptake rate Kir derived from this model was significantly correlated with direct bilirubin (r = -0.52, P = 0.015), prealbumin concentration (r = 0.58, P = 0.015), and prothrombin time (r = -0.51, P = 0.026). It is feasible to evaluate hepatic function with proper output assumptions. The relative uptake rate has the potential to serve as a biomarker of hepatic function. Magn Reson Med 78:1488-1495, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
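The fit metric used above, the Akaike information criterion for least-squares fits, can be sketched as follows (a generic textbook formula with illustrative variable names, not code from the study):

```python
import math

def aic_least_squares(rss, n, k):
    """Akaike information criterion for a least-squares model fit.

    rss: residual sum of squares of the fit
    n:   number of data points
    k:   number of free model parameters
    Lower values indicate a better trade-off between fit quality and complexity.
    """
    return n * math.log(rss / n) + 2 * k
```

Comparing two candidate compartment models this way, a slightly worse fit with fewer parameters can still be preferred: for 100 data points, a 3-parameter model with RSS 10.0 scores lower (better) than a 6-parameter model with RSS 9.5.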

  14. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates sensitivity using partial correlation of the ranks of the generated input values with each generated output value; it is termed partial because adjustments are made for the linear effects of all the other input values when calculating the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
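The PRCC computation described in this record can be sketched in a few lines (a minimal illustration, not the IMM team's code; the synthetic test data are hypothetical):

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y.

    Rank-transform all variables, then, for each input, correlate the
    residuals of that input and of the output after regressing out the
    ranks of all the other inputs.
    """
    # Ordinal ranks (inputs drawn from continuous distributions, so no ties).
    Xr = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    yr = np.argsort(np.argsort(y)).astype(float)
    n, k = X.shape
    coeffs = np.empty(k)
    for i in range(k):
        # Regress out the other inputs (plus an intercept) from input i and y.
        Z = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
        rx = Xr[:, i] - Z @ np.linalg.lstsq(Z, Xr[:, i], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        coeffs[i] = rx @ ry / np.sqrt((rx @ rx) * (ry @ ry))
    return coeffs
```

Because ranks are used, a strongly monotone but nonlinear input (e.g., a cubic dependence) still yields a PRCC near 1, while an input the output ignores yields a PRCC near 0.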

  15. Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation

    DTIC Science & Technology

    2018-01-01

ARL-TR-8284 ● JAN 2018 US Army Research Laboratory Semi-Automated Processing of Trajectory Simulator Output Files for Model Evaluation

  16. Observations of Local Seychelles Circulation

    DTIC Science & Technology

    2015-09-30

long term deployments. Additionally, we have been in touch with Julie MacClean and Tommy Jenson regarding model output, both for use as boundary...help guide regional model boundary condition placement while Tommy is in the process of extracting numerical mooring outputs near the locations of our

  17. Hazard mitigation with cloud model based rainfall and convective data

    NASA Astrophysics Data System (ADS)

    Gernowo, R.; Adi, K.; Yulianto, T.; Seniyatis, S.; Yatunnisa, A. A.

    2018-05-01

Heavy rain in Semarang on 15 January 2013 caused flooding. The event is related to the dynamics of weather parameters, particularly the convection process, cloud development, and rainfall. Weather conditions were analyzed using the Weather Research and Forecasting (WRF) model. Several weather parameters show significant results: their fluctuations indicate strong convection producing convective cloud (Cumulonimbus). Nesting with two domains in the WRF model yields good output for representing the general weather conditions. The 6-12 hour difference between the observed cloud cover rate and the model output is attributed to spin-up of the model. Satellite images from MTSAT (Multifunctional Transport Satellite) are used as verification data for the WRF results; white areas in the imagery correspond to the Coldest Dark Grey (CDG) class, indicating cloud tops. The imagery confirms that the WRF output is adequate for analyzing conditions in Semarang when the event occurred.

  18. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  19. Hierarchical multi-scale approach to validation and uncertainty quantification of hyper-spectral image modeling

    NASA Astrophysics Data System (ADS)

    Engel, Dave W.; Reichardt, Thomas A.; Kulp, Thomas J.; Graff, David L.; Thompson, Sandra E.

    2016-05-01

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  20. Simulation, Model Verification and Controls Development of Brayton Cycle PM Alternator: Testing and Simulation of 2 KW PM Generator with Diode Bridge Output

    NASA Technical Reports Server (NTRS)

    Stankovic, Ana V.

    2003-01-01

    Professor Stankovic will be developing and refining Simulink based models of the PM alternator and comparing the simulation results with experimental measurements taken from the unit. Her first task is to validate the models using the experimental data. Her next task is to develop alternative control techniques for the application of the Brayton Cycle PM Alternator in a nuclear electric propulsion vehicle. The control techniques will be first simulated using the validated models then tried experimentally with hardware available at NASA. Testing and simulation of a 2KW PM synchronous generator with diode bridge output is described. The parameters of a synchronous PM generator have been measured and used in simulation. Test procedures have been developed to verify the PM generator model with diode bridge output. Experimental and simulation results are in excellent agreement.

  1. Use of observational and model-derived fields and regime model output statistics in mesoscale forecasting

    NASA Technical Reports Server (NTRS)

    Forbes, G. S.; Pielke, R. A.

    1985-01-01

Various empirical and statistical weather-forecasting studies which utilize stratification by weather regime are described. Objective classification was used to determine the weather regime in some studies. In other cases the weather pattern was determined on the basis of a parameter representing the physical and dynamical processes relevant to the anticipated mesoscale phenomena, such as low-level moisture convergence and convective precipitation, or the Froude number and the occurrence of cold-air damming. For mesoscale phenomena already in existence, new forecasting techniques were developed. The use of cloud models in operational forecasting is discussed. Models to calculate the spatial scales of forcings and resultant response for mesoscale systems are presented. The use of these models to represent the climatologically most prevalent systems, and to perform case-by-case simulations, is reviewed. Operational implementation of mesoscale data into weather forecasts, using both actual simulation output and model output statistics, is discussed.

  2. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.

    2013-06-28

Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  3. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove that a set of M algebraically independent identifiable parameter combinations can be found using this algorithm, and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. System/observer/controller identification toolbox

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Horta, Lucas G.; Phan, Minh

    1992-01-01

System identification is the process of constructing a mathematical model from input and output data for a system under testing, and characterizing the system uncertainties and measurement noises. The mathematical model structure can take various forms depending upon the intended use. The SYSTEM/OBSERVER/CONTROLLER IDENTIFICATION TOOLBOX (SOCIT) is a collection of functions, written in MATLAB language and expressed in M-files, that implements a variety of modern system identification techniques. For an open-loop system, the central features of the SOCIT are functions for identification of a system model and its corresponding forward and backward observers directly from input and output data. The system and observers are represented by a discrete model. The identified model and observers may be used for controller design of linear systems as well as identification of modal parameters such as dampings, frequencies, and mode shapes. For a closed-loop system, an observer and its corresponding controller gain can be identified directly from input and output data.

  5. Self-organizing linear output map (SOLO): An artificial neural network suitable for hydrologic modeling and analysis

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher

    2002-12-01

    Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.

  6. Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.

    2003-01-01

This paper compares hydrologic model performance in three snowmelt-dominated basins in the western United States using dynamically and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta); and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding the Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically; but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown whether bias corrections to model output will remain valid in a future climate. Future work is warranted to identify the causes of (and remove) systematic biases in DDS simulations and to improve DDS simulations of daily variability in local climate. Until then, SDS-based simulations of runoff appear to be the safer downscaling choice.
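The kind of mean bias correction applied to the NCEP, DDS, and All-Sta timeseries can be illustrated schematically (a long-term-mean correction only, with hypothetical data; the paper's actual procedure is not reproduced here):

```python
import numpy as np

def bias_correct(model, obs, multiplicative=True):
    """Minimal long-term-mean bias correction (a sketch, not the paper's method).

    multiplicative=True scales the model series so its mean matches the
    observed mean, as is common for precipitation; multiplicative=False
    applies an additive offset, as is common for temperature.
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    if multiplicative:
        return model * (obs.mean() / model.mean())
    return model + (obs.mean() - model.mean())
```

Either variant forces the corrected long-term mean to equal the observed mean while leaving the day-to-day variability pattern of the model series intact.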

  7. Fuzzy model-based servo and model following control for nonlinear systems.

    PubMed

    Ohtake, Hiroshi; Tanaka, Kazuo; Wang, Hua O

    2009-12-01

    This correspondence presents servo and nonlinear model following controls for a class of nonlinear systems using the Takagi-Sugeno fuzzy model-based control approach. First, the construction method of the augmented fuzzy system for continuous-time nonlinear systems is proposed by differentiating the original nonlinear system. Second, the dynamic fuzzy servo controller and the dynamic fuzzy model following controller, which can make outputs of the nonlinear system converge to target points and to outputs of the reference system, respectively, are introduced. Finally, the servo and model following controller design conditions are given in terms of linear matrix inequalities. Design examples illustrate the utility of this approach.

  8. Technical note: 3-hourly temporal downscaling of monthly global terrestrial biosphere model net ecosystem exchange

    DOE PAGES

    Fisher, Joshua B.; Sikka, Munish; Huntzinger, Deborah N.; ...

    2016-07-29

The land surface provides a boundary condition to atmospheric forward and flux inversion models. These models require prior estimates of CO2 fluxes at relatively high temporal resolutions (e.g., 3-hourly) because of the high frequency of atmospheric mixing and wind heterogeneity. However, land surface model CO2 fluxes are often provided at monthly time steps, typically because the land surface modeling community focuses more on time steps associated with plant phenology (e.g., seasonal) than on sub-daily phenomena. Here, we describe a new dataset created from 15 global land surface models and 4 ensemble products in the Multi-scale Synthesis and Terrestrial Model Intercomparison Project (MsTMIP), temporally downscaled from monthly to 3-hourly output. We provide 3-hourly output for each individual model over 7 years (2004–2010), as well as an ensemble mean, a weighted ensemble mean, and the multi-model standard deviation. Output is provided in three different spatial resolutions for user preferences: 0.5° × 0.5°, 2.0° × 2.5°, and 4.0° × 5.0° (latitude × longitude).

  9. Effect of Topology Structure on the Output Performance of an Automobile Exhaust Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Fang, W.; Quan, S. H.; Xie, C. J.; Ran, B.; Li, X. L.; Wang, L.; Jiao, Y. T.; Xu, T. W.

    2017-05-01

    The majority of the thermal energy released in an automotive internal combustion cycle is exhausted as waste heat through the tail pipe. This paper describes an automobile exhaust thermoelectric generator (AETEG), designed to recycle automobile waste heat. A model of the output characteristics of each thermoelectric device was established by testing their open circuit voltage and internal resistance, and combining the output characteristics. To better describe the relationship, the physical model was transformed into a topological model. The connection matrix was used to describe the relationship between any two thermoelectric devices in the topological structure. Different topological structures produced different power outputs; their output power was maximised by using an iterative algorithm to optimize the series-parallel electrical topology structure. The experimental results have shown that the output power of the optimal topology structure increases by 18.18% and 29.35% versus that of a pure in-series or parallel topology, respectively, and by 10.08% versus a manually defined structure (based on user experience). The thermoelectric conversion device increased energy efficiency by 40% when compared with a traditional car.
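Why the electrical topology matters can be seen from a matched-load power calculation for series versus parallel connections (an idealized sketch with hypothetical module values, not the paper's iterative optimization algorithm):

```python
def matched_load_power(voltages, resistances, series=True):
    """Maximum power a bank of TEG modules delivers into a matched load.

    Each module is an ideal source: open-circuit voltage + internal resistance.
    series=True strings the modules in series; otherwise they are paralleled
    (equivalent source found via Millman's theorem).
    """
    if series:
        v = sum(voltages)
        r = sum(resistances)
    else:
        g = [1.0 / ri for ri in resistances]
        r = 1.0 / sum(g)                                   # parallel resistance
        v = sum(vi * gi for vi, gi in zip(voltages, g)) * r  # Millman voltage
    return v * v / (4.0 * r)  # maximum power transfer into a matched load
```

For identical modules, series and parallel deliver the same matched-load power; once internal resistances differ (as with uneven exhaust-pipe temperatures), the topologies diverge, which is why optimizing the series-parallel structure pays off.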

  10. Quantification of downscaled precipitation uncertainties via Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nury, A. H.; Sharma, A.; Marshall, L. A.

    2017-12-01

Prediction of precipitation from global climate model (GCM) outputs remains critical to decision-making in water-stressed regions. In this regard, downscaling of GCM output has been a useful tool for analysing future hydro-climatological states. Several approaches have been developed for precipitation downscaling, using either dynamical or statistical methods. Outputs from dynamical downscaling are frequently not transferable across regions, owing to significant methodological and computational difficulties. Statistical downscaling approaches provide a flexible and efficient alternative, producing hydro-climatological outputs across multiple temporal and spatial scales in many locations. However, these approaches are subject to significant uncertainty, arising from uncertainty in the downscaled model parameters and from the use of different reanalysis products for inferring those parameters. Consequently, these uncertainties affect simulation performance at the catchment scale. This study develops a Bayesian framework for modelling downscaled daily precipitation from GCM outputs, and characterizes downscaling uncertainties by evaluating reanalysis datasets against observational rainfall data over Australia. We propose a consistent technique for quantifying downscaling uncertainties by means of this Bayesian framework. The results suggest that there are differences in downscaled precipitation occurrences and extremes.

  11. From Spiking Neuron Models to Linear-Nonlinear Models

    PubMed Central

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-01

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates. PMID:21283777
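The LN cascade itself is simple to state in code (a minimal sketch; the exponential filter and rectifying nonlinearity below are illustrative choices, not the parameter-free filters derived analytically in the paper):

```python
import numpy as np

def ln_rate(stimulus, dt=0.001, tau=0.02, gain=50.0, theta=0.0):
    """Linear-nonlinear cascade: linear temporal filter, then static nonlinearity.

    All parameters here are hypothetical: an exponential filter with time
    constant tau, and a rectifying-linear static nonlinearity with threshold
    theta and slope gain (spikes/s per unit drive).
    """
    # Linear stage: causal exponential filter, normalized to unit area.
    t = np.arange(0.0, 5.0 * tau, dt)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    drive = np.convolve(stimulus, kernel)[:len(stimulus)]
    # Static nonlinearity: threshold-linear mapping from drive to firing rate.
    return gain * np.maximum(drive - theta, 0.0)
```

For a constant unit stimulus the filtered drive relaxes to 1, so the predicted rate settles at gain spikes/s; time-varying stimuli are smoothed by the filter before the pointwise nonlinearity is applied.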

  12. A diagnostic interface for the ICOsahedral Non-hydrostatic (ICON) modelling framework based on the Modular Earth Submodel System (MESSy v2.50)

    NASA Astrophysics Data System (ADS)

    Kern, Bastian; Jöckel, Patrick

    2016-10-01

Numerical climate and weather models have advanced to finer scales, accompanied by large amounts of output data. The model systems hit the input and output (I/O) bottleneck of modern high-performance computing (HPC) systems. We aim to apply diagnostic methods online during the model simulation instead of applying them as a post-processing step to written output data, to reduce the amount of I/O. To include diagnostic tools in the model system, we implemented a standardised, easy-to-use interface based on the Modular Earth Submodel System (MESSy) into the ICOsahedral Non-hydrostatic (ICON) modelling framework. The integration of the diagnostic interface into the model system is briefly described. Furthermore, we present a prototype implementation of an advanced online diagnostic tool for the aggregation of model data onto a user-defined regular coarse grid. This diagnostic tool will be used to reduce the amount of model output in future simulations. Performance tests of the interface and of two different diagnostic tools show that the interface itself introduces no overhead in the form of additional runtime to the model system. The diagnostic tools, however, have a significant impact on the model system's runtime. This overhead strongly depends on the characteristics and implementation of the diagnostic tool: a diagnostic tool with high inter-process communication introduces large overhead, whereas the additional runtime of a diagnostic tool without inter-process communication is low. We briefly describe our efforts to reduce the additional runtime of the diagnostic tools and present a brief analysis of memory consumption. Future work will focus on optimisation of the memory footprint and the I/O operations of the diagnostic interface.

  13. From spiking neuron models to linear-nonlinear models.

    PubMed

    Ostojic, Srdjan; Brunel, Nicolas

    2011-01-20

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to which extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.

  14. The space-dependent model and output characteristics of intra-cavity pumped dual-wavelength lasers

    NASA Astrophysics Data System (ADS)

    He, Jin-Qi; Dong, Yuan; Zhang, Feng-Dong; Yu, Yong-Ji; Jin, Guang-Yong; Liu, Li-Da

    2016-01-01

The intra-cavity pumping scheme used to simultaneously generate dual-wavelength lasers was previously proposed and published by us, and a space-independent model of quasi-three-level and four-level intra-cavity pumped dual-wavelength lasers was constructed based on this scheme. In this paper, to make the previous study more rigorous, a space-dependent model is adopted. As an example, the output characteristics of 946 nm and 1064 nm dual-wavelength lasers under different output mirror transmittances are numerically simulated using the derived formulas, and the results are nearly identical to those previously reported.

  15. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
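    For context, the Ritt pseudodivision step that the proposed Gröbner-basis method replaces can be sketched in the univariate case. The coefficient-list representation and the worked example are illustrative, not taken from the paper:

```python
def pseudo_divide(f, g):
    """Pseudodivision of univariate polynomials given as integer
    coefficient lists (highest degree first): returns (q, r) with
    lc(g)^k * f = q*g + r, k = deg(f) - deg(g) + 1, using only
    integer arithmetic (no coefficient fractions)."""
    lc = g[0]
    q, r = [], list(f)
    while len(r) >= len(g):
        factor = r[0]
        # Scale partial quotient and remainder by lc(g), then cancel
        # the leading term of r with a shifted multiple of g.
        q = [c * lc for c in q] + [factor]
        r = [c * lc for c in r]
        for i in range(len(g)):
            r[i] -= factor * g[i]
        r = r[1:]  # leading coefficient is now exactly zero
    return q, r

# 8*(x^3 + 2x + 5) = (4x^2 - 2x + 9)*(2x + 1) + 31
q, r = pseudo_divide([1, 0, 2, 5], [2, 1])
```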

  16. Structural equation modeling in environmental risk assessment.

    PubMed

    Buncher, C R; Succop, P A; Dietrich, K N

    1991-01-01

    Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs with putative causation working at several different time points (e.g., linkage) are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationship and then produce fitted models for multiple inputs causing each factor and the opportunity to have outputs serve as input variables into the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions concerning believing a model, believing causation has been proven, and the assumptions that are required for each model are operative.

  17. Geophysical, archaeological and historical evidence support a solar-output model for climate change

    USGS Publications Warehouse

    Perry, C.A.; Hsu, K.J.

    2000-01-01

    Although the processes of climate change are not completely understood, an important causal candidate is variation in total solar output. Reported cycles in various climate-proxy data show a tendency to emulate a fundamental harmonic sequence of a basic solar-cycle length (11 years) multiplied by 2^N (where N is a positive or negative integer). A simple additive model for total solar-output variations was developed by superimposing a progression of fundamental harmonic cycles with slightly increasing amplitudes. The timeline of the model was calibrated to the Pleistocene/Holocene boundary at 9,000 years before present. The calibrated model was compared with geophysical, archaeological, and historical evidence of warm or cold climates during the Holocene. The evidence of periods of several centuries of cooler climates worldwide called 'little ice ages,' similar to the period anno Domini (A.D.) 1280-1860 and recurring approximately every 1,300 years, corresponds well with fluctuations in modeled solar output. A more detailed examination of the climate-sensitive history of the last 1,000 years further supports the model. Extrapolation of the model into the future suggests a gradual cooling during the next few centuries with intermittent minor warmups and a return to near little-ice-age conditions within the next 500 years. This cool period then may be followed approximately 1,500 years from now by a return to altithermal conditions similar to the previous Holocene Maximum.
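    The additive harmonic construction can be sketched schematically. The amplitude progression and the range of N below are illustrative choices, not the calibrated values of the paper:

```python
import math

def solar_output_anomaly(year, n_range=range(0, 9), base=11.0):
    """Schematic superposition of harmonic solar cycles with periods
    base * 2^N years and amplitudes that grow with period length
    (illustrative choice: amplitude N + 1)."""
    return sum((n + 1) * math.cos(2 * math.pi * year / (base * 2 ** n))
               for n in n_range)

# All cycles peak together at year 0, giving the maximum anomaly
# sum(n + 1 for n in range(9)) = 45; the pattern repeats after the
# longest period, 11 * 2^8 = 2816 years.
peak = solar_output_anomaly(0)
```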

  18. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications, where they can be employed as host structures for vibration-based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy to electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of cantilever-based vibration energy harvesters for estimation of the electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates, including nonlinear circuits, has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE, and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. An analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytical model is based on the equivalent load impedance approach for the piezoelectric capacitance and AC-DC circuit elements. The analytical results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.

  19. Stochastic Short-term High-resolution Prediction of Solar Irradiance and Photovoltaic Power Output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melin, Alexander M.; Olama, Mohammed M.; Dong, Jin

    The increased penetration of solar photovoltaic (PV) energy sources into electric grids has increased the need for accurate modeling and prediction of solar irradiance and power production. Existing modeling and prediction techniques focus on long-term low-resolution prediction over minutes to years. This paper examines the stochastic modeling and short-term high-resolution prediction of solar irradiance and PV power output. We propose a stochastic state-space model to characterize the behaviors of solar irradiance and PV power output. This prediction model is suitable for the development of optimal power controllers for PV sources. A filter-based expectation-maximization and Kalman filtering mechanism is employed to estimate the parameters and states in the state-space model. The mechanism results in a finite dimensional filter which only uses the first and second order statistics. The structure of the scheme contributes to a direct prediction of the solar irradiance and PV power output without any linearization process or simplifying assumptions of the signal's model. This enables the system to accurately predict small as well as large fluctuations of the solar signals. The mechanism is recursive, allowing the solar irradiance and PV power to be predicted online from measurements. The mechanism is tested using solar irradiance and PV power measurement data collected locally in our lab.
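    The one-step-ahead prediction idea can be illustrated with a scalar Kalman filter for a random-walk state. This is a toy stand-in for the paper's filter-based expectation-maximization mechanism; the noise variances are hypothetical:

```python
def kalman_predict(measurements, q=0.01, r=0.25):
    """One-step-ahead prediction with a scalar Kalman filter for a
    random-walk state x_k = x_{k-1} + w (process noise variance q),
    observed as z_k = x_k + v (measurement noise variance r)."""
    x, p = measurements[0], 1.0
    predictions = []
    for z in measurements[1:]:
        # Predict: the random-walk model carries the state forward.
        p += q
        predictions.append(x)
        # Update: blend in the new measurement via the Kalman gain.
        k = p / (p + r)
        x += k * (z - x)
        p *= (1 - k)
    return predictions

preds = kalman_predict([5.0] * 20)  # constant signal: predictions stay at 5.0
```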

  20. A model for plant lighting system selection.

    PubMed

    Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W

    2002-01-01

    A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
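    The additive form of a Multiple Attribute Utility Theory ranking can be sketched as follows. The system names, attribute scores and weights are hypothetical placeholders, not the paper's elicited expert values:

```python
def maut_utility(attributes, weights):
    """Additive multi-attribute utility: weighted sum of normalized
    attribute utilities, with weights summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[a] * u for a, u in attributes.items())

# Hypothetical lighting-system attributes scored on [0, 1].
systems = {
    "HPS": {"energy_cost": 0.6, "light_quality": 0.7, "capital_cost": 0.8},
    "LED": {"energy_cost": 0.9, "light_quality": 0.8, "capital_cost": 0.4},
}
weights = {"energy_cost": 0.5, "light_quality": 0.3, "capital_cost": 0.2}

# The system with the highest utility is deemed the most appropriate.
best = max(systems, key=lambda s: maut_utility(systems[s], weights))
```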

  1. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, the intersection/algebraic product triangular norm, the maximum/drastic sum triangular conorm, the Mamdani minimum/Larsen product/drastic product inference method, and the center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
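    Two of the basic constituents named above, triangular membership functions and center-of-sums defuzzification, can be sketched directly. The shapes and numbers are illustrative, not the paper's skewed-set parameters:

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def center_of_sums(fired):
    """Center-of-sums defuzzification: average of the output sets'
    centroids, weighted by firing strength times set area.
    `fired` is a list of (firing_strength, centroid, area) triples."""
    num = sum(w * c * area for w, c, area in fired)
    den = sum(w * area for w, _, area in fired)
    return num / den

# Two equally fired, symmetric output sets defuzzify to their midpoint.
u = center_of_sums([(1.0, -1.0, 2.0), (1.0, 1.0, 2.0)])
```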

  2. Super short term forecasting of photovoltaic power generation output in micro grid

    NASA Astrophysics Data System (ADS)

    Gong, Cheng; Ma, Longfei; Chi, Zhongjun; Zhang, Baoqun; Jiao, Ran; Yang, Bing; Chen, Jianshu; Zeng, Shuang

    2017-01-01

    A prediction model combining data mining and support vector machine (SVM) techniques was built to provide information on photovoltaic (PV) power generation output for the economic operation and optimal control of a micro grid, and to reduce the influence of PV fluctuation on the power system. Because PV output depends on radiation intensity, ambient temperature, cloudiness, etc., data mining was brought in: this technology can process large amounts of historical data and eliminate superfluous data, using a fuzzy classifier of daily type and the grey relational degree. The SVM model was then built to accept the information produced by the data mining stage. The prediction model was tested against measured data from a small PV station. The numerical example shows that the prediction model is fast and accurate.
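    The grey relational degree used to screen similar historical days is commonly computed with Deng's relational coefficient. A minimal sketch, with hypothetical day profiles and the customary resolution coefficient 0.5 (not necessarily the paper's settings):

```python
def grey_relational_degree(ref, cand, rho=0.5):
    """Grey relational degree between a reference day profile and a
    candidate historical day (Deng's coefficient, resolution rho)."""
    diffs = [abs(r - c) for r, c in zip(ref, cand)]
    dmin, dmax = min(diffs), max(diffs)
    # Pointwise relational coefficients, then their mean.
    coeffs = [(dmin + rho * dmax) / (d + rho * dmax) for d in diffs]
    return sum(coeffs) / len(coeffs)

# Hypothetical normalized irradiance profiles.
d = grey_relational_degree([1.0, 2.0, 3.0], [1.0, 2.0, 4.0])
```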

  3. Dynamic modeling and parameter estimation of a radial and loop type distribution system network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun Qui; Heng Chen; Girgis, A.A.

    1993-05-01

    This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are presented in this paper.

  4. Direct model reference adaptive control with application to flexible robots

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory W.

    1992-01-01

    A modification to a direct command generator tracker-based model reference adaptive control (MRAC) system is suggested in this paper. This modification incorporates a feedforward into the reference model's output as well as the plant's output. Its purpose is to eliminate the bounded model following error present in steady state when previous MRAC systems were used. The algorithm was evaluated using the dynamics for a single-link flexible-joint arm. The results of these simulations show a response with zero steady state model following error. These results encourage further use of MRAC for various types of nonlinear plants.

  5. Lagrangian Turbulence and Transport in Semi-Enclosed Basins and Coastal Regions

    DTIC Science & Technology

    2008-09-30

    P.M. Poulain, R. Signell, J. Chiggiato, S. Carniel, 2008: Variational analysis of drifter positions and model outputs for the reconstruction of surface currents in the Central ...

  6. Metamodeling Techniques to Aid in the Aggregation Process of Large Hierarchical Simulation Models

    DTIC Science & Technology

    2008-08-01

    [Diagram: lower-level outputs feed a campaign-level model and its campaign-level outputs through aggregation and metamodeling; complexity (spatial, temporal, etc.) is among the considerations.] Techniques for ... reduction are called variance reduction techniques (VRT) [Law, 2006]. The implementation of some type of VRT can prove to be a very valuable tool.

  7. Dynamics of the Antarctic Circumpolar Current. Evidence for Topographic Effects from Altimeter Data and Numerical Model Output

    NASA Technical Reports Server (NTRS)

    Gille, Sarah T.

    1995-01-01

    Geosat altimeter data and numerical model output are used to examine the circulation and dynamics of the Antarctic Circumpolar Current (ACC). The mean sea surface height across the ACC has been reconstructed from height variability measured by the altimeter, without assuming prior knowledge of the geoid. The results indicate locations for the Subantarctic and Polar Fronts which are consistent with in situ observations and indicate that the fronts are substantially steered by bathymetry. Detailed examination of spatial and temporal variability indicates a spatial decorrelation scale of 85 km and a temporal e-folding scale of 34 days. Empirical Orthogonal Function analysis suggests that the scales of motion are relatively short, occurring on 1000 km length-scales rather than basin or global scales. The momentum balance of the ACC has been investigated using output from a high-resolution primitive equation model in combination with altimeter data. In the Semtner-Chervin quarter-degree general circulation model, topographic form stress is the dominant process balancing the surface wind forcing. In stream coordinates, the dominant effect transporting momentum across the ACC is biharmonic friction. Potential vorticity is considered on Montgomery streamlines in the model output and along surface streamlines in model and altimeter data.

  8. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, is bounded rigorously.

  9. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfy mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would fall within the predicted ranges, can be bounded tightly and rigorously.
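    The core prescription, a band as tight as possible while still containing all observations, can be illustrated with a deliberately simplified sketch. The linear mean model, the fixed two-"standard-deviation" width and the data are hypothetical stand-ins for the paper's convex polynomial formulations:

```python
def rpm_band(xs, ys, nsd=2.0):
    """Schematic interval predictor: a least-squares linear mean model
    plus a symmetric band of nsd deviations, with the deviation chosen
    as the tightest value for which every observation lies inside."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    sigma = max(abs(r) for r in residuals) / nsd  # tightest covering band
    return lambda x: (intercept + slope * x - nsd * sigma,
                      intercept + slope * x + nsd * sigma)

band = rpm_band([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 4.0])
lo, hi = band(2.0)
```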

  10. 3D Visualization of Hydrological Model Outputs For a Better Understanding of Multi-Scale Phenomena

    NASA Astrophysics Data System (ADS)

    Richard, J.; Schertzer, D. J. M.; Tchiguirinskaia, I.

    2014-12-01

    During the last decades, many hydrological models have been created to simulate extreme events or scenarios on catchments. The classical outputs of these models are 2D maps, time series or graphs, which are easily understood by scientists, but not so much by many stakeholders, e.g. mayors or local authorities, and the general public. One goal of the Blue Green Dream project is to create outputs that are adequate for them. To reach this goal, we decided to convert most of the model outputs into a unique 3D visualization interface that combines all of them. This conversion has to be performed with hydrological thinking in mind, to keep the information consistent with the context and the raw outputs. We focus our work on the conversion of the outputs of the Multi-Hydro (MH) model, which is physically based, fully distributed and has a GIS data interface. MH splits the urban water cycle into 4 components: rainfall, surface runoff, infiltration and drainage. To each of them corresponds a modeling module with specific inputs and outputs. The superimposition of all this information will highlight the model outputs and help to verify the quality of the raw input data. For example, the spatial and temporal variability of the rain generated by the rainfall module will be directly visible in 4D (3D + time) before running a full simulation. It is the same with the runoff module: because the result quality depends on the resolution of the rasterized land use, this will confirm (or not) the choice of the cell size. As most of the inputs and outputs are GIS files, two main conversions will be applied to display the results in 3D. First, a conversion from vector files to 3D objects. For example, buildings are defined in 2D inside a GIS vector file. Each polygon can be extruded with a height to create volumes. The principle is the same for the roads, but an intrusion, instead of an extrusion, is done inside the topography file.
The second main conversion is the raster conversion. Several files, such as the topography, the land use, the water depth, etc., are defined by geo-referenced grids. The corresponding grids are converted into lists of triangles to be displayed inside the 3D window. For the water depth, display in pixels will no longer be the only solution. Water contours will be created to more easily delineate the flood inside the catchment.

  11. Characteristic operator functions for quantum input-plant-output models and coherent control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gough, John E.

    We introduce the characteristic operator as the generalization of the usual concept of a transfer function of linear input-plant-output systems to arbitrary quantum nonlinear Markovian input-output models. This is intended as a tool in the characterization of quantum feedback control systems that fits in with the general theory of networks. The definition exploits the linearity of noise differentials in both the plant Heisenberg equations of motion and the differential form of the input-output relations. Mathematically, the characteristic operator is a matrix of dimension equal to the number of outputs times the number of inputs (which must coincide), but with entries that are operators of the plant system. In this sense, the characteristic operator retains details of the effective plant dynamical structure and is an essentially quantum object. We illustrate the relevance to model reduction and simplification by showing that the convergence of the characteristic operator in adiabatic elimination limit models requires the same conditions and assumptions appearing in the work on limit quantum stochastic differential theorems of Bouten and Silberfarb [Commun. Math. Phys. 283, 491-505 (2008)]. This approach also shows in a natural way that the limit coefficients of the quantum stochastic differential equations in adiabatic elimination problems arise algebraically as Schur complements, and amounts to a model reduction where the fast degrees of freedom are decoupled from the slow ones and eliminated.

  12. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    PubMed

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  13. Further results on open-loop compensation of rate-dependent hysteresis in a magnetostrictive actuator with the Prandtl-Ishlinskii model

    NASA Astrophysics Data System (ADS)

    Al Janaideh, Mohammad; Aljanaideh, Omar

    2018-05-01

    Apart from output-input hysteresis loops, magnetostrictive actuators also exhibit asymmetry and saturation, particularly under moderate to large magnitude inputs and at relatively high frequencies. Such nonlinear input-output characteristics can be effectively characterized by a rate-dependent Prandtl-Ishlinskii model in conjunction with a function of deadband operators. In this study, an inverse model is formulated to seek real-time compensation of the rate-dependent and asymmetric hysteresis nonlinearities of a Terfenol-D magnetostrictive actuator. The inverse model combines the inverse of the rate-dependent Prandtl-Ishlinskii model, satisfying the threshold dilation condition, with the inverse of the deadband function, and was applied as a feedforward compensator to the actuator hardware to study its potential for compensating rate-dependent and asymmetric hysteresis loops. The experimental results, obtained under harmonic and complex harmonic inputs, further revealed that the inverse compensator can substantially suppress the hysteresis and output asymmetry nonlinearities over the entire frequency range considered in the study.
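    In its rate-independent form, the Prandtl-Ishlinskii model underlying this compensator is a weighted superposition of backlash (play) operators. A minimal sketch; the thresholds and weights are illustrative, not identified from the Terfenol-D actuator:

```python
def play_operator(x_seq, r, y0=0.0):
    """Backlash (play) operator with threshold r: the building block
    of the Prandtl-Ishlinskii hysteresis model."""
    y, out = y0, []
    for x in x_seq:
        y = max(x - r, min(x + r, y))
        out.append(y)
    return out

def prandtl_ishlinskii(x_seq, thresholds, weights):
    """Rate-independent PI model: weighted sum of play operators."""
    plays = [play_operator(x_seq, r) for r in thresholds]
    return [sum(w * p[k] for w, p in zip(weights, plays))
            for k in range(len(x_seq))]

# The output differs at x = 1 on the way up (0.5) and down (1.5):
# that asymmetry of path is the hysteresis.
loop = play_operator([0, 1, 2, 1, 0], 0.5)
```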

  14. Model predictive control of the solid oxide fuel cell stack temperature with models based on experimental data

    NASA Astrophysics Data System (ADS)

    Pohjoranta, Antti; Halinen, Matias; Pennanen, Jari; Kiviaho, Jari

    2015-03-01

    Generalized predictive control (GPC) is applied to control the maximum temperature in a solid oxide fuel cell (SOFC) stack and the temperature difference over the stack. GPC is a model predictive control method, and the models utilized in this work are ARX-type (autoregressive with exogenous input), multiple input-multiple output, polynomial models that were identified from experimental data obtained from experiments with a complete SOFC system. The proposed control is evaluated by simulation with various input-output combinations, with and without constraints. A comparison with conventional proportional-integral-derivative (PID) control is also made. It is shown that if only the stack maximum temperature is controlled, a standard PID controller can be used to obtain output performance comparable to that obtained with the significantly more complex model predictive controller. However, in order to control the temperature difference over the stack, both the stack minimum and maximum temperatures need to be controlled, and this cannot be done with a single PID controller. In such a case the model predictive controller provides a feasible and effective solution.
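    An ARX model of the kind used here can be identified from input-output data by ordinary least squares. The sketch below is a first-order, single-input toy with hypothetical data, not the SOFC experiments; it recovers known parameters from a noise-free simulation:

```python
def fit_arx(y, u):
    """Least-squares fit of a first-order ARX model
    y[k] = a*y[k-1] + b*u[k-1], solved via the 2x2 normal equations."""
    syy = suu = syu = sy1y = su1y = 0.0
    for k in range(1, len(y)):
        p, q, t = y[k - 1], u[k - 1], y[k]
        syy += p * p
        suu += q * q
        syu += p * q
        sy1y += p * t
        su1y += q * t
    det = syy * suu - syu * syu
    a = (sy1y * suu - su1y * syu) / det
    b = (su1y * syy - sy1y * syu) / det
    return a, b

# Simulate data from a known model, then recover its parameters.
u = [1.0 if k % 7 < 3 else -0.5 for k in range(200)]
y = [0.0]
for k in range(1, 200):
    y.append(0.8 * y[k - 1] + 0.4 * u[k - 1])
a, b = fit_arx(y, u)  # noise-free data: a and b match the true values
```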

  15. When Do Short-Wave Cones Signal Blue or Red? A Solution Introducing the Concept of Primary and Secondary Cone Outputs

    PubMed Central

    2016-01-01

    A recent paper by Oh and Sakata investigates the “incompletely solved mystery” of how the three cone responses map onto perceived hue, and particularly the S cone’s well-known problematic contribution to blueness and redness. Citing previous workers, they argue the twentieth century traditional multistage model does not satisfactorily account for color appearance. In their experiment, increasing S cone excitation with shortening wavelength from about 480–460 nm increased perceived blueness up to the unique Blue point at 470 nm, when (a) it began decreasing and (b) redness perception began increasing. The authors asked, What mechanism can be responsible for such functions? I demonstrate a solution. First, it is shown the problem does not lie in the traditional opponent color chromatic responses yellow-blue, red-green (y-b, r-g, which accurately predict the above functions), but in the traditional multistage model of mapping cone responses to chromatic response functions. Arguably, this is due to the S cone’s hypothetically signaling both blueness and redness by the same mechanism rather than by different, independent, mechanisms. Hence a new distinction or mechanism is proposed for a more accurate model, that introduces the new terms primary and secondary cone outputs. However, this distinction requires that the cones S, M, L each directly produce one of the three spectral chromatic responses b, g, y. Such a model was recently published, based on extremely high correlation of SML cone responsivities with the three spectral (bgy) chromatic responses. This model encodes the former directly onto the latter one-to-one as cone primary outputs, whilst S and L cones have a further or secondary function where each produces one of the two spectral lobes of r chromatic response. 
The proposed distinction between primary and secondary cone outputs is a new concept and a useful tool in detailing cone outputs to chromatic channels, and provides a solution to the above “incompletely solved mystery.” Thus the S cone has a primary output producing the total b chromatic response and a secondary output that shares with the L cone the production of the r chromatic response, aligning with Oh and Sakata’s results. The model similarly maps the L cone to yellowness as a primary output and to redness as a secondary output. PMID:27110938

  16. Global robust output regulation control for cascaded nonlinear systems using the internal model principle

    NASA Astrophysics Data System (ADS)

    Yu, Jiang-Bo; Zhao, Yan; Wu, Yu-Qiang

    2014-04-01

    This article considers the global robust output regulation problem via output feedback for a class of cascaded nonlinear systems with input-to-state stable inverse dynamics. The system uncertainties depend not only on the measured output but also all the unmeasurable states. By introducing an internal model, the output regulation problem is converted into a stabilisation problem for an appropriately augmented system. The designed dynamic controller could achieve the global asymptotic tracking control for a class of time-varying reference signals for the system output while keeping all other closed-loop signals bounded. It is of interest to note that the developed control approach can be applied to the speed tracking control of the fan speed control system. The simulation results demonstrate its effectiveness.

  17. Hydrologic response to multimodel climate output using a physically based model of groundwater/surface water interactions

    NASA Astrophysics Data System (ADS)

    Sulis, M.; Paniconi, C.; Marrocu, M.; Huard, D.; Chaumont, D.

    2012-12-01

    General circulation models (GCMs) are the primary instruments for obtaining projections of future global climate change. Outputs from GCMs, aided by dynamical and/or statistical downscaling techniques, have long been used to simulate changes in regional climate systems over wide spatiotemporal scales. Numerous studies have acknowledged the disagreements between the various GCMs and between the different downscaling methods designed to compensate for the mismatch between climate model output and the spatial scale at which hydrological models are applied. Very little is known, however, about the importance of these differences once they have been input or assimilated by a nonlinear hydrological model. This issue is investigated here at the catchment scale using a process-based model of integrated surface and subsurface hydrologic response driven by outputs from 12 members of a multimodel climate ensemble. The data set consists of daily values of precipitation and min/max temperatures obtained by combining four regional climate models and five GCMs. The regional scenarios were downscaled using a quantile scaling bias-correction technique. The hydrologic response was simulated for the 690 km² des Anglais catchment in southwestern Quebec, Canada. The results show that different hydrological components (river discharge, aquifer recharge, and soil moisture storage) respond differently to precipitation and temperature anomalies in the multimodel climate output, with greater variability for annual discharge compared to recharge and soil moisture storage. We also find that runoff generation and extreme event-driven peak hydrograph flows are highly sensitive to any uncertainty in climate data. Finally, the results show the significant impact of changing sequences of rainy days on groundwater recharge fluxes and the influence of longer dry spells in modifying soil moisture spatial variability.

  18. Classification

    NASA Astrophysics Data System (ADS)

    Oza, Nikunj

    2012-03-01

    A supervised learning task involves constructing a mapping from input data (normally described by several features) to the appropriate outputs. A set of training examples— examples with known output values—is used by a learning algorithm to generate a model. This model is intended to approximate the mapping between the inputs and outputs. This model can be used to generate predicted outputs for inputs that have not been seen before. Within supervised learning, one type of task is a classification learning task, in which each output is one or more classes to which the input belongs. For example, we may have data consisting of observations of sunspots. In a classification learning task, our goal may be to learn to classify sunspots into one of several types. Each example may correspond to one candidate sunspot with various measurements or just an image. A learning algorithm would use the supplied examples to generate a model that approximates the mapping between each supplied set of measurements and the type of sunspot. This model can then be used to classify previously unseen sunspots based on the candidate’s measurements. The generalization performance of a learned model (how closely the target outputs and the model’s predicted outputs agree for patterns that have not been presented to the learning algorithm) would provide an indication of how well the model has learned the desired mapping. More formally, a classification learning algorithm L takes a training set T as its input. The training set consists of |T| examples or instances. It is assumed that there is a probability distribution D from which all training examples are drawn independently—that is, all the training examples are independently and identically distributed (i.i.d.). 
The ith training example is of the form (x_i, y_i), where x_i is a vector of values of several features and y_i represents the class to be predicted.* In the sunspot classification example given above, each training example would represent one sunspot’s classification (y_i) and the corresponding set of measurements (x_i). The output of a supervised learning algorithm is a model h that approximates the unknown mapping from the inputs to the outputs. In our example, h would map from the sunspot measurements to the type of sunspot. We may have a test set S—a set of examples not used in training that we use to test how well the model h predicts the outputs on new examples. Just as with the examples in T, the examples in S are assumed to be independent and identically distributed (i.i.d.) draws from the distribution D. We measure the error of h on the test set as the proportion of test cases that h misclassifies: (1/|S|) Σ_{(x,y) ∈ S} I(h(x) ≠ y), where I(v) is the indicator function—it returns 1 if v is true and 0 otherwise. In our sunspot classification example, we would identify additional examples of sunspots that were not used in generating the model, and use these to determine how accurate the model is—the fraction of the test samples that the model classifies correctly. An example of a classification model is the decision tree shown in Figure 23.1. We will discuss the decision tree learning algorithm in more detail later—for now, we assume that, given a training set with examples of sunspots, this decision tree is derived. This can be used to classify previously unseen examples of sunspots. For example, if a new sunspot’s inputs indicate that its "Group Length" is in the range 10-15, then the decision tree would classify the sunspot as being of type “E,” whereas if the "Group Length" is "NULL," the "Magnetic Type" is "bipolar," and the "Penumbra" is "rudimentary," then it would be classified as type "C." 
In this chapter, we will add to the above description of classification problems. We will discuss decision trees and several other classification models. In particular, we will discuss the learning algorithms that generate these classification models, how to use them to classify new examples, and the strengths and weaknesses of these models. We will end with pointers to further reading on classification methods applied to astronomy data.

  19. Analysis of transverse field distributions in Porro prism resonators

    NASA Astrophysics Data System (ADS)

    Litvin, Igor A.; Burger, Liesl; Forbes, Andrew

    2007-05-01

    A model to describe the transverse field distribution of the output beam from Porro prism resonators is proposed. The model allows the prediction of the output transverse field distribution by assuming that the main areas of loss are located at the apexes of the Porro prisms. Experimental work on a particular system showed some interesting correlations between the time domain behavior of the resonator and the transverse field output. These findings are presented and discussed.

  20. Analysis of material flow in a utilization technology of low-grade manganese ore and sulphur coal complementary

    NASA Astrophysics Data System (ADS)

    Wang, Bo-Zhi; Deng, Biao; Su, Shi-Jun; Ding, Sang-Lan; Sun, Wei-Yi

    2018-03-01

    Electrolytic manganese is conventionally produced through leaching of low-grade manganese ore in SO2 from the combustion of high-sulfur coal. Subsequently, the coal ash and manganese slag produced by the combustion of high-sulfur coal and the preparation of electrolytic manganese can be used as raw ingredients for the preparation of sulphoaluminate cement. In order to realize the `coal-electricity-sulfur-manganese-building material' system of complementary resource utilization, the conditions of material inflow and outflow in each process were determined using material flow analysis. Material flow models for each unit and process were obtained by analyzing the material flows of the new technology, yielding an overall input-output model. Through this model, the quantities of all input and output materials can be obtained once the quantity of any one substance is fixed. Taking one ton of electrolytic manganese as a basis, the quantities of the other input materials and of cement can be determined with the input-output model. The whole system thus achieves a cleaner production level, and the input-output model can be used as guidance in practical production.

  1. Aviation Terminal Forecasts

    Science.gov Websites


  2. CMAQ Model Output

    EPA Pesticide Factsheets

    CMAQ and CMAQ-VBS model output. This dataset is associated with the following publication: Woody, M., K. Baker, P. Hayes, J. Jimenez, B. Koo, and H. Pye. Understanding sources of organic aerosol during CalNex-2010 using the CMAQ-VBS. Atmospheric Chemistry and Physics. Copernicus Publications, Katlenburg-Lindau, GERMANY, 16: 4081-4100, (2016).

  3. Can dynamically downscaled climate model outputs improve projections of extreme precipitation events?

    EPA Science Inventory

    Many of the storms that generate damaging floods are caused by locally intense, sub-daily precipitation, yet the spatial and temporal resolution of the most widely available climate model outputs are both too coarse to simulate these events. Thus there is often a disconnect betwe...

  4. Directional output distance functions: endogenous directions based on exogenous normalization constraints

    USDA-ARS?s Scientific Manuscript database

    In this paper we develop a model for computing directional output distance functions with endogenously determined direction vectors. We show how this model is related to the slacks-based directional distance function introduced by Fare and Grosskopf and show how to use the slacks-based function to e...

  5. Application of Wavelet Filters in an Evaluation of ...

    EPA Pesticide Factsheets

    Air quality model evaluation can be enhanced with time-scale specific comparisons of outputs and observations. For example, high-frequency (hours to one day) time scale information in observed ozone is not well captured by deterministic models, and its incorporation into model performance metrics leads one to devote resources to stochastic variations in model outputs. In this analysis, observations are compared with model outputs at seasonal, weekly, diurnal and intra-day time scales. Filters provide frequency-specific information that can be used to compare the strength (amplitude) and timing (phase) of observations and model estimates. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling and Analysis Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing changes in air quality and air pollutant exposures, as affected by changes in ecosystem management and regulatory decisions. AMAD is responsible for providing a sound scientific and technical basis for regulatory policies based on air quality models to improve ambient air quality. The models developed by AMAD are being used by EPA, NOAA, and the air pollution community in understanding and forecasting not only the magnitude of the air pollu
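The time-scale separation described above can be sketched crudely with a moving-average filter in place of a wavelet filter bank (this stand-in, the synthetic hourly ozone series, and all names are assumptions, not the study's method):

```python
import numpy as np

def moving_average(x, window):
    """Simple low-pass filter: centered moving average."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

hours = np.arange(24 * 30)                      # 30 days of hourly samples
diurnal = 20.0 * np.sin(2 * np.pi * hours / 24) # diurnal (sub-daily) cycle
baseline = 40.0 + 0.05 * hours                  # slow seasonal drift
obs = baseline + diurnal                        # synthetic "observed" ozone

slow = moving_average(obs, 24)   # daily-and-longer time scales
fast = obs - slow                # high-frequency (sub-daily) residual
amplitude = fast.std()           # strength of the high-frequency band
```

Comparing `amplitude` (and, with a cross-correlation, phase) between observed and modeled series in each band is the kind of frequency-specific comparison the abstract describes; a real analysis would use proper wavelet or spectral filters.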

  6. Implications of freshwater flux data from the CMIP5 multimodel output across a set of Northern Hemisphere drainage basins

    NASA Astrophysics Data System (ADS)

    Bring, Arvid; Asokan, Shilpa M.; Jaramillo, Fernando; Jarsjö, Jerker; Levi, Lea; Pietroń, Jan; Prieto, Carmen; Rogberg, Peter; Destouni, Georgia

    2015-06-01

    The multimodel ensemble of the Coupled Model Intercomparison Project, Phase 5 (CMIP5) synthesizes the latest research in global climate modeling. The freshwater system on land, particularly runoff, has so far been of relatively low priority in global climate models, despite the societal and ecosystem importance of freshwater changes, and the science and policy needs for such model output on drainage basin scales. Here we investigate the implications of CMIP5 multimodel ensemble output data for the freshwater system across a set of drainage basins in the Northern Hemisphere. Results of individual models vary widely, with even ensemble mean results differing greatly from observations and implying unrealistic long-term systematic changes in water storage and level within entire basins. The CMIP5 projections of basin-scale freshwater fluxes differ considerably more from observations and among models for the warm temperate study basins than for the Arctic and cold temperate study basins. In general, the results call for concerted research efforts and model developments for improving the understanding and modeling of the freshwater system and its change drivers. Specifically, more attention to basin-scale water flux analyses should be a priority for climate model development, and an important focus for relevant model-based advice for adaptation to climate change.

  7. A user-friendly model for spray drying to aid pharmaceutical product development.

    PubMed

    Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L J; Frijlink, Henderik W

    2013-01-01

    The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to produce glassy sugars by spray drying, an often used excipient in formulations of biopharmaceuticals. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach.
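The kind of outlet-temperature prediction described above can be sketched with a single-line energy balance: evaporating the water in the feed cools the drying air. This is an illustrative assumption, not the published model; the function, its parameters, and the neglected heat-loss term are invented, and real spray-dryer models include humidity-dependent properties and equipment heat losses.

```python
CP_AIR = 1.006e3   # J/(kg K), specific heat of dry air
H_VAP = 2.26e6     # J/kg, latent heat of vaporization of water

def outlet_temperature(t_in, air_flow, feed_rate, solids_frac, heat_loss=0.0):
    """Estimate outlet temperature [degC] from an energy balance.

    t_in [degC] inlet air temperature, air_flow [kg/s] drying air,
    feed_rate [kg/s] liquid feed, solids_frac mass fraction of solids.
    """
    water_rate = feed_rate * (1.0 - solids_frac)          # kg/s evaporated
    dT = (water_rate * H_VAP + heat_loss) / (air_flow * CP_AIR)
    return t_in - dT

# Lab-scale numbers (invented): 150 degC inlet, 0.01 kg/s air, 0.1 g/s feed
t_out = outlet_temperature(t_in=150.0, air_flow=0.01,
                           feed_rate=1.0e-4, solids_frac=0.1)
```

Keeping the predicted outlet conditions above the glass transition constraint (via the outlet relative humidity, as in the extended model) is what makes such a balance useful for quality-by-design screening of settings.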

  8. Regionalisation of statistical model outputs creating gridded data sets for Germany

    NASA Astrophysics Data System (ADS)

    Höpp, Simona Andrea; Rauthe, Monika; Deutschländer, Thomas

    2016-04-01

    The goal of the German research program ReKliEs-De (regional climate projection ensembles for Germany, http://.reklies.hlug.de) is to distribute robust information about the range and the extremes of future climate for Germany and its neighbouring river catchment areas. This joint research project is supported by the German Federal Ministry of Education and Research (BMBF) and was initiated by the German Federal States. The project results are meant to support the development of adaptation strategies to mitigate the impacts of future climate change. The aim of our part of the project is to adapt and transfer the regionalisation methods of the gridded hydrological data set (HYRAS) from daily station data to the station-based statistical regional climate model output of WETTREG (a regionalisation method based on weather patterns). The WETTREG model output covers the period of 1951 to 2100 with a daily temporal resolution. From this, we generate a gridded data set of the WETTREG output for precipitation, air temperature and relative humidity with a spatial resolution of 12.5 km x 12.5 km, which is common for regional climate models. This regionalisation thus allows statistical climate model outputs to be compared with dynamical ones. The HYRAS data set was developed by the German Meteorological Service within the German research program KLIWAS (www.kliwas.de) and consists of daily gridded data for Germany and its neighbouring river catchment areas. It has a spatial resolution of 5 km x 5 km for the entire domain for the hydro-meteorological elements precipitation, air temperature and relative humidity, and covers the period of 1951 to 2006. After conservative remapping, the HYRAS data set is also suitable for the validation of climate models. 
The presentation will consist of two parts presenting the current state of the adaptation of the HYRAS regionalisation methods to the statistical regional climate model WETTREG: first, an overview of the HYRAS data set and the regionalisation methods for precipitation (the REGNIE method, based on a combination of multiple linear regression with 5 predictors and inverse distance weighting), air temperature and relative humidity (optimal interpolation); then, results of the regionalisation of the WETTREG model output.
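One ingredient of REGNIE-style regionalisation, inverse distance weighting (IDW), can be sketched in a few lines. The station coordinates and values below are invented for illustration; the operational method combines IDW with a multiple-linear-regression background field.

```python
import math

def idw(x, y, stations, power=2.0):
    """Interpolate a value at (x, y) from (sx, sy, value) station tuples."""
    num = den = 0.0
    for sx, sy, val in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return val              # query point sits exactly on a station
        w = 1.0 / d ** power        # nearer stations get larger weights
        num += w * val
        den += w
    return num / den

stations = [(0.0, 0.0, 10.0), (1.0, 0.0, 20.0)]
mid = idw(0.5, 0.0, stations)   # equidistant from both stations
```

Evaluating `idw` on every cell of a 12.5 km x 12.5 km grid is, in essence, how station-based output is turned into a gridded product comparable with dynamical model fields.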

  9. Whole plant based treatment of hypercholesterolemia with Crataegus laevigata in a zebrafish model.

    PubMed

    Littleton, Robert M; Miller, Matthew; Hove, Jay R

    2012-07-23

    Consumers are increasingly turning to plant-based complementary and alternative medicines to treat hypercholesterolemia. Many of these treatments are untested and their efficacy is unknown. This multitude of potential remedies necessitates a model system amenable to testing large numbers of organisms that maintains similarity to humans in both mode of drug administration and overall physiology. Here we develop the larval zebrafish (4-30 days post fertilization) as a vertebrate model of dietary plant-based treatment of hypercholesterolemia and test the effects of Crataegus laevigata in this model. Larval zebrafish were fed high cholesterol diets infused with fluorescent sterols and phytomedicines. Plants were ground with mortar and pestle into a fine powder before addition to food. Fluorescent sterols were utilized to optically quantify relative difference in intravascular cholesterol levels between groups of fish. We utilized the Zeiss 7-Live Duo high-speed confocal platform in order to both quantify intravascular sterol fluorescence and to capture video of the heart beat for determination of cardiac output. In this investigation we developed and utilized a larval zebrafish model to investigate dietary plant-based intervention of the pathophysiology of hypercholesterolemia. We found BODIPY-cholesterol effectively labels diet-introduced intravascular cholesterol levels (P < 0.05, Student's t-test). We also established that zebrafish cardiac output declines as cholesterol dose increases (difference between 0.1% and 8% (w/w) high cholesterol diet-treated cardiac output significant at P < 0.05, 1-way ANOVA). Using this model, we found hawthorn leaves and flowers significantly reduce intravascular cholesterol levels (P < 0.05, 1-way ANOVA) and interact with cholesterol to impact cardiac output in hypercholesterolemic fish (2-way ANOVA, P < 0.05 for interaction effect). 
The results of this study demonstrate that the larval zebrafish has the potential to become a powerful model to test plant based dietary intervention of hypercholesterolemia. Using this model we have shown that hawthorn leaves and flowers have the potential to affect cardiac output as well as intravascular cholesterol levels. Further, our observation that hawthorn leaves and flowers interact with cholesterol to impact cardiac output indicates that the physiological effects of hawthorn may depend on diet.

  10. Hierarchical Multi-Scale Approach To Validation and Uncertainty Quantification of Hyper-Spectral Image Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Engel, David W.; Reichardt, Thomas A.; Kulp, Thomas J.

    Validating predictive models and quantifying uncertainties inherent in the modeling process is a critical component of the HARD Solids Venture program [1]. Our current research focuses on validating physics-based models predicting the optical properties of solid materials for arbitrary surface morphologies and characterizing the uncertainties in these models. We employ a systematic and hierarchical approach by designing physical experiments and comparing the experimental results with the outputs of computational predictive models. We illustrate this approach through an example comparing a micro-scale forward model to an idealized solid-material system and then propagating the results through a system model to the sensor level. Our efforts should enhance detection reliability of the hyper-spectral imaging technique and the confidence in model utilization and model outputs by users and stakeholders.

  11. The reservoir model: a differential equation model of psychological regulation.

    PubMed

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging.
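The reservoir idea reduces to a first-order differential equation, dL/dt = input − k·L, where L is the construct level and k is the dissipation parameter. A minimal Euler-integration sketch (the rate k and the constant input series are illustrative assumptions, not the article's fitted values):

```python
def simulate_reservoir(inputs, k=0.3, level0=0.0, dt=1.0):
    """Euler-step the reservoir ODE dL/dt = input - k * L."""
    levels = [level0]
    for u in inputs:
        level = levels[-1]
        levels.append(level + dt * (u - k * level))  # inflow minus drainage
    return levels

levels = simulate_reservoir([1.0] * 50)   # a constant daily stressor
steady = levels[-1]                       # approaches input / k
```

Under constant input the level saturates at input/k, so a larger k (better "blowing off steam") yields a lower steady-state stress level; fitting k per person is the substantive payoff of the model.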

  12. The Reservoir Model: A Differential Equation Model of Psychological Regulation

    PubMed Central

    Deboeck, Pascal R.; Bergeman, C. S.

    2017-01-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might “add up” over time (e.g., life stressors, inputs), but individuals simultaneously take action to “blow off steam” (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the “height” (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. PMID:23527605

  13. Bias correction of temperature produced by the Community Climate System Model using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Moghim, S.; Hsu, K.; Bras, R. L.

    2013-12-01

    General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that affect their use. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM output variables. A two-layer feed-forward neural network is trained with observations during a historical period, and the adjusted network can then be used to predict bias-corrected temperature for future periods. To capture extreme values, this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with Community Climate System Model (CCSM3) outputs using air and skin temperature, specific humidity, and shortwave and longwave radiation as inputs to the ANN. The method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observations. The results indicate that EDCDFANN has the potential to remove the biases of the model outputs.
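The equidistant CDF matching (EDCDF) component can be sketched on synthetic data: each future model value is shifted by the quantile-dependent difference between the observed and historical-model distributions. This sketch omits the ANN surrogate stage entirely, and the Gaussian "temperatures" with a uniform +2 °C warm bias are invented for illustration.

```python
import numpy as np

def edcdf(x_future, obs_hist, model_hist):
    """x_adj = x_future + F_obs^{-1}(p) - F_model_hist^{-1}(p),
    with p the quantile of each value within the future-model sample."""
    p = np.searchsorted(np.sort(x_future), x_future) / len(x_future)
    p = np.clip(p, 0.01, 0.99)   # avoid unstable extreme quantiles
    return x_future + np.quantile(obs_hist, p) - np.quantile(model_hist, p)

rng = np.random.default_rng(1)
obs = rng.normal(15.0, 3.0, 4000)      # observed historical temperature
hist = rng.normal(17.0, 3.0, 4000)     # model historical run, warm bias
future = rng.normal(19.0, 3.0, 4000)   # model projection
adj = edcdf(future, obs, hist)         # bias removed, warming signal kept
```

Unlike plain quantile mapping, the correction is additive at each quantile, so the projected climate-change signal (here, +2 °C of warming) survives the adjustment.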

  14. Stiffness modeling of compliant parallel mechanisms and applications in the performance analysis of a decoupled parallel compliant stage

    NASA Astrophysics Data System (ADS)

    Jiang, Yao; Li, Tie-Min; Wang, Li-Ping

    2015-09-01

    This paper investigates the stiffness modeling of compliant parallel mechanisms (CPMs) based on the matrix method. First, the general compliance matrix of a serial flexure chain is derived. The stiffness modeling of CPMs is next discussed in detail, considering the relative positions of the applied load and the selected displacement output point. The derived stiffness models have simple and explicit forms, and the input, output, and coupling stiffness matrices of the CPM can easily be obtained. The proposed analytical model is applied to the stiffness modeling and performance analysis of an XY parallel compliant stage with input and output decoupling characteristics. Then, the key geometrical parameters of the stage are optimized to obtain the minimum input decoupling degree. Finally, a prototype of the compliant stage is developed and its input axial stiffness, coupling characteristics, positioning resolution, and circular contouring performance are tested. The results demonstrate the excellent performance of the compliant stage and verify the effectiveness of the proposed theoretical model. The general stiffness models provided in this paper will be helpful for performance analysis, especially in determining coupling characteristics, and the structure optimization of the CPM.
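The matrix-method idea for a serial flexure chain rests on a standard fact: member compliance matrices, once expressed in a common frame, add. A toy sketch under that assumption (the 2x2 compliance values and identity frame transforms are invented; real formulations use 6x6 matrices and adjoint transformations):

```python
import numpy as np

def serial_compliance(compliances, transforms):
    """C_total = sum_i T_i @ C_i @ T_i.T  (series compliances add)."""
    total = np.zeros_like(compliances[0])
    for C, T in zip(compliances, transforms):
        total += T @ C @ T.T
    return total

C1 = np.diag([1e-6, 2e-6])   # compliance of flexure 1 (illustrative units)
C2 = np.diag([3e-6, 1e-6])   # compliance of flexure 2
I = np.eye(2)                # both already expressed in the common frame
C_total = serial_compliance([C1, C2], [I, I])
K_total = np.linalg.inv(C_total)   # stiffness is the inverse of compliance
```

The input, output, and coupling stiffness matrices of a full CPM are then assembled from such chain compliances, with the transforms carrying the relative positions of load and output point that the abstract emphasizes.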

  15. User Manual for SAHM package for VisTrails

    USGS Publications Warehouse

    Talbert, C.B.; Talbert, M.K.

    2012-01-01

    The Software for Assisted Habitat Modeling (SAHM) has been created to both expedite habitat modeling and help maintain a record of the various input data, pre- and post-processing steps, and modeling options incorporated in the construction of a species distribution model. The four main advantages to using the combined VisTrails:SAHM package for species distribution modeling are: 1. formalization and tractable recording of the entire modeling process; 2. easier collaboration through a common modeling framework; 3. a user-friendly graphical interface to manage file input, model runs, and output; 4. extensibility to incorporate future and additional modeling routines and tools. This user manual provides detailed information on each module within the SAHM package: their input, output, common connections, optional arguments, and default settings. This information can also be accessed for individual modules by right-clicking on the documentation button for any module in VisTrails, or by right-clicking on any input or output for a module and selecting view documentation. This user manual is intended to accompany the user guide, which provides detailed instructions on how to install the SAHM package within VisTrails and then presents information on the use of the package.

  16. The relationship between buccofacial and limb apraxia.

    PubMed

    Raade, A S; Rothi, L J; Heilman, K M

    1991-07-01

    There are at least two possible models depicting the relationship between buccofacial and limb apraxia. First, apraxia can be viewed as a unitary motor disorder which transcends the output modalities of both buccofacial and limb output. A high degree of similarity between the two types of apraxia would support this model. Alternatively, the relationship between buccofacial and limb apraxia may not include a unitary mechanism. The presence of quantitative and qualitative differences between buccofacial and limb performance would support this nonunitary model. The results of the present study support the nonunitary model.

  17. National Centers for Environmental Prediction

    Science.gov Websites


  18. Dynamics of nonlinear feedback control.

    PubMed

    Snippe, H P; van Hateren, J H

    2007-05-01

    Feedback control in neural systems is ubiquitous. Here we study the mathematics of nonlinear feedback control. We compare models in which the input is multiplied by a dynamic gain (multiplicative control) with models in which the input is divided by a dynamic attenuation (divisive control). The gain signal (resp. the attenuation signal) is obtained through a concatenation of an instantaneous nonlinearity and a linear low-pass filter operating on the output of the feedback loop. For input steps, the dynamics of gain and attenuation can be very different, depending on the mathematical form of the nonlinearity and the ordering of the nonlinearity and the filtering in the feedback loop. Further, the dynamics of feedback control can be strongly asymmetrical for increment versus decrement steps of the input. Nevertheless, for each of the models studied, the nonlinearity in the feedback loop can be chosen such that immediately after an input step, the dynamics of feedback control is symmetric with respect to increments versus decrements. Finally, we study the dynamics of the output of the control loops and find conditions under which overshoots and undershoots of the output relative to the steady-state output occur when the models are stimulated with low-pass filtered steps. For small steps at the input, overshoots and undershoots of the output do not occur when the filtering in the control path is faster than the low-pass filtering at the input. For large steps at the input, however, results depend on the model, and for some of the models, multiple overshoots and undershoots can occur even with a fast control path.
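The divisive variant described above can be simulated in discrete time: the output is the input divided by an attenuation state that low-pass filters a nonlinear function of the output. The square-law nonlinearity, time constant, and step amplitude below are illustrative assumptions, not the paper's specific models.

```python
def divisive_loop(inputs, tau=10.0, dt=1.0):
    """Divisive feedback: y = u / a, with a low-pass tracking 1 + y^2."""
    a = 1.0                  # attenuation state (starts unadapted)
    outputs = []
    for u in inputs:
        y = u / a                         # divisive control of the input
        drive = 1.0 + y ** 2              # instantaneous nonlinearity
        a += (dt / tau) * (drive - a)     # first-order low-pass filter
        outputs.append(y)
    return outputs

ys = divisive_loop([4.0] * 200)   # response to an input step
# At steady state the output satisfies y = u / (1 + y^2).
```

The initial response overshoots (the first output equals the full step, 4.0) and then adapts downward as the attenuation builds up, illustrating the step asymmetries the abstract analyzes.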

  19. United States Air Force Summer Research Program 1991. High School Apprenticeship Program (HSAP) Reports. Volume 11. Phillips Laboratory, Civil Engineering Laboratory

    DTIC Science & Technology

    1992-01-09

    Report titles in this volume include "Liquid Crystal Polymers" (Tracy Reed, Geophysics Laboratory (GEO)) and "Analysis of Model Output Statistics Thunderstorm Prediction Model" (Frank A. Lasley, Hanscom AFB). The thunderstorm study used Model Output Statistics (MOS) thunderstorm prediction information and Service A weather observations; the polymer study predicted that the dogbone samples would turn brown once they reached the approximate annealing temperature.

  20. Abundance and recruitment data for Undaria pinnatifida in Brest harbour, France: Model versus field results.

    PubMed

    Murphy, James T; Voisin, Marie; Johnson, Mark; Viard, Frédérique

    2016-06-01

    The data presented in this article are related to the research article entitled "A modelling approach to explore the critical environmental parameters influencing the growth and establishment of the invasive seaweed Undaria pinnatifida in Europe" [1]. This article describes raw simulation data output from a novel individual-based model of the invasive kelp species Undaria pinnatifida. It also includes field data of monthly abundance and recruitment values for a population of invasive U. pinnatifida (in Brest harbour, France) that were used to validate the model. The raw model output and field data are made publicly available in order to enable critical analysis of the model predictions and to inform future modelling efforts of the study species.

  1. Interregional migration in an extended input-output model.

    PubMed

    Madden, M; Trigg, A B

    1990-01-01

    "This article develops a two-region version of an extended input-output model that disaggregates consumption among employed, unemployed, and inmigrant households, and which explicitly models the influx into a region of migrants to take up a proportion of any jobs created in the regional economy. The model is empirically tested using real data for the Scotland (UK) regions of Strathclyde and Rest-of-Scotland. Sets of interregional economic, demographic, demo-economic, and econo-demographic multipliers are developed and discussed, and the effects of a range of economic and demographic impacts are modeled. The circumstances under which Hawkins-Simon conditions for non-negativity are breached are identified, and the limits of the model discussed." excerpt

  2. Output-Feedback Model Predictive Control of a Pasteurization Pilot Plant based on an LPV model

    NASA Astrophysics Data System (ADS)

    Karimi Pour, Fatemeh; Ocampo-Martinez, Carlos; Puig, Vicenç

    2017-01-01

    This paper presents a model predictive control (MPC) of a pasteurization pilot plant based on an LPV model. Since not all the states are measured, an observer is also designed, which allows implementing an output-feedback MPC scheme. However, the model of the plant is not completely observable when augmented with the disturbance models. In order to solve this problem, the following strategies are used: (i) the whole system is decoupled into two subsystems, (ii) an inner state-feedback controller is implemented into the MPC control scheme. A real-time example based on the pasteurization pilot plant is simulated as a case study for testing the behavior of the approaches.

  3. Parametric and non-parametric modeling of short-term synaptic plasticity. Part I: computational study

    PubMed Central

    Marmarelis, Vasilis Z.; Berger, Theodore W.

    2009-01-01

    Parametric and non-parametric modeling methods are combined to study the short-term plasticity (STP) of synapses in the central nervous system (CNS). The nonlinear dynamics of STP are modeled by means of: (1) previously proposed parametric models based on mechanistic hypotheses and/or specific dynamical processes, and (2) non-parametric models (in the form of Volterra kernels) that transform the presynaptic signals into postsynaptic signals. In order to use the two approaches synergistically, we estimate the Volterra kernels of the parametric models of STP for four types of synapses using synthetic broadband input–output data. Results show that the non-parametric models accurately and efficiently replicate the input–output transformations of the parametric models. Volterra kernels provide a general and quantitative representation of the STP. PMID:18506609
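
    A discrete second-order Volterra model of the kind used as the non-parametric representation above can be sketched as follows; the kernels here are invented placeholders, not the estimated synaptic kernels from the paper.

    ```python
    import numpy as np

    M = 5                               # memory length (illustrative)
    k1 = np.exp(-np.arange(M))          # hypothetical first-order kernel
    k2 = 0.1 * np.outer(k1, k1)         # hypothetical second-order kernel

    def volterra_output(x, k0=0.0):
        """y(n) = k0 + sum_m k1(m) x(n-m)
                  + sum_{m1,m2} k2(m1,m2) x(n-m1) x(n-m2)."""
        y = np.zeros(len(x))
        for n in range(len(x)):
            # past inputs x(n), x(n-1), ..., zero-padded before the record start
            xm = np.array([x[n - m] if n - m >= 0 else 0.0 for m in range(M)])
            y[n] = k0 + k1 @ xm + xm @ k2 @ xm
        return y

    x = np.random.default_rng(0).standard_normal(50)   # broadband input
    y = volterra_output(x)
    ```

    Kernel estimation (e.g. via Laguerre expansions) from input–output data is the step the paper performs; this sketch only evaluates a given kernel set.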

  4. Current Source Based on H-Bridge Inverter with Output LCL Filter

    NASA Astrophysics Data System (ADS)

    Blahnik, Vojtech; Talla, Jakub; Peroutka, Zdenek

    2015-09-01

    The paper deals with the control of a current source with an LCL output filter. The controlled current source is realized as a single-phase inverter whose output LCL filter provides low ripple of the output current. However, systems incorporating LCL filters require more complex control strategies, and there are several interesting approaches to the control of this type of converter. This paper presents an inverter control algorithm that combines model-based control with direct current control based on resonant controllers and single-phase vector control. The primary goal is to keep the current ripple and distortion below required limits and to provide fast and precise control of the output current. The proposed control technique is verified by measurements on the laboratory model.

  5. Improving short-term forecasting during ramp events by means of Regime-Switching Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Gallego, C.; Costa, A.; Cuerva, A.

    2010-09-01

    Since wind energy currently can be neither scheduled nor stored at large scale, wind power forecasting has been useful to minimize the impact of wind fluctuations. In particular, short-term forecasting (characterised by prediction horizons from minutes to a few days) is currently required by energy producers (in a daily electricity market context) and by TSOs (in order to keep the stability/balance of an electrical system). Within the short-term context, time-series-based models (i.e., statistical models) have shown better performance than NWP models for horizons up to a few hours. These models try to learn and replicate the dynamics shown by the time series of a certain variable. When considering the power output of wind farms, ramp events are usually observed, characterized by a large positive gradient in the time series (ramp-up) or a large negative one (ramp-down) during relatively short time periods (a few hours). Ramp events may have many different causes, generally involving several spatial scales, from the large scale (fronts, low-pressure systems) down to the local scale (wind turbine shut-down due to high wind speed, yaw misalignment due to fast changes of wind direction). Hence, the output power may show unexpected dynamics during ramp events depending on the underlying processes; consequently, traditional statistical models that consider only one dynamic for the whole power time series may be inappropriate. This work proposes a Regime-Switching (RS) model based on Artificial Neural Networks (ANNs). The RS-ANN model gathers as many ANNs as dynamics considered (called regimes); a certain ANN is selected to predict the output power, depending on the current regime. The current regime is updated on-line based on a gradient criterion applied to the past two values of the output power. Three regimes are established concerning ramp events: ramp-up, ramp-down and no-ramp.
In order to assess the skill of the proposed RS-ANN model, a single-ANN model (without regime classification) is adopted as a reference model. Both models are evaluated in terms of Improvement over Persistence on a Mean Square Error basis (IoP%) when predicting horizons from 1 time-step to 5. The case of a wind farm located in the complex terrain of Alaiz (northern Spain) has been considered. Three years of available power output data with an hourly resolution have been employed: two years for training and validation of the model and the last year for assessing the accuracy. Results showed that the RS-ANN outperformed the single-ANN model for one-step-ahead forecasts: the overall IoP% was up to 8.66% for the RS-ANN model (depending on the gradient criterion selected to trigger the ramp regime) and 6.16% for the single-ANN. However, both models showed similar accuracy for larger horizons. A locally-weighted evaluation during ramp events for one-step-ahead forecasts was also performed. It was found that the IoP% during ramp-up events increased from 17.60% (single-ANN) to 22.25% (RS-ANN), while during ramp-down events the improvement increased from 18.55% to 19.55%. Three main conclusions are derived from this case study. First, it highlights the importance of statistical models capable of differentiating the several regimes shown by the output power time series, in order to improve forecasting during extreme events like ramps. Second, on-line regime classification based on available power output data did not seem to improve forecasts for horizons beyond one step ahead; taking into account other explanatory variables (local wind measurements, NWP outputs) could lead to a better understanding of ramp events, improving the regime assessment for further horizons as well. Third, the RS-ANN model slightly outperformed the single-ANN during ramp-down events;
if further research reinforces this effect, special attention should be paid to understanding the underlying processes during ramp-down events.
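
    The on-line, gradient-based regime selection described above can be sketched as follows; the threshold value is an assumption for illustration, not a criterion taken from the study.

    ```python
    # Regime chosen from the gradient of the past two normalized power values.
    RAMP_THRESHOLD = 0.1   # fraction of rated power per time step (hypothetical)

    def classify_regime(p_prev, p_curr, threshold=RAMP_THRESHOLD):
        """Return 'ramp-up', 'ramp-down' or 'no-ramp' from the last two
        power values (normalized by rated power)."""
        gradient = p_curr - p_prev
        if gradient > threshold:
            return "ramp-up"
        if gradient < -threshold:
            return "ramp-down"
        return "no-ramp"

    # In the RS-ANN scheme, each regime label would select its own ANN
    # predictor for the next time step (predictors omitted here).
    regime = classify_regime(0.20, 0.55)
    ```

    In the full model the regime label indexes into a bank of trained networks; the sensitivity of the results to the threshold is what the abstract refers to as the "gradient criterion selected".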

  6. Modeling of Aircraft Deicing Fluid Induced Biochemical Oxygen Demand in Subsurface-Flow Constructed Treatment Wetlands

    DTIC Science & Technology

    2009-03-01

    meters. The input and output control structures are modeled as sharp-crested, rectangular weirs one meter in width. The elevation of the input weir is...manipulated by adjusting the width of both the input and output weirs and the crest height of the output weir. All of these adjustments were found to be...reduction of the weir crest height had an effect on the amount of storm water retained during low precipitation conditions, but not on the crest

  7. A Reliability Estimation in Modeling Watershed Runoff With Uncertainties

    NASA Astrophysics Data System (ADS)

    Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.

    1990-10-01

    The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.
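
    The Monte Carlo flavor of the reliability analysis described above can be sketched with a toy watershed model; the rational method stands in for a full model like HEC-1, and all distributions below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def peak_discharge(c, i, a):
        """Toy watershed model: rational-method peak discharge Q = c * i * a."""
        return c * i * a

    # Uncertain inputs (assumed distributions, not calibrated values):
    n = 10_000
    c = rng.normal(0.6, 0.05, n)      # runoff coefficient uncertainty
    i = rng.lognormal(1.0, 0.3, n)    # rainfall intensity uncertainty
    a = 12.0                          # drainage area, treated as known

    q = peak_discharge(c, i, a)       # Monte Carlo sample of peak discharge

    # The sample distribution of q expresses prediction reliability,
    # e.g. the probability that the peak exceeds a design threshold.
    p_exceed = (q > 25.0).mean()
    ```

    First-order second-moment methods, also mentioned in the abstract, would instead propagate only means and variances through a linearization of the model.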

  8. An Artificial Intelligence Approach for Modeling and Prediction of Water Diffusion Inside a Carbon Nanotube

    PubMed Central

    2009-01-01

    Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input–output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input–output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down. PMID:20596382

  9. An Artificial Intelligence Approach for Modeling and Prediction of Water Diffusion Inside a Carbon Nanotube.

    PubMed

    Ahadian, Samad; Kawazoe, Yoshiyuki

    2009-06-04

    Modeling of water flow in carbon nanotubes is still a challenge for the classic models of fluid dynamics. In this investigation, an adaptive-network-based fuzzy inference system (ANFIS) is presented to solve this problem. The proposed ANFIS approach can construct an input-output mapping based on both human knowledge in the form of fuzzy if-then rules and stipulated input-output data pairs. Good performance of the designed ANFIS ensures its capability as a promising tool for modeling and prediction of fluid flow at nanoscale where the continuum models of fluid dynamics tend to break down.

  10. Approximate Bayesian Computation by Subset Simulation using hierarchical state-space models

    NASA Astrophysics Data System (ADS)

    Vakilzadeh, Majid K.; Huang, Yong; Beck, James L.; Abrahamsson, Thomas

    2017-02-01

    A new multi-level Markov Chain Monte Carlo algorithm for Approximate Bayesian Computation, ABC-SubSim, has recently appeared that exploits the Subset Simulation method for efficient rare-event simulation. ABC-SubSim adaptively creates a nested decreasing sequence of data-approximating regions in the output space that correspond to increasingly closer approximations of the observed output vector in this output space. At each level, multiple samples of the model parameter vector are generated by a component-wise Metropolis algorithm so that the predicted output corresponding to each parameter value falls in the current data-approximating region. Theoretically, if continued to the limit, the sequence of data-approximating regions would converge onto the observed output vector and the approximate posterior distributions, which are conditional on the data-approximation region, would become exact, but this is not practically feasible. In this paper we study the performance of the ABC-SubSim algorithm for Bayesian updating of the parameters of dynamical systems using a general hierarchical state-space model. We note that the ABC methodology gives an approximate posterior distribution that actually corresponds to an exact posterior where a uniformly distributed combined measurement and modeling error is added. We also note that ABC algorithms have a problem with learning the uncertain error variances in a stochastic state-space model and so we treat them as nuisance parameters and analytically integrate them out of the posterior distribution. In addition, the statistical efficiency of the original ABC-SubSim algorithm is improved by developing a novel strategy to regulate the proposal variance for the component-wise Metropolis algorithm at each level. 
We demonstrate that Self-regulated ABC-SubSim is well suited for Bayesian system identification by first applying it successfully to model updating of a two degree-of-freedom linear structure for three cases: globally, locally and un-identifiable model classes, and then to model updating of a two degree-of-freedom nonlinear structure with Duffing nonlinearities in its interstory force-deflection relationship.
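
    The core ABC idea exploited above, accepting parameter samples whose predicted output falls inside a data-approximating region around the observed output, can be sketched with plain rejection ABC on a toy model; ABC-SubSim itself shrinks that region adaptively via Subset Simulation rather than using the single fixed tolerance shown here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    y_obs = 2.0                       # observed output (toy value)

    def forward_model(theta):
        """Toy stochastic forward model: output = theta + noise."""
        return theta + rng.normal(0.0, 0.1)

    def sample_prior(n):
        """Uniform prior on [0, 4] (assumed for illustration)."""
        return rng.uniform(0.0, 4.0, n)

    tol = 0.2                         # radius of the data-approximating region

    samples = sample_prior(20_000)
    accepted = np.array([th for th in samples
                         if abs(forward_model(th) - y_obs) <= tol])
    # `accepted` approximates the ABC posterior p(theta | |y - y_obs| <= tol),
    # which (as noted above) is an exact posterior under an added uniformly
    # distributed combined measurement/modeling error.
    ```

    Rejection ABC wastes most samples at small tolerances; the multi-level Subset Simulation structure of ABC-SubSim exists precisely to avoid that cost.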

  11. Feature-based data assimilation in geophysics

    NASA Astrophysics Data System (ADS)

    Morzfeld, Matthias; Adams, Jesse; Lunderman, Spencer; Orozco, Rafael

    2018-05-01

    Many applications in science require that computational models and data be combined. In a Bayesian framework, this is usually done by defining likelihoods based on the mismatch of model outputs and data. However, matching model outputs and data in this way can be unnecessary or impossible. For example, using large amounts of steady state data is unnecessary because these data are redundant. It is numerically difficult to assimilate data in chaotic systems. It is often impossible to assimilate data of a complex system into a low-dimensional model. As a specific example, consider a low-dimensional stochastic model for the dipole of the Earth's magnetic field, while other field components are ignored in the model. The above issues can be addressed by selecting features of the data, and defining likelihoods based on the features, rather than by the usual mismatch of model output and data. Our goal is to contribute to a fundamental understanding of such a feature-based approach that allows us to assimilate selected aspects of data into models. We also explain how the feature-based approach can be interpreted as a method for reducing an effective dimension and derive new noise models, based on perturbed observations, that lead to computationally efficient solutions. Numerical implementations of our ideas are illustrated in four examples.

  12. A warm-season comparison of WRF coupled to the CLM4.0, Noah-MP, and Bucket hydrology land surface schemes over the central USA

    NASA Astrophysics Data System (ADS)

    Van Den Broeke, Matthew S.; Kalin, Andrew; Alavez, Jose Abraham Torres; Oglesby, Robert; Hu, Qi

    2017-11-01

    In climate modeling studies, there is a need to choose a suitable land surface model (LSM) while adhering to available resources. In this study, the viability of three LSM options (Community Land Model version 4.0 [CLM4.0], Noah-MP, and the five-layer thermal diffusion [Bucket] scheme) in the Weather Research and Forecasting model version 3.6 (WRF3.6) was examined for the warm season in a domain centered on the central USA. Model output was compared to Parameter-elevation Relationships on Independent Slopes Model (PRISM) data, a gridded observational dataset including mean monthly temperature and total monthly precipitation. Model output temperature, precipitation, latent heat (LH) flux, sensible heat (SH) flux, and soil water content (SWC) were compared to observations from sites in the Central and Southern Great Plains region. An overall warm bias was found in CLM4.0 and Noah-MP, with a cool bias of larger magnitude in the Bucket model. These three LSMs produced similar patterns of wet and dry biases. Model output of SWC and LH/SH fluxes were compared to observations, and did not show a consistent bias. Both sophisticated LSMs appear to be viable options for simulating the effects of land use change in the central USA.

  13. Perspectives to performance of environment and health assessments and models--from outputs to outcomes?

    PubMed

    Pohjola, Mikko V; Pohjola, Pasi; Tainio, Marko; Tuomisto, Jouni T

    2013-06-26

    The calls for knowledge-based policy and policy-relevant research invoke a need to evaluate and manage environment and health assessments and models according to their societal outcomes. This review explores how well the existing approaches to assessment and model performance serve this need. The perspectives to assessment and model performance in the scientific literature can be called: (1) quality assurance/control, (2) uncertainty analysis, (3) technical assessment of models, (4) effectiveness and (5) other perspectives, according to what is primarily seen to constitute the goodness of assessments and models. The categorization is not strict and methods, tools and frameworks in different perspectives may overlap. However, altogether it seems that most approaches to assessment and model performance are relatively narrow in their scope. The focus in most approaches is on the outputs and making of assessments and models. Practical application of the outputs and the consequential outcomes are often left unaddressed. It appears that more comprehensive approaches that combine the essential characteristics of different perspectives are needed. This necessitates a better account of the mechanisms of collective knowledge creation and the relations between knowledge and practical action. Some new approaches to assessment, modeling and their evaluation and management span the chain from knowledge creation to societal outcomes, but the complexity of evaluating societal outcomes remains a challenge.

  14. A proposed Kalman filter algorithm for estimation of unmeasured output variables for an F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Alag, Gurbux S.; Gilyard, Glenn B.

    1990-01-01

    To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach for estimating unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as from the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
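
    The bias-accommodation idea described above can be sketched with a scalar augmented-state Kalman filter: the measurement bias is appended to the state vector and estimated alongside it. All matrices and values below are illustrative; they are not F100 engine data.

    ```python
    import numpy as np

    A = np.array([[0.9]])            # nominal (stable) state dynamics
    C = np.array([[1.0]])            # nominal measurement matrix
    # Augmented system: state [x; b], bias b modeled as a random constant.
    Aa = np.block([[A, np.zeros((1, 1))],
                   [np.zeros((1, 1)), np.eye(1)]])
    Ca = np.hstack([C, np.eye(1)])   # measurement y = C x + b + v
    Q = 1e-4 * np.eye(2)             # process noise covariance (assumed)
    R = np.array([[1e-2]])           # measurement noise covariance (assumed)

    def kalman_step(x, P, y):
        """One predict/update cycle of the augmented-state Kalman filter."""
        x = Aa @ x                            # predict
        P = Aa @ P @ Aa.T + Q
        S = Ca @ P @ Ca.T + R                 # innovation covariance
        K = P @ Ca.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (y - Ca @ x)              # update with innovation
        P = (np.eye(2) - K @ Ca) @ P
        return x, P

    # Synthetic run: the true state has decayed to zero, but the sensor
    # carries a constant off-nominal bias of 0.5.
    x_est, P = np.zeros((2, 1)), np.eye(2)
    for _ in range(200):
        x_est, P = kalman_step(x_est, P, np.array([[0.5]]))
    # x_est[1] now approximates the off-nominal measurement bias.
    ```

    Estimating the bias states is what lets the filter separate genuine state variation from off-nominal engine behavior before reconstructing the unmeasured outputs.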

  15. Method and apparatus for loss of control inhibitor systems

    NASA Technical Reports Server (NTRS)

    A'Harrah, Ralph C. (Inventor)

    2007-01-01

    Active and adaptive systems and methods to prevent loss of control incidents by providing tactile feedback to a vehicle operator are disclosed. According to the present invention, an operator gives a control input to an inceptor. An inceptor sensor measures an inceptor input value of the control input. The inceptor input is used as an input to a Steady-State Inceptor Input/Effector Output Model that models the vehicle control system design. A desired effector output from the inceptor input is generated from the model. The desired effector output is compared to an actual effector output to get a distortion metric. A feedback force is generated as a function of the distortion metric. The feedback force is used as an input to a feedback force generator which generates a loss of control inhibitor system (LOCIS) force back to the inceptor. The LOCIS force is felt by the operator through the inceptor.

  16. Geophysical, archaeological, and historical evidence support a solar-output model for climate change

    PubMed Central

    Perry, Charles A.; Hsu, Kenneth J.

    2000-01-01

    Although the processes of climate change are not completely understood, an important causal candidate is variation in total solar output. Reported cycles in various climate-proxy data show a tendency to emulate a fundamental harmonic sequence of a basic solar-cycle length (11 years) multiplied by 2N (where N equals a positive or negative integer). A simple additive model for total solar-output variations was developed by superimposing a progression of fundamental harmonic cycles with slightly increasing amplitudes. The timeline of the model was calibrated to the Pleistocene/Holocene boundary at 9,000 years before present. The calibrated model was compared with geophysical, archaeological, and historical evidence of warm or cold climates during the Holocene. The evidence of periods of several centuries of cooler climates worldwide called “little ice ages,” similar to the period anno Domini (A.D.) 1280–1860 and reoccurring approximately every 1,300 years, corresponds well with fluctuations in modeled solar output. A more detailed examination of the climate sensitive history of the last 1,000 years further supports the model. Extrapolation of the model into the future suggests a gradual cooling during the next few centuries with intermittent minor warmups and a return to near little-ice-age conditions within the next 500 years. This cool period then may be followed approximately 1,500 years from now by a return to altithermal conditions similar to the previous Holocene Maximum. PMID:11050181
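
    The additive harmonic model described above can be sketched as a sum of sinusoids with periods 11 × 2^N years; the amplitude law and zero phases below are assumptions made only for illustration, not the calibrated values from the paper.

    ```python
    import numpy as np

    def solar_output_anomaly(t_years, n_min=0, n_max=8):
        """Superimpose harmonic cycles of period 11 * 2^N years,
        N = n_min..n_max, with slightly increasing amplitudes."""
        total = np.zeros_like(t_years, dtype=float)
        for n in range(n_min, n_max + 1):
            period = 11.0 * 2 ** n
            amplitude = 1.0 + 0.1 * n          # assumed amplitude progression
            total += amplitude * np.sin(2 * np.pi * t_years / period)
        return total

    # Evaluate over the Holocene timescale (years before present);
    # the paper calibrates the timeline to the boundary at 9,000 yr BP.
    t = np.linspace(0.0, 9000.0, 9001)
    anomaly = solar_output_anomaly(t)
    ```

    With n_max = 8 the longest cycle is 11 × 256 = 2,816 years; the ~1,300-year recurrence of little-ice-age conditions noted above corresponds to structure from the longer harmonics in such a sum.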

  17. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free approach to trajectory tracking of multiple-input multiple-output (MIMO) systems that combines iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforwardly without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed into a set of optimization problems assigned to each separate single-input single-output control channel, which ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.

  18. Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  19. LANL - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  20. LANL - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  1. Distinguishing the Forest from the Trees: Synthesizing IHRMP Research

    Treesearch

    Gregory B. Greenwood

    1991-01-01

    A conceptual model of hardwood rangelands as multi-output resource system is developed and used to achieve a synthesis of Integrated Hardwood Range Management Program (IHRMP) research. The model requires the definition of state variables which characterize the system at any time, processes that move the system to different states, outputs...

  2. Educational Resource Multipliers for Use in Local Public Finance: An Input-Output Approach.

    ERIC Educational Resources Information Center

    Boardman, A. E.; Schinnar, A. P.

    1982-01-01

    Develops an input-output model, with related multipliers, showing how changes in earmarked and discretionary educational funds (whether local, state, or federal) affect all of a state's districts and educational programs. Illustrates the model with Pennsylvania data and relates it to the usual educational finance approach, which uses demand…

  3. Developing Snow Model Forcing Data From WRF Model Output to Aid in Water Resource Forecasting

    NASA Astrophysics Data System (ADS)

    Havens, S.; Marks, D. G.; Watson, K. A.; Masarik, M.; Flores, A. N.; Kormos, P.; Hedrick, A. R.

    2015-12-01

    Traditional operational modeling tools used by water managers in the west are challenged by more frequently occurring uncharacteristic stream flow patterns caused by climate change. Water managers are now turning to new models based on the physical processes within a watershed to combat the increasing number of events that do not follow the historical patterns. The USDA-ARS has provided near real time snow water equivalent (SWE) maps using iSnobal since WY2012 for the Boise River Basin in southwest Idaho and since WY2013 for the Tuolumne Basin in California that feeds the Hetch Hetchy reservoir. The goal of these projects is to not only provide current snowpack estimates but to use the Weather Research and Forecasting (WRF) model to drive iSnobal in order to produce a forecasted stream flow when coupled to a hydrology model. The first step is to develop methods for creating snow model forcing data from WRF outputs. Using a reanalysis 1km WRF dataset from WY2009 over the Boise River Basin, WRF model results such as surface air temperature, relative humidity, wind, precipitation, cloud cover, and incoming long wave radiation must be downscaled for use in iSnobal. iSnobal results forced with WRF output are validated at point locations throughout the basin, as well as compared with iSnobal results forced with traditional weather station data. The presentation will explore the differences in forcing data derived from WRF outputs and weather stations and how this affects the snowpack distribution.

  4. Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.

    2008-12-01

    The structure of a hydrological model at the macro scale (e.g., watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at the macro scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs in the form of pre-assumed model equations with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of a three-dimensional-state (soil moisture storage, fast and slow flow stores) conceptual model structure. Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, as well as provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to input and due to mathematical structure.

  5. ANALYTICAL MODELING OF ELECTRON BACK-BOMBARDMENT INDUCED CURRENT INCREASE IN UN-GATED THERMIONIC CATHODE RF GUNS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, J. P.; Sun, Y.; Harris, J. R.

    In this paper we derive analytical expressions for the output current of an un-gated thermionic cathode RF gun in the presence of back-bombardment heating. We provide a brief overview of back-bombardment theory and discuss comparisons between the analytical back-bombardment predictions and simulation models. We then derive an expression for the output current as a function of the RF repetition rate and discuss relationships between back-bombardment, field enhancement, and output current. We discuss in detail the relevant approximations and then provide predictions about how the output current should vary as a function of repetition rate for some given system configurations.

  6. Modelling innovation performance of European regions using multi-output neural networks

    PubMed Central

    Hajek, Petr; Henriques, Roberto

    2017-01-01

    Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes. PMID:28968449

  7. Modelling innovation performance of European regions using multi-output neural networks.

    PubMed

    Hajek, Petr; Henriques, Roberto

    2017-01-01

    Regional innovation performance is an important indicator for decision-making regarding the implementation of policies intended to support innovation. However, patterns in regional innovation structures are becoming increasingly diverse, complex and nonlinear. To address these issues, this study aims to develop a model based on a multi-output neural network. Both intra- and inter-regional determinants of innovation performance are empirically investigated using data from the 4th and 5th Community Innovation Surveys of NUTS 2 (Nomenclature of Territorial Units for Statistics) regions. The results suggest that specific innovation strategies must be developed based on the current state of input attributes in the region. Thus, it is possible to develop appropriate strategies and targeted interventions to improve regional innovation performance. We demonstrate that support of entrepreneurship is an effective instrument of innovation policy. We also provide empirical support that both business and government R&D activity have a sigmoidal effect, implying that the most effective R&D support should be directed to regions with below-average and average R&D activity. We further show that the multi-output neural network outperforms traditional statistical and machine learning regression models. In general, therefore, it seems that the proposed model can effectively reflect both the multiple-output nature of innovation performance and the interdependency of the output attributes.
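
    The multi-output idea above can be sketched in a few lines: a single hidden layer shared by all output attributes, trained by full-batch gradient descent. This is an illustrative toy on synthetic data, not the authors' network, features, or survey data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for regional data: 6 input attributes (e.g. R&D
# activity, entrepreneurship support), 3 output innovation indicators.
X = rng.normal(size=(200, 6))
Y = np.tanh(X @ rng.normal(size=(6, 3))) + 0.1 * rng.normal(size=(200, 3))

n_hidden = 16
W1 = rng.normal(scale=0.5, size=(6, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, 3))

def forward(X):
    H = np.tanh(X @ W1)       # hidden layer shared by all outputs
    return H, H @ W2          # linear multi-output head

lr, losses = 0.01, []
for _ in range(500):
    H, Y_hat = forward(X)
    err = Y_hat - Y
    losses.append(float(np.mean(err ** 2)))
    # Plain full-batch backpropagation through the two weight matrices.
    grad_W2 = H.T @ err / len(X)
    grad_H = (err @ W2.T) * (1.0 - H ** 2)
    grad_W1 = X.T @ grad_H / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

    The shared hidden layer is what lets such a network exploit the interdependency of the output attributes that the study emphasizes.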

  8. Modeling the Afferent Dynamics of the Baroreflex Control System

    PubMed Central

    Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.

    2013-01-01

    In this study we develop a modeling framework for predicting the baroreceptor (BR) firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, uses blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model uses circumferential strain as an input, predicting receptor deformation as an output. Finally, the neural model takes receptor deformation as an input, predicting the BR firing rate as an output. Our results show that the nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the artery wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework in combination with sensitivity analysis and parameter estimation can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
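
    The integrate-and-fire behavior described above, where firing ceases once the stimulus falls below threshold, can be illustrated with a minimal leaky integrate-and-fire sketch; the constants and the square stimulus are illustrative, not the paper's calibrated model.

```python
import numpy as np

dt = 0.001            # time step, s
tau = 0.02            # membrane time constant, s
v_thresh, v_reset = 1.0, 0.0

def spike_times(stimulus):
    """Leaky integrate-and-fire neuron driven by a stimulus array."""
    v, spikes = 0.0, []
    for i, s in enumerate(stimulus):
        v += dt / tau * (-v + s)       # leaky integration
        if v >= v_thresh:
            spikes.append(i * dt)      # record spike time
            v = v_reset
    return spikes

t = np.arange(0.0, 2.0, dt)
# Square "receptor deformation" stimulus: step up at 0.5 s, down at 1.0 s.
square = np.where((t >= 0.5) & (t < 1.0), 3.0, 0.2)
spikes = spike_times(square)
# Firing occurs only while the stimulus is strong enough to drive the
# membrane past threshold; it ceases after the step ends.
```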

  9. Improving Snow Modeling by Assimilating Observational Data Collected by Citizen Scientists

    NASA Astrophysics Data System (ADS)

    Crumley, R. L.; Hill, D. F.; Arendt, A. A.; Wikstrom Jones, K.; Wolken, G. J.; Setiawan, L.

    2017-12-01

    Modeling seasonal snow pack in alpine environments includes a multiplicity of challenges caused by a lack of spatially extensive and temporally continuous observational datasets. This is partially due to the difficulty of collecting measurements in harsh, remote environments where extreme gradients in topography exist, accompanied by large model domains and inclement weather. Engaging snow enthusiasts, snow professionals, and community members to participate in the process of data collection may address some of these challenges. In this study, we use SnowModel to estimate seasonal snow water equivalence (SWE) in the Thompson Pass region of Alaska while incorporating snow depth measurements collected by citizen scientists. We develop a modeling approach to assimilate hundreds of snow depth measurements from participants in the Community Snow Observations (CSO) project (www.communitysnowobs.org). The CSO project includes a mobile application where participants record and submit geo-located snow depth measurements while working and recreating in the study area. These snow depth measurements are randomly located within the model grid at irregular time intervals over the span of four months in the 2017 water year. This snow depth observation dataset is converted into a SWE dataset by employing an empirically-based, bulk density and SWE estimation method. We then assimilate this data using SnowAssim, a sub-model within SnowModel, to constrain the SWE output by the observed data. Multiple model runs are designed to represent an array of output scenarios during the assimilation process. An effort to present model output uncertainties is included, as well as quantification of the pre- and post-assimilation divergence in modeled SWE. Early results reveal pre-assimilation SWE estimations are consistently greater than the post-assimilation estimations, and the magnitude of divergence increases throughout the snow pack evolution period. This research has implications beyond the Alaskan context because it increases our ability to constrain snow modeling outputs by making use of snow measurements collected by non-expert, citizen scientists.
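
    The bulk-density conversion from observed snow depth to SWE reduces to a one-line formula. The sketch below uses an assumed constant density rather than the CSO project's empirical estimation method.

```python
RHO_WATER = 1000.0  # kg/m^3

def swe_mm(depth_m, bulk_density_kg_m3):
    """SWE in mm of water: depth * (snow density / water density) * 1000."""
    return depth_m * bulk_density_kg_m3 / RHO_WATER * 1000.0

# Example: a 1.5 m snowpack at an assumed 300 kg/m^3 bulk density
# holds 450 mm of water equivalent.
example = swe_mm(1.5, 300.0)
```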

  10. Community models for wildlife impact assessment: a review of concepts and approaches

    USGS Publications Warehouse

    Schroeder, Richard L.

    1987-01-01

    The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat-related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with the data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.

  11. Real-Time Kennedy Space Center and Cape Canaveral Air Force Station High-Resolution Model Implementation and Verification

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn; Watson, Leela R.

    2015-01-01

    NASA's Launch Services Program, Ground Systems Development and Operations, Space Launch System and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). Examples include determining if they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 km Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the high-resolution WRF Environmental Modeling System (EMS) model configured by the AMU (Watson 2013) in real time. Implementing a real-time version of the ER WRF-EMS would generate a larger database of model output than in the previous AMU task for determining model performance, and allows the AMU more control over and access to the model output archive. The tasking group agreed to this proposal; therefore the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The AMU also calculated verification statistics to determine model performance compared to observational data. Finally, the AMU made the model output available on the AMU Advanced Weather Interactive Processing System II (AWIPS II) servers, which allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations (RWO) AWIPS II client computers and conduct real-time subjective analyses.

  12. Adaptation of time line analysis program to single pilot instrument flight research

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.; Shaughnessy, J. D.

    1978-01-01

    A data base was developed for SPIFR operation and the program was run. The outputs indicated that further work was necessary on the workload models. In particular, the workload model for the cognitive channel should be modified as the output workload appears to be too small. Included in the needed refinements are models to show the workload when in turbulence, when overshooting a radial or glideslope, and when copying air traffic control clearances.

  13. A Review of Models of the Human Temperature Regulation System.

    DTIC Science & Technology

    1992-02-18

    experiments to challenge the model and building new models, the researcher will benefit optimally from its use. It appears that this process has been...

  14. Fundamental Mechanisms of NeuroInformation Processing: Inverse Problems and Spike Processing

    DTIC Science & Technology

    2016-08-04

    platform called Neurokernel for collaborative development of comprehensive models of the brain of the fruit fly Drosophila melanogaster and their execution...example. We investigated the following nonlinear identification problem: given both the input signal u and the time sequence (t_k), k ∈ ℤ, at the output of...from a time sequence is to be contrasted with existing methods for rate-based models in neuroscience. In such models the output of the system is taken

  15. Fusion of spectral models for dynamic modeling of sEMG and skeletal muscle force.

    PubMed

    Potluri, Chandrasekhar; Anugolu, Madhavi; Chiu, Steve; Urfer, Alex; Schoen, Marco P; Naidu, D Subbaram

    2012-01-01

    In this paper, we present a method of combining spectral models using a Kullback Information Criterion (KIC) data fusion algorithm. Surface Electromyographic (sEMG) signals and their corresponding skeletal muscle force signals are acquired from three sensors and pre-processed using a Half-Gaussian filter and a Chebyshev Type-II filter, respectively. Spectral models - Spectral Analysis (SPA), Empirical Transfer Function Estimate (ETFE), Spectral Analysis with Frequency Dependent Resolution (SPFRD) - are extracted from sEMG signals as input and skeletal muscle force as output signal. These signals are then employed in a System Identification (SI) routine to establish the dynamic models relating the input and output. After the individual models are extracted, the models are fused by a probability based KIC fusion algorithm. The results show that the SPFRD spectral models perform better than SPA and ETFE models in modeling the frequency content of the sEMG/skeletal muscle force data.
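
    Probability-based information-criterion fusion of the kind described can be sketched generically: each model's criterion value (a stand-in for KIC here) is mapped to a normalized weight, and the competing predictions are averaged. The Akaike-style transform below is an assumption for illustration, not the paper's exact KIC formula.

```python
import numpy as np

def fusion_weights(criteria):
    """Map information-criterion values (lower = better) to weights."""
    c = np.asarray(criteria, dtype=float)
    delta = c - c.min()            # shift to avoid exp overflow
    w = np.exp(-0.5 * delta)       # Akaike-style relative likelihood
    return w / w.sum()

def fuse(predictions, criteria):
    """Probability-weighted average of the competing models' predictions."""
    w = fusion_weights(criteria)
    return np.tensordot(w, np.asarray(predictions, dtype=float), axes=1)

# Three hypothetical dynamic models predicting two force samples each.
preds = [[1.0, 2.0], [1.2, 1.8], [0.9, 2.1]]
kic = [10.0, 12.0, 30.0]           # hypothetical criterion values
w = fusion_weights(kic)
fused = fuse(preds, kic)           # dominated by the best (lowest-KIC) model
```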

  16. Study on optimization of the short-term operation of cascade hydropower stations by considering output error

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang

    2017-06-01

    Deterministic optimal reservoir operation can improve the utilization of water resources and help hydropower stations develop more reasonable power generation schedules. However, imprecise inflow forecasts may lead to output error and hinder implementation of power generation schedules. In this paper, the output error generated by the uncertainty of the forecast inflow was treated as a variable in a short-term reservoir optimal operation model designed to reduce operation risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to represent the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of the Yalong River Basin in China were selected as a case study to verify the model. According to the results, the model can derive schedules at different assurance rates, offering more flexible options for decision makers; the highest assurance rate reaches 99%, far above the 48% obtained without considering output error. In addition, the model can greatly improve power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. The model proposed in this paper can therefore significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.
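
    The Value at Risk concept used in the model can be made concrete with an empirical-quantile sketch: VaR at 95% confidence is the loss level exceeded by only 5% of outcomes. The paper fits an extreme value distribution; the toy below simply takes a quantile of synthetic shortfalls.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic distribution of generation shortfalls (MWh) caused by inflow
# forecast error; the gamma shape here is purely illustrative.
shortfall_mwh = rng.gamma(shape=2.0, scale=5.0, size=10_000)

def value_at_risk(losses, confidence=0.95):
    """Loss level exceeded with probability (1 - confidence)."""
    return float(np.quantile(losses, confidence))

var95 = value_at_risk(shortfall_mwh, 0.95)
# Roughly 5% of simulated schedules lose more than var95.
```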

  17. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    PubMed

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  18. Analytical solutions to trade-offs between size of protected areas and land-use intensity.

    PubMed

    Butsic, Van; Radeloff, Volker C; Kuemmerle, Tobias; Pidgeon, Anna M

    2012-10-01

    Land-use change is affecting Earth's capacity to support both wild species and a growing human population. The question is how best to manage landscapes for both species conservation and economic output. If large areas are protected to conserve species richness, then the unprotected areas must be used more intensively. Likewise, low-intensity use leaves less area protected but may allow wild species to persist in areas that are used for market purposes. This dilemma is present in policy debates on agriculture, housing, and forestry. Our goal was to develop a theoretical model to evaluate which land-use strategy maximizes economic output while maintaining species richness. Our theoretical model extends previous analytical models by allowing land-use intensity on unprotected land to influence species richness in protected areas. We devised general models in which species richness (with modified species-area curves) and economic output (a Cobb-Douglas production function) are a function of land-use intensity and the proportion of land protected. Economic output increased as land-use intensity and extent increased, and species richness responded to increased intensity either negatively or following the intermediate disturbance hypothesis. We solved the model analytically to identify the combination of land-use intensity and protected area that provided the maximum amount of economic output, given a target level of species richness. The land-use strategy that maximized economic output while maintaining species richness depended jointly on the response of species richness to land-use intensity and protection and the effect of land use outside protected areas on species richness within protected areas. Regardless of the land-use strategy, species richness tended to respond to changing land-use intensity and extent in a highly nonlinear fashion. ©2012 Society for Conservation Biology.
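
    A numerical sketch of the trade-off can be built from a Cobb-Douglas production function and a modified species-area curve, maximizing economic output subject to a species-richness target. All functional forms and constants below are illustrative stand-ins for the paper's analytical model.

```python
import numpy as np

def econ_output(p, i, alpha=0.6):
    """Cobb-Douglas output from unprotected area (1 - p) and intensity i."""
    return (1.0 - p) ** alpha * i ** (1.0 - alpha)

def richness(p, i, z=0.25, spillover=0.5):
    """Species-area curve on protected fraction p, degraded by land-use
    intensity on the surrounding unprotected land."""
    return p ** z * (1.0 - spillover * i)

target = 0.5   # required species richness (relative units)
# Grid search over protected fraction p and intensity i for the highest
# output that still meets the richness target.
best = max(
    ((econ_output(p, i), p, i)
     for p in np.linspace(0.01, 0.99, 99)
     for i in np.linspace(0.01, 0.99, 99)
     if richness(p, i) >= target),
    default=None,
)
# best = (max output, protected fraction, intensity) meeting the target
```

    As in the paper, the constrained optimum lies on the richness-constraint boundary, so the best strategy depends jointly on how richness responds to intensity and on the spillover of off-reserve land use into protected areas.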

  19. Identification and modeling of the electrohydraulic systems of the main gun of a main battle tank

    NASA Astrophysics Data System (ADS)

    Campos, Luiz C. A.; Menegaldo, Luciano L.

    2012-11-01

    The black-box mathematical models of the electrohydraulic systems responsible for driving the two degrees of freedom (elevation and azimuth) of the main gun of a main battle tank (MBT) were identified. Such systems respond to gunner's inputs while acquiring and tracking targets. Identification experiments were designed to collect simultaneous data from two inertial measurement units (IMU) installed at the gunner's handle (input) and at the center of rotation of the turret (output), for the identification of the azimuth system. For the elevation system, IMUs were installed at the gunner's handle (input) and at the breech of the gun (output). Linear accelerations and angular rates were collected for both input and output. Several black-box model architectures were investigated. As a result, a nonlinear autoregressive with exogenous variables (NARX) second-order model and a nonlinear finite impulse response (NFIR) fourth-order model were found to best fit the experimental data, with low computational costs. The derived models are being employed in a broader research effort aiming to reproduce such systems in a laboratory virtual main gun simulator.

  20. Emulation and Sobol' sensitivity analysis of an atmospheric dispersion model applied to the Fukushima nuclear accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Mallet, Vivien; Korsakissok, Irène; Mathieu, Anne

    2016-04-01

    Simulations of the atmospheric dispersion of radionuclides involve large uncertainties originating from the limited knowledge of meteorological input data, composition, amount and timing of emissions, and some model parameters. The estimation of these uncertainties is an essential complement to modeling for decision making in case of an accidental release. We have studied the relative influence of a set of uncertain inputs on several outputs from the Eulerian model Polyphemus/Polair3D on the Fukushima case. We chose to use the variance-based sensitivity analysis method of Sobol'. This method requires a large number of model evaluations which was not achievable directly due to the high computational cost of Polyphemus/Polair3D. To circumvent this issue, we built a mathematical approximation of the model using Gaussian process emulation. We observed that aggregated outputs are mainly driven by the amount of emitted radionuclides, while local outputs are mostly sensitive to wind perturbations. The release height is notably influential, but only in the vicinity of the source. Finally, averaging either spatially or temporally tends to cancel out interactions between uncertain inputs.
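
    The first-order Sobol' indices at the heart of the method can be estimated with the standard pick-freeze Monte Carlo scheme. The toy model below stands in for the Gaussian-process emulator, with input 0 playing the role of the dominant emitted amount; it is an illustration, not the study's dispersion model.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy stand-in for the emulated dispersion model: output dominated by
    # input 0 (playing the role of the emitted amount).
    return 5.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2] ** 2

n, d = 50_000, 3
A = rng.uniform(size=(n, d))   # two independent input sample matrices
B = rng.uniform(size=(n, d))

def first_order_index(j):
    """Saltelli pick-freeze estimator of the first-order Sobol' index."""
    ABj = A.copy()
    ABj[:, j] = B[:, j]        # only input j differs between A and ABj
    fA, fB, fABj = model(A), model(B), model(ABj)
    return float(np.mean(fB * (fABj - fA)) / np.var(fA))

s = [first_order_index(j) for j in range(d)]
# s[0] dominates, mirroring how aggregated outputs in the study were
# driven mainly by the emitted amount.
```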

  1. Reactive Power Pricing Model Considering the Randomness of Wind Power Output

    NASA Astrophysics Data System (ADS)

    Dai, Zhong; Wu, Zhou

    2018-01-01

    As more wind power capacity is integrated into the grid, the randomness of wind power output increasingly affects the grid's reactive power distribution. Meanwhile, power market reform places higher requirements on the reasonable pricing of reactive power service. On this basis, the article combines an optimal power flow model that accounts for wind power randomness with an integrated cost allocation method to price reactive power. Considering the advantages and disadvantages of present cost allocation methods and of marginal cost pricing, an integrated cost allocation method based on optimal power flow tracing is proposed. The model achieves the optimal reactive power flow distribution with minimal integrated cost under wind power integration, while guaranteeing the balance of reactive power pricing. Finally, through multi-scenario calculation examples and stochastic simulation of wind power outputs, the article compares the results of the model pricing with marginal cost pricing, demonstrating that the model is accurate and effective.

  2. A Model of Self-Organizing Head-Centered Visual Responses in Primate Parietal Areas

    PubMed Central

    Mender, Bedeho M. W.; Stringer, Simon M.

    2013-01-01

    We present a hypothesis for how head-centered visual representations in primate parietal areas could self-organize through visually-guided learning, and test this hypothesis using a neural network model. The model consists of a competitive output layer of neurons that receives afferent synaptic connections from a population of input neurons with eye position gain modulated retinal receptive fields. The synaptic connections in the model are trained with an associative trace learning rule which has the effect of encouraging output neurons to learn to respond to subsets of input patterns that tend to occur close together in time. This network architecture and synaptic learning rule is hypothesized to promote the development of head-centered output neurons during periods of time when the head remains fixed while the eyes move. This hypothesis is demonstrated to be feasible, and each of the core model components described is tested and found to be individually necessary for successful self-organization. PMID:24349064
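
    An associative trace learning rule of the general kind described can be sketched as a Hebbian update gated by a low-pass-filtered (trace) output activity, so inputs occurring close together in time strengthen onto the same output neurons. The network sizes, constants, and random inputs below are illustrative, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 20, 5
W = rng.uniform(size=(n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-norm rows

eta, decay = 0.05, 0.8
trace = np.zeros(n_out)
for _ in range(200):
    x = rng.uniform(size=n_in)                  # afferent input pattern
    y = W @ x                                   # feedforward activation
    trace = decay * trace + (1 - decay) * y     # temporal trace of output
    W += eta * np.outer(trace, x)               # Hebbian update with trace
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
```

    The trace term is what encourages an output neuron to respond to input patterns that tend to occur close together in time, e.g. the same head-centred location seen at different eye positions.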

  3. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Investigating Anomalies in the Output Generated by the Weather Research and Forecasting (WRF) Model

    NASA Astrophysics Data System (ADS)

    Decicco, Nicholas; Trout, Joseph; Manson, J. Russell; Rios, Manny; King, David

    2015-04-01

    The Weather Research and Forecasting (WRF) model is an advanced mesoscale numerical weather prediction (NWP) model comprised of two numerical cores, the Numerical Mesoscale Modeling (NMM) core and the Advanced Research WRF (ARW) core. An investigation was done to determine the source of erroneous output generated by the NMM core. Of particular concern were the appearance of zero values at regularly spaced grid cells in output fields and the NMM core's evident (mis)use of static geographic information at a resolution lower than the nesting level for which the core performs computation. A brief discussion of the high-level modular architecture of the model is presented, as well as the methods utilized to identify the cause of these problems. Presented here are the initial results from a research grant, ``A Pilot Project to Investigate Wake Vortex Patterns and Weather Patterns at the Atlantic City Airport by the Richard Stockton College of NJ and the FAA''.

  5. Evaluation of large-eddy simulations forced with mesoscale model output for a multi-week period during a measurement campaign

    NASA Astrophysics Data System (ADS)

    Heinze, Rieke; Moseley, Christopher; Böske, Lennart Nils; Muppa, Shravan Kumar; Maurer, Vera; Raasch, Siegfried; Stevens, Bjorn

    2017-06-01

    Large-eddy simulations (LESs) of a multi-week period during the HD(CP)2 (High-Definition Clouds and Precipitation for advancing Climate Prediction) Observational Prototype Experiment (HOPE) conducted in Germany are evaluated with respect to mean boundary layer quantities and turbulence statistics. Two LES models are used in a semi-idealized setup through forcing with mesoscale model output to account for the synoptic-scale conditions. Evaluation is performed based on the HOPE observations. The mean boundary layer characteristics like the boundary layer depth are in a principal agreement with observations. Simulating shallow-cumulus layers in agreement with the measurements poses a challenge for both LES models. Variance profiles agree satisfactorily with lidar measurements. The results depend on how the forcing data stemming from mesoscale model output are constructed. The mean boundary layer characteristics become less sensitive if the averaging domain for the forcing is large enough to filter out mesoscale fluctuations.

  6. Smad Signaling Dynamics: Insights from a Parsimonious Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wiley, H. S.; Shankaran, Harish

    2008-09-09

    The molecular mechanisms that transmit information from cell surface receptors to the nucleus are exceedingly complex; thus, much effort has been expended in developing computational models to understand these processes. A recent study on modeling the nuclear-cytoplasmic shuttling of Smad2-Smad4 complexes in response to transforming growth factor β (TGF-β) receptor activation has provided substantial insight into how this signaling network translates the degree of TGF-β receptor activation (input) into the amount of nuclear Smad2-Smad4 complexes (output). The study addressed this question by combining a simple, mechanistic model with targeted experiments, an approach that proved particularly powerful for exploring the fundamental properties of a complex signaling network. The mathematical model revealed that Smad nuclear-cytoplasmic dynamics enables a proportional, but time-delayed coupling between the input and the output. As a result, the output can faithfully track gradual changes in the input, while the rapid input fluctuations that constitute signaling noise are dampened out.

  7. EVALUATING THE USE OF OUTPUTS FROM COMPREHENSIVE METEOROLOGICAL MODELS IN AIR QUALITY MODELING APPLICATIONS

    EPA Science Inventory

    Currently used dispersion models, such as the AMS/EPA Regulatory Model (AERMOD), process routinely available meteorological observations to construct model inputs. Thus, model estimates of concentrations depend on the availability and quality of Meteorological observations, as we...

  8. Adaptive model reduction for continuous systems via recursive rational interpolation

    NASA Technical Reports Server (NTRS)

    Lilly, John H.

    1994-01-01

    A method for adaptive identification of reduced-order models for continuous stable SISO and MIMO plants is presented. The method recursively finds a model whose transfer function (matrix) matches that of the plant on a set of frequencies chosen by the designer. The algorithm utilizes the Moving Discrete Fourier Transform (MDFT) to continuously monitor the frequency-domain profile of the system input and output signals. The MDFT is an efficient method of monitoring discrete points in the frequency domain of an evolving function of time. The model parameters are estimated from MDFT data using standard recursive parameter estimation techniques. The algorithm has been shown in simulations to be quite robust to additive noise in the inputs and outputs. A significant advantage of the method is that it enables a type of on-line model validation. This is accomplished by simultaneously identifying a number of models and comparing each with the plant in the frequency domain. Simulations of the method applied to an 8th-order SISO plant and a 10-state 2-input 2-output plant are presented. An example of on-line model validation applied to the SISO plant is also presented.
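
    The Moving Discrete Fourier Transform can be implemented as the standard sliding-DFT recurrence, which updates a single monitored frequency bin per incoming sample instead of recomputing a full transform. This sketch assumes the textbook recurrence rather than the paper's exact implementation.

```python
import numpy as np

N = 64                                # analysis window length
k = 5                                 # monitored frequency bin
twiddle = np.exp(2j * np.pi * k / N)  # per-sample phase rotation

def sliding_dft(signal):
    """Return the k-th DFT bin of each length-N window of `signal`."""
    window = np.zeros(N)              # oldest sample at index 0
    Xk = 0.0 + 0.0j
    bins = []
    for x in signal:
        # Recurrence: X_k(n) = (X_k(n-1) + x[n] - x[n-N]) * e^{j 2 pi k / N}
        Xk = (Xk + x - window[0]) * twiddle
        window = np.roll(window, -1)
        window[-1] = x
        bins.append(Xk)
    return bins

rng = np.random.default_rng(7)
sig = rng.normal(size=300)
bins = sliding_dft(sig)
# Once the window is full, the recursive value matches a direct DFT of
# the current window.
direct = np.fft.fft(sig[-N:])[k]
```

    Monitoring only the designer-chosen frequencies this way is what makes continuous frequency-domain tracking of the input and output signals cheap enough for on-line use.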

  9. The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter

    2018-02-01

    We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate built from 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). The relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.

  10. Equivalent Sensor Radiance Generation and Remote Sensing from Model Parameters. Part 1; Equivalent Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.

    2013-01-01

    In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.

  11. Multi-sensor Cloud Retrieval Simulator and Remote Sensing from Model Parameters. Pt. 1; Synthetic Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, G.; DaSilva, A. M.; Norris, P. M.; Platnick, S.

    2013-01-01

    In this paper we describe a general procedure for calculating synthetic sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The simulated sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds because they are very important to model development and improvement.

  12. A User-Friendly Model for Spray Drying to Aid Pharmaceutical Product Development

    PubMed Central

    Grasmeijer, Niels; de Waard, Hans; Hinrichs, Wouter L. J.; Frijlink, Henderik W.

    2013-01-01

    The aim of this study was to develop a user-friendly model for spray drying that can aid in the development of a pharmaceutical product, by shifting from a trial-and-error towards a quality-by-design approach. To achieve this, a spray dryer model was developed in commercial and open source spreadsheet software. The output of the model was first fitted to the experimental output of a Büchi B-290 spray dryer and subsequently validated. The predicted outlet temperatures of the spray dryer model matched the experimental values very well over the entire range of spray dryer settings that were tested. Finally, the model was applied to produce glassy sugars, often-used excipients in formulations of biopharmaceuticals, by spray drying. For the production of glassy sugars, the model was extended to predict the relative humidity at the outlet, which is not measured in the spray dryer by default. This extended model was then successfully used to predict whether specific settings were suitable for producing glassy trehalose and inulin by spray drying. In conclusion, a spray dryer model was developed that is able to predict the output parameters of the spray drying process. The model can aid the development of spray dried pharmaceutical products by shifting from a trial-and-error towards a quality-by-design approach. PMID:24040240
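The outlet-temperature prediction at the heart of such a model can be illustrated with a drastically simplified adiabatic energy balance (not the published model; all parameter values below are hypothetical): the drying air supplies the heat that evaporates the feed water:

```python
# Simplified, illustrative spray-dryer energy balance (NOT the published
# model): the sensible heat of the drying air evaporates the feed water,
# so T_out = T_in - m_water * h_vap / (m_air * cp_air).
def outlet_temperature(t_in_c, air_flow_kg_s, feed_water_kg_s,
                       cp_air=1006.0,    # J/(kg K), dry air
                       h_vap=2.26e6):    # J/kg, latent heat of water
    return t_in_c - feed_water_kg_s * h_vap / (air_flow_kg_s * cp_air)

# Hypothetical lab-scale settings: 150 C inlet, 10 g/s air, 0.2 g/s water.
t_out = outlet_temperature(150.0, 0.01, 2.0e-4)
```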

  13. Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre

    2009-01-01

    The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which interested scientists can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them into the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3,000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general-purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.

  14. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    PubMed

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
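The core history-matching step can be sketched as follows (a generic implausibility criterion under assumed variance terms, not the paper's exact formulation): an input is ruled out when the emulator mean lies too many standard deviations from the observation, with the emulated (input-dependent) variance entering the denominator:

```python
def implausibility(z, mean_f, var_obs, var_disc, var_em):
    """History-matching implausibility: distance between observation z
    and the emulator mean, scaled by all variance sources."""
    return abs(z - mean_f) / (var_obs + var_disc + var_em) ** 0.5

def non_implausible(inputs, emulator, z, var_obs, var_disc, cutoff=3.0):
    """Keep inputs whose implausibility is below the cutoff.
    `emulator(x)` returns (mean, variance); here the variance itself
    depends on the input, as in the stochastic extension above."""
    keep = []
    for x in inputs:
        m, v = emulator(x)
        if implausibility(z, m, var_obs, var_disc, v) <= cutoff:
            keep.append(x)
    return keep

# Toy emulator of f(x) = x^2 with input-dependent variance (illustrative).
toy = lambda x: (x * x, 0.01 + 0.1 * abs(x))
kept = non_implausible([0.5, 1.0, 2.0, 5.0], toy, z=1.0,
                       var_obs=0.05, var_disc=0.05)
```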

  15. A Taxonomy-Based Approach to Shed Light on the Babel of Mathematical Models for Rice Simulation

    NASA Technical Reports Server (NTRS)

    Confalonieri, Roberto; Bregaglio, Simone; Adam, Myriam; Ruget, Francoise; Li, Tao; Hasegawa, Toshihiro; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Buis, Samuel

    2016-01-01

    For most biophysical domains, differences in model structures are seldom quantified. Here, we used a taxonomy-based approach to characterise thirteen rice models. Classification keys and binary attributes for each key were identified, and models were categorised into five clusters using a binary similarity measure and the unweighted pair-group method with arithmetic mean. Principal component analysis was performed on model outputs at four sites. Results indicated that (i) differences in structure often resulted in similar predictions and (ii) similar structures can lead to large differences in model outputs. User subjectivity during calibration may have hidden expected relationships between model structure and behaviour. This explanation, if confirmed, highlights the need for shared protocols to reduce the degrees of freedom during calibration, and to limit, in turn, the risk that user subjectivity influences model performance.
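The classification machinery described above, binary attributes compared with a similarity measure and clustered by the unweighted pair-group method with arithmetic mean (UPGMA), can be sketched as follows; the three toy models and their four attributes are invented for illustration:

```python
def smc(a, b):
    """Simple matching similarity between two binary attribute vectors."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def upgma(models):
    """Cluster models (name -> binary attribute tuple) by unweighted
    pair-group averaging of the simple-matching distance; returns the
    merge order as (name_a, name_b, distance) tuples."""
    d = lambda m, n: 1.0 - smc(models[m], models[n])
    clusters = {name: [name] for name in models}
    names = list(models)
    dist = {frozenset((a, b)): d(a, b)
            for i, a in enumerate(names) for b in names[i + 1:]}
    merges = []
    while len(clusters) > 1:
        pair = min(dist, key=dist.get)       # closest pair of clusters
        a, b = sorted(pair)
        merges.append((a, b, dist[pair]))
        members = clusters.pop(a) + clusters.pop(b)
        merged = a + "+" + b
        dist = {k: v for k, v in dist.items() if a not in k and b not in k}
        for c in clusters:                   # unweighted average distance
            avg = sum(d(m, n) for m in members for n in clusters[c])
            dist[frozenset((merged, c))] = avg / (len(members) * len(clusters[c]))
        clusters[merged] = members
    return merges

# Toy taxonomy: A and B share most attributes; C differs on all of them.
models = {"A": (1, 1, 0, 0), "B": (1, 1, 0, 1), "C": (0, 0, 1, 1)}
merges = upgma(models)
```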

  16. A nonlinear autoregressive Volterra model of the Hodgkin-Huxley equations.

    PubMed

    Eikenberry, Steffen E; Marmarelis, Vasilis Z

    2013-02-01

    We propose a new variant of the Volterra-type model with a nonlinear autoregressive (NAR) component that provides a suitable framework for describing the process of AP generation by the neuron membrane potential, and we apply it to input-output data generated by the Hodgkin-Huxley (H-H) equations. Volterra models use a functional series expansion to describe the input-output relation for most nonlinear dynamic systems, and are applicable to a wide range of physiologic systems. It is difficult, however, to apply the Volterra methodology to the H-H model because it is characterized by distinct subthreshold and suprathreshold dynamics. When threshold is crossed, an autonomous action potential (AP) is generated, the output becomes temporarily decoupled from the input, and the standard Volterra model fails. Therefore, in our framework, whenever membrane potential exceeds some threshold, it is taken as a second input to a dual-input Volterra model. This model correctly predicts membrane voltage deflection both within the subthreshold region and during APs. Moreover, the model naturally generates a post-AP afterpotential and refractory period. It is known that the H-H model converges to a limit cycle in response to a constant current injection. This behavior is correctly predicted by the proposed model, while the standard Volterra model is incapable of generating such limit cycle behavior. The inclusion of cross-kernels, which describe the nonlinear interactions between the exogenous and autoregressive inputs, is found to be absolutely necessary. The proposed model is general, non-parametric, and data-derived.

  17. Applications of Mars Global Reference Atmospheric Model (Mars-GRAM 2005) Supporting Mission Site Selection for Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.

    2008-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. One new feature of Mars-GRAM 2005 is the 'auxiliary profile' option. In this option, an input file of temperature and density versus altitude is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) and a global Thermal Emission Spectrometer (TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude-longitude bins and 15 degree L(s) bins, for each of three Mars years of TES nadir data. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate Mars Science Laboratory (MSL) landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  18. Determining Reduced Order Models for Optimal Stochastic Reduced Order Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonney, Matthew S.; Brake, Matthew R.W.

    2015-08-01

    The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against the other models, along with the truth model, for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction having the least accurate results. The models are also compared based on time requirements for the evaluation of each model, where the Meta-Model requires the least amount of time for computation by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use is dependent on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined by using a large Monte-Carlo simulation along with a reduced simulation using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results. The stochastic reduced order modeling technique produced less error when compared to an exhaustive sampling for the majority of methods.
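The hyper-dual sensitivity idea can be illustrated with its first-order cousin, dual numbers, which deliver exact first derivatives in a single function evaluation (hyper-dual numbers extend the same trick to exact second derivatives); the response function below is a made-up example:

```python
class Dual:
    """First-order dual number a + b*eps with eps^2 = 0: evaluating
    f(Dual(x, 1)) yields f(x) in .val and the exact derivative f'(x)
    in .eps, with no finite-difference truncation error."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.eps + self.eps * o.val)  # product rule
    __rmul__ = __mul__

def f(x):              # hypothetical model response: f(x) = 3x^2 + 2x
    return 3 * x * x + 2 * x

out = f(Dual(4.0, 1.0))   # out.val = f(4) = 56, out.eps = f'(4) = 26
```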

  19. Closed-loop model identification of cooperative manipulators holding deformable objects

    NASA Astrophysics Data System (ADS)

    Alkathiri, A. A.; Akmeliawati, R.; Azlan, N. Z.

    2017-11-01

    This paper presents system identification to obtain the closed-loop models of a couple of cooperative manipulators in a system, which function to hold deformable objects. The system works using the master-slave principle. In other words, one of the manipulators is position-controlled through encoder feedback, while a force sensor gives feedback to the other force-controlled manipulator. Using the closed-loop input and output data, the closed-loop models, which are useful for model-based control design, are estimated. The criteria for model validation are a 95% fit between the measured and simulated output of the estimated models and residual analysis. The results show that for both position and force control respectively, the fits are 95.73% and 95.88%.
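A "95% fit" criterion of this kind is commonly computed as a normalized-error percentage (assumed here; the abstract does not give its formula): 100 * (1 - ||y - yhat|| / ||y - mean(y)||), which is 100 for a perfect match and drops as the simulated output diverges from the measurement:

```python
def fit_percent(measured, simulated):
    """Normalized fit percentage commonly reported in system
    identification: 100 * (1 - ||y - yhat|| / ||y - mean(y)||)."""
    mean = sum(measured) / len(measured)
    num = sum((y - yh) ** 2 for y, yh in zip(measured, simulated)) ** 0.5
    den = sum((y - mean) ** 2 for y in measured) ** 0.5
    return 100.0 * (1.0 - num / den)

perfect = fit_percent([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
offset = fit_percent([1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 3.1, 4.1])
```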

  20. Lagrangian model of nitrogen kinetics in the Chattahoochee river

    USGS Publications Warehouse

    Jobson, H.E.

    1987-01-01

    A Lagrangian reference frame is used to solve the convection-dispersion equation and interpret water-quality data obtained from the Chattahoochee River. The model was calibrated using unsteady concentrations of organic nitrogen, ammonia, and nitrite plus nitrate obtained during June 1977 and verified using data obtained during August 1976. Reaction kinetics of the cascade type are shown to provide a reasonable description of the nitrogen-species processes in the Chattahoochee River. The conceptual model is easy to visualize in the physical sense and the output includes information that is not easily determined from an Eulerian approach, but which is very helpful in model calibration and data interpretation. For example, the model output allows one to determine which data are of most value in model calibration or verification.
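Cascade-type reaction kinetics of the kind calibrated here reduce to a chain of first-order reactions; a minimal sketch with hypothetical rate constants, integrated by forward Euler as a water parcel is followed downstream:

```python
# Illustrative first-order cascade (organic N -> ammonia -> nitrite+nitrate),
# the kind of reaction kinetics described above; rate constants are
# hypothetical, and integration is forward Euler in a parcel-following
# (Lagrangian) frame.
def cascade(n_org, n_nh3, n_nox, k1=0.3, k2=0.5, dt=0.01, steps=1000):
    for _ in range(steps):
        r1 = k1 * n_org          # organic N -> ammonia
        r2 = k2 * n_nh3          # ammonia -> nitrite + nitrate
        n_org += -r1 * dt
        n_nh3 += (r1 - r2) * dt
        n_nox += r2 * dt
    return n_org, n_nh3, n_nox

final = cascade(1.0, 0.0, 0.0)   # follow a parcel for 10 time units
```

Total nitrogen is conserved along the parcel, while mass shifts down the cascade, which is the behavior the calibrated model reproduces.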

  1. From Single-Cell Dynamics to Scaling Laws in Oncology

    NASA Astrophysics Data System (ADS)

    Chignola, Roberto; Sega, Michela; Stella, Sabrina; Vyshemirsky, Vladislav; Milotti, Edoardo

    We are developing a biophysical model of tumor biology. We follow a strictly quantitative approach where each step of model development is validated by comparing simulation outputs with experimental data. While this strategy may slow down our advancements, at the same time it provides an invaluable reward: we can trust simulation outputs and use the model to explore territories of cancer biology where current experimental techniques fail. Here, we review our multi-scale biophysical modeling approach and show how a description of cancer at the cellular level has led us to general laws obeyed by both in vitro and in vivo tumors.

  2. Black Carbon Concentration from Worldwide Aerosol Robotic Network (AERONET) Measurements

    NASA Technical Reports Server (NTRS)

    Schuster, Gregory L.; Dubovik, Oleg; Holben, Brent N.; Clothiaux, Eugene E.

    2006-01-01

    The carbon emissions inventories used to initialize transport models and general circulation models are highly parameterized, and created on the basis of multiple sparse datasets (such as fuel use inventories and emission factors). The resulting inventories are uncertain by at least a factor of 2, and this uncertainty is carried forward to the model output. [Bond et al., 1998, Bond et al., 2004, Cooke et al., 1999, Streets et al., 2001] Worldwide black carbon concentration measurements are needed to assess the efficacy of the carbon emissions inventory and transport model output on a continuous basis.

  3. Energy: Economic activity and energy demand; link to energy flow. Example: France

    NASA Astrophysics Data System (ADS)

    1980-10-01

    The data derived from EXPLOR and the Energy Flow Optimization Model (EPOM) are described. The core of the EXPLOR model is a circular system of relations involving consumers' demand, producers' outputs, and market prices. The solution of this system of relations is obtained by successive iterations; the final output is a coherent system of economic accounts. The computer program for this transition is described. The work conducted by comparing different energy demand models is summarized. The procedure is illustrated by a numerical projection to 1980 and 1985 using the existing version of the EXPLOR France model.

  4. System parameter identification from projection of inverse analysis

    NASA Astrophysics Data System (ADS)

    Liu, K.; Law, S. S.; Zhu, X. Q.

    2017-05-01

    The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observed output data and the corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and with dynamic experiments on a seven-storey planar steel frame. Results show that it is robust to measurement noise, and that the location and extent of stiffness perturbation can be identified with better accuracy than with the conventional response sensitivity-based method.
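Before the PCA projection refinement, the inverse-sensitivity step amounts to a first-order least-squares solve of dy ≈ S dp; a minimal sketch with an invented 3-output, 2-parameter system, solved through the normal equations:

```python
# Minimal sketch of the inverse-sensitivity step (without the PCA
# projection): estimate the parameter perturbation dp from the output
# residual dy via the first-order relation dy ~ S dp, solved in the
# least-squares sense through the normal equations S^T S dp = S^T dy.
def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def inverse_sensitivity(S, dy):
    """Least-squares dp for an m x 2 sensitivity matrix S."""
    m = len(S)
    StS = [[sum(S[k][i] * S[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    Stdy = [sum(S[k][i] * dy[k] for k in range(m)) for i in range(2)]
    return solve2(StS, Stdy)

# Three measured outputs, two parameters; dy is generated from a known
# perturbation dp_true = (0.1, -0.2), which the solve should recover.
S = [[1.0, 0.5], [0.0, 2.0], [1.5, 1.0]]
dp_true = [0.1, -0.2]
dy = [sum(S[k][j] * dp_true[j] for j in range(2)) for k in range(3)]
dp = inverse_sensitivity(S, dy)
```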

  5. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...

    2016-03-30

    Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reach). These parameters are quantified through Monte Carlo Simulation (MCS) linking with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) Model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. Hydraulic head is more sensitive to output parameters in steep terrain than in flat and mild terrains. Furthermore, mean annual streamflow is more sensitive to output parameters in flat terrain.
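The MCS linkage can be sketched generically (this is not the GMM-HRA model itself): perturb head and streamflow by assumed relative uncertainties and observe the spread of the hydrostatic power P = rho * g * Q * H; all numbers below are illustrative:

```python
import random

# Monte Carlo propagation sketch (illustrative, not GMM-HRA): perturb
# hydraulic head H and streamflow Q by assumed relative uncertainties
# and observe the spread of potential power P = rho * g * Q * H.
def mcs_power(h_nom=10.0, q_nom=5.0, h_unc=0.20, q_unc=0.16,
              n=20000, seed=1):
    rng = random.Random(seed)
    rho_g = 1000.0 * 9.81            # water density times gravity
    powers = []
    for _ in range(n):
        h = h_nom * (1.0 + rng.uniform(-h_unc, h_unc))
        q = q_nom * (1.0 + rng.uniform(-q_unc, q_unc))
        powers.append(rho_g * q * h)
    mean = sum(powers) / n
    var = sum((p - mean) ** 2 for p in powers) / n
    return mean, var ** 0.5 / mean   # mean power, relative spread

mean_p, rel_sd = mcs_power()
```

With independent uniform perturbations, the output's relative spread is roughly the root-sum-square of the input spreads divided by sqrt(3), consistent with the report's observation that output uncertainty is a fraction of input uncertainty.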

  6. Output Feedback Adaptive Control of Non-Minimum Phase Systems Using Optimal Control Modification

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan

    2018-01-01

    This paper describes output feedback adaptive control approaches for non-minimum phase SISO systems with relative degree 1 and non-strictly positive real (SPR) MIMO systems with uniform relative degree 1 using the optimal control modification method. It is well-known that the standard model-reference adaptive control (MRAC) cannot be used to control non-SPR plants to track an ideal SPR reference model. Due to the ideal property of asymptotic tracking, MRAC attempts an unstable pole-zero cancellation which results in unbounded signals for non-minimum phase SISO systems. The optimal control modification can be used to prevent the unstable pole-zero cancellation which results in a stable adaptation of non-minimum phase SISO systems. However, the tracking performance using this approach could suffer if the unstable zero is located far away from the imaginary axis. The tracking performance can be recovered by using an observer-based output feedback adaptive control approach which uses a Luenberger observer design to estimate the state information of the plant. Instead of explicitly specifying an ideal SPR reference model, the reference model is established from the linear quadratic optimal control to account for the non-minimum phase behavior of the plant. With this non-minimum phase reference model, the observer-based output feedback adaptive control can maintain stability as well as tracking performance. However, in the presence of the mismatch between the SPR reference model and the non-minimum phase plant, the standard MRAC results in unbounded signals, whereas a stable adaptation can be achieved with the optimal control modification. An application of output feedback adaptive control for a flexible wing aircraft illustrates the approaches.

  7. User's Guide for Monthly Vector Wind Profile Model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1999-01-01

    The background, theoretical concepts, and methodology for construction of vector wind profiles based on a statistical model are presented. The derived monthly vector wind profiles are to be applied by the launch vehicle design community for establishing realistic estimates of critical vehicle design parameter dispersions related to wind profile dispersions. During initial studies a number of months are used to establish the model profiles that produce the largest monthly dispersions of ascent vehicle aerodynamic load indicators. The largest monthly dispersions for wind, which occur during the winter high-wind months, are used for establishing the design reference dispersions for the aerodynamic load indicators. This document includes a description of the computational process for the vector wind model including specification of input data, parameter settings, and output data formats. Sample output data listings are provided to aid the user in the verification of test output.

  8. Interval Predictor Models for Data with Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Lacerda, Marcio J.; Crespo, Luis G.

    2017-01-01

    An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.

  9. Integrated Geothermal-CO2 Storage Reservoirs: FY1 Final Report

    DOE Data Explorer

    Buscheck, Thomas A.

    2012-01-01

    The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure to reduce the risks associated with overpressure, such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to the sections of the FY1 Final Report to which they pertain. The recipient should save the file Reservoir-models-inputs-outputs-index.html in the same directory as the Section2.1.*.tar.gz files.

  10. System capacity and economic modeling computer tool for satellite mobile communications systems

    NASA Technical Reports Server (NTRS)

    Wiedeman, Robert A.; Wen, Doong; Mccracken, Albert G.

    1988-01-01

    A unique computer modeling tool that combines an engineering tool with a financial analysis program is described. The resulting combination yields a flexible economic model that can predict the cost effectiveness of various mobile systems. Cost modeling is necessary in order to ascertain if a given system with a finite satellite resource is capable of supporting itself financially and to determine what services can be supported. Personal computer techniques using Lotus 123 are used for the model in order to provide as universal an application as possible such that the model can be used and modified to fit many situations and conditions. The output of the engineering portion of the model consists of a channel capacity analysis and link calculations for several qualities of service using up to 16 types of earth terminal configurations. The outputs of the financial model are a revenue analysis, an income statement, and a cost model validation section.

  11. A framework for performance measurement in university using extended network data envelopment analysis (DEA) structures

    NASA Astrophysics Data System (ADS)

    Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd

    2015-12-01

    Measuring university performance is essential for efficient allocation and utilization of educational resources. In most previous studies, performance measurement in universities emphasized operational efficiency and resource utilization without investigating the university's ability to fulfill the needs of its stakeholders and society. Therefore, assessment of university performance should be separated into two stages, namely efficiency and effectiveness. In conventional DEA analysis, a decision making unit (DMU), or in this context a university, is generally treated as a black box, which ignores the operation and interdependence of the internal processes. When this happens, the results obtained can be misleading. Thus, this paper suggests an alternative framework for measuring the overall performance of a university that incorporates both efficiency and effectiveness and applies a network DEA model. The network DEA models are recommended because this approach takes into account the interrelationship between the processes of efficiency and effectiveness in the system. This framework also focuses on the university structure, which is expanded from a hierarchy to a series of horizontal relationships between subordinate units by assuming that both an intermediate unit and its subordinate units can generate output(s). Three conceptual models are proposed to evaluate the performance of a university. An efficiency model is developed at the first stage by using a hierarchical network model. It is followed by an effectiveness model, which takes output(s) from the hierarchical structure at the first stage as input(s) at the second stage. As a result, a new overall performance model is proposed by combining the efficiency and effectiveness models. Once this overall model is realized and utilized, the university's top management can determine the overall performance of each unit more accurately and systematically. Moreover, the results from the network DEA model give superior benchmarking power over the conventional models.

  12. AirSWOT observations versus hydrodynamic model outputs of water surface elevation and slope in a multichannel river

    NASA Astrophysics Data System (ADS)

    Altenau, Elizabeth H.; Pavelsky, Tamlin M.; Moller, Delwyn; Lion, Christine; Pitcher, Lincoln H.; Allen, George H.; Bates, Paul D.; Calmant, Stéphane; Durand, Michael; Neal, Jeffrey C.; Smith, Laurence C.

    2017-04-01

    Anabranching rivers make up a large proportion of the world's major rivers, but quantifying their flow dynamics is challenging due to their complex morphologies. Traditional in situ measurements of water levels collected at gauge stations cannot capture out-of-bank flows and are limited to defined cross sections, which presents an incomplete picture of water fluctuations in multichannel systems. Similarly, current remotely sensed measurements of water surface elevations (WSEs) and slopes are constrained by resolutions and accuracies that limit the visibility of surface waters at global scales. Here, we present new measurements of river WSE and slope along the Tanana River, AK, acquired from AirSWOT, an airborne analogue to the Surface Water and Ocean Topography (SWOT) mission. Additionally, we compare the AirSWOT observations to hydrodynamic model outputs of WSE and slope simulated across the same study area. Results indicate that AirSWOT errors are significantly lower than those of the model outputs. When compared to field measurements, RMSE for AirSWOT measurements of WSEs is 9.0 cm when averaged over 1 km² areas, and 1.0 cm/km for slopes along 10 km reaches. Also, AirSWOT can accurately reproduce the spatial variations in slope critical for characterizing reach-scale hydraulics, while model outputs of spatial variations in slope are very poor. Combining AirSWOT and future SWOT measurements with hydrodynamic models can result in major improvements in model simulations at local to global scales. Scientists can use AirSWOT measurements to constrain model parameters over long reach distances, improve understanding of the physical processes controlling the spatial distribution of model parameters, and validate models' abilities to reproduce spatial variations in slope. Additionally, AirSWOT and SWOT measurements can be assimilated into lower-complexity models to try to approach the accuracies achieved by higher-complexity models.

  13. Modeling the Pineapple Express phenomenon via Multivariate Extreme Value Theory

    NASA Astrophysics Data System (ADS)

    Weller, G.; Cooley, D. S.

    2011-12-01

    The pineapple express (PE) phenomenon is responsible for producing extreme winter precipitation events in the coastal and mountainous regions of the western United States. Because the PE phenomenon is also associated with warm temperatures, the heavy precipitation and associated snowmelt can cause destructive flooding. In order to study impacts, it is important that regional climate models from NARCCAP are able to reproduce extreme precipitation events produced by PE. We define a daily precipitation quantity which captures the spatial extent and intensity of precipitation events produced by the PE phenomenon. We then use statistical extreme value theory to model the tail dependence of this quantity as seen in an observational data set and each of the six NARCCAP regional models driven by NCEP reanalysis. We find that most NCEP-driven NARCCAP models do exhibit tail dependence between daily model output and observations. Furthermore, we find that not all extreme precipitation events are pineapple express events, as identified by Dettinger et al. (2011). The synoptic-scale atmospheric processes that drive extreme precipitation events produced by PE have only recently begun to be examined. Much of the current work has focused on pattern recognition, rather than quantitative analysis. We use daily mean sea-level pressure (MSLP) fields from NCEP to develop a "pineapple express index" for extreme precipitation, which exhibits tail dependence with our observed precipitation quantity for pineapple express events. We build a statistical model that connects daily precipitation output from the WRFG model, daily MSLP fields from NCEP, and daily observed precipitation in the western US. Finally, we use this model to simulate future observed precipitation based on WRFG output driven by the CCSM model, and our pineapple express index derived from future CCSM output. Our aim is to use this model to develop a better understanding of the frequency and intensity of extreme precipitation events produced by PE under climate change.

  14. Robust decentralized controller for minimizing coupling effect in single inductor multiple output DC-DC converter operating in continuous conduction mode.

    PubMed

    Medeiros, Renan Landau Paiva de; Barra, Walter; Bessa, Iury Valente de; Chaves Filho, João Edgar; Ayres, Florindo Antonio de Cavalho; Neves, Cleonor Crescêncio das

    2018-02-01

    This paper describes a novel robust decentralized control design methodology for a single inductor multiple output (SIMO) DC-DC converter. Based on a nominal multiple input multiple output (MIMO) plant model and performance requirements, a pairing input-output analysis is performed to select the suitable input to control each output aiming to attenuate the loop coupling. Thus, the plant uncertainty limits are selected and expressed in interval form with parameter values of the plant model. A single inductor dual output (SIDO) DC-DC buck converter board is developed for experimental tests. The experimental results show that the proposed methodology can maintain a desirable performance even in the presence of parametric uncertainties. Furthermore, the performance indexes calculated from experimental data show that the proposed methodology outperforms classical MIMO control techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  15. NREL - SOWFA - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  16. PNNL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  17. ANL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  18. LLNL - WRF-LES - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  19. ANL - WRF-LES - Neutral - TTU

    DOE Data Explorer

    Kosovic, Branko

    2018-06-20

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  20. LANL - WRF-LES - Neutral - TTU

    DOE Data Explorer

    Kosovic, Branko

    2018-06-20

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  1. LANL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  2. NONMEMory: a run management tool for NONMEM.

    PubMed

    Wilkins, Justin J

    2005-06-01

    NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.

  3. Simulation of streamflows and basin-wide hydrologic variables over several climate-change scenarios, Methow River basin, Washington

    USGS Publications Warehouse

    Voss, Frank D.; Mastin, Mark C.

    2012-01-01

    A database was developed to automate model execution and to provide users with Internet access to voluminous data products ranging from summary figures to model output timeseries. Database-enabled Internet tools were developed to allow users to create interactive graphs of output results based on their analysis needs. For example, users were able to create graphs by selecting time intervals, greenhouse gas emission scenarios, general circulation models, and specific hydrologic variables.

  4. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-atmosphere transfer models or hydrological models, high-resolution atmospheric forcing data are needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline-interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables, and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400 m horizontal grid spacing). A novel automatic search-algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO-model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling steps 1 and 2 decreases root mean square errors. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations and for a fully coupled model system.
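    A schematic sketch of the disaggregation chain, covering step 1 (interpolation of the coarse field; plain linear interpolation stands in for the bi-quadratic spline) and step 3 (autoregressive noise restoring subgrid variability). Step 2, the site-specific deterministic rules, is omitted, and all numbers are illustrative rather than taken from the paper.

    ```python
    # Steps 1 and 3 of a downscaling chain, in one spatial dimension.
    import numpy as np

    def interpolate_coarse(coarse, factor):
        """Step 1: upsample a 1-D coarse field onto a grid `factor` times finer
        (linear interpolation as a stand-in for the bi-quadratic spline)."""
        n = len(coarse)
        x_coarse = np.arange(n)
        x_fine = np.linspace(0, n - 1, (n - 1) * factor + 1)
        return np.interp(x_fine, x_coarse, coarse)

    def add_ar1_noise(field, phi=0.8, sigma=0.1, rng=None):
        """Step 3: add AR(1) noise so the fine field regains small-scale
        variability and a realistic autocorrelation."""
        rng = rng or np.random.default_rng(0)
        noise = np.zeros_like(field)
        for i in range(1, len(field)):
            noise[i] = phi * noise[i - 1] + sigma * rng.standard_normal()
        return field + noise

    coarse = np.array([280.0, 282.0, 281.0, 279.0])  # e.g. coarse temperature (K)
    fine = add_ar1_noise(interpolate_coarse(coarse, factor=10))
    ```

    In the real scheme the noise parameters would be fitted so that the subgrid variance and temporal autocorrelation match the high-resolution reference fields.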

  5. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Justin; Hund, Lauren

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of `modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
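    One possible reading of "scaling the likelihood by an effective sample size" can be sketched as follows: correlated functional residuals (e.g. a velocity-time trace) make a naive Gaussian likelihood over-confident, so the log-likelihood is down-weighted by n_eff/n. The ESS estimator below, truncating the autocorrelation sum at the first non-positive lag, is a common heuristic and an assumption here, not necessarily the paper's exact choice.

    ```python
    # Effective-sample-size-scaled Gaussian log-likelihood for correlated
    # residuals, as a hedged illustration of the idea described above.
    import numpy as np

    def effective_sample_size(resid):
        """n_eff = n / (1 + 2 * sum of positive-lag autocorrelations),
        truncating the sum at the first non-positive lag (heuristic)."""
        r = resid - resid.mean()
        acf = np.correlate(r, r, mode="full")[len(r) - 1:]
        acf = acf / acf[0]
        s = 0.0
        for rho in acf[1:]:
            if rho <= 0:
                break
            s += rho
        return len(resid) / (1.0 + 2.0 * s)

    def scaled_gaussian_loglik(resid, sigma):
        """Gaussian iid log-likelihood, down-weighted as if only n_eff of the
        n residual points were independent."""
        n = len(resid)
        n_eff = effective_sample_size(resid)
        full = (-0.5 * n * np.log(2 * np.pi * sigma**2)
                - 0.5 * np.sum(resid**2) / sigma**2)
        return (n_eff / n) * full
    ```

    For white-noise residuals n_eff stays close to n, while strongly autocorrelated residuals are sharply down-weighted.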

  6. Development and analysis of a finite element model to simulate pulmonary emphysema in CT imaging.

    PubMed

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2015-01-01

    In CT imaging, pulmonary emphysema appears as lung regions with Low-Attenuation Areas (LAA). In this study we propose a finite element (FE) model of lung parenchyma, based on a 2-D grid of beam elements, which simulates pulmonary emphysema related to smoking in CT imaging. Simulated LAA images were generated through space sampling of the model output. We employed two measurements of emphysema extent: Relative Area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes. The model has been used to compare RA and D computed on the simulated LAA images with those computed on the model's output. Different mesh element sizes and various model parameters, simulating different physiological/pathological conditions, have been considered and analyzed. A proper mesh element size has been determined as the best trade-off between reliable results and reasonable computational cost. Both RA and D computed on simulated LAA images were underestimated with respect to those calculated on the model's output. Such underestimations were larger for RA (≈ −44% to −26%) than for D (≈ −16% to −2%). Our FE model could be useful to generate standard test images and to design realistic physical phantoms of LAA images for the assessment of the accuracy of descriptors for quantifying emphysema in CT imaging.

  7. USEEIO: a New and Transparent United States ...

    EPA Pesticide Factsheets

    National-scope environmental life cycle models of goods and services may be used for many purposes, including quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing environmental impacts of policies, and performing streamlined life cycle assessment. USEEIO is a new environmentally extended input-output model of the United States fit for such purposes and other sustainable materials management applications. USEEIO melds data on economic transactions between 389 industry sectors with environmental data for these sectors covering land, water, energy and mineral usage and emissions of greenhouse gases, criteria air pollutants, nutrients and toxics, to build a life cycle model of 385 US goods and services. In comparison with existing US input-output models, USEEIO is more current with most data representing year 2013, more extensive in its coverage of resources and emissions, more deliberate and detailed in its interpretation and combination of data sources, and includes formal data quality evaluation and description. USEEIO was assembled with a new Python module called the IO Model Builder capable of assembling and calculating results of user-defined input-output models and exporting the models into LCA software. The model and data quality evaluation capabilities are demonstrated with an analysis of the environmental performance of an average hospital in the US. All USEEIO f

  8. Global Sensitivity Analysis of Environmental Systems via Multiple Indices based on Statistical Moments of Model Outputs

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Dell'Oca, A.

    2017-12-01

    We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
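    A toy illustration of moment-based sensitivity in the spirit described above: bin Monte Carlo samples by each parameter and measure how much the conditional mean of the output moves across bins; the same recipe extends to variance, skewness and kurtosis. The model and index definition here are illustrative, not the paper's exact metrics.

    ```python
    # Crude binning estimator of a parameter's influence on the output mean.
    import numpy as np

    def mean_sensitivity(theta, y, n_bins=20):
        """For each parameter column, variance across bins of the conditional
        mean E[y | theta_j in bin]; larger = stronger influence on the mean."""
        idx = []
        for j in range(theta.shape[1]):
            edges = np.quantile(theta[:, j], np.linspace(0, 1, n_bins + 1))
            bins = np.clip(np.searchsorted(edges, theta[:, j]) - 1, 0, n_bins - 1)
            cond_means = np.array([y[bins == b].mean() for b in range(n_bins)])
            idx.append(cond_means.var())
        return np.array(idx)

    rng = np.random.default_rng(0)
    theta = rng.uniform(0, 1, size=(20000, 2))
    y = 5.0 * theta[:, 0] + 0.1 * theta[:, 1]   # theta_0 dominates the mean
    s = mean_sensitivity(theta, y)
    ```

    Replacing the conditional mean with a conditional variance, skewness or kurtosis yields the analogous higher-moment indices, and the model evaluations can come from a surrogate instead of the full model.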

  9. Growth and food consumption by tiger muskellunge: Effects of temperature and ration level on bioenergetic model predictions

    USGS Publications Warehouse

    Chipps, S.R.; Einfalt, L.M.; Wahl, David H.

    2000-01-01

    We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max)) and water temperature (7.5-25 °C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5 °C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25 °C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations and was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.

  10. DRI Model of the U.S. Economy -- Model Documentation

    EIA Publications

    1993-01-01

    Provides documentation on Data Resources, Inc., DRI Model of the U.S. Economy and the DRI Personal Computer Input/Output Model. It also describes the theoretical basis, structure and functions of both DRI models; and contains brief descriptions of the models and their equations.

  11. Global and regional ecosystem modeling: comparison of model outputs and field measurements

    NASA Astrophysics Data System (ADS)

    Olson, R. J.; Hibbard, K.

    2003-04-01

    The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements, but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as the models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  12. Two models for identification and predicting behaviour of an induction motor system

    NASA Astrophysics Data System (ADS)

    Kuo, Chien-Hsun

    2018-01-01

    System identification or modelling is the process of building mathematical models of dynamical systems based on the available input and output data from the systems. This paper introduces system identification using ARX (AutoRegressive with eXogenous input) and ARMAX (AutoRegressive Moving Average with eXogenous input) models. Through the identified system model, the predicted output can be compared with the measured one to help prevent motor faults from developing into a catastrophic machine failure and to avoid unnecessary costs and delays caused by the need to carry out unscheduled repairs. The induction motor system is illustrated as an example. Numerical and experimental results are shown for the identified induction motor system.
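    The ARX idea can be sketched with a least-squares fit of a first-order model, y[k] = a·y[k-1] + b·u[k-1] + e[k]; the system here is synthetic, whereas on a real motor u and y would be the measured drive input and response.

    ```python
    # Least-squares identification of a first-order ARX model from
    # simulated input-output data.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 2000
    a_true, b_true = 0.8, 0.5
    u = rng.standard_normal(N)                 # excitation input
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

    # Stack regressors [y[k-1], u[k-1]] and solve the least-squares problem.
    Phi = np.column_stack([y[:-1], u[:-1]])
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    a_hat, b_hat = theta
    y_pred = Phi @ theta                        # one-step-ahead prediction
    ```

    An ARMAX fit additionally models the noise as a moving average and requires an iterative (e.g. pseudo-linear regression) solution rather than a single least-squares solve.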

  13. Prediction model of sinoatrial node field potential using high order partial least squares.

    PubMed

    Feng, Yu; Cao, Hui; Zhang, Yanbin

    2015-01-01

    High order partial least squares (HOPLS) is a novel data processing method that is well suited to building prediction models whose inputs and outputs are tensors. The objective of this study is to build a prediction model of the relationship between the sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input; the concentration and the actuation duration of high glucose made up the model's output. The results showed that, when predicting two-dimensional variables, HOPLS had the same predictive ability as partial least squares (PLS) with a lower degree of dispersion.

  14. SCOUT: a fast Monte-Carlo modeling tool of scintillation camera output†

    PubMed Central

    Hunter, William C J; Barrett, Harrison H.; Muzi, John P.; McDougald, Wendy; MacDonald, Lawrence R.; Miyaoka, Robert S.; Lewellen, Thomas K.

    2013-01-01

    We have developed a Monte-Carlo photon-tracking and readout simulator called SCOUT to study the stochastic behavior of signals output from a simplified rectangular scintillation-camera design. SCOUT models the salient processes affecting signal generation, transport, and readout of a scintillation camera. Presently, we compare output signal statistics from SCOUT to experimental results for both a discrete and a monolithic camera. We also benchmark the speed of this simulation tool and compare it to existing simulation tools. We find this modeling tool to be relatively fast and predictive of experimental results. Depending on the modeled camera geometry, we found SCOUT to be 4 to 140 times faster than other modeling tools. PMID:23640136

  15. Grid-connected wind and photovoltaic system

    NASA Astrophysics Data System (ADS)

    Devabakthuni, Sindhuja

    The objective of this thesis is to design a grid-connected wind and photovoltaic system. A new converter control model was designed that holds the AC bus voltage constant when the combined solar and wind system is connected to the AC bus. The model is designed to track maximum power at each operating point irrespective of changes in irradiance, temperature, and wind speed, which affect the power supplied to the grid. Solar power is not constant, as it is affected by changes in irradiance and temperature, and wind power is likewise affected by wind speed. An MPPT controller was therefore designed for both systems, together with a boost converter that uses the pulses from the MPPT controller to boost the output. The wind system consists of the wind turbine block from MATLAB with a pitch angle controller to maintain the optimum pitch angle; the turbine output is connected to a permanent magnet synchronous generator. The unregulated DC output from the photovoltaic system is fed directly to the boost converter, while the AC output from the wind system is first passed through an uncontrolled rectifier to obtain an unregulated DC output, which then goes to the boost converter. A voltage source inverter was designed to convert the rectified DC output from the boost converter to AC power while maintaining a constant AC bus voltage irrespective of disturbances in the power supply. The photovoltaic and wind systems are individually designed for 5 kW each in the MATLAB-Simulink environment. In this thesis, the models were subjected to changes in irradiance, temperature, and wind speed and the results were interpreted. The model successfully tracked the maximum power point at every instant, and the AC bus voltage was maintained constant throughout the simulation.
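    The MPPT logic described above is commonly implemented as a perturb-and-observe loop; a minimal sketch against a made-up power-voltage curve with a single maximum follows. A real controller would perturb the boost converter duty cycle and read back measured PV power rather than evaluate an analytic curve.

    ```python
    # Minimal perturb-and-observe maximum power point tracking loop.
    def perturb_and_observe(power, v, step=0.1, iters=200):
        """Climb the P-V curve: keep stepping while power rises, reverse
        direction when it falls. Settles into oscillation around the MPP."""
        direction = 1.0
        p_prev = power(v)
        for _ in range(iters):
            v += direction * step
            p = power(v)
            if p < p_prev:          # overshot the peak: reverse direction
                direction = -direction
            p_prev = p
        return v

    pv_curve = lambda v: -(v - 17.0) ** 2 + 100.0   # toy curve, MPP at 17 V
    v_mpp = perturb_and_observe(pv_curve, v=12.0)
    ```

    The fixed perturbation step trades tracking speed against steady-state oscillation around the maximum; adaptive-step variants shrink the step as the peak is approached.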

  16. Output Containment Control of Linear Heterogeneous Multi-Agent Systems Using Internal Model Principle.

    PubMed

    Zuo, Shan; Song, Yongduan; Lewis, Frank L; Davoudi, Ali

    2017-01-04

    This paper studies the output containment control of linear heterogeneous multi-agent systems, where the system dynamics and even the state dimensions can generally be different. Since the states can have different dimensions, standard results from state containment control do not apply. Therefore, the control objective is to guarantee the convergence of the output of each follower to the dynamic convex hull spanned by the outputs of leaders. This can be achieved by making certain output containment errors go to zero asymptotically. Based on this formulation, two different control protocols, namely, full-state feedback and static output-feedback, are designed based on internal model principles. Sufficient local conditions for the existence of the proposed control protocols are developed in terms of stabilizing the local followers' dynamics and satisfying a certain H∞ criterion. Unified design procedures to solve the proposed two control protocols are presented by formulation and solution of certain local state-feedback and static output-feedback problems, respectively. Numerical simulations are given to validate the proposed control protocols.

  17. Estimation of continuous multi-DOF finger joint kinematics from surface EMG using a multi-output Gaussian Process.

    PubMed

    Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro

    2014-01-01

    Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robot prostheses and finger exoskeletons. However, when estimating kinematics with many degrees of freedom (DOF) from EMG, the output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian Process (Multi-output Full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that accounts for the inherent coupling or correlation within the hand and finger joints. We also provide a comparison of estimation performance between different regression methods, such as Artificial Neural Networks (ANN), which are used by many of the related studies. We show that using a multi-output GP gives improved estimation compared to a multi-output ANN and even dedicated or independent regression models.

  18. Whole plant based treatment of hypercholesterolemia with Crataegus laevigata in a zebrafish model

    PubMed Central

    2012-01-01

    Background Consumers are increasingly turning to plant-based complementary and alternative medicines to treat hypercholesterolemia. Many of these treatments are untested and their efficacy is unknown. This multitude of potential remedies necessitates a model system amenable to testing large numbers of organisms that maintains similarity to humans in both mode of drug administration and overall physiology. Here we develop the larval zebrafish (4–30 days post fertilization) as a vertebrate model of dietary plant-based treatment of hypercholesterolemia and test the effects of Crataegus laevigata in this model. Methods Larval zebrafish were fed high cholesterol diets infused with fluorescent sterols and phytomedicines. Plants were ground with mortar and pestle into a fine powder before addition to food. Fluorescent sterols were utilized to optically quantify relative difference in intravascular cholesterol levels between groups of fish. We utilized the Zeiss 7-Live Duo high-speed confocal platform in order to both quantify intravascular sterol fluorescence and to capture video of the heart beat for determination of cardiac output. Results In this investigation we developed and utilized a larval zebrafish model to investigate dietary plant-based intervention of the pathophysiology of hypercholesterolemia. We found BODIPY-cholesterol effectively labels diet-introduced intravascular cholesterol levels (P < 0.05, Student’s t-test). We also established that zebrafish cardiac output declines as cholesterol dose increases (difference between 0.1% and 8% (w/w) high cholesterol diet-treated cardiac output significant at P < 0.05, 1-way ANOVA). Using this model, we found hawthorn leaves and flowers significantly reduce intravascular cholesterol levels (P < 0.05, 1-way ANOVA) and interact with cholesterol to impact cardiac output in hypercholesterolemic fish (2-way ANOVA, P < 0.05 for interaction effect). 
Conclusions The results of this study demonstrate that the larval zebrafish has the potential to become a powerful model to test plant based dietary intervention of hypercholesterolemia. Using this model we have shown that hawthorn leaves and flowers have the potential to affect cardiac output as well as intravascular cholesterol levels. Further, our observation that hawthorn leaves and flowers interact with cholesterol to impact cardiac output indicates that the physiological effects of hawthorn may depend on diet. PMID:22824306

  19. Analytical approach for modeling and performance analysis of microring resonators as optical filters with multiple output bus waveguides

    NASA Astrophysics Data System (ADS)

    Lakra, Suchita; Mandal, Sanjoy

    2017-06-01

    A quadruple micro-optical ring resonator (QMORR) with multiple output bus waveguides is mathematically modeled and analyzed by making use of the delay-line signal processing approach in Z-domain and Mason's gain formula. The performances of QMORR with two output bus waveguides with vertical coupling are analyzed. This proposed structure is capable of providing wider free spectral response from both the output buses with appreciable cross talk. Thus, this configuration could provide increased capacity to insert a large number of communication channels. The simulated frequency response characteristic and its dispersion and group delay characteristics are graphically presented using the MATLAB environment.
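    The delay-line, Z-domain treatment described above can be illustrated on the simplest case. The sketch below is a hedged example, not the QMORR of the paper: it evaluates the through- and drop-port responses of a single add-drop microring by substituting z^-1 = exp(-j*w) on the unit circle, with invented coupling amplitudes t1, t2 and round-trip loss a.

```python
import numpy as np

# Illustrative values (assumptions, not taken from the paper)
t1, t2 = 0.9, 0.9       # through-coupling amplitudes at the two couplers
a = 0.98                # single round-trip amplitude transmission (loss)

w = np.linspace(-np.pi, np.pi, 1001)   # normalized angular frequency
z_inv = np.exp(-1j * w)                # z^-1 evaluated on the unit circle

# Through-port and drop-port responses of a single add-drop ring
H_thru = (t1 - t2 * a * z_inv) / (1 - t1 * t2 * a * z_inv)
k1, k2 = np.sqrt(1 - t1**2), np.sqrt(1 - t2**2)
H_drop = -k1 * k2 * np.sqrt(a) * np.exp(-1j * w / 2) / (1 - t1 * t2 * a * z_inv)

# Power conservation up to loss: |H_thru|^2 + |H_drop|^2 <= 1 everywhere
assert np.all(np.abs(H_thru)**2 + np.abs(H_drop)**2 <= 1 + 1e-9)
```

    The multi-ring, multi-bus structure of the paper compounds several such unit delays, which is where Mason's gain formula becomes convenient.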

  20. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that, as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias, often observed with other system identification methods, of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.
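    The equation-error versus output-error distinction the abstract makes can be demonstrated on a toy first-order system. The sketch below is an illustration under invented numbers, not the paper's SQP algorithm: an equation-error (least-squares) fit regresses on noisy measured outputs and is biased, while an output-error fit simulates the model from the input alone and compares simulated to measured output (here via a simple grid search with the input gain held at its true value, a simplification).

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.9, 1.0
N = 2000
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(N - 1):
    y[k + 1] = a_true * y[k] + b_true * u[k]
y_meas = y + 0.5 * rng.standard_normal(N)   # measurement (output) noise

# Equation-error (ARX-style) least squares: biased, because the regressor
# y_meas[k] itself contains the measurement noise
Phi = np.column_stack([y_meas[:-1], u[:-1]])
a_ee, b_ee = np.linalg.lstsq(Phi, y_meas[1:], rcond=None)[0]

# Output-error fit: simulate the candidate model from u alone, then compare
# its output trajectory with the measurements
def oe_cost(a):
    ysim = np.zeros(N)
    for k in range(N - 1):
        ysim[k + 1] = a * ysim[k] + b_true * u[k]
    return np.mean((y_meas - ysim) ** 2)

grid = np.linspace(0.8, 0.99, 96)
a_oe = min(grid, key=oe_cost)

# The output-error estimate of the pole is closer to the truth
assert abs(a_oe - a_true) < abs(a_ee - a_true)
```

    The equation-error pole estimate is pulled toward zero (an artificially damped model), which mirrors the damping bias the abstract reports eliminating.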

  1. Developing a non-point source P loss indicator in R and its parameter uncertainty assessment using GLUE: a case study in northern China.

    PubMed

    Su, Jingjun; Du, Xinzhong; Li, Xuyong

    2018-05-16

    Uncertainty analysis is an important prerequisite for model application. However, the existing phosphorus (P) loss indexes or indicators have rarely been evaluated. This study applied the generalized likelihood uncertainty estimation (GLUE) method to assess the uncertainty of parameters and modeling outputs of a non-point source (NPS) P indicator constructed in the R language, and also examined how the subjective choices of likelihood formulation and acceptability threshold in GLUE influence model outputs. The results indicated the following. (1) The parameters RegR2, RegSDR2, PlossDPfer, PlossDPman, DPDR, and DPR were highly sensitive to the overall TP simulation, and their value ranges could be reduced by GLUE. (2) The Nash efficiency likelihood (L1) seemed better able to accentuate high-likelihood simulations than the exponential function (L2). (3) The combined likelihood integrating the criteria of multiple outputs performed better than a single likelihood in model uncertainty assessment, in terms of reducing the uncertainty band widths and assuring the goodness of fit of the whole set of model outputs. (4) A value of 0.55 appeared to be a modest choice of threshold for balancing the competing interests of high modeling efficiency and high bracketing efficiency. Results of this study could provide (1) an option for conducting NPS modeling under one single computing platform, (2) important references for parameter setting in NPS model development in similar regions, (3) useful suggestions for applying the GLUE method in studies with different emphases according to research interests, and (4) important insights into watershed P management in similar regions.
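    The GLUE procedure itself is simple enough to sketch end to end. The example below is a hedged toy, not the paper's P indicator: a one-parameter linear export model is Monte Carlo sampled from its prior range, Nash-Sutcliffe efficiency serves as the informal likelihood, and runs above an acceptability threshold of 0.55 (the value the paper recommends) are retained as "behavioural" and turned into weights and posterior bounds. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "model": P export = coef * rainfall, with unknown true coef = 0.3
rain = rng.uniform(5, 50, 30)
obs = 0.3 * rain + rng.normal(0, 1.0, 30)      # synthetic observations

# 1. Monte Carlo sample the parameter from its prior range [0, 1]
coef = rng.uniform(0.0, 1.0, 5000)
sims = coef[:, None] * rain[None, :]

# 2. Nash-Sutcliffe efficiency as the informal likelihood measure
sse = np.sum((sims - obs) ** 2, axis=1)
ns = 1.0 - sse / np.sum((obs - obs.mean()) ** 2)

# 3. Keep behavioural runs above the acceptability threshold, rescale their
#    likelihoods to weights, and summarise the retained parameter values
behavioural = ns > 0.55
w = ns[behavioural] / ns[behavioural].sum()
post = coef[behavioural]
post_mean = (w * post).sum()
lo, hi = np.quantile(post, [0.05, 0.95])

assert lo < 0.3 < hi       # posterior range brackets the true coefficient
assert hi - lo < 1.0       # and is narrower than the prior range
```

    Raising the threshold tightens the posterior band at the risk of bracketing fewer observations, which is exactly the trade-off behind the paper's choice of 0.55.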

  2. The future of climate science analysis in a coming era of exascale computing

    NASA Astrophysics Data System (ADS)

    Bates, S. C.; Strand, G.

    2013-12-01

    Projections of Community Earth System Model (CESM) output based on the growth of data archived over 2000-2012 at all of our computing sites (NCAR, NERSC, ORNL) show that we can expect to reach 1,000 PB (1 EB) sometime in the next decade or so. The current paradigms of using site-based archival systems to hold these data that are then accessed via portals or gateways, downloading the data to a local system, and then processing/analyzing the data will be irretrievably broken before then. From a climate modeling perspective, the expertise involved in making climate models themselves efficient on HPC systems will need to be applied to the data as well - providing fast parallel analysis tools co-resident in memory with the data, because the disk I/O bandwidth simply will not keep up with the expected arrival of exaflop systems. The ability of scientists, analysts, stakeholders and others to use climate model output to turn these data into understanding and knowledge will require significant advances in the current typical analysis tools and packages to enable these processes for these vast volumes of data. Allowing data users to enact their own analyses on model output is virtually a requirement as well - climate modelers cannot anticipate all the possibilities for analysis that users may want to do. In addition, the expertise of data scientists, and their knowledge of the model output and of best practices in data management (metadata, curation, provenance and so on), will need to be rewarded and exploited to gain the most understanding possible from these volumes of data. In response to growing data size, demand, and future projections, the CESM output has undergone a structural evolution and the data management plan has been reevaluated and updated. The major evolution of the CESM data structure is presented here, along with the CESM experience and role within CMIP3/CMIP5.

  3. Devon Ice cap's future: results from climate and ice dynamics modelling via surface mass balance modelling

    NASA Astrophysics Data System (ADS)

    Rodehacke, C. B.; Mottram, R.; Boberg, F.

    2017-12-01

    The Devon Ice Cap is an example of a relatively well monitored small ice cap in the Canadian Arctic. Close to Greenland, it shows a surface mass balance signal similar to that of glaciers in western Greenland. Here we use various boundary conditions, ranging from ERA-Interim reanalysis data to global climate model output, downscaled to high resolution (5 km) by the regional climate model HIRHAM5, to determine the surface mass balance (SMB) of the Devon Ice Cap. These SMB estimates are used to drive the PISM glacier model in order to simulate the present day and future prospects of this small Arctic ice cap. Observational data from the Devon Ice Cap in Arctic Canada are used to evaluate the SMB output from the HIRHAM5 model for simulations forced with the ERA-Interim climate reanalysis data and the historical emissions scenario run by the EC-Earth global climate model. The RCP8.5 scenario simulated by EC-Earth is also downscaled by HIRHAM5, and this output is used to force the PISM model to simulate the likely future evolution of the Devon Ice Cap under a warming climate. We find that the Devon Ice Cap is likely to continue its present day retreat, though in the future increased precipitation partly offsets the enhanced melt rates caused by climate change.
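    The melt-versus-precipitation trade-off in the final sentence can be illustrated with a deliberately crude surface mass balance scheme. The sketch below uses a positive-degree-day (PDD) model, far simpler than HIRHAM5's physics, with entirely invented temperatures, precipitation, and degree-day factor, to show how warming can outpace a simultaneous increase in snowfall.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily 2 m temperature (deg C) with an Arctic seasonal cycle,
# uniform precipitation, and an ice degree-day factor (all assumptions)
days = 365
t2m = -15 + 18 * np.sin(2 * np.pi * (np.arange(days) - 80) / 365) \
      + rng.normal(0, 3, days)
precip = np.full(days, 1.2e-3)      # m w.e. per day
ddf = 4.0e-3                        # m w.e. per positive degree-day

def smb(temp, prec):
    melt = ddf * np.clip(temp, 0, None).sum()   # PDD melt
    accum = prec[temp < 2.0].sum()              # snowfall on cold days
    return accum - melt

present = smb(t2m, precip)
future = smb(t2m + 4.0, precip * 1.15)   # warming with 15% more precipitation

# Extra melt from warming outweighs the extra snowfall
assert future < present
```

    In the paper's more realistic setup the same competition plays out, with precipitation only partly offsetting the enhanced melt.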

  4. Development of digital phantoms based on a finite element model to simulate low-attenuation areas in CT imaging for pulmonary emphysema quantification.

    PubMed

    Diciotti, Stefano; Nobis, Alessandro; Ciulli, Stefano; Landini, Nicholas; Mascalchi, Mario; Sverzellati, Nicola; Innocenti, Bernardo

    2017-09-01

    To develop an innovative finite element (FE) model of lung parenchyma which simulates pulmonary emphysema on CT imaging. The model aims to generate a set of digital phantoms of low-attenuation areas (LAA) images with different grades of emphysema severity. Four individual parameter configurations simulating different grades of emphysema severity were utilized to generate 40 FE models using ten randomizations for each setting. We compared two measures of emphysema severity (relative area (RA) and the exponent D of the cumulative distribution function of LAA cluster sizes) between the simulated LAA images and those computed directly on the models' output (considered as reference). The LAA images obtained from our model output can simulate CT-LAA images in subjects with different grades of emphysema severity. Both RA and D computed on simulated LAA images were underestimated as compared to those calculated on the models' output, suggesting that measurements in CT imaging may not be accurate in the assessment of real emphysema extent. Our model is able to mimic the cluster size distribution of LAA on CT imaging of subjects with pulmonary emphysema. The model could be useful to generate standard test images and to design physical phantoms of LAA images for the assessment of the accuracy of indexes for the radiologic quantitation of emphysema.
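    The relative area (RA) index mentioned in the abstract is simply the fraction of lung pixels below a Hounsfield-unit threshold, conventionally -950 HU. A hedged sketch on a synthetic image (the HU distributions below are invented, not the paper's FE output; cluster labeling for the exponent D is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic CT slice in Hounsfield units: normal parenchyma around -850 HU
# with ~5% scattered low-attenuation "holes" near -980 HU (assumed values)
img = rng.normal(-850, 40, (256, 256))
holes = rng.random((256, 256)) < 0.05
img[holes] = rng.normal(-980, 10, holes.sum())

# RA: percentage of pixels below the conventional -950 HU threshold
mask = img < -950
ra = mask.mean() * 100.0
assert 3.0 < ra < 15.0   # dominated by the seeded holes plus a small tail
```

    Computing the exponent D would additionally require labeling connected LAA clusters and fitting the tail of their size distribution.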

  5. Electric Propulsion System Modeling for the Proposed Prometheus 1 Mission

    NASA Technical Reports Server (NTRS)

    Fiehler, Douglas; Dougherty, Ryan; Manzella, David

    2005-01-01

    The proposed Prometheus 1 spacecraft would utilize nuclear electric propulsion to propel the spacecraft to its ultimate destination where it would perform its primary mission. As part of the Prometheus 1 Phase A studies, system models were developed for each of the spacecraft subsystems that were integrated into one overarching system model. The Electric Propulsion System (EPS) model was developed using data from the Prometheus 1 electric propulsion technology development efforts. This EPS model was then used to provide both performance and mass information to the Prometheus 1 system model for total system trades. Development of the EPS model is described, detailing both the performance calculations as well as its evolution over the course of Phase A through three technical baselines. Model outputs are also presented, detailing the performance of the model and its direct relationship to the Prometheus 1 technology development efforts. These EP system model outputs are also analyzed chronologically showing the response of the model development to the four technical baselines during Prometheus 1 Phase A.

  6. Emulation: A fast stochastic Bayesian method to eliminate model space

    NASA Astrophysics Data System (ADS)

    Roberts, Alan; Hobbs, Richard; Goldstein, Michael

    2010-05-01

    Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes, and more recently developed Bayesian search methods such as MCMC (Markov Chain Monte Carlo). However, both kinds of scheme have proved prohibitively expensive, in both computing power and time cost, due to the normally very large model space which needs to be searched using forward model simulators that take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs with a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset.
We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling the uncertainties in the data measurements, the relationships between the various physical parameters involved, and the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties; the emulator is thus also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
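    The emulate-calibrate-screen workflow described above can be sketched in a few lines. The example below is a hedged one-parameter toy, not a geophysical code: a cheap polynomial stands in for the emulator, its error is calibrated against extra simulator runs, and candidate models are discarded when their emulated output is implausibly far from an "observed" datum (the tolerance is inflated by the calibrated emulator error, so the truth is not screened out).

```python
import numpy as np

rng = np.random.default_rng(3)

# Expensive "simulator" stand-in (in practice: a seismic/gravity/MT forward code)
def simulator(m):
    return np.sin(3 * m) + 0.5 * m**2

# 1. Run the simulator at a modest design of training points
m_train = np.linspace(0, 2, 30)
d_train = simulator(m_train)

# 2. Fit a cheap polynomial emulator and calibrate its error on extra runs
coeffs = np.polyfit(m_train, d_train, 9)
m_check = rng.uniform(0, 2, 200)
emu_err = np.max(np.abs(np.polyval(coeffs, m_check) - simulator(m_check)))

# 3. Screen model space: keep only models whose emulated output is plausibly
#    close to the observation, inflating the tolerance by the emulator error
d_obs = simulator(0.7)          # pretend this is the field observation
tol = 0.02
m_cand = np.linspace(0, 2, 100000)
plausible = np.abs(np.polyval(coeffs, m_cand) - d_obs) < tol + emu_err
frac = plausible.mean()

assert frac < 0.2                                    # most model space ruled out
assert plausible[np.argmin(np.abs(m_cand - 0.7))]    # the true model survives
```

    The surviving few percent of model space is then cheap enough to hand to a deterministic inversion or an MCMC sampler.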

  7. An Energy-Based Hysteresis Model for Magnetostrictive Transducers

    NASA Technical Reports Server (NTRS)

    Calkins, F. T.; Smith, R. C.; Flatau, A. B.

    1997-01-01

    This paper addresses the modeling of hysteresis in magnetostrictive transducers. This is considered in the context of control applications which require an accurate characterization of the relation between input currents and strains output by the transducer. This relation typically exhibits significant nonlinearities and hysteresis due to inherent properties of magnetostrictive materials. The characterization considered here is based upon the Jiles-Atherton mean field model for ferromagnetic hysteresis in combination with a quadratic moment rotation model for magnetostriction. As demonstrated through comparison with experimental data, the magnetization model very adequately quantifies both major and minor loops under various operating conditions. The combined model can then be used to accurately characterize output strains at moderate drive levels. The advantages to this model lie in the small number (six) of required parameters and the flexibility it exhibits in a variety of operating conditions.
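    The Jiles-Atherton mean-field construction the abstract relies on is compact enough to sketch numerically. The example below is a hedged illustration, not the paper's transducer model: the six parameters are generic ferromagnet values chosen for numerical stability rather than magnetostrictive-material data, Euler integration is used, and the quadratic moment-rotation strain is appended at the end.

```python
import math
import numpy as np

# Generic Jiles-Atherton parameters (assumptions, not the paper's values)
Ms, a, alpha, k, c = 1.7e6, 1000.0, 1e-4, 2000.0, 0.1
lam_s = 1.0e-3                  # saturation magnetostriction (assumed)

def langevin(x):                # coth(x) - 1/x, with small-x limit x/3
    return x / 3.0 if abs(x) < 1e-6 else 1.0 / math.tanh(x) - 1.0 / x

def sweep(H_path):
    M_irr, M = 0.0, [0.0]
    for i in range(1, len(H_path)):
        dH = H_path[i] - H_path[i - 1]
        delta = 1.0 if dH >= 0 else -1.0
        He = H_path[i] + alpha * M[-1]       # mean-field effective field
        Man = Ms * langevin(He / a)          # anhysteretic magnetization
        if (Man - M_irr) * delta > 0:        # irreversible change only "with" the field
            M_irr += (Man - M_irr) / (delta * k - alpha * (Man - M_irr)) * dH
        M.append(c * Man + (1 - c) * M_irr)
    return np.array(M)

# Initial magnetization, then one full major-loop cycle
H = np.concatenate([np.linspace(0, 5000, 1000),
                    np.linspace(5000, -5000, 2000),
                    np.linspace(-5000, 5000, 2000)])
M = sweep(H)
lam = 1.5 * lam_s * (M / Ms) ** 2            # quadratic moment-rotation strain

i_down0 = 2000                               # H = 0 on the descending branch
i_up0 = 4000                                 # H = 0 on the ascending branch
assert M[i_down0] > M[i_up0] + 1e4           # open hysteresis loop (remanence)
```

    The strain's quadratic dependence on M is what doubles the frequency of the output relative to a bipolar drive field, a well-known feature of magnetostrictive transducers.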

  8. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 2: Model equations and base aircraft data

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.

  9. A distributed parameter model of transmission line transformer for high voltage nanosecond pulse generation

    NASA Astrophysics Data System (ADS)

    Li, Jiangtao; Zhao, Zheng; Li, Longjie; He, Jiaxin; Li, Chenjie; Wang, Yifeng; Su, Can

    2017-09-01

    A transmission line transformer has potential advantages for nanosecond pulse generation, including excellent frequency response and no leakage inductance. The wave propagation process in a secondary mode line is indispensable due to the obvious transient electromagnetic transition inside it in this scenario. The equivalent model of the transmission line transformer is crucial for predicting the output waveform and evaluating the effects of magnetic cores on output performance. However, traditional lumped parameter models are not sufficient for nanosecond pulse generation due to their neglect of wave propagation in secondary mode lines, which follows from the lumped parameter assumption. In this paper, a distributed parameter model of the transmission line transformer was established to investigate wave propagation in the secondary mode line and its influential factors through theoretical analysis and experimental verification. The wave propagation discontinuity in the secondary mode line induced by magnetic cores is emphasized. Characteristics of the magnetic core under a nanosecond pulse were obtained by experiments. Distribution and formation of the secondary mode current were determined to reveal the essential wave propagation processes in secondary mode lines. The output waveform and efficiency were found to be affected dramatically by the wave propagation discontinuity in secondary mode lines induced by magnetic cores. The proposed distributed parameter model was proved more suitable for nanosecond pulse generation in the aspects of secondary mode current, output efficiency, and output waveform. In-depth comprehension of the underlying mechanisms and a broader view of the working principle of the transmission line transformer for nanosecond pulse generation can be obtained through this research.
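    The distributed-parameter view contrasted with lumped models here amounts to solving the telegrapher's equations along the line. A hedged, minimal sketch (a single lossless line with a matched load, solved by a staggered leapfrog/FDTD scheme at the "magic" time step; all values invented, and none of the magnetic-core physics of the paper is included):

```python
import numpy as np

# Lossless telegrapher's equations, 1D staggered-grid leapfrog
Z0, v = 50.0, 2e8            # characteristic impedance, propagation speed
L_ = Z0 / v                  # inductance per metre
C_ = 1.0 / (Z0 * v)          # capacitance per metre
length, nx = 1.0, 200
dx = length / nx
dt = dx / v                  # Courant number 1: exact 1-cell-per-step advection

V = np.zeros(nx + 1)         # node voltages
I = np.zeros(nx)             # half-cell currents

def src(t):                  # 2 ns rectangular input pulse
    return 1.0 if t < 2e-9 else 0.0

for n in range(150):
    t = n * dt
    I += -(dt / (L_ * dx)) * (V[1:] - V[:-1])
    V[1:-1] += -(dt / (C_ * dx)) * (I[1:] - I[:-1])
    V[0] = src(t)            # hard voltage source at the input
    V[-1] = Z0 * I[-1]       # matched resistive load (no reflection)
```

    After 150 steps the pulse occupies the middle of the line at full amplitude, while the region ahead of the wavefront is still quiescent; a lumped model would instead show the whole line responding instantaneously.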

  10. A distributed parameter model of transmission line transformer for high voltage nanosecond pulse generation.

    PubMed

    Li, Jiangtao; Zhao, Zheng; Li, Longjie; He, Jiaxin; Li, Chenjie; Wang, Yifeng; Su, Can

    2017-09-01

    A transmission line transformer has potential advantages for nanosecond pulse generation, including excellent frequency response and no leakage inductance. The wave propagation process in a secondary mode line is indispensable due to the obvious transient electromagnetic transition inside it in this scenario. The equivalent model of the transmission line transformer is crucial for predicting the output waveform and evaluating the effects of magnetic cores on output performance. However, traditional lumped parameter models are not sufficient for nanosecond pulse generation due to their neglect of wave propagation in secondary mode lines, which follows from the lumped parameter assumption. In this paper, a distributed parameter model of the transmission line transformer was established to investigate wave propagation in the secondary mode line and its influential factors through theoretical analysis and experimental verification. The wave propagation discontinuity in the secondary mode line induced by magnetic cores is emphasized. Characteristics of the magnetic core under a nanosecond pulse were obtained by experiments. Distribution and formation of the secondary mode current were determined to reveal the essential wave propagation processes in secondary mode lines. The output waveform and efficiency were found to be affected dramatically by the wave propagation discontinuity in secondary mode lines induced by magnetic cores. The proposed distributed parameter model was proved more suitable for nanosecond pulse generation in the aspects of secondary mode current, output efficiency, and output waveform. In-depth comprehension of the underlying mechanisms and a broader view of the working principle of the transmission line transformer for nanosecond pulse generation can be obtained through this research.

  11. Thin disk laser with unstable resonator and reduced output coupler

    NASA Astrophysics Data System (ADS)

    Gavili, Anwar; Shayganmanesh, Mahdi

    2018-05-01

    In this paper, the feasibility of using an unstable resonator with reduced output coupling in a thin disk laser is studied theoretically. The unstable resonator is modeled by wave optics using the Collins integral and an iterative method. An Yb:YAG crystal of 250 micron thickness is considered as a quasi-three-level active medium and modeled by solving the rate equations of the energy-level populations. The amplification of the laser beam in the active medium is calculated based on the Beer-Lambert law and the Rigrod method. Using the generalized beam parameters method, laser beam parameters such as width, divergence, M2 factor, and output power, as well as near- and far-field beam profiles, are calculated for the unstable resonator. It is demonstrated that for a thin disk laser (with a single disk), in spite of the low thickness of the disk, which leads to a low gain factor, it is possible to use an unstable resonator (with reduced output coupling) and achieve good output power with appropriate beam quality. Also, the behavior of output power and beam quality versus the equivalent Fresnel number is investigated, and the optimized value of output coupling for maximum output power is obtained.
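    Why a thin disk's small round-trip gain favours very low output coupling can be seen from the textbook low-gain approximation for output power versus output-coupler transmission, P(T) proportional to T*(g_rt/(T + delta) - 1). The sketch below uses this approximation with invented gain and loss figures (it is not the paper's wave-optics model) and checks the scanned optimum against the closed form T_opt = sqrt(g_rt*delta) - delta.

```python
import numpy as np

# Illustrative assumptions, not values from the paper
g0 = 8.0                 # saturated gain coefficient (1/m)
l_disk = 250e-6          # disk thickness (m)
g_rt = 2 * g0 * l_disk   # round-trip gain: one double pass through the disk
delta = 0.001            # round-trip internal loss

T = np.linspace(1e-5, 0.02, 4000)
P = T * (g_rt / (T + delta) - 1.0)       # relative output power
T_opt_numeric = T[np.argmax(P)]
T_opt_analytic = np.sqrt(g_rt * delta) - delta

assert abs(T_opt_numeric - T_opt_analytic) < 1e-4
assert T_opt_numeric < 0.005             # optimum is a very low output coupling
```

    With a round-trip gain of only 0.4%, the optimum transmission lands near 0.1%, which is the "reduced output coupling" regime the paper explores for the unstable resonator.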

  12. The Ecological Model Web Concept: A Consultative Infrastructure for Decision Makers and Researchers

    NASA Astrophysics Data System (ADS)

    Geller, G.; Nativi, S.

    2011-12-01

    Rapid climate and socioeconomic changes may be outrunning society's ability to understand, predict, and respond to change effectively. Decision makers want better information about what these changes will be and how various resources will be affected, while researchers want better understanding of the components and processes of ecological systems, how they interact, and how they respond to change. Although there are many excellent models in ecology and related disciplines, there is only limited coordination among them, and accessible, openly shared models or model systems that can be consulted to gain insight on important ecological questions or assist with decision-making are rare. A "consultative infrastructure" that increased access to and sharing of models and model outputs would benefit decision makers, researchers, as well as modelers. Of course, envisioning such an ambitious system is much easier than building it, but several complementary approaches exist that could contribute. The one discussed here is called the Model Web. This is a concept for an open-ended system of interoperable computer models and databases based on making models and their outputs available as services ("model as a service"). Initially, it might consist of a core of several models from which it could grow gradually as new models or databases were added. However, a model web would not be a monolithic, rigidly planned and built system--instead, like the World Wide Web, it would grow largely organically, with limited central control, within a framework of broad goals and data exchange standards. One difference from the WWW is that a model web is much harder to create, and has more pitfalls, and thus is a long term vision. However, technology, science, observations, and models have advanced enough so that parts of an ecological model web can be built and utilized now, forming a framework for gradual growth as well as a broadly accessible infrastructure. 
Ultimately, the value of a model web lies in the increase in access to and sharing of both models and model outputs. By lowering access barriers to models and their outputs there is less reinvention, more efficient use of resources, greater interaction among researchers and across disciplines, as well as other benefits. The growth of such a system of models fits well with the concept and architecture of the Global Earth Observing System of Systems (GEOSS) as well as the Semantic Web. And, while framed here in the context of ecological forecasting, the same concept can be applied to any discipline utilizing models.

  13. Chemical Modeling for Studies of GeoTRACE Capabilities

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Geostationary measurements of tropospheric pollutants with high spatial and temporal resolution will revolutionize the understanding and predictions of the chemically linked global pollutants aerosols and ozone. However, the capabilities of proposed geostationary instruments, particularly GeoTRACE, have not been thoroughly studied with model simulations. Such model simulations are important to answer the questions and allay the concerns that have been expressed in the atmospheric sciences community about the feasibility of such measurements. We proposed a suite of chemical transport model simulations using the EPA Models 3 chemical transport model, which obtains its meteorology from the MM-5 mesoscale model. The model output consists of gridded abundances of chemical pollutants and meteorological parameters every 30-60 minutes for cases that have occurred in the Eastern United States. This output was intended to be used to test the GeoTRACE capability to retrieve the tropospheric columns of these pollutants.

  14. Using Dynamically Downscaled Climate Model Outputs to Inform Projections of Extreme Precipitation Events

    NASA Technical Reports Server (NTRS)

    Wobus, Cameron; Reynolds, Lara; Jones, Russell; Horton, Radley; Smith, Joel; Fries, J. Stephen; Tryby, Michael; Spero, Tanya; Nolte, Chris

    2015-01-01

    Many of the storms that generate damaging floods are caused by locally intense, sub-daily precipitation, yet the spatial and temporal resolutions of the most widely available climate model outputs are both too coarse to simulate these events. Thus there is often a disconnect between the nature of the events that cause damaging floods and the models used to project how climate change might influence their magnitude. This could be a particular problem when developing scenarios to inform storm water management options under future climate. In this study we sought to close this gap, using sub-daily outputs from the Weather Research and Forecasting (WRF) model for each of the nine climate regions in the United States. Specifically, we asked 1) whether WRF outputs projected consistent patterns of change for sub-daily and daily precipitation extremes; and 2) whether this dynamically downscaled model projected different magnitudes of change for 3-hourly vs 24-hourly extreme events. We extracted annual maximum values for 3-hour through 24-hour precipitation totals from an 11-year time series of hindcast (1995-2005) and mid-century (2045-2055) climate, and calculated the direction and magnitude of change for 3-hour and 24-hour extreme events over this timeframe. The model results project that the magnitude of both 3-hour and 24-hour events will increase over most regions of the United States, but there was no clear or consistent difference in the relative magnitudes of change for sub-daily vs daily events.
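    The annual-maximum extraction step described in the abstract is a rolling-sum-and-block-maximum computation. A hedged sketch on a synthetic hourly series (the precipitation statistics are invented, and windows straddling a year boundary are simply assigned to the year in which they start):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic hourly precipitation (mm/h) for an 11-year record, mostly dry
hours_per_year = 365 * 24
years = 11
precip = rng.gamma(0.08, 2.0, size=years * hours_per_year)

def annual_max_totals(series, window_h, n):
    # Running window_h-hour totals, then the maximum within each year block
    totals = np.convolve(series, np.ones(window_h), mode="valid")
    return np.array([totals[i * n : (i + 1) * n].max()
                     for i in range(len(series) // n)])

amax3 = annual_max_totals(precip, 3, hours_per_year)    # 3-h extremes
amax24 = annual_max_totals(precip, 24, hours_per_year)  # 24-h extremes

assert amax3.size == years
# Any 24-h window containing a year's top 3-h window must total at least
# as much, since precipitation is non-negative
assert np.all(amax24[:10] >= amax3[:10])
```

    Comparing such annual-maximum series between a hindcast and a mid-century slice gives the direction and magnitude of change reported in the study.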

  15. Evaluate and Analysis Efficiency of Safaga Port Using DEA-CCR, BCC and SBM Models-Comparison with DP World Sokhna

    NASA Astrophysics Data System (ADS)

    Elsayed, Ayman; Shabaan Khalil, Nabil

    2017-10-01

    The competition among maritime ports is increasing continuously; the main purpose of Safaga port is to become the best option for companies to carry out their trading activities, particularly importing and exporting. The main objective of this research is to evaluate and analyze factors that may significantly affect the levels of efficiency of Safaga port in Egypt (particularly its infrastructural capacity). The assessment of such efficiency must play an important role in the management of Safaga port in order to improve the possibility of development and success in commercial activities. Drawing on Data Envelopment Analysis (DEA) models, this paper develops a manner of assessing the comparative efficiency of Safaga port in Egypt during the study period 2004-2013. Previous research on port efficiency measurement usually used radial DEA models (DEA-CCR, DEA-BCC) but not non-radial DEA models. This research applies radial output-oriented (DEA-CCR, DEA-BCC) and non-radial (DEA-SBM) models with ten inputs and four outputs. The results were obtained from the analysis of input and output variables based on the DEA-CCR, DEA-BCC, and SBM models, using the software Max DEA Pro 6.3. DP World Sokhna port showed higher efficiency than Safaga port for all outputs. DP World Sokhna's position below the southern entrance to the Suez Canal, on the Red Sea, Egypt, makes it strategically located to handle cargo transiting through one of the world's busiest commercial waterways.
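    The intuition behind CCR efficiency can be shown in its simplest degenerate case. With a single input and a single output, each decision-making unit's CCR score reduces to its output/input ratio normalised by the best ratio in the sample; the full model instead solves a linear program per DMU over many inputs and outputs, as in the paper's ten-input, four-output setup. The figures below are invented, not Safaga or Sokhna data.

```python
import numpy as np

# Toy single-input, single-output DEA: efficiency = ratio / best ratio
ports = ["A", "B", "C", "D"]
cargo = np.array([120.0, 300.0, 180.0, 90.0])   # output: kilotonnes handled
berths = np.array([4.0, 6.0, 5.0, 4.0])         # input: number of berths

ratio = cargo / berths
eff = ratio / ratio.max()       # frontier port scores exactly 1.0

assert eff.max() == 1.0
assert np.argmax(eff) == 1      # port "B" defines the efficiency frontier
```

    With multiple inputs and outputs the normalisation is no longer a single ratio, and each DMU's weights come from its own LP, which is what distinguishes CCR (constant returns), BCC (variable returns), and the non-radial SBM variant.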

  16. Data Assimilation of AirSWOT and Synthetically Derived SWOT Observations of Water Surface Elevation in a Multichannel River

    NASA Astrophysics Data System (ADS)

    Altenau, E. H.; Pavelsky, T.; Andreadis, K.; Bates, P. D.; Neal, J. C.

    2017-12-01

    Multichannel rivers continue to be challenging features to quantify, especially at regional and global scales, which is problematic because accurate representations of such environments are needed to properly monitor the Earth's water cycle as it adjusts to climate change. It has been demonstrated that higher-complexity, 2D models outperform lower-complexity, 1D models in simulating multichannel river hydraulics at regional scales due to the inclusion of the channel network's connectivity. However, the future Surface Water and Ocean Topography (SWOT) mission and its airborne analog AirSWOT offer new observations that can be used to improve the lower-complexity, 1D models to achieve accuracies closer to the higher-complexity, 2D codes. Here, we use an Ensemble Kalman Filter (EnKF) to assimilate AirSWOT water surface elevation (WSE) measurements from a 2015 field campaign into a 1D hydrodynamic model along a 90 km reach of the Tanana River, AK. This work is the first to test data assimilation methods using real SWOT-like data from AirSWOT. Additionally, synthetic SWOT observations of WSE are generated across the same study site using a fine-resolution 2D model and assimilated into the coarser-resolution 1D model. Lastly, we compare the abilities of AirSWOT and the synthetic SWOT observations to improve spatial and temporal model outputs of WSE. Results indicate 1D model outputs of spatially distributed WSEs improve as observational coverage increases, and improvements in temporal fluctuations in WSEs depend on the number of observations. Furthermore, results reveal that assimilation of AirSWOT observations produces greater error reductions in 1D model outputs than synthetic SWOT observations do, due to lower measurement errors. Both AirSWOT and the synthetic SWOT observations significantly lower spatial and temporal errors in 1D model outputs of WSEs.
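    A single EnKF analysis step, the core operation behind the assimilation described above, fits in a few lines of linear algebra. The sketch below is a hedged toy, not the Tanana River setup: a 3-element state of water-surface elevations with a deliberate prior bias is updated from 2 noisy altimetry-like observations using the perturbed-observation form of the filter; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

n_ens, n_state = 50, 3
truth = np.array([101.0, 100.5, 100.0])       # true WSE (m) at 3 sections
ens = truth[None, :] + rng.normal(0, 0.5, (n_ens, n_state))
ens += 0.3                                    # prior bias to be corrected

H = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])               # observe sections 1 and 3 only
r = 0.05                                      # observation error std (m)
obs = H @ truth + rng.normal(0, r, 2)

# Kalman gain from ensemble statistics
A = ens - ens.mean(axis=0)
P = A.T @ A / (n_ens - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r**2 * np.eye(2))

# Perturbed-observation update of each member
for i in range(n_ens):
    ens[i] += K @ (obs + rng.normal(0, r, 2) - H @ ens[i])

post_err = np.abs(ens.mean(axis=0) - truth)
assert post_err[0] < 0.2 and post_err[2] < 0.2   # observed states corrected
```

    The unobserved middle section is also nudged through the ensemble covariance, which is how sparse SWOT/AirSWOT tracks can improve WSE estimates along a whole reach.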

  17. Modeling the control of the central nervous system over the cardiovascular system using support vector machines.

    PubMed

    Díaz, José; Acosta, Jesús; González, Rafael; Cota, Juan; Sifuentes, Ernesto; Nebot, Àngela

    2018-02-01

    The control of the central nervous system (CNS) over the cardiovascular system (CS) has been modeled using different techniques, such as fuzzy inductive reasoning, genetic fuzzy systems, neural networks, and nonlinear autoregressive techniques; the results obtained so far have been significant, but not solid enough to describe the control response of the CNS over the CS. In this research, support vector machines (SVMs) are used to predict the response of a branch of the CNS, specifically, the one that controls an important part of the cardiovascular system. To do this, five models are developed to emulate the output response of five controllers for the same input signal, the carotid sinus blood pressure (CSBP). These controllers regulate parameters such as heart rate, myocardial contractility, peripheral and coronary resistance, and venous tone. The models are trained using a known set of input-output responses for each controller; a further set of six input-output signals is used for testing each proposed model. The input signals are processed using an all-pass filter, and the accuracy of the control models is evaluated using the percentage value of the normalized mean square error (MSE). Experimental results reveal that SVM models achieve a better estimation of the dynamical behavior of the CNS control than other modeling systems. The best case is the peripheral resistance controller, with an MSE of 1.20e-4%, while the worst case is the heart rate controller, with an MSE of 1.80e-3%. These novel models fit the output response of the CNS reliably and can be used as input to hemodynamic system models in order to predict the behavior of the heart and blood vessels in response to blood pressure variations.
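    The evaluation metric named above, the percentage normalized MSE, is simple to state precisely; a minimal sketch follows, with a toy controller signal standing in for the CSBP-driven responses (the signals are hypothetical, not the paper's data).

```python
import numpy as np

def nmse_percent(y_true, y_pred):
    """Normalized mean square error as a percentage: the MSE divided by
    the variance of the reference signal, times 100."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean((y_true - y_pred) ** 2) / np.var(y_true)

# Toy controller response vs. a near-perfect model prediction
t = np.linspace(0.0, 1.0, 200)
target = np.sin(2.0 * np.pi * t)          # "controller" output signal
pred = target + 0.001 * np.cos(7.0 * t)   # model with a tiny structured error
err = nmse_percent(target, pred)
```

    On this scale, the paper's reported values of 1.20e-4% and 1.80e-3% correspond to model errors that are four to five orders of magnitude smaller than the signal's own variability.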

  18. Modeling health impact of global health programs implemented by Population Services International

    PubMed Central

    2013-01-01

    Background Global health implementing organizations benefit most from health impact estimation models that isolate the individual effects of distributed products and services - a feature not typically found in intervention impact models, but which allow comparisons across interventions and intervention settings. Population Services International (PSI), a social marketing organization, has developed a set of impact models covering seven health program areas, which translate product/service distribution data into impact estimates. Each model's primary output is the number of disability-adjusted life-years (DALYs) averted by an intervention within a specific country and population context. This paper aims to describe the structure and inputs for two types of DALYs averted models, considering the benefits and limitations of this methodology. Methods PSI employs two modeling approaches for estimating health impact: a macro approach for most interventions and a micro approach for HIV, tuberculosis (TB), and behavior change communication (BCC) interventions. Within each intervention country context, the macro approach determines the coverage that one product/service unit provides a population in person-years, whereas the micro approach estimates an individual's risk of infection with and without the product/service unit. The models use these estimations to generate per unit DALYs averted coefficients for each intervention. When multiplied by program output data, these coefficients predict the total number of DALYs averted by an intervention in a country. Results Model outputs are presented by country for two examples: Water Chlorination DALYs Averted Model, a macro model, and the HIV Condom DALYs Averted Model for heterosexual transmission, a micro model. Health impact estimates measured in DALYs averted for PSI interventions on a global level are also presented. 
Conclusions The DALYs averted models offer implementing organizations practical measurement solutions for understanding an intervention's contribution to improving health. These models calculate health impact estimates that reflect the scale and diversity of program operations and intervention settings, and that enable comparisons across health areas and countries. Challenges remain in accounting for intervention synergies, attributing impact to a single organization, and sourcing and updating model inputs. Nevertheless, these models demonstrate how DALYs averted can be viably used by the global health community as a metric for predicting intervention impact using standard program output data. PMID:23902668
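    The macro approach described above is, at its core, a per-unit coefficient multiplied by program output data. A minimal sketch of that arithmetic follows; every numeric value is hypothetical and chosen only to make the structure visible, not taken from PSI's published coefficients.

```python
def dalys_averted(units, coverage_years_per_unit, dalys_per_person_year, effectiveness):
    """Macro-style impact estimate: units distributed, times person-years of
    coverage each unit provides, times the disease burden averted per covered
    person-year. Returns (total DALYs averted, per-unit coefficient)."""
    per_unit = coverage_years_per_unit * dalys_per_person_year * effectiveness
    return units * per_unit, per_unit

total, coeff = dalys_averted(
    units=1_000_000,                 # e.g. bottles of chlorine solution sold
    coverage_years_per_unit=0.08,    # person-years of safe water per bottle
    dalys_per_person_year=0.005,     # burden at risk per covered person-year
    effectiveness=0.3)               # fraction of that burden averted
```

    The per-unit coefficient is computed once per intervention and country context; multiplying it by routine distribution data then yields the impact estimate, which is what makes the method practical for implementing organizations.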

  19. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing a means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. 
Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
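    The uncertainty analysis half of the framework amounts to propagating input uncertainty through the HSI formula and summarizing the spread of scores. A minimal Monte Carlo sketch follows; the geometric-mean HSI form is a common convention, and the component indices and their ranges are hypothetical, not the Everglades models.

```python
import numpy as np

rng = np.random.default_rng(42)

def hsi(si_salinity, si_depth, si_light):
    """Composite habitat suitability as the geometric mean of component
    suitability indices, each on [0, 1]."""
    return (si_salinity * si_depth * si_light) ** (1.0 / 3.0)

# Uncertainty analysis: propagate uniform input uncertainty by Monte Carlo
n = 10_000
sal = rng.uniform(0.6, 1.0, n)   # uncertain salinity suitability
dep = rng.uniform(0.4, 0.9, n)   # uncertain depth suitability
lig = rng.uniform(0.7, 1.0, n)   # uncertain light suitability
scores = hsi(sal, dep, lig)
lo, hi = np.percentile(scores, [2.5, 97.5])   # 95% uncertainty band
```

    Repeating this per raster cell produces the spatially varying uncertainty maps the abstract describes; sites whose bands do not overlap can still be discriminated as good versus poor despite input uncertainty.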

  20. Through the looking glass: making the design and output of economic models useful for setting medical policy.

    PubMed

    Ollendorf, Daniel A; Pearson, Steven D

    2014-01-01

    Economic modeling has rarely been considered to be an essential component of healthcare policy-making in the USA, due to a lack of transparency in model design and assumptions, as well as political interests that equate examination of cost with unfair rationing. The Institute for Clinical and Economic Review has been involved in several efforts to bring economic modeling into public discussion of the comparative value of healthcare interventions, efforts that have evolved over time to suit the needs of multiple public forums. In this article, we review these initiatives and present a template that attempts to 'unpack' model output and present the major drivers of outcomes and cost. We conclude with a series of recommendations for effective presentation of economic models to US policy-makers.

  1. Using multiple linear regression model to estimate thunderstorm activity

    NASA Astrophysics Data System (ADS)

    Suparta, W.; Putro, W. S.

    2017-03-01

    This paper aims to develop a numerical model, based on a nonlinear model, to estimate thunderstorm activity. Meteorological data such as Pressure (P), Temperature (T), Relative Humidity (H), cloud (C), Precipitable Water Vapor (PWV), and precipitation on a daily basis were used in the proposed method. The model was constructed with six configurations of input and one target output. The output tested in this work is the thunderstorm event when one year of data is used. Results showed that the model works well in estimating thunderstorm activity, with the maximum epoch reaching 1000 iterations and the percentage error remaining below 50%. The model also found thunderstorm activity in May and October to be higher than in other months, due to the inter-monsoon season.
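    The title's multiple linear regression with six predictors and one target can be sketched directly with an ordinary least-squares fit. The predictors and coefficients below are synthetic stand-ins for the meteorological inputs, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical standardized daily predictors: P, T, H, C, PWV, precipitation
n = 365
X = rng.normal(size=(n, 6))
true_beta = np.array([0.2, -0.5, 1.1, 0.4, 0.9, 0.1])
y = 2.0 + X @ true_beta + 0.01 * rng.normal(size=n)   # thunderstorm index

# Ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, beta = coef[0], coef[1:]
```

    With a year of daily records the six coefficients are recovered almost exactly here because the synthetic noise is small; real meteorological noise would widen the errors, consistent with the sub-50% percentage error reported above.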

  2. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
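    The "simple variable exchange" mentioned above can be illustrated in miniature: if gage outputs are linear in the applied loads, regressing loads on outputs recovers approximately the inverse of the output-on-load coefficient matrix. The 2x2 balance below is entirely hypothetical, not the Ames MK40 calibration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 2-gage, 2-load balance: outputs are (nearly) linear
# in the applied loads, r = C @ F
C = np.array([[2.0, 0.3],
              [0.1, 1.5]])
F = rng.uniform(-1.0, 1.0, (200, 2))               # applied calibration loads
r = F @ C.T + 1e-6 * rng.normal(size=(200, 2))     # measured gage outputs

# "Exchanged" regression: fit loads as a function of gage outputs
B, *_ = np.linalg.lstsq(r, F, rcond=None)          # F ~= r @ B
load_coeffs = B.T                                  # ~= inv(C) for a linear balance
```

    This is why, as the abstract notes, the connection only holds when the gage outputs behave highly linearly: nonlinear cross-terms break the clean matrix-inverse relationship between the two regression solutions.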

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas

    This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier, in predicting PV power output and cell temperature. The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.
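    The CV-RMSE figure quoted above (3.2% to 8.6%) is the root-mean-square error normalized by the mean of the measured series. A minimal sketch follows; the hourly PV values are invented for illustration.

```python
import numpy as np

def cv_rmse(measured, predicted):
    """Coefficient of variation of the RMSE, in percent: RMSE normalized
    by the mean of the measured series (a common ASHRAE-style metric)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / measured.mean()

# Toy hourly PV power (kW) and a model prediction biased 5% high
p_meas = np.array([0.0, 1.2, 3.4, 4.8, 4.9, 3.1, 1.0, 0.1])
p_pred = 1.05 * p_meas
cv = cv_rmse(p_meas, p_pred)
```

    A uniform 5% overprediction lands the metric in the same mid-single-digit range the abstract reports, which gives a feel for what those percentages mean in practice.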

  4. Modeling and validation of single-chamber microbial fuel cell cathode biofilm growth and response to oxidant gas composition

    NASA Astrophysics Data System (ADS)

    Ou, Shiqi; Zhao, Yi; Aaron, Douglas S.; Regan, John M.; Mench, Matthew M.

    2016-10-01

    This work describes experiments and computational simulations to analyze single-chamber, air-cathode microbial fuel cell (MFC) performance and cathodic limitations in terms of current generation, power output, mass transport, biomass competition, and biofilm growth. Steady-state and transient cathode models were developed and experimentally validated. Two cathode gas mixtures were used to explore oxygen transport in the cathode: the MFCs exposed to a helium-oxygen mixture (heliox) produced higher current and power output than the group of MFCs exposed to air or a nitrogen-oxygen mixture (nitrox), indicating a dependence on gas-phase transport in the cathode. Multi-substance transport, biological reactions, and electrochemical reactions in a multi-layer and multi-biomass cathode biofilm were also simulated in a transient model. The transient model described biofilm growth over 15 days while providing insight into mass transport and cathodic dissolved species concentration profiles during biofilm growth. Simulation results predict that the dissolved oxygen content and diffusion in the cathode are key parameters affecting the power output of the air-cathode MFC system, with greater oxygen content in the cathode resulting in increased power output and fully-matured biomass.

  5. Modeling and validation of single-chamber microbial fuel cell cathode biofilm growth and response to oxidant gas composition

    DOE PAGES

    Ou, Shiqi; Zhao, Yi; Aaron, Douglas S.; ...

    2016-08-15

    This work describes experiments and computational simulations to analyze single-chamber, air-cathode microbial fuel cell (MFC) performance and cathodic limitations in terms of current generation, power output, mass transport, biomass competition, and biofilm growth. Steady-state and transient cathode models were developed and experimentally validated. Two cathode gas mixtures were used to explore oxygen transport in the cathode: the MFCs exposed to a helium-oxygen mixture (heliox) produced higher current and power output than the group of MFCs exposed to air or a nitrogen-oxygen mixture (nitrox), indicating a dependence on gas-phase transport in the cathode. Multi-substance transport, biological reactions, and electrochemical reactions in a multi-layer and multi-biomass cathode biofilm were also simulated in a transient model. The transient model described biofilm growth over 15 days while providing insight into mass transport and cathodic dissolved species concentration profiles during biofilm growth. Lastly, simulation results predict that the dissolved oxygen content and diffusion in the cathode are key parameters affecting the power output of the air-cathode MFC system, with greater oxygen content in the cathode resulting in increased power output and fully-matured biomass.

  6. Integration of artificial intelligence methods and life cycle assessment to predict energy output and environmental impacts of paddy production.

    PubMed

    Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing

    2018-08-01

    Prediction of agricultural energy output and environmental impacts plays an important role in energy management and conservation of the environment, as it can help us to evaluate agricultural energy efficiency, conduct crop production system commissioning, and detect and diagnose faults of crop production systems. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability in seeking optimal solutions in a rapid manner, as well as its use of historical data to predict future agricultural energy use patterns under constraints. This paper conducts energy output and environmental impact prediction of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and the adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output are 51,585.61 MJ kg⁻¹ and 66,112.94 MJ kg⁻¹, respectively, in paddy production. Life Cycle Assessment (LCA) is used to evaluate environmental impacts of paddy production. Results show that, in paddy production, in-farm emission is a hotspot in the global warming, acidification, and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in the ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with R for predicting output energy being 0.860 and, for environmental impacts, varying from 0.944 to 0.997. Results indicate that the multi-level ANFIS is a useful tool for managers in large-scale planning to forecast energy output and environmental indices of agricultural production systems, owing to its higher speed of computation compared to the ANN model, despite the ANN's higher accuracy.

  7. High-frequency output characteristics of AlGaAs/GaAs heterojunction bipolar transistors for large-signal applications

    NASA Astrophysics Data System (ADS)

    Chen, J.; Gao, G. B.; Ünlü, M. S.; Morkoç, H.

    1991-11-01

    High-frequency IC-VCE output characteristics of bipolar transistors, derived from calculated device cutoff frequencies, are reported. The generation of high-frequency output characteristics from device design specifications represents a novel bridge between microwave circuit design and device design: the microwave performance of simulated device structures can be analyzed, or tailored transistor device structures can be designed to fit specific circuit applications. The details of our compact transistor model are presented, highlighting the high-current base-widening (Kirk) effect. The derivation of the output characteristics from the modeled cutoff frequencies is then presented, and the computed characteristics of an AlGaAs/GaAs heterojunction bipolar transistor operating at 10 GHz are analyzed. Applying the derived output characteristics to microwave circuit design, we examine large-signal class A and class B amplification.

  8. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
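    Two of REPTool's core ingredients, Latin Hypercube Sampling and relative variance contribution, can be sketched compactly. The toy "raster model" below is a simple linear combination chosen so the expected contributions are known in advance; it is not REPTool's implementation, only an illustration of the two ideas.

```python
import numpy as np

rng = np.random.default_rng(3)

def latin_hypercube(n_samples, n_dims):
    """One uniform sample per stratum [i/n, (i+1)/n) in each dimension,
    with strata independently shuffled per dimension."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_dims):
        rng.shuffle(u[:, j])
    return u

# Toy linear "raster model": y = 3*x1 + x2, inputs uniform on (0, 1)
n = 2000
u = latin_hypercube(n, 2)
x1, x2 = u[:, 0], u[:, 1]
y = 3.0 * x1 + 1.0 * x2

# Relative variance contribution of each input (linear model case)
v1, v2 = 9.0 * np.var(x1), 1.0 * np.var(x2)
rvc1 = v1 / (v1 + v2)   # x1 should contribute ~90% of output variance
```

    Because the coefficient on `x1` is three times larger, `x1` accounts for roughly nine tenths of the output variance; this is the kind of apportionment between input-data error and coefficient error that the Relative Variance Contribution method reports per raster cell.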

  9. Utilizing Mars Global Reference Atmospheric Model (Mars-GRAM 2005) to Evaluate Entry Probe Mission Sites

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.

    2008-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering-level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. The "auxiliary profile" option is one new feature of Mars-GRAM 2005. This option uses an input file of temperature and density versus altitude to replace the mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. Any source of data or alternate model output can be used to generate an auxiliary profile. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) and a global Thermal Emission Spectrometer (TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This database contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude-longitude bins and 15 degree Ls bins, for each of three Mars years of TES nadir data. The Mars Science Laboratory (MSL) sites are used as a sample of how Mars-GRAM could be a valuable tool for planning of future Mars entry probe missions. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate MSL landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  10. Modeling of a resonant heat engine

    NASA Astrophysics Data System (ADS)

    Preetham, B. S.; Anderson, M.; Richards, C.

    2012-12-01

    A resonant heat engine in which the piston assembly is replaced by a sealed elastic cavity is modeled and analyzed. A nondimensional lumped-parameter model is derived and used to investigate the factors that control the performance of the engine. The thermal efficiency predicted by the model agrees with that predicted from the relation for the Otto cycle based on compression ratio. The predictions show that for a fixed mechanical load, increasing the heat input results in increased efficiency. The output power and power density are shown to depend on the loading for a given heat input. The loading condition for maximum output power is different from that required for maximum power density.
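    The Otto-cycle comparison above rests on the standard compression-ratio relation for thermal efficiency, which can be stated and checked in a couple of lines. The values below are generic air-standard numbers, not the resonant engine's parameters.

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal Otto-cycle thermal efficiency from the compression ratio r:
    eta = 1 - r**(1 - gamma), with gamma the ratio of specific heats."""
    return 1.0 - r ** (1.0 - gamma)

eta = otto_efficiency(8.0)    # air-standard cycle, r = 8
```

    Efficiency rises monotonically with compression ratio, which is why the paper's agreement between its lumped-parameter prediction and the Otto relation is a meaningful validation of the cavity model.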

  11. Modeling a dielectric elastomer as driven by triboelectric nanogenerator

    NASA Astrophysics Data System (ADS)

    Chen, Xiangyu; Jiang, Tao; Wang, Zhong Lin

    2017-01-01

    By integrating a triboelectric nanogenerator (TENG) and a thin film dielectric elastomer actuator (DEA), the DEA can be directly powered and controlled by the output of the TENG, which demonstrates a self-powered actuation system for various practical applications in the fields of electronic skin and soft robotics. This paper describes a method to construct a physical model for this integrated TENG-DEA system on the basis of nonequilibrium thermodynamics and electrostatic induction theory. The model can precisely simulate the influence of both viscoelasticity and current leakage on the output performance of the TENG, which can help us to better understand the interaction between TENG and DEA devices. Accordingly, the established electric field, the deformation strain of the DEA, and the output current from the TENG are systematically analyzed using this model. A comparison between real measurements and simulation results confirms that the proposed model can predict the dynamic response of the DEA driven by contact electrification and can also quantitatively analyze the relaxation of the tribo-induced strain due to the leakage behavior. Hence, the proposed model in this work could serve as guidance for optimizing such devices in future studies.

  12. Feasibility study of dish/stirling power systems in Turkey

    NASA Astrophysics Data System (ADS)

    Zilanlı, Gülin Acarol; Eray, Aynur

    2017-06-01

    In this study, two different commercial dish/Stirling systems, SES (Stirling Energy Systems) and WGA-ADDS (WGAssociates - Advanced Dish Development System), are modeled using the "System Advisor Model" (SAM) software in designated settlement areas. Both systems are modeled for Albuquerque, New Mexico, USA, where they were designed, and for the Turkish provinces of Ankara, Van, Muğla, Mersin, Urfa, and Konya. First, the dish/Stirling system is optimized according to the power output values and the system loss parameters. Then, the layout of the solar field is designed with an installed capacity of 600 kW for both the SES and WGA-ADDS systems. Upon securing the most suitable layout, the system is modeled for the aforementioned settlements using the optimum output values gathered from the parametric analysis. As a result of the simulation studies, the applicability of this model is discussed in terms of power output and efficiency. Although Turkey is located in an area called "the sun belt," where solar energy technologies can be used, there is no advanced application of these systems in the country. This study aims to discuss the application of these systems in detail and to pave the way for future studies in this field.

  13. Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldhaber, Steve; Holland, Marika

    The major goal of this project was to contribute improvements to the infrastructure of an Earth System Model in order to support research in the Multiscale Methods for Accurate, Efficient, and Scale-Aware Models of the Earth System project. In support of this, the NCAR team accomplished two main tasks: improving input/output performance of the model and improving atmospheric model simulation quality. Improvement of the performance and scalability of data input and diagnostic output within the model required a new infrastructure which can efficiently handle the unstructured grids common in multiscale simulations. This allows for a more computationally efficient model, enabling more years of Earth System simulation. The quality of the model simulations was improved by reducing grid-point noise in the spectral element version of the Community Atmosphere Model (CAM-SE). This was achieved by running the physics of the model using grid-cell data on a finite-volume grid.

  14. Reduced-order modeling for hyperthermia control.

    PubMed

    Potocki, J K; Tharp, H S

    1992-12-01

    This paper analyzes the feasibility of using reduced-order modeling techniques in the design of multiple-input, multiple-output (MIMO) hyperthermia temperature controllers. State space thermal models are created based upon a finite difference expansion of the bioheat transfer equation model of a scanned focused ultrasound system (SFUS). These thermal state space models are reduced using the balanced realization technique, and an order reduction criterion is tabulated. Results show that a drastic reduction in model dimension can be achieved using the balanced realization. The reduced-order model is then used to design a reduced-order optimal servomechanism controller for a two-scan input, two thermocouple output tissue model. In addition, a full-order optimal servomechanism controller is designed for comparison and validation purposes. These two controllers are applied to a variety of perturbed tissue thermal models to test the robust nature of the reduced-order controller. A comparison of the two controllers validates the use of open-loop balanced reduced-order models in the design of MIMO hyperthermia controllers.
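    The balanced realization technique named above ranks states by their Hankel singular values, computed from the controllability and observability Gramians; states with small values can be truncated with little input-output error. A minimal numpy-only sketch follows, using a small stable state-space model with hypothetical values, not the bioheat transfer model.

```python
import numpy as np

def gramian(A, B):
    """Solve the continuous Lyapunov equation A*P + P*A.T + B*B.T = 0
    by Kronecker vectorization (adequate for small state dimensions)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(M, -(B @ B.T).ravel()).reshape(n, n)

# Small stable thermal-like state-space model (values hypothetical)
A = np.diag([-1.0, -5.0, -50.0])
B = np.array([[1.0], [1.0], [1.0]])
C = np.array([[1.0, 1.0, 1.0]])

Wc = gramian(A, B)        # controllability Gramian
Wo = gramian(A.T, C.T)    # observability Gramian

# Hankel singular values rank the states for truncation
hsv = np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]
```

    The rapid decay of the Hankel singular values for models with widely separated time constants, like the fast mode at -50 here, is exactly why the paper finds that a drastic reduction in model dimension is possible.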

  15. Assimilation of a knowledge base and physical models to reduce errors in passive-microwave classifications of sea ice

    NASA Technical Reports Server (NTRS)

    Maslanik, J. A.; Key, J.

    1992-01-01

    An expert system framework has been developed to classify sea ice types using satellite passive microwave data, an operational classification algorithm, spatial and temporal information, ice types estimated from a dynamic-thermodynamic model, output from a neural network that detects the onset of melt, and knowledge about season and region. The rule base imposes boundary conditions upon the ice classification, modifies parameters in the ice algorithm, determines a `confidence' measure for the classified data, and under certain conditions, replaces the algorithm output with model output. Results demonstrate the potential power of such a system for minimizing overall error in the classification and for providing non-expert data users with a means of assessing the usefulness of the classification results for their applications.

  16. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
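    The distinction drawn above is that an output error method compares measured output against the model's own simulated output, rather than against one-step predictions built from measured data. A minimal sketch with a first-order model and a grid search follows; the model structure, noise level, and parameter values are hypothetical, not the MIT transformer model.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(a, u, y0=0.0):
    """Simulate the first-order model y[k+1] = a*y[k] + (1-a)*u[k]
    (unit steady-state gain), returning the output sequence."""
    y = y0
    out = np.empty(len(u))
    for k, uk in enumerate(u):
        y = a * y + (1.0 - a) * uk
        out[k] = y
    return out

# Hypothetical "top-oil temperature" data: true a = 0.9, plus sensor noise
u = rng.random(300)
y_meas = simulate(0.9, u) + 0.01 * rng.normal(size=300)

# Output-error estimate: minimize the gap between measured output and
# the *simulated* model output over a parameter grid
grid = np.linspace(0.5, 0.99, 491)
cost = [np.mean((y_meas - simulate(a, u)) ** 2) for a in grid]
a_hat = grid[int(np.argmin(cost))]
```

    Because the cost is built from the simulated trajectory, measurement noise does not bias the estimate the way it can in equation-error least squares, which is the robustness property the paper exploits.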

  17. Robust adaptive controller design for a class of uncertain nonlinear systems using online T-S fuzzy-neural modeling approach.

    PubMed

    Chien, Yi-Hsing; Wang, Wei-Yen; Leu, Yih-Guang; Lee, Tsu-Tian

    2011-04-01

    This paper proposes a novel method of online modeling and control via the Takagi-Sugeno (T-S) fuzzy-neural model for a class of uncertain nonlinear systems with some kinds of outputs. Although studies about adaptive T-S fuzzy-neural controllers have been made on some nonaffine nonlinear systems, little is known about the more complicated uncertain nonlinear systems. Because the nonlinear functions of the systems are uncertain, traditional T-S fuzzy control methods can model and control them only with great difficulty, if at all. Instead of modeling these uncertain functions directly, we propose that a T-S fuzzy-neural model approximates a so-called virtual linearized system (VLS) of the system, which includes modeling errors and external disturbances. We also propose an online identification algorithm for the VLS and put significant emphasis on robust tracking controller design using an adaptive scheme for the uncertain systems. Moreover, the stability of the closed-loop systems is proven by using strictly positive real Lyapunov theory. The proposed overall scheme guarantees that the outputs of the closed-loop systems asymptotically track the desired output trajectories. To illustrate the effectiveness and applicability of the proposed method, simulation results are given in this paper.

  18. Computing the modal mass from the state space model in combined experimental-operational modal analysis

    NASA Astrophysics Data System (ADS)

    Cara, Javier

    2016-05-01

    Modal parameters comprise natural frequencies, damping ratios, modal vectors and modal masses. In a theoretic framework, these parameters are the basis for the solution of vibration problems using the theory of modal superposition. In practice, they can be computed from input-output vibration data: the usual procedure is to estimate a mathematical model from the data and then to compute the modal parameters from the estimated model. The most popular models for input-output data are based on the frequency response function, but in recent years the state space model in the time domain has become popular among researchers and practitioners of modal analysis with experimental data. In this work, the equations to compute the modal parameters from the state space model when input and output data are available (like in combined experimental-operational modal analysis) are derived in detail using invariants of the state space model: the equations needed to compute natural frequencies, damping ratios and modal vectors are well known in the operational modal analysis framework, but the equation needed to compute the modal masses has not generated much interest in technical literature. These equations are applied to both a numerical simulation and an experimental study in the last part of the work.
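For the eigenvalue part of the computation described above, natural frequencies and damping ratios follow directly from the eigenvalues of the state matrix; modal masses additionally require the full (A, B, C) realization and a scaling convention, which is the contribution of the paper. A minimal sketch of the eigenvalue part on a synthetic single-degree-of-freedom system (the system and sampling interval are invented for illustration):

```python
import numpy as np
from scipy.linalg import expm

# Single-DOF oscillator: natural frequency 2 Hz, damping ratio 1%
fn, zeta = 2.0, 0.01
wn = 2 * np.pi * fn
Ac = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
dt = 0.01
Ad = expm(Ac * dt)                     # discrete-time state matrix

# Modal parameters from the eigenvalues of the discrete A matrix
mu = np.linalg.eigvals(Ad)
lam = np.log(mu) / dt                  # map back to continuous time
lam = lam[np.imag(lam) > 0][0]         # keep one of the conjugate pair
f_est = abs(lam) / (2 * np.pi)         # natural frequency (Hz)
zeta_est = -np.real(lam) / abs(lam)    # damping ratio
print(f_est, zeta_est)
```

Because the discretization is exact for a linear system, the recovered frequency and damping match the constructed oscillator to machine precision.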

  19. The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter

We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate with 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). In conclusion, the relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.

  20. The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model

    DOE PAGES

    Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter

    2018-02-27

We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM) land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via a new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth, leading to a sparse, high-dimensional PC surrogate with 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information, leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive, with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). In conclusion, the relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.
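The record does not spell out the WIBCS machinery, but the variance-based sensitivity indices such a GSA extracts can be illustrated with a plain Saltelli-style Monte Carlo estimator on the standard Ishigami test function. This is a generic stand-in, not the ELM surrogate:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function; known first-order Sobol indices
    (a=7, b=0.1): S1 = 0.314, S2 = 0.442, S3 = 0."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(42)
n, d = 2**15, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # A with column i taken from B
    S1.append(np.mean(fB * (ishigami(ABi) - fA)) / var)
print(np.round(S1, 3))
```

A PC surrogate, as used in the paper, delivers the same indices analytically from the expansion coefficients, which is why only ~3,000 model evaluations suffice there.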

  1. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine

    PubMed Central

    Sen-Bhattacharya, Basabdatta; Serrano-Gotarredona, Teresa; Balassa, Lorinc; Bhattacharya, Akash; Stokes, Alan B.; Rowley, Andrew; Sugiarto, Indar; Furber, Steve

    2017-01-01

We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)—brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz) implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. 
Scalability of the framework is demonstrated by a multi-node architecture consisting of three “nodes,” where each node is the “basic building block” LGN model. This 420 neuron model is tested with synthetic periodic stimulus at 10 Hz to all the nodes. The model output is the average of the outputs from all nodes, and conforms to the above-mentioned predictions of each node. Power consumption for model simulation on SpiNNaker is ≪1 W. PMID:28848380

  2. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine.

    PubMed

    Sen-Bhattacharya, Basabdatta; Serrano-Gotarredona, Teresa; Balassa, Lorinc; Bhattacharya, Akash; Stokes, Alan B; Rowley, Andrew; Sugiarto, Indar; Furber, Steve

    2017-01-01

We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a "basic building block" for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. Synaptic layout of the model is consistent with biology. The model response is validated with existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)-brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10-50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz) implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. 
Scalability of the framework is demonstrated by a multi-node architecture consisting of three "nodes," where each node is the "basic building block" LGN model. This 420 neuron model is tested with synthetic periodic stimulus at 10 Hz to all the nodes. The model output is the average of the outputs from all nodes, and conforms to the above-mentioned predictions of each node. Power consumption for model simulation on SpiNNaker is ≪1 W.
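The 0.1 ms resolution constraint mentioned in both records comes from the stiffness of the Izhikevich equations. A minimal forward-Euler integration of a single regular-spiking Izhikevich neuron (standard textbook parameters and a constant input current, not the LGN circuit itself) shows the scheme:

```python
import numpy as np

# Izhikevich regular-spiking parameters (standard published values)
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 0.1                       # ms, matching the resolution used in the paper
T = int(1000 / dt)             # simulate 1 s of model time
v, u = -65.0, b * -65.0
I = 10.0                       # constant input current (illustrative)
spikes = []
for t in range(T):
    # Forward-Euler step of the Izhikevich equations
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike detected: record and reset
        spikes.append(t * dt)
        v, u = c, u + d
print(len(spikes), "spikes in 1 s")
```

With dt = 1 ms the same loop can overshoot the fast upstroke of v, which is why sub-millisecond steps (and hence slower-than-real-time execution on SpiNNaker) are needed here.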

  3. Model input and output files for the simulation of time of arrival of landfill leachate at the water table, Municipal Solid Waste Landfill Facility, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso County, Texas

    USGS Publications Warehouse

    Abeyta, Cynthia G.; Frenzel, Peter F.

    1999-01-01

    This report contains listings of model input and output files for the simulation of the time of arrival of landfill leachate at the water table from the Municipal Solid Waste Landfill Facility (MSWLF), about 10 miles northeast of downtown El Paso, Texas. This simulation was done by the U.S. Geological Survey in cooperation with the U.S. Department of the Army, U.S. Army Air Defense Artillery Center and Fort Bliss, El Paso, Texas. The U.S. Environmental Protection Agency-developed Hydrologic Evaluation of Landfill Performance (HELP) and Multimedia Exposure Assessment (MULTIMED) computer models were used to simulate the production of leachate by a landfill and transport of landfill leachate to the water table. Model input data files used with and output files generated by the HELP and MULTIMED models are provided in ASCII format on a 3.5-inch 1.44-megabyte IBM-PC compatible floppy disk.

  4. H∞ output tracking control of uncertain and disturbed nonlinear systems based on neural network model

    NASA Astrophysics Data System (ADS)

    Li, Chengcheng; Li, Yuefeng; Wang, Guanglin

    2017-07-01

    The work presented in this paper seeks to address the tracking problem for uncertain continuous nonlinear systems with external disturbances. The objective is to obtain a model that uses a reference-based output feedback tracking control law. The control scheme is based on neural networks and a linear difference inclusion (LDI) model, and a PDC structure and H∞ performance criterion are used to attenuate external disturbances. The stability of the whole closed-loop model is investigated using the well-known quadratic Lyapunov function. The key principles of the proposed approach are as follows: neural networks are first used to approximate nonlinearities, to enable a nonlinear system to then be represented as a linearised LDI model. An LMI (linear matrix inequality) formula is obtained for uncertain and disturbed linear systems. This formula enables a solution to be obtained through an interior point optimisation method for some nonlinear output tracking control problems. Finally, simulations and comparisons are provided on two practical examples to illustrate the validity and effectiveness of the proposed method.
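The quadratic Lyapunov argument used in this paper can be checked numerically for any fixed linear vertex system of the LDI: solve the Lyapunov equation and test positive definiteness of the solution. A sketch with an arbitrary Hurwitz matrix (not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # an example Hurwitz matrix
Q = np.eye(2)
# Solve A'P + PA = -Q; A is stable iff the solution P is positive definite
P = solve_continuous_lyapunov(A.T, -Q)
eigs = np.linalg.eigvalsh(P)
print("P > 0:", bool(np.all(eigs > 0)))
```

For the full LDI/PDC design with H∞ attenuation, the same condition becomes an LMI that must hold simultaneously at all vertices, which is what the interior-point step in the paper solves.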

  5. Improved first-order uncertainty method for water-quality modeling

    USGS Publications Warehouse

    Melching, C.S.; Anmangandla, S.

    1992-01-01

Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: the main drawback of Monte Carlo simulation is computational time, while first-order analysis raises questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation using less computer time - by two orders of magnitude - regardless of the probability distributions assumed for the uncertain model parameters.
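The Streeter-Phelps critical deficit targeted by both methods has a closed form when the initial deficit is zero, which makes a small Monte Carlo reference easy to set up. The rate coefficients and distributions below are illustrative, not those of the cited examples:

```python
import numpy as np

def critical_deficit(kd, ka, L0):
    """Streeter-Phelps critical DO deficit for initial deficit D0 = 0.
    kd: deoxygenation rate (1/d), ka: reaeration rate (1/d),
    L0: initial BOD (mg/L)."""
    tc = np.log(ka / kd) / (ka - kd)        # time of minimum DO
    return (kd / ka) * L0 * np.exp(-kd * tc)

# Deterministic check with central parameter values
print(critical_deficit(0.3, 0.7, 20.0))     # critical deficit in mg/L

# Monte Carlo over uncertain rate coefficients (illustrative lognormals)
rng = np.random.default_rng(1)
kd = rng.lognormal(np.log(0.3), 0.2, 100_000)
ka = rng.lognormal(np.log(0.7), 0.2, 100_000)
Dc = critical_deficit(kd, ka, 20.0)
prob = np.mean(Dc > 5.0)                    # exceedance probability of 5 mg/L
print("P(Dc > 5 mg/L) ≈", prob)
```

The advanced first-order method of the paper approximates exactly this kind of exceedance probability by linearizing at the output level of interest rather than at the central values.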

  6. Uncertainty importance analysis using parametric moment ratio functions.

    PubMed

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2014-02-01

This article presents a new importance analysis framework, called parametric moment ratio function, for measuring the reduction of model output uncertainty when the distribution parameters of inputs are changed, and the emphasis is put on the mean and variance ratio functions with respect to the variances of model inputs. The proposed concepts efficiently guide the analyst to achieve a targeted reduction on the model output mean and variance by operating on the variances of model inputs. The unbiased and progressive unbiased Monte Carlo estimators are also derived for the parametric mean and variance ratio functions, respectively. Only a single set of samples is needed to implement the proposed importance analysis with these estimators, so the computational cost is independent of the input dimensionality. An analytical test example with highly nonlinear behavior is introduced for illustrating the engineering significance of the proposed importance analysis technique and verifying the efficiency and convergence of the derived Monte Carlo estimators. Finally, the moment ratio function is applied to a planar 10-bar structure for achieving a targeted 50% reduction of the model output variance. © 2013 Society for Risk Analysis.

  7. Stochastic process approximation for recursive estimation with guaranteed bound on the error covariance

    NASA Technical Reports Server (NTRS)

    Menga, G.

    1975-01-01

An approach is proposed for the design of approximate, fixed-order, discrete-time realizations of stochastic processes from the output covariance over a finite time interval. No restrictive assumptions are imposed on the process; it can be nonstationary and lead to a high-dimension realization. Classes of fixed-order models are defined, having the joint covariance matrix of the combined vector of the outputs in the interval of definition greater than or equal to the process covariance (the difference matrix is nonnegative definite). The design is achieved by minimizing, in one of those classes, a measure of the approximation between the model and the process evaluated by the trace of the difference of the respective covariance matrices. Models belonging to these classes have the notable property that, under the same measurement system and estimator structure, the output estimation error covariance matrix computed on the model is an upper bound on the corresponding covariance on the real process. An application of the approach is illustrated by the modeling of random meteorological wind profiles from the statistical analysis of historical data.

  8. Variation in the Mississippi River Plume from Data Synthesis of Model Outputs and MODIS Imagery

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, C.; Kolker, A.; Chu, P. Y.

    2017-12-01

    Understanding the Mississippi River (MR) plume's interaction with the open ocean is crucial for understanding many processes in the Gulf of Mexico. Though the Mississippi River and its delta and plume have been studied extensively, recent archives of model products and satellite imagery have allowed us to highlight patterns in plume behavior over the last two decades through large scale data synthesis. Using 8 years of USGS discharge data and Landsat imagery, we identified the spatial extent, geographic patterns, depth, and freshwater concentration of the MR plume across seasons and years. Using 20 years of HYCOM (HYbrid Coordinate Ocean Model) analysis and reanalysis model output, and several years of NGOFS FVCOM model outputs, we mapped the minimum and maximum spatial area of the MR plume, and its varied extent east and west. From the synthesis and analysis of these data, the statistical probability of the MR plume's spatial area and geographical extent were computed. Measurements of the MR plume and its response to river discharge may predict future behavior and provide a path forward to understanding MR plume influence on nearby ecosystems.

  9. Impact of a statistical bias correction on the projected simulated hydrological changes obtained from three GCMs and two hydrology models

    NASA Astrophysics Data System (ADS)

    Hagemann, Stefan; Chen, Cui; Haerter, Jan O.; Gerten, Dieter; Heinke, Jens; Piani, Claudio

    2010-05-01

Future climate model scenarios depend crucially on their adequate representation of the hydrological cycle. Within the European project "Water and Global Change" (WATCH) special care is taken to couple state-of-the-art climate model output to a suite of hydrological models. This coupling is expected to lead to a better assessment of changes in the hydrological cycle. However, due to the systematic model errors of climate models, their output is often not directly applicable as input for hydrological models. Thus, the methodology of a statistical bias correction has been developed, which can be used for correcting climate model output to produce internally consistent fields that have the same statistical intensity distribution as the observations. As observations, global re-analysed daily data of precipitation and temperature are used that are obtained in the WATCH project. We will apply the bias correction to global climate model data of precipitation and temperature from the GCMs ECHAM5/MPIOM, CNRM-CM3 and LMDZ-4, and intercompare the bias corrected data to the original GCM data and the observations. Then, the original and the bias corrected GCM data will be used to force two global hydrology models: (1) the hydrological model of the Max Planck Institute for Meteorology (MPI-HM) consisting of the Simplified Land surface (SL) scheme and the Hydrological Discharge (HD) model, and (2) the dynamic vegetation model LPJmL operated by the Potsdam Institute for Climate Impact Research. The impact of the bias correction on the projected simulated hydrological changes will be analysed, and the resulting behaviour of the two hydrology models will be compared.
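A common implementation of the kind of statistical bias correction described here is empirical quantile mapping. The sketch below uses synthetic data and is not the WATCH methodology in detail; it shows the core idea of mapping the model's intensity distribution onto the observed one:

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut):
    """Empirical quantile mapping: map each model value through the
    model-historical CDF onto the observed distribution."""
    ranks = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs, ranks)

rng = np.random.default_rng(7)
obs = rng.gamma(2.0, 2.0, 5000)          # "observed" daily precipitation
model_hist = obs * 1.5 + 1.0             # model with multiplicative + additive bias
corrected = quantile_map(model_hist, obs, model_hist)  # correct the same period as a check
print(obs.mean(), corrected.mean())      # means should now agree closely
```

Applied to a future period, the same mapping preserves the model's projected change signal while removing the systematic distributional bias, which is what makes the corrected fields usable as hydrology-model forcing.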

  10. Modelled vs. reconstructed past fire dynamics - how can we compare?

    NASA Astrophysics Data System (ADS)

    Brücher, Tim; Brovkin, Victor; Kloster, Silvia; Marlon, Jennifer R.; Power, Mitch J.

    2015-04-01

Fire is an important process that affects climate through changes in CO2 emissions, albedo, and aerosols (Ward et al. 2012). Fire-history reconstructions from charcoal accumulations in sediment indicate that biomass burning has increased since the Last Glacial Maximum (Power et al. 2008; Marlon et al. 2013). Recent comparisons with transient climate model output suggest that this increase in global fire activity is linked primarily to variations in temperature and secondarily to variations in precipitation (Daniau et al. 2012). In this study, we discuss the best way to compare global fire model output with charcoal records. Fire models generate quantitative output for burned area and fire-related emissions of CO2, whereas charcoal data indicate relative changes in biomass burning for specific regions and time periods only. However, models can be used to relate trends in charcoal data to trends in quantitative changes in burned area or fire carbon emissions. Charcoal records are often reported as Z-scores (Power et al. 2008). Since Z-scores are non-linear power transformations of charcoal influxes, we must evaluate if, for example, a two-fold increase in the standardized charcoal reconstruction corresponds to a 2- or 200-fold increase in the area burned. In our study we apply the Z-score metric to the model output. This allows us to test how well the model can quantitatively reproduce the charcoal-based reconstructions and how Z-score metrics affect the statistics of model output. The Global Charcoal Database (GCD version 2.5; www.gpwg.org/gpwgdb.html) is used to determine regional and global paleofire trends from 218 sedimentary charcoal records covering part or all of the last 8 ka BP. To retrieve regional and global composites of changes in fire activity over the Holocene, the time series of Z-scores are linearly averaged. A coupled climate-carbon cycle model, CLIMBA (Brücher et al. 2014), is used for this study. 
It consists of the CLIMBER-2 Earth system model of intermediate complexity and the JSBACH land component of the Max Planck Institute Earth System Model. The fire algorithm in JSBACH assumes a constant annual lightning cycle as the sole fire ignition mechanism (Arora and Boer 2005). To eliminate data processing differences as a source for potential discrepancies, the processing of both reconstructed and modeled data, including e.g. normalisation with respect to a given base period and aggregation of time series was done in exactly the same way. Here, we compare the aggregated time series on a hemispheric and regional scale.
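The Z-score standardization applied identically to charcoal records and model output can be sketched as below. This shows only the standardization step; the GCD procedure also applies a Box-Cox-type transform first, and the base period here is an assumption for illustration:

```python
import numpy as np

def zscores(series, base_mask):
    """Standardize a time series against a base period, as done for
    both charcoal composites and model output."""
    base = series[base_mask]
    return (series - base.mean()) / base.std(ddof=0)

t = np.arange(0, 8000, 10)             # years BP
x = 0.5 + 0.0001 * (8000 - t)          # synthetic increasing burning trend
base = (t >= 200) & (t <= 2000)        # an assumed base period
z = zscores(x, base)
print(round(z[base].mean(), 6), round(z[base].std(), 6))
```

Because the transform is relative to the base-period mean and spread, a doubling of the Z-score says nothing by itself about the factor change in burned area, which is exactly the ambiguity the study probes by pushing model output through the same metric.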

  11. Simulating Pacific Northwest Forest Response to Climate Change: How We Made Model Results Useful for Vulnerability Assessments

    NASA Astrophysics Data System (ADS)

    Kim, J. B.; Kerns, B. K.; Halofsky, J.

    2014-12-01

GCM-based climate projections and downscaled climate data proliferate, and there are many climate-aware vegetation models in use by researchers. Yet application of fine-scale DGVM based simulation output in national forest vulnerability assessments is not common, because there are technical, administrative and social barriers for their use by managers and policy makers. As part of a science-management climate change adaptation partnership, we performed simulations of vegetation response to climate change for four national forests in the Blue Mountains of Oregon using the MC2 dynamic global vegetation model (DGVM) for use in vulnerability assessments. Our simulation results under business-as-usual scenarios suggest starkly different future forest conditions for three out of the four national forests in the study area, making their adoption by forest managers a potential challenge. However, using DGVM output to structure discussion of potential vegetation changes provides a suitable framework to discuss the dynamic nature of vegetation change compared to using more commonly available model output (e.g. species distribution models). From the onset, we planned and coordinated our work with national forest managers to maximize the utility and the consideration of the simulation results in planning. Key lessons from this collaboration were: (1) structured and strategic selection of a small number of climate change scenarios that capture the range of variability in future conditions simplified results; (2) collecting and integrating data from managers for use in simulations increased support and interest in applying output; (3) a structured, regionally focused, and hierarchical calibration of the DGVM produced well-validated results; (4) simple approaches to quantifying uncertainty in simulation results facilitated communication; and (5) interpretation of model results in a holistic context in relation to multiple lines of evidence produced balanced guidance. 
This latter point demonstrates the importance of using model output as a forum for discussion along with other information, rather than using model output in an inappropriately predictive sense. These lessons are currently being applied to other national forests in the Pacific Northwest to contribute to vulnerability assessments.

  12. Reduced order modeling and active flow control of an inlet duct

    NASA Astrophysics Data System (ADS)

    Ge, Xiaoqing

Many aerodynamic applications require the modeling of compressible flows in or around a body, e.g., the design of aircraft, inlet or exhaust duct, wind turbines, or tall buildings. Traditional methods use wind tunnel experiments and computational fluid dynamics (CFD) to investigate the spatial and temporal distribution of the flows. Although they provide a great deal of insight into the essential characteristics of the flow field, they are not suitable for control analysis and design due to the high physical/computational cost. Many model reduction methods have been studied to reduce the complexity of the flow model. There are two main approaches: linearization based input/output modeling and proper orthogonal decomposition (POD) based model reduction. The former captures mostly the local behavior near a steady state, which is suitable to model laminar flow dynamics. The latter obtains a reduced order model by projecting the governing equation onto an "optimal" subspace and is able to model complex nonlinear flow phenomena. In this research we investigate various model reduction approaches and compare them in flow modeling and control design. We propose an integrated model-based control methodology and apply it to the reduced order modeling and active flow control of compressible flows within a very aggressive (length to exit diameter ratio, L/D, of 1.5) inlet duct and its upstream contraction section. The approach systematically applies reduced order modeling, estimator design, sensor placement and control design to improve the aerodynamic performance. The main contribution of this work is the development of a hybrid model reduction approach that attempts to combine the best features of input/output model identification and POD method. We first identify a linear input/output model by using a subspace algorithm. We next project the difference between CFD response and the identified model response onto a set of POD basis vectors. 
This trajectory is fit to a nonlinear dynamical model to augment the linear input/output model. Thus, the full system is decomposed into a dominant linear subsystem and a low order nonlinear subsystem. The hybrid model is then used for control design and compared with other modeling methods in CFD simulations. Numerical results indicate that the hybrid model accurately predicts the nonlinear behavior of the flow for a 2D diffuser contraction section model. It also performs best in terms of feedback control design and learning control. Since some outputs of interest (e.g., the AIP pressure recovery) are not observable during normal operations, static and dynamic estimators are designed to recreate the information from available sensor measurements. The latter also provides a state estimation for feedback controller. Based on the reduced order models and estimators, different controllers are designed to improve the aerodynamic performance of the contraction section and inlet duct. The integrated control methodology is evaluated with CFD simulations. Numerical results demonstrate the feasibility and efficacy of the active flow control based on reduced order models. Our reduced order models not only generate a good approximation of the nonlinear flow dynamics over a wide input range, but also help to design controllers that significantly improve the flow response. The tools developed for model reduction, estimator and control design can also be applied to wind tunnel experiments.
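The POD step at the heart of this thesis amounts to an SVD of a snapshot matrix, where each column is the flow state at one time instant. A minimal sketch on synthetic two-mode data (invented for illustration):

```python
import numpy as np

# Snapshot matrix: each column is a "flow state" at one time instant,
# built here from two spatial modes with time-varying amplitudes.
x = np.linspace(0, 1, 200)
snapshots = np.column_stack([np.sin(2*np.pi*x)*np.cos(0.1*k) +
                             0.3*np.sin(6*np.pi*x)*np.sin(0.1*k)
                             for k in range(50)])

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)   # modes capturing 99% energy
Phi = U[:, :r]                               # POD basis
reduced = Phi.T @ snapshots                  # project onto the basis
recon = Phi @ reduced
rel_err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(r, rel_err)
```

In the hybrid approach described above, the residual between the CFD response and the identified linear model is projected onto such a basis, so the nonlinear augmentation only has to model a handful of modal coordinates.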

  13. The Geothermal Probabilistic Cost Model with an Application to a Geothermal Reservoir at Heber, California

    NASA Technical Reports Server (NTRS)

    Orren, L. H.; Ziman, G. M.; Jones, S. C.

    1981-01-01

A financial accounting model that incorporates physical and institutional uncertainties was developed for geothermal projects. Among the uncertainties it can handle are well depth, flow rate, fluid temperature, and permit and construction times. The outputs of the model are cumulative probability distributions of financial measures such as capital cost, levelized cost, and profit. These outputs are well suited for use in an investment decision incorporating risk. The model has the powerful feature that conditional probability distributions can be used to account for correlations among any of the input variables. The model has been applied to a geothermal reservoir at Heber, California, for a 45-MW binary electric plant. Under the assumptions made, the reservoir appears to be economically viable.
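The Monte Carlo structure of such a model, including the use of conditional distributions to correlate inputs, can be sketched generically. All numbers and the cost function below are illustrative, not the Heber study's values:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000
# Hypothetical uncertain inputs: well depth (m), with flow rate (kg/s)
# drawn conditionally on depth to encode their correlation.
depth = rng.normal(2000, 300, n)
flow = rng.normal(60 + 0.01 * (depth - 2000), 10)

# Illustrative capital-cost model: drilling cost scales with depth,
# plant cost scales inversely with per-well output.
cost = 1.2e3 * depth + 5.0e6 * (60.0 / np.clip(flow, 10, None))

# Cumulative probability distribution of capital cost
cost_sorted = np.sort(cost) / 1e6
p = np.arange(1, n + 1) / n
i90 = np.searchsorted(p, 0.9)
print(f"90th-percentile capital cost ≈ ${cost_sorted[i90]:.1f}M")
```

The output is exactly the kind of cumulative distribution of a financial measure the record describes, which a decision-maker can read off at any risk level.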

  14. FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0

    USGS Publications Warehouse

    Durbin, Timothy J.; Bond, Linda D.

    1998-01-01

This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI X3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.

  15. A model for a continuous-wave iodine laser

    NASA Technical Reports Server (NTRS)

    Hwang, In H.; Tabibi, Bagher M.

    1990-01-01

    A model for a continuous-wave (CW) iodine laser has been developed and compared with the experimental results obtained from a solar-simulator-pumped CW iodine laser. The agreement between the calculated laser power output and the experimental results is generally good for various laser parameters even when the model includes only prominent rate coefficients. The flow velocity dependence of the output power shows that the CW iodine laser cannot be achieved with a flow velocity below 1 m/s for the present solar-simulator-pumped CW iodine laser system.

  16. Life and reliability modeling of bevel gear reductions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Brikmanis, C. K.; Lewicki, D. G.; Coy, J. J.

    1985-01-01

    A reliability model is presented for bevel gear reductions with either a single input pinion or dual input pinions of equal size. The dual pinions may or may not have the same power applied for the analysis. The gears may be straddle mounted or supported in a bearing quill. The reliability model is based on the Weibull distribution. The reduction's basic dynamic capacity is defined as the output torque which may be applied for one million output rotations of the bevel gear with a 90 percent probability of reduction survival.
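The Weibull framework underlying such a reliability model can be sketched for a two-element reduction treated as a series system of pinion and gear. The shape parameter and characteristic lives below are hypothetical, chosen only to show the mechanics of finding the life at 90% system reliability:

```python
import numpy as np

def weibull_R(N, eta, beta):
    """Weibull survival probability at life N (characteristic life eta,
    shape parameter beta)."""
    return np.exp(-(N / eta) ** beta)

# Hypothetical component Weibulls, lives in millions of output rotations
beta = 2.5
eta_pinion, eta_gear = 8.0, 12.0
N = np.logspace(-1, 1, 200)
# Series system: the reduction survives only if every element survives
R_sys = weibull_R(N, eta_pinion, beta) * weibull_R(N, eta_gear, beta)

# Life at 90% system reliability
i = np.searchsorted(-R_sys, -0.9)        # R_sys is decreasing in N
print(f"90% reliability life ≈ {N[i]:.2f} million output rotations")
```

The basic dynamic capacity defined in the record is the inverse question: the torque at which this 90%-survival life equals one million output rotations, obtained through a load-life relation for the gear teeth.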

  17. A probabilistic method for constructing wave time-series at inshore locations using model scenarios

    USGS Publications Warehouse

    Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.

    2014-01-01

    Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.

  18. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
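The NRMSE metric used to score the power-output forecasts can be computed as follows. The normalization by the observed range and the sample values are assumptions for illustration, not numbers from the study:

```python
import numpy as np

def nrmse(forecast, observed):
    """Normalized root mean squared error, normalized by the observed range."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return rmse / (observed.max() - observed.min())

obs = np.array([0.0, 200.0, 450.0, 300.0, 50.0])   # hypothetical observed power
fcst = np.array([10.0, 180.0, 470.0, 280.0, 60.0])  # hypothetical forecast power
print(round(float(nrmse(fcst, obs)), 4))  # → 0.0372
```

Other normalizations (by the mean or by installed capacity) are common; which one a study uses should always be checked before comparing NRMSE values across papers.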

  19. Perspectives to Performance of Environment and Health Assessments and Models—From Outputs to Outcomes?

    PubMed Central

    Pohjola, Mikko V.; Pohjola, Pasi; Tainio, Marko; Tuomisto, Jouni T.

    2013-01-01

    The calls for knowledge-based policy and policy-relevant research invoke a need to evaluate and manage environment and health assessments and models according to their societal outcomes. This review explores how well the existing approaches to assessment and model performance serve this need. The perspectives on assessment and model performance in the scientific literature can be called: (1) quality assurance/control, (2) uncertainty analysis, (3) technical assessment of models, (4) effectiveness and (5) other perspectives, according to what is primarily seen to constitute the goodness of assessments and models. The categorization is not strict and methods, tools and frameworks in different perspectives may overlap. However, altogether it seems that most approaches to assessment and model performance are relatively narrow in their scope. The focus in most approaches is on the outputs and making of assessments and models. Practical application of the outputs and the consequential outcomes are often left unaddressed. It appears that more comprehensive approaches that combine the essential characteristics of different perspectives are needed. This necessitates a better account of the mechanisms of collective knowledge creation and the relations between knowledge and practical action. Some new approaches to assessment, modeling and their evaluation and management span the chain from knowledge creation to societal outcomes, but the complexity of evaluating societal outcomes remains a challenge. PMID:23803642

  20. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Bader, Jon B.

    2009-01-01

    Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm's two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.

  1. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

    Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm's two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm's term selection process.
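A minimal sketch of fitting the two recommended term combinations (intercept plus normal force for the gage-output difference; intercept, pitching moment, and its square for the sum) by least squares. All coefficients, load ranges, and noise levels below are hypothetical, not the balance's actual calibration values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration loads (hypothetical units): normal force N, pitching moment M
N = rng.uniform(-100, 100, 50)
M = rng.uniform(-50, 50, 50)

# Hypothetical "true" gage responses consistent with the recommended models
r_diff = 0.5 + 0.02 * N + rng.normal(0, 1e-3, 50)               # intercept + N
r_sum = 1.0 + 0.03 * M + 4e-4 * M**2 + rng.normal(0, 1e-3, 50)  # intercept + M + M^2

# Least-squares fits using exactly the recommended term combinations
X_diff = np.column_stack([np.ones_like(N), N])
X_sum = np.column_stack([np.ones_like(M), M, M**2])
c_diff, *_ = np.linalg.lstsq(X_diff, r_diff, rcond=None)
c_sum, *_ = np.linalg.lstsq(X_sum, r_sum, rcond=None)
print(np.round(c_diff, 3), np.round(c_sum, 4))
```

With well-distributed loads, the fitted coefficients recover the assumed values closely, which is the sense in which the statistically selected term combinations can be checked against a physics-based model.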

  2. Google Test MPI Listener

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey

    Google Test MPI Listener is a plugin for the Google Test C++ unit-testing library that organizes the test output of software using both the MPI parallel programming model and Google Test. Typically, such output is arbitrarily ordered and disorganized, making test results difficult to interpret. This plugin organizes output in MPI rank order, enabling easy interpretation of test results.

  3. Fuzzy logic modeling of the resistivity parameter and topography features for aquifer assessment in hydrogeological investigation of a crystalline basement complex

    NASA Astrophysics Data System (ADS)

    Adabanija, M. A.; Omidiora, E. O.; Olayinka, A. I.

    2008-05-01

    A linguistic fuzzy logic system (LFLS)-based expert system model has been developed for the assessment of aquifers for the location of productive water boreholes in a crystalline basement complex. The model design employed a multiple input/single output (MISO) approach with geoelectrical parameters and topographic features as input variables and control crisp value as the output. The application of the method to the data acquired in Khondalitic terrain, a basement complex in Vizianagaram District, south India, shows that potential groundwater resource zones that have control output values in the range 0.3295-0.3484 have a yield greater than 6,000 liters per hour (LPH). The range 0.3174-0.3226 gives a yield less than 4,000 LPH. The validation of the control crisp value using data acquired from Oban Massif, a basement complex in southeastern Nigeria, indicates a yield less than 3,000 LPH for control output values in the range 0.2938-0.3065. This validation corroborates the ability of the control output values to predict yield, confirming the applicability of a linguistic fuzzy logic system for siting productive water boreholes in a basement complex.

  4. Markov Modeling of Component Fault Growth over a Derived Domain of Feasible Output Control Effort Modifications

    NASA Technical Reports Server (NTRS)

    Bole, Brian; Goebel, Kai; Vachtsevanos, George

    2012-01-01

    This paper introduces a novel Markov process formulation of stochastic fault growth modeling, in order to facilitate the development and analysis of prognostics-based control adaptation. A metric representing the relative deviation between the nominal output of a system and the net output that is actually enacted by an implemented prognostics-based control routine, will be used to define the action space of the formulated Markov process. The state space of the Markov process will be defined in terms of an abstracted metric representing the relative health remaining in each of the system's components. The proposed formulation of component fault dynamics will conveniently relate feasible system output performance modifications to predictions of future component health deterioration.
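The idea of action-dependent health-state transitions can be illustrated with a small discrete Markov chain: pushing harder on the system output accelerates degradation. The states, transition probabilities, and horizon below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Discretized component health states: 0 (failed) ... 3 (fully healthy).
# Row-stochastic transition matrices indexed by control action; a higher
# output demand degrades the component faster. All numbers are illustrative.
P_low = np.array([[1.00, 0.00, 0.00, 0.00],
                  [0.05, 0.95, 0.00, 0.00],
                  [0.00, 0.05, 0.95, 0.00],
                  [0.00, 0.00, 0.05, 0.95]])
P_high = np.array([[1.00, 0.00, 0.00, 0.00],
                   [0.20, 0.80, 0.00, 0.00],
                   [0.00, 0.20, 0.80, 0.00],
                   [0.00, 0.00, 0.20, 0.80]])

def propagate(dist, P, steps):
    """Push a health-state distribution through `steps` Markov transitions."""
    for _ in range(steps):
        dist = dist @ P
    return dist

start = np.array([0.0, 0.0, 0.0, 1.0])        # certainly healthy at t = 0
p_fail_low = propagate(start, P_low, 20)[0]   # failure prob., gentle action
p_fail_high = propagate(start, P_high, 20)[0] # failure prob., aggressive action
print(round(float(p_fail_low), 3), round(float(p_fail_high), 3))
```

Comparing the two failure probabilities over a prediction horizon is the kind of trade-off a prognostics-based controller would weigh against the performance cost of derating the output.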

  5. Self-tuning multivariable pole placement control of a multizone crystal growth furnace

    NASA Technical Reports Server (NTRS)

    Batur, C.; Sharpless, R. B.; Duval, W. M. B.; Rosenthal, B. N.

    1992-01-01

    This paper presents the design and implementation of a multivariable self-tuning temperature controller for the control of lead bromide crystal growth. The crystal grows inside a multizone transparent furnace. There are eight interacting heating zones shaping the axial temperature distribution inside the furnace. A multi-input, multi-output furnace model is identified on-line by a recursive least squares estimation algorithm. A multivariable pole placement controller based on this model is derived and implemented. Comparison between single-input, single-output and multi-input, multi-output self-tuning controllers demonstrates that the zone-to-zone interactions can be minimized better by a multi-input, multi-output controller design. This directly affects the quality of the grown crystal.
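A bare-bones sketch of the recursive least squares (RLS) update used for this kind of on-line model identification. The first-order, single-input example and its parameter values are hypothetical; the furnace model itself is multi-input, multi-output:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One recursive least squares update with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    k = P @ phi / (lam + phi.T @ P @ phi)   # gain vector
    e = y - (phi.T @ theta).item()          # one-step prediction error
    theta = theta + k * e
    P = (P - k @ phi.T @ P) / lam
    return theta, P

# Identify a first-order model y[t+1] = a*y[t] + b*u[t] (a, b hypothetical)
a_true, b_true = 0.8, 0.5
rng = np.random.default_rng(1)
u = rng.normal(size=200)
y = np.zeros(201)
for t in range(200):
    y[t + 1] = a_true * y[t] + b_true * u[t]

theta = np.zeros((2, 1))       # parameter estimate [a, b]
P = np.eye(2) * 1000.0         # large initial covariance: low prior confidence
for t in range(200):
    phi = np.array([y[t], u[t]])
    theta, P = rls_step(theta, P, phi, y[t + 1])
print(np.round(theta.ravel(), 3))  # converges to [0.8 0.5]
```

The forgetting factor below 1 lets the estimator track slow drift in the plant, which is what makes the controller "self-tuning."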

  6. SARAH 3.2: Dirac gauginos, UFO output, and more

    NASA Astrophysics Data System (ADS)

    Staub, Florian

    2013-07-01

    SARAH is a Mathematica package optimized for the fast, efficient and precise study of supersymmetric models beyond the MSSM: a new model can be defined in a short form and all vertices are derived. This allows SARAH to create model files for FeynArts/FormCalc, CalcHep/CompHep and WHIZARD/O'Mega. The newest version of SARAH now provides the possibility to create model files in the UFO format, which is supported by MadGraph 5, MadAnalysis 5, GoSam, and soon by Herwig++. Furthermore, SARAH also calculates the mass matrices, RGEs and 1-loop corrections to the mass spectrum. This information is used to write source code for SPheno in order to create a precision spectrum generator for the given model. This spectrum-generator-generator functionality as well as the output of WHIZARD and CalcHep model files has seen further improvement in this version. Also models including Dirac gauginos are supported with the new version of SARAH, and additional checks for the consistency of the implementation of new models have been created. Program summary: Program title: SARAH. Catalogue identifier: AEIB_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIB_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 322 411. No. of bytes in distributed program, including test data, etc.: 3 629 206. Distribution format: tar.gz. Programming language: Mathematica. Computer: All for which Mathematica is available. Operating system: All for which Mathematica is available. Classification: 11.1, 11.6. Catalogue identifier of previous version: AEIB_v1_0. Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 808. Does the new version supersede the previous version?: Yes, the new version includes all known features of the previous version but also provides the new features mentioned below.
Nature of problem: To use MadGraph for new models it is necessary to provide the corresponding model files which include all information about the interactions of the model. However, the derivation of the vertices for a given model and putting those into model files which can be used with MadGraph is usually very time consuming. Dirac gauginos are not present in the minimal supersymmetric standard model (MSSM) or many extensions of it. Dirac mass terms for vector superfields lead to new structures in the supersymmetric (SUSY) Lagrangian (bilinear mass term between gaugino and matter fermion as well as new D-terms) and also modify the SUSY renormalization group equations (RGEs). The Dirac character of gauginos can change the collider phenomenology. In addition, they come with an extended Higgs sector, for which a precise calculation of the 1-loop masses had not previously been available. Solution method: SARAH calculates the complete Lagrangian for a given model whose gauge sector can be any direct product of SU(N) gauge groups. The chiral superfields can transform as any irreducible representation with respect to these gauge groups and it is possible to handle an arbitrary number of symmetry breakings or particle rotations. Also the gauge fixing is automatically added. Using this information, SARAH derives all vertices for a model. These vertices can be exported to model files in the UFO format, which is supported by MadGraph and other codes such as GoSam, MadAnalysis or ALOHA. The user can also study models with Dirac gauginos. In that case SARAH includes all possible terms in the Lagrangian stemming from the new structures and can also calculate the RGEs. The entire impact of these terms is then taken into account in the output of SARAH to UFO, CalcHep, WHIZARD, FeynArts and SPheno. Reasons for new version: SARAH provides, with this version, the possibility of creating model files in the UFO format.
The UFO format is supposed to become a standard format for model files which should be supported by many different tools in the future. Also models with Dirac gauginos were not supported in earlier versions. Summary of revisions: Support of models with Dirac gauginos. Output of model files in the UFO format, speed improvement in the output of WHIZARD model files, CalcHep output supports the internal diagonalization of mass matrices, output of control files for LHPC spectrum plotter, support of generalized PDG numbering scheme PDG.IX, improvement of the calculation of the decay widths and branching ratios with SPheno, calculations of new low-energy observables have been added to the SPheno output, and the handling of gauge fixing terms has been significantly simplified. Restrictions: SARAH can only derive the Lagrangian in an automatized way for N=1 SUSY models, but not for those with more SUSY generators. Furthermore, SARAH supports only renormalizable operators in the output of model files in the UFO format and also for CalcHep, FeynArts and WHIZARD. Also color sextets are not yet included in the model files for Monte Carlo tools. Dimension 5 operators are only supported in the calculation of the RGEs and mass matrices. Unusual features: SARAH does not need the Lagrangian of a model as input to calculate the vertices. The gauge structure, particle content and superpotential as well as rotations stemming from gauge symmetry breaking are sufficient. All further information is derived by SARAH on its own. Therefore, the model files are very short and the implementation of new models is fast and easy. In addition, the implementation of a model can be checked for physical and formal consistency. Moreover, SARAH can generate Fortran code for a full 1-loop analysis of the mass spectrum in the context of Dirac gauginos. Running time: Measured CPU time for the evaluation of the MSSM using a Lenovo Thinkpad X220 with i7 processor (2.53 GHz).
Calculating the complete Lagrangian: 9 s. Calculating all vertices: 51 s. Output of the UFO model files: 49 s.

  7. Air-sea interaction over the Indian Ocean due to variations in the Indonesian throughflow

    NASA Astrophysics Data System (ADS)

    Wajsowicz, R. C.

    The effects of the Indonesian throughflow on the upper thermocline circulation and surface heat flux over the Indian Ocean are presented for a 3-D ocean model forced by two different monthly wind-stress climatologies, as they show interesting differences, which could have implications for long-term variability in the Indian and Australasian monsoons. The effects are determined by contrasting a control run with a run in which the throughflow is blocked by an artificial land-bridge across the exit channels into the Indian Ocean. In the model forced by ECMWF wind stresses, there is little impact on the annual mean surface heat flux in the region surrounding the throughflow exit straits, whereas in the model forced by SSM/I-based wind stresses, a modest throughflow of less than 5 × 10⁶ m³s⁻¹ over the upper 300 m induces an extra 10-50 Wm-2 of heat output. In the SSM/I-forced model, there is insignificant penetration of the throughflow into the northern Indian Ocean. However, in the ECMWF-forced model, the throughflow induces a 5-10 Wm-2 reduction in heat input into the ocean, i.e., an effective output, over the Somali Current in the annual mean. These differences are attributed to differences in the strength and direction of the Ekman transport of the ambient flow, and the vertical structure of the transport and temperature anomalies associated with the throughflow. In both models, the throughflow induces a 5-30 Wm-2 increase in net output over a broad swathe of the southern Indian Ocean, and a reduction in heat output of 10-60 Wm-2 in a large L-shaped band around Tasmania. Effective increases in throughflow-induced net output reach up to 40 (60) Wm-2 over the Agulhas Current retroflection in the ECMWF (SSM/I)-forced model. 
Seasonal variations in the throughflow's effect on the net surface heat flux are attributed to seasonal variations in the ambient circulation of the Indian Ocean, specifically in coastal upwelling along the south Javan, west Australian, and Somalian coasts, and in the depth of convective overturning between 40°S and 50°S, and its sensing of the mean throughflow's thermal anomaly. The seasonal anomalies plus annual mean yield maximum values for the throughflow-induced net surface heat output in boreal summer. Values may exceed 40 Wm-2 in the southern Indian Ocean interior in both models, exceed 60 Wm-2 over the Agulhas retroflection and immediate vicinity of the exit channels in the SSM/I-forced model, and reach 30 Wm-2 over the Somali jet in the ECMWF-forced model.

  8. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

    Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. 
We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the 'ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
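The core of such an emulator, a Gaussian-process interpolator of simulator output, can be sketched for a one-dimensional toy problem. The squared-exponential kernel, its length scale, and the stand-in "simulator" are assumptions for illustration, not the authors' setup:

```python
import numpy as np

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def simulator(x):
    """Stand-in for an expensive model run (hypothetical)."""
    return np.sin(x) + 0.1 * x

# Training design: a handful of "simulator runs"
X = np.linspace(0, 6, 8)
y = simulator(X)

# GP posterior mean at new inputs (noise-free interpolation of the simulator)
Xs = np.array([1.5, 3.7])
K = rbf(X, X) + 1e-9 * np.eye(len(X))   # small jitter for numerical stability
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)
print(np.round(mean, 3), np.round(simulator(Xs), 3))
```

Once fitted, evaluating `mean` at thousands of inputs costs a matrix-vector product rather than a simulator run, which is what makes Monte Carlo uncertainty propagation through the emulator affordable.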

  9. Combustion Control System Design of Diesel Engine via ASPR based Output Feedback Control Strategy with a PFC

    NASA Astrophysics Data System (ADS)

    Mizumoto, Ikuro; Tsunematsu, Junpei; Fujii, Seiya

    2016-09-01

    In this paper, a design method for an output feedback control system with a simple feedforward input is proposed for a combustion model of a diesel engine, based on the almost strict positive realness (ASPR-ness) of the controlled system. A parallel feedforward compensator (PFC) design scheme that renders the resulting augmented controlled system ASPR is also proposed in order to design a stable output feedback control system for the considered combustion model. The effectiveness of the proposed method is confirmed through numerical simulations.

  10. ThinTool: a spreadsheet model to evaluate fuel reduction thinning cost, net energy output, and nutrient impacts

    Treesearch

    Sang-Kyun Han; Han-Sup Han; William J. Elliot; Edward M. Bilek

    2017-01-01

    We developed a spreadsheet-based model, named ThinTool, to evaluate the cost of mechanical fuel reduction thinning including biomass removal, to predict net energy output, and to assess nutrient impacts from thinning treatments in northern California and southern Oregon. A combination of literature reviews, field-based studies, and contractor surveys was used to...

  11. Stochastic Modeling and Generation of Partially Polarized or Partially Coherent Electromagnetic Waves

    NASA Technical Reports Server (NTRS)

    Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

    Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
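The construction, correlated Gaussian signals generated by mixing and filtering white noise, can be sketched as follows. The target channel correlation and the first-order filter pole are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Two independent white-noise drivers
w1 = rng.normal(size=n)
w2 = rng.normal(size=n)

# Target correlation between two "channels" (hypothetical value)
rho = 0.6
v = w1
h = rho * w1 + np.sqrt(1 - rho**2) * w2   # mixing sets the cross-correlation

def lowpass(x, alpha):
    """First-order low-pass filter; alpha sets the temporal correlation."""
    y = np.empty_like(x)
    acc = 0.0
    for i, xi in enumerate(x):
        acc = alpha * acc + (1 - alpha) * xi
        y[i] = acc
    return y

# Applying the same filter to both channels preserves their correlation
v_f, h_f = lowpass(v, 0.9), lowpass(h, 0.9)
print(round(float(np.corrcoef(v_f, h_f)[0, 1]), 2))  # ≈ 0.6 by construction
```

The static mixing fixes the cross-channel (polarization-like) statistics while the filter fixes the temporal (coherence-like) statistics, mirroring the paper's separation of the two correlation measures.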

  12. Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Boucher, Matthew J.

    2017-01-01

    Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.

  13. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    PubMed

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.

  14. Modeling nonlinearities in MEMS oscillators.

    PubMed

    Agrawal, Deepak K; Woodhouse, Jim; Seshia, Ashwin A

    2013-08-01

    We present a mathematical model of a microelectromechanical system (MEMS) oscillator that integrates the nonlinearities of the MEMS resonator and the oscillator circuitry in a single numerical modeling environment. This is achieved by transforming the conventional nonlinear mechanical model into the electrical domain while simultaneously considering the prominent nonlinearities of the resonator. The proposed nonlinear electrical model is validated by comparing the simulated amplitude-frequency response with measurements on an open-loop electrically addressed flexural silicon MEMS resonator driven to large motional amplitudes. Next, the essential nonlinearities in the oscillator circuit are investigated and a mathematical model of a MEMS oscillator is proposed that integrates the nonlinearities of the resonator. The concept is illustrated for MEMS transimpedance-amplifier-based square-wave and sine-wave oscillators. Closed-form expressions of steady-state output power and output frequency are derived for both oscillator models and compared with experimental and simulation results, with a good match in the predicted trends in all three cases.

  15. Synchronized Trajectories in a Climate "Supermodel"

    NASA Astrophysics Data System (ADS)

    Duane, Gregory; Schevenhoven, Francine; Selten, Frank

    2017-04-01

    Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.
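Inter-model nudging of the kind described can be demonstrated on two imperfect copies of the Lorenz-63 system standing in for the SPEEDO copies: mutual nudging terms pull the two trajectories together despite their parameter differences. The parameter offsets, coupling gain, and step size are assumptions for illustration:

```python
import numpy as np

def lorenz_rhs(s, sigma, rho, beta):
    """Lorenz-63 tendencies for state s = (x, y, z)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(sa, sb, dt=0.005, k=10.0):
    """One Euler step of two imperfect models coupled by mutual nudging."""
    da = lorenz_rhs(sa, 10.0, 28.0, 8 / 3) + k * (sb - sa)   # nudging term
    db = lorenz_rhs(sb, 10.2, 27.8, 8 / 3) + k * (sa - sb)   # offset parameters
    return sa + dt * da, sb + dt * db

sa = np.array([1.0, 1.0, 1.0])
sb = np.array([-5.0, 0.0, 20.0])
initial_gap = float(np.linalg.norm(sa - sb))
for _ in range(4000):
    sa, sb = step(sa, sb)
print(round(float(np.linalg.norm(sa - sb)), 2))  # small residual vs. initial gap of ~20
```

Because the coupled trajectories synchronize, the pair behaves like a single model whose specific events can be examined, rather than an ensemble whose members must be averaged.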

  16. Relevance of Regional Hydro-Climatic Projection Data for Hydrodynamics and Water Quality Modelling of the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Goldenberg, R.; Vigouroux, G.; Chen, Y.; Bring, A.; Kalantari, Z.; Prieto, C.; Destouni, G.

    2017-12-01

    The Baltic Sea, located in Northern Europe, is one of the world's largest bodies of brackish water, enclosed and surrounded by nine different countries. The magnitude of climate change may be particularly large in northern regions, and identifying its impacts on vulnerable inland waters and their runoff and nutrient loading to the Baltic Sea is an important and complex task. Exploration of such hydro-climatic impacts is needed to understand potential future changes in physical, ecological and water quality conditions in the regional coastal and marine waters. In this study, we investigate hydro-climatic changes and impacts on the Baltic Sea by synthesizing multi-model climate projection data from the CORDEX regional downscaling initiative (EURO- and Arctic-CORDEX domains, http://www.cordex.org/). We identify key hydro-climatic variable outputs of these models and assess model performance with regard to their projected temporal and spatial change behavior and impacts on different scales and coastal-marine parts, up to the whole Baltic Sea. Model spread, robustness and impact implications for the Baltic Sea system are investigated for and through further use in simulations of coastal-marine hydrodynamics and water quality based on these key output variables and their change projections. Climate model robustness in this context is assessed by inter-model spread analysis and comparison with observation data, while projected change implications are assessed by forcing linked hydrodynamic and water quality modeling of the Baltic Sea with relevant hydro-climatic outputs for inland water runoff and waterborne nutrient loading to the Baltic Sea, as well as for conditions in the sea itself.
This focused synthesis and analysis of hydro-climatically relevant output data of regional climate models facilitates assessment of reliability and uncertainty in projections of driver-impact changes of key importance for Baltic Sea physical, water quality and ecological conditions and their future evolution.

  17. Documenting Climate Models and Simulations: the ES-DOC Ecosystem in Support of CMIP

    NASA Astrophysics Data System (ADS)

    Pascoe, C. L.; Guilyardi, E.

    2017-12-01

    The results of climate models are of increasing and widespread importance. No longer is climate model output of sole interest to climate scientists and researchers in the climate change impacts and adaptation fields. Now non-specialists such as government officials, policy-makers, and the general public all have an increasing need to access climate model output and understand its implications. For this host of users, accurate and complete metadata (i.e., information about how and why the data were produced) is required to document the climate modeling results. Here we describe the ES-DOC community-governed project to collect and make available documentation of climate models and their simulations for the internationally coordinated modeling activity CMIP6 (Coupled Model Intercomparison Project, Phase 6). An overview of the underlying standards, key properties and features, the evolution from CMIP5, and the underlying tools and workflows, as well as what modelling groups should expect and how they should engage with the documentation of their contribution to CMIP6, is also presented.

  18. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost while retaining the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
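    The proper-orthogonal-decomposition step can be sketched on a toy snapshot matrix; the traveling-wave "flow" and the mode count below are illustrative, not the two-turbine LES data:

```python
import numpy as np

# Proper orthogonal decomposition (POD) of a toy "flow": snapshots of a
# traveling wave stacked as columns of a snapshot matrix. The SVD yields
# spatial modes; a handful of modes capture most of the energy, which is
# the starting point for the reduced-order model described above.
x = np.linspace(0.0, 2.0 * np.pi, 128)
t = np.linspace(0.0, 10.0, 80)
snapshots = np.array([np.sin(x - ti) + 0.3 * np.sin(2.0 * (x - 0.5 * ti))
                      for ti in t]).T            # shape (n_space, n_time)

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)          # cumulative modal energy
r = 4                                            # retained POD modes
reduced = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]     # rank-r reconstruction
err = np.linalg.norm(snapshots - reduced) / np.linalg.norm(snapshots)
```

For this toy field four modes reconstruct the snapshots almost exactly; in the wind-farm application the temporal coefficients of the retained modes are what the system identification step then fits an input-output model to.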

  19. The potential of different artificial neural network (ANN) techniques in daily global solar radiation modeling based on meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.

    2010-08-15

    The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the six following combinations of input variables are considered: (I) Day of the year, daily mean air temperature and relative humidity as inputs and daily GSR as output. (II) Day of the year, daily mean air temperature and sunshine hours as inputs and daily GSR as output. (III) Day of the year, daily mean air temperature, relative humidity and sunshine hours as inputs and daily GSR as output. (IV) Day of the year, daily mean air temperature, relative humidity, sunshine hours and evaporation as inputs and daily GSR as output. (V) Day of the year, daily mean air temperature, relative humidity, sunshine hours and wind speed as inputs and daily GSR as output. (VI) Day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation and wind speed as inputs and daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of the results obtained from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e., the best ANN model (MLP-V) has a mean absolute percentage error (MAPE) of about 5.21% versus 10.02% for the best CGSRP model (CGSRP 5)).
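    The train/test split and MAPE scoring can be sketched with a simple least-squares stand-in; this is not the paper's MLP or RBF network, and the data below are synthetic, but it shows the input-combination idea (here combination III: day of year, temperature, sunshine hours):

```python
import numpy as np

# Illustrative stand-in for the study's setup (NOT its ANN): a linear
# least-squares model predicting synthetic daily GSR from a feature
# combination (day of year, mean temperature, sunshine hours), scored
# with the same MAPE metric the study reports. All data are synthetic.
rng = np.random.default_rng(1)
n = 365
day = np.arange(1, n + 1)
temp = 25.0 + 10.0 * np.sin(2.0 * np.pi * day / 365.0) + rng.normal(0, 1, n)
sunshine = 8.0 + 3.0 * np.sin(2.0 * np.pi * day / 365.0) + rng.normal(0, 0.5, n)
gsr = 2.0 + 0.8 * sunshine + 0.1 * temp + rng.normal(0, 0.3, n)  # target

X = np.column_stack([np.ones(n), day, temp, sunshine])   # combination III
train, test = slice(0, 300), slice(300, n)               # held-out test days
coef, *_ = np.linalg.lstsq(X[train], gsr[train], rcond=None)
pred = X[test] @ coef
mape = 100.0 * np.mean(np.abs((gsr[test] - pred) / gsr[test]))
```

Replacing the least-squares fit with an MLP or RBF network, as in the study, changes only the regressor; the feature combinations and the MAPE comparison are the same.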

  20. Autonomous frequency domain identification: Theory and experiment

    NASA Technical Reports Server (NTRS)

    Yam, Yeung; Bayard, D. S.; Hadaegh, F. Y.; Mettler, E.; Milman, M. H.; Scheid, R. E.

    1989-01-01

    The analysis, design, and on-orbit tuning of robust controllers require more information about the plant than simply a nominal estimate of the plant transfer function. Information is also required concerning the uncertainty in the nominal estimate, or more generally, the identification of a model set within which the true plant is known to lie. The identification methodology that was developed and experimentally demonstrated makes use of a simple but useful characterization of the model uncertainty based on the output error. This is a characterization of the additive uncertainty in the plant model, which has found considerable use in many robust control analysis and synthesis techniques. The identification process is initiated by a stochastic input u which is applied to the plant p, giving rise to the output y. The spectral estimate h = P_uy/P_uu is used as an estimate of p, and the model order is estimated using the product moment matrix (PMM) method. A parametric model p̂ is then determined by curve fitting the spectral estimate to a rational transfer function. The additive uncertainty δ_m = p - p̂ is then estimated by the cross-spectral estimate δ̂ = P_ue/P_uu, where e = y - ŷ is the output error and ŷ = p̂u is the computed output of the parametric model subjected to the actual input u. The experimental results demonstrate that the curve fitting algorithm produces the reduced-order plant model which minimizes the additive uncertainty. The nominal transfer function estimate p̂ and the estimate δ̂ of the additive uncertainty δ_m are subsequently available to be used for optimization of robust controller performance and stability.
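    The nonparametric estimate h = P_uy/P_uu above can be sketched with segment-averaged auto- and cross-spectra, checked against a known FIR plant (the plant coefficients and segment length are illustrative):

```python
import numpy as np

# Nonparametric transfer-function estimate h_hat = P_uy / P_uu, as in the
# identification scheme above, via segment-averaged (Welch-style) spectra.
# The "plant" is a known two-tap FIR filter so the estimate can be checked.
rng = np.random.default_rng(2)
n, seg = 16384, 256
u = rng.normal(size=n)                       # stochastic input
y = np.convolve(u, [1.0, 0.5])[:n]           # plant: y[k] = u[k] + 0.5 u[k-1]

def avg_spectra(u, y, seg):
    puu = np.zeros(seg, dtype=complex)
    puy = np.zeros(seg, dtype=complex)
    for i in range(0, len(u) - seg + 1, seg):
        U = np.fft.fft(u[i:i + seg])
        Y = np.fft.fft(y[i:i + seg])
        puu += U.conj() * U                  # accumulate auto-spectrum
        puy += U.conj() * Y                  # accumulate cross-spectrum
    return puy / puu                         # averaging factors cancel

h_hat = avg_spectra(u, y, seg)
w = 2.0 * np.pi * np.arange(seg) / seg
h_true = 1.0 + 0.5 * np.exp(-1j * w)         # exact FIR frequency response
err = np.max(np.abs(h_hat - h_true))
```

The same machinery applied to e = y - ŷ in place of y yields the cross-spectral estimate δ̂ = P_ue/P_uu of the additive uncertainty.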

  1. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

    The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess the uncertainties associated with the predicted flood inundation maps. The study area is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each of the forecasted variables, were produced. 
The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify the high flood risk zones.
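    The per-grid agreement metric can be sketched as a per-cell agreement fraction over boolean inundation grids; the three-member ensemble below is a toy example, not the Darby Creek output:

```python
import numpy as np

# Ensemble agreement map for flood extent: given boolean inundation grids
# from several forecast members, the per-cell fraction of members that
# flood a cell quantifies the spatial uncertainty in predicted extent.
# The 3-member, 2x4 grid below is a toy example.
members = np.array([
    [[1, 1, 0, 0],
     [1, 0, 0, 0]],
    [[1, 1, 1, 0],
     [1, 1, 0, 0]],
    [[1, 1, 0, 0],
     [1, 0, 0, 0]],
], dtype=bool)                       # shape (n_members, ny, nx)

agreement = members.mean(axis=0)     # 1.0 = all members flood the cell
certain_wet = agreement == 1.0       # high-confidence inundation
certain_dry = agreement == 0.0       # high-confidence dry cells
uncertain = (agreement > 0.0) & (agreement < 1.0)  # disagreement zone
```

The same fraction computed per cell over depth or velocity thresholds yields the corresponding uncertainty maps for those variables.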

  2. Processing Speed in Children: Examination of the Structure in Middle Childhood and Its Impact on Reading

    ERIC Educational Resources Information Center

    Gerst, Elyssa H.

    2017-01-01

    The primary aim of this study was to examine the structure of processing speed (PS) in middle childhood by comparing five theoretically driven models of PS. The models consisted of two conceptual models (a unitary model, a complexity model) and three methodological models (a stimulus material model, an output modality model, and a timing modality…

  3. Exploring the potential of machine learning to break deadlock in convection parameterization

    NASA Astrophysics Data System (ADS)

    Pritchard, M. S.; Gentine, P.

    2017-12-01

    We explore the potential of modern machine learning tools (via TensorFlow) to replace the parameterization of deep convection in climate models. Our strategy begins by generating a large (~1 Tb) training dataset from time-step level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet integration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. The sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation is discussed, as well as results from pilot tests of the neural network operating inline within SPCAM as a replacement for the (super)parameterization of convection.

  4. The Comparison of Point Data Models for the Output of WRF Hydro Model in the IDV

    NASA Astrophysics Data System (ADS)

    Ho, Y.; Weber, J.

    2017-12-01

    WRF Hydro netCDF output files contain streamflow, flow depth, longitude, latitude, altitude and stream order values for each forecast point; however, the data are not CF compliant. The total number of forecast points for the US CONUS is approximately 2.7 million, which is a big challenge for any visualization and analysis tool. The IDV point cloud display shows point data as a set of points colored by parameter. This display is very efficient compared to a standard point type display for rendering a large number of points. One remaining problem is that data I/O can become a bottleneck when dealing with a large collection of point input files. In this presentation, we will experiment with different point data models and their APIs to access the same WRF Hydro model output. The results will help us construct a CF compliant netCDF point data format for the community.

  5. Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations

    DOE Data Explorer

    Buscheck, Thomas A.

    2012-01-01

    Active Management of Integrated Geothermal–CO2 Storage Reservoirs in Sedimentary Formations: An Approach to Improve Energy Recovery and Mitigate Risk: FY1 Final Report. The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure to reduce the risks associated with overpressure, such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to sections of the FY1 Final Report to which they pertain. The recipient should save the file: Reservoir-models-inputs-outputs-index.html in the same directory that the files: Section2.1.*.tar.gz files are saved in.

  7. Charge modeling of ionic polymer-metal composites for dynamic curvature sensing

    NASA Astrophysics Data System (ADS)

    Bahramzadeh, Yousef; Shahinpoor, Mohsen

    2011-04-01

    A curvature sensor based on an Ionic Polymer-Metal Composite (IPMC) is proposed and characterized for sensing curvature variation in structures such as inflatable space structures, in which a low-power, flexible curvature sensor is of high importance for dynamic monitoring of shape at desired points. The linearity of the sensor output signal (for calibration), the effect of deflection rate at low frequencies, and the phase delay between the output signal and the input deformation of the IPMC curvature sensor are investigated. An analytical chemo-electro-mechanical model for the charge dynamics of the IPMC sensor, based on the Nernst-Planck partial differential equation, is presented, which can be used to explain the phenomena observed in experiments. The rate dependency of the output signal and the phase delay between the applied deformation and the sensor signal are studied using the proposed model. The model provides a background for predicting the general characteristics of the IPMC sensor. It is shown that the IPMC sensor exhibits good linearity, sensitivity, and repeatability for dynamic curvature sensing of inflatable structures.

  8. Monthly mean simulation experiments with a coarse-mesh global atmospheric model

    NASA Technical Reports Server (NTRS)

    Spar, J.; Klugman, R.; Lutz, R. J.; Notario, J. J.

    1978-01-01

    Substitution of observed monthly mean sea-surface temperatures (SSTs) as lower boundary conditions, in place of climatological SSTs, failed to improve the model simulations. While the impact of SST anomalies on the model output is greater at sea level than at upper levels, the impact on the monthly mean simulations is not beneficial at any level. Shifts of one and two days in initialization time produced small, but non-trivial, changes in the model-generated monthly mean synoptic fields. No improvements in the mean simulations resulted from the use of either time-averaged initial data or re-initialization with time-averaged early model output. The noise level of the model, as determined from a multiple initial state perturbation experiment, was found to be generally low, but with a noisier response to initial state errors in high latitudes than in the tropics.

  9. Insolation-oriented model of photovoltaic module using Matlab/Simulink

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsai, Huan-Liang

    2010-07-15

    This paper presents a novel model of a photovoltaic (PV) module which is implemented and analyzed using the Matlab/Simulink software package. Taking into account the effect of sunlight irradiance on the cell temperature, the proposed model takes ambient temperature as a reference input and uses the solar insolation as a unique varying parameter. The cell temperature is then explicitly affected by the sunlight intensity. The output current and power characteristics are simulated and analyzed using the proposed PV model. The model has been verified through experimental measurement. The impact of solar irradiation on cell temperature makes the output characteristic more practical. In addition, the insolation-oriented PV model enables the dynamics of the PV power system to be analyzed and optimized more easily by applying the environmental parameters of ambient temperature and solar irradiance.
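    The irradiance-to-cell-temperature coupling can be sketched with the standard single-diode PV relation; this is a generic textbook formulation with illustrative parameters, not the paper's Simulink implementation:

```python
import numpy as np

# Standard single-diode PV relation with an insolation-dependent cell
# temperature (an NOCT-style approximation). Parameter values are
# illustrative and NOT taken from the paper's Simulink model.
q, kB = 1.602e-19, 1.381e-23             # electron charge, Boltzmann constant

def pv_current(V, G, T_amb, Iph_stc=5.0, I0=1e-9, n=1.3, noct=45.0):
    T_cell = T_amb + (noct - 20.0) * G / 800.0   # irradiance heats the cell
    Vt = kB * (T_cell + 273.15) / q              # thermal voltage
    Iph = Iph_stc * G / 1000.0                   # photocurrent ~ insolation
    return Iph - I0 * (np.exp(V / (n * Vt)) - 1.0)

V = np.linspace(0.0, 0.9, 600)
I = np.clip(pv_current(V, G=1000.0, T_amb=25.0), 0.0, None)
P = V * I                                        # output power curve
V_mpp, P_mpp = V[np.argmax(P)], P.max()          # maximum power point
```

Because the cell temperature rises with irradiance, insolation enters the I-V curve twice (through the photocurrent and through the thermal voltage), which is the coupling the abstract emphasizes.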

  10. Assessment of Effectiveness of Geologic Isolation Systems. Variable thickness transient ground-water flow model. Volume 2. Users' manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisenauer, A.E.

    1979-12-01

    A system of computer codes that aids in the preparation and evaluation of ground-water model input, together with the computer codes and auxiliary programs developed and adapted for use in modeling major ground-water aquifers, is described. The ground-water model is interactive rather than a batch-type model. Interactive models have been demonstrated to be superior to batch models in the ground-water field; for example, looking through reams of numerical lists can be avoided with the much superior graphical output forms or summary-type numerical output. The system of computer codes provides the flexibility to develop rapidly the model-required data files from engineering data and geologic maps, as well as to manipulate efficiently the voluminous data generated. Central to these codes is the Ground-water Model, which, given the boundary value problem, produces either the steady-state or transient time-plane solutions. A sizeable part of the available codes provides rapid evaluation of the results. Besides contouring the new water potentials, the model allows graphical review of streamlines of flow, travel times, and detailed comparisons of surfaces or points at designated wells. Use of the graphics scopes provides immediate but temporary displays which can be used for evaluation of input and output and which can be reproduced easily on hard-copy devices, such as a line printer, Calcomp plotter and image photographs.

  11. Future projections of temperature and precipitation climatology for CORDEX-MENA domain using RegCM4.4

    NASA Astrophysics Data System (ADS)

    Ozturk, Tugba; Turp, M. Tufan; Türkeş, Murat; Kurnaz, M. Levent

    2018-07-01

    In this study, we investigate changes in the seasonal temperature and precipitation climatology of the CORDEX Middle East and North Africa (MENA) region for the three periods 2010-2040, 2040-2070 and 2070-2100 with respect to the control period 1970-2000 by using regional climate model simulations. Projections of future climate conditions are modeled by forcing the Regional Climate Model RegCM4.4 of the International Centre for Theoretical Physics (ICTP) with two different CMIP5 global climate models. The HadGEM2-ES global climate model of the Met Office Hadley Centre and the MPI-ESM-MR global climate model of the Max Planck Institute for Meteorology were used to generate 50 km resolution data for the Coordinated Regional Climate Downscaling Experiment (CORDEX) Region 13. We test the seasonal-scale performance of RegCM4.4 in simulating the observed climatology over the MENA domain using the output of the two global climate models. The projection results show a relatively high increase of average temperatures, from 3 °C up to 9 °C over the domain, for the far future (2070-2100). A strong decrease in precipitation is projected in almost all parts of the domain according to the output of the regional model forced by the scenario outputs of the two global models. Therefore, warmer and drier than present climate conditions are projected to occur more intensely over the CORDEX-MENA domain.

  12. Empirical measurement and model validation of infrared spectra of contaminated surfaces

    NASA Astrophysics Data System (ADS)

    Archer, Sean; Gartley, Michael; Kerekes, John; Cosofret, Bogdon; Giblin, Jay

    2015-05-01

    Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model utilizes radiative transfer modeling to generate synthetic imagery. Within DIRSIG, a micro-scale surface property model (microDIRSIG) was used to calculate numerical bidirectional reflectance distribution functions (BRDF) of geometric surfaces with applied concentrations of liquid contamination. Simple cases, where the liquid contamination was well described by optical constants on optically flat surfaces, were first analytically evaluated by ray tracing and modeled within microDIRSIG. More complex combinations of surface geometry and contaminant application were then incorporated into the micro-scale model. The computed microDIRSIG BRDF outputs were used to describe surface material properties in the encompassing DIRSIG simulation. These DIRSIG-generated outputs were validated against empirical measurements obtained from a Design and Prototypes (D&P) Model 102 FTIR spectrometer. Infrared spectra from the synthetic imagery and the empirical measurements were iteratively compared to identify quantitative spectral similarity between the measured data and modeled outputs. Several spectral angles between the predicted and measured emissivities differed by less than 1 degree. Synthetic radiance spectra produced from the microDIRSIG/DIRSIG combination had an RMS error of 0.21-0.81 W/(m²·sr·μm) when compared to the D&P measurements. Results from this comparison will facilitate improved methods for identifying spectral features and detecting liquid contamination on a variety of natural surfaces.
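    The spectral-angle comparison used above can be sketched directly; the emissivity values below are illustrative, not the measured D&P spectra:

```python
import numpy as np

# Spectral angle between a measured and a modeled emissivity spectrum:
# the angle between the two spectra treated as vectors, in degrees
# (0 deg = identical spectral shape). Values below are illustrative.
def spectral_angle_deg(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

measured = np.array([0.92, 0.90, 0.85, 0.88, 0.95])
modeled  = np.array([0.91, 0.90, 0.86, 0.87, 0.94])
angle = spectral_angle_deg(measured, modeled)
```

Because the metric depends only on spectral shape, not overall magnitude, a sub-degree angle indicates close shape agreement between predicted and measured emissivities, as reported above.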

  13. Acute Radiation Risk and BRYNTRN Organ Dose Projection Graphical User Interface

    NASA Technical Reports Server (NTRS)

    Cucinotta, Francis A.; Hu, Shaowen; Nounu, Hateni N.; Kim, Myung-Hee

    2011-01-01

    The integration of human space applications risk projection models of organ dose and acute radiation risk has been a key problem. NASA has developed an organ dose projection model using the BRYNTRN and SUM DOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). The codes BRYNTRN and SUM DOSE are a baryon transport code and an output data processing code, respectively. The risk projection models of organ doses and ARR take the output from BRYNTRN as an input to their calculations. BRYNTRN code operation requires extensive input preparation, and only a graphical user interface (GUI) can handle input and output for BRYNTRN and connect it easily and correctly to the response models. A GUI for the ARR and BRYNTRN Organ Dose (ARRBOD) projection code provides the seamless integration of input and output manipulations required for operation of the ARRBOD modules (BRYNTRN, SLMDOSE, and the ARR probabilistic response model) in assessing the acute risk and the organ doses from significant Solar Particle Events (SPEs). The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations in the mission operations directorate (MOD), and space biophysics researchers. The assessment of astronauts' radiation risk from SPEs supports mission design and operational planning to manage radiation risks in future space missions. The ARRBOD GUI can identify proper shielding solutions using gender-specific organ dose assessments in order to avoid ARR symptoms and to stay within the current NASA short-term dose limits. 
The quantified evaluation of ARR severities for any given shielding configuration and a specified EVA or other mission scenario can then guide alternative solutions for attaining the objectives set by mission planners. The ARRBOD GUI estimates the whole-body effective dose, organ doses, and acute radiation sickness symptoms for astronauts, from which operational strategies and capabilities can be developed for the protection of astronauts from SPEs in the planning of future lunar surface scenarios, exploration of near-Earth objects, and missions to Mars.

  14. Community Multiscale Air Quality (CMAQ) Modeling for Regional and Hemispheric Scales

    EPA Science Inventory

    The CMAQ model is a Eulerian model that produces gridded values of atmospheric concentration and deposition. Recent updates to the model are highlighted that impact estimates of dry and wet deposition of nitrogen, sulfur and base cations. Output from the CMAQ model is used in t...

  15. MTCLIM: a mountain microclimate simulation model

    Treesearch

    Roger D. Hungerford; Ramakrishna R. Nemani; Steven W. Running; Joseph C. Coughlan

    1989-01-01

    A model for calculating daily microclimate conditions in mountainous terrain is presented. Daily air temperature, shortwave radiation, relative humidity, and precipitation are extrapolated from data measured at National Weather Service stations. The model equations are given and the paper describes how to execute the model. Model outputs are compared with observed data...
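    The kind of station-to-site extrapolation MTCLIM performs can be sketched for temperature with a fixed lapse rate; the lapse-rate value below is a standard environmental assumption, not a coefficient taken from the paper:

```python
# Sketch of base-station-to-site extrapolation of the kind MTCLIM performs
# (illustrative only): adjust observed air temperature to a mountain site
# using a fixed environmental lapse rate.
LAPSE_RATE = 6.5  # degC per 1000 m, a standard environmental lapse rate

def extrapolate_temperature(t_base_c, elev_base_m, elev_site_m,
                            lapse=LAPSE_RATE):
    """Temperature at the site, cooling with elevation gain."""
    return t_base_c - lapse * (elev_site_m - elev_base_m) / 1000.0

# 20 degC at an 800 m weather station, extrapolated to a 2300 m site:
t_site = extrapolate_temperature(t_base_c=20.0, elev_base_m=800.0,
                                 elev_site_m=2300.0)
```

MTCLIM itself additionally corrects radiation and humidity for slope, aspect and elevation, but each correction follows this same station-to-site pattern.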

  16. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USDA-ARS?s Scientific Manuscript database

    The predictions of most terrestrial ecosystem models originate from deterministic simulations. Relatively few uncertainty evaluation exercises in model outputs are performed by either model developers or users. This issue has important consequences for decision makers who rely on models to develop n...

  17. Optical Limiting Using the Two-Photon Absorption Electrical Modulation Effect in HgCdTe Photodiode

    PubMed Central

    Cui, Haoyang; Yang, Junjie; Zeng, Jundong; Tang, Zhong

    2013-01-01

    The electrical modulation properties of the output intensity of two-photon absorption (TPA) pumping were analyzed in this paper. The frequency dispersion dependence of TPA and the electric field dependence of TPA were calculated using the Wherrett and Garcia theory models, respectively. Both predicted a dramatic variation of the TPA coefficient, which was attributed to the increase of the transition rate. The output intensity of the laser pulse propagating in the pn junction device was calculated using a transfer-function method. It shows that the output intensity increases nonlinearly with increasing incident light intensity and eventually reaches saturation. The output saturation intensity depends on the electric field strength: the greater the electric field, the smaller the output intensity. Consequently, the clamped saturation intensity can be controlled by the electric field. The primary advantage of electrical modulation is that the TPA coefficient can be varied continuously over an extremely wide range, thus adjusting the output intensity accordingly. This large change provides a method to control the steady output intensity of TPA by adjusting the electric field. PMID:24198721
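    The intensity clamping follows from the TPA propagation equation dI/dz = -βI², which has the closed-form solution I_out = I_in / (1 + βLI_in) over a thickness L; a sketch with an illustrative β (not a HgCdTe value):

```python
# Two-photon absorption limits intensity via dI/dz = -beta * I**2, whose
# closed-form solution over a thickness L is
#   I_out = I_in / (1 + beta * L * I_in),
# so I_out saturates at 1/(beta*L) for large I_in -- the optical-limiting
# behaviour discussed above. beta and L here are illustrative values.
def tpa_output(i_in, beta=0.01, length=1.0):
    return i_in / (1.0 + beta * length * i_in)

inputs = [1.0, 10.0, 100.0, 1000.0, 10000.0]
outputs = [tpa_output(i) for i in inputs]
saturation = 1.0 / (0.01 * 1.0)    # asymptotic clamp level, 1/(beta*L)
```

Since the saturation level is 1/(βL), electrically increasing β lowers the clamp, which is the field-controlled limiting mechanism the abstract describes.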

  18. Theoretical modeling, simulation and experimental study of hybrid piezoelectric and electromagnetic energy harvester

    NASA Astrophysics Data System (ADS)

    Li, Ping; Gao, Shiqiao; Cong, Binglong

    2018-03-01

    In this paper, the performance of a vibration energy harvester combining piezoelectric (PE) and electromagnetic (EM) mechanisms is studied by theoretical analysis, simulation and experimental test. For the designed harvester, an electromechanical coupling model is established, and expressions for the vibration response, output voltage, current and power are derived. Then, the performance of the harvester is simulated and tested; moreover, charging of a rechargeable battery is realized through the designed energy storage circuit. From the results, it is found that, compared with piezoelectric-only and electromagnetic-only energy harvesters, the hybrid energy harvester can enhance the output power and harvesting efficiency. Furthermore, under harmonic excitation, the output power of the harvester increases linearly with increasing acceleration amplitude, while under random excitation it increases with increasing acceleration spectral density. In addition, the larger the coupling strength, the larger the output power, and there is an optimal load resistance at which the harvester outputs maximal power.
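    The existence of an optimal load resistance can be sketched by modeling one harvester branch as a voltage source with internal resistance; the values below are illustrative, not the paper's parameters:

```python
# Optimal load for maximum harvested power: modelling a harvester branch
# as a voltage source V with internal resistance R_int, the load power is
#   P = V**2 * R / (R + R_int)**2,
# which peaks at R = R_int (impedance matching) -- the "optimal load
# resistance" noted above. V and R_int are illustrative values.
def load_power(r_load, v=2.0, r_int=50.0):
    return v**2 * r_load / (r_load + r_int) ** 2

loads = list(range(1, 501))                      # candidate load resistances
powers = [load_power(r) for r in loads]
r_best = loads[powers.index(max(powers))]        # argmax over the sweep
```

In the actual hybrid device the PE and EM branches have different internal impedances, so the overall optimum comes from the coupled model rather than a single matching condition, but the same trade-off drives it.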

  19. Methods to Register Models and Input/Output Parameters for Integrated Modeling

    EPA Science Inventory

    Significant resources can be required when constructing integrated modeling systems. In a typical application, components (e.g., models and databases) created by different developers are assimilated, requiring the framework’s functionality to bridge the gap between the user’s kno...

  20. A Comparative Study of the Proposed Models for the Components of the National Health Information System

    PubMed Central

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-01-01

    Introduction: The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a national health information system can improve the quality of the health data, information and knowledge used to support decision making at all levels of the health sector. Since full identification of the components of this system seems necessary for better planning and management of the factors influencing its performance, this study comparatively explores different perspectives on its components. Methods: This is a descriptive, comparative study. The study material comprises printed and electronic documents describing the components of the national health information system in three parts: input, process and output. Information was gathered through library resources and internet searches, and the data were analyzed using comparative tables and qualitative description. Results: The findings show three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model of the World Health Organization (2008), and Gattini's model (2009). In the input section (resources and structure), all three models require components for management and leadership, planning and program design, staffing, and software, hardware, facilities and equipment. In the process section, all three models emphasize actions ensuring the quality of the health information system; in the output section, all but the Lippeveld model treat information products and the use and distribution of information as components of the national health information system.
Conclusion: All three models discuss the components of health information in the input section only briefly, and the Lippeveld model overlooks the components in the process and output sections. The Health Metrics Network model therefore appears to offer the most comprehensive presentation of the components of the health system across all three sections: input, process and output. PMID:24825937

  1. A comparative study of the proposed models for the components of the national health information system.

    PubMed

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a national health information system can improve the quality of the health data, information and knowledge used to support decision making at all levels of the health sector. Since full identification of the components of this system seems necessary for better planning and management of the factors influencing its performance, this study comparatively explores different perspectives on its components. This is a descriptive, comparative study. The study material comprises printed and electronic documents describing the components of the national health information system in three parts: input, process and output. Information was gathered through library resources and internet searches, and the data were analyzed using comparative tables and qualitative description. The findings show three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model of the World Health Organization (2008), and Gattini's model (2009). In the input section (resources and structure), all three models require components for management and leadership, planning and program design, staffing, and software, hardware, facilities and equipment. In the process section, all three models emphasize actions ensuring the quality of the health information system; in the output section, all but the Lippeveld model treat information products and the use and distribution of information as components of the national health information system.
The results showed that all three models discuss the components of health information in the input section only briefly, and the Lippeveld model overlooks the components in the process and output sections. The Health Metrics Network model therefore appears to offer the most comprehensive presentation of the components of the health system across all three sections: input, process and output.

  2. Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers access to space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products that may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, the CCMC has developed a set of tools as a first step toward making real-time model output useful to forecast centers. In this presentation, we discuss the motivation for this activity, the actions taken so far, and options for future tools based on model output.

  3. Life cycle assessment modelling of waste-to-energy incineration in Spain and Portugal.

    PubMed

    Margallo, M; Aldaco, R; Irabien, A; Carrillo, V; Fischer, M; Bala, A; Fullana, P

    2014-06-01

    In recent years, waste management systems have been evaluated using a life cycle assessment (LCA) approach. A main shortcoming of prior studies was the focus on a mixture of waste with different characteristics. The estimation of emissions and consumptions associated with each waste fraction in these studies presented allocation problems. Waste-to-energy (WTE) incineration is a clear example in which municipal solid waste (MSW), comprising many types of materials, is processed to produce several outputs. This paper investigates an approach to better understand incineration processes in Spain and Portugal by applying a multi-input/output allocation model. The application of this model enabled predictions of WTE inputs and outputs, including the consumption of ancillary materials and combustibles, air emissions, solid wastes, and the energy produced during the combustion of each waste fraction. © The Author(s) 2014.

  4. The Flow Engine Framework: A Cognitive Model of Optimal Human Experience

    PubMed Central

    Šimleša, Milija; Guegan, Jérôme; Blanchard, Edouard; Tarpin-Bernard, Franck; Buisine, Stéphanie

    2018-01-01

    Flow is a well-known concept in the fields of positive and applied psychology. Examination of a large body of flow literature suggests there is a need for a conceptual model rooted in a cognitive approach to explain how this psychological phenomenon works. In this paper, we propose the Flow Engine Framework, a theoretical model explaining dynamic interactions between rearranged flow components and fundamental cognitive processes. Using an IPO framework (Inputs - Processes - Outputs) including a feedback process, we organize flow characteristics into three logically related categories describing the flow process: inputs (requirements for flow), mediating and moderating cognitive processes (attentional and motivational mechanisms), and outputs (subjective and objective outcomes). Comparing flow with an engine, inputs are depicted as the fuel, core processes as the cylinder strokes, and outputs as the power created to provide motion. PMID:29899807

  5. Modeling and simulation of queuing system for customer service improvement: A case study

    NASA Astrophysics Data System (ADS)

    Xian, Tan Chai; Hong, Chai Weng; Hawari, Nurul Nazihah

    2016-10-01

    This study develops a queuing model of UniMall using a discrete-event simulation approach to analyze the service performance factors that affect customer satisfaction. The performance measures considered in the model include the average time in the system, the total number of students served, the number of students in the waiting queue, the waiting time in the queue, and the maximum buffer length. A simulation model is developed in the ARENA simulation software and its output analyzed. Based on this analysis, it is recommended that the management of UniMall consider introducing shifts and adding another payment counter in the morning.
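    The study used the commercial ARENA package, but the performance measures it reports can be reproduced with a minimal single-server FIFO queue in plain Python (the arrival and service rates below are invented for illustration):

```python
import random

def simulate_queue(arrival_rate, service_rate, n_customers, seed=1):
    """Single-server FIFO queue with exponential interarrival and service
    times; returns (mean wait in queue, mean time in system)."""
    rng = random.Random(seed)
    t_arrival = server_free = 0.0
    total_wait = total_system = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(arrival_rate)
        start = max(t_arrival, server_free)        # wait if the server is busy
        server_free = start + rng.expovariate(service_rate)
        total_wait += start - t_arrival
        total_system += server_free - t_arrival
    return total_wait / n_customers, total_system / n_customers

wq, w = simulate_queue(arrival_rate=0.8, service_rate=1.0, n_customers=50_000)
print(round(wq, 2), round(w, 2))  # M/M/1 theory gives Wq = 4.0, W = 5.0 here
```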

  6. An analytical model for bio-electronic organic field-effect transistor sensors

    NASA Astrophysics Data System (ADS)

    Macchia, Eleonora; Giordano, Francesco; Magliulo, Maria; Palazzo, Gerardo; Torsi, Luisa

    2013-09-01

    A model for the electrical characteristics of Functional-Bio-Interlayer Organic Field-Effect Transistors (FBI-OFETs) electronic sensors is here proposed. Specifically, the output current-voltage characteristics of a streptavidin (SA) embedding FBI-OFET are modeled by means of the analytical equations of an enhancement mode p-channel OFET modified according to an ad hoc designed equivalent circuit that is also independently simulated with pspice. An excellent agreement between the model and the experimental current-voltage output characteristics has been found upon exposure to 5 nM of biotin. A good agreement is also found with the SA OFET parameters graphically extracted from the device transfer I-V curves.

  7. Enabling intelligent copernicus services for carbon and water balance modeling of boreal forest ecosystems - North State

    NASA Astrophysics Data System (ADS)

    Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi

    2015-04-01

    The objective of project North State, funded by Framework Program 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from the Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe, to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light-use-efficiency model. We have identified four groups of variables to be estimated from remotely sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with the objective of completely automatic estimation. North State has two sites for intensive studies, in southern and northern Finland respectively, one in Iceland, and one in the Komi region of Russia. Additionally, the model input variables will be estimated, and the models applied, over the European boreal and sub-arctic region from the Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow a statistical sampling design. Model output predictions are compared to earth observation variables, and flux tower measurements are also used in the model assessment. In the paper, results from hyperspectral, Sentinel-1, and Landsat data and their use in the models are presented, along with an example of a completely automatic land cover class prediction.

  8. Current and future groundwater recharge in West Africa as estimated from a range of coupled climate model outputs

    NASA Astrophysics Data System (ADS)

    Verhoef, Anne; Cook, Peter; Black, Emily; Macdonald, David; Sorensen, James

    2017-04-01

    This research addresses the terrestrial water balance for West Africa. Emphasis is on the prediction of groundwater recharge and how this may change in the future, which is relevant to the management of surface and groundwater resources. The study was conducted as part of the BRAVE research project, "Building understanding of climate variability into planning of groundwater supplies from low storage aquifers in Africa - Second Phase", funded under the NERC/DFID/ESRC programme Unlocking the Potential of Groundwater for the Poor (UPGro). We used model output data for water balance components (precipitation, surface and subsurface run-off, evapotranspiration and soil moisture content) from ERA-Interim/ERA-LAND reanalysis, CMIP5, and high-resolution model runs with HadGEM3 (UPSCALE; Mizielinski et al., 2014), for current and future time periods. Water balance components varied widely between the different models; variation was particularly large for sub-surface runoff (defined as drainage from the bottom-most soil layer of each model). In situ data for groundwater recharge obtained from the peer-reviewed literature were compared with the model outputs. Separate off-line sensitivity studies with key land surface models were performed to understand the reasons behind the model differences; these analyses centered on vegetation and soil hydraulic parameters. The modelled current and future recharge time series with the greatest degree of confidence were used to examine the spatiotemporal variability in groundwater storage. Finally, the implications for water supply planning were assessed. Mizielinski, M.S. et al., 2014. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign. Geoscientific Model Development, 7(4), pp.1629-1640.

  9. Modeling uncertainties in workforce disruptions from influenza pandemics using dynamic input-output analysis.

    PubMed

    El Haimar, Amine; Santos, Joost R

    2014-03-01

    An influenza pandemic is a serious disaster that can pose significant disruptions to the workforce and associated economic sectors. This article examines the impact of an influenza pandemic on workforce availability within an interdependent set of economic sectors. We introduce a simulation model based on the dynamic input-output model to capture the propagation of pandemic consequences through the National Capital Region (NCR). The analysis conducted in this article is based on the 2009 H1N1 pandemic data. Two metrics were used to assess the impacts of the influenza pandemic on the economic sectors: (i) inoperability, which measures the percentage gap between the as-planned output and the actual output of a sector, and (ii) economic loss, which quantifies the associated monetary value of the degraded output. The inoperability and economic loss metrics generate two different rankings of the critical economic sectors. Results show that most of the critical sectors in terms of inoperability are related to hospitals and health-care providers. On the other hand, most of the sectors ranked critical in terms of economic loss are those with significant total production outputs in the NCR, such as federal government agencies. Therefore, policy recommendations relating to potential mitigation and recovery strategies should take into account the balance between the inoperability and economic loss metrics. © 2013 Society for Risk Analysis.
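    The dynamic input-output (inoperability) recursion underlying such models, q(t+1) = q(t) + K[A*q(t) + c*(t) - q(t)], can be sketched for two toy sectors; the interdependency matrix, resilience coefficients, and demand perturbation below are invented for illustration, not NCR data:

```python
def diim_step(q, a_star, k, c_star):
    """One step of the dynamic inoperability input-output model."""
    n = len(q)
    return [q[i] + k[i] * (sum(a_star[i][j] * q[j] for j in range(n))
                           + c_star[i] - q[i])
            for i in range(n)]

a_star = [[0.1, 0.3],   # interdependency matrix A*
          [0.2, 0.1]]
k = [0.5, 0.5]          # sector resilience coefficients
c_star = [0.05, 0.0]    # demand perturbation, e.g. workforce loss in sector 1

q = [0.0, 0.0]          # inoperability starts at zero
for _ in range(200):
    q = diim_step(q, a_star, k, c_star)
print([round(x, 4) for x in q])  # converges to the equilibrium (I - A*)^-1 c*
```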

  10. Safety limit warning levels for the avoidance of excessive sound amplification to protect against further hearing loss.

    PubMed

    Johnson, Earl E

    2017-11-01

    To determine safe output sound pressure levels (SPL) for sound amplification devices so that hearing sensitivity is preserved after usage, a mathematical model consisting of the Modified Power Law (MPL) (Humes & Jesteadt, 1991) combined with equations for predicting temporary threshold shift (TTS) and subsequent permanent threshold shift (PTS) (Macrae, 1994b) was used. The study involves no new human-subject measurements of loudness tolerance or threshold shifts. PTS was determined by the MPL model for 234 audiograms and for the output SPL recommended by four different validated hearing aid prescriptions. PTS can, on rare occasions, occur as a result of the SPL delivered by hearing aids at modern-day prescription recommendations. The trading relationship between safe output SPL, decibel hearing level (dB HL) threshold, and PTS was captured with algebraic expressions. Better hearing thresholds lowered the safe output SPL, and higher thresholds raised it; safe output SPL can thus take the magnitude of unaided hearing loss into account. For devices not set to prescriptive levels, limiting the output SPL below the safe levels identified should protect against threshold worsening as a result of long-term usage.

  11. Using JEDI Data | Jobs and Economic Development Impact Models | NREL

    Science.gov Websites

    tool; Purchase the necessary aggregated multiplier and consumer commodity demand data from someone skilled in input-output modeling (IMPLAN or another modeling tool); or Purchase the necessary aggregated

  12. Addressing numerical challenges in introducing a reactive transport code into a land surface model: A biogeochemical modeling proof-of-concept with CLM-PFLOTRAN 1.0: Modeling Archive

    DOE Data Explorer

    Tang, G.; Andre, B.; Hoffman, F. M.; Painter, S. L.; Thornton, P. E.; Yuan, F.; Bisht, G.; Hammond, G. E.; Lichtner, P. C.; Kumar, J.; Mills, R. T.; Xu, X.

    2016-04-19

    This Modeling Archive is in support of an NGEE Arctic discussion paper under review and available at doi:10.5194/gmd-9-927-2016. The purpose is to document the simulations to allow verification, reproducibility, and follow-up studies. This dataset contains shell scripts to create the CLM-PFLOTRAN cases, specific input files for PFLOTRAN and CLM, outputs, and python scripts to make the figures using the outputs in the publication. Through these results, we demonstrate that CLM-PFLOTRAN can approximately reproduce CLM results in selected cases for the Arctic, temperate and tropic sites. In addition, the new framework facilitates mechanistic representations of soil biogeochemistry processes in the land surface model.

  13. Information Architecture for Interactive Archives at the Community Coordinated Modeling Center

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Wiegand, C.; Kuznetsova, M.; Mullinix, R.; Boblitt, J. M.

    2017-12-01

    The Community Coordinated Modeling Center (CCMC) is upgrading its metadata system for model simulations to be compliant with the SPASE metadata standard. This work is helping to enhance the SPASE standards for simulations to better describe the wide variety of models and their output. It will enable much more sophisticated and automated metrics and validation efforts at the CCMC, as well as much more robust searches for specific types of output. The new metadata will also allow much more tailored run submissions, as it will allow some code options to be selected for Run-On-Request models. We will also demonstrate data accessibility through an implementation of the Heliophysics Application Programmer's Interface (HAPI) protocol for data otherwise available through the integrated Space Weather Analysis system (iSWA).

  14. Direct variational data assimilation algorithm for atmospheric chemistry data with transport and transformation model

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir; Nuterman, Roman; Baklanov, Alexander; Mahura, Alexander

    2015-11-01

    Atmospheric chemistry dynamics is studied with a convection-diffusion-reaction model. The numerical data assimilation algorithm presented is based on additive-averaged splitting schemes. It carries out "fine-grained" variational data assimilation on the separate splitting stages with respect to spatial dimensions and processes, i.e. the same measurement data are assimilated into different parts of the split model. This design has an efficient implementation thanks to direct data assimilation algorithms for the transport process along coordinate lines. Results of numerical experiments with the chemical data assimilation algorithm, applied to in situ concentration measurements in a real-data scenario, are presented. To construct the scenario, meteorological data were taken from EnviroHIRLAM model output, initial conditions from MOZART model output, and measurements from the AirBase database.

  15. SHERMAN, a shape-based thermophysical model. I. Model description and validation

    NASA Astrophysics Data System (ADS)

    Magri, Christopher; Howell, Ellen S.; Vervack, Ronald J.; Nolan, Michael C.; Fernández, Yanga R.; Marshall, Sean E.; Crowell, Jenna L.

    2018-03-01

    SHERMAN, a new thermophysical modeling package designed for analyzing near-infrared spectra of asteroids and other solid bodies, is presented. The model's features, the methods it uses to solve for surface and subsurface temperatures, and the synthetic data it outputs are described. A set of validation tests demonstrates that SHERMAN produces accurate output in a variety of special cases for which correct results can be derived from theory. These cases include a family of solutions to the heat equation for which thermal inertia can have any value and thermophysical properties can vary with depth and with temperature. An appendix describes a new approximation method for estimating surface temperatures within spherical-section craters, more suitable for modeling infrared beaming at short wavelengths than the standard method.

  16. A user's guide to the combined stand prognosis and Douglas-fir tussock moth outbreak model

    Treesearch

    Robert A. Monserud; Nicholas L. Crookston

    1982-01-01

    Documentation is given for using a simulation model combining the Stand Prognosis Model and the Douglas-fir Tussock Moth Outbreak Model. Four major areas are addressed: (1) an overview and discussion of the combined model; (2) description of input options; (3) discussion of model output, and (4) numerous examples illustrating model behavior and sensitivity.

  17. Model Evaluation and Ensemble Modelling of Surface-Level Ozone in Europe and North America in the Context of AQMEII

    EPA Science Inventory

    More than ten state-of-the-art regional air quality models have been applied as part of the Air Quality Model Evaluation International Initiative (AQMEII). These models were run by twenty independent groups in Europe and North America. Standardised modelling outputs over a full y...

  18. User's guide to the western spruce budworm modeling system

    Treesearch

    Nicholas L. Crookston; J. J. Colbert; Paul W. Thomas; Katharine A. Sheehan; William P. Kemp

    1990-01-01

    The Budworm Modeling System is a set of four computer programs: The Budworm Dynamics Model, the Prognosis-Budworm Dynamics Model, the Prognosis-Budworm Damage Model, and the Parallel Processing-Budworm Dynamics Model. Input to the first three programs and the output produced are described in this guide. A guide to the fourth program will be published separately....

  19. Using Optimization to Improve Test Planning

    DTIC Science & Technology

    2017-09-01

    With modifications to make the input more user-friendly and to display the output differently, the test and evaluation test schedule optimization model would be a good tool for test and evaluation schedulers. Subject terms: schedule optimization, test planning.

  20. NEMS Freight Transportation Module Improvement Study

    EIA Publications

    2015-01-01

    The U.S. Energy Information Administration (EIA) contracted with IHS Global, Inc. (IHS) to analyze the relationship between the value of industrial output, physical output, and freight movement in the United States for use in updating analytic assumptions and modeling structure within the National Energy Modeling System (NEMS) freight transportation module, including forecasting methodologies and processes to identify possible alternative approaches that would improve multi-modal freight flow and fuel consumption estimation.

  1. Context as the Building Blocks of Meaning: A Retrieval Model for the Semantic Representation of Words

    DTIC Science & Technology

    2003-04-01

    The identified characters of a word are used as a probe to retrieve a word's identity (its spelling and phonology) from memory. The document matrix has been reduced by the SVD. Deconstructing the model's output: why do semantic relationships between words emerge from the model?

  2. Experiments and theory for a Tm:Ho:YAG laser end pumped by a Cr:GSAG laser

    NASA Technical Reports Server (NTRS)

    Kim, Kyong H.; Choi, Young S.; Hess, Robert V.; Blair, Clayton H.; Brockman, Philip; Barnes, Norman P.

    1991-01-01

    A Cr:GSAG-laser-pumped Tm:Ho:YAG laser has been fabricated and tested. A maximum output energy of 14 mJ with a slope efficiency of 33.6 percent was obtained using a 98-percent reflective output coupler. A comprehensive rate equation model has been developed to describe laser performance. Details of the experimental measurements and model are presented.
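    The quoted figures fit the usual linear slope-efficiency model for laser output, E_out = η_slope (E_pump - E_th); the pump and threshold energies below are assumptions for illustration, not measurements from the paper:

```python
def laser_output_mj(e_pump_mj, slope_eff=0.336, e_threshold_mj=10.0):
    """Linear slope-efficiency model: no output below threshold, then
    output growing at slope_eff per unit of additional pump energy."""
    return max(0.0, slope_eff * (e_pump_mj - e_threshold_mj))

print(round(laser_output_mj(51.7), 1))  # → 14.0 mJ with these assumed numbers
```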

  3. Downscaling of RCM outputs for representative catchments in the Mediterranean region, for the 1951-2100 time-frame

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Marrocu, Marino; Pusceddu, Gabriella; Langousis, Andreas; Mascaro, Giuseppe; Caroletti, Giulio

    2013-04-01

    Within the activities of the EU FP7 CLIMB project (www.climb-fp7.eu), we developed downscaling procedures to reliably assess climate forcing at hydrologically relevant scales, and applied them to six representative hydrological basins located in the Mediterranean region: Riu Mannu and Noce in Italy, Chiba in Tunisia, Kocaeli in Turkey, Thau in France, and Gaza in Palestine. As a first step towards this aim, we used daily precipitation and temperature data from the gridded E-OBS project (www.ecad.eu/dailydata), as reference fields, to rank 14 Regional Climate Model (RCM) outputs from the ENSEMBLES project (http://ensembles-eu.metoffice.com). The four best performing model outputs were selected, with the additional constraint of maintaining 2 outputs obtained from running different RCMs driven by the same GCM, and 2 runs from the same RCM driven by different GCMs. For these four RCM-GCM model combinations, a set of downscaling techniques were developed and applied, for the period 1951-2100, to variables used in hydrological modeling (i.e. precipitation; mean, maximum and minimum daily temperatures; direct solar radiation, relative humidity, magnitude and direction of surface winds). The quality of the final products is discussed, together with the results obtained after applying a bias reduction procedure to daily temperature and precipitation fields.
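    A common form of the bias-reduction step mentioned at the end is empirical quantile mapping, in which each modelled value is replaced by the observed value at the same rank in a reference period. The sketch below is a generic illustration (equal-length reference samples, invented temperatures), not the CLIMB project's actual procedure:

```python
import bisect

def quantile_map(x, model_ref, obs_ref):
    """Map x to the observed value at the same empirical rank.
    Assumes model_ref and obs_ref have equal length."""
    ms, os_ = sorted(model_ref), sorted(obs_ref)
    rank = bisect.bisect_left(ms, x)     # rank of x in the model climatology
    return os_[min(rank, len(os_) - 1)]

model_ref = [10, 12, 14, 16, 18]   # reference-period model values (warm bias)
obs_ref = [8, 10, 12, 14, 16]      # observations for the same period
print(quantile_map(15, model_ref, obs_ref))  # → 14
```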

  4. Research on the output bit error rate of 2DPSK signal based on stochastic resonance theory

    NASA Astrophysics Data System (ADS)

    Yan, Daqin; Wang, Fuzhong; Wang, Shuo

    2017-12-01

    Binary differential phase-shift keying (2DPSK) signals are mainly used for high-speed data transmission. However, the bit error rate of a digital signal receiver is high in harsh channel environments. In view of this situation, a novel method based on stochastic resonance (SR) is proposed, aimed at reducing the bit error rate of 2DPSK signals received by coherent demodulation. Based on SR theory, a nonlinear receiver model is established for receiving 2DPSK signals at small signal-to-noise ratios (SNR, between -15 dB and 5 dB) and compared with the conventional demodulation method. The experimental results demonstrate that, over this SNR range, the output bit error rate of the SR-based nonlinear system model declines significantly compared to the conventional model, by 86.15% when the input SNR equals -7 dB. Meanwhile, the peak value of the output signal spectrum is 4.25 times that of the conventional model. Consequently, the output signal of the system is more likely to be detected and the accuracy can be greatly improved.
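    The nonlinear receiver in such SR schemes is typically the overdamped bistable system dx/dt = ax - bx³ + u(t), driven by the noisy input; the sketch below uses Euler integration with illustrative parameters, not the paper's:

```python
import math, random

def bistable_response(u, a=1.0, b=1.0, dt=0.01, x0=0.0):
    """Euler-integrate the overdamped bistable SR system
    dx/dt = a*x - b*x**3 + u(t) for input samples u."""
    x, xs = x0, []
    for ui in u:
        x += dt * (a * x - b * x**3 + ui)
        xs.append(x)
    return xs

random.seed(0)
# Weak +/-0.3 binary baseband waveform buried in unit-variance noise.
signal = [0.3 * (1.0 if math.sin(0.01 * math.pi * t) >= 0 else -1.0)
          for t in range(5000)]
noisy = [s + random.gauss(0.0, 1.0) for s in signal]
xs = bistable_response(noisy)
# The output hops between the wells near +/- sqrt(a/b) = +/- 1, with the
# weak signal biasing which well is occupied.
```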

  5. Design and Characterization of a High Resolution Microfluidic Heat Flux Sensor with Thermal Modulation

    PubMed Central

    Nam, Sung-Ki; Kim, Jung-Kyun; Cho, Sung-Cheon; Lee, Sun-Kyu

    2010-01-01

    A complementary metal-oxide-semiconductor-compatible process was used in the design and fabrication of a suspended-membrane microfluidic heat flux sensor with a thermopile for measuring the heat flow rate. The combination of a thirty-junction gold/nickel thermoelectric sensor with an ultralow-noise preamplifier, a low-pass filter, and a lock-in amplifier yields a resolution of 20 nW at a sensitivity of 461 V/W. The thermal modulation method is used to eliminate low-frequency noise from the sensor output, and various amounts of fluidic heat were applied to the sensor to investigate its suitability for microfluidic applications. For sensor design and analysis of the signal output, a method of modeling and simulating electro-thermal behavior in a microfluidic heat flux sensor with an integrated electronic circuit is presented and validated. The electro-thermal domain model was constructed using system dynamics, particularly the bond graph. The coupled electro-thermal domain system model expresses the heat generation of samples and converts thermal input to electrical output. The proposed model is in good agreement with the measured output voltage response in both the transient and the steady state. PMID:22163568
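    As a consistency check on the reported figures: a 20 nW resolution at 461 V/W sensitivity corresponds to an output-voltage noise floor of roughly sensitivity × resolution after lock-in filtering:

```python
sensitivity_v_per_w = 461.0   # thermopile responsivity, V/W
resolution_w = 20e-9          # reported heat-flow resolution, W

# Smallest resolvable output voltage implied by the two figures.
noise_floor_v = sensitivity_v_per_w * resolution_w
print(f"{noise_floor_v * 1e6:.2f} uV")  # → 9.22 uV
```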

  6. The Use of AMET and Automated Scripts for Model Evaluation

    EPA Science Inventory

    The Atmospheric Model Evaluation Tool (AMET) is a suite of software designed to facilitate the analysis and evaluation of meteorological and air quality models. AMET matches the model output for particular locations to the corresponding observed values from one or more networks ...

  7. Systems Operation Studies for Automated Guideway Transit Systems : System Availability Model User's Manual

    DOT National Transportation Integrated Search

    1981-01-01

    The System Availability Model (SAM) is a system-level model which provides measures of vehicle and passenger availability. The SAM operates in conjunction with the AGT discrete Event Simulation Model (DESM). The DESM output is the normal source of th...

  8. Assessment of the Value, Impact, and Validity of the Jobs and Economic Development Impacts (JEDI) Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billman, L.; Keyser, D.

    The Jobs and Economic Development Impacts (JEDI) models, developed by the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), use input-output methodology to estimate gross (not net) jobs and economic impacts of building and operating selected types of renewable electricity generation and fuel plants. This analysis provides the DOE with an assessment of the value, impact, and validity of the JEDI suite of models. While the models produce estimates of jobs, earnings, and economic output, this analysis focuses only on jobs estimates. This validation report includes an introduction to JEDI models, an analysis of the value and impact of the JEDI models, and an analysis of the validity of job estimates generated by the JEDI models through comparison to other modeled estimates and comparison to empirical, observed jobs data as reported or estimated for a commercial project, a state, or a region.

  9. Simscape Modeling of a Custom Closed-Volume Tank

    NASA Technical Reports Server (NTRS)

    Fischer, Nathaniel P.

    2015-01-01

    The library for Mathworks Simscape does not currently contain a model for a closed volume fluid tank where the ullage pressure is variable. In order to model a closed-volume variable ullage pressure tank, it was necessary to consider at least two separate cases: a vertical cylinder, and a sphere. Using library components, it was possible to construct a rough model for the cylindrical tank. It was not possible to construct a model for a spherical tank, using library components, due to the variable area. It was decided that, for these cases, it would be preferable to create a custom library component to represent each case, using the Simscape language. Once completed, the components were added to models, where filling and draining the tanks could be simulated. When the models were performing as expected, it was necessary to generate code from the models and run them in Trick (a real-time simulation program). The data output from Trick was then compared to the output from Simscape and found to be within acceptable limits.
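A minimal sketch of why a closed tank's ullage pressure varies with liquid volume: treating the ullage gas as isothermal and ideal, P·V_ullage stays constant, so filling compresses the gas and raises the pressure. The numbers are illustrative assumptions; the Simscape components described above model much richer dynamics.

```python
def ullage_pressure(p0, v_tank, v_liq0, v_liq):
    """Isothermal ideal-gas ullage in a closed tank: p * V_ullage is constant,
    so pressure follows the change in liquid volume."""
    return p0 * (v_tank - v_liq0) / (v_tank - v_liq)

# Filling a 2 m^3 tank from 0.5 m^3 to 1.0 m^3 of liquid at 1 atm initial ullage.
p_filled = ullage_pressure(101325.0, 2.0, 0.5, 1.0)
```

Draining reverses the effect, so the same relation gives a falling ullage pressure as liquid is withdrawn.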

  10. Documenting Climate Models and Their Simulations

    DOE PAGES

    Guilyardi, Eric; Balaji, V.; Lawrence, Bryan; ...

    2013-05-01

    The results of climate models are of increasing and widespread importance. No longer is climate model output of sole interest to climate scientists and researchers in the climate change impacts and adaptation fields. Now nonspecialists such as government officials, policy makers, and the general public all have an increasing need to access climate model output and understand its implications. For this host of users, accurate and complete metadata (i.e., information about how and why the data were produced) is required to document the climate modeling results. We describe a pilot community initiative to collect and make available documentation of climate models and their simulations. In an initial application, a metadata repository is being established to provide information of this kind for a major internationally coordinated modeling activity known as CMIP5 (Coupled Model Intercomparison Project, Phase 5). We expect that for a wide range of stakeholders, this and similar community-managed metadata repositories will spur development of analysis tools that facilitate discovery and exploitation of Earth system simulations.

  11. Development and Validation of a Weather-Based Model for Predicting Infection of Loquat Fruit by Fusicladium eriobotryae

    PubMed Central

    González-Domínguez, Elisa; Armengol, Josep; Rossi, Vittorio

    2014-01-01

    A mechanistic, dynamic model was developed to predict infection of loquat fruit by conidia of Fusicladium eriobotryae, the causal agent of loquat scab. The model simulates scab infection periods and their severity through the sub-processes of spore dispersal, infection, and latency (i.e., the state variables); change from one state to the following one depends on environmental conditions and on processes described by mathematical equations. Equations were developed using published data on F. eriobotryae mycelium growth, conidial germination, infection, and conidial dispersion pattern. The model was then validated by comparing model output with three independent data sets. The model accurately predicts the occurrence and severity of infection periods as well as the progress of loquat scab incidence on fruit (with concordance correlation coefficients >0.95). Model output agreed with expert assessment of the disease severity in seven loquat-growing seasons. Use of the model for scheduling fungicide applications in loquat orchards may help optimise scab management and reduce fungicide applications. PMID:25233340

  12. Use of medium-range numerical weather prediction model output to produce forecasts of streamflow

    USGS Publications Warehouse

    Clark, M.P.; Hay, L.E.

    2004-01-01

    This paper examines an archive containing over 40 years of 8-day atmospheric forecasts over the contiguous United States from the NCEP reanalysis project to assess the possibilities for using medium-range numerical weather prediction model output for predictions of streamflow. This analysis shows the biases in the NCEP forecasts to be quite extreme. In many regions, systematic precipitation biases exceed 100% of the mean, with temperature biases exceeding 3°C. In some locations, biases are even higher. The accuracy of NCEP precipitation and 2-m maximum temperature forecasts is computed by interpolating the NCEP model output for each forecast day to the location of each station in the NWS cooperative network and computing the correlation with station observations. Results show that the accuracy of the NCEP forecasts is rather low in many areas of the country. Most apparent is the generally low skill in precipitation forecasts (particularly in July) and low skill in temperature forecasts in the western United States, the eastern seaboard, and the southern tier of states. These results outline a clear need for additional processing of the NCEP Medium-Range Forecast Model (MRF) output before it is used for hydrologic predictions. Techniques of model output statistics (MOS) are used in this paper to downscale the NCEP forecasts to station locations. Forecasted atmospheric variables (e.g., total column precipitable water, 2-m air temperature) are used as predictors in a forward screening multiple linear regression model to improve forecasts of precipitation and temperature for stations in the National Weather Service cooperative network. This procedure effectively removes all systematic biases in the raw NCEP precipitation and temperature forecasts. MOS guidance also results in substantial improvements in the accuracy of maximum and minimum temperature forecasts throughout the country. For precipitation, forecast improvements were less impressive. 
MOS guidance increases the accuracy of precipitation forecasts over the northeastern United States, but overall, the accuracy of MOS-based precipitation forecasts is slightly lower than the raw NCEP forecasts. Four basins in the United States were chosen as case studies to evaluate the value of MRF output for predictions of streamflow. Streamflow forecasts using MRF output were generated for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). Hydrologic model output forced with measured-station data was used as "truth" to focus attention on the hydrologic effects of errors in the MRF forecasts. Eight-day streamflow forecasts produced using the MOS-corrected MRF output as input (MOS) were compared with those produced using the climatic Ensemble Streamflow Prediction (ESP) technique. MOS-based streamflow forecasts showed increased skill in the snowmelt-dominated river basins, where daily variations in streamflow are strongly forced by temperature. In contrast, the skill of MOS forecasts in the rainfall-dominated basin (the Alapaha River) was equivalent to the skill of the ESP forecasts. Further improvements in streamflow forecasts require more accurate local-scale forecasts of precipitation and temperature, more accurate specification of basin initial conditions, and more accurate model simulations of streamflow. © 2004 American Meteorological Society.
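The forward screening regression behind the MOS downscaling can be sketched as greedy predictor selection by residual sum of squares. The synthetic "forecast fields," coefficients, and sample size below are assumptions for illustration, not NCEP data.

```python
import random

def ols_fit(X, y):
    """Least squares via the normal equations (X includes an intercept column)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                    # Gaussian elimination, partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            A[r] = [arc - f * acc for arc, acc in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

def rss(X, y, beta):
    return sum((yi - sum(bj * xj for bj, xj in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

def forward_screen(columns, y, max_terms=2):
    """Greedily add the predictor column that most reduces the RSS."""
    chosen = []
    for _ in range(max_terms):
        def fit_rss(j):
            X = [[1.0] + [columns[k][i] for k in chosen + [j]] for i in range(len(y))]
            return rss(X, y, ols_fit(X, y))
        chosen.append(min((j for j in range(len(columns)) if j not in chosen), key=fit_rss))
    return chosen

random.seed(1)
n = 60
pw   = [random.gauss(0, 1) for _ in range(n)]   # "precipitable water" predictor
t2m  = [random.gauss(0, 1) for _ in range(n)]   # "2-m temperature" predictor
junk = [random.gauss(0, 1) for _ in range(n)]   # irrelevant field
obs  = [2.0 * a + 0.5 * b + random.gauss(0, 0.1) for a, b in zip(pw, t2m)]
chosen = forward_screen([pw, t2m, junk], obs)
```

Because the regression is fit to observations, its intercept absorbs any systematic forecast bias, which is why the MOS step removes the biases described above.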

  13. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. 
This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
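The Dirichlet-process priors listed above can be illustrated with the stick-breaking construction of truncated DP mixture weights. The concentration parameter and truncation level below are illustrative assumptions, not defaults of the described software.

```python
import random

def stick_breaking_weights(alpha, n_atoms, rng):
    """Truncated Dirichlet-process weights: w_k = v_k * prod_{j<k}(1 - v_j)
    with v_k ~ Beta(1, alpha); the last weight takes the remaining stick."""
    weights, remaining = [], 1.0
    for _ in range(n_atoms - 1):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    weights.append(remaining)
    return weights

w = stick_breaking_weights(alpha=2.0, n_atoms=20, rng=random.Random(4))
```

Smaller alpha concentrates mass on the first few mixture components, which is how a DP mixture adapts its effective number of clusters to the data.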

  14. Competition model for aperiodic stochastic resonance in a Fitzhugh-Nagumo model of cardiac sensory neurons.

    PubMed

    Kember, G C; Fenton, G A; Armour, J A; Kalyaniwalla, N

    2001-04-01

    Regional cardiac control depends upon feedback of the status of the heart from afferent neurons responding to chemical and mechanical stimuli as transduced by an array of sensory neurites. Emerging experimental evidence shows that neural control in the heart may be partially exerted using subthreshold inputs that are amplified by noisy mechanical fluctuations. This amplification is known as aperiodic stochastic resonance (ASR). Neural control in the noisy, subthreshold regime is difficult to see since there is a near absence of any correlation between input and the output, the latter being the average firing (spiking) rate of the neuron. This lack of correlation is unresolved by traditional energy models of ASR since these models are unsuitable for identifying "cause and effect" between such inputs and outputs. In this paper, the "competition between averages" model is used to determine what portion of a noisy, subthreshold input is responsible, on average, for the output of sensory neurons as represented by the Fitzhugh-Nagumo equations. A physiologically relevant conclusion of this analysis is that a nearly constant amount of input is responsible for a spike, on average, and this amount is approximately independent of the firing rate. Hence, correlation measures are generally reduced as the firing rate is lowered even though neural control under this model is actually unaffected.
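A minimal sketch of aperiodic stochastic resonance in a FitzHugh-Nagumo neuron: a constant subthreshold drive fires no spikes on its own, while added noise lifts the input over threshold. The parameters and spike criterion are standard textbook values, not the paper's model.

```python
import random

def fhn_spike_count(noise_sd, steps=40000, dt=0.05, drive=0.25, seed=2):
    """Count spikes of a FitzHugh-Nagumo neuron driven by a subthreshold
    constant input plus Gaussian noise (aperiodic stochastic resonance)."""
    rng = random.Random(seed)
    v, w = -1.0, -0.4                  # start near the resting state
    spikes, above = 0, False
    for _ in range(steps):
        dv = v - v ** 3 / 3 - w + drive + rng.gauss(0.0, noise_sd)
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv
        w += dt * dw
        if v > 1.0 and not above:      # upward crossing of v = 1 counts one spike
            spikes, above = spikes + 1, True
        elif v < 0.0:
            above = False
    return spikes

spikes_quiet = fhn_spike_count(0.0)    # drive alone stays subthreshold
spikes_noisy = fhn_spike_count(1.5)    # noise carries the input over threshold
```

This is the regime the abstract describes: the output firing rate exists only because of the noise, so simple input-output correlation measures understate the control exerted by the subthreshold input.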

  15. Data-based virtual unmodeled dynamics driven multivariable nonlinear adaptive switching control.

    PubMed

    Chai, Tianyou; Zhang, Yajun; Wang, Hong; Su, Chun-Yi; Sun, Jing

    2011-12-01

    For a complex industrial system, its multivariable and nonlinear nature generally make it very difficult, if not impossible, to obtain an accurate model, especially when the model structure is unknown. The control of this class of complex systems is difficult to handle by the traditional controller designs around their operating points. This paper, however, explores the concepts of controller-driven model and virtual unmodeled dynamics to propose a new design framework. The design consists of two controllers with distinct functions. First, using input and output data, a self-tuning controller is constructed based on a linear controller-driven model. Then the output signals of the controller-driven model are compared with the true outputs of the system to produce so-called virtual unmodeled dynamics. Based on the compensator of the virtual unmodeled dynamics, the second controller based on a nonlinear controller-driven model is proposed. Those two controllers are integrated by an adaptive switching control algorithm to take advantage of their complementary features: one offers stabilization function and another provides improved performance. The conditions on the stability and convergence of the closed-loop system are analyzed. Both simulation and experimental tests on a heavily coupled nonlinear twin-tank system are carried out to confirm the effectiveness of the proposed method.

  16. Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    Transfer function modeling is a standard technique in classical Linear Time Invariant control and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical (r, s, k) transfer functions. Discrete event systems are often used for modeling hybrid control structures and high-level decision problems; examples include discrete-time, discrete-strategy repeated games. For these games, a discrete transfer function in the form of an accurate hidden Markov model of input-output relations could be used to derive optimal response strategies. In this paper, we develop an algorithm for creating probabilistic Mealy machines that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters, (l1, l2, k), just as the Box-Jenkins transfer function models are. Here l1 is the maximal input history length to consider, l2 is the maximal output history length to consider, and k is the response lag. Using related results, we show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.
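The (l1, l2, k) estimation idea can be sketched by counting, over paired symbol streams, how often each output follows each (input-history, output-history) state. The toy echo system below is an assumption for illustration, not the paper's algorithm.

```python
from collections import defaultdict

def fit_mealy(inputs, outputs, l1=1, l2=1, k=0):
    """Estimate P(next output | last l1 inputs lagged by k, last l2 outputs)
    from paired symbol streams, as a probabilistic Mealy transfer model."""
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(max(l1 + k, l2), len(outputs)):
        state = (tuple(inputs[t - k - l1:t - k]), tuple(outputs[t - l2:t]))
        counts[state][outputs[t]] += 1
    return {s: {o: c / sum(d.values()) for o, c in d.items()}
            for s, d in counts.items()}

# Toy DEDS whose output echoes the previous input: y[t] = u[t-1].
ins = list("abababababababab")
outs = ["?"] + ins[:-1]
model = fit_mealy(ins, outs, l1=1, l2=1, k=0)
```

For this deterministic toy system every state maps to a single output with probability 1; for a genuinely stochastic DEDS the same counts yield the conditional output distribution.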

  17. Finite element based N-Port model for preliminary design of multibody systems

    NASA Astrophysics Data System (ADS)

    Sanfedino, Francesco; Alazard, Daniel; Pommier-Budinger, Valérie; Falcoz, Alexandre; Boquet, Fabrice

    2018-02-01

    This article presents and validates a general framework to build a linear dynamic Finite Element-based model of large flexible structures for integrated Control/Structure design. An extension of the Two-Input Two-Output Port (TITOP) approach is here developed. The authors had already proposed such framework for simple beam-like structures: each beam was considered as a TITOP sub-system that could be interconnected to another beam thanks to the ports. The present work studies bodies with multiple attaching points by allowing complex interconnections among several sub-structures in tree-like assembly. The TITOP approach is extended to generate NINOP (N-Input N-Output Port) models. A Matlab toolbox is developed integrating beam and bending plate elements. In particular a NINOP formulation of bending plates is proposed to solve analytic two-dimensional problems. The computation of NINOP models using the outputs of a MSC/Nastran modal analysis is also investigated in order to directly use the results provided by a commercial finite element software. The main advantage of this tool is to provide a model of a multibody system under the form of a block diagram with a minimal number of states. This model is easy to operate for preliminary design and control. An illustrative example highlights the potential of the proposed approach: the synthesis of the dynamical model of a spacecraft with two deployable and flexible solar arrays.

  18. Interactive Correlation Analysis and Visualization of Climate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    The relationship between our ability to analyze and extract insights from visualization of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, and the number of different simulations performed with a climate model or the feature-richness of high-resolution simulations. This project has developed new and faster methods for visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is a need for new approaches to visualization and analysis of climate data if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.

  19. An application of hybrid downscaling model to forecast summer precipitation at stations in China

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Fan, Ke

    2014-06-01

    A pattern-prediction hybrid downscaling method was applied to predict summer (June-July-August) precipitation at 160 stations in China. The predicted precipitation from the downscaling scheme is available one month in advance. Four predictors were chosen to establish the hybrid downscaling scheme. The 500-hPa geopotential height (GH5) and 850-hPa specific humidity (q85) were taken from the skillful predicted output of three DEMETER (Development of a European Multi-model Ensemble System for Seasonal to Interannual Prediction) general circulation models (GCMs). The 700-hPa geopotential height (GH7) and sea level pressure (SLP) were taken from reanalysis datasets. The hybrid downscaling scheme (HD-4P) has better prediction skill than a conventional statistical downscaling model (SD-2P) containing two predictors derived from the output of GCMs, although both downscaling schemes improved the seasonal prediction of summer rainfall relative to the original output of the DEMETER GCMs. In particular, HD-4P downscaling predictions showed lower root mean square errors than those based on the SD-2P model. Furthermore, the HD-4P downscaling model reproduced the 1998 summer precipitation anomaly centers in China more accurately than the SD-2P model. Hybrid downscaling should thus be effective in improving the prediction skill for summer rainfall at stations in China.

  20. Reliable results from stochastic simulation models

    Treesearch

    Donald L., Jr. Gochenour; Leonard R. Johnson

    1973-01-01

    Development of a computer simulation model is usually done without fully considering how long the model should run (e.g., in computer time) before the results are reliable. However, construction of confidence intervals (CI) about critical output parameters from the simulation model makes it possible to determine the point at which model results become reliable. If the results are...
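The run-length idea above can be sketched with the batch-means method: keep drawing batches of simulation output until the confidence interval about a critical output parameter is tight enough. The "simulation output" here is a hypothetical exponential processing time, and the 95% normal quantile, batch size, and thresholds are illustrative choices.

```python
import math, random

def run_until_precise(sample, rel_halfwidth=0.05, batch=100,
                      min_batches=10, max_batches=10000, seed=3):
    """Draw batches of model output until the ~95% CI half-width about the
    running mean drops below rel_halfwidth * |mean| (batch-means method)."""
    rng = random.Random(seed)
    means = []
    while len(means) < max_batches:
        means.append(sum(sample(rng) for _ in range(batch)) / batch)
        if len(means) < min_batches:
            continue
        m = sum(means) / len(means)
        s = math.sqrt(sum((x - m) ** 2 for x in means) / (len(means) - 1))
        half = 1.96 * s / math.sqrt(len(means))
        if half < rel_halfwidth * abs(m):
            return m, half, len(means) * batch
    return m, half, len(means) * batch

# Hypothetical output parameter: exponential service time with true mean 2.
mean_est, half, n_runs = run_until_precise(lambda r: r.expovariate(0.5))
```

The returned run count is the data-driven answer to "how long should the model run": it grows with the output's variance rather than being fixed in advance.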

  1. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    NASA Astrophysics Data System (ADS)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Systems of anaerobic digestion should be used for the processing of organic waste. Managing the anaerobic recycling of organic waste requires reliable prediction of biogas production. Development of a mathematical model of organic waste digestion allows the rate of biogas output to be determined for the two-stage anaerobic digestion process, taking the first stage into account. Konto's model is verified against the studied anaerobic processing of organic waste. The dependences of biogas output and of its rate on time are established and may be used to predict the course of anaerobic processing of organic waste.
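As a hedged illustration only (not the Konto model the abstract verifies), cumulative biogas output is often approximated by first-order kinetics after a lag phase; the yield, rate constant, and lag below are invented values.

```python
import math

def biogas_cumulative(t, b0=450.0, k=0.15, lag=2.0):
    """Illustrative first-order biogas yield B(t) = b0*(1 - exp(-k*(t - lag)))
    after a lag phase; b0 (L/kg VS), k (1/day), and lag (days) are assumed."""
    return 0.0 if t <= lag else b0 * (1.0 - math.exp(-k * (t - lag)))

def biogas_rate(t, dt=1e-4, **kw):
    """Central-difference rate of biogas output, L/(kg VS * day)."""
    return (biogas_cumulative(t + dt, **kw) - biogas_cumulative(t - dt, **kw)) / (2 * dt)
```

Such a curve captures the qualitative dependences the abstract mentions: output rises toward an asymptotic yield while the rate peaks early and decays.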

  2. Mapping an operator's perception of a parameter space

    NASA Technical Reports Server (NTRS)

    Pew, R. W.; Jagacinski, R. J.

    1972-01-01

    Operators monitored the output of two versions of the crossover model having a common random input. Their task was to make discrete, real-time adjustments of the parameters k and tau of one of the models to make its output time history converge to that of the other, fixed model. A plot was obtained of the direction of parameter change as a function of position in the (tau, k) parameter space relative to the nominal value. The plot has a great deal of structure and serves as one form of representation of the operator's perception of the parameter space.

  3. Deriving Tools from Real-time Runs: A New CCMC Support for SEC and AFWA

    NASA Technical Reports Server (NTRS)

    Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha

    2008-01-01

    The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aimed at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step toward making real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools derived from model output.

  4. Synaptic control of the shape of the motoneuron pool input-output function

    PubMed Central

    Heckman, Charles J.

    2017-01-01

    Although motoneurons have often been considered to be fairly linear transducers of synaptic input, recent evidence suggests that strong persistent inward currents (PICs) in motoneurons allow neuromodulatory and inhibitory synaptic inputs to induce large nonlinearities in the relation between the level of excitatory input and motor output. To try to estimate the possible extent of this nonlinearity, we developed a pool of model motoneurons designed to replicate the characteristics of motoneuron input-output properties measured in medial gastrocnemius motoneurons in the decerebrate cat with voltage-clamp and current-clamp techniques. We drove the model pool with a range of synaptic inputs consisting of various mixtures of excitation, inhibition, and neuromodulation. We then looked at the relation between excitatory drive and total pool output. Our results revealed that the PICs not only enhance gain but also induce a strong nonlinearity in the relation between the average firing rate of the motoneuron pool and the level of excitatory input. The relation between the total simulated force output and input was somewhat more linear because of higher force outputs in later-recruited units. We also found that the nonlinearity can be increased by increasing neuromodulatory input and/or balanced inhibitory input and minimized by a reciprocal, push-pull pattern of inhibition. We consider the possibility that a flexible input-output function may allow motor output to be tuned to match the widely varying demands of the normal motor repertoire. NEW & NOTEWORTHY Motoneuron activity is generally considered to reflect the level of excitatory drive. However, the activation of voltage-dependent intrinsic conductances can distort the relation between excitatory drive and the total output of a pool of motoneurons. 
Using a pool of realistic motoneuron models, we show that pool output can be a highly nonlinear function of synaptic input but linearity can be achieved through adjusting the time course of excitatory and inhibitory synaptic inputs. PMID:28053245

  5. Data-driven model reference control of MIMO vertical tank systems with model-free VRFT and Q-Learning.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2018-02-01

    This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of the Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure an initial stabilizing controller to be learned from few input-output data and it can next be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input, multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  6. Finite Element Modeling of Passive Material Influence on the Deformation and Force Output of Skeletal Muscle

    PubMed Central

    Hodgson, John A.; Chi, Sheng-Wei; Yang, Judy P.; Chen, Jiun-Shyan; Edgerton, V. Reggie; Sinha, Shantanu

    2014-01-01

    The pattern of deformation of the different structural components of a muscle-tendon complex when it is activated provides important information about the internal mechanics of the muscle. Recent experimental observations of deformations in contracting muscle have presented inconsistencies with current widely held assumptions about muscle behavior. These include negative strain in aponeuroses, non-uniform strain changes in sarcomeres, even within individual muscle fibers, and evidence that muscle fiber cross-sectional deformations are asymmetrical, suggesting a need to readjust current models of contracting muscle. We report here our use of finite element modeling techniques to simulate a simple muscle-tendon complex and investigate the influence of passive intramuscular material properties upon the deformation patterns under isometric and shortening conditions. While phenomenological force-displacement relationships described the muscle fiber properties, the material properties of the passive matrix were varied to simulate a hydrostatic model, compliant and stiff isotropically hyperelastic models and an anisotropic elastic model. The numerical results demonstrate that passive elastic material properties significantly influence the magnitude, heterogeneity and distribution pattern of many measures of deformation in a contracting muscle. Measures included aponeurosis strain, aponeurosis separation, muscle fiber strain and fiber cross-sectional deformation. The force output of our simulations was strongly influenced by passive material properties, changing by as much as ~80% under some conditions. Maximum output was accomplished by introducing anisotropy along axes which were not strained significantly during a muscle length change, suggesting that correct costamere orientation may be a critical factor in optimal muscle function. 
Such a model not only fits known physiological data, but also maintains the relatively constant aponeurosis separation observed during in vivo muscle contractions and is easily extrapolated from our plane-strain conditions into a 3-dimensional structure. Such modeling approaches have the potential of explaining the reduction of force output consequent to changes in material properties of intramuscular materials arising in the diseased state such as in genetic disorders. PMID:22498294

  7. Development of a Distributed Parallel Computing Framework to Facilitate Regional/Global Gridded Crop Modeling with Various Scenarios

    NASA Astrophysics Data System (ADS)

    Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.

    2017-12-01

    Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crops, management schedules, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file, the EPIC simulation is divided into jobs across the user-defined number of CPU threads. The raw database is then formatted by the EPIC input data formatters, and the formatted data moves into the EPIC simulation jobs. The 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and moved into the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a job list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. 
For example, serial processing for the Iringa test case would require 113 hours, while using the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
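
    The job-splitting strategy described above can be sketched with Python's standard multiprocessing module. The function and variable names below are illustrative, not the framework's actual API; each "job" here is a trivial stand-in for formatting inputs, running EPIC, and parsing outputs for a chunk of grid cells.

```python
from multiprocessing import Pool

def run_epic_job(cell_ids):
    # Placeholder for one EPIC simulation job: format inputs for the
    # given grid cells, run the model, and parse the result files.
    # Here we simply return the number of cells processed.
    return len(cell_ids)

def split_into_jobs(cell_ids, n_jobs):
    # Divide the full list of grid cells into roughly equal chunks,
    # one per worker process.
    chunk = (len(cell_ids) + n_jobs - 1) // n_jobs
    return [cell_ids[i:i + chunk] for i in range(0, len(cell_ids), chunk)]

if __name__ == "__main__":
    cells = list(range(406_839))       # Iringa test case: 406,839 grid cells
    jobs = split_into_jobs(cells, 28)  # 28 concurrent EPIC jobs
    with Pool(processes=28) as pool:
        results = pool.map(run_epic_job, jobs)
    assert sum(results) == len(cells)
```

    In the real framework each worker would shell out to the EPIC executable and parse its output files; the speedup reported above (113 h serial vs. ~6 h parallel) is consistent with near-linear scaling over the available threads minus I/O overhead.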

  8. Development of Graphical User Interface for ARRBOD (Acute Radiation Risk and BRYNTRN Organ Dose Projection)

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Hu, Shaowen; Nounu, Hatem N.; Cucinotta, Francis A.

    2010-01-01

    The space radiation environment, particularly solar particle events (SPEs), poses the risk of acute radiation sickness (ARS) to humans, and organ doses from SPE exposure may reach critical levels during extravehicular activities (EVAs) or within lightly shielded spacecraft. NASA has developed an organ dose projection model using the BRYNTRN and SUMDOSE computer codes, and a probabilistic model of Acute Radiation Risk (ARR). BRYNTRN and SUMDOSE, written in FORTRAN, are a baryon transport code and an output data processing code, respectively; the ARR code is written in C. The risk projection models for organ doses and ARR take the output from BRYNTRN as input to their calculations. BRYNTRN operation requires extensive input preparation. With a graphical user interface (GUI) to handle input and output for BRYNTRN, the response models can be connected to BRYNTRN easily, correctly, and in a user-friendly way. The GUI for the Acute Radiation Risk and BRYNTRN Organ Dose (ARRBOD) projection code provides seamless integration of the input and output manipulations required to operate the ARRBOD modules: BRYNTRN, SUMDOSE, and the ARR probabilistic response model. The ARRBOD GUI is intended for mission planners, radiation shield designers, space operations staff in the Mission Operations Directorate (MOD), and space biophysics researchers. It will serve as a proof-of-concept example for future integration of other human space applications risk projection models. The current version of the ARRBOD GUI is a new self-contained product and will have follow-on versions as options are added: 1) human geometries of MAX/FAX in addition to CAM/CAF; 2) shielding distributions for spacecraft and for the Mars surface and atmosphere; 3) various space environmental and biophysical models; and 4) other response models to be connected to BRYNTRN.
The major components of the overall system, the subsystem interconnections, and external interfaces are described in this report; and the ARRBOD GUI product is explained step by step in order to serve as a tutorial.

  9. Finite element modeling of passive material influence on the deformation and force output of skeletal muscle.

    PubMed

    Hodgson, John A; Chi, Sheng-Wei; Yang, Judy P; Chen, Jiun-Shyan; Edgerton, Victor R; Sinha, Shantanu

    2012-05-01

    The pattern of deformation of the different structural components of a muscle-tendon complex when it is activated provides important information about the internal mechanics of the muscle. Recent experimental observations of deformations in contracting muscle have revealed inconsistencies with currently widely held assumptions about muscle behavior. These include negative strain in aponeuroses; non-uniform strain changes in sarcomeres, even within individual muscle fibers; and evidence that muscle fiber cross-sectional deformations are asymmetrical, suggesting a need to readjust current models of contracting muscle. We report here our use of finite element modeling techniques to simulate a simple muscle-tendon complex and investigate the influence of passive intramuscular material properties on the deformation patterns under isometric and shortening conditions. While phenomenological force-displacement relationships described the muscle fiber properties, the material properties of the passive matrix were varied to simulate a hydrostatic model, compliant and stiff isotropic hyperelastic models, and an anisotropic elastic model. The numerical results demonstrate that passive elastic material properties significantly influence the magnitude, heterogeneity, and distribution pattern of many measures of deformation in a contracting muscle. Measures included aponeurosis strain, aponeurosis separation, muscle fiber strain, and fiber cross-sectional deformation. The force output of our simulations was strongly influenced by passive material properties, changing by as much as ~80% under some conditions. The maximum output was obtained by introducing anisotropy along axes that were not strained significantly during a muscle length change, suggesting that correct costamere orientation may be a critical factor in optimal muscle function.
Such a model not only fits known physiological data, but also maintains the relatively constant aponeurosis separation observed during in vivo muscle contractions and is easily extrapolated from our plane-strain conditions into a three-dimensional structure. Such modeling approaches have the potential of explaining the reduction of force output consequent to changes in material properties of intramuscular materials arising in the diseased state such as in genetic disorders. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Neural network uncertainty assessment using Bayesian statistics: a remote sensing application

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.

    2004-01-01

    Neural network (NN) techniques have proved successful for many regression problems, in particular for remote sensing; however, uncertainty estimates are rarely provided. In this article, a Bayesian technique to evaluate uncertainties of the NN parameters (i.e., synaptic weights) is first presented. In contrast to more traditional approaches based on point estimation of the NN weights, we assess uncertainties on such estimates to monitor the robustness of the NN model. These theoretical developments are illustrated by applying them to the problem of retrieving surface skin temperature, microwave surface emissivities, and integrated water vapor content from a combined analysis of satellite microwave and infrared observations over land. The weight uncertainty estimates are then used to compute analytically the uncertainties in the network outputs (i.e., error bars and the correlation structure of these errors). Such quantities are very important for evaluating any application of an NN model. The uncertainties on the NN Jacobians are then considered in the third part of this article. Used for regression fitting, NN models can represent highly nonlinear, multivariate functions effectively. In this situation, most emphasis is put on estimating the output errors, but almost no attention has been given to errors associated with the internal structure of the regression model. The complex structure of dependency inside the NN is the essence of the model, and assessing its quality, coherency, and physical character makes all the difference between a black-box model with small output errors and a reliable, robust, and physically coherent model. Such dependency structures are described to first order by the NN Jacobians: they indicate the sensitivity of one output with respect to the inputs of the model for given input data. We use a Monte Carlo integration procedure to estimate the robustness of the NN Jacobians.
A regularization strategy based on principal component analysis is proposed to suppress the multicollinearities in order to make these Jacobians robust and physically meaningful.
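
    The Monte Carlo assessment of Jacobian robustness can be sketched on a toy one-hidden-layer network: sample weights from an assumed Gaussian posterior around the point estimate and examine the spread of the analytic Jacobians. All network sizes, the posterior width sigma, and the sample count below are illustrative, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network: y = W2 @ tanh(W1 @ x)
n_in, n_hid, n_out = 3, 5, 2
W1 = rng.normal(size=(n_hid, n_in))   # point-estimate weights
W2 = rng.normal(size=(n_out, n_hid))

def jacobian(W1, W2, x):
    # Analytic Jacobian dy/dx for y = W2 @ tanh(W1 @ x):
    # J = W2 @ diag(1 - tanh(W1 x)^2) @ W1
    h = np.tanh(W1 @ x)
    return W2 @ np.diag(1.0 - h**2) @ W1

# Monte Carlo over weight uncertainty: perturb the point estimate with
# an assumed isotropic Gaussian posterior of width sigma.
x = rng.normal(size=n_in)
sigma = 0.05
samples = np.array([
    jacobian(W1 + sigma * rng.normal(size=W1.shape),
             W2 + sigma * rng.normal(size=W2.shape), x)
    for _ in range(1000)
])
J_mean, J_std = samples.mean(axis=0), samples.std(axis=0)
# Entries where J_std is large relative to J_mean are Jacobian
# sensitivities that are not robust to weight uncertainty.
```
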

  11. Analysis of inter-country input-output table based on bibliographic coupling network: How industrial sectors on the GVC compete for production resources

    NASA Astrophysics Data System (ADS)

    Guan, Jun; Xu, Xiaoyu; Xing, Lizhi

    2018-03-01

    The input-output table is comprehensive and detailed in describing national economic systems, with an abundance of economic relationships depicting supply and demand among industrial sectors. This paper focuses on how to quantify the degree of competition on the global value chain (GVC) from the perspective of econophysics. Global Industrial Strongest Relevant Network models are established by extracting the strongest and most immediate industrial relevance in the global economic system from inter-country input-output (ICIO) tables; these are then transformed into Global Industrial Resource Competition Network models to analyze competitive relationships based on a bibliographic coupling approach. Three indicators well suited to weighted, undirected networks with self-loops are introduced: unit weight for competitive power, disparity in the weight for competitive amplitude, and weighted clustering coefficient for competitive intensity. Finally, these models and indicators are applied empirically, using the latest World Input-Output Database (WIOD), to analyze the function of industrial sectors and reveal inter-sector competitive status during economic globalization.

  12. Stabilization of nonlinear systems using sampled-data output-feedback fuzzy controller based on polynomial-fuzzy-model-based control approach.

    PubMed

    Lam, H K

    2012-02-01

    This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant by a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, the controller design and system analysis are more challenging than in the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is kept constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, which either share the same number of fuzzy rules as the fuzzy model or not, are considered. The system stability is investigated based on Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee the system stability and synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.

  13. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters, and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropomorphic CHO consists of a set of channels that model some aspects of the human visual system, followed by the Hotelling observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have multivariate normal distributions with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.
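
    The channel-output and Hotelling steps described above can be sketched as follows. The random templates and Gaussian image classes are stand-ins: real anthropomorphic channels (e.g., difference-of-Gaussians channels) and simulated nuclear medicine images would replace them, and the per-class means and shift below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Channel outputs are dot products of channel templates with images.
n_pix, n_channels, n_images = 64 * 64, 4, 500
U = rng.normal(size=(n_pix, n_channels))               # channel templates
images = rng.normal(loc=10.0, size=(n_images, n_pix))  # signal-absent class

v = images @ U   # (n_images, n_channels) channel-output vectors

# Hotelling template under the equal-class-covariance assumption:
# w = S^-1 (mean_signal - mean_background)
signal_images = images + 0.5            # toy signal-present class
v_sig = signal_images @ U
S = 0.5 * (np.cov(v.T) + np.cov(v_sig.T))
w = np.linalg.solve(S, v_sig.mean(axis=0) - v.mean(axis=0))
t = v @ w        # scalar test statistics for the signal-absent images
```

    Because each channel output is a sum over thousands of pixel values, the central limit theorem motivates treating the rows of `v` as multivariate normal; the paper's point is that signal variability can break this assumption, which can be checked empirically with normality tests on `v`.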

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartolac, S; Letourneau, D; University of Toronto, Toronto, Ontario

    Purpose: Application of process control theory in quality assurance programs promises to allow earlier identification of problems and potentially better delivery quality than traditional paradigms based primarily on tolerances and action levels. The purpose of this project was to characterize underlying seasonal variations in linear accelerator output that can be used to improve performance or trigger preemptive maintenance. Methods: Runtime plots of daily (6 MV) output data, acquired using in-house ion-chamber-based devices over three years for fifteen linear accelerators of varying make and model, were evaluated. Shifts in output due to known interventions with the machines were subtracted from the data to model an uncorrected scenario for each linear accelerator. Observable linear trends were also removed from the data prior to evaluation of periodic variations. Results: Runtime plots of output revealed sinusoidal, seasonal variations that were consistent across all units, irrespective of manufacturer, model, or age of machine. The average amplitude of the variation was on the order of 1%. Peak and minimum variations were found to occur in early April and September, respectively. Approximately 48% of the output adjustments made over the period examined were potentially avoidable if baseline levels had corresponded to the mean output rather than to points near a peak or valley. Linear trends were observed for three of the fifteen units, with annual increases in output ranging from 2-3%. Conclusion: Characterization of cyclical seasonal trends allows for better separation of potentially innate accelerator behaviour from other behaviours (e.g., linear trends) that may be better described as true out-of-control states (i.e., non-stochastic deviations from otherwise expected behavior) and could indicate service requirements.
    Results also pointed to an optimal setpoint for accelerators such that machine output is maintained within set tolerances and interventions are required less frequently.
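
    The seasonal characterization described above amounts to fitting a sinusoid at the annual frequency. A minimal least-squares version is sketched below on simulated data (amplitude, peak day, and noise level are chosen to mimic the abstract's figures, not taken from the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated three years of daily output deviations (%) with an annual
# sinusoid of ~1% amplitude peaking in early April, plus noise.
days = np.arange(3 * 365)
peak_day = 95                                   # early April
true = 1.0 * np.cos(2 * np.pi * (days - peak_day) / 365.25)
output = true + rng.normal(scale=0.3, size=days.size)

# Linear least squares for a*cos + b*sin + c at the annual frequency;
# amplitude = sqrt(a^2 + b^2), peak day from the phase atan2(b, a).
w = 2 * np.pi / 365.25
X = np.column_stack([np.cos(w * days), np.sin(w * days), np.ones(days.size)])
a, b, c = np.linalg.lstsq(X, output, rcond=None)[0]
amplitude = np.hypot(a, b)
peak_day_est = (np.arctan2(b, a) / w) % 365.25
```

    Comparing `amplitude` and `peak_day_est` against the known seasonal baseline is what separates innate cyclical behaviour from genuine out-of-control drifts such as linear trends.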

  15. Freight Transportation Energy Use : Appendix. Transportation Network Model Output.

    DOT National Transportation Integrated Search

    1978-07-01

    The overall design of the TSC Freight Energy Model is presented. A hierarchical modeling strategy is used, in which detailed modal simulators estimate the performance characteristics of transportation network elements, and the estimates are input to ...

  16. Colors of attraction: Modeling insect flight to light behavior.

    PubMed

    Donners, Maurice; van Grunsven, Roy H A; Groenendijk, Dick; van Langevelde, Frank; Bikker, Jan Willem; Longcore, Travis; Veenendaal, Elmar

    2018-06-26

    Light sources attract nocturnal flying insects, but some lamps attract more insects than others. The relation between the properties of a light source and the number of attracted insects is, however, poorly understood. We developed a model to quantify the attractiveness of light sources based on the spectral output. This model is fitted using data from field experiments that compare a large number of different light sources. We validated this model using two additional datasets, one for all insects and one excluding the numerous Diptera. Our model facilitates the development and application of light sources that attract fewer insects without the need for extensive field tests and it can be used to correct for spectral composition when formulating hypotheses on the ecological impact of artificial light. In addition, we present a tool allowing the conversion of the spectral output of light sources to their relative insect attraction based on this model. © 2018 Wiley Periodicals, Inc.
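
    The conversion from spectral output to relative insect attraction is, in essence, a weighted integral of the lamp spectrum against a fitted action spectrum. The sketch below illustrates that calculation only; the Gaussian attraction weights and toy lamp spectra are invented, whereas the published model's weights were fitted to field data.

```python
import numpy as np

# Hypothetical action spectrum: relative attraction weight per wavelength,
# biased toward UV/blue (invented numbers, for illustration only).
wavelengths = np.arange(350, 701, 5)                            # nm
attraction_weight = np.exp(-((wavelengths - 400) / 60.0) ** 2)

def relative_attraction(spectral_output):
    # Weighted sum of the lamp's spectral power distribution against
    # the action spectrum (uniform 5 nm spacing, plain Riemann sum).
    return float(np.sum(spectral_output * attraction_weight) * 5.0)

# Two toy lamps normalized to equal total radiant output:
blue_lamp = np.exp(-((wavelengths - 450) / 30.0) ** 2)
amber_lamp = np.exp(-((wavelengths - 590) / 30.0) ** 2)
blue_lamp /= np.sum(blue_lamp) * 5.0
amber_lamp /= np.sum(amber_lamp) * 5.0
# The blue-rich lamp scores higher under this UV/blue-weighted model,
# matching the general finding that short-wavelength light attracts more insects.
```
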

  17. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    PubMed Central

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821
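
    The core idea of inferring a small logic model of binary features that explains a continuous output can be illustrated with a brute-force search over two-feature AND/OR formulas. This is only a toy version on synthetic data; LOBICO itself solves an optimization formulation and handles larger formulas, operating points, and real drug-response data.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)

# Toy data: binary mutation features; the continuous response truly
# depends on the logic combination (feature 0 AND feature 2).
X = rng.integers(0, 2, size=(200, 4)).astype(bool)
y = np.where(X[:, 0] & X[:, 2], -1.0, 1.0) + rng.normal(scale=0.1, size=200)

def sse(pred, y):
    # Sum of squared errors when each predicted class is fit by its mean:
    # a formula explains y well if it splits y into two tight groups.
    total = 0.0
    for mask in (pred, ~pred):
        if mask.any():
            total += ((y[mask] - y[mask].mean()) ** 2).sum()
    return total

# Exhaustively score every two-feature AND/OR formula and keep the best.
best = None
for i, j in itertools.combinations(range(X.shape[1]), 2):
    for name, pred in (("AND", X[:, i] & X[:, j]), ("OR", X[:, i] | X[:, j])):
        s = sse(pred, y)
        if best is None or s < best[0]:
            best = (s, name, i, j)
```

    Because the fit uses the continuous `y` directly rather than a binarized version, the search recovers the generating formula even with noise, mirroring the paper's finding that continuous information yields more robust logic models.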

  18. Numerical Investigation of Flapwise-Torsional Vibration Model of a Smart Section Blade with Microtab

    DOE PAGES

    Li, Nailu; Balas, Mark J.; Yang, Hua; ...

    2015-01-01

    This paper presents a method to develop an aeroelastic model of a smart section blade equipped with a microtab. The model is suitable for potential passive vibration control studies of the blade section in classic flutter. The model equations describe the nondimensional flapwise and torsional vibration modes, coupled with an aerodynamic model based on Theodorsen theory and with the aerodynamic effects of the microtab based on wind tunnel experimental data. The aeroelastic model is validated using numerical data available in the literature and then utilized to analyze the microtab control capability for a flutter instability case and a divergence instability case. The effectiveness of the microtab is investigated under scenarios with different output controllers and actuation deployments for both instability cases. The numerical results show that the microtab can effectively suppress both vibration modes with the appropriate choice of output feedback controller.

  19. Development of esMOCA Biomechanic, Motion Capture Instrumentation for Biomechanics Analysis

    NASA Astrophysics Data System (ADS)

    Arendra, A.; Akhmad, S.

    2018-01-01

    This study aims to build a motion capture instrument using inertial measurement unit sensors to assist in the analysis of biomechanics. The sensors used are accelerometers and gyroscopes. Sensor orientation is estimated by digital motion processing at each sensor node. Nine sensor nodes are attached to the upper limbs and connected to a PC via a wireless sensor network. Kinematic and inverse dynamic models of the upper limb were developed in Simulink SimMechanics. The kinematic model receives streaming data from the sensor nodes mounted on the limbs; its output is the pose of each limb, visualized on a display. The inverse dynamic model outputs the reaction force and reaction moment of each joint based on the limb motion input. Validation of the Simulink model against a mathematical model from mechanical analysis showed results that did not differ significantly.

  20. Specification of ISS Plasma Environment Variability

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.

    2004-01-01

    Quantifying spacecraft charging risks and associated hazards for the International Space Station (ISS) requires a plasma environment specification for the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically provide only long-term (seasonal) mean Te and Ne values for the low Earth orbit environment. This paper describes a statistical analysis of historical low Earth orbit plasma measurements from the AE-C, AE-D, and DE-2 satellites, used to derive a model of the deviations of observed Ne and Te values from IRI-2001 estimates at each data point, providing a statistical basis for modeling departures of the plasma environment from the IRI model output. Applying the deviation model to the IRI-2001 output yields a method for estimating extreme environments for ISS spacecraft charging analysis.
